Progress of Phototherapy Applications in the Treatment of Bone Cancer
Bone cancer, including primary bone cancer and metastatic bone cancer, remains a challenge that claims millions of lives and affects the quality of life of survivors. Conventional treatments of bone cancer include wide surgical resection, radiotherapy, and chemotherapy. However, some bone cancer cells may remain or recur in the local area after resection, some are highly resistant to chemotherapy, and some are insensitive to radiotherapy. Phototherapy (PT), including photodynamic therapy (PDT) and photothermal therapy (PTT), is a clinically approved, minimally invasive, and highly selective treatment that has been widely reported for cancer therapy. Under irradiation with light of a specific wavelength, the photosensitizer (PS) in PDT increases intracellular reactive oxygen species (ROS) and the photothermal agent (PTA) in PTT induces photothermal conversion, leading to tumoricidal effects. In this review, the progress of PT applications in the treatment of bone cancer is outlined and summarized, and some envisioned challenges and future perspectives are discussed. This review provides the current state of the art regarding PDT and PTT in bone cancer and inspiration for future studies on PT.
Introduction
Bone cancer is divided into primary bone cancer and metastatic bone cancer, depending on whether the tumors invading the bone tissue are primary tumors or metastatic tumors. Primary malignant bone tumors include osteosarcoma, chondrosarcoma, and Ewing's sarcoma, among others, which often occur in children and adolescents and account for about 6% of all cancers [1,2]. Among them, osteosarcoma is the second leading cause of tumor-related deaths in adolescents [3]. The early symptoms of primary bone cancer are not obvious, and patients often have pathological fractures or severe pain before seeking medical attention. Moreover, primary malignant bone tumors progress rapidly and can metastasize to other organs, especially the lung, so the early diagnosis and treatment of primary bone cancer are difficult [4][5][6][7]. Bone metastases often occur in breast cancer, prostate cancer, lung cancer, liver cancer, kidney cancer, and other cancers; 65-80% of patients with breast cancer and prostate cancer develop bone metastases [8][9][10][11]. Metastatic bone cancer usually occurs in the spine and pelvis, accompanied by motor dysfunction and neurological symptoms of the affected tissue, as well as pathological fractures, pain, and other symptoms [12,13]. At present, the clinical treatment of bone cancer includes wide surgical resection, radiotherapy, and chemotherapy, often used in combination [14,15]. However, some tumor cells may remain in the local area after resection, and some bone tumors are insensitive to radiotherapy and tend to be resistant to chemotherapy, leading to postoperative recurrence and metastasis [16,17]. In addition, the limb dysfunction caused by surgery and the damage to other physiological cells and tissues caused by radiotherapy or chemotherapy have also seriously affected the quality of life and mental health of patients [18,19]. Therefore, the treatment of bone cancer and other malignant tumors requires efficient and safe alternative strategies.
Phototherapy (PT) involves the local exposure of patients to light to treat disease and includes photodynamic therapy (PDT) and photothermal therapy (PTT). Both of these therapies have been widely studied for cancer treatment in recent years, as they can eliminate tumor cells without damaging normal tissues [20,21]. PDT is a minimally invasive technique for treating tumor disease with a photosensitizer (PS) and light activation. The PS that selectively accumulates in the tumor tissue can be activated by light of a specific nonthermal wavelength to produce reactive oxygen species (ROS), most notably singlet oxygen, which can oxidize nearby biological macromolecules in the tumor cells and thus cause cytotoxicity and cell death [22][23][24]. PTT is also a minimally invasive and highly efficient antitumor approach, which is based on a photothermal agent (PTA) with high photothermal conversion efficiency [25,26]. The PTA can gather near the tumor tissue using targeted recognition technology and convert light energy into heat energy to kill cancer cells, as cancer cells are more sensitive to high temperature than normal cells [27][28][29]. Furthermore, both PDT and PTT can be combined with other treatment methods to ablate tumors synergistically [30][31][32][33]. Given the difficulty of treating bone cancer and the broad prospects for PT, it is imperative to analyze and summarize the application progress of PT for bone cancer over the past three decades and to present some envisioned challenges and future perspectives.
PDT
PDT was first discovered through the damage it caused to paramecia cultured in a fluorescent dye, and Dougherty et al. then developed a variety of available PSs and excitation light sources and applied them in the field of oncology in the 1970s [34,35]. At present, PDT has been proven to have beneficial therapeutic effects on cancers, bacterial infections, skin diseases, and other conditions [36][37][38]. PDT has three crucial elements: the PS, the light source, and oxygen [39,40]. The anti-tumor effect of PDT is achieved by inducing direct cytotoxic effects on cancer cells (apoptosis, necrosis, and/or autophagy), destroying the tumor vasculature, and causing local inflammation followed by systemic immunity [41]. The PS can be selectively taken up by tumor tissues and can accumulate in tumor cells, while normal tissues take up less of the drug or metabolize it rapidly [42,43]. After uptake, the local tumor tissue is irradiated with light of a specific wavelength, and the nontoxic PS is activated to produce a large amount of highly reactive singlet oxygen, which causes the aforementioned biological responses of tumor cells and tissues. Finally, tumor growth is inhibited or tumor cells are ablated. In addition, the surrounding normal cells are protected from PDT-induced cytotoxicity, because physiological cells in the tissue surrounding the tumor are less sensitive to the toxicity of ROS [44][45][46]. Therefore, PDT has become an efficient, safe, convenient, and affordable strategy for tumor treatment.
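For orientation, the photochemistry underlying these effects can be summarized by a simplified, generic scheme (standard photochemistry rather than results from the studies cited here): the ground-state PS absorbs a photon, is promoted to an excited singlet state, and undergoes intersystem crossing to a longer-lived triplet state, which then reacts through a Type II (energy transfer) or Type I (electron/hydrogen transfer) pathway:

PS + hν → ¹PS* → ³PS* (intersystem crossing)
³PS* + ³O₂ → PS + ¹O₂ (Type II: singlet oxygen)
³PS* + substrate → radical intermediates + O₂ → O₂·⁻, ·OH, H₂O₂ (Type I: other ROS)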
Since the 1980s, hundreds of PSs have been studied, and some have been used in clinical trials [47]. There are currently three generations of PSs [48]. Most of the PSs used in tumor therapy are porphyrins, based on a tetrapyrrole structure similar to that of the protoporphyrin contained in hemoglobin [41]. Hematoporphyrin derivative (HpD), the most used first-generation PS which later became known as Photofrin, has been applied for the treatment of lung cancer, bladder cancer, esophageal cancer, and early stage cervical cancer [49]. However, the maximum absorption of HpD is at ~630 nm, leading to poor tissue penetration. In addition, the lack of specificity and the cutaneous phototoxicity also limit the widespread use of HpD, stimulating the development of new PSs [50][51][52]. The second-generation PSs include aminolevulinic acid (ALA), benzoporphyrin derivatives (BPDs), acridine orange (AO), and chlorins, among others. They have near infrared (NIR) absorption and high singlet oxygen quantum yields, and thus are characterized by higher efficiency and better penetration into deeply located tissues [53][54][55]. The third-generation PSs generally refer to modifications of the first- and second-generation PSs based on the synthesis of substances with higher affinity to the tumor tissue [56,57]. The applications of targeted recognition technology and nanocarriers have further improved the selectivity and safety of PSs, and are conducive to combination with other treatment methods such as chemotherapy, radiotherapy, and immunotherapy [58][59][60]. The second- and third-generation PSs are the main directions of current studies.
The light source is another significant component of PDT, and each PS requires a corresponding appropriate light source. At present, light sources include the xenon lamp, light-emitting diode (LED), laser beam, and fiber optic devices [61][62][63]. Some scholars believe that wavelengths between 600 and 850 nm, the so-called therapeutic window, are optimal for PDT, while others consider the region between 600 and 1200 nm, sometimes called the optical window of tissue, to be appropriate. However, light with a wavelength exceeding 800 nm does not have enough energy to induce a photodynamic reaction [41,49]. To improve the penetration capacity of light, the light sources can be placed near the deep tissue via minimally invasive surgeries such as endoscopic techniques and vertebroplasty (VP). Therefore, the light source should be determined according to each specific situation [64][65][66]. The success of PDT depends not only on the choice of PS and light source, but also on the total light dose and exposure time, as well as other combined treatment strategies.
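As a simple dosimetry illustration (generic arithmetic with hypothetical numbers, not values taken from any study cited here), the delivered light dose, or fluence, is the product of irradiance and exposure time:

fluence (J/cm²) = irradiance (W/cm²) × exposure time (s)

so an irradiance of 0.1 W/cm² (100 mW/cm²) applied for 500 s delivers 50 J/cm², and halving the irradiance would require doubling the exposure time to deliver the same dose.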
Preliminary Studies on the Therapeutic Effect of PDT on Bone Cancer
Possibly due to the poor tissue penetration of the first-generation PSs and the uncertainty about the effect of PDT on normal musculoskeletal tissues, Photofrin, the first PS approved by the FDA, was not studied for bone cancer treatment until the end of the 1980s. Fingar et al. applied PDT for chondrosarcoma in rats using Photofrin II. The release of thromboxane from platelets and endothelial cells in tumors was higher than that in tumor-free tissue, leading to microvascular damage followed by tumor destruction [67]. This vascular damage was also related to changes in tumor interstitial pressure [68]. Meyer et al. demonstrated that bone was very resistant to the effects of PDT, while muscle and salivary gland were sensitive to PDT; however, all the normal tissues were noted to heal or regenerate well after PDT injury [69]. Hourigan et al. proved that giant-cell tumor, dedifferentiated chondrosarcoma, and osteosarcoma were susceptible to in vitro PDT and that the optimal nontoxic incubation concentration of Photofrin was 3 µg/mL [70]. Subsequently, a large number of studies on PDT for bone cancer appeared.
Benzoporphyrin Derivatives (BPDs)
Recently, numerous in vitro and animal studies on PDT for bone cancer have been performed based on the discovery of hundreds of second- and third-generation PSs. BPDs for bone cancer therapy are usually used in a liposomal formulation (benzoporphyrin derivative monoacid ring A, BPD-MA, Visudyne®) which was approved by the FDA. BPD-MA was demonstrated to induce long-term chondrosarcoma regression in rats treated with light irradiation 5 min after BPD injection. The timing of light irradiation was related to blood flow stasis, which played an important role in PDT-induced tumor destruction [71]. PDT using BPD-MA for primary bone cancer was feasible and effective, as reported in the treatment of spontaneous osteosarcomas of the distal radius in dogs [72]. Burch et al. first applied BPD-PDT for bone metastasis. The results showed that BPD-MA selectively accumulated in tumors 3 h post-injection and that MT-1 cells, a human breast cancer cell line that had metastasized to the spine and appendicular bone, were eliminated 48 h post-light delivery [73]. Metastatic lesions of MT-1 cells within porcine vertebrae and long bones could also be ablated using BPD-PDT. The average depth of light penetration into trabecular bone was 0.16 ± 0.04 cm, while the necrotic/non-necrotic interface extended 0.6 cm. This study demonstrated that the light used for BPD-PDT has excellent bone penetration [74]. Akens et al. compared the uptake of BPD-MA and 5-aminolevulinic acid (5-ALA) in spinal metastases in rats. They found that 5-ALA did not demonstrate an appreciable uptake difference in tumor-bearing vertebrae compared to the spinal cord, while BPD-MA could accumulate specifically in the tumor tissue and reach its highest concentration 15 min after injection. Thus, they speculated that BPD-MA could be used for PDT to treat bone metastasis [75]. Later, they also demonstrated that the safe and effective drug-light dose appeared to be 0.5 mg/kg BPD-MA and less than 50 J light energy for the thoracic spine, and 1.0 mg/kg and 75 J for the lumbar spine, in rats with bone metastasis of breast cancer [76]. In addition, PDT using BPD-MA was demonstrated to improve vertebral mechanical stability during the treatment of rats with spinal metastasis [77]. Wise-Milestone et al. also found that PDT using BPD-MA promoted new bone formation in non-tumor-bearing vertebrae and suppressed osteoclastic resorption in tumor-bearing vertebrae, leading to protection of the vertebral structure [78].
Acridine Orange (AO)
AO is a basic dye that accumulates densely in lysosomes and is specifically taken up by musculoskeletal sarcomas. It has been another widely studied PS over the last two decades [79,80]. Kusuzaki et al. performed curettage under fluorovisualization and AO-PDT for osteosarcoma elimination in mice. At 2 h after intraperitoneal injection of AO, macroscopic curettage was performed, and additional curettage was performed while observing the fluorescence of AO bound to residual tumor fragments using a fluorescence stereoscope. Then, the tumor-resected area was irradiated with blue light (466.5 nm) for 10 min to kill the residual cells microscopically. The results showed that local tumor recurrence was significantly lower (23%) in the group treated with curettage and PDT than in the control group treated with curettage only (80%) [81]. At the same time, AO with photoexcitation was demonstrated to have a strong cytocidal effect on multidrug-resistant (MDR) mouse osteosarcoma cells [82]. The accumulation of AO in malignant musculoskeletal tumors is possibly related to the pH gradient: the higher the malignancy of the tumor, the greater the pH gradient between the intracellular pH and the extracellular pH, or between the intracellular pH and the vacuolar pH. This acidity of tumors supports AO accumulation [80]. Moreover, different light sources were shown to activate AO and induce cytotoxicity in tumor cells. A study by Ueda et al. showed that strong unfiltered light from a xenon lamp was more effective and feasible than weak filtered blue light for the cytocidal effect of AO-PDT on osteosarcoma cells [83]. Satonaka et al. found that a flash wave light (FWL) xenon lamp needed a lower excitation energy and shorter excitation time than a continuous wave light (CWL) xenon lamp for the cytocidal effect of AO-PDT [84].
Aminolevulinic Acid (ALA)
Due to its poor specificity, there are relatively few reports of ALA being used in the treatment of bone cancer [75,76]. However, Dietze et al. confirmed that the intra-articular application of 5-ALA, a precursor of phototoxic molecules, induced a higher protoporphyrin IX (PpIX) accumulation in synovitis tissue compared to non-inflammatory tissue but lower than that in human sarcoma cells (HS 192.T
Chlorin e6 (Ce6)
As its maximum fluorescence excitation and emission wavelengths are 403 nm and 669 nm, respectively, and its absorbance peak is at 650 nm, Ce6 can be used not only for in vivo fluorescence imaging of tumors but also for PDT [103,104]. Mohsenian et al. developed Mn-doped zinc sulphide (ZnS) quantum dots loaded with Ce6 for the treatment of chondrosarcoma. Upon exposure to X-rays, light is generated by the quantum dots and thus activates Ce6. As X-ray irradiation has better tissue penetration, the obtained nanocarriers can themselves serve as an intracellular light source for PS activation, which is conducive to eliminating deep tumors [105]. Lee et al. designed hyaluronate dots containing Ce6 with multiligand targeting ability for PDT of bone metastasis. The dots were chemically conjugated with alendronate (ALN, as a specific ligand to bone) and cyclic arginine-glycine-aspartic acid (cRGD, as a specific ligand to tumor integrin αvβ3) for bone and tumor targeting, respectively. The obtained new PS was labeled (ALN/cRGD)@dHA-Ce6. After intravenous injection, these dots traveled to the bone tumor site and were specifically taken up by tumor cells. The multiligand targeting ability was verified by the strong Ce6 fluorescence signal (Figure 2). The bone metastasis in mice caused by human breast carcinoma (MDA-MB-231) cells was inhibited using PDT based on this novel PS [106]. Nanoformulation with targeted recognition technology has the potential to improve the tumor-targeting efficiency of PSs.
Chlorophyll Derivatives
Bacteria are similar to cancer cells in that both are highly metabolic, divide rapidly, and can produce large amounts of porphyrin-derived photosensitizing metabolites [74]. Therefore, some PSs were first used in bactericidal treatment and then found to be also effective against bone cancer, such as chlorophyll derivatives [107,108]. Na-pheophorbide A is a chlorophyll-derived PS with absorption maxima at 410 and 670 nm. PDT with Na-pheophorbide A induced apoptosis of human osteosarcoma (HuO9) cells via activation of the mitochondrial caspase-9 and -3 pathways [108]. Pd-bacteriopheophorbide (TOOKAD) is another chlorophyll derivative, and its light absorbance is in the NIR region (763 nm), which allows deep tissue penetration [109]. At 70-90 days after PDT, TOOKAD was demonstrated to completely eliminate 50% of intratibial metastases caused by implanting human small cell carcinoma of the prostate (WISH-PC2) cells into the proximal tibias of mice [110]. As a derivative of chlorophyll, pyropheophorbide-a methyl ester (MPPa) is metabolized rapidly and has strong photosensitivity for PDT. MPPa-PDT was found to induce apoptosis of human osteosarcoma (MG-63) cells via the mitochondrial apoptosis pathway and autophagy via the ROS-JNK signaling pathway, and the autophagy could further promote the apoptosis caused by MPPa-PDT [111]. Moreover, MPPa-PDT could block the MG-63 cell cycle and inhibit cell migration and invasion. The PDT-induced apoptosis of MG-63 cells was accompanied by changes in cellular endoplasmic reticulum stress (ERS) and related to the Akt/mammalian target of rapamycin (mTOR) pathway [112].
Benzochloroporphyrin Derivatives (BCPDs)
To solve the synthetic problem in the preparation of biologically active BPD-MA and reduce the toxicity to normal tissues, Yao et al. designed and synthesized a novel PS derived from benzochloroporphyrin (BCPD) [113]. After marginal resection of subcutaneous mouse tumors caused by inoculation of a highly metastatic murine osteosarcoma (LM-8) cell line, BCPD-PDT reduced the local recurrence rate and preserved the adjacent critical anatomic structures, including muscles, nerves, and vessels [114]. In addition, another report from the same team indicated that BCPD-PDT induced apoptosis and cell cycle arrest at the G2/M phase in human Ewing sarcoma (TC-71) cells. The tumor volume in mice with Ewing sarcoma in the flank or tibia could be reduced, and the function of tumor-bearing limbs was preserved [115].
Other Porphyrin Derivatives
Porphyrin derivatives are the most widely studied PSs, including HpD, BPDs, BCPDs, and so on. Hematoporphyrin monomethyl ether (HMME), a porphyrin-related PS, could be selectively taken up by murine osteosarcoma (LM-8 and K7) cells, whereas its uptake was not observed in myoblast cells and fibroblast cells. HMME-PDT significantly inhibited subcutaneous osteosarcoma growth in mice via caspase cascade pathways [116]. Hiporfin is a mixture of HpD derivatives and has been approved by the Chinese State Food and Drug Administration for PDT of oral cavity and bladder cancers [117]. Sun et al. found that hiporfin was as efficient as HMME at a lower concentration and that it could be systemically injected into patients, which is conducive to PDT for solid tumors. Hiporfin-PDT exhibited cytotoxicity against osteosarcoma in vitro and in vivo by inducing cell apoptosis and necroptosis; however, the resulting cell autophagy played a protective role for tumor cells [118]. Moreover, in order to obtain a PS more active than Photofrin, Serra et al. synthesized 5,15-bis(3-hydroxyphenyl)porphyrin for PDT [119]. PDT using this new PS reduced tumor size by increasing cell necrosis in murine cranial and vertebral osteosarcomas, providing a potential platform for surgically inoperable osteosarcoma [120]. Moreover, PpIX is another porphyrin derivative which has been extensively studied in PDT for cancers. The encapsulation of PpIX in silica nanoparticles (SiNPs) improved the efficacy compared to naked PpIX: although encapsulation reduced the PpIX toxicity to tumor cells, the chemicals used for SiNP synthesis increased the cytotoxicity, and thus PDT using PpIX-SiNPs significantly inhibited the viability of osteosarcoma cells [121]. In addition to nanoformulation, PSs or PS carriers can also be internalized by stem cells to further enhance targeted delivery, as stem cells have the unique ability to home to and engraft in tumor stroma. In a report by Duchi et al., meso-tetrakis(4-sulfonatophenyl)porphyrin (TPPS) was first loaded into fluorescent core-shell poly(methyl methacrylate) nanoparticles (FNPs), and the obtained nanocarriers were then uploaded by human mesenchymal stem cells (MSCs). Under laser irradiation, the nanocarrier-laden MSCs underwent cell death and released a large amount of ROS to trigger cell death of osteosarcoma cells [122].
Photodynamic Molecular Beacons (PMBs)
As many first- and second-generation PSs are limited by their non-specific uptake in deep tumors such as spinal metastases, PMBs targeting specific molecules were proposed to localize the active PSs to the tumors [123,124]. PMBs comprise a PS and a quencher moiety and are photodynamically inactive until transformed into an activated state through cleavage of the linker. Liu et al. synthesized PMBs activated by matrix metalloproteinases (MMPs) and named them PPMMPB. The beacon consists of the PS Pyropheophorbide-R and black hole quencher 3, linked by the amino acid sequence GPLGLARK, an MMP-cleavable peptide. PPMMPB could be specifically taken up and activated by vertebral metastases versus normal tissues [125]. PDT using PPMMPB was also demonstrated to ablate metastatic tumors and disrupt the osteolytic cycle, and thus better preserved critical organs in rats with vertebral metastasis [126].
Other New PSs
The development of PSs also draws inspiration from conventional drugs. For example, aloe-emodin (AE) is an anthraquinone compound extracted from traditional Chinese medicinal plants and has antitumor effects. Recently, it was demonstrated to have fluorescence and phototoxicity and could be used in tumor therapy [127][128][129]. Tu et al. found that AE-PDT induced autophagy and apoptosis of MG-63 cells via activation of the ROS-JNK signaling pathway [130]. In addition, many third-generation PSs are constructed based on nanoformulation or internalization by cells, which makes them favorable for specific uptake by tumor cells. Lenna et al. developed a PS delivery system using MSCs that had internalized FNPs. The PS, tetra-sulfonated aluminum phthalocyanine (AlPcS4), has a strong absorption peak in the NIR region and retains activity after loading into FNPs [41,131]. FNPs containing AlPcS4 were then uploaded by MSCs. Photoactivation of this PS delivery system decreased the viability of osteosarcoma cells (MG-63, Saos-2, and U-2 OS). The authors claimed that this system has potential for the therapy of MDR tumors and that MSC-based PDT is conducive to the design of personalized treatments [132].
PDT Combined with Chemotherapy
Since most bone cancers involve deep tumors, PDT is often used in combination with chemotherapy, radiotherapy, and immunotherapy to ensure complete ablation and prevent recurrence. The combination of PDT and chemotherapy is widely studied and is called photochemotherapy [133,134]. Systemic bisphosphonate (BP) treatment has been demonstrated to inhibit bone resorption in bone metastasis caused by breast cancer and to reduce the fracture risk of involved vertebrae [135]. However, BP is less effective for vertebral tumors beyond a critical size [136]. Therefore, Won et al. proposed a combined treatment of the bisphosphonate zoledronic acid (ZA, a derivative of BP) and PDT using BPD-MA. This photochemotherapy not only ablated spinal metastases but also reduced bone loss while improving the structural integrity of vertebral bones [137]. The combined treatment of ZA and PDT could also reduce the risk of burst fracture and restore the pattern of bone strain to that of healthy vertebrae [138]. Pre-treatment with ZA before PDT reduced the viability of MT-1 cells by up to 20% compared to PDT alone [139]. Moreover, Heymann et al. combined low-level laser therapy (LLLT) with cisplatin or ZA for bone cancer. They found that irradiating Saos-2 cells cultured in medium containing cisplatin or ZA with a low-level laser directly raised the cytotoxicity of these two drugs. They speculated that this direct phototoxicity of cisplatin or ZA could be caused by photobiomodulation based on direct mitochondrial stimulation through LLLT [140]. These results indicate that the combination of PDT and chemotherapeutic drugs synergistically enhances the tumoricidal effect.
Recently, many studies have focused on the development of nanovehicles which can target PSs and chemotherapeutic drugs to cancer lesions, overcome the shortcomings of the drugs, and reduce the side effects of PDT and chemotherapy [141][142][143]. Paclitaxel (PTX) is one of the most effective chemotherapeutic drugs for treating breast, ovarian, lung, and pancreatic cancer [144,145]. To improve its poor water solubility, Martella et al. designed a nanoscale drug delivery system consisting of high-molecular-weight, hydrosoluble keratin, Ce6, and PTX. PTX and Ce6 acted in an additive manner, and the resulting cytotoxicity to osteosarcoma cells was superior to that of PTX or Ce6 alone. The high specificity and efficiency of this drug delivery system make it a promising therapeutic strategy for MDR osteosarcomas [146]. Doxorubicin (DOX) is usually used as the first-line therapy for osteosarcoma, and doxycycline (DOXY) also has efficient cytotoxicity against various cancer cells; the combination of these two drugs can synergistically induce apoptosis of cancer cells [147,148]. Tong et al. synthesized a prodrug of these two drugs via a thioketal (TK) linkage. The obtained DOX-TK-DOXY was encapsulated into mesoporous silica nanoparticles (MSNs), followed by modification with Ce6 and ZA. ZA helps the nanocarriers target osteosarcoma cells, and the Ce6 can be activated by laser irradiation to produce ROS. The ROS can not only induce cytotoxicity but also cleave the TK linkage of the prodrug, leading to synchronous release of both DOX and DOXY. The released DOXY can further promote the production of ROS and thus amplify the release of DOX and DOXY. This nanovehicle, with its capacity for bone targeting, burst release of ROS, and continuous release of chemotherapeutic drugs, is a novel therapeutic strategy for bone cancer [149]. Bortezomib (BTZ) is the first clinically approved proteasome inhibitor and can be applied in the treatment of bone cancer. BTZ was found to increase the intracellular ROS level, which can improve the tumoricidal effects of PDT [150,151]. Huang et al. designed a bone-seeking nanoagent for the treatment of bone metastasis. This nanocarrier comprised ALN (as the bone seeker), zinc phthalocyanine (ZnPc) (as the PS), and BTZ (as the chemotherapeutic drug and the amplifier of ROS). The tumor volume of bone metastases in a rat model was cut down by 85% using this photochemotherapy, and the tumoricidal effect was related to mitochondrial damage and excessive ERS [152]. In addition, a report from Lu et al. followed a similar design concept. In this study, nanoparticles based on graphene oxide (GO) were synthesized: folic acid was conjugated to GO as a targeting agent for cancer cells, indocyanine green (ICG) was linked to GO as a PS, and ginsenoside Rg3 was loaded onto GO as a chemotherapeutic drug. PDT using the obtained nanocarriers inhibited the malignant progression and stemness of osteosarcoma cells [153].
PDT Combined with Immunotherapy
PDT can also induce an immune response to eliminate tumors and prevent recurrence. Due to the complex mechanisms involved in this process, there are many targets that can be studied for synergistic treatments combining PDT and immunotherapy [154,155]. The combination of PDT and immunotherapy can not only enhance the anti-tumor immune effects but also reduce the side effects [156,157]. Zhang et al. found that HpD-PDT for osteosarcoma induced necrosis of tumor cells and then inhibited the function of dendritic cells (DCs); however, continuous PDT restored the function of DCs by up-regulating heat shock protein 70 [158]. CpG oligodeoxynucleotide (CpG-ODN), synthesized from unmethylated CpG dinucleotides with a phosphorothioate or chimeric backbone, can stimulate the innate immune system via toll-like receptor 9 (TLR9), followed by the activation of DCs and other immune-related cells [159][160][161]. Peritumoral injection of CpG-ODN after PDT using BPD could control both local and systemic tumor spread in mice caused by metastatic breast cancer cells, and the therapeutic effect of this combined therapy was improved compared to PDT or CpG-ODN alone [162]. At the same time, Marrache et al. developed a nanoparticle delivery platform based on ZnPc-PDT and CpG-ODN for the treatment of metastatic breast cancer. A polymeric core with gold nanoparticles (AuNPs) was used as a controlled release system for ZnPc and CpG-ODN, and the CpG-ODN acted as an immunostimulant to enhance the anti-tumor immune effect caused by PDT via activating DCs [163]. Moreover, the cytotoxic effects of T cells also play an important role in tumor therapy [164]. When the programmed death ligand-1 (PD-L1)/programmed cell death protein-1 (PD-1) pathway was blocked, PD-L1 on tumor cells, an inhibitor of T cell proliferation and cytotoxic effects, was down-regulated, followed by significant inhibition of osteosarcoma growth [165,166]. As mentioned above, autophagy may protect tumor cells from the cytotoxicity of PDT [87,118,167]. To suppress autophagy of osteosarcoma cells, 3-MA, an autophagy inhibitor, was applied to enhance the tumoricidal effects of PDT using bovine serum albumin-ZnPc nanoparticles (BSA-ZnPc) (Figure 3). This combination of PDT and immunotherapy inhibited osteosarcoma growth in vitro and in vivo via the inhibition of autophagy and down-regulation of PD-L1 [166]. (Figure 3, panel (c): mouse sera were collected 1 day after the combination treatment, and the cytokine levels of TNF-α and IL-12 were measured; * p < 0.05, ** p < 0.01. Reproduced from ref. [166] with permission from Elsevier, copyright 2019, Biomaterials.)
PDT Combined with Hyperthermia
Hyperthermia has been applied to treat tumors since the 1970s. When the temperature reaches 42 °C or higher, injury to DNA and the plasma membrane and inhibition of protein synthesis and energy metabolism occur, followed by mitochondrial damage [168,169]. Nomura et al. combined HpD-PDT with hyperthermia (45 °C) to treat osteosarcomas in mice. The tumor growth rate in the heat-only or PDT-only group was significantly lower than that in the untreated group, and significantly higher than that in the group treated with both PDT and hyperthermia [170]. The combination of ALA-PDT and hyperthermia (43.5 ± 0.5 °C) was also demonstrated to synergistically inhibit the viability of human mandibular osteosarcoma cells. In addition, hyperthermia improved the sensitivity of less sensitive tumor cells to PDT cytotoxicity [171]. These studies on hyperthermia for cancer treatment also inspired the development of PTT.
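For context, the thermal exposure in such hyperthermia-based treatments is often quantified using the cumulative equivalent minutes at 43 °C (CEM43), a standard hyperthermia metric that is not reported in the studies cited above but helps compare temperature-time combinations:

CEM43 = Σ t_i · R^(43 − T_i), with R ≈ 0.5 for T ≥ 43 °C and R ≈ 0.25 for T < 43 °C

so, for example, 10 min at 45 °C corresponds to approximately 10 × 0.5^(43 − 45) = 40 equivalent minutes at 43 °C.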
PDT Combined with Radiotherapy
Radiotherapy, which has the advantage of palliating pain, is recognized as one of the most effective therapies for malignant tumors and is a current standard of treatment for spinal metastasis [172,173]. However, different sensitivities to radiotherapy have been found in tumors of different types as well as in tumors of the same type but from different individuals [174]. Lo et al. demonstrated that the combination of X-ray irradiation at 4 Gy and PDT using BPD-MA significantly improved the bone architecture and bone formation of normal vertebrae at a longer-term (6-week) time point [175]. In addition, this combination maintained the structural integrity of metastatically involved vertebrae in rats while ablating tumors [176]. PDT combined with radiotherapy can provide a potential platform for patients with recurring spinal tumors that cannot be treated by surgery or radiotherapy alone [175,176].
In addition, the clinical application of PDT for musculoskeletal cancers is often combined with radiotherapy [177]. Synovial sarcoma is one of the most common malignant soft-tissue tumors encountered in children and adolescents, with a high recurrence rate (~80%) after resection, and it often invades adjacent bones, vessels, and nerves [178,179]. Kusuzaki et al. performed AO-PDT with X-ray irradiation at 5 Gy for six patients with synovial sarcoma after resection. The results showed that the low-dose X-ray irradiation could also excite AO, similarly to light photons. The combination successfully inhibited recurrence and protected the surrounding normal tissues [180]. They then performed PDT or the combined therapy for 4 patients with primary bone cancer and 6 patients with primary malignant soft tissue sarcoma: 5 patients were treated with AO-PDT and 5 patients were treated with AO-PDT plus X-ray irradiation at 5 Gy. After a follow-up of 24-48 months, one of the 5 patients treated with PDT showed local recurrence, while there was no recurrence in the 5 patients treated with PDT and radiotherapy [181,182]. Although the number of cases involved is small and the grouping principle is imperfect, these studies still provide a preliminary reference for the clinical application of PDT combined with radiotherapy for bone cancers that are difficult to treat with conventional therapies.
Other Applications of PDT for Clinical Bone Cancer
As chondrosarcoma is radioresistant and often not sensitive to chemotherapy, wide excision surgery is the most common therapy [183,184]. However, when chondrosarcoma occurs in the hyoid bone, many patients choose not to sacrifice the larynx, base of tongue, and hyoid, and thus surgery is not accepted. In one such reported case, PDT was applied instead, resulting in an improvement in the airway; the residual tumor became smaller and could be seen in the subcutaneous tissue away from the hyoid [185]. In addition to this case, the light source of PDT can also be brought closer to deep tumors with the help of minimally invasive surgeries [66,77,126,186]. Fisher et al. first applied PDT using verteporfin, a second-generation PS derived from porphyrin, to improve the therapeutic effects of VP or balloon kyphoplasty (KP) in patients with pathologic vertebral compression fractures caused by vertebral metastasis. Patients treated with PDT under light delivered from an interstitial diffusing fiber at 50 or 100 J/cm reported significantly reduced pain, and no complications directly attributed to PDT were found. These results suggested that VP or KP combined with PDT is safe and can shorten the hospital stay [187]. Moreover, photochemotherapy based on photochemical internalization (PCI) has been developed for clinical use. PCI is a nanoscale drug delivery technology that delivers endocytosed macromolecules into the cytoplasm: upon light activation, PSs located in endocytic vesicles induce rupture of the vesicles and release the therapeutic macromolecules into the cytosol. This technology aims to avoid the side effects of PDT and chemotherapy, enhance the efficacy of photochemotherapy, and improve the selectivity of PSs [188,189]. Disulfonated tetraphenyl chlorin (TPCS2a)-based PCI of bleomycin, a third-generation PS strategy for photochemotherapy, was applied in the treatment of a patient with chondroblastic osteosarcoma of the jaw. This therapy was demonstrated to have increased selectivity and superior anti-tumor activity compared to PDT alone. During the follow-up of three months, continuous tumor shrinkage and death of tumor cells were proven by clinical assessment and histopathology, and no recurrence was identified. Unfortunately, the patient succumbed to cardiorespiratory failure six months after the start of the therapy [190]. Although the first clinical trial of PCI-based photochemotherapy for bone cancer lacked long-term follow-up, these early results suggest that this therapy may be a feasible clinical therapeutic strategy for bone cancer.
PTT
PTT for cancer therapy was inspired by magnetic thermal therapy and first reported by Hirsch et al. in 2003. Silica nanoparticles were surrounded by small gold colloid to form gold-silica nanoshells, which were then modified with polyethylene glycol (PEG) to retain the stability of the nanoshell colloid. After exposure to NIR light (820 nm, 35 W/cm²), human breast carcinoma cells cultured with this PTA lost viability, while cells cultured with only NIR light or only the PTA retained viability. Therefore, normal tissues, which cannot take up a large amount of PTA, are safe during PTT [191]. The PTA and the light source are the two key elements in PTT. When PTAs are irradiated by light of a specific wavelength, the energy from photons is absorbed by the PTAs, which are activated and then collide with surrounding molecules as they return to the ground state [192]. The increased kinetic energy is thereby turned into heat. Tumor cells are more sensitive to heat-induced cytotoxicity than normal cells. When the local temperature increases to 42 °C or higher, some thermolabile cellular proteins are denatured and coaggregate with native and aggregation-sensitive proteins, leading to inactivation of downstream pathways, physical alteration of chromatin, inhibition of DNA synthesis and repair, and ultimately cancer cell death [193,194]. PTT for cancer treatment can be performed remotely and applied in combination with conventional therapies, and the intensity, interval, and duration of light irradiation can be administered according to each case. PTT is thus a noninvasive, controllable, and targeted strategy to eliminate tumor cells; therefore, it has been widely studied for bone cancer therapy in the past decade [29].
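The photothermal conversion efficiency (η) quoted for many of the PTAs discussed below is commonly estimated with an energy-balance (Roper-type) analysis; a simplified form of the expression, given here for orientation rather than as part of any cited study, is:

η = [h·S·(T_max − T_surr) − Q_dis] / [I·(1 − 10^(−A_λ))]

where h is the heat-transfer coefficient, S is the surface area of the sample container, T_max − T_surr is the steady-state temperature rise above the surroundings, Q_dis is the baseline heat dissipated by the solvent and container alone, I is the incident laser power, and A_λ is the absorbance of the PTA at the irradiation wavelength.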
Various PTAs and corresponding light sources have been developed and reported since 2003. Light sources in the NIR region are most commonly used for PTT because of the appropriate tissue penetration capacity and the reduced photodamage to local normal tissues and cells [41,49,195,196]. PTAs can be divided into four categories: metal-, carbon-, semiconductor-, and organic molecule-based materials [194,197,198]. Metal-based materials have high photothermal conversion efficiency, but their cost is also high, which is not suitable for widespread clinical use [198,199].
Carbon-based materials have a large photothermal conversion area but poor absorption capacity under NIR light irradiation [200][201][202]. Semiconductor-based materials have high photothermal performance and low cost, but further nanoformulation is often required to enhance their specificity and tumor-targeting ability [197,203]. Most organic molecule-based materials have strong NIR absorption, solubility, biocompatibility, and dispersibility, but they also need modification to promote bone regeneration or immunomodulation [204,205]. Studies of these four types of PTAs are constantly progressing, and the main purpose is to improve their photothermal conversion efficiency, solubility, biocompatibility, tumor-targeting capacity, and safety via modification and nanoformulation [206][207][208]. Moreover, PTT has recently been combined with other therapies to comprehensively improve the therapeutic effects for bone cancer [209][210][211].
Metal-Based PTAs
PTT for bone cancer using metal-based PTAs often involves precious metals such as Au and Pt [212][213][214]. Recently, some common metals, including Cu, Fe, and Bi, have also been widely studied [215][216][217]. These metals are usually applied for PTT via nanoformulation or coating.
Au
AuNPs have high photothermal conversion efficiency and are among the most interesting nanomaterials reported in studies on PTT. They are easily functionalized via thiol or amine groups for drug delivery, and they can generate heat under light irradiation and increase the local temperature to ~43 °C [218,219]. Moreover, their shape and size can be altered according to different requirements [220][221][222][223]. Liao et al. used methacrylated gelatin and methacrylated chondroitin sulfate (CSMA) to encapsulate gold nanorods (GNRs) and nanohydroxyapatite (nHA) to form a hydrogel for bone cancer therapy and bone regeneration. Under light irradiation, this hydrogel eradicated K7M2wt cells (a mouse bone tumor cell line) and promoted proliferation and osteogenic differentiation of MSCs in vitro. PTT using this hydrogel not only ablated postoperative tumors but also repaired bone defects in a mouse model of tibial osteosarcoma [224]. Sun et al. enclosed GNRs in MSNs (Au@MSNs) to form a drug delivery platform. ZA was then conjugated to Au@MSNs to provide bone-targeting ability and attenuate tumorigenesis and osteoclastogenesis in bone metastasis. PTT using this composite PTA inhibited tumor growth in vitro and in vivo and relieved bone resorption in vivo [225]. Moreover, a CD271 monoclonal antibody was also used as a bone-targeting agent to localize PTAs in osteosarcomas, as CD271 has been demonstrated to be overexpressed on the surface of osteosarcoma cancer stem cells [226]. Hollow gold nanospheres (HGNs) were conjugated with SH-PEG-COOH, and the CD271 monoclonal antibody was then physically adsorbed onto the obtained PEG-HGNs. The PEG modification was used to increase the stability, reduce the cytotoxicity, and extend the blood circulation time of the HGNs, as well as to connect the HGNs and the CD271 monoclonal antibody [227,228]. This novel PTA could target osteosarcoma cells and be specifically taken up by the tumor cells; upon NIR laser irradiation, the cells lost viability [229]. Because AuNPs are conducive to drug delivery, PTT using AuNPs is often combined with chemotherapy or immunotherapy [229,230]. Betulinic acid (BA) is a natural anticancer agent active against numerous tumor types and has the capacity for local immunoregulation, but it is hydrophobic [231,232]. Liu et al. developed gold nanoshell-coated BA liposomes to treat bone cancer. BA was encapsulated into liposomes to increase its solubility and then coated with AuNPs (AuNS-BA-Lips). The AuNP nanoshell exerted a prominent PTT effect under irradiation with light in the NIR region, and the increased temperature triggered BA release (Figure 4). These nanocarriers with dual therapeutic functions inhibited the viability of 143B and HeLa cells [233].
Pt
Unlike Au-based nanomaterials, which are non-cytotoxic and have been extensively used in PTT, platinum nanoparticles (PtNPs) are toxic to normal cells [234,235]. Therefore, PTT using PtNPs requires optimization of particle size and shape to reduce cytotoxicity [236][237][238][239]. Wang et al. fabricated trifolium-like platinum nanoparticles (TPNs) which showed minimal cytotoxicity to normal cells and could kill cancer cells upon NIR light irradiation. The TPNs inhibited tumor growth and prevented osteolysis in mice with bone metastasis caused by human lung adenocarcinoma (PC9) cells engrafted in the tibias [213]. Yan et al. developed a carboxyl-terminated dendrimer for PtNP delivery and for targeting osteolytic lesions in malignant bone tumors. The plentiful carboxyl groups on the dendrimer surface improved the affinity for hydroxyapatite and bone fragments. PtNPs encapsulated by the carboxyl-terminated dendrimer were demonstrated to have minimal cytotoxicity and hematologic toxicity. PTT using the obtained nanocarriers inhibited tumor growth and tumor-associated osteolysis in mice with bone metastasis caused by injecting MDA-MB-231 cells into the tibias [240]. Zhou et al. prepared phytic acid-capped PtNPs with enhanced affinity for hydroxyapatite and osteolytic lesions. These nanocarriers also inhibited bone tumor growth and tumor-associated osteolysis in vitro and in vivo upon NIR light irradiation [241].
Cu
Compared with precious metal-based materials, Cu-based PTAs have the advantages of easy fabrication and low cost. In addition, Cu-based PTAs have better photothermal performance and photostability than carbon-based PTAs [242][243][244]. Chang et al. designed copper-doped mesoporous bioactive glass (MBG) for bone cancer therapy. This nanovehicle had both excellent drug loading capacity and photothermal properties, and the drug release could be modulated by the photothermal effect. In vitro results showed that PTT using this PTA not only inhibited tumor cell growth but also induced the formation of apatite mineralization, which could promote bone regeneration [245]. Ma et al. developed 3D-printed β-tricalcium phosphate scaffolds coated with Cu-containing MSNs for the treatment of residual bone tumors and large bone defects after resection. The composite scaffolds could completely eradicate tumor cells and promote proliferation and osteogenic differentiation of MSCs under irradiation with light in the NIR region [246]. Wang et al. prepared platinum-copper alloy nanoparticles modified with aspartate octapeptide, a type of osteotropic peptide, for bone cancer therapy. These nanoparticles could specifically accumulate in bone tumors compared to those without the aspartate octapeptide. Under light irradiation, these nanoparticles could not only suppress tumor growth but also reduce osteoclastic bone destruction [247].
Fe
As Fe can promote collagen maturation as well as the proliferation and alkaline phosphatase expression of MSCs, Fe-based materials are also used as PTAs for bone cancer [248][249][250][251]. Liu et al. fabricated 3D-printed bioactive glass-ceramic (BGC) scaffolds containing different metal elements, including Cu, Fe, Mn, and Co. The results indicated that Cu-doped scaffolds had the best photothermal performance, followed by Fe-doped scaffolds, and PTT using Cu-, Fe-, and Mn-doped scaffolds effectively killed tumor cells in vitro and inhibited tumor growth in vivo. However, only the Fe- and Mn-doped scaffolds promoted adhesion and osteogenic differentiation of bone-forming cells. Therefore, Fe-doped scaffolds have more promising potential for PTT-mediated tumor therapy and bone regeneration [217]. In addition, inspired by previous work, Fe-based materials have the capacity for magnetothermal treatment of osteosarcoma as well as for repairing bone defects [250]. Zhuang et al. fabricated Fe-doped 3D-printed akermanite bioceramic scaffolds with a photo/magnetothermal effect for bone tumor therapy. The simultaneous hyperthermia showed higher heating efficiency than single-mode hyperthermia by PTT or magnetothermal therapy alone, leading to improved tumoricidal efficiency in vitro. In addition, the composite scaffolds promoted osteogenic differentiation of MSCs compared to scaffolds without Fe [252].
Carbon-Based PTAs
Carbon-based nanomaterials such as graphene-family materials, multi-walled carbon nanotubes (MWCNTs), and carbon dots (CDs) are used as PTAs because of their NIR absorbance, abundant functional groups, and large specific surface area [194,200,201]. The applications of PTT using carbon-based PTAs for bone cancer have been studied over the past decade.
Graphene-Family Materials
Graphene-family materials refer to graphene and its derivatives, including GO, reduced graphene oxide (rGO), and graphene quantum dots (GQDs). Graphene-family materials have a large specific surface area, which is conducive to interaction with other biomolecules, and they have tunable thermal properties that can match various demands in biomedicine. They also have good biocompatibility and can promote cell adhesion, proliferation, and differentiation of some cell types [253][254][255]. Therefore, PTT using graphene-family materials can not only eliminate bone tumors but also promote bone regeneration. He et al. incorporated graphene nanosheets into polyetheretherketone to form nanofillers. These nanofillers boosted MSC proliferation in vitro and could reach 45 °C within 150 s under light irradiation. The obtained nanocomposites have strong potential for PTT and bone regeneration [256].
GO is the most widely studied graphene-family PTA for bone cancer therapy. Functionalization with PEG can enhance the dispersion and stability of GO [257,258]. After PEG-GO nanosheets (40 µg/mL) were taken up by pre-osteoblasts (MC3T3-E1 cells), the cells retained normal ALP levels and matrix mineralization. These nanomaterials are therefore promising PTAs for the treatment of bone cancer [259]. Guo et al. developed a multifunctional scaffold consisting of a porous polyurethane (PU) substrate with GO nanosheet/chitosan (CS) hybrid coatings built via a layer-by-layer assembly process. The GO-based coating can be loaded with a variety of drugs, such as MB, silver nanoparticles, and fluorescein sodium, for multiple purposes. The drug release can be controlled by the local pH value, and the photothermal effects can be activated by light irradiation [260]. Xu et al. introduced GO nanosheets into tricalcium silicate particles via co-precipitation to fabricate a dual-functional bone cement. The photothermal performance of this cement can be regulated by the laser power and the GO content. This cement could not only ablate bone tumor cells but also promote cell proliferation and enhance the ALP activity of MC3T3-E1 cells [261]. Ge et al. prepared multifunctional scaffolds comprising GO nanoparticles, hydrated CePO4 nanorods, and CS. Under NIR laser irradiation, the GO component exerts a photothermal effect to kill tumor cells. The hydrated CePO4 nanorods could induce M2 polarization of macrophages, which secrete vascular endothelial growth factor (VEGF) and arginase-1 (Arg-1), and activate the BMP-2/Smad signaling pathway, promoting bone regeneration (Figure 5). This composite scaffold is a promising candidate for angiogenesis and osteogenesis after bone tumor resection [262].
In addition to GO, rGO and GQDs are also applied for PTT. Li et al. developed a composite scaffold consisting of nHA and rGO sheets via self-assembly. The scaffolds killed 92% of MG-63 cells and inhibited tumor growth under laser irradiation at 808 nm for 20 min. At the same time, the scaffolds promoted adhesion, proliferation, and osteogenic differentiation of MSCs in vitro and enhanced bone regeneration in rats with calvarial defects [263]. Liu et al. adjusted the absorbance of GQDs to 1070 nm in the NIR-II region so that the light would have stronger tissue penetration. The GQDs were prepared by treating phenol while tuning the decomposition of hydrogen peroxide under a high magnetic field of 9 T; the obtained nanomaterials were labeled 9T-GQDs. The 9T-GQDs had tunable fluorescence and a high photothermal conversion efficiency (33.45%). Both in vitro and in vivo results showed that 9T-GQDs could ablate tumor cells and inhibit tumor growth under laser irradiation in the NIR-II region. In addition, 9T-GQDs enabled clear NIR imaging of tumors in living mice, suggesting the potential of 9T-GQDs for imaging-guided PTT [264].
MWCNTs
MWCNTs are a class of nanotubes that can absorb more NIR irradiation and be loaded with more drugs than conventional single-walled carbon nanotubes (SWCNTs) owing to their larger surface area [265,266]. Moreover, greater absorption of NIR irradiation can reduce the side effects of light irradiation. Their superior photothermal conversion efficiency and drug delivery capacity make MWCNTs well suited as PTAs and for PTT combined with chemotherapy or immunotherapy [267][268][269].
Other Carbon-Based PTAs
Unlike many carbon-based nanomaterials, CDs not only exhibit photothermal effects but also have water solubility and low cytotoxicity, and are cost-effective [272][273][274]. Lu et al. developed CD-doped chitosan/nHA scaffolds which remarkably reduced osteosarcoma cell viability in vitro and inhibited tumor growth in vivo upon NIR laser irradiation. The scaffolds could also eliminate bacteria (S. aureus and E. coli) under light irradiation. In addition, the CD-doped scaffolds promoted adhesion and osteogenesis of MSCs in vitro and improved bone formation at 4 weeks after implantation compared to pure chitosan/nHA scaffolds. Therefore, the incorporation of CDs enhanced the osteogenesis-related capacity of the scaffolds and endowed them with potential for PTT to treat bone tumors and infections [275]. Carbon aerogel (CA) with 3D open networks is another carbon-based material for PTT. Due to its large surface area, ultralow density, and high porosity, it is suitable for coating materials [276,277]. Dong et al. designed a multifunctional beta-tricalcium phosphate bioceramic platform coated with CA. The CA coating not only exhibited photothermal effects for ablating osteosarcoma but also promoted bone regeneration in rats via a fibronectin-mediated signaling pathway [278].
Semiconductor-Based PTAs
Semiconductor-based materials are compounds of metallic and non-metallic elements, which can reduce the consumption and cytotoxicity of metal-based materials and improve the photothermal conversion efficiency of non-metallic materials. Due to these excellent characteristics, they have recently become one of the most actively studied classes of PTAs [194,279,280].
MXene Nanosheets
In MXene nanosheets, 'M' refers to transition metal atoms, 'X' means carbon or nitrogen, and 'ene' indicates an ultrathin 2D structure analogous to graphene [281]. As MXene nanosheets combine the advantages of metallic and non-metallic materials, they have been widely used in biomedicine, including biosensing, fluorescence imaging, and PTT [282][283][284][285]. Pan et al. explored the PTT effects of 3D-printed bioactive glass (BG) scaffolds containing titanium carbide (Ti3C2) nanosheets on the treatment of osteosarcoma. The incorporation of Ti3C2 MXenes endowed the composite scaffolds with high photothermal conversion efficiency, leading to complete tumor eradication in mice with xenografts of Saos-2 cells. The composite scaffolds could also accelerate bone regeneration after implantation [286]. Yang et al. developed 3D-printed BG scaffolds (BGS) incorporating S-nitrosothiol-grafted mesoporous silica containing niobium carbide (Nb2C) nanosheets (MBS) for the treatment of bone cancer (Figure 6). Upon NIR laser irradiation, photothermal conversion could be achieved via the Nb2C MXenes, and nitric oxide (NO) release could be triggered and controlled. Tumor ablation was strengthened by the combination of MXene-mediated PTT and NO release, as NO at high concentrations can induce DNA damage and inhibit DNA repair [287,288]. The tunable NO release could also promote vascularization and osteogenesis [289,290]. Therefore, this composite scaffold has potential as a multifunctional therapeutic platform for osteosarcoma therapy, vascularization, and bone regeneration [291]. Recently, Yin et al. developed multifunctional implants comprising Ti3C2 MXenes loaded with tobramycin (an antibacterial drug), gelatin methacrylate (GelMA) hydrogels, and bioinert sulfonated polyetheretherketone (PEEK). The PEEK substrates were first coated with polydopamine (PDA) to enhance surface adhesion, and the tobramycin-laden MXenes were then bonded to the PEEK, followed by GelMA coating. The combination of MXenes and PDA endowed the composites with synergistic photothermal effects, and the GelMA coating promoted bone regeneration. The results showed that the obtained composite implants exhibited superior cytocompatibility, antibacterial effects, PTT-mediated anti-tumor effects, and the capacity to promote osteogenesis [292].
Oxide Semiconductor-Based Materials
Biocompatible conductive oxide semiconductors with good photothermal conversion efficiency and photostability can be used as PTAs [293,294]. SrFe12O19 nanoparticles were synthesized by Lu et al., and MBG/CS porous scaffolds containing SrFe12O19 nanoparticles were demonstrated to trigger osteosarcoma apoptosis and ablation upon NIR laser irradiation. The composite scaffolds also promoted bone regeneration via activation of the BMP-2/Smad/Runx2 signaling pathway [295]. DOX was then loaded onto this composite scaffold; DOX could be rapidly released from the scaffold under light irradiation, and the resulting chemotherapy synergistically enhanced the anti-tumor effect of PTT [296]. Jie et al. developed oxygen vacancy-rich tungsten bronze nanoparticles (NaxWO3) via a pyrogenic decomposition process for PTT. These nanoparticles could raise their temperature from 25.8 °C to 41.8 °C within 5 min under irradiation with a 980 nm laser. PTT using these nanoparticles could eliminate both subcutaneous and intratibial tumors caused by the injection of murine breast cancer (4T1) cells [297]. In addition, a hydrogenated TiO2 coating with hierarchical micro/nano-topographies was fabricated by induction suspension plasma spraying. This coating exhibited an excellent and controllable photothermal effect that inhibited tumor growth under NIR laser irradiation in vitro and in vivo. The hierarchical surface of the coating promoted adhesion, proliferation, and osteogenic differentiation of rat MSCs. This coating has potential for bone cancer therapy and bone regeneration [298].
Metal-Organic Frameworks
Metal-organic frameworks (MOFs), 2D nanosheets constructed from metal ions or clusters and organic ligands, have also been used as PTAs [299,300]. Their structure and function can be precisely tuned by altering the metal or organic component [301]. Qu et al. designed a multifunctional injectable MOF consisting of cobalt-coordinated tetrakis(4-carboxyphenyl)porphyrin (Co-TCPP). Calcium phosphate cement (CPC) was then modified with this MOF for the minimally invasive treatment of neoplastic bone defects. The addition of the MOF endowed CPC with improved compressive strength, a shortened setting time, and excellent photothermal performance. The composite cement not only ablated tumors in vitro and in vivo but also promoted osteogenesis and angiogenesis in vivo [302]. In addition, Dang et al. prepared copper-coordinated tetrakis(4-carboxyphenyl)porphyrin (Cu-TCPP) as a coating for 3D-printed β-tricalcium phosphate scaffolds. The composite scaffolds could significantly kill osteosarcoma cells in vitro and ablate subcutaneous bone tumor tissues in vivo under NIR light irradiation. In addition, they also supported the attachment of MSCs and human umbilical vein endothelial cells (HUVECs) and promoted osteogenesis and angiogenesis in rabbits with femoral defects [303].
Other Semiconductor-Based Materials
To endow bioceramics with PTT effects for bone cancer therapy, Wang et al. incorporated nano PTAs into the bioceramics. They synthesized a series of bioceramics via magnesium thermal reduction based on phosphate-based (e.g., Ca3(PO4)2, Ca5(PO4)3(OH)) and silicate-based (e.g., CaSiO3, MgSiO3) materials; the color of these bioceramics changed from white to black, so the obtained bioceramics were called black ceramics. Due to the oxygen vacancies and structural defects within the crystals, the black ceramics exhibited an excellent photothermal effect under NIR laser irradiation. These black ceramics had controlled degradability matching the bone regeneration rate and promoted bone repair. In addition, upon light irradiation, they exhibited anti-cancer effects on both skin and bone tumors [304]. Ti-based ceramics with good biocompatibility are low-cost semimetal materials widely used in surgical tools, bone repair, and PTT [305,306]. TiN is one of the Ti-based ceramics and was used as a coating for tricalcium phosphate scaffolds in a report from Dang et al. The coated scaffolds were also loaded with DOX so as to achieve the synergistic tumoricidal effects of PTT and chemotherapy for bone cancer therapy. The in vitro and in vivo results indicated that this composite scaffold effectively eradicated tumors upon light irradiation, suggesting that it could be used as an implant material for bone defects after surgical interventions [307]. Cu-based chalcogenides are another type of widely used PTA due to their low cost, easy fabrication, tunable size and composition, high photothermal conversion efficiency, and good photostability [242,243,308,309]. Dang
Organic Molecule-Based PTAs
Organic molecule-based PTAs have aroused widespread interest among researchers. They are characterized by water solubility, good biocompatibility, and easy bioconjugation [204,311]. They mainly include organic NIR dyes and conductive polymers [312,313].
Organic NIR Dyes
NIR dye-based fluorescence imaging offers the advantage of visualizing both delivery and therapy in bone cancer treatment [314][315][316]. ICG is a medical imaging and diagnostic NIR dye approved by the FDA for clinical use [317,318]. As mentioned above, it can be used not only for PDT but also for PTT. MSCs, nanoparticles, and hydrogels are often used as carriers of ICG to target and then accumulate in tumors [319][320][321]. Jiang et al. designed bone-targeting nanoparticles with photothermal effects for bone cancer treatment. They conjugated superparamagnetic Fe3O4 nanoparticles with ZA, followed by ICG modification. ZA acted as a bone-targeting factor, while Fe3O4 and ICG were employed as PTAs to enhance the PTT effect. ICG could also provide the capacity for real-time fluorescence monitoring during treatment. These nanoparticles could rapidly and accurately localize in the medullary cavity of the mouse tibia and then ablate the tibial metastases of breast cancer cells [322].
Conductive Polymers
Conductive polymers are promising clinical PTAs as they are cost-efficient and their structures can be precisely controlled [204,323,324]. They are usually used as coatings or crosslinkers to modify scaffolds or nanoparticles, yielding multifunctional materials [324,325]. PDA is the most widely used conductive polymer in PTT [326][327][328]. It is the main component of melanin and has good biocompatibility, low toxicity, and biodegradability. It absorbs intensely in the NIR region (700-1100 nm), and its photothermal conversion efficiency is as high as 40% [326,329,330]. Ma et al. coated 3D-printed bioceramic scaffolds with PDA for bone cancer therapy. The scaffold could support the attachment, proliferation, and osteogenesis of MSCs. After light irradiation, the scaffold could induce cell death of Saos-2 and MDA-MB-231 cells in vitro and inhibit the growth of subcutaneous tumors [325]. Wang et al. developed ALN-conjugated PDA nanoparticles loaded with SN38 (a chemotherapeutic drug) for bone-targeting chemo-photothermal therapy of bone cancer. ALN could enhance the affinity to hydroxyapatite in bones, and the release of SN38 could be triggered by NIR laser irradiation. PTT using these bone-targeting nanoparticles suppressed the growth of bone tumors and reduced osteolysis [331]. Luo et al. fabricated an injectable hydrogel consisting of oxidized sodium alginate and chitosan; the hydrogel contained cisplatin for chemotherapy and PDA-decorated nHA for PTT and bone repair. Under light irradiation, this hydrogel ablated 4T1 cells in vitro and suppressed tumor growth in vivo. In addition, the hydrogel could also promote adhesion, proliferation, and osteogenic differentiation of MSCs in vitro, and enhance bone regeneration in vivo [332]. MSCs can be used as a drug delivery system to target tumor cells because of their hypoimmunogenicity and migration capacity; however, MSCs may promote the progression and metastasis of tumor cells [333,334]. Therefore, stem cell membranes, which also have bone-targeting ability and are safer than MSCs, were chosen as the delivery system for PDA nanoparticles to treat bone cancer. Stem cell membrane-camouflaged PDA nanoparticles loaded with SN38 exhibited lower nonspecific macrophage uptake, longer retention in blood, and more effective accumulation in tumors than nanoparticles without the stem cell membrane. The obtained nanoparticles showed the synergistic anti-tumor effects of PTT and chemotherapy on MG63 cells [334]. Recently, Yao et al. prepared 3D-printed scaffolds based on hydroxyapatite, PDA, and carboxymethyl CS for bone cancer therapy. The incorporation of PDA remarkably enhanced the rheological properties of the slurry for molding, as well as the mechanical properties, surface relative potential, and water absorption of the composite scaffolds, and also endowed the scaffolds with photothermal capacity. Under light irradiation, the scaffolds could not only inhibit tumor growth but also promote the osteogenic differentiation of MSCs [335].
Combination of PTT and PDT
Since the design of PSs and PTAs has shifted toward nanoformulations, and the optimal light source for both PDT and PTT lies in the NIR region, many novel nanocarriers that can play the roles of both PS and PTA have been reported recently [336][337][338][339]. The resulting enhanced PT using these nanocarriers is called synergistic PT. In addition, these nanocarriers can also be loaded with chemotherapeutic and immunoregulatory drugs to improve the anti-tumor efficacy in multiple aspects. Cheng et al. synthesized AgBiS2 nanoparticles for synergistic PT against bone cancer. These nanoparticles could convert light into heat with a high photothermal conversion efficiency of 36.51% and remarkably increase the generation of intracellular ROS under NIR laser irradiation. The synergistic PT effectively inhibited the growth of malignant osteosarcomas in vivo and also reduced the viability of S. aureus in vitro [340]. Moreover, as ICG exhibits both PDT and PTT effects under light irradiation, ICG-based nanovehicles can be used for synergistic PT [341,342]. Zeng et al. developed ICG-laden GO nanosheets modified with (4-carboxybutyl) triphenylphosphonium bromide (TPP, a mitochondria-targeting ligand) for osteosarcoma therapy, and the obtained nanocarriers were labeled TPP-PPG@ICG. The synergistic effects of PDT and PTT were confirmed by the detection of intracellular ROS and by thermal imaging, respectively (Figure 7). These mitochondria-targeting nanosheets could specifically accumulate in tumor cells and significantly eradicate MDR osteosarcomas under light irradiation [343].
Conclusions and Outlooks
As some bone cancer cells may remain or recur in the local area after tumor resection, some are highly resistant to chemotherapy, and some are insensitive to radiotherapy, there are multiple undesirable outcomes following bone cancer therapy, such as motor dysfunction, neurological symptoms, reduced quality of life, and mental and economic burdens. PT, including PDT and PTT, has the advantages of being minimally invasive, highly efficient and selective, and easy to combine with other treatments. Therefore, PT is recognized as a new generation of effective treatment for bone cancer. The most commonly used light source in PT is light in the NIR region, which possesses sufficient tissue penetration with minor side effects and can induce the generation of intracellular ROS or photothermal conversion to ablate tumor cells. Studies on PDT for bone cancer are mainly focused on the development and optimization of PSs, in order to improve the safety and efficiency of second- and third-generation PSs. Nanoformulation is the main trend in PS development, as it can endow PSs with bone- or tumor-targeting capacity, the ability to load chemotherapeutic or immunotherapeutic drugs, and enhanced biocompatibility and residence time. For PTT, semiconductor-based and organic molecule-based PTAs have attracted the most interest in recent years due to their low biotoxicity, low cost, and high photothermal conversion efficiency. Designs of PTAs often take into account the capacity to promote bone regeneration, which can accelerate bone repair in neoplastic bone defects, as well as the drug-loading ability needed to combine PTT with chemotherapy and immunotherapy. In addition, nanocarriers based on metal nanoparticles or organic NIR dyes exhibit both PDT and PTT effects, and the resulting synergistic PT has stronger tumoricidal effects while the side effects are not increased. Moreover, some researchers are focusing on the specific mechanisms of PT effects in tumor therapy, aiming to further improve the effects by altering the expression of the molecules involved in the corresponding signaling pathways [344]. Recently, computerized medical imaging has also been employed for diagnosis, planning, and real-time monitoring during PT [345].
However, there are also some crucial challenges and opportunities for further clinical application of PT. First, the efficiency and side effects of PDT depend on the duration, intensity, and interval of light irradiation, as well as the amount of PS. Therefore, guidelines for the clinical use of PDT are necessary. When PDT combined with minimally invasive techniques such as endoscopy is used for deep bone cancer, the clinical protocol can be customized according to existing protocols for superficial tumors. Second, unlike studies on PDT, studies on PTT mainly focus on the design and development of PTAs, and clinical trials of PTT are rarely reported; the progress of PTT in clinical application lags far behind that of PDT. Third, the long-term metabolism and biocompatibility of nanoscale PSs and PTAs, as well as their tumor-targeting capacity and specificity for various cancers, require further study. Fourth, pre-clinical and clinical experiments on real-time monitoring of the local immune response and the status of surrounding normal tissues are also needed. Finally, although synergistic PT and PT combined with other conventional treatments are the most actively studied areas, the necessity, economic benefits, safety, and efficacy of these combined therapies require detailed discussion for each individual patient. In summary, PT for bone cancer has developed rapidly in recent years, and we strongly believe that PT has great prospects in tumor therapy. We hope this review can provide valuable information and insights for future studies on PT.
An XAI approach for COVID-19 detection using transfer learning with X-ray images
The coronavirus disease (COVID-19) has continued to cause severe challenges during this unprecedented time, affecting every part of daily life in terms of health, economics, and social development. There is an increasing demand for chest X-ray (CXR) scans, as pneumonia is the primary and vital complication of COVID-19. CXR is widely used as a screening tool for lung-related diseases due to its simple and relatively inexpensive application. However, these scans require expert radiologists to interpret the results for clinical decisions, i.e., diagnosis, treatment, and prognosis. The digitalization of various sectors, including healthcare, has accelerated during the pandemic, with the use and importance of Artificial Intelligence (AI) dramatically increasing. This paper proposes a model using an Explainable Artificial Intelligence (XAI) technique to detect and interpret COVID-19-positive CXR images. We further analyze the impact of COVID-19-positive CXR images using heatmaps. The proposed model leverages transfer learning and data augmentation techniques for faster and more adequate model training. Lung segmentation is applied to further enhance model performance. We compared several pre-trained networks, with the ResNet model achieving the highest classification performance (F1-score: 98%).
Introduction
The COVID-19 outbreak has been the most significant pandemic of the 21st century [1], with hundreds of millions of reported cases and over five million deaths worldwide as of 2021 [2]. Though reverse transcription-polymerase chain reaction (RT-PCR) is the reference standard method to identify patients with a COVID-19 infection, chest X-ray (CXR) and Computed Tomography (CT) have been extensively used in diagnosis, monitoring, and treatment decisions regarding COVID-19 cases [3][4][5]. Pneumonia is the most common radiological manifestation of COVID-19, which can be detected using CXR images [6,7]. Many thoracic imaging societies, like the Radiological Society of North America, state that routine CT for the identification of COVID-19 pneumonia is currently not recommended in the diagnosis of COVID-19 unless the patient is seriously ill [8][9][10]. Moreover, X-ray images are preferable for COVID-19 case detection because they are captured faster at low cost and are more readily available than CT images [10][11][12]. Manually diagnosing pneumonia in X-ray images is a challenging, time-consuming process with poor diagnostic performance [13], [14]. Recently, a variety of Machine Learning (ML) based COVID-19 detection methods using X-rays have been developed and implemented [15]. Pneumonia detected with X-rays can serve as the first stage of COVID-19 disease detection [16]. Some models use AI and computer vision associated with the CXR imagery of patients to identify if the patients are diagnosed as COVID-19 positive [17]. The second stage of the analysis is designed to detect if the pneumonia is caused by COVID-19. Additionally, modern technology such as AI with imagery inputs like heatmaps and similar data can be used by physicians as decision support tools to minimize human errors and increase diagnosis efficiency [18,19]. For several decades, AI has been used by academia and industry; it is inspired by human learning, mimicking the brain's cognitive features to learn and make decisions artificially like a human.
Traditional AI models function as black-box models for most researchers and professionals using them for various tasks, including medical diagnostic purposes. Such traditional AI methods lack the details and explanations that help physicians make better decisions and interpretations. Explainable AI (XAI) provides this opportunity by transforming AI-based black-box models into more explainable and transparent gray-box models. The major limitations of the methods mentioned above are that they cannot: 1) analyze the level of COVID-19 cases, and 2) provide sufficient insights regarding model details. This study proposes a model for COVID-19 case detection and its interpretation using XAI, depicted in Fig. 1. The contributions of the proposed framework are: (i) detection and classification of COVID-19 cases from affordable CXR images, and (ii) automatic interpretation of COVID-19 cases using a LIME-based heatmap implementation with XAI from X-ray images to assist clinicians and radiologists.
The proposed model utilized lung segmentation, transfer learning, and data augmentation techniques for faster and more adequate model training. A pre-trained network comparison was performed, in which the ResNet model achieved the highest classification performance (F1-score: 98%).
The remaining sections of this paper are organized as follows. Related works are introduced in Section 2. Section 3 presents data collection and processing steps and the validation methods. Section 4 provides details about the methodology and the implementation of the proposed model. The results and related discussions are examined in Section 5. New trends and future work are provided in Section 6. Conclusions are drawn in Section 7.
Related works
Recently, there have been many studies that utilize ML techniques to combat the COVID-19 pandemic. For instance, the authors in [20] proposed multi-level thresholding with a Support Vector Machine (SVM) classifier for the early detection of COVID-19 cases. First, features were extracted using a multi-level thresholding technique. After that, an SVM classifier was applied to the extracted features of 40 contrast-enhanced CXR images, and a classification accuracy of 97% was obtained. In another study [21], the authors applied an improved SVM classifier to detect COVID-19 cases. They collected an image dataset from 235 patients, of which 43% were confirmed COVID-19 cases. Five ML algorithms, i.e., logistic regression, random forests, gradient boosting trees, neural networks, and SVM, were trained with 70% of the dataset, and their performance was evaluated with the remaining 30%. The results showed that the SVM classifier performs best in detecting COVID-19 cases compared with other conventional ML methods, with an accuracy of 85%. In [22], Random Forest and XGBoost algorithms were applied to X-ray images to detect COVID-19 cases. The results showed that XGBoost, with an accuracy of 97.7%, provides similar performance to the Random Forest method, with an accuracy of 97.3%. Advanced learning methods based on Convolutional Neural Networks (CNN) have also been proposed and employed to detect COVID-19 cases using X-ray images to overcome the limitations of the conventional ML approaches. Ozturk et al. [23] proposed a Deep Learning (DL) model for the early detection of COVID-19 cases using X-ray images. The proposed model consists of 17 convolutional layers and five pooling layers using Maxpool. Moreover, these layers have different filter numbers, sizes, and stride values. The model was employed on 1125 X-ray images, including 125 for the COVID-19 class, 500 for the pneumonia class, and 500 for the normal class. The model provided a classification accuracy of 98.08% for binary classes and 87.02% for multiclass classification. Toraman et al. [24] proposed a Convolutional Capsule Network architecture (CapsNet) to detect COVID-19 cases using CXR images. The method was applied to a dataset containing X-ray images of COVID-19 [25], No-Findings, and pneumonia [26] cases. The results showed that the CapsNet approach provides highly accurate diagnostics for COVID-19, with 97.24% and 84.22% accuracy for binary and multiclass classification, respectively. In [27], a CNN model was designed and developed using the EfficientNet architecture to automatically diagnose COVID-19 cases with X-ray images. The proposed model uses EfficientNet with 10-fold stratified cross-validation, which was applied to classify binary and multiclass cases using X-ray images containing COVID-19, pneumonia, and normal patients. The proposed method achieved an average recall of 99.63% and 96.69% for binary and multiclass classification, respectively. A DL-based ML method was developed by Apostolopoulos et al. to detect COVID-19 cases [28]. The method was applied for both binary and multiclass analysis, using a dataset composed of 224 COVID-19 X-rays, 700 bacterial pneumonia, and 500 no-findings images. The proposed model achieved high accuracies of 98.78% and 93.48% for binary (COVID-19 vs. No-findings) and multiclass (COVID-19 vs. No-findings vs. pneumonia) classification, respectively.
Moreover, Hemdan et al. [29] designed a COVID-19 case detection method based on DL using X-ray images, and the proposed method was compared with seven other DL-based COVID-19 case detection methods. The method was performed for binary class classification only, and an accuracy rate of 74.29% was estimated. Three different automated COVID-19 case detection methods were developed based on three different DL models, namely ResNet50, InceptionV3, and InceptionResNetV2, in [30]. The developed methods were applied for binary class classification only, and the highest accuracy rate was achieved by ResNet50, with an average of 98%. Islam et al. [31] proposed a combination of two different methods, CNN and long short-term memory (LSTM), for detecting COVID-19 cases using X-ray images. In the proposed approach, a CNN was first applied to the X-ray images to extract features. The obtained features were then used by the LSTM to classify COVID-19 cases. The method was performed on a collection of 4,575 X-ray images, including 1,525 images of COVID-19. The experimental results indicated that the CNN-LSTM performs better than the state-of-the-art methods, with an accuracy of 99.4%. Loey et al. [32] used a Generative Adversarial Network (GAN) with deep transfer learning to diagnose COVID-19 from X-ray images. The proposed approach used three different pre-trained transfer learning models, i.e., AlexNet, GoogleNet, and ResNet18. The method was performed on a collection of datasets consisting of 69 COVID-19, 79 bacterial pneumonia, 79 viral pneumonia, and 79 normal cases. The experimental results showed that using GAN with pre-trained GoogleNet provides the highest accuracy rate, 99.9%, for binary class classification problems. Bandyopadhyay et al. [33] developed a hybrid model based on two different ML methods, LSTM and Gated Recurrent Unit (GRU), to detect COVID-19 cases automatically. The proposed method obtained 87% accuracy for the confirmed COVID-19 cases. A DL method was presented in [34] to automatically classify COVID-19 cases from CXR. The proposed model achieved an accuracy of 89.5%, a precision of 97%, and a recall of 100% for COVID-19 cases. In [35], a multi-dilation DL approach (CovXNet) for automatic COVID-19 and other pneumonia case detection from CXR images was proposed. Experiments were performed on two different datasets to evaluate the performance of CovXNet. The first dataset consisted of 5,856 X-ray images, and the other dataset contained 305 X-ray images of different COVID-19 patients. The results showed that the CovXNet method achieved an accuracy of 97.4% for COVID/Normal detection, 96.9% for binary class classification, and 90.2% for multiclass classification. Other DL methods have been designed and developed based on different pre-trained models such as VGG16, VGG19, ResNet50, DenseNet121, Xception, and capsule networks [36][37][38][39][40][41][42]. Generally, existing approaches attempt to resolve binary and multiclass COVID-19 case classification problems.
Data collection, preprocessing, and validation
The collected images for COVID-19 cases were preprocessed using various methods. This section explains them, including the data collection, preprocessing, validation, and test/computational environment.
Data collection
A publicly accessible GitHub dataset of CXR and CT images from lung disease patients suspected of having COVID-19 or other viral and bacterial conditions such as MERS, SARS, and ARDS is available [25]. This dataset is gathered from both public sources and indirectly from hospitals and physicians [43]. In this research, the X-ray scan images have been used to create an XAI-based COVID-19 detection model, while the CT images have been disregarded. Also, low-quality images and pictures with foreign objects (metals, cables, etc.) were omitted. First, the selected X-ray scan images were rescaled to 512×512. Second, various image enhancement techniques were applied to produce enhanced input images, including flipping (right/left and up/down) and rotation and translation with five random angles. Our previous work [44] had only 50 positive and 50 negative X-ray scan images for training and 20 positive and 20 negative samples for testing. In this study, we have benefited from the existing data repositories to extend and improve the classifier's performance and added an explainer. Another issue with the dataset is class distribution. This dataset contains X-ray scan images from those infected by COVID-19 and other diseases. There were only three records in the dataset with COVID-19-negative samples. Therefore, X-ray pictures with ARDS and Streptococcus results were labeled as COVID-19-negative samples. In this study, 6,000 images were collected from the GitHub repositories mentioned above to increase the number of training and testing samples in the dataset. Thus, an analysis using different neural network structures was conducted, similar to the previous study. This study created a binary classification model by marking labels other than COVID-19 as the "0" class. Of these 6,000 samples, 5,500 are COVID-19 negative, and the rest are COVID-19-positive X-ray images. Furthermore, 1,200 of them were used for testing, and the remaining 4,800 for training.
Data preprocessing
Collected raw data were processed with various techniques to increase the model's classification performance. Samples of the dataset are depicted in Fig. 2.
To improve the classification model's performance and increase the number of samples in the dataset, various image augmentation techniques were employed. The parameters used were a rotation range of 20 degrees, zoom range of 15, width shift range of 0.2, height shift range of 0.2, shear range of 0.15, and horizontal flipping. An example of the image augmentation techniques is illustrated in Fig. 3.
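To make the augmentation configuration concrete, a minimal sketch using Keras' ImageDataGenerator with the parameter values reported above is given below; the directory layout, batch size, and pixel normalization are illustrative assumptions rather than the authors' exact code, and the reported zoom range of 15 is assumed to correspond to a fractional value of 0.15.

```python
# Minimal augmentation sketch with the parameter values reported above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # assumed pixel normalization
    rotation_range=20,        # rotation range of 20 degrees
    zoom_range=0.15,          # reported as 15, assumed to mean a 0.15 fraction
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest",
)

# Hypothetical directory layout with one sub-folder per class (covid / non_covid).
train_generator = train_datagen.flow_from_directory(
    "data/train",             # placeholder path
    target_size=(512, 512),   # images rescaled to 512x512 as described above
    batch_size=16,            # assumed batch size
    class_mode="binary",      # COVID-19 positive vs. negative
)
```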
Validation
To assess the performance of the COVID-19 detection part of the framework, we utilized the following performance metrics: F1-score, recall, precision, and accuracy, extracted from the confusion matrix shown in Table 1.
The performance measures are computed from the confusion-matrix entries, i.e., true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), as given in the equations below.
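Assuming the standard confusion-matrix definitions, the metrics referenced above are conventionally computed as:

```latex
\begin{align}
\text{Accuracy}  &= \frac{TP + TN}{TP + TN + FP + FN} \\
\text{Precision} &= \frac{TP}{TP + FP} \\
\text{Recall}    &= \frac{TP}{TP + FN} \\
\text{F1-score}  &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\end{align}
```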
Evaluation of the explanation part of the framework was performed by an MD specialized in COVID-19 by reviewing the test images. Generated heatmaps were reviewed and evaluated one by one by the medical professional.
Test computational environment
The proposed COVID-19 XAI framework in this study was implemented using the Keras DL framework built with Python 3.6. A workstation with an Intel® Core™ i7-8700 processor @ 3.20 GHz, 32 GB of memory, and an NVIDIA GeForce GTX 1080 GPU was used to run the AI-based models. The classification part of the model was trained for 60 epochs, whereas the lung segmentation part was trained for 50 epochs, since the accuracy and loss values did not improve notably after these epochs. The complete training of the framework took around 3 hours. We used a constant learning rate of 0.001 and the "RMSprop" optimizer to train the classification parts. The segmentation was realized using the "adam" optimizer with a 0.005 learning rate. XAI module training takes around 1 minute on average.
Methodology and implementation
COVID-19 detection has been explored in many studies [45]. In contrast to existing approaches, lung segmentation is utilized in the proposed pipeline to enhance the COVID-19 detection and explanation tasks. It forces the system to better learn the features inside the lung region and improves training. Further, it provides greater emphasis on the lungs during the explanation phase of the framework. After this step, transfer learning is adopted to speed up and ease feature extraction, since the number of X-ray images is limited. Following feature extraction, X-rays are classified, and the classification performance of the model is measured. The framework's third stage focuses on explaining the COVID-19 cases. The COVID-19-positive cases are then fed into the LIME-based heatmap explanation part of the pipeline to spotlight the areas with COVID-19 pneumonia and help physicians during diagnosis in a non-invasive manner. The following sections discuss lung segmentation, transfer learning models for classification, and the XAI tools used in this study.
Lung segmentation
Anomalies in the lung provide information about many diseases. Our study examines the CXR to determine whether the patient has COVID-19. An additional lung segmentation part is added to the proposed pipeline to increase the performance of the detection and explanation parts of the proposed model. Manual segmentation is time-consuming and not available for many biomedical applications. Also, human annotations are prone to inconsistencies and mistakes. With lung segmentation, the COVID-19 detection and explanation networks are fed with masked lung images, which forces these parts to detect and explain only within the lung section of the CXR. The output of the explanation part is shown on the whole CXR for better interpretation.
A reference hybrid U-Net [46] architecture that uses a pre-trained VGG11 feature extractor in the encoder part of the U-Net is utilized to obtain the lung segmentation. Networks pre-trained on a large dataset, i.e., ImageNet, outperform networks trained from scratch. The U-Net architecture consists of an encoder and decoder structure with skip connections that carry low-level feature maps to the decoder. Concatenating feature maps from the encoder to the decoder improves the performance and convergence of the network. To train this model, publicly available lung segmentation images are used [47], and various augmentation techniques such as horizontal and vertical shift, minor zoom, and padding are applied. The lung segmentation model has a Jaccard index of 92% and a Dice score of 96%. Additionally, morphological operations, i.e., dilation, are implemented to ensure the correct segmentation of the lungs, with a kernel size of 90 × 30 and three iterations using the OpenCV function cv2.dilate(bc, kernel, iterations=3), as sketched below. Our aim is to segment and interpret all parts of the lung in X-ray images using the proposed approach. Including the perimeter outside the lung does not have an important effect, but including the entirety of the lung has a great impact on classification and especially on explanation tasks. Therefore, we have applied dilation operations. Lung segmentation is used as a pre-processing method to increase the performance of the COVID-19 classification and explanation tasks. The comparison of the model results with/without lung segmentation is summarized in Table 2.
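A minimal sketch of this dilation step is shown below, assuming the U-Net output is a binary NumPy mask; the kernel's row/column orientation is an assumption, since the text only reports a 90 × 30 size.

```python
# Dilate the predicted binary lung mask so the whole lung is retained,
# mirroring the cv2.dilate(bc, kernel, iterations=3) call described above.
import cv2
import numpy as np

def dilate_lung_mask(bc: np.ndarray) -> np.ndarray:
    """bc: binary lung mask (uint8, 0/255) predicted by the U-Net."""
    kernel = np.ones((90, 30), np.uint8)  # assumed (rows, cols) orientation
    return cv2.dilate(bc, kernel, iterations=3)

# The dilated mask can then be applied to the original CXR, e.g.:
# masked_cxr = cv2.bitwise_and(cxr, cxr, mask=dilate_lung_mask(mask))
```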
Transfer learning
Transfer learning is a widely used technique in machine learning that simplifies the process of building models. It involves utilizing knowledge gained from a model previously trained on a related task and applying it to a new, but related, problem. This approach is based on extracting features learned on the related initial task and transferring these features to the new task to improve accuracy and reduce training time. Specifically, pre-trained DL network models have already been trained on large datasets and have achieved high accuracy, and these pre-trained models can be used as a starting point for the new task. The transfer learning process starts with the patterns previously learned while solving a different problem. Further, it shortens the time-consuming training process and enables the creation of a model with high classification performance. These pre-trained models are based on deep convolutional neural networks. In deep learning, this method involves initially training a CNN for a classification problem using large-scale training datasets. Since a CNN model can learn to extract the image's discriminative features, the availability of initial training data is an essential part of successful training. The model performance evaluation depends on the model's fitness for transfer learning, which relies on the CNN's capacity to select the most important image features.
The VGG-Net model, developed by Simonyan et al. [46] using small convolution filters, was used in both the segmentation and detection parts of this framework. Compared to previous models, its most significant difference is its deeper structure, which typically stacks multiple convolution and pooling layers. This model consists of nearly 138 million parameters. VGG is one of the popular networks trained with more than a million images from the ImageNet dataset covering 1,000 different classes. Therefore, the model can be applied as a helpful feature extractor for new images.
The ResNet (residual network) won the ImageNet challenge in 2015 and was proposed by He et al. [48] in a paper titled "Deep Residual Learning for Image Recognition". The version used in this model has 50 neural network layers and was trained on the ImageNet dataset with 1,000 different classes. An increased number of layers brings some challenges, such as model complexity and vanishing gradients. ResNet was inspired by the VGG networks, but it has fewer filters and less complexity. The vanishing gradient problem is mitigated by skip connections, which allow gradients to flow through alternative paths; this is the core concept of the residual blocks in ResNet. The ResNet50 model has more than 23 million parameters.
The Inception V3 model was developed by Szegedy et al. in a paper titled "Rethinking the Inception Architecture for Computer Vision" published in 2015 [49]. This iteration of the Inception architecture is more computationally efficient than the previous models. Larger convolutions are replaced with parallel smaller convolutions. Additionally, factorized convolutions and an auxiliary classifier are utilized to improve the model's performance. A new grid size reduction technique was proposed to combat computational bottlenecks.
Our study used the VGG16, VGG19, ResNet, and Inception V3 neural network models to improve the performance of our X-ray image-based COVID-19 detection model; a transfer-learning sketch is given below.
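The following is a minimal transfer-learning sketch consistent with the setup described above (ImageNet-pretrained backbone, binary output, RMSprop with a 0.001 learning rate); the classification head, the frozen backbone, and the dense layer sizes are assumptions, since the exact layer configuration is not reported.

```python
# Transfer-learning sketch: ImageNet-pretrained ResNet50 backbone with a small
# binary classification head for COVID-19 positive vs. negative CXRs.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

base = ResNet50(weights="imagenet", include_top=False, input_shape=(512, 512, 3))
base.trainable = False  # assumed: reuse pretrained features, train only the head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),   # assumed head size
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary COVID-19 output
])

model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),  # as reported
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# model.fit(train_generator, validation_data=val_generator, epochs=60)
```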
Explainable artificial intelligence (XAI)
AI has diverse applications and provides unprecedented advantages, such as higher efficiency and broader data analysis, for many daily tasks such as manufacturing, finance, and entertainment [50,51]. However, the use of AI is lagging in high-risk systems, especially in healthcare [52]. The inner workings of AI systems comprise complicated mathematical and statistical processes, which are not interpretable. Fortunately, a black-box AI model can be converted to a glass-box model by applying explainable AI tools.
AI models are converted to more understandable systems by making them interpretable or comprehensible. Shallow Learning (SL) methods such as decision trees and regression algorithms are more transparent, as their mathematical backgrounds are well-defined and studied. These methods are interpreted by utilizing the underlying math. On the other hand, the inner workings of DL methods, such as CNNs and Recurrent Neural Networks (RNNs), are understood by finding the relationship between the inputs and the outputs. DL methods consist of nodes and weights associated with the inputs and outputs. This relationship should be clarified to mitigate risks and build trust in AI models for enhanced adoption. A comprehensive program driven by DARPA showed that XAI improves user trust significantly and increases user adoption through the provided explanation [53,54].
Grad-CAM, Taylor decomposition, and LIME are some of the XAI tools used to make AI models more understandable. Our model provides a LIME-based heatmap explanation method to detect and localize COVID-19 in CXR scans.
In this study, the LIME model-independent general XAI method is used, which finds the statistical connection between the inputs and the outputs of the models. Inputs are perturbed during the training of local surrogates to understand their effects on the output, instead of training them globally. This process results in an interpretable representation and visualization of the instance. The mathematical definition of LIME is given as follows:

explanation(x) = argmin_{g ∈ G} L(f, g, π_x) + Ω(g)

where x is an instance of the data space for which we desire an explanation of its predicted target value; f is the original model; g is an interpretable surrogate model from the family G; π_x defines the locality around x; and L(f, g, π_x) is the locality-aware loss, a fidelity function that measures how unfaithful g is while approximating f in the locality defined by π_x. L(f, g, π_x) is minimized for local faithfulness, while the second term, Ω(g), is kept low so that the explanation remains interpretable.
After obtaining the explanations, the areas determined by LIME are fed into the heatmap creation part of the explainer. These areas are highlighted with heatmaps to spotlight the areas with COVID-19 pneumonia, providing additional information to physicians during the diagnosis of COVID-19. In comparison to the regular LIME output [55], our XAI part takes the LIME output one step further to better localize the COVID-19-affected areas, as shown in Fig. 4; a code sketch of this step is given below.
Fig. 4. a) Regular LIME [55] and b) our proposed LIME-based XAI model output on the right.
Fig. 5. a) Healthy chest X-ray [56] and b) Chest X-ray of a patient with COVID-19 [36].
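A minimal sketch of how such a LIME explanation could be generated for a segmented CXR with the lime Python package is given below; the model path, image path, prediction wrapper, and overlay step are illustrative assumptions rather than the authors' exact implementation.

```python
# LIME-based explanation sketch for a single (segmented) CXR image.
import cv2
import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = tf.keras.models.load_model("covid_resnet.h5")   # placeholder path to the trained classifier
cxr_image = cv2.imread("segmented_cxr.png")[..., ::-1]  # placeholder path; BGR -> RGB,
                                                         # assumed already resized to the model input

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Return two-column (negative, positive) probabilities for a batch of images."""
    probs = model.predict(images / 255.0)                # sigmoid output, shape (n, 1)
    return np.hstack([1.0 - probs, probs])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    cxr_image, predict_fn, top_labels=1, hide_color=0, num_samples=1000
)

# Keep the regions supporting the predicted class and overlay them on the CXR;
# the returned mask can also be rendered as a heatmap over the full image.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)
```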
Results and discussions
Under X-rays, dense structures such as bones and metal blocks appear white since they block the X-rays. Less dense areas appear in shades of gray, and the least dense regions, such as the lungs, appear black. Healthy and COVID-19 CXR images are shown in Fig. 5.
In CXR images, healthy lungs appear black (see Fig. 5a), whereas the areas affected by COVID-19 appear white. Complications in the lungs can be detected by examining the CXR. A CXR with COVID-19 has more white areas spread over the lungs, as shown in Fig. 5b.
The classification performance of seven DL models with/without transfer learning and with/without segmentation is given in Table 2. The models (VGG16, VGG19, ResNet, InceptionV3) were developed using transfer learning, taking lung-segmented CXR images as inputs. The performance of another pre-trained ResNet model without lung segmentation and of models without transfer learning is also compared in Table 2. Weighted averages are used to take the sample size into consideration and provide a more accurate representation of the averages during the calculations, as illustrated below.
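As a small illustration of the weighted averaging mentioned above, scikit-learn can compute support-weighted metrics directly; the labels below are dummy values for demonstration only.

```python
# Weighted-average precision/recall/F1, where each class's score is weighted
# by its support (number of true samples in that class).
from sklearn.metrics import precision_recall_fscore_support, f1_score

y_true = [0, 0, 0, 0, 1, 1]   # dummy ground-truth labels (0 = negative, 1 = positive)
y_pred = [0, 0, 1, 0, 1, 1]   # dummy predictions

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted"
)
print(f"weighted precision={precision:.2f}, recall={recall:.2f}, F1={f1:.2f}")

# Equivalent single-metric call:
print(f1_score(y_true, y_pred, average="weighted"))
```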
In our previous studies [44,58], we investigated whether transfer learning models could be used to detect COVID-19 positives in X-ray data and increase the model's classification performance without XAI, which we now focus on in the present study. The results showed we could detect COVID-19 from X-ray images in a similar manner to current techniques for other imaging approaches, although these models were limited in explainability. In this study, we used the cognitive learning approach to confirm and measure the quality of the predictions against those of medical doctors, which provides a generalizable model to evaluate the validity of the predictions in the X-ray data and the use of the XAI techniques. Table 2 indicates that the pre-trained ResNet transfer model is better suited for detecting COVID-19, with 98% accuracy. The VGG16 and VGG19 models indicate a similar classification performance, with an accuracy of 96%. The InceptionV3 model had the lowest classification accuracy at 91%. The VGG16, VGG19, ResNet, and InceptionV3 models performed comparably in COVID-19 classification, with F1-scores of 0.96, 0.92, 0.98, and 0.90, respectively. The ResNet model presents the highest classification performance, i.e., a 98% F1-score. This also suggests that the F1 metric is effective for identifying the best-performing classification model.
Additionally, Table 2 compares models with/without transfer learning and lung segmentation, i.e., the ResNet model without lung segmentation and other studies without transfer learning [36,57]. This comparison indicates that transfer learning provides better resource management and improved efficiency during training. Models without transfer learning require far more complicated architectures and high-performance computational resources, which also consume more time. However, models with transfer learning do not rely on high computational power as much as those without it. The previously trained networks are taken as a tool to solve the problem at hand more efficiently and faster, without expensive and extensive resources. Our proposed model outperforms both of the models without transfer learning [36,57]. In addition, the proposed model has better performance than ResNet without lung segmentation. Fig. 6a shows a COVID-19-positive CXR image. The lung on the left has a pattern in which whiter areas indicate denser pneumonia regions. These areas are well detected and highlighted with a heatmap by the proposed method in Fig. 6b.
The CXR image of a COVID-19-positive patient is highlighted in Fig. 7. The model correctly identified the disease and highlighted the regions infected by pneumonia for better understandability, even in a low-quality image.
The pneumonia infection areas are depicted and highlighted in Fig. 8a and Fig. 8b, respectively. Another CXR image indicating and highlighting the areas with pneumonia is shown in Fig. 9, which includes cables and other medical devices; our model is resilient to these external elements. The proposed model successfully identifies the COVID-19-positive case; however, the explanation of the CXR image is slightly off the correct place in Fig. 10. The heatmap indicates the infected areas along with the perimeter of the lung on the left, as the whiter areas on the lung perimeter are considered pneumonia.
The deficiencies due to the reflections were eliminated by introducing lung segmentation into the pipeline. The heatmap provided a better explanation of pneumonia areas where it is impossible to notice the color difference with the naked eye. Also, for most X-rays, it is evident that there is still room for improvement through more extensive data collection and labeling. The limited color change observed on one side of the lung could be improved as well. In addition, different color scales could be studied to create more contrast between the heat zones, especially in the limited area around the lesion in the heatmap. Many countries and regions in the world cannot access tomography; for these, direct radiographs can be the only diagnostic tool. Furthermore, computed tomography cannot be repeated routinely due to high radiation exposure. That is why a well-working heatmap can be very helpful. With the help of this study, even those who are not very experienced can easily see the area where the lesions are. The following observations are deduced from the overall results:
Observation 1. COVID-19 could be detected by AI using transfer learning with high performance (F1-score: 98%).
Observation 3. The XAI application on CXR images has a high potential for faster diagnosis and prognosis of COVID-19.
Observation 4.
The treatment of COVID-19 can be tracked more easily by applying the proposed model to the CXR images. We would like to emphasize that many learning-based methods have been designed and proposed to classify COVID-19 cases, and these methods have been compared with those of radiologists. The results show that learning-based systems provide better results in terms of precision and time [59][60][61].
Observation 5. The application of XAI methods enables the adoption of AI applications in high-risk industries such as healthcare.
Observation 6. Segmenting lung images as a preprocessing step improves the COVID-19 detection performance and its explanation.
New trends and future work
Under normal conditions, a well-trained physician can detect a pneumonia case by looking at the CXR and provide diagnosis results relatively quickly, without investigating thousands or millions of X-ray images. This study is motivated by the power of mental modeling, which the human brain performs to understand the concept of pneumonia and its causes by comprehending associated facts such as human anatomy, fundamentals of virology, how lungs and ribs function, and other information learned during medical education at school or in a clinic. Researchers and engineers have developed various XAI tools to help professionals and academia improve their understanding and insights regarding AI-based implementations. The following XAI tools have significant potential to be implemented primarily in the field of medical sciences, where image processing and clustering play an important role:
Conclusion
This paper presents an XAI approach for COVID-19 diagnosis using transfer learning with CXR images. The proposed model supports decision-making for COVID-19 cases, i.e., positive and negative. The framework accepts a CXR image as the input and predicts the COVID-19 classification and its explanation as the output. To improve the classification and explanation performance, a lung segmentation model is realized, and its output, segmented lung images, is fed to the framework. An XAI approach, i.e., LIME, is used to faithfully describe the predictions of COVID-19 cases in an interpretable manner through heatmaps. The proposed model is also extended through the LIME and heatmap methods to offer better explainability. XAI tools help non-expert end-users understand the black-box AI model by providing explainability and transparency. They provide feedback to the end-user, i.e., by providing more information and tracing the insight back to the inner workings of the black-box AI model.
The internal workings of AI models, especially deep learning models, are black-box concepts, and it cannot easily be explained why an AI model outputs a specific result. Our model requires lung segmentation before classification and explanation, extending the overall processing time. It is also trained on a limited number of CXR images; additional CXR images will increase the robustness and performance of classification. In addition, our model's XAI part has limitations while interpreting the CXR images. Our model first needs to classify the COVID-19 CXR images from the healthy ones; then, it provides heatmaps indicating the areas with COVID-19 pneumonia. When a healthy CXR image is given to the XAI part of the model pipeline, the model still tries to find COVID-19-affected areas and provides highlighted results corresponding to the closest COVID-19-like areas. Therefore, the classification part is also critical in this project.
This study demonstrated how XAI techniques could be helpful for COVID-19 diagnosis in the healthcare domain when assessing trust and gaining insights into predictions. The proposed hybrid model provides two outputs: (1) COVID-19 diagnosis and (2) Model-decision explanation. Interpretation of the results obtained from the proposed XAI module offers adequate information about COVID-19 diagnosis. This study is expected to benefit researchers and physicians working on COVID-19 diagnosis or related studies by providing insight into XAI's potential.
CRediT authorship contribution statement
Salih Sarp, Ferhat Ozgur Catak, Murat Kuzlu: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Gungor Ates: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper.
Funding statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Identification and molecular mapping of resistance genes for adult-plant resistance to stripe rust in spring wheat germplasm line PI660076
Wheat is one of the major food crops worldwide. Stripe rust can cause great losses of wheat yield, especially when the disease is prevalent. Chemical control can lead to fungicide resistance in the pathogen and also has a serious impact on human health and the environment. Therefore, the most economical measure to control wheat stripe rust is to cultivate resistant varieties. Rapid variation of stripe rust races often leads to rapid "loss" of resistance to the disease; therefore, breeders and researchers have to continuously explore new stripe rust resistance genes to provide new sources of resistance against the rapidly changing stripe rust races. Previous studies have confirmed that PI660076, a spring wheat line, shows stripe rust resistance at the adult-plant stage under natural conditions, which is of great value in breeding programs. In this study, a recombinant inbred line (RIL) population was constructed by crossing the wheat line PI660076 with the stripe rust-susceptible line AvS. Genotyping of the population was performed using a wheat 15K SNP array. Three QTLs were identified using phenotypic data over four years across three environments. The resistance type of each QTL was determined by inoculating RIL lines carrying a single homozygous QTL at the seedling and adult stages under controlled conditions. The all-stage resistance (ASR) QTL QYr076.jaas-2A (flanked by SNP marker AX-11048464 and Kompetitive Allele-Specific PCR (KASP) marker KASP_4940) explained 7.13–16.58% and 6.95–7.25% of the phenotypic variation in infection type (IT) and disease severity (DS), respectively. The adult-plant resistance (APR) QTL QYr076.jaas-4D.1 (flanked by KASP marker KASP_0795 and SNP marker AX-111567243) explained 6.85–12.70% and 7.94–17.26% of the phenotypic variation in IT and DS, respectively. The APR QTL QYr076.jaas-4D.2, flanked by KASP markers KASP_9130 and KASP_6535, explained 7.97–39.19% and 8.77–20.55% of the phenotypic variation in IT and DS, respectively. All three QTLs are likely to be new. These results lay a foundation for further utilization of the stripe rust-resistant line PI660076, as well as for fine mapping and molecular marker-assisted selection breeding.
Introduction
Stripe rust, caused by Puccinia striiformis f. sp. tritici (Pst), is a typical airborne disease that can spread with high air flow and occurs in almost all wheat-growing areas in the world. Once wheat stripe rust breaks out, it seriously interferes with the normal growth of wheat. In epidemic years, it causes serious yield and economic losses to wheat production (Chen 2014; Zhou et al. 2022a). Currently, there are two main control strategies for wheat stripe rust. One is chemical control, which has the advantages of quick action and strong effects; especially in stripe rust epidemic years, it can effectively reduce the loss of wheat yield. However, this method also has clear weaknesses; specifically, long-term use will not only lead the pathogen to develop fungicide resistance but will also have an impact on the environment and human health. Compared with chemical control, planting disease-resistant varieties is the most economical and environmentally friendly method; however, the loss of resistance caused by virulence variation of the stripe rust pathogen is a major problem in resistance breeding programs (Line 2002).
Many stripe rust resistance genes, such as Yr1-Yr4, Yr6-Yr10, Yr17, Yr20-Yr22, Yr24-Yr29, and Yr43, have lost effectiveness within a few years due to racial variation in pathogen populations (Chen 2020). These resistance genes belong to the all-stage resistance (ASR) type (also referred to as seedling resistance), which has the disadvantage of being race-specific. In contrast, the other type of resistance, adult-plant resistance (APR), is non-race-specific and therefore more durable, which can effectively delay the 'loss' of resistance in wheat varieties, as shown by the wheat cultivar Libellula grown in Longnan, Gansu, China (Zhou et al. 2003) and Alpowa grown in the US Pacific Northwest (Lin & Chen 2007). To date, 84 officially named stripe rust resistance genes and more than 200 QTLs have been identified, most of which confer ASR and only a few of which confer APR (Feng et al. 2018; Klymiuk et al. 2022; Ren et al. 2012a; Wang et al. 2022; Zhou et al. 2014, 2022b). Therefore, it is important to identify more APR genes in wheat and its close wild relatives.
PI660076 is a spring wheat line obtained by crossing the stripe rust resistance donor PI180957 and the stripe rust-susceptible wheat cultivar 'Avocet Susceptible' (AvS) at the USDA-ARS Wheat Genetics, Quality, Physiology and Disease Research Unit (Wang et al. 2012). Previous studies have found that PI660076 was susceptible at the seedling stage and resistant at the adult stage, with the exception of Tianshui, Gansu Province, China, in 2013, where PI660076 was susceptible in the field at the adult stage (unpublished data). For this interesting situation, one conjecture was that a race probably occurred in the Tianshui field in 2013 that could interact with a host gene in PI660076 and cause susceptibility, but this race did not become dominant in the following year, and with no corresponding race present to interact with the host gene, the line was resistant. However, because the races that caused susceptibility of PI660076 in the Tianshui field in 2013 were not collected and identified, there was no way to verify this susceptible phenotype. On the other hand, PI660076 showed resistance at the adult stage but susceptibility at the seedling stage, and it is necessary to identify and map its resistance QTL.
In this study, 208 F7 recombinant inbred lines (RILs) were constructed by crossing PI660076 with AvS and were genotyped with a 15K SNP array on a whole-genome scale. Combined with the genotypic variation and the field phenotyping of stripe rust, molecular mapping of the stripe rust resistance genes in PI660076 was performed.
Plant materials and stripe rust evaluation in the field
A RIL population with 208 lines was produced by crossing the stripe rust-susceptible wheat cultivar AvS, as the female parent, with PI660076, as the donor of stripe rust resistance. The RIL population and the two parents were tested for their reaction to naturally occurring stripe rust in the field at Mianyang (MY), Sichuan, in 2019, 2020, 2021, and 2022, and at Yangling (YL), Shaanxi, in 2020. Each field trial followed a randomized complete block design with three replicates at each location. Each plot was a single 1 m row, with 25 cm between adjacent rows. Each plot was seeded with approximately 40 seeds of a RIL, and a set of the parents was seeded every 60 rows to assess the uniformity of stripe rust infection. A row of AvS was sown around each block and served as spreader plants for stripe rust. Infection type (IT) and disease severity (DS) data were collected when the susceptible parent AvS and some susceptible lines of the RIL population were fully infected. IT was scored on the 0-9 scale, and DS refers to the percentage of diseased leaf area (classified as 0%, 1%, 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, and 100%).
Greenhouse test
The two parents and the RIL lines carrying a single, homozygous QTL were selected for seedling and adult-plant tests. Two-leaf-stage seedlings and adult plants (flag leaf fully developed) of the lines were inoculated with mixed Pst races collected from infected wheat plants in the field. About 10 seeds of each line and of the parents were planted in a 7 × 7 × 7 cm pot filled with soil mixture and grown in a rust-free growth chamber. The inoculated plants were kept in a dew chamber for 24 h at 10 °C without light, and the seedling plants were then grown in a growth chamber with a low diurnal temperature cycle gradually changing between 4 °C at 2:00 am and 20 °C at 2:00 pm and a 16 h light/8 h dark photoperiod. After inoculation, the adult plants were grown under controlled conditions using a low diurnal cycle of temperatures gradually changing from 4 °C to 20 °C and a high diurnal cycle from 10 °C to 35 °C. Infection type (IT) on the 0-9 scale was scored for each line 18 to 21 days after inoculation, when stripe rust was fully developed on AvS.
DNA extraction
Approximately 2 g of young leaf tissue from each RIL and the parents was harvested, dried in a freeze dryer (Thermo Savant, Holbrook, NY, USA) for 48 h, and ground using a Mixer Mill (MM 300, Retsch, Germany) for DNA isolation. After DNA extraction using a CTAB method as modified by Zhou et al. (2021), DNA was dissolved in Tris-EDTA buffer (10 mM Tris-HCl and 1 mM EDTA, pH 8.0). All DNA samples were quantified with an ND-1000 spectrophotometer (NanoDrop Technologies, Thermo Scientific, Wilmington, DE, USA) and adjusted to a final concentration of 50 ng/μl as stock DNA solutions. Stock DNA solutions were further diluted with sterilized ddH2O depending on the requirements of the individual experiment.
Genetic map construction and QTL analysis
Genotyping of the RIL population was performed using a wheat 15 K SNP chip assay, and a marker scoring matrix was created in Excel. For mapping analysis, the association between disease trait data and marker data was calculated with the inclusive composite interval mapping (ICIM) method implemented in the software IciMapping v4.1 (Meng et al. 2015). Redundant markers were removed using the "Bin" function, and genetic mapping was performed with the Kosambi mapping function (Kosambi 2016). Phenotypic variation and correlation analyses were calculated with SAS 9.0 (SAS Inc., Cary, NC). QTL analysis was performed on the basis of line means from individual experiments in each site-year environment. A logarithm of odds (LOD) threshold of 2.5 was used to declare a significant QTL (P < 0.01).
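As an illustration of the underlying statistic only, the sketch below (not part of the original analysis pipeline) computes a single-marker LOD score by comparing the residual sum of squares of a marker-mean model with that of the null model; ICIM as implemented in IciMapping additionally controls for background-marker cofactors.

```python
import numpy as np

def lod_single_marker(phenotype, genotype):
    """Single-marker LOD: (n/2) * log10(RSS_null / RSS_marker).
    ICIM refines this basic scan by including background-marker cofactors."""
    y = np.asarray(phenotype, dtype=float)
    g = np.asarray(genotype)              # e.g. 0 = AvS allele, 2 = PI660076 allele
    rss_null = np.sum((y - y.mean()) ** 2)
    rss_marker = sum(np.sum((y[g == a] - y[g == a].mean()) ** 2)
                     for a in np.unique(g))
    return (len(y) / 2.0) * np.log10(rss_null / rss_marker)

# A putative QTL is declared where the LOD profile exceeds the 2.5 threshold used above.
```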
The physical location of each QTL was determined from the physical positions of the SNP markers linked to it. The IWGSC RefSeq v1.1 assembly was used as the reference to construct the physical map. MapChart v2.3 (Voorrips 2002) was used to draw the genetic and physical maps.
Exon capture sequencing analysis
DNA from the two parents was used for exome capture sequencing on the BGI T7 platform (150 bp paired-end reads) according to the wheat exome capture sequencing protocol described by Dong et al. (2020), performed by Tiancheng Weilai Technology Co., Ltd. (Chengdu, China). After removing adapter sequences, low-quality bases and undetected bases, clean reads were aligned to the Chinese Spring (CS) reference genome (IWGSC RefSeq v2.1) with BWA (default parameters) (Li & Durbin 2009), and single nucleotide polymorphisms (SNPs) and short insertions and deletions (INDELs) were called with GATK (default parameters).
Development of KASP markers
Based on the physical positions of the SNPs obtained from the exon capture sequencing analysis, SNPs in the targeted QTL intervals were converted to KASP markers. KASP primers were developed following standard KASP guidelines. The allele-specific primers carried the standard FAM (5′-GAA GGT GAC CAA GTT CAT GCT-3′) and HEX (5′-GAA GGT CGG AGT CAA CGG ATT-3′) tails at their 5′ ends, with the 3′-terminal base targeting the SNP. Common primers were designed so that the total amplicon length was less than 120 bp. KASP assays were performed in 96-well format in 10 μL reaction volumes containing 5 μL HiGeno 2× Probe Mix, 0.14 μL KASP primer mix (allele-specific primer 1-FAM (12 μM), allele-specific primer 2-HEX (12 μM) and common reverse primer (30 μM)), 2 μL genomic DNA at 30 ng μL−1 and 3 μL ddH2O. KASP reactions were run with the following PCR cycling protocol: hot start at 95 °C for 10 min; 10 touchdown cycles (95 °C for 20 s; annealing starting at 61 °C and decreasing by 0.6 °C per cycle for 40 s); and 30 additional cycles (95 °C for 20 s; 55 °C for 40 s). Fluorescence was read on a PHERAstar scanner (LGC Genomics, United Kingdom).
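The tail-attachment step can be sketched as follows; the primer cores and the helper function are hypothetical, and only the FAM/HEX tail sequences and the "< 120 bp amplicon" rule are taken from the protocol above.

```python
# Standard KASP tail sequences quoted in the protocol above
FAM_TAIL = "GAAGGTGACCAAGTTCATGCT"
HEX_TAIL = "GAAGGTCGGAGTCAACGGATT"

def build_kasp_primers(allele1_core, allele2_core, common_primer, amplicon_len):
    """Prepend the fluorescent tails to the 5' ends of the allele-specific cores
    (their 3'-terminal base targets the SNP) and check the amplicon-length rule."""
    if allele1_core[-1] == allele2_core[-1]:
        raise ValueError("allele-specific cores must differ at the SNP base")
    if amplicon_len >= 120:
        raise ValueError("total amplicon should be shorter than 120 bp")
    return FAM_TAIL + allele1_core, HEX_TAIL + allele2_core, common_primer

# Hypothetical core sequences, not the Table 6 primers:
fam_primer, hex_primer, common = build_kasp_primers(
    allele1_core="ACCTGGTCAGTTACGGATCA",
    allele2_core="ACCTGGTCAGTTACGGATCC",
    common_primer="TGGCATCAGGTTACCAGGTT",
    amplicon_len=110)
```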
Phenotypic characterization of APR Resistance
In all field trials, the plants were fully infected. The average IT and DS of the susceptible parent AvS were 8-9 and 91-100%, respectively, while those of the resistant parent PI660076 were 2-3 and 0-10%, respectively (Figs. 1 and 2). The average IT and DS of the F7 RIL population at the adult stage showed continuous, unimodal distributions (Fig. 2), indicating that adult-plant resistance was inherited as a quantitative trait. Pearson correlation coefficients of the mean IT and DS of the F7 RILs between different environments were all positive and highly significant (P < 0.001), ranging from 0.46 to 0.80 for IT and from 0.38 to 0.65 for DS (Table 1). The ANOVA showed that the variances among lines, among environments and for the line × environment interaction were all significant (P < 0.001) for both IT and DS (Table 2). The broad-sense heritability was 0.90 for IT and 0.83 for DS (Table 2), indicating that disease resistance at the adult stage was mainly determined by genotype and was expressed stably across environments and years.
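The entry-mean heritability reported here follows the standard variance-component formula; the sketch below restates it with placeholder values, since the actual variance components from the ANOVA are not reproduced in this excerpt.

```python
def broad_sense_heritability(var_g, var_ge, var_e, n_env, n_rep):
    """H^2 on an entry-mean basis:
       H^2 = s2_g / (s2_g + s2_ge / e + s2_e / (e * r))."""
    return var_g / (var_g + var_ge / n_env + var_e / (n_env * n_rep))

# Placeholder components (not the paper's estimates); 5 environments, 3 replications
print(broad_sense_heritability(var_g=2.0, var_ge=0.6, var_e=1.5, n_env=5, n_rep=3))
```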
Genetic linkage map construction
The wheat 15 K chip assay results showed that a total of 3494 SNP markers were used for linkage map construction, giving a total map length of 14,025.4 cM, with individual chromosomes ranging from 473.5 cM for chromosome 6D to 828.9 cM for chromosome 3B (Table 3). The number of markers per chromosome ranged from 40 for chromosome 4B to 710 for chromosome 2A, with an average of 166 SNP markers per chromosome. The average distance between neighboring SNP markers ranged from 1.1 cM for chromosome 2A to 16.8 cM for chromosome 4B, with an overall average of 6.9 cM per marker. The map was used to identify significant associations between SNP markers and stripe rust resistance.
Exon capture sequencing and development of KASP markers
To facilitate the use of QTL-linked markers, we first converted the flanking SNP markers of the QTLs into KASP markers. Although attempts were made to convert 6 SNP sites into KASP markers, only 3 SNP sites (all for the QTLs on chromosome 4D) could be successfully converted. The successfully converted SNP markers were AX-89349130 (converted to KASP_9130), AX-108766535 (converted to KASP_6535), and AX-110080795 (converted to KASP_0795). Therefore, to obtain more SNP markers, we performed exon capture sequencing of the parental DNA. We selected SNP sites within the mapping intervals and successfully developed three more KASP markers (KASP_4940, KASP_2719, KASP_0974). The newly developed KASP markers, together with the other SNP markers, were verified in the RIL population.
QTL for stripe rust resistance
Three major QTLs were consistently detected in the RIL population across the 5 experiments, on chromosomes 2A and 4D. The QTL on chromosome 2A was temporarily named QYr076.jaas-2A (Fig. 3), and the two QTLs on chromosome 4D were temporarily named QYr076.jaas-4D.1 and QYr076.jaas-4D.2 (Fig. 4). QYr076.jaas-2A was located between SNP marker AX-110484643 and KASP marker KASP_4940, with LOD scores of 3.32-8.24, and spanned a genetic distance of 4.13 cM (Table 4). This QTL explained 7.13-16.58% of the phenotypic variance (PVE). The physical locations of the two flanking markers were 21.13 Mb and 27.90 Mb, respectively (Table 5). The flanking markers for QYr076.jaas-4D.1 were KASP marker KASP_0795 and SNP marker AX-111567243, with AX-111567243 as the marker closest to the QTL; the PVE was 6.85-12.70% (Table 4). Its physical interval spanned 24.26-46.99 Mb and its genetic distance was 29.5 cM (Table 5). The flanking markers for QYr076.jaas-4D.2 were KASP markers KASP_9130 and KASP_6535, with KASP_9130 as the marker closest to the QTL; the PVE was 7.97-39.19% (Table 4). Its physical interval spanned 379.78-436.91 Mb and its genetic distance was 7.41 cM (Table 5). The primer information for the KASP markers is provided in Table 6.
Discussion
In this study, we focused on the unidentified source of resistance in PI660076, a line originating from the USA. PI660076 has maintained long-term disease resistance under natural conditions in an experimental field in Pullman, Washington, USA (unpublished). To test its resistance to the predominant Chinese stripe rust races, PI660076, together with other wheat germplasm resources, was introduced into China for stripe rust resistance testing. Through the evaluation of stripe rust resistance at the seedling and adult stages by Zhou et al. (2015), it was found that PI660076 showed no stripe rust symptoms on its leaves in the infected field in any year from 2007 to the present, whereas AvS consistently showed infection symptoms when grown at the same time and under the same conditions, except for the test in Tianshui in 2013, in which PI660076 showed a susceptible phenotype (IT = 8; DS = 80%). Interestingly, PI660076 still maintained good resistance in other years in Tianshui. Compared with other locations in the same year and with Tianshui in other years, one explanation is that a physiological race occurred in the Tianshui area in 2013 that could interact with a gene in the host, rendering the originally resistant genotype susceptible. This may be because the Tianshui area provides conditions suitable for the overwintering and oversummering of stripe rust; in other words, it is one of the key source regions of Chinese stripe rust pathogens. Although this is an interesting phenomenon, there is currently no way to identify this susceptibility gene. Because PI660076 maintained good stripe rust resistance over multiple years and locations in China, it is necessary to identify and map its resistance genes. This study identified three QTLs for stripe rust resistance in the germplasm PI660076. The QTL QYr076.jaas-2A was located on the short arm of chromosome 2A. Chromosome 2A is rich in resistance QTLs and resistance genes, but almost all of the resistance genes that have been reported are ASR genes. Wheat chromosome 2AS is also a hot spot where genes for stripe rust resistance are enriched. Currently, this region contains more than 20 genes/QTLs with APR characteristics, including one officially named APR gene, Yr17 (Bariana & McIntosh 1993), and 24 QTLs with APR to stripe rust such as QYr.inra-2A_CampRemy (Mallard et al. 2005), QYr.inra_2AS.1_Recital (Dedryver et al. 2009), QYr.sun-2A_Kukri (Bariana et al. 2010) and QYrst.orr-2AS_Stephens (Vazquez et al. 2015). According to the physical locations of the molecular markers linked to these genes/QTLs, their approximate chromosomal physical positions can be determined indirectly. Among these genes, Yr17 is the only gene that shows both APR and ASR to stripe rust and is located in a region from 7.5 to 21.2 Mb (Beukert et al. 2020; Jia et al. 2011). QYr.inra-2A_CampRemy was flanked by the SSR markers Xgwm382a and Xgwm359, which correspond to a physical location of approximately 28.20 Mb (Mallard et al. 2005). Two QTLs (QYr076.jaas-4D.1 and QYr076.jaas-4D.2) were mapped on chromosome 4D; however, there are few SNP markers on 4D, so the mapped intervals are still large. To date, a total of seven stripe rust QTLs or genes on chromosome 4D have been reported, including Yr28 (Singh et al. 2000) and YrAS2388 (Huang et al. 2011) on 4DS, the QTL-4DL from the Israeli wheat Oligoculm (Suenaga et al. 2003), Yr46 (Herrera-Foessel et al. 2011; Sybil et al. 2010), QYr.sun-4DL (Chhetri et al. 2016) and QYr.caas-4DL (Lan et al. 2009; Ren et al.
2012b) on 4DL, and Yr22, which is located on 4D but has not been assigned to a particular chromosomal location (Chen et al. 1995). Among these loci, QYr.caas-4DL and Yr46 both confer adult-plant resistance and are probably the same locus, although the relationship between the two has not yet been verified (Ren et al. 2012b), whereas QYr.sun-4DL and Yr46 have been confirmed to be the same (Herrera-Foessel et al. 2011). The QTL-4DL from the Israeli wheat Oligoculm also confers APR, and this QTL lies approximately 26 cM from QYr.caas-4DL (Suenaga et al. 2003). Yr28 is an ASR gene, and Athiyannan et al. (2022) reported that Yr28, together with the two genes YrAS2388 and YrAet672, are haplotypes of the same locus, all encoding identical protein sequences but polymorphic in the untranslated regions of the gene. The gene Yr22 also confers ASR (Chen et al. 1995). Thus, based on the currently identified genes/QTLs, there are only 4 APR loci located on 4D, and the relationship between the QTLs on chromosome 4D identified here (QYr076.jaas-4D.1 and QYr076.jaas-4D.2) and the three other QTLs still needs to be verified by subsequent tests.
Fig. 1
Fig. 1 Frequency distributions of the mean infection type (IT) and disease severity (DS) for 208 RILs from the AvS × PI660076 cross grown at Mianyang (MY) and Yangling (YL) in 2019-2022. Arrows indicate the values of the parental lines
Fig. 3
Fig. 3 Stripe rust resistance QTLs QYr076.jaas-4D.1 (A) and QYr076.jaas-4D.2 (B) on the genetic map of chromosome 4D based on IT and DS data. The x-axis shows genetic distance (cM), with all values on the same scale; the y-axis indicates the LOD value. The red rectangle on the genetic map indicates the corresponding QTL region. AIT: average IT; ADS: average DS
Table 1
Correlation coefficients (r) of the mean infection type (IT) and disease severity (DS) of the AvS × PI660076-derived recombinant inbred lines tested in different environments
Table 2
Analysis of variance and estimation of broad-sense heritability of the infection type (IT) and disease severity (DS) among 208 RILs from the AvS × PI660076 cross tested in Mianyang (MY) and Yangling (YL) in 2019-2022
a σ²g was estimated as the genotypic (line) variance
b H² indicates the estimated broad-sense heritability on the basis of the mean across replications and environments (heritability on an entry-mean basis)
c Significance levels: P < 0.05 (*), P < 0.01 (**), P < 0.001 (***)
Table 3
Summary of chromosome assignment, number of SNP markers, map length, and marker density of the SNP genetic map of the 208 RILs from the AvS × PI660076 cross
Table 4
Summary of stripe rust resistance QTLs identified using ICIM based on the mean disease severity (DS) and infection type (IT) of the 119 RILs from the cross of AvS × PI660076 tested in Mianyang (MY) and Yangling (YL) in 2019-2022. Add: additive effect of the resistance allele; LOD: logarithm of odds score; PVE: percentage of the phenotypic variance explained by individual QTLs
Table 5
Genetic (cM) and physical (Mb) positions of flanking markers of each QTL identified based on the mean infection type (IT) and disease severity (DS) of the 208 RILs from the AvS × PI660076 cross tested in Mianyang (MY) and Yangling (YL) in 2019-2022
Table 6
Information of the KASP markers developed in this study (allele-specific primer 1 / allele-specific primer 2 / common primer):
KASP_6535: TTG AAT GGA AAT ACA GGC AGT GCA / TGA ATG GAA ATA CAG GCA GTG CC / GTT TCC TTT TTT AAT CGG TCA ACC G
KASP_0795: CGG CCT TCA TGT CTT CGC TA / CGG CCT TCA TGT CTT CGC TC / CAA AGA TAC ACA TGC ACA CGA ACA | 5,268.2 | 2024-06-07T00:00:00.000 | [
"Agricultural and Food Sciences",
"Biology"
] |
STABILITY OF MOTION OF RAILWAY VEHICLES DESCRIBED WITH LAGRANGE EQUATIONS OF THE FIRST KIND
Dep. «Cars and Car Facilities», Dnipropetrovsk National University of Railway Transport named after Academician V. Lazaryan, Lazaryan St., 2, Dnipro, Ukraine, 49010, tel. +38 (056) 373 15 19, e-mail <EMAIL_ADDRESS>, ORCID 0000-0001-7490-7180; Dep. «Foreign Languages», Prydniprovsk State Academy of Civil Engineering and Architecture, Chernyshevsky St., 24 A, Dnipro, Ukraine, 49000, tel. +38 (056) 756 33 56, e-mail <EMAIL_ADDRESS>, ORCID 0000-0001-6725-0280
Introduction
Studies on the stability of railway vehicle motion have been under the spotlight since the 1950s. Loss of stability is accompanied by the emergence of large transverse forces that threaten the safety of movement and prevent cars from operating at high speeds. Among the extensive literature devoted to this issue, we point out [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. In accordance with modern concepts, loss of stability is a very complex phenomenon, which near the critical speeds is described by a subcritical Hopf bifurcation. Up to a certain velocity v1 there is only one attractor, corresponding to straight-line motion; then a periodic attractor appears, while the original one remains and disappears at a velocity v2 > v1. At high velocities, chaotic attractors may appear.
There may be cases when they occur already at the velocity v1 [5]. The following methods of motion stability analysis are used [15]: 1) linearization of the motion equations (Lyapunov's stability criterion of linear approximation [1]); 2) quasi-linearization; 3) the Galerkin-Urabe method [12,13] (quasi-linearization over several frequencies; a large amount of computational work is required); 4) the «brute force» method, when one reduces the movement speed and waits for the auto-oscillations to disappear; to determine the unstable limit cycle, one gradually increases the disturbance range [14]; 5) the trajectory tracing method (the motion is assumed to be periodic, and the equation q(0) = q(T) is solved; it is not suitable for the study of quasi-periodic and chaotic oscillations).
Despite its obvious unsuitability for analyzing the complex picture of the emergence and disappearance of attractors, Lyapunov's stability criterion of linear approximation retains its attractiveness due to its simplicity and its ability to do the main thing: to evaluate the critical velocity. It is formulated for systems described by ordinary differential equations. In the present paper we extend it to systems whose motion is defined by Lagrange differential-algebraic equations (DAE) of the first kind. Nowadays, due to the spread of standard integration programs (for example, DASSL), DAE are increasingly used in modeling railway vehicle oscillations, since they make it possible both to avoid selecting independent generalized coordinates and eliminating the dependent ones, and to avoid replacing rigid constraints between the car parts with high-rigidity elastic elements.
Purpose
To estimate the stability of the railway vehicle motion, whose oscillations are described by Lagrange equations of the first kind under the assumption that there are no nonlinearities with discontinuities of the right-hand sides.
Methodology
The structure of the railway vehicle motion equations (without the nonlinear and non-uniform terms describing movement along a curve) is
M d^2q/dt^2 + (B + F) dq/dt + (C + K) q = 0. (1)
Here q is the generalized coordinate vector; M is the inertial coefficient matrix; C and B are the rigidity and viscosity matrices; K and F are the matrices describing the wheel-rail interaction. Equation (1) is obtained if we remove the dependent generalized coordinates from the vector q using the equations of constraints.
When applying the Lagrange equations of the first kind, another approach is used: instead of eliminating elements of the vector q, they are all retained, the constraint equations are included in the full set of equations describing the system motion, and additional unknowns λ are introduced (in a number equal to the number of constraint equations) so that all these equations can be solved. The result is the following system of equations:
M d^2q/dt^2 + (B + F) dq/dt + (C + K) q + L^T λ = 0, (2)
L q = 0. (3)
The last expression is the equation of the constraints to which the mechanical system is subject. We assume that the matrix L is constant (it depends neither on time nor on the system phase coordinates). The system of equations (2) and (3) is linear, so its solution has the form q(t) = Σ_j C_j γ_j exp(p_j t), λ(t) = Σ_j C_j l_j exp(p_j t), where the constants C_j are found from the initial conditions. The indices p_j together with the nonzero eigenvectors γ_j, l_j are solutions of the equation
[M p_j^2 + (B + F) p_j + (C + K)] γ_j + L^T l_j = 0. (4)
It is possible to determine whether the motion is stable by the sign of the real parts of the values p_j: if there are values with positive real parts among them, the motion is unstable. It is inconvenient to search for the numbers p_j by equating the determinant of the matrix on the left to zero. Instead, we reformulate the problem so that the indices p_j turn out to be eigenvalues of a certain matrix. From (2) it follows that d^2q/dt^2 = -M^-1[(B + F) dq/dt + (C + K) q + L^T λ]. Multiplying this expression by L and using the fact that L d^2q/dt^2 = 0, we get L M^-1[(B + F) dq/dt + (C + K) q] + L M^-1 L^T λ = 0.
The matrix L M^-1 L^T is non-degenerate (provided that the constraint coefficient matrix L has fewer rows than columns and its rank equals the number of rows, which we assume); therefore λ = -(L M^-1 L^T)^-1 L M^-1[(B + F) dq/dt + (C + K) q]. Substituting this expression into the original equation, we find that the vector of phase coordinates x = (q, dq/dt)^T satisfies the differential equation dx/dt = A x with a constant matrix A. Its eigenvalues p_j and eigenvectors can be found by standard methods. Let us consider how they are related to the eigenvalues and eigenvectors of the original system with constraints, that is, whether they satisfy equation (4) with a suitable choice of the vector of Lagrange multipliers l_j. We will need the easily verified relation L M^-1[I - L^T (L M^-1 L^T)^-1 L M^-1] = 0. Using it, multiplication of the eigenvalue relation by L gives p_j^2 L γ_j = 0; therefore, for nonzero p_j the vector γ_j satisfies the constraint equation L γ_j = 0. Thus, with nonzero p_j the vectors γ_j satisfy equation (4) with an appropriate l_j. It is not clear whether the vectors satisfy equation (4) for p_j = 0, but, since these solutions correspond to constant processes that are of no interest, we will not deal with them.
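A minimal numerical sketch of this procedure is given below; it assumes the grouping of the matrices used in equations (1)-(4) above and a full-row-rank constraint matrix L, and it is not a reproduction of the authors' implementation.

```python
import numpy as np

def stability_exponents(M, B, C, K, F, L):
    """Eigenvalues p_j of the first-order matrix A for the constrained system
    M q'' + (B + F) q' + (C + K) q + L^T lam = 0,  L q = 0 (assumed grouping)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    S = np.linalg.inv(L @ Minv @ L.T)            # requires L to have full row rank
    W = Minv @ (np.eye(n) - L.T @ S @ L @ Minv)  # eliminates the Lagrange multipliers
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-W @ (C + K),     -W @ (B + F)]])
    return np.linalg.eigvals(A)

# The motion is stable when no nonzero eigenvalue has a positive real part:
# p = stability_exponents(M, B, C, K, F, L); unstable = (p.real > 1e-9).any()
```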
Thus, the stability condition of the system with constraints is that the real parts of the eigenvalues p_j of the matrix A should not be greater than zero. Let us apply the above theory to the study of stability, natural frequencies and vibration modes of a simplified mechanical system consisting of half a car body and a 3-piece bogie on which it rests (Fig. 1). We consider the motion only in the horizontal plane. The system consists of (half) the body with a bolster, two side frames and two wheel sets. The body and the bolster are connected by a hinge in the center plate arrangement; the bolster with the side frames and the side frames with the wheel sets are connected by elastic elements that prevent relative translational movements in the longitudinal and transverse directions, as well as relative angular movements of hunting of the interacting bodies.
There are no dissipative elements in the system. The degrees of freedom are listed in Table 1. Here x, y and the hunting angle denote small movements of recoiling, swaying and hunting, respectively; for the wheel sets the rotation coordinate is chosen so that its time derivative is the small deviation of the angular velocity of wheel set rotation about its axis from the value V/r (V is the car velocity, r is the wheel radius) corresponding to the undisturbed motion.
Table 1
Bodies, degrees of freedom and generalized coordinates of the system (body with bolster (bd): x, ...; side frames; wheel sets)
We will be interested in how the frequencies and forms of oscillations of the system without constraints (SF) correlate with those of systems whose displacements are subject to the following restrictions: SCX, in which it is prohibited to move the bolster relative to the side frames (in the spring suspension openings) in the longitudinal direction; and SAJ, in which it is prohibited to move the pedestal openings of the side frames relative to the wheel set axle journals (the side frames are pivotally connected to the wheel sets).
As for the system parameters, the meaning of the notation for the rigidity coefficients and basic dimensions is clear from Figure 1: the letters m and I with the corresponding indices denote the masses and central moments of inertia of the bodies, the coefficients in the expressions for the interaction forces are explained below, and the capital letters X, Y denote the force components and moments of the interaction forces between the system bodies. Without giving a complete derivation of the expressions for the matrices M, L, etc., let us dwell only on certain points that may be of methodical interest. The elements of the matrix C are the coefficients of the products of the generalized coordinates and their variations in the expression for the virtual work of the forces in the elastic elements. Let us consider the contribution C^(b) to the matrix C from the elastic elements located in the axle boxes. The components of the displacement of the side frame pedestal opening relative to the wheel set axle box are combined into a vector; they are linear combinations of the generalized coordinates. Comparing expressions (6) and (7), we obtain the contribution C^(b), to which the contributions from the other elastic elements are added.
In order to prohibit linear movements of the pedestal openings of the side frames relative to the wheel set axle boxes, it is necessary to require the fulfillment of the corresponding kinematic conditions. There are 8 rows in the L matrix, which we obtain by writing the first two rows of each of these matrices under one another. Thus, the compilation of the system of equations describing the motion of the mechanical system with constraints requires practically no additional calculations: in our case, the matrices D_mj^(b) were already written out at the stage of working with the system without constraints.
Findings
Let us consider the results of the calculation of the eigenvalues and eigenvectors describing the oscillations of the 3-piece bogie. Our goal is to understand how the eigenvalues and eigenvectors of the SF system without constraints and of the SCX and SAJ systems with constraints are related. We expect that the results for SF with large axle-box stiffnesses C_x^(b), C_y^(b) will tend to the results for SAJ, and the results for SF with large C_x^(b) to the results for SCX. The subject of the study is the confirmation of this expectation and a detailed description of the nature of the limiting transition.
The eigenvalues of the matrix A for the SF and SAJ systems are listed in Table 2. The system parameters correspond to a 4-axle car loaded to its deadweight capacity on 18-100 bogies (with an axle load of 23.5 tf). The motion speed is V = 100 km/h. The eigenvalues were ordered by the QR algorithm, so they can be compared only by their values. Even without analyzing the eigenvectors, it is clear that the numbers with j = 9, 11, 14 of the SAJ system are the limits of the eigenvalues j = 25, 27, 29 of the SF system. It seems plausible that large negative numbers of one system go over into large negative numbers of the other system; both systems have five such numbers, but the correspondence between them is not obvious. It is not quite clear which of the numbers of the SF system goes over into the number 6.29 + 335i of the SAJ system. The numbers j = 9, ..., 24 of SF, except for one pair, apparently correspond to the side frame oscillations on the high-rigidity elastic elements in the axle boxes, since these numbers have a large imaginary component.
The study of the eigenvectors confirms these conclusions and allows for some refinements. Let us consider the SAJ system with hinges in the axle boxes. The equations of constraints do not violate the first 15 eigenvectors: 1, 2) non-physical solutions, which appear because the wheel set rotation variables themselves do not enter the equations of motion, only their derivatives do; 3, 4, 13) extremely rapidly decaying solutions describing the motion of wheel sets against pseudo-slip (for example, bogie swaying without hunting). For all these vectors, one can find corresponding eigenvectors of the SF system with close values of the components. Some vectors γ_j are shown in Table 3. The vectors γ_13, γ_27 for a bogie without constraints, with large rigidity of the elastic elements in the axle boxes, almost coincide with the vectors γ_5, γ_11 for a bogie with hinges in the boxes. The vector γ_9 (SF) describes the longitudinal oscillations of the side frames relative to the wheel sets, which is incompatible with the constraints to which the SAJ system is subject, and it is impossible to find a corresponding vector among the eigenvectors of the latter. The bogie movement is unstable: the eigenvalues p_27 (SF) and p_11 (SAJ) have a positive real part. The wheel sets perform self-oscillations of hunting and swaying (the ratio between the amplitudes of swaying and hunting is as in the Klingel solution), and the body swaying is twice as large as the wheel set swaying (Figure 2). If a rigid longitudinal constraint in the spring suspension is added to the hinges in the axle boxes (Table 2, column SAJ + SCX), then the oscillation patterns 5, 6, 14, 15 of the SAJ system, which are accompanied by deformations of the spring groups in the longitudinal direction, disappear, and four more eigenvectors corresponding to zero eigenvalues and violating the equations of constraints appear. The other eigenvalues change only slightly.
Originality and practical value
Originality consists in the adaptation of Lyapunov's stability method of linear approximation to the case when the equations of railway vehicle motion are written in the form of differential-algebraic Lagrange equations of the first kind.This written form of the equation of motion makes it possible to simplify the stability study by avoiding the selection of a set of independent generalized coordinates with the subsequent elimination of dependent ones and allows for the coefficient matrix calculation in an easily algorithmized way.Information on the vehicle stability is vitally important, since the truck design must necessarily exclude the loss of stability in the operational speed range.
Conclusions
1. An effective method for studying the stability of railway vehicle motion described by the Lagrange equations of the first kind has been proposed. The stability criterion is that the real parts of the exponents of the exponential functions satisfying the equations of motion should not be greater than zero. The exponents themselves can be found as eigenvalues of a certain matrix A, which depends on the matrices of physical parameters M, B, F, C, K and the matrix of constraint coefficients L, using the QR algorithm [2, chapter 4].
2. The eigenvectors of this matrix corresponding to nonzero eigenvalues satisfy the equations of constraints. The advantage of the proposed method is the easy algorithmization of the derivation of the motion equations (there is no need to choose independent generalized coordinates).
3. The eigenvectors of the matrix A corresponding to the nonzero eigenvalues p_j have the form (γ_j, p_j γ_j)^T.
The creep coefficients c_11 and c_22 for the longitudinal and transverse directions are taken equal to 3.90; the expression for the longitudinal sliding additionally contains terms proportional to the wheel set velocities. Figure 2 shows how the components of the corresponding eigenvector change as the rigidity changes. | 3,421.6 | 2018-11-23T00:00:00.000 | [
"Mathematics"
] |
Probing the Birth and Ultrafast Dynamics of Hydrated Electrons at the Gold/Liquid Water Interface via an Optoelectronic Approach
The hydrated electron has fundamental and practical significance in radiation and radical chemistry, catalysis, and radiobiology. While its bulk properties have been extensively studied, its behavior at solid/liquid interfaces is still unclear due to the lack of effective tools to characterize this short-lived species in between two condensed matter layers. In this study, we develop a novel optoelectronic technique for the characterization of the birth and structural evolution of solvated electrons at the metal/liquid interface with a femtosecond time resolution. Using this tool, we record for the first time the transient spectra (in a photon energy range from 0.31 to 1.85 eV) in situ with a time resolution of 50 fs revealing several novel aspects of their properties at the interface. Especially the transient species show state-dependent optical transition behaviors from being isotropic in the hot state to perpendicular to the surface in the trapped and solvated states. The technique will enable a better understanding of hot electron driven reactions at electrochemical interfaces.
S-1 Experimental principles
This optoelectronic technique to probe the electrode-electrolyte interface combines optical perturbation of the system with coulostatic measurements. As described by Richardson, 1,3,4 the introduction of the second pump creates a second photovoltage ∆V 2 only if both pulses match certain delay conditions that are photon energy-dependent. There is no measurable photovoltage change for completely detuned pulses.
Hence, the second pump will interact only with hydrated electrons and precursor states [see A potentiostatic variant of the two-pulse technique was used to report on the picosecond dynamics of photoelectrons in hexane. 5 The detection scheme used by Scott was also different: A fast high voltage pulse (2 kV) moved the electrons out from the sample interface to the sensor electrode, 4 mm away.
The electrical double layer (EDL) acts as a capacitor with Q = C V , where Q is the charge, C is the capacitance of the EDL and V is the voltage. The presence of uncompensated charges in the EDL thus results in a photovoltage. At a concentration of 0.5 M Na 2 SO 4 in water, the Debye length λ D is ∼ 2.5 Å, meaning that we are only probing the charged species located closer than circa two bond lengths from the surface.
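The quoted Debye length can be checked with the usual expression for λD; the short sketch below, added here only for illustration, reproduces the ≈ 2.5 Å value for 0.5 M Na2SO4 at room temperature.

```python
import math

def debye_length_m(ionic_strength_mol_per_L, T=298.15, eps_r=78.4):
    """lambda_D = sqrt(eps_r*eps0*kB*T / (2*NA*e^2*I)), with I converted to mol/m^3."""
    eps0, kB, e, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23
    I = ionic_strength_mol_per_L * 1e3
    return math.sqrt(eps_r * eps0 * kB * T / (2 * NA * e**2 * I))

# 0.5 M Na2SO4: I = 0.5*(1.0*1**2 + 0.5*2**2) = 1.5 M  ->  ~2.5e-10 m (about 2.5 Angstrom)
print(debye_length_m(1.5))
```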
As noted above, a voltage can also be established by creating a temperature difference between the electrodes. The temperature dependence of the voltage has been exploited previously in temperature jump measurements. 3,6,7 The total photovoltage due to heating of the metal-solution boundary is the sum of the internal drop of potential at the metal-solution boundary V i, the thermodiffusion potential of the solution (Soret effect) V S, and the thermal EMF of the metal. The heating resulting from an ultrafast pulse can yield, in gold, a transient electronic temperature T e as high as 1750 K, which results in an increase of the lattice temperature T l on the order of 15 K.
S-2.1 Spectroelectrochemical cell
The spectroelectrochemical cell (SEC) consists of a set of 3 gold electrodes with two through-holes to allow the flow of the electrolyte, separated by a 50 µm PTFE spacer cut in the shape shown in Fig. S2(b), using the assembly displayed in Fig. S2(c). Copper foil is used to electrically contact the electrodes externally.
The electrolyte consists of Na 2 SO 4 in deionized water (18.2 MΩ· cm, Millipore) at a concentration of 0.5 M. It is deaerated by bubbling dry N 2 in the reservoir for at least 30 min. before the measurement is started. The flow of the electrolyte is assured by a peristaltic pump at a rate of 6 µL / s. In order to avoid any spurious effect by species generated at the CE, the electrolyte inlet is located above the RE and the outlet above the CE.
S-2.2 Electrochemical measurements
A potentiostat (VSP, Bio-Logic Science Instruments) was employed to record the open circuit potential (OCP). It was found useful, before the coulostatic measurements, to "clean" the WE by performing a series of cyclic voltammetry (CV) sweeps until the CV data showed the appropriate profile for a polycrystalline gold electrode in a thin film configuration. The RE would be cleaned in a similar manner whenever the drift was becoming important.
S-2.3 Laser setup
The layout of the laser system employed in this study is shown in Fig. S3. In brief, the laser system is composed of a Ti:Sapphire oscillator (Vitara, Coherent) and a regenerative amplifier. The first pump delay stage is swept to find the maximal bleaching of the photovoltage change due to the second pump pulse.
S-2.3.1 Generation of various wavelengths for the second pump's pulse
Various ultrashort pulses have been employed in this study as the second pump's pulses. They were generated as follows:
670 and 720 nm: The TOPAS Signal outputs at 1340 and 1440 nm, respectively, were doubled in a BBO crystal and the fundamental beams were subsequently filtered out.
800 nm: The 800 nm residual from the TOPAS after the parametric process was separated from the Idler and Signal beams and attenuated to the required energy.
S-2.4 Optoelectronic measurements
After the first pump beam shutter is opened, we let ∆V 1 reach an equilibrium value for approximately 10 min before the time delay series of the second pump beam is started. The shutter of the second pump beam is then sequentially opened and closed at 1 min intervals and the delay between the pulses of the first and second pumps is stepped at every repetition while photovoltage is continuously acquired in OCP mode. As can be seen in Fig
S-2.5 Data processing
The photovoltage change ∆V 2 due to the action of the second pump beam is first extracted in (post-measurement) data processing from the as-measured photovoltage versus elapsed experimental time [shown in Fig. S1(b)]. It is defined as the difference between the photovoltage measured after the second beam has been impinging for 1 min and the photovoltage measured after the second pump beam has been shut off for 1 min. Each photovoltage spike thus corresponds to a different delay between the first and second pump pulses. The photovoltage change is then corrected for the measured pulse energies of the first (P 1) and second pump (P 2) beams and for the absorptivity of water at the wavelength of the second pump, given the angle θ of the second pump beam in the water layer, the water layer thickness d w and the known water extinction coefficients α [8][9][10]; θ is calculated from Snell's law and the second pump incidence angle in air (θ air = 0.95993 rad) with d CaF2 = 3 mm and the wavelength-dependent refractive indices n CaF2 and n w 11. The different pulse energy values and water extinction coefficients are tabulated in Table S1. The corrected value ∆V 2 corr is obtained from the raw photovoltage change ∆V 2 raw in this way, and it is thus ∆V 2 corr that is presented in the main text as ∆V 2 for simplicity. The error on the data was estimated by propagating the readout noise on ∆V 2 raw and the power fluctuations of the first and second pump laser beams P 1 and P 2.
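The correction step can be sketched as follows; since the exact normalization used by the authors is not reproduced in this excerpt, the division by the pulse energies and by a Beer-Lambert transmission through the slanted water path is an assumed form, and the default refractive index is an approximation.

```python
import numpy as np

def corrected_dV2(dV2_raw, P1, P2, alpha_per_um, d_w_um=50.0,
                  theta_air_rad=0.95993, n_w=1.33):
    """Normalize the raw photovoltage change by the pump pulse energies and by the
    second-pump transmission through the water layer (assumed Beer-Lambert form)."""
    # Snell's law, air -> water; the intermediate parallel CaF2 window does not
    # change the final angle inside the water layer.
    theta_w = np.arcsin(np.sin(theta_air_rad) / n_w)
    path = d_w_um / np.cos(theta_w)          # slanted path length through the water
    transmission = np.exp(-alpha_per_um * path)
    return dV2_raw / (P1 * P2 * transmission)
```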
S-2.6 Ultrafast pulse dispersion
The effect of CaF 2 optics (15 mm lens + 3 mm window) and water layer (50 µm) on the UV pulse duration (nominally 60 fs, 267 nm, at the amplifier's output) was calculated. [11][12][13] As seen in Fig. S4, the smallest pulse duration is ∼ 106 fs. Given that the output from the amplifier is ∼ 60 fs, that there should be no significant change of the pulse duration in the tripler, and that the curve shown in Fig. S4 is rather flat between 60 and 110 fs, a pulse FWHM of 110 fs for the UV light pulse duration is reasonable and will be used in data analysis and simulations.
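For reference, the broadening of a transform-limited Gaussian pulse by group-delay dispersion follows the standard formula sketched below; the GVD value used for CaF2 at 267 nm is an approximate assumption of this sketch, chosen only to show that roughly 18 mm of material stretches a 60 fs pulse to about 106 fs.

```python
import math

def broadened_fwhm_fs(tau_in_fs, gdd_fs2):
    """FWHM of a transform-limited Gaussian pulse after acquiring a GDD (fs^2)."""
    x = 4.0 * math.log(2.0) * gdd_fs2 / tau_in_fs**2
    return tau_in_fs * math.sqrt(1.0 + x * x)

gvd_caf2_fs2_per_mm = 105.0            # assumed approximate value for CaF2 at 267 nm
gdd = gvd_caf2_fs2_per_mm * (15 + 3)   # 15 mm lens + 3 mm window; 50 um water neglected
print(broadened_fwhm_fs(60.0, gdd))    # ~106 fs
```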
S-3 Heating of the interface
Heating of the gold electrode has been simulated numerically using a two-temperature model.
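A zero-dimensional sketch of such a two-temperature model is shown below; it omits the depth-dependent electron diffusion into the substrate that the full simulation includes, and the gold parameters and pulse term are literature-typical values assumed only for illustration.

```python
import numpy as np

# Assumed, literature-typical gold parameters (not the values used in the paper)
GAMMA = 68.0    # electronic heat capacity coefficient, J m^-3 K^-2 (C_e = GAMMA * T_e)
G_EP  = 2.2e16  # electron-phonon coupling constant, W m^-3 K^-1
C_L   = 2.5e6   # lattice heat capacity, J m^-3 K^-1

def ttm(source, t, T0=300.0):
    """Integrate C_e(Te) dTe/dt = -G (Te - Tl) + S(t) and C_l dTl/dt = G (Te - Tl)."""
    Te, Tl = np.full(len(t), T0), np.full(len(t), T0)
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        Te[i] = Te[i-1] + dt * (-G_EP * (Te[i-1] - Tl[i-1]) + source(t[i-1])) / (GAMMA * Te[i-1])
        Tl[i] = Tl[i-1] + dt * G_EP * (Te[i-1] - Tl[i-1]) / C_L
    return Te, Tl

t = np.linspace(0.0, 10e-12, 20001)    # 10 ps window, explicit Euler steps
pulse = lambda x: 3e20 * np.exp(-((x - 0.2e-12) / 60e-15) ** 2)  # assumed absorbed power density
Te, Tl = ttm(pulse, t)
```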
S-4 Models and fitting S-4.1 Three-state model
Aiming to model the dynamics at the interface, we used a system of five coupled ordinary differential equations that are solved numerically to find the populations N i of the states S 0 to S 4 in the system, of which 3 states are on the solution side [Fig. 3(a)]. Here I(t) is the intensity of the time-dependent UV pump pulse. In order to correctly capture the dynamics of the shoulder feature in Fig. 1(d), an intermediate (trapped) solution-side state is included between the hot and solvated states. Assuming that only the electron populations on the solution side contribute to the signal, the photovoltage change ∆V 2 is related to the populations N 2, N 3 and N 4 through absorption coefficients a i: ∆V 2(t) = a 2 N 2(t) + a 3 N 3(t) + a 4 N 4(t). Only four parameters are thus adjusted for every excitation energy: three absorption coefficients and a delay offset.
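A sketch of such a rate-equation fit is given below; since the explicit coupled equations are not reproduced in this excerpt, the sequential hot to trapped to solvated topology, the pump term and all parameter values are assumptions made only to illustrate the structure ∆V2(t) = a2 N2(t) + a3 N3(t) + a4 N4(t).

```python
import numpy as np
from scipy.integrate import solve_ivp

def dV2_trace(t_fs, tau_hot=250.0, tau_trap=2000.0, tau_solv=52000.0,
              a=(1.0, 0.4, 0.1), pump_fwhm=110.0):
    """Sequential three-state sketch: pump -> N2 (hot) -> N3 (trapped) -> N4 (solvated),
    with dV2 = a2*N2 + a3*N3 + a4*N4. Topology and parameter values are assumptions."""
    sig = pump_fwhm / 2.3548                    # Gaussian sigma from FWHM
    I = lambda t: np.exp(-0.5 * (t / sig) ** 2)

    def rhs(t, N):
        N2, N3, N4 = N
        return [I(t) - N2 / tau_hot,
                N2 / tau_hot - N3 / tau_trap,
                N3 / tau_trap - N4 / tau_solv]

    sol = solve_ivp(rhs, (t_fs[0], t_fs[-1]), [0.0, 0.0, 0.0],
                    t_eval=t_fs, max_step=pump_fwhm / 4)
    return a[0] * sol.y[0] + a[1] * sol.y[1] + a[2] * sol.y[2]

trace = dV2_trace(np.linspace(-500.0, 100000.0, 2000))  # delay axis in fs
```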
Metal-side dynamics have been taken into account in the model, even though the fit was found to be mostly insensitive to them; we have therefore lumped the metal-side thermalization into an effective contribution to the characteristic times.
Back capture by the electrode [orange arrow in Fig. 3(a)] has not been explicitly implemented but it is expected to contribute to the effective characteristic times. As the back capture rate is higher for electrons of higher energy, the contribution should be more important for τ 0 and τ 1 .
S-4.2 Two-state model
For the sake of comparison, we also implemented a simpler two-state model in which the state S 3 is eliminated (Eq. S12). Using this set of equations, we were unable to obtain a common set of characteristic times that correctly described the traces at every wavelength. For instance, τ 3 remained fairly unaffected, with a value of 52.1 ps, whereas τ 1 was strongly dependent on the second pump wavelength.
Fitting the model at a second pump wavelength of 800 nm yielded a τ 1 value of 900 fs, while a fit at 4 µm gave a value of 110 fs. As can be seen in Fig. S6(a), a satisfying agreement at all second pump wavelengths cannot be reached with a model having only two states on the solution side.
S-5.2 Time delay traces
The full set of time delay traces for various second pump energies is presented in Fig. S6. A zoom near the origin is shown for all energies in Fig. S6(a), while longer traces are displayed in (b) for energies from 1.24 to 1.85 eV. The fit results (red dashed lines, computed as described in section S-4) are overlaid on the data (gray circles). As described in the main text, the photovoltage response ∆V 2 to the UV excitation (first pump) is highly dependent on the second pump wavelength. At low energies, the signal rise and decay are fast, on the order of 100 fs. As the second pump energy is increased, the signal persists for much longer, with residual intensity at 100 ps at 1.55 eV and above. We also note the large changes in peak ∆V 2. At 0.31 eV, the signal reaches approximately 0.18 mV, while the peak signal at 1.85 eV is more than 1000 times weaker. Also overlaid are the two-state-model curves for which τ 1 has been determined from a fit at a second pump wavelength of 800 nm (blue dash-dotted line) and at 4 µm (green dotted line), as explained above; the discrepancy is most obvious at short delay times in Fig. S6(a).
S-5.4 Simulation of heating of the gold electrode surface by an ultrafast UV pulse
We have simulated the heating of the gold-water interface as described in § S-3 using the two-temperature model and calculated the temperature-related changes due to intraband and interband transitions. Results for both T e and T l are displayed in Fig. S8(a). The UV pulse interaction with the gold surface creates a transient hot electron population with T e rising up to 1750 K. The electron system's temperature T e nevertheless rapidly decreases as the hot electrons diffuse into the substrate and scatter with the lattice's phonons. Upon this action, the latter's temperature T l rises by about 15 K in 4 ps. It is noteworthy that neither temperature profile directly matches the delay-dependent ∆V 2 traces (Fig. S6). The reflectivity change ∆R/R 0 due to temperature-dependent Drude-like intraband transitions has been modeled according to Block et al. (main text and supplementary materials). 20 The simulation results are presented in Fig. S9, with the evolution as a function of delay in (a) and as a function of second pump energy in (b). We note from (a) that the general time-dependent ∆R/R 0 trace is similar to T e in Fig. S8(a). Also, ∆R/R 0 is much larger at lower energies, as can be expected from intraband transitions.
Similarly, we have computed the change in the Fermi-Dirac (FD) electron distribution (∆f/f 0) as a function of temperature (Fig. S10). In order to relate ∆f/f 0 to the optical response due to interband transitions, a detailed knowledge of gold's complex permittivity in k-space would be necessary. Nevertheless, the interband transition probability should depend on the FD electron distribution, and we use here the readily available ∆f/f 0 parameter to represent the temporal changes in the system. From Fig. S10(a), we can see that ∆f/f 0 rapidly increases upon excitation, but decays almost as fast. In this case, the larger change is seen at higher energies [Fig. S10(b)]. Fermi smearing is indeed maximal just above and below the interband threshold (∼ 2.38 eV), yielding the typical "first derivative" shape (Fig. S11).
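The Fermi-smearing quantity can be evaluated directly from the Fermi-Dirac function, as in the short sketch below; the temperatures and the energy grid are illustrative choices, not the simulation parameters.

```python
import numpy as np

def fermi(E_eV, T_K, E_F=0.0):
    """Fermi-Dirac occupation; energies are measured from the Fermi level."""
    kB = 8.617e-5  # eV/K
    return 1.0 / (np.exp((E_eV - E_F) / (kB * T_K)) + 1.0)

E = np.linspace(-1.0, 1.0, 401)                # energy grid (eV) around E_F
delta_f = fermi(E, 1750.0) - fermi(E, 300.0)   # hot minus equilibrium distribution
# delta_f is positive above E_F and negative below it, giving the "first derivative"
# shape that maximizes interband Fermi smearing near the ~2.38 eV threshold.
```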
S-6 Discussion
S-6.1 Nature of the photovoltage ∆V 2
We discuss here four hypotheses for the origin of the photovoltage ∆V 2 and we expose the corresponding arguments [Fig. 1(b), main text]. Secondly, perturbation of the normal interband transitions of gold results in a differential spectral shape reflecting the change of electron occupancy. 21 The electron occupancy change is itself described by the Fermi smearing induced by the first excitation pump [Fig. 2(b), main text]. Relatedly, the contribution of a surface plasmon resonance (SPR) can also be ruled out since gold's SPR is found around 600 nm and, due to momentum conservation, it cannot be excited in free space in the far field. The resonance maximum of a localized surface plasmon resonance (LSPR) may be pushed to the near-infrared, but LSPRs appear in cases where the electronic wavefunction is confined, such as in nanoparticles, patterned surfaces or roughened surfaces. Moreover, here also, a possible plasmon contribution does not hold against the all-or-nothing experimental evidence provided by the switch from a 267 nm first pump to 400 nm. This behavior suggests the existence of an excitation threshold whose energy is higher than the SPR and interband transitions.
This leads to iv., the resonant excitation of species present at the interface, resulting in an increase in the amount of heat generated at the metal/solution boundary. In bulk water, the excess electron frequency-dependent dynamics are characterized by a high energy absorption band centered around 1.72 eV (720 nm) and by a low energy absorption band peaking in the terahertz region. As discussed in § S-1, a change in the metal-solution boundary temperature produces a potential difference between the electrodes.
S-6.2 Effective spectra
The spectrum a 2 [top panel of Fig. 4(c), main text] corresponding to the hot electron population shows a profile that rises strongly at low energies. It is best fitted by a Lorentzian peak centered at 0 with a full width at half maximum (FWHM) of (0.2 ± 0.1) eV. With the caveat that the sparse data prevent us from reaching a definitive conclusion about the peak shape and position, we chose to use a Lorentzian function to model the a 3 spectrum as well.
S-6.3 Local concentration in the Helmholtz layers
According to the Gouy-Chapman-Stern model, in the absence of specific adsorption, the inner Helmholtz layer (IHL) of the interface will be covered by water molecules. If we assume that the outer Helmholtz layer (OHL) of the interface is occupied by a monolayer containing an equal number of solvated cations and anions, the maximum local concentration of the cations (which have been shown to interact more strongly with the solvated electron than the anions) can be estimated as follows. In a 0.5 M Na2SO4 solution, the molar ratio between Na+ and H2O is 1:56. If we assume that one ion is solvated by 4-6 water molecules, the ratio of the total number of cations to water molecules (including the IHL's water molecules and those solvating the SO4 2- ions) is roughly 1:10, which gives a concentration of about 6 M for Na+ at the interface.
The excited electron gets bound to a CTTS state created by the potential well due to solvent polarization around the now neutral atom or molecule, as the solvent molecules did not have time to reorganize. Reorientation of the solvent molecules in the surroundings of the CTTS state destroys this state and separates the electron from the neutral particle. 27 The electron is thus found in a solvated state that is basically a modified ground state with s symmetry. Multiphoton absorption (a more energetic pump) could also lead to ionization by promoting the electron directly to a continuum of states that can transfer to the water conduction band. The latter process has more similarities with the multiphoton ionization of neat water.
In our experiments, there is no strong incentive to believe that a true analog to a CTTS state is formed. The first UV pump has enough energy to provoke the emission from the hot excited states in the metal to the water conduction band. In fact, this process is more similar to the multiphoton ionization of water 28,29 because a transient hole is left behind in the metal which necessarily interacts with the electron through Coulombic forces. In analogy, the electron interacts with the H 2 O + ion in the water ionization process. In our experiment, in contrast to the multiphoton ionization of water, it is known from gold ultrafast dynamics that the transient hole is effectively screened on a time scale of a few femtoseconds 30 .
We must also distinguish between studies that follow the relaxation of electrons following generation (right after photodetachment or photoionization) and the relaxation of electrons excited from an equilibrated solvated state. 27 Our experiment is more similar to the former case.
Our experiment bears obvious similarities with the two-photon photoelectron spectroscopy (2PPE) technique on metal surfaces where photoemission is used to inject excess electrons in an amorphous ice layer, 31 whereas the detection method differs. In that regard, some level of coupling of the excess electrons to the substrate 32 can be expected, which is a plausible explanation for the transformation from state S 3 to state S 4 .
In summary, photoinjection from a metal electrode is an intermediate approach with its own particularities, which shares features with CTTS and with the multiphoton ionization of water, and which parallels the mechanism for excess electron generation in 2PPE. In this specific case, the UV photon has enough energy to excite an electron from gold's Fermi level to water's conduction band. A rearrangement of the water molecules around the electron then follows, as described in the main text. | 4,515 | 2020-09-20T00:00:00.000 | [
"Physics",
"Chemistry"
] |
A Closed-loop Control Strategy for Air Conditioning Loads to Participate in Demand Response
Thermostatically controlled loads (TCLs), such as air conditioners (ACs), are important demand response resources because they have a certain heat storage capacity. A change in the operating status of an air conditioner within a small range will not noticeably affect the users' comfort level. Load control of TCLs is considered to be equivalent in effect to a power plant of the same capacity, and it can significantly reduce the system pressure for peak load shifting. The thermodynamic model of air conditioning can be used to study the aggregate power of a number of ACs that respond to a step signal of the temperature set point. This paper analyzes the influence of the parameters of each AC in the group on the indoor temperature and the total load, and derives a simplified control model based on a second-order linear time-invariant transfer function. The stability of the model is then analyzed, and its Proportional-Integral-Derivative (PID) controller is designed based on the particle swarm optimization (PSO) algorithm. The case study presented in this paper simulates both a constant ambient temperature and a changing ambient temperature to verify that the proposed transfer function model and control strategy can closely track the reference peak-load-shifting curves. The study also demonstrates minimal changes in the indoor temperature and the users' comfort level.
Introduction
Demand Response (DR) is defined as the changes in electric usage by customers from their normal usage patterns in response to changes in the price of electricity over time. It is considered a critical application in smart grids. With the Advanced Metering Infrastructure (AMI), the power usage of different appliances can be adjusted either directly, e.g., through incentive-based programs (IBP) [1] such as changes of operational parameters/states requested by grid operators, direct load control and interruptible loads, or indirectly, e.g., through price-based programs (PBP) such as real-time pricing, time-of-use pricing and critical peak pricing [2,3]. By smoothing out the system power demand over time, DR is capable of providing peak shaving, load shifting and ancillary services to achieve system reliability and stability. The performance of DR programs is measured by peak load reduction and demand elasticity.
Traditionally, the power system adopts the strategy of "electricity production determined by loads" to achieve power balance, and loads have been regarded as passive physical terminals. In recent years, the smart grid concept has become popular in the electric system, and the communication capability of the new intelligent service network and the controllability of loads have been greatly improved. Direct Load Control (DLC), which usually reduces loads by controlling the thermostatically controlled loads (TCLs) of air conditioners, fridges and water heaters of residential or small-business users, is an important type of incentive-based demand response. These kinds of loads are called flexible loads. They have heat energy storage ability and can be switched, or have their control parameters changed, for a short time without affecting users. Meanwhile, they have the potential to balance system power. Flexible loads can respond to the grid's demand in real time, appropriately reduce the required backup capacity, and improve the safe and economic operation of the power system [4]. They can be incorporated into the normal operation of power system dispatch through demand response [5].
Electricity shortages usually occur during peak hours in the summer or winter, when air-conditioning loads reach 30%-40% of the total capacity; the demand peak rises with the number of air conditioners in use, while during off-peak hours there is usually a power surplus. Air conditioners can be centralized or non-centralized [6,7]. A centralized air conditioner controls the temperature in different rooms by a mainframe connected to several terminals through the ventilation system, while non-centralized air conditioners (ACs), which account for a large proportion of all ACs, are installed in individual rooms and control the temperature independently. TCLs present a huge opportunity in the power grid: the effect of controlling TCL loads is equivalent to building a power plant of the same capacity. However, the investment in demand response is much lower than the cost of constructing a new power plant, and reasonable load control has minimal impact on users. The cost of building a peaking power plant is 1200 dollars per kilowatt [8], while the compensation paid to users for load management is much lower, and such programs can be put into use in a short time.
There are several ways to model TCLs, including modeling based on the actual physical process, regression modeling based on historical data, and modeling based on the Fokker-Planck diffusion equations. Other authors focus on probabilistic characteristic models and the use of black-box model identification techniques [9][10][11]. These models are often too complicated for the control system. For example, the physical process model, which needs to be converted to a mathematical control model, cannot be directly used in numerical simulations. The regression model requires a large amount of historical data. Similarly, the model based on the Fokker-Planck equation is very complex and difficult to implement [12]. A state sequence model, proposed by Lu and Chassin, considers the uncertainty modeling of thermostatically controlled loads. The disadvantage of this model is that the load can only be changed slowly over time.
In most cases, the control scheme of TCLs is based on the idea of optimization. A scheduling decision model has an objective function that maximizes the load aggregator's interest and minimizes the deviation of the actual output of the ACs. It includes the constraint that the indoor temperature has to remain within the range [Tmin, Tmax] in order to ensure the users' comfort level [13]. After the air conditioning model is determined, the air conditioning scheduling scheme is calculated using an optimization algorithm. This method belongs to open-loop control; it is computationally intensive and cannot achieve accurate, real-time control based on changes in the load output. The research by Na et al. on direct load control (DLC) and on controlling AC loads through electricity prices to achieve load reduction does not include specific control strategies. Manichaikul and Schweppe used a predictive method to control centralized TCLs and provide ancillary services for the power grid, taking the minimum error as the target [14]. Centralized TCL load management is carried out using the comfort control method proposed in Malhamé's and Chong's paper: the reference signal, which is compared with the actual signal, is the desired aggregated air-conditioning load, and the temperature set points are then calculated by the designed controller [15]. In [16], the electric heat load DLC algorithm is based on a state sequence control algorithm, which aims to stabilize tie-line power fluctuations between the distribution network and a micro-grid that includes clean energy.
These methods adopt price signals or power surge signals to control the switch state of the users' equipment. Although they can take part in the ancillary services of the power system, their time scale is usually counted in hours or days, so they cannot follow the target load curve closely. Moreover, most current load control strategies destroy the diversity of the load group, resulting in power demand oscillations and negative effects on the power system [17]. Demand-side customers' satisfaction can be taken into consideration in load management (LM): a fuzzy membership function can be established to characterize the users' satisfaction according to the continuous control time [18], but it is difficult to express the users' comfort level directly. In this paper, a closed-loop control strategy is proposed to solve the above-mentioned problems of existing methods.
The core contribution and innovation of this paper lie in exploring a simplified second-order linear time-invariant transfer function model obtained by analyzing the duty cycle of the air conditioning group, proposing a closed-loop control strategy that provides a new way for flexible loads to participate in demand response, and putting forward two suggestions to handle the fact that the parameters of the proposed transfer function model vary with the ambient temperature in the case study. Based on the proposed model, accurate load control can be achieved without negatively affecting the user experience.
The rest of this paper is organized as follows: Section 2 presents an equivalent thermal parameters model of air conditioning. Section 3 analyzes the influence of AC parameters on the output characteristic and explores a simplified transfer function model for ACs by analyzing the duty cycle of the population. Section 4 presents the design of the model's PID controller based on the PSO algorithm. Section 5 demonstrates the simulation results for both the constant and the changing ambient temperature scenarios to verify the proposed transfer function model and control strategy. Section 6 summarizes the discussion and puts forth our conclusions.
An Equivalent Thermal Parameters Model of Air Conditioning
In this section, the correlation between power, temperature and time is established based on the equivalent thermal parameters (ETP) model. The study focuses on fixed-frequency air conditioning and uses the Monte Carlo method to simulate the aggregated dynamic response of AC groups.
Air conditioning has a periodic working pattern, as shown in Figure 1: when the air conditioning is on, the temperature inside the house keeps dropping until it reaches the lower boundary $\theta_-$; then the air conditioner switches off and the temperature inside the house keeps rising until it reaches the upper boundary $\theta_+$, after which the air conditioning is switched on again. The AC's periodical properties can be described by the following equivalent thermal parameters model [19]:

$$\frac{d\theta_i(t)}{dt} = \frac{1}{C_i R_i}\left[\theta_a - \theta_i(t) - s_i(t)\, R_i P_i\right], \qquad (1)$$

$$s_i(t^+) = \begin{cases} 1, & \theta_i(t) \ge \theta_+ \\ 0, & \theta_i(t) \le \theta_- \\ s_i(t), & \text{otherwise,} \end{cases} \qquad (2)$$

where $\theta_i(t)$ is the air temperature inside the $i$-th house at time $t$, $\theta_a$ is the outdoor ambient temperature, $C_i$ (kWh/°C) is the $i$-th AC's equivalent thermal capacitance, $R_i$ (°C/kW) is the $i$-th AC's equivalent thermal resistance, $P_i$ (kW) is the $i$-th AC's thermal power, which divided by the coefficient of performance (COP) gives the actual electrical load of the air conditioning, and $s_i(t)$ is the switch state variable of the $i$-th AC at time $t$: $s_i(t) = 1$ means the $i$-th AC is on at time $t$, while $s_i(t) = 0$ means the $i$-th AC is off at time $t$.
The formula for calculating the aggregated demand of the $n$ ACs in the group is:

$$D(t) = \frac{\sum_{i=1}^{n} s_i(t)\, P_i/\mathrm{COP}}{\sum_{i=1}^{n} P_i/\mathrm{COP}}, \qquad (3)$$

where $D(t)$ is a per-unit value, taken as the baseline: the numerator represents the sum of the running AC loads in the group, and the denominator represents the total power of the group, assuming that all ACs in the group are on.
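As a concrete illustration of Equations (1)-(3), the following Python sketch runs the Monte Carlo simulation described in this section for a population of ACs. It is a minimal sketch under assumed parameter values (R = 2 °C/kW, P = 12 kW thermal, mean C = 2 kWh/°C, ambient temperature 32 °C); these are illustrative choices satisfying $\theta_a - \theta_{set} = RP/2$, which yields the duty cycle of 0.5 discussed in Section 3, and are not the values of Tables 1 and 2.

```python
import numpy as np

def simulate_etp(n=1000, t_end_min=600, theta_a=32.0, theta_set=20.0,
                 H=1.0, R=2.0, P=12.0, C_mean=2.0, sigma_rel=0.2, seed=0):
    """Monte Carlo simulation of the ETP model, Equations (1)-(3).

    Units: temperature in degC, R in degC/kW, P in kW (thermal),
    C in kWh/degC; the integration step is one minute.
    """
    rng = np.random.default_rng(seed)
    # Lognormal C with mean C_mean and relative standard deviation sigma_rel.
    sigma = np.sqrt(np.log(1.0 + sigma_rel**2))
    C = rng.lognormal(np.log(C_mean) - 0.5 * sigma**2, sigma, n)

    lo, hi = theta_set - H / 2.0, theta_set + H / 2.0
    theta = rng.uniform(lo, hi, n)            # diverse initial temperatures
    s = rng.integers(0, 2, n).astype(float)   # random initial on/off states

    dt = 1.0 / 60.0                           # one minute, in hours
    D = np.empty(t_end_min)
    for k in range(t_end_min):
        # Equation (1): first-order thermal dynamics of every room.
        theta += (theta_a - theta - s * R * P) / (C * R) * dt
        # Equation (2): hysteresis switching at the band edges.
        s = np.where(theta >= hi, 1.0, np.where(theta <= lo, 0.0, s))
        D[k] = s.mean()                       # Equation (3), identical P
    return D
```

Raising theta_set by 0.5 °C partway through this loop reproduces, qualitatively, the damped oscillation of the aggregated demand discussed later in Section 3.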
The traditional thermodynamic model of the air conditioning given by Equations (1)-(3) has some limitations. Each air-conditioned room is a dynamic system with a nonlinear and independent iteration, so when there is a large number of ACs, we face the curse of dimensionality, which causes great difficulties. In addition, the model contains both the continuous temperature variable θ and the discrete switching state variable s. In the following section, we present a simplified transfer function model, which can be used to easily obtain the aggregated load of a population containing n ACs.
Transfer Function Model for Aggregated Air Conditioning Group
It is difficult to perform an accurate mathematical analysis of the traditional thermodynamic model, since each air-conditioned room is a dynamic system with a nonlinear and independent iteration. This section explores a simplified transfer function model by analyzing the duty cycle of the air conditioning group. The excitation signal is a step change of the temperature set point.
The Influence of Air Conditioning Parameters on the Output Characteristic
In order to maintain the diversity of the air conditioning group, we assume that at the initial stage all air-conditioned room temperatures are uniformly distributed between the boundary temperature values $[\theta_-, \theta_+]$, which determine whether an AC is on or off. In order to explore the relationship of the equivalent heat capacity with the ambient temperature and the duty cycle of the air conditioning group, we assume that all ACs have the same thermal power $P$ and the same equivalent thermal resistance $R$. This ensures that the rate at which the room temperature falls while the air conditioning is on equals the rate at which it rises while the air conditioner is off, so the duty cycle is 0.5. We assume that the equivalent thermal capacitance $C$ follows a lognormal distribution:

$$f(C) = \frac{1}{C\,\sigma\sqrt{2\pi}}\,\exp\!\left[-\frac{(\ln C - \mu)^2}{2\sigma^2}\right].$$
As the control signal, the temperature set point of the ACs is the same for all members of the group: $\theta_{set} = 20\,$°C. The hysteresis band delimited by the boundary temperature values is also the same for all air conditioners; in the simulations of this paper the hysteresis width is $H = \theta_+ - \theta_- = 1\,$°C. We then analyze the air conditioning group under the above assumptions. At $t = 0$, a step change of 0.5 °C of the temperature set point occurs, so the temperature band of Figure 2a moves right into $[\theta_- + 0.5, \theta_+ + 0.5]$. Figure 2a,c shows that when the air conditioning is off the room temperature rises, while Figure 2b,d shows that when the air conditioning is on the room temperature drops. We define $v_i$ as the rate of change of the temperature of the $i$-th air-conditioned room (Equation (6)); since all ACs share $R$ and $P$, $v_i$ depends only on $C_i$. Besides, we define the state variable $x_i(t)$, which measures the phase of room $i$ along its hysteresis cycle. For an air conditioning group based on the thermodynamic model, at $t = 0$ a step change of 0.5 °C of the temperature set point occurs. When $x_i(t) \in [2k, 2k+1]$ the air conditioning is on, and when $x_i(t) \in [2k+1, 2k+2]$ it is off; each time $x_i$ crosses an integer value, an on-off state transition occurs. The relationship between $x_i$ and the state variable $s_i(t)$ of the thermodynamic model (1)-(3) is therefore

$$s_i(t) = \begin{cases} 1, & \lfloor x_i(t) \rfloor \text{ even}, \\ 0, & \lfloor x_i(t) \rfloor \text{ odd}. \end{cases} \qquad (8)$$

When $x_i(t) < 1$ at time $0^+$, the temperature set point rises; from Figure 3 we can see that only 1/3 of the ACs are on, and by Equation (8) the states of the ACs are distributed over the interval $x_i(t) \le 0.5$. The probability that an air conditioner in the population is switched on is expressed through the probability operator $\Pr[\cdot]$. For an air conditioning group based on the ETP model, at $t = 0$ the temperature set point of all air conditioners rises by 0.5 °C. Assuming that all ACs have the same power and taking their total power as the baseline (all ACs switched on simultaneously), the normalized power demand $D(t)$ of the group of $n$ ACs equals the probability that a randomly chosen AC is on [20]:

$$D(t) = \Pr[s_i(t) = 1]. \qquad (11)$$

$D(t)$ also gives the proportion of the air conditioning that is on in the group at time $t$. The expression of Equation (11) combines the duty cycle of the air conditioning group at the moment $0^+$, when the temperature set point rises, with the duty cycle of the group for $t > 0$. Assuming that all ACs have the same $R$ and $P$ while the parameter $C$ obeys a lognormal distribution, the temperature change rate $v$ of Equation (6) obeys a lognormal distribution too. We use $\mathrm{s.d.}$ to represent the standard deviation and $E$ the mean value, and define the relative standard deviation

$$\sigma_{rel} = \frac{\sigma_c}{\mu_c}\,,$$

where $\mu_c$ is the mean value of the equivalent heat capacity of the $n$ ACs and $\sigma_c$ is its standard deviation.
Based on Equation (11), when $v$ obeys a lognormal distribution, $x(t)$ can be considered to follow the same distribution. According to the properties of the lognormal distribution, we obtain Equation (14). From Equation (11) two further expressions can also be derived, whose first terms are very small and can be ignored; this yields Equations (17) and (19). By plugging Equations (17) and (19) into Equation (14), Equation (20) is obtained, in which $\mathrm{erf}[\cdot]$ is the Gauss error function.
Let $y = 1$ and consider the points $2k$ and $2k+1$, $k = 0, 1, \ldots$. Then, by plugging Equation (19) into Equation (11), $D(t)$ can be approximated as in Equation (21). Figure 4 shows the average temperature and power for a population composed of 10,000 air conditioners for different values of $\sigma_{rel}$. At $t = 500$ min, the temperature set point of all ACs rises by 0.5 °C. In Figure 4a-d, for $\sigma_{rel}$ = 0.02, 0.05, 0.2 and 0.5 respectively, the top panel represents the average temperature inside the house before and after the step response, while the bottom panel represents the total power of all ACs before and after the step response. $\sigma_{rel}$ is the standard deviation of the lognormal distributions as a fraction of the mean value for $R$, $C$ and $P$. According to Equation (3), the normalized power demand $D(t)$ is the proportion of the air conditioning that is on in the group at time $t$. All ACs have the same $R$ and $P$, while $C$ obeys a lognormal distribution. This ensures that the rate of fall of the room temperature when the air conditioning is on equals the rate of rise of the room temperature when the air conditioner is off, so the steady-state value of the duty cycle is 0.5.
As shown in Figure 4, the average temperature of the air conditioning group is in the vicinity of the set point 20 °C before the 0.5 °C step. Then, the room temperature stabilizes at 20.5 °C after a period of oscillation. The temperature and power have the same settling time. More generally, the duty cycle does not always remain at 0.5: on hotter days, when more ACs are turned on, the duty cycle is larger than 0.5, while on colder days the situation is the opposite.
When not all parameters are identical, for example when $R$ and $P$ also follow lognormal distributions, the variations of the temperature and power of the ACs in the group will be more dramatic. Due to this greater diversity, the damping will be greater.
The temperature and the power of the ACs experience an underdamped oscillation after the temperature set point has risen by 0.5 °C at t = 500 min; Figure 4 shows a decaying oscillation. For $\sigma_{rel} = \sigma_c/\mu_c$, the bigger $\sigma_{rel}$ is, the greater the system damping becomes, and the average temperature and power curves have a shorter settling time and a smaller overshoot. By observing the changes in the aggregated load before and after a step change of the temperature set point, we can approximate the air conditioning group with a second-order dynamic model. This makes it easy to obtain the aggregated load of a population consisting of n ACs, and allows us to design a controller based on the transfer function later. The following simulations show that, when some restrictions in the assumptions are released in more general cases, the transfer function model and control strategy for the ACs remain quite effective in realistic situations.
A Transfer Function Model as a Second-Order Linear Time-Invariant System
When the temperature set point rises by 0.5 °C, the response of a population consisting of $n$ ACs is the sum of $D_{ss}$, the aggregated power demand of the AC group for a constant temperature set point (which is also the duty cycle of the steady state), and a transient term involving $G_p(s)$ that describes the aggregated demand response to the 0.5 °C step in the form of a second-order linear model (Equations (22) and (23)). The hysteresis width $H$ is obtained from the upper and lower temperature limits that determine whether the AC is operating or switched off:

$$H = \theta_+ - \theta_-\,. \qquad (24)$$

The second-order transfer function model of an AC group is

$$G_p(s) = \frac{b_1 s + b_0}{s^2 + 2\zeta\omega_n s + \omega_n^2}\,, \qquad (25)$$

whose parameters are calculated from the equivalent thermal capacitance, thermal resistance, thermal power, hysteresis width and so on. The damping ratio $\zeta$ and the undamped natural oscillation frequency $\omega_n$ of the characteristic polynomial in the denominator are given by Equation (26), and the numerator coefficients $b_1$ and $b_0$ by Equation (27), in which $\bar v$ represents the mean value of the rate $v$ defined in Equation (6) and erf represents the Gaussian error function. The proof of the above equations is given in Appendix A.
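For readers who want to reproduce the qualitative behaviour of Equation (25), the following sketch builds the canonical second-order transfer function and plots its step response with SciPy. The numerical values of $\zeta$, $\omega_n$, $b_1$ and $b_0$ below are placeholders, since the entries of Table 1 are not legible in this copy; they are chosen only to produce a lightly damped response.

```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Canonical form of Equation (25); all numbers below are placeholders.
zeta, wn = 0.05, 0.05          # damping ratio [-], natural frequency [rad/min]
b1, b0 = 0.0, 0.05 * wn**2     # numerator coefficients (assumed)

Gp = signal.TransferFunction([b1, b0], [1.0, 2.0 * zeta * wn, wn**2])
t, y = signal.step(Gp, T=np.linspace(0.0, 600.0, 3000))  # time in minutes

plt.plot(t, y)
plt.xlabel("time (min)")
plt.ylabel("deviation of aggregated demand D(t)")
plt.show()
```

With a small damping ratio, the plotted step response shows the slowly decaying oscillation that the thermodynamic simulation of Figure 4 exhibits.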
Stability Analysis
The aggregated demand response to a 0.5 °C step based on the second-order linear time-invariant (LTI) model is shown in Figure 5, which reveals an underdamped oscillation as the system crosses the position of equilibrium.
The simulation parameters are listed in Table 1; they are calculated using Equations (25)-(27) from the air conditioning parameters given in Table 2. The decay time of the step response of the control plant to 1% is about 520 min. It can be seen that the settling time calculated from the second-order LTI model in Figure 5 is consistent with the thermodynamic model in Figure 4. Figure 6 shows the dynamic performance of the open-loop transfer function of the air conditioning group composed of n ACs. We can see from the graph that, without a controller, the peak of the second-order system is 0.422, the overshoot is 1180%, the rise time is 47.1 s, the settling time is 235 s, the steady-state value is 0.0329, and there exists a static error of 2% in the normalized power demand D(t). Therefore, it is sensible to design a corresponding controller, increasing the open-loop amplification and using proportional control to reduce the static error.
Simulation Analysis
The initial temperature set point of the air conditioning group is assumed to be 20 °C. At t = 500 min, the temperature set point of all ACs is raised by 0.5 °C, and the upper and lower temperature limits, which determine the operating state of the ACs, rise to 20.5 and 19.5 °C; the temperature hysteresis width is still 1 °C. Figure 8 shows a comparison of the aggregated demand between the transfer function model and the equivalent thermal parameters (ETP) model in response to a 0.5 °C step rise. In the ETP model, the number of ACs in the population is 50, 1000 and 10,000, respectively. Figure 8a shows the variation of the average temperature inside the rooms of the AC group, and Figure 8b shows the variation of the aggregated demand of the AC group. The parameters of the equivalent thermal parameters model can be found in Table 2. When there are only 50 air-conditioning units, although the AC parameters C, R and P are still log-normally distributed in the simulation, the diversity of the population cannot be achieved because of the strong randomness: the simulation curve is very volatile and has many glitches. In contrast, when the number of aggregated ACs in the population is 1000 or 10,000, the population has strong diversity, the ACs are uniformly distributed over all running states, and the step response curve is very smooth.
We can see that, with the parameters in Table 1, the second-order linear time-invariant transfer function model can accurately approximate the aggregate response of the AC group. The load profile simulated by the transfer function model is very close to that simulated by the ETP model containing 10,000 ACs. The input signal u(t) represents the 0.5 °C step rise of all ACs' temperature set point at t = 500 min.
The second-order LTI model, which has a simple structure, can capture the oscillation characteristics of the aggregate demand response of ACs. This is the key to the design of feedback control strategies for load control. In addition, applicability and complexity are balanced, which makes the proposed model very practical.
The novelty provided in this section is as follows: the aggregated power of a population of n ACs responding to a step signal of the temperature set point was computed, and the dynamics of the response of a numerically simulated population over a realistic range of parameter values were captured. Furthermore, a simplified control model based on the second-order linear time-invariant transfer function was proposed, because the ETP equations are too complicated to solve; the excitation signal was the step change of the temperature set point. The proposed model provides a rule of thumb for the response without the need for intensive numerical calculations, and it offers facilities for control design.
Proportional-Integral-Differential (PID) Controller Based on Particle Swarm Optimization (PSO) Algorithm
The performance of the PID controller depends on three parameters: Kp, Ki and Kd. Therefore, it is important to optimize these parameters. Currently, PID controller parameters are mainly adjusted manually, which is not only time consuming but also cannot guarantee the best performance. The PSO algorithm has been widely used in function optimization, neural network training, pattern classification, fuzzy control systems, and other fields [21]. This paper uses the PSO algorithm to optimize the parameters of the PID controller. Figure 9 shows a load control strategy for the AC group based on the PSO algorithm for PID parameter tuning. When a load control event occurs, the desired reference demand of the ACs minus their actual output power gives the load tracking error. This load tracking error is taken as the input of the PID controller, and the PSO algorithm is then used to adjust the controller's parameters. The calculated temperature set point u(t) is taken as the control signal of the air conditioning group. The bridge between the PSO algorithm and the Simulink model is the fitness value of the particles, namely the parameters of the PID controller and the corresponding performance index of the system. The optimization process is as follows: first, the particle swarm is generated by the PSO algorithm, and the three-dimensional particles in the swarm are sequentially assigned to the PID controller parameters Kp, Ki and Kd. Then we run the transfer function model of the AC group and obtain the performance index corresponding to the specified parameters. Finally, we take the performance index as the fitness value of the PSO to determine whether the exit conditions have been met. If so, the calculation process terminates; otherwise, the particle swarm continues to update.
The performance index, given by an integral of the deviation between the expected power demand of the system and the actual feedback output, is a measure of the performance of the control system. In this paper, the instantaneous error function $e(t)$ of the control system is evaluated by the integrated time and absolute error (ITAE) index:

$$J_{ITAE} = \int_0^{\infty} t\,\lvert e(t)\rvert\, dt.$$

The transient response oscillation of a control system tuned with the ITAE performance index is small, and the parameters have good selectivity.
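The sketch below shows how the ITAE index can be evaluated for a candidate gain triple against the transfer function model; this is the fitness function handed to the PSO. The closed loop is formed analytically from the polynomial coefficients. An ideal PID with an unfiltered derivative is assumed for brevity, and Gp_num and Gp_den stand for the numerator and denominator coefficients of Equation (25); both are simplifying assumptions rather than the paper's Simulink implementation.

```python
import numpy as np
from scipy import signal

def itae_fitness(gains, Gp_num, Gp_den, t, r):
    """Closed-loop ITAE cost J = integral of t*|e(t)| dt for PID gains.

    gains:  (Kp, Ki, Kd)
    Gp_num, Gp_den: polynomial coefficients of the plant Gp(s)
    t, r:   time grid and reference demand trajectory
    """
    Kp, Ki, Kd = gains
    c_num = np.array([Kd, Kp, Ki])      # PID: (Kd s^2 + Kp s + Ki) / s
    c_den = np.array([1.0, 0.0])
    ol_num = np.polymul(c_num, Gp_num)  # open loop L = C * Gp
    ol_den = np.polymul(c_den, Gp_den)
    # Closed loop T = L / (1 + L):
    cl = signal.TransferFunction(ol_num, np.polyadd(ol_den, ol_num))
    _, y, _ = signal.lsim(cl, U=r, T=t)
    e = r - y
    return np.trapz(t * np.abs(e), t)   # ITAE index
```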
The novelty provided in this section is as follows: after analyzing the stability of the proposed model, this paper designs its PID controller based on the PSO algorithm. Compared with general optimization approaches, the closed-loop control and feedback strategy is applied to provide a new way for flexible loads like air conditioners to be involved in demand response.
The PSO algorithm itself is not the present authors' invention and is not, by itself, the innovation of this paper. It is applied to tune the parameters of the PID controller in order to obtain a smaller overshoot, a shorter settling time and higher accuracy throughout the control process, and to meet the requirements of demand response applications such as load shifting and peak shaving. The principle and flowchart of the PSO algorithm for tuning the PID controller parameters can be found in Appendix B.
Scenario 1: Simulation under Constant Ambient Temperature Conditions
A population of ACs with similar characteristics can be used as a control group [22,23]. We choose the downtown area of the city of Nanjing, China, for this research, where 18,000 ACs belonging to 10,000 residents participate in the demand response plan. The parameters of the ACs in the group are listed in Table 2. When the ambient temperature is constant (e.g., 32 °C), the parameters of the second-order transfer function of the aggregate AC group are shown in Table 1; in this scenario the parameters remain unchanged.
First, we do not apply any control to the AC population, so the temperature set point stays constant. The total power demand of the whole AC population is shown in Figure 10, the temperature of 30 random air-conditioned rooms in the population is presented in Figure 11, and the mean indoor temperature of all ACs is presented in Figure 12. The simulations were run in MATLAB (The MathWorks, Inc., Natick, MA, USA) on a desktop computer equipped with an Intel Core i5-3230M 2.60 GHz CPU, 4.00 GB of memory, and a 64-bit Windows 8 operating system.
Figures 10-12 show that, without any control, the AC group keeps running independently. The profile of the aggregated demand of the AC group is gentle, and the duty cycle in steady state is 0.467. The temperatures of the 18,000 air-conditioned rooms are uniformly distributed between 19.5 and 20.5 °C; the hysteresis width, which determines the upper and lower limits of the indoor temperature, is 1 °C. Besides, the mean temperature of all AC rooms is close to the temperature set point of 20 °C.
We now start a scenario analysis in which the temperature set point offset is varied to control the aggregated power demand of the air conditioners. Assuming that at the initial stage 44.6% of the ACs are on while the rest are off, the whole load control scenario is divided into three stages. For the first 100 min, the ACs run independently without any control. The second stage is from 100 to 350 min, during which the grid is in a peak load period: there is a power shortage, which requires a 20% reduction of the air conditioning load, equivalent to 4.3 MW. The load aggregator raises the temperature set point of the ACs to achieve peak load shifting. The third stage takes place between 350 and 600 min, when the peak load period of the power grid is over and an additional 20% increase of the air conditioning load is needed to meet the power consumption of the off-peak period; the load aggregator lowers the temperature set point of the ACs to restore a satisfactory comfort level. The load management scenario ends at t = 600 min. In this scenario, the second-order transfer function of the AC group remains unchanged because of the constant ambient temperature, as shown in Table 1. The particle swarm optimization algorithm is applied to tune the parameters of the PID controller. The parameters of the PSO algorithm are set as follows: population size m = 100; dimension D = 3, with the three parameters Kp, Ki and Kd to be optimized lying in the range 0-300; inertia weight w = 0.6; maximum number of iterations t = 200; and the acceleration coefficients $c_1$ and $c_2$. The ITAE index is chosen as the fitness function, with the minimum fitness value being 0.1. The particle velocity is between [−1, 1]. How to select the parameters of the PSO algorithm in detail is explained in [24,25]. Figure 13a,b shows the optimization of the PID parameters Kp, Ki and Kd by the PSO algorithm, and Figure 14 shows the variation of the ITAE performance index. As the number of iterations of the particle swarm increases, the ITAE performance index, taken as the fitness function, gradually stabilizes at around 1.06. We thus obtain the optimal PID parameters and the performance index shown in Table 3, in which the PID parameters tuned by the particle swarm optimization algorithm are compared with the tuning results of the Ziegler-Nichols method; ts represents the settling time and tp the peak time. In order to demonstrate the advantages of the PSO algorithm, we compared it with the classical Ziegler-Nichols method for tuning the PID parameters of the second-order transfer function model of the aggregated ACs [26]. The simulation results indicate that, although the peak time of the tuning process is slightly longer, the optimized controller based on the PSO algorithm greatly outperforms the conventional one in settling time, overshoot and performance index.
Figure 15 shows the comparison between the desired reference value of the aggregated demand of a population of ACs for load shifting, the power output of the population based on the conventional PID controller, and the power output of the population based on PSO-tuned PID parameters, when the outdoor temperature is a constant 32 °C. In Figure 15, the black curve is the ideal load control reference output, the green curve is the actual power output of the population using the Ziegler-Nichols method to tune the PID parameters, and the red curve is the actual power output using the particle swarm optimization algorithm to tune the PID parameters. Figure 15b,c is obtained by zooming in on the parts circled by a dotted curve in Figure 15a, showing the effects of reducing the power output during 100-350 min and increasing it during 350-600 min, respectively.

Table 3. Tuning proportional-integral-differential (PID) parameters using the Ziegler-Nichols method in comparison with the particle swarm optimization algorithm.

As can be seen from the figure, using PSO to tune the PID parameters is significantly better than using the Z-N algorithm. In response to the step changes of the load control reference values at 100 and 350 min, the Z-N tuning method has a larger overshoot, a longer settling time, continuous oscillation throughout the control process and more glitches. In contrast, the control performance of the PSO-tuned PID parameters is quite satisfactory and meets the requirements of rapidity, accuracy and stability of the control system.
We can also achieve a 100% load reduction by adjusting the expected reference value of the air conditioning load to zero. Figure 15d shows the simulation results for the aggregated demand of n ACs in this extreme case. We assume that a serious power shortage appears, requiring a 100% reduction of the air conditioning load. The aggregated power consumption of the ACs based on PSO-tuned PID parameters drops to zero when the reference signal is applied at t = 100 min, and this stage lasts 30 min.
The cost of doing so is that all of the ACs are switched off and the air temperature inside the houses gradually rises. From a technological perspective, any arbitrary adjustment of the ACs can be achieved using the proposed model and control strategy; in a real-world scenario, however, if the duration of a 100% load reduction is long, it will affect the users' comfort and lead to dissatisfaction on a hot day. Thirty minutes later, the power demand increases back to its initial value, and all ACs return to the steady state without any parasitic oscillation.
Figure 16 shows the variation of the indoor temperature of 30 random ACs. The temperature hysteresis width is H = 1 °C. The air temperature inside the house changes in response to the variation of the temperature set point u(t). We also observe that the ACs' temperature set point can be controlled by the load aggregator, who sets a higher value to reduce the load and a lower value to increase the load. After 600 min, the control process is over, but the room temperature does not directly return to the initial range [19.5 °C, 20.5 °C]. Instead, it undergoes a nearly 200 min transition, and the final stable temperature is slightly lower than the initial value. The reason for this phenomenon is that the characteristics of the air conditioning itself make the load rebound before and after the load control event. An alternative family of control strategies has been proposed in [27-29], which reduced two-way broadcasting to one-way and eliminated the problem of unwanted synchronization of TCLs, with high-quality control and without requiring a channel for consumer-to-grid communication. To quickly compensate for a mismatch between generation and load, the authors put forward strategies based on the timer-based safe protocols (SPs), including SP-1, SP-2 and SP-3, and focus on developing algorithms to generate a set of power pulses that are useful in spinning reserve applications.
This paper presents a comparison of the power demand response between the above hybrid SP protocol combining SP-1 and SP-2 and the proposed closed-loop strategy in Figure 17. The simulation scenario is unchanged: from 100 to 350 min a 20% reduction of the air conditioning load is required (Figure 17a), followed by an additional 20% increase of the air conditioning load to meet the power consumption of the off-peak period (Figure 17b). The green line is the power response under the hybrid SP protocol combining SP-1 (yellow) and SP-2 (blue); how these protocols work in detail is explained in [28,29]. The proportion of ACs following SP-1 is P = 0.2, and the rest follow SP-2. By combining the two SPs, peak reduction can be achieved. It can be observed that from 100 to 160 min and from 400 to 600 min, the power demand of the ACs based on the hybrid protocol combining the two SPs tracks the reference output closely: it responds to the signal instantly and maintains the low power at the expected value for an hour (the natural TCL cycle time). In the remaining time, however, the green curve gradually deviates from the reference value and slowly saturates towards the initial value. In contrast, at all times during the control period (100-350 min and 400-600 min), the power demand curve based on the proposed model and control strategy (red) stays tightly on the reference.
Although the SP strategy has many advantages, it is an open-loop approach and cannot compensate for output error. In contrast, the proposed closed-loop control adjusts the control signal according to the measured output and thus obtains higher precision.
Scenario 2: Simulation under Variable Ambient Temperature Conditions
In this scenario, we consider a more realistic situation with an ever-changing ambient temperature. We choose 5 August 2014, a hot summer day on which the total load was very high. The temperature sensors of the weather station deployed in the area collect the actual ambient temperature once every 10 min; the resulting 24 h ambient temperature profile is drawn in Figure 18. It is observed that from 12:00 to 15:00 the ambient temperature is high, with a maximum of 36.3 °C. Figure 19 shows the 24 h power load of a downtown area of Nanjing, China, where 18,000 ACs of 10,000 residents were operating on the hot day of 5 August 2014. The real load profile data, collected every 15 min, was provided by the State Grid Nanjing power supply company, and the real-time temperature on the same day, collected every 10 min, was provided by the Jiangsu Provincial Meteorological Bureau. In Figure 19, the top blue curve is the 96-point load characteristic curve of the customers in the area on the maximum load day. We obtained load values for each second using spline interpolation. Peak hours run from 10:30 to 16:00 and from 20:00 to 22:00; during these periods, electricity demand outstrips supply and a power shortfall appears. A low-load period runs from 4:00 to 8:00. The maximum load is 5.75 MW, the minimum load is 3.51 MW, and the peak-to-trough difference is 38.96% of the maximum load. The bottom red curve in Figure 19 is the baseline load of the 18,000 ACs; this curve is obtained by simulating the second-order linear time-invariant transfer function model with the parameters in Table 1 and the ambient temperature curve in Figure 18. The total load characteristic curve minus the AC load curve based on the TF model gives the non-air-conditioning load, shown as the green curve in the middle of the figure.
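The per-second load values mentioned above can be obtained from the 96-point (15 min) profile with a cubic spline, for example as follows. The load vector here is synthetic placeholder data, since the real curve from the State Grid Nanjing power supply company is not reproduced in this paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_15min = np.arange(96) * 15 * 60          # sample instants in seconds
# Placeholder daily profile between roughly 3.5 and 5.7 MW:
load_MW = 4.6 + 1.1 * np.sin(2.0 * np.pi * (t_15min / 86400.0 - 0.3))

spline = CubicSpline(t_15min, load_MW)
t_sec = np.arange(0, t_15min[-1] + 1)      # one value per second
load_sec = spline(t_sec)
```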
The population consists of 18,000 ACs. Assuming that at the initial moment 44.6% of the ACs are on while the rest are off, the whole load control scenario is divided into two stages. The first stage is from 12:00 to 16:00, when the grid is in a peak load period; we decide to cut the peak load of the ACs to 2 MW, so the load aggregator raises the temperature set point of the ACs to achieve peak load shifting. The second stage takes place between 4:00 and 8:00, when the peak load period of the power grid is over; we decide to increase the load of the ACs to 1.2 MW to maintain the power consumption of the trough period, so the load aggregator lowers the temperature set point of the ACs to restore the users' comfort.
In this scenario, the second-order transfer function model of the air conditioning is no longer linear time-invariant; instead, it changes with time, as the model parameters vary with the outdoor temperature while the equivalent thermal resistance, thermal capacitance and other parameters remain unchanged. There are two ways to solve this problem. This paper focuses on the load control of ACs for peak load shifting, and since the peak load duration is not long, the first method takes the average of the ambient temperature over half-hour intervals within the required period of peak load shifting; within less than half an hour, the ambient temperature can be assumed constant. During the period when the ACs participate in demand response, the change of the ambient temperature is not dramatic, especially given the heat island effect in the city center area. The second method therefore recalculates the parameters of the transfer function whenever the outdoor temperature changes by 0.5 °C; if the temperature variation is within 0.5 °C, the transfer function of the model is considered unchanged.
In this paper, we adopt the second method. The second-order transfer function parameters for the different ambient temperatures are shown in Table 4; the numbers in Table 4 are calculated using Equations (21)-(27), and the relevant AC parameters used in the calculation can be found in Table 2. The period during which the ACs can be controlled by the load aggregator is from 4:00 to 8:00 and from 12:00 to 16:00. Thus, according to the real-time ambient temperature on the warmest day, 5 August 2014, the research scope is identified as (28 °C, 32 °C) and (34 °C, 36 °C).
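The second method amounts to gain scheduling on the ambient temperature: the transfer function parameters are recomputed only when the measured temperature has drifted by 0.5 °C since the last update. A minimal sketch of this logic is given below; compute_params is a stand-in for the evaluation of Equations (21)-(27) (or a lookup in Table 4) and is not reproduced here.

```python
def schedule_tf_params(theta_a_series, compute_params, band=0.5):
    """Recompute the transfer function parameters only when the
    ambient temperature has moved more than `band` degC since the
    last update; otherwise reuse the current parameter set."""
    scheduled = []
    theta_ref = theta_a_series[0]
    current = compute_params(theta_ref)
    for theta_a in theta_a_series:
        if abs(theta_a - theta_ref) >= band:
            theta_ref = theta_a
            current = compute_params(theta_ref)
        scheduled.append(current)
    return scheduled
```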
Figure 20 illustrates the effect of the AC load control based on the PSO-tuned parameters of the PID controller for load shifting on a hot day with continuously changing ambient temperature. In Figure 20a, the upper blue curve represents the aggregated power of the 18,000 ACs without any control, the middle red dashed curve is the desired AC load reference value for load shifting, and the solid black line at the bottom represents the actual output power of the AC load based on the PSO-tuned parameters of the PID controller. Figure 20b,c is obtained by zooming in on the parts circled by a dotted curve in Figure 20a; they show the effects of increasing the power output from 4:00 to 8:00 and reducing the power output from 12:00 to 16:00, respectively. Figure 20d shows the probability density distribution of the tracking error between the actual output and the reference value. From the discussion above, one can conclude that, through the PID controller based on the PSO algorithm, the output power of the 18,000 ACs can be controlled by adjusting the temperature set point offset. Throughout the control period, the output power of the air conditioners closely follows the reference value, and the tracking error is small. During peak hours, the load aggregator can reduce the power of the ACs by up to 0.6 MW and maintain the total AC load at about 2.0 MW, which eases the power shortage to some extent. In off-peak hours, the load aggregator can increase the power of the ACs by 0.4 MW to make the load curve smoother, maintaining the total AC load at about 1.2 MW.
Figure 21 shows the variation of the ACs' temperature set point offset, which is also the control signal of the second-order transfer function model for the air conditioning group; the offset ranges over [−0.4 °C, 0.95 °C]. Figure 22 shows the variation of the indoor temperature of 30 random ACs. The hysteresis width of the room temperature, which moves with the air conditioning temperature set point u(t), remains H = 1 °C. The figure illustrates the following process: the ACs' temperature set point can be controlled by the load aggregator, who sets a higher value to reduce the load and a lower value to increase the load; the indoor temperature correspondingly varies within [19.1 °C, 21.45 °C]. At 16:00, when the load control period is over, the temperature set point offset does not return to 0; instead, it continues changing (Figure 21), and the room temperature does not directly return to the initial range [19.5 °C, 20.5 °C] (Figure 22). The reason for this phenomenon is that, within the short time it takes the air-conditioning chillers to restart after shutdown, the power demand is usually significantly higher; this pattern of the air conditioning makes the load rebound before and after the load control event. In order to bring the load curve back to its pre-control state, such as the curve after 16:00 in Figure 20a, continuous control is required for some time after the load management. The novelty provided in this section is as follows: in the variable ambient temperature scenario, the second-order transfer function model of the air conditioning is no longer linear time-invariant; instead, it changes over time, as the model parameters vary with the outdoor temperature. This paper puts forward two suggestions to solve this problem and adopts the second one.
Conclusions
This paper presented the design of a closed-loop control strategy for air conditioning loads to participate in load control. A core contribution of this paper is to represent the aggregated power demand of the AC population by describing how the proportion of operating ACs varies over time in response to a step shift in the temperature set point of all ACs. Assuming that the thermal capacitances are distributed log-normally among the ACs, a formula for the transient response to a temperature set point offset was derived. The temperature and power of the ACs experienced an underdamped oscillation after the temperature set point had risen by 0.5 °C; by observing this phenomenon, we can use a simplified second-order dynamic model to approximate the air conditioning group. As far as we know, this is the first work to characterize a mathematical approximation for the period of such a response. Based on the above work, a simplified control model based on the second-order linear time-invariant transfer function was derived.
Another contribution of this paper was the application of the closed-loop control and feedback strategy to provide a new way for flexible loads like air conditioning to be involved in demand response. In the feedback control strategy, the error between the expected demand reference value and the actual power of the AC group was used as the input variable of the controller. The output of the controller was the offset of the temperature set point, which was also the control signal of the AC group.
A simulation model controlling the offset of the temperature set point was developed to verify that the proposed transfer function model and control strategy can closely track the reference peak load shifting curves. Two scenarios were selected for the simulation: constant ambient temperature and variable ambient temperature. The simulation results demonstrated the effectiveness of the controller design. The control model can effectively lower electricity demand during peak hours, fill the gap between peak and off-peak loads, and balance the supply and demand of electricity. The proposed closed-loop control strategy provides a new way for flexible loads to participate in demand response.

Appendix A

According to Section 3.1, the duty cycle is 0.5, which fixes the ratio between the distances of two successive peaks, denoted $z_1$. Taking $z_1 = 0.9$ and applying Equation (A4), the parameters of the characteristic polynomial are obtained; besides, the steady-state value follows from the final value theorem in [30].

Appendix B

Assuming that the particle swarm contains $l$ particles, the information of particle $i$ ($i \in \{1, 2, \ldots, l\}$) can be represented by a $D$-dimensional vector, where $D$ is the number of parameters to be optimized: $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})$ represents the position in the search space and $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})$ represents the velocity. After the two optimal solutions $p_{i,best}$ and $g_{best}$ are obtained, the particle swarm updates the velocity and position according to Equations (B1) and (B2):

$$v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \left[p_{i,best}(t) - x_i(t)\right] + c_2 r_2 \left[g_{best}(t) - x_i(t)\right], \qquad (B1)$$

$$x_i(t+1) = x_i(t) + v_i(t+1). \qquad (B2)$$

In the above formulas, $v_i(t)$ is the velocity of particle $i$ in the $D$-dimensional space at time $t$, and $x_i(t)$ is the position of particle $i$ at time $t$; $p_{i,best}(t)$ represents the optimal solution found by particle $i$ itself, and $g_{best}(t)$ represents the optimal solution of the whole group; $c_1$ and $c_2$ are the acceleration factors, whose general values lie between 0 and 2; $r_1$ and $r_2$ are random values ranging from 0 to 1; and $\omega$ is a non-negative inertia weight, which affects the overall optimization ability. Figure B1 shows the flowchart of the PSO algorithm tuning the PID controller parameters. The complete optimization procedure is as follows:

Step 1: Initialize the PSO's population size, maximum number of iterations, and learning factors, as well as the initial positions and velocities of the particles.
Step 2: Choose the integrated time and absolute error (ITAE) as the fitness function. Calculate the fitness of each particle, find the best individual in the initial particle swarm, and initialize it as the best particle $g_{best}(t)$ of the group. Besides, the fitness of each particle itself is taken as the initial value of that particle's individual optimum $p_{i,best}(t)$.
Step 3: If a particle's current fitness is better, take the particle's current position as its best position $p_{i,best}(t)$. If the fitness of a particle's optimal position is better than that of the swarm's optimal position, take that particle's optimal position as the best position $g_{best}(t)$ of the population.
Step 4: Adjust the speed and position of each particle according to Equations (B1) and (B2).
Step 5: The terminating condition is that a predetermined maximum number of iterations or the lower limit of fitness has been reached; otherwise, go to Step 3.
Step 6: Output the global optimal particle, whose components are the best parameters of the PID controller.
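For completeness, a compact implementation of Steps 1-6 is sketched below. The population size, iteration limit, inertia weight, velocity limits, search range and fitness lower limit follow the values quoted in Scenario 1; the acceleration factors c1 = c2 = 2.0 are an assumption, since their exact values are not legible in this copy of the paper.

```python
import numpy as np

def pso_minimize(fitness, low, high, m=100, iters=200, w=0.6,
                 c1=2.0, c2=2.0, vmax=1.0, tol=0.1, seed=0):
    """Bare-bones PSO implementing the updates (B1) and (B2).

    low, high: arrays of length D giving the search range
    (D = 3 for Kp, Ki, Kd, each in [0, 300] in Scenario 1).
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    D = low.size
    x = rng.uniform(low, high, (m, D))
    v = rng.uniform(-vmax, vmax, (m, D))
    p_best = x.copy()
    p_val = np.array([fitness(xi) for xi in x])
    g_best = p_best[p_val.argmin()].copy()

    for _ in range(iters):
        if p_val.min() <= tol:          # Step 5: fitness lower limit reached
            break
        r1 = rng.random((m, D))
        r2 = rng.random((m, D))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # (B1)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, low, high)                                # (B2)
        val = np.array([fitness(xi) for xi in x])
        improved = val < p_val          # Step 3: update personal bests
        p_best[improved] = x[improved]
        p_val[improved] = val[improved]
        g_best = p_best[p_val.argmin()].copy()   # ... and the global best
    return g_best, p_val.min()
```

Combined with the hypothetical itae_fitness function sketched in Section 4, a call such as pso_minimize(lambda g: itae_fitness(g, Gp_num, Gp_den, t, r), np.zeros(3), np.full(3, 300.0)) would return the tuned (Kp, Ki, Kd).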
Figure 1. Periodical properties of the temperature inside the house and the power of air conditioners (ACs).
Figure 2. The temperature distribution of ACs before and after a step change of 0.5 °C of the temperature set point. Panels (a,b) indicate the temperature distribution before the step response; panels (c,d) illustrate the temperature change after the step response.
Figure 3. The state distribution of ACs instantly after the temperature set point step change.
Figure 4. The average temperature and power for the population composed of 10,000 ACs before and after the step signal of the temperature set point, under conditions of $\sigma_{rel}$ = 0.02, 0.05, 0.2 and 0.5. $\sigma_{rel}$ is the standard deviation of the lognormal distributions as a fraction of the mean value for $R$, $C$ and $P$. (a) $\sigma_{rel}$ = 0.02; (b) $\sigma_{rel}$ = 0.05; (c) $\sigma_{rel}$ = 0.2; (d) $\sigma_{rel}$ = 0.5.
Figure 5. The aggregated demand response of a 0.5 °C step based on the second-order linear time-invariant (LTI) model.
Figure 6. The dynamic performance of the transfer function. By analyzing the stability of the transfer function $G_p$, we obtain the Bode diagram shown in Figure 7: the gain margin and the phase crossover frequency $W_{cg}$ take their default (undefined) values, the phase margin is $P_m = -107.9061$, and the gain crossover frequency is $W_{cp} = 0.0227$. Therefore, the closed-loop system, with the stability parameters marked in the figure, is stable.
Figure 7. Bode diagram of the system transfer function $G_p$.
Figure 8. (a) The variation of the average temperature inside the room and (b) the variation of the aggregated demand of the AC group when the ACs' temperature set point has a step rise of 0.5 °C. The second-order linear time-invariant transfer function model is compared with the ETP model containing 50, 1000 and 10,000 ACs.
Figure 9. A load control strategy for the AC group based on the particle swarm optimization (PSO) algorithm for proportional-integral-differential (PID) parameter tuning.
Figure 15. (a-c) A comparison between the desired reference value of the aggregated demand of a population of ACs for load shifting, the power output of the population based on the conventional PID controller, and the power output of the population based on PSO-tuned PID parameters, when the outdoor temperature is a constant 32 °C. (d) The aggregated demand of n ACs in the extreme case of a 100% load reduction for 30 min.
Figure 16. Variation of the indoor temperature of 30 random ACs in the population.
Figure 19. 24 h power load of a downtown area of Nanjing, China, where 18,000 ACs of 10,000 residents were operating on the hot day of 5 August 2014.
Figure 20. (a-c) A comparison between the aggregated power of 18,000 ACs without any control, the desired AC load reference value for load shifting, and the actual output power of the AC load based on PSO-tuned parameters of the PID controller. (d) Probability density distribution of the tracking error between the actual output power and the reference value. The model is simulated for a hot day with continuously changing ambient temperature.
Figure 21. The variation of the ACs' temperature set point offset.
Figure 22. Variation of the indoor temperature of 30 random ACs.
The above are the parameters of the denominator of the transfer function; next we calculate the parameters of the numerator. $D(t)$ is the expected value of the corresponding expression when the system is in steady state, in which $T_{on}$ and $T_{off}$ depend on the reference temperature. Therefore, from the initial value theorem mentioned in [30], together with the Laplace transform of a derivative, the numerator coefficients are obtained.
Figure B1. The flowchart of the PSO algorithm tuning the PID controller parameters.
Table 1. Parameters of the second-order linear time-invariant transfer function.
Table 2. Parameters of the air conditioners.
Table 4. Parameters of the second-order transfer function model for ACs.
"Engineering",
"Environmental Science"
] |
Revisiting monotop production at the LHC
Scenarios of new physics where a single top quark can be produced in association with large missing energy (monotop) have been recently studied both from the theoretical point of view and by experimental collaborations. We revisit the originally proposed monotop setup by embedding the effective couplings of the top quark in an SU(2)L invariant formalism. We show that minimality selects one model for each of the possible production mechanisms: a scalar field coupling to a right-handed top quark and an invisible fermion when the monotop system is resonantly produced, and a vector field mediating the interactions of a dark sector to right-handed quarks for the non-resonant production mode. We study in detail constraints on the second class of scenarios, originating from contributions to standard single top processes when the mediator is lighter than the top quark and from the dark matter relic abundance when the mediator is heavier than the top quark.
Introduction
The first phase of the LHC experiments has given two important messages: a scalar resonance closely resembling the Standard Model Higgs boson has been discovered, and new physics beyond the Standard Model has not been found. The latter result implies that new states or effects beyond the Standard Model may be much more difficult to spot at the LHC than we previously thought. In fact, very strong bounds have been placed on easy-to-catch models, such as the constrained version of the Minimal Supersymmetric Standard Model [1,2].
Many theorists have therefore recently turned their attention to a more signature-based strategy, focusing on unusual final states that are difficult to detect or have not yet been considered by the experimental collaborations. One such final state, which has been gaining popularity among phenomenologists [3-17] and experimentalists [18,19], is the monotop signature: a single top quark produced in association with a large amount of missing energy. Although the production of this final state is very suppressed in the Standard Model, it is not easy to obtain this kind of event in realistic and complete models of new physics. Two main production mechanisms can lead to a monotop state [8,15]: either the resonant production of a coloured bosonic state which further decays into a top quark plus an invisible neutral fermion, or the production of a single top quark in association with an invisible boson that has flavour-changing couplings to the top and light quarks. Examples of the first class of models include R-parity violating supersymmetry, where the produced resonance is a top squark decaying into a top plus a long-lived neutralino [3,5,6,12]. The second class of models has been described in scenarios of dark matter from a hidden sector that couples to the Standard Model via flavour-violating couplings of a bosonic mediator [7,9,14,20].
All such models can be described in terms of a simple effective Lagrangian [8], which contains all the possible couplings giving rise to a monotop signal. A very general analysis of this framework can be found in Ref. [15], the limiting case of higher-dimensional operators has been discussed in Ref. [10], while monotop production via flavour-changing interactions of quarks with an invisible Z-boson has been detailed in Ref. [4]. Although the effective description has the advantage of being complete, it has the drawback of containing too many free parameters to be efficiently scanned by an experimental search. Furthermore, the included couplings do not respect the symmetries of the Standard Model, as they are intended to describe the model dynamics after the breaking of the electroweak symmetry. In this way, this approach ignores other interactions needed to restore gauge invariance which can give rise to new physics signals in different search channels, the latter possibly implying stronger constraints on the parameters of the model than the monotop search itself. In this work, we revisit the effective parametrisation originally proposed in Ref. [8] by paying particular attention to the embedding of the Lagrangian description into SU(2)_L × U(1)_Y invariant operators. Our analysis allows us to restrict the number of "interesting" scenarios, i.e., the cases where the monotop signal is genuinely the main signal of new physics to be expected at the LHC. Equivalently, this reduces the number of free parameters to a manageable number. Finally, we discuss in detail how the effective model could be completed in order to guarantee that the missing energy particle produced in association with the top quark is indeed either long-lived or decaying into invisible states.
The rest of this work is organized as follows. In Section 2, we describe how to embed the effective monotop description of Ref. [8] in the Standard Model gauge structure, considering separately the resonant and the flavour-changing monotop production modes. We then focus on non-resonant scenarios which turn out to be less "standard" and investigate, in Section 3, the conditions under which the invisible state is effectively invisible, and other experimental observations, which can further constrain the model. Our conclusions are presented in Section 4.
Resonant monotop production
In the first class of scenarios yielding the production of a monotop system at colliders, the produced top quark recoils against an invisible fermionic state χ. Note that, being singly produced, χ cannot be stable, so it is either long-lived or it decays into a pair of stable particles: in either case, it has to be a neutral and colour-singlet state. Both particles in the final state arise from the decay of a heavy scalar ϕ or vector X field, lying in the fundamental representation of SU(3)$_c$, that is resonantly produced from the fusion of two down-type (anti-)quarks. The effective Lagrangian describing those scenarios is given by [8]

$$\mathcal{L} = \mathcal{L}_{kin} + \phi\,\bar d^C_i \Big[(a^q_{SR})_{ij} + (b^q_{SR})_{ij}\,\gamma_5\Big] d_j + \phi\,\bar t\,\Big[a^{1/2} + b^{1/2}\gamma_5\Big]\chi + X_\mu\,\bar d^C_i\,\gamma^\mu \Big[(a^q_{VR})_{ij} + (b^q_{VR})_{ij}\,\gamma_5\Big] d_j + X_\mu\,\bar t\,\gamma^\mu\Big[a^{1/2} + b^{1/2}\gamma_5\Big]\chi + \mathrm{h.c.}\,, \qquad (2.1)$$

where i, j are flavour indices and where we omit all colour indices for clarity. The Lagrangian $\mathcal{L}_{kin}$ includes kinetic and mass terms for all new fields, while the other terms focus on their interactions with the Standard Model quarks. As the colour contraction in the interactions with two down-type quarks is antisymmetric, the scalar and pseudoscalar coupling matrices $(a^q_{SR})$, $(b^q_{SR})$ and the axial-vector coupling matrix $(b^q_{VR})$ are necessarily antisymmetric under the exchange of the flavour indices, while the vector coupling strength matrix $(a^q_{VR})$ is symmetric in flavour space. Consequently, parton density effects enhance the production modes $dd \to X$ and $ds \to \phi$ at hadron colliders (when the relevant coupling strengths are non-vanishing), as already pointed out in previous works [8,12,15]. Finally, the parameters $a^{1/2}$ and $b^{1/2}$ appearing in Eq. (2.1) represent the strengths of the interactions of the resonant states with the monotop tχ system.
All these interactions are completely generic, and in particular no assumption is made on the chirality of the Standard Model quarks that are involved. However, SU(2)$_L$ gauge invariance will necessarily constrain such couplings and force the invisible state χ and the extra coloured fields ϕ and X to lie in possibly non-trivial representations of the group, as already briefly shown in Ref. [11] for the scalar case. This implies the existence of additional component fields whose masses cannot be much larger than those of the ϕ, X and χ fields. In the simplified picture above, any mass splitting can indeed only be generated by the vacuum expectation value of the Higgs field, so that larger mass differences will induce sizable corrections to the electroweak precision observables [21,22] and are thus strongly disfavoured.
We show in the rest of this section that studying the SU(2)$_L$ embedding of the effective Lagrangian of Eq. (2.1) allows us to derive precious constraints on viable and realistic scenarios that deserve to be further studied in high-energy physics experiments. We start our analysis with the scalar case. The ϕ field, as any scalar field, can only couple to two fermions with opposite chiralities. As a consequence, its coupling to down-type quarks can only take the form

$$\mathcal{L} \supset \phi_1\,\bar d^C_R\,\lambda_{S_1}\, d_R + \phi_2\,\bar d^C_L\,\lambda_{S_2}\, d_L + \mathrm{h.c.}\,, \qquad (2.2)$$

recalling that the charge conjugate of the right-handed quark $d^C_R$ is left-handed, while that of the left-handed quark $d^C_L$ is right-handed. Moreover, we have introduced generic couplings $\lambda_S$ with numbered indices to distinguish between the two terms in the following discussion. The product $\bar d^C_R d_R$ transforms as a singlet of SU(2)$_L$ with a hypercharge quantum number of −2/3. This implies that the charge of the $\phi_1$ field under U(1)$_Y$ is 2/3 and that this field is not charged under the weak gauge group. Analogously, the $\bar d^C_L d_L$ product of quark fields belongs to a combination of two left-handed doublets of SU(2)$_L$ lying in the adjoint representation of the group and whose hypercharge is 1/3. This enforces the $\phi_2$ field to belong to a weak triplet of fields with a hypercharge of −1/3.
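As a quick cross-check of these assignments, one can track the hypercharges explicitly from the Standard Model values $Y(d_R) = -1/3$ and $Y(Q_L) = +1/6$, using the fact that each bilinear contains two quark fields and requiring each operator of Eq. (2.2) to be neutral under U(1)$_Y$:

$$Y(\bar d^C_R d_R) = 2\,Y(d_R) = -\tfrac{2}{3} \;\Rightarrow\; Y(\phi_1) = +\tfrac{2}{3}\,, \qquad Y(\bar d^C_L d_L) = 2\,Y(Q_L) = +\tfrac{1}{3} \;\Rightarrow\; Y(\phi_2) = -\tfrac{1}{3}\,.$$

Since $\phi_1$ is an SU(2)$_L$ singlet, its electric charge equals its hypercharge, $Q = T_3 + Y = 2/3$, matching the $\phi_s^{2/3}$ field introduced below.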
The above discussion demonstrates that the ϕ1 and ϕ2 fields are necessarily two different fields, whose representations under the Standard Model gauge group are given by

$$\phi_1 = \phi_s^{2/3}\,, \qquad \phi_2 = \Big(\phi_t^{2/3},\ \phi_t^{-1/3},\ \phi_t^{-4/3}\Big)\,, \qquad (2.3)$$

where, in the right-hand side of both relations, the subscripts s and t stand for the singlet and triplet representations of SU(2)$_L$, respectively, and the superscripts indicate the electric charges of all component fields. Equivalently, the possible couplings of the new scalar field ϕ of Eq. (2.1) to right-handed and left-handed down-type quarks have very different gauge structures and must arise from two different scalar fields ϕ1 and ϕ2, whose interactions are given instead by Eq. (2.2). In general, both new scalar fields can mix with the Standard Model Higgs doublet. However, the resulting mass splitting is constrained to be small by the perturbativity of the couplings [23] and by corrections to the S and T parameters [21,22]. Any new trilinear and quartic scalar interaction is thus neglected. The last term of the first line of the Lagrangian of Eq. (2.1) describes the couplings of the ϕ field to the top quark and the invisible fermion χ. It must be modified accordingly when considering that the ϕ field has to lie in either the singlet or the triplet representation of the weak isospin group. In the singlet case, a gauge-invariant interaction term can easily be written down as

$$\mathcal{L} \supset \lambda_1\,\phi_1\,\bar t_R\,\chi + \mathrm{h.c.}\,, \qquad (2.4)$$

with $\lambda_1$ a generic coupling strength. When we consider instead the SU(2)$_L$ triplet of fields ϕ2, no coupling with an electroweak singlet χ is allowed, so that χ must be embedded into a larger representation of SU(2)$_L$.
We have two such possibilities, shown in Eq. (2.5). As in the above syntax, the subscripts d and t indicate the representation under SU(2)_L and the superscripts refer to the electric charges of the various component fields. Under both setups, additional single top processes in which the top quark is resonantly produced in association with one of the charged component fields of the χ multiplet are expected. The mass splitting between the various components of χ is expected to be small (see above), so that these extra processes will accompany any hint for new physics in the monotop channel and cannot therefore be ignored. New charged long-lived particles are however heavily constrained by current searches [24,25], which renders these scenarios unlikely to be realized. Finally, additional constraints arise when the new scalar field ϕ_2 couples to a left-handed top quark and a fermionic field χ_d. Gauge invariance induces a coupling to the left-handed bottom quark too, which leads to a fast decay of the neutral χ field via a virtual ϕ_2 scalar. As a consequence, the initial monotop signal has to be traded for new physics contributions to more standard single top production in association with jets.
We now turn to cases where monotop systems are produced from the decay of a spin-1 resonance. Vector fields couple to spinors of the same chirality, so that the surviving couplings of the X-field to down-type quarks in Eq. (2.1) are vector-like, with generic coupling strengths denoted by λ_V. In order for such couplings to be SU(2)_L-invariant, the X-boson must belong to a weak doublet with hypercharge 1/6, its component fields carrying electric charges of 2/3 and −1/3. In this case, the decay into a monotop system can only be generated by the coupling of the X-field to an electroweak fermionic singlet χ and a left-handed top quark, as in Eq. (2.10). Weak isospin gauge invariance enforces such a Lagrangian term to be accompanied by the interaction of a left-handed bottom quark with the second component of the X-doublet. This induces the fast decay of the neutral χ fermion via an off-shell X-state (Eq. (2.11)), so that this setup does not predict any monotop signal. From all the above considerations, a monotop signature arising from the decay of a new coloured resonance can only be generated via a scalar mediator ϕ_s^{2/3}, singlet under the weak isospin group and coupling only to right-handed fermions (Eq. (2.12)). In terms of the effective model of Refs. [8,15] shown in Eq. (2.1), this implies the parameter choice of Eq. (2.13). This case corresponds to an R-parity violating supersymmetric scenario where the only R-parity violating terms of the superpotential are the so-called UDD interactions violating baryon number. The scalar field ϕ_s is then identified with a right-handed stop and χ with a neutralino (bino). As ϕ_s is a colour triplet, it will also be pair produced via standard QCD interactions and could be directly searched for in non-monotop processes, pp → ϕ_s ϕ_s* → jjjj, jjtχ and tχtχ (2.14). As a consequence, existing LHC searches at a center-of-mass energy √s = 8 TeV could further constrain the model. We however assume the new scalar field to be heavy, so that the new physics contributions to the three channels above are phase-space suppressed. This allows one to evade the bounds possibly arising from the analysis of data recorded during the LHC run I. However, the situation will change with run II at √s = 13 TeV. In this case, combining monotop searches with analyses of events whose final states feature paired dijet and top-antitop plus missing energy signatures may further constrain the model parameters.
Non-resonant monotop production
In the second class of scenarios yielding the production of monotop states, the top quark is produced in association with an invisible bosonic field that couples in a flavour-changing way to top and light up-type (up or charm) quarks. Denoting these new states by φ (in the scalar case) and V_µ (in the vector case), this series of models can be described by the effective Lagrangian of Eq. (2.15) [8], where L_kin contains kinetic and mass terms for all new fields and the coupling strengths a_FC and b_FC are symmetric matrices under the exchange of flavour indices. The states φ and V are in general not stable since they couple to quarks. A missing energy signature is therefore enforced by requiring these fields either to be long-lived, so that they decay outside of the detector, or to decay predominantly into a pair of additional neutral stable particles.
In particular, the latter possibility has been proposed in the framework of flavourful dark matter models [9], where the extra boson (φ or V) is a mediator of the interactions of the dark matter candidate with the Standard Model particles. The main issue with this class of models is to make sure that the new boson leads to a missing energy signature in a detector. In this work, we address it by assuming that the φ/V field dominantly decays into a pair of dark matter candidate particles. In this case, extra constraints arise from the requirement that the particle the boson decays into is a good candidate for dark matter, or at least that it does not overpopulate the Universe.
As already stated in Section 2.1, the interactions of a scalar field with quarks involve both the right-handed and left-handed components of the fermions. There are thus two possible structures allowed for the couplings. Focusing on contributions leading to a monotop hadroproduction rate enhanced by parton density effects, the associated Lagrangian (the first line in Eq. (2.15)) can be rewritten as in Eq. (2.16), where the y parameters denote generic coupling strengths. Consequently, the scalar φ field must transform as a doublet of SU(2)_L with an hypercharge quantum number of 1/2. Rendering the Lagrangian of Eq. (2.16) gauge invariant implies the addition of interactions between the charged component field φ^+ and quarks. The φ^+ field will therefore always promptly decay into two-body final states, φ^+ → ub̄ or td̄. Analogously, the neutral component φ^0 could also decay into an associated particle pair comprised of a top and an up quark, φ^0 → ut̄ + tū, as well as into a three-body final state via the exchange of a virtual charged scalar field. All these decay channels are however assumed to be negligible when compared to a decay into a pair of dark matter particles. In this case, no minimal coupling to a single stable state is achievable since φ is a doublet of SU(2)_L, and one must design an interaction of the φ state with two extra fields whose combination forms a doublet of SU(2)_L. If we restrict ourselves to φ^0 decays into fermionic particles, the most minimal option is a Lagrangian term coupling φ to a pair of new fermions, where χ_s is an electroweak singlet and χ_d a weak doublet with an hypercharge of 1/2. This term induces decays of both components of φ into the new fermions, the charged component χ_d^+ being taken heavier than, but close in mass to, the neutral component χ_d^0, so that both neutral fields χ_s and χ_d^0 can be seen as viable dark matter candidates.
As a consequence of this non-minimal dark sector of the model, monotop production via flavour-changing interactions of up-type quarks with a new invisible scalar field will always be accompanied by an extra single top production mode, shown in Eq. (2.20). The nature and magnitude of the associated effects are very benchmark dependent. For instance, a small mass splitting between the component fields of χ leads to very soft W-boson decay products, so that the process of Eq. (2.20) would imply new contributions to monotop production. On the other hand, in the case of larger mass splittings, related new physics scenarios feature an LHC signature comprised of a single top quark and an isolated lepton. Nevertheless, we choose to keep the focus on minimal models, and therefore ignore, in the rest of this work, scenarios where monotop states are produced from flavour-changing interactions of up-type quarks with a scalar particle mediating dark matter couplings to the Standard Model.
When the mediator is a vector boson V, one can design very simple models since it can be a singlet under the electroweak group. In this setup, the associated couplings (shown in the second line of the Lagrangian of Eq. (2.15)) involve either right-handed or left-handed quarks and take the form of Eq. (2.21), where the a_{L,R} parameters denote the strengths of the interactions of the V-field with up and top quarks. As in the rest of this section, we have restricted ourselves to interactions focusing on the monotop hadroproduction modes enhanced by parton densities. The Lagrangian terms of Eq. (2.21) open various decay channels for the V-field. First, the left-handed couplings allow the mediator to always promptly decay into jets, V → bd̄ + b̄d. Next, the importance of the decays into top and up quarks (this time both in the context of left-handed and right-handed couplings) depends on the mass hierarchy between the mediator and the top quark, the tree-level decay V → tū + t̄u being only allowed when m_V > m_t. Furthermore, when m_V < m_t, a triangle loop-diagram involving a W-boson could also contribute to the decay of the V-field into a pair of jets, V → d_i d̄_j. Finally, when m_W < m_V < m_t, the three-body decay channel V → bW^+ū + b̄W^−u is open, mediated by a virtual top quark. A monotop signal is thus expected only when the V-field is invisible and dominantly decays into a pair of dark matter particles. Since V is an electroweak singlet, the associated couplings can be written, in the case of fermionic dark matter, as in Eq. (2.22), where χ is a Dirac fermion, singlet under the Standard Model gauge symmetries. The consistency of the model, i.e., the requirement that V always mainly decays into a pair of χ-fields and not into one of the above-mentioned visible decay modes, implies constraints on the Lagrangian parameters. They will be studied in detail in the next section, together with other requirements that can be applied to viable non-resonant monotop scenarios. Summarising all the considerations above, the minimal gauge-invariant Lagrangian yielding monotop production in the flavour-changing mode is given in Eq. (2.23). In the notations of Refs. [8,15] employed in Eq. (2.15), the above choice corresponds to the parameter assignment of Eq. (2.24). Moreover, the parameter basis of Eq. (2.15), (a_FC^1, b_FC^1), is fully equivalent to the one of Eq. (2.23), (a_L, a_R).
Monotop phenomenology specific to non-resonant models
Some features of the resonant models mediated by a scalar, like the lifetime of the invisible fermion produced in association with the top quark, have been studied in detail in Ref. [11].
In the following, we therefore focus on various features of non-resonant spin-1 models by studying the effective lifetime of the invisible vector, associated single top signals, and the dark matter relic density.
We separately consider two regions of the parameter space which have very different phenomenology: the case where the mediator is lighter than the top quark (its mass m V being smaller than the top mass m t ) and the case where it is heavier, with m V > m t .
Mediators lighter than the top quark
When the spin-1 mediator V is lighter than the top quark, its possible decay modes into a pair of top and lighter quarks are kinematically forbidden. At tree-level, V can therefore only decay into a multibody final state such as V → ub̄W^− or ūbW^+, where the W-boson is virtual when m_V < m_W (m_W denoting the W-boson mass). In this mass range, loop-induced decays must however be considered too. For instance, a triangle loop-diagram with a W-boson exchange generates couplings to down-type quarks, which consequently opens a dijet decay channel. As the decay channels in this region are either kinematically or loop-suppressed, one may wonder whether V may be long-lived without the need for an additional invisible decay channel. Another interesting property of this mass region is that a new decay of the top quark is allowed, t → uV, and extra constraints on monotop scenarios could therefore be extracted from, e.g., top width measurements or the analysis of tt̄ events when one of the top quarks decays into a jet plus missing energy.
Loop-induced decays of the mediator
Light mediators, below the top mass threshold, may decay dominantly into two jets via loop-induced interactions. The structure of the loop crucially depends on the chirality of the monotop couplings, and we can study separately the two limiting cases a_L ≠ 0, a_R = 0 and a_L = 0, a_R ≠ 0, the two setups being prevented from interfering in the limit of massless light quarks.
We start with the left-handed scenario, a_L ≠ 0 and a_R = 0. It has been shown in Section 2.2 that embedding this class of monotop effective theories within SU(2)_L implies that the mediator V couples to down-type quarks, and therefore always decays into two jets at tree-level. As a consequence, an extra invisible fermion χ has been added to the theory, allowing one to tune the partial width related to the process V → χχ̄ to be dominant and preserve in this way the monotop signature. The Vdd vertices also play an important role for the consistency of the theory. Assuming an anomalous coupling approach, one could imagine an effective model where the left-handed couplings of the mediator to up and top quarks are allowed and those to down-type quarks neglected. The latter will however be regenerated via triangle-loop diagrams involving a W-boson that are logarithmically divergent in the ultraviolet limit. Following a standard procedure, these divergences must be treated with appropriate counterterms that naturally appear after renormalization of the (complete) Lagrangian of Eq. (2.23). This consequently motivates the use of an SU(2)_L-invariant Lagrangian from the beginning. In a similar fashion, those couplings will generate mediator interactions with all combinations of up-type quarks at the loop level, so that the initial hypothesis of a unique coupling between up and top quarks is unphysical and extra decay channels must be considered too. All these higher-order contributions to the total width are however loop- and/or CKM-suppressed and can thus be neglected, in particular as we recall that the invisible partial decay width is tuned to be dominant. Consequently, hints for new physics are still expected to occur in monotop events, although additional quark-mediator couplings could induce other observable effects that may imply stronger bounds on the parameter space. For instance, it is not unlikely that a monotop signal could be accompanied by a monojet signal in scenarios with a non-vanishing a_L parameter.
We now turn to the study of right-handed scenarios with a_L = 0 and a_R ≠ 0. The Lagrangian of Eq. (2.23) simplifies and there are no longer any interactions of down-type quarks with the mediator. However, as above, they are generated at the loop level via W-boson triangle diagrams. Since weak interactions are left-handed, the chiralities of the quarks involved in these diagrams must be flipped, which implies that the loop-induced couplings are proportional to the product of the up and top masses, m_u m_t. Contrary to setups where monotops are produced from left-handed interactions of the mediator with quarks, the loop-induced Vd_Ld_L couplings are this time finite, in line with the fact that no associated counterterm appears after renormalization. The interaction strength reads, in the limit of small light quark masses, as in Eq. (3.1), where α stands for the electromagnetic coupling constant, s_W for the sine of the weak mixing angle and V_ij for the elements of the CKM matrix. In addition, the loop factor c_0 = m_t^2 C_0(p_1, −(p_1 + p_2); m_W, m_t, 0) (3.2) depends on the Passarino-Veltman three-point function C_0, where p_1 and p_2 are the momenta of the external down-type quarks. We can therefore calculate the partial width associated with the decay V → d̄_i d_j, which reads, after summing over all down-type quark flavours, as in Eq. (3.3). We observe that it exhibits both a loop suppression and a (m_u/m_t)^2 factor, so that it is expected to be numerically small.
In Figure 1, we show the partial width of Eq. (3.3) as a function of the mediator mass for a_R = 0.04 (left panel). In the right panel of the figure, the partial width is translated into an upper bound on the value of a_R such that V has a mean decay length of at least 50 metres, i.e. is long-lived enough to decay outside of typical hadron collider detectors. The figure shows that the lifetime of V would be long enough only for values of the coupling satisfying a_R ≲ 10^−2. Such small values may however challenge the possible observation of a monotop signal at the LHC by reducing the associated production cross section. It should also be mentioned that above the W-boson threshold, a tree-level three-body decay is kinematically open, which further shortens the decay length of V. In summary, even for monotop scenarios in which the mediator cannot decay into a top quark, its lifetime is generally too short and one needs to complete the model by adding a decay channel into an invisible state. Although the class of minimal scenarios described in this section features a light extra vector boson, the setup is compatible with current Tevatron and LHC bounds on monotop production, as the latter are always derived under the assumption of very large coupling values of O(0.1) [18,19]. They could however be constrained by other observations, as will be shown in the next subsections.
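As a rough illustration of the lifetime argument above (not a computation from the paper), the sketch below converts a partial width into a mean decay length and rescales a coupling on which the width depends quadratically; the reference width, reference coupling and boost factor are placeholder values.

```python
# Hedged sketch: width -> mean decay length, and coupling rescaling for a 50 m target.
# GAMMA_REF-type inputs and the boost factor are illustrative placeholders, not paper values.

HBARC_GEV_M = 1.97327e-16  # hbar*c in GeV*m

def decay_length_m(width_gev, boost=1.0):
    """Mean decay length L = (beta*gamma) * hbar*c / Gamma, with boost = beta*gamma."""
    return boost * HBARC_GEV_M / width_gev

def coupling_for_target_length(a_ref, gamma_ref_gev, target_m=50.0, boost=1.0):
    """If Gamma scales as a_R**2, return the coupling giving a decay length >= target_m."""
    gamma_target = boost * HBARC_GEV_M / target_m
    return a_ref * (gamma_target / gamma_ref_gev) ** 0.5

print(decay_length_m(1e-18))                    # ~197 m for Gamma = 1e-18 GeV
print(coupling_for_target_length(0.04, 1e-15))  # rescaled coupling for a placeholder width
```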
Single top constraints on monotop scenarios
Motivated by minimality principles, we have discussed, in the previous section, appealing monotop scenarios in which the mediator V is lighter than the top quark. In this case, the mediator couples to up and top quarks via right-handed couplings and one needs to add an invisible decay channel into potential dark matter particles χ to guarantee a monotop signature, unless the coupling strength a_R is very small. On different grounds, these scenarios feature a new decay channel for the top quark, t → uV. This observation can be used to further restrict the viable regions of the parameter space by imposing that new physics contributions to the top width do not challenge the measured value of Γ_t = 2.0 ± 0.5 GeV [26]. Assuming a good agreement between the Standard Model expectation and the top width measurement, the partial width Γ(t → Vu) can thus be enforced to be at most 0.5 GeV. In Figure 2, we present the dependence of this partial width on the coupling a_R and the mediator mass m_V. We observe that for couplings smaller than 0.01, new physics effects in the top width are predicted to be very small, except when the mediator is almost massless. This consequently disfavours setups in which the mediator is very light, even in cases with coupling strengths of O(0.001).
Kinematically allowed t → Vu decays also imply that monotop events can arise from the production of a top-antitop pair when one of the top quarks decays into a V-boson and a light quark. This process induces additional contributions to the production of a monotop system (tV or t̄V) in association with an additional jet, a signature already accounted for in the LHC monotop analysis of Ref. [19]. How much this new channel contributes to the monotop signal depends on the cuts employed in the experimental analysis. However, due to the large tt̄ cross section, these effects cannot be neglected.
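As a back-of-the-envelope check (not a number from the paper), the extra monotop-plus-jet rate from top-pair events scales with the probability that exactly one of the two top quarks decays exotically; the tt̄ cross section and the exotic partial width used below are illustrative placeholders.

```python
# Hedged estimate of the ttbar-induced monotop(+jet) rate: sigma_ttbar * 2*BR*(1-BR),
# with BR = Gamma(t -> V u) / Gamma_t; all numerical inputs are assumed placeholders.

def monotop_from_ttbar_pb(sigma_ttbar_pb, gamma_tVu_gev, gamma_top_gev=2.0):
    br = gamma_tVu_gev / gamma_top_gev             # approximate BR(t -> V u)
    return sigma_ttbar_pb * 2.0 * br * (1.0 - br)  # exactly one exotic top decay

# e.g. with an assumed sigma(ttbar) ~ 250 pb at 8 TeV and Gamma(t -> V u) = 0.05 GeV:
print(monotop_from_ttbar_pb(250.0, 0.05))          # ~12 pb
```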
Complementary constraints on this channel could be deduced from Standard Model single top analyses whose signal regions could capture monotop events as above. For instance, both CMS [27] and ATLAS [28] have analyses dedicated to the measurement of the single top cross section in the t-channel which contain a region that could be populated by such monotop events. In the CMS analysis, events are selected by requiring one single isolated electron or muon and exactly two jets, one of them being b-tagged. The background is reduced by requiring an important amount of missing energy and by imposing that the transverse mass computed after combining the lepton transverse momentum with the missing transverse momentum is large. A final selection is performed by means of an advanced multivariate technique. We nevertheless have to ignore this last step of the selection, as the amount of information provided in the experimental publication is not sufficient for satisfactorily recasting it (see Ref. [29] for more information on this aspect).
We simulate our new physics signal by using the monotop model [8] implemented in the FeynRules package [30,31], tuning the model parameters to the setup of Eq. (2.23), so that we can export the model to a UFO library [32] that is then linked to MadGraph5_aMC@NLO [33]. The generated parton-level events have subsequently been processed by Pythia [34] for parton showering and hadronization and by Delphes [35] for detector simulation, making use of the recent 'MA5Tune' [36] of the CMS detector description of Delphes. The CMS analysis of Ref. [27] has finally been implemented in the MadAnalysis5 framework [37,38], which has allowed us to derive exclusion bounds at the 95% confidence level in the (m_V, a_R) plane, as shown in Figure 3. The figure also shows the constraint from the top width and from the dedicated CMS monotop search [19]. The monotop search is currently more sensitive. However, the bound from the single top analysis is a rough estimate, and it may become much stronger once the full analysis, including the multivariate selection, is taken into account. Nevertheless, our result shows that constraints from single top searches can play an important role in constraining monotop scenarios.
Dark matter constraints
We have argued that, even for mediator masses below the top threshold, an invisible decay channel is typically needed in order for the monotop signature to be present. The simplest way out is to couple V to a fermionic stable dark matter candidate χ. However, in a minimal scenario where V is the only mediator of the interactions of the dark matter candidate, one needs to ask whether the relic abundance of χ fulfils the bounds from observations. Below the top threshold, the main annihilation process χχ̄ → V → tū and t̄u is kinematically forbidden, so that the annihilation of dark matter particles can only proceed into a three-body or four-body final state (via a virtual top quark), or via loop-diagrams, χχ̄ → V → d_i d̄_j. As discussed in Section 3.1.1, the loop contributions are suppressed by the mass of the light up-type quark that the mediator couples to, so that the χχ̄ annihilation rate may be too slow for the stable particle χ not to overpopulate the Universe.
We are therefore left with either a scenario of small a R coupling, where the mediator V is a long-lived particle, or with a non-minimal model with an invisible decay channel where χ is either the next-to-minimal odd particle or long-lived itself. In any case, the constraint from the dark matter abundance plays a crucial role for the viability of the model and should be carefully verified.
Mediators heavier than the top quark
Scenarios exhibiting mediator masses above m t are very different from the light case discussed in Section 3.1: the mediator V can always decay into a top quark. Including in the model a V -decay channel into an invisible state to be considered as a dark matter candidate is thus always necessary. Moreover, the top quark cannot decay into the mediator, which allows one to avoid constraints from standard single-top signatures. Focusing on the minimal case, we study below the interesting interplays between the requirement that the invisible channel dominates and bounds originating from the relic density of the dark matter candidate.
Tree-level decays of the mediator
When the V-boson is heavier than the top quark, it can decay into either a pair of down-type quarks, an associated pair comprised of a top quark and a lighter quark, or a pair of dark matter particles, as already discussed in Section 2.2. Since the first two decay modes are driven by the same interaction vertices that allow for monotop production, we need to make sure that the invisible decay channel always dominates. The relevant partial widths are given in Eq. (3.5), where we neglect all quark masses but the top mass. In addition, we denote by m_χ the mass of the dark matter candidate. Focusing on the simplest subclass of scenarios where the couplings of the V-boson to left-handed quarks are all vanishing (a_L = 0), we study typical constraints that can be imposed on ratios of the g_Lχ, g_Rχ and a_R parameters when they are all assumed to be real quantities. Since ratios of branching ratios are equivalent to ratios of partial widths, we use the latter quantity and show, in Figure 4, the maximum value of the a_R coupling strength, in units of the χV coupling, that ensures that the V-field decays invisibly in at least 99% of the cases. In the left panel of the figure, we consider scenarios where g_Rχ vanishes (the same result holds for vanishing g_Lχ), while in the right panel of the figure, we assume vector-like couplings, g_Lχ = g_Rχ = g_Vχ. In general, the coupling to the top quark a_R (which is responsible for the monotop signal) has to be quite small compared to the coupling to the dark matter candidate in order for the mediator V to be invisible, unless the mass of the mediator V is close to the top mass. On the contrary, if the mass of V is close to the χχ̄ threshold, the invisible decays are suppressed. This study shows that it is not straightforward to have V decay invisibly, and this constraint may play an important role in the interpretation of the signal, especially when associated with the study of the properties of χ as a dark matter candidate. We study this question in more detail in the next subsection.
Figure 4. Maximum value of a_R necessary to enforce the mediator V to decay invisibly in 99% of the cases. We focus on scenarios where the couplings of the mediator to dark matter are chiral, with g_Rχ = 0 (or g_Lχ = 0), in the left panel, and vector-like, with g_Lχ = g_Rχ = g_Vχ, in the right panel. The four curves correspond to m_χ = 5, 75, 100 and 150 GeV, from the lower to the upper ones in each figure.
Similar conclusions would hold in less minimal models, like the one with a left-handed coupling a L where the decay to down-type quarks is open and dominant also below the top threshold.
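To make the 99% invisible-branching-ratio requirement concrete, the sketch below uses textbook two-body widths for a vector boson decaying into fermion pairs, neglecting the top-quark mass; it only illustrates the scaling and is not the paper's Eq. (3.5) or Figure 4.

```python
import math

# Hedged sketch of the largest a_R/g_Vchi compatible with BR(V -> chi chibar) >= 99%.
# Textbook two-body widths, massless quarks; numbers will differ from the exact result.
NC = 3  # colour factor

def width_tops(a_R, mV):
    """V -> t ubar + tbar u with a purely right-handed coupling a_R, massless fermions."""
    return 2 * NC * a_R**2 * mV / (24 * math.pi)

def width_inv(g_chi, mV, m_chi):
    """V -> chi chibar with a vector-like coupling g_chi (zero below threshold)."""
    x = (m_chi / mV) ** 2
    if 4 * x >= 1.0:
        return 0.0
    return g_chi**2 * mV / (12 * math.pi) * math.sqrt(1 - 4 * x) * (1 + 2 * x)

def max_aR_over_gchi(mV, m_chi, br_inv=0.99):
    """Largest a_R/g_chi such that BR(V -> chi chibar) stays above br_inv."""
    gi, gt = width_inv(1.0, mV, m_chi), width_tops(1.0, mV)
    if gi == 0.0:
        return 0.0
    return math.sqrt(gi * (1 - br_inv) / (br_inv * gt))

print(max_aR_over_gchi(mV=500.0, m_chi=100.0))   # a small number, as discussed above
```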
Dark matter constraints
We have seen that, in order to avoid visible decays of the mediator V, it has to be coupled to a stable particle χ and the decay V → χχ̄ must always dominate. If χ is stable, and if the model is minimal in the sense that V is the only mediator of interactions between the dark sector and the Standard Model, then the only annihilation process that determines the thermal relic abundance of χ is χχ̄ → V → tū and t̄u. Such a process is proportional to the same coupling that gives rise to the monotop signature at the LHC, and also to the coupling of V to dark matter. By studying the relic abundance of χ one can therefore derive interesting constraints on the couplings, especially when imposing that the relic abundance is smaller than the measured density of dark matter. Those restrictions can in principle always be evaded by assuming that there are additional mediators, or that χ is not a stable particle but rather a long-lived one that decays on cosmological time scales. In the rest of the section, we nevertheless focus on the minimal case of χ being the only dark matter candidate.
As the relic abundance decreases with increasing annihilation cross sections, one can calculate a lower bound on the product of a_R with the couplings of V to the dark matter. This has been computed by implementing the model described by the Lagrangian of Eq. (2.23) in CalcHep [40] and using approximate formulas for the relic abundance. We consider, for concreteness, a vectorial model with g_Lχ = g_Rχ = g_Vχ. The results of the calculation are shown in Figure 5, where we present the lower bound on a_R × g_Vχ as a function of the mediator mass m_V and the dark matter mass m_χ. We restrict ourselves to values of the χ mass above the top threshold, 2m_χ > m_t, so that a two-body annihilation process is kinematically allowed. Below the top threshold, the dark matter candidate can only annihilate into three-body final states or via loop-induced processes, so that the annihilation cross section is too small and the χ particle overpopulates the Universe. The figure shows that the product of couplings is bound to be larger than about 0.1, the lower bound increasing towards the top threshold as the phase space closes down, and becoming smaller towards the V threshold 2m_χ = m_V, where the resonant V exchange enhances the annihilation. We recall that the V-boson mass must be at least twice as large as the dark matter candidate mass to allow invisible decays of V. The corresponding regions of the parameter space are tagged as kinematically inaccessible. This result, very interesting per se, can be combined with other constraints to better determine the viable regions of the parameter space of the model. The requirement that the invisible V-decay dominates has allowed us, in Section 3.2.1, to calculate a lower bound on the ratio g_Vχ/a_R which depends on the mediator and dark matter masses (see Figure 4). Multiplying it by the limits derived from the relic abundance predictions, we extract a lower bound on g_Vχ independently of the value of a_R: the results are shown in Figure 6. The lower bound on g_Vχ is found to grow with smaller values of the χ mass. Moreover, near the top threshold, it reaches values well above unity, hence approaching the non-perturbative regime.
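For orientation only (this is not the CalcHep computation described above), the standard freeze-out rule of thumb Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ can be inverted to estimate the annihilation cross section needed to avoid overclosing the Universe; since ⟨σv⟩ scales as (a_R g_Vχ)², this is what drives a lower bound on the coupling product.

```python
# Hedged order-of-magnitude sketch: required annihilation cross section from the
# standard freeze-out approximation, Omega h^2 ~ 3e-27 cm^3/s / <sigma v>.

def required_sigma_v_cm3_s(omega_h2_max=0.12):
    return 3e-27 / omega_h2_max

print(required_sigma_v_cm3_s())   # ~2.5e-26 cm^3/s, the usual "thermal" value
# Since <sigma v> is proportional to (a_R * g_Vchi)**2, demanding at least this value
# translates into a lower bound on the coupling product, cf. Figure 5.
```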
Under the assumption that χ is the only dark matter candidate of the theory, we can further restrict our analysis to parameter space regions where the values of the couplings are such that the bound from the dark matter abundance is saturated. We first reinterpret, as a function of the masses, the limits calculated in the CMS monotop search [19] by accounting for an invisible branching ratio of the mediator that may not be 100%. Next, we correlate these with the dark matter results: for increasing values of a_R, the coupling g_Vχ has to be smaller to satisfy the dark matter constraints. This indicates that an enhancement of the tV production rate (by increasing a_R) is accompanied by a reduction of the invisible branching ratio of V, which possibly reduces the production cross section of monotop systems. A general bound on a_R can be obtained from the relation of Eq. (3.6), where the Γ̃ denote the partial widths into χχ̄ and tū final states given by Eq. (3.5) stripped of the coupling strengths, a_R-CMS is the upper bound on a_R derived from the CMS analysis under the assumption that V decays are always invisible, and k is the lower bound on g_Vχ × a_R deduced from the dark matter relic abundance in Figure 5. In the left panel of Figure 7, we extract the bound a_R-CMS from the CMS analysis of Ref. [19]. Inverting the above relation, the upper bound on a_R for a χ particle saturating the dark matter relic abundance can then be rewritten as in Eq. (3.7). The result is shown in the right panel of Figure 7. Above the blue curve, the argument of the square root is negative and the inequality of Eq. (3.7) has no solution, so that no bound can be applied on a_R. Below the blue line, near the top threshold, the dark matter constraint requires larger couplings and therefore larger monotop rates are allowed, so that a bound on a_R can be calculated. Naturally, larger portions of the parameter space are expected to be covered with the upcoming run II of the LHC. The region where the monotop signal is suppressed can have interesting additional features. The boson V may dominantly decay into top and lighter quarks, yielding at the same time a signature comprised of same-sign top quark pairs (tV → ttū) and extra contributions to top-antitop production (tV → tt̄u) that may be difficult to observe due to the overwhelming tt̄ Standard Model background. These extra channels deserve particular attention, in particular in upcoming data from LHC collisions at √s = 13 TeV.
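As a reconstruction sketch (not necessarily the paper's exact Eq. (3.6)), one consistent way to write the relation is to require that the monotop rate, which scales as a_R² times the invisible branching ratio of V, does not exceed the CMS limit derived for a fully invisible mediator:

```latex
% Hedged reconstruction of the relation discussed in the text, with g_{V\chi} = k/a_R.
\[
  a_R^2\,
  \frac{g_{V\chi}^2\,\tilde\Gamma_{\chi\chi}}
       {g_{V\chi}^2\,\tilde\Gamma_{\chi\chi} + a_R^2\,\tilde\Gamma_{tu}}
  \;\leq\; a_{R\text{-CMS}}^2\,,
  \qquad
  g_{V\chi} = \frac{k}{a_R}\,.
\]
```

Substituting g_Vχ = k/a_R makes the left-hand side a function of a_R² alone; the resulting quadratic inequality in a_R² yields an upper bound containing a square root whose argument can become negative, which is the behaviour described above for the region above the blue curve.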
Conclusions
Monotop final states comprised of a single top quark produced in association with missing energy can be a striking sign of new physics at the LHC. The main production mechanisms can be divided into two classes: resonant production, where a heavy coloured boson is first produced in the s-channel and then decays, via its couplings, into a single top quark and an invisible neutral fermion, and non-resonant production, where the top quark is produced in association with an invisible boson that couples to top and up (or charm) quarks. A complete and model-independent parametrisation of the two channels has been provided in Ref. [8]. In the present work, we have revisited this description by embedding the effective interactions in an SU(2)_L × U(1)_Y invariant formalism. In doing so, we have shown that, depending on the chirality of the top quarks, a complete model necessarily contains extra states and couplings that may spoil the monotop signal, or add further new physics signatures that should be studied in association with the monotop one. We have identified two minimal setups. In the first case, a scalar field is resonantly produced by the fusion of a pair of down-type quarks and couples to a right-handed top quark and a new invisible fermion, like a right-handed stop in R-parity violating supersymmetry. In the second case, a vector state couples to right-handed top and up quarks and decays dominantly into new invisible fields, as in models of dark matter where the dark sector couples to the Standard Model via a flavour-sensitive mediator. We have further investigated the phenomenology of the second class of models, which can be split into two subclasses depending on the mass of the mediator.
For mediators lighter than the top quark, their visible decay modes are either loop-suppressed or CKM-suppressed, or both. Nevertheless, one always needs to add (and tune the couplings of) an invisible field to prevent the mediator from decaying inside a typical hadron collider detector, as this would otherwise spoil the monotop signature originally motivating the model. An important feature of these scenarios is that they allow the top quark to decay into the mediator and an extra jet. This feature can enhance the monotop production rate, as the monotop system can be produced in association with an extra jet from tt̄ events when one of the top quarks decays in the exotic channel. Such events could also be searched for in typical single-top searches, as they are expected to populate the signal regions of the associated analyses. We have indeed observed that a CMS analysis of single top events could imply significant constraints on the mediator couplings, competitive with and sometimes stronger than those obtained from monotop searches.
Scenarios with a mediator mass above the top threshold have a very different phenomenology as the mediator decays significantly into top quarks and jets. One needs a large coupling to the invisible sector in order to preserve the monotop signature. Describing the dark sector with a new fermion χ, we have found that the latter could be a viable dark matter candidate if heavier than half the top quark mass, with a correct relic abundance driven by its annihilation via an s-channel mediator into a top and an up quark. We have used relic abundance constraints to derive lower bounds on the product of the couplings of the mediator to quarks and to the dark matter candidate. We have then further restricted the monotop parameter space by combining cosmological and collider results and enforcing the mediator to decay mostly invisibly. We have found that the issue of the perturbativity of the model could be raised for dark matter masses close to the top mass and that the parameter space turns out to be largely constrained when the χ fermion is demanded to reproduce the observed relic density. However, a large portion of the parameter space is still left unconstrained by current data and future experimental results are in order, in particular analyzing a same-sign top quark pair final state arising from the visible decays of the mediator.
"Physics"
] |
Microphysical properties and fall speed measurements of snow ice crystals using the Dual Ice Crystal Imager (D-ICI)
Accurate predictions of snowfall require good knowledge of the microphysical properties of the snow ice crystals and particles. Shape is an important parameter as it strongly influences the scattering properties of the ice particles, and thus their response to remote sensing techniques such as radar measurements. The fall speed of ice particles is another important parameter for both numerical forecast models as well as representation of ice clouds and snow in climate models, as it is responsible for the rate of removal of ice from these models. We describe a new ground-based in-situ instrument, the Dual Ice Crystal Imager (D-ICI), to determine snow ice crystal properties and fall speed simultaneously. The instrument takes two high-resolution pictures of the same falling ice particle from two different viewing directions. Both cameras use a microscope-like set-up resulting in an image pixel resolution of approximately 4 µm/pixel. One viewing direction is horizontal and is used to determine fall speed by means of a double exposure. For this purpose, two bright flashes of a light emitting diode behind the camera illuminate the falling ice particle and create this double exposure, and the vertical displacement of the particle provides its fall speed. The other viewing direction is close to vertical and is used to provide size and shape information from single-exposure images. This viewing geometry is chosen instead of a horizontal one because shape and size of ice particles as viewed in the vertical direction are more relevant than these properties viewed horizontally, as the vertical fall speed is more strongly influenced by the vertically viewed properties. In addition, a comparison with remote sensing instruments that mostly have a vertical or close to vertical viewing geometry is favoured when the particle properties are measured in the same direction. The instrument has been tested in Kiruna, northern Sweden (67.8°N, 20.4°E). Measurements are demonstrated with images from different snow events, and the determined snow ice crystal properties are presented.
Introduction
Accurate knowledge of atmospheric ice crystals and snowflakes, or snow particles, is needed for meteorological forecast and climate models (see, e.g., Tao et al. (2003); Stoelinga et al. (2003)). In particular, microphysical properties that are difficult to measure with remote sensing, such as size, area, shape, and fall speed, are important. Knowledge about these microphysical properties can, for instance, help improve parameterizations of snow particles in atmospheric models.
To retrieve the precipitation amount reaching the ground, the microphysical properties of the snow particles on their way down have to be known. Fall velocity plays a significant role in modelling the microphysical processes. It determines the number of particles present in the measuring volume and the snowfall rate, or the rate of particle removal from clouds.
Snowfall has long been monitored by ground-based instruments. However, instruments that can measure size, shape, and fall speed simultaneously are still scarce.
Instruments can be classified into different categories depending on what is measured primarily. Disdrometers, for example, measure shadow or attenuation as droplets or snow particles pass one or several light beams. Fall speed can be estimated either from the duration between the two beam passages, in the case of instruments that have two parallel beams with known vertical spacing, or from the duration of attenuation. Three common disdrometers are Parsivel (Particle Size Velocity disdrometer, see, e.g., Battaglia et al., 2010), 2-DVD (Two-Dimensional Video Disdrometer, see, e.g., Kruger and Krajewski, 2002) and HVSD (Hydrometeor Velocity and Shape Detector, see, e.g., Barthazy et al., 2004). The latter two are optical array instruments, where the shadow of the particles is detected with a linear array of detectors. Thus, a shadow image can be reconstructed and the particle shape discerned. Disdrometers, generally, are designed for snowflakes with larger dimensions and their size limit (pixel size) is as large as 200 µm.
Another category of instruments uses camera systems for optical imaging of snow particles. One example is SVI (Snowflake Video Imager, in a newer version also called PIP, Particle Imaging Package). It consists of a video camera with a pixel resolution of 100 µm and a halogen lamp which is placed approximately 2 m from the camera for background illumination. The higher frame rate (380 s−1) of PIP allows determination of the fall speed with image analysis software that follows particles over several frames. The Ice Crystal Imaging probe (ICI) uses a high-resolution CCD camera system with a pixel resolution of 4.2 µm (Kuhn and Gultepe, 2016). It has also been used to measure fall speed by double-exposing snow particles using two flashes of illuminating light triggered at a known time difference.
There are instruments designed for aircraft that have also been used on the ground for snow measurements. CIP (cloud imaging probe, see Baumgardner et al., 2001) is an optical array probe and has been used on the ground as GCIP (Gultepe et al., 2014). VIPS is a video camera system (see Appendix of McFarquhar and Heymsfield, 1996) with a high pixel resolution. On the ground it has been used, for example, for ice fog particles with a pixel resolution of 1.1 µm (Schmitt et al., 2013). CPI (Cloud Particle Imager) uses a CCD camera to produce shadowgraphs, or images in cases where ice particles are in focus, with a pixel resolution of 2.3 µm. All three instruments used aspiration to produce inlet flows similar to those encountered on the aircraft.
Holographic imaging has the advantage of a larger depth of field when compared to so-called 'in-focus imaging'. Shadow-like images of out-of-focus particles can be reconstructed and their position determined. The Holographic Detector for Clouds (HOLODEC) is an aircraft instrument (Fugal et al., 2004) and HOLIMO (Holographic Imager) is a ground-based instrument (Amsler et al., 2009). HOLIMO II, a newer version, is used for ground-based in-situ measurements of particles in mixed-phase clouds (Henneberger et al., 2013). PHIPS (Particle Habit Imaging and Polar Scattering) uses a combination of optical imaging and scattering (with a polar nephelometer). A first version of the instrument had a high pixel resolution, better than the 2 µm optical resolving power (Schön et al., 2011). The next version, PHIPS-AIDA, added a second camera at an angle of 60° to the first camera to produce stereo images and has been used for cloud chamber experiments (Abdelmonem et al., 2011). MASC (Multi Angle Snowflake Camera) uses three cameras to image snow from three angles, while simultaneously measuring the fall speed with two sets of IR emitter-receiver pairs registering the shadow twice (Garrett et al., 2012). The cameras view horizontally and are separated by 36°. Different pixel resolutions may be used by the cameras, and the version described by Garrett et al. (2012) used pixel resolutions between 9 and 32 µm. Such multi-view imagers provide more detail about the 3D structure of the snow particle, which adds valuable information to the microphysical data collected by imaging instruments. This is useful, for example, to provide better estimates of snow particle mass. This work presents a novel instrument that uses two cameras for simultaneous particle imaging and fall-speed measurement.
It is called the Dual Ice Crystal Imager (D-ICI) and is a development of ICI (Kuhn and Gultepe, 2016). D-ICI has two cameras; the first camera uses a horizontal viewing direction (side view), whereas the second camera views the falling snow particle vertically (top view).
The cross-sectional area as seen from the top is better related to the particle drag and terminal fall velocity than the area determined from the side view. Additionally, particle size and area from the top view are also more useful when comparing to optical remote sensing, which often uses vertical viewing geometries too. Sections 2 and 3 describe the design of D-ICI and the data processing methods, and Sect. 4 presents measurements to evaluate the instrument's capabilities.
Instrument
2.1 Instrument set-up
D-ICI uses passive sampling with a vertically pointing inlet. Its set-up can be seen schematically in Fig. 1. Ice particles (small ice crystals, snow crystals, and snowflakes) falling into the inlet will fall down the sampling tube and traverse the optical cell.
In the centre of the optical cell is the sensing volume. If a particle falls through the sensing volume, it is detected by the detection optics. Upon detection, the ice particle is optically imaged from two different directions. Figure 2 shows an example of the resulting pair of images for one ice particle. One of the two viewing directions is looking horizontally from the side, called side view, and the other vertically from the top, called top view. The former allows the fall speed to be measured using a double exposure (see Sect. 3.3). The latter provides area and shape as seen in the vertical direction, which are more relevant for fall speed and radiative properties. Because particles fall vertically, an exact vertical viewing geometry for the top view is difficult to achieve, as part of the optics would obstruct the particles' motion. Therefore, the top view is a near-vertical viewing configuration that looks through the optical cell inside the vertical sampling tube at a shallow angle to the vertical axis (17°). The side view, on the other hand, uses an exactly horizontal viewing geometry. Figure 3 shows a photograph of D-ICI, and a more detailed description is given in the following sections.
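A minimal sketch of the double-exposure fall-speed calculation referred to above (Sect. 3.3): the vertical displacement between the two exposures divided by the time between the flashes. The flash separation used in the example is an assumed placeholder; only the pixel resolution is taken from the text.

```python
# Hedged sketch: fall speed from the double-exposed side-view image.
PIXEL_RESOLUTION_UM = 3.75   # side-view pixel resolution (um per pixel), from Sect. 2.3

def fall_speed_m_s(displacement_px, flash_interval_s):
    """Fall speed = vertical displacement between the two exposures / flash separation."""
    displacement_m = displacement_px * PIXEL_RESOLUTION_UM * 1e-6
    return displacement_m / flash_interval_s

# e.g. a 400-pixel displacement with an assumed 3 ms flash separation:
print(fall_speed_m_s(400, 3e-3))   # 0.5 m/s
```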
Inlet and sampling tube
Similarly to the ICI probe (Kuhn and Gultepe, 2016), D-ICI has a funnel-shaped inlet, wider at the top, with a sharp upper edge (see Figures 1 and 3). Ice particles fall freely into this inlet. The inlet has a diameter of 25 mm at the top and narrows down to an inner diameter of 8 mm. It is concentrically mounted atop the vertical sampling tube, which has an inner diameter of 12 mm.
Ice particles falling through the inlet are therefore transferred into the sampling tube. After falling about 160 mm vertically through the sampling tube, ice particles reach the section containing the sensing volume. The length of the sampling tube upstream of the sensing volume is sufficient (more than ten times the diameter of the sampling tube) so that particles can relax from motion induced by the wind and surface turbulence. However, higher wind speeds may disturb fall speed measurements inside the sampling tube. Hence, the fall speed of ice particles is not affected by light wind or turbulence, as the sampling tube shields against them, whereas at higher wind speeds one should use caution. Also, the collection efficiency of the inlet will be affected by higher wind speeds, as observed for snow gauges (Goodison et al., 1998), so that snowfall rate and concentration measurements, which will be discussed in Sect. 3.1, become more uncertain with stronger wind speeds.
Imaging optics
In the sensing volume (see Sect. 2.4), particles are optically imaged by two imaging systems, each using a monochromatic CCD camera (Chameleon 1.3 MP Mono USB 2.0, Point Grey, now FLIR) with a 1280×960 pixel sensor chip whose pixels are 3.75 µm × 3.75 µm in size. These camera systems are similar to the microscope optics used in ice crystal imaging set-ups with single imaging systems (Kuhn et al., 2012; Kuhn and Gultepe, 2016). They consist of a microscope objective followed by a tube lens, as indicated in Fig. 1. For the horizontal view, i.e. the side view, the microscope objective (RMS4X, Thorlabs) has a focal length of 45 mm. For the top-view system, the objective is a single convex lens, a positive achromatic doublet (AC254-050-A, Thorlabs) with a focal length of 50 mm. This has, compared to the microscope objective, a longer working distance of 43 mm, which is required for the top-view configuration.
The tube lens of the side-view optics is a positive achromatic doublet (AC254-045-A, Thorlabs) with the same focal length as its microscope objective, 45 mm. As the tube lens of the top-view optics, the same achromatic doublet as for its objective is used. Thus, the resulting magnifications are the same for both systems, M = 1. Both camera systems therefore have a pixel resolution, i.e. the size of a feature of the imaged object that appears on the image as one pixel, equal to the pixel size of 3.75 µm. The field of view (FOV) is equal to the exposed sensor area, i.e. 4.8 mm × 3.6 mm.
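As a quick consistency check of the optical numbers quoted above (assuming the usual tube-lens to objective focal-length ratio for the magnification), the following sketch reproduces the magnification, pixel resolution and FOV:

```python
# Hedged check of magnification M = f_tube / f_objective, pixel resolution and FOV.
PIXEL_SIZE_UM = 3.75
SENSOR_PX = (1280, 960)

def magnification(f_objective_mm, f_tube_mm):
    return f_tube_mm / f_objective_mm

M_side = magnification(45, 45)   # side view: 45 mm objective and tube lens -> M = 1
M_top = magnification(50, 50)    # top view: 50 mm doublets for both -> M = 1

pixel_resolution_um = PIXEL_SIZE_UM / M_side
fov_mm = (SENSOR_PX[0] * pixel_resolution_um / 1000,   # 4.8 mm
          SENSOR_PX[1] * pixel_resolution_um / 1000)   # 3.6 mm
print(M_side, M_top, pixel_resolution_um, fov_mm)
```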
Both imaging systems use bright-field illumination from the back. This is achieved by a light emitting diode (LED) with simple focusing lens optics allowing for an even illumination of the FOV. Each of these two lens-LED configurations is arranged along the optical axis of the respective imaging optics on the opposite side of the sensing volume (see Fig. 1). While this illumination scheme reveals some details of the inner structure for most snow particles, due to orientation or particle complexity some parts of the particle can become opaque to the illumination (see, for example, the particles in Fig. 2). This can be considered a limitation of the current illumination set-up. However, the details that can be seen on one or both of the high-resolution images (top and side view) will allow shape classification in most cases (Vázquez-Martín et al., 2020).
The top-view optical system uses a mirror between the sensing volume and the objective lens. This allows its optical axis to be folded so that it is parallel to the optical axis of the side-view system, for a simpler mechanical set-up.
Detection and sensing volume
The sensing volume, i.e. the volume in which particles are detected and imaged, is defined as the intersection of the laser beam for detection with the overlapping FOVs of the imaging systems. The laser beam, which has a wavelength of 650 nm and a power of 1 mW, is aligned perpendicular to the optical axes of both imaging optics. It is shaped by an aperture to about 1 mm horizontal width, which corresponds approximately to the depth of focus of the side-view camera. The laser beam is further shaped by a cylindrical lens (LJ1960L1, Thorlabs) with a focal length of 20 mm such that its vertical height, originally about 3 mm, is focused to approximately 0.1 mm in the centre of the FOV of the side-view camera. Thus, the laser beam forms a light sheet with a width of approximately 1 mm and a height of 0.1 mm. Both the side- and top-view cameras are focused so that their focal planes are aligned with this resulting laser sheet. As a consequence, all detected particles are in focus for both images.
To determine the snowfall rate or the snow crystal number concentration, the sensing area, i.e. the area through which detected particles fall, needs to be known rather than the sensing volume. The sensing area is the horizontal cross section of the sensing volume (i.e. the cross section perpendicular to the vertical falling motion). The area is therefore given by the product of the width of the FOV of the cameras and the sum of the laser beam width (1 mm) and the particle size. This sum has to be used instead of the laser beam width alone because particles that are only partially in the laser beam will be detected too. Thus, the sensing area is size dependent (larger particles have a larger sensing area). If a constant sensing area corresponding to a particle size of 500 µm were assumed, the concentrations of particles larger than this size would be overestimated. This overestimation is compensated by the size-dependent probability of a particle touching one of the image borders. Larger particles are more likely to touch an image border, i.e. to be partially outside the image. Ice particles that touch one image border are therefore excluded from data analysis (see Sect. 3.2). This exclusion from further analysis results in an underestimation of larger particles, hence compensating the overestimation due to the size-dependent sensing area. Thus, the assumption of a constant sensing area does not cause a significant uncertainty, as was also discussed by Kuhn and Gultepe (2016), and the sensing area to be used is 4 mm × (1 mm + 500 µm) = 6 mm². Here, we use 4 mm as the FOV width instead of the 4.8 mm mentioned earlier because the FOV of the top-view camera is somewhat restricted as a consequence of incomplete illumination of the whole camera FOV (see Sect. 3.2 and Fig. 4 for an example of a complete image).
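The constant sensing area adopted above follows directly from the FOV width and the laser-beam width plus a representative particle size; a small sketch:

```python
# Sensing area as described in the text: effective FOV width x (beam width + particle size).
def sensing_area_mm2(fov_width_mm=4.0, beam_width_mm=1.0, particle_size_mm=0.5):
    return fov_width_mm * (beam_width_mm + particle_size_mm)

print(sensing_area_mm2())   # 6 mm^2, the constant value used in the text
```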
Scattered light from the part of the laser beam within the sensing volume is collected and focused on a photodiode (FDS010, Thorlabs) by two plano-convex detector lenses (LA1951-A, Thorlabs). The photodiode is located along the axis of the laser beam, which is stopped by a light trap mounted in the centre of the first lens. The diameters of the light trap and the lens tube holding the detector lenses are such that the photodiode detects light scattered by ice particles in the sensing volume in the near-forward direction, in the range of scattering angles between approximately 10° and 32°. The photodiode has a circular sensitive area with a diameter of 1 mm. Its small area means that most particles that are outside the sensing volume, but still in the laser beam, do not scatter light that can be detected by the photodiode. This minimizes false triggers, i.e. detected scattering leading to empty images because the particles are outside the FOVs of the two cameras.
The current of the photodiode is converted to a voltage and amplified (effective current-to-voltage amplification of 2.2 MΩ).
The resulting photo-detector voltage, proportional to the intensity of the scattered light, is compared to a threshold voltage (approximately 0.15 V). A trigger signal is issued whenever the photo-detector voltage is larger than this threshold. The trigger signal is used to trigger the two images taken of the detected ice crystal as well as the two background-illuminating LED flashes.
Hence, all particles larger than a certain threshold size are detected and then imaged. With the help of Mie scattering calculations (see, e.g., Bohren and Huffman, 1983) this threshold size (diameter of spherical ice) can be estimated as approximately 10 µm.
Computer and data collection
Both imaging systems are triggered by the same signal (see Sect. 2.4). To guarantee simultaneous imaging by the two cameras, each of the two imaging systems has its own dedicated computer for operation and data collection. In this way, there are no particular requirements on the computers' performance, and two Raspberry Pis are used for D-ICI. Each computer stores its own image data on an SD card, which is connected to the computer's USB port via a card reader. One of the two computers also acquires the temperature inside and outside the instrument, registered by two thermistors, and the outside relative humidity, measured with a HIH-4000 sensor (Honeywell) with an accuracy of ±3.5%.
Both computers are connected to a network via ethernet cables. This allows them to be synchronized with each other. Consequently, corresponding side- and top-view images can be recognized by their time stamp, which is part of the file name. Both computers can be accessed through an additional laboratory or office computer, which is connected to the same network via cable or the internet, if the network provides internet access. Data can then be retrieved using this laboratory computer. Alternatively, the SD cards can be collected to copy the image data. The image data are then processed by the laboratory computer as described in Sect. 3.2.
Snowfall rate and number concentration
While the focus of D-ICI is on high-resolution images for shape and fall speed measurements, the snowfall rate and number concentration can also be determined from the acquired data. For that purpose, the snowfall rate r_s is here defined as the number of snow crystals falling on a given area during a given sampling time t. The inlet samples falling snow crystals from a larger area than the cross section of the sampling tube, which results in an enhanced number of snow crystals in the sampling tube. To account for this enhancement, an effective sensing area A is used. It is larger than the sensing area by a factor equal to the ratio of the areas of the 25-mm inlet and the 12-mm sampling tube, i.e. a factor of 4.3. This yields A = 4.3 · 6 mm² ≈ 26 mm². Then, r_s is determined as the number of snow crystals N divided by the effective sensing area A and the sampling time t (Eq. 1), r_s = N/(A t). The number concentration n is calculated from N divided by the sampling volume V. To determine V, a constant fall speed v of 0.5 m s−1 is assumed, which corresponds approximately to the average fall speed of the data used here. With this assumption, the effective sampling flow rate of D-ICI becomes Av = 13 cm³ s−1. Finally, n is calculated using Eq. 2, n = N/V = N/(A v t).
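A short sketch of these two estimates (Eqs. 1 and 2), using the effective sensing area and the assumed constant fall speed from the text; the example event count and sampling time are placeholders:

```python
# Hedged sketch of snowfall rate r_s = N/(A t) and number concentration n = N/(A v t).
A_EFF_MM2 = 4.3 * 6.0      # effective sensing area, ~26 mm^2
V_FALL_M_S = 0.5           # assumed constant fall speed

def snowfall_rate_per_mm2_s(n_crystals, sampling_time_s, area_mm2=A_EFF_MM2):
    return n_crystals / (area_mm2 * sampling_time_s)

def number_concentration_per_cm3(n_crystals, sampling_time_s,
                                 area_mm2=A_EFF_MM2, v_m_s=V_FALL_M_S):
    flow_cm3_s = (area_mm2 * 1e-2) * (v_m_s * 100.0)   # mm^2 -> cm^2, m/s -> cm/s; ~13 cm^3/s
    return n_crystals / (flow_cm3_s * sampling_time_s)

# e.g. 100 crystals detected in 10 minutes:
print(snowfall_rate_per_mm2_s(100, 600), number_concentration_per_cm3(100, 600))
```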
The size dependencies of the sensing area and of the probability of a particle being partially outside the FOV cancel out to a good approximation (see Sect. 2.4). This size dependency may nevertheless be corrected; the correction factor for number concentration would vary between 1.07 and 1.09 for particles with maximum dimensions between 1.0 and 2.0 mm, reaching down to a minimum of 1.03 for particles of 1.4 mm. For particles down to 0.5 mm or up to 2.5 mm it would increase to approximately 1.25. The assumption of a constant snow fall speed v, mentioned above, introduces an additional uncertainty. When the constant speed of 0.5 m s⁻¹ overestimates the actual particle fall speed, the concentration n of these particles is underestimated; conversely, underestimating the speed results in overestimating the concentration. About two-thirds of the data used here had fall speeds between 0.3 and 0.85 m s⁻¹, which means that for those particles the error in concentration ranges from underestimating the concentration by about 40% to overestimating it by 70%, respectively. This may be corrected for with correction factors based on the measured fall speed, which would then vary for the two-thirds of data considered here between about 1.7 and 0.6, respectively. An additional uncertainty in estimating the effective sensing area results from the uncertainty in determining the laser beam width, which may be on the order of ±20%, though this is difficult to measure.
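The fall-speed-based correction mentioned above reduces to a simple rescaling; a sketch (the factor reproduces the 1.7 and 0.6 quoted in the text for 0.3 and 0.85 m s⁻¹):

```python
# Sketch of the fall-speed-based correction: n is computed with the assumed
# 0.5 m/s, so the correction factor is v_assumed / v_measured
# (0.3 m/s gives 1.7, 0.85 m/s gives 0.6, matching the values in the text).

def concentration_correction(v_measured_ms: float,
                             v_assumed_ms: float = 0.5) -> float:
    """Multiply the computed concentration by this factor to correct it."""
    return v_assumed_ms / v_measured_ms
```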
These uncertainties affect both n and r_s. We have not yet verified the uncertainties experimentally. Also, wind speed, which likely affects these measurements (Goodison et al., 1998), has not been considered yet. Hence, n and r_s determined with D-ICI using the assumptions and estimates outlined above should be considered estimates of the actual number concentration and snowfall rate.
Image processing
The images have pixels with grey levels between 0 (black) and 255 (white). An automated image processing algorithm is applied to all top-view images to retrieve ice particle size, area, area ratio, and aspect ratio. It first removes non-particle features from the background. Then the particles on the images are detected and their edges are found. This algorithm has been used by Kuhn and Gultepe (2016) and Vázquez-Martín et al. (2020) and is a simplified implementation in Matlab of the algorithm described in Kuhn et al. (2012). Here, we summarize this implementation briefly. In the following, the different steps of the algorithm are described, of which some are shown in Fig. 4 for an example image.
A background image without any ice particle is used to correct for uneven background illumination, i.e. to remove non-particle features from the background. For this, the difference between the background and the image to be analysed is determined. The difference is positive where the presence of a particle makes the image darker than the background. For regions where the image is brighter than the background, the resulting negative values are set to zero. These are usually only regions within an ice particle, where transmitted light can appear as a brighter spot, surrounded by darker features or the edge of the particle. Now, images are rejected from further analysis if no particle was captured on them, i.e. images that are very similar to the background. For this, a lower threshold is applied to the difference. The image is rejected if the difference does not exceed the threshold for any pixel. A suitable threshold is 30; images with ice particles exceed this by a large margin.
Then, for the remaining images, the difference to the background is first scaled to increase the dynamic range of grey values. This is done for each pixel individually, so that the possible maximum difference (when the image pixel is black) at any background pixel becomes 255. Effectively, the scaling factor at any pixel is 255/bg, where bg is the grey level of the corresponding background pixel. To avoid large scaling factors where the background is dark (bg is small), the factor is limited to 2.5. For very dark background (bg < 20) the scaling is set to 1. This scaled difference is then inverted by subtracting it from 255, so that the resulting grey-level image represents the image cleaned from background features. This can be seen for an example image in Fig. 4, where panel a) shows the original image and b) the image after the background has been removed.
Regions of the original image that were identical to the background or had brighter spots are now white (255) in this cleaned image, and regions where the original image was darker than the background now show grey levels (< 255).
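As an illustration of the background removal just described, a minimal Python/numpy equivalent (not the Matlab implementation); images are assumed to be 8-bit greyscale arrays:

```python
import numpy as np

REJECT_THRESHOLD = 30  # difference threshold from the text; empty images stay below it

def clean_image(image, background):
    """Return the cleaned image, or None if no particle was captured."""
    diff = background.astype(float) - image.astype(float)
    diff[diff < 0] = 0                    # brighter-than-background regions
    if diff.max() <= REJECT_THRESHOLD:
        return None                       # image very similar to background
    scale = np.minimum(255.0 / np.maximum(background.astype(float), 1.0), 2.5)
    scale[background < 20] = 1.0          # no scaling for very dark background
    cleaned = 255 - np.clip(diff * scale, 0, 255)
    return cleaned.astype(np.uint8)       # white where nothing was detected
```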
The following steps in the image processing apply to the cleaned image resulting from the background removal described above. For detecting in-focus particles two thresholds are applied, a grey-level threshold and a gradient threshold. The grey-level threshold is used to find particles and their edges, and the gradient threshold is used to reject out-of-focus particles. First, images that do not have any pixel darker than the grey-level threshold are discarded. This rejects particles that are much out of focus. Then, a binary mask, i.e. a black-and-white image, of the same dimension as the original image is created, where logically True entries represent image pixels that are darker than the grey-level threshold. The binary mask is then smoothed to remove variations at the one-pixel level, which are considered not to reflect the actual variations in the edge of the ice particle.
The smoothing is achieved by first dilating each True pixel in the binary mask so that the four neighbouring pixels (above, below, right, and left) will also be True. Then, the dilated binary mask is eroded, to restore its original size, by setting the four neighbours of each False pixel to be also False. Between the dilation and erosion steps, the binary mask is also filled, i.e. all pixels that are False but completely enclosed by True pixels are converted to True. This will include the brighter spots, which many ice crystals show on the images, in the particle they belong to. Then, on the resulting black-and-white image (see example in Fig. 4 c), ice particles are represented by connected True pixels in the binary mask.
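A sketch of these thresholding and smoothing steps, again in Python rather than the Matlab implementation; the grey-level threshold value is an illustrative assumption, as its actual value is not stated here:

```python
import numpy as np
from scipy import ndimage

GREY_LEVEL_THRESHOLD = 100  # illustrative value, not stated in the paper

def particle_mask(cleaned):
    """Binary mask of particle pixels, smoothed as described in the text."""
    mask = cleaned < GREY_LEVEL_THRESHOLD              # True where dark enough
    cross = ndimage.generate_binary_structure(2, 1)    # 4-neighbourhood
    mask = ndimage.binary_dilation(mask, structure=cross)
    mask = ndimage.binary_fill_holes(mask)             # include brighter spots
    mask = ndimage.binary_erosion(mask, structure=cross)
    return mask
```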
All particles, i.e. regions of connected pixels that are now included in this binary mask, are then identified and their edges are found (with the Matlab function bwboundaries). For each particle, this results in both a list of coordinates of the edge pixels and a mask containing all pixels that belong to the particle. Each particle can then be processed individually.
Firstly, out-of-focus particles are rejected. For this purpose, a gradient matrix is computed from the image. The values of this matrix are used as a parameter indicating in- or out-of-focus particles. For computing the gradient values, the image is filtered (using the Matlab function imfilter) with a Sobel horizontal edge-emphasizing filter (generated with the Matlab command fspecial('sobel')) and its transpose, i.e. with the corresponding vertical filter. The resulting matrices represent the horizontal and vertical gradients, respectively. The values of the gradient parameter are then calculated as the sum of the absolute values of these horizontal and vertical gradients (Kuhn et al., 2012). For each particle, the maximum gradient value of all pixels associated with that particle is then compared to the gradient threshold. The particle is rejected as out of focus if this maximum is lower than the threshold. For the example image of Fig. 4, two ice particles are found using the grey-level threshold (see panel c); however, one of these two particles is rejected based on the low values in the gradient matrix shown in panel d).
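A Python sketch of the particle identification and focus test; connected-component labelling stands in for Matlab's bwboundaries, and the gradient threshold value is an assumption:

```python
import numpy as np
from scipy import ndimage

GRADIENT_THRESHOLD = 200.0  # illustrative value, not stated in the paper

def in_focus_particles(cleaned, mask):
    """Label particles in `mask` and keep only those passing the focus test."""
    labels, n = ndimage.label(mask)                   # connected components
    img = cleaned.astype(float)
    grad = (np.abs(ndimage.sobel(img, axis=0)) +
            np.abs(ndimage.sobel(img, axis=1)))       # sum of |Sobel| responses
    kept = [i for i in range(1, n + 1)
            if grad[labels == i].max() >= GRADIENT_THRESHOLD]
    return labels, kept
```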
Secondly, particles with apparent problems are marked with quality flags. A particle that is partly out of focus can sometimes have parts of its edge undetected, yielding an apparently fragmented edge with narrow gaps. Similarly, if thin ice particle features result in pixels brighter than the grey-level threshold, a fragmented edge is the consequence. To account for this, two or more detected particles that appear very close to each other are joined and the resulting particle is marked as 'fragmented'. The area of such a particle as determined from the detected pixels will be too small. The resulting error is not large, because the gaps are only small, and by joining the fragmented pieces, the particle may still be considered. However, being marked, it can also easily be excluded from further analysis. An example of an ice particle detected with a fragmented edge is given in Fig. 5, panel b). The other ice particle in the same figure shows the un-fragmented edge of the example particle from Fig. 4. In addition, particles that are touching the image border are marked with another flag as 'on-border'. Their size and area are underestimated as they are partly outside the image. Thus, using this flag they can be excluded from analysis when size and area matter. Figure 6 shows an example of an ice particle with the 'on-border' flag. A further problem is related to incomplete illumination of the top-view images due to the restricted geometry of the longer light path in the top-view compared to the side-view optics. This results in dark corners where ice particles cannot be seen. Consequently, particles touching these dark corners also have to be excluded from analysis, as their size cannot be known, similarly to 'on-border' particles. To allow this, these particles are marked with an additional flag as 'in-darkregion' when they have at least one pixel within the dark corners. For this, a mask containing the corresponding dark pixels (darker than a certain threshold) in the corner regions is constructed from the background image. Figure 6 shows an example of an ice particle with the 'in-darkregion' flag.
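Two of these flags lend themselves to a compact sketch (an illustration, not the actual code); pmask is the boolean mask of a single particle and dark_corners a boolean mask of the incompletely illuminated corner regions derived from the background image:

```python
import numpy as np

def on_border(pmask):
    """True if the particle touches the image border."""
    return bool(pmask[0, :].any() or pmask[-1, :].any() or
                pmask[:, 0].any() or pmask[:, -1].any())

def in_dark_region(pmask, dark_corners):
    """True if the particle has at least one pixel in a dark corner."""
    return bool((pmask & dark_corners).any())
```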
Lastly, area and size information is determined for each detected ice particle. As a parameter describing a characteristic size of the detected particle we use the maximum dimension, i.e. the diameter of the smallest circle that completely encloses that particle on the image (see Fig. 5 for an example). The area corresponds to the number of pixels that represent the particle in the binary mask. Both size and area are given in units of pixels. They are then converted to actual length and area by multiplying with the pixel resolution and the squared pixel resolution, respectively.
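A sketch of these two measures; OpenCV's minimum enclosing circle is used here for the maximum dimension, the actual routine of the Matlab implementation being unspecified:

```python
import cv2
import numpy as np

PIXEL_RESOLUTION_UM = 3.75  # pixel resolution from Sect. 2.3

def size_and_area(pmask):
    """Maximum dimension [µm] and area [µm²] of one particle mask."""
    pts = np.column_stack(np.nonzero(pmask)).astype(np.float32)
    _, radius_px = cv2.minEnclosingCircle(pts)   # smallest enclosing circle
    max_dim_um = 2.0 * radius_px * PIXEL_RESOLUTION_UM
    area_um2 = float(pmask.sum()) * PIXEL_RESOLUTION_UM ** 2
    return max_dim_um, area_um2
```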
As this method is the same as that used for the imager described by Kuhn et al. (2012), which used similar optics, the sizing accuracy is expected to be similar. There, the determined size of a small particle (about 50 pixels in size) varied by about two pixels when the location within the depth of focus was changed. Larger inaccuracy is avoided by rejecting out-of-focus particles. To this uncertainty, one pixel should be added to account for the uncertainty of the particle edge location. Thus, a combined sizing accuracy of approximately 10 µm (or three pixels) is expected for D-ICI. Consequently, for a 200 µm particle, the expected error in area should be on the order of 10%.
Snow fall speed measurement
The side-view camera can be operated in a fall-speed mode, in which the falling ice particle is captured twice on the same image by using a double exposure. This concept has been tested with ICI in a configuration without inlet, so that ice particles could fall freely through the instrument (Kuhn and Gultepe, 2016). For D-ICI, the inlet and sampling tube are designed so that particles fall vertically undisturbed before they reach the sensing volume; thus the set-up does not need to be modified to allow measurements of fall speed. In the fall-speed mode, two very short illumination flashes are used, which have a time separation of Δt = 1.26 ms ± 0.01 ms. This time difference is long enough to yield a clear separation of the two particle appearances on the image, but also short enough that the particle does not fall out of the vertical FOV of the imaging optics. Thus, the particle's fall speed v can be determined from the vertical fall distance Δs, as measured on the image, and the time separation Δt of the two exposure flashes simply as

v = Δs / Δt. (3)

The vertical fall distance Δs is measured by manual inspection of the side-view images. Two or three points at extremes of each particle to be analysed (e.g. a far-right corner and a far-left corner point) are identified and their coordinates on the image are recorded. The same points are then also identified and recorded on the second appearance of the particle on the image, and the vertical distance is determined as the difference of the vertical coordinates of pairs of corresponding points of the two appearances. From the two or three vertical distances determined in this way, an average vertical fall distance is calculated.
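A minimal sketch of Eq. (3) applied to manually picked points; the flash separation and pixel resolution are the values given in the text, while the example displacement is invented:

```python
import numpy as np

DT_S = 1.26e-3              # flash separation [s], from the text
PIXEL_RESOLUTION_UM = 3.75  # pixel resolution [µm], from Sect. 2.3

def fall_speed(first_points, second_points):
    """Eq. (3): fall speed [m/s] from (x, y) pixel coordinates of the same
    extreme points in the particle's two appearances on one image."""
    dy_px = second_points[:, 1] - first_points[:, 1]
    ds_m = dy_px.mean() * PIXEL_RESOLUTION_UM * 1e-6   # average fall distance
    return ds_m / DT_S

# Example: a mean vertical displacement of 180 px corresponds to ~0.54 m/s
```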
While falling, the difference of the horizontal coordinates is usually close to zero. Such a difference could be caused by sideways or rotating (tumbling) motion. Horizontal winds, which affect other instruments with an open sampling volume, such as PIP and MASC, do not cause a sideways motion in the enclosed sensing volume of D-ICI. Thus, only a tumbling particle can be responsible for a difference of the horizontal coordinates, and tumbling of ice particles is not often seen (see Sect. 4.2). If it occurs, it is detected by significantly different values of the individual vertical distances measured for a point on the right and left side of the particle, respectively, so that particles that tumble too much may be excluded from the analysis of fall speed data. When tumbling, one side of the snow particle falls faster and one slower than the average that is determined from the averaged fall distances Δs. Thus, by rejecting tumbling particles, e.g. those that rotate by more than 10°, the error in fall speed can be limited to approximately 7%. Uncertainties in Δt have a negligible effect on the fall speed error. The error related to the accuracy of point selection (about two pixels) translates to an additional uncertainty in Δs, which is, however, only on the order of 1%.
While side-view images are not processed automatically, the top-view images are (see Sect. 3.2). Results from this automatic processing of top-view images provide size, area, area ratio, and aspect ratio for the particles whose fall speeds are determined from the corresponding side-view images.
Images and shapes
According to the design, the pixel resolution should be equal to the pixel size of the CCD cameras, 3.75 µm (see Sect. 2.3).
This has been confirmed by imaging a calibration target, a graticule with 10 µm/division and a total length of 1 mm. The lengths in pixels corresponding to 1 mm from several such images have been converted to pixel resolutions, yielding an average of 3.74 µm/pixel with a standard deviation of 0.02 µm/pixel for the side-view imaging optics and, from separate images, the same values for the top view.
Figure 7 shows a few examples of ice particle images from snowfall in early winter (2014-10-23 in Kiruna), when the ambient surface temperature was about −5 °C. Each ice particle is shown in the two views, where the top view is shown in the upper panel and the corresponding side view in the lower panel. These detailed images of ice particles allow their shapes to be recognized. On 2014-10-23 the ice particles had predominantly bullet-rosette and similar shapes, but also plate-like and capped-column shapes (see Fig. 7). On another day, 2014-10-19, with similar ambient surface temperatures of about −3 to −6 °C, two dominant shapes were observed, graupel (heavily rimed snow crystals) and rimed needles (see Fig. 8). Most of the rimed needles on that day seemed to be agglomerates or ensembles of two or more single needles (called bundles of needles by Magono and Lee, 1966).
Fall speed
Figure 9 shows examples of double-exposed images from the side view, showing the falling ice particles twice, used to determine fall speed. The data considered in the following are from 2014-10-19, a day with relatively low wind speeds averaging 2 m s⁻¹ (as measured at the nearby Kiruna airport). Therefore, we do not consider these data to be much affected by issues related to higher wind speeds. The images from 2014-10-19 (top row of Fig. 9) also include a few drizzle droplets.
The heavy riming on that day indicates the presence of cloud droplets, and the imaged drizzle droplets originate from such cloud or fog droplets that have grown large enough to precipitate and fall into the inlet of D-ICI. They were, with only very few exceptions, smaller than all snow particles.
One of the particles shown in Fig. 9 is tumbling (right-most ice particle in the lower row). The rotation of the particle between the two exposures, around an axis perpendicular to the image plane, is approximately 8°, which still seems acceptable if one wants to determine fall speed with an error below about 10%. Hence, 10°, or perhaps up to 15°, may be used as the limit above which the image has to be discarded for fall speed measurement. Randomly selecting a few days and analysing the ice particle images on those days (a total of 946 particle images) yields that approximately 8% of ice particles tumble by more than an angle of 10°, and only 3% by more than 15°. This means that particles in general tumble somewhat; however, the majority of ice particles tumble so little in the time between the two side-view exposures that fall speed can still be measured.
Cross-sectional area
Using the top-view images, the ice particles' projected area in fall direction (i.e. the area projected on a surface perpendicular to the vertical fall direction) can be determined. When the ice particles are classified according to their shapes, power laws can be fitted to the resulting subsets of data to find relationships describing area for specific shapes. On 2014-10-19 two dominant shapes were observed, graupel and rimed needles (see Fig. 8). The fitted power laws for these two shapes, with D in µm, are indicated in Fig. 10 by coloured lines. The groups of particles used for these fits are shown in Fig. 10 as coloured symbols and correspond to a selection of the most compact-looking graupel and almost all particles that could be identified as rimed needles.
The images from 2014-10-19 also show a few drizzle droplets, which can be seen in Fig. 10 with areas very close to the area-dimensional relationship for spheres. Droplets are the smallest particles measured by D-ICI on that day, with maximum dimensions below 200 µm for the smallest droplets. Due to their spherical shapes, the determined area ratios were very close to 1, and all particles with an area ratio larger than 0.9 were droplets. For these, the fitted area-dimensional power law is A = 6.79·10⁻¹³ m² · D^2.02 (D in µm, R² = 1.00), which is very close to the cross-sectional area of spheres.
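The power-law fits of this section reduce to a linear least-squares fit in log-log space; a minimal sketch (not the actual fitting code):

```python
import numpy as np

def fit_power_law(D_um, A_m2):
    """Fit A = gamma * D**beta; returns (gamma, beta). D in µm, A in m²."""
    beta, log_gamma = np.polyfit(np.log(D_um), np.log(A_m2), 1)
    return np.exp(log_gamma), beta

# Sanity check against the droplet fit quoted above: for spheres one expects
# beta ≈ 2 and gamma ≈ π/4 · (1e-6 m/µm)² ≈ 7.85e-13 m², close to the fitted
# gamma = 6.79e-13 m² and beta = 2.02.
```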
When looking at the area-dimensional relationship for a certain shape, the fit to the power law can be very good. An exception here are the rimed needles. However, they still show a fairly good fit, better than the fit of one common power law to all data, which would predict the area poorly for any of the shapes here: droplets, graupel, and rimed needles (see Fig. 10).
Figure 10 also shows for comparison two relationships reported by Mitchell (1996), one for rimed long columns (as a thin line in magenta) and one for lump graupel (blue). While the latter agrees very well with our graupel, the rimed long columns have a larger cross-sectional area than our rimed needles, as one would expect for columns compared to thinner needles. The power laws fitted to our data are shown in Fig. 11 as solid lines. The relationships for area (see Fig. 10) and mass reported by Mitchell (1996) can be used to derive the corresponding fall speed (Mitchell, 1996, Eq. 12 and 20). The resulting relationships for rimed long columns and for lump graupel are shown in Fig. 11 for comparison. As for area, the relationship for lump graupel agrees well with the D-ICI graupel measurements, whereas there are differences for rimed long columns compared to our rimed needles. These discrepancies are probably related to the larger area and mass of columns compared to needles.
Fall speed measurements
Figure 11 also shows the fall speed measured for the drizzle droplets. As expected, the droplets have the strongest dependence on size. With increasing complexity of particle shape, from droplets via graupel to rimed needles, the size dependence becomes weaker, the spread in the data larger, the speed (at the same size) slower, and the R² of the fit to a power law smaller. Droplets have the simplest shape (spherical) and also the largest area ratio, larger than 0.9. The compact graupel particles that have been selected to fit the fall speed-size relationship have a somewhat lower area ratio, on average 0.63 (with a standard deviation of 0.08). Rimed needles have the lowest area ratio of on average 0.17 (st. dev. 0.04). Thus, one can also observe that with decreasing area ratio the size dependence of fall speed becomes weaker and at the same time the fit to the power law worse.
So, if instead of compact graupel all particles with area ratios between 0.25 and 0.9 are selected, a group that includes graupel with more structure and a smaller area ratio compared to compact graupel, then we expect the fit quality to deteriorate. And in fact, for this group with an average area ratio of 0.56 (st. dev. 0.15) the result of a fit is v = 0.0079 m s⁻¹ · D^0.66, D in µm (with R² = 0.20).
Summary
We have described the Dual Ice Crystal Imager (D-ICI), a ground-based in-situ instrument to determine snow crystal properties and fall speed simultaneously. Dual images are taken of detected snow particles using two CCD cameras that image along a horizontal and a close-to-vertical viewing direction, respectively. The horizontal, or side view, is used to determine fall speed from images taken with double exposures. The close-to-vertical, or top view, is used to determine size and area.
Both cameras use the same pixel resolution of approximately 4 µm/pixel. The high-resolution images provide enough detail to determine shape in most cases. Having two views of the same particle helps to avoid ambiguities in shape determination that may arise, if only one image were used, due to either an unfavourable particle orientation or particle complexity obscuring internal structure in the current illumination set-up. Hence, D-ICI can be used for classification studies (Vázquez-Martín et al., 2020). Microphysical properties may then be studied specifically for certain shapes. The necessity to discriminate shapes has been demonstrated by fitting one common power law for area versus size to all data during a certain measurement period. The relationship that was found would fit the area poorly for any of the shapes encountered in that period: droplets, graupel, and rimed needles. By selecting subsets of the data corresponding to certain shapes, better-fitting relationships have been found and reported (see Sect. 4.3). Similarly, fall speed-size relationships have been found to differ from shape to shape with varying correlations, which, however, are all better than the correlation if shape is not considered (see Sect. 4.4). Thus, an instrument that allows shape to be discerned and measures fall speed at the same time is required.
Snow particles fall some distance vertically through the sampling tube before images are taken, from which speed is derived.
Therefore, the fall speed measurements of D-ICI are not affected by the vertical component of the wind speed or by turbulence close to the ground. The accuracy of the fall speed measurements has been discussed and is mainly limited by the tumbling of snow particles. However, tumbling is not observed frequently. By rejecting particles that tumble with a rotation of more than 10°, as detected on the side-view image, the error can be limited to 7%.
Snow particle size and area are determined from top-view images, i.e. as projected along the vertical fall direction. These properties are more appropriate than the same properties determined from a horizontal view, as done by most instruments, when studying relationships to the fall speed or comparing to vertically pointing remote sensing measurements.
Figure 10 shows these projected, or cross-sectional, areas A from snowfall measured on 2014-10-19 between approximately 6 and 19 UTC (at temperatures on the ground between −3 °C and −6 °C) as a function of particle size, i.e. maximum dimension D, also determined from the top-view images. On this logarithmic plot, the cross-sectional area of spheres having a diameter equal to the maximum dimension is represented by a straight line given by A = π/4 · D². A power law A = γ·D^β can be fitted to the data to find the parameters γ and β. For the data shown in Fig. 10 this yields A = 4.72·10⁻¹¹ m² · D^1.24, D in µm, with a correlation coefficient R² = 0.71.
Figure 11 shows the fall speeds versus the maximum dimension of individual ice particles from the snowfall measured on 2014-10-19. The spread of the data is considerable, and fitting to a power law of the form v = c·D^b yields v = 0.55 m s⁻¹ · D^−0.019 (D in µm) with R² = 0.0004, i.e. no dependence of speed on size is found, indicated by the exponent b and R² being close to zero. The parameter c coincides with the average fall speed of 0.55 m s⁻¹. As mentioned in Sect. 4.3, the dominant shapes on that day were graupel and rimed needles. Using the subsets of the data representing these two shapes, fits to the power law now reveal significant correlations for graupel. However, for rimed needles the power law does not fit the data well. The results from these fits are: graupel: v = 0.0013 m s⁻¹ · D^0.98, D in µm (with R² = 0.83); rimed needles: v = 0.020 m s⁻¹ · D^0.41, D in µm (with R² = 0.054).
Figure 1. Schematic cut-views of the set-up of D-ICI. Panel a): cut through a plane defined by the optical axes of the imaging optics showing inlet, sampling tube, and the side- and top-view imaging optics and illumination; panel b): perpendicular cut showing the laser detection consisting of laser, light trap, lens for collection of scattered light, and photodiode. In both panels the optical cell with the sensing volume at its centre is indicated by the image of an ice crystal (not to scale).
Figure 2. Two examples of ice crystals imaged in two viewing geometries, top view and side view. The ice crystal shown in panel a) has a width of approximately 1.2 mm, the one in panel b) 0.4 mm. Both ice crystals in panels a) and b) use the same scaling, and, for reference, a size bar with length corresponding to 1 mm (and width of 10 µm) is shown.
Figure 3. Photograph of D-ICI (door of enclosure removed).
Figure 4. Automated image processing steps shown for an example image. Panel a) shows the original image; b) the cleaned image (background features removed); c) the binary mask, where logical True values correspond to regions on the cleaned image that are darker than the grey-level threshold, here shown as black; d) the gradient matrix computed from the cleaned image, values scaled to grey levels for representation (the largest gradient value corresponds to black and zero gradient to white). See the description in the text for details of the processing procedure. The resolution is indicated by a size bar of 1 mm.
Figure 5. Detected edges of processed ice particle images. The edges are shown in red and have been enlarged to a thickness of 3 pixels for better visibility in this figure. One example, panel a), shows the edge of the ice particle from Fig. 4. The smallest circle enclosing the particle is shown with a dashed line; its diameter, i.e. the maximum dimension of the ice particle, is 1.34 mm (or 358 pixels). The other example in panel b) shows an ice particle that has been detected with a fragmented edge due to parts of the actual particle edge being too bright (see text for more details).
Figure 6. Examples of ice particles marked with the 'on-border' and 'in-darkregion' flags (see Sect. 3.2).
Figure 7. Ice particles as imaged in two viewing geometries, top view and side view. Each ice particle is shown as a pair of these two views, with the top view in the upper panel and the corresponding side view in the respective lower panel. Two rows of such pairs are shown. All images have the same resolution; for reference, a size bar with length corresponding to 1 mm is shown.
Figure 8. Ice particles from 2014-10-19, showing the two dominant shapes of that day, graupel (heavily rimed snow crystals) and rimed needles (see Sect. 4.1). For reference, a size bar with length corresponding to 1 mm is shown.
Figure 9. Example side-view images of doubly-exposed falling ice particles. The fall speed is determined from the vertical separation of the two instances of the particle on the same image. The top row shows measurements from 2014-10-19, the bottom row from 2014-10-23. For reference, a size bar with length corresponding to 1 mm is shown.
Figure 11. Fall speeds versus maximum dimension from the snowfall measured on 2014-10-19. Fits to all data (black dots) and to three subsets corresponding to graupel, rimed needles, and droplets are shown as lines in the same colour as the corresponding data points. In addition, two relationships predicted from area and mass relationships using the method reported by Mitchell (1996, Eq. 12 and 20, referred to as M96 in the legend) are shown as thinner lines: one for rimed long columns (magenta) and one for lump graupel (blue).
| 11,386.2 | 2019-11-04T00:00:00.000 | ["Environmental Science", "Physics"] |
Form to Fabrication
Corresponding Author: Stefan Reich, Department of Architecture, Facility Management and Geoinformation, Anhalt University of Applied Sciences, building envelope research group, Dessau, Germany. Email: <EMAIL_ADDRESS>
Abstract: A creative idea is always constrained by many factors such as aesthetics, material, fabrication, and tools. With the introduction of digital and robotic fabrication, some constraints can be removed, while at the same time new constraints are added. In this study, we discuss how to prototype a creative idea with different fabrication approaches in the framework of a student studio course. The student groups compare two different digital fabrication techniques using robots. The task of the students is to design and fabricate a full-scale textile concrete furniture piece. In order to cast or laminate the concrete, students need to build a formwork. Building formwork for free-form designs is complex and strenuous work. For this reason, an industrial robot is used for the fabrication of these molds. Due to the limitations of the robot hardware and processes, not all forms are feasible for fabrication. In this study, the workflow and fabrication methods, along with their limitations, and the result of a full-scale textile reinforced concrete furniture piece are discussed.
Introduction
In addition to the use of the robot for classic production work in prototyping and model making (milling, drilling, etc.), the question of the necessity of a robot in the field of architecture education arises repeatedly. In our view, expanding digital literacy and deepening the understanding of technological dependencies and ways of working today is essential for architects and all designing professions. Our world, which is characterized by comprehensive networking and digitization and ever faster technical dynamics, must also be taken into account in the education of the future generation of architects by promoting problem-solving skills in the technical field.
The digital linking of the trades, subareas, and working methods, as also described in the concept of Industry 4.0, can be made directly tangible on a small scale and in certain technological areas involving the robot.
In addition, the approach of so-called parametric design (also computational design or algorithmic modeling) is becoming increasingly widespread among creative students. Computer-aided software applications are used to develop shapes and objects that deviate from conventional architectural solutions. Biomorphic geometries and free-form surfaces can thus not only be designed but also implemented. In order to be able to offer a consistent digital chain from design through 3D modelling software to robotic production, the use of the industrial robot and the training of architecture students in this field of technology is certainly desirable and useful. This paper presents the students' project called "praxis-project". This project was initiated with the motive of focusing on and deepening the understanding of building techniques and materials in both a theoretical and a practical way. Each semester, a building material and a building technique are defined as the project task. This year the task was to design and fabricate 3D-textile reinforced concrete furniture. This requires the fabrication of a mold using either conventional or digital fabrication techniques. Four groups of four students each worked together. The project had four phases over the semester: material handling and testing, design, fabrication of the mold, and laminating or casting. The workflow, drawbacks, and limitations of the fabrication and the results of the groups are compared and presented.
Introduction to Fiber Reinforced Concrete
The first production methods for glass fiber reinforced concrete were developed in the 1970s. These methods were developed to produce flat panels and are limited when dealing with complex shapes. Currently, two alternative production methods exist for the production of FRC: the sprayed method and the premixed method.
For the sprayed method, fibers are mixed with cement slurry, which is sprayed onto the mold in layers using an air pressure gun. Each layer is sprayed perpendicular to the previous one and periodically compressed with small rollers to minimize the porosity and enhance the density. This method produces a high fiber content, good surface quality, no visible fibers, and good fiber distribution. However, the tensile capacity of the concrete is low and the method is labor-intensive.
On the other hand, for the premixed method the fibers are mixed into the cement slurry. The fiber content is calculated based on the intended use and cannot be higher than 2%. The fibers need to be uniformly distributed during mixing, while making sure the fibers do not break in the process. For this method, advantages such as ultra-high-performance concrete and self-compacting concrete can be used, the mold can be vibrated, and the process is less labor-intensive. However, this method yields a lower fiber ratio, a less uniform distribution of fibers, and an inconsistent surface quality (Henriksen, 2017).
Contrary to fiber reinforcement of concrete, where the reinforcement is meant to reduce crack propagation after breakage, Textile Reinforcement of Concrete (TRC) performs similarly to steel reinforcement and increases structural performance in areas with high tensile stresses. The fabric mesh material consists of fiber rovings in one or two directions. Rovings are made from textile fibers that are aligned and glued by a matrix (Koutas et al., 2019).
Mold/Formwork
In general, wooden, steel, rubber, and polystyrene foam molds are used for single-curved and flat surfaces. For complex geometries such as double-curved or freeform surfaces, 3D Computer Numerical Controlled (CNC) milled molds, flexible tables with pistons or actuators, and membranes are used. Each method has its limitations regarding size, labor costs, production time, labor intensity, and reusability (Henriksen, 2017) (Table 1).
Material Testing and Handling
Concrete
In the first part of the material testing phase, students evaluate the mechanical properties of concrete. As an exercise, concrete blocks are cast and tested after 7, 14, and 28 days for compressive strength to assess the effect of curing. The mortar used consists of a fine aggregate (0-2 mm); a cement of strength class 42.5 N/mm² was used as the binder.
In order to achieve suitable material properties and an improvement in workability, limestone powder was added to the mix. To determine the material properties, mortar prisms were produced and subjected to bending tensile and compressive tests.
Flexural Strength
The flexural tensile strength of the mortar prisms was determined in accordance with DIN EN 12390-5. The prism samples with the dimensions 40×40×160 mm were placed on two rollers as shown in Fig. 1. The prism is loaded at the center of the sample until it fails.
The maximum failure load F determined in this way is converted to the flexural tensile strength f_ct using the formula for centre-point loading according to DIN EN 12390-5:

f_ct = 3 · F · l / (2 · d₁ · d₂²),

where l is the span between the supporting rollers and d₁ and d₂ are the cross-sectional dimensions of the prism.
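A small sketch of this evaluation for the 40×40×160 mm prisms; the 100 mm span is an assumption typical for this prism size and is not stated in the text:

```python
# Flexural tensile strength for centre-point loading; the 100 mm span is an
# assumed value typical for 40x40x160 mm prisms.

def flexural_strength(f_max_n, span_mm=100.0, d1_mm=40.0, d2_mm=40.0):
    """Flexural tensile strength f_ct [N/mm²]."""
    return 3.0 * f_max_n * span_mm / (2.0 * d1_mm * d2_mm ** 2)

# Example: a failure load of 5 kN gives 3*5000*100 / (2*40*40²) ≈ 11.7 N/mm²
print(flexural_strength(5000.0))
```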
Compressive Strength
The compressive strength of the mortar samples was determined based on DIN EN 12390-3. The fracture halves of the flexural tensile test are used here as test specimens. This turns 3 prisms into 6 compressive test specimens (Fig. 2).
The compressive strength f_c is determined from the maximum compressive force F as f_c = F / A_c, with the loading area A_c = 40×40 mm² = 1600 mm². The results from the compressive test series are shown in Table 3. The compressive resistance of sample 1.7.2 differs significantly from the other values; the reason for this is an error in the test. The value in question is hence rated as an outlier and is no longer taken into account.
As shown in Fig. 3, the compressive strength tests result in a 28-day compressive strength of 53.32 N/mm² with a standard deviation of 1.73 N/mm².
Reinforcing Textiles
Textile reinforcement for concrete structures is state-of-the-art and widely used, for the main reasons that it is very light, needs less concrete cover, and does not corrode.
The concrete-textile composite has a much higher structural performance than a (polymer) fiber reinforced concrete. The fabric reinforcements have a tensile strength comparable to steel. The woven fabric reinforcement is, in our case, placed and aligned in the mold according to the relevant maximum stresses (Rempel, 2013). Two different textile materials (regarding base material and fabric construction) are used for the fabrication of the furniture (Fig. 4). On the one hand, a non-crimp mesh fabric from carbon rovings (with the properties in Table 4) is used. The mesh openings are approx. 30×30 mm. It is used in the field of refurbishment of concrete structures (e.g., bridges, shells) (Seidel et al., 2013).
Because this fabric is comparatively stiff and hard to use for small radii, an alternative material is offered. It is a common open-mesh leno fabric from glass fiber filaments for the reinforcement of thermal insulation composite systems and plaster systems (StarTex Grob, Baumit) (Table 5).
For both materials, the base properties (weight, yarn count and cross section) are evaluated. The mechanical properties are determined in tensile tests.
In line with the material behavior and rigidity, the students used the carbon fabric for the reinforcement of the more flat and laminar areas of the furniture. For the curved parts with small radii, the more flexible glass fabric was used.
In the second part of the training, students are introduced to textile reinforced concrete, including fine aggregates and concrete admixtures such as superplasticizers. Students evaluate the bending strength of thin textile reinforced beams as shown in Fig. 5.
Apart from the evaluation of mechanical properties, students also evaluate the surface quality of the concrete for their project. Two important factors that influence the surface quality are the separation agent and surface lamination of the mold.
To obtain a good surface quality of the concrete, various lamination materials such as beeswax, epoxy, putty (plaster and dispersion adhesive), and solvent-free paint are tested on the mold surface and compared in Table 6. The results of the tests are shown in Figs. 6 and 7. Of all the lamination agents, epoxy resulted in a very fine surface quality, and the mold could be used for a second lamination without any further release agent, as shown in Fig. 6. Furthermore, pigments for coloring were tested (Fig. 8) and lamination techniques with textiles were exercised, as shown in Fig. 9.
Adapting the Design Concept/Idea
After the training process, students develop and present ideas in the form of hand sketches and models made of either paper or clay. The designs are evaluated and refined based on aesthetic, structural, and functional aspects. The production difficulties, which play an important role when working with the robot and special tools, are discussed in detail. The available machine processes are milling and hot wire cutting.
Geometry of Surfaces
The question of which fabrication process and material suit the surface design best (with the main goal of reducing fabrication complexity) can be answered after dealing with the following considerations on surface curvature. For instance, one can fabricate with ease all flat, conical, and cylindrical shapes (Gaussian curvature = 0) with a hot wire cutting device from a Styrofoam block. Furthermore, ruled surfaces can be produced using the hot wire tool as the straight ruling line (Henriksen et al., 2016).
Limited only by the reachability and the dimensions of the cutting tool (length and depth), the fabrication strategy and the programming of the tool movement can nevertheless be a big challenge.
All surfaces that are more complex, like synclastic and freeform shapes, need a 2.5- to 6-axis milling procedure. The freedom of form and detailing is much greater with this process than with hot wire cutting. For the multiple material options for mold making, different parameters need to be specified and the machine setup has to be figured out. All the common fabrication technological aspects, like feeds and speeds as well as the overall milling strategies, need extensive programming.
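The decision logic behind these considerations can be condensed into a short sketch; it is an illustration, not the students' tool, and the curvature samples would come from evaluating the CAD surface:

```python
# Illustrative decision rule for the fabrication process, following the
# curvature considerations above. `gaussian_curvatures` are samples taken
# from the CAD surface; the tolerance is an assumption.

def fabrication_method(gaussian_curvatures, is_ruled, tol=1e-6):
    """Hot wire for developable or ruled surfaces, milling for the rest."""
    developable = all(abs(k) < tol for k in gaussian_curvatures)
    if developable or is_ruled:
        # flat, conical, cylindrical (K = 0) or ruled (e.g. a helicoid),
        # cuttable with the straight wire following the ruling lines
        return "hot wire cutting"
    return "milling"  # synclastic and freeform shapes
```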
Surface Quality
The question of the final surface quality is crucial for the product impression. The traces left by milling need to be minimized, either by using appropriate milling tools or by adjusting the toolpaths (roughing and smoothing), as shown in Fig. 10. For an optimal result with the hot wire cutter, the speed of the tool is adjusted to the width of the cut, as shown in Fig. 11.
Fabrication Time
In the current scenario with high economic priority, all fabrication processes pursue time optimization and reduction of complexity. Various strategies to simplify surfaces are common. For surfaces that are originally developable, not much effort is needed. However, for more complex surfaces one can approach simplification either by discretization or by remodeling the geometry into developable surface patches.
Milling is a process requiring tedious effort to cut by stock removal, whereas with the hot wire larger blocks of material are cut away in a shorter time span.
Workspace and Dimensions
With the Kuka KR 16-2, the maximum reach from the robot center point is 1800 mm in radius. To maintain the accuracy of the processes, only a 1000 mm radius range is available for fabrication. For this reason, the fabrication is planned in segments of the working piece. The principal reachability of the tool for every production step should be ensured. This is checked with offline robotic simulation.
The width of the hot wire tool is 750 mm, but the cutting width is between 500 and 600 mm depending on the type of material.
The accuracy of both processes depends on the position of the tool. The farther from the center of the robot, the lower the accuracy that can be achieved.
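This pre-check can be sketched as a simple filter on a planned toolpath; coordinates are assumed to be in millimetres relative to the robot center point, and the routine is an illustration rather than part of the actual offline simulation:

```python
import numpy as np

MAX_FABRICATION_RADIUS_MM = 1000.0  # accurate working range from the text

def segment_reachable(toolpath_points):
    """True if every toolpath point of a segment lies within the accurate
    working radius around the robot center point."""
    radii = np.linalg.norm(np.asarray(toolpath_points, dtype=float), axis=1)
    return bool((radii <= MAX_FABRICATION_RADIUS_MM).all())
```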
Structural Performance
Besides the digital modelling and the robotic programming for fabrication, Grasshopper is used for the structural analysis of the projects. With the Grasshopper plugin Karamba3D (Fig. 12), students were able to analyse their designs. It provides various analysis tools, e.g., shell line analysis for force flow, principal stresses, deformation, and stress iso-lines. From the NURBS surface model (generated in Rhino), a consistent and sophisticated meshing procedure (with different Grasshopper mesh tools) transfers the model into an adequate mesh object for FEA analysis. The supports, the load cases as well as the material properties and shell thickness must be defined; the model must then be assembled, and the deflections are calculated with the Karamba3D solver. This plugin helped the students to orient their design not just on aesthetic values but also on the structural properties of the project.
To verify the results the students obtained from Grasshopper, results from the professional structural FEM software Ansys are used (Table 7). In order to simplify the model, the concrete is modelled without reinforcement, with a Young's modulus of 25 GPa and a Poisson's ratio of 0.3. A critical load case scenario with two people (100 kg each) sitting on either side of the bench, applied as a surface load as shown in Fig. 13, is considered for verification.
Another aspect of the structural performance is the design and orientation of the reinforcement based on the stress trajectories, as shown in Fig. 14 for the same load case. The tensile and compression zones are calculated and the textile is oriented for maximum efficiency in the tensile zone. Though the principal stress results in Table 7 are comparable, the stress trajectories obtained from Grasshopper do not agree well with Ansys. For a holistic approach, however, the results obtained from Grasshopper are satisfactory.
Fabrication of Mold
The last phase was the implementation of the designs as full-scale prototypes. For this, the digital fabrication groups had to convert the design from the physical model into a digital model. This can be done either by 3D scanning or by 3D modelling using computer-aided design programs like Rhino. The segmentation of the object into feasible working piece blocks, with a focus on seam lines and stripping of the mold, adds complexity and a design feature to the project.
The modelling of the design object as well as the generation of the adequate mold elements requires elaborate digital modelling. Flipping the mindset from a positive object to a negative mold requires specific 3D thinking skills.
One design is a bench with the dimensions of 2200×400×550 mm. It consists of two flat segments to sit on and a twisted intermediate, connecting element (Fig. 15). The side segments are of regular, rectangular dimensions, so conventional fabrication methods are used. For the intermediate element, a ruled helicoid spiral surface and a perfect example of ruled surface geometry, the formwork was fabricated with the help of the hot wire cutting tool. The formwork was modelled in Rhino and the robot was programmed with Grasshopper and KukaPRC (Fig. 16 to 18).
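The wire-cuttability of such a helicoid rests on its straight ruling lines; a small sketch (not the students' Grasshopper/KukaPRC definition, with illustrative dimensions) of how positions of the hot wire can be sampled:

```python
import numpy as np

def helicoid_rulings(radius_mm=200.0, height_mm=550.0,
                     turn_rad=np.pi / 2, n_rulings=50):
    """Endpoints of straight ruling lines of a helicoid segment; each pair is
    one position of the straight hot wire while cutting the mold. All
    dimensions are illustrative, not the bench's actual geometry."""
    rulings = []
    for t in np.linspace(0.0, 1.0, n_rulings):
        phi, z = t * turn_rad, t * height_mm
        inner = np.array([0.0, 0.0, z])               # point on the axis
        outer = np.array([radius_mm * np.cos(phi),    # point on the outer edge
                          radius_mm * np.sin(phi), z])
        rulings.append((inner, outer))
    return rulings
```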
The second project is a lounger (Fig. 19), which is adapted to the human stature. The dimensions of the chair are 1510×700×1030 mm. There is a main part for lying down and a supporting element attached to the bottom side. The main part is divided into two pieces that fit into the Styrofoam stock size of 1000×1000×500 mm (Fig. 20). The freeform shape of the overall design and the elaborate detailing require the use of a milling process (Fig. 22).
Autodesk Fusion 360 was used for the programming of the milling paths (Fig. 21). The complete range of machining parameters needs to be set up for the programming. The students get an insight into the topics of tool selection, milling strategies, and feeds and speeds. With these experiences, the students developed the following evaluation sheet (Table 8). This comparison shows that the machine- and CNC-based manufacturing methods are not always the first choice (Table 8). The statement that (almost) everything is possible with the robot should always be supplemented by fundamental questioning. Each manufacturing approach requires a careful balancing of parameters such as machining time, costs, accuracy, freedom of form, etc. (Stavric and Kaftan, 2012).
Bench
The fabricated Styrofoam molds were joined together (Fig. 23) and prepared for the concreting process. Figure 24 shows how the spacers are attached to maintain the concrete layer thickness.
Because of the hot wire cutting process, the produced Styrofoam surface shows regular unevenness. However, this should not be reflected in the concrete surface, so different grinding processes and surface finishes were tested in advance (Fig. 6 and 7). The favored finish was a multiple application of putty (dispersion adhesive), which resulted in an even and cost-efficient surface.
A conventional formwork oil from the manufacturer PCI was applied as a release agent between the final surface coating and the concrete (Fig. 25). The concrete was applied layer by layer to the formwork in a time-consuming lamination process. A carbon fiber mat was used to prevent cracks in the near-surface area (Fig. 26). Subsequently, additional layers of concrete were applied until the necessary material thickness of 2-3 cm was achieved (Schneider, 2013) (Fig. 27 and 28).
A special challenge was the stripping of the hardened concrete from the Styrofoam formwork. As the mold was not damaged, another work piece was subsequently produced.
Lounger
The steps for making the lounger (Fig. 29 to 34) are the same as the sequence for the bench (Fig. 23 and 29). Due to the ineffectiveness of the release agent, the reuse of the Styrofoam formwork was not possible. The only way to extract the lounger from the mold was a destructive method, as shown in Fig. 33.
In general, the concrete mix had to be adapted to the production process. The essential factor here was to choose the water-cement ratio and the fine fraction of the concrete so that the plasticity of the material corresponds to a leveling compound, since the concrete partly had to be applied to vertical surfaces.
Furthermore, it was necessary to observe the heat of hydration during the hardening process. As Styrofoam is a well-insulating formwork, the heat generated during hydration can be released only to one side, the uninsulated side. This can lead to temperature-induced stress cracks. Preliminary tests showed no problematic temperature gradients within the concrete cross-section during hardening, so monitoring of the concrete temperatures was not needed.
Conclusion
The project demonstrated the students' ability to design, develop, and fabricate textile reinforced furniture as free-form, robotically fabricated products. The use of the furniture fulfills the design and user requirements. The concrete quality exceeds standard concrete in terms of strength and crack dimensions. The permanent use as an outdoor product has been demonstrated. Satisfactory long-term outdoor experience exists with comparable objects at our campus built four years ago. | 4,638.2 | 2020-03-23T00:00:00.000 | ["Engineering"] |
DEVELOPMENT OF A VIRTUAL MUSEUM INCLUDING A 4D PRESENTATION OF BUILDING HISTORY IN VIRTUAL REALITY
In the last two decades the definition of the term “virtual museum” changed due to rapid technological developments. Using today’s available 3D technologies a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On one hand, a virtual museum should enhance a museum visitor’s experience by providing access to additional materials for review and knowledge deepening either before or after the real visit. On the other hand, a virtual museum should also be used as teaching material in the context of museum education. The laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum “Alt-Segeberger Bürgerhaus”, a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany. Option a, an interactive computer-based, tour for visitors to explore the exhibition and to collect information of interest or option b, to immerse into virtual reality in 3D with the HTC Vive Virtual Reality System.
INTRODUCTION
A function of a museum is to aid non-specialists in understanding information and context via an interaction of short duration. Ideally, museums should also deepen visitors' interest in the subjects that they present. In accordance with their educational mission, museums must constantly present and represent complex issues in ways that are both informative and entertaining, thus providing access to a wide target audience. Visitors with prerequisite knowledge, prior experiences, as well as associated individual interests and objectives tend to take a more active role in engaging with museums (Reussner, 2007). Today, these fundamental ideas are increasingly being implemented through so-called "serious games", which embed information in a virtual world and create an entertaining experience through the flow of and interaction with the game (Mortara et al. 2014).
For the museum field, the consolidation and implementation of culture and information technology is often called a Virtual Museum (VM). The definition of a Virtual Museum is, however, not fixed. Since the 1990s many different definitions for a VM have been published, with significant differences depending on the contemporary status of information and communication technology (ICT) (Shaw 1991; Schweibenz 1998; Jones & Christal 2002; Petridis et al. 2005; Ivarsson 2009; Styliani et al. 2009). According to V-MusT (2011), "a virtual museum is a digital entity that draws on the characteristics of a museum, in order to complement, enhance, or augment the museum experience through personalization, interactivity and richness of content. Virtual museums can perform as the digital footprint of a physical museum, or can act independently …". Pujol & Lorente (2013) use the term VM to refer to a digital spatial environment, located in the WWW or in the exhibition, which reconstructs a real place and/or acts as a knowledge metaphor, and in which visitors can communicate, explore and modify spaces and digital or digitalized objects. Pescarin et al. (2013) evaluated VMs. They found that the impact of interactive applications on the user seems to depend on the capability of the technology to be "invisible" and to allow a range of possibilities for accessing content. To achieve this, VMs need a more integrated approach between cultural content, interfaces, and social and behavioural studies. However, VMs use different media, such as text, images, sound and animated 3D models, to act as an interactive platform for the informative supplement of the real museum visit (Samida 2002). The design of VMs varies from simple Web pages (Bauer 2001) to panorama-based virtual tours (Kersten & Lindstaedt 2012) to interactive apps for smartphones or tablets (Gütt 2010). A good example of a VM is AfricanFossils.org, which presents, as a virtual lab, a spectacular digital collection of fossils and artefacts found mostly at Lake Turkana in East Africa on the Internet (http://africanfossils.org/). The digital collection of animals, human ancestors, and ancient stone tools offers a unique tool for scholars and enthusiasts to explore and interact with the collection online. Another example of a digital collection of exhibits is Smithsonian X 3D (https://3d.si.edu/), for which various 3D capture methods are applied to digitize iconic collection objects. The idea of Smithsonian X 3D is to promote the use of 3D data for many applications.
Figure 1. Front view of the Old-Segeberg town house (left), its textured 3D model (centre) and plan of the ground floor (right).
A VM that is retrievable on the Internet would offer the possibility of making a time- and location-independent virtual visit to the museum. It would also facilitate preparation for and evaluation of an actual museum visit, as this medium stimulates the attention of the visitor while also providing further information. The great strength of a VM is the ability to utilise current ICT to supplement conventional exhibition techniques via the presentation and integration of content into the real exhibition, thus significantly contributing to a visitor's understanding.
The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a VM for the museum of the Old-Segeberg town house (Alt-Segeberger Bürgerhaus), both as an interactive tour for a Windows-based computer system and as a virtual reality application in 3D using the Virtual Reality System HTC Vive. Based on this concrete example, this contribution provides examples of how museums can fulfil the technological and media requirements of the 21st century using detailed geo data and appropriate ICT.
THE OLD-SEGEBERG TOWN HOUSE
Even at the end of the 19th century, the Old-Segeberg town house (Fig. 1), located in the city of Bad Segeberg 40 km northeast of Hamburg, was already known as the oldest house of the city. Today it is one of only a few well-preserved, small urban town houses from the beginning of the early modern period in the federal state of Schleswig-Holstein. In the newly installed council book from 1539, the building was already included in the historic rent listing. After Segeberg was almost completely destroyed in June 1534 during the Count's Feud of 1533-1536/37, the town house was re-established in 1541, firstly as a simple hall building with a single-storey, in-frame construction with brick bracing on the property that is today's Lübecker Straße No. 15. The method of construction was poor, and building materials from neighbouring ruined properties were partly recycled (e.g. in the roof framing). However, the basic structure of the framework construction was established from fresh wood (oak). It is presumed that the cellar, with walls constructed from boulders, hailed from the medieval predecessor building (Reimers & Hinrichsen 2015). In the following centuries the house was extended and converted several times.
With the support of a historian and based on historical sources (Reimers & Hinrichsen 2015), six construction phases of the building could be identified. These were each modelled in AutoCAD and are presented in chronological order in Figures 2 and 3, from left to right: (1) initial construction (1541), (2) the first extension (around 1587), (3) stall addition at the south front (before 1805), (4) extension of the living space (from 1814), (5) renovation and conversion of the front façade (ca. 1890), and (6) refurbishment of the building and conversion to a museum (1963/64). A detailed description of the six construction phases of the building is presented in Kersten et al. (2014).
After the refurbishment in 1963-64, the building housed the local museum of the city of Bad Segeberg. For the next few decades, exhibits from the petty-bourgeois living and working environment of the 19th and 20th centuries were shown in its historic rooms. After the adult education centre (Volkshochschule) Bad Segeberg took over sponsorship of the museum in 2012, it was renamed "Museum Alt-Segeberger Bürgerhaus" and successive permanent exhibitions on the topics "500 years of development of civic culture in the mirror of a 470-year-old house" and "800 years of history of the city of Segeberg - from the medieval castle settlement to the modern resort" were hosted.
PROJECT WORKFLOW
The entire museum Old-Segeberg town house was modelled in 3D so that visitors could virtually explore the exterior and interior of the building in close relationship to the various exhibits in the museum. Special focus was given to developing visitors' understanding of the complex history of the building via an interactive visualisation of the nearly 500-year-old museum building's extensive construction history. The project was roughly divided into three major phases of development (Fig. 4). Existing VMs are typically either digitized collections or virtual tours based on panoramas with little or no interaction. In this work, to further exploit the potential of digital visualisation, the serious-games approach was selected because it enabled consideration of the use of elements known from games, such as extensive free movement and the option of looking closer at surroundings or interesting objects/exhibits.
Initially, several methods of control and presentation of information were tested in a rough test model. In later stages, other planned functions were implemented to enable optimal use of the program.
MODELLING
The 3D object recording was conducted over two separate acquisition campaigns, on 21 April and 2 August 2011, using the IMAGER 5006h terrestrial laser scanner and two digital SLR cameras, a Nikon D40 and a D90, for the exterior and interior areas, respectively. The base for the development of the virtual museum was this detailed 3D recording of the exterior and interior of the building using digital photogrammetry and terrestrial laser scanning. Intensive 3D CAD modelling, using coloured point clouds from laser scanning and manually measured photogrammetric 3D points, represented the second stage of activity. For the 3D modelling and visualisation of huge point clouds in AutoCAD, the plug-in PointCloud, from the company Kubit in Dresden, Germany, was used. Using this plug-in, CAD elements, e.g. surfaces such as the half-timbered bars, could be directly digitised in the point cloud. In the oriented images, each object point was measured manually in at least four photos from different camera stations. After the image point measurements were completed for one object element, for example a window, the computed 3D object points for this element were transferred to AutoCAD. There, polylines were generated from these points, and these were later used to generate surfaces. Simple object parts were constructed using geometrical primitives (e.g. cuboid, pyramid, cylinder, cone, circle, ring, etc.), while some more complex object parts were created with the Boolean operators in CAD (union, subtraction and intersection). Kersten et al. (2014) give a detailed description of the data acquisition and modelling of the town house.
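The paper does not spell out how the 3D object coordinates are derived from the manual measurements in several oriented photos. As a strongly simplified, hypothetical illustration (all type and function names below are invented here; production photogrammetry software would instead run a rigorous least-squares adjustment based on the collinearity equations), such a point can be approximated as the least-squares intersection of the viewing rays from the different camera stations:

```cpp
// Simplified sketch: least-squares ("midpoint") intersection of viewing rays.
// Each ray starts at the projection centre of an oriented camera and points
// towards the measured image point; directions must be unit vectors.
#include <vector>

struct Vec3 { double x = 0.0, y = 0.0, z = 0.0; };
struct Ray  { Vec3 origin; Vec3 dir; };

static double det3(const double m[3][3])
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Solves  sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i  for the point p
// that minimises the sum of squared distances to all rays.
static bool intersectRays(const std::vector<Ray>& rays, Vec3& point)
{
    double A[3][3] = {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}};
    double b[3]    = {0, 0, 0};

    for (const Ray& r : rays) {
        const double d[3] = {r.dir.x, r.dir.y, r.dir.z};
        const double o[3] = {r.origin.x, r.origin.y, r.origin.z};
        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 3; ++j) {
                const double m = (i == j ? 1.0 : 0.0) - d[i] * d[j];
                A[i][j] += m;          // accumulate normal-equation matrix
                b[i]    += m * o[j];   // accumulate right-hand side
            }
        }
    }

    const double det = det3(A);
    if (det > -1e-12 && det < 1e-12)   // rays (nearly) parallel or too few rays
        return false;

    // Cramer's rule for the 3x3 system A * point = b.
    double Ax[3][3], Ay[3][3], Az[3][3];
    for (int i = 0; i < 3; ++i) {
        Ax[i][0] = b[i];    Ax[i][1] = A[i][1]; Ax[i][2] = A[i][2];
        Ay[i][0] = A[i][0]; Ay[i][1] = b[i];    Ay[i][2] = A[i][2];
        Az[i][0] = A[i][0]; Az[i][1] = A[i][1]; Az[i][2] = b[i];
    }
    point = Vec3{det3(Ax) / det, det3(Ay) / det, det3(Az) / det};
    return true;
}
```

Measuring each point in at least four photos, as described above, makes this intersection over-determined and therefore more reliable than a two-ray solution.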
Based on this reconstructed 3D model, further 3D modelling was carried out to fulfil the requirements for the development of the virtual museum. To bring the interior to life, the most important exhibits, information panels and furnishings were also modelled, and these were placed in their appropriate places within the building. Additionally, the six different historical construction stages of the Old-Segeberg town house were modelled in collaboration with the historian Nils Hinrichsen (Director of the Museum Old-Segeberg town house). The appearance of the building, in particular for the early construction phases, could only partly be derived from the historical scientific evidence collected in recent years. For example, a dendrochronological analysis of individual timbers was conducted, which determined the age of various parts of the building, and these could be assigned to the corresponding construction phases (Reimers & Hinrichsen 2015). For data reduction, the six construction phases were modelled together, i.e. objects which occur in several construction phases were created only once and stored in a database, thus allowing utilisation by the program in multiple phases. Fig. 3 shows the first four construction stages and their most distinctive changes from the same perspective. Based on terrestrial photos and Google Earth data, the environment of the building was also reconstructed to ensure that this historic building was embedded in its urban environment. As a stylistic device, the surrounding buildings were coloured grey to emphasize the museum in the visualisation.
The texture mapping of the model was carried out using the software Autodesk 3ds Max. The photos used for texturing were mainly taken locally. However, textures that were freely available online were also integrated after appropriate editing. Furthermore, bump and alpha textures were used to improve the depth effect and the appearance of details. In total, 239 textures were used for the visualisation.
GAME ENGINE UNREAL
A game engine is a software framework designed for the creation and development of video games for consoles, mobile devices and personal computers.The core functionality typically provided by a game engine includes a rendering engine for 2D or 3D graphics to display textured 3D models (spatial data), a physics engine or collision detection (and collision response) for the interaction of objects, an audio system to emit sound, scripting, animation, artificial intelligence, networking, streaming, memory management, threading, localisation support, scene graph, and may include video support for cinematics.A game engine controls the course of the game and is responsible for the visual appearance of the game rules.For the development of a virtual museum, game engines offer many necessary concepts with much functionality so that users can interact with the VM.
In the past, the development of game engines was mostly based on the development of a specific game with paid licensing to external game developers.In recent years, however, most of the large engine providers have focused more on the advancement of engines and additionally offer free access for developers.
Examples of game engines that can be used free of charge are the engine Unity from Unity Technologies, the CryEngine of the German development studio Crytek, and the engine Unreal from Epic Games (www.epicgames.com). A current overview and comparison of different game engines can be found, e.g., in O'Flanagan (2014) and Lawson (2016). The selection of the appropriate engine for a project is based on the components provided (as mentioned above), its adaptability to existing work processes, and the preferences of the (game) developer. In the framework of this project, the game engine Unreal was selected due to the opportunity to develop application and interaction logic using a visual programming language, the so-called Blueprints. Visual programming with Blueprints does not require the writing of machine-compliant source code. Thus, it provides opportunities for non-computer scientists to program all functions for a VM using graphic elements. The saving in time associated with this method of software development allows for the generation of additional scenarios and for more intensive user testing. Game engines are, therefore, very well suited to the development of virtual museums.
VIRTUAL REALITY SYSTEM HTC VIVE
HTC Vive (www.vive.com) is a virtual reality headset (with a weight of 555 grams, Fig. 6) for room-scale virtual reality. It was developed by HTC and the Valve Corporation, was released on 5 April 2016, and is currently available on the market for EUR 899. The basic components are the headset for the immersive experience, two controllers for user interactions, and two "Lighthouse" base stations for tracking the user's movement. The device uses a gyroscope, accelerometer, and laser position sensors to track the head's movements as precisely as one-tenth of a degree. Wireless controllers in each hand, with precise SteamVR tracking, enable the user to freely explore virtual objects, people and environments, and to interact with them. The Vive controller is specifically designed for VR, with intuitive control and realistic haptic feedback. The Lighthouse system uses simple photo sensors on any object that needs to be captured. To avoid occlusion problems, this is combined with two Lighthouse stations that sweep structured-light lasers across the space.
Windows-based VM implementation
The major part of the work dealt with the programming of the user movement in the museum, the information queries and the corresponding animations in the game engine "Unreal Engine". The intuitive handling of the program was an essential prerequisite, allowing easy access to and use of the VM also for inexperienced PC users (Fig. 7). The control of the software is exclusively available via mouse interaction and is based on many well-distributed positions throughout the building, which can be directly selected by clicking on a map or approached using a defined camera path through the 3D environment. At these positions, the users can freely look around in a 360° panorama. Users can also zoom in and out, and click on the available information buttons. As a special highlight of the VM, the visualisation of the building history was realized with a "model in the model" (Fig. 8, Fig. 9 left and Fig. 11 right).
Figure 8. Graphical User Interface for the building history with an animated view into the interior of a construction phase
At one station of the virtual tour, which is located in front of the model, the user can open this model to display and animate, upon request, each of the building states including all related information. The user can look at the building model from all sides using virtual rotation. In addition, with a mouse click, the roof can be removed, the individual floors can be driven apart by animation, and the appearance of the building interior in the respective building phase is shown (Fig. 5). Each room can then be selected to display information about the development of building use. Furthermore, it is possible to start an animated transition from the previous construction phase, which shows and describes the structural changes to the next state for each construction phase. These animations required a subdivision of all 3D models into 387 smaller objects in order to precisely control the movement of the objects for each animation. In the animation, the user is guided by predefined camera movements to appropriate viewpoints.
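The interaction logic of the application was implemented with Unreal Blueprints; purely as an illustration of the underlying idea (this is not the project's code, and every class, property and function name below is hypothetical), a construction-phase switcher in Unreal C++ could toggle the visibility of the building parts assigned to each phase:

```cpp
// Hypothetical sketch of a construction-phase switcher in Unreal C++; the
// museum application itself realises this logic with Blueprints.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "PhaseSwitcher.generated.h"

USTRUCT()
struct FConstructionPhase
{
    GENERATED_BODY()

    // Building parts visible in this phase; parts shared by several phases
    // are referenced in each of them (the geometry itself is stored only once).
    UPROPERTY(EditAnywhere)
    TArray<AActor*> Parts;
};

UCLASS()
class APhaseSwitcher : public AActor
{
    GENERATED_BODY()

public:
    // The six construction phases (1541, ~1587, before 1805, from 1814,
    // ca. 1890, 1963/64), assigned in the editor.
    UPROPERTY(EditAnywhere)
    TArray<FConstructionPhase> Phases;

    // Show exactly one phase: hide everything first, then reveal the parts of
    // the selected phase so that elements shared across phases stay visible.
    UFUNCTION(BlueprintCallable)
    void ShowPhase(int32 PhaseIndex)
    {
        for (const FConstructionPhase& Phase : Phases)
            for (AActor* Part : Phase.Parts)
                if (Part) Part->SetActorHiddenInGame(true);

        if (Phases.IsValidIndex(PhaseIndex))
            for (AActor* Part : Phases[PhaseIndex].Parts)
                if (Part) Part->SetActorHiddenInGame(false);
    }
};
```

In the actual application, the 387 smaller objects mentioned above additionally carry animation data, so that transitions between phases can be played as animations rather than switched instantly.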
Menus and information boards, which can be opened during the individual tour using the info button placed next to the selected objects, were created for the exhibits. These menus include brief explanations and usually a figure that can be enlarged via mouse click. Some information is directly imparted using detailed point-of-view shots in the 3D environment. In such cases, cameras were distributed throughout the museum at appropriate points. These can also be selected using the info button.
Finally, comfort functions such as tool tips, an overview map, and a help menu were created. For quality assurance, the VM program was tested by several people with different levels of PC experience in order to subsequently fine-tune the software details.
Virtual Reality Application with HTC Vive
Based on the modelled and textured 3D data in the game engine, an immersive virtual reality visit was developed utilizing the new Virtual Reality System HTC Vive. The visit offers the possibility of experiencing the museum and the history of the building from a first-person point of view with a natural interaction scheme. For this purpose, the predefined viewing positions of the desktop version have been replaced by free movement of the user. To bridge long distances in the virtual building, a teleportation function is available for the navigation of the user (Fig. 10).
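As with the desktop version, the VR interaction was built with Blueprints; the following hypothetical C++ fragment only sketches the idea behind such a point-and-teleport mechanic (the helper function and its parameters are invented for illustration and are not the project's implementation):

```cpp
// Hypothetical sketch of a point-and-teleport helper for an HTC Vive pawn.
#include "CoreMinimal.h"
#include "Engine/World.h"
#include "GameFramework/Pawn.h"

// Trace from the hand controller along its pointing direction; if the beam
// hits scene geometry, move the pawn there while keeping its current height.
static bool TryTeleport(APawn* Pawn, const FVector& ControllerPos, const FVector& ControllerDir)
{
    if (!Pawn || !Pawn->GetWorld())
        return false;

    FHitResult Hit;
    const FVector TraceEnd = ControllerPos + ControllerDir * 1000.0f; // ~10 m reach

    if (!Pawn->GetWorld()->LineTraceSingleByChannel(Hit, ControllerPos, TraceEnd, ECC_Visibility))
        return false;

    FVector Target = Hit.ImpactPoint;
    Target.Z = Pawn->GetActorLocation().Z; // keep the user's standing height
    return Pawn->SetActorLocation(Target);
}
```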
The users' hands can interact with various components of the virtual world to control the building's presentation. Object selection and menu operation are enabled via a "laser beam", which is controlled by the motion controller (Fig. 11 left). The highlight of the virtual museum visit is the animation of the architectural building history, which is vividly presented through the HTC Vive glasses directly in front of the visitor's eyes on the basis of the 3D models (Fig. 6). The different historically confirmed construction phases are demonstrated visibly in 3D, and the changes are illustrated by transition animations.
CONCLUSION AND OUTLOOK
This contribution described the successful development and implementation of a Virtual Museum for the museum Old-Segeberg town house with two options: a) an interactive software application for Windows-based computer systems and b) a virtual reality application for the VR system HTC Vive. Many visitors and participants have tried and tested the VM using both the Windows-based computer system and the VR system HTC Vive. The Old-Segeberg town house can look back on 475 years of architectural development, identified and explained in the form of animations, which are the highlights of the virtual museum. The developed computer program contains 13 guided viewpoints distributed at important positions in the museum and 52 info menus with detailed information for visiting the virtual museum. The program has a size of 500 MByte and is executable as a standalone program on Windows operating systems. It was developed in the game engine Unreal, which not only offers complex visualisations of 3D objects, but also provides every programming tool necessary for creating extensive interactions between the user and the environment. To make this program accessible to visitors, a PC terminal will be provided in the Old-Segeberg town house in the next museum season. In addition to the current exhibition, it allows a multimedia interaction with the history of the city and the building. Thus, based on this building, it makes an important educational contribution concerning the urban development of Bad Segeberg and about 500 years of housing tradition in Schleswig-Holstein. Developed entirely in 3D, the VM is unique in this form in Germany as an informative component of a museum.
The VR application using the HTC Vive also gives the opportunity to check the geometric quality of the modelled 3D data during the VR visualisation. Above all, walking through the virtual museum of the Old-Segeberg town house, collecting all of the information in the exhibition and seeing all of the different animations, which explain the construction changes of the building over the centuries, is a very immersive experience.
The emerging technology of Augmented Reality also offers great potential by combining the advantages of a VM with the real museum visit. It enables the museum visit in situ, using a smartphone or tablet for the digital superimposition of the current state with a historic building state, for example at sites of (former) historic ground (Canciani et al. 2016).
In general, VR applications or systems can also be used outside the museum context - for example, for product visualisations, for facility management, for trade fairs, or for tutorials of workflows (e.g. for the fire brigade).
Figure 4. The three major phases of project development: A) concept creation and testing of all planned functions, B) modelling and texturing of the museum, its content and the models for the visualisation of architectural history, and C) integration into a program and generation of interaction possibilities (programming).
Figure 6. The virtual reality system HTC Vive in use. The screen in the background shows the same sequence as appears to the user in the VR glasses (right).
The technical specifications of the HTC Vive are summarized as follows:
a) two screens with a field of view of approximately 110 degrees, one per eye, each having a display resolution of 1080x1200 with a refresh rate of 90 Hz,
b) more than 70 sensors, including a MEMS (microelectromechanical systems) gyroscope, accelerometer and laser position sensors,
c) a 4.6 by 4.6 m tracking space for user operation, using two "Lighthouse" base stations that track the user's movement with sub-millimetre precision by emitting pulsed IR lasers,
d) SteamVR running on Microsoft Windows as platform/operating system,
e) controller input by SteamVR wireless motion-tracked controllers, and
f) a front-facing camera for looking around in the real world to identify any moving or static object in a room as part of a safety system.
The following technical specification is required as the minimum for the computer to be used: processor Intel™ Core™ i5-4590 or AMD FX™ 8350, graphics card NVIDIA GeForce™ GTX 1060 or AMD Radeon™ RX 480, 4 GB RAM, video output 1x HDMI 1.4 connection or DisplayPort 1.2 or newer, 1x USB 2.0 connection or newer, operating system Windows™ 7 SP1, Windows™ 8.1 or more up to date, or Windows™ 10.
Figure 7. Graphical User Interface for the virtual museum tour
Figure 9. Impressions from the 3D environment of the virtual museum Old-Segeberg town house, including the model of the building (left), the stairways to the attic (centre) and a view of the attic (right) | 5,394.2 | 2017-02-23T00:00:00.000 | [
"Art",
"Computer Science",
"History"
] |
Efficient Production of 3′-Sialyllactose by Single Whole-Cell in One-Pot Biosynthesis
Sialyllactose (SL) is one of the most important acidic oligosaccharides in human milk, and it plays an important role in the health of infants. In this work, an efficient multi-enzyme cascade was developed in a single whole cell to produce 3′-SL. We constructed two compatible plasmids with double cloning sites to co-express four genes. Different combinations were assessed to verify the optimal catalytic ability. Then, the conversion temperature, pH, and the stability under the optimal temperature and pH were investigated. Moreover, the optimal conversion conditions and surfactant concentration were determined. By using the optimal conditions (35 °C, pH 7.0, 20 mM polyphosphate, 10 mM cytidine monophosphate (CMP), 20 mM MgCl2), 25 mL and 4 L conversion systems were carried out to produce 3′-SL. Similar results were obtained between the different conversion volumes, with the maximum production of 3′-SL reaching 53 mM from 54.2 mM of sialic acid (SA) in the 25 mL system and 52.8 mM of 3′-SL from 53.8 mM of SA in the 4 L system. These encouraging results demonstrate that the developed single whole-cell multi-enzyme system exhibits great potential and economic competitiveness for the manufacture of 3′-SL.
Introduction
Breastfeeding is the gold standard of infant nutrition, and human milk oligosaccharides (HMOs) are unique and important bioactive ingredients in human milk [1,2]. SL is one of the most abundant and representative acidic oligosaccharides, accounting for about 10-30% of total HMOs [3]. SL has important physiological functions in human health, such as gut maturation, resistance to gut pathogens, and prebiotic effects [4,5]. According to the position of the glycosidic bond between SA and lactose, SL is divided into 3′-SL and 6′-SL [6]. At present, other abundant HMOs have already been used in infant formula [7], but SL has not been widely used due to technical limitations. Therefore, research on the efficient preparation of SL has important application significance.
Enzyme-catalyzed synthesis is one of the methods for preparing 3′-SL and has a wide range of application prospects [8]. Two kinds of enzymes are involved in the catalytic reaction: trans-sialidase [9,10] and sialyltransferase [11,12]. The former transfers the sialic acid of the donor to lactose to form 3′-SL. This catalytic reaction does not require an additional energy donor, but trans-sialidase can only recognize α-2,3-bound sialic acid of the donor [13], resulting in low substrate utilization and low yield. In contrast, sialyltransferase can also transfer the sialic acid monomer to lactose, but the sialic acid must be in an activated form, e.g., cytidine-5′-monophospho-N-acetylneuraminic acid (CMP-sialic acid) [14]. Therefore, using SA as a substrate requires CMP-sialic acid synthetase for the preparation of the activated intermediate. Several research works have tried to use sialyltransferase to produce 3′-SL. Endo [15] used five types of permeabilized cells to prepare 3′-SL. Although 52 mM of 3′-SL was obtained after 11 h of reaction and the actual conversion rate was rather high, the large number of cell additions and cumbersome operations limited further industrial applications of 3′-SL. Moreover, the fusion expression of CMP-sialic acid synthetase and sialyltransferase was used to prepare 3′-SL [16]. After 7 days of reaction in a 2.2 L system, 48.6 mM of 3′-SL was obtained from 165 mM of SA. It can be clearly seen that the reaction time was long and the conversion rate was low. Additionally, the more expensive substrate phosphoenolpyruvate (PEP) was used to regenerate cytidine triphosphate (CTP) from CMP. Therefore, a fast, efficient, and low-cost method for preparing 3′-SL is an important prerequisite for industrialization.
CTP is an important substrate for preparing CMP-sialic acid, but its high price makes it unrealistic to use directly as a substrate for the production of 3′-SL. Several CTP regeneration pathways have been proposed. Sun-Gu Lee [17] used CMP kinase and acetate kinase as the catalysts to produce CTP from CMP and acetyl phosphate. Wang [18] utilized CMP kinase, polyphosphate kinase, and nucleoside-diphosphate kinase to synthesize CTP from CMP, adenosine triphosphate (ATP), and polyphosphate. However, these CTP regeneration systems require either more expensive substrates or more types of enzymes. In contrast, because the substrates used are cheap and easily available, the combination of CMP kinase and polyphosphate was considered to be the most promising method [19] and was adopted in this study.
In this study, the previously constructed multi-cell enzyme system [20] was reconstructed into a single recombinant strain. Two compatible plasmids were used to complete the generation of 3′-SL and the regeneration of CTP (Figure 1). More importantly, the transformation conditions were optimized, and the optimal conversion system was applied in 25 mL and 4 L systems to investigate the production of 3′-SL.
Materials
3′-SL was purchased from Carbosynth (Carbosynth, China), SA was obtained from CASOV (Wuhan, China), and CMP was purchased from Huaren (Wuhu, China). All other chemicals used in the study were commercially available and were of analytical grade. Molecular biology reagents used in this study are listed in Table S1.
Plasmids and Strains
The pET-22b(+) plasmids harboring the genes encoding CMP-sialic acid synthetase, α-2,3-sialyltransferase, CMP kinase, and polyphosphate kinase were previously constructed [20]. The gene of CMP-sialic acid synthetase was amplified with NcoI and NotI restriction sites by using plasmid pET-CSS as the template. Then, NcoI and NotI were used to digest the amplified fragments and pCOLADuet-1. The double-digested fragments were recovered from agarose gel and ligated by DNA ligase. The ligation products were transformed into competent cells and spread on plates containing kanamycin for KmR screening. The positive transformants were confirmed by PCR verification and sequencing. After the verification, the plasmid pCOLADuet-CSS was obtained. The gene of α-2,3-sialyltransferase was also cloned and inserted into the second multiple cloning site of pCOLADuet-CSS between the restriction sites of NdeI and XhoI to achieve the recombinant expression plasmid pCOLADuet-CSS-ST. Chromosomal gene disruption of the host strain was carried out with the λ-Red homologous recombination method [21], with a slight modification. Transformants carrying pKD46 were grown in 5 mL LB cultures with ampicillin and L-arabinose at 30 °C, and then electrocompetent cells were produced by centrifuging and washing with ice-cold 10% glycerol. PCR products with the FRT-flanked resistance gene were gel purified and suspended in ddH2O. Electroporation was conducted by using an electroporator (MicroPulser, Bio-Rad). Shocked cells were added to 1 mL LB culture, incubated at 37 °C for 2 h, and then spread onto agar to select KmR transformants. After PCR verification, positive mutants were transformed with pCP20 and selected at 30 °C. The colony was cultured at 42 °C and tested for the loss of KmR. Strain E. coli BL21 Star (DE3) ∆lacZ was previously constructed [20]. The gene cluster nanETKA in the E. coli BL21 Star (DE3) ∆lacZ genome was disrupted by homologous recombination. E. coli BL21 Star (DE3) ∆lacZ nanETKA was used as the expression strain for protein expression. The plasmids and strains are shown in Table 1.
Optimization of Reaction Conditions
The optimization experiments were carried out in 100 mL shake flasks with 25 mL of reaction mixture. The effect of temperature on the 3′-SL content was compared in the range of 25-45 °C. The bioconversion at different pH values ranging from 5.0 to 10.0 was conducted, using 50 mM of sodium acetate buffer (pH 5.0-6.0), 50 mM of Tris-HCl (pH 7.0-8.0), and 50 mM of glycine-NaOH (pH 9.0-10.0). The catalytic stability of the bioconversion system at the optimum temperature and pH was also evaluated.
The optimal addition of the cell extracts was investigated by varying the wet cell weight from 10 to 50 g/L. In addition, the effects of different concentrations of CMP, MgCl2, and polyphosphate on the content of 3′-SL were assessed. The substrate concentrations were set at 0 mM, 5 mM, 10 mM, 20 mM, 40 mM, and 60 mM.
Furthermore, to simplify the catalytic process, the effects of the surfactants Triton X-100, Tween-20, sodium dodecyl sulfate (SDS), and cetyl trimethyl ammonium bromide (CTAB) on 3′-SL biosynthesis were investigated. Finally, the effect of the Triton X-100 concentration, from 0.2 to 1.6%, on the 3′-SL content was studied. All the other components were fixed at 50 mM SA, 60 mM lactose, 20 mM polyphosphate, 10 mM CMP, and 20 mM MgCl2. The samples were taken after 2 h, and the 3′-SL content was detected by HPLC.
Enzyme Activity Assays
The activity of CSS was measured with 0.2 M Tris-HCl (pH 8.5), 20 mM MgCl2, 5 mM SA, 5 mM CTP, and an enzyme sample. The reaction was performed at 37 °C for 10 min.
The activity of ST was measured with 0.2 M Tris-HCl (pH 8.5), 20 mM MgCl2, 5 mM SA, 5 mM CTP, and 10 mM lactose. Both CSS and ST were used to start the reaction, and the reaction was performed at 37 °C for 30 min.
One unit of enzyme activity was defined as the amount that catalyzes the formation of 1 µmol target product per min.
Production of 3′-SL
The reaction was performed with 25 mL and 4 L of reaction mixture in a 100 mL shake flask and a 5 L fermenter, respectively. The reaction mixture contained 50 mM SA, 60 mM lactose, 20 mM MgCl2, 20 mM polyphosphate, 20 mM CMP, 40 g/L recombinant cells, and 0.8% (v/v) Triton X-100. The reaction in the 25 mL system was incubated at 35 °C in a water bath with a magnetic stirrer, and in the 4 L system the temperature was automatically controlled at 35 °C. The reaction system was maintained at pH 7.0 using 4 N NaOH. A reaction sample was taken every 2 h, and the concentrations of SA and 3′-SL were analyzed by HPLC. All the experiments above were performed in triplicate.
Analytical Method
The concentrations of SA and 3′-SL were analyzed by HPLC (Shimadzu, Kyoto, Japan) equipped with a UV detector, and the detection wavelength was 210 nm. A TSK-Gel Amide-80 column was used, and 10 mM ammonium formate (pH 4.0) and acetonitrile at a ratio of 30:70 were used as the mobile phase. The flow rate was set at 1.0 mL/min. Quantitative analysis of CDP and ATP was performed using HPLC (LC-16, Shimadzu, Kyoto, Japan) equipped with a UV detector at 271 nm and a Zorbax C18 column. The mobile phase was 0.6% phosphate-triethylamine (pH 6.6) and methanol at a ratio of 89:11. CMP-NeuAc was detected at 210 nm, and the mobile phase was 20 mM phosphate buffer (pH 8.0). The samples were analyzed at 30 °C at a flow rate of 0.6 mL/min. All tested samples were boiled for 2 min and centrifuged at 12,000 rpm for 5 min. The supernatant was filtered with a 0.22-micron filter membrane and diluted to a suitable concentration before testing. The relative content of 3′-SL was calculated as the 3′-SL content of each sample divided by the highest content of 3′-SL in the experimental group, × 100%.
Statistical Analysis
The relative content of 3′-SL, the concentration of 3′-SL, and the relative activity were evaluated statistically. Statistical significance (p < 0.05) was evaluated according to the one-sample t-test.
Single-Cell Construction
To investigate how the overexpression of css, st, cmk, and ppk affects 3′-SL synthesis, two compatible plasmids, pETDuet-1 and pCOLADuet-1, were used to express the four genes in a recombinant E. coli strain under the promoter from bacteriophage T7. As shown in Figure 2, based on the different positions of the genes in the double cloning sites, four recombinant strains were constructed. The best combination was E. coli/pCSPC, with a catalytic 3′-SL content of 22.8 ± 1.7 mM. In contrast, the worst combination was E. coli/pSCCP, which resulted in 22.2 ± 2.2 mM of 3′-SL. No obvious difference in 3′-SL content was observed between the different expression systems (p > 0.05). Because strain E. coli/pCSPC had the highest average value, it was selected for the subsequent study. This shows that the arrangement of the genes in the plasmids containing the double cloning sites did not significantly affect protein expression.
Optimization of Biotransformation Temperature and pH
Optimization of biotransformation was conducted to increase 3′-SL production. As seen in Figure 3a, when the temperature was lower than 35 °C, with the increase in reaction temperature, 3′-SL showed a gradually increasing trend. 3′-SL decreased with the temperature over 40 °C, and there was no significant difference in 3′-SL content between 35 and 40 °C (p > 0.05). Considering energy consumption, 35 °C was chosen as the optimal temperature.
The biotransformation showed the maximum production of 3′-SL at pH 7.0 ( Figure 3b). When the pH of the biotransformation system was adjusted to 5.0, the relative content of 3′-SL dropped sharply. When the pH reached 10.0, the relative concentration of 3′-SL was only 2% of the highest concentration (p < 0.01). Therefore, an appropriate pH is essential to 3′-SL formation, and a higher or lower pH could seriously affect the content of the product.
In addition, the catalytic stability at different temperatures was also investigated. As shown in Figure 3c, 4 °C and 35 °C represent the cell extracts that were placed at pH 7.0 at 4 °C and 35 °C for 8 h, respectively. After the cell extracts were placed at pH 7.0 and 4 °C for 8 h, no significant loss of catalytic activity was observed compared with 0 h (p > 0.05). However, under the condition of pH 7.0 and 35 °C, the catalytic activity of the cell extracts was gradually lost with the time increase. At 6 h, the relative catalytic activity was only 3% of the initial catalytic activity (p < 0.01). This shows that the multi-enzyme catalytic system has a poor thermal stability.
Next, we investigated which enzyme or enzymes caused poor thermal stability. To study the thermal stability of a single enzyme, the expression host containing a single enzyme was used. pET-CSS, pET-ST, pET-CMK, and pET-PPK recombinant plasmids were constructed in the previous study [20] and transformed into four single strains, which formed four single-gene expression strains. As depicted in Figure 3d, CSS, ST, CMK, and PPK represent the relative activities of CMP-sialic acid synthetase, sialyltransferase, CMP kinase, and polyphosphate kinase, respectively. Using the single-gene expression strains to investigate the thermal stability of each enzyme, it was found that the thermal stability of different enzymes changed at the optimal catalytic temperature. Among them, PPK has the best thermal stability, and there was no obvious loss of enzyme activity after 8 h of incubation compared with 0 h (p > 0.05). CMK has the worst thermal stability. The catalytic activity was only 11% of the initial catalytic activity after 8 h of incubation (p < 0.01), while the enzyme activity of CSS was only 30% of the initial activity (p < 0.01). Therefore, the poor stability of more than one enzyme leads to the poor stability of the entire catalytic system.
Optimization of Cell Extracts, Polyphosphate, CMP, and MgCl2
Different concentrations of cell extracts were assessed to determine the optimal cell extract addition (Figure 4a). With the number of cell extracts increased, the 3′-SL content increased accordingly, but there was no significant difference in 3′-SL content at 40 g/L and 48 g/L cell extract additions (p > 0.05); therefore, it was considered that the optimal amount of cell extracts was 40 g/L.
Polyphosphate provides phosphate for CTP regeneration, meaning an appropriate concentration of polyphosphate helps to promote the accumulation of 3′-SL. When the amount of polyphosphate added exceeded 20 mM, the 3′-SL content decreased significantly compared with the 20 mM addition (p < 0.01). This may be due to excessive poly- The biotransformation showed the maximum production of 3 -SL at pH 7.0 (Figure 3b). When the pH of the biotransformation system was adjusted to 5.0, the relative content of 3 -SL dropped sharply. When the pH reached 10.0, the relative concentration of 3 -SL was only 2% of the highest concentration (p < 0.01). Therefore, an appropriate pH is essential to 3 -SL formation, and a higher or lower pH could seriously affect the content of the product.
In addition, the catalytic stability at different temperatures was also investigated. As shown in Figure 3c, 4 • C and 35 • C represent the cell extracts that were placed at pH 7.0 at 4 • C and 35 • C for 8 h, respectively. After the cell extracts were placed at pH 7.0 and 4 • C for 8 h, no significant loss of catalytic activity was observed compared with 0 h (p > 0.05). However, under the condition of pH 7.0 and 35 • C, the catalytic activity of the cell extracts was gradually lost with the time increase. At 6 h, the relative catalytic activity was only 3% of the initial catalytic activity (p < 0.01). This shows that the multi-enzyme catalytic system has a poor thermal stability.
Next, we investigated which enzyme or enzymes caused poor thermal stability. To study the thermal stability of a single enzyme, the expression host containing a single enzyme was used. pET-CSS, pET-ST, pET-CMK, and pET-PPK recombinant plasmids were constructed in the previous study [20] and transformed into four single strains, which formed four single-gene expression strains. As depicted in Figure 3d, CSS, ST, CMK, and PPK represent the relative activities of CMP-sialic acid synthetase, sialyltransferase, CMP kinase, and polyphosphate kinase, respectively. Using the single-gene expression strains to investigate the thermal stability of each enzyme, it was found that the thermal stability of different enzymes changed at the optimal catalytic temperature. Among them, PPK has the best thermal stability, and there was no obvious loss of enzyme activity after 8 h of incubation compared with 0 h (p > 0.05). CMK has the worst thermal stability. The catalytic activity was only 11% of the initial catalytic activity after 8 h of incubation (p < 0.01), while the enzyme activity of CSS was only 30% of the initial activity (p < 0.01). Therefore, the poor stability of more than one enzyme leads to the poor stability of the entire catalytic system.
Optimization of Cell Extracts, Polyphosphate, CMP, and MgCl 2
Different concentrations of cell extracts were assessed to determine the optimal cell extract addition (Figure 4a). With the number of cell extracts increased, the 3 -SL content increased accordingly, but there was no significant difference in 3 -SL content at 40 g/L and 48 g/L cell extract additions (p > 0.05); therefore, it was considered that the optimal amount of cell extracts was 40 g/L.
Polyphosphate provides phosphate for CTP regeneration, meaning that an appropriate concentration of polyphosphate helps to promote the accumulation of 3′-SL. When the amount of polyphosphate added exceeded 20 mM, the 3′-SL content decreased significantly compared with the 20 mM addition (p < 0.01). This may be due to excessive polyphosphate forming a chelate with magnesium ions, thereby affecting the formation of 3′-SL. An amount of 20 mM of polyphosphate was determined as the optimal addition according to the different additions of polyphosphate (Figure 4b).
Additionally, CMP is an important substrate for CTP regeneration, and the concentration of CMP has an important impact on the 3′-SL yield and production cost. As shown in Figure 4c, when the concentration of CMP was lower than 10 mM, the content of 3′-SL increased with the increase in CMP concentration. When the concentration of CMP was higher than 10 mM, the yield of 3′-SL gradually decreased as the CMP concentration increased. Additionally, the content of 3′-SL at the 10 mM CMP addition was significantly higher than that at 5 mM and 20 mM (p < 0.05). Therefore, the optimal CMP concentration was 10 mM. It can be seen that about 18 mM of 3′-SL was produced in the conversion system without adding CMP. This may be due to the use of RNA degradation products in the cell extracts for CTP synthesis, which is consistent with the result of the multi-bacterial catalytic system [20]. Due to the reduction in the amount of cell extracts, a lower CMP addition was used, and a higher 3′-SL catalytic yield was obtained, which was of great significance for reducing the cost of catalysis.
Different concentrations of MgCl2 were set to verify the effect on the 3′-SL content (Figure 4d). The 3′-SL content first increased and then decreased with the increase in the MgCl2 concentration. When the MgCl2 concentration was 20 mM, the 3′-SL content reached the maximum value, which was obviously higher than that at 10 mM and 40 mM (p < 0.05). The results indicate that the optimal MgCl2 concentration was 20 mM.
Optimization of Cell Permeability
Surfactants can change cell permeability and increase the transfer rate of intracellular and extracellular substances [24]. Triton X-100 gave the highest 3′-SL production, which was 44% higher than the control group (p < 0.01) and 19% higher than Tween-20 (p < 0.01) (Figure 5a). However, the 3′-SL content of the CTAB and SDS groups was lower than that of the control group, which may be due to protein denaturation caused by these two surfactants. Moreover, the production of 3′-SL treated by Triton X-100 was still lower than that in the ultrasonic group. This may be due to the low amount of surfactant added, which meant that the cells were not fully permeabilized. Therefore, Triton X-100 was selected as the optimal surfactant for further investigation.
The 3′-SL content gradually increased as the content of Triton X-100 increased; however, the 3′-SL content hardly increased when the amount of Triton X-100 was higher than 0.8%. The concentration of 3′-SL at the 1.2% or 1.6% Triton X-100 additions was not significantly higher than that at the 0.8% addition (p > 0.05) (Figure 5b). Therefore, the optimal concentration of Triton X-100 was determined to be 0.8%. Compared with the ultrasonic group, the 3′-SL content of the experimental group with Triton X-100 was 97% of its 3′-SL content, indicating that the addition of Triton X-100 did not produce fully permeable whole cells. Overall, the addition of Triton X-100 was able to obtain a higher content of 3′-SL than the control group and could therefore be used to improve cell permeability and produce 3′-SL.
Production of 3′-SL
Based on the optimized reaction conditions, conversion systems of different volumes were conducted to verify the production of 3′-SL. In the 25 mL conversion system, the 3′-SL content increased with time, while the substrate content gradually decreased (Figure 6a). After an 8-h reaction, the substrate was completely consumed, and the 3′-SL content reached its maximum. An amount of 53.0 mM of 3′-SL was achieved from 54.2 mM of SA, which resulted in a high substrate conversion rate of 97.9%. According to the reaction time and product yield, the productivity rate was 6.63 mM/h. Since no obvious substrate loss was found, the conversion rate was the actual conversion rate calculated from the substrate content.
According to the result of the 25 mL conversion system, the 4 L conversion system in the 5 L fermenter was also investigated. The result was similar to that of the 25 mL conversion system, in that the reaction could completely convert the substrate into 3′-SL. An amount of 52.8 mM of 3′-SL was achieved from 53.8 mM of SA, and the substrate conversion rate was 98.1%. The HPLC chromatogram of the 4 L conversion system is shown in Figure S5. It is obvious from the chromatogram that, with the increase in catalytic time, the peak of SA gradually decreased until it disappeared, while the peak of 3′-SL gradually increased. Although the expansion of the reaction volume had no obvious effect on the 3′-SL yield and substrate conversion rate, the time for the complete conversion of the substrate in the 4 L conversion system was 2 h longer than in the 25 mL conversion volume, which resulted in a productivity rate of 5.28 mM/h. This may be because the stirring speed has a certain influence on the catalytic efficiency in a larger conversion system. In conclusion, although the 25 mL and 4 L conversion systems have similar catalytic yields and substrate conversions, the 25 mL catalytic system has obvious advantages in terms of the productivity rate.
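As a quick consistency check of the productivity rates reported above (taking the 25 mL run as 8 h and, per the statement above, the 4 L run as roughly 2 h longer, i.e., about 10 h):

$$\frac{53.0\ \mathrm{mM}}{8\ \mathrm{h}} \approx 6.63\ \mathrm{mM/h}, \qquad \frac{52.8\ \mathrm{mM}}{10\ \mathrm{h}} \approx 5.28\ \mathrm{mM/h}$$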
At present, the highest yield of 3′-SL reported in the literature is 52 mM [15]. Although the catalytic production in this work did not increase significantly beyond that value, an actual sialic acid conversion of 98.1% was obtained in a 4 L conversion system, which represents the largest conversion volume and the highest substrate conversion. These results have important guiding significance for the scale-up of 3′-SL production.
Discussion
Enzyme-catalyzed synthesis systems are increasingly being used to produce fine chemicals or pharmaceutical products [25][26][27]. Several routes have been developed for biosynthesizing 3′-SL, and sialyltransferase can be used to produce relatively high-content products, which was considered to be one of the potential catalysts for industrialization. Considering the specificity of substrate utilization and the economics of product preparation, the utilization of CMP-sialic acid synthetase and the construction of a cofactor regeneration system for the expensive substrate were considered to be reasonable solutions. A similar CTP regeneration system, which also contained CMP kinase and polyphosphate kinase, was used in the enzymatic synthesis of 3′-SL [28]. That work used multi-cell and multi-enzyme coupling catalysis; however, the yield of 3′-SL was low and an expensive cofactor still needed to be added. In contrast, the multi-enzyme system in a single whole cell simplified the manufacturing process and exhibited the potential for industrial manufacture.
In our study, efficient multi-enzyme whole-cell catalysis for the production of 3′-SL in engineered E. coli was developed. Compatible plasmids were used to co-express the key enzymes to increase the production of the target product, an approach widely used in multi-enzyme expression [29,30]. Based on the optimized conversion conditions, transformation experiments at different volumes were performed, and a relatively high yield and high conversion rate were obtained. However, the thermal stability of CMP kinase and CMP-sialic acid synthetase was poor, which suggests that improving the thermal stability of these enzymes may increase the catalytic efficiency and reduce the reaction time. Currently, the thermal stability modification of the enzymes is in progress, and the catalytic verification experiments will be carried out.
Conclusions
In summary, a multi-enzyme single whole-cell route for the production of 3′-SL was developed in this study. The catalytic process can be divided into two parts: product generation and cofactor regeneration. The product generation part includes CMP-sialic acid synthetase and sialyltransferase, which were combined to generate 3′-SL from SA. The cofactor regeneration part was designed for CTP formation from CMP and includes CMP kinase and polyphosphate kinase. To obtain the optimal catalytic effect, the conversion conditions were optimized. In the 25 mL and 4 L conversion systems, 53.0 mM and 52.8 mM of 3′-SL were obtained, which led to conversion rates of 97.9% and 98.1%, respectively. This multi-enzyme system in a single whole-cell process provides the foundation for the industrial-scale production of 3′-SL. | 7,841.4 | 2021-05-26T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Cutaneous Metastasis after Surgery, Injury, Lymphadenopathy, and Peritonitis: Possible Mechanisms
Cutaneous metastases from internal malignancies are uncommon. Umbilical metastasis, also known as Sister Joseph nodule (SJN), develops in patients with carcinomatous peritonitis or superficial lymphadenopathy, while non-SJN skin metastases develop after surgery, injury, and lymphadenopathy. In this review, the possible mechanisms of skin metastases are discussed. SJNs develop by the contiguous or lymphatic spread of tumor cells. After surgery and injury, tumor cells spread by direct implantation or hematogenous metastasis, and after lymphadenopathy, they spread by extranodal extension. The inflammatory response occurring during wound healing is exploited by tumor cells and facilitates tumor growth. Macrophages are crucial drivers of tumor-promoting inflammation, which is a source of survival, growth and angiogenic factors. Angiogenesis is promoted by the vascular endothelial growth factor (VEGF), which also mediates tumor-associated immunodeficiency. In the subcutaneous tissues that surround metastatic lymph nodes, adipocytes promote tumor growth. In the elderly, age-associated immunosuppression may facilitate hematogenous metastasis. Anti-VEGF therapy affects recurrence patterns but at the same time, may increase the risk of skin metastases. Immune suppression associated with inflammation may play a key role in skin metastasis development. Thus, immune therapies, including immune checkpoint inhibitors reactivating cytotoxic T-cell function and inhibiting tumor-associated macrophage function, appear promising.
Introduction
The skin is a complex organ consisting of the epidermis, dermis, and skin appendages, including the hair follicle and sebaceous gland [1]. Cancer metastasis to the skin is uncommon and the incidence of skin metastases ranges from 1.0% to 4.6% in patients with internal malignancies [2,3]. The incidence of skin metastasis for different internal malignancies is variable: the most common primary tumors developing skin metastasis are breast and ovary in women and lung and colon in men [2].
Skin metastases usually develop at the umbilicus, at surgical scars, including laparoscopic port sites, and in the vicinity of metastatic lymph nodes. Skin metastases are often a late manifestation of the disease; however, in certain cases they may be the first sign of internal malignancies such as lung, renal, and ovarian cancers [47]. In this review, the possible mechanisms of skin metastases are discussed according to the site of appearance and the preceding medical conditions. Moreover, the roles of wound healing, inflammation, and adipose tissue in the development of skin metastases are also discussed. Skin involvement that develops as direct invasion from underlying primary tumors, such as in breast or prostate cancer, is not discussed in this review.
Patterns of Skin Metastases
Based on the site of the lesion and the history of previous surgery, skin metastases are classified as metastatic umbilical tumors, known as Sister (Mary) Joseph nodules (SJNs), and non-SJN skin metastases. SJNs do not include umbilical metastases that develop as a port-site recurrence after laparoscopic surgery. Non-SJN skin metastases can further be divided into three major patterns based on the preceding medical procedures or conditions: skin metastasis after surgery, after injury, and after lymphadenopathy (Table 1).
SJN
SJNs usually develop in patients with gastrointestinal and gynecological cancers [4][5][6][7][8][9][10]. SJN refers to a metastatic cancer of the umbilicus and is named after Sister Joseph, a nurse who frequently assisted Dr. William Mayo at St. Mary's Hospital in Rochester, Minnesota, USA. She was the first person to observe that a firm umbilical nodule was often associated with intra-abdominal cancer. In a study evaluating 407 patients with SJNs, the most common origins were the stomach (23%), ovary (17%), colon and rectum (15%), pancreas (9%), and uterus (6%) [48]. An SJN develops as either the first sign or a sign of recurrence in patients with peritoneal dissemination. Even when peritoneal dissemination responds completely to chemotherapy, an SJN can develop without accompanying recurrences [4]. In addition, SJNs can develop in patients with extensive involvement of the superficial lymph nodes, such as the axillary and inguinal nodes [9].
Non-SJN Metastasis after Surgery
Skin metastases after surgery often develop at the site of surgical incision. Skin recurrences usually occur within the abdominal scar after surgery for gynecological and gastrointestinal cancers [4,5,[11][12][13][14][15][16]. Similarly, many cases of port-site recurrences after laparoscopic surgery have been reported [5,21,22]. The estimated incidence of port-site recurrences in patients who underwent laparoscopic surgery for malignant disease is approximately 1-2% [49]. In rare cases, skin metastases at the site of surgical incision are the first sign of an undiagnosed cancer.
A skin recurrence can also occur at a surgical site remote from the primary tumor. For example, in patients with oropharyngeal cancer, skin metastases have occurred at percutaneous gastrostomy sites [30], while skin metastases have developed in pacemaker pockets in patients with breast cancer [31].
Non-invasive tumors can also develop skin metastases. Borderline ovarian tumors, which lack destructive invasion microscopically, can metastasize to port sites after laparoscopic surgery [23]. Surgical scar endometriosis, i.e., an implant of normal endometrial tissue at the surgical incision, develops after cesarean section in 1-2% of patients [19] and it also develops at episiotomies and port sites [18,24].
Non-SJN Metastasis after Injury
Skin metastases can develop at the site of a traumatic injury, even when the site is remote from the primary tumor. In a patient with advanced prostate cancer who was treated with subcutaneous goserelin, skin metastasis developed at the injection site [35]. In a patient with colon cancer, skin metastasis developed at the site of the inflammatory response to a skin test antigen (dinitrochlorobenzene) [34]. In a patient with laryngeal cancer without lymph node metastasis, numerous superficial nodules developed, confined to the area that had previously been encased by a body spica cast applied for an earlier trauma [38].
Non-SJN Metastasis after Lymphadenopathy
After lymphadenopathy, skin metastases could develop in the vicinity of the metastatic superficial nodes. Skin metastases in the chest wall occurred after axillary node metastasis in patients with breast cancer. Skin metastases in the lower abdomen, scrotum, and penis occurred after inguinal node metastasis in patients with prostate cancer. Furthermore, skin metastases in the lower abdomen, vulva, and upper thigh occurred after inguinal node metastasis in patients with cervical cancer [39,40,43].
Possible Mechanisms of Skin Metastases
The process of skin metastasis involves two steps: the first step is the spreading of tumor cells to the skin. In SJNs, contiguous spread and lymphatic flow appear to be important. In non-SJN metastases, direct implantation, hematogenous metastasis, and extranodal extension play key roles ( Figure 1). The second step is the proliferation of tumor cells at the site, which involves wound healing, inflammation, and the presence of adipose tissue.
Several Routes to the Umbilicus
Tumor cells may spread to the umbilicus via several routes. The most common mechanism is contiguous spread from intraperitoneal metastasis to the umbilicus [7,50]. The vast majority of patients with metastases to the umbilicus, which is the thinnest part of the abdominal wall, have peritoneal dissemination. In addition, SJNs may alternatively be caused by spread through lymphatic channels or by transvascular spread. However, these alternative mechanisms rarely cause the umbilical nodule, as there are no lymph nodes in or around the umbilicus, and hematogenous dissemination in the absence of other sites of blood-borne metastasis is highly improbable [50]. Nevertheless, a patient with inguinal node metastasis but without peritoneal dissemination has been reported to develop an SJN [9]. In this case, the SJN may have resulted from the alteration of lymphatic flow by the tumor, which obstructs lymphatic pathways and shunts lymphatic flow to the cutaneous lymphatics [42]. Obliterated umbilical arteries and the urachus can also provide pathways for tumor spread [7], and in very rare cases an SJN can arise via a hematogenous pathway [10].
Direct Implantation
The direct implantation of viable exfoliated tumor cells is the most likely mechanism for skin metastasis at surgical incisions and at laparoscopy port sites, where resected tumor tissues, both invasive and non-invasive, have passed during the procedure. Port-site metastases can, however, also occur at trocar sites through which tumor tissues have not passed. Several mechanisms have been proposed for the development of port-site metastases, including direct wound contamination and implantation, the multiple effects of pneumoperitoneum, the effects of the gases used for insufflation, the "chimney effect," aerosolization of tumor cells, and surgical technique [49]. Advanced malignancy and the presence of ascites may also be associated with port-site recurrences. Similarly, skin metastases can develop by direct implantation after paracentesis in ovarian cancer with massive ascites and after needle biopsy for lung cancer [5,33].
Normal endometrial tissue, which is composed of endometrial glands with the surrounding stroma, can implant at the site of wounds formed by abdominal incision and episiotomy procedures. It seems possible that the direct implantation of benign tissues, including endometriosis, can occur when epithelial cells are supported by their stromal tissues supplying nutrients. As most successfully metastasizing tumors are those with the capability to induce stroma production in their new metastatic sites [51], endometriosis can metastasize even in the lung and/or in the pleura via the transdiaphragmatic passage [52].
Hematogenous Metastasis
Cancer cells can spread to the skin via hematogenous routes. Skin metastases after surgery and injury usually develop in patients with advanced-stage cancers, as circulating tumor cells (CTCs) are often detected in these patients. In cancer patients, CTCs are found at frequencies on the order of 1-10 CTCs per 1 mL of whole blood, which contains approximately 10 million leukocytes and 5 billion erythrocytes [53]. The detection of CTCs is a predictor of survival for patients with breast, colon, prostate, lung, and ovarian cancer [54][55][56][57][58]. However, patients with apparently early-stage cancer may also develop hematogenous skin metastases after surgery, since hematogenous dissemination may occur very early in tumor development [59].
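To make the cited frequencies concrete, the short Python sketch below converts the reported per-millilitre counts into approximate dilution ratios; the figures are taken from the numbers above, and the calculation itself is purely illustrative.

```python
# Rough rarity of CTCs among blood cells, using the per-mL figures cited above
leukocytes_per_ml = 1e7      # ~10 million leukocytes per mL of whole blood
erythrocytes_per_ml = 5e9    # ~5 billion erythrocytes per mL of whole blood

for ctcs_per_ml in (1, 10):
    per_leukocyte = leukocytes_per_ml / ctcs_per_ml
    per_any_cell = (leukocytes_per_ml + erythrocytes_per_ml) / ctcs_per_ml
    print(f"{ctcs_per_ml:>2} CTC/mL -> ~1 CTC per {per_leukocyte:,.0f} leukocytes, "
          f"~1 per {per_any_cell:,.0f} blood cells")
```

Even at the upper end of the cited range, CTCs amount to roughly one cell per million leukocytes, which underlines why hematogenous seeding of the skin is an inefficient, late event.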
Skin metastases can develop not only at incisional scars after cancer surgery, but also at incisional scars after surgery for a benign disease performed in the presence of an undiagnosed cancer at the time of surgery. Skin metastases in patients with pancreatic and colon cancer have developed within surgical scars, where the surgery for a benign disease was performed one to twelve months before the cancer diagnosis [25][26][27][28][29]. These metastases may form by the colonization of cancer cells that are already present in the blood stream at the time of surgery for benign diseases, as these cancers tend to frequently develop hematogenous metastasis.
Extranodal Extension
Cancer cells in the superficial lymph nodes can extend through the lymph node capsule into the surrounding subcutaneous adipose tissue. Patients with breast, prostate, and head and neck cancers tend to develop metastases to superficial lymph nodes (axillary, supraclavicular, and inguinal nodes) and often develop skin involvement [40,43]. In these cancers, extranodal extension is a prognostic factor for survival [60][61][62][63].
Wound Healing and Inflammation
After surgery and injury, the colonization and proliferation of tumor cells in the cutaneous tissues exploit the wound healing mechanisms. Many of the molecular mechanisms and signaling pathways that are important for wound healing are also involved in cancer cell proliferation [64]. Epithelial, endothelial, mesenchymal, and immune cells interact through growth factor/cytokine signaling pathways during both wound healing and cancer progression [64,65].
Hemostasis is initiated immediately following injury: a fibrin clot is formed at the wound site to minimize blood loss. The inflammatory phase begins with blood coagulation and platelet activation. Growth and chemotactic factors such as platelet-derived growth factor (PDGF), insulin-like growth factor I (IGF-I), epidermal growth factor (EGF), and transforming growth factor-β (TGF-β) are released during this phase [65]. In response to these factors, lymphocytes, polymorphonuclear leukocytes (PMNs), and monocytes/macrophages infiltrate the wound [65]. The fibrin plug serves as an interim matrix for the migration of these cells [64]. The following proliferative phase is characterized by angiogenesis and fibroblast proliferation [65]. In the remodeling phase, dermal fibroblasts/myofibroblasts actively reshape the dermal matrix by secreting collagen fibers and matrix metalloproteinases (MMPs) to restore it to pre-injury conditions [64]. In contrast, cancer-associated inflammation is a non-resolving condition, in which the inflammatory responses of wound healing are hijacked by tumor cells to facilitate tumor growth [66].
Inflammation is an immune response elicited by cellular damage due to noxious stimuli and conditions, such as infection and tissue injury [67,68]. Inflammatory responses help an organism restore tissue function and homeostasis through repair mechanisms [66]. Macrophages are crucial drivers of tumor-promoting inflammation: by altering the tumor microenvironment, they play critical roles in promoting the metastatic invasion, proliferation, and survival of tumor cells through various mechanisms [69,70]. Macrophages are derived from circulating monocytes recruited into tumors and are often the most abundant immune cells in the tumor. Based on their function and cytokine expression profile, macrophages are divided into two polarization categories: classical M1 and alternative M2 macrophages [66,71]. M1-polarized macrophages are pro-inflammatory and anti-tumorigenic and are activated by interferon gamma (IFN-γ). M2-polarized macrophages are anti-inflammatory and pro-tumorigenic and are activated by interleukin (IL)-4 [64,72]. Tumor-associated macrophages (TAMs) exhibit functions similar to those of M2-polarized macrophages [71]. TAMs produce growth and survival factors for tumor cells (e.g., EGF, fibroblast growth factor [FGF], IL-6, and IL-8) and angiogenic factors (EGF, FGF, vascular endothelial growth factor [VEGF], PDGF, TGF-β, and CXC chemokines) and suppress T-cell-dependent antitumor immunity [73]. A distinct population of CD11b+ macrophages may recognize emigrating tumor cells and assist them in the invasion process [74]. Polarization of TAMs toward an M2 phenotype, as reflected by a lower M1/M2 ratio, is an independent predictor of shorter survival in locally advanced cervical cancer [75]. A recent study reported that the presence of M1 macrophages in the tumor microenvironment increases the metastatic potential of ovarian cancer cells through activation of the nuclear factor-κB signaling pathway by the release of tumor necrosis factor alpha (TNF-α) [76].
In addition, platelets can influence inflammation, immune regulation, and cancer progression, especially cancer metastasis. Platelets initiate hemostasis, inflammation, and wound healing, and they are activated during chronic inflammation, cancer progression, and metastasis. Platelets enter tumors through the vascular leakiness or inflammatory reactions occurring during angiogenesis and help cancer cells escape immune surveillance by adhering to them [77,78]. Platelets also facilitate the arrest of disseminated tumor cells in the vascular system and enhance the invasive potential and extravasation of tumor cells [78].
Normal neutrophils mount a first defensive response against tumor cells and play major roles in linking inflammation and cancer, but they are also actively involved in tumor progression and metastasis [79]. Neutrophils are the most abundant white blood cells in the circulation and are the first responders at sites of acute tissue damage and infection. In a chronic inflammation context, neutrophils remain in tissues, and this persistent presence is associated with cancer progression [80]. Neutrophils in the tumor microenvironment, also called tumor-associated neutrophils (TANs), are a heterogeneous group of neutrophils consisting of an antitumor phenotype (N1 TANs) and a protumor phenotype (N2 TANs), analogous to M1/M2 macrophages [81]. N1 TANs directly target tumor cells and stimulate T-cell immunity, whereas N2 TANs suppress T-cell responses and upregulate angiogenic factors, such as VEGF [82]. Protumor functions, such as promoting tumor progression, invasion, metastasis, and angiogenesis, appear to be preponderant [81].
Myeloid-derived suppressor cells (MDSCs) represent a heterogeneous population of immature myeloid cells and are able to suppress immune responses and stimulate tumor cell proliferation and angiogenesis [83]. MDSCs promote tumor growth by inhibiting the tumoricidal activity of T cells [84]. MDSCs also promote metastasis through promoting pre-metastatic niche formation [85]. Polymorphonuclear (PMN)-MDSCs and neutrophils share the same biological origin and many morphological and phenotypic features [86]. PMN-MDSCs directly interact with CTCs to promote their dissemination and enhance their metastatic potency [84].
Angiogenesis, which is the growth of new blood vessels from pre-existing vessels, is essential to cancer progression and wound healing, and is stimulated by inflammation. Angiogenesis plays a pivotal role in tumor growth by providing tumors with oxygen and nutrients and in tumor metastasis by providing a route for spread to distal tissues [87,88]. In acute wounds, VEGF, which is the most important pro-angiogenic factor, is secreted by neutrophils, macrophages, endothelial cells, and keratinocytes [64]. Following the initiation of angiogenesis, also called the "angiogenic switch," tumors can more readily expand in size [63]. Low pericyte coverage and a hyperpermeable vasculature, driven by the overexpression of VEGF, can result in a more permissive environment for tumor cell intravasation, extravasation, and dissemination [89]. VEGF also inhibits the functional maturation of dendritic cells [90] and directly triggers regulatory T cell (Treg) proliferation [91], both of which are mechanisms that allow the escape of tumor cells from the host immune system. In addition, Tregs are actively recruited by tumors and suppress both adaptive and innate immune responses [92].
Fibroblasts and cancer cells are strongly interrelated in the tumor microenvironment [93]. Fibroblasts are activated in response to stimuli such as tissue injury. In cancer tissues, where persistent injurious stimuli exist, growth factors secreted by cancer cells stimulate the recruitment and activation of fibroblasts [94]. These cancer-associated fibroblasts (CAFs) produce IL-6, which promotes tumor growth by stimulating angiogenesis, cancer cell proliferation, and invasion. IL-6 also has inhibitory effects on immune cells [88,95]. CAFs are a major component of the cancer stroma, providing a fertile soil for tumor progression [93]. CAFs consist of different subpopulations with distinct functions and a subset of CAFs mediate chemoresistance [96].
Adipose Tissue
The subcutaneous adipose tissue influences tumor growth, particularly in cases of direct implantation and extranodal extension [97]. In the tumor-surrounding adipose tissue, adipocytes at the tumor invasive front decrease in size and contain less lipid [98]. These cancer-associated adipocytes (CAAs) serve as an energy source for cancer cells through the direct transfer of lipids to them [97]. CAAs also secrete several adipokines, such as TNF-α, IL-6, and IL-8, which support tumor cell growth [97]. Pre-existing inflammation, such as radiation dermatitis and lymphangitis, often associated with lymphedema, may facilitate tumor growth in the adipose tissue. Chronic inflammation in the skin may alter angiogenesis and/or lymphangiogenesis [99], thus affecting the development of skin metastasis.
Modifying Factors
Skin metastases are modified by host, tumor, and treatment factors.
Host Factors
The most important host factor is age, which is the biggest risk factor for cancer. The aging process is linked to a gradual decline in the functional capacity of both the adaptive and innate immune systems, so-called "immunosenescence" [100]. Immunosenescence appears to play a role in the development of distant metastasis. While SJN and non-SJN metastases that develop by direct implantation and extranodal extension can occur in both the young and the old, hematogenous skin metastases usually occur in the old (Table 1). In the old, anti-cancer immunity may be compromised because (i) the ability of neutrophils and macrophages to phagocytose pathogens decreases with aging, and (ii) the function of cytotoxic T cells is also compromised with aging [101]. Older individuals are also more susceptible to inflammatory diseases that promote tumor growth [102]. In mice, the numbers of MDSCs in the bone marrow and lymph nodes increase with aging, and MDSCs enhance the functions of other immunosuppressive cells, such as Tregs [100].
Obesity is a common cause of chronic inflammation and is strongly associated with poor prognosis in cancer patients [76,103,104]. A state of chronic inflammation in adipose tissue, observed in the majority of obese individuals, promotes cancer progression through the accumulation of macrophages [76,104]. The metabolic syndrome, which includes hypertension, dyslipidemia, and insulin resistance, is associated with adipose inflammation and may promote tumor growth [104]. Thus, obesity-associated chronic inflammation may be related to the development of tumor metastasis to the subcutaneous adipose tissue [76].
Tumor Factors
The tumor factors associated with the development of skin metastases include the number of tumor cells at the metastatic site. For hematogenous metastasis, CTCs appear to play an important role: even though hematogenous metastasis is a highly inefficient process accomplished only by a minority of disseminated tumor cells [105], the probability of cancer cell survival increases as the number of cancer cells in the blood stream increases. For direct implantation, the continuous drainage of tumor cell-positive ascites through a port site is associated with the development of port-site metastasis.
The aggressiveness of the tumor cells is also an important factor. The rare metastatic cells arising as a result of selective pressure in the primary tumor have the ability to adopt migratory and invasive behavior [106]. Epithelial to mesenchymal transition (EMT) may also play a role in blood borne dissemination [105,107]: for example, cutaneous nasal metastasis has developed at presentation in a patient with undifferentiated ovarian carcinoma [46].
Treatment Factors
The probability of developing skin metastases after cancer surgery depends on the surgical method. The probability is higher after laparoscopy than after open surgery [108]. Likewise, the probability is higher at smaller trocar sites than at larger trocar sites [109]: a previous study reported that the difference in recurrence rates between 5 mm and 10 mm diameter trocar sites was statistically significant [109]. An explanation for these observations is that tumor cell density, i.e., the number of tumor cells per unit volume, appears to be an important factor for direct implantation. In addition, the skin closure type affects the occurrence of implantation metastasis at trocar incision scars. The incidence of recurrence at the trocar site has been statistically higher in patients undergoing a laparoscopy in which only the skin was closed at the end of the procedure than in patients undergoing a laparoscopy with closure of all layers, i.e., the peritoneum, rectus sheath, and skin [110].
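The density argument can be illustrated with a back-of-the-envelope calculation. The sketch below assumes, purely for illustration, that the same number of exfoliated tumor cells is deposited in the wound channel of a 5 mm and a 10 mm trocar and that the channel can be approximated as a cylinder through an abdominal wall of fixed thickness; the cell number and wall thickness are hypothetical values, not data from the cited studies.

```python
import math

N_CELLS = 1_000           # hypothetical number of exfoliated tumor cells deposited
WALL_THICKNESS_MM = 20.0  # hypothetical abdominal wall thickness (mm)

for diameter_mm in (5.0, 10.0):
    # approximate the trocar wound channel as a cylinder through the wall
    channel_volume_mm3 = math.pi * (diameter_mm / 2.0) ** 2 * WALL_THICKNESS_MM
    density = N_CELLS / channel_volume_mm3
    print(f"{diameter_mm:>4.0f} mm trocar: {density:5.2f} cells/mm^3")
```

With these assumptions the 5 mm channel receives roughly four times the tumor cell density of the 10 mm channel, consistent with the idea that a fixed inoculum concentrated in a smaller wound is more likely to implant.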
Adjuvant chemotherapy after surgery also affects the development of recurrences at surgical incision scars. In patients with gynecological cancer who underwent laparoscopic surgery, port-site recurrences developed only in those who did not receive chemotherapy [109]. In patients with advanced ovarian cancer who underwent open laparoscopy, i.e., the separation of the different layers of the abdominal wall through a small incision (minilaparotomy), port-site metastases developed in 17% of the patients; however, all port-site metastases disappeared during primary therapy that included chemotherapy [22]. Chemotherapy can thus eradicate tumor cells that colonize surgical incisions, which is consistent with the observation that surgical-site metastases of chemosensitive tumors, such as ovarian cancer, are uncommon.
Anti-VEGF antibodies, such as bevacizumab, may influence the development of skin recurrences. Although angiogenesis inhibitors that target the VEGF pathway may restrict tumor growth and metastatic ability [111], they may concomitantly elicit tumor adaptation and progression toward increased local invasion and distant metastasis [89]. Acquired resistance is common in VEGF-targeted therapies, and the mechanisms underlying the modest efficacy of anti-angiogenesis therapies may involve the active recruitment of macrophages to the tumor microenvironment, where they are responsible for the emergence of anti-VEGF therapy resistance [112].
Wound healing and inflammation enhance cancer stem cell (CSC) populations [123]; therefore, CSCs could also be a therapeutic target. CSCs have been recognized as the root of tumor drug resistance, recurrence, and metastasis [124]. Inhibition of CD47, a "don't eat me" signal that is highly expressed on cancer cells and particularly on CSCs [125], promotes the destruction of cancer cells by phagocytes such as macrophages and neutrophils [126]. As phagocytosis may result in antigen uptake and presentation, blockade of the CD47/signal regulatory protein alpha axis may synergize with immune checkpoint inhibitors that target adaptive T-cell-mediated immunity [127].
Immunotherapy is a promising therapeutic strategy for treating distant metastasis. Although the effects of immunotherapy may be attenuated in older people due to immunosenescence, recent meta-analyses indicate that age may have little effect on the efficacy of immune checkpoint inhibitors [128,129]. However, age-related changes in natural killer cells, which are involved in the efficient recognition of malignant target cells, may influence immune surveillance in a minority of elderly people [130]. As skin metastases often develop in obese individuals, adiponectin-based therapies that inhibit cancer advancement may provide a therapeutic approach to delay cancer progression in this type of patient [131].
Concluding Remarks
Metastases selectively develop in certain organs but not in others, even though tumor cells can reach the vasculature of all organs. This happens because malignant cells require a receptive microenvironment to engraft distant tissues and form metastases, as stated in the "seed and soil" hypothesis [105,132]. Inflammation appears to turn tissues after traumatic injury into favorable microenvironments ('niches') for cancer cells, as the development of cancer metastases at inflammatory sites, such as surgical traumas, intestinal anastomoses, and bone fractures, has been reported [133][134][135]. A comprehensive understanding of inflammation and cancer metastasis is needed to facilitate the development of effective therapies to inhibit tumor progression in patients with metastatic cancer.
"Medicine",
"Biology"
] |
Saturated and mono-unsaturated lysophosphatidylcholine metabolism in tumour cells: a potential therapeutic target for preventing metastases
Background
Metastasis is the leading cause of mortality in malignant diseases. Patients with metastasis often show reduced lysophosphatidylcholine (LysoPC) plasma levels, and treatment of metastatic tumour cells with saturated LysoPC species reduced their metastatic potential in vivo in mouse experiments. To provide a first insight into the interplay of tumour cells and LysoPC, the interactions of ten solid epithelial tumour cell lines and six leukaemic cell lines with saturated and mono-unsaturated LysoPC species were explored.
Methods
LysoPC metabolism by the different tumour cells was investigated by a combination of cell culture assays, GC and MS techniques. Functional consequences of changed membrane properties were followed microscopically by detecting lateral lipid diffusion or cellular migration. Experimental metastasis studies in mice were performed after pretreatment of B16.F10 melanoma cells with LysoPC and FFA, respectively.
Results
In contrast to the leukaemic cells, all solid tumour cells showed a very fast extracellular degradation of the LysoPC species to free fatty acids (FFA) and glycerophosphocholine. We provide evidence that the formerly LysoPC-bound FFA were rapidly incorporated into the cellular phospholipids, thereby changing the FA composition accordingly. A massive increase of the neutral lipid amount was observed, inducing the formation of lipid droplets. Saturated LysoPC, and to a lesser extent also mono-unsaturated LysoPC, increased the cell membrane rigidity, which is assumed to alter cellular functions involved in metastasis. Accordingly, saturated and mono-unsaturated LysoPC as well as the respective FFA reduced the metastatic potential of B16.F10 cells in mice. Application of high doses of liposomes consisting mainly of saturated PC was shown to be a suitable way to strongly increase the plasma level of saturated LysoPC in mice.
Conclusion
These data show that solid tumours display a high activity to hydrolyse LysoPC, followed by a very rapid uptake of the resulting FFA; a mechanistic model is provided. In contrast to the physiological mix of LysoPC species, saturated and mono-unsaturated LysoPC alone apparently attenuate the metastatic activity of tumours, and an artificial increase of saturated and mono-unsaturated LysoPC in plasma appears to be a novel therapeutic approach to interfere with metastasis.
Background
Metastatic spread is the leading cause of death in the course of malignant diseases, causing about 90 % of all cancer deaths [1]. While conventional therapeutic approaches target distinct tumour cells, there is no standard therapy available which specifically interferes with the individual steps of the metastatic process. Cancer patients often show dramatically reduced phospholipid (PL) plasma levels. For example, prostate cancer patients [2] and patients with acute leukaemia [3] had significantly lower levels of total plasma PL compared to healthy subjects. PL levels seem to decline during the course of the disease, as patients with advanced cancer show even lower levels [4,5]. In the present project, the main focus is on the PL lysophosphatidylcholine (LysoPC), which is a common plasma constituent with a concentration of approximately 300 μM in healthy persons [3,[6][7][8]. Blood plasma contains a mixture of LysoPC species carrying either saturated or unsaturated fatty acids (FA), with about 40 to 44 % unsaturated LysoPC species [9,10].
Although a few studies, such as that of Okita et al. [11], refer to increased LysoPC levels in patients suffering from cancer, the majority of studies focusing on LysoPC in cancer patients reported reduced LysoPC levels associated with malignant diseases. Colorectal cancer patients [10] as well as patients suffering from renal cell carcinoma [12] show significantly reduced LysoPC plasma levels. LysoPC concentrations were already decreased in the early stages of digestive tract tumours and renal cell carcinoma [8]. In a study with 59 patients suffering from various tumour entities (breast, prostate, lung, lymphoma, gastrointestinal), the reduced LysoPC levels were associated with increased parameters of inflammatory processes (CRP, albumin reduction) as well as with severe weight loss [6]. In vitro studies confirmed that the tumour cells themselves might be responsible for the increased LysoPC metabolism. It was reported that B16.F10 mouse melanoma cells in vitro rapidly remove exogenously added LysoPC from the supernatant [13]. The observed LysoPC removal was an extremely fast process that was not saturated even by repeated exogenous administrations. In these experiments, tumour cells were incubated with LysoPC carrying the saturated FA C17:0 (450 μM). Concordant with the decrease of LysoPC in the cell culture supernatant, a strong increase of the LysoPC-bound saturated FA (C17:0) was observed in the cellular lipids, from about 5 % to more than 50 % within 72 h of incubation [13]. Furthermore, this had functional consequences, since an ex vivo pre-incubation of B16.F10 cells with saturated LysoPC led to a reduction of about 50 % in lung metastatic spread compared to untreated B16.F10 cells [13]. It was postulated that the strong increase of saturated FA and the subsequent decrease of ω-6 polyunsaturated fatty acids (PUFA) in the cellular lipids caused by the saturated LysoPC species impede the generation of lipid second messengers which are required for metastatic processes [14,15]. Mechanistic consequences of tumour cell treatment with saturated LysoPC species were attenuated tumour cell adhesion and motility, shown under in vitro conditions. Pronounced morphological and functional surface changes were detected in cells treated with saturated LysoPC, which might contribute to the anti-metastatic effect by preventing integrin and selectin binding functions without affecting the expression levels of these adhesion receptors [13].
However, the molecular mechanisms of the anti-metastatic activity were not understood, and it remains open whether this is a peculiarity of the saturated nature of the LysoPC used in this study. Consequently, it is questionable whether those effects can be transferred to the physiological LysoPC situation, considering that more than a third of physiological LysoPC species carry unsaturated FA. To provide an insight into the underlying mechanisms of LysoPC metabolism by tumours and its potential consequences for metastatic spread, this study aims to address three main questions:
- Is the massive uptake and metabolism of LysoPC, as previously shown, a feature of melanoma cells, or a general characteristic of solid tumour cells and tumours of haematogenous origin?
- What is the fate of the LysoPC molecules in tumour cells, and is there a dependency on the saturation of the LysoPC-bound FA, focusing on saturated and mono-unsaturated LysoPC species?
- If LysoPC indeed can affect the metastatic spread, can LysoPC levels be modified in vivo to use LysoPC or LysoPC precursors as active agents to interfere with the metastatic properties of tumours?
LysoPC removal by solid tumour cells and FA incorporation into cellular lipid pools
In the LysoPC- or FFA-supplemented media, the proliferation rates of all tested tumour cell lines were statistically identical to the proliferation in non-supplemented BSA (control) medium (BrdU assay, data not shown), demonstrating that LysoPC had no cytotoxic or growth-reducing effects in the following assays. In media containing LysoPC (C17:0) and BSA, LysoPC was rapidly eliminated by all ten tested solid tumour cell lines (Fig. 1a). Simultaneously with the rapid degradation of LysoPC, a strong increase of the LysoPC-bound FA C17:0 was observed in the cellular lipids. While the physiological content of FA C17:0 in the cellular lipids is about 5 % of total FA, incubation with LysoPC 17:0 raised it to about 30 to 50 % of total FA (Fig. 1b). The strongest rise was found in PC3 cells; AsPC1 cells showed the smallest rise, about 30 % after 72 h of incubation. In line with the increasing amount of C17:0 in the cellular lipids, the relative amounts of the other analysed FA decreased.
LysoPC removal and FA incorporation into cellular lipid pools of leukaemic cells
Compared to the solid tumour cell lines, LysoPC removal from the supernatant of the leukaemic cell lines was much slower (Fig. 1c). In accordance with this, the incorporation of the administered C17:0 was less pronounced. After 72 h, only an average ratio of 18 % C17:0 was reached in the cellular lipids of the leukaemic cells, while the average ratio in the solid tumour cells was about twice as high (41 %). It has to be mentioned that the differences between leukaemic and solid tumour cells may be even more pronounced than shown here, since, for experimental reasons, higher cell counts were used in the experiments with leukaemic cells than in the experiments with solid tumour cells.
Metabolism of saturated and unsaturated LysoPC species by solid tumour cells
LysoPC species carrying a saturated or an unsaturated FA (C18:0 and C18:1) were degraded identically by three selected solid tumour cell lines; the saturation of the bound FA therefore does not seem to play a decisive role in LysoPC degradation (Fig. 2a). Comparison of the FA incorporation kinetics in the cellular lipids also revealed no differences between the saturated and unsaturated LysoPC species (Fig. 2b). In addition, simultaneous application of LysoPC 18:0 and 18:1 in various ratios showed that neither the saturated nor the unsaturated FA was preferentially taken up by the tested cell line (Fig. 3c).
Using B16.F10 cells as a representative cell line, it could be shown that, simultaneously with the degradation of saturated or unsaturated LysoPC in the supernatant, the respective LysoPC-bound FA was released into the supernatant (Fig. 2d). Incubation without LysoPC (BSA medium only), or incubation of LysoPC-containing media without cells, did not result in an FA increase, suggesting that the lysophospholipase (LysoPLA) activity is associated with the cells.
Further investigations showed that the degradation of LysoPC continued even without cell contact. For this purpose, LysoPC medium was pre-incubated on cells for 6 h and further incubated after separation from the cells. The degradation rate in the cell-free supernatant was about 40 % of the degradation rate in the presence of the cells, indicating that the LysoPLA activity of the tumour cells is partly released into the cell culture supernatant (Fig. 2f). Regarding the LysoPLA, the products of LysoPC cleavage, FFA and GlyceroPC, had no inhibitory effect on its activity (no product inhibition), even when applied at very high concentrations (450 μM).
To explore whether LysoPC cleavage to FFA outside the cells contributed to the observed LysoPC-induced changes, we compared the modification of the cellular lipid composition induced by saturated or unsaturated LysoPC species with that induced by the corresponding FFA (Fig. 2e). Indeed, the FFA induced a very similar but somewhat faster change of the cellular FA composition compared to the corresponding LysoPC species; the ratios of the applied FA reached the same level after 72 h. This finding strengthens the assumption that FA uptake depends on LysoPC cleavage and also explains the somewhat slower cellular FA incorporation caused by LysoPC. The increase of the neutral lipids in B16.F10 cells is accompanied by a marked increase of intracellular lipid droplets (LD), as visualised by confocal laser scanning fluorescence microscopy (Fig. 3c). In control cells, only a green shimmer but no distinct spots could be recognised (Fig. 3b).
Effects of saturated and unsaturated LysoPC species on membrane fluidity and cell migration
Compared with control cells, B16.F10 cells treated with saturated LysoPC 18:0 showed reduced membrane fluidity, as indicated by the lateral lipid membrane mobility measured with the fluorescence recovery after photobleaching (FRAP) technique. Treatment with unsaturated LysoPC 18:1 had only a minor effect, resulting in a slightly, but not significantly, reduced membrane fluidity (Fig. 4a). This has functional consequences for cell migration, as indicated in Fig. 4b, which shows data from scratch assays. LysoPC 18:0 pre-incubated cells displayed the slowest migratory capacity on both uncoated and collagen-coated surfaces compared to untreated or LysoPC 18:1 pre-treated cells. Despite its minor effect on membrane fluidity (Fig. 4a), the mono-unsaturated LysoPC also induced a statistically significant attenuation of migration on both surfaces, although less pronounced compared to LysoPC 18:0.
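In a FRAP measurement of the kind referred to above, membrane fluidity is typically quantified by fitting the post-bleach intensity curve and reporting the mobile fraction and the recovery half-time. The sketch below fits a simple single-exponential recovery model to hypothetical normalized intensities; the model choice, time points, and values are assumptions for illustration, not the analysis actually used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_recovery(t, f0, f_inf, tau):
    """Single-exponential recovery: f0 = intensity just after bleach,
    f_inf = plateau intensity, tau = recovery time constant (s)."""
    return f_inf - (f_inf - f0) * np.exp(-t / tau)

# hypothetical normalized intensities (pre-bleach intensity = 1.0)
t = np.array([0, 2, 5, 10, 20, 40, 60, 90, 120.0])   # seconds after bleach
f = np.array([0.25, 0.33, 0.45, 0.58, 0.70, 0.78, 0.81, 0.83, 0.84])

(f0, f_inf, tau), _ = curve_fit(frap_recovery, t, f, p0=(0.2, 0.8, 20.0))

mobile_fraction = (f_inf - f0) / (1.0 - f0)  # fraction of lipids free to diffuse
half_time = tau * np.log(2)                  # time to half-maximal recovery

print(f"mobile fraction ~ {mobile_fraction:.2f}, half-time ~ {half_time:.1f} s")
```

A lower mobile fraction or a longer half-time would correspond to reduced lateral lipid mobility, i.e., a more rigid membrane.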
In vivo effects of pre-incubation of B16.F10 cells with saturated and unsaturated LysoPC and FFA species
All mice tolerated the injections of the tumour cells, pre-treated for 10 days with medium supplemented with LysoPC 18:0, LysoPC 18:1, FFA C18:0, or FFA C18:1, as well as of the BSA-treated control cells, very well. Mice began to lose weight 15 days after tumour cell injection, with no significant differences between the various groups. Macroscopic inspection of lung metastases after 18 days (Fig. 5c) revealed that the mice receiving control cells, incubated only with BSA, had the highest metastatic burden. This was confirmed by measuring the luciferase activity of the homogenised lungs. The lowest values, corresponding to the lowest metastatic burden, were found in the lungs of mice carrying cells incubated with LysoPC 18:0 (80 % reduction). Pre-incubation with LysoPC 18:1 and FFA C18:0 also caused a significant reduction of metastatic lung invasion (reductions of 60 % and 65 %, respectively), while FFA C18:1 had no significant effect (Fig. 5a and b).
LysoPC plasma level in healthy mice and tumour bearing mice
Compared to healthy mice, LysoPC plasma levels were significantly reduced two and three weeks after injection of B16.F10 melanoma cells, while no difference between healthy and tumour-bearing mice was yet detectable one week after injection of the tumour cells (Fig. 6). With regard to the different LysoPC species, the most pronounced decrease was found for LysoPC 16:0. There was also an apparent decrease in LysoPC 18:2 and LysoPC 20:4; however, these differences were not significant.
Manipulation of the LysoPC plasma levels and its effects on metastatic spreading
With the aim of reducing metastatic spread by increasing the LysoPC plasma level and/or by shifting the ratio of the different LysoPC species towards saturated LysoPC, different approaches were investigated: chow supplemented with saturated PC containing mostly C18:0, s.c. injection of saturated LysoPC, and injection of liposomes containing the same saturated PC.
Chow supplemented with saturated PC did not cause any significant changes in the plasma LysoPC levels compared to non-supplemented chow.
Investigating the LysoPC species and levels 2 h after injection of high doses of liposomes consisting predominantly of saturated PC showed a significant increase (p < 0.001) of LysoPC 18:0 (179 ± 36 μM before vs. 311 ± 69 μM after injection, n = 7). The other analysed LysoPC species were not significantly changed (Fig. 7b).
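For comparisons such as the before/after liposome injection values reported above (n = 7 mice), a paired test is the natural analysis. The sketch below runs a paired t-test on hypothetical per-animal values chosen only to roughly match the reported group means; the actual raw data and the statistical test used in the study are not reproduced here.

```python
from scipy import stats

# hypothetical paired plasma LysoPC 18:0 values (uM) for n = 7 mice,
# chosen to roughly match the reported means (~179 uM before, ~311 uM after)
before = [150, 200, 175, 160, 210, 145, 213]
after = [260, 350, 300, 270, 400, 255, 342]

t_stat, p_value = stats.ttest_rel(after, before)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```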
Next, the impact of these treatment regimens on metastases was investigated. Consistent with the finding that PC-supplemented chow caused no change of plasma LysoPC levels or composition, PC-supplemented chow given before and after tumour cell injection (EPC pro ) or given only after tumour cell injection (EPC ther ) had no effect on metastatic spreading three weeks after tumour cell injection (Fig. 7a, row 2 & 3).
S.c. injections of LysoPC 17:0 in addition to the injection of the B16.F10 cells (six times in 12 h intervals, starting 23 h before injection of tumour cells) caused a slight but not significant decrease of metastatic spreading compared to animals receiving no treatment (Fig. 7a, row 4 & 5). The anti-metastatic effects of liposome injections were not investigated here, as the anti-metastatic effects have already been shown [16,17].
Discussion
Previous studies have shown that B16.F10 melanoma cells in vitro rapidly remove exogenously added saturated LysoPC from the supernatants and that the LysoPC bound saturated FA are incorporated within the cellular lipids. Both correlate with reduced adhesion properties of the cells in vitro and strongly reduced metastatic spreading of the cells in vivo [13].
Here we provide a first insight into the underlying mechanisms and the potential interplay between LysoPC and tumour metastasis. We demonstrate that the rapid removal of LysoPC is not only a peculiarity of melanoma cells but also seems to be a general characteristic of solid tumour cells. We propose a model of extracellular LysoPC degradation and cellular FFA uptake that appears independent of the saturation of LysoPC. Based on our findings the potential anti-metastatic activity of saturated as well as mono-unsaturated LysoPC appears as an attractive pathway for a therapeutic interference. These aspects will be discussed below.
LysoPC removal by different tumour cell lines
Strongly reduced LysoPC plasma levels can be observed in patients suffering from various tumour entities [6,8,10]. Among the ten epithelial tumour cell lines and six leukaemic cell lines investigated here, a very rapid decrease of LysoPC in the cell culture supernatant and the incorporation of the LysoPC-bound FA into the cellular lipid pools were observed exclusively for the epithelial tumour cells (Fig. 8). Interestingly, a study comparing LysoPC degradation by HUVEC, monocytes, erythrocytes, and platelets found that HUVEC effectively degrade extracellular LysoPC to FFA, while monocytes clearly showed less LysoPC-degrading activity; erythrocytes and platelets had nearly no LysoPC-degrading activity [18]. Thus, the rapid removal of extracellular LysoPC by solid tumour cells with metastatic potential might be a general characteristic of these cells and can be regarded as a necessary, but certainly not sufficient, feature for metastatic spread.
The ability of the tested epithelial tumour cells to rapidly cleave extracellular LysoPC might also explain the reduction of the LysoPC plasma levels in most patients with epithelial tumours as well as the negative correlation of the LysoPC levels with the progression of the disease [6]. Concordant with the decrease of LysoPC in patients, we found a slight but significant reduction of LysoPC in mice with B16.F10 lung metastases. The smaller reduction in mice compared to humans with cancer can be explained by the twofold higher plasma LysoPC levels in mice (600-700 μM) compared to humans, as shown in this and other studies [19,20], indicating a higher LysoPC turnover in mice, which might better compensate for the tumour-induced LysoPC decrease.
LysoPC metabolism of tumour cells
Our studies concerning the fate of the extracellular LysoPC and the increase of the LysoPC bound FA in the cellular lipids revealed (i) that incubation with saturated LysoPC species as well as the corresponding FFA results in an almost identical incorporation of the respective FA within the cellular lipids, and (ii) that removal of LysoPC from the cellular supernatant is accompanied by a corresponding increase of extracellular FFA. Obviously, the main mechanism for the cellular uptake of LysoPC bound FA consists of a rapid cleavage of the sn-1-ester bond of LysoPC by a LysoPLA activity followed by the rapid cellular uptake of the resulting FFA, supported by the high FFA gradient between the intracellular and extracellular environment and the rapid transmembrane movement of FFA [21]. High LysoPLA activities have been observed in certain mammalian cells and tissues (e.g. liver, gastric mucosa, kidney, brain, lung, and macrophages) [22,23]. Data for LysoPLA activities in tumour tissues/cells are not yet available. The proposed LysoPLA activity that we found in tumour cells is at least partly released into the cell culture supernatant, as LysoPC degradation continues when the supernatant is further incubated cell-free. A model summarizing these processes is suggested in Fig. 9.
The most obvious way for LysoPC molecules to enter the cell so far known is a passive uptake into the cell membrane, involving a flip to the inner membrane layer, where it may become part of the Lands' cycle [24]. This is in contrast to the path suggested here, to take up the LysoPC-derived FFA but not the LysoPC molecule itself.
Within the Lands' cycle, LysoPC is reacylated to PC and PC is deacylated to LysoPC, thus introducing the LysoPC-bound FA into the membrane PLs. However, an additional direct uptake of LysoPC cannot be excluded from our data, at least at the beginning of the incubation, when extracellular LysoPC is not yet degraded and its amount greatly exceeds the total lipid amount of the cells [25,26]. Neutral lipids and subsequently FFA can be mobilised for energy generation via ß-oxidation [27]. Furthermore, LD are a repository for membrane building blocks, including PL and sterols [26].
Bozza et al. [28] reported an increased number of LD in cancer tissues. As LD play a role in inflammatory and neoplastic processes, they are highly regulated organelles. Various enzymes involved in eicosanoid synthesis are localised at LD; thus, LD are sites of eicosanoid generation and are particularly active in the metabolism of arachidonyl lipids [28]. Significant correlations have been found between LD and an enhanced formation of COX-derived eicosanoids [29,30], thereby possibly contributing to pro-metastatic activity. However, in contrast to physiological LysoPC carrying a mix of various FA, including the pro-metastatic ω-6 PUFA (C18:2 and C20:4), in our study the formation of LD was predominantly caused by saturated or mono-unsaturated LysoPC, which might contribute to the observed anti-metastatic effects by the reduction of pro-metastatic ω-6 PUFA in LD [14,15]. The probable corresponding pro-metastatic effects of LysoPC species carrying ω-6 PUFA were not the issue of this study and have to be investigated in future studies.
Different saturation grades of LysoPC and FFA species and their impact on membrane fluidity
So far, the anti-metastatic effect of LysoPC was observed using saturated LysoPC species [13]. Here we found that saturated and at least mono-unsaturated (C18:1) LysoPC were identically eliminated from the supernatant of three representative cell lines and comparably increased the FA-content in the cellular lipid pools. The use of various mixtures of LysoPC 18:0 and 18:1 for the cell cultivation did not reveal a preferred metabolism for either LysoPC species.
The increase of FA 18:0 within the cell membranes was accompanied by reduced membrane fluidity. Membrane fluidity determines various biological processes such as adhesion, receptor activity and cell motility [31,32] and thus appears as a functional link to explain the attenuated metastatic spread, as shown previously [13] (Fig. 9). Interestingly, tumour cell incubation with the unsaturated LysoPC 18:1 and the subsequent increase of oleic acid in the cellular PL also resulted in a decrease of the membrane fluidity, however, not as distinct as for incubation with the saturated LysoPC 18:0. This can be explained by the fact that oleic acid, carrying only one double bond, does not have the same effect on membrane fluidity as higher unsaturated FA. This explanation of the effects of LysoPC 18:0 and 18:1 on membrane fluidity is further supported by our findings that both LysoPC 18:0 and 18:1 had a clear effect on cell migration, and again, the effect was somewhat weaker for LysoPC 18:1.
Fig. 9 Proposed uptake/metabolism of LysoPC in tumour cells. The majority of LysoPC is extracellularly degraded to GlyceroPC and FFA by a LysoPC-degrading factor. This factor, probably a LysoPLA, is partly released into the supernatant of the tumour cells. The resulting extracellular FFA can subsequently be taken up and incorporated into membrane PL and neutral lipids. Excess neutral lipids can be stored as LD. Another possible uptake route for LysoPC is its incorporation into the cellular membrane as a whole molecule, where it becomes part of the Lands' cycle, the deacylation and reacylation process.
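The two-step model sketched in Fig. 9 (extracellular hydrolysis of LysoPC followed by cellular uptake of the released FFA) can be illustrated with a minimal kinetic simulation. The first-order rate constants below are arbitrary illustrative choices, not values fitted to the data of this study.

```python
from scipy.integrate import solve_ivp

K_HYD = 0.10  # per hour: extracellular hydrolysis of LysoPC to FFA + GlyceroPC
K_UPT = 0.30  # per hour: cellular uptake of the released extracellular FFA

def model(t, y):
    lyso_pc, ffa_ext, ffa_cell = y
    return [
        -K_HYD * lyso_pc,                   # LysoPC remaining in the supernatant
        K_HYD * lyso_pc - K_UPT * ffa_ext,  # FFA released into the supernatant
        K_UPT * ffa_ext,                    # FFA taken up by the cells
    ]

# start with 450 uM LysoPC in the supernatant, as in the cell culture assays
sol = solve_ivp(model, (0.0, 72.0), [450.0, 0.0, 0.0], t_eval=[0, 24, 48, 72])
for t, lp, fe, fc in zip(sol.t, *sol.y):
    print(f"t = {t:4.0f} h  LysoPC = {lp:6.1f}  FFA_ext = {fe:6.1f}  FFA_cell = {fc:6.1f} uM")
```

Even this crude two-pool model reproduces the qualitative behaviour seen in the assays: supernatant LysoPC falls steadily while the FFA released outside the cells are transferred into the cellular pool.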
Effects of saturated and unsaturated LysoPC/FFA on metastatic spread in vivo
Investigating the metastatic behaviour of LysoPC- and FFA-pre-treated tumour cells in an in vivo mouse model revealed the strongest reduction of metastatic spread for the saturated LysoPC 18:0; the effects of the mono-unsaturated LysoPC 18:1 were still impressive but less pronounced. LysoPC and the corresponding FFA showed comparable effects on metastatic spread, but the effects of the FFA were somewhat weaker (the effect of FFA 18:1 missed significance), suggesting that the anti-metastatic effect was at least partly caused by the entire LysoPC molecule; the underlying mechanism, however, remains unclear. It should be mentioned that the differences between the individual treatment groups were not significant.
In agreement with our results, an anti-tumoural effect of oleic acid (C18:1) was seen in epidemiological and animal studies [33]. The effects were explained by the inhibition of proliferation in different tumour cell lines [34] or by the suppression of oncogenes which play a role in invasive progression and metastasis [35]. A reduced synthesis of arachidonic acid-derived eicosanoids was also considered to inhibit the growth of tumours [36]. In human studies, the "Mediterranean diet", rich in oleic acid, displays protective effects against cancer [37]. Furthermore, the saturated stearic acid (C18:0) has been described to have anti-cancer properties in vitro and in vivo, targeting tumour proliferation, migration and tumour invasion. Mice with orthotopically growing breast cancer showed reduced tumour size (approximately 50 % reduction) and partially reduced lung metastases when they were supplied with a diet rich in saturated FA [38].
Prospects for affecting LysoPC levels in terms of anti-metastatic approaches
Following the idea that the increase of plasma LysoPC might influence metastatic cells also in vivo, we investigated and compared different ways to increase the plasma levels of saturated LysoPC and its effects on metastatic tumour growth in mice. Since LysoPC cannot directly be injected intravenously into mice due to its haemolytic activity [39], three alternative approaches were tested: the oral application of the LysoPC precursor PC, s.c. LysoPC injection and i.v. injection of liposomes consisting of saturated PC.
Dietary PC (2 % in the chow) was chosen since PC can be degraded to LysoPC intestinally and subsequently absorbed [20]. However, PC-enriched feed had no effect on the LysoPC plasma levels of mice and no effect on metastatic growth, neither when mice received the feed as a pre-treatment nor as a therapeutic treatment. The lack of effect might in particular be due to a rapid hydrolysis of LysoPC to FFA in the intestine prior to uptake [40].
The s.c. injection of LysoPC (2 mg/kg bodyweight) resulted in only a slight increase of LysoPC plasma levels. This is in contrast to the observations of Yan et al. [19], who reported a LysoPC increase of 11 % one hour after injection (577 ± 17 μM to 633 ± 17 μM). Nevertheless, s.c. injection of saturated LysoPC at 12 h intervals appeared to reduce the metastatic spread of the tumour cells, although the effect was not significant. Despite the lack of significance, the observed reduction in metastatic spread warrants future studies focusing on the anti-metastatic effects of s.c. LysoPC, including different time schedules and larger numbers of animals.
I.v. injection of liposomes containing saturated PC was expected to increase LysoPC plasma levels by hydrolysis of liposomal PC in the systemic circulation by enzymes physiologically metabolising lipoproteins (endothelial lipase, LCAT) [41,42], or after accumulation in the tumour tissue by phospholipase A 2 secreted by the tumour cells [43,44]. In fact, two hours after injection of high doses of drug-free (empty) liposomes consisting mainly of hydrogenated egg-phosphatidylcholine (307.2 mg PC/kg, mainly Di-C18:0-PC [16,17]) there was a significant increase in LysoPC 18:0 of about 130 μM. Future studies have to investigate how long this increase of LysoPC will last. In a recent study the same liposomes showed an impressive anti-metastatic effect in a mouse model with orthotopically implanted pancreatic tumour cells (MIA PaCa2) [17]. This effect of "empty liposomes" could be reproduced using another metastases model in mice, the pancreatic tumour cell line AsPC1 with the same experimental setup [16]. The effects of saturated LysoPC on metastases of MIA PaCa2 tumours were more pronounced than the effects on AsPC1-induced metastases, which correlates with our finding that MIA PaCa2 cells degrade LysoPC and take up the saturated FA twice as fast as AsPC1 cells. Taken together, our results provide an explanation for the very impressive anti-metastatic effects of empty liposomes in our mice studies. Thus, liposomal drug delivery at least has the chance to effectively combine chemotherapy for treating the primary tumour and additional anti-metastatic effects.
However, to put these findings into perspective, it has to be mentioned that the lipid dose which caused the described anti-metastatic effects and the strong increase in saturated LysoPC in vivo was rather high. The lipid doses currently in use for registered liposomal formulations in oncology are about an order of magnitude lower than the doses used in our experiments: the dose of the liposomal hydrogenated PC used as carrier control in mouse studies with liposomal gemcitabine (GemLip: 6 mg/kg (MTD)), which showed the above-described anti-metastatic effects, was 307.2 mg/kg. In contrast, in comparable mouse experiments with Caelyx®/Doxil®, only 43.2 mg/kg hydrogenated soy PC was used (corresponding to 9 mg/kg liposomal doxorubicin (MTD)) [45].
For patients suffering from metastatic breast or ovarian cancer, the recommended Caelyx®/Doxil® dose is 50 mg/m² every four weeks, which corresponds to a PC amount of 240 mg/m². Thus, this relatively low amount of saturated PC applied as liposomal phospholipid to these patients is most probably the reason why no additional "anti-metastatic" effect of the liposomal carrier has yet been described.
Conclusions and outlook
Metastasis is a life-threatening complication of cancer and, unfortunately, to date there are hardly any therapeutic options for treating metastasis. The observations we made here lead us to the hypothesis that the rapid extracellular hydrolysis of LysoPC by metastatic tumour cells and the subsequent cellular uptake of the resulting FFA is a necessary prerequisite for the metastatic potential of epithelial tumour cells, allowing the cells to rapidly satisfy their high demand for various FA, for energetic purposes, for maintaining a certain membrane fluidity and probably also for generating pro-metastatic lipid second messengers. As a consequence, disturbing or inhibiting this process might be a promising way to reduce metastases, which should be investigated in future studies. Further experiments regarding the LysoPLA activity are required; this includes the inhibition of its secretion as well as the inhibition of its LysoPC-cleaving activity. Inhibiting LysoPLA should elucidate whether this has the potential to reduce metastasis. A first and promising step towards reduced metastasis is presented here, indicating that manipulating the lipid metabolism of metastatic cells by supplying saturated or mono-unsaturated LysoPC species greatly reduced their metastatic potential. It appears promising that this effect could also be achieved in vivo by slightly increasing the ratio of saturated LysoPC species in the blood by applying liposomes.
Methods
For the LysoPC- and FFA-containing media, bovine serum albumin (BSA, PAA, Pasching, Austria) was added to DMEM (10 % FCS medium) at a concentration of 40 mg/ml to prevent cytotoxic effects, mainly cell lysis, due to high concentrations of unbound LysoPC or FFA. Neither FCS nor BSA physiologically contained additional LysoPC; this was verified by HPLC-MS. The amounts of FFA in FCS and BSA were very low and can be ignored in relation to the FFA- and LysoPC-supplemented media; this was validated by gas chromatography. For the LysoPC-supplemented media, the respective LysoPC species was added to the BSA-containing medium at a concentration of 450 μM. As LysoPC has good (micellar) water solubility, a stock solution of LysoPC in PBS was prepared at a concentration of 180 mM and diluted in BSA-containing medium to obtain the final concentration of 450 μM. Long-chain fatty acids have poor water solubility and were first dissolved in EtOH at 56°C at a concentration of 90 mM. The resulting clear solution was added very slowly to the BSA-containing medium under intensive stirring. The removal of LysoPC 17:0 (1-heptadecanoyl-2-hydroxy-sn-glycero-3-phosphocholine), LysoPC 18:0 (1-stearoyl-2-hydroxy-sn-glycero-3-phosphocholine), and LysoPC 18:1 (1-oleoyl-2-hydroxy-sn-glycero-3-phosphocholine; all obtained from Avanti Polar Lipids, USA) was investigated using 2 × 10⁵ solid epithelial tumour cells and 1 × 10⁶ leukaemic tumour cells, respectively, cultivated in 24-well tissue culture plates with 1 ml medium, either BSA medium as control or medium supplemented with 450 μM LysoPC. Cells and supernatants were separated after 0, 24, 48, and 72 h of incubation; cell culture supernatants were collected from triplicates of LysoPC-treated and untreated control cells. Cells were detached using 0.25 % trypsin/EDTA. Centrifuged supernatants and washed cell pellets were stored at −20°C and −80°C, respectively, until further analysis.
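As a quick check of the dilution step described above (a 180 mM LysoPC stock diluted to a 450 μM working concentration), the small sketch below applies C1·V1 = C2·V2; the 1 mL batch volume is just an example.

```python
# C1 * V1 = C2 * V2: volume of 180 mM LysoPC stock needed per mL of 450 uM medium
c_stock_mM = 180.0
c_final_mM = 0.450   # 450 uM expressed in mM
v_final_mL = 1.0     # example batch size

v_stock_uL = c_final_mM * v_final_mL / c_stock_mM * 1000.0
print(f"{v_stock_uL:.1f} uL of stock per {v_final_mL:.0f} mL of medium")  # ~2.5 uL
```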
Cell proliferation was investigated using a BrdU-assay (Roche diagnostics GmbH, Germany) with cells grown in 96-well plates in 200 μl medium for 48 h. Adherent cells were labelled and the BrdU-assay was performed according to the manufacturer's instructions.
Determination of LysoPC and FFA concentration in supernatants
LysoPC concentrations in supernatants were determined after cultivating 2 × 10 5 cells with LysoPC (450 μM) for 0, 24, 48, and 72 h, and separating cells and supernatants. The LysoPC concentration was determined by an enzymatic PL (PC/LysoPC) assay containing phospholipase D and choline oxidase (mti diagnostics GmbH, Germany). For the specific determination of free choline, a similar assay without containing phospholipase D was developed. Glycero-3-phosphocholine (GlyceroPC) was determined using the assay for free choline adding 1 IU/ mL of sn-glycerol-3-phosphocholine phosphodiesterase (Sigma). We ensured the specificity of the three assays by analysing different concentrations of LysoPC, free choline, GlyceroPC, and PC.
The degradation of LysoPC in cell-free supernatant after pre-incubation with B16.F10 cells was determined by incubating cells grown confluent in 24-well plates with 1 ml LysoPC 17:0 for 6 h, followed by separation of the supernatant and subsequent cell-free incubation for 0, 24, 48, 72, and 96 h. As control experiments, LysoPC 17:0 medium without cells and LysoPC 17:0 with cells were accordingly incubated and processed.
Determination of cellular FA composition (total, neutral lipid and PL)
Total lipids from harvested and frozen cell pellets (2 × 10⁶ cells), collected after 0, 24, 48, and 72 h of incubation with either BSA-, LysoPC-, or FFA-supplemented medium, were isolated by lipid extraction according to a modified method of Bligh and Dyer [46]. Neutral lipids and PL were separated using solid-phase extraction as previously described [13,47]. FA analysis of either the total FA, or of the neutral-lipid and PL fractions separately, was performed using a Gas Chromatograph HP-5890 Series II Plus analyser with a flame ionisation detector. Using the settings described previously [13], it was possible to identify the changes in the FA pattern of tumour cells (total FA as well as the neutral-lipid and PL fractions separately) induced by a given LysoPC or FFA treatment.
Confocal laser scanning microscopy
For the visualisation of lipid droplets (LD), B16.F10 cells were grown sub-confluent on sterile glass cover slips, either in LysoPC 17:0 medium (450 μM) or, as a control, in BSA medium. Following fixation with 4 % paraformaldehyde, cells were washed with PBS. Nuclei were stained with DAPI (Life Technologies GmbH, Germany); LD were stained with BODIPY 493/503 (Life Technologies GmbH, Germany) at a 1:500 dilution of the stock solution (1 mg/ml in ethanol) in 0.9 % NaCl for 10 min, followed by a washing step. The glass cover slips were mounted with mounting medium (MobiGlow, MoBiTec GmbH, Germany), and the samples were then analysed with a confocal laser scanning microscope LSM 510 Meta with a 20×/1.4 NA objective lens (Zeiss, Germany). Three to four pictures were taken from each sample and then processed using the Zen2009 software.
Cell migration
B16.F10 cells (1 × 10 5 ) were seeded into each well of a 24 well plate (Greiner Bio-One, Germany), partially coated with 10 μg/ml collagen (Roche Diagnostics GmbH, Germany), and incubated with LysoPC. After 72 h, a scratch wound was induced into the confluent cell monolayer by a pipette tip. Wound healing was observed for 12 h at 37°C and the speed of migration was determined by linear regression.
Animal experiments
All animal experiments were performed in accordance with the German Animal License Regulations (Tierschutzgesetz), which are identical to the UKCCCR Guidelines for the welfare of animals in experimental neoplasia [49]. Male C57Bl/6 mice were obtained from Charles River (Sulzfeld, Germany) at an age of 8 to 12 weeks.
Injection of tumour cells, detection of metastatic spread
To induce lung metastases, 2 × 10⁵ B16.F10 melanoma cells in 100 μl PBS were injected intravenously into the tail vein of each mouse. The B16.F10 cells used were luciferase-transduced as previously described [16]. To investigate the effects of the pre-treatment of tumour cells with different lipid-containing media on their metastatic behaviour, groups of 10 mice each received B16.F10 cells pre-treated for 10 days with 450 μM LysoPC 18:0, LysoPC 18:1, FFA C18:0, or FFA C18:1; BSA-treated cells were used as control. Experiments were terminated on day 18 after injection of the tumour cells; lungs were removed, weighed, and stored as snap-frozen samples. Metastatic lesions were quantified by homogenising the mice's lungs in luciferase lysis buffer (Promega, Germany) and measuring the homogenates in a luciferase assay as previously described [50].
Plasma LysoPC levels
For blood collection, mice were anaesthetised with isoflurane (2.5 %, 3 l/min O 2 ) and samples were collected carefully from the retro-orbital plexus by using glass capillaries in heparinised tubes. Blood samples were centrifuged at 2320 × g for 5 min and plasma was stored at −80°C until analysis. For the lipid extraction, 440 μl PBS was pipetted into 15 ml glass tubes, frozen plasma samples were thawed and 20 μl was added to the PBS. 20 μl of LysoPC 19:0 (100 μM) were added to each sample as internal standards. Extraction was performed as described by Zhao et al. [10] with three repeated extraction steps. The combined solvents were evaporated under a nitrogen stream at 40°C until complete dryness. Prior to analysis, the dried lipid extracts were dissolved in 100 μl methanol.
LysoPC-analysis was carried out using a Quadrupol API 2000 MS/MS mass spectrometer (AB Sciex, Germany), equipped with an Agilent 1100 LC system (Agilent Technologies, USA). Acetonitrile was used as mobile phase A, and H 2 O with 10 mM ammonium acetate, pH 8 was used as mobile phase B with a Waters XBridge BEH HILIC Column, 130 Å, 3.5 μm, 3 mm × 150 mm analytical column (Waters, USA). The LC gradient started with 90 % A from 0 to 0.1 min, down to 76.5 % A at 22.5 min, returning from 76.5 % A to 90 % A at 22.6 min and held at 90 % A until 27.5 min. Flow rate was 0.5 ml/min and column temperature was maintained at 50°C throughout the analysis. Parameters for the analysis in the positive ion mode with the TurboIonSpray® source were: source temperature: 400°C, capillary voltage: 5500 V, desolvation gas 20 l/h, focusing potential: 380 V, declustering potential: 35 V, collision energy: 40 V. The nitrogen required as collision and curtain gas was produced by an NGM 11-LC-MS nitrogen generator. 10 μl of each sample was injected by the Agilent 1100 LC systems auto sampler.
Quantitative analysis was performed in the multiple reaction monitoring (MRM) mode, monitored ions were at m/z 468. LysoPC standard calibration curves were established for quantitative analyses and were performed for each analysis batch. Therefore plasma samples were spiked with known concentrations of LysoPC 19:0 (1, 10, 50, 100, 250, and 500 μM). A standard curve was derived which served as calculation template for the analysed LysoPC. The peak area of each analyte was integrated and results were calculated using the API 2000 software and a self-programmed Microsoft-EXCEL evaluation template.
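As a rough illustration of this calibration-based quantification, the sketch below fits a straight line through spiked LysoPC 19:0 standards and converts the peak area of an unknown sample into a concentration. The peak-area values are invented for the example; the original evaluation used the API 2000 software and a self-programmed Excel template rather than a script like this.

```python
# Minimal sketch (assumed values): deriving a LysoPC calibration curve from
# spiked standards and using it to quantify an unknown plasma sample.
import numpy as np

# Spiked LysoPC 19:0 concentrations (uM) and hypothetical integrated peak areas
conc_std = np.array([1, 10, 50, 100, 250, 500], dtype=float)
area_std = np.array([2.1e3, 2.0e4, 9.8e4, 2.1e5, 5.0e5, 1.02e6])  # example data only

# Least-squares line through the standards: area = slope * conc + intercept
slope, intercept = np.polyfit(conc_std, area_std, deg=1)

def quantify(peak_area: float) -> float:
    """Convert an integrated peak area into a LysoPC concentration (uM)."""
    return (peak_area - intercept) / slope

print(f"LysoPC in sample: {quantify(3.3e5):.1f} uM")
```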
Manipulation of LysoPC plasma levels and effects on metastatic spreading due to different supplementations
Standard laboratory chow supplemented with 2 % hydrogenated egg phosphatidylcholine (EPC3; Lipoid GmbH, Germany) was purchased from Ssniff Spezialdiäten GmbH, Germany. The experiment included five treatment groups with five animals per group. Group 1 (healthy mice) served as control group; these animals received EPC3-supplemented chow and no tumour cells. In the PC feeding groups, mice received EPC3-supplemented chow administered either as prophylaxis, starting one month prior to tumour cell injection (group 2, "EPC3pro"), or as therapy, starting after tumour cell injection (group 3, "EPC3ther"); the latter mice were kept on a normal diet until the tumour cells were injected and were set onto the EPC3-enriched diet one day after injection until the end of the experiment. During the whole study, mice had free access to the feed chow. For the LysoPC s.c. treatment (group 4), animals were kept on a normal diet and were subcutaneously injected with LysoPC 17:0 at a dose of 20 mg/kg in PBS containing 2 % BSA. Injections were given six times at 12 h intervals, beginning 23 h before tumour cell injection. The "tumour growth control group" (group 5) received the standard laboratory chow without EPC3 supplement and served as a control for the growth of the tumour cells, which were injected simultaneously with those of the other groups. Mice were monitored daily for weight progression, food intake and general behaviour.
"Biology",
"Medicine"
] |
Single Bin Sliding Discrete Fourier Transform
The conventional method for spectrum analysis is the discrete Fourier transform (DFT), usually implemented using a fast Fourier transform (FFT) algorithm. However, certain applications require an online spectrum analysis only on a subset of M frequencies of an N-point DFT (M < N). In such cases, the use of a single-bin sliding DFT (Sb-SDFT) is preferred over the direct application of the FFT. The purpose of this chapter is to provide a concise overview of the Sb-SDFT algorithms, analyze their performance, and highlight their advantages and limitations. Finally, a technique to mitigate the spectral leakage effect, which arises when using the Sb-SDFT in nonstationary conditions, is presented.
Introduction
The estimation of the frequency, amplitude and phase of single-frequency and multifrequency signals has applications in many fields of engineering. In general, estimation methods are based on Fourier analysis or on parametric modeling. The advantage of Fourier-based methods is their computational efficiency, compared with the mathematical complexity of the parametric algorithms, which demand a large amount of computational resources. The standard method for Fourier analysis in digital signal processing is the discrete Fourier transform (DFT). For some real-time applications, the direct application of the conventional DFT may result in an excessive computational cost. Moreover, certain applications require an online spectrum analysis only over a subset of M frequencies of an N-point DFT (M < N). For this scenario, the common practice is to utilize a single-bin sliding DFT (Sb-SDFT) technique. These recursive algorithms efficiently calculate a unique spectral component of an N-point DFT. Nevertheless, the direct application of DFT-based methods for spectral analysis may lead to inaccuracies due to the spectral leakage phenomenon. These unwanted effects are related to frequency variations and an improperly selected sampling time window. This problem can be solved using an adaptive coherent sampling mechanism. One such mechanism is known as the variable sampling period technique (VSPT) and is characterized by the dynamic adjustment of the sampling frequency to exactly N times the fundamental frequency, thereby avoiding the above-mentioned problems.
The chapter is organized as follows: Section 2 presents a brief review of the Sb-SDFT. Section 3 evaluates and compares four selected Sb-SDFT algorithms in diverse operational conditions, identifying the similarities between them. In order to mitigate the inaccuracies resulting from the spectral leakage effect, a scheme for coherent sampling based on the VSPT is introduced in Section 4, together with a unified model that generalizes this scheme to all Sb-SDFT algorithms and with simulation results. Finally, the conclusions of this chapter are drawn in Section 5.
Single-bin sliding discrete Fourier transform
The discrete Fourier transform (DFT) is a numerical approximation of the theoretical Fourier transform (FT) of a continuous, infinite-duration signal. It represents the most common tool for engineers to extract the frequency content of a finite and discrete signal sequence, obtained from the periodic sampling of a continuous waveform in the time domain.
Let us consider a continuous-time signal x(t) that is sampled at the rate f_s = N·f_o (where f_o is the fundamental frequency of x(t)) to produce the time sequence x[n]. The DFT of the sequence x[n] is then defined as

X(k) = Σ_{n=0}^{N−1} x[n] W_N^{−kn}     (1)

where X(k) is the DFT output coefficient, W_N = e^{j2π/N} is the complex twiddle factor, N is the sequence length, k is the frequency-domain index (0 ≤ k ≤ N−1), and n is the time-domain index [1].
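As an illustration of Eq. (1) restricted to a single bin, the following sketch (Python, not part of the original chapter) computes X(k) directly for one value of k and recovers the amplitude of a coherently sampled tone.

```python
# Illustrative sketch: direct computation of a single DFT bin X(k) as in Eq. (1),
# using the chapter's convention W_N = exp(j*2*pi/N).
import numpy as np

def dft_bin(x, k):
    """Return X(k) = sum_n x[n] * W_N**(-k*n) for one frequency bin k."""
    N = len(x)
    n = np.arange(N)
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

# Example: a 50 Hz cosine sampled coherently with N = 128, f_s = N * f_o
N, fo = 128, 50.0
fs = N * fo
t = np.arange(N) / fs
x = np.cos(2 * np.pi * fo * t)
print(abs(dft_bin(x, k=1)) * 2 / N)  # ~1.0, the amplitude of the cosine
```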
If Eq. (1) is not properly designed and implemented, the DFT calculation in real-time might represent a considerable bottleneck when developing a DFT-based estimation algorithm, in terms of both measurement reporting latencies and achievable reporting rates. In this respect, in order to improve both latencies and throughput, several efficient techniques to compute the DFT spectrum have been proposed in literature, which can be classified as nonrecursive and recursive algorithms. Among the nonrecursive class, the fast Fourier transform (FFT) algorithm is extensively used for harmonic analysis over an extended portion of the spectrum. When, on the other hand, only a subset of the overall DFT spectrum is necessary to accomplish the desired estimate, the so-called single-bin sliding DFT (Sb-SDFT) turns out to be very effective.
The DFT can also be computed by recursive algorithms, which are characterized by a smaller number of operations to calculate a single DFT bin. Despite this advantage with respect to the class of nonrecursive algorithms, the performances of the two categories are usually not the same. In particular, most of the algorithms in the recursive category suffer from errors due to either the approximations made to perform the recursive update or the accumulation of the quantization errors related to a finite word-length precision [2,3].
In what follows, four of the most efficient techniques to compute a portion of the DFT spectrum, namely the sliding discrete Fourier transform (SDFT), the sliding Goertzel transform (SGT), the Douglas and Soh algorithm (D&S), and the modulated sliding discrete Fourier transform (mSDFT) will be presented and described.
Sliding discrete Fourier transform
A very effective Sb-SDFT method for sample-by-sample DFT bin computation is the so-called sliding discrete Fourier transform (SDFT) technique [4]. Starting from Eq. (1), the DFT can potentially be updated at every time step n, based on the most recent set of samples within a sliding window {x[n−N+1], x[n−N+2], …, x[n]}. The time window is advanced one sample at a time, and a new N-point DFT is calculated. Figure 1(a) illustrates the time-domain indexing within the sliding window by showing the input samples used to compute the kth bin of an N-point DFT when n = n₀. The principle used for the SDFT is known as the DFT shifting theorem, or the circular shift property [1].
Based on this property, the SDFT can be recursively implemented to calculate Eq. (1) for a desired k-bin as

X_k[n] = W_N^k ( X_k[n−1] + x[n] − x[n−N] )     (2)

where X_k[n] is calculated by phase shifting the sum of the previous X_k[n−1] with the difference between the current and delayed input samples, x[n] and x[n−N], respectively [4,5]. The complex output of the SDFT can be rewritten as

X_k[n] = X_rk[n] + j X_ik[n]     (3)

where X_rk[n] and X_ik[n] are the real and imaginary components of the DFT output coefficient, respectively. The SDFT provides an accurate estimation of the kth component, as its amplitude (A_k[n]) and phase (ϕ_k[n]) can be determined by computing the modulus and the argument of the complex result X_k[n]:

A_k[n] = |X_k[n]|     (4a)
ϕ_k[n] = arg(X_k[n])     (4b)

Figure 1. (a) Samples used to compute X_k[n] within a sliding window, when n = n₀. (b) Guaranteed-stable SDFT implementation as an IIR filter as given by (5).

The SDFT is computationally efficient, as it only requires one (complex) multiplication and two additions per time instant. Nevertheless, the implementation of Eq. (2) as an infinite impulse response (IIR) filter in a system with finite word-length precision brings about a rounding error in the implementation of the W_N^k coefficient, which may turn the algorithm unstable and/or increase the estimation error. The first effect is a direct consequence of wrong cancellations between singularities and of pole displacement outside the unit circle [2,3]. Commonly, a damping factor (r, with 0 < r < 1) is used to ensure that all singularities are placed inside the unit circle, so that instability is no longer an issue. The intrinsically stable version of the SDFT is then

X̂_k[n] = W_N^k ( r X̂_k[n−1] + x[n] − r^N x[n−N] )     (5)

where X̂_k[n] is the estimated DFT output coefficient. While Eq. (5) is numerically stable, it no longer computes the exact value of X(k) in Eq. (1), since a small error is induced by the damping factor. The z-domain transfer function for the estimated kth bin of the SDFT is

H_SDFT(z) = W_N^k (1 − r^N z^{−N}) / (1 − r W_N^k z^{−1})     (6)

The stable SDFT algorithm given by Eq. (5) leads to the filter structure shown in Figure 1(b). This structure is basically an IIR filter that comprises a comb filter followed by a complex resonator. The comb filter makes the transient response N−1 samples in length; therefore, the output reaches steady state when the stored waveform equals the input signal.
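The damped recursion of Eq. (5) can be sketched in a few lines; the snippet below is an illustrative Python implementation (not from the chapter) that tracks one bin sample by sample, using a circular buffer as the N-sample delay line.

```python
# Minimal sketch of the damped (guaranteed-stable) SDFT recursion of Eq. (5):
# X_k[n] = W_N^k * (r * X_k[n-1] + x[n] - r^N * x[n-N]).
import numpy as np

def sliding_dft_bin(x, k, N, r=0.9999):
    """Yield the estimated k-th DFT bin for every new input sample."""
    Wk = np.exp(2j * np.pi * k / N)          # W_N^k with W_N = exp(j*2*pi/N)
    buf = np.zeros(N)                        # delay line holding the last N samples
    Xk = 0.0 + 0.0j
    out = []
    for n, xn in enumerate(x):
        x_old = buf[n % N]                   # x[n-N] (zero during the first N samples)
        buf[n % N] = xn
        Xk = Wk * (r * Xk + xn - (r ** N) * x_old)
        out.append(Xk)
    return np.array(out)

# Example: amplitude of a 50 Hz tone tracked sample by sample
N, fo = 128, 50.0
fs = N * fo
x = np.cos(2 * np.pi * fo * np.arange(4 * N) / fs)
Xk = sliding_dft_bin(x, k=1, N=N)
print(abs(Xk[-1]) * 2 / N)                   # ~1.0 once the window is full
```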
Sliding Goertzel transform
The number of multiplications required in the SDFT can be reduced by creating a new pole/zero pair in its H_SDFT(z) system function. This is achieved by multiplying the numerator and denominator of H_SDFT(z) in Eq. (6) by the factor (1 − r W_N^{−k} z^{−1}), yielding

H_SGT(z) = W_N^k (1 − r^N z^{−N}) (1 − r W_N^{−k} z^{−1}) / (1 − 2r cos(2πk/N) z^{−1} + r² z^{−2})     (7)

The transfer function represented by Eq. (7) is commonly known as the sliding Goertzel transform (SGT). Because the poles are placed on the z-domain unit circle, the SGT implementation is also potentially unstable. Once more, a damping factor r can be used in Eq. (7) to move the singularities inside the unit circle and to ensure system stability.
This method can be implemented by the following pair of finite difference equations:

v_k[n] = x[n] − r^N x[n−N] + C₁ v_k[n−1] − C₂ v_k[n−2]     (8a)
X̂_k[n] = W_N^k v_k[n] − r v_k[n−1]     (8b)

where C₁ = 2r cos(2πk/N) and C₂ = r², with 0 < r < 1. The SGT is implemented as an IIR filter that consists of a comb filter followed by the standard Goertzel filter, as depicted in Figure 2(a). The resulting system only has real coefficients in its recursive part, so its computational complexity is decreased in relation to that of the SDFT [6,7].
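A corresponding sketch of the damped SGT, again illustrative Python rather than the authors' implementation, separates the comb stage from the real-coefficient resonator so that only the final output step needs a complex multiplication.

```python
# Minimal sketch of the damped sliding Goertzel transform, following the
# structure of Eq. (8): a comb filter feeding a real-coefficient resonator.
import numpy as np

def sliding_goertzel_bin(x, k, N, r=0.9999):
    """Estimate the k-th DFT bin recursively with real resonator coefficients."""
    C1 = 2.0 * r * np.cos(2 * np.pi * k / N)
    C2 = r * r
    Wk = np.exp(2j * np.pi * k / N)
    buf = np.zeros(N)
    v1 = v2 = 0.0                    # resonator state v[n-1], v[n-2]
    out = []
    for n, xn in enumerate(x):
        x_old = buf[n % N]
        buf[n % N] = xn
        w = xn - (r ** N) * x_old    # comb filter output
        v = w + C1 * v1 - C2 * v2    # Goertzel resonator
        Xk = Wk * v - r * v1         # complex output, one complex mult. per sample
        v2, v1 = v1, v
        out.append(Xk)
    return np.array(out)

N, fo = 128, 50.0
x = np.cos(2 * np.pi * fo * np.arange(4 * N) / (N * fo))
print(abs(sliding_goertzel_bin(x, 1, N)[-1]) * 2 / N)   # ~1.0
```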
Douglas and Soh algorithm
The implementation of the SDFT or SGT requires a damping factor to guarantee the algorithm's stability. The trade-off for this stability is that the calculated value is no longer exactly equal to the kth bin of an N-point DFT in Eq. (1). In Ref. [8], a technique that significantly reduces this error, without compromising stability, is developed. This method is a periodically time-varying system designed to generate an X̂_k[n] output that is mathematically equal to X(k) in Eq. (1) at every Nth time instant.
This technique is implemented by a pair of finite difference equations, Eqs. (9a) and (9b), between which the system switches once every N samples (when n mod N = 0). The algorithm described by Eq. (9) will be referred to as the Douglas and Soh (D&S) algorithm. The filter implementation of Eq. (9), shown in Figure 2(b), requires two multiplications and two additions as well as the control logic needed to determine when n mod N = 0.

Figure 2. (a) Guaranteed-stable SGT implementation as an IIR filter as given by (8). (b) Guaranteed-stable D&S algorithm implemented as an IIR filter as given by (9).

In the figure, the change between Eqs. (9a) and (9b) is performed by the switch S₁. Therefore, the switching period of S₁ in Figure 2(b) is equal to N·T_s, where T_s is the sampling period, and its duty cycle is equal to one sample. It is worth mentioning that the effect of the nonlinear operation of the D&S algorithm on the dynamic response is negligible, as it only changes its structure every N samples.
Modulated sliding discrete Fourier transform
There is an alternative way of avoiding the reduction in accuracy caused by the damping factor without compromising stability. The SDFT implementation in Eq. (2) is marginally stable; however, for the particular case k = 0 (DC component estimation) it takes the following form:

X_0[n] = X_0[n−1] + x[n] − x[n−N]     (10)

The absence of the W_N^k coefficient, which typically leads to stability issues when it is represented with finite precision, allows the recursive expression to be implemented without the damping factor r. Therefore, the recurrence in Eq. (10) is unconditionally stable and does not accumulate errors. The modulated sliding discrete Fourier transform (mSDFT) algorithm uses the Fourier modulation property to effectively shift the DFT bin of interest to the position k = 0 and then uses Eq. (10) to compute that DFT bin output. This is accomplished by multiplying the input signal x[n] by the modulation sequence W_N^{−kn}. This approach allows the complex twiddle factor to be excluded from the resonator and avoids accumulated errors and potential instabilities [9]. The recursive realization of the mSDFT is

X'_k[n] = X'_k[n−1] + W_N^{−kn} x[n] − W_N^{−k(n−N)} x[n−N]     (11a)

with the desired bin X_k[n] recovered from the computed X'_k[n] through a multiplication by a complex constant related to the phase of the complex twiddle factor, since the modulation moves the desired kth bin to k = 0 (0 Hz); this relation is given by Eq. (11b). It is worth noticing that if the application only requires DFT magnitude estimation, the complex multiplication in Eq. (11b) is unnecessary, because |X'_k| is equal to |X(k)|. The filter structure of the mSDFT algorithm in Eq. (11) is depicted in Figure 3(a).

Figure 3. (a) Guaranteed-stable mSDFT implementation as an IIR filter as given by (11). (b) Guaranteed-stable mSDFT implementation as an IIR filter as given by (12).

In contrast to traditional recursive DFT algorithms, the mSDFT method is unconditionally stable and does not accumulate errors, because its singularities are placed exactly on the unit circle regardless of the finite precision used. These advantages are possible due to the removal of the complex twiddle factor from the resonator loop.
If multiple DFT frequency bins are to be computed, the mSDFT in Eq. (11) requires a comb filter for each frequency bin. On the other hand, given the periodicity of W_N^{−kn} (so that W_N^{−k(n−N)} = W_N^{−kn}), as shown in Ref. [9], Eq. (11) can be rewritten as

X'_k[n] = X'_k[n−1] + W_N^{−kn} ( x[n] − x[n−N] )     (12)

Whenever multiple DFT frequency bins are to be computed, Eq. (12) becomes the more efficient approach, as only one comb filter acting on x[n] − x[n−N] is needed (Figure 3(b)).
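The mSDFT idea can be sketched as follows; this illustrative snippet reports only the bin magnitude, which, as noted above, does not require the phase-correction step of Eq. (11b).

```python
# Minimal sketch of the mSDFT of Eqs. (10)-(12): modulate the input so the bin
# of interest sits at k = 0, then run the always-stable DC accumulator.
import numpy as np

def msdft_bin_magnitude(x, k, N):
    buf = np.zeros(N, dtype=complex)   # delay line for the modulated samples
    acc = 0.0 + 0.0j                   # DC accumulator, Eq. (10)
    out = []
    for n, xn in enumerate(x):
        xm = xn * np.exp(-2j * np.pi * k * n / N)   # modulation by W_N^{-kn}
        acc += xm - buf[n % N]                      # add new, drop N-samples-old value
        buf[n % N] = xm
        out.append(abs(acc))
    return np.array(out)

N, fo = 128, 50.0
x = np.cos(2 * np.pi * fo * np.arange(4 * N) / (N * fo))
print(msdft_bin_magnitude(x, 1, N)[-1] * 2 / N)     # ~1.0, no damping factor needed
```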
Performance comparison
This section discusses the key features of each of the Sb-SDFT that were presented in Section 2.
The aim of this analysis is to find underlying similarities and differences between these methods. To this end, a study on statistical efficiency and accuracy is presented in the following subsections. Finally, the section ends with a discussion over the limitations and inaccuracies of the Sb-SDFT inherited by every DFT-based method.
Statistical efficiency
It is common knowledge that the statistical efficiency and noise performance of estimators is determined by comparison with the Cramer-Rao lower bound (CRLB). The CRLB deals with the estimation of the quantities of interest from a given finite set of measurements that are noise corrupted. It assumes that the parameters are unknown but deterministic, and provides a lower bound on the variance of any unbiased estimation. The CRLB is useful because it provides a way to compare the performance of unbiased estimators. Furthermore, if the performance of a given estimator is equal to the CRLB, the estimator is a minimum variance unbiased (MVU) estimator [10].
Computer simulations have been performed to evaluate the performance of the SDFT, the SGT, the mSDFT and the D&S algorithm for a single real sinusoid polluted with white Gaussian noise:

x[n] = A cos(ωn + φ) + wgn[n]     (13)

where A and φ are the amplitude and initial phase, respectively, n is the time-domain index, ω denotes the normalized angular frequency (ω = 2πf_o/f_s) and wgn[n] is a zero-mean white Gaussian noise of variance σ²_n. For this case, the CRLB for amplitude estimation is approximated by Kay [10] as

var(Â) ≥ 2σ²_n / N     (14)

In the simulations, the initial phase φ was uniformly distributed in [0, 2π). The signal-to-noise ratio (SNR) is equal to A²/(2σ²_n), and different SNR levels were obtained by properly scaling the noise variance σ²_n. All simulation results provided are the averages of 1000 independent runs. As Figure 4(a) shows, beyond a certain SNR threshold the standard deviations (σ) of the amplitude estimates obtained with the SDFT and the SGT remain above the CRLB and asymptotically approach a −43.5 dB bound. This is mainly due to the fact that the inaccuracy caused by the damping factor in Eqs. (5) and (8) is more relevant than the effect of the SNR level. The D&S algorithm exhibits the same behavior, but beginning at SNR = 60 dB and with σ asymptotically approaching a −91 dB bound for higher levels.
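A minimal Monte Carlo check of this setup can be written as below; it estimates the amplitude from a single DFT bin over one window and compares the spread of the estimates with the square root of the bound in Eq. (14). The run count and SNR value are illustrative.

```python
# Sketch of the Monte Carlo comparison against the CRLB: estimate the amplitude
# of a noisy tone from the single-bin DFT magnitude and compare the spread of
# the estimates with sqrt(2*sigma^2/N).
import numpy as np

rng = np.random.default_rng(0)
N, k, A, runs = 128, 1, 1.0, 1000
snr_db = 30.0
sigma2 = A**2 / (2 * 10 ** (snr_db / 10))    # noise variance for the chosen SNR

n = np.arange(N)
est = []
for _ in range(runs):
    phi = rng.uniform(0, 2 * np.pi)
    x = A * np.cos(2 * np.pi * k * n / N + phi) + rng.normal(0, np.sqrt(sigma2), N)
    Xk = np.sum(x * np.exp(-2j * np.pi * k * n / N))     # single DFT bin
    est.append(2 * abs(Xk) / N)                          # amplitude estimate

print("std of estimates :", np.std(est))
print("sqrt(CRLB)       :", np.sqrt(2 * sigma2 / N))
```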
When compared with the performances of the SDFT and the SGT, the D&S algorithm behaves as an MVU estimator over a wider range of SNR, at the cost of a slightly increased computational complexity and a nonlinear operation. For the range of SNR levels shown in Figure 4(a) beyond the threshold, the variance of Â computed by the mSDFT remains on the CRLB curve, so its performance corresponds to that of an MVU estimator.
This test was repeated for r ¼ 0:9999, and the results are shown in Figure 4(b). It is seen that the performances of the SDFT, SGT and D&S algorithm are better than exhibited in the previous case. This improvement is reflected through an increase in the range of SNR values for which the estimations correspond to an MVU estimator. The results obtained for mSDFT are consistent with those obtained previously, because this estimator does not require a damping factor to ensure stability.
The effect of the damping factor on σ is shown in Figure 4(c). The simulation is performed for SNR = 80 dB because, at this level, the SDFT, SGT and D&S algorithms do not lie on the CRLB curve and have converged to the final values listed in Figure 4(b). For this scenario, the σ of the mSDFT is constant and equal to the CRLB, because it does not require a damping factor to achieve stability. Instead, for r → 1 and SNR beyond the threshold level, the σ of the SDFT, SGT and D&S algorithm approaches the CRLB, as reflected in Figure 4(c). From the analysis of this figure, it is possible to conclude that for the ideal situation (r = 1) and SNR levels beyond the threshold, all reviewed algorithms reach the CRLB and their statistical efficiency is therefore identical.
Finally, σ versus N at SNR = 30 dB is illustrated in Figure 4(d). As expected, increasing N, that is, the length of the sliding window, reduces the variance of Â for all four methods. This is mainly because the estimates are computed over a larger sliding time window, that is, more samples are used for the estimation.
Accuracy analysis
In this section, the accuracy of the Sb-SDFT methods in the estimation of a single-frequency signal, both in steady-state and dynamic conditions, is analyzed through simulations. The adopted accuracy index is the so-called total vector error (TVE), which combines the effects of magnitude, angle and time-synchronization errors on the estimation accuracy of the desired component. The TVE is defined in the Standard IEEE C37.118.1-2011 [11] as

TVE[n] = sqrt( ( (X̂_r[n] − X_r[n])² + (X̂_i[n] − X_i[n])² ) / ( X_r[n]² + X_i[n]² ) )     (15)

where X̂_r[n] and X̂_i[n] are the sequences of estimates given by the Sb-SDFT method under test, X_r[n] and X_i[n] are the sequences of theoretical values of the input signal at the time instants n, and the subscripts r and i identify the real and imaginary parts of the desired component, respectively. The TVE is a real number that expresses the normalized Euclidean distance between the true frequency-domain complex bin and the estimated one.
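For reference, the TVE of Eq. (15) amounts to the magnitude of the complex estimation error normalized by the magnitude of the true phasor, which can be computed with a small helper such as the following.

```python
# Small helper implementing the TVE of Eq. (15) for a sequence of estimates.
import numpy as np

def tve(x_est, x_true):
    """Total vector error (per sample) between complex estimates and references."""
    x_est = np.asarray(x_est, dtype=complex)
    x_true = np.asarray(x_true, dtype=complex)
    return np.abs(x_est - x_true) / np.abs(x_true)

# Example: a 1 % magnitude error with no phase error gives a TVE of 1 %
print(tve([1.01 + 0j], [1.0 + 0j]))   # [0.01]
```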
Steady-state condition
At first, the analysis is carried out in steady-state conditions assuming an input signal equal to Eq. (13). The parameters were set to A = 1, f_o = 50 Hz, f_s = 6.4 kHz, N = 128 and φ = 0 rad, and the damping factor is set to r = 0.9999. The curves plotted in Figure 5(a-d) show the estimated amplitude of the test signal for all Sb-SDFT algorithms in steady state, where the reference value is displayed with a black solid line. Figure 5(e) shows the TVE values as a function of time. The SDFT and SGT have the same steady-state TVE values; this error has a mean value with an overlaid ripple that is a direct consequence of the use of a damping factor in Eqs. (5) and (8). For both algorithms, the maximum TVE value is 0.7335%. The D&S algorithm significantly reduces the TVE while maintaining the same damping factor as the two previous cases, resulting in improved performance, with a maximum TVE value of 0.01%. Figure 5(c) shows that when (n mod N) = 0 the estimation is accurate, which is consistent with the period of the fundamental component of the test signal. On the other hand, the mSDFT provides a precise estimation with a 0% TVE, since it does not require a damping factor to ensure stability.
Dynamic condition
The accuracy under dynamic conditions of the SDFT, the SGT, the mSDFT and the D&S algorithm is evaluated through multiple simulations under the effect of various transient disturbances. The comparison is performed by means of a test signal, Eq. (16), whose parameters are: A_o, the nominal amplitude; δ_s, the amplitude step depth factor; δ_r, the amplitude ramp slope factor; δ_am, the modulation depth factor; ω_am, the normalized modulating angular frequency (ω_am = 2πf_am/f_s); ω, the normalized nominal angular frequency (ω = 2πf_o/f_s); ω_g, the normalized off-nominal angular frequency offset (ω_g = 2πf_g/f_s); and φ, the initial phase. In the following, the performance of the Sb-SDFT is evaluated under the effect of an amplitude step, an amplitude ramp, amplitude modulation and static frequency offsets. The accuracy is assessed exhaustively, by varying the test-signal parameters over a suitable range, in order to determine the maximum TVE values. This approach leads to a fair performance comparison between the considered techniques. Unless otherwise stated, the parameters take the nominal values used in the steady-state test.

First, the step response of the Sb-SDFT estimators is evaluated. For this purpose, the parameters of Eq. (16) are set to δ_s = 0.1 and n_o = 640. Figure 6(a) shows the estimated amplitude (Â) and the evolution of the TVE; after the transient, the TVE returns to the steady-state values of Figure 5(e). Further, simulation results (not reported here for the sake of brevity) confirm that the steady-state TVE value after an amplitude step is the same regardless of the value of δ_s.

Figure 6. Transients for the estimation of the amplitude of (16) and the evolution of the TVE for the selected Sb-SDFT algorithms, under different test conditions. (a) A step change in amplitude with δ_s = 0.1, δ_r = 0, δ_am = 0 and ω_g = 0. (b) A ramp change in amplitude with δ_s = 0, δ_r = 0.1, δ_am = 0 and ω_g = 0. (c) A sudden amplitude modulation with δ_s = 0, δ_r = 0, δ_am = 0.1, ω_am = 2π/f_s and ω_g = 0.
The accuracy of the considered estimators is analyzed in Figure 6(b), assuming that the waveform x[n] is subjected to a linear variation of its amplitude. For this purpose, the parameters of Eq. (16) were adjusted to δ_r = 0.1 and n_o = 640 to create a ramp change in the amplitude of the test signal. Once more, the Sb-SDFT algorithms exhibit similar dynamics in their amplitude estimation performance. Figure 7(a) shows the worst-case TVE values, after the transient response, returned by the four considered estimators as a function of δ_r in the range [0, 0.1] p.u. As can be seen, the maximum TVE value achieved by the Sb-SDFT worsens linearly with this parameter. In addition, a gap of 0.78% is observed between the SDFT/SGT and the other two algorithms, which remains constant over the analyzed range.
The effect of a modulating signal on the estimation accuracy is analyzed in Figure 6(c). Figure 7(b) shows the worst-case TVE values returned by the four considered estimators as a function of δ_am in the range [0, 0.1] p.u. with f_am = 1 Hz, and Figure 7(c) shows the worst-case TVE values given by the Sb-SDFT as a function of f_am in the range [0, 5] Hz with δ_am = 0.1 p.u. Note that the TVE increases linearly with δ_am or f_am, and that the behavior of the Sb-SDFT estimators is very similar.
Finally, the influence of a simple static off-nominal frequency offset on the Sb-SDFT estimators' performance is analyzed in Figure 7(d). The figure shows the maximum TVE values, in steady state, when the signal (Eq. 16) phase varies as a function of the off-nominal frequency offset f g in the range [−1,1] Hz. As expected, the accuracy of all the considered estimators degrades monotonically as the frequency offset increases due to the spectral leakage effect.
The similarities between the Sb-SDFT algorithms found through Figures 6 and 7 are explained by the fact that all implementations of this type of algorithms result from applying Fourier properties and mathematical operations to standard DFT definition (Eq. 1).
Sb-SDFT limitations
The direct application of Sb-SDFT may lead to inaccuracies due to aliasing and spectral leakage, common pitfalls inherited by every DFT-based method. Aliasing is generally corrected by employing anti-aliasing filters or increasing the sampling frequency to a value that satisfies the Nyquist sampling criterion. Instead, when the sampling is not synchronized with the signal under analysis, the DFT is computed over a noninteger number of cycles of the input signal which leads to the spectral leakage phenomenon [1]. Spectral leakage is typically reduced (not eliminated) by selection of the proper nonrectangular time domain windowing functions, to weigh the sequence data at a fixed sampling frequency [12]. This process increases the computational complexity and does not take advantage of the recursive nature of Sb-SDFT methods. Otherwise, spectral leakage can be avoided entirely by ensuring that sequence of samples is equal to an integer number of periods of the input signal [13].
Coherent sampling approach
In order to avoid the spectral leakage phenomenon, the sequence of samples within a sliding window of an Sb-SDFT must span an integer number of fundamental periods of the input signal. An integer number of periods is sampled if and only if the coherence criterion holds:

f_o / f_s = m / N     (17)

where f_o is the signal frequency, f_s is the sampling frequency, N is the sampled sequence length and m is an integer number. This is equivalent to ensuring that an integer number m of sine periods is present in the data sample of length N, in which case there is no spectral leakage. If Eq. (17) holds, f_s is referred to as a coherent or synchronous sampling frequency.
A variable sampling period approach, named variable sampling period technique (VSPT), was developed by the authors to design synchronization methods that maintain a coherent sampling with the input signal fundamental frequency [14]. This technique has recently been adapted to dynamically adjust the sampling frequency in a harmonic measurement method based on mSDFT [15]. In Ref. [16], the VSPT is generalized so as to be used with any Sb-SDFT algorithm.
In this section, the technique of variable sampling period is briefly described, and a unified small-signal model, which allows to use the VSPT with any Sb-SDFT, is also presented.
Variable sampling period technique
The VSPT adapts the sampling frequency to be N times the fundamental frequency of a given input signal. This technique has proven to be efficient in both three-phase and single-phase applications, yielding a robust synchronization mechanism whose effectiveness has been tested under different conditions and scenarios [14,17]. Figure 8(a) illustrates the basic VSPT scheme for a single-phase implementation, where the input signal is sampled and the input phase ϕ_u[n] is extracted by the phase detector. Concomitantly with the input sampling, the reference generator provides a signal called the reference phase:

ϕ_ref[n] = ( ϕ_ref[n−1] + 2π/N ) mod 2π     (18)

The method achieves a null phase error (e_ϕ[n]) between ϕ_ref[n] and ϕ_u[n] by varying the sampling period T_s[n] as a function of e_ϕ[n]. The controller G_c(z) provides the value of the sampling period, and the sampling generator then produces a clock signal (CLK) that starts the conversion and increments the reference phase. The implementation of the phase detector and of the phase-error calculation is key to the proper functioning of this technique. The operating principle is based on the dynamic adjustment of the sampling frequency. An exhaustive explanation of the key elements of this technique can be found in Refs. [14,17].
Unified small-signal model
The VSPT adapts the sampling rate to a multiple of the fundamental frequency of a given input signal so that the coherence criterion holds, thereby preventing the DFT's shortcomings when it is used to analyze nonstationary signals. An error signal, related to the phase difference between the fundamental component of the input signal and the reference phase, is needed to adapt the sampling period. Based on this phase error, it is feasible to develop a closed-loop control to synchronize the sampling period.
As mentioned in Section 3, when r → 1 and for a real input signal, the Sb-SDFT algorithms become equivalent. Therefore, in this scenario and for small-signal conditions, these methods supply the same estimate of the kth bin of an N-point DFT. Based on this concept, Figure 8(b) shows a phase-error estimation scheme that employs an Sb-SDFT algorithm, which allows the phase difference between the fundamental component of the input signal and the reference phase to be estimated. This scheme obtains the phase-error signal from three basic operations: first, an Sb-SDFT algorithm with k = 1 is used to estimate the fundamental component (X_1[n]) of an N-point DFT from a given input sequence of samples x[n]; then, the phase of the input signal (ϕ_u[n]) is estimated by computing the argument of the complex result X_1[n], as stated by Eq. (4b); finally, a simple subtraction is used to estimate the phase error (e_ϕ[n]) between the incoming signal and the reference.
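A schematic sketch of this phase-error computation is given below; the exact bookkeeping of the sliding window and of the reference phase follows Refs. [14-16] and is simplified here, so the snippet should be read as an illustration rather than the authors' implementation.

```python
# Sketch of the phase-error estimation of Figure 8(b): the fundamental bin
# (k = 1) of the last N samples gives the input phase, which is compared with
# a reference phase advancing by 2*pi/N per sampling instant.
import numpy as np

def phase_error(window, n, N):
    """Phase error between the fundamental of `window` (the last N samples, in
    chronological order) and the reference phase at the current sample index n."""
    m = np.arange(N)
    X1 = np.sum(window * np.exp(-2j * np.pi * m / N))   # k = 1 bin; Eq. (4b) gives its phase
    phi_u = np.angle(X1)
    phi_ref = (2 * np.pi * n / N) % (2 * np.pi)
    return np.angle(np.exp(1j * (phi_u - phi_ref)))     # wrap the error to (-pi, pi]
```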
Since all the Sb-SDFT methods are derived from Eq. (1), they are mathematically equivalent under small-signal conditions, and the system phase error (e_ϕ[n]) for small deviations is approximately the same. Therefore, a single mathematical model can be used to implement the VSPT scheme shown in Figure 8(a) with the phase-error estimation scheme shown in Figure 8(b). Figure 8(c) presents the small-signal model of a coherent sampling scheme for the Sb-SDFT algorithms based on the VSPT, which avoids the spectral leakage phenomenon. The complete mathematical derivation of this model is available in Ref. [16].
Validation
The specifications and requirements to be met by the controller G_c(z) are determined by the application. Several applications require zero phase error and frequency synchronization for normal operation. In these cases, the controller must be a proportional-integral (PI) controller to achieve zero phase error in steady state, the resulting system being a type II system.
The transfer function of the controller in the z domain is then

G_c(z) = K (z − a) / (z − 1)     (19)

As a design example, ω = 2π·50 rad/s and N = 128 are adopted. Concerning the dynamics, a phase margin of 45° and maximum bandwidth are adopted as design criteria for G_c(z). Based on this, and using the design methodology proposed in Ref. [15], the parameters of the controller are K = 1.7304·10⁻⁵ and a = 0.9974, with a bandwidth of 5.905 Hz.
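The corresponding discrete PI compensator can be sketched as a one-state update; the nominal sampling period below assumes f_s = N·f_o = 6.4 kHz, and sign conventions and initialization depend on how the rest of the loop is implemented.

```python
# Sketch of the discrete PI compensator G_c(z) = K*(z - a)/(z - 1) used to drive
# the phase error to zero, with the example values quoted above.
class PIController:
    def __init__(self, K=1.7304e-5, a=0.9974, Ts_nominal=1.0 / 6400.0):
        self.K, self.a = K, a
        self.u = Ts_nominal      # controller output = sampling period T_s[n]
        self.e_prev = 0.0

    def update(self, e_phi):
        # Difference equation of K*(z - a)/(z - 1): u[n] = u[n-1] + K*(e[n] - a*e[n-1])
        self.u += self.K * (e_phi - self.a * self.e_prev)
        self.e_prev = e_phi
        return self.u
```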
The estimates obtained by the Sb-SDFT algorithms with coherent sampling supplied by the VSPT, in situations where the input-signal frequency deviates from its nominal value, are evaluated in two scenarios. The first simulation analyzes the effect of a frequency step of −0.5 Hz on the performance of the proposed method; the parameters of Eq. (16) were adjusted accordingly, and the resulting estimates are shown in Figure 9(a). To complete the evaluation of the accuracy of the coherent sampling achieved by the VSPT, the influence of a simple static off-nominal frequency offset on the performance of the Sb-SDFT estimators is analyzed in Figure 9(b). The figure shows the maximum TVE values, in steady state, when the fundamental frequency of Eq. (16) varies as a function of the off-nominal frequency offset f_g in the range [−1, 1] Hz. Thanks to the VSPT, the steady-state sampling frequency is coherent with the fundamental frequency of the test signal, ensuring that exactly one period is present in the data sample of length N, so that the Sb-SDFT avoids the spectral leakage phenomenon. Therefore, compared with the results shown in Figure 7(d), the TVE values do not worsen with f_g; instead, they remain constant and equal to those shown in Figure 5(e).
Conclusions
In this work, a comparative study of four Sb-SDFT algorithms has been conducted. The comparison includes filter structure, stability, statistical efficiency, accuracy analysis, dynamic behavior and implementation issues on systems with finite word-length precision. Based on theoretical studies as well as on simulations, it is deduced that all reviewed Sb-SDFT techniques are essentially equivalent, primarily because they are all derived from the traditional DFT, and can therefore be used interchangeably in many applications.
The study shows that the SDFT and SGT have identical performance with regard to disturbance rejection and precision of the spectral estimation. Both of these techniques are used extensively due to their straightforward implementation, although both suffer an accuracy error due to the use of a damping factor. For applications requiring greater precision, this error can be reduced by using the D&S algorithm. Alternatively, it can be eliminated by using the mSDFT, which needs no damping factor, resulting in better performance. The results of the study have shown that the mSDFT is the best option when it comes to precision and noise rejection.
The direct application of an Sb-SDFT may lead to inaccuracies due to the spectral leakage phenomenon, a common pitfall inherited by every DFT-based method. Spectral leakage arises when the sampling process is not synchronized with the fundamental tone of the signal under analysis and the DFT is computed over a noninteger number of cycles of the input signal. In this sense, a unified small-signal system model has been presented, which can be used to design a generic adaptive frequency loop based on a variable sampling period technique. The VSPT yields a sampling frequency coherent with the fundamental frequency of the analyzed signal, avoiding the error introduced by the spectral leakage phenomenon.
"Engineering",
"Computer Science",
"Physics"
] |
DFKI’s experimental hybrid MT system for WMT 2015
DFKI participated in the shared translation task of WMT 2015 with the German-English language pair in each translation direction. The submissions were generated using an experimental hybrid system based on three systems: a statistical Moses system, a commercial rule-based system, and a serial coupling of the two in which the output of the rule-based system is further translated by a Moses system trained on parallel text consisting of the rule-based output and the original target language. The outputs of the three systems are combined using two methods: (a) an empirical selection mechanism based on grammatical features (primary submission) and (b) IBM1 models based on POS 4-grams (contrastive submission).
Introduction
The system architecture we will describe has been developed within the QTLEAP project. The goal of the project is to explore different combinations of shallow and deep processing for improving MT quality. The system presented in this paper is the first of a series of MT system prototypes developed in the project. Figure 1 shows the overall architecture, which includes:
• a statistical Moses system,
• the commercial transfer-based system Lucy,
• their serial combination ("LucyMoses"), and
• an informed selection mechanism ("ranker").
The components of this hybrid system will be detailed in the sections below.
Translation systems
Moses
Our statistical machine translation system was based on a vanilla phrase-based system built with Moses (Koehn et al., 2007) trained on the corpora Europarl ver. 7, News Commentary ver. 9 (Bojar et al., 2014), Commoncrawl (Smith et al., 2013) and MultiUN . Language models of order 5 have been built and interpolated with SRILM (Stolcke, 2002) and KenLM (Heafield, 2011). For German to English, we also experimented with the method of pre-ordering the source side based on the target-side grammar (Popović and Ney, 2006). As a tuning set we used the news-test 2013.
Lucy
The transfer-based Lucy system (Alonso and Thurmair, 2003) embodies the results of long linguistic efforts over the last decades and has been used in previous projects including EUROMATRIX, EUROMATRIX+ and QTLAUNCHPAD, while relevant hybrid systems have been submitted to WMT (Chen et al., 2007; Federmann et al., 2010; Hunsicker et al., 2012). The transfer-based approach has shown good results that compete with pure statistical systems, while focusing on translating according to linguistic structures. Its functionality is based on hand-written linguistic rules and there are no major empirical components. Translations are processed in three phases:
• the analysis phase, where the source-language text is parsed and a tree of the source language is constructed;
• the transfer phase, where the analysis tree is used to transfer canonical forms and categories of the source into similar representations of the target language; and
• the generation phase, where the target sentence is formed out of the transferred representations by employing inflection and agreement rules.
LucyMoses
As an alternative way of automatically post-editing the transfer-based system, a serial transfer+SMT system combination is used, as described in (Simard et al., 2007). To build it, the first stage is the translation of the source-language part of the training corpus by the transfer-based system. In the second stage, an SMT system is trained using the transfer-based translation output as the source language and the target-language part as the target language. Later, the test set is first translated by the transfer-based system, and the obtained translation is then translated by the SMT system. In previous experiments, however, the method on its own could not outperform Moses trained on a large parallel corpus. The example in Figure 1 (taken from the QTLEAP corpus used in the project) nicely illustrates how the serial coupling operates. While the SMT output used the right terminology ("Menü Einfügen" - "insert menu"), the instruction is not formulated in a very polite manner. In contrast, the output of the transfer-based system is formulated politely, yet mistranslates the menu type.
The serial system combination produces a perfect translation. In this particular case, the machine translation is even better than the human reference ("Wählen Sie im Einfügen Menü die Tabelle aus."), as the latter introduces a determiner for "table", which is not justified by the source.
Sentence level selection
We present two methods for performing sentence-level selection, one based on a pairwise classifier and one based on POS 4-gram IBM1 models.
2.1.1 Empirical machine learning classifier (primary submission)
The machine learning (ML) selection mechanism is based on encouraging results of previous projects including EUROMATRIX+ (Federmann and Hunsicker, 2011), META-NET (Federmann, 2012) and QTLAUNCHPAD (Avramidis, 2013). It has been extended to include several features that can only be generated on a sentence level and would otherwise considerably increase the complexity of the transfer or decoding algorithm. In the architecture at hand, automatic syntactic and dependency analysis is employed on a sentence level in order to choose the sentence that fulfills the basic quality aspects of the translation: (a) asserting the fluency of the generated sentence by analyzing the quality of its syntax, and (b) ensuring its adequacy by comparing the structures of the source with the structures of the generated sentence.
All produced features are used to build a machine-learned ranking mechanism (ranker) against training preference labels. Preference labels are part of the training data and rank different system outputs for a given source sentence based on the translation quality. Preference labels are generated either by automatic reference-based metrics, or derived from human preferences. The ranker was a result of experimenting with various combinations of feature sets and machine learning algorithms and choosing the one that performs best on the development corpus.
The implementation of the selection mechanism is based on the "Qualitative" toolkit that was presented at the MT Marathon, as an open-source contribution by QTLEAP (Avramidis et al., 2014).
Feature sets. We experimented with feature sets that performed well in previous experiments. In particular:
• Basic syntax-based feature set: unknown words, count of tokens, count of alternative parse trees, count of verb phrases, and PCFG parse log-likelihood. The parsing was performed with the Berkeley Parser (Petrov and Klein, 2007) and features were extracted from both source and target. This feature set performed well as a metric in the WMT-11 metrics task.
• Basic feature set + 17 QuEst baseline features: this feature set combines the basic syntax-based feature set described above with the baseline feature set of the QuEst toolkit as per WMT-13 (Bojar et al., 2013). This feature set combination obtained the best result in the WMT-13 quality estimation task (Avramidis and Popović, 2013). The 17-feature set includes shallow features such as the number of tokens, LM probabilities, the number of occurrences of the target word within the target sentence, the average number of translations per source word in the sentence, the percentages of unigrams, bigrams and trigrams in quartiles 1 and 4 of frequency of source words in a source-language corpus, and the count of punctuation marks.
Machine Learning. As explained above, the core of the selection mechanism is a ranker which reproduces a ranking by aggregating pairwise decisions of a binary classifier (Avramidis, 2013). Such a classifier is trained on binary comparisons in order to select the best out of two different MT outputs given one source sentence at a time. As training material, we used the evaluation datasets of the WMT shared tasks (years 2008-2014), where each source sentence was translated by many systems and their outputs were consequently ranked by human annotators. These preference labels provided the binary pairwise comparisons for training the classifiers. In addition to the human labels, we also experimented with training the classifiers against automatically generated preference labels, after ranking the outputs with METEOR (Banerjee and Lavie, 2005). In each translation direction, we chose the label type (human vs. METEOR) which maximizes, if possible, all automatic scores on our development set, including document-level BLEU. We exhaustively tested all suggested feature sets with many machine learning methods, including Support Vector Machines (with both RBF and linear kernels), Logistic Regression, Extra/Decision Trees, k-neighbors, Gaussian Naive Bayes, Linear and Quadratic Discriminant Analysis, Random Forest and an AdaBoost ensemble over Decision Trees. The binary classifiers were wrapped into rankers using the soft pairwise recomposition (Avramidis, 2013) to avoid ties between the systems. When ties occurred, the system was selected based on a predefined system priority (Lucy, Moses, LucyMoses). The predefined priority was defined manually based on preliminary observations in order to prioritize the transfer-based system, due to its tendency to achieve better grammaticality. Further analysis of this aspect may be required.
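The pairwise recomposition described above can be sketched as follows; this is an illustrative outline rather than the actual Qualitative toolkit code, feature extraction is assumed to happen elsewhere, and any classifier exposing predict_proba (e.g. scikit-learn's SVC with probability=True) can be plugged in.

```python
# Sketch of ranking by pairwise classification: each ordered pair of system
# outputs for a sentence becomes one training instance (feature-vector
# difference, label = which output was ranked higher); at test time, each
# system is scored by the accumulated probability of winning its comparisons.
import numpy as np
from itertools import combinations

def make_pairs(features, ranks):
    """features: dict system -> feature vector; ranks: dict system -> rank (lower is better)."""
    X, y = [], []
    for a, b in combinations(sorted(features), 2):
        if ranks[a] == ranks[b]:
            continue                      # ties are not informative for the binary classifier
        X.append(np.asarray(features[a]) - np.asarray(features[b]))
        y.append(1 if ranks[a] < ranks[b] else 0)
    return np.array(X), np.array(y)

def select_best(clf, features, priority=("Lucy", "Moses", "LucyMoses")):
    """Soft pairwise recomposition: accumulate each system's win probability over
    all pairwise comparisons and return the system with the highest total."""
    score = {s: 0.0 for s in features}
    for a, b in combinations(sorted(features), 2):
        diff = np.asarray(features[a]) - np.asarray(features[b])
        p_a_wins = clf.predict_proba(diff.reshape(1, -1))[0][1]
        score[a] += p_a_wins
        score[b] += 1.0 - p_a_wins
    best = max(score.values())
    # resolve (near-)ties with the predefined system priority
    return min((s for s in score if abs(score[s] - best) < 1e-9), key=list(priority).index)
```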
Best combination. The optimal systems use: 1. the basic feature set + 17 QuEst baseline features for German→English, trained with Support Vector Machines (Basak et al., 2007) against human ranking labels.
2. the basic syntax-based feature set for English→German, trained with Support Vector Machines against METEOR scores. METEOR was chosen because, for this language pair, the empirical mechanism trained on human judgments had a very low performance in terms of correlation with humans.
2.1.2 POS 4-gram IBM1 models (contrastive submission)
Using IBM1 scores (Brown et al., 1993) for the automatic evaluation of MT outputs without reference translations has been proposed in earlier work, and the best variant in terms of correlation with human ranking was the target-from-source direction based on POS 4-grams. Therefore, we investigated this variant for our sentence selection, and we submitted the obtained translation outputs as contrastive.
The IBM1 scores are defined in the following way:

P_IBM1(hyp | src) = 1/(S+1)^H · ∏_{i=1}^{H} Σ_{j=0}^{S} p(h_i | s_j)

where s_j are the POS 4-grams of the source-language sentence, S is the POS 4-gram length of this sentence, h_i are the POS 4-grams of the target-language translation output (hypothesis), and H is the POS 4-gram length of this hypothesis.
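A small sketch of this score in the target-from-source direction is given below; the probability table p and the smoothing floor for unseen pairs are assumptions of the example, not details taken from the paper.

```python
# Sketch of the target-from-source IBM1 score over POS 4-grams, following the
# standard IBM model 1 sentence probability; `p` is a lookup of lexical
# probabilities p(h_i | s_j) learnt from the parallel corpus (with s_0 = NULL).
import math

def ibm1_score(src_pos4grams, hyp_pos4grams, p, null="NULL"):
    """Return log P_IBM1(hypothesis | source) computed over POS 4-grams."""
    src = [null] + list(src_pos4grams)
    S, H = len(src) - 1, len(hyp_pos4grams)
    log_prob = -H * math.log(S + 1)
    for h in hyp_pos4grams:
        inner = sum(p.get((h, s), 1e-10) for s in src)   # small floor for unseen pairs
        log_prob += math.log(inner)
    return log_prob
```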
A parallel bilingual corpus for the desired language pair and a tool for training the IBM1 model are required in order to obtain the IBM1 probabilities p(h_i|s_j). For the POS n-gram scores, appropriate POS taggers for each of the languages are necessary. The POS tags cannot be only basic but must carry all details (e.g. verb tenses, cases, number, gender, etc.).
The bilingual IBM1 probabilities used in our experiments are learnt from the German-English part of the WMT 2010 News Commentary bilingual corpora. Both German and English POS tags were produced using TreeTagger (Schmid, 1994).
Experimental results
BLEU scores (Papineni et al., 2002), word F-scores and POS F-scores (Popović, 2011) were computed for all individual systems and system combinations for both translation directions. The following tendencies can be observed:
• German→English:
  - Moses and LucyMoses are comparable on the word level (BLEU and WORDF)
  - LucyMoses is best on the syntactic (POS) level
  - LucyMoses achieves better scores than both its components
  - using all three systems with a selection mechanism is the best option
• English→German:
  - Lucy is comparable with Moses on the word level and better on the syntactic level
  - LucyMoses improves all scores
  - LucyMoses+Moses (LM+M) is the best combination for word-level scores
  - Lucy+LucyMoses (L+LM) is comparable with the combination of all three systems (L+LM+M) for the syntax-oriented POSF score
We submitted the combination of all three systems for both selection mechanisms and for both translation directions. It should be noted that the ML classifier is used for the project's first official prototype, whereas the IBM1 classifier has been investigated only recently in the framework of the project; therefore the primary submission for the shared task is the ML classifier, although it yielded lower automatic scores than the IBM1 classifier.
In order to estimate the limits of the classifiers for the given three MT systems, upper-bound scores are presented in the last two rows, obtained when the selection criteria were the WORDF and POSF scores themselves. It can be seen that there is room for improvement for both selection methods. Further investigation, tuning and extension of the selection mechanisms will provide more insights and has potential for future improvements of the selection itself as well as of the MT systems.
Preliminary results concerning analysis of differences between the systems and behaviour of classifiers are shown in the following section.
Analysis of the results
The first step towards a better understanding of the selection mechanisms is to investigate the contribution of each of the individual systems to the final translation output. The results are presented in Table 2 in the form of the percentage of sentences selected from each system. It is notable that:
• the ML classifier mostly favors the transfer-based output;
• for the English→German translation, the same holds for the IBM1 classifier; for the other translation direction, Lucy is selected very rarely, for less than 2% of the sentences;
• upper bound selection yields a more or less uniform distribution; however, WORDF is clearly biased towards LucyMoses and POSF towards Lucy.
A first indication is that the deep features of the ML classifier are active and that this classifier therefore has a bias towards the transfer-based output. Furthermore, the system contributions of the upper bound selection methods indicate that the transfer-based outputs are more grammatical and thus favored by the syntax-oriented POSF score, whereas the LucyMoses system, which can be seen as a lexical repair of a grammatical output, is favored by the lexical WORDF score. Nevertheless, these first hypotheses need to be confirmed by further studies that are planned. Table 3 shows examples of differences between the selection methods as well as between the three individual MT systems. The sentences are taken from the WMT-15 test set. The first column denotes the selection method which chose the particular translation output. Sentence 1 illustrates the differences between the two classifiers as well as between the two F-scores; the POSF score and the ML classifier opt for the transfer-based translation, whereas IBM1 chooses Moses and the WORDF score prefers LucyMoses. Sentences 2-4 show the discrepancy between the ML classifier and the automatic scores; the IBM1 score selection differs from the upper bound selections only for sentence 4. Such sentences are the most probable reason for the lower overall ML classifier performance in terms of automatic scores. The last sentence shows an example where both classifiers agree, but disagree with both F-scores.
Table 2: Percentage of selected sentences from each individual system.
The table also illustrates advantages of the serial LucyMoses system -this system produces the best translation output for all presented sentences except for sentence 3.
Summary and outlook
We described a hybrid MT system based on three different individual systems where the final translation output is produced by a sentence level selection mechanism, with the possibility to include deep linguistic and grammatical features. Preliminary analysis suggests that various improvements are possible, starting from improvements on the transfer-based system (handling of lexical items such as terminology, MWEs, OOVs and robustness of parsing), the serial combination (e.g., improved disambiguation), up to more detailed analysis and testing and improvement of the selection mechanism (e.g., integrating more "deep" information from external parsing).
A study on the effect of fingerprints in a wet system
Donghyun Kim & Dongwon Yun *
In this paper, we study the influence of the fingerprint and sweat on the fingerprint on the friction between the hand and an object. When sweat contacts a finger or an object, it is sometimes easy to pick up the object. In particular, we can see this phenomenon when grasping a thin object such as paper and vinyl. The reason for this phenomenon is the increase of friction force, and this paper physically analyzes this natural phenomenon. To this end, we investigate the cause of the friction force between a solid and liquid to calculate the friction force when water is present within the fingerprint. To support the theoretical analysis, we conduct experiments to measure the friction force by making a finger-shaped silicon specimen. By comparing the theoretical and experimental results, we defined the change of friction force if there was water in the fingerprint. Through this study, it is possible to analyze the role of the fingerprint and sweat on the finger, and thereby explain the friction change depending on the amount of sweat.
The parts of the human body have their respective roles. The hands serve to hold objects, the feet to walk, the eyes to see. Eyebrows prevent foreign substances from entering the eye, and hair plays a role in protecting the skull. In this study, we examined the role of fingerprints. While fingerprints are used to identify individuals 1 , this is not their fundamental role from a biological perspective. In this study, we inferred that the fingerprint helps humans hold objects. If we can show that the fingerprint affects the friction force between the object and the hand, then we can potentially exploit control over the friction force in many different fields. Therefore, we studied the effect of fingerprints on the change of friction force.
In general, people think that fingerprints increase the friction between objects and fingers. However, previous studies have shown that the fingerprints of human hands play a role in lowering friction 2 . Some studies have presented results showing that it is possible to hold heavier objects when the hand has no fingerprints 3 . G. Chimeta et al. concluded that the presence of fingerprints reduces the frictional force, based on an experiment measuring the friction force with and without fingerprints using artificial silicon fingers 4 . However, other studies concluded that fingerprints can increase friction force and that friction force can be controlled by changing the contact area according to the angle of the human finger 5,6 . By controlling friction force using fingerprints, some studies proved that fingerprints can help with grasping [7][8][9] . Although researchers have continued to study the friction of fingers, the majority of research on this phenomenon has been carried out through experiments rather than theoretical analysis. Researchers have studied how fingerprints change friction force through experiments at the molecular level and found that fingerprints play a major role in controlling the friction force 10 . Generally, the friction force depends only on the constituents of the two objects, and the shape of the object cannot change the friction force. However, when the contact area between objects is larger, the frictional force becomes greater 6 . This means that there is a correlation between friction force and contact area, and many papers have proved this experimentally.
As the contact area increases, the friction force tends to increase accordingly. However, some studies have concluded that the friction force can change even when the contact area is constant [11][12][13] . In the case of high humidity, the frictional force between an object and the hand becomes higher. Empirically, if we have water on our hands, we can hold an object well. Although there is no way to directly increase the friction force only by the fingerprint itself, if the water is in the fingerprint, the friction force can change 14 . The relationship between water and friction is not a simple linear relationship. Previous researchers revealed through an experiment a tendency of the friction force to increase up to a certain point, and decrease thereafter like a quadratic function 8 .
Furthermore, the possibility of increasing friction in many materials other than the hand using water has been reported in many studies. We can calculate the friction force precisely using a two-dimensional molecular structure instead of the complex actual form 15,16,17 . The force of this surface tension is large enough that a snail can move by controlling its mucus 18 . We call the force generated by the surface tension adhesive force. The adhesive force can affect the motion of an object. Some researchers have produced movement by choosing the material and shape of a surface to control adhesive force 19 . Additional friction force comes from surface tension, and researchers have derived equations for this additional friction force using experimental results 20 . Previous studies have proved through experimental analyses that the principle underlying surface tension is intermolecular attraction 21,22 . By using the principle of surface tension, it is possible to calculate the friction force when there is a water bridge on a rough surface 23 .
Many studies have suggested the possibility of controlling the friction force by water and numerous studies have explained this phenomenon by experimental methods. Also, based on these principles, we can see that the fingers work more strongly in high humidity environments. There is a paper that experimentally calculates the additional friction force and formulates the additional force through the numerical analysis 20 . However, it was not possible to derive the cause of the additional friction force and the theoretical formula accordingly. In this paper, we have analyzed the relationship between the fingerprint geometry and the amount of water, and examined the role of the fingerprint and sweat in gripping objects with human fingers.
We have experimentally and theoretically investigated how the friction force works in each case depending on the amount of water around the fingerprint. It is easy to judge whether the change of the friction force is meaningful from the magnitude of the friction force. This could be exploited in various fields if analyzing the change of friction force is possible. In particular, we can create a mechanism to control the force applied to an object using friction force. Figure 1 illustrates this paper's concept.
Basically, the factors that cause water to increase friction are the same as those that generate friction force in general. There are various factors, such as intermolecular force and electromagnetic force. The sum of these contributions is captured by the concept of surface energy. This energy is measured largely between solids, and it depends on the forces acting at the interface between the objects; it acts like a vertical force. The surface force between a solid and a liquid has a different tendency. The most powerful force between a solid and a liquid is the intermolecular force. Water molecules form hydrogen bonds and are polar, and thus are attracted to the surrounding surfaces. Through this coupling, an attraction between the water and the surface acts to draw them together, so water can affect the contact between objects, which can control the friction of the surface.
There are two ways in which adhesive force can influence the friction force. One is the case where adhesive force arises from the surface energy between the objects. In the case of water between the fingerprints, the surface energy between the water and the hand causes the adhesive force that makes the two objects adhere. The magnitude of this force is 0 when the object does not move, but when the object moves, the adhesive force acts in the direction opposite to the movement. Adhesive force can be determined from the surface energy and the width of the contact surface [24][25][26] . In addition, it is then possible to calculate the additional friction force due to the amount of water.
Viscous force occurs when water comes into contact with the object. If there is more water than the volume of the fingerprint, the water will spill over the fingerprint. In this paper, we assume that a thin water film forms below the finger when it moves. In this system, the viscosity that arises during movement through the water film plays the role of a friction force. This paper derives theoretical formulas for the friction force added by the adhesive force. The next section covers this content.
Results
Adhesive force due to water in the fingerprint. The factors affecting adhesive force are the Van der Waals force and the electromagnetic force. These forces appear as surface energies, which are constants of the material in the same environment (temperature, humidity, area). The adhesive force acts when the interfacial adhesion state changes, and we can express it through the area and the work of adhesion. The work of adhesion depends on the arrangement of the objects. If an object is in the air, the work of adhesion is the interaction between the surface of the object and the air. If we assume that there is fluid around the attached objects, there will be surface energy between these two objects and the fluid. When two attached objects fall apart, new surface energies appear between the air and the two objects, and the energy between the two existing objects disappears. The difference in these energies acts as the work of adhesion. Therefore, if objects i and j separate in fluid k, the value of the work of adhesion appears as given in Eq. 1.
In general, we use this equation to measure the surface energy between two solids, but we can also use it to measure the energy between a solid and a liquid. γ_ik denotes the surface energy between i and k. To measure the surface energy between two objects, we should separate the dispersion force and the polar force, because these different forces act on each other. Equation 2 shows how to calculate it.
Equation 3 shows how to calculate the work of adhesion between water and the objects. In the fingerprint situation, two energies occur: the energy between the hand and the water, and the energy between the ground and the water. First, we can calculate the work of adhesion between the water and the hand (ϖ_SL) and between the water and the object (ϖ_LU). ϖ_SL is the result of Eq. 3 when material i is water and material j is silicon. ϖ_LU is the result of Eq. 3 when material i is water and material j is plastic, the material of the ground plate. Equation 4 then shows the way to determine the adhesive force.
The term α_adh ϖ_LU represents the adhesive force at the top and bottom of the fingerprint, and β_adh ϖ_SL represents the force at the side of the fingerprint. We divide the force into these two contributions because the surface tension acts between parallel plates. α_adh and β_adh are constants that change according to the fingerprint's geometry, such as its depth, length, and shape.
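As a minimal numerical sketch of the energy bookkeeping described above, the snippet below assumes that Eq. 1 has the usual Dupré-type form, W_ijk = γ_ik + γ_jk − γ_ij, for objects i and j separating in fluid k; the surface-energy values are illustrative placeholders, not the measured constants of the paper.

```python
def work_of_adhesion(gamma_ik, gamma_jk, gamma_ij):
    """Dupre-type work of adhesion for objects i and j separating in fluid k:
    the new i-k and j-k interfaces appear, the i-j interface disappears."""
    return gamma_ik + gamma_jk - gamma_ij

# Placeholder surface energies in J/m^2 (illustrative values, not measured ones).
gamma_water_air = 0.072       # water - air
gamma_silicone_air = 0.021    # silicone (finger pad) - air
gamma_water_silicone = 0.040  # water - silicone interface

# Analogue of w_SL: water (i) and the silicone pad (j) separating in air (k).
w_SL = work_of_adhesion(gamma_water_air, gamma_silicone_air, gamma_water_silicone)
print(f"work of adhesion ~ {w_SL:.3f} J/m^2")
```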
If there is a water film between the plate and the ground, we can calculate the force from water using the law of surface tension. Its effects are similar to adhesive force, and Eq. 5 shows the adhesive force when the height of the water film is h, the contact area is A, and the surface tension between the water and plate is ω.
Equation 5 takes the form F_ad = ωA/h, showing that the adhesive force and the distance between the plate and the ground have an inverse relation. Using this relation, we can calculate α_adh and β_adh when we know the shape of the fingerprint. It is easy to obtain this force when the shape of the fingerprint is simple, like a rectangular parallelepiped. Considering the number and size of fingerprints, we can obtain Eq. 6 when the shape of the fingerprint is a rectangular parallelepiped, where d is the width of the fingerprint, n is the number of fingerprints, and B is the surface area of the fingerprint's side face.
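The following sketch implements the inverse-height relation just described (our reading of Eq. 5): the adhesive force of a thin film between two parallel plates grows with contact area A and surface tension ω and falls with film height h. The numerical values are placeholders, and the fingerprint geometry (width d, count n, side area B) would only enter by setting the areas fed into this function.

```python
def plate_adhesive_force(omega, area, height):
    """Adhesive force of a thin liquid film between two parallel plates:
    proportional to contact area, inversely proportional to film height."""
    return omega * area / height

# Placeholder values (not the paper's measured constants).
omega = 0.072        # N/m, water surface tension
contact_area = 1e-4  # m^2, top/bottom contact area (e.g. n grooves of width d)
film_height = 2e-4   # m (0.2 mm, the transition height mentioned later)

print(f"F_ad ~ {plate_adhesive_force(omega, contact_area, film_height):.3f} N")
```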
Viscous force due to water in the fingerprint. The tendency of the frictional force when the water overflows the fingerprint is completely different from the non-overflow case. In this case, viscous force can act like a friction force. We assume that the water in the fingerprint does not flow and moves together with the fingerprints and the finger. A water film exists under this water; we assume that this film is a Newtonian fluid and that it creates a force that resists the movement. We call this viscous force F_v, and Eq. 7 gives its magnitude.
Hence, we can see F_v in Eq. (10), where σ is the surface energy of water. When the vertical force is not excessive, or when the water is overflowing, the friction force due to viscosity acts; when the vertical force exceeds that level, or when the water is located only inside the fingerprint, the friction force due to the adhesive force acts.
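Since Eqs 7-10 are not reproduced in this excerpt, the sketch below uses the standard Couette-flow estimate for a thin Newtonian water film sheared at speed v, F_v = η A v / h; this form and all parameter values are our assumptions for illustration only.

```python
def viscous_friction(viscosity, area, velocity, film_height):
    """Couette-style viscous drag of a thin Newtonian film sheared at speed v."""
    return viscosity * area * velocity / film_height

eta_water = 1.0e-3   # Pa*s, water viscosity at ~20 C
area = 1e-4          # m^2, contact area (placeholder)
v = 1.0              # m/s, the sliding speed assumed later in the paper
h_film = 5e-5        # m, assumed film thickness (placeholder)

print(f"F_v ~ {viscous_friction(eta_water, area, v, h_film):.4f} N")
```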
Combination of fingerprint and water.
Adhesive force changes when the amount of water changes. α_adh and β_adh then change, so the tendency of the equation changes. The surface tension of water determines the maximum height at which water can remain attached to the ceiling portion of the fingerprint. We call this height h_l and can derive three types of equations as the height of the water h_w changes, as given in Eqs 11-13. Figure 2 shows how the adhesive force works between the object, hand, and water.
Equation 11 represents the force when the water drops and thus the ceiling water disappears. Equation 12 gives the time when the water in the column disappears. Equation 13 shows the case where the water overflows to the lower part. Figure 1 shows the relationship between the amount of water in these three cases and the fingerprint size. Each figure contains a case where Eqs 11-13 occur.
Experimental material. We carried out experiments to verify the theory. One of the reasons that water can greatly change the friction force on the fingers is the softness of the fingers, that is, their small Young's modulus. Using a soft material increases the contact area according to the gripping force, which can result in greater additional friction from the water. To reflect this, we used silicon, a soft material. Figure 3(a) shows a rectangular-shaped silicon sample. The silicon is translucent with a hardness of 20. To make it, we used a plastic mold, which was filled with molding silicon. Since it is difficult to perform the experiment with an actual fingerprint, this approach makes it relatively easy to compare the result with the theoretical value. Figure 3 shows the exact size of the fingerprint. The reason why we used this size is that, in order to maximize the amount of water in the fingerprint, there should be no water loss during the experiment. In the case of real fingerprints, we thought that the silicon pads could fulfill this role faithfully, and hence we chose a size that is similar to the actual fingerprint size.
In order to make use of the characteristics of the soft material, we measured the amount of additional force when the contact area changes. To this end, we made a spherical silicon finger that has a fingerprint, as shown in Fig. 3(c). We made the finger body in a hemispherical shape and made a circular fingerprint around the pole region of the finger body. The direction of the fingerprint was made in the normal direction of the finger body, so its shape is similar to that of a human finger. We used a 3D printer to create a mold of the finger. Figure 4 shows the shape of the actual finger. We conducted the same experiment inserting fingerprints into the spherical silicon. Using spherical silicon, it is possible to calculate the contact area according to the height, as well as the change in the additional friction force according to the area.
We used a load cell to measure the friction force 27 . The load cell is a force sensor that measures the force difference between both ends through a current change. We used a load cell that can measure 3 kgf, or about 30 N, with a resolution of 0.01 N. We chose it because the magnitude of the force that we need is not large but requires precise measurement. There are two forces to measure, and their directions are orthogonal, so two experiments should be conducted. In the case of a soft material, the contact area changes according to the vertical force, and the friction force also changes.
Therefore, the positions when measuring the two forces are important, and the stage can control them. The two-degree-of-freedom stage can adjust the position of the object precisely, and we can also use the stage when measuring the kinetic friction force. Finally, we calculated the friction force according to the change of water; because we needed a uniform surface, we used a plastic plate. Figure 4 shows the experimental setting.
Relation between water and friction force. We conducted an experiment to obtain the friction force between the ground and the silicon. It is common to obtain the friction force from the difference between the force when the object does not move and the force when it moves. However, when the silicon pad moves, the stiffness of the silicon pad generates a repulsive force, so it is difficult to measure the friction force. Since the friction force acts in the direction opposite to the motion, we conducted the experiment two times, changing the direction of movement. We assumed that the difference between the two measured values is twice the actual value. There is no way to accurately measure the amount of water in the fingerprint. We assume that the point where the friction force is maximal is the point where the fingerprint is completely filled with water. We performed three experiments (no water in the fingerprint, fingerprint full of water, fingerprint half-filled). We used a pipette to control the amount of water. To calculate the friction force when the fingerprint is completely filled, we used 0.02 mL of water. In the half-filled case, we used 0.01 mL of water. Figure 5 shows the results of the experiment. We can see that there is a significant difference in friction force between half water and full water. The reason for this phenomenon is that the size of the contact surface between the fingerprint and the water changes the friction force proportionally. When the fingerprint is full, the water is attached to the ceiling of the fingerprint, so the variation of the friction force becomes larger.
Initially there is no change in the friction force, but when the object begins to move, the friction force increases. The friction force reaches a peak and then gradually falls. After the object has stopped, the friction falls, but the stiffness of the object disturbs the calculation of the real friction force, and therefore the object was moved to the opposite side. In this process, the force acts on the other side and the friction force value becomes negative. Figure 6 shows the theoretical and experimental results. The red line shows the theoretical results for the friction force. The blue dots indicate the experimental results in the three cases. To derive the theoretical results, we used Eqs 11-13 and the equation of the friction force when there is no water (..). The surface energy creates additional friction force before the water height reaches 0.2 mm, and the viscous force creates friction force when the height is greater than 0.2 mm. F_ad is the additional force, and thus the real friction force is the sum of F_ad and μ_k N. Equation 14 shows the real friction force when water is present. We calculated the coefficient of friction and the vertical force through an additional experiment. The COF is 0.45 and the vertical force is 6.6 N. We assumed that the velocity is 1 m/s when calculating the viscous force. Table 1 shows the remaining constant values.
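A minimal sketch of Eq. 14 as described in the text: the measured friction is the Coulomb term plus the water-induced additional force, using the reported μ_k = 0.45 and N = 6.6 N. The additional-force value plugged in is purely illustrative, since Eqs 11-13 are not reproduced here.

```python
def total_friction(mu_k, normal_force, f_additional):
    """Eq. 14 as described in the text: Coulomb term plus water-induced term."""
    return mu_k * normal_force + f_additional

mu_k = 0.45   # coefficient of friction reported in the paper
N = 6.6       # N, vertical force reported in the paper
F_ad = 0.8    # N, placeholder additional force from Eqs 11-13

print(f"dry friction ~ {mu_k * N:.2f} N")
print(f"wet friction ~ {total_friction(mu_k, N, F_ad):.2f} N")
```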
The next section shows the relationship between the contact area and friction force.
Relation between contact area and friction force. We performed this experiment to confirm that water can increase the friction force more when there is a large contact area than when the contact area is small. Making a silicon finger as presented in Fig. 3(c) is advantageous for changing the contact area. We can change the contact area by changing the vertical force, and the vertical force changes when the Z stage changes. Figure 7 shows the relationship between the Z stage and the vertical force. Theoretically, the contact area is proportional to w^(2/3) 20 , where w is the vertical force. Equation 15 shows the equation relating the contact area and the vertical force 20 .
The contact area is proportional to w^(2/3), and we can represent the constant values using Young's modulus (E), Poisson's ratio (ν), and the radius of the finger (R_s) 28 . We can calculate this through the experiment. Equation 15 and Fig. 7 show the correlation between them, and Fig. 8 also shows the relation.
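The w^(2/3) scaling quoted above matches a Hertz-type contact between a soft sphere and a flat plate. The sketch below uses the standard Hertz expression for the contact radius, which is our assumption of how the constant in Eq. 15 is built from E, ν, and R_s; the material values are placeholders.

```python
import math

def hertz_contact_area(w, E, nu, R_s):
    """Hertz contact of an elastic sphere (radius R_s) on a rigid flat:
    contact radius a = (3 w R_s / (4 E*))**(1/3), area = pi * a**2 ~ w**(2/3)."""
    E_star = E / (1.0 - nu**2)   # effective modulus for a rigid counter-surface
    a = (3.0 * w * R_s / (4.0 * E_star)) ** (1.0 / 3.0)
    return math.pi * a**2

# Placeholder silicone-like parameters (not the paper's fitted constants).
E = 0.5e6    # Pa, Young's modulus of a soft silicone
nu = 0.49    # Poisson's ratio
R_s = 0.01   # m, radius of the spherical finger

for w in (1.0, 2.0, 4.0):   # vertical force in N
    print(w, hertz_contact_area(w, E, nu, R_s))
# Doubling w multiplies the contact area by 2**(2/3) ~ 1.59.
```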
Due to the nature of the silicon, the stiffness is inevitably large. In general, the stiffness does not change with speed. Therefore, it is necessary to measure the friction force regardless of speed. However, it is difficult to measure the friction force accurately by moving the silicon quickly because the restoration speed of the silicon is slow. Figure 9 shows the additional force due to water at different speeds, and we can see that the portion where the friction force varies depending on the contact area is the low-speed portion. Therefore, we took the difference of the friction force at the lowest speed as the actual experimental value.
The data at low velocity are the most reliable, and we can compare these data with Eq. 6. Unlike the rectangular fingerprint, the length of the fingerprint is not constant and the area of the side space is different. Since the area is proportional to the diameter of the finger that forms the fingerprint, we can calculate the length of the fingerprint (B). Table 2 shows the constant values for the spherical finger. Figure 10 shows the theoretical results for the friction force when the Z stage changes, together with the experimental results.
Discussion
Experiments with a rectangular fingerprint proved the hypothesis that the adhesion of water inside the fingerprint increases the friction force. We can see that the surface force of water can change the friction force between two objects. The rectangular fingerprint experiment also shows that the additional force follows Eqs 11-13, which means that we can control the friction force precisely. Theoretically and experimentally, we can see that this change is significant with respect to the act of picking up real objects. The surface tension that acts between two parallel plates creates this additional force, and the force is proportional to the contact area and inversely proportional to the distance between the two plates. Also, the proportionality constant is a specific constant determined between the two objects, and the surface tension determines this value. From the spherical silicon experiment, we can see that when the contact area increases, the friction force also increases when there is a wet system. The reason for the error is the stiffness of the silicon. The stiffness increases when the height of the object increases. A spherical finger has considerable height, and hence there is a large error.
In future studies the authors will carry out additional experiments for more accurate measurement of friction forces. By processing the shape of the fingerprint more precisely and accurately measuring the amount of water in the fingerprint, we can conduct experiments that measure the friction force more accurately. The results presented in this paper capture the approximate tendency of the theory, but in order to put the technology into practical use we must measure the amount of water and the magnitude of the force. If this is possible, this technology will greatly contribute to the development of science and technology.
A Comparative Analysis of Transformer-based Protein Language Models for Remote Homology Prediction
Protein language models based on the transformer architecture are increasingly shown to learn rich representations from protein sequences that improve performance on a variety of downstream protein prediction tasks. These tasks encompass a wide range of predictions, including prediction of secondary structure, subcellular localization, evolutionary relationships within protein families, as well as superfamily and family membership. There is recent evidence that such models also implicitly learn structural information. In this paper we put this to the test on a hallmark problem in computational biology, remote homology prediction. We employ a rigorous setting, where, by lowering sequence identity, we clarify whether the problem of remote homology prediction has been solved. Among various interesting findings, we report that current state-of-the-art, large models are still underperforming in the "twilight zone" of very low sequence identity.
INTRODUCTION
An explosion in the number of known protein sequences is allowing researchers to leverage the Transformer [29] architecture and build Protein Language Models (PLMs) [4,11,13].PLMs are highly appealing due to their ability to learn task-agnostic representations of proteins.In particular, they provide an alternative framework to link protein sequence to function without relying on sequence alignments and similarity.Sequence representations learned via PLMs have been shown useful for various prediction tasks, including predicting secondary structure [11], subcellular localization [11,26], evolutionary relationships within protein families [14], and superfamily [15] and family [20] membership.
Observations from recent studies indicate that PLMs, though trained exclusively on sequence data, learn structural information; work in [24] suggests that sequence-only PLMs indeed learn structural aspects.Scaling up to 15 billion parameters in ESM-2 (and training over 65 million unique sequences) yields representations that, harnessed through an equivariant NN, additionally predict tertiary structure (though not at AlphaFold2 accuracy) [17].These reports are not entirely surprising; PLMs capture the well-understood selective pressures that have been exerted on protein sequences throughout millennia of evolution.These pressures originate from the functional requirements of proteins, which in-turn determine their structure by affecting the evolution of their underlying sequences.This ability to encode structure is perhaps also a major aspect of the utility of PLMs in downstream prediction tasks related to protein function, even if limited to superfamily prediction, function co-localization, Gene Ontology categorization [16,18] and more.
We caution, however, that such performance, though seemingly impressive, may be somewhat exaggerated for various reasons.First, care has to be taken when constructing training datasets to remove sequence redundancy as well as to avoid data leakage, where proteins in the test data set may have high sequence identity with proteins in the training dataset.Second, structure and function are well preserved above 30% sequence identity [25].Proteins with similar structure and function are also present below this level of identity but cannot be detected from sequence similarity alone [25].It remains unclear how PLMs perform in this zone (which some authors have taken to referring to as the "twilight zone" [25]).
One challenging, hallmark problem in computational molecular biology, remote homolog detection, is a suitable stress test for how much a PLM has learned from sequence information alone, and whether indeed it can detect remote homologs in the twilight zone.It is worth noting that (protein) remote homology detection refers to the identification of proteins that are similar in structure but share low sequence identity; this is a working definition.The term remote homology was originally introduced to refer to proteins that share a superfamily1 but not a family2 .For the purpose of computational studies, this working definition lends itself to a gradated problem, where one lowers the sequence identity between proteins in the "test" dataset with the query/target protein, and determines whether proteins similar to the query can be detected.This is the setting for this paper, and it is in this setting, over decreasing levels of sequence identity, in which we evaluate pre-trained, transformerbased PLMs (over exclusively protein sequences) of various sizes for their ability to detect remote homologs.
Remote homology prediction is a particularly appropriate problem to determine whether a PLM pre-trained exclusively over protein sequences has also encoded/learned structure information.As one lowers sequence identity, it becomes increasingly difficult to identify homologous proteins based on sequence; remote homologs are those that retain their function (and structure) similarity at low levels of sequence identity.So, if a PLM allows identification of homologs at very low levels of sequence identity, then it has additionally encoded structure in its learned representations.
In this paper, we select powerful, representative, state-of-the-art transformer-based PLMs (trained exclusively over protein sequence data) and evaluate whether representations learned by them aid in remote homology detection/prediction.We employ a rigorous setting, where, by lowering sequence identity, we clarify whether the problem of remote homology prediction has been "solved."Indeed, in contrast to existing pre-prints and other reported findings that enthusiastically declare the problem solved (see Section 2), we show through a careful evaluation that these reports are highly exaggerated.The problem, particularly as one reaches the truly challenging setting of 30% or lower sequence identity, remains challenging for all current, SOTA PLMs, including large ones such as ESM2.This is one of the major findings of this paper.
An additional contribution of this paper is the presentation of metrics to objectively determine whether the distance between PLM-learned representations of proteins correlates with distance between corresponding sequences.This becomes particularly important after removing from consideration easy, high-sequence identity pairs.This analysis and others clarify and allow us to better understand the success and failure cases of PLMs for remote homology prediction.For instance, as we show here, we identify which protein domains are most and least amenable to remotehomology prediction based on PLM representation-similarity; we provide several visualizations to aid our understanding of whether useful structural information is easily-obtainable (or not) from PLMlearned representations of proteins.
The rest of this paper is organized as follows.We first relate some definitions, preliminaries, and necessary details about existing PLMs in Section 2. Section 3 relates our analysis setting, and the metrics utilized.Section 4 reports our findings, and Section 5 concludes the paper.
RELATED WORK AND BACKGROUND 2.1 Protein Classification and Homology
Currently, the most commonly used definition of remote homology in computational studies is based on the hierarchical classification system for proteins provided in the SCOP2 [2,3] and SCOPe [5,12] databases [6].These databases divide protein sequences into "domains" in levels of classes, folds, superfamilies, families, protein regions, and protein types.Generally, the criteria for family membership are related to sequence-level similarities; SCOP's documentation indicates that all sequences sharing sequence identity above 30% are grouped in the same family.
However, this appears to be a simplification of the actual criteria, as the analysis in [6] is based on similarity-based sequence clustering rather than all-to-all alignment and comparison of all protein sequence pairs in the database.Using this system, proteins belonging to the same superfamily are referred to as superfamilylevel homologs [27].Proteins in the same superfamily but in different families are considered remote homologs at the superfamily level [6,22,27].
Protein Language Models
Several iterations of PLMs have been developed since the advent of the transformer architecture.In particular, in this paper we employ three publicly available, pre-trained, SOTA PLMs to obtain representations for our analysis: (1) ESM-1 is the Evolutionary-Scale Modeling PLM [23].ESM has been trained on 250 million protein sequences (a total of 86 billion amino acids) on masked-language-modelling tasks.While there are several lighter-weight ESM-1 variants, we utilize the ESM-1b variant with 33-layers and 650 M parameters.
(2) ESM-2 is a more recent update to the ESM-1 architecture and was trained with variations spanning from 8 M to 15 B parameters [17].For consistency, we used the 33-layers, 650 M parameter version.(3) ProtTrans T5 [10] is another, more recent PLM with selfsupervised training, based on the original T5 model [21] for natural language processing.Specifically, ProtTrans-T5 is a 3 B parameter encoder-decoder model, and it was trained on a denoising task where 15% of the amino acids in the input were randomly masked.
All three of these models employ masked-token prediction as their training objective.
Classic Definition: Remote Homology
In this study, we utilize the Structural Classification of Proteins SCOP2 [2,3] database (latest update: 29 June 2022), containing 5,936 families and 2,816 superfamilies. SCOP2 defines a family as a group of closely related proteins with clear evidence for their evolutionary origin and a superfamily as a group that brings together more distantly related protein domains. The similarity among proteins in a superfamily is frequently limited to common structural features that, along with a conserved architecture of active or binding sites or similar modes of oligomerization, suggest a probable common evolutionary ancestry. Following the definition from [6,22,27], we first define that a pair of proteins, p_i and p_j, are remote homologs if they belong to the same superfamily but different families, i.e., SF_i = SF_j and F_i ≠ F_j, where SF_i and F_i denote the superfamily and family label annotation of the i-th protein.
Hardened Definition: Remote Homology
We harden the above definition to accommodate the sequence identity threshold and focus on the truly hard cases; that is, no pair of remote homologs may share sequence identity above a predefined threshold. This threshold ensures that the pair falls into the "twilight zone" [25] in terms of sequence identity. The sequence identity is computed as a pairwise global alignment score. The extended definition requires, in addition, that seqid(p_i, p_j) ≤ th. We report experiments and results considering both of the above definitions, which allows us to truly gauge the performance of various PLMs as the problem becomes harder (lower sequence identity). We observe in our study that there is a higher number of sequence pairs in different families with above 30% sequence identity than sequence pairs belonging to the same family in the SCOP2 database (figures not shown). This reinforces our hypothesis that extra filtering may be required if we want to identify nontrivial remote homologs without high sequence-level similarity with which to test PLMs. Fig. 1 shows the pairwise sequence identity distribution in the SCOP2 [2,3] database.
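A small predicate capturing both definitions as we read them: Eq. 1 requires the same superfamily but a different family, and Eq. 2 additionally caps the pairwise sequence identity at a threshold. All names and the example labels are ours.

```python
def is_remote_homolog(sf_i, sf_j, fam_i, fam_j, seq_identity=None, threshold=None):
    """Eq. 1: same superfamily, different family.
    Eq. 2 (hardened): additionally require sequence identity <= threshold."""
    basic = (sf_i == sf_j) and (fam_i != fam_j)
    if threshold is None:
        return basic
    return basic and seq_identity is not None and seq_identity <= threshold

# Hardened definition at the 30% ("twilight zone") cutoff.
print(is_remote_homolog("SF_a", "SF_a", "F_1", "F_2", seq_identity=0.22, threshold=0.30))  # True
print(is_remote_homolog("SF_a", "SF_a", "F_1", "F_2", seq_identity=0.55, threshold=0.30))  # False
```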
Learned Amino-acid Level and Protein-level Representations
For a protein p_i, 1 ≤ i ≤ N, in the SCOP2 database, each defined by its sequence of amino acids, we obtain a corresponding representation Z_i ∈ R^(L_i × d) from a PLM transformer; in this representation, each amino acid of a protein is mapped into R^d. Given a learned Z_i ∈ R^(L_i × d), we obtain the protein-level representation z_i ∈ R^(1 × d) by taking the average of the learned amino-acid-level features over the sequence length, i.e., z_i = (1/L_i) Σ_{k=1}^{L_i} Z_i[k, :].
Comparing PLM-learned Representations of Proteins
Following the methodology of Rives et al. [22], we adopt cosine similarity between a pair of protein representations as our similarity metric. Specifically, for each pair of sequences in SCOP2, we compute the representation similarity as sim(z_i, z_j) = (z_i · z_j) / (||z_i|| ||z_j||).
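A compact sketch of the pooling and similarity steps just described; the per-residue embeddings below are random placeholders rather than actual PLM outputs, and the dimension is chosen arbitrarily.

```python
import numpy as np

def protein_level_embedding(residue_embeddings):
    """Average amino-acid-level features (L x d) over sequence length -> (d,)."""
    return np.asarray(residue_embeddings).mean(axis=0)

def cosine_similarity(z_i, z_j):
    """Cosine similarity between two protein-level representations."""
    z_i, z_j = np.asarray(z_i), np.asarray(z_j)
    return float(z_i @ z_j / (np.linalg.norm(z_i) * np.linalg.norm(z_j)))

# Toy example: two proteins with per-residue embeddings of dimension d = 4.
Z_a = np.random.rand(120, 4)   # protein of length 120
Z_b = np.random.rand(87, 4)    # protein of length 87
z_a, z_b = protein_level_embedding(Z_a), protein_level_embedding(Z_b)
print(cosine_similarity(z_a, z_b))
```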
Comparing Sequences of Proteins
To enable our analysis of the embedding representations of remote homologs in PLMs, we compute pairwise sequence alignments and identity scores for each pair of sequences in the SCOP2 database. To compute these, we used Biopython's [7] pairwise alignment tool with default parameters.
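The paper only states that Biopython with default parameters was used, so the sketch below is one plausible way to obtain a percent identity from a default global alignment; in particular, normalising by the shorter sequence length is our choice, not necessarily the authors'.

```python
from Bio import Align

def percent_identity(seq_a, seq_b):
    """Global alignment with default PairwiseAligner settings, then count
    identical aligned positions; normalisation by the shorter sequence is an
    assumption on our part."""
    aligner = Align.PairwiseAligner()          # global alignment by default
    alignment = aligner.align(seq_a, seq_b)[0]
    matches = 0
    # alignment.aligned gives matching segment ranges in seq_a and seq_b.
    for (a_start, a_end), (b_start, b_end) in zip(*alignment.aligned):
        matches += sum(x == y for x, y in zip(seq_a[a_start:a_end], seq_b[b_start:b_end]))
    return 100.0 * matches / min(len(seq_a), len(seq_b))

print(percent_identity("MKTAYIAKQR", "MKTAYVAKQR"))
```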
From Representation Similarity to Prediction of Remote Homology
We employ several metrics and forms of analysis to evaluate whether structural commonalities between pairs of sequences are reflected in their embeddings.
3.6.1 Query-based Analysis. Using each sequence's PLM-learned embedding as a query q, we exclude all other sequences from the same family as q from the corpus of sequences C that will be queried. In our case, C refers to the set of all N sequences in SCOP2.
We then exclude from C all sequences sharing a sequence identity above a given threshold th with the query sequence; the remaining query-sequence pairs are denoted {(q, s) | s ∈ C_q}, where C_q is the filtered corpus for query q. For evaluating the performance, we consider the ground truth to be true (i.e., the sequences are true homologs) if a sequence in the test dataset is from the same superfamily as the query and false otherwise, in accordance with Equation 2. We rank candidates by embedding similarity to the query and report three metrics: (1) AUROC is the area under the receiver operating characteristic curve for this ranking. (2) AUPRC is the area under the precision-recall curve.
(3) Hit-10 [19] is the percentage of queries for which a true homolog was in the top-10 sequences with the most similar embeddings.
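A small sketch of how Hit-10 can be computed once candidates sharing a family with the query (or exceeding the identity threshold) have already been removed; the array names and random data are illustrative only.

```python
import numpy as np

def hit_at_10(similarity, is_true_homolog):
    """similarity: (num_queries, num_candidates) embedding similarities.
    is_true_homolog: boolean array of the same shape (same superfamily as query).
    Returns the fraction of queries with a true homolog among the 10 most
    similar embeddings."""
    similarity = np.asarray(similarity)
    is_true_homolog = np.asarray(is_true_homolog, dtype=bool)
    hits = 0
    for sims, truth in zip(similarity, is_true_homolog):
        top10 = np.argsort(-sims)[:10]
        hits += bool(truth[top10].any())
    return hits / len(similarity)

# Toy example: 3 queries against 50 filtered candidates.
rng = np.random.default_rng(0)
sims = rng.random((3, 50))
truth = rng.random((3, 50)) > 0.9
print(hit_at_10(sims, truth))
```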
3.6.2 Clustering Analysis. We perform k-means clustering on the embeddings of sequences from the most-successfully-predicted and least-successfully-predicted superfamilies for each PLM based on the AUC (see above). We evaluate the quality of the resulting clusters and their agreement with the ground truth (i.e., whether sequences from the same superfamily are likely to be clustered together).
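The exact cluster-quality measures behind Table 6 are not listed in this excerpt, so the sketch below uses adjusted Rand index and silhouette score as plausible stand-ins for "separability and accuracy"; the data are random placeholders for PLM embeddings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, silhouette_score

def cluster_and_score(embeddings, superfamily_labels):
    """Cluster protein-level embeddings with k-means (k = number of superfamilies)
    and report agreement with the superfamily ground truth."""
    labels = np.asarray(superfamily_labels)
    k = len(set(labels.tolist()))
    pred = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embeddings)
    return {
        "ARI": adjusted_rand_score(labels, pred),
        "silhouette": silhouette_score(embeddings, pred),
    }

# Toy data standing in for embeddings of "hard"/"soft" superfamily members.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=i, size=(30, 16)) for i in range(3)])
y = np.repeat(["sf1", "sf2", "sf3"], 30)
print(cluster_and_score(X, y))
```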
RESULTS & ANALYSIS 4.1 Experimental Setup
Our experimental setup is designed with the goal of accessibility and reproducibility.
4.1.1 Data Preprocessing. We opt to perform our analysis using all sequences in SCOP2 with minimal preprocessing or filtering. One exception to this is the removal of sequences where multiple spans were indicated within the same sequence, due to the ambiguity this creates when assessing the domains of the sequence and subsequences. We remove 506 such sequences out of the total of 36,900 sequences provided in the SCOP2 database. Consequently, we have 2,260,440 remote-homolog pairs at the superfamily level. Note that we analyze significantly more remote homologs (24 times more) than Rives et al. [22], who report performance on 92,944 pairs of remote homologs from SCOPe due to heavy filtration.
4.1.2 Sequence-Identity Thresholding. We compute the performance metrics using all protein sequences as individual queries. The thresholds we choose vary from 10% to 100% sequence identity in 5% increments. To compute AUROC, AUPRC, and Hit-10, we do not perform any sub-sampling or averaging over the protein sequences but instead calculate all query-vs-ground-truth pairs and compute the metrics once over all samples, for each value of the sequence-identity threshold. This has the advantage of providing robust and reliable metrics, but this strategy also weights our results in favor of the larger superfamilies when compared with the strategy of sampling a single query from each superfamily. So, to provide a more fine-grained analysis at the superfamily level, we also report the same metrics for individual superfamilies from "hard" and "soft" domains, that is, difficult-to-predict and easy-to-predict superfamilies for each PLM.
Performance in the Twilight Zone
Figures 2, 3, and 4 show AUROC, AUPRC, and Hit-10, respectively, for all three PLMs at varying levels of the sequence identity threshold. These metrics are also reported in numerical form in Tables 1, 2, and 3. For reference, "random" is added as a random baseline model; in it, the distribution of ground truths is unchanged, but random numbers are used for embedding similarities. In Table 1 the DeLong variance is reported below the AUROC scores. These results appear to confirm that PLMs still struggle to identify remote homologs in the "twilight zone" [1,25] from the sequence alone. We observe AUROC dropping sharply when the sequence identity threshold is lowered below 40%, indicating that above this threshold the problem is much easier. We also note much lower performance than Rives et al. [24] at remote homology, even with no filtering (see "AUROC (Eq. 1)" in Table 1, or th=100% in Figure 2). Because the dataset used in their study of remote-homology prediction is not publicly available, we can only speculate as to the lower performance observed here. It is possible that it is due to the differences in the filtration applied to the dataset mentioned in Section 4.1.1, or to differences in the methodology for computing embeddings or calculating metrics that go beyond the details listed in their paper. Because remote homolog pairs are exceedingly rare when compared with the number of possible sequence pairs in SCOP2, this creates a significant class imbalance in the ground truth when calculating the AUROC scores. Thus, DeLong variances are also provided to give a measure of the reliability of the provided AUROC scores, especially as the threshold is lowered and the number of positive ground-truth examples becomes even lower. In addition, we observe a similar trend of decreasing performance in AUPRC scores, indicating that these results are not simply an artifact of the worsening class imbalance as the threshold is lowered. The random baselines shown in Figures 2, 3, and 4 also confirm that the changing ground truth distribution for different values of the threshold is not to blame for the decrease in performance.
The Hit-10 scores show a similar trend regarding model performance in the "twilight zone", but with the difference that ESM-1b now outperforms ProtTransT5 and ESM-2 on this metric. Because this metric is calculated at the query level and then averaged over all queries, this may indicate that there are some classes of query where ESM-1b can identify the remote homologs at least to some degree, whereas the other two models completely fail to assign a high "top-10" rank to the true homologs.
Protein Domain Analysis for Remote Homology Prediction
In addition to calculating AUROC across all queries in the SCOP2 database, we also calculate the same metrics separately for query sequences coming from each superfamily in SCOP2. To identify the "hard" and "soft" (i.e., difficult-to-predict and easy-to-predict) superfamilies for each PLM, we start with the 150 superfamilies with the highest number of remote homologs in SCOP2 and identify the 10 superfamilies with the highest AUC and the 10 with the lowest AUC when attempting to predict homologs based on PLM embeddings, using queries from that superfamily. Notably, the better-performing superfamilies tended to have fewer included sequences on average, indicating that these may be superfamilies with more refined and restrictive definitions than the larger superfamilies shown in Tables 4 and 5. Another explanation is that the inflated AUROC scores may be caused by the increased class imbalance for queries from the smaller superfamilies. However, the PRC column indicates that generally the bottom-10 superfamilies also tended to have lower PRC. To a lesser degree, this also holds for the Hit-10 scores, despite the fact that even many of the top-10 superfamilies had a Hit-10 score of zero.
Table 4 shows the superfamilies with the highest AUROC when using embeddings from the ProtTransT5 model (the best-performing PLM, judging by its AUROC in Table 1) to predict remote homologs, and Table 5 shows the superfamilies where the AUC was lowest.Similar tables, giving the hard and soft domains for the other two PLMs, are provided in the supplement.Note that the AUC scores used here are using the sequence-identity filtering threshold of 30%.
Visual Analysis of Hard and Soft sets
To visualize how well the "hard" and "soft" domains are separated in the representational space of the PLMs, we perform a T-SNE [28] dimensionality reduction to view the embeddings from these superfamilies in a two-dimensional plot. The T-SNE transformation is fit using all sequences in the SCOP2 database; superfamily-based filtering is only applied later, when producing the visualizations. In Figures 5, 6, and 7, we report the top-5 "soft" domains in the top panel and the bottom-5 "hard" domains in the bottom panel for ESM-1b, ESM-2, and ProtTransT5, respectively. Subjectively, in all cases this appears to show cleaner and more defined clusters for the "soft" domains relative to the "hard" ones. This indicates some level of agreement between the distances between sequences in our T-SNE projection and the cosine similarity between pairs of sequences that we used to define remote homologs in the high-dimensional protein embedding space.
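A minimal sketch of that visualization pipeline: fit T-SNE on all embeddings first and apply superfamily filtering only when plotting. The embeddings and superfamily labels below are random placeholders, and the list of "soft" superfamilies is hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy embeddings standing in for PLM protein-level representations.
rng = np.random.default_rng(2)
embeddings = rng.normal(size=(500, 64))
superfamily = rng.integers(0, 20, size=500)

# Fit T-SNE on all sequences, as in the text; filter by superfamily afterwards.
coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
soft_superfamilies = [0, 1, 2, 3, 4]                     # placeholder "soft" set
soft_mask = np.isin(superfamily, soft_superfamilies)
soft_coords = coords[soft_mask]
print(soft_coords.shape)

# Plotting would then be, e.g.:
# import matplotlib.pyplot as plt
# plt.scatter(soft_coords[:, 0], soft_coords[:, 1], c=superfamily[soft_mask], s=5)
```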
Distribution of Pairwise Embedding Similarity
To better understand the significance of a given similarity level between two sequences in the representational space of the PLMs, we visualize the distribution of embedding similarities across all sequence pairs in SCOP2 for all three PLMs in Figure 8. All three PLMs show unimodal distributions of embedding similarities. However, the distributions for both ESM models are skewed heavily toward higher similarities between embeddings. Interestingly, the distribution for ProtTransT5 embedding similarities is almost identical to the distribution of pairwise sequence identities in SCOP2 shown in Figure 1. However, while this may seem to indicate that the sequence information is retained by the model, it may actually be a coincidence: Figure 9 shows little correlation between pairwise sequence identity and pairwise embedding similarity in the ProtTransT5 model.
Table 4: Superfamilies with the highest AUROC in ProtTransT5, selected from a list of 150 superfamilies with the most remote homologs in the SCOP database. Note that the maximum possible number of sequence pairs that can be remote homologs is actually higher than the number of sequences in the superfamily, because the number of pairs grows quadratically with the number of sequences. Also note that the sequence counts reported in the "Num Seqs" and "Num RHs" columns are prior to applying the threshold.
Clustering Analysis
To better quantify how well-defined and separated the superfamilies from the "hard" and "soft" domains are in the representational space of PLMs, we provide a clustering analysis. Table 6 shows the performance of k-means clustering on the "hard" and "soft" domains using embeddings from each of the PLMs. Note that for each PLM, we use its own "hard" and "soft" domains based on the AUROC of that PLM's embeddings at predicting the remote homologs in those domains. Predictably, this unsupervised clustering is more successful at differentiating the "soft" superfamilies from each other than it is at differentiating the "hard" superfamilies. These results serve to bolster the results achieved using pairwise cosine similarity, indicating that the distinction between "hard" and "soft" domains holds even at the cluster level (rather than just the pairwise level).
Table 6: Cluster separability and accuracy metrics for Kmeans applied to embeddings for all sequences coming from the top-10 "hard" and "soft" domains (i.e., superfamilies) for each model, where the superfamily label is taken as the ground truth.
CONCLUSION
Through our rigorous experiments where we carefully controlled the difficulty of the setting for remote homology prediction, we have gained valuable insights into the current state of PLMs in identifying remote homology and capturing structural features of protein sequences.Our main set of results largely conflicts with the analogous analyses performed by other research groups investigating their own state-of-the-art PLMs.In summary, remote homology prediction remains difficult for PLMs where it matters; that is, as sequence identity gets lower.By conducting analyses in the challenging "twilight zone" and excluding numerous trivial samples from the dataset used to evaluate remote homology prediction metrics, we have shed light on the behavior of PLMs under difficult conditions.We have examined specific superfamilies where PLMs effectively capture remote homologs as well as cases where they exhibit poor performance, offering valuable insights for improving future PLMs and even facilitating the development of novel protein-modeling approaches beyond the traditional PLM paradigm.
In addition, our thorough analysis includes visualizations of various aspects of PLM representations that provide further understanding of their successes and failures.These visualizations complement our main conclusion and offer useful insights into the factors contributing to PLMs' performance.
We also uncovered important details regarding the distribution of protein domains and pairwise sequence identities in the SCOP database that supplement their original documentation and provide missing information regarding the presence of many sequences officially categorized as being in different families, that in reality share a high sequence identity.
In future work, we plan to leverage these findings to inform our exploration of different training regimes and model architectures.Rather than relying on sequence-level similarity, we aim to focus on performance in the "twilight zone" using a new benchmark dataset.Furthermore, we aspire to incorporate more biological knowledge to explain the successes and failures of existing PLMs through further analysis.
We believe that our work will be valuable to researchers dedicated to advancing protein structure models.The datasets, code, and analyses presented here are available at: github.com/amoldwin/plm-remote-homolog-analysis.
Figure 1 :
Figure 1: Histogram showing the sequence identity distribution of sequence pairs from the SCOP2 database.
Figure 2 :
Figure 2: AUROC and DeLong variance embedding similarity as a predictor of homology for embeddings from all three PLMs, as the filtering sequence identity threshold is decreased from 100% to 10%.A threshold of 100% indicates no filtering beyond removal of sequences in the same family as the query, following Eq. 1. PLM embeddings of each sequence from the sequences in SCOP2 are used as queries.
Figure 3 :
Figure 3: AUPRC of embedding similarity as a predictor of homology, as filtering threshold is decreased from 100% to 10%, from all three PLMs.
Figure 4 :
Figure 4: HIT-10 of embedding similarity as a predictor of remote-homology, as filtering threshold is decreased from 100% to 10% for all three PLMs.
Figure 7 :
Figure 7: T-SNE plot of ProtTransT5 embeddings for sequences from superfamilies that had the top-5 AUROC (top panel) and bottom-5 AUROC (bottom panel) shown in Tables 4 and 5.
Figure 8 :
Figure 8: Histogram showing pairwise embedding similarities (cosine) for each model, using all pairwise comparisons between sequences from the SCOP2 database.
Figure 9 :
Figure 9: Scatterplot of embedding similarity vs Sequence identity for 1000 randomly-sampled pairs of sequences in the SCOP2 database.Embeddings shown are from ProtTransT5.
Table 1 :
AUROC Comparison. DeLong variances are shown below the AUROC score.
In Silico Prediction of New Inhibitors for Kirsten Rat Sarcoma G12D Cancer Drug Target Using Machine Learning-Based Virtual Screening, Molecular Docking, and Molecular Dynamic Simulation Approaches
Single-point mutations in the Kirsten rat sarcoma (KRAS) viral proto-oncogene are the most common cause of human cancer. In humans, oncogenic KRAS mutations are responsible for about 30% of lung, pancreatic, and colon cancers. One of the predominant mutant KRAS G12D variants is responsible for pancreatic cancer and is an attractive drug target. At the time of writing, no Food and Drug Administration (FDA) approved drugs are available for the KRAS G12D mutant. So, there is a need to develop an effective drug for KRAS G12D. The process of finding new drugs is expensive and time-consuming. On the other hand, in silico drug designing methodologies are cost-effective and less time-consuming. Herein, we employed machine learning algorithms such as K-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF) for the identification of new inhibitors against the KRAS G12D mutant. A total of 82 hits were predicted as active against the KRAS G12D mutant. The active hits were docked into the active site of the KRAS G12D mutant. Furthermore, to evaluate the stability of the compounds with a good docking score, the top two complexes and the standard complex (MRTX-1133) were subjected to 200 ns MD simulation. The top two hits revealed high stability as compared to the standard compound. The binding energy of the top two hits was good as compared to the standard compound. Our identified hits have the potential to inhibit the KRAS G12D mutation and can help combat cancer. To the best of our knowledge, this is the first study in which machine-learning-based virtual screening, molecular docking, and molecular dynamics simulation were carried out for the identification of new promising inhibitors for the KRAS G12D mutant.
Introduction
Cancer is one of the primary causes of mortality globally [1]. In 2023, 1,958,310 new cancer cases and 609,820 cancer deaths are projected to occur in the United States [2]. Radiation, bacteria, and viruses account for about 7% of all cancer cases [3]. Various genetic alterations, including point mutation, deletion, and amplification, can result in the production of oncogenes [4]. Mutations in genes that play an important role in cell proliferation and differentiation are the primary cause of the majority of malignancies. Mutation in the KRAS gene is also responsible for the formation of cancer [5]. KRAS is a member of the RAS superfamily of genes and is located on chromosome 12. KRAS acts as a switch to regulate many signal transduction pathways by cycling between active and inactive states (GTP- and GDP-bound, respectively). The RAF-MEK-ERK pathway is one of these signal transduction cascades [6]. The three genes (HRAS, NRAS, and KRAS) encode the four RAS proteins KRAS4A, KRAS4B, HRAS, and NRAS [7]. The two isoforms KRAS4A and KRAS4B result from the alternative splicing of exon 4, and these two isoforms differ in the C-terminal region [8]. However, KRAS4B is the most prevalent isoform in human cells, whereas KRAS4A expression is more comparable to viral KRAS [9]. Single-point mutations in KRAS are the most common cause of human cancer. In humans, oncogenic KRAS mutations are responsible for at least 30% of lung, pancreatic, thyroid, liver, and colon cancers [10]. Codons 12, 13, and 61 are frequently the sites of cancer-promoting KRAS mutations, with G12 accounting for the majority of these mutations (89%). Among the KRAS mutants, KRAS G12D is the most prevalent (36%), followed by KRAS G12V (23%) and KRAS G12C (14%) [11]. The G12D variant is responsible for pancreatic cancer and is a target for drug development initiatives [12]. Because KRAS lacks binding pockets, its structure has proven to be extremely resistant to small-molecule modification [13]. To date, no FDA-approved drugs have been made available for the KRAS G12D mutant. However, one of Mirati's products, MRTX1133, is in clinical trials for patients with advanced solid tumors associated with the KRAS G12D mutation [14].
New drug development is time-consuming and expensive. It may take 10-15 years and cost up to $2 billion [14]. Conversely, in silico approaches for drug design are cost-effective and fast [15]. The drug development process has been significantly influenced by computer-assisted drug discovery (CADD) tools [16]. These in silico approaches and the advancement of supercomputing capabilities have impressively improved the effectiveness of lead discovery in pharmaceutical research [17]. Artificial intelligence (AI) and machine learning techniques are frequently used for the identification of new lead compounds [18,19]. The identification and design of new lead compounds that bind to therapeutic drug targets are greatly enhanced by AI and ML approaches [20].
The present study aims to identify new promising inhibitors for the KRAS G12D mutant. We used different machine learning models to identify new promising hits from the ZINC database against the KRAS G12D cancer drug target. Using Lipinski's rule of five, drug-like compounds were selected from the ZINC database. The drug-like molecules were docked against the KRAS G12D mutant. The complexes with the top docking scores were simulated for 200 ns. The newly identified hits were found to be more stable during MD simulation. The findings indicate that these new hits may be KRAS G12D protein inhibitors, which may be important for cancer treatment.
Preparation of Dataset
From the BindingDB database, a total of 2526 compounds with reported IC50 values for KRAS G12D were obtained. Compounds for which the IC50 value was not reported were removed from the dataset. The compounds were labeled as active or inactive based on the IC50 value of the standard compound MRTX1133 (6.1 nM) [21]. The active and inactive compounds in the dataset were denoted by the labels 1 and 0, respectively. A compound with an IC50 value less than or equal to the reference was labeled as active, while a compound with an IC50 value higher than the reference was labeled as inactive. In our dataset, 422 compounds were found to be active while the remaining were labeled as inactive. MOE (2016) software was employed to compute 208 2D descriptors in total. To prevent overfitting and improve the model's generalizability, the dataset underwent preprocessing to eliminate any zero and NA values. After preprocessing, only 172 descriptors remained.
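As a rough illustration of this labeling step, the sketch below assumes a BindingDB export with a hypothetical "IC50_nM" column and file name; it is not the authors' actual script.

```python
import pandas as pd

REFERENCE_IC50_NM = 6.1  # IC50 of the standard compound MRTX1133

# "kras_g12d_bindingdb.csv" and the column name "IC50_nM" are illustrative assumptions
df = pd.read_csv("kras_g12d_bindingdb.csv")
df = df.dropna(subset=["IC50_nM"])                               # drop compounds without a reported IC50
df["label"] = (df["IC50_nM"] <= REFERENCE_IC50_NM).astype(int)   # 1 = active, 0 = inactive

print(df["label"].value_counts())  # the study reports 422 active vs. 1578 inactive compounds
```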
Chemical Space and Diversity
The chemical diversity of a dataset significantly affects the reliability of the ML algorithm, and adequate chemical space coverage is needed for model performance [23]. The chemical space spanned by logP and molecular weight (MW) is shown in Figure 1. A substantial spread between active and inactive inhibitors was observed, with logP ranging from −4 to 8 and MW ranging from 250 to 600 Da.
Performance Evaluation of Models
Several supervised ML models, namely KNN, SVM, and RF, were trained using Python v3.9. Several metrics, including accuracy, sensitivity, specificity, and MCC, were computed to assess each model's performance. Among all models, the RF model achieved an accuracy of 99% and an MCC of 0.96, so it was ranked as the best model. The KNN model was ranked second based on accuracy and MCC, with an accuracy of 98% and an MCC of 0.94. The SVM model was ranked third with an MCC of 0.90 and an accuracy of 96%. Table 1 shows the performance evaluation of all the models. To obtain reliable results we employed five-fold cross-validation. Analyzing the ROC-AUC curve is one of the most reliable methods to assess model performance. With an area under the curve (AUC) value of 0.99, the RF model outperformed the KNN and SVM models, which had AUC values of 0.98 and 0.95, respectively, as shown in Figure 2.
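A minimal sketch of how ROC curves such as those in Figure 2 could be produced with scikit-learn is shown below; the fitted classifiers and the held-out split (rf_model, knn_model, svm_model, X_test, y_test) are assumed to exist and are not part of the original text.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Fitted classifiers from the modeling step (assumed); the SVC must be
# built with probability=True for predict_proba to be available.
models = {"RF": rf_model, "KNN": knn_model, "SVM": svm_model}

for name, model in models.items():
    scores = model.predict_proba(X_test)[:, 1]       # probability of the "active" class
    fpr, tpr, _ = roc_curve(y_test, scores)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```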
Virtual Screening
Among the ML algorithms, the RF model showed the best accuracy and MCC score, so it was used for the virtual screening of a total of 20,000 drug-like compounds retrieved from the ZINC database. A total of 82 hits were predicted as active against the KRAS G12D mutant. Among these 82 hits, ten were predicted to be toxic and were removed, while the remaining non-toxic compounds were docked against the KRAS G12D mutant.
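The screening step itself reduces to running the trained classifier over the descriptor table of the ZINC subset; the sketch below is an illustration with assumed file and column names, not the authors' script.

```python
import pandas as pd

# MOE descriptors precomputed for the 20,000 ZINC drug-like compounds (assumed file)
zinc = pd.read_csv("zinc_druglike_descriptors.csv")
ids = zinc["zinc_id"]                      # hypothetical identifier column
X_zinc = zinc.drop(columns=["zinc_id"])

pred = rf_model.predict(X_zinc)            # rf_model: the fitted RandomForestClassifier
hits = ids[pred == 1]
print(f"{len(hits)} compounds predicted active")   # the study reports 82 hits
hits.to_csv("predicted_hits.csv", index=False)
```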
Molecular Docking Study
All 72 hits were docked into the active site of the KRAS G12D mutant. The docking analysis revealed that most of the newly identified hits showed good docking scores and interactions with the KRAS G12D mutant. MRTX-1133 was selected as the control compound in the docking study. Compound ZINC05524764 was identified as the most promising, with a docking score of −7.91 kcal/mol. It establishes five hydrogen bonds with Glu62, Asp92, Asp12, His95, and Gly60 and one ionic interaction with the Glu62 residue of KRAS G12D. Compound ZINC05828661 was found to be the second most potent compound, with a docking score of −6.85 kcal/mol, making six hydrogen bond interactions with the Asp12, Lys16, Ala59, and Arg68 active site residues. The docking score of compound ZINC05725307 was predicted as −6.70 kcal/mol; it made three hydrogen bond contacts with Asp12 and Arg102, one ionic interaction with Lys16, one arene-H interaction with Ala59, and one arene-cation interaction with the Arg68 residue of the KRAS G12D receptor. The control compound MRTX1133 revealed four hydrogen bonds with the Asp12, Glu62, and His95 active site residues of KRAS G12D, while one arene-cation interaction with Arg68 was also observed. Table 2 shows the docking scores and interactions of the most promising hits from the ZINC database. The 3D interactions of the most promising compounds in comparison with the control compound are shown in Figure 3.
Docking Validation
The docking procedure was validated by removing the co-crystal ligand (PDB ID: 7RPZ) and then re-docking it into the active site using MOE (2016) software [23]. The RMSD value between the top-ranked docked conformation and the co-crystallized ligand was predicted to be 0.148 Å (Figure 4), revealing the validity of the MOE docking protocol.
Drug-Likeness and Toxicity Analysis of the Compounds
In evaluating the drug-likeness of the compounds, one widely accepted criterion is the Lipinski rule of five. In this study, the MOE software was employed to calculate the drug-likeness of the compounds. The Lipinski rule of five results for the most promising compounds are presented in Table 3. All the compounds obeyed the Lipinski rule of five, so our newly identified compounds against the KRAS G12D target possess drug-likeness. Furthermore, the in silico toxicity of the compounds was evaluated using the MOE software, and all the compounds were predicted to be non-toxic, as presented in Table 4.
Post-Simulation Analysis
RMSD Analysis
One of the most widely accepted methods for examining the underlying stability of protein-ligand complexes is MD simulation. The stability of the complexes was evaluated by RMSD analysis. For the 200 ns production simulations, the RMSD of KRAS G12D was plotted and compared to the control complex. The RMSD of the ZINC05524764 complex was initially stable up to 50 ns; minor fluctuations were observed between 50 and 55 ns, after which the system converged and remained stable up to 120 ns. After 120 ns, the RMSD gradually increased up to 170 ns, then the system attained stability and remained stable up to 200 ns. The RMSD of the ZINC05828661 complex was stable during the first 50 ns; minor deviations were seen between 50 and 70 ns, after which the system attained stability and remained stable up to 200 ns, except for some deviation between 125 and 175 ns. Compared to the control system, the RMSDs of the two systems were highly stable during the 200 ns MD simulation. The control system revealed unstable behavior between 60 and 125 ns, but overall a stable RMSD was observed for all the systems. The average RMSD of the ZINC05524764, ZINC05828661, and control systems was found to be 2 Å, 2.1 Å, and 2.5 Å, respectively. Figure 5 displays the RMSD plots for all of the complex systems. The ligand RMSD also showed limited fluctuation, indicating that once bound, the ligand remains consistently positioned within the binding site of the KRAS G12D protein.
The minimal deviation of the ligand RMSD from the complex RMSD suggests a synergistic stability between the ligand and the protein, an indication of a stable complex that is less likely to dissociate under physiological conditions. This result suggests that ZINC05524764 has the potential to act as an inhibitor of the KRAS G12D protein. Figure S1 shows the ligand RMSD plots, while Figure S2 shows the complex systems before and after MD simulation.
RMSF Analysis
The root mean square fluctuation (RMSF) allowed for a more thorough examination of the protein's backbone flexibility. The RMSF plots for all the complexes are shown in Figure 6. The loop regions had the highest variations, with an overall comparable tendency in the fluctuations. Residues Asp30, Glu31, Tyr32, Asp33, Pro34, Thr35, Ile36, Ser65, Ala66, Met67, Arg68, and Asp69 revealed high fluctuations during MD simulation. Conversely, a decrease in flexibility was noted in the region where the inhibitor was bound, indicating the impact of inhibitor interactions with the active site residues of KRAS G12D.
Structure Compactness Analysis
We calculated the structural compactness in a dynamic setting to characterize the binding and unbinding processes that took place during the simulation. The radius of gyration (Rg), as a function of time, was used to evaluate the structural compactness. The Rg of ZINC05828661 showed a similar trend to that of the RMSD, as shown in Figure 7. For a short period in the first 50 ns, the complex reported low Rg values. After that, the Rg value increased to 15.9 Å, then decreased again, and continued to follow a consistent pattern up to 200 ns. The average Rg value for the ZINC05524764 system (green) was found to be 15.2-15.6 Å, the Rg value for the ZINC05828661 system was observed to be 15.1-15.8 Å, and for the control system the Rg value was found to be 15.3-15.7 Å. Figure 7 displays the Rg plots for all the systems.
DCCM Analysis
The dynamic cross-correlation map (DCCM), which computes the correlations among receptor residues, was employed to obtain information regarding correlated motions during the MD simulation. Inter-residue correlation analysis, or DCCM, was carried out to elucidate the correlations among the residues in the systems. Figure 8 displays the DCCM results for all of the complex systems. The motions of the amino acids appeared positively correlated, indicating that they were strongly associated with correlated motions, whereas amino acids moving in opposite or reverse directions demonstrated anti-correlated motion. The anti-parallel and parallel directions represent the negative and positive correlations between the residues of the systems, respectively [24]. The dark brown regions in the plots show negative correlations, while the green regions indicate positive correlations between the residues. More positive correlations were observed in the ZINC05524764 and ZINC05828661 systems than in the control system.
Binding Energy Calculation
Calculating the binding free energy with the MM-GBSA method is a frequently used technique to measure the binding strength of small molecules and to confirm ligand binding and docking stability. In terms of computation, the previously reported MM-GBSA approach is less expensive and, compared to rational scoring functions, is one of the most accurate techniques [25]. We used this method to determine the binding free energy for the ZINC05524764, ZINC05828661, and control complexes. The total binding free energy (TBFE) estimate for the ZINC05524764 complex was −39 kcal/mol, for the ZINC05828661 complex the binding energy was calculated as −35 kcal/mol, and for the control system the binding free energy was found to be −30 kcal/mol. Table 5 shows the results of the MM-GBSA analysis.
Discussion
Pancreatic ductal adenocarcinoma (PDAC) is considered the second most common cause of cancer death in the US. For metastatic PDAC, the 5-year survival rate is less than 5% due to the restricted therapeutic choices available [26,27]. Human malignancies are often linked to activating missense mutations of RAS genes (KRAS, HRAS, and NRAS), which are crucial in oncogenic transformation [28]. Due to the absence of binding sites appropriate for small-molecule inhibitors, oncogenic RAS proteins have long been thought to be undruggable [29]. Most KRAS mutations occur at codon 12, where G12D mutations account for the largest frequency (35%), followed by G12V (20-30%), G12R (10-20%), Q61 (~5%), G12C (1-2%), and other uncommon mutations [30]. The FDA has approved sotorasib (AMG510) and adagrasib (MRTX849) for the treatment of advanced lung cancer with a KRAS G12C mutation. Additionally, MRTX1133, a KRAS G12D inhibitor, has demonstrated encouraging preclinical development outcomes and is presently undergoing a phase 1 clinical trial. To date, no FDA-approved drugs are available for the KRAS G12D mutant, so there is a need to develop a new and effective drug for KRAS G12D [31]. The pharmaceutical industry has benefited greatly from the deployment of machine learning algorithms in drug discovery. Predicting bioactivity and drug-protein interactions and enhancing the bioactivity and safety profile of compounds are among the common uses of these algorithms [32]. For the identification of new inhibitors against different drug targets, ML-based virtual screening is widely used [33,34].
In this study, different machine learning models were used to identify new promising hits from the ZINC database against the KRAS G12D cancer drug target. Among the 82 hits predicted as active, a total of 10 hits were found to be toxic. These toxic compounds were removed, and the remaining hits were docked into the active site of KRAS G12D. The molecular docking analysis confirmed six compounds as the most promising inhibitors for KRAS G12D. A previous study identified three promising inhibitors, Quercetin, Psoralidin, and Resveratrol, for the KRAS G12D mutant; these inhibitors formed hydrogen bonds with the Gly10, Thr58, Asp69, Tyr96, Gln61, Glu62, Tyr64, Met72, and Arg68 active site residues of KRAS G12D [35]. Our promising inhibitors also made interactions with the active site residues, including Gly10, Asp12, Lys16, Thr58, Glu62, Gly60, Arg68, Met72, and His95. Following molecular docking, a 200 ns MD simulation was carried out for the top two complexes along with the standard complex to determine their stability. The identified hits revealed stable binding to the protein, confirmed by the RMSD analysis, demonstrating that these compounds are appropriate inhibitors of KRAS G12D. The stability of the ZINC05524764 complex in comparison to all other complexes was further corroborated by the RoG analysis, which is consistent with the RMSD profile. Furthermore, MM-GBSA analysis revealed the stronger binding energy of the two complexes compared to the control complex.
Dataset Preparation
A total of 2526 compounds reported against the KRAS G12D mutant were extracted from the Binding DB. MRTX1133 was considered as the standard compound, with an IC50 value of 6.1 nM [21]. Based on the IC50 value, the compounds were divided into active and inactive categories. For 526 compounds, the IC50 value was not reported, so these were removed. A total of 1578 compounds were categorized as inactive because their IC50 value exceeded that of the reference compound, while 422 compounds were considered active because their IC50 value was equal to or less than that of the reference compound. In the target class, the active and inactive compounds were indicated by 1 and 0, respectively.
Features Extraction and Dataset Cleaning
The experimentally validated compounds against the KRAS G12D mutant were obtained from Binding DB. Then, descriptors were calculated in MOE (2019) software [36]. A total of 206 features were computed by the MOE software. All the zero and null (NA) values were removed from the dataset using Python v3.9. The dataset cleaning was carried out using the pandas library of Python [37]. Then, the dataset was split into training (70%) and test (30%) subsets. The train_test_split function was used to divide the dataset into training and test sets [38].
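A minimal sketch of this cleaning and splitting step, assuming a descriptor table with a 0/1 "label" column (file and column names are illustrative):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("kras_descriptors.csv")     # assumed MOE descriptor export
y = data["label"]
X = data.drop(columns=["label"])

X = X.dropna(axis=1)                           # remove descriptors containing NA values
X = X.loc[:, (X != 0).any(axis=0)]             # remove all-zero descriptor columns

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y
)
```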
Feature Selection
To develop a computationally inexpensive model and to improve model performance, optimal feature selection was carried out. We employed SVM-RFE to choose the optimal features for model development [39].
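One possible SVM-RFE implementation with scikit-learn is sketched below; the number of retained features is an assumption for illustration, as the paper does not report it.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# A linear-kernel SVM exposes coef_, which RFE uses to rank and prune features
svm_linear = SVC(kernel="linear")
selector = RFE(estimator=svm_linear, n_features_to_select=50, step=1)
selector.fit(X_train, y_train)

X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)
selected_features = X_train.columns[selector.support_]
```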
ML Models
Using open-source Python v3.9, three models (k-nearest neighbors, support vector machine, and random forest) were developed. All the models were developed using the scikit-learn package of Python v3.9 [23].
K-Nearest Neighbor (kNN)
The k-nearest neighbors (KNN) algorithm, also known as a lazy algorithm, can solve classification as well as regression problems. First, the distance between the nearest neighbors in the data is measured [40]. The parameter n_neighbors can be used to select the number of nearest neighbors [41]. The optimal k value was found to be 11.
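A minimal sketch of the KNN classifier with the reported optimum (k = 11), assuming the training split defined earlier:

```python
from sklearn.neighbors import KNeighborsClassifier

knn_model = KNeighborsClassifier(n_neighbors=11)   # optimal k reported above
knn_model.fit(X_train, y_train)
print("Test-set accuracy:", knn_model.score(X_test, y_test))
```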
Support Vector Machine (SVM)
The SVM model can tackle both regression and classification problems [42]. Apart from binary classification, SVM can address multiclass classification problems. SVM classifies data with the help of an optimal hyperplane. Various kernel functions (linear, polynomial, sigmoid, and radial basis functions) are used to convert low-dimensional data into a higher dimensional space [43]. The grid search method with the RBF kernel was employed to determine the optimal values for the C and γ parameters. Finally, C = 1000 and γ = 1 were found to be the ideal values.
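The grid-search step could look like the following sketch; the grid values are assumptions chosen to bracket the reported optimum (C = 1000, γ = 1).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {"C": [1, 10, 100, 1000], "gamma": [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
grid.fit(X_train, y_train)

svm_model = grid.best_estimator_
print("Best parameters:", grid.best_params_)   # expected: {'C': 1000, 'gamma': 1}
```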
Random Forest (RF)
The RF algorithm was first presented by Breiman [44]. It is a favored model for data classification and regression tasks. A bootstrap sample is used to train each random forest tree, and predictions are made by the majority vote of the trees. Max_features and n_estimators, which indicates the number of trees built before predictions, were the two main hyperparameters optimized during model development [41]. Between 100 and 500 estimators were evaluated during model generation.
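A sketch of the random forest tuning described above; the exact grid is an assumption based on the 100-500 estimator range mentioned.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rf_grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": [100, 200, 300, 400, 500], "max_features": ["sqrt", "log2"]},
    cv=5,
)
rf_grid.fit(X_train, y_train)
rf_model = rf_grid.best_estimator_
print("Best parameters:", rf_grid.best_params_)
```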
Models Validation and Performance Evaluation
In the case of unbalanced datasets, accuracy alone is not sufficient to assess the strength of a classification model [45]. For binary classification problems, the MCC parameter can be used to evaluate the performance of a model. The receiver operating characteristic (ROC) curve is another useful tool for evaluating model performance; it visually represents the true positive rate against the false positive rate [46]. For ML model evaluation, several parameters were calculated, including accuracy, F1 score, MCC score, and ROC curves. We employed five-fold cross-validation in this study.
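The metrics above could be obtained with five-fold cross-validation as sketched below; the three fitted models and the full descriptor matrix (X, y) are assumed from the previous sections.

```python
from sklearn.model_selection import cross_validate
from sklearn.metrics import make_scorer, matthews_corrcoef

scoring = {
    "accuracy": "accuracy",
    "f1": "f1",
    "roc_auc": "roc_auc",
    "mcc": make_scorer(matthews_corrcoef),
}

for name, model in {"KNN": knn_model, "SVM": svm_model, "RF": rf_model}.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    means = {k.replace("test_", ""): round(v.mean(), 3)
             for k, v in cv.items() if k.startswith("test_")}
    print(name, means)
```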
Virtual Screening and Molecular Docking Study
The model that revealed high accuracy and MCC values was used for the virtual screening of the 20,000 drug-like compounds of the ZINC database [47]. The hits obtained from the RF model were docked against the KRAS G12D mutant. The 3D structure of the KRAS G12D mutant (PDB ID: 7RPZ) was retrieved from the PDB database. The water molecules were removed from the structure before docking [48]. Energy minimization was carried out using an RMS gradient of 0.05. The protein preparation module of the MOE version 2016 (Chemical Computing Group, Montreal, QC, Canada) software was used to prepare the structure. The KRAS structure was 3D protonated. Ten conformations were generated in total for each hit [49]. Finally, for docking analysis, the PyMOL version 2.5 (Schrödinger, New York, NY, USA) and MOE version 2016 (Chemical Computing Group, Montreal, QC, Canada) software were used.
MD Simulation
Using the AMBER version 2022 package [24], MD simulation was carried out for 200 ns to examine the stability and dynamics of the best complexes. For the protein and ligand molecules, the FF19SB force field and the general AMBER force field (GAFF), respectively, were used [50]. Na+ ions were added to neutralize any charge, and energy minimization was accomplished in two phases (using the steepest descent and conjugate gradient methods) [51]. The heating and equilibration processes were then carried out, followed by a production run of 200 ns for each complex. The particle mesh Ewald algorithm was applied to the long-range electrostatic interactions using a cutoff distance of 10.0 Å [52]. Lastly, the simulations were conducted using PMEMD.cuda, and the trajectories were analyzed using the CPPTRAJ package [53].
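The trajectory analysis was performed with CPPTRAJ; as an illustrative alternative (not the authors' workflow), the sketch below computes backbone RMSD, per-residue RMSF, and the radius of gyration with MDAnalysis, assuming the AMBER topology and trajectory file names shown and omitting trajectory alignment for brevity.

```python
import MDAnalysis as mda
from MDAnalysis.analysis import rms

# File names are assumptions for illustration
u = mda.Universe("complex.prmtop", "production.nc")

# Backbone RMSD relative to the first frame
rmsd = rms.RMSD(u, select="backbone").run()

# Per-residue C-alpha RMSF (alignment to an average structure omitted for brevity)
calphas = u.select_atoms("name CA")
rmsf = rms.RMSF(calphas).run()

# Radius of gyration along the trajectory
rg = [u.select_atoms("protein").radius_of_gyration() for ts in u.trajectory]
```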
Binding Free Energy Calculations
The assessment of the potency of small-molecule binding by calculating the binding free energy (BFE) using the MM/GBSA approach is one of the most frequently utilized methods in various research studies [54]. We employed the MMPBSA.py script to calculate the binding free energy of the protein-ligand complexes, taking into account 2500 snapshots. To calculate the BFE, the following formula was applied:
∆G bind = ∆G complex − (∆G receptor + ∆G ligand)
where ∆G receptor, ∆G ligand, and ∆G complex represent the binding energies of the receptor, the ligand, and the complex, respectively, while the overall binding energy is represented by ∆G bind [25].
Conclusions
The KRAS G12D variant is responsible for pancreatic cancer and is a target for cancer drug development initiatives. In this study, different computational approaches were used to identify new promising inhibitors for the KRAS G12D mutant. Among the 72 active hits against KRAS G12D, two compounds, ZINC05524764 and ZINC05828661, were found to be the most promising for the KRAS G12D mutant. Compared to the standard compound MRTX1133, our reported compounds revealed high stability during the 200 ns MD simulation. Our identified hits have the potential to inhibit the KRAS G12D mutant and can help combat cancer. This study provides hope for the development of new drugs to treat cancers caused by the KRAS G12D mutation and sets the stage for continued innovation in the field of drug discovery. It is further recommended to evaluate the inhibitory potential of these compounds through in vitro and in vivo approaches.
Figure 1 .
Figure 1. The chemical space and diversity distribution of the dataset. The scatter plot indicates the average results from the cross-validation. The molecular weight and logP are shown on the X and Y axes, respectively.
Figure 2 .
Figure 2. The ROC-AUC curve, developed in Python v3.9, shows the TP rate against the FP rate on cross-validation.
Figure 3 .
Figure 3. Three-dimensional interactions of (A) ZINC05524764, (B) ZINC05828661, and (C) the control compound with the KRAS G12D mutant. The blue dotted lines indicate hydrogen bonds, the red dotted line indicates the ionic bond, and the pink dotted line indicates the arene-cation bond, while ligands are shown as green sticks.
Figure 4 .
Figure 4. Superposition of the co-crystallized and docked conformations of the ligand. The magenta color represents the native co-crystallized ligand and the cyan color is the docked ligand.
Figure 5 .
Figure 5. RMSD plot for the ZINC05524764 (green), ZINC05828661 (purple), and control (red) systems. Time in ns is shown on the X-axis and the RMSD value of each system is shown on the Y-axis.
Figure 6 .
Figure 6. RMSF plot for the ZINC05524764 (green), ZINC05828661 (purple), and control (red) systems. The number of residues is displayed on the X-axis and the RMSF value of each system is present on the Y-axis.
Figure 7 .
Figure 7. RoG plot for the ZINC05524764 (green), ZINC05828661 (purple), and control (red) systems. The number of frames and the RoG value are presented on the X and Y axes, respectively.
Figure 8 .
Figure 8. DCCM plot for the (A) ZINC05524764, (B) ZINC05828661, and (C) control systems. The X and Y axes show the number of residues.
Supplementary Materials: Figure S1: RMSD plot for the ligands ZINC05524764 (green), ZINC05828661 (purple), and control (red) systems. Time in ns is shown on the X-axis and the RMSD value of each ligand is shown on the Y-axis; Figure S2: (A-C) indicate the ZINC05524764, ZINC05828661, and control complex systems before MD simulation, while (D-F) indicate the ZINC05524764, ZINC05828661, and control systems after MD simulation.
Author Contributions: Conceptualization, A.A. (Amar Ajmal), M.D. and M.Z.; methodology, A.A. (Arif Ali) and M.D.; software, S.Z. and A.A. (Amar Ajmal); validation, M.N., M.D. and A.A. (Arif Ali); formal analysis, S.Z. and M.D.; investigation, M.N. and A.A. (Amar Ajmal); resources, D.W.; data curation, A.A. (Arif Ali) and M.D.; writing-original draft preparation, M.Z. and A.A. (Amar Ajmal); writing-review and editing, D.W. and C.H.; visualization, K.F.A., M.Z., M.E.A.Z. and A.A. (Arif Ali); supervision, D.W.; project administration, D.W.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.
Funding: Dong-Qing Wei is supported by grants from the National Science Foundation of China (Grant Nos. 32070662, 61832019, 32030063), the Intergovernmental International Scientific and Technological Innovation and Cooperation Program of the National Key R&D Program (2023YFE0199200), and the Joint Research Funds for Medical and Engineering and Scientific Research at Shanghai Jiao Tong University (YG2021ZD02).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data are contained within the article.
Table 1 .
Performance evaluation of machine-learning models.
Table 2 .
Docking score and interactions of the most potent compounds of ZINC database.
Table 3 .
Drug-likeness of the compounds.
Table 4 .
Two-dimensional structures and toxicity analysis of the most promising compounds.
Table 5 .
MMGBSA analysis indicating the binding energy of all the complexes.
The Resilience of Sharia and Conventional Banks in Indonesia during the Covid-19 Pandemic Crisis
This study aims to analyze the effect of Covid-19 on banking resilience in Indonesian Islamic and conventional banks. Using panel regression with robust standard errors on 38 Islamic and conventional banks going public in Indonesia, covering the period before and during the Covid-19 pandemic, the study shows that the Covid-19 crisis has a significant effect on all bank financial performance indicators but not on bank risk indicators. Using the independent t-test with the assumption of unequal variance and Welch correction on six panels of criteria, this study finds that Islamic banks are more resilient than conventional banks. Significant differences in the capital adequacy ratio were found in the whole period and before the Covid-19 pandemic (24,085 and 29,181), which is higher for Islamic banks in both types of periods.
Introduction
Unlike previous financial crises caused by financial system indiscipline and bubbles, the crisis since 2020, which is still ongoing, was caused by the Covid-19 pandemic, which has impacted various sectors, especially the economy. Covid-19 reduced world GDP by -2.2 as of May 2020. The financial sector experienced fluctuations, as indicated by foreign capital outflows and fluctuating exchange rates. This condition triggers an increase in credit risk, motivating policymakers around the world to take extraordinary steps to provide assistance to affected borrowers (World Bank, 2020). Although banking regulators are trying to control the situation by continuing to revise policies, the OECD (2021) noted that the percentage of NPLs (Non-Performing Loans) in 2020 increased from the previous year in several regions: by around 75% in North America, 40% in Europe, 80% in the Asia Pacific, and 42% in EMEs. Studies also show that Covid-19 threatens to trigger a worldwide liquidity and solvency crisis (Adrian and Natalucci, 2020; Ari, Chen, and Ratnovski, 2020). Banks are therefore required to increase provisions (Miglionico, 2019) and control their risks (Abu Hussain & Al-Ajmi, 2012; Ben Selma Mokni et al., 2014; Khan & Ahmed, 2001) so as not to worsen their balance sheets.
In Indonesia, Covid-19 has caused a decline in macro and microeconomic indicators, including recessions, current account deficits, and exchange rate volatility. Macro conditions contributed to the slowdown in the banking sector. Some of the risks that threaten banks are a decrease in TPF (Third Party Funding) and an increase in NPL/NPF. The growth of ATM-debit, credit cards, and electronic money is slowing, but the volume of digital banking transactions is increasing (Warjiyo, 2020). Other impacts on the banking sector include liquidity problems, increased credit/financing risk, decreased profits, and the need for financial restructuring. The long-term impacts include reduced bank capital, a reduced ability to channel financing, and a decline in the quality of financing for Islamic rural banks, especially for SMEs (Hidayat, 2020).
In response to these problems, the Government of Indonesia issued various fiscal stimulus policies with a larger fiscal deficit and a national economic recovery program. The government increased the deficit to 6.34% of GDP (Rp 1,093.2 T), including the cost of the National Economic Recovery program of Rp 582.15 T: health Rp 87.55 T, social protection Rp 203.90 T, incentives Rp 120.61 T, MSMEs Rp 123.46 T, corporate financing Rp 44.57 T, and sectoral and regional government Rp 97.11 T. In the banking sector, regulators (Indonesia's central bank (BI/Bank of Indonesia) and Indonesia's Financial Services Authority (OJK/Otoritas Jasa Keuangan)) issued various policies to maintain banking stability and performance while taking into account the national banking intermediation function. BI has issued monetary and macroprudential stimulus, while OJK has issued a credit restructuring policy and relaxed a number of microprudential provisions. These policies include stabilizing the Rupiah exchange rate, reducing interest rates, providing liquidity funds such as SBN (National Security Assets) repos, reducing the statutory reserves, and relaxing macroprudential policies.
In the midst of these efforts, the performance of the national banking sector is still showing a slowdown. Credit has decreased from Rp. The dual banking system (Islamic and conventional) raises the question of the extent to which each type of bank is better able to absorb crisis shocks. To date, studies of the resilience of Islamic and conventional banks in facing crises continue to show inconsistent results. Some studies show that Islamic banks are more resilient to crises than conventional banks (Alqahtani et al., 2017; Chazi & Syed, 2010; Fakhfekh et al., 2016; Hashem, 2017; Khediri et al., 2015; Rajhi & Hassairi, 2013) and perform better during crises (Johnes et al., 2014; Majeed & Zainab, 2021). Other studies by Cihak & Hesse (2010), Hassan & Dridi (2011), and Beck et al. (2013) found that Islamic banking has a lower level of resilience to economic crises than conventional banking (Beck et al., 2013; Čihák & Hesse, 2010). Meanwhile, several studies have found no difference between Islamic banks and conventional banks in dealing with the crisis (Bourkhis & Nabi, 2013; Johnes et al., 2014; Olson & Zoubi, 2017).
Regarding the impact of Covid-19 on the performance of Islamic and conventional banking, recent studies have shown that the Covid-19 outbreak adversely affected financial performance across various indicators (i.e., accounting-based and market-based performance measures) and financial stability (i.e., risk indicators) in the global banking sector (Elnahass et al., 2021). However, Hartadinata & Farihah (2021) found no difference in Indonesian banking performance, based on return on assets (ROA), between before (2019) and during the Covid-19 crisis (2021). Thus, this study aims to contribute to the academic debate on the resilience of Islamic and conventional banking in facing the Covid-19 pandemic crisis. It examines the effect of the Covid-19 crisis on Indonesian banking resilience based on financial performance indicators and risk indicators and presents a comprehensive comparison of the resilience of conventional and Islamic banks in Indonesia before and during the Covid-19 crisis.
Literature Review
There are three streams of opinion on the resilience of Islamic banks compared to conventional ones. The first stream argues that Islamic banking is better at resisting shocks due to a crisis than conventional banking. Rosman et al. (2014) found that most Islamic banks in Middle Eastern and Asian countries were able to survive the 2007-2008 crisis even though their incomes decreased due to their smaller scale of operations. In addition, during the crisis period, Islamic banks in Persian Gulf countries were relatively more stable and able to improve their credit growth performance compared to conventional banks (Al-Khouri and Arouri 2016; Hasan and Dridi 2011). Similarly, the ability of Islamic banking to maintain a better capital ratio during the global financial crisis was also better than that of conventional banking (Chazi and Syed 2010). Recent findings conclude that Islamic banking is more resilient than conventional banking because the latter is more volatile than the former (Fakhfekh et al., 2016). Hashem (2017) found that conventional banks are the sector least resilient to systemic events and the one with the highest contribution to systemic risk during times of crisis.
The second stream argues that Islamic banking is more vulnerable to shocks due to a crisis than conventional banking. Beck et al. (2013) found that Islamic banking has a lower level of resilience to economic crises than conventional banking. Researching the determinant factors, Hassan and Dridi (2011) argue that Islamic banking has poor risk management. Johnes et al. (2014) concluded that both types of banks were affected by the 2008 crisis and began to recover in 2009; however, they found that although Islamic banks performed quite efficiently during the crisis, conventional banking operating systems were more efficient during crisis periods. As a result of this financial crisis, regulators had to strengthen various indicators to create a healthier financial ecosystem and provide a safety net for banks and their customers. As a strategy to protect their customers, banks are required to set aside a certain amount of funds to bear the risk of a crisis, which can be used to settle certain obligations. However, Grira et al. (2016) found that Islamic banking deposit insurance premiums did not increase during the crisis. This finding indicates that Islamic banking has lower deposit insurance premiums in times of crisis than conventional banking, which reflects the fundamental differences in the business models of the two types of banking. Therefore, regulators and policymakers need to consider the differences between Islamic and conventional banking when formulating policies for these two different types of banking.
The third stream argues that there is no difference between Islamic banks and conventional banks facing a crisis (Bourkhis & Nabi, 2013;Johnes et al., 2014;Olson & Zoubi, 2017). Both experienced decreased profitability and increased risk during the crisis since both are intermediary companies.
The crisis due to Covid-19 has a different nature from the 1998 Asian crisis and the 2008 global crisis. Covid-19 had a significant impact on the decline in people's incomes and business activities, which then spread to the financial sector. This resulted in global financial market panic, capital outflows, and exchange rate weakening. Governments of affected countries issued significant economic stimulus policies, both fiscal stimulus and monetary and financial stimulus (World Bank, 2021). However, credit risk proxied by NPL/NPF still showed an increase in 2020 (OECD, 2021). The study of Ghosh & Saima (2021) found that most banks in Bangladesh were affected by the Covid-19 pandemic crisis, as indicated by declines in capital adequacy, liquidity ratio, profitability, non-performing loans, and resilience capacity to the adverse effects of the Covid-19 pandemic. Elnahass et al. (2021) also showed that the Covid-19 pandemic crisis lowered bank profitability and bank stability indicators. Financial institutions are said to be financially stable if they meet the elements of profitability, liquidity, and solvency (Ghassan & Krichene, 2017). This study adopts the previous literature indicating that capital adequacy, liquidity ratio, and non-performing loans (NPLs) are commonly used in measuring the resilience of financial institutions (Maheswaran and Rao, 2014; Patra and Padhi, 2020) and performance volatility (Z-Score) as a stability measurement (Fu et al., 2014; Gamaginta & Rokhim, 2009; Khediri et al., 2015).
Methods
This study analyzes and compares the effect of the Covid-19 pandemic crisis on the resilience of Islamic and conventional banks in Indonesia. Bank resilience in this study is measured using two groups of indicators, namely financial performance and risk indicators. Referring to the study by Elnahass et al. (2021), bank performance is measured by profitability, which shows the bank's ability to earn a profit, proxied by Return on Assets (ROA) and Return on Equity (ROE), and by bank efficiency, measured using the operating expense ratio (OER). For the risk indicators, this study refers to Aldoseri & Worthington (2020); Ali & Puah (2019); Ghassan & Krichene (2017); Mohammad Yusuf & Reza Nurul Ichsan (2021), which employed liquidity risk, credit risk, and capital adequacy. Liquidity is measured using the Loan/Financing to Deposit Ratio (LDR or FDR). Credit quality is measured using Non-Performing Loans/Financing (NPL/NPF). To examine the available capital of a bank in relation to extended credit, this study employs the capital adequacy ratio (CAR).
To find out whether the Covid-19 pandemic crisis influences bank resilience, this study uses a panel regression model; a sketch of the assumed specification is given below. Then, to find out whether there are differences in resilience between Islamic and conventional banks in the period before and during the crisis, this study adopts an independent t-test with the assumption of unequal variance and Welch correction on the resilience indicators.
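The original equations do not survive in this text. A plausible specification, consistent with the variables described (a Covid-19 dummy plus bank-specific and macroeconomic controls), and the standard Welch t-statistic for the group comparisons, would be:

```latex
% Assumed panel specification (not necessarily the authors' exact equation);
% Y_{it} is one of the resilience indicators (ROA, ROE, OER, LDR/FDR, NPL/NPF, CAR)
Y_{it} = \beta_0 + \beta_1\,\mathrm{COVID}_t + \beta_2\,\mathrm{LEV}_{it}
       + \beta_3\,\mathrm{SIZE}_{it} + \beta_4\,\mathrm{GDP}_t + \beta_5\,\mathrm{INF}_t + \varepsilon_{it}

% Welch (unequal-variance) t-statistic used for the comparative tests:
t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}
```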
Data Sources
This study uses a balanced panel dataset of 47 banks (Islamic and conventional) listed on the Indonesia Stock Exchange. The data are quarterly, from Q3 2018 to Q2 2021, covering the period before the Covid-19 pandemic (Q3 2018-Q4 2019) and during the Covid-19 pandemic (Q1 2020-Q2 2021). The data are sourced from each bank's annual and financial reports, which can be accessed online from the OJK website. Table 2 presents a statistical description of the sample of Indonesian Islamic and conventional banks before and during the Covid-19 pandemic crisis. Table 2 indicates that almost all dependent variables have standard deviation and variance values higher than the mean, so all bank resilience indicator variables have a high level of variation and dispersion. This is due to the diversity of the bank sample, both in terms of bank size and type. Table 2 also shows that only two of the six dependent variables are normally distributed based on the skewness value; moreover, based on the kurtosis value, all variables have a value not equal to 3, so it can be concluded that the data are not normally distributed, which is common in panel data. Therefore, GLS (Generalized Least Squares) is more appropriate for this study. The correlation matrix of the independent variables shows that all variables have a correlation value of less than 0.8, so it can be said that there is no multicollinearity. Because this study uses various performance models, to ensure that multicollinearity is not a serious problem, we performed the Variance Inflation Factor (VIF) test on each model and obtained VIF values of less than 10, so it can be concluded that there is no multicollinearity problem. However, several models have heteroscedasticity problems based on the Breusch-Pagan/Cook-Weisberg test. Thus, we use a robust standard error approach for each best model estimator.
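A hedged sketch of how the VIF and Breusch-Pagan checks mentioned above could be run with statsmodels; X (the regressors) and y (one resilience indicator) are assumed pandas objects, not the authors' actual variables.

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.diagnostic import het_breuschpagan

X_const = sm.add_constant(X)

# VIF per regressor; values below 10 suggest no serious multicollinearity
vif = {col: variance_inflation_factor(X_const.values, i)
       for i, col in enumerate(X_const.columns) if col != "const"}
print("VIF:", vif)

# Breusch-Pagan test on pooled OLS residuals; a small p-value motivates robust SEs
ols = sm.OLS(y, X_const).fit()
bp_stat, bp_pvalue, _, _ = het_breuschpagan(ols.resid, X_const)
print("Breusch-Pagan p-value:", bp_pvalue)
```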
Results and Discussion
The mean value of the Islamic bank dummy variable is only 0.079, which means that the Islamic bank sample is smaller than 10% of the total observations, so this study does not use the dummy variable to distinguish Islamic and conventional banks. Instead, this study adopts the independent t-test with the assumption of unequal variance and Welch correction for further comparative analysis. However, this study does use a dummy variable to distinguish the influence of the Covid-19 pandemic on bank resilience before and during the crisis. Table 3 shows the least squares panel estimation with a robust standard error approach to test the impact of Covid-19 on the resilience of Islamic and conventional banks in Indonesia based on performance indicators (panel A) and risk indicators (panel B). This study found that Covid-19 significantly affected all bank performance indicators but not the bank risk indicators, based on the two panel groups.
An empirical test for the bank performances
In the bank performance indicator panel (A), it was found that Covid-19 had a significant effect on both the profitability indicators and the bank efficiency indicator. Table 3 shows a negative relationship between Covid-19 and the two profitability indicators, meaning that Covid-19 reduced the profitability of Islamic and conventional banks in Indonesia, based on both ROA and ROE. In contrast, Covid-19 has a positive effect on the OER, meaning that banks became less efficient during the pandemic.
Among the control variables, both bank-specific variables (leverage and bank size) were found to have a significant effect on all bank performance indicators. Leverage has a significant negative effect on profitability (ROA and ROE) but a significant positive effect on the OER. The macro variables (GDP and inflation) were found to have no significant effect on the bank performance indicators, except for GDP on ROE, where the effect is negative.
In the bank risk indicator panel (B), it was found that Covid-19 did not significantly affect any of the risk indicators of Indonesian Islamic and conventional banks. However, the relationship between the crisis and liquidity risk is positive, meaning that the ratio of financing/credit disbursed by banks to third-party funds increased with the presence of Covid-19. Two possibilities can explain this increase in liquidity risk: first, total credit/financing growth increased while third-party funds grew at a constant rate; second, total credit/financing growth was constant but the amount of third-party funds decreased during the Covid-19 pandemic. Regarding the control variables, bank-specific factors were also found to have a significant influence on all bank risk indicators. Leverage has a negative effect on liquidity risk and capital adequacy but a positive effect on credit risk. Conversely, bank size has a positive effect on liquidity risk and capital adequacy and a negative effect on credit risk, showing that the larger the bank, the smaller the credit risk it faces. Meanwhile, among the macroeconomic factors, GDP and inflation significantly affect liquidity risk and capital adequacy but not bank credit risk.
The Resilience of Islamic and Conventional Banks in Indonesia
The comparison of the resilience of Indonesian Islamic and conventional banks in this study uses an independent t-test with the assumption of unequal variance and Welch correction. Comparative tests were carried out on six panels of criteria: 1) comparing indicators of Indonesian banking resilience before and during the Covid-19 pandemic; 2) comparing resilience indicators between Islamic and conventional banks before Covid-19; 3) comparing resilience indicators between Islamic and conventional banks during Covid-19; 4) comparing Indonesian banking resilience indicators between before and during Covid-19; 5) comparing conventional bank resilience indicators between before and during Covid-19; and 6) comparing Islamic bank resilience indicators between before and during Covid-19. Table 4 shows that of the three bank financial performance indicators (ROA, ROE, and OER), only OER does not show significant differences across the six panel criteria. However, the mean OER of Islamic banks tends to be lower than that of conventional banks both before and during Covid-19. In general, the OER of banks in Indonesia increased during the Covid-19 pandemic. This comparison supports the regression findings in Table 3, which show a positive relationship between the Covid-19 crisis and OER, meaning that Covid-19 increased the OER of banks, i.e., banks became less efficient. However, when divided by type of bank, the increase only occurred in conventional banks, while Islamic banks tended to have lower OER values during the Covid-19 pandemic, so it can be assumed that Islamic banks were more efficient than conventional banks during the pandemic. On the ROA indicator, the results show differences in performance across all six panel criteria except panel F, the comparison of the performance of Indonesian Islamic banks before and during the crisis, where the p-value exceeds the 1%, 5%, and even 10% significance levels, meaning that there is no significant difference in the ROA of Indonesian Islamic banks between before and during the Covid-19 pandemic. The comparison of the ROA of Islamic and conventional banks shows a significant difference, with the mean ROA of Islamic banks higher than that of conventional banks in the entire study period, the period before the crisis, and during the crisis. When Indonesian banking performance based on ROA is compared across crisis periods, there is a significant difference, with performance before the crisis better than during the crisis. However, when divided by type of bank, conventional banks performed significantly worse during the crisis period, while Islamic banks also experienced a decrease in performance, but not a significant one.
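For reference, the unequal-variance (Welch) t-test behind these panel comparisons can be run with SciPy as sketched below; "islamic" and "conventional" are assumed arrays of one resilience indicator (e.g., ROA) for the relevant bank-quarter observations, not the authors' actual data.

```python
from scipy import stats

# equal_var=False applies the Welch correction used throughout Table 4
t_stat, p_value = stats.ttest_ind(islamic, conventional, equal_var=False)
print(f"Welch t = {t_stat:.3f}, p = {p_value:.4f}")
```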
Comparison of Resilience of Islamic and Conventional Banks in Indonesia: Financial Performance Indicators
The comparison of the performance of Islamic and conventional banks based on the ROE indicator found significant differences over the entire study period, where the ROE of Islamic banks (10,586) was higher than the ROE of conventional banks (7,064); however, when divided by crisis period, there was no significant difference in ROE between conventional and Islamic banks either before or during the crisis. In general, Table 4 also shows that there was a significant decline in ROE performance in Indonesian banking during the crisis period: conventional banks had an average ROE of 7,974 before the crisis and 6,154 during the crisis, while the average ROE of Islamic banks declined from 11,301 before the crisis to 9,870 during the crisis.
In general, the comparison of conventional and Islamic banks' performance shows that Islamic banks outperform conventional banks in terms of profitability (ROA and ROE) over the entire research period, before the crisis, and during the crisis, albeit with declining growth. However, when grouped by type of bank, the decline in the ROA and ROE performance of Islamic banks is not significant, so it can be concluded that Islamic banks are more resilient than conventional banks in facing the crisis.
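For readers who want to reproduce this type of panel comparison, the sketch below illustrates the Welch-corrected independent t-test on which the six panels are based. It is a minimal illustration rather than the authors' actual procedure: the DataFrame layout, the column names (bank_type, period), and the indicator list are hypothetical assumptions.

```python
# Minimal sketch of the Welch-corrected comparisons (illustrative only).
# Assumes a DataFrame `df` with hypothetical columns: 'bank_type' ('islamic'/'conventional'),
# 'period' ('pre_covid'/'covid'), and one column per resilience indicator.
import pandas as pd
from scipy import stats

INDICATORS = ["ROA", "ROE", "OER", "LFDR", "NPL", "CAR"]  # assumed indicator names

def welch_compare(group_a: pd.Series, group_b: pd.Series):
    """Independent t-test with unequal variances (Welch correction)."""
    t, p = stats.ttest_ind(group_a.dropna(), group_b.dropna(), equal_var=False)
    return t, p

def compare_panel(df: pd.DataFrame, split_col: str, level_a: str, level_b: str):
    """Run Welch t-tests for every indicator between two sub-samples of `df`."""
    results = {}
    for ind in INDICATORS:
        a = df.loc[df[split_col] == level_a, ind]
        b = df.loc[df[split_col] == level_b, ind]
        results[ind] = welch_compare(a, b)
    return results

# Example: Islamic vs conventional banks during the pandemic (a panel-C-style comparison)
# covid_df = df[df["period"] == "covid"]
# print(compare_panel(covid_df, "bank_type", "islamic", "conventional"))
```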
Comparison of Resilience of Islamic and Conventional Banks in Indonesia: Risk Indicators
The liquidity risk indicator (LFDR) shows that Islamic banks had a significantly higher liquidity risk (94.801) than conventional banks (87.064) during the Covid-19 pandemic crisis. In the period before the crisis, Islamic banks had a lower liquidity risk than conventional banks, but the difference was not significant, and no significant differences were found over the whole period either. When grouped by period, liquidity risk in general decreased significantly during the Covid-19 pandemic crisis. This significant decrease appears to have been driven by a significant reduction in the liquidity risk of conventional banks, because for Islamic banks liquidity risk increased during the Covid-19 pandemic, though not significantly.
Indonesia's banking credit risk generally decreased during the crisis, from 1.839 before the crisis to 1.725 during the crisis, although not significantly. Interestingly, the comparison tests show that even as the LFDR of Islamic banks rose during the Covid-19 pandemic, the risk level of Islamic bank financing decreased significantly. The results in Table 5 likewise show that credit risk decreased slightly during the Covid-19 pandemic, but not significantly. Compared by type of bank, Islamic banks in general have a higher credit risk than conventional banks (2.064 versus 1.758), but the difference is not significant. A significant difference between Islamic and conventional banks was found only in the period before the Covid-19 crisis, when conventional banks' credit risk was lower than that of Islamic banks (1.777 versus 2.563). Table 4 shows no significant difference in the capital adequacy ratio (CAR) between the periods before and during the crisis for Indonesian banks in general or for Islamic and conventional banks separately, although the ratio increased during the Covid-19 pandemic crisis, driven by an increase in Islamic banks' CAR. Significant CAR differences between Islamic and conventional banks were found for the entire study period (24.794 and 29.864) and for the period before the Covid-19 crisis (…085 and 29.181), with the ratio higher for Islamic banks in both periods. Based on the comparison above, this study finds that Islamic banks were more resilient during the Covid-19 pandemic, despite declining profitability. These results contribute to the academic debate by providing empirical evidence for the first stream of opinion, which holds that Islamic banks are more resilient than conventional banks, in that they survived the crisis despite decreasing profitability, similar to the findings of Rosman et al. (2014).
The results of this study support Al-Khouri and Arouri (2016) and Hasan and Dridi (2011), who also found that Islamic banks carry a higher level of liquidity risk than conventional banks. The higher F/LDR ratio of Islamic banks indicates that their level of financing disbursement was higher than that of conventional banks during the Covid-19 pandemic crisis. This shows that Islamic banks did not hold back their funds, providing opportunities for those facing a capital deficit and helping to accelerate economic recovery. Although liquidity risk increased, Islamic banks showed good performance with a reduced level of credit risk, meaning that the quality of Islamic bank financing improved during the Covid-19 pandemic crisis.
Although the difference in capital adequacy between Islamic and conventional banks is not significant during the Covid-19 pandemic, it can be assumed that Islamic banks have better capital adequacy than conventional banks, based on the higher capital adequacy ratio of Islamic banks across all six panel comparison criteria. These results support the findings of Chazi and Syed (2010), who found that Islamic banks were better at maintaining their capital than conventional banks during the crisis.
Regarding the impact of Covid-19 on bank resilience indicators, this study found different results from Elnahass et al. (2021), who found that the Covid-19 crisis affected various resilience indicators, both financial performance and risk indicators. This study, by contrast, found that in Indonesia Covid-19 affected only the financial performance indicators, not the risk indicators. These results suggest that the Indonesian government's efforts at economic recovery, by providing stimulus and relaxing credit requirements and policies, were able to contain bank risks.
This study also found different results from Hartadinata and Farihah (2021), who found no difference in the performance of Indonesian banking based on return on assets (ROA) between before (2019) and during (2021) the Covid-19 crisis. By using not only the ROA indicator, this study finds significant differences in both the ROA and ROE indicators. This difference could be due to
Conclusion
This study aims to provide empirical evidence on the effect of Covid-19 on indicators of banking resilience in Indonesia and to identify differences in the impact of Covid-19 on the two groups of resilience indicators. The results show that Covid-19 significantly affected all bank financial performance indicators but none of the bank risk indicators. The Covid-19 crisis was found to have a negative effect on all profitability indicators and a positive effect on the operating expense ratio, that is, a negative effect on bank efficiency.
This study also aims to compare the resilience indicators of Islamic and conventional banks on six panels of criteria. 1) The comparison of resilience indicators between Islamic and conventional banks over the entire study period shows that Islamic banks, in general, have significantly higher profitability and liquidity risk. 2) The comparison of resilience indicators between Islamic and conventional banks before the Covid-19 pandemic crisis shows that Islamic banks had a higher return on assets but also higher credit risk and capital adequacy than conventional banks. 3) The comparison of resilience indicators between Islamic and conventional banks during the Covid-19 pandemic crisis likewise shows that Islamic banks had a higher return on assets but also higher credit risk and capital adequacy than conventional banks. 4) The comparison of Indonesian banking resilience indicators between before and during Covid-19 shows that, in general, Indonesian banks experienced a decline in profitability and in liquidity risk during the Covid-19 crisis. 5) The comparison of conventional bank resilience indicators between before and during Covid-19 shows a significant decrease in all profitability indicators and in liquidity risk. 6) The comparison of Islamic bank resilience indicators between before and during Covid-19 shows no significant differences in any indicator except credit risk, which was lower during the Covid-19 crisis. It can therefore be concluded that Islamic banks are more resilient than conventional ones. | 6,374.2 | 2021-12-31T00:00:00.000 | [
"Economics"
] |
Regional pragmatic variation in French: A contrastive study of complaint realizations in Cameroon and France
This study examined and compared complaints by speakers of French in Cameroon and France. Although complaints have been extensively analyzed, to date little attention has been devoted to complaints across regional varieties of French. This study aimed to fill this knowledge and research gap by analyzing the strategies used by speakers of Cameroon French and Hexagonal French to complain in three situations. The study is situated at the intersection of variational and postcolonial pragmatics and is based on data provided by 20 Cameroonian and 19 French university students, who were asked to fill in a DCT questionnaire. The results reveal some similarities between the two French varieties regarding the use of complex complaint utterances. However, many differences were found concerning preferences for specific complaint strategies, external modifiers, internal modification devices and address terms.
Introduction
The present paper offers a study of complaint realizations by speakers of French in Cameroon and France. Although complaints have been extensively examined, very little attention has been given to their realization patterns across regional varieties of pluricentric languages in general and French in particular. The present analysis is an attempt to fill this research gap by examining complaint realizations by speakers of French in Cameroon and France. The rest of the paper is organized as follows. Section 2 presents the theoretical background of the study, focusing on the definition of complaints, a brief literature review, and variational pragmatics, the framework adopted. In section 3 the methodology is presented, more precisely the participants, the instrument, and the data coding scheme. The results are presented and discussed in section 4, followed by the conclusion and suggestions for future research in section 5.
Literature review 2.1 The communicative act of complaining
The communicative act of complaining is generally defined in the literature as an act that is realized "in the form of exerting blame on those actors who are held accountable for the complainable" (Vladiminou et al., 2021: 51). According to Chen et al. (2011: 255), complaints are voiced to express "negative feelings towards the hearer, because the hearer is believed to be responsible for a socially unacceptable event". It is noteworthy that Boxer (1996) identifies two categories of complaints: (a) direct complaints, i.e. those realized to communicate the speaker's annoyance or displeasure about an offensive act or behavior of the addressee and addressed directly to the offender; and (b) indirect complaints, i.e. those addressed to a third party or expressing discontentment with someone or something not present. The focus of the present study is on complaints that are directly addressed to the offenders. Overall, complaints have been classified as expressive acts (Searle, 1986) and, in terms of rapport management and face-work, they are considered face-threatening acts (cf. Trosborg, 1995), because they threaten the hearer's positive face wants of being admired or appreciated as well as the hearer's negative face wants of being free from imposition. Complaints are also described as conflictive acts (cf. Leech, 1983).
Research on Complaints
There has been a considerable number of studies on complaints in different languages and across different languages and language varieties. Some of the intralingual studies include investigations of American English (Boxer, 1996; Hartley, 1998), Chinese (Du, 1995), Peruvian Spanish (Garcia, 1996, 2009), and German (Gunthner, 2000). Studies adopting a cross-cultural pragmatics perspective include Chen et al. (2011), Essien Otung (2019), and Van Meeren (2016). Also available are studies from an interlanguage pragmatics perspective, such as Olshtain and Weinbach (1987, 1993), Murphy and Neu (1996), Trosborg (1995), and Kraft and Geluykens (2002, 2006). Some of the studies that adopt a variational pragmatics approach include Rinnert and Iwai (2003), Lochtman (2022), and Mulo Farenkia (2015, 2022). Since there is, to the best of the author's knowledge, no study on complaints across regional varieties of French, it would be interesting to know how speakers of different varieties of French complain in different situations. The framework adopted for such a study is variational pragmatics.
Variational pragmatics
Variational pragmatics is an emerging field in pragmatics that was conceptualized by Schneider and Barron (2008) as a discipline at the interface of pragmatics and sociolinguistics that examines intralingual pragmatic variation according to macro-social factors, such as region, gender, social class, age, and ethnic identity. Variational pragmatics differs from cross-cultural or contrastive speech act studies (variation across different languages and cultures) (Wierzbicka 2003) in that, while cross-cultural studies seem to perceive languages "as homogenous wholes from a pragmatic point of view" (Barron 2005: 520), variational pragmatics is based on the assumption that "speakers who share the same native language do not necessarily share the same culture" (Barron and Schneider 2009: 425), that region is a macro-social factor that impacts intralingual pragmatic variation, and that "pragmatic differences may occur across varieties of the same language" (Barron and Schneider 2009: 425). These differences may be observed on the formal, actional, interactive, topic, and organizational levels of analysis (for details, cf. Barron 2015, Schneider and Barron 2008; Schneider 2010).
Focusing on region as a macro-social factor of pragmatic variation, the analysis carried out in the present paper adds to studies of language use across regional varieties of pluricentric languages (cf. Clyne 1992). These studies have highlighted the relevance of analyzing regional varieties of a language in relation to their individual cultural contexts. Also noteworthy is that macro-social variation is examined on several levels of pragmatic analysis. These are (1) the formal level, with a focus on forms such as discourse markers; (2) the actional level, with a focus on speech acts and their realizations; (3) the interactional level, with a focus on longer stretches of discourse (e.g. conversational openings); (4) the topic level, with a focus on content units and their management; (5) the organizational level, concerned with turn-taking phenomena; (6) the stylistic level, focused on issues of (in)formality; (7) the prosodic level, focused on paralinguistic parameters; (8) the non-verbal level, focused e.g. on gaze, gestures, and posture; and (9) the metapragmatic level, focused on talking about communication (for details, cf. Schneider, 2021). The present study on complaints in Cameroon French and Hexagonal French is an analysis on the actional level. It should be underlined that one of the language varieties under study is a postcolonial variety of French. More precisely, it is an ex-colonial language in use in a postcolonial setting, which is characterized, like other postcolonial societies, by cultural, ethnic and linguistic diversity. This hybridity also affects the complaint realization patterns of Cameroon French speakers. According to Janney (2006: 3), "just as colonisation led to new hybrid varieties of the colonial languages of power, it also led to new, culturally and linguistically mixed, patterns of communication, and to new pragmatic strategies, in these varieties." These observations also indicate that a postcolonial pragmatic approach would be helpful to explain how aspects of the postcolonial community influence the choice of some complaint moves, external modifiers (e.g. attention getters, forms of address, interjections), address terms, etc., which are not found in the Hexagonal French examples.
Participants
Two groups of participants took part in the study. The first group consisted of 20 Cameroonian (seven male and thirteen female) students of the University of Yaoundé 1. They ranged in age between 18 and 26 years and had been speakers of French since elementary school in a multilingual context. The second group consisted of 19 French (three male and sixteen female) students at the University of Toulouse Le Mirail. They were aged between 18 and 23, and all were native speakers of French. The Cameroonian data were collected in Yaoundé in 2013, while the French data were collected in Toulouse in 2014.
Instrument
The data were elicited by means of a written Discourse Completion Task (DCT) questionnaire consisting of tasks related to the production of speech acts such as refusals, complaints, thanks, and advice-giving. The tasks designed to elicit complaints in three situations were described as follows.
a) Situation 1 [Friend]: Your friend borrowed your jacket and when s/he returns it, you discover a hole in it. You say to him/her:
b) Situation 2 [Stranger]: The man or woman sitting next to you at the cinema is making so much noise that you cannot concentrate on the movie anymore. You say to him or to her:
c) Situation 3 [Professor]: Your professor returns your exam paper. You are not happy with the final grade. You go to his office and say to him or her:
To account for the impact of social distance and power asymmetry on complaint performance, I included these three situations in the questionnaire, and the respondents in both countries were asked to imagine that they found themselves in the situations described and to write down what they would actually say to complain. The social variables built into the questionnaire were the type of horizontal relationship (social distance, D) between the speaker (complainer) and the hearer (offender) and the type of vertical relationship (equality or power asymmetry, P) between them. The relationship between the interactants in situation 1 (friend) is a close one between equals (-D, =P); in situation 2 (stranger) they do not know each other, but they have equal status as cinema customers (+D, =P); in situation 3 (professor), the recipient of the complaint has a higher power position, and the student and professor know each other as acquaintances (-D, S<P). Table 1 summarizes these scenarios.
Data analysis procedure
The participants produced 116 answers for the three questionnaire tasks: 60 complaint utterances by the Cameroonians and 56 responses by the French. These responses were analyzed according to the types (pragmatic status) and number of discursive moves used by the respondents to realize their complaints. Each response was considered a complaint turn or communicative act (cf. Trosborg, 1995) consisting of either one move (head act only), as in (1), or multiple moves, as in (2-3). In (2), the turn consists of a request for repair/behavior change (in italics and underlined; an indirect complaint) and a grounder/justification (an external modification). In (3), the complaint turn begins with a rhetorical conflictual question (in italics and underlined), which is the head act, followed by two rejection acts (external modifications). The discursive moves in the complaint utterances were segmented and, following the schemes used in previous studies (cf. Trosborg, 1995), a distinction was made between head acts, i.e. units that can be used alone to realize a complaint, and external modifications, i.e. additional moves. The next step consisted in classifying the head acts, or complaint strategies, based on their pragmatic functions/definitions. Table 2 presents the seven main strategies identified in the data. The third step consisted in classifying the types of external modification performed by the respondents of both groups. Two categories of additional moves were found: preparatory acts (pre-modifiers) and supportive acts (post-modifiers). Table 3 presents the external modifiers attested in both data sets. Finally, the use of address terms was examined. The participants used pronominal terms (e.g. tu, vous, on, nous) as well as nominal terms (e.g. kinship terms, endearment terms, and terms of respect and deference) (cf. Table 5).
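Because the analysis that follows is largely frequency-based, the per-variety distributions of coded moves can be tallied with a few lines of code. The following sketch is purely illustrative: the strategy labels and the toy data are hypothetical and do not reproduce the study's coding scheme or counts.

```python
# Illustrative tally of coded complaint strategies per variety (hypothetical data).
from collections import Counter

coded_moves = [
    ("CamF", "request_for_repair"), ("CamF", "accusation"), ("CamF", "disappointment"),
    ("HexF", "request_for_repair"), ("HexF", "disbelief"), ("HexF", "request_for_repair"),
]

def distribution(moves, variety):
    """Return the percentage share of each strategy for one variety."""
    counts = Counter(strategy for v, strategy in moves if v == variety)
    total = sum(counts.values())
    return {s: round(100 * n / total, 1) for s, n in counts.items()}

print(distribution(coded_moves, "CamF"))  # percentages per strategy, Cameroon French
print(distribution(coded_moves, "HexF"))  # percentages per strategy, Hexagonal French
```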
Results and discussion
The focus, in this section, is on the following aspects: overall distribution of discursive moves (4.1), complaint strategies (4.2), external modification (4.3), internal modification (4.4.), and use of address terms (4.5).
Overall distribution of discursive moves
Table 6 presents the distribution of discursive moves, i.e. head acts/complaint strategies and external modifications, in both French varieties. The results indicate that both groups relied mostly on head acts to construct their complaints, albeit with a difference: the Cameroonian participants used head acts somewhat more frequently than the Hexagonal French speakers did (Cameroon: 58.2% vs. France: 55.7%).
Complaint strategies
Table 7 presents a breakdown of the seven main complaint strategies attested in both data sets. The most popular strategy among the respondents from both countries is the 'request for repair' strategy. However, the French informants used requests more than their Cameroonian counterparts (France: 39.7% vs. Cameroon: 33.3%). The second most common strategy is the 'accusation' strategy, which represents 20.6% of the French examples and 19% of the Cameroonian data set. Concerning the third most frequent strategy, the 'interrogation and exclamation' strategy, Table 7 shows that the Cameroonian participants employed more tokens than their French counterparts (Cameroon: 18% vs. France: 14%). Differences also emerged regarding the use of the other complaint strategies. For instance, the 'disappointment' strategy shows a large difference between the two groups: while it accounts for 14.4% of the Cameroonian examples, it represents only 2.6% of the Hexagonal French data set. Conversely, the 'disbelief' strategy represents 11.5% of the Hexagonal French examples but only 5.4% of the Cameroonian productions. Overall, the analysis reveals that the complaint strategies were employed with different degrees of preference within each group.
External modification
Two categories of external modifiers were identified: preparatory acts (pre-modifiers) and supportive acts (post-modifiers). Their distribution in both French varieties is presented in Table 8. The participants of both groups used more preparatory than supportive acts. Overall, four different types of preparatory acts were used: attention-getters, apologies/disarmers, greetings, and explanations of purpose. They occurred differently across the two French varieties. While attention-getters were the preparatory acts preferred by the Cameroonian respondents (21 tokens of 51; 26.3%), apologies and disarmers were the preparators most frequently used by the French participants (12 occurrences of 39; 19.4%). The second most popular pre-modifier among the Cameroonians was apologies, while attention-getters were the second most common preparatory acts in the Hexagonal French examples. Greetings were employed much more by the French informants. Explanations of purpose accounted for 8.7% of the Cameroonian data set and 11.4% of the French external modifiers. Table 8 also indicates that three types of supportive acts occurred: grounders/comments, rejections, and thanks. The complaints realized by the participants of both groups were overwhelmingly supported by grounders and comments, with the French informants producing a higher share of these moves (France: 34% vs. Cameroon: 28.8%). Rejections were used only by the Cameroonians. The least frequent post-modifiers, thanks, were employed much more by the Hexagonal French speakers.
Internal modification
This section focuses on the morphological, lexical, and syntactic elements found in the complaint head acts or the external modifiers that serve to modify aspects of the complaint utterances. These elements were divided into mitigators/softeners and intensifiers/upgraders. In total, 171 internal modifiers were used, namely 95 mitigators and 76 intensifiers. The Cameroonians used 47 mitigators (54%) and 40 intensifiers (46%), while the Hexagonal French informants used 48 mitigators (57%) and 36 intensifiers (43%). Overall, the respondents of both groups used more mitigators than intensifiers, with the French having a somewhat higher percentage of softeners. Tables 9 and 10 present the distribution of the mitigators and intensifiers, respectively, in both French varieties. As can be seen in Table 9, there were eight different types of mitigators in the data: modal constructions, politeness markers, subjectivizers, understaters, avoidance strategies, consultative devices, supplication markers, and inclusion markers. Their distribution differs across the two French varieties. Politeness markers were, with 13 occurrences (27.7%), the most popular mitigators among the Cameroonians, while modal constructions were, with 12 tokens (25%), the softeners preferred by the French participants. The third most frequent mitigators were avoidance strategies, which were employed much more by the French informants. While the other mitigators also show different degrees of preference in the two groups, consultative devices occurred only in the Cameroonian examples. As can be seen in Table 10, three sub-categories of intensifiers were used by the participants of both groups: lexical intensifiers (i.e. negatively loaded adjectives, verbs, adverbials or nouns), contrast or consequence markers, and insistence markers. Lexical intensifiers were by far the most frequent upgraders in Cameroon French (70%) and Hexagonal French (75%). The second most commonly used intensification devices, contrast or consequence markers, accounted for 19.4% of the French examples and 17.5% of the Cameroonian responses. The frequency of the least employed intensifiers, insistence markers, was distinctly higher in the Cameroonian data set.
Use of address terms
Given their role in signaling existing as well as intended relationships between the interlocutors and in expressing closeness, solidarity, reverence, respect, and deference, address terms also play an important role in determining the complaint perspective as well as the interpretation of complaints in terms of rapport management. Table 11 summarizes the distribution of all the address terms in the data. Firstly, the participants of both groups used more pronominal than nominal address terms. However, the percentages of the address terms diverge across the two French varieties. In the Hexagonal French examples, the pronominal forms represent 95.6% while the nominal terms account for only 4.4%. In the Cameroonian data set, the pronominal forms represent 78.4% and the nominal terms account for 21.6%.
Conclusion
This study examined aspects of regional pragmatic variation in French, focusing on the speech act of complaining in Cameroon and France. It was found that the complaints realized by the participants from both countries in the three situations were mostly complex, generally consisting of head acts and external modifications. The quantitative analysis has revealed differences in the degrees of preference regarding the choice of complaint strategies, external modifiers, internal modification devices and address terms. Since the focus of the study has been on quantitative analyses, the next step should be to provide a detailed qualitative examination of the realization forms and situational variation of the complaint moves and additional discursive moves attested in the data. Such an analysis will shed more light on similarities and differences concerning the complaint behavior of Cameroon and Hexagonal French speakers.
Table 1 .
Summary of the DCT Complaint Scenarios
Table 2 .
Types of complaint strategies. Threat or warning: the speaker threatens to retaliate or mentions negative consequences of the offense (e.g. La prochaine fois tu ne l'auras pas. (S1-CamF) 'Next time you will not get it.'; C'est la dernière fois que je te prête mes affaires. (S1-HexF) 'This is the last time I will lend you my things.'). Accusation: the speaker blames the interlocutor for committing the offense or criticizes the hearer (e.g. Tu l'as abimée. C'est fichu, ce n'est plus nécessaire de me la remettre. (S1-CamF) 'You damaged it. It's ruined. It's no longer necessary to return it to me.'). Disappointment: the speaker is disappointed with the hearer's attitude or the results of the latter's action (e.g. Franchement hein je suis déçue. Je ne ferai plus cette erreur-là! (S1-CamF) 'Honestly, I am disappointed. I won't make this mistake again.').
Table 3 .
Types of external modifiers
Type of external modifier: definition and example. Preparatory acts:
Attention getters: serve to catch the hearer's attention. The elements used include interjections, address terms and other expressions, alone or combined. Gars, c'est how ? (…) T'as vu ce que tu as fait de ma veste ? (S1-CamF) 'Guy, what is this? (…) Have you seen what you did to my jacket?' Ma soeur et ma très chère amie, tu n'es pas gentille. Regarde ce grand trou. (S1-CamF) 'My sister and my very dear friend, you are not nice. Look at this hole.' Oh putain ! Non mais (…) T'as vu ce que tu as fait ? (S1-HexF) 'Oh damn! No but (…) Have you seen what you did?' Merde, il y a un trou maintenant ! Tu abuses là. (S1-HexF) 'Shit, there is a hole now! You are going too far.' Monsieur le professeur, je suis très déçu par rapport à ma note finale. 'Professor, I am very disappointed with my final grade.' Regarde ce grand trou. Pourquoi tu m'as fait ça ? Tu sais bien que c'est ma 'dernière valise' et que je n'ai plus d'argent. (S1-CamF) 'Look at this big hole. Why did you do this to me? You know that it is my best outfit and that I don't have money anymore.' Pourriez-vous faire moins de bruit ? Je ne peux pas suivre le film. (S3-HexF) 'Could you make less noise? I can't watch the movie.'
Table 5 .
Types of address terms
Type of address term: definition and example. Pronominal address terms:
Tu: to show familiarity or directness, e.g. Tu te fous de moi ? 'Are you kidding me?' Vous: to index social distance, deference, respect, e.g. Pouvez-vous recorriger ma copie ? 'Can you regrade my paper?' On: to defocalize reference to the speaker or hearer, e.g. On veut suivre le film. 'We want to watch the movie.' Nous: to defocalize reference to the speaker, e.g. Vous n'êtes pas chez vous, permettez-nous de suivre. 'You are not at your place. Allow us to watch the movie.'
Table 6 .
The overall distribution of discursive moves in Cameroon and Hexagonal French
Table 7 .
Distribution of complaint strategies in Cameroon French and Hexagonal French
Table 8 .
Distribution of external modifications in Cameroon French and Hexagonal French
Table 9 .
Distribution of mitigators in Cameroon French and Hexagonal French
Table 10 .
Distribution of intensifiers in Cameroon French and Hexagonal French

Table 11 also indicates that, of the different pronominal address terms used, vous was the most frequently used, with a preference rate of 58.6% in Hexagonal French and 39.8% in Cameroon French, followed by tu. The use of nominal address terms shows diverging frequencies across the French varieties. While nominal terms represent 21.6% of the Cameroonian examples, only 4.4% of them appear in the French data set.
Table 11 .
Distribution of address terms in Cameroon French and Hexagonal French | 4,779.6 | 2024-02-20T00:00:00.000 | [
"Linguistics"
] |
Three novel oligosaccharides synthesized using Thermoanaerobacter brockii kojibiose phosphorylase
Background Recently synthesized novel oligosaccharides have been produced primarily by hydrolases and glycosyltransferases, while phosphorylases have been the subject of only a few studies. Phosphorylases are nevertheless expected to give good results via their reversible reaction. The purpose of this study was to synthesize further novel oligosaccharides using kojibiose phosphorylase. Results Three novel oligosaccharides were synthesized by glucosyl transfer from β-D-glucose 1-phosphate (β-D-G1P) to xylosylfructoside [O-α-D-xylopyranosyl-(1→2)-β-D-fructofuranoside] using Thermoanaerobacter brockii kojibiose phosphorylase. These oligosaccharides were isolated using carbon-Celite column chromatography and preparative high performance liquid chromatography. Gas-liquid chromatography analysis of methyl derivatives, MALDI-TOF MS and NMR measurements were used for structural characterisation. The 1H and 13C NMR signals of each saccharide were assigned using 2D-NMR, including COSY (correlated spectroscopy), HSQC (heteronuclear single quantum coherence), CH2-selected E-HSQC (CH2-selected editing HSQC), HSQC-TOCSY (HSQC-total correlation spectroscopy) and HMBC (heteronuclear multiple bond correlation). Conclusion The structures of the three synthesized saccharides were determined, and these oligosaccharides have been identified as O-α-D-glucopyranosyl-(1→2)-O-α-D-xylopyranosyl-(1→2)-β-D-fructofuranoside (saccharide 1), O-α-D-glucopyranosyl-(1→2)-O-α-D-glucopyranosyl-(1→2)-O-α-D-xylopyranosyl-(1→2)-β-D-fructofuranoside (saccharide 2) and O-α-D-glucopyranosyl-(1→[2-O-α-D-glucopyranosyl-1]2→2)-O-α-D-xylopyranosyl-(1→2)-β-D-fructofuranoside (saccharide 3).
Background
The synthesis of oligosaccharides with various functions has been actively performed for some time. Such oligosaccharides are primarily synthesized by hydrolases and glycosyltransferases. Although phosphorylases have been the subject of few studies, they are expected to give good results via their reversible reaction.
In this paper we report that, when xylosylfructoside is used as a substrate, Thermoanaerobacter brockii kojibiose phosphorylase catalyzes glucosyl transfer from β-D-G1P to position 2 of the xylose residue. However, transfer to other saccharides lacking glucose residues does not occur, with the exception of sorbose.
We also carried out structural analysis of the synthesized oligosaccharides using NMR spectroscopy. NMR-based structural analysis of saccharides with a high degree of polymerization is now becoming a standard technique. However, it is difficult to assign the proton (1H) and carbon (13C) signals in oligosaccharides whose residues are similar, particularly in oligosaccharides with numerous methylene (CH2) groups, such as fructooligosaccharides and kojioligosaccharides.
The purpose of this study is to synthesize three novel oligosaccharides using kojibiose phosphorylase and to carry out the full assignment of their 1H and 13C signals using 2D-NMR techniques such as COSY, HSQC, CH2-selected E-HSQC, HSQC-TOCSY and HMBC.
Results and discussion
Oligosaccharide synthesis and identification
Saccharides 1, 2 and 3 were synthesized from xylosylfructoside [O-α-D-xylopyranosyl-(1→2)-β-D-fructofuranoside] and β-D-G1P using kojibiose phosphorylase. The HPAEC chart of saccharides 1, 2 and 3 synthesized after 54 h of reaction is shown in Figure 1. From the reaction mixture, saccharides 1, 2 and 3 were isolated by successive chromatographic procedures using carbon-Celite and ODS columns, and finally obtained as white powders. Saccharides 1, 2 and 3 were shown to be homogeneous using HPAEC (tR relative to sucrose = 1.00: 1.51, 1.80 and 2.35, respectively). The physical values of the three saccharides were measured; the value for saccharide 1 was +79.5, while no values were obtained for saccharides 2 and 3 because the quantities obtained were too small. The degrees of polymerization were confirmed as being 3, 4 and 5 for saccharides 1, 2 and 3, respectively. The methanolysate of permethylated saccharide 1 exhibited peaks corresponding to methyl 2,3,4,6-tetra-O-methyl-D-glucoside (tR, 1.03 and 1.47), methyl 1,3,4,6-tetra-O-methyl-D-fructoside (tR, 1.03 and 1.27) and methyl 3,4-di-O-methyl-D-xyloside (tR, 1.47). Furthermore, the methanolysates of permethylated saccharides 2 and 3 exhibited four peaks corresponding to the same methyl glycosides as those observed for saccharide 1, and two peaks corresponding to methyl 3,4,6-tri-O-methyl-D-glucoside (tR, 2.86 and 3.45). The areas of the peaks corresponding to the methyl glycosides obtained from the methanolysate of permethylated saccharide 3 were larger than those of permethylated saccharide 2. The peak area of methyl 3,4,6-tri-O-methyl-D-glucoside, indicating a 1→2 glucosyl linkage in each saccharide, increased with each additional glucose unit.
Strategy for NMR analysis
The glucose, xylose and fructose residues of the synthesized saccharides are represented as Glc: glucopyranosyl, Glc': glucopyranosyl', Glc": glucopyranosyl", Xyl: xylopyranosyl and Fru: fructofuranosyl, as shown in Figure 2. The proton and carbon positions in a particular residue are represented by, for example, H-1-Glc and C-1-Xyl, respectively.
The basic strategy for the assignment of the 1H and 13C NMR signals of each compound is as follows. Each saccharide molecule contains one anomeric position for the xylose residue; one, two or three anomeric positions for the glucose residues; and one quaternary carbon for the fructose residue. The starting point for the assignment is the anomeric protons. The two-dimensional (2D) 1H-1H COSY spectrum [9,10] reveals the connectivities of the protons in each spin system, from H-1 to H-5 in the xylose residue and from H-1 to H-6 in the glucose residues. The results from HSQC [11] and HSQC-TOCSY [12] then enable the assignment of the carbons attached to those protons and the separation of the carbon and proton signals of each saccharide. The HMBC spectra are also used to confirm the intra-residual assignments in the region where the protons cannot be assigned by 1H-1H COSY owing to signal overlapping [13,14]. With regard to the fructose units, one particular quaternary carbon (C-2) should be correlated with the hydroxymethine protons at H-3 or H-4 by HMBC.
The proton network of Fru from H-3 to H-6 can be assigned by 1H-1H COSY and the attached carbons by HSQC. The residual H-1 and C-1 can be correlated with C-2 or C-3 and H-3, respectively, by HMBC.
The inter-residual HMBC correlation peaks between H-1-Xyl and C-2-Fru, and between H-1-Glc and C-2-Xyl, determined the attachment of Fru to C-1 of Xyl and of Xyl to C-1 of Glc. The linkages of one, two and three glucose residues were identified from the inter-residual H-1-Glc'/C-2-Glc and H-1-Glc"/C-2-Glc' correlation peaks in the HMBC spectrum.
Finally, the coupling patterns of overlapped 1 H signals were analyzed using SPT (selective population transfer) experiment.
The HSQC spectrum was unhelpful in the assignment of these methylene signals, since the chemical shift differences between the C-6 carbons of interest are very small. Therefore, resolution enhancement of the 2D HSQC method was achieved by CH2-selected editing (E)-HSQC, in which the 13C spectral width was limited to the range of the methylene carbons. This provided sufficient 13C resolution to separate each CH2 signal of H-5/C-5 in the xylose residue and H-6/C-6 in the glucose and fructose residues, thus leading to the unambiguous assignment of the methylene protons' chemical shifts.
Xylosylfructoside
The 1D 1H and 13C NMR spectra of xylosylfructoside showed anomeric proton (δH 5.34 ppm, d, 3.7 Hz) and carbon (δC 93.17 ppm) signals for the xylopyranosyl residue. Since the methylene proton signals of H-6-Fru and H-5-Xyl buried in the overlapped region were not separated by conventional HSQC (cf. Figure 3a), the CH2-selected E-HSQC spectrum of xylosylfructoside was used (cf. Figure 3b). In this spectrum, each correlation peak was well separated, and thus the chemical shift of each methylene proton was determined. Finally, the coupling patterns of the overlapped 1H signals were analyzed by SPT experiment.
Saccharide 1
The assignment of saccharide 1 was carried out in the same manner. The inter-residual HMBC correlations between H-1-Xyl and C-2-Fru, and between H-1-Glc and C-2-Xyl (δC 76.39 ppm), indicated that C-2 of Fru and that of Xyl are attached to C-1-Xyl and C-1-Glc, respectively.
Unambiguous assignments of the methylene protons H-6-Fru and H-5-Xyl buried in the overlapped region were achieved using the CH2-selected E-HSQC spectrum of saccharide 1 (cf. Figure 4a). In this spectrum each correlation peak was well separated, and thus the chemical shift of each methylene proton was determined. Finally, the coupling patterns of the overlapped 1H signals were analyzed by SPT experiment.
Saccharide 3
The assignment of saccharide 3 was begun from the anomeric proton of the xylopyranosyl residue (δH 5.53 ppm, d, 3.2 Hz) and those of the three glucopyranosyl residues (δH 5.32 ppm, d, 3.4 Hz; δH 5.28 ppm, d, 3.2 Hz; and δH 5.10 ppm, d, 3.8 Hz), and carried out in the same manner as for saccharide 2. The inter-residual HMBC correlation between H-1-Glc" (δH 5.10 ppm) and C-2-Glc' (δC 77.54 ppm) determined the connectivity between the two sugar moieties.
Conclusion
By using kojibiose phosphorylase, three novel oligosaccharides have been synthesized. These saccharides were purified and their structures were fully determined.
Matrix assisted laser desorption ionization time of flight mass spectrometry (MALDI-TOF-MS)
MALDI-TOF-MS spectra were obtained on a Shimadzu-Kratos mass spectrometer (KOMPACT Probe) using a 2,4-dihydroxybenzoic acid matrix.
Methylation and methanolysis
Methylation of the oligosaccharides was carried out according to the Hakomori method [18]. The permethylated saccharides were methanolysed by heating in 1.5% methanolic hydrochloric acid at 96°C for 10 or 180 min. The reaction mixture was treated with Amberlite IRA-410 (OH -) to remove hydrochloric acid, and dried under vacuum. The resulting methanolysate was dissolved in a small volume of methanol and analyzed using gas chromatography.
1D normal 1H and 13C spectra
1D 1H and 13C spectra were recorded with 32 K data points for a spectral width of 8064 Hz at 500.133 MHz (1H) and with 64 K data points for a spectral width of 33333 Hz at 125.772 MHz (13C). Exponential multiplication (LB = 0.2 for 1H and 1.0 for 13C) was performed prior to Fourier transformation. For the 13C spectrum, complete proton decoupling was achieved by attenuation of the high-power output of the decoupler (π/2 pulse duration 100 ms). For the SPT spectrum, selective irradiation was performed by attenuation of the low power of the decoupler (115 dB) for 2 s.
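As a rough cross-check on these acquisition settings, the digital resolution and acquisition time implied by a given spectral width and number of data points can be estimated as below. This is a back-of-the-envelope sketch assuming complex (quadrature) data points and no zero-filling; actual vendor conventions may differ slightly.

```python
# Rough estimates of digital resolution and acquisition time from the quoted 1D parameters.
# Assumes complex (quadrature) data points; vendor-specific conventions may differ.
def digital_resolution(spectral_width_hz: float, n_points: int) -> float:
    return spectral_width_hz / n_points           # Hz per point

def acquisition_time(spectral_width_hz: float, n_points: int) -> float:
    return n_points / spectral_width_hz           # seconds

print(digital_resolution(8064, 32 * 1024), acquisition_time(8064, 32 * 1024))    # ~0.25 Hz, ~4.1 s (1H)
print(digital_resolution(33333, 64 * 1024), acquisition_time(33333, 64 * 1024))  # ~0.51 Hz, ~2.0 s (13C)
```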
1H-1H COSY spectra
The 1H-1H COSY spectra were measured with a relaxation delay of 1.9 s, covering a spectral width of 2762 Hz in both dimensions, with 1024 data points using one, one, one and four transients for each of the 256 t1 increments [9,10]. Zero-filling to 512 for F1 and multiplication with a sine-bell window in both dimensions were performed prior to 2D Fourier transformation. The total measuring times for xylosylfructoside and saccharides 1, 2 and 3 were ca. 9, 9, 9 and 36 min, respectively.
HSQC spectra
The gradient-selected HSQC spectra, covering spectral widths of 2762 Hz in F2 and 6666 Hz in F1, were measured with 1024 data points using four transients for each of the 512 t1 increments [11]. The relaxation and evolution delays [1/4 1J(C,H)] were set to 2.0 s and 1.9 ms, respectively. Zero-filling to 1024 for F1 and multiplication with a squared sine-bell window shifted by π/2 for F2 and π/6 for F1 were performed prior to 2D Fourier transformation. The total measuring times for xylosylfructoside and saccharides 1, 2 and 3 were ca. 50, 78, 78 and 78 min, respectively.
HSQC-TOCSY spectra
The phase-sensitive HSQC-TOCSY spectra were obtained with a sequence including inversion of direct resonance (IDR). The TOCSY mixing of 264 ms was composed of MLEV-17 composite pulses flanked by trim pulses (2.5 ms) derived from the high-power output of the 1H pulse attenuated by 14 dB (π/2 pulse duration, 40 μs). The delays for relaxation and evolution [1/4 1J(C,H)] were set to 2.1 s and 1.8 ms, respectively. The HSQC-TOCSY spectra of saccharides 1, 2 and 3 were measured with the sequence covering a spectral width of 2762 Hz in F2 and 6667, 6849 and 6667 Hz in F1, with 1024 data points using 32, 32 and 64 transients for each of the 512, 460 and 512 t1 increments. Zero-filling to 1024 for F1 and multiplication with sine-bell windows shifted by π/2 for F2 and by π/2, π/6 and π/2 for F1 were performed prior to 2D Fourier transformation. The total measuring times for saccharides 1, 2 and 3 were ca. 12, 11 and 23 h, respectively.
HMBC spectra
The HMBC spectra were obtained using the CT-HMBC 2 pulse sequence proposed by Furihata and Seto [14]. The HMBC spectra of xylosylfructoside and saccharides 1, 2 and 3 were measured with the sequence covering a spectral width of 2762 Hz in F2 and 6667 Hz in F1, with 1024 data points using 4, 32, 48 and 64 transients for each of the 512 t1 increments. Zero-filling to 1024 for F1, multiplication with a Lorentz-Gaussian window (GB = 0.5, LB = -2) in F2 and multiplication with a sine-bell window shifted by π/8 in F1 were performed prior to 2D Fourier transformation. The delays for relaxation, the low-pass J-filter [1/2 1J(C,H)] and evolution [1/2 LRJ(C,H)] were set to 1.7 s, 3.5 ms and 80 ms, respectively. The total measuring times for xylosylfructoside and saccharides 1, 2 and 3 were ca. 1, 9, 13 and 18 h, respectively.
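Since the low-pass J-filter and evolution delays are set to 1/(2J), they can be translated back into the coupling constants they are tuned to. The short sketch below only illustrates this arithmetic under that standard assumption; it is not taken from the original acquisition setup.

```python
# Convert HMBC delays (set to 1/(2J)) back into the implied coupling constants.
def implied_coupling_hz(delay_seconds: float) -> float:
    return 1.0 / (2.0 * delay_seconds)

print(implied_coupling_hz(0.0035))  # low-pass J-filter delay of 3.5 ms -> ~143 Hz (one-bond 1J(C,H))
print(implied_coupling_hz(0.080))   # evolution delay of 80 ms -> ~6.3 Hz (long-range LRJ(C,H))
```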
CH2-selected E-HSQC
The CH 2 -selected E-HSQC spectra of xylosylfructoside, saccharides 1, 2 and 3 were measured by the sequence covering a spectral width of 2762 Hz in F 2 and 352, 353, 323 and 353 Hz in F 1 . For the CH 2 -selected E-HSQC spec- | 3,054.8 | 2007-06-28T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Studies on Quark Confinement in a Proton on the Basis of Interaction Potential
This study describes quark confinement in terms of linear interaction potentials. The three quarks in a proton are assumed to revolve around a common center and have masses determined as if they were Dirac particles. Under these assumptions, the magnetic moment of a proton is derived via Maxwell’s equations. Moreover, the rotational motion of the quarks can be thought of as an electrical current that induces a magnetic field. Thus, the scalar product of the magnetic moment and the magnetic field describes a linear interaction potential between the quarks that gives the mass of the proton. The proton mass as predicted by this physical model is in good agreement with experimental observations and requires no numerical calculations. Thus, the simple physical model suggests a solution for the problem of quark confinement by modeling the strong force as an interaction potential.
Introduction
Hadrons and quarks have been studied for about half a century. A proton is composed of three quarks [1][2][3][4][5][6][7]. The proton mass has been measured with good accuracy, but cannot be predicted theoretically using analytic solutions. The required numerical calculations are well established in the field of quantum chromodynamics (QCD), which can calculate the mass of hadrons using a supercomputer [8][9][10][11][12]. Moreover, the QCD model is described by SU(3) and Yang-Mills fields [13][14][15]. However, the calculation time needed is extremely long even with supercomputers, and the color charges employed by QCD are controversial because the color charge of the gluons that mediate the strong interaction has not yet been measured. Considering these facts, we claim that QCD is not inadequate but insufficient, and that the quark-confinement problem, i.e., the question of why the proton persists, remains unsolved, because the numerical calculations involved in lattice QCD amount to a so-called ideal experiment, which does not tell us why a single quark is never measured and why the proton is not permanently destroyed. Lattice QCD calculations of hadron masses have been conducted based on experimental observations of Ω particles; that is, the parameters were determined from known experimental values such that the numerical calculations predict the masses of other unobserved hadrons [16]. This result is meaningful to some extent, but the calculations cannot stand independently of numerical modeling. Thus, these results also fail to explain the basic physical picture of quark confinement.
In this paper, we use simpler and more basic calculations than the QCD approach with color charges to calculate the mass of a proton. This physical model yields the linear interaction potential between quarks, the magnetic moment, and the mass of a proton without numerical calculations. The paper assumes that the three quarks have rotational motion and that the quark mass is determined from the energy gap. The quarks' rotational motion is thought of as an electrical current that induces a magnetic field, which explains the proton's magnetic moment. The scalar product of the magnetic moment and the magnetic field produces the linear interaction potential between quarks that predicts the mass of a proton.
When comparing these analytic results with experimental observations, the analytic model corresponds well with the observed mass of a proton.The derived linear potential provides a mechanism that explains how a larger relative distance between quarks causes a stronger attractive force between them.This fact implies that quarks cannot be measured in isolation.Although several articles have modeled quark confinement numerically [12,13], the present paper employs analytic expressions without numerical methods, thus giving a purely physical model for quark confinement.
Theory
As shown in Fig. 1, circular coordinates with radius r are considered. Each apex of the inscribed equilateral triangle carries a quark whose electric charge is (1/3)q. In this model, each quark revolves around the origin O, maintaining a constant relative distance between the three quarks. Fig. 1 therefore gives a simple model of a proton.
The rotational motion of the three quarks, regarded as charged particles, can be modeled as the current

I = 3 · (q/3)/T = qω/2π, (1)

where T, q, and ω denote the period of the rotational motion, the charge of an electron, and the angular frequency of the motion, respectively. A magnetic moment is generally defined as

P = IS, (2)

where P, I, and S denote the magnetic moment, the current, and the area, respectively. Since the quarks trace a circular path of radius r, the area S is simply S = πr².

The following assumptions are introduced:
1) The quark mass is determined by the energy gap of the vacuum (i.e., the quark is a Dirac particle).
2) The rotational angular frequency is determined by the energy gap from the quarks' vacuum.
3) This Dirac particle is described by an energy balance in which the left-hand side gives the zero-point energy of the quark in a vacuum and the right-hand side is the sum of the three quarks' rest energies, with mq being the mass of a single quark.
Note that, in a vacuum, the corresponding relation holds for an electron, where me denotes the mass of the electron.
Considering the above, the magnetic moment of a proton, p, follows from eqs. (1) and (2). The magnetic field H induced by the current I is obtained from Maxwell's equations, and, in combination with the magnetic moment p, the scalar product of the two yields the linear interaction potential between the quarks. This potential, in turn, yields the proton mass mp.
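Because the original equations are not reproduced above, the following fragment sketches one way in which a potential linear in r can arise from the quantities just defined. The prefactors, the field estimate H ~ I/(2r) at the loop centre, and the use of the vacuum permeability μ0 in the scalar product are illustrative assumptions, not expressions taken from the paper.

```latex
% Illustrative sketch (assumptions noted in the text above), not the paper's own equations.
\begin{align*}
  I &= 3\cdot\frac{q/3}{T} = \frac{q\omega}{2\pi}, \qquad S = \pi r^{2}, \qquad p = IS = \frac{q\omega r^{2}}{2},\\
  H &\sim \frac{I}{2r} = \frac{q\omega}{4\pi r}, \qquad U \sim \mu_{0}\, p H = \frac{\mu_{0}\, q^{2}\omega^{2}}{8\pi}\, r .
\end{align*}
% Under these assumptions the interaction potential U grows linearly with the separation scale r.
```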
Result and Discussion
This section compares experimental observations with the value of the proton mass calculated using the model described above.
The physical parameters listed in Table 1 were used for the calculations. The results are listed in Table 2. Experimental data are averaged from two sources [15,17]. As shown, the theory of this paper predicts the mass of a proton well. Typically, strong-force interactions are calculated using lattice QCD. However, such QCD calculations require numerical methods, so the resulting values do not by themselves provide a physical picture of why a single quark is never measured and why the proton is not permanently destroyed. Here, we started with a simple physical model of three quarks forming an equilateral triangle, and used basic physical relations to derive the magnetic moment and mass of the proton. In the process, we derived the linear interaction potential. Thus, the interaction potential increases along with the relative distance between quarks. This relation explains why protons persist and why single quarks have not been observed. Thus, the mass of the proton is explained by the model above.
The physical model above conceives of quarks revolving around a common center and having mass correlated with the energy gap in a vacuum. The rotational motion explains the magnetic moment of the proton. Moreover, because the quarks' rotational motion induces a current and a magnetic field, the scalar product of the magnetic field and the magnetic moment produces a linear interaction potential between the quarks that explains the mass of the proton. Note that the derived magnetic moment is combined with the magnetic field via the scalar product, as indicated in eq. (8). This implies that the single magnetic moment p is independent of the generally measured magnetic moment, i.e., the magnetic moment p cannot be observed in isolation. Therefore, the generally measured magnetic moment must be considered separately from that of the present paper.
Conclusion
This paper has formulated the problem of quark confinement in a novel manner.
Three quarks revolve around a common center and have mass. This rotational motion produces a current and a magnetic field, from which the magnetic moment of the proton can be calculated.
Moreover, following Maxwell's laws, the scalar product of the magnetic moment and the magnetic field results in a linear potential between the quarks that yields the mass of the proton. This derived linear potential between the quarks explains the problem of quark confinement, since the potential increases along with the distance between quarks. This relation between the quark spacing and the linear potential explains why quarks are never observed in isolation, and why the proton coheres.
In future work, the same methods will be used to predict the mass of a meson. | 1,801.8 | 2019-02-02T00:00:00.000 | [
"Physics"
] |
Vis Medicatrix naturae: does nature "minister to the mind"?
The healing power of nature, vis medicatrix naturae, has traditionally been defined as an internal healing response designed to restore health. Almost a century ago, famed biologist Sir John Arthur Thomson provided an additional interpretation of the word nature within the context of vis medicatrix, defining it instead as the natural, non-built external environment. He maintained that the healing power of nature is also that associated with mindful contact with the animate and inanimate natural portions of the outdoor environment. A century on, excessive screen-based media consumption, so-called screen time, may be a driving force in masking awareness of the potential benefits of nature. With global environmental concerns, rapid urban expansion, and mental health disorders at crisis levels, diminished nature contact may not be without consequence to the health of the individual and the planet itself. In the context of emerging research, we will re-examine Sir J. Arthur Thomson's contention that the healing power of the nature-based environment - green space, forests and parks in particular - extends into the realm of mental health and vitality.
The healing power of nature - vis medicatrix naturae - is an ancient medical principle that includes reference to the innate ability of the body to heal itself. While acknowledging that vis medicatrix naturae can be influenced by anything from physician bedside manner to belief in placebo, medical scholars have typically defined it as an internal healing response designed to repair and rebuild [2]. Consider, for example, the healing of a fracture; "naturae" in the contemporary context is what we now recognize as the production of immune chemicals and the initiation of enzymatic reactions, a proper balance of pro- and anti-inflammatory cytokines, osteoblast and osteoclast activity etc., in the remodeling of bones. However, a century ago biologist Sir John Arthur Thomson provided an additional interpretation of the word nature within the context of vis medicatrix, defining it instead as the natural, non-built physical environment in which humans live their lives - i.e. that the healing power of nature is also that associated with mindful immersion in and contact with the animate and inanimate natural portions of our external world [1]. In our review we will re-examine the contentions of Sir J. Arthur Thomson, and in particular his suggestion that the healing power of the nature-based environment extends into the realm of mental health.
Screen time and displacement of green time
Some researchers have suggested that a significant portion of modern children and adults may be experiencing suboptimal levels of exposure to green space and time spent in natural settings [3]. Typically, nature or natural settings are broadly defined in this context as outdoor areas rich in vegetation and non-human animal life, including forests, urban parks, waterside areas and relatively untouched wilderness regions. Several recent studies have suggested that the expansion of screen-based entertainment (televisions, computers, video games, Smartphones) has contributed, at least in part, to the downward trend in nature-based recreation over the last two decades. While this is obviously difficult to prove, the declining visits to national and state parks, historic sites and wilderness areas - down in many locations between 25-50% - have certainly occurred in tandem with massive increases in non-academic and non-occupational daily screen time and screen-based media consumption [4][5][6][7][8]. Most of the research on the health-related consequences of excessive screen time has focused on implications related to obesity, cognitive performance, anxiety and depression. Recent prospective studies are now reporting that the accumulation of screen time is a risk factor for, and not a mere consequence of, mental health disorders [9][10][11]. Perceptions of cyber-based information overload are predictive of more frequent and more severe health problems [12]. Screen time, however, cannot be viewed in isolation; it can be a surrogate marker for lack of physical activity or for less time spent in meaningful social interaction with a pronounced health payoff. Those experiencing high levels of cyber-based information overload are much less likely to engage in contemplative activities [12]. It remains unknown to what extent the loss of green time - time spent outdoors in nature, or at the very least, a view to nature - is itself a risk factor for mental health disorders and cognitive difficulties. Put another way, an unanswered question is whether or not the loss of contact with nature, and its displacement by the screen, removes a layer of psychological resiliency.

Nature, stress physiology and brain imaging

Over 30 years have passed since scientist Roger S. Ulrich first began to examine some of the psycho-physiological changes induced by vegetation-rich scenes of nature (relative to urban scenes). His initial studies found that, immediately subsequent to a required one-hour course examination, undergraduate students who viewed photographic scenes of nature (vs. urban scenes) had a rapid improvement in positive mental outlook and a decline in reported fear and arousal [13]. These subjective reports were subsequently corroborated in separate work involving objective markers of stress physiology, including electromyography (EMG), skin conductance (SC) and pulse transit time (PTT) [14]. Specifically, after viewing a stressful video on workplace accidents ("It didn't have to happen" - a video previously confirmed to elevate a stress response), participants subsequently viewed images of nature scenes or of an urban built environment for 10 minutes. The physiological markers (EMG, SC, PTT) showed a consistent pattern of more rapid and more complete recovery from stress/arousal upon exposure to vegetation-rich nature scenes.
Ulrich was the first to use electroencephalograph (EEG) apparatus to evaluate brain wave activity while otherwise healthy adults viewed photographic scenes of nature vs. urban built scenes [15]. The results confirmed higher alpha wave activity when viewing scenes of vegetation-rich (and aesthetically unspectacular) nature, indicative of a state of relaxed wakefulness and lowered anxiety.
The original work of Ulrich has been validated to some extent by various international investigators. Nature scenes - streams, valleys, river terraces, orchards, forests, farms and bodies of water - have been shown to positively influence the same objective markers of EEG (higher alpha wave activity), EMG (decreased muscle tension) and skin conductance (decreased autonomic arousal) [16][17][18][19]. Lower levels of the stress hormone cortisol have been reported in adults subsequent to performing the same mental activities in a garden setting vs. an indoor classroom [20]. In a Japanese investigation, researchers examined physiological stress markers in 119 adults who transplanted non-flowering plants from one pot to another. Compared to adults who simply filled pots with soil, the individuals working hands-on with the plants had higher EEG alpha wave activity, decreased muscular tension as measured by EMG, as well as subjective reductions in fatigue [21].
A variety of separate Japanese studies under the umbrella terms "shinrin-yoku" (which translates as "taking in the forest air", or "forest bathing") and "forest medicine" have shown that spending time walking or contemplating in a forest setting is associated with lower cortisol, lower blood pressure and pulse rate, and increased heart rate variability. Collectively these studies have involved over 1000 subjects in studies centered in some two dozen different forests, and in many cases there was a control or cross-over group engaged in the same activity (physical activity and/or contemplation) within an urban built environment [22,23]. Evaluation with near-infrared time-resolved spectroscopy (NITRS), a technique which measures oxygen use in the brain via the reflection of near-infrared light from red blood cells, reveals that 20 minutes of contemplation in a forest setting (vs. urban control) altered cerebral blood flow in a manner indicative of a state of relaxation [24]. The shift in stress physiology, lowered stress hormones in particular, has also been proposed to explain the improvement in immune functioning of subjects involved in various forest medicine studies. Compared to time spent in urban built environments, visits to forest settings have been shown to improve natural killer cell activity and the production of anti-cancer proteins [25].
The consistent preference for natural scenic views over urban streets is well documented; indeed, the preference for nature scenes is apparent even when images are presented for a mere 1/100th of a second [26,27]. To add to the weight of the EEG and NITRS studies, Korean researchers recently utilized functional magnetic resonance imaging (fMRI) to investigate brain activation patterns while viewing nature [28][29][30]. In a series of studies, the researchers evaluated brain activity while participants viewed a set of either rural (mountains, forests) or urban built scenes for 2 minutes each, followed by a 30 second rest. To minimize the influence of intrusive thoughts and a wandering mind, a new photo was shown every 1.5 seconds. The urban scenes produced pronounced activity in the amygdala, a region that typically shows enhanced activity in response to aversive stimuli. Hyperactivity of this area has been linked to impulsivity and anxiety, while shifts from negative affect to positive mental outlook are associated with a decrease in amygdala activity. Moreover, chronic stress and the stress hormone cortisol itself may promote amygdala activity, and a consistently overactive amygdala may enhance memorization of negative vs. neutral stimuli, short-circuiting the areas that would otherwise dampen amygdala activity [31,32].
Recently it was reported that otherwise healthy urbanites (vs. rural residents) have enhanced activity in the amygdala while performing challenging cognitive tasks under conditions of perceived social stress [33].
In contrast, the Korean fMRI studies showed that nature scenes produced pronounced activity in the anterior cingulate and the insula - increased activity in both of these areas is associated with heightened empathy and altruistic motivation [34]. This is an interesting finding when considering that the mere visualization of being in a natural setting (vs. an urban center) is associated with experimental altruism in young adults [35]. Meanwhile, greater activity in the anterior cingulate is associated with emotional stability and a positive mental outlook [36], and activity in the insula is associated with feelings of love; for example, when individuals are shown photographs of loved ones while in an MRI scanner, insula activity has been shown to increase [37]. Urban scenes did not influence activity in the anterior cingulate or the insula.
Nature and cognition
In recent years there has been some scientific support for the notion that viewing scenes of nature or engaging in activities within natural settings is favorable to cognitive restoration [38][39][40][41]. Objective measurements using an Eye Position Detector System (EPDS) have shown that eye fixations, indicative of the amount of attention engaged when viewing a scene, are significantly lower while viewing highly fascinating nature scenes vs. built urban settings [42]. This suggests that natural settings are less likely to place a burden on the inhibitory pathways in the brain - i.e. in nature there is less energy expended in efforts to filter out non-pertinent stimuli. For example, after researchers induced mental fatigue in subjects via a cognitively demanding task, half of the group then viewed images that had been independently reported to be high in cognitive restoration potential (forests, water views, mountains, oceanside, etc.). The other half of the mentally fatigued group viewed low restoration pictures such as city streets with multiple cars, industrial zones, housing developments, factories, etc. After viewing some 25 photographs of either high or low restorative potential for about 6 minutes, the subjects repeated the same cognitively demanding task for another 5 minutes. Upon repeat testing, the group who viewed the restorative nature scenes had enhanced accuracy in target detection, faster reaction times, and more correct responses to the challenge than those viewing urban scenes [43]. The same research group has recently replicated the findings of improved reaction time (after induced mental fatigue and re-challenge with a cognitive test) after viewing nature scenes rated high in fascination. They also reported overall better memory recall after viewing scenes of nature vs. built urban scenes [44]. In separate work, researchers induced mental fatigue with a series of challenging brain games designed to place demands on sustained attention. Immediately following a 35 minute period of intense cognitive effort, the subjects either took a walk (for a little less than an hour) in a vegetation-rich park or on city streets. After the walk, the cognitive tests were repeated, and the results showed a significant performance difference in favor of those who had spent time in nature. An important finding of the study was that the cognitive restoration occurred without changes in mood state per se in these otherwise healthy adults [45]. In other words, we cannot assume the cognitive gains provided by nature are simply an artifact of a more positive outlook. Recently, Korean researchers set up an experiment to evaluate the cognitive effects of a walk through a pine forest vs. downtown streets. In a cross-over design, the subjects completed cognitive and mood tests before and after a 50 minute urban or forest walk. The results showed the expected elevations in mood among the forest vs. built urban walkers; however, they also showed that only after the forest walks did participants show significant improvements in post-walk cognition [46]. Furthermore, a European study combining aerial photography and standardized cognitive assessments showed that children (aged 4-6) attending schools where play areas had more trees, shrubs and hilly terrain were least likely to present with behaviors of inattention [47]. This finding was independent of the socio-economic status of the children.
In a study involving 101 public high schools in Michigan, classroom and primary cafeteria views were scaled for the degree and types of nature (i.e. how much green and the type of green - trees, shrubs, cut grass, athletic fields, etc.). Even after controlling for socio-economic factors, class size, age of the school facilities and other factors, the results showed that classroom and cafeteria views to green vegetation were significant factors in academic performance on standardized tests. Moreover, views to trees and shrubs were associated with higher graduation rates and future plans to attend 4-year university programs. Unlike trees and shrubs, a view to a well-kept lawn was not associated with academic performance [48].
Based on the successful results using photographic images of nature, it might seem safe to presume that a virtual nature view (a wall-mounted plasma TV displaying a window view to nature) might also afford cognitive benefit. Researchers examined this question in a study involving 90 young adults; subjects completed a series of complex cognitive tasks for 30 minutes at a workstation that was either close to an actual window view of a nature scene, close to a wall-mounted high-definition flat screen TV of similar size to the window displaying the same nature view, or, in the third group, simply facing a blank wall. Lighting level was kept constant for each group. Each participant remained at the workstation for a 5 minute waiting period before and after the cognitive tasks, during which they could gaze freely. The actual window view held the participants' attention longer than did the same view depicted on the plasma screen, and physiological markers of stress showed the greatest recovery in the group who viewed the actual nature scene outside the window vs. either the plasma TV or the blank wall. The plasma TV was better than a blank wall, but not as good as a view of nature impeded only by a thin pane of glass [49].
Researchers from the USA first reported almost a decade ago that green outdoor activities may be associated with symptom reduction in ADHD (attention-deficit hyperactivity disorder) vs. the same activities conducted in built environments. Initial parental surveys suggested that the greenness of play areas was associated with milder symptoms of attention deficit, and that windowless indoor play areas were associated with more severe symptoms [50]. Following up with a larger study, the researchers used data from 452 parents of children formally diagnosed with ADHD and examined the setting of some 50 different activities (from reading to playing sports) to determine if there were differences in attention. Regardless of age, the presence or absence of hyperactivity in the child, economic status, geographic location within the USA, and rural or urban residence, green outdoor activities were associated with symptom reduction [51]. More recently, investigators have performed cognitive testing of attention in children with ADHD after time spent in natural vs. built environments. In a European study, researchers conducted a test of concentration after children had engaged in a period of light to moderate physical activity in a natural wooded area or a built town area. The results showed that performance on concentration tasks was higher in the wooded environment [52]. In a separate study, children with diagnosed ADD completed a series of challenging puzzles designed to tax cognitive attention, after which they walked in one of three different environments for 20 minutes. One group walked through a vegetation-rich urban park, another in a downtown built area, and the third in a residential area clustered with houses. Each child was subsequently driven back to a quiet indoor setting for a series of cognitive evaluations of attention and executive functioning. The results were clearly in favor of the urban park as a means of cognitive restoration [53].
Studies have also shown that simulated drives through natural settings (forest roads) appear to be less taxing to the autonomic nervous system vs. simulated drives through urban settings [54]. In experimental research, equal levels of traffic noise were presented to adult volunteers with two different visual environments, one rich in vegetation and the other an urban built city scene. The results showed significantly less psychological distress and amplified signs of relaxation via EEG assessment when the noise was presented with views to green vegetation. In separate work involving 106 adults, researchers showed that the amount of vegetation along a highway may help mediate driver frustration. In this case the volunteers were cognitively fatigued with mental challenges, after which they proceeded on a simulated drive where the modified variable was roadside vegetation. The participants had a much higher threshold for frustration tolerance after simulated driving on roadways with more vegetation in sight. Furthermore, the researchers had the drivers work at a complex cognitive task after the differing simulated drives. Participants who drove on the high-vegetation parkways were less likely to give up on the post-drive mental challenge, working at it for a significantly longer period than those who had driven in the built areas [55].
A variety of studies have shown that indoor vegetation can also make a difference in cognitive performance. For example, researchers compared a no-vegetation room to one manipulated by the addition of four plants (two small flowering plants on a window ledge, a 1-foot tall green plant on the desk and a 4-foot tall floor plant). The participants were asked to perform memory recall and complex proof-reading exercises, and those working in the room with potted plants showed improved performance between baseline and an evaluation 10 minutes later [56]. Furthermore, Japanese researchers manipulated a small office room and reported that the presence of a 4-foot tall corn plant improved mood and performance scores among women on a task designed to evaluate creativity [57]. Recently, researchers from Australia have reported that indoor plants placed in a classroom may influence academic scores among younger students [58]. Specifically, the researchers placed just 3 plants in half of the classrooms belonging to middle-school students of 3 different Brisbane, Australia school districts. There were over 350 students involved, all of whom completed standardized academic tests prior to plant installation and again six weeks following the placement of plants in select rooms. Researchers reported significant improvements in mathematics, spelling and science among students drawn from classrooms where the plants had been placed. The results await scientific peer review and formal publication.
In vivo psychotherapy
It has been postulated that in vivo (Latin for "within the living") counseling may, in select cases, offer some advantages over traditional office-based therapy. In the past, wilderness and other natural settings have been described as helpful for group psychotherapy, particularly when incorporated into so-called camp or wilderness therapy [59,60]. While there has been little in the way of proper scientific evaluation of these broad claims of success, a recent study does suggest merit to nature-based psychotherapy. In a study involving 63 patients with moderate to severe depression, participants were assigned to once-weekly cognitive-behavioral therapy in either a hospital setting or a forest setting (arboretum), while a third group acted as a control and was treated using standard outpatient care in the community. Overall depressive symptoms were reduced most significantly in the forest group, and the odds of complete remission were relatively high - 20-30% higher than those typically observed with medication alone. Moreover, the forest therapy group had more pronounced reductions in physiological markers of stress, including lower levels of the stress hormone cortisol and improvements in heart rate variability, a marker of adequate circulatory system response to stress. The researchers conclude that the settings wherein psychotherapy is conducted are not merely 'places'; rather, they can become part of the therapy itself [61]. Although much more research is required, the results certainly lend credence to vis medicatrix naturae as interpreted by Sir J. Arthur Thomson, and they support the claims of ecopsychologists currently conducting psychotherapy in natural settings [62].
Nature, mood and mortality
Epidemiological investigations provide further support for the subjective and objective findings indicating that nature is a stress buffer of sorts [63,64]. Neighborhood greenness within urban geography is associated with individual life satisfaction and perceived satisfaction with the neighborhood itself [65]. Among over 4,500 adults, those living within a 3 km radius containing a high amount of green space (as measured by the National Land Cover Classification Database) were less likely to experience negative health impacts of stress. Among those who had experienced recent life stressors (major losses, financial problems, relationship problems, legal issues, etc.), having denser green space within a 3 km radius was associated with fewer health complaints vs. those with a low amount of green space [66]. A separate study involving over 11,000 adults from Denmark showed that those living more than 1 km away from green space (forests, parks, beaches, lakes) were 42 percent more likely to report high stress and had the worst scores on evaluations of general health, vitality, mental health and bodily pain [67]. In addition, after examining the medical records of 195 family physicians, Dutch researchers reported that the annual prevalence rates of 15 of the top 24 disease states were lowest among those with the highest amount of green space within a 1 km radius from home. A mere 10% increase in green space vs. the group average was associated with resiliency against chronic disease. Those with only 10% green space within 1 km had a 25% greater risk of depression and a 30% greater risk of anxiety disorders vs. those with the highest area of green space near the home [68].
If neighborhood greenness can positively influence mental outlook, stress physiology, and human immune system defense, it would seem reasonable to presume that neighborhood green space might be associated with lower mortality. Japanese researchers recently compared data on the percentage of forest coverage in all Prefectures with national cancer mortality rates provided by the Ministry of Health. After controlling for smoking and socioeconomic factors, there was a significant association between higher forest coverage within Prefectures and lower rates of various cancers - lung, breast, uterine, prostate, kidney and colon cancers [69]. In a study involving the residents of Shanghai, China, researchers reported that a higher proportion of neighborhood parks, gardens and green areas was associated with a reduced risk of mortality [70]. In the USA, researchers examined 5 years' worth of data on stroke mortality and found that geographic green space (as measured via satellite technology) offered significant protection, while areas low in green space were associated with a significantly higher risk of stroke mortality [71]. In a recent United Kingdom study, researchers compared a land use database for green space against national mortality records from the United Kingdom Office for National Statistics. They found the same independent association between residence in the greenest areas and lower rates of death from circulatory diseases and all-cause mortality. Since greater access to green space may simply be a surrogate marker for other health advantages (healthcare access, nutrition, lower cumulative stress levels, cortisol, etc.) in affluent 'green' neighborhoods, the researchers controlled for socio-economic status. Green space, it was reported, filled in the gap in health inequalities. Among those with low income and high levels of residential greenery, the mortality rates were similar to those of groups with higher socio-economic status. However, when low income was associated with little surrounding green space, the differences in mortality rates became clearly visible. The researchers concluded that green space was an independent variable capable of saving thousands of lives per year in lower income populations [72]. There is, of course, the possibility that many of the epidemiological findings in favor of green space as a variable in health, and mental health in particular, are simply due to green space providing opportunity for physical activity. Given the sound relationship between physical activity and mental health this would be a reasonable presumption. However, there is also evidence indicating that exercise conducted in outdoor settings or green space may be of more value to mental health, physical performance, and motivation to maintain exercise adherence [73][74][75][76][77][78][79][80].
Nature, urban growth and environmental implications
Within the next several decades the human transition from rural to city residence will accelerate at an even faster rate, with some 90 percent of North Americans and 70 percent of global residents projected to call a city their home [81]. Humans are incredibly social creatures, so it is not at all un-natural that urban centers should grow and thrive. There are, however, some alarming concerns with this inevitable trend. Research shows that cities are far from a panacea for mental health disorders; indeed, rates of depression, anxiety and schizophrenia are consistently reported to be higher among urban residents [33]. Based on the research discussed above, and assuming for a moment that it grows more robust in its scientific strength, access to urban green space may be a mental health necessity. Access to green space and other natural settings affords opportunity for connectivity to nature, and this connectivity, in turn, may provide a layer of insulation against the psychological downsides of urban living; among almost 550 urban men and women, higher scores on a connectivity-to-nature scale were associated with overall psychological well-being, vitality and meaningfulness in life [82]. These strong connections between nature connectivity and personal well-being are found broadly in the population - from private-sector executives and high-ranking government employees to university students, the positive relationship is evident [83]. Urban green space also provides opportunity for contemplation and mindfulness, and a recent study involving over 450 university students shows that mindfulness appears to act as a conduit between connectivity to nature and overall psychological well-being [84].
Some intriguing research suggests that there may be a two-way interaction between the potential mental health benefits of nature and the maintenance of biodiversity. A number of studies have shown that experience in nature, and higher connectedness to nature, fosters pro-environmental attitudes and behaviors [85][86][87][88][89][90]. On the other hand, preliminary investigations have shown that species biodiversity is a variable in the ability of urban green space to influence mental well-being. In a United Kingdom study it was reported that the mental health benefits of 15 different urban green space settings were positively associated with a greater richness of plant and bird species within these local settings [91]. Australian researchers have extended these results: even after controlling for various confounders, well-being within urban neighborhoods was associated with the species variety and abundance of local birds and the totality of vegetation cover [92]. These are important lines of research, particularly when recent evidence suggests that internet-based learning may dilute knowledge of, and protective concerns related to, local biodiversity in favor of more glamorous species residing in distant locales [93]. There may be a payoff to both personal mental well-being and environmental efforts in raising awareness of the potential psychological benefits of local green space and its biodiversity. Canadian researchers have recently reported that contact with nature can foster positive mood states, which in turn facilitate a sense of nature relatedness. The researchers evaluated the psychological effects of different walking routes taken by young adult volunteers - one through buildings and tunnels and the other outdoors through mixed green space - to specific locations in and around the campus. Walking for just 15 minutes through green space, as expected, was associated with a more positive post-walk mental outlook. However, the researchers also discovered that the university students were unable to forecast, prior to the walks, that taking the differing indoor and outdoor routes could influence mood [94]. A lack of anticipation of benefits derived from urban nature might be cause for alarm, particularly if there is indeed a legitimate displacement of nature-based contact by the omnipresent screen. Although the erosion of our connection to nature may be obscuring its perceived benefits, and research does show that young adults in university settings continue to have minimal awareness of and concern about global climate change and other environmental issues [95], there is reason for optimism - critically, the researchers also showed that walking in nature lifted mood, and mood elevation via nature exposure appears to increase relatedness to nature. The researchers refer to this as a happy path to sustainability, a cycle that can be maintained by fostering awareness that nature has the potential to influence mood [94].
Future directions
Our review, since it is neither a meta-analysis nor a systematic review, may unwittingly give the impression that nature's influence on quality of life, stress reduction, mental health and even longevity is positive and iron-clad. However, it must be acknowledged that not all studies have found benefit. For example, a recent cross-sectional study of Japanese adults (average age 52) found no association between the frequency of forest walking and the prevalence of hypertension [96]. Moreover, an examination of 49 of the largest cities in the United States did not find that green space coverage was associated with mortality from heart disease, diabetes, or lung cancer; the US cities with higher green space coverage are more sprawling and associated with greater use of motor vehicles [97]. It is also true that natural settings, forests and wilderness areas in particular, are not without risk. These settings can be the habitat of animals that require their own space, and the risk of contact with vectors carrying infectious diseases (tick-borne diseases as one example) increases in these areas [98][99][100][101]. In addition, the Japanese experience with new-growth cedar forests and their association with increasing rates of cedar pollinosis raises questions about the types of trees and plants that are most suitable for parks and urban forests [102][103][104]. Interestingly, a large prospective study has recently indicated that nature-based occupation (farm work) is associated with reduced risk of developing cedar pollinosis in Japan [105].
In short, there are many questions that researchers must address in the years ahead. Are there individual and cultural differences in preference for natural settings that can influence health outcomes? What might be an appropriate "dose" (duration and frequency) of nature contact? What are the mechanisms of action, and which groups of individuals (e.g. children, older adults, individuals living in deprived communities, or those with mental health disorders) might have the most to gain from nature contact [106]? Are certain types of activities (e.g. gardening, walking in forest settings, contemplating in an urban park) more effective than others? How do technology and "virtual nature" fit in? To what extent are human behaviors being dictated by lack of nature contact around the home [107]? How does all of this fit into conservation efforts and global environmental issues? Research-based answers to these and other questions should provide helpful insight to policy makers and planners as our global cities expand.
Conclusion
The available evidence suggests that nature does minister to the mind, and there are more than a few scientific hints suggesting that individuals may need to be made more aware of the potential psychological benefits of nature. A century ago Sir J. Arthur Thomson maintained that the millennia had shaped the far-reaching relations between humans and nature, and that these relations could not be ignored or abandoned without loss in the realm of positive mental health. While it is difficult to determine what the extent of the potential losses might be, it seems fair to suggest that the losses may be greater than currently appreciated by most physicians and mental health experts. A lack of anticipated psychological benefit from time spent in nature, as recently reported among a group of young adults on an urban campus, suggests that we have indeed, as Sir J. Arthur Thomson feared, "put ourselves beyond a very potent vis medicatrix". Given the positive relationship between nature connectedness, personal well-being, and conservation/pro-environmental attitudes, the experience of even nearby nature might also provide a more viable path towards sustainability. Ultimately, an awareness of vis medicatrix naturae in the framework of positive psychology can sidestep the dominant negative messaging associated with sustainability and biodiversity; that otherwise fear-based approach is one wherein well-intentioned individuals may be more likely to feel disempowered and throw in the environmental towel [108]. Hopefully, further research will continue to shed light on the ways in which excessive screen time and displacement of time spent in nature might interact to influence mood and cognition. In the meantime, there is enough evidence to suggest that screen time quotas, and nature as an opportunity for physical activity, contemplation and mindfulness, are worthy talking points in clinical settings.
"Medicine",
"Philosophy",
"Environmental Science"
] |
Integrability in three dimensions: Algebraic Bethe ansatz for anyonic models
We extend basic properties of two-dimensional integrable models within the Algebraic Bethe Ansatz approach to 2+1 dimensions and formulate the sufficient conditions for the commutativity of transfer matrices of different spectral parameters, in analogy with the Yang-Baxter or tetrahedron equations. The basic ingredient of our models is the R-matrix, which describes the scattering of a pair of particles over another pair of particles, i.e. quark-anti-quark (meson) scattering on another quark-anti-quark state. We show that the Kitaev model belongs to this class of models and that its R-matrix fulfills well-defined equations for integrability.
The importance of 2D integrable models [1][2][3][4][5] in modern physics is hard to overestimate. Being initially an attractive tool in mathematical physics, they became an important technique in low-dimensional condensed matter physics, capable of revealing non-perturbative aspects of many-body systems with great potential for applications. The basic constituent of 2D integrable systems is the commutativity of the evolution operators, the transfer matrices of the model at different spectral parameters. This property is equivalent to the existence of as many integrals of motion as the number of degrees of freedom of the model. It appears that the commutativity of transfer matrices can be ensured by the Yang-Baxter (YB) equations [3][4][5] for the R-matrix, and the integrability of the model is associated with the existence of a solution of the YB equations.
Since the 1980s there has been a natural desire to extend the idea of integrability to three dimensions [6], which resulted in the formulation of the so-called tetrahedron equation by Zamolodchikov [7]. The tetrahedron equations (ZTE) have been studied and several solutions have been found to date [7,8,10,13,14,16-20,22]. However, earlier solutions either contained negative Boltzmann weights or were slight deformations of models describing free particles. Only in a recent work [15] were non-negative solutions of the ZTE obtained in a vertex formulation, and these matrices can serve as Boltzmann weights for a 3D solvable model with an infinite number of discrete spins attached to the edges of the cubic lattice.
The lack of solutions of the tetrahedron equations giving rise to models with finite degrees of freedom at the sites, which one expects in any realistic, experimentally relevant situation, raises an immediate question: are there criteria sufficient for the integrability of 3D models that have finite degrees of freedom? This is the precise question we address in this letter.
Although the tetrahedron equations were initially formulated for the scattering matrix S of three infinitely long straight strings in the context of 3D integrability, they can also be regarded as weight functions for statistical models. In a Bethe Ansatz formulation of 3D models, their 2D transfer matrices of the quantum states on a plane [8,14,17] can be constructed via a three-particle R-matrix [9,14,21], which, as an operator, acts on a tensorial cube of a linear space V, i.e. $R: V \otimes V \otimes V \to V \otimes V \otimes V$.
Motivated by the desire to extend the integrability conditions in 3D to other formulations, we consider a new kind of equations with R-matrices acting on a quartic tensorial power of linear spaces V, i.e. on $V \otimes V \otimes V \otimes V$, which can be represented graphically as in Fig. 1a. An important observation in this direction is that the Kitaev model [12] can be formulated by the use of this type of R-matrix, which we identify below. Since the model has as many integrals of motion as degrees of freedom, one expects the existence of appropriate integrability equations satisfied by the R-matrix of the Kitaev model. Solutions of these integrability equations will lead to the construction of new types of 3D integrable models, which are essentially different from the Kitaev model.
The main result of this paper is the derivation of a new set of equations - termed cubic equations - that are very different from the tetrahedron equations and define criteria for integrability in 3D. We also show that the cubic equations are satisfied by the R-matrix of the Kitaev model. We believe that there are many more integrable models in 3D that can be studied within the developed approach. The R-matrix (1) can also be represented in the form displayed in Fig. 1b, where the final spaces are permuted ($V_1$ and $V_2$ with $V_3$ and $V_4$, respectively): $R_{1234} = \check{R}_{1234} P_{13} P_{24}$. Explicitly, it can be written in component form. Identifying the spaces $V_1 \otimes V_2$ and $V_3 \otimes V_4$ with the quantum spaces of quark-anti-quark pairs connected by a string, one can regard this R-matrix as a transfer matrix for a pair of scattering mesons. Within the terminology used in the algebraic Bethe Ansatz for 1+1 dimensional integrable models, this R-matrix can also be viewed as a matrix which has two quantum states and two auxiliary states.
The space of quantum states $\Phi_t = \bigotimes_{(n,m)\in L} V_{n,m}$ of the system on a plane is defined by a direct product of the linear spaces $V_{n,m}$ of quantum states at each site $(n,m)$ of the lattice $L$ (see Fig. 2a). We fix periodic boundary conditions in both directions: $V_{n,m+L} = V_{n,m}$ and $V_{n+L,m} = V_{n,m}$. The time evolution of this state is determined by the action of the operator/transfer matrix $T$: $\Phi_{t+1} = \Phi_t T$, which is a product of local evolution operators (R-matrices) as follows. First we fix a chess-like structure of squares on the lattice $L$ and associate with each of the black squares an R-matrix $\check{R}_{(n+1,m)(n+1,m+1)(n,m)(n,m+1)}$, which acts on the product of the four spaces at its sites. In this way the whole transfer matrix becomes the product given in (3), where the trace is taken over the states on the boundaries. The indices of the R-matrices in the first and second lines of this product just ensure a chess-like ordering of their action. In Fig. 2b we present this product graphically. First we identify the second pair of states $(2n-1, 2m)$, $(2n-1, 2m+1)$ (in the first row) and $(2n+1, 2m-1)$, $(2n+1, 2m)$ (in the second row) of the R-matrices with the corresponding links on the lattice. Then we rotate the box of the R-matrix by π/4 in order to ensure the correct order of their action in the product. In the same way we define the second layer of the transfer matrix, so that the two layers act in the order $T_B T_A$. Fig. 2c presents a vertical 2D cut of the two layers of the product $T_B T_A$, drawn from the side. The π/4-rotated lines mark the spaces $V_{n,m}$ attached to the sites $(n,m)$ of the lattice. Though the transfer matrix (3) is written in the $\check{R}$ formalism, it can easily be converted to a product of R-matrices. The arrangement of R-matrices in the first row (the first plane of the transfer matrix $T_B$) acts on the sites of the dark squares of the lattice, while the R-matrices in the second row (the second plane of the transfer matrix $T_A$) act on the sites of the white squares.
Being an evolution operator, the transfer matrix should be linked to time. According to the general prescription [4,5], the transfer matrix $T(u)$ is a function of the so-called spectral parameter $u$, and the linear term $H_1$ in its expansion $T(u) = \sum_r u^r H_r$ defines the Hamiltonian of the model, while the partition function is $Z = \mathrm{Tr}\, T^N$. Integrable models should have as many integrals of motion as degrees of freedom. This property may be reached by considering two planes of transfer matrices with different spectral parameters, $T(u)$ and $T(v)$, and demanding their commutativity, $[T(u), T(v)] = 0$, or equivalently demanding the commutativity of the coefficients of the expansion, $[H_r, H_s] = 0$. This means that all $H_r$, $r > 1$, are integrals of motion. In 2D integrable models the sufficient conditions for the commutativity of transfer matrices are given by the corresponding YB equations [3][4][5].
In order to obtain the analog of the YB equations which will ensure the commutativity of the transfer matrices (3), we use the so-called railway construction. Let us cut the two planes of the R-matrix product of two transfer matrices horizontally (in Fig. 2b we present the product of R-matrices for one transfer-matrix plane) into two parts and insert in between the identity which maps the two chains of sites $(2n, m)$, $m = 1 \cdots L$, and $(2n+1, m)$, $m = 1 \cdots L+1$, into themselves. The trace has to be taken by identifying the spaces 1 and $L+1$. In this expression we have introduced another set of R-matrices, called intertwiners, which will be specified below. For further convenience we distinguish the $\check{R}_{(2n+1,m)(2n+1,m+1)(2n,m)(2n,m+1)}$ matrices for even and odd values of m, marking them as $\check{R}_3$ and $\check{R}_4$ respectively. On the left side of Fig. 3 we present one half of the plane of R-matrices together with an inserted chain of $\check{R}_3 \check{R}_4$ intertwiners. The chain of intertwiners can also be written in terms of R-matrices. Now let us suggest that the product of these intertwiners with the first double chain of $\check{R}$-matrices from the product of two planes of transfer matrices is equal to the product of the same operators written in the opposite order. Namely, we demand that the relation (5) holds. Graphically this equation is depicted in Fig. 3. We move the column of intertwiners from the left to the right hand side of the column of two slices of the R-matrix product, simultaneously changing their order in the column, changing the order of the spectral parameters u and v of the slices, and demanding their equality. We can use the same type of equality and move the chain of intertwiners further to the right hand side of the next column of the two slices of the $\check{R}$-matrix product. Then, repeating this operation multiple times, one will approach the chain of inserted $\check{R}^{-1}$ intertwiners inside the trace from the other side and cancel it. As a result we obtain the product of the two transfer matrices in the reversed order of the spectral parameters u and v. Hence, the set of equations (5) ensures the commutativity of the transfer matrices.
The set of equations (5) can be simplified. Namely, it is easy to see that the equality can be reduced to a product of only two $\check{R}$-matrices, $\check{R}(u)$ and $\check{R}(v)$, and two intertwiners, $\check{R}_3$ and $\check{R}_4$. In other words, it is enough to write the equality of the product of $\check{R}$-matrices from the inside of the dotted line in Fig. 3. Graphically this equation is depicted in Fig. 4.
We see that in this equation the product of $\check{R}$-matrices acting on the space $\bigotimes_{i=1}^{9} V_i$ (for simplicity we enumerate the spaces from 1 to 9) can be written as Eq. (6). Here we have introduced a short-hand notation for the $\check{R}$-matrices, simply marking the numbers of the linear spaces of states on which they act; $\mathrm{id}_3$ and $\mathrm{id}_7$ are identity operators acting on spaces 3 and 7, respectively. Eq. (6) can also easily be written in terms of R.
This is the set of equations sufficient for the commutativity of the transfer matrices. The same set of equations is sufficient for commuting the $\check{R}$-matrices in the second column in Fig. 1. Equations (6) form an analog of the YB equations, ensuring the integrability of 3D quantum models. Since they have the form of relations between the cubes of the R-matrix picture (see Fig. 1), we call them cubic equations.
We will now show that the Kitaev model [12] can be described as a model of the prescribed type and that its R-matrix fulfills the set of cubic equations (6). The full transfer matrix of the Kitaev model is a product $T_A T_B$ of two transfer matrices of type (3), defined by the corresponding $\check{R}_A$- and $\check{R}_B$-matrices. The linear term of the expansion of $T_A T_B$ in the spectral parameter u produces the Kitaev model Hamiltonian. The integrability of the Kitaev model is trivially clear from the very beginning, since all terms in the Hamiltonian defined on the white and dark plaquettes commute with each other. The latter indicates that the number of integrals of motion of the model coincides with its number of degrees of freedom. However, it is important to point out that the standard Algebraic Bethe Ansatz approach was so far inapplicable to the Kitaev model. The reason is that the corresponding R-matrix does not fulfill the tetrahedron equations, which, as in the 2D case, would allow one to generalize the construction to new 3D integrable models similar to the Kitaev model. In this paper we have developed the appropriate 3D Algebraic Bethe Ansatz approach and show that the Kitaev model belongs to this class of integrability.
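The explicit $\check{R}_A$-, $\check{R}_B$-matrices and the resulting Hamiltonian are not reproduced above. For orientation only, and as a reconstruction on our part rather than the authors' equation, the Kitaev model [12] is conventionally written in the toric-code form, with mutually commuting star and plaquette terms built from Pauli operators (we assume the white/dark plaquettes of the text correspond to these two families of terms):

```latex
% Hedged reconstruction of the standard toric-code form of the Kitaev Hamiltonian
H \;=\; -\sum_{v} A_v \;-\; \sum_{p} B_p,
\qquad
A_v \;=\; \prod_{j \in \mathrm{star}(v)} \sigma^x_j,
\qquad
B_p \;=\; \prod_{j \in \partial p} \sigma^z_j .
```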
Namely, we will now show that the $\check{R}_A$- and $\check{R}_B$-matrices of the Kitaev model fulfill Eq. (6). The explicit form of Eq. (6), written with indices according to the definition in Fig. 1a, is Eq. (8), where $\check{R}_1(u) = \check{R}_A(u)$ and $\check{R}_2(v) = \check{R}_B(v)$. It appears that the intertwiners, with $R_A^{-1}(u) = 1 \otimes 1 \otimes 1 \otimes 1 - u\,\sigma^x \otimes \sigma^x \otimes \sigma^x \otimes \sigma^x$, fulfill the cubic equations (8) for any parameters u and v. This can be checked directly, both by a computer algebra program and analytically. The commutativity of the transfer matrices $T_A(u)$ with $T_A(v)$ and $T_B(u)$ with $T_B(v)$ is trivial in the Kitaev model.
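As an illustration of the kind of computer check mentioned above (a minimal sketch of our own, not the authors' code), the following numpy snippet verifies the mechanism behind the commuting plaquette terms: a product of σ^x operators and a product of σ^z operators commute exactly when they overlap on an even number of sites.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def pauli_string(n_sites, sites, op):
    """Operator acting as `op` on the listed sites of an n_sites chain, identity elsewhere."""
    factors = [op if i in sites else I2 for i in range(n_sites)]
    return reduce(np.kron, factors)

n = 4
star = pauli_string(n, {0, 1, 2, 3}, X)   # star-like term: sigma^x on four sites
plaq_even = pauli_string(n, {0, 1}, Z)    # plaquette-like term sharing two sites
plaq_odd = pauli_string(n, {0}, Z)        # hypothetical term sharing a single site

print(np.allclose(star @ plaq_even, plaq_even @ star))  # True: even overlap -> commute
print(np.allclose(star @ plaq_odd, plaq_odd @ star))    # False: odd overlap -> anticommute
```

On the actual lattice, star and plaquette operators always share either zero or two edges, so every pair of Hamiltonian terms commutes, consistent with the counting of integrals of motion quoted in the text.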
Summary.
We have formulated a class of three-dimensional models defined by the R-matrix of the scattering of a two-particle state on another two-particle state, i.e. a meson-meson type scattering. We derived a set of equations for these R-matrices, which are sufficient conditions for the commutativity of the transfer matrices with different spectral parameters. These equations differ from the tetrahedron equations, which also ensure the integrability of 3D models but are based on the R-matrix of three-particle scattering. Our set of equations can be reduced to tetrahedron-type equations by considering the two auxiliary spaces in the R-matrix as one (fusion) and replacing them by one thick line. We showed that the Kitaev model [12] belongs to this class of integrable models. This gives rise to the hope that other solutions of the integrability equations (6) and (8) with finite degrees of freedom at the sites may be found, which will be non-trivial extensions of the Kitaev model.
"Physics",
"Mathematics"
] |
Extending the clinical capabilities of short- and long-lived positron-emitting radionuclides through high sensitivity PET/CT
This review describes the main benefits of using long axial field of view (LAFOV) PET in clinical applications. As LAFOV PET is the latest development in PET instrumentation, many studies are ongoing that explore the potential of these systems, which are characterized by ultra-high sensitivity. This review not only provides an overview of the clinical applications of LAFOV PET published so far, but also provides insight into clinical applications that are currently under investigation. Apart from the straightforward reduction in acquisition times or administered amount of radiotracer, LAFOV PET also allows for other clinical applications that to date were mostly limited to research, e.g., dual-tracer imaging, whole-body dynamic PET imaging, omission of CT in serial PET acquisitions for repeat imaging, and studying molecular interactions between organ systems. It is expected that this generation of PET systems will significantly advance the field of nuclear medicine and molecular imaging.
Introduction
Since the 1970s, when the first PET systems were built, there has been a significant evolution in PET technology. Over the last couple of years, progress in detector technology from photomultiplier tubes to silicon-based photomultiplier (SiPM) detector elements has led to the development of commercially available digital PET/CT scanners. With the introduction of SiPM-based digital PET/CT systems, time-of-flight (TOF) resolution improved to a range of 210-400 ps and sensitivity increased up to 20 kcps/MBq [1,2]. Because of the compact size of SiPM-based detector elements, crystals of less than 4 × 4 mm in cross section could be implemented, allowing for improved spatial resolution. This increased spatial resolution, combined with higher sensitivity and improved TOF resolution, has resulted in better noise properties. These improved physical performance characteristics subsequently translated into improved image quality and a more efficient use of digital PET systems in daily clinical practice.
The latest improvement in PET system technology is the development of long axial field of view (LAFOV), or so-called "total-body", PET/CT systems. Also equipped with SiPM-based detectors, these systems surround the patient with many more detectors in the axial direction, resulting in two major improvements [3]: 1. higher detection efficiency, as more photon pairs are captured; 2. one bed position covers all relevant organs of interest simultaneously. To date, three LAFOV systems have been introduced: the uEXPLORER (United Imaging Healthcare America) [4] with a 194-cm-long axial FOV; the Siemens Biograph Vision Quadra PET/CT (Siemens Healthineers) [5] with a 106-cm-long axial FOV; and the PennPET Explorer (University of Pennsylvania) [6,7] with a 64-cm-long axial FOV (the UPENN project is a non-regulatory-approved academic research project).
This review will elaborate further on the advantages of LAFOV PET and provide an overview of novel clinical applications made possible by the use of short- and long-lived positron-emitting tracers within the context of LAFOV PET. Furthermore, the typical challenges encountered when implementing and validating a LAFOV system for clinical use will be discussed.
General advantages of LAFOV PET
The main characteristic of novel LAFOV PET/CT scanners is the possibility to cover the whole body (uEXPLORER) or the most important part of the body within an oncological setting (from skull vertex to mid-thigh), including all relevant organs (Quadra), in one single bed position. This provides four major advantages over PET/CT systems with a conventional FOV:
1. Decreased acquisition time and the possibility to implement fast or ultra-fast acquisition protocols, thereby reducing motion artifacts and the need for sedation in, e.g., children, which is particularly useful for scanning "difficult" patients, such as patients admitted to the Intensive Care Unit (ICU), severely debilitated patients, claustrophobic patients, or patients who cannot lie still due to neurological disorders or extreme pain.
2. The possibility to reduce the administered activity of radiopharmaceuticals, with a corresponding reduction in radiation exposure, which can be of invaluable importance in small children or babies, and in pregnant women.
3. The improved spatial resolution and increased sensitivity may lead to higher diagnostic accuracy, especially in those cases which led to false-negative scan results on conventional FOV scanners due to, e.g., a very low grade tumor or a chronic low-grade infection site.
4. The possibility to perform whole-body dynamic PET imaging, without the need for arterial blood sampling, and including all relevant organs in the large FOV, providing the possibility to look at all relevant organs and possible (tumor) lesions simultaneously.
For short-lived tracers in particular, obvious benefits are, e.g., an increase in the number of scans performed per produced batch/production run because of the ability to inject a lower dose, scan faster, and inject at a timepoint longer after the production run - as the very high sensitivity of LAFOV PET systems will still allow for good quality scans. As these short-lived tracers can be highly costly to produce, improving the utilization per production run in a clinical or research context may also be highly relevant from a financial perspective.
Specific benefits for short-lived radionuclides
Furthermore, an obvious benefit of LAFOV combined with ultra-short half-life tracers such as 15O-H2O is that it potentially allows for evaluating tracer uptake throughout the body before the tracer decays beyond detectability, which currently requires multiple injections of the same tracer for multiple bed positions on conventional FOV systems. Capturing tracer dynamics with a single bed position covering all relevant organs of interest is another benefit for short half-life tracers, which, for example, brings whole-body 15O-H2O perfusion measurements with a single injection within reach; this could be highly relevant in infection/inflammation, cardiovascular and oncological imaging. A practical example of LAFOV PET/CT enabling late imaging of a short-lived radiotracer, within the context of recurrent prostate cancer imaging using 68Ga-PSMA, is provided by Alberts et al., who compared late imaging (4 h post injection (p.i.)) with standard imaging (1 h p.i.), with the aim of improving lesion-to-background contrast [8]. This study showed improved TBR (tumor-to-background ratio) and SNR (signal-to-noise ratio) for late acquisitions, and suggests that late imaging might be the preferred approach on LAFOV PET/CT systems in this specific context. Also, and possibly one of the clinically most relevant benefits, LAFOV allows for combining a short-lived and a longer-lived radioisotope scan within the same scan session or on the same day - while still staying within clinically acceptable acquisition times and cumulative patient dose limits. A good example of this is a recent study by Alberts et al. combining 68Ga-PSMA with 18F-FDG in a dual-tracer same-day imaging protocol in patients referred for 177Lu-PSMA radioligand therapy. In this protocol patients were scanned with the Quadra LAFOV PET/CT scanner 1 h post injection of a standard dose of 68Ga-PSMA (150 MBq) and received an additional low-dose (40 MBq) 18F-FDG scan one hour thereafter - with the combined protocol identifying lesions with low 68Ga-PSMA but high 18F-FDG avidity in 1 out of 14 (7%) patients [9].
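As a back-of-the-envelope illustration of why ultra-high sensitivity matters for such late acquisitions (our own sketch, assuming the commonly quoted physical half-lives of roughly 68 min for 68Ga and about 2 min for 15O, values not stated in this review), the fraction of injected activity remaining follows simple exponential decay:

```python
import math

def remaining_fraction(t_min, half_life_min):
    """Fraction of the injected activity still present t_min minutes after injection."""
    return math.exp(-math.log(2) * t_min / half_life_min)

# Assumed physical half-lives (approximate literature values)
GA68_HALF_LIFE_MIN = 67.7
O15_HALF_LIFE_MIN = 2.04

print(f"68Ga at 1 h p.i.:   {remaining_fraction(60, GA68_HALF_LIFE_MIN):.1%}")   # ~54%
print(f"68Ga at 4 h p.i.:   {remaining_fraction(240, GA68_HALF_LIFE_MIN):.1%}")  # ~9%
print(f"15O at 10 min p.i.: {remaining_fraction(10, O15_HALF_LIFE_MIN):.1%}")    # ~3%
```

With less than a tenth of the injected 68Ga activity left at 4 h p.i., it is the ultra-high sensitivity of LAFOV systems that makes such late acquisitions practical.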
Specific benefits for long-lived radionuclides
The most commonly used long-lived radionuclide is Zirconium-89 (89Zr). Advantages of 89Zr, such as its long half-life of 78.4 h, matching the pharmacokinetic behavior of antibodies, and its good in vivo stability, make it suitable for labeling monoclonal antibodies (mAbs) [10]. 89Zr-immunoPET can provide whole-body information on (tumor) target expression [11]. Another long-lived radiotracer of interest is 124I, which is used for the detection of differentiated thyroid cancer [12]. However, both tracers have a low positron abundance (23%, as opposed to 18F with an abundance of 96%) [10,12]. Hence, PET imaging with these tracers suffers from a low signal-to-noise ratio when images are acquired on conventional FOV PET/CT systems. In addition, the long physical half-life limits the amount of radiotracer activity that can be administered in order to keep radiation exposure within acceptable limits [13].
Currently, immunoPET is used almost exclusively in research settings in oncological patients with a relatively short life expectancy, because of the high mean effective doses (ranging from 0.36 to 0.66 mSv/MBq) associated with 89Zr-labeled mAbs [14]. Administering a standard amount of 37 MBq of 89Zr activity results in a radiation exposure of up to 25 mSv.
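The arithmetic behind these numbers is straightforward; a minimal sketch using the dose coefficients quoted above shows the exposure for a standard injection and the activity that would keep the exposure below 10 mSv:

```python
# Effective dose estimate for 89Zr-immunoPET using the coefficients quoted above.
DOSE_COEFF_HIGH = 0.66  # mSv/MBq, upper end of the reported range
DOSE_COEFF_LOW = 0.36   # mSv/MBq, lower end of the reported range

standard_activity_mbq = 37.0
print(standard_activity_mbq * DOSE_COEFF_HIGH)  # ~24.4 mSv, i.e. "up to 25 mSv"
print(standard_activity_mbq * DOSE_COEFF_LOW)   # ~13.3 mSv at the lower coefficient

# Administered activity that keeps the effective dose below 10 mSv at the upper coefficient
print(10.0 / DOSE_COEFF_HIGH)                   # ~15 MBq
```

Keeping below 10 mSv thus implies roughly a 2.5-fold reduction in administered activity, which the increased sensitivity of LAFOV systems can compensate for.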
LAFOV PET opens up several possibilities in the field of PET-imaging with long-lived tracers. The increased sensitivity leads to a better signal-to-noise ratio (Fig. 1).
Furthermore, it opens the possibility of lowering the administered 89Zr activity such that radiation exposures below 10 mSv become feasible. This could allow for the use of immunoPET not only as a last resort in oncology, but also in (younger) patients with benign or inflammatory disorders, first in a research setting, and in the future perhaps also in a routine clinical setting. For 124I imaging, improved image quality could lead to improved lesion detectability in thyroid cancer.
The higher sensitivity of LAFOV may also enable prolonged uptake times, which are expected to result in an improved (tumor) lesion-to-background ratio. Combining delayed imaging with novel radioactive agents allows extended study of in vivo biology [15]. Furthermore, labelled immune cells, imaged on a LAFOV PET/CT system capturing all relevant organ tissues of interest simultaneously, could be used to study crosstalk between different organ systems, e.g., organ axes or the human connectome.
Overview of current clinical applications
Clinical experiences with LAFOV PET systems have been compared with those obtained on analog and digital conventional FOV PET systems, and experiences regarding clinical optimization using LAFOV PET alone have also been reported. As the PennPET Explorer is still in its prototype stage, this subsection will focus on existing comparison studies between commercially available conventional FOV and LAFOV PET systems, published up to November 2022.
Alberts et al. [16] reported the first clinical experiences with a LAFOV Biograph Vision Quadra PET/CT in comparison with a conventional FOV digital Biograph Vision PET/CT (Siemens Healthineers). A head-to-head comparison was performed between the image quality of the Vision Quadra (sensitivity of 174 cps/kBq and a TOF performance of 219 ps [17]) and that of the Vision system (sensitivity of 16.4 cps/kBq and a TOF performance of 210 ps [18]). This intra-individual head-to-head comparison was performed in 44 patients referred for routine oncological 18F-FDG, 18F-PSMA-1007, and 68Ga-DOTA-TOC examinations. The comparison showed improved lesion detectability, reduced image noise levels, and visually improved image quality, all in favor of the Vision Quadra. In addition, it was concluded that LAFOV images of quality equivalent to images acquired for ~16 min on the conventional digital FOV system can be obtained in 2 min. This reduction in scan duration was found to be interchangeable with reducing the amount of administered radiotracer activity. This potential to reduce scan duration for oncological 18F-FDG imaging using a LAFOV Vision Quadra PET/CT was confirmed by Van Sluis et al. [19] in a study exploring European Association of Nuclear Medicine Research Ltd. (EARL) compliance, which also showed that semiquantitative accuracy was maintained for reduced scan durations. For EARL standard compliant acquisition and reconstruction protocols, scan durations could even be reduced to 1 min.
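To see how such acquisition-time reductions follow from the sensitivity figures quoted above, a first-order sketch (our own simplification, ignoring TOF gains, dead time, and noise propagation, all of which matter in practice) equates the collected counts of the two systems:

```python
# First-order estimate: collected counts ~ system sensitivity x acquisition time,
# assuming the same injected activity and ignoring TOF, resolution and dead-time effects.
SENS_VISION_CPS_PER_KBQ = 16.4    # conventional FOV Biograph Vision
SENS_QUADRA_CPS_PER_KBQ = 174.0   # LAFOV Biograph Vision Quadra

time_vision_min = 16.0
time_quadra_min = time_vision_min * SENS_VISION_CPS_PER_KBQ / SENS_QUADRA_CPS_PER_KBQ
print(f"{time_quadra_min:.1f} min")  # ~1.5 min, of the same order as the reported ~2 min
```

The same scaling argument explains why the reduction can alternatively be spent on lowering the administered radiotracer activity rather than the scan duration.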
Another, previously mentioned study by Alberts et al. [8] on an LAFOV Vision Quadra showed that late time point acquisitions using 68 Ga-PSMA-11 at 4 h p.i. were not only feasible, but even produced improved image quality compared with conventional FOV systems.
With respect to long-lived radionuclides, immunoPET with 89 Zr-labeled mAbs showed a remarkable improvement in image quality in patients scanned 4 days p.i. [20]. In this study, images were obtained on an LAFOV Vision Quadra and on either a conventional digital Vision PET/CT or an analog mCT PET/CT (Siemens Healthineers) for a direct visual comparison of image quality. Acquisitions as short as 3 min on the LAFOV system showed image quality comparable to 32 and 45 min acquisitions on the conventional FOV Vision and mCT systems, respectively.
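To illustrate why the gain in sensitivity matters so much for late 89 Zr imaging, a small decay sketch (the 78.4 h half-life and the ~23% positron abundance are taken from the text above; everything else is illustrative):

```python
# Fraction of administered 89Zr remaining at late imaging time points.
T_HALF_ZR89_H = 78.4  # hours

def remaining_fraction(hours_pi: float, t_half: float = T_HALF_ZR89_H) -> float:
    return 0.5 ** (hours_pi / t_half)

frac_4d = remaining_fraction(96)  # 4 days p.i. -> ~0.43 of the activity left
print(frac_4d)
print(frac_4d * 0.23)             # ~0.10: usable positron-emitting fraction
```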
The first clinical experiences with the uEXPLORER have been described by Badawi et al. in 5 patients undergoing different acquisition protocols including dynamic 18 F-FDG total-body imaging [21]. The uEXPLORER, with a measured system sensitivity of 174 cps/kBq and a TOF performance of 505 ps [4], was found to image better, faster (as fast as 18.75 s), at later time points after injection (e.g., up to 10 h after injection) or with lower amounts of administered radiotracer (e.g., with only 5.7 MBq injected 18 F-FDG activity) compared with conventional PET/CT imaging. Furthermore, it was shown that the system was able to acquire total-body dynamic imaging data with high temporal resolution.
Regarding evaluation of pediatric malignancies with half-dose 18 F-FDG protocols (1.85 MBq/kg), Chen et al. [22] found that acquisition times as short as 1 min resulted in images of adequate diagnostic quality with sufficient lesion detectability [23], which is imperative for pediatric patients undergoing frequent PET imaging during disease management. Furthermore, ultra-fast 30 s 18 F-FDG total-body PET imaging in 88 oncologic patients (3.7 MBq/kg) resulted in images with sufficient quality to meet clinical diagnostic requirements [24], although a clear reduction in image quality was seen for the 30 s images compared with the 300 s images. This study concluded that, for patients unable to lie still for 5 min, a 30 s scan would still enable clinical diagnosis.
In addition, one study examined the pathophysiological changes in CD8 + T cell distribution in recovering COVID-19 patients, using a 89 Zr-labeled minibody [25]. When injecting < 37 MBq of 89 Zr-labeled mAb, high quality images were obtained with the possibility of deriving parametric Patlak images. This study highlighted that it is feasible to follow in vivo migration of T-cells using LAFOV PET, which allows for exploring functional aspects such as vaccine responses, but which may also be important for immunological research in general [25].
Finally, improvement in calculated liver dosimetry using the LAFOV uEXPLORER versus the conventional analog mCT PET/CT in transarterial radioembolization of liver tumors with 90 Y microspheres was investigated in two patients by Costa et al. [20]. Even though images obtained using LAFOV PET showed an increased signal-to-noise ratio, they found that the total absorbed dose in the liver showed excellent agreement regardless of PET/CT system, but that there were differences of up to 60% when comparing liver segment doses [26]. The improved signal-to-noise ratio obtained using LAFOV PET, especially in lower-count regions of interest, is expected to improve dosimetry calculations, which warrants further investigation.
Oncology
The advantages of a highly sensitive LAFOV PET system over conventional PET systems in oncology can be divided into three major areas: reduction in administered activity or faster scanning in critically ill patients, prolonged time point imaging, and quantification of uptake as a marker of total tumor load. Especially in oncology, early response assessment is pivotal in distinguishing responders from non-responders. Reduction in administered activity opens the possibility to perform these response assessments more frequently. Furthermore, it opens up opportunities to perform scans with multiple different tracers to more accurately map the status of the disease. If a patient is not responding to the treatment, a switch to an alternative treatment line can be made more swiftly, potentially resulting in less treatment-related toxicity [27]. As such, it may contribute to better personalized treatment strategies, eventually leading to increased survival in this patient group. Due to better and more effective treatment strategies, survival from any malignancy has improved in the last decades [28]. As a result, patients are scanned more often during their (extended) follow-up. Reduction in administered activity during these follow-up investigations is pivotal for keeping the cumulative radiation burden within acceptable ranges. This also applies to the repeat imaging necessary in (younger) Hodgkin lymphoma or melanoma patients with a relatively high life expectancy [29].
In addition, LAFOV PET/CT scanning will contribute to a better understanding of the biodistribution of newly developed tracers, since different organ axes can be visualized and studied in one image. The addition of dynamically acquired kinetic information can play a role in the assessment of therapeutic efficacy [30]. Furthermore, dynamic acquisition helps to better quantify tracer uptake in tumor lesions, free from confounding signals such as non-specific uptake, and facilitates (interinstitutional) comparison of tracer uptake and lesion-to-normal tissue ratios of different tracers for the same application.
For 18 F-FDG, the most commonly used radiopharmaceutical in clinical practice, the main advantage of an LAFOV PET/CT system is the reduction in scanning time, which may lead to a higher patient throughput. It is not expected that the diagnostic accuracy, which is already high for most oncological diseases, will further increase (Fig. 2).
The published studies so far in this field predominantly compare the diagnostic performance between standard and reduced scan acquisition times. In a study on 78 patients with hepatic tumors, no significant differences were seen in the number of detected hepatic lesions between standard (15 min) and fast (2 min) scans [31].
For non-18 F-FDG tracers the benefits may be larger. For 18 F-FES, used to evaluate estrogen receptor expression in patients with metastasized breast cancer, improved sensitivity may lead to a better differentiation between low and high ER expression within a single tumor lesion. Use of LAFOV may lead to improved image quality and better signal-to-noise ratios for 68 Ga-labelled tracers, as generally lower amounts of activity are injected for these types of tracers. The improved signal-to-noise ratio also holds true for 18 F-FDOPA (Fig. 3) and 68 Ga-DOTA-TATE for imaging of neuro-endocrine tumors. Regarding 11 C-Choline PET, a highly sensitive LAFOV PET system allows whole-body data to be acquired in a single bed position. Because all data are acquired simultaneously in one bed position, LAFOV avoids the situation in conventional FOV scanners where, owing to the short half-life of 11 C, tracer decay during consecutively acquired bed positions affects the count statistics of each step-and-shoot position (Fig. 4). 11 C-Choline PET could be helpful in detecting hepatocellular carcinomas, as these are known to frequently exhibit low 18 F-FDG accumulation [32]. As stated earlier, immunoPET imaging with long-lived radiotracers such as 89 Zr-labeled mAbs will be an area in which the substantial increase in sensitivity leads to a substantial improvement in image quality using LAFOV PET/CT scanners in the oncological setting [20], enabling further development of mAb labeling beyond the primarily oncological research setting.
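A small sketch of the 11 C point made above: in a step-and-shoot acquisition each successive bed position starts later on the decay curve, whereas a single-bed LAFOV acquisition sees all organs at the same time. The bed-position count and duration below are assumptions used only for illustration:

```python
# Relative activity seen by successive bed positions for 11C (T1/2 ~ 20.4 min).
T_HALF_C11_MIN = 20.4

def decay_factor(minutes_after_start: float) -> float:
    return 0.5 ** (minutes_after_start / T_HALF_C11_MIN)

# Assume 7 bed positions of 3 min each (illustrative values, not from the text).
bed_start_times = [i * 3 for i in range(7)]
print([round(decay_factor(t), 2) for t in bed_start_times])
# -> roughly [1.0, 0.9, 0.82, 0.74, 0.67, 0.6, 0.54]: the last bed sees ~54%
#    of the activity available to the first bed, degrading its count statistics.
```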
Infection/inflammation
18 F-FDG PET/CT is widely used for diagnosis and therapy evaluation in a variety of infectious and inflammatory diseases. Both infectious and inflammatory tissues actively take up 18 F-FDG, and fungal and bacterial cells use 18 F-FDG for their own metabolism. In addition, inflammatory mediators may also cause a local upregulation of glucose transporters [33]. The diagnostic accuracy of 18 F-FDG PET/CT in this setting is high, but important issues to solve still remain.
First of all, the relative non-specificity of 18 F-FDG is a major problem, and differentiation between tumor activity, inflammation and infection is not possible. Dynamic imaging with all the major organs in the field of view of an LAFOV PET/CT system may solve this problem. As the distribution of 18 F-FDG throughout the different organs and towards the different lesions is a dynamic process, differences in glucose metabolism may be more apparent in dynamic imaging than in routine static images one hour after the administration of 18 F-FDG.
Secondly, for some indications imaging at later time points may be beneficial to have a better ratio between the inflammatory lesion and blood pool activity, for example in large vessel vasculitis or cardiac sarcoidosis. Since the improved sensitivity of an LAFOV PET/CT scanner allows for scanning even after 4 or 5 half-lives, this may be a worthwhile option.
Thirdly, low-grade chronic infectious processes, processes characterized by low bacterial load, and biofilms on prosthetic material, are hard to detect on the conventional FOV PET/CT systems, due to the limited sensitivity and low uptake. The increased sensitivity may enable a better detection of small and low-grade 18 F-FDG avid foci. It may also help in the detection of smaller inflamed vessels such as in medium-sized vasculitis or inflamed cranial vessels in cranial large vessel vasculitis.
Last but not least, ultrafast imaging may allow for imaging critically ill patients (Fig. 5) and patients admitted to the ICU with persistent inflammation or infection. In addition, it allows for scanning children without sedation. This will allow for more flexibility in hospital planning and increase patient capacity.
Cardiovascular
For cardiovascular imaging, LAFOV PET/CT holds several advantages as well. The shorter scan duration increases the accessibility of PET/CT for patients who cannot remain supine for an extended time due to, e.g., orthopnea or hemodynamic impairment. The potential to reduce administered activity could also improve the cost-benefit balance of performing baseline scans of intracardiac prostheses, which often show reactive 18 F-FDG uptake that may render 18 F-FDG PET/CT at later timepoints, e.g., in settings of suspected infection, difficult to interpret. Examples of this are Bentall prostheses and left ventricular assist devices (LVADs). Evidence is currently limited to case studies, but baseline scans have shown promise in suspected LVAD infections [34]. Reactive 18 F-FDG uptake, frequently seen in Bentall prostheses, would make these interesting targets for this approach as well [35]. Other specific advantages of LAFOV PET are the possibility of performing dynamic scans, which may facilitate differentiating between reactive 18 F-FDG uptake and uptake due to, e.g., vasculitis or infective processes, and cardiac motion correction, which could provide more accurate visualization of mobile structures in the heart, e.g., vegetations in suspected endocarditis that are frequently missed on conventional PET/CT systems [36].
Neurology
In a road map to implementation, and especially to the new possibilities of LAFOV PET/CT scanners, Slart et al. [37] already pointed out that brain imaging might enable combined assessment of the brain and spinal cord, providing a more comprehensive assessment of the molecular basis of neurodegenerative diseases [37]. In addition, imaging of organ-axis interactions may be facilitated by these systems. This has already been shown to be relevant for the brain-gut axis in Parkinson's disease and for the cardiac-brain axis, as the latter connects cardiovascular function, neurochemical asymmetries and depression [37]. While these studies already take advantage of the additional information inherent to the large axial field of view, dynamic imaging of different organs and regions simultaneously further strengthens the opportunities for less invasive absolute brain quantification, first in a research setting and possibly also in a clinical setting, and for more detailed translational research of the aforementioned organ-axis interactions. The increased sensitivity of LAFOV systems using specific tracers may further allow exploration of the involvement of previously undetectable and/or unrecognized brain regions in several neuropsychiatric disorders, while the versatility of the systems allows for lower radiation exposure or shorter scanning times, enabling brain imaging of previously more vulnerable or difficult-to-examine patient groups, such as children, intensive care patients, or patients suffering from movement disorders, psychiatric pathology or claustrophobia.
The most common sites of primary cancer which metastasize to the brain are lung, breast, colon, kidney and skin cancers. Although some metastases may give rise to a wide variety of symptoms, such as headache, ataxia, seizures or paresthesia already at a very early stage, others may remain more silent for a long time.
On the other hand, early detection and recognition of brain metastases may have a significant impact on treatment strategies and/or prognosis.
Using 18 F-FDG PET/CT in 2502 patients with solid extracranial neoplasms, a routine whole-body 18 F-FDG PET/CT scan in the absence of symptoms detected brain metastasis in 1% of the patients when the brain was included in the scan protocol [38]. The authors concluded that, while whole-body PET/CT cannot replace routine imaging techniques, positive findings provide early and crucial information for patient management, especially in asymptomatic patients [38]. It should be noted that this conclusion was drawn based on the most commonly used tracer in cancer stratification, i.e., 18 F-FDG, for which the tumor-to-background contrast ratio, and hence detectability, may be hindered by the high physiological background uptake of FDG in the brain. Interestingly, in contrast to 18 F-FDG, a new promising candidate for tumor diagnosis, therapy stratification and follow-up, the fibroblast activation protein inhibitor (FAPI), either labeled with 68 Ga or 18 F, shows negligible background activity in the brain, resulting in higher tumor-to-background ratios for brain metastases from gastric, breast, lung and liver cancers, and a higher detection rate than for 18 F-FDG [39].
The term "chemo-brain" is sometimes used to denote deficits in neuropsychological functioning, including difficulties with memory, attention, and other aspects of cognitive function, that may occur as a result of cancer treatment consisting of chemo-or systemic therapy. In the future, systematic PET imaging (using 18 F-FDG or other radiopharmaceuticals) for oncological stratification and follow-up may, at least in theory, provide in better understanding of this poorly understood syndrome as a basis for example for prevention, treatment or prognostication.
Finally, novel probes for imaging of translocator protein (TSPO) and somatostatin receptor overexpression to assess immune system reactions appear to be of additional clinical value for radiation and therapy monitoring [40]. Although, from the perspective of combined brain imaging, TSPO and somatostatin tracers may be more limited with regard to their clinical application, immunoPET tracers showing tumor dissemination and load, as well as inter- and intra-tumoral expression and heterogeneity, should have large clinical potential in predicting on an individual basis the most (cost-)effective treatment regimens (precision medicine). With regard to the latter, several immunoPET studies have already demonstrated the detection of additional brain metastases, suggesting that even when using these tracers, patients may benefit from an LAFOV window that enables simultaneous brain imaging.
Organ axes
It has become clear that many diseases and conditions, originally thought to be confined to a single organ, are much more complex, being involved in a cross-talk between organs, and with other organ systems [41]. Cardiorenal syndrome is defined as acute kidney injury caused by acute cardiac dysfunction such as acute decompensated heart failure and acute coronary syndrome. Deteriorating renal function can further complicate cardiac dysfunction resulting in a downward trend. The brain-heart axis is implicated in post-stroke cardiovascular complications known as the stroke-heart syndrome, sudden cardiac death and the Takotsubo syndrome, amongst other neurocardiogenic syndromes. Dynamic 15 O-H 2 O PET brain imaging can identify the central nervous pathways of angina pectoris, highlighting the interplay between the brain and the heart in such patients [42]. There is also evidence that connects cardiovascular function, neurochemical asymmetries and depression [43]. An 18 F-FDG PET/CT study has linked resting amygdalar activity with cardiovascular events, indicating a potential mechanism to predict risk of cardiovascular disease caused by stress [44].
Another example is the gut-brain axis. Bacteria in the gut could have profound effects on the brain, and might be tied to a whole family of disorders [45]. There is also evidence that gut microbiota and their metabolites interfere with the host's immune and endocrine systems [46].
Using LAFOV PET/CT systems, organ interactions can be studied before and also during therapy. Again, a better understanding of these interactions may lead to precision medicine for individual patients.
Opportunities for artificial intelligence (AI)
The increased sensitivity and the large coverage of LAFOV systems mean that a much larger number of photons originating from the body is registered by the PET detectors of the scanner. This, in turn, results in enormous "raw" datasets. Part of this extra information translates into improved image quality as described above, but a lot of information is also not utilized during conventional image reconstruction. Storing the raw data can cause significant challenges in a hospital environment, as the datasets can be up to 1 TB per scan depending on the tracer type, injected dose and, of course, overall scan duration. This requires high-performance storage hardware, such as a petabyte-scale RAID array, so that these datasets do not need to be transferred over traditional hospital IT networks. However, when datasets are stored, they can provide a wealth of additional information that can be extracted using both conventional methods and artificial intelligence (AI). AI is expected to play an increasingly critical part in imaging equipment reconstruction and post-processing pipelines in the field of nuclear medicine [47].
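As a rough order-of-magnitude check of the dataset sizes mentioned above (all three input numbers below are assumptions chosen for illustration; actual event word sizes and count rates are vendor- and study-specific):

```python
# List-mode data volume ~ event rate x bytes per event x acquisition duration.
BYTES_PER_EVENT = 8        # assumed size of one coincidence event word
EVENT_RATE_CPS = 5e7       # assumed prompt rate for a high-count LAFOV acquisition
DURATION_S = 3600          # e.g., a one-hour dynamic study

size_tb = BYTES_PER_EVENT * EVENT_RATE_CPS * DURATION_S / 1e12
print(size_tb)  # ~1.4 TB, the order of magnitude quoted above
```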
A good example of the use of AI is given by a study of Ma et al., who showed that a deep learning reconstruction algorithm using raw LAFOV Quadra PET data as input had the potential to speed up image reconstruction and improve image quality without additional CT images [48]. Another example is a study by Sari et al., who used a deep-learning based framework to generate whole body attenuation maps on an LAFOV PET scanner by only using the system's own lutetium-based (LSO) scintillator background radiation [49]. This would enable CT-free attenuation and scatter correction on LAFOV systems.
In summary, in the (near) future, applying AI-based methods to the wealth of data produced by LAFOV PET/ CT systems can help in improving image quality and quantification and even reduce the reliance on CT-based information (thereby reducing overall radiation exposure) for corrections.
Hurdles to overcome
Considering all the advantages mentioned above, one might think that buying and installing an LAFOV PET/CT system is a must, leading to lower administered activities, new indications, new patient groups, and faster scanning with a higher patient throughput. However, the last item is a big issue and does not simply come with the purchase of an LAFOV PET/CT scanner. Several prerequisites have to be met [37], such as a radiochemistry department that is able to produce the needed amount of radiopharmaceuticals and an infrastructure that allows for rapid successive injections. This requires investment in production capacity, e.g., an onsite cyclotron and a laboratory that is fully automated according to Good Manufacturing Practice. Another prerequisite is the need to update and extend the patient facilities. More preparation rooms, waiting rooms, and changing rooms are necessary to inject and scan substantially more patients. Besides, investments are necessary for additional personnel, for the production, scanning, and reporting parts, to keep up with the associated patient logistics. Furthermore, to fully explore all the possibilities of LAFOV PET/CT scanners, and to be cost effective, working hours may have to be extended, which also requires more personnel and may demand working in shifts. Ideally, this has to be anticipated before purchasing and installing an LAFOV PET/CT scanner in a department.
Conclusion
This review paper aimed to provide an overview of the opportunities and applications of LAFOV PET/CT for clinical practice. Apart from improved image quality and lesion detectability with respect to conventional FOV PET/CT systems, LAFOV allows, e.g., reduced acquisition times and reduced amounts of administered radiotracer, but also delayed imaging to follow tracers, including labelled mAbs, in vivo over an extended period of time. Furthermore, the larger axial FOV allows simultaneous investigation of the functional crosstalk between organ systems as well as continuous dynamic PET imaging of all relevant organ structures simultaneously to map the pharmacokinetic behavior of (new) tracers. The future holds many opportunities for optimizing existing clinical applications using LAFOV PET, for example with the development and application of AI-based methods, and many more that have yet to be explored and introduced.
"Physics"
] |
Optimization of Glass Edge Sealing Process Using Microwaves for Fabrication of Vacuum Glazing
Among the various methods available for glass edge sealing, this study uses microwaves to seal glass edges. Through basic experiments, the main process conditions for edge sealing of glass were derived, and the experimental plan and analysis were carried out using the Box-Behnken method of response-surface analysis based on 3 factors and 3 levels. The step height, which influences sealing, was set as the response variable. If the step height becomes too large, the glass can be damaged, and if the step height is too small, the edge sealing will not be achieved. Accordingly, the process was optimized so that edge sealing is achieved while the step height is minimized. A predictive regression equation was derived for the step height of the sealed edge, and a main-effect analysis was performed for the step height. Using the response-optimization tool, we derived the optimum process conditions that minimize the step height of the edge sealing, and a verification experiment confirmed that the measured step height deviated from the target value by only 4.1%.
Introduction
A vacuum glass is a representative product wherein vacuum is used to improve the insulation performance. Various studies on fabricating vacuum glasses are currently being conducted, and glass edge sealing is a core process in the fabrication of vacuum glasses.
The edge sealing process is a technique whereby glass bonding (frit) is applied to the glass surface to bond two pieces of glass together. Numerous studies on this process have been conducted [1,2]. Edge sealing techniques using frits can be applied to various areas, from display technologies to home appliances and windows. However, the reduced strength due to the difference between the thermal expansion coefficients of the glass and glass bonding remains a problem [3].
Using hydrogen gas torches in the edge sealing process resolves the problem of the thermal expansion coefficient mismatch. However, the resulting sealed edges sag, making them inappropriate for panel fabrication [4,5]. Therefore, this study used microwaves to resolve this problem. Microwaves are a strong energy source with many practical applications in both industrial and commercial fields. They offer excellent reproducibility and help reduce the processing time. Moreover, the uniform heating process improves the quality of the final product.
When microwaves are used to sinter ceramics, the temperature of the material increases from the inside. Therefore, by combining microwaves with surface heating technologies, a highly uniform thermal energy distribution can be achieved. Furthermore, microwaves can be used to heat specific regions, such as interfaces, by exploiting the interactions between the microwaves and the material. Thus, applying microwaves to the sealing process can result in a faster processing time, excellent performance, and allow for the manipulation of characteristics in the interfaces of composite materials [6][7][8].
In this paper, microwaves were used to seal glass edges. The levels of process variables for the glass edge sealing were determined through basic experiments, and the step height of the sealed edge was set as the characteristic value. Furthermore, the target step height of the sealed edge was determined through a liquid penetrant examination. Based on the process variables and the target step height, additional experiments were conducted for process optimization. For the optimization of edge sealing, a response-surface design was applied. ANOVA and regression equations were used to determine the sealing characteristics with respect to each process variable. Additionally, optimization tools were used to improve the process of deriving the target step height (characteristic value). Further experiments were carried out using the optimized process to test the validity of the process.
Equipment Setup
The microwave chamber for glass edge sealing was designed and constructed based on the electromagnetic wave distribution analysis using the HFSS program by ANSYS [6]. This experiment used the waveguide model WR-340 with a frequency range of 2.20-3.30 GHz, a voltage standing wave ratio of 1.25:1, and physical dimensions of 86.36 × 43.18 mm. The magnetron for microwave emission comprised six power sources with an output of 1 kW. The dimensions of the heating chamber were 400 (w) × 400 (d) × 192 (h) mm. Figure 1 shows the layout of the microwave chamber.
A thin sheet of glass with a thickness of 0.5 mm was placed between the two soda lime glasses, each with a thickness of 5 mm, to maintain the gap between them. Generally, microwaves permeate through glass rather than being reflected by or heating the glass. However, manufacturers add impurities or additives so that the glass interacts with the microwaves. Therefore, when glass is heated using microwaves, the thermal energy is often concentrated, resulting in thermal shock and damage. To prevent this and enable efficient heat distribution, graphite plates were placed above and beneath the glass. Figure 2a,b shows the schematics of the chamber for glass sealing and an image of the fabricated chamber, respectively.
Basic Experiment
The basic experiment was carried out to set the process and reaction variables. The factors that affected the quality of the sealed edge when the glass edges were sealed using microwaves were examined. Furthermore, different factors were derived in this stage to verify the characteristics of the sealed edge.
The microwave radiation and the glass edge sealing were carried out at the glass transition temperature in this experiment. The sealing result showed inadequately pressed parts on the edges, thus creating unsealed areas. A liquid penetrant examination was conducted to verify the sealing [9,10]. The liquid permeated through the glass to areas that were not sealed. Figure 3 shows an image of the liquid penetrant examination for the resulting edge sealing in the basic experiment.
If a high pressure was applied to the four edges, the probability of complete sealing increased. However, in that case, a large step height between the sealed edges and the center of the glass was generated. This may increase the surface stress of the glass after the experiment, which could lead to breakage. Thus, appropriate fabrication conditions were required such that the step height between the center of the glass and the edges was minimized while still allowing for completely sealed edges.
To derive the minimal step height, five basic experiments were carried out with different step heights. In each experiment, the sealed glass was divided into four sections vertically and horizontally, resulting in 16 separate subsections. To measure the height of the separate regions, a laser displacement sensor (CD-33-30N-422; Optex, Ogoto, Japan) was used. Figure 4 shows the layout of the step height measuring system. Figure 5 shows the schematics of the step height measurement.
After the glass height measurement, the measured glass sections were cut using the water jet technique. The exposed surfaces were put through liquid penetrant examination to ensure complete sealing. The sealed edges were further analyzed using Dino-Lite, which is a digital microscope. Figure 6 shows the cross-sections of the microwave sealed edges. Figure 6a shows an unsuccessfully sealed specimen, whereas Figure 6b shows a successfully sealed specimen.
The experimental results showed that if the step height was less than 0.66 mm, a significant portion of the sealed edge was not sealed. Even at the highest step height, i.e., 0.93 mm, there were regions where the sealing was incomplete, suggesting that sealing faults can still occur in specific sections at this step height. Therefore, a step height of at least 0.93 mm is required for complete sealing.
Variable Setting
The process and response variables were set by conducting basic experiments on glass edge sealing using a microwave heating chamber. Among the various process variables relevant to glass edge sealing, the heating rate of the glass and the holding time for pressing the glasses together were set as process variables. If the heating rate is too high, the glass could be damaged by thermal shock [11][12][13], and if the heating rate is too low, the fabrication process takes too long. The sealing temperature was set approximately equal to the transition temperature of the glass. The holding time was defined as the duration after the maximum temperature was reached; if this is too short, the glass could break.
For glass edge sealing using microwaves, the thermal energy must be concentrated at the edges of the glass and a constant pressure must be maintained. The pressure was kept constant along all the edges owing to the design characteristics of the mechanical pressing device. Thus, in this research, three factors at three levels were identified in the basic experiment. The heating rate was set in the range of 6-8 °C/min, the sealing temperature was set in the range of 560-580 °C, and the holding time was set in the range of 30-50 min. Based on the response variable, it was determined in the basic experiment that the maximum step height corresponding to incomplete sealing was 0.93 mm. Therefore, considering the errors in the basic experiments, a nominal-is-best experiment was performed with a target step height of 0.98 mm in this study. Table 1 lists the conditions for the process and response variables.
Designing Experiments Using DOE
Based on the process variables identified in the basic experiments, additional experiments were carried out to derive the optimal processing conditions under which nominal-is-best characteristics were achieved by fabricating sealed edges with a uniform thickness. The experiment was designed using the Box-Behnken design, which is used for response-surface methodologies. The Box-Behnken design is a statistical method that relates the design variables (at least two) to the response level while also deriving the process-variable settings that give the best response [14]. Table 2 lists the experimental conditions for each process variable. A total of 30 experiments were carried out.
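As an illustrative sketch (not taken from the paper) of how a 3-factor Box-Behnken design can be constructed and mapped onto the process ranges stated above, see the snippet below. The standard 3-factor design has 12 edge-midpoint runs plus center-point replicates; the number of center points and any replication needed to reach the reported 30 runs are assumptions here.

```python
# Minimal Box-Behnken construction in coded units (-1, 0, +1), mapped to the
# physical ranges reported in the text: heating rate 6-8 C/min, sealing
# temperature 560-580 C, holding time 30-50 min. The three center points are
# an assumption; the paper only states that 30 runs were performed in total.
import itertools
import numpy as np

def box_behnken(n_factors: int, n_center: int = 3) -> np.ndarray:
    runs = []
    # Edge midpoints: each pair of factors takes the 2x2 factorial levels,
    # while the remaining factor is held at its center level (0).
    for i, j in itertools.combinations(range(n_factors), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * n_factors] * n_center)  # center points
    return np.array(runs, dtype=float)

lows = np.array([6.0, 560.0, 30.0])    # heating rate, sealing temp, holding time
highs = np.array([8.0, 580.0, 50.0])
center, half_range = (lows + highs) / 2, (highs - lows) / 2

coded = box_behnken(3)
physical = center + coded * half_range
print(physical)  # one row per run: heating rate, sealing temperature, holding time
```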
Analysis of Experimental Results
The analysis based on the design condition tables was carried out to assess the significance of each factor and of the interaction terms. At a 95% confidence level, terms with a p-value above 0.05 were pooled into the error term, so that only the significant terms were retained. The glass sealing temperature and the holding time were found to affect the step height. Moreover, the squared holding time and the interaction between the heating rate and the sealing temperature showed significant effects. The R² value is the coefficient of determination and indicates the effectiveness of the model; an R² value close to 100% indicates that the model represents the observed values well. An R² value of 66.80% was obtained for the regression equation based on the three process variables, indicating significance within 5%. Table 3 lists the analyzed results. Equation (1) shows the obtained regression equation, in which the response is the step height (mm), h_r is the heating rate (°C/min), s_t is the sealing temperature (°C), and h_t is the holding time (min). Figures 7 and 8 show the contour and surface plots of the step height based on the process variables. The step height showed a curvature effect depending on the holding time. It was also found that the step height of the sealed edge was less affected by the heating rate, and that the step height was proportional to the sealing temperature.
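The numerical coefficients of Equation (1) are not reproduced in this text. For reference, the full quadratic response-surface model underlying such a regression has the general form (a sketch; the β coefficients are estimated from the design runs, with insignificant terms pooled into the error as described above):

$$
h_{\text{step}} = \beta_0 + \beta_1 h_r + \beta_2 s_t + \beta_3 h_t + \beta_{11} h_r^2 + \beta_{22} s_t^2 + \beta_{33} h_t^2 + \beta_{12} h_r s_t + \beta_{13} h_r h_t + \beta_{23} s_t h_t
$$

where $h_{\text{step}}$ is the step height (mm), $h_r$ the heating rate (°C/min), $s_t$ the sealing temperature (°C), and $h_t$ the holding time (min).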
Deriving Optimal Fabrication Conditions
To satisfy the minimal step height of 0.98 mm, as previously determined in the basic experiment, optimal fabrication conditions were derived using response-optimization tools. The optimal values of the heating rate, sealing temperature, and holding time were found to be 6.13 °C/min, 573.13 °C, and 31.30 min, respectively. Figure 9 shows the graph of the optimized response, which was used to predict the step height of the sealed edge based on the process variables.
Verification of the Optimized Process Conditions
To verify the optimized process conditions derived in the experimental design section, verification experiments were carried out. In this experiment, the heating rate of the microwave chamber was set to 6 °C/min, the sealing temperature was set to 573 °C, and the holding time was set to 31 min. Due to equipment limitations, digits after the decimal point were omitted.
Laser sensors were used to measure the step height of the glass sample created under the derived optimized fabrication conditions. The step height was found to be 0.94 mm, which was 4.1% lower than the target step height. Furthermore, a liquid penetrant examination was applied to ensure that the edges were sealed completely. Figure 10 shows the glass sample created under the derived optimized fabrication conditions.
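The quoted 4.1% deviation follows directly from the measured and target step heights:

$$
\frac{0.98\ \text{mm} - 0.94\ \text{mm}}{0.98\ \text{mm}} \times 100\% \approx 4.1\%
$$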
The sealed glass was cut through water jet processing and the sealed cross-section was observed using a digital microscope. Figure 11 shows the observation results; it was confirmed that all glass edges were completely sealed.
Conclusions
When microwaves are used to seal glass edges, the sealed edges are pressed to differing degrees depending on the process variables. Therefore, process conditions were derived to optimize edge sealing based on the step height of the pressed edges.
The initial process conditions under which the edges were sealed using microwaves were determined through basic experiments. Furthermore, to analyze the sealing characteristics based on the process conditions, the step heights were measured and water jets were used to cut the sealed glass. A liquid penetrant examination was then applied to confirm complete sealing, based on which the target step height of the pressed edge was set. To analyze the correlations between the three factors at three levels of the process variables established through the basic experiment and the pressing of the sealed edge, as well as to derive the optimal process conditions for edge sealing, additional experiments were performed using the response-surface methodology, a DOE method. The regression equation was derived by analyzing the experimental results, and optimum process conditions of a heating rate of 6 °C/min, a sealing temperature of 573 °C, and a holding time of 31 min were derived, reflecting the nominal-is-best characteristic. Additional experiments were conducted based on the derived optimal process conditions. By comparing the predicted value for the optimum condition with the experimental value and confirming an error of 4.1%, the validity of the optimum process conditions and the regression equation was verified.
It is expected that the glass edge sealing process using microwaves under the optimal conditions derived in this study can be applied to the production of vacuum glasses.
Overexpression, purification, and properties of Escherichia coli ribonuclease II.
Ribonuclease II (RNase II) is a major exonuclease in Escherichia coli that hydrolyzes single-stranded polyribonucleotides processively in the 3′ to 5′ direction. To understand the role of RNase II in the decay of messenger RNA, a strain overexpressing the rnb gene was constructed. Induction resulted in a 300-fold increase in RNase II activity in crude extracts prepared from the overexpressing strain compared to that of a non-overexpressing strain. The recombinant polypeptide (Rnb) was purified to apparent homogeneity in a rapid, simple procedure using conventional chromatographic techniques and/or fast protein liquid chromatography to a final specific activity of 4,100 units/mg. Additionally, a truncated Rnb polypeptide was purified, solubilized, and successfully renatured from inclusion bodies. The recombinant Rnb polypeptide was active against both [3H]poly(A) as well as a novel (synthetic partial duplex) RNA substrate. The data show that the Rnb polypeptide can disengage from its substrate upon stalling at a region of secondary structure and reassociate with a new free 3′-end. The stalled substrate formed by the dissociation event cannot compete for the Rnb polypeptide, demonstrating that duplexed RNAs lacking 10 protruding unpaired nucleotides are not substrates for RNase II. In addition, RNA that has been previously trimmed back to a region of secondary structure with purified Rnb polypeptide is not a substrate for polynucleotide phosphorylase-like activity in crude extracts. The implications for mRNA degradation and the proposed role for RNase II as a repressor of degradation are discussed.
Because the rate of synthesis of any given protein is directly proportional to the concentration of its message, regulating the balance between mRNA decay and its synthesis is an important aspect of gene expression. In Escherichia coli, it is widely accepted that mRNA decay is initiated by a series of endonucleolytic cleavages catalyzed by RNase E (1)(2)(3) or occasionally by RNase III (4,5) followed by processive exonucleolytic degradation of the message to oligo- and mononucleotides (1)(2)(3). Two 3′-exonucleases have been implicated in this process: ribonuclease II (RNase II) 1 and polynucleotide phosphorylase (PNPase) (6). RNase II, which is responsible for the majority of the exonucleolytic activity in E. coli extracts (7), hydrolyzes RNA to release 5′-mononucleotides (8), while PNPase phosphorolyzes RNA to mononucleoside diphosphates (9). Although RNase II activity was first described over three decades ago (10,11) and purified from whole cells several years later (12)(13)(14), details of its role in mRNA degradation are still poorly understood.
RNA structure, known to be an important determinant of mRNA stability, can protect upstream sequences from digestion by the 3′-exonucleases. The Rho-independent terminator sequence (trp t) of the tryptophan operon (15) and the intergenic (malE-malF) REP sequence of the maltose operon (16) are classic examples of secondary structures that protect upstream RNA from 3′-exonucleolytic degradation both in vitro and in vivo. These investigations implied that the observed protection by 3′-stem-loop structures was the result of an impediment to the processive activities of RNase II and, to a lesser extent, of PNPase (15)(16)(17). Recent observations of the decay of RNA-OUT, the antisense RNA that regulates Tn10/IS10 transposition, demonstrate that the higher the thermal stability of the RNA structure, the larger the barrier to degradation by RNase II (18). Degradation by PNPase is much less affected by the relative stability of the RNA-OUT structure (18). Interestingly, RNA-OUT appears to be stabilized approximately 3-fold against PNPase attack by RNase II (18). In addition, the rpsO mRNA is also stabilized significantly by the presence of RNase II (19). Although the mechanism by which RNase II shelters upstream sequences from further exonucleolytic attack is not understood, the observed protection was attributed to the formation of a stable RNase II-RNA complex, which sequesters the 3′-end of the transcript (18,19).
As part of the investigation of the functional and biophysical properties of RNase II and its role in the overall decay of mRNA, we have overexpressed RNase II and developed a rapid and simple purification of the enzyme free of other nucleases. The purified enzyme was used to investigate the mechanisms by which stem-loop structures impede exonucleases and the ability of RNase II to act as a repressor of PNPase activity.
EXPERIMENTAL PROCEDURES
Bacterial Strains and Plasmids-The E. coli strain 18-11 (rna⁻, rnb⁻, rnd⁻, rbn⁻, rnt⁻) (20) was obtained from Dr. M. P. Deutscher (University of Connecticut Health Center, Farmington), while the strain CF881 F Δlac argA trp recB1009 Δ(xthA-pnc) Δrna was obtained from Dr. M. Cashel (National Institutes of Health). The vector pET-11 and its host strain BL21(DE3) (21) were obtained from Novagen. The plasmid pRP40 (22) was obtained from Dr. N. Sonenberg (McGill University, Montreal). The following oligonucleotide primers were synthesized based on the previously published rnb sequence (23): fP1 (5′-GCGAGGATCCAGGAGGTGACAATTATGTTTCAGGACAAC) and rP1 (5′-GCGAGGATCCTTTCCATGCGGACTTCGGCATTA). An additional reverse primer rP2 (5′-GCGAGGATCCATCGACGGTCAGACTCATCATCA) was constructed based on the partial DNA sequences of pRZA17 and pRZA18 obtained from Dr. C. M. Arraiano (Centro de Tecnologia Química e Biológica, University of Lisbon, Portugal) which contain the 3′-untranslated region of the rnb gene. The predicted coding sequence of the rnb gene of E. coli was amplified from genomic DNA of strain MV1190 by the polymerase chain reaction. The products were cleaved with BamHI and ligated into the unique BamHI site of pET-11. The orientation of the 2.4- (fP1-rP2) and 1.9-kilobase pair (fP1-rP1) BamHI fragments in the recombinant plasmids was verified by restriction mapping and DNA sequencing of the entire rnb gene. The resulting plasmids, pGC100 and pGC101, were used to transform BL21(DE3) to yield strains GC100 and GC101, respectively.
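A quick way to sanity-check the quoted primer sequences is to confirm that each carries the BamHI recognition site (GGATCC) required for cloning into the unique BamHI site of pET-11, and that fP1 carries a Shine-Dalgarno-like element upstream of the rnb start codon. The short sketch below performs only that check on the sequences given above; the variable names are ours and the check is illustrative, not part of the original protocol.

```python
# Sanity check of the published primer sequences: each should contain a
# BamHI site (GGATCC); fP1 additionally carries an AGGAGG Shine-Dalgarno-like
# element upstream of the rnb start codon (ATG).
primers = {
    "fP1": "GCGAGGATCCAGGAGGTGACAATTATGTTTCAGGACAAC",
    "rP1": "GCGAGGATCCTTTCCATGCGGACTTCGGCATTA",
    "rP2": "GCGAGGATCCATCGACGGTCAGACTCATCATCA",
}
BAMHI = "GGATCC"

for name, seq in primers.items():
    pos = seq.find(BAMHI)
    print(f"{name}: BamHI site at position {pos}" if pos >= 0
          else f"{name}: no BamHI site found")

fp1 = primers["fP1"]
print("fP1 Shine-Dalgarno-like element (AGGAGG) at", fp1.find("AGGAGG"))
print("fP1 start codon (ATG) at", fp1.find("ATG"))
```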
RNase II Assays-The 92-nucleotide (nt) partial duplex RNA substrate, which we call t40B (previously called RNA I) (22), was generated from the plasmid pRP40 linearized with the restriction enzyme BamHI. Synthesis of uniformly labeled t40B was directed from an SP6 promoter in the presence of [α-³²P]CTP as described previously (24). Assays for RNase II activity were assembled in a 70-µl reaction volume containing 10 pmol of labeled t40B in a reaction buffer containing 17 mM HEPES·NaOH, pH 7.5, 0.5 mM MgAc2, 100 mM KCl, 2 mM DTT, 5% glycerol, and 10 µg/ml acetylated bovine serum albumin (New England Biolabs). Protein was added last to the final concentration specified in the figure legends, and incubations were performed at 37°C. Samples were withdrawn at various times and quenched in 3 volumes of loading buffer containing 90% deionized formamide, 22 mM Tris, 22 mM boric acid, 0.5 mM EDTA, 0.1% xylene cyanol FF, and 0.1% bromphenol blue. The products were resolved by electrophoresis on 10% polyacrylamide gels containing 8 M urea and visualized by autoradiography or with a Molecular Dynamics PhosphorImager system. Activity was also determined by release of acid-soluble radioactivity from [³H]poly(A) (25). One unit of RNase II activity is defined as the release of 1 µmol of AMP/h.
Preparation of Crude Extracts-Cultures of CF881 and 18-11 grown in 1 liter of rich medium (21) to late logarithmic phase were harvested by centrifugation and frozen at −70°C until use. The thawed cells were resuspended in 3 volumes of buffer A (60 mM Tris·HCl, pH 7.5, 10 mM MgCl2, 60 mM NH4Cl, 0.05 mM EDTA, 1 mM DTT) and ruptured by passage through an Aminco French pressure cell at 15,000 p.s.i. The cell lysate was centrifuged at 30,000 × g for 30 min in a Beckman JA-20 rotor at 4°C. The supernatant (S-30) was then centrifuged at 150,000 × g in a Beckman Ti70.1 rotor for 2 h at 4°C. The supernatants, S-30 and S-150, were the source of crude extracts for subsequent experiments.
Preparation of RNase II (Rnb) from an Overexpressing Strain-Cultures of GC100 were grown in a rich medium (21) at 30°C to early logarithmic phase and induced with 0.4 mM isopropyl β-D-thiogalactopyranoside (IPTG) for 5 h. The cultures were chilled, and the cells were harvested by centrifugation at 4,000 × g for 10 min. All subsequent procedures were performed at 4°C. Cell pellets were resuspended in 20-25 ml of buffer B containing 50 mM HEPES·NaOH, pH 7.5, 500 mM NaCl, 1 mM MgCl2, 0.1 mM EDTA, 5 mM DTT, 0.1 mM phenylmethylsulfonyl fluoride, 0.8 µg/ml leupeptin, and 2 µg/ml aprotinin. The cells were ruptured by passage through an Aminco French pressure cell at 15,000 p.s.i. The lysate was centrifuged at 30,000 × g for 60 min in a Beckman JA-20 rotor to pellet unbroken cells and insoluble material. Approximately 60 mg of the S-30 was loaded onto a column of Affi-Gel blue (Bio-Rad) (1.25 × 21.5 cm) previously equilibrated with 3 column volumes of buffer C (25 mM HEPES·NaOH, pH 7.5, 5% glycerol, 2 mM DTT, 1 mM MgCl2, 0.1 mM EDTA) containing 500 mM NaCl. The column was washed with 3-5 column volumes of this buffer at a flow rate of 8.3 ml/h (6.75 cm/h) driven by a P1 peristaltic pump (Pharmacia Biotech Inc.). The Rnb polypeptide was eluted with 5 column volumes of buffer C containing 3 M NaCl. The eluent was pumped directly onto a column of hydroxylapatite (Bio-Rad) (0.75 × 8.5 cm) at a flow rate of 6.7 ml/h (15 cm/h). After washing with 5 column volumes of buffer C containing 1 mM sodium phosphate, pH 7.5, the Rnb polypeptide was eluted with a 50-ml gradient of sodium phosphate, pH 7.5 (1-250 mM), in buffer C at a concentration of 75 mM sodium phosphate. Fractions containing the Rnb polypeptide were divided into pool A or pool B based on the contaminants present in the fractions. A portion of pool A was loaded onto a column of Affi-Gel heparin (Bio-Rad) (0.75 × 8.0 cm). The column was washed with 3-5 column volumes of buffer C at a flow rate of 7.2 ml/h (16 cm/h). The Rnb polypeptide was eluted from the column with a 50-ml gradient of NaCl (0-400 mM) in buffer C at a concentration of 130-140 mM NaCl. Alternatively, chromatography (FPLC) of pool A on a Resource Q column (Pharmacia) was substituted for the Affi-Gel heparin step. After loading the sample and washing it with 5 column volumes of buffer C containing 150 mM NaCl, the Rnb polypeptide was eluted from this resin with a 50-ml gradient of NaCl (100-400 mM) in buffer C at a concentration of 220 mM NaCl. The presence of the Rnb polypeptide in various fractions was monitored qualitatively by polyacrylamide gel electrophoresis and quantitatively by enzyme assay (see above). The pooled fractions obtained from heparin-agarose chromatography were the source of purified Rnb polypeptide in all subsequent experiments.
UV Photocross-linking-Assay mixtures were prepared as described above with 160 fmol of t40B substrate. After incubation on ice for 2-5 min, the sample was subjected to a single 2-6-ns pulse (40-50 mJ) with a 266-nm UV laser (Spectra Physics) as described previously (26). The sample was then incubated with 5 µg of RNase A and 5 units of RNase T1 at 37°C for 45 min to remove excess RNA. Each digested sample was boiled in an equal volume of SDS sample buffer and separated electrophoretically on a 15% SDS-polyacrylamide gel. The cross-linked proteins were visualized by autoradiography.
Exonucleolytic Activity in Crude Extracts From E. coli-A partially duplexed RNA substrate (Fig. 1a) was used to assay extracts generated from various E. coli strains for putative RNA helicase activities. Instead of detecting an activity that could unwind the duplexed RNA to monomers, we observed the partial degradation of the synthetic substrate in extracts that are wild type for RNase II activity but not in extracts deficient for a number of exonucleases including RNase II (Fig. 1, compare b and c). Complete conversion of the 92-nt substrate to a relatively stable 77-nt degradative intermediate was observed in crude extracts prepared from strain CF881 over a 60-min time course (Fig. 1b). The exact size of the product was determined on a sequencing gel (data not shown). In contrast, crude extracts prepared from strain 18-11 were unable to digest the substrate (Fig. 1c). Several additional experiments were undertaken to confirm that the 77-nt degradation product (shown in Fig. 1a) corresponds to the product of RNase II stalling 9 nucleotides 3′ to the double-stranded region of the substrate. First, the denatured 77-nt product retains a 5′-end label (data not shown). Second, the partial duplex substrate is resistant to digestion by the purified Ams/Rne/Hmp-1 polypeptide, the catalytic subunit of RNase E (27), under conditions where authentic substrates would be processed to completion (data not shown). Third, incubation of the 92-nt substrate under conditions where PNPase, the other major exonucleolytic activity in E. coli, would be active also generates a 77-nt product but only in the presence of 10 mM sodium phosphate (Fig. 2a). In this case, however, the 77-nt product can be degraded further in prolonged incubations (data not shown). Moreover, extracts prepared from a strain containing the mutant pnp-7 allele, which largely lacks PNPase activity but does contain RNase II activity, also generate the 77-nt product in the presence or absence of phosphate (data not shown). Although contributions from other exonucleases cannot be excluded completely, the phosphate-independent formation of the 77-nt product is most consistent with RNase II activity. This was confirmed (see below) using purified recombinant RNase II.
Overexpression and Purification of RNase II (Rnb)-The predicted coding sequence of the rnb gene of E. coli was amplified by the polymerase chain reaction as described under "Experimental Procedures." All primers contained BamHI restriction sites, and fP1 also contains a Shine-Dalgarno sequence 5′ to the rnb start codon such that the amplified product could be cloned into the unique BamHI site of pET-11 and subsequently overexpressed using the T7 RNA polymerase encoded by BL21(DE3) (21). The partial structures of plasmids containing all or part of the rnb gene are depicted in Fig. 3, b and c. Due to errors in the previously published rnb sequence, which predicted a stop codon at position 2078 (23), plasmid pGC101 (Fig. 3c) contains most of the rnb coding sequence except for a deletion of 26 nucleotide residues, which is replaced by 66 nucleotide residues of vector-derived sequence at the 3′-end of the construct. Plasmid pGC100 (Fig. 3b) contains the entire predicted 1932-nucleotide residue open reading frame, the 3′-untranslated region including the putative Rho-independent terminator, and approximately 400 nucleotide residues of intercistronic spacer under the control of the T7-lac promoter-operator region in pET-11 (21).
Upon induction of cultures of GC100 or GC101 with IPTG, the Rnb polypeptide was expressed to the extent that it represented the most abundant polypeptide in whole cell extracts and a significant fraction of the total cellular protein (Fig. 4, lane 3). When assayed against poly(A), crude extracts (S-30) from strain GC100 displayed a specific activity of 1,184 units/mg (Table I), 300-fold higher than that obtained from crude extracts prepared from the haploid strain CF881 (specific activity = 3.9 units/mg). An efficient method of purification was developed in part by exploiting several effective steps from previously published methods (12)(13)(14). The initial step relies on Cibacron blue-agarose chromatography to remove the bulk of the nucleic acids and contaminating proteins while the majority (>90%) of the Rnb polypeptide remains bound to the column. Considerable efficiency was gained by loading the 3 M NaCl eluate from this column directly onto a hydroxylapatite column. This proved to be an invaluable step in the purification method since concentration, desalting, and significant purification of the Rnb polypeptide could take place in a single step. The apparent loss of activity after hydroxylapatite chromatography (Table I) may have been due to the inhibition by Ca²⁺ ions leached from the column at high ionic strength, as Ca²⁺ has been reported to inhibit RNase II activity (10,11). Final purification of Rnb from most contaminants could be achieved by affinity chromatography on heparin-agarose or by ion exchange chromatography (FPLC). A sample of the purified Rnb polypeptide is shown in Fig. 4, lanes 7 and 8. Based upon Coomassie Blue or silver staining of overloaded polyacrylamide gels, the preparation was judged to be about 95% pure with a few faint minor contaminating bands. The specific activity of the Rnb polypeptide purified to the end of the heparin-agarose step was determined to be 4,100 units/mg, which is nearly 2-fold greater than that reported by others for the enzyme purified from whole cells (11)(12)(13). However, the specific activity of this preparation is approximately 2.5-fold lower than the best reported purification (14). It is quite possible that not all of the overexpressed Rnb polypeptide is properly folded or fully active. Nonetheless, this method provides a more rapid and facile purification of RNase II with good yields and activity.
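The fold-overexpression and the per-molecule rate implied by the final specific activity can be checked with simple arithmetic. The sketch below uses only the numbers quoted in the text plus two stated assumptions: a molecular mass of roughly 70 kDa for Rnb (the size of the band seen on SDS-PAGE later in the paper) and the unit definition of 1 µmol of AMP released per hour.

```python
# Back-of-envelope checks on the purification numbers quoted in the text.
# Assumptions: Rnb molecular mass ~70 kDa (gel estimate) and
# 1 unit = 1 umol of AMP released per hour (see "Experimental Procedures").

crude_gc100  = 1184.0   # units/mg, S-30 from overexpressing strain GC100
crude_cf881  = 3.9      # units/mg, S-30 from haploid strain CF881
purified_rnb = 4100.0   # units/mg, after the heparin-agarose step

print(f"fold overexpression: {crude_gc100 / crude_cf881:.0f}x")              # ~304x
print(f"purification over GC100 extract: {purified_rnb / crude_gc100:.1f}x") # ~3.5x

# Per-molecule hydrolysis rate implied by the final specific activity
mw_g_per_mol = 70_000
mol_rnb_per_mg = 1e-3 / mw_g_per_mol      # mol of Rnb in 1 mg of protein
mol_amp_per_h = purified_rnb * 1e-6       # mol of AMP released per h per mg
rate_nt_per_s = mol_amp_per_h / mol_rnb_per_mg / 3600
print(f"implied rate: ~{rate_nt_per_s:.0f} nt/s per molecule")
# ~80 nt/s, the same order as the 70 nt/s reported for homogeneous RNase II (ref. 28)
```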
If recoveries from the heparin-agarose chromatography step in Table I are extrapolated to include all the material in pool A from the hydroxylapatite column, the overall yield is 29%. This apparent overall yield is low for two reasons. First, the activity in crude extracts represents the sum of activities of a number of endo- and exonucleases and overstates the activity of RNase II. Second, fractions were pooled to maximize purity rather than yield, particularly after hydroxylapatite chromatography.
The addition of 22 amino acid residues, derived from the vector pET-11, to the C terminus of the truncated Rnb* polypeptide, resulted in the formation of insoluble inclusion bodies upon induction of cultures of GC101 with IPTG. The inclusion bodies were subsequently purified to near homogeneity by differential centrifugation in the presence of detergent. Authentic RNase II activity was recovered following solubilization, reduction, and refolding of the truncated Rnb* polypeptide from the inclusion bodies. Further purification of the renatured truncated Rnb* polypeptide from most contaminants could be achieved by ion exchange chromatography (FPLC). The truncated Rnb* polypeptide eluted from the Resource Q column over a broad range of NaCl concentrations likely reflecting the several different populations of misfolded and inactive polypeptides present in the preparation. RNase II activity eluted from the column as a sharp peak at a NaCl concentration of 220 mM. Although a significant amount of activity could be recovered from the inclusion bodies, the specific activity of this preparation was quite poor, 54 milliunits/mg, a small fraction of that obtained for the full-length Rnb polypeptide.
Properties of the Rnb Polypeptide-The purified Rnb polypeptide was active against the partial duplex t40B substrate in a manner similar to the activity originally detected in crude extracts from strain CF881 (Fig. 5a, lanes 2-6). Under conditions in which enzyme is limiting (molar ratio of substrate to enzyme 2300:1), the 3′-single-stranded tails are removed from the substrate during a 60-min incubation at 37°C to generate a degradative intermediate, which has been shortened by about 15 nucleotides. The appearance of the degradative product is linear for 30 min, after which the rate declines gradually (Fig. 5a, lanes 2-6). Thus, each enzyme molecule is turning over more than 30,000 times.
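One way to arrive at a turnover figure of this order, assuming that a turnover event is counted per phosphodiester bond hydrolyzed and that essentially all of the substrate is trimmed during the incubation, is sketched below; it is an illustrative reading of the quoted numbers, not a calculation taken from the paper.

```python
# Rough turnover estimate under limiting enzyme.
# Assumptions: every substrate molecule loses ~15 nt during the incubation,
# and each hydrolyzed phosphodiester bond counts as one turnover event.
substrate_to_enzyme = 2300        # molar ratio quoted in the text
nt_removed_per_substrate = 15
turnovers_per_enzyme = substrate_to_enzyme * nt_removed_per_substrate
print(turnovers_per_enzyme)       # 34500, i.e. "more than 30,000 times"
```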
Digestion of the t40B transcript is complete after a 60-min incubation with 2.0 milliunits of RNase II activity (Fig. 5a, lane 7). Approximately 20% of the substrate is resistant to degradation by the Rnb polypeptide even after addition of 200 milliunits of fresh enzyme (data not shown). A fraction of the substrate appears to form concatemers and as a result does not have free 3Ј-ends accessible to the enzyme. Interestingly, digestion of t40B for 60 min at 37°C with 4 units of enzyme resulted in a further shortened (73 nt) but stable degradation intermediate depicted by the arrowhead (Fig. 5a, lane 8). This experiment suggests that at high concentrations, the Rnb polypeptide can remove three to four additional unpaired residues in the t40B duplex remaining from a previous round of digestion.
Several previously published reports have suggested that RNase II, of varying degrees of purity, is readily inactivated by heat (10-13, 28). We have tested the purified Rnb polypeptide and found that the recombinant enzyme is also susceptible to thermal inactivation (Fig. 5b). A comparison of Fig. 5b, lanes 2 and 3, shows that less than 1% of the activity remains after a 5-min incubation of the Rnb polypeptide in the absence of substrate at 37°C. Interestingly, the enzyme is stabilized in the presence of substrate and can remain active up to 60 min at 37°C (Fig. 5a). Activity can also be stabilized by the addition of substrate to Rnb polypeptide, which has been partially inactivated by a brief incubation in buffer at 37°C. Once activity has been lost to thermal inactivation, however, it cannot be regained upon addition of substrate (data not shown). The 77-nt product also stabilized the enzyme against heating. Rnb polypeptide was incubated in the presence of 1.5 pmol of partially digested t40B for 5 min at 37°C prior to incubation with full-length t40B transcript. The 77-nt product not only protected the Rnb polypeptide against thermal inactivation but also appeared to stimulate the activity of the enzyme for the full-length substrate by approximately 2-fold (data not shown). The apparent stimulation may be attributable to a decreased rate of thermal inactivation. Taken together, the data demonstrate that the enzyme can be stabilized by both substrate and product. In contrast, both a single-stranded DNA oligonucleotide (33-mer) and double-stranded plasmid DNA inhibited the activity of RNase II but were unable to provide significant protection from heating (data not shown), unlike oligonucleotides of deoxy(C)27, which can reduce the rate of thermal inactivation (28).
Stabilization of RNase II by the digested t40B transcript implies that in the absence of any free 3′-single-stranded ends, the Rnb polypeptide can bind RNA even if it is not a substrate. To test this hypothesis, t40B was incubated briefly with a large excess of Rnb polypeptide, sufficient to digest it to 73 nt, and then subjected to UV photocross-linking. Fig. 6, lane 1, shows labeling of a band of 70 kDa, the size expected for the Rnb polypeptide. In addition, there is label associated with a band of 14 kDa, which we believe to be RNase A. The Rnb polypeptide is, therefore, able to bind its product (Fig. 6, lane 1) in the absence of any other proteins or cofactors. A 70-kDa protein, corresponding to the molecular mass of RNase II, was also labeled in crude extracts prepared from strain CF881 (Fig. 6, lane 3). All bands were sensitive to proteinase K treatment (Fig. 6, lanes 2 and 4). A comparison of Fig. 6, lanes 1 and 3, also demonstrates that UV cross-linking can provide an important assessment of the purity of the enzyme preparation in light of the affinity chromatography techniques utilized in the purification. Since there are a large number of RNA binding proteins in crude extracts prepared from E. coli that have a significant affinity for the t40B transcript, the presence of even a small percentage of these contaminants would be readily detected in the purified material (Fig. 6, compare lane 3 to lane 1).
We have also tested whether the 77-nt product would inhibit the activity of the Rnb polypeptide in subsequent rounds of digestion. In the first experiment, the Rnb polypeptide (3.3 milliunits) was incubated with 25 pmol of unlabeled t40B (2.5-fold molar excess over labeled t40B) for 2.5 min at 37°C prior to addition of labeled t40B. The kinetics of digestion of labeled t40B over a 60-min time course were identical to those in an incubation in which the same amount of enzyme was incubated directly with labeled t40B (data not shown). In the second experiment, the t40B transcript, which had been previously digested with Rnb polypeptide, extracted with phenol/chloroform, and ethanol precipitated, was used in a competition experiment. Equimolar amounts of digested t40B did not alter the kinetics of disappearance of the 92-nt substrate and thus were unable to compete effectively for the Rnb polypeptide (Fig. 7). Although the 77-nt product can protect the enzyme from thermal inactivation, it cannot inhibit its activity.
Recent observations have suggested that RNase II can protect "upstream" RNA sequences from PNPase attack through the formation of a stable RNA-RNase II complex (18,19). We have further investigated this hypothesis by incubating the 77-nt product, produced by the action of the Rnb polypeptide (see above), with crude extracts prepared from strain 18-11 in the presence of 10 mM sodium phosphate. The data demonstrate that the 77-nt product is resistant to digestion by a PNPase-like activity (Fig. 2b, lanes 1-4). As discussed above, the 92-nt substrate is rapidly shortened to approximately 77 nt in the presence of phosphate over a 30-min time course of digestion (Fig. 2a, lanes 5-8).
DISCUSSION
The Mechanism of Action of RNase II on a Novel Substrate-We envisage that the action of RNase II on t40B can be described by the following sequential steps: 1) binding to a free 3′-end on the 92-nt substrate, 2) processive hydrolysis of 15 phosphodiester bonds, 3) stalling of the enzyme approximately 9 unpaired nucleotides from the 10-bp G-C-rich stem, 4) dissociation of the enzyme from the substrate, and 5) thermal inactivation of a fraction of the dissociated enzyme. The duration of each such cycle at steady state can be calculated from the apparent turnover number, which we estimate as 9 nt·s⁻¹ based on a rate of 0.16 pmol of product formed per min at 4.3 fmol of enzyme. This yields a cycle time of 1.67 s, the time to remove 15 nucleotides from each 3′-end (15 nt/9 nt·s⁻¹). The time actually required for hydrolysis of 15 phosphodiester bonds (step 2 in the cycle) is only 0.21 s, however, as the reported turnover number for RNase II acting on poly(A) is 70 nt·s⁻¹ (28). If we assume that this turnover number also applies to the 15 residues removed from t40B and that no enzyme is lost to thermal inactivation (step 5), then steps 1, 3, and 4 account for 1.46 s (1.67 − 0.21 s) of each cycle. As a consequence, RNase II cannot remain bound to a substrate once processive hydrolysis has ceased any longer than 1.46 s. The latter represents a maximum value for step 3 in the proposed cycle, as binding (step 1), dissociation (step 4), and thermal inactivation (step 5) are not negligible.
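The cycle-time arithmetic above can be reproduced directly from the quoted values; the sketch below simply re-derives the numbers (the variable names are ours, and the rate is rounded to 9 nt/s before the cycle time is computed, as in the text).

```python
# Re-derivation of the steady-state cycle arithmetic quoted in the text.
product_rate_pmol_per_min = 0.16   # pmol of 77-nt product formed per min
enzyme_fmol = 4.3                  # amount of Rnb in the reaction
nt_removed_per_cycle = 15          # nucleotides trimmed from each 3'-end
poly_a_rate = 70.0                 # nt/s, reported turnover on poly(A) (ref. 28)

# Apparent per-molecule hydrolysis rate (nt/s)
nt_per_min_per_enzyme = (product_rate_pmol_per_min * 1e-12 * nt_removed_per_cycle
                         / (enzyme_fmol * 1e-15))
apparent_rate = nt_per_min_per_enzyme / 60           # ~9.3 nt/s, quoted as ~9

cycle_time = nt_removed_per_cycle / 9.0              # 15 / 9 = 1.67 s per cycle
hydrolysis_time = nt_removed_per_cycle / poly_a_rate # 15 / 70 = 0.21 s of chemistry
other_steps = round(cycle_time, 2) - round(hydrolysis_time, 2)  # 1.67 - 0.21 = 1.46 s

print(f"apparent rate {apparent_rate:.1f} nt/s, cycle {cycle_time:.2f} s, "
      f"hydrolysis {hydrolysis_time:.2f} s, "
      f"binding/stalling/dissociation <= {other_steps:.2f} s")
```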
The linear kinetics observed for the reaction demonstrate that RNase II stalls at regions of secondary structure, however briefly, but can disengage from the "stalled" substrate and reassociate with a new free 3′-end. This is substantiated by the demonstration that the Rnb polypeptide can cycle from an unlabeled to a labeled substrate. Our finding of dissociation from a substrate with 9 unpaired protruding nucleotides at the 3′-end of the 77-nt product is in good agreement with the 10-15-nt digestion limit product obtained for RNase II acting on homopolymers (28). Interestingly, RNase II can participate in the processing of some tRNAs in vitro by degrading long trailing sequences but must be able to dissociate from the precursor to allow final maturation of the tRNA by other processing exonucleases (29).
FIG. 6. UV cross-linking of t40B to purified recombinant Rnb polypeptide and proteins in the S-150 fraction prepared from strain CF881. Labeled t40B was incubated with purified recombinant Rnb polypeptide (2.1 units, 50 µg/ml) or with 10 µg of an S-150 fraction prepared from strain CF881 (33 milliunits) at a concentration of 1 mg/ml, irradiated with UV, digested with ribonucleases, and then separated by SDS-PAGE as described under "Experimental Procedures." A duplicate sample was treated with proteinase K (PROT K) prior to electrophoresis (lanes 2 and 4). Lanes 1 and 2, purified recombinant Rnb polypeptide; lanes 3 and 4, crude extract prepared from strain CF881.
FIG. 7. Competition between partially digested t40B and complete t40B. Rnb polypeptide (2.0 milliunits, 8.3 ng/ml) was incubated with intact t40B in the presence or absence of 10 pmol of t40B that had been previously digested to yield a 77-nucleotide product with the purified Rnb polypeptide. The products were resolved by gel electrophoresis, and the relative amounts of t40B were quantified with a PhosphorImager, expressed as picomoles of RNA remaining, and plotted as a function of time.
Two lines of evidence suggest that the Rnb polypeptide can also reassociate with the 77-nt product of digestion. First, at high concentrations of enzyme the Rnb polypeptide can remove three to four additional unpaired residues remaining from a previous round of digestion. Second, the Rnb polypeptide can bind its 73-77-nt product as evidenced by UV cross-linking and protection from thermal inactivation. Although partially digested t40B can bind to the Rnb polypeptide, it does not compete with the full-length substrate, indicating that the preferred substrate for RNase II has an extended free 3′-end. Moreover, the lack of competition by product implies that product binds to a site distinct from that of the substrate.
A Model for the Control of mRNA Degradation at the 3′-End-As discussed in the Introduction, 3′-stem-loop structures have been shown to protect upstream RNA sequences from digestion by 3′-exonucleases (15)(16)(17). The observed protection of upstream sequences was originally attributed to the impeding of the processive activities of RNase II or PNPase by RNA structure. Our results, however, demonstrate that the Rnb polypeptide loses its apparent processivity nine residues 3′ to a region of strong RNA secondary structure, where it leaves the substrate rapidly and reassociates with a new free 3′-end. The data imply that the recently observed stabilization of the Tn10/IS10 antisense RNA-OUT (18) and the stabilization of rpsO mRNA (19) by RNase II are probably due to the removal of the 3′-overhang rather than to the formation of a stable RNA-RNase II complex, which blocks access of PNPase to the 3′-end of mRNAs. However, it should be noted that dissociation and/or binding events could be retarded in vivo if free 3′-ends are limiting or if stem-loop binding proteins stabilize an RNase II-product complex (16,30,31). It has been suggested that mRNAs with an immediate 3′-stem-loop structure, analogous to the 77-nt product, are poor substrates for PNPase (32). Our data demonstrate that a PNPase-like activity in crude extracts can degrade the t40B substrate in a phosphate-dependent manner while the 77-nt product, produced by the action of the purified Rnb polypeptide, is not an efficient substrate. Conceivably, extension of the 3′-end by poly(A) polymerase (PcnB) (32)(33)(34)(35) could provide a necessary single-stranded platform for PNPase to overcome the apparent indirect inhibition by RNase II.
These observations suggest a possible model for the control of mRNA degradation at the 3′-end. As RNase II encounters a region of secondary structure, it stalls. If the structure is unstable, the enzyme may advance through the stem-loop in the 3′ to 5′ direction. However, if the structure is a stable REP sequence or a Rho-independent terminator, RNase II will dissociate from the transcript before the duplex opens. We propose that loss of the single-stranded 3′-overhang, which reduces the affinity of RNase II for the stalled transcript, may also reduce the ability of the much larger PNPase to bind and degrade transcripts. It could also reduce the affinity of putative RNA helicases for such structures. Addition of a new 3′-end by PcnB followed by the action of PNPase, known to be less susceptible to RNA secondary structure than RNase II (15)(16)(17)(18)(19), would be required for the degradation of strong REP and terminator sequences. Thus, a competition between removal of a 3′-overhanging sequence by RNase II and extension-degradation by PcnB and PNPase, respectively, would develop at the 3′-end of extended RNA secondary structures and may account for the heterogeneity in the 3′-ends of oligoadenylated RNA I (33).
Utility of t40B as a Substrate for RNase II Activity-This partially duplexed RNA is an effective substrate for investigating the properties of RNase II and offers at least four significant advantages over assays previously utilized for detecting RNase II activity. First, the t40B transcript resembles natural mRNA substrates more closely than the homopolymeric substrates utilized in traditional assays as it contains both 3′-unpaired extensions of essentially random composition and a stable duplex mimicking stem-loop structures found in natural mRNAs. Second, the stalling of the enzyme at the duplexed region reflects the known behavior of RNase II on RNAs containing regions of extensive secondary structure (15)(16)(17)(18). Third, the formation of a stable degradative intermediate provides an internal control that distinguishes RNase II activity from single and double strand-specific endonucleases. Finally, the high specific activity of the synthetic transcript increases the sensitivity of the assay and allows for the detection of activity at low substrate concentrations (10⁻¹⁰-10⁻¹¹) closer to the physiological range.
"Biology",
"Chemistry"
] |
Mycelium-Based Composite Materials: Study of Acceptance
Mycelium-based composites (MBCs) are alternative biopolymers for designing sustainable furniture and other interior elements. These innovative biocomposites have many ecological advantages but present a new challenge in aesthetics and human product acceptance. Grown products, made using living mycelium and lignocellulosic substrates, are porous, have irregular surfaces and have irregular coloring. The natural origin of these types of materials and the fear of fungus can be a challenge. This research investigated the level of human acceptance of the new material. Respondents were students of architecture who can be considered as people involved in interior design and competent in the design field. Research has been performed on the authors' prototype products made from MBCs. Three complementary consumer tests were performed. The obtained results measured the human reactions and demonstrated to what extent products made of MBCs were "likeable" and their nonobvious aesthetics were acceptable to the public. The results showed that MBC materials generally had a positive or not-negative assessment. The responses after the pairwise comparison of the MBC with wall cladding samples pointed out the advantage of the ceramic reference material over the MBC based on an overall assessment. The respondents also believed that the chamotte clay cladding would be easier to fit into the aesthetics of a modern interior and would be in better accordance with its style. Although the MBC was less visually appealing, the respondents nevertheless found it more interesting, original, and environmentally friendly. The experiments suggested that the respondents had double standards regarding MBCs. MBCs were generally accepted as ecological, but not in their own homes. All of these results support current and future applications of MBCs for manufacturing items where enhanced aesthetics are required.
Introduction
Chitin has a number of desirable properties, including being biodegradable and biocompatible, which makes it an attractive alternative to synthetic polymers. In its raw form, chitin is brittle and difficult to process. However, chitin can be processed into different forms, including chitosan [1] and chitin nanofibrils [2]. Chitin nanofibrils of fungi can be used as a reinforcement in biocomposites for furniture and building materials [3]. The priority date of the first patent on mycelium-based composites (MBCs) dates back to 2007 [4], while the scientific publications started in 2012 [5]. Since 2013, more than 30 review articles have been published describing various aspects of this type of material, including production [6], applications and properties [7], electronic applications [8], architecture applications [9], patents related to MBCs [10], furniture and art applications [11], sustainable development [12], and proper selection of material-generating species of fungi [13].
Following the idea of the material-generating use of fungi, the authors have begun researching MBCs for interior design use. The analysis indicates that MBCs could become an alternative material for sustainable furniture and other interior design elements, despite some known engineering flaws, such as the low ability to transfer tensile forces and high hygroscopicity, resulting in low outdoor durability [11,13]. The use of fungi in the production of MBCs usually raises concerns about the health impact, but when compared to MDF, in which formaldehyde or other chemicals are used, MBCs seem to be a safer option [14].
In addition to engineering limitations, using MBCs poses aesthetic challenges, such as non-uniform surface color, which makes it difficult to achieve a consistent appearance in furniture. The texture of mycelium can also vary, making it difficult to control the final appearance of furniture [11]. A diverse array of appearances is available with bio-based materials, ranging from traditional and rustic options to more contemporary and modern designs [15]. Considering the potential bias against fungi and the specific characteristics of MBCs, the question about this material's acceptance level among designers and future customers is fully justified. Unfortunately, this issue is not fully addressed in the scientific literature. The key here is the concept of "likeability", i.e., the answer to whether the consumer will like the material. The "likeability" feature of the material is associated with sensory marketing issues [16]. Even materials with good physical and economic properties may not enter wide industrial applications if users do not accept them [17].
In the case of implementing MBC-class materials, the risk of non-acceptance of the product is exceptionally high. This material is "grown" and, therefore, difficult to manufacture in a controlled way: its coloring and surface texture are not regular and homogeneous. Items made of MBCs have a unique aesthetic. Another challenge in implementing MBC materials is their biological origin; the substrate is biological, and the mycelium that holds it together is also biological. The fungus may be of particular concern, despite the use of safe, nonmycotoxic fungi species and their thermal deactivation at the final stage of manufacturing an item from MBCs [13]. These factors narrow down the application field, especially when new materials are introduced for new uses. The purpose of the present research is to answer the following questions: Are people ready to accept MBCs for direct, everyday use? Are they ready to accept MBCs in furnishings or other interior design elements? Thus, an experimental study of the acceptance level of mycelium-based composites among designers and future everyday users and the "likeability" of these innovative materials becomes crucial.
Organization of the Research
Each engineering material has a specific set of properties that affect the experience of the person who comes into direct contact with it. Although the human sensory experience uses all senses simultaneously, visual perception takes precedence. The visual perception of a new material therefore usually dominates the other senses, and the material is assessed and classified based on its appearance. Nevertheless, in research, the sensitivity of the other senses cannot be overlooked [18]. Although sight provides first impressions, the other senses detail the overall experience and are used in long-term contact with the material. The combined action of several senses gives information complete enough to evaluate the material reliably. Therefore, the initial examination of the material was to determine organoleptic comfort, considering sight, smell, and touch.
Considering the argument presented, three consumer surveys were made to obtain the broadest possible range of information on the studied material. The results of the consumer surveys were correlated with each other to produce generalizations and conclusions. The order of performing the studies and presenting the results was related to the complexity of the subject matter, from the fundamental issues of sensory perception to personal and professional decisions when choosing between two products.
• Test A: assessing the organoleptic comfort (sight, touch, and smell) of MBCs with a three-degree scale;
• Test B: assessing the MBC product acceptance with a nine-degree scale, determining personal decision (methodology based on [19]);
• Test C: comparing the MBC wall cladding panels with reference panels made of chamotte clay (pairwise comparisons) (methodology based on [20]).
Production of Samples
It is worth noting that there is no single method of MBC production (the 2022 review includes an extensive comparative analysis of the applied production conditions based on 92 research articles [13]). Figure 1 presents the samples used for consumer testing in studies A and B. They were made of MBCs and had a hemispherical (dome) shape with a diameter of 30 cm, which allowed the surface to be observed from different angles. The chosen shape highlighted well the texture of the material and changes in chiaroscuro through varying shell gradients and defined edges.
The first stage in producing all MBC samples was to prepare gypsum molds that would allow multiple pieces of the same shape to be obtained. Afterwards, the molds were thoroughly cleaned and isolated with a polyethylene film. The next step was adding a substrate into the molds and fungus inoculum. Once the molds were filled with the mycelium-infected substrate, they were sealed with another sheet of a polyethylene film. This film was punctured to allow for airflow to the growing fungus. The humidity levels and mycelium growth were closely monitored daily.
On the fourth day of growth, the top film was removed to allow for primary drying, and the fungus maturation continued. On the fifth day, the mycelium grew to fill the mold, and drying was conducted to inactivate the fungus. The molds filled with the substrate to produce MBC samples used in test C are shown in Figure 2. As mentioned, in test C, fired unglazed chamotte clay cladding panels were used as reference samples. These reference samples before firing are shown in Figure 3. Test C evaluated a specific modular product in a contemporary interior design style. These modules can be put together in any way desired (students designed the studied cladding under the supervision of Klaudia Grygorowicz-Kosakowska, as part of the sculpture class in the second year of architecture studies). The clay cladding panels can be used as a bed headboard, a fireplace backdrop, or decorative panels. Chamotte clay products are characterized by their distinct and appealing aesthetics, featuring a natural and heavily grained texture, as well as a soft, beige color. Thanks to its bright and porous surface, chamotte clay did not create contrast with the MBC during the examination. This made it a good material for use alongside MBCs in a similar stylistic context.
Respondents
The survey involved 80 respondents, including 52 females and 28 males, aged 19-24 years, who are students of the Poznań University of Technology Faculty of Architecture. This is the group of people who will enter the job market as architects and interior designers in the coming years and shape design trends, impacting the product market. This group's opinions are considered vital in developing designs that will soon be implemented and enter the market. One of the characteristic features of Generation Z, the Post-Millennials to which the respondents belong, is their sensitivity to sustainability and environmental issues. We respected anonymity while collecting, analyzing and reporting survey data. No personal data were collected, so the data about the compared engineering materials were not connected with personal information. Respondents could not influence each other's responses. All respondents agreed to participate in the study.
Tests Environment
The tests were carried out in a room with three individual sample presentation stands, allowing independent evaluation, free from the influence or suggestion of others. Each respondent could access only one stand at a time. The room was thoroughly ventilated before the test. It had a temperature of 22 ± 2 °C and a relative humidity of 60% ± 5%. The samples were assessed at a color temperature of 5000 K to 10,000 K against a neutral, uniform background identical for all the elements presented. The stand for sample evaluation is shown in Figure 5. Figures 6 and 7 show the layout of the wall panels assessed by the respondents.
Test A: Consumer Test with a Three-Degree Scale
The first test of the three consumer tests (test A) was conducted as an organoleptic assessment, using three senses simultaneously. The assessment involved the properties of the test material perceived in the following manner: visual, in terms of color (pleasant, neutral, and ugly); olfactory (pleasant, neutral, and unpleasant); and haptic (pleasant, neutral, unpleasant / hard, difficult to define / warm, neutral, and cold). The respondents were presented with a hemispherical sample A, which they could touch, see and smell. Five questions were asked about the reception of the material:
1. Do you perceive the material's color as pleasant, neutral, or ugly?
2. Do you perceive the material as warm, neutral, or cold?
3. Do you perceive the material surface as hard, difficult to define, or soft?
4. Do you find your tactile sensation of it pleasant, neutral, or unpleasant?
5. Do you find your olfactory sensation of it pleasant, neutral, or unpleasant?
The purpose of the questions thus formulated was to determine the level of acceptance at a fundamental, physiological level.
Test B: Consumer Tests with a Nine-Degree Scale of Material Acceptance
The second test (test B) assessed product acceptance and desirability using a nine-point hedonic scale, typically used in consumer research to measure consumer response to products [19]. In this test, the respondents were again presented with the same hemispherical sample and with wall panels made of MBCs, which the respondents could touch, see and smell. The respondents were asked the following questions:
1. Would you accept the material in interior design elements in your own home?
2. Would you accept the material in interior design elements in a home that you design with an ecological aesthetic?
The first question concerned a personal opinion on the material, with positive answers indicating a significant positive reception. The second question, on the other hand, concerned the respondent's general opinion regarding the use of the material in interiors.
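On a nine-point hedonic scale of this kind, each respondent contributes one rank per question, and a common way to summarize the resulting preference profiles is to tabulate the rank distribution and report its mode, median, and the share of answers on the "yes" side. The sketch below illustrates that bookkeeping only; the counts are entirely hypothetical and are not the survey data, which are reported graphically in Figure 9.

```python
# Illustrative summary of nine-point hedonic responses (1 = "definitely yes",
# 9 = "definitely no"). The counts below are hypothetical, for illustration only.
import statistics

# hypothetical number of respondents choosing each rank 1..9 for one question
counts = {1: 10, 2: 14, 3: 18, 4: 12, 5: 11, 6: 7, 7: 4, 8: 2, 9: 2}

ranks = [rank for rank, n in counts.items() for _ in range(n)]
print("n =", len(ranks))
print("mode =", statistics.mode(ranks))
print("median =", statistics.median(ranks))
print("share answering 1-3 ('yes' side):",
      round(sum(counts[r] for r in (1, 2, 3)) / len(ranks), 2))
```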
Test C: Consumer Tests with the Method of Pairwise Artifact Comparison of Wall Cladding Samples
The third test used the differential, pairwise comparison method to compare two products: a piece of wall cladding made of chamotte clay, fired and unglazed, and one made of a mycelium-based composite (MBC). The assessment was carried out to test the potential competitiveness of the solution on the market and to determine whether the new material would gain consumer acceptance and whether it was "likable" compared to other solutions. The aim was to determine the hedonic quality resulting from evaluations of sensory experience in terms of subjective emotions. The respondents were presented with cladding made from the two materials for comparison in pairs (the MBC and chamotte clay), and the following questions were asked:
1. Which cladding version gives the impression of being eco-friendly?
2. Which cladding version is more original?
3. Which cladding version is more visually appealing?
4. Which version of the cladding is easier to fit into the aesthetics of an ecologically styled interior?
5. Which cladding version is easier to fit into the aesthetics of a modern interior?
6. Which version of wall cladding is more attractive?
7. Which version of wall cladding do you prefer?
The questions, in this case, concerned both the selection of one of the two solutions in terms of the degree of originality, visual appeal, interest, preference, and their potential use, i.e., use for the interior design of a particular style.
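For a pairwise-comparison design such as test C, each question reduces to a count of respondents choosing one material over the other out of n = 80, and the simplest check of whether a preference departs from a 50/50 split is an exact binomial (sign) test. The sketch below illustrates that tallying on hypothetical vote counts; it is not the statistical treatment used by the authors, which is not specified here.

```python
# Pairwise-comparison tallies: for each question, how many of the 80
# respondents chose the MBC panel over the chamotte clay panel.
# The counts are hypothetical and serve only to illustrate the method.
from math import comb

def binom_sign_test(k, n, p=0.5):
    """Two-sided exact binomial test of k successes in n trials."""
    observed = comb(n, k) * p**k * (1 - p)**(n - k)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if comb(n, i) * p**i * (1 - p)**(n - i) <= observed)

votes_for_mbc = {          # hypothetical counts out of n = 80 respondents
    "eco-friendly impression": 62,
    "more original": 58,
    "more visually appealing": 30,
}

for question, k in votes_for_mbc.items():
    p_value = binom_sign_test(k, 80)
    print(f"{question}: {k}/80 for MBC, two-sided p = {p_value:.3g}")
```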
Results and Interpretation of Test A: Consumer Test with a Three-Degree Scale
The results of test A pointed out the generally positive assessment of the MBC material, as the respondents perceived it as neutral or pleasant in all the fields evaluated. The mycelium-based composite's (MBC) visual quality was not disturbing: 62 out of 80 respondents perceived the color as neutral, 11 perceived it as pleasant, and seven perceived it as unpleasant. When touched, most respondents thought the material was neutral (39 people) or "warm" to the touch. Thirty-six people thought that the material felt soft; the opposite opinion was shared by only 26 people, whereas for 14 respondents, it was difficult to determine its texture. The overall sensation was that the material felt pleasant or neutral when touched. Olfactory sensations were also positive: negatively judged by three people only, against 73 respondents who thought the smell of the MBC was pleasant or neutral (Figure 8). It needs to be stressed that the test was performed in a well-ventilated room, which can influence the results for olfactory sensations. The results of test A yielded correlations between the answers to questions 2, 4, and 3. Out of a group of 32 respondents who rated the tactile sensation of the material as pleasant, 26 respondents described it as warm, while 6 respondents described it as neutral. This suggests a correlation between factors 4 and 2 in terms of the tactile properties of the material. No one in this group described the material as cold. Therefore, the material's advantage is the sensation of "warmth" to the touch. This is vital when it comes to the expectations of the material.
However, observing the correlation between questions 4 and 3: of the 32 respondents who rated tactile sensations as positive, 12 declared that they simultaneously perceived the material surface as soft, 12 perceived it as neutral, and 6 perceived it as hard. Therefore, an asset of the material is its "softness". The correlations between responses to questions 4 and 2 and between questions 4 and 3 indicated that the respondents valued "warm" and "soft" materials.
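The correlations discussed above amount to cross-tabulating the answer to the tactile-pleasantness question (question 4) against the warmth (question 2) and softness (question 3) answers within the subgroup of 32 respondents. The sketch below lays out that cross-tabulation using only the counts quoted in the text; the full 3 × 3 tables would require the raw survey data.

```python
# Cross-tabulation of the 32 respondents who rated the tactile sensation
# as pleasant (question 4), using only the counts quoted in the text.
pleasant_n = 32

warmth   = {"warm": 26, "neutral": 6, "cold": 0}      # vs. question 2
softness = {"soft": 12, "neutral": 12, "hard": 6}     # vs. question 3 (as quoted)

for name, table in (("warmth", warmth), ("softness", softness)):
    summary = ", ".join(f"{k}: {v}/{pleasant_n} ({100 * v / pleasant_n:.0f}%)"
                        for k, v in table.items())
    print(f"{name}: {summary}")
    if sum(table.values()) != pleasant_n:
        # the quoted softness counts sum to 30, not 32
        print(f"  note: quoted counts sum to {sum(table.values())}")
```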
Other studies confirm that surfaces that users often like can be described as soft and warm, as well as smooth and warm, indicating the feeling of warmth as a decisive positive characteristic [21]. The described distribution of responses also reminds us of the importance of tactile sensations and their often-underestimated role in the design of functional objects and interiors. Visual perception dominates the evaluation of functional objects and designs, although sight does not determine the complete experience of the material. Research focusing on touch and smell demonstrates that when perceiving with one sense, the user mentally constructs an "image" relevant to the other senses. The drive for a multi-modal, multisensory experience prevails [22].
Results and Interpretation of Test B: Consumer Tests with a Nine-Degree Scale of Material Acceptance
Question 1, worded in this way, required expressing one's personal opinion about the material, with positive answers indicating a significant positive reception. The second question, on the other hand, referred to the respondent's general opinion regarding using the material. The results of test B demonstrated that the respondents' acceptance of the MBC was high, especially for homes designed by the respondents in an ecological aesthetic. The result of each respondent was a preference profile from rank 1 = "definitely yes" to rank 9 = "definitely no". As shown in Figure 9, the respondents significantly associated the MBC with ecological solutions. However, it is interesting that the respondents did not necessarily associate their homes with environmentalism. Staying eco-friendly is important, but it seems not at one's own home.
The results shown in Figure 9 suggested that the material presented to the respondents was perceived as environmentally friendly due to its natural characteristic, which was perceivable upon contact with the MBC. Although the respondents did not know the production process, they guessed right that they were dealing with an ecological material.
The results of research in related areas [23] confirm the above observations. They show that most young people believe that sustainability is the right course of action but that their positive responses are not noticeably correlated with the degree of familiarity with sustainability. Students strongly associate sustainability concepts with their environmental aspects rather than their economic and social aspects. Regarding their participation in "sustainable" lifestyles, they most often mention "slightly green" activities relating to consumer responsibility, such as changing shopping habits, recycling, and saving energy or water. Young people are not optimistic about the future of society in the face of environmental threats.
Results and Interpretation of Test C: Consumer Tests Following the Method of Pairwise Artifact Comparison of Wall Cladding Samples
This study expressed preferences through pairwise comparisons, leading to a ranking of the cladding types by answering seven questions. General responses when following the method of pairwise artifact comparison of wall cladding samples revealed the advantage of the ceramic material, mainly regarding the overall assessment (question 1 in test C) and visual appeal (question 5 in test C). The respondents also believed that the chamotte clay cladding would be easier to fit into the aesthetics of a modern interior and would better match its style (question 3 in test C). The results of the research are presented in Figure 10. At the same time, mycelium-based composites (MBCs) were clearly perceived as a more attractive and original solution (questions 2 and 6 in test C). The MBC also gives the impression of being eco-friendly and would definitely fit and enhance an ecologically styled interior (questions 4 and 7 in test C).
An additional discussion is required to juxtapose whether the material is interesting and original with the answers concerning aesthetics, i.e., its visual appeal. Although the MBC was found to be less visually appealing, the respondents found it more interesting, original, and environmentally friendly. Therefore, the pattern of responses in test C indicated a potential paradigm shift in aesthetics to this material.
Interior design and interior decoration mirror a trend known from the clothing industry, i.e., fast fashion. Retail companies specializing in interior design and decoration vigorously promote the vision of changing collections often, once or even several times during one season [24]. Fast fashion's rapid pace has drawn criticism for its detrimental impact on the environment and its role in fueling overconsumption and waste [25]. Consumers are accustomed to constant purchasing, and even awareness of the problem and negative publicity around fast fashion practices do not always prevail [26]. In this situation, the pro-environmental strategy of using environmentally friendly materials could be implemented along with a policy of reducing consumption, which would be highly reasonable.
Figure 10. Results of test C: consumer tests following the method of pairwise artifact comparison of wall cladding samples. The seven questions were: (1) Which version of wall cladding do you prefer? (2) Which version of wall cladding is more attractive? (3) Which cladding version is easier to fit into the aesthetics of a modern interior? (4) Which version of the cladding is easier to fit into the aesthetics of an ecologically styled interior? (5) Which cladding version is more visually appealing? (6) Which cladding version is more original? (7) Which cladding version gives the impression of being eco-friendly?
Several main paradigms or philosophies influence the aesthetics of space in interior design. Some of the most prominent are as follows:
• Minimalism: This philosophy emphasizes simplicity and functionality, focusing on clean lines, neutral colors, and a lack of clutter. A lack of ornamentation and a focus on form and function characterize minimalist interiors.
• Modernism: Modernist interiors are characterized by a focus on functionality, technology, and the use of new materials and construction methods. They often feature simple, clean lines and a neutral color palette.
• Scandinavian: Scandinavian interior design is known for its simplicity, functionality, and focus on natural materials. This style often features light colors, natural wood, and an emphasis on creating a cozy and comfortable living space.
• Art Deco: Art Deco interiors are characterized by their use of bold geometric shapes, strong colors, and luxurious materials. This style often features metallic accents, such as brass or chrome, and incorporates exotic motifs inspired by ancient cultures.
• Traditional: Traditional interiors are characterized by classic forms, such as ornate moldings, classic furniture styles, and rich colors and fabrics. This style often incorporates antiques and heirloom pieces and is designed to evoke a sense of timeless elegance.
Each of these paradigms has its own distinct aesthetic, and interior designers may use elements from several different paradigms in order to create a unique and personalized design for a space. There are several trends in contemporary aesthetic discourse in the design of functional objects. The leading trend still comes from the modernist tradition of the early twentieth century and is based on the imperative to use the latest available materials in streamlined, reduced, and functional forms [27]. The 1970s saw the emergence of the high-tech trend, with its emphasis on the primacy of technology. These trends have cemented the vision of the design of the future as technologically advanced forms with perfectly smooth surfaces and precise edges [28]. This classical vision, firmly embedded in the broad public consciousness, has been reinforced by the proliferation of digital (parametric) tools in design, prototyping, and production [29].
At the same time, however, nature-inspired concepts developed in the design arts from the 1930s onwards, including biomorphism, which became the basis for the later ideas of bio-aesthetics and bio-design. The widespread dissemination of knowledge on climate change was initiated by international climate conferences and dates back to 1979 [30]. However, the resulting designs for functional objects were rarely seen and defined as "designs of the future". The same is true today: despite the attention that eco-design is receiving [31,32], the growing popularity of biomorphic forms in new marketing ideas [33,34], and the undiminished demand for sustainably produced product components [35], environmentalism is still not seen as a solution for the future. In practice, bio-aesthetic design involves using natural materials, such as wood, annual plants, and stone, as well as incorporating plants and other elements of nature into the interior space. It also involves using lighting and color in ways that support circadian rhythms and promote positive moods. Bio-aesthetics aims to create spaces that are not only aesthetically pleasing but also supportive of human health and well-being.
Within the domain of ecological design itself, the use of animate systems is undoubtedly an interesting concept. Bioengineering deals with the creation and construction of "animate" materials at the micro-scale. An example of such a material at the macro-scale is the set of objects made of MBCs presented herein. There are other, less popular yet interesting approaches to "animate" production, e.g., the solutions of the Full Grown Furniture company, which has found that excellent results can be achieved by growing trees directly in special molds. Full Grown Furniture's chairs are already design icons, but the relatively small scale of their production makes them luxury items [36].
As presented in the above study, the shift from the imperative of the perfect finish to naturalness is an important signal, suggesting a paradigm shift in aesthetics towards more sustainable solutions. Perhaps, instead of associating modernity with perfectly finished surfaces, designs with more unique surfaces, e.g., those resulting from the natural growth of the material, will be accepted. In this context, interdisciplinarity and openness to hybrid forms of creation, which greatly extend material and technological possibilities, are of course vital [37].
The gap between the answers to questions 1 and 2 in test B ( Figure 9) indicated a double standard. MBCs were accepted in general but not in one's own home. It followed that MBCs were perceived as clearly ecological, but at the same time, they raised some concerns. There could be several potential reasons for the lack of consumer acceptance for interior design products made of these mycomaterials:
• Conviction about riskiness: the natural origin of these materials and the fear of fungus can be challenging;
• Unproven ecological benefits: consumers may be unaware of MBCs and their benefits, such as their sustainable and eco-friendly nature;
• Perceived high cost: MBCs are still relatively new and are not yet widely available. As a result, consumers may perceive the cost of producing and using MBCs in interior design products as high, which could deter them from purchasing these products;
• Personal aesthetic preferences: the respondents may have specific aesthetic preferences regarding interior design, and MBCs may not fit their style or taste (as mentioned, the natural and organic look of these mycomaterials may not appeal to everyone);
• Inaccessibility: the availability of interior design products made of MBCs is currently limited, which could make it difficult for consumers to find these products in stores or online. This could lead to a lack of awareness and interest in mycomaterials among consumers;
• Unknown material properties: the respondents may have concerns about the durability and performance of MBCs compared to those of traditional materials. They may worry that mycomaterials will not hold up over time or will not perform as well as other materials in certain conditions.
Overall, it may take time for MBCs to gain wider acceptance among consumers in the interior design space. Increasing awareness and education about the benefits and properties of mycomaterials, and making these products more widely available and affordable, could help increase their popularity and adoption among consumers.
Summary and Conclusions
There are some aesthetic challenges associated with using mycelium-based engineering materials in furniture design:
• Color uniformity: mycelium is a natural material whose color can vary, making it difficult to achieve a consistent appearance in furniture;
• Surface texture: the texture of mycelium can also vary, making it difficult to control the final appearance of furniture.
Studying the material properties of manufactured products in the context of introducing new materials, and applying that knowledge to industrial design, is a challenge in product design. Due to the subjectivity and varied nature of users' needs, it is not easy to accurately assess and quantify these characteristics. In this article, the authors relied on usability testing and used traditional marketing and decision-making theory methods, i.e., pairwise comparison and one-step consumer tests on three- and nine-point scales. As a result, this approach has provided data that helped to understand and identify the requirements of future users.
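As a purely illustrative aside, one way such pairwise-comparison answers can be tallied into per-question preference shares is sketched below in R; the data frame, question labels, and choices are invented for illustration and do not reproduce the study's actual analysis.

```r
# Tally pairwise choices into the share of respondents preferring each cladding
# sample per question (invented data, for illustration only).
responses <- data.frame(
  respondent = rep(1:3, each = 2),
  question   = rep(c("Q1_overall", "Q5_visual_appeal"), times = 3),
  choice     = c("ceramic", "ceramic", "mbc", "ceramic", "ceramic", "mbc")
)

# share of respondents choosing the MBC sample for each question
aggregate(choice ~ question, data = responses, FUN = function(x) mean(x == "mbc"))
```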
1. The overall positive evaluation of the mycelium-based composite (MBC) among architecture and interior design students aged 19-24 years, i.e., Generation Z, demonstrated that continued research into the material in question could yield good commercialization results in the coming years. The main observation is that the younger generation of designers showed a high level of acceptance of the material itself and its products. According to the study participants, the MBC material can be described as "likable" (test A) and highly ecological (test B). The wall cladding made of the MBC had advantages regarding its uniqueness, its consistency with eco-styled interiors, and the fact that it is interesting (test C). Further consideration should be given to optimizing its properties and exploring new applications that are not obvious today.
2. The results of the experiments suggest double standards among the respondents. MBCs were generally accepted, but not in their own homes. It follows that MBCs were perceived as clearly ecological but, at the same time, raised some concerns. The fear of fungus is deeply ingrained in many cultures and can lead to skepticism or aversion towards products made from mycelium. Some people may be hesitant to use MBCs in their homes or in products they consume due to concerns about fungal growth and associated health risks. Additionally, MBCs are a relatively new technology, and there is still much to learn about their properties, durability, and potential applications. This can lead to uncertainty and skepticism among consumers and industry professionals alike.
3. Working with this material and other bio-materials can lead to a paradigm shift in aesthetics: the design mainstream, hitherto defined by high technology and highly sophisticated design and production methods, may soon take on a more casual, nature-like form.
Human acceptance of mycelium-based engineering materials as furniture materials is still growing. Some people are unfamiliar with mycelium-based materials and may be hesitant to use them in their homes or workplaces. However, there is a growing interest in sustainable and eco-friendly materials, and mycelium-based materials have the potential to appeal to this market. Additionally, as people become more educated about the environmental benefits and unique aesthetic qualities of mycelium-based materials, they may be more likely to accept and embrace them.
It's also worth noting that acceptance can vary by cultural and regional factors. In some regions, there may be a greater appetite for experimental and unconventional materials, while in others, there may be more traditional preferences. To sum up, human acceptance of mycelium-based engineering materials will likely continue to grow as more people become familiar with the material and its benefits, but individual preferences and cultural factors will also influence it. | 9,034.2 | 2023-03-01T00:00:00.000 | [
"Materials Science",
"Environmental Science"
] |
Combination immunotherapy in a patient with hemodialysis therapy and metachronous bilateral clear cell renal cell carcinoma: Case report and literature review
Combination immunotherapy is a treatment strategy for patients with renal cell carcinoma that has proved effective in phase III randomized clinical trials. These studies do not include patients with end-stage kidney disease on hemodialysis. We discuss the case of a patient with metachronous bilateral clear cell renal cell carcinoma, managed with bilateral nephrectomy and subsequent hemodialysis, who developed lung and intestinal progression and was managed with combination immunotherapy, achieving a partial response without treatment-related adverse effects.
Introduction
Renal cell carcinoma (RCC) comprises a heterogeneous group of cancers arising from renal tubular epithelial cells and represents almost 4% of malignant tumors in adults. One third of RCC patients who undergo local surgical resection relapse with the appearance of distant metastases. 1 Since 2005, tyrosine kinase inhibitors, mTOR inhibitors, and immune checkpoint inhibitors have played a central role in the treatment of this disease; randomized clinical trials have shown significant increases in overall and progression-free survival. 1 We herein report a case of metachronous bilateral clear cell renal cell carcinoma in a patient on hemodialysis treated with combination immunotherapy.
Case presentation
A 61-year-old male patient had a history of grade 2, pT1aNxM0 clear cell renal cell carcinoma (RCC) measuring 2 × 3 cm in the middle lobe of his right kidney, diagnosed in 2014 and treated with right radical nephrectomy at that time; he was followed in regular consultations with computed tomography (CT).
In 2018, a CT identified a nodular image in the upper lobe of the left kidney; a percutaneous biopsy was therefore performed, and the pathology was consistent with grade 1 clear cell RCC. Surveillance was not considered because of the patient's concerns; the patient therefore underwent a left radical nephrectomy, and his oncologic disease was classified as grade 2, pT1aNxM0. Since then, he required hemodialysis therapy and regular oncology assessments at another health center in the city; his functional status was preserved (ECOG 0).
In July 2020, he was found to be hypotensive before his hemodialysis session and complained of dizziness, melena, asthenia, and tiredness during the previous three weeks; he was therefore referred to our hospital's emergency department. On admission, he was tachycardic, hypotensive, and pale, and his initial tests showed severe microcytic, hypochromic anemia, requiring transfusion of two units of red blood cells and fluid support therapy. An upper endoscopy with biopsy was performed and showed a protruding, irregular, ulcerated, and friable 10 mm lesion at the posterior wall of the duodenal angle, occluding 50% of the luminal area. The pathology revealed extensively ulcerated duodenal mucosa infiltrated by intermediate malignant epithelial tumor cells with clear cytoplasm and intermediate, irregular, and hyperchromatic nuclei arranged in nests and cords. Immunohistochemical staining was positive for CKAE1/AE3, CD10, and PAX8, and negative for CK7, CK20, and RCC, which was consistent with metastatic RCC. Thorax and abdomen CT documented multiple metastatic lesions in the right liver lobe, as well as in both lungs and the mediastinum.
Melena persisted and endoscopic hemostatic measures were insufficient; therefore, embolization was performed and the bleeding ceased.
Combination immunotherapy was selected according to the bleeding risk and potential adverse effects; pembrolizumab plus axitinib was thus excluded, and nivolumab plus ipilimumab was preferred. The cycles were administered 3 hours after the end of the hemodialysis session. He completed four full-dose cycles, and single-agent nivolumab therapy was maintained. A new CT performed after 14 months (Fig. 1) showed that the lung and liver lesions had disappeared; a 16 × 17 mm duodenal nodular lesion remained, and no other metastatic lesions were found.
Discussion
The phase III randomized clinical trial CheckMate 214 compared a combination of two immune checkpoint inhibitors, the PD-1 inhibitor nivolumab plus the CTLA-4 inhibitor ipilimumab, against the tyrosine kinase inhibitor sunitinib in treatment-naïve patients with metastatic RCC. A total of 1096 patients were assigned to receive nivolumab (3 mg per kilogram of body weight) plus ipilimumab (1 mg per kilogram) intravenously every 3 weeks for four doses followed by nivolumab (3 mg per kilogram) every 2 weeks, or sunitinib (50 mg) orally once daily for 4 weeks (6-week cycle). At a median follow-up of 25.2 months in intermediate- and poor-risk patients, the 18-month overall survival rate was 75% (95% confidence interval [CI], 70 to 78) with nivolumab plus ipilimumab and 60% (95% CI, 55 to 65) with sunitinib; the median overall survival was not reached with nivolumab plus ipilimumab versus 26.0 months with sunitinib (hazard ratio for death, 0.63; P < 0.001). The objective response rate was 42% versus 27% (P < 0.001), and the complete response rate was 9% versus 1%. The median progression-free survival was 11.6 months and 8.4 months, respectively (hazard ratio for disease progression or death, 0.82; P = 0.03, not significant per the prespecified 0.009 threshold). The trial protocol excluded patients with a glomerular filtration rate under 40 ml/min/1.73 m2 according to the Cockcroft-Gault formula. 2 A clinically significant impact on the pharmacokinetics of nivolumab and ipilimumab has not been observed in patients with end-stage renal disease. 3 This subgroup of patients is underrepresented or excluded in different clinical trials; consequently, safety and efficacy data on immune checkpoint inhibitors in hemodialysis patients are insufficient. 4 Evidence on combination immunotherapy in RCC and hemodialysis is lacking; there are only case reports and case series. Kobayashi et al. 5 reported the case of a 77-year-old patient with end-stage kidney disease associated with hyperuricemia, on hemodialysis three times a week, with clear cell RCC treated with right radical nephrectomy, who developed lung metastatic disease after four years. He received nivolumab 240 mg plus ipilimumab 1 mg/kg intravenously every three weeks for four doses, followed by nivolumab 240 mg every two weeks. Follow-up CT showed stable disease after 8 months, and adverse events were minimal.
Conclusions
RCC is a relevant disease whose treatment has changed over the years, and immune checkpoint inhibitors now have a central role, as monotherapy or in combination. Robust evidence on the real efficacy and safety of combination immunotherapy in patients with RCC requiring hemodialysis is lacking and is supported only by case reports. Further clinical trials are required to answer this clinical question.
"Medicine",
"Biology"
] |
Income Inequality and Political Trust: Do Fairness Perceptions Matter?
Political trust—in terms of trust in political institutions—is an important precondition for the functioning and stability of democracy. One widely studied determinant of political trust is income inequality. While the empirical finding that societies with lower levels of income inequality have higher levels of trust is well established, the exact ways in which income inequality affects political trust remain unclear. Past research has shown that individuals oftentimes have biased perceptions of inequality. Considering potentially biased inequality perceptions, I argue that individuals compare their perceptions of inequality to their preference for inequality. If they identify a gap between what they perceive and what they prefer (= fairness gap), they consider their attitudes towards inequality unrepresented. This, in turn, reduces trust in political institutions. Using three waves of the ESS and the ISSP in a cross-country perspective, I find that (1) perceiving a larger fairness gap is associated with lower levels of political trust; (2) the fairness gap mediates the link between actual inequality and political trust; and (3) disaggregating the fairness gap measure, political trust is more strongly linked to variation in inequality perceptions than to variation in inequality preferences. This indicates that inequality perceptions are an important factor shaping trust into political institutions.
Correlationally, societies with lower levels of income inequality have higher levels of trust in political institutions (Algan et al., 2017;Gustavsson & Jordahl, 2008;Foster & Frieden, 2017;Wilkinson & Pickett, 2009). Acknowledging that empirical finding, it remains unclear how income inequality can influence individual trust in political institutions. Many studies argue that individuals evaluate inequalities and this evaluation affects individuals' trust in political institutions: If individuals evaluate inequalities as, for instance, fair, they tend to be more likely to trust political institutions (e.g. Loveless, 2013;Nannestad, 2008;Uslaner, 2002). This argument implies that individuals evaluate inequality based on what they perceive. However, past research has shown that such inequality perceptions are oftentimes imprecise and biased (Engelhardt & Wagener, 2018;Norton & Ariely, 2011;Osberg & Smeeding, 2006;Bobzien, 2020;Bublitz, 2022). To understand the specific mechanisms by which fairness evaluations of economic inequality affect political trust, it is thus important to explicitly model individual inequality perceptions.
Considering potentially biased inequality perceptions, this paper argues that individuals evaluate inequalities based upon their perceptions. If individuals identify a gap between the level of inequality they prefer and the level they perceive, which I call the fairness gap, they feel politically dissatisfied. The observation that one's own taste for (in)equality is not implemented politically, in turn, reduces political trust. Combining the European Social Survey and the International Social Survey Programme, I show empirically that this fairness gap measure is indeed negatively associated with political trust. Using mediation analysis techniques, I further show that about half of the effect of actual income inequality on political trust is mediated through the fairness gap, although the overall effect of actual inequality on political trust is already small. Past research has shown that fairness evaluations as well as inequality perceptions depend not only on actual levels of inequality but also upon one's own economic position (Cansunar, 2020). Following this argument, in a final step, I descriptively show, by disaggregating the fairness gap measure, that variation in political trust across educational levels, used as a proxy for individuals' socio-economic positions, is more strongly correlated with perceived inequality than with preferred inequality. This suggests that individuals are more polarised in their perceptions of inequality than in their preferred levels of inequality and that this polarisation in perceptions closely links to the polarisation in political trust.
The contribution of this paper is thus threefold. Firstly, while the link between inequality perceptions and political preferences such as preferences for redistribution is widely studied (Engelhardt & Wagener, 2018;Osberg & Smeeding, 2006;Bublitz, 2022), it remains unclear how inequality perceptions influence more general feelings about societies and institutions. I apply this perspective to the broader concept of political trust by arguing that individuals evaluate inequalities based upon their perceptions and that these evaluations affect individuals' trust in political institutions. Secondly, I actively operationalise fairness perceptions as individually preferred deviations from the perceived status-quo inequality. Past research either assumed that individuals have an accurate assessment of inequalities or used attitudinal items on inequality to reveal information about inequality evaluations. Thirdly, this methodological advancement enables us to study whether inequality perceptions or inequality preferences predict trust in political institutions. These results indicate that higher inequality is not necessarily linked to lower levels of trust; it is rather important to what extent individuals consider their inequality preferences to be realised. Thus, the legitimacy of and feelings about such inequalities are important. This paper is structured as follows. In Section 2 I review the existing literature on inequality and political trust and introduce the theoretical argument of the paper. Section 3 gives an overview of the method and the data used. In Section 4, I study the effect of the fairness gap on political trust. To study the importance of the fairness gap as a mediator between actual inequality and political trust, I conduct a mediation analysis. Finally, I ask: what can we learn about the relevance of fairness perceptions beyond the question of whether fairness perceptions mediate the relationship between income inequality and political trust? Section 5 concludes.
2 Theoretical Background: How Does Inequality Affect Political Trust?
Following Citrin and Stoker (2018) and Levi and Stoker (2000), I understand political trust as a relational, domain-specific concept: 'A trusts B to do X' (Citrin & Stoker, 2018, p. 50). It is relational insofar as it focuses on the relationship between A and B. It is domain-specific insofar as it focuses on X. Here, I am interested in how individuals (A) trust political institutions of the country they live in (B) to act according to their inequality preferences (X). I follow the literature in assuming that trust in political institutions can be seen as a general proxy for support for the political system (Easton, 1965;Goubin & Hooghe, 2020;Hooghe, 2011). It is a subjective measure in the sense that it refers to individuals' feelings about the political system rather than actual actions such as voting, and it is a vertical measure in the sense that it conceptualises the relationship between individuals and the state rather than the relationship between individuals and groups within a society (Chan et al., 2006). Individuals may trust different political institutions differently. They may, for instance, differentiate between institutions at the regional, national, or European level (e.g. Talving & Vasilopoulou, 2021;Lipps and Schraff, 2020;Biten et al., 2022). I focus on trust in national institutions, arguing that individuals hold their own national institutions accountable, as this is the most direct link between income inequality and political trust. Further, the existing literature differentiates between political and social (generalized) trust: In contrast to political trust, which focuses on trust towards political institutions or actors, social trust is defined as trust in fellow individuals. Generalized and political trust are, while conceptually distinct, empirically highly correlated (Uslaner, 2018;Newton et al., 2018). This suggests that different trust-building mechanisms are closely related. I therefore review the literature on political as well as on social trust. I first summarize the literature studying the link between actual income inequality and political and social trust. I then review the literature focusing on perceptions of and beliefs about inequality and derive the theoretical argument of this paper by combining insights from these strands of literature.
Actual Inequality and Political Trust
While inequality is empirically negatively associated with political trust, there is a variety of potential theoretical mechanisms for how inequality affects individuals' trust levels. In the following, I summarize the mechanisms proposed in the literature, differentiating between economic insecurity, segregation, institutional capacity, and fairness norms.
Higher levels of income inequality may be associated with higher levels of economic insecurity (e.g. Schwander, 2020) which, in turn, leads individuals to have less trust in political institutions. While the effects of economic insecurity on preferences for redistribution and political behavior are frequently studied (Marx, 2014;Vlandas & Halikiopoulou, 2019), the effects on more general attitudes such as trust are less well understood. Wroe (2016), studying the US context, shows that perceiving one's own living condition as economically insecure negatively affects political trust. For the EU context, using Eurobarometer data, Van Erkel and Van Der Meer (2016) show that changes in macro-economic performance affect political trust and that these effects are heterogeneous across educational groups and stronger for low-educated individuals. Nguyen (2017) shows that exposure to higher levels of either potential or actively experienced labour market insecurity is associated with lower levels of social trust. He further shows that institutions can buffer the effect of economic insecurities on trust by finding that this relationship is moderated by passive and active labor market support. This indicates that (in)security exposure and (in)security perceptions matter for individuals' trust in political institutions.
A second line of argumentation suggests that rising inequality increases social distances between individuals and therefore leads to more segregated societies. This translates into higher levels of political trust for economically well-off individuals who benefit from inequality and lower levels of political trust for economically less well-off individuals who are disadvantaged by inequality e.g. due to higher levels of relative deprivation (Deaton, 2001;Hastings, 2019). Besides this mechanical effect (Neckerman & Torche, 2007), Uslaner and Brown (2005) argue that, in the context of high inequality, people at the top and at the bottom of the income distribution will not perceive each other as facing a shared fate. Therefore, they are less likely to trust individuals who are less similar to themselves which may also affect trust in political institutions. Empirically, higher segregation due to higher inequality is associated with lower levels of trust and civic participation (Neckerman & Torche, 2007).
Others argue that perceiving low levels of institutional capacity, for instance in the form of corruption or procedural unfairness, as a result of evaluating inequality (You & Khagram, 2005) leads individuals to lose trust in political institutions (Torcal, 2014;Meer & Dekker, 2011;Rahn & Rudolph, 2005). Uslaner (2010) argues that economic inequality provides an environment that breeds corruption which, in turn, facilitates further inequalities and reduces political trust. Hutchison and Johnson (2011) show, for the African context, that trust in government is a key element of regime legitimacy and find that higher institutional capacity is associated with higher levels of political trust: Political trust may therefore be higher in the context of politically efficient governments. Another strand of literature, which is closely related to the literature on institutional capacity, argues that fairness concerns and inequality aversion links actual inequality and trust (Grimes, 2006;Fehr et al., 2020;You & Khagram, 2005;Goubin & Hooghe, 2020). Zmerli and Castillo (2015), for instance, empirically show that individual perceptions of distributive fairness are closely related to political trust. Gustavsson and Jordahl (2008) find, using registerbased longitudinal data from Sweden, that inequality in income matters for generalized trust; however, they also show that this effect is particularly large for individuals who are inequality-averse. This is in line with the empirical finding by Heiserman et al. (2020) who show that higher perceived inequality and lower perceived mobility increase participants' concerns about economic fairness.
A majority of the research on the determinants of political trust indicates the need to look not only at the mechanical effect of inequality on political trust by hinting to the fact that individuals' evaluations of inequality are important to understand how inequality affects political trust: Individuals feel whether they are in an economically insecure situation, whether they are relatively deprived, and have opinions about state capacity and fairness. Most of the suggested theoretical mechanisms thus (implicitly) assume that individuals perceive, process, and evaluate information about inequality. If individuals have biased perceptions of inequality, actual inequality per se may not directly affect individual trust levels but such perceptions may mediate this relationship. If individuals evaluate inequalities and if these evaluations are important to individuals, actively conceptualising perceptions of and beliefs about inequality may be valuable to better understand how inequality affects political trust.
Considering Fairness Perceptions to Study the Determinants of Political Trust
There has been a growing awareness that individuals often have inaccurate and biased perceptions of inequality (e.g. Karadja et al., 2017;Bublitz, 2022;Niehues, 2014;Bobzien, 2020). Theoretically, such perceptions may matter in forming general attitudes such as trust in political institutions. The literature on political trust is not unaware of the potential importance of perceptions (e.g. Guinjoan & Rico, 2018). It is, however, methodologically difficult to operationalise perceptions. One approach to mitigate that problem is to use more nuanced inequality measures, such as regional inequality measures, assuming that individuals are better informed about inequalities they are directly exposed to (e.g. Kanitsar, 2022). Lipps and Schraff (2020), for instance, argue that regional inequality is a highly visible and thus a more salient form of income inequality because individuals directly experience it. They find that changes in regional income inequality have an equally strong effect on political trust as changes in national income inequality.
Studies that acknowledge the role of perceptions often approximate such perceptions by using items that operationalise attitudes, feelings, or beliefs about inequality (see (Guinjoan & Rico, 2018), for an exception). Zmerli and Castillo (2015), for instance, operationalise fairness using the question-wording 'How fair do you think the income distribution is in [country]?' with potential answer categories very fair, fair, unfair and very unfair. Loveless (2013) finds that individuals who consider inequality to be 'too high' are significantly more likely to have lower trust and political efficacy rates. He uses the following questionwording to measure inequality perceptions 'Some people say that there is too much social inequality in our society. Others say that there is no or almost no social inequality in our society. What is your view?' with potential answer items ranging from 'too much social inequality' to 'there is no or almost no social inequality'. This reveals attitudes towards the (I argue perceived) status quo. To understand why individuals evaluate inequality to be 'very fair' or to 'very unfair' or why individuals evaluate inequality to be 'too much' or to be 'about right', it is important to consider that responses are given relative to the status quo perception: Respondents are asked to reveal their preference relative to the status quo rather than being asked about preferences for absolute levels of inequality. This results in the fact it is unknown whether variation in answering this question is based on different preferences for absolute inequality or different perceptions of the status quo (Stantcheva, 2021;Bobzien, 2020). This is especially crucial for studying the link between inequality and political trust because such inequality perceptions matter for fairness evaluations. For instance, Heiserman et al. (2020) show, utilising an online experiment executed in the US, that higher perceived inequality increases individual concerns about economic fairness. Thus, inequality perceptions and fairness attitudes are interrelated (Jasso, 1978;Pedersen & Mutz, 2019) and should therefore be studied jointly to understand the formation of general attitudes such as political trust.
In this paper, I utilize a fairness measure that allows to differentiate between perceptions of and preferences for inequality in order to understand the effects of inequality on political trust. There is a long tradition in the fairness literature in differentiating between perceptions and fairness evaluations. These studies mostly study attitudes towards wages for different occupations by analysing survey items that ask respondents to report perceived and fair wages for different occupations (Jasso, 1978;Wegener, 1987;Ahrens, 2020). By doing so, this research is able to measure the distance between what an individual perceives and what she prefers. I apply this idea to the broader concept of inequality, in a simplified way, by introducing a measure which I call the fairness gap. The fairness gap measures the distance between individuals' inequality perception and inequality preference. Goubin (2020) theoretically argues and empirically shows that perceived political responsiveness is strongly related to political trust. This indicates that individuals evaluate political institutions with respect to whether they consider their preferences represented. 1 This paper builds upon these empirical findings and argues that perceiving a fairness gap (see Sect. 3 for construction of the measure)-that is, reporting a gap between perceived and preferred inequality-affects political trust. If individuals perceive a fairness gap, they consider their inequality preferences to be unrepresented. Such feelings of underrepresentiveness then lower political trust, similarly to the ways in which economic insecurity reduces trust (Marx & Nguyen, 2016;Algan et al., 2017). Individuals consider their preferences to be unseen which fosters feelings of unfairness. One response to such feelings is to reduce trust in political institutions. Our first hypothesis is thus: H1: A higher fairness gap is associated with lower political trust.
The link between actual inequality and perceived inequality is empirically complex (e.g. Windsteiger, 2022;Bavetta et al., 2020;Norton & Ariely, 2011). The ways in which individuals perceive inequality is likely to be some function of the actual levels of inequality. I therefore hypothesize that the fairness gap mediates the relationship between actual inequality and political trust: H2: The association between actual inequality and political trust is mediated by the fairness gap.
Past research has shown that inequality perceptions, inequality preferences, and trust in political institutions are stratified along socio-economic variables such as education or income: More educated individuals perceive lower levels of inequality (e.g. Gimpelson & Treisman, 2017), prefer less redistribution and therefore more inequality (e.g. Ahrens, 2020) and show higher levels of trust in political institutions (e.g. Foster & Frieden, 2017). It could therefore be the case that more educated individuals show higher levels of political trust because they perceive lower levels of inequality rather than because they prefer different levels of inequality than less educated individuals. In this study, I am able to differentiate the relative importance of perceived inequality and preferred inequality for political trust across different socio-economic groups. I follow past research in hypothesizing that the individual socio-economic position matters for inequality attitudes and trust in political institutions. Thus, I formulate competing hypotheses to study the importance of perceived and preferred inequalities for understanding variation in political trust across educational groups as a proxy for one's own socio-economic position: H3a: The variation in the fairness gap across socio-economic groups is associated with variation in perceived inequality rather than preferred inequality.
H3b: The variation in the fairness gap across socio-economic groups is associated with variation in preferred inequality rather than perceived inequality.
Empirical Approach
To test the proposed theoretical link between inequality and political trust empirically, I outline the empirical approach and operationalise the variables of interest in the following.
Method and Data
I use two main data sources for the analysis of actual inequality and fairness perceptions as determinants of political trust: the European Social Survey (ESS, 2002, 2010, 2018) and the International Social Survey Programme (ISSP Research Group, 2014). While the European Social Survey allows me to operationalise political trust, it lacks items on inequality perceptions. The ISSP includes items on inequality perceptions but lacks items on political trust in the modules which include items on inequality perceptions. In order to study the effect of inequality on political trust, I use the ESS as the individual-level dataset and merge it with the fairness gap measure obtained from the ISSP. I merge the data at country-year level as well as at a variety of lower aggregation levels. I aggregate on (1) country and year, (2) employment status, country, and year, (3) education, country, and year, (4) self-reported gender, country, and year, and (5) age, country, and year. Table 2 in the Appendix gives an overview of the countries and years included in the analysis. Firstly, I utilize pooled OLS regressions to estimate the effect of actual inequality and the fairness gap on political trust. Secondly, I conduct a mediation analysis in order to estimate the relative importance of the fairness gap for the link between actual inequality and political trust. Thereafter, I explore the importance of further socio-economic variables beyond actual inequality and descriptively study the ways in which the fairness gap, its components (perceived and preferred inequality), and political trust vary across educational levels as one proxy for individuals' socio-economic positions.
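To make the merging step concrete, a minimal R sketch is given below; the data frames and column names (issp, ess, country, year, fairness_gap) are hypothetical stand-ins rather than the actual variable names of either dataset.

```r
# Collapse the individual-level ISSP fairness gap to country-year means and
# merge it into the individual-level ESS data (hypothetical column names).
library(dplyr)

gap_cy <- issp %>%
  group_by(country, year) %>%
  summarise(fairness_gap = mean(fairness_gap, na.rm = TRUE), .groups = "drop")

ess_merged <- left_join(ess, gap_cy, by = c("country", "year"))

# For the finer aggregation levels, education, employment status, gender, or
# age group would be added to group_by() and to the join keys.
```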
Variables of Interest
Political trust. A majority of the literature on cross-country variation in political trust uses (a subset of) the following items: 'Using this card, please tell me on a score of 0-10 how much you personally trust each of the institutions I read out. 0 means you do not trust an institution at all, and 10 means you have complete trust. (1) [...] political parties?'. I also utilize these items and build an equally weighted index ranging from 0 to 10 (Cronbach's α = 0.89) (e.g. Zmerli & Newton, 2008;Lipps and Schraff, 2020;Van Erkel & Van Der Meer, 2016).
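As an illustration of how such an index can be built, the following R sketch computes an equally weighted mean of a set of trust items and their Cronbach's alpha; the item mnemonics and the data frame ess are assumptions, not the exact variable set used in the paper.

```r
# A minimal sketch: equally weighted 0-10 political trust index.
library(psych)                                      # for Cronbach's alpha

trust_items <- c("trstprl", "trstplt", "trstprt")   # assumed ESS-style item names

# row-wise mean of the 0-10 items, ignoring missing responses
ess$political_trust <- rowMeans(ess[, trust_items], na.rm = TRUE)

psych::alpha(ess[, trust_items])                    # reliability; the paper reports 0.89
```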
Actual income inequality. I use the gini index of disposable incomes after taxes and transfers by country-year provided by the Standardized World Income Inequality Database (SWIID) (Solt, 2019).
Fairness gap. I operationalise the fairness gap as the difference between perceived and preferred inequality. I do so using items from the ISSP Social Inequality modules (II-V) that ask individuals to reveal their perceived and preferred level of inequality using graphical visualisations. Respondents were asked to estimate how they think their society looks today: 'These five diagrams show different types of society. Please [...] look at the diagrams and decide which you think best describes <country> [...] What type of society is <country> today?'. They were further asked to reveal what structure they prefer by answering the question '[...] What do you think <country> ought to be like?'. Following past research (Niehues, 2014;Gimpelson & Treisman, 2017;Bobzien, 2020), I calculate gini coefficients from these graphs and operationalise the fairness gap as the gap between the perceived gini and the preferred gini. If an individual indicates that they perceive the society in country X to be Type A in Fig. 1 (gini = 41.95) but prefer Type D (gini = 20.13), the fairness gap for that individual is the difference between the gini coefficient of the perceived inequality and that of the preferred inequality, namely: 41.95 − 20.13 = 21.82. This takes into account, firstly, that individuals have different perceptions of inequalities and, secondly, how much these perceptions deviate from what individuals prefer. Figure 1 shows the response options for preferred and perceived inequality (Source: ISSP, 2009). Figure 2 shows the distribution of perceived (left) and preferred (right) types of society (reported in Fig. 1). Most individuals perceive comparatively high levels of inequality, reporting that their society looks like 'Type A' or 'Type B'. Preferred levels of inequality are more skewed around 'Type D'. I use the difference between these two variables to measure the fairness gap. It is thus an individual-level measure which I collapse at different levels: (1) country-year level, (2) employment status-country-year level, (3) education-country-year level, (4) self-reported gender-country-year level, (5) age-country-year level. I merge the ISSP data to the individual-level ESS data on these different aggregation levels.
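The worked example above (41.95 − 20.13 = 21.82) can be reproduced with a small R sketch; only the gini values for Type A and Type D are taken from the text, the remaining entries are placeholders.

```r
# Gini values per diagram type; only Type A and Type D are given in the text,
# the other entries are unknown placeholders (NA).
type_gini <- c(A = 41.95, B = NA, C = NA, D = 20.13, E = NA)

# fairness gap = perceived gini minus preferred gini
fairness_gap <- function(perceived, preferred) {
  unname(type_gini[perceived] - type_gini[preferred])
}

fairness_gap("A", "D")   # 41.95 - 20.13 = 21.82, as in the worked example
```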
Controls
Following past research on political trust, I control for several individual-level variables. I control for age (continuous, limited to 15 to 80), gender (female and male), education (lowest formal education, lower than secondary education, secondary education, higher than secondary education, tertiary education), employment status ((self-)employed, in education, retired, not in labor force, unemployed), and occupation using the ISCO major groups. A further important control variable is income, since individuals with higher incomes tend to perceive lower levels of inequality (e.g. Karadja et al., 2017;Bavetta et al., 2019) and tend to report higher levels of trust (e.g. Bjørnskov, 2007). I construct the income measure based on the ESS survey question for self-reported household income. The ESS, however, changed the way it measured income. In the first three ESS waves (2002-2006), there were 12 potential answer categories fixed across countries. That is, respondents in all countries reported their income on the same income scale rather than on a country-specific income scale which would have taken into account cross-country differences in income. From the fourth ESS wave onward, however, respondents were asked to report their income in country-specific deciles. Following Rueda and Stegmueller (2016), I transform the income categories into their midpoints for the first three ESS waves. I recode, for instance, the second lowest category, ranging from 1,800 to under 3,600 Euros, to 2,700 Euros. I then calculate country-specific deciles for the first three waves and merge them with the country-specific deciles from wave 4 onward. At country level, I control for GDP p.c. in 1000 Euros obtained from Eurostat. Figure 3 shows the relationship between political trust and the actual gini of disposable income (left graph) and the fairness gap (right graph) by country-year. Political trust is unequally distributed across countries. Countries such as Norway or Sweden, with relatively low levels of income inequality, show the highest levels of political trust. Poland, Latvia, Portugal, and Bulgaria, with different levels of income inequality, report the lowest levels of trust. The R2 of this cross-country relationship is 0.21. The right graph shows the relationship between the fairness gap (x-axis) and political trust (y-axis). The general observable pattern is similar to the left graph. However, the correlation between the fairness gap and political trust is, with an R2 of 0.56, higher. These cross-country patterns show that it may be beneficial to study the fairness gap in order to better understand how actual inequality affects political trust.
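A minimal R sketch of the income recoding described above is shown below; the band bounds other than the quoted second category (1,800 to under 3,600 Euros) and all variable names are hypothetical, and only four of the twelve bands are shown.

```r
# Recode the fixed income bands of ESS waves 1-3 to midpoints and compute
# country-specific deciles (truncated illustration: four bands only,
# bounds other than band 2 are hypothetical).
library(dplyr)

lower <- c(0, 1800, 3600, 6000)
upper <- c(1800, 3600, 6000, 12000)
midpoint <- (lower + upper) / 2          # band 2 -> 2700 Euros, as in the text

ess_early <- ess_early %>%
  mutate(income_mid = midpoint[income_category]) %>%  # income_category: band number (1..4 here)
  group_by(country) %>%
  mutate(income_decile = ntile(income_mid, 10)) %>%   # country-specific deciles
  ungroup()
```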
Validation Analysis: Validating the Fairness Gap Measure
Since this paper introduces the fairness gap as a new measure for fairness evaluations of inequality, it is important to validate such a measure by comparing it to other measures used in the literature. I study the relationship between the fairness gap and frequently used ISSP items to operationalise attitudes towards inequality and preferences for redistribution. To operationalise attitudes towards inequality, I use the item in which respondents are asked to (dis)agree with the statement: 'Differences in income in [COUNTRY] are too large.'. To operationalise preferences for redistribution, we use the widely used item 'Government should reduce income differences.' (Ahrens, 2020;Rehm, 2007;Corneo & Gruener, 2002). Finally, I also look at how the fairness gap relates to an item capturing perceived social mobility as a measure for fairness perceptions by asking respondents 'Getting ahead: How important is coming from a wealthy family?' with potential answers ranging from 'essential' to 'not important at all'. The idea is that respondents perceive a society as immobile, and thus unfair, if it is important to come from a wealthy family to succeed. In Figure 4 I study the relationship at country-year level. Perceiving income differences as too large as well as wanting to reduce income differences is associated with reporting higher fairness gaps in a cross-country perspective. Further, believing that coming from a wealthy family is important for success is also associated with reporting higher fairness gaps at country-level. A similar pattern is observable when looking at individual-level variation visualized in Figure 5: Individuals reporting a higher fairness gap are more likely to strongly agree that income differences are too large and that the government should reduce differences between incomes. They are also more likely to find it essential to come from a wealthy family to get ahead in society. Generally, the fairness gap highly correlates with various measures that are regularly used to measure attitudes towards inequality and preferences for redistribution.
Regression Analysis: The Effect of the Fairness Gap on Political Trust
To study the relevance of the fairness gap for understanding the variation in political trust, I specify pooled OLS regressions at different aggregation levels (Table 1), e.g., on country-year-specific education groups. In all model specifications, the actual gini coefficient is negatively associated with political trust, albeit only significant (p < 0.05) in some specifications, with effect sizes ranging from −0.08 to −0.03. The fairness gap measure is also negatively associated with political trust across all models, with significance levels ranging from p < 0.01 to p < 0.001: An increase in the perceived fairness gap by 1 is associated with reporting lower political trust by between 0.05 (model (2)) and 0.13 (model (1)). The effect of the fairness gap on political trust is rather robust across different data aggregations. Individuals with a higher secondary or tertiary education show significantly higher levels of political trust across all models. This is also true for the effect of income deciles on political trust: Being in a higher income decile increases, on average, trust in political institutions by 0.04 (p < 0.001). Females report lower levels of political trust compared to males, and age does not seem to be associated with political trust. Please see the Appendix, Table 4, for a stepwise regression and an alternative estimation method using multilevel modeling (MLM) instead of POLS. Overall, the results indicate that the fairness gap matters for political trust. However, the analysis does not answer the question whether the fairness gap mediates the link between actual inequality and political trust or whether the fairness gap affects political trust through a different mechanism.
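One possible specification of such a pooled OLS model in R is sketched below; the variable names are hypothetical and the control set is abbreviated relative to the models described in the text.

```r
# Pooled OLS of political trust on the actual gini and the fairness gap with
# individual-level controls and country and year fixed effects
# (hypothetical variable names, abbreviated control set).
m1 <- lm(political_trust ~ gini_disposable + fairness_gap +
           age + gender + education + income_decile + gdp_pc +
           factor(country) + factor(year),
         data = ess_merged)
summary(m1)
```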
Mediation Analysis: Does the Fairness Gap Mediate the Relationship Between Inequality and Political Trust?
I conduct a mediation analysis to study whether the effect of actual inequality on political trust is mediated by fairness perceptions. I do so focusing on the controlled direct effects, i.e., the effect of inequality when taking changes in the mediator into account (Iacobucci, 2008;Acharya et al., 2016). I include all control variables that have been included in the models in Table 1, including country and time fixed effects, in the mediation analysis. For this, I use the 'mediation' R package (Tingley et al., 2014). This enables me to see how much of the main effect goes through the fairness gap. Figure 6 shows the indirect, direct, and total effects of actual inequality on political trust, with the fairness gap as mediator, without controls (left) and with controls (right). In the model with controls (Figure 6, right), the average direct effect (ADE) is the direct effect of actual inequality on political trust; this effect is −0.03 (p < 0.001). The average causal mediation effect (ACME) of −0.04 (p < 0.001) is the indirect effect of actual inequality on political trust mediated by the fairness gap. The total effect is simply the sum of the direct and indirect effects of actual inequality on political trust; the effect size is −0.07 (p < 0.001). The direct effect of actual inequality on political trust is insignificant while the indirect effect, mediated by the fairness gap, is significant and larger in magnitude than the direct effect. Thus, about half of the total effect goes through the fairness gap, indicating that fairness perceptions are an important mechanism linking income inequality and political trust. The effect of actual inequality on political trust is small in effect size, and its significance depends on model specification. The statistically significant effect of the fairness gap on political trust in Table 1 suggests that the fairness gap may not only work as a mediator but may also affect political trust through other mechanisms.
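Since the text names the 'mediation' R package, a hedged sketch of such a mediation setup might look as follows; the variable names and the reduced control set are assumptions for illustration, not the paper's exact specification.

```r
# Controlled mediation with the 'mediation' package (Tingley et al., 2014);
# variable names and controls are illustrative assumptions.
library(mediation)

# mediator model: fairness gap as a function of actual inequality and controls
med.fit <- lm(fairness_gap ~ gini_disposable + age + gender + education +
                factor(country) + factor(year), data = ess_merged)

# outcome model: political trust as a function of mediator, treatment, controls
out.fit <- lm(political_trust ~ fairness_gap + gini_disposable + age + gender +
                education + factor(country) + factor(year), data = ess_merged)

med.out <- mediate(med.fit, out.fit,
                   treat = "gini_disposable", mediator = "fairness_gap",
                   sims = 1000)
summary(med.out)   # reports ACME (indirect), ADE (direct), and total effect
```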
Outlook: How Does One's Own Economic Position Influence Fairness Perceptions and Political Trust?
In line with past research, I find that higher actual inequality is associated with lower levels of political trust. These effect sizes are, however, small, and their significance depends on the model specification. I further find that this already weak relationship is mediated by fairness perceptions. Past research has shown that, beyond actual inequality, one's own socio-economic position is associated with inequality perceptions, inequality preferences, and political trust. To explore the potential role of fairness perceptions beyond being a mediator between inequality and political trust, I descriptively assess how the fairness gap, its disaggregation into perceived and preferred inequality, and political trust vary across socio-economic positions. Figure 7 shows (a) political trust by educational level, (b) the fairness gap by educational level, and (c) perceived and preferred inequality by educational level (Source: ESS 2002, 2018 & ISSP 1999, 2009). Graphs (a) and (b) show that higher levels of education are associated with higher levels of political trust and with reporting lower fairness gaps. When differentiating between perceived and preferred ginis, the decrease in the reported fairness gap by education is more strongly driven by decreases in the perceived level of inequality rather than the preferred level of inequality. Put differently, when looking at how perceived and preferred inequality varies by education, there is higher variation in perceived inequality (range [30.65; 33.24]) than in preferred inequality (range [22.13; 22.62]) (see graph (c)). This is a pattern which I also observe when studying employment status (see Appendix, Figure 8) or occupational groups according to the major groups of the ISCO classification (see Appendix, Figure 9). I interpret this as an indication that individuals differ more strongly in how they perceive inequality across socio-economic variables than in what levels of inequality they prefer. For understanding variation in political trust, it may therefore be promising to focus on the formation of and the consequences of inequality perceptions.
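The range comparison reported for graph (c) can be reproduced with a short R sketch of the kind below, assuming hypothetical column names for the perceived and preferred gini values.

```r
# Compare variation in perceived versus preferred gini across educational
# groups (hypothetical column names).
library(dplyr)

issp %>%
  group_by(education) %>%
  summarise(perceived = mean(gini_perceived, na.rm = TRUE),
            preferred = mean(gini_preferred, na.rm = TRUE),
            .groups = "drop") %>%
  summarise(range_perceived = max(perceived) - min(perceived),   # text reports ~2.6
            range_preferred = max(preferred) - min(preferred))   # text reports ~0.5
```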
Conclusion
Can fairness perceptions help us to better understand how income inequality affects individual levels of political trust? Introducing a new measure for fairness perceptions-the fairness gap-I find that the ways in which individuals evaluate inequalities matter for their levels of political trust (H1). Roughly half of the effect of actual inequality on political trust is mediated by the fairness gap, although the main effect of inequality on trust is already small in size and its significance depends on model specification (H2). Studying the relevance of one's own economic position, approximated by educational level, I further show that variation in the fairness gap across socio-economic groups is to a larger part explained by variation in perceived inequality rather than variation in preferred inequality, supporting (H3a) rather than (H3b). These findings provide three contributions to the literature on inequality and political trust. Firstly, I introduce a novel measure to operationalise the inequality which is considered (un)fair by individuals. Thereby, I advance the literature on inequality and political trust by actively modelling perceptions rather than relying on attitudes towards inequality. Secondly, I show that perceptions of inequality do not only influence preferences over inequality and redistribution, as shown by past research, but are also associated with more general attitudinal concepts such as political trust. In doing so, I link the literature on inequality perceptions and political preferences to the literature on political and institutional trust. Thus, I outline the importance of perceptions for the formation of more general attitudes towards the political system such as political trust. Thirdly, I show that it is theoretically helpful to differentiate between inequality perceptions and inequality preferences; this analysis shows that individuals differ more strongly in how they perceive inequality across socio-economic variables than in what levels of inequality they prefer. This suggests that individuals are more polarised in their perceptions of inequality than in their preferred levels of inequality and that this polarisation in perceptions closely links to polarisation in political trust. This paper is not free from caveats. Operationalising political trust is difficult for many reasons and becomes even more difficult in a cross-country perspective. There is evidence of cross-cultural variation in how individuals interpret the survey questions measuring social trust (Torpe & Lolle, 2011; Reeskens & Hooghe, 2008). In this paper, I am interested in the subjective assessment of political trust and hope to mitigate the problem by using country fixed effects. Further, I use observational data and conduct descriptive and regression analyses to understand how income inequality and fairness perceptions link to political trust. Methodologically, it is reasonable to assume that there are feedback mechanisms between trust and fairness perceptions: individuals with higher levels of political trust may perceive inequality as fairer because they trust their political institutions. Endogeneity in terms of omitted variable bias or reverse causality is difficult to exclude in this analytical setup. Studying the causal mechanisms more directly, using experimental or quasi-experimental methods, would be promising to better understand the exact mechanisms and the causal directions.
Given that variation in the fairness gap is primarily driven by perceptions of inequality, it would, for instance, be interesting to study the following question in a causal design: does learning that inequality is lower than one initially thought lead individuals to adjust their fairness evaluations, and do such adjustments affect their trust in political institutions? The relevance of inequality perceptions for fairness perceptions and for individuals' political attitudes such as political trust makes it crucial to understand how such perceptions are formed. Studying not only the role of heuristics and prior beliefs in the formation of inequality perceptions, alongside other factors such as one's own socio-economic position or cultural, economic, and institutional environments, but also the relative importance of these different determinants would be a further path for future research.
I suggest that actively operationalising individuals' fairness perceptions-as their preferred deviation from their perceived status-quo inequality-enables us to better understand when and why fairness perceptions affect individuals' trust in political institutions. Future research is needed to better understand (1) how individuals build fairness perceptions, (2) how we can better measure such perceptions, and (3) how the formation of such perceptions and of general attitudes towards the political system such as trust in political institutions are interrelated.
Appendix: Figs. 8, 9, 10, 11 and Tables 2, 3, 4 and 5. Table 4: Stepwise POLS regressions and multilevel estimation (model 4) of political trust. Standard errors are clustered at the country-year level. Coefficients for the ISCO major groups, included in models (3) and (4), are not shown. *p < 0.05, **p < 0.01, ***p < 0.001 | 8,772.4 | 2023-07-16T00:00:00.000 | [
"Economics"
] |
A Conserved Motif in Tetrahymena thermophila Telomerase Reverse Transcriptase Is Proximal to the RNA Template and Is Essential for Boundary Definition*
Background: Telomerase requires the interaction of conserved protein and RNA motifs. Results: Site-directed hydroxyl radical probing and mutagenesis identified an essential region of RNA-protein interactions. Conclusion: The conserved CP2 protein motif is proximal to a conserved telomerase RNA motif. Significance: Understanding how telomerase protein and RNA subunits interact allows us to begin constructing a more complete mechanistic model of telomerase function. The ends of linear chromosomes are extended by telomerase, a ribonucleoprotein complex minimally consisting of a protein subunit called telomerase reverse transcriptase (TERT) and the telomerase RNA (TER). TERT functions by reverse transcribing a short template region of TER into telomeric DNA. Proper assembly of TERT and TER is essential for telomerase activity; however, a detailed understanding of how TERT interacts with TER is lacking. Previous studies have identified an RNA binding domain (RBD) within TERT, which includes three evolutionarily conserved sequence motifs: CP2, CP, and T. Here, we used site-directed hydroxyl radical probing to directly identify sites of interaction between the TERT RBD and TER, revealing that the CP2 motif is in close proximity to a conserved region of TER known as the template boundary element (TBE). Gel shift assays on CP2 mutants confirmed that the CP2 motif is an RNA binding determinant. Our results explain previous work that established that mutations to the CP2 motif of TERT and to the TBE of TER both permit misincorporation of nucleotides into the growing DNA strand beyond the canonical template. Taken together, these results suggest a model in which the CP2 motif binds the TBE to strictly define which TER nucleotides can be reverse transcribed.
Eukaryotic cells must distinguish the natural ends of chromosomes from double-stranded DNA breaks that are a mark of DNA damage. To do so, chromosome ends are capped by chromatin structures called telomeres, comprising repetitive DNA sequences that recruit specific DNA-binding proteins (1). Telomeres are shortened with each round of cell division and so require the enzyme telomerase to maintain telomere length in actively dividing cells. Telomerase dysfunction is associated with diseases that affect proliferative tissues, such as dyskeratosis congenita and aplastic anemia (2). On the other hand, aberrant telomerase overexpression can help to confer proliferative potential to cells and is associated with 90% of human cancer cell lines, making telomerase an attractive target for cancer therapies (3).
Telomerase is an RNA-protein complex that includes both the protein telomerase reverse transcriptase (TERT) and telomerase RNA (TER). TER contains a region complementary to the telomeric DNA sequence, known as the template. The template region of TER is repetitively reverse transcribed by TERT to extend telomere DNA (4).
TERT contains four domains: an N-terminal domain, an RNA binding domain (RBD), a reverse transcriptase domain, and a C-terminal extension (5). TERs also have several conserved structural motifs required for the function of the telomerase holoenzyme. TERs from ciliates, yeasts, and mammals all contain an essential RNA pseudoknot proximal to the template (6). In addition, TERs universally possess a template boundary element (TBE) that defines the region of TER to be reverse transcribed by TERT (7,8). The high fidelity of template definition established by RBD-TBE interactions is a central feature of the telomerase catalytic cycle because incorporation of a single nontemplate nucleotide will prevent synthesis of subsequent telomere DNA repeats.
Recent studies in the Tetrahymena thermophila model system have shown that the RBD domain of TERT is primarily responsible for interactions with TER, is sufficient to bind TER with high affinity, and includes several conserved protein motifs involved in RNA binding (9-13). However, it is still not known which portions of the RBD interact with which conserved RNA motifs. An x-ray crystal structure of a large C-terminal fragment of the T. thermophila RBD has been solved (14), and comparison with the Tribolium castaneum TERT structure suggests that the conserved T motif forms a β-strand hairpin near the position of the template RNA (15). Furthermore, the conserved CP motif was shown to be adjacent to the T motif in an electropositive groove, perhaps positioned to bind the TBE (14). However, a co-crystal with the RBD bound to RNA was not attained, and so it could not be definitively shown which protein motifs interact with which RNA domains. A third conserved domain, the CP2 motif, was not included in the RBD construct that was solved by x-ray crystallography.
In the present work, we employed site-directed Fe(II)-EDTA hydroxyl radical probing to map RNA-protein interactions in telomerase, as done previously for the ribosome (16,17). Our results demonstrated that the CP2 motif and the TBE of Tetrahymena TER are in close proximity. To explore further the role of the CP2 motif in TER binding, we generated CP2 point mutants and measured their affinity for TER. Electrophoretic mobility shift assays (EMSAs) indicated that deletion of the entire CP2 motif severely compromised TER binding and that many single-point mutations in the CP2 motif reduce the affinity for TER, consistent with the CP2 motif being an important determinant of RBD binding. Among the single amino acid mutants analyzed, the strongest binding defect was observed with a mutation to Arg-237, a residue previously shown to play a role in telomerase activity and template definition (8,10). Quantitative EMSAs demonstrated that a single amino acid CP2 point mutant (R237A) showed an approximately 7-fold reduction in TER affinity compared with WT RBD.
Finally, we verified that telomerase harboring an R237A mutation shows severely reduced telomerase activity. Interestingly, the assembly protein p65 can partially rescue telomerase activity of R237A TERT; however, p65 does not rescue the previously reported R237A template boundary defect. Taken together, our results demonstrate that the CP2 motif is essential for a functional interaction between TERT and TER and is a critical protein determinant of template definition.
EXPERIMENTAL PROCEDURES
PCR Mutagenesis-Plasmids containing the T. thermophila RBD fused to an N-terminal His tag were PCR-mutagenized using custom DNA primers. Linear plasmid amplicons were treated with T4 polynucleotide kinase (NEB) and T4 DNA ligase to generate circular plasmids, which were used to transform Escherichia coli 10β cells. Clones were sequenced to confirm the presence of the desired mutation. The Cys-lite RBD had the following mutations: C232A, C300S, C331A, C359A, C387A, C424A.
Protein Expression and Purification-TERT RBD was expressed in E. coli BL21 (DE3) cells induced at 21°C with 0.8 mM isopropyl 1-thio-β-D-galactopyranoside for 4 h. Cells were harvested by centrifugation and lysed by sonication in buffer containing 20 mM Tris, pH 8.0, 500 mM NaCl, 1 mM MgCl2, 1 mM PMSF, 10% glycerol, and 5 mM β-mercaptoethanol. Cell lysate was centrifuged to remove precipitates and cell debris, and supernatant protein was purified by nickel affinity chromatography. Purified protein was eluted into a buffer containing 20 mM Tris, pH 8.0, 200 mM NaCl, 1 mM MgCl2, 500 mM imidazole, and 5 mM β-mercaptoethanol and flash frozen with liquid nitrogen for later use.
For quantitative EMSAs, proteins underwent additional rounds of purification. Following nickel affinity chromatography, the eluate was diluted into a buffer containing 20 mM Tris, pH 7.0, 50 mM NaCl, 1 mM MgCl2, 10% glycerol, and 5 mM β-mercaptoethanol and purified on an ion exchange Source S column (GE Healthcare) using a gradient from 50 mM to 1 M NaCl. The protein eluted at approximately 300 mM NaCl and was concentrated in a centrifugal concentrator (Millipore) with a 30-kDa molecular mass cut-off. Protein was flash frozen with liquid nitrogen and later purified on a Sephadex-200 (GE Healthcare) sizing column in 20 mM Tris, pH 8.0, 200 mM NaCl, 1 mM MgCl2, 10% glycerol, and 5 mM β-mercaptoethanol. Protein was concentrated off of the sizing column and flash frozen for later use. All protein constructs eluted as a single monomeric peak off of the sizing column, consistent with an ~40-kDa protein. The percentage activity of the WT and R237A protein constructs was determined using stoichiometric gel shifts. 2.5 μM cold TER was incubated with 4 nM 32P end-labeled TER and 2-16 μM RBD constructs in 20 mM Tris, pH 8.0, 100 mM NaCl, 1 mM MgCl2, 10% glycerol, and 5 mM β-mercaptoethanol. WT and R237A RBD demonstrated approximate percentage binding activities of ~30 and ~27%, respectively (data not shown).
Fe-BABE Labeling-Protein constructs containing only a single cysteine at the desired labeling site were dialyzed into a buffer lacking reducing agent (20 mM Tris, pH 8.0, 200 mM NaCl, 1 mM MgCl2, 10% glycerol) overnight, then switched to fresh dialysis buffer and dialyzed for an additional 2 h. The concentration of Tris in the buffer was then raised to 80 mM, and the dialyzed protein was incubated with a 4-fold molar excess of Fe-BABE for 3.5 h at room temperature in the dark. Next, the protein was dialyzed overnight against fresh dialysis buffer to remove excess Fe-BABE, and again for an additional 2 h. Fe-BABE-labeled RBD was quantified by Bradford assay and flash frozen in liquid nitrogen for later use. Mock-labeled Cys-lite RBD was treated identically to labeled protein; however, the protein construct lacked any cysteines to interact with the Fe-BABE moiety.
Hydroxyl Radical Probing-Protein constructs were incubated with 125 ng of in vitro transcribed TER end-labeled with 32P (PerkinElmer Life Sciences). Binding was performed in a buffer containing 20 mM Tris, pH 8.0, 100 mM NaCl, 1 mM MgCl2, 0.1 mg/ml tRNA, 80 units of RNasin, and 450 nM p65 for 10 min at room temperature in a final volume of 50 μl. 1 μl of 250 mM sodium ascorbate and 1 μl of 1.25% H2O2 were added to the side of each tube and mixed instantaneously by a centrifuge pulse. Reactions were incubated for 10 min on ice and then quenched with 10 μl of 20 mM thiourea, and a radiolabeled DNA recovery control was added. The reactions were then phenol:chloroform-extracted and ethanol-precipitated. The RNA was run on a 7% sequencing polyacrylamide gel containing 8 M urea, and the gel was dried and imaged using a phosphorimaging screen (GE Healthcare) and a Typhoon scanner (GE Healthcare). The T1 ladder was generated using full-length TER and RNase T1 (Ambion).
Hydroxyl Radical Probing Quantification-Hydroxyl radical probing gels were quantified with SAFA (18) as described previously (19). The quantified band intensities were compared between 1000 nM Fe-BABE-labeled protein and 1000 nM mock-labeled protein by dividing the intensity of the band in the labeled protein lane by the intensity of the band in the mock-labeled protein lane.
EMSAs-EMSAs were performed as described previously (19). For the quantitative EMSAs in Fig. 4, 0.4 nM end-labeled RNA was used instead of body-labeled RNA. Band intensities were quantified using ImageQuant, and data were plotted and fit in Origin to determine Kd values. Percentage bound complexes were plotted in Origin, and Kd values were determined by fitting to the binding equation F = Fmax · c^n / (Kd^n + c^n), where F represents the fraction bound, c represents the concentration, Kd is the dissociation constant (the concentration at which 50% of the RNA is bound), Fmax represents the maximal value of F, and n represents the Hill coefficient.
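The fits themselves were performed in Origin; the following sketch shows an equivalent fit of the binding equation with SciPy, using synthetic placeholder data rather than the paper's measurements.

```python
# Sketch of the Hill-equation fit described above; the paper performs this
# fit in Origin, so this is illustrative only. All data values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, fmax, kd, n):
    """Fraction bound: F = Fmax * c**n / (Kd**n + c**n)."""
    return fmax * c**n / (kd**n + c**n)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)  # protein, nM
frac = np.array([0.03, 0.10, 0.28, 0.55, 0.80, 0.92, 0.97])   # fraction bound

popt, pcov = curve_fit(hill, conc, frac, p0=[1.0, 30.0, 1.0])
fmax, kd, n = popt
print(f"Fmax = {fmax:.2f}, Kd = {kd:.1f} nM, Hill n = {n:.2f}")
```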
Telomerase Reconstitution-Prebinding reactions were generated with 62.5 ng of TER with or without 5 pmol of p65 in 20 mM Tris, pH 8.0, 100 mM NaCl, 1 mM MgCl2, and 1 mM dithiothreitol in a final volume of 2.5 μl. Prebinding reactions were incubated for 10 min at room temperature, and telomerase was reconstituted in a 50-μl scale rabbit reticulocyte lysate reaction as described previously (20).
Telomerase Activity Assays-Telomerase activity assays were performed as described previously (21). Some reactions were supplemented with 100 μM dATP as indicated.
RESULTS
Tethered Hydroxyl Radical Probing Demonstrates That the CP2 Motif Is Proximal to the TBE-We initially characterized the structural basis of RBD-TER interactions using site-directed hydroxyl radical probing. In this technique, a single reactive cysteine in the protein of interest is labeled with an Fe(II)-EDTA moiety. The functionalized protein is incubated with RNA in a buffer containing ascorbate and H2O2, generating hydroxyl radicals by the Fenton reaction (22). These hydroxyl radicals cleave any RNA nearby but can only travel ~20 Å before quenching. Sites of RNA cleavage can be identified by PAGE, giving a readout of regions of RNA-protein interaction. To facilitate site-specific Fe(II)-EDTA tethering, six endogenous cysteines were mutated out of a plasmid coding for the TERT RBD, generating a Cys-lite RBD construct (Fig. 1A). Single cysteines were then introduced back into the protein, generating a series of constructs containing only a single cysteine at specific regions of interest in the TERT RBD. The proteins were expressed in E. coli, affinity-purified, and labeled with Fe-BABE. Fe-BABE-labeled proteins were incubated with 32P-labeled TER (Fig. 1B), and ascorbate and hydrogen peroxide were added to the RNA-protein complexes to initiate the formation of hydroxyl radicals.
The majority of amino acid labeling sites showed no significant difference between Fe-BABE-labeled protein and unlabeled protein (Table 1). However, RBD labeled at residue Cys-232 showed a considerable increase in cleavage at TER nucleotides 17-20 and 37-41 (Fig. 1C). Titrating increasing concentrations of Cys-232-Fe-BABE RBD resulted in a corresponding increase in the intensity of the cleavage products (Fig. 1C, lanes 3-7), whereas incubation of the RNA with mock-labeled Cys-lite RBD, unlabeled Cys-232 RBD, or Cys-232-Fe-BABE RBD in the absence of hydrogen peroxide and ascorbate resulted in no cleavage at this site (Fig. 1C, lanes 8-10, respectively). A chromatogram of the lane intensity profiles of a lane with Cys-232-Fe-BABE (lane 7) and a mock-labeled control (lane 8) clearly demonstrates the extent of the increase in cleavage (Fig. 1C, right).
To demonstrate the reproducibility of the result, the experiment was repeated three times, and the intensities of the cleavage products were quantified using the gel quantification program SAFA (18). The intensity of the cleavage product was measured at each individual residue and compared between RNA incubated with 1000 nM Cys-232-Fe-BABE Cys-lite RBD and RNA incubated with 1000 nM mock-labeled Cys-lite RBD (Fig. 1D). The experiment demonstrates a highly reproducible cleavage pattern, with the most hydroxyl radical-induced cleavage observed at the base of stem II, most notably at residue 17. Plotting the mean cleavage intensity against the secondary structure of the RNA demonstrates the extent to which the pattern of cleavage mirrors the RNA secondary structure, peaking at the base of stem II and diminishing along the base-paired residues (Fig. 1D, inset).
The observed sites of cleavage at the base of TER stem II are part of a previously described conserved RNA motif known as the template boundary element. Mutations to the RNA in this region show errors in template definition, allowing the nontemplate residue U42 to be aberrantly reverse transcribed by TERT in the presence of dATP (8,12). Furthermore, this region was shown to be required for high affinity TERT interaction, leading to the model that the TBE is bound tightly by the RBD and that this interaction prevents the TERT reverse transcription domain from reverse transcribing nontemplate TER nucleotides (8,12).
The site of protein labeling, Cys-232, is part of an evolutionarily conserved motif known as the CP2 motif. The CP2 motif is found only in ciliate TERT, and previous mutational studies demonstrated that mutations to many of the residues in the CP2 motif cause a significant defect in telomerase activity (10). Interestingly, one CP2 mutant showed the same defect in template definition as observed with a TBE mutant, a result that is consistent with the observed physical proximity of these two domains in our hydroxyl radical probing experiments.
EMSAs Demonstrate That the CP2 Motif Contributes to TER Binding-Although our hydroxyl radical probing results indicate that the CP2 motif is physically proximal to the TBE in the RNA-protein complex, they do not directly demonstrate that CP2 plays an essential role in RNA binding. To explore further the role of the CP2 motif in mediating RNA-protein interactions, we first expressed an RBD construct that lacks the CP2 motif entirely (ΔCP2) (Fig. 2A) and compared its RNA binding activity with the full-length RBD protein by EMSA. The ΔCP2 RBD construct was readily expressed and purified (Fig. 2B) but demonstrated considerably diminished RNA binding activity compared with WT protein (Fig. 2C). The high ΔCP2 RBD protein concentrations required to bind TER in our EMSA experiments, together with the propensity to form aggregates at protein concentrations above 320 nM, precluded our ability to determine a dissociation constant accurately. However, these results strongly suggest that the evolutionarily conserved CP2 motif is necessary for efficient and specific binding to TER.
The CP2 motif is a 12-amino acid sequence conserved across various ciliate species, containing many hydrophobic and positively charged residues potentially involved in RNA binding (Fig. 3A). To further dissect the contribution of the CP2 motif to TER binding, we next performed an alanine scan through the CP2 region of TERT. To simplify the purification procedure for a semiquantitative initial screen, we purified the proteins with only one round of nickel affinity chromatography. Wild-type RBD and proteins encoding mutants Y231A, C232A, H234A, and R237A showed robust expression and purified to high levels of purity as determined by SDS-PAGE (Fig. 3B, left). Mutations R226A, I229A, and F230A showed a modest decrease in protein expression (Fig. 3B, right) and consequently co-purified with increased levels of contaminating protein, confounding accurate determination of protein concentrations by Bradford assays. Instead, the protein concentrations of these constructs were estimated by comparing Coomassie Blue staining of only the RBD band on SDS-PAGE gels (Fig. 3B, arrow).
This initial screen with partially purified protein indicated an approximate dissociation constant, i.e., the concentration of protein at which 50% of the RNA is bound, for the WT protein of ~60 nM (Fig. 3C). The CP2 point mutations F230A and Y231A appeared to have no effect on RNA binding, as gels for these constructs were virtually identical to the WT protein. Mutants I229A, C232A, and H234A appeared to have a modest defect in RNA binding, as evidenced by the slightly increased concentrations of protein required to bind the RNA (Fig. 3C). However, the largest and most obvious defect was observed with the mutants of the flanking arginines (R226A and R237A). In addition to the reduced affinity, the higher molecular mass complexes observed with the R237A mutant appeared as a diffuse smear on the gel, likely due to partial RBD binding or weakened complex formation, perhaps indicative of a further defect (Fig. 3C).
To further study the contribution of the CP2 motif to RNA binding, we followed our initial screen with quantitative EMSAs using fully purified protein constructs. For these measurements we focused on a comparison of WT RBD with one point mutant that showed a significant defect in our screen (R237A). These protein constructs were purified using nickel affinity chromatography, ion exchange chromatography, and sizing chromatography before use in EMSAs.
SDS-PAGE indicated that both WT and R237A proteins were purified to homogeneity (Fig. 4A). In addition, analytical sizing chromatography demonstrated that both proteins eluted at the same elution volume, consistent with a monomeric protein of ~40 kDa (Fig. 4B), demonstrating that the mutation did not predispose the R237A mutant to aggregation. The quantitative EMSAs on these protein constructs confirmed the results of the initial screen. Under the conditions of our assay, the highly purified WT RBD demonstrated a Kd of 24 ± 5 nM, a value that is slightly better than observed in our initial EMSA screens. The binding defect observed for the R237A mutant persisted, with the Kd exhibiting a considerable increase to 183 ± 26 nM compared with WT RBD (Fig. 4, C and D). The effect of the mutation is not due to a general folding defect, as both proteins showed similar specific activities in stoichiometric binding assays (see "Experimental Procedures").
Under higher protein concentrations, EMSAs tended to reveal higher molecular mass complexes, likely corresponding to multiple RBD constructs binding to a single RNA (Fig. 4C, asterisks). A recent electron microscopy structure of Tetrahymena telomerase confirms that T. thermophila TERT and TER bind in a 1:1 ratio (23). Therefore, the higher order RBD-TER complexes we observe are likely artifacts of using purified protein constructs at high relative protein concentrations and do not reflect a physiologically relevant complex. These higher molecular mass complexes were nevertheless included in the quantification as it is likely that they represent naturally bound complexes augmented with an additional protein interaction. Although this may subtly shift the equilibrium between bound and unbound forms, this analysis is sufficient to make a meaningful comparison between the affinities of the WT and R237A protein constructs.
Previous work has suggested multiple sites of interaction between TERT and TER (12,24). This is likely a large source of the residual binding activity in the ΔCP2 mutant (Fig. 2). It is likely that different parts of the RBD maintain separate binding sites on TER, increasing the overall affinity through an avidity effect (19, 24-26). Our results are consistent with the CP2 motif constituting one of these points of RNA interaction and specifically implicate Arg-226 and Arg-237 as critical residues in mediating TER binding.
TABLE 1. Most protein labeling sites show no site-specific hydroxyl radical cleavage. The table displays the domain location, the motif, and the amino acid number of the cysteine labeling sites used in site-directed hydroxyl radical probing experiments. Cleavage products of site-directed hydroxyl radical probing experiments were compared with those obtained from unlabeled protein and scored based on the observed increase in the intensity of cleavage products (− indicates no observed increase in intensity; ++ indicates moderate to strong cleavage products). Labeling sites in the N-terminal (TEN) domain were obtained with a construct containing residues 1-516 of TERT. We note that a negative probing result does not necessarily preclude the possibility of RNA interactions at this site. A specific labeling site may suffer from poor accessibility to Fe-BABE labeling, or the label itself could prevent the RNA interaction it was intended to measure.
Mutations to the CP2 Motif Reduce Telomerase Activity and Are Partially Rescued by the Assembly Protein p65-The R237A mutation that showed the most dramatic defect in RBD affinity has been previously identified as a mutation that causes a template definition defect in full-length telomerase (8,10). A previous study determined that the T. thermophila assembly protein p65 can rescue deleterious TERT mutations, presumably by suppressing defects that arise from weakened TERT-TER interactions (27). We set out to test the effect of p65 on R237A mutant telomerase activity. Full-length TERT was reconstituted with TER in the absence or presence of purified p65 in rabbit reticulocyte lysate, and the catalytic activity of the telomerase complexes was assessed using a direct telomere DNA primer extension assay. As expected, wild-type telomerase shows robust extension activity in both the presence and absence of p65 (Fig. 5). However, R237A telomerase displayed a dramatic decrease in telomerase activity, with virtually no telomerase activity observed in the absence of p65 (Fig. 5). We note that a previous study of the R237A TERT in the absence of p65 reported detectable telomerase activity, albeit at a significantly reduced level compared with wild-type TERT (8). We expect the inability of our present experiments to detect activity with the R237A mutant in the absence of p65 likely reflects small differences in the protocols used to reconstitute and purify the telomerase complex. Nevertheless, our results are qualitatively consistent with the previous report in that the R237A mutation results in a marked reduction in the efficiency of telomerase reconstitution.
The presence of p65 improved R237A telomerase activity, although there was still considerably less extension than with wild-type telomerase. Previous studies have shown that the R237A mutation inhibits TERT-TER assembly (8). Because p65 is an assembly co-factor, we conclude that p65 likely rescues telomerase activity in the CP2 mutant by stabilizing the interaction between R237A TERT and TER.
We also tested R237A telomerase for template boundary defects. In the presence of a template boundary defect, residue U42 of TER aberrantly enters the TERT active site. This results in dATP incorporation into the telomeric DNA primer, which is observed as an additional band one nucleotide above the canonical repeat addition band. In our assay, the repeat addition band is at the +1 site; therefore, template boundary defects can be observed by an increase in the intensity of the +2 band when telomerase activity reactions are supplemented with dATP. Wild-type telomerase showed no increase at the +2 position in the presence of dATP, indicative of robust wild-type template definition (Fig. 5). Due to the undetectable level of catalytic activity in the absence of p65, R237A mutants could not be assessed for template boundary defects under these conditions. However, in the presence of p65, there was a considerable increase in template read-through when dATP was present (Fig. 5, asterisks). Taken together, our results indicate that the CP2 motif is important for TERT-TER assembly and telomerase activity and that the assembly protein p65 can partially rescue telomerase activity in a CP2 mutant. Furthermore, the CP2 motif is essential for proper template definition, as reported previously (8,10), and template definition defects persist in a CP2 mutant even in the presence of p65.
DISCUSSION
The CP2 Motif Is Proximal to the TBE and Contributes to Template Definition-Our site-directed hydroxyl radical probing results reveal that the CP2 motif is adjacent to the TBE in the TERT-TER complex, and gel shift assays verify that the CP2 motif is essential for proper RNA binding. Furthermore, the residues most strongly implicated in the CP2-TER interaction (Arg-226 and Arg-237) are basic residues that may contact the RNA backbone through electrostatic interactions. Finally, telomerase activity assays on a CP2 mutant confirm that the motif is critical for telomerase activity, showing both a severe assembly defect and a template boundary defect. Taken together, our experiments favor a model in which the CP2 motif binds the TBE to establish the template boundary. Nevertheless, we note that it remains a formal possibility that the CP2 motif is proximal to the TBE and influences RNA binding through an indirect interaction mediated by other amino acids.
A Model for TBE Interaction with the TERT RBD-Comparison of x-ray crystal structures of the T. castaneum TERT in complex with its DNA substrate and the T. thermophila RBD shows significant structural homology (14,15,28). Alignment of the two structures reveals the likely position of the T. thermophila RBD with respect to the reverse transcriptase domains and the RNA template (Fig. 6A). In this structure, the 5′ end of the template is positioned adjacent to the T motif. Because the TBE begins three nucleotides upstream of this position, this puts a constraint on the position of the TBE within the structure. Based on this alignment, the CP motif is optimally positioned to interact with the TBE in an electropositive groove adjacent to the template (14). The evolutionary conservation of the CP motif, previous biochemical experiments (13), and its position within the RBD structure suggest that the CP motif plays some part in binding the TBE. We therefore propose a structural model based on the solved crystal structures of the T. thermophila RBD, the T. castaneum full-length TERT, and NMR structures of stem II of T. thermophila TER, which places the base of stem II adjacent to the CP motif (14,15,29) (Fig. 6B).
FIGURE 5. A CP2 mutant demonstrates telomerase activity defects. Telomerase activity assays performed from rabbit reticulocyte lysate-reconstituted telomerase, using either WT TERT or TERT harboring a CP2 mutation (R237A). Assays were performed in the presence or absence of the assembly protein p65 and in the presence or absence of dATP as indicated. Numbers (+1, +2, +7, etc.) indicate the number of nucleotides incorporated by telomerase at the indicated band. R237A TERT displayed a template boundary defect as indicated by an increase in the +2 product upon addition of dATP (asterisks). LC, loading control.
At present, there is no structural information on the CP2 motif. Nevertheless, we can use our hydroxyl radical probing results to place the CP2 motif adjacent to the TBE in our structural model (Fig. 6B). In this model, the CP and CP2 domains cooperate to bind the base of stem II and maintain the template boundary. This conclusion is strongly supported by our biochemical data as well as by telomerase activity assays, which previously demonstrated that mutations in both the CP and CP2 domains affect template definition (8).
Comparison of the Tetrahymena CP2 Motif with Vertebrate TERT-The CP2 motif is an area of conservation found only in ciliates. The lack of conservation of the CP2 motif across species is not entirely unexpected, as it is known that there is also a divergence in the RNA sequences of the TBE (7). This raises the possibility that this region of the protein may co-evolve with TER TBEs, accounting for the divergence of this region of the protein across phyla. Interestingly, in an analogous position in vertebrate TERT there is another region of conservation, found only in vertebrates, known as the vertebrate-specific region (30). Our results suggest that the vertebrate-specific region may function in vertebrates in a manner analogous to the CP2 motif in ciliates, promoting TBE interactions and establishing the template boundary. | 7,110.6 | 2013-06-11T00:00:00.000 | [
"Biology"
] |
in Remediation of Sulfidic Wastewater by Aeration in the Presence of Ultrasonic Vibration
—In the current study, the aerial oxidation of sodium sulfide in the presence of ultrasonic vibration is investigated. Sulfide analysis was carried out by the methylene blue method. Sodium sulfide is oxidized to elemental sulfur in the presence of ultrasonic vibration. The influence of air flow rate, initial sodium sulfide concentration, and ultrasonic vibration intensity on the oxidation of sodium sulfide was investigated. The rate law equation for the oxidation of sulfide was determined from the experimental data. The orders of reaction with respect to sulfide and oxygen were found to be 0.36 and 0.67, respectively. The overall reaction followed nearly first order kinetics.
INTRODUCTION
Different processes are followed for the removal of sulfides, including wet scrubbing, liquid redox technology, biofiltration, scavengers, carbon adsorption, iron salts, the biocide process, anthraquinone, and the use of oxidizing agents [1][2][3][4]. As documented in [1-9], sulfide removal from wastewater is mainly achieved by oxidation. Advanced oxidation is the latest among wastewater treatment techniques [1]. Oxidation can be accomplished chemically or biologically [10]. Chemical oxidation involves the removal of electrons and the removal or addition of hydrogen [7]. In water and wastewater engineering, chemical oxidation serves the purpose of converting putrescible pollutant substances to innocuous or stabilized products [11]. Researchers in [9, 12-16] investigated the oxidation of sulfides in the presence of catalysts; however, catalytic oxidation suffers from certain drawbacks and is not economically suitable for large continuous treatment processes. Moreover, these catalysts are poisonous and hazardous, and their complete recovery after treatment is essential. Photo oxidation in both the presence and absence of catalyst has also been studied [5,6,17,18]. In the absence of catalyst, photo oxidation requires higher UV light intensity, which is not economically suitable for large continuous wastewater treatment processes [17,18].
Authors in [13] investigated sulfide oxidation by atmospheric oxygen in the presence of "Sulfur Black B" dye as a catalyst. The reaction was reported to be first order in sulfide concentration. Authors in [11] made an exhaustive survey of the classical methods of sulfide oxidation. They carried out catalytic oxidation with dissolved oxygen using a number of catalysts including carbon black, ferric salts, and a few organic compounds. Authors in [16] reported the effect of FeCl3 catalyst on the oxidation of Na2S with dissolved oxygen. They analyzed the conversion data using a first order rate equation for sulfide removal. They found that the rate constant is a function of FeCl3 concentration. Sulfide oxidation can be used for the synthesis of sulfones and sulfoxides. Authors in [14] indicate that the reaction of sulfides with 30% hydrogen peroxide catalyzed by tantalum(V) chloride in acetonitrile, i-propanol, or t-butanol produces high yields of sulfoxides. Authors in [19] developed a microbial fuel cell (MFC) for the removal of sulfur-based pollutants. The fuel cell used an activated carbon cloth and carbon fiber veil composite anode, air-breathing dual cathodes, and sulfate reducing species. In this system, most of the sulfide is electrochemically oxidized to sulfur.
A. Analysis of Sulfide
We prepared the test solution using sodium sulfide of 60% purity. The Methylene Blue Method (MBM) [20] was used to analyze the sulfide, and its concentration was estimated using a DR 5000 Hach UV-visible spectrophotometer. Since elemental sulfur forms during the oxidation process, carbon disulfide was added dropwise prior to the analysis of sulfide to remove the elemental sulfur formed during oxidation.
B. Methodology
A schematic diagram of the oxidation process is shown in Figure 1. The setup comprised a glass reactor, an ultrasonic vibrator, a dissolved oxygen (DO) probe, and a sparger for distributing the air in the solution taken in the reactor. The ultrasonic vibrator was connected to the power supply. The DO probe was dipped into the liquid in the apparatus only to measure the dissolved oxygen concentration, and the pH probe monitored the solution's pH. The glass reactor was dipped in the water inside the ultrasonic vibrator. The ultrasonic vibrator cannot be used as a reactor because in that case the temperature of the solution rises. The ultrasonic vibration is transferred into the solution inside the reactor through the water present in the ultrasonic vibrator. Before the sulfide analysis, carbon disulfide was added in order to dissolve the elemental sulfur formed during the oxidation.
III. RESULTS AND DISCUSSION
The aerial oxidation of sodium sulfide was investigated at different air flow rates, different initial sodium sulfide concentrations, and different ultrasonic vibration intensities. It was found that oxidation of sulfide produces elemental sulfur and sodium hydroxide. The initial pH of the sulfide solution was found to be 12 or more, depending on the initial sulfide concentration. The pH of the solution increased gradually to an ultimate value of about 13, which is alkaline. The increase in pH can be related to the liberation of caustic soda as a result of sulfide oxidation. The chemical reaction of sulfide and oxygen in the presence of ultrasonic vibration is the following: 2Na2S + O2 + 2H2O → 2S + 4NaOH (1)
A. Effect of Air Flow Rate
The oxidation process was investigated at different air flow rates. It was found that the higher the atmospheric air flow rate at the opening of the sparger, the faster the sulfide oxidation. This is because a higher air flow rate at the opening of the sparger means a larger concentration of oxygen in the liquid. The larger concentration of oxygen in the liquid allows more reaction with the excited sulfide ions, and the increased reaction shortens the time needed to oxidize the sulfides in the liquid. It was observed that the amount of dissolved oxygen remained constant during the oxidation while the pH of the liquid increased over time. This happened because, during air bubbling, oxygen from the air dissolves in the solution and the dissolved oxygen reacts with the sulfide ions. Under a particular set of experimental conditions, the bubbling air supplied oxygen to the solution at a sufficiently high rate that its concentration remained practically constant: the oxygen consumed by the sulfide ions was immediately replenished by absorption from the air bubbles. As sulfide was converted to elemental sulfur and sodium hydroxide, the pH increased up to 13. The effect of air flow rate on sulfide oxidation in the presence of ultrasonic vibration is shown in Figure 2.
B. Effect of Initial Sulfide Concentration
The oxidation process was also investigated at different initial sulfide concentrations. It was observed that oxidation of 800 ppm of sulfide took 50 minutes, while oxidation of 1000 ppm took nearly 60 minutes. The rate of oxidation was higher at higher initial sulfide concentration, although the total time required for complete oxidation increased with the initial concentration. The air flow rate and ultrasonic vibration intensity were fixed at 4 liter/minute and 100%, respectively. In the experiments performed with the setup and process shown in Figure 1, the concentration of sulfide decreased over time while the concentration of sulfur increased. Oxidation of sulfide in the presence of ultrasonic vibration at different initial sulfide concentrations is shown in Figure 3. From these experiments, it is apparent that removing sulfide ions from synthetic wastewater by reaction between sulfide ions and dissolved oxygen in the presence of ultrasonic vibration is quite effective. The synthetic wastewater became turbid during treatment by aeration in the presence of ultrasonic vibration due to the formation of elemental sulfur. The turbidity of the solution was higher at higher initial sulfide concentration, indicating a higher rate of formation of elemental sulfur at high sulfide concentration.
C. Effect of Ultrasonic Vibration Intensity
It was observed that at higher ultrasonic vibration intensity, sulfide oxidation becomes faster. This may be related to the formation of a larger amount of excited sulfide ions as the ultrasonic vibration intensity increases. The larger amount of excited sulfide ions in the liquid allows more reaction with oxygen, and the increased reaction shortens the time of sulfide oxidation. The effect of ultrasonic vibration on sulfide oxidation is shown in Figure 4.
D. Kinetics of Sulfide Oxidation in the Presence of Ultrasonic Vibration
A power law rate equation for sulfide oxidation is proposed in [21]: r = K (P_O2)^m1 [S2-]^m2, where K is the rate constant and m1 and m2 are the orders of reaction with respect to oxygen partial pressure and sulfide, respectively. A plot of the rate of sulfide oxidation against initial sulfide concentration was used to determine the order of reaction with respect to sulfide.
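As a hedged illustration of how the orders m1 and m2 can be extracted from such rate data, the sketch below linearizes the power law by taking logarithms and solves the resulting least-squares problem; all numbers are synthetic placeholders, not the paper's measurements.

```python
# Illustrative extraction of power-law orders by linear regression on logs:
# log r = log K + m1*log P_O2 + m2*log [S2-]. Synthetic placeholder data.
import numpy as np

p_o2 = np.array([0.21, 0.21, 0.21, 0.42, 0.63])   # oxygen partial pressure, atm
s    = np.array([400., 600., 800., 800., 800.])   # sulfide concentration, ppm
rate = np.array([3.1, 3.6, 4.0, 6.3, 8.2])        # oxidation rate, ppm/min

# Design matrix [1, log P_O2, log S] for ordinary least squares
X = np.column_stack([np.ones_like(rate), np.log(p_o2), np.log(s)])
coef, *_ = np.linalg.lstsq(X, np.log(rate), rcond=None)
log_k, m1, m2 = coef
print(f"K = {np.exp(log_k):.3g}, m1 (O2) = {m1:.2f}, m2 (sulfide) = {m2:.2f}")
```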
Fig. 1. Experimental setup for the sulfide oxidation in the presence of ultrasonic vibration. | 1,911 | 2018-06-19T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Fast PET Scan Tumor Segmentation Using Superpixels, Principal Component Analysis and K-Means Clustering
Positron Emission Tomography scan images are extensively used in radiotherapy planning, clinical diagnosis, and assessment of the growth and treatment of a tumor. These all rely on the fidelity and speed of the detection and delineation algorithm. Despite intensive research, segmentation has remained a challenging problem due to diverse image content, resolution, shape, and noise. This paper presents a fast positron emission tomography tumor segmentation method using superpixels. Principal component analysis is applied to the superpixels and their average value. The distance vector of each superpixel from the average is computed in the principal components coordinate system. Finally, k-means clustering is applied on the distance vector to recognize tumor and non-tumor superpixels. The proposed approach is implemented in MATLAB 2016A, and promising accuracy with an execution time of 2.35 ± 0.26 s is achieved. Fast execution time is achieved since the number of superpixels and the size of the distance vector on which clustering is done are small compared to the number of pixels in the image.
I. INTRODUCTION
Positron Emission Tomography (PET) is a non-invasive nuclear medicine functional imaging method that images the distribution of biologically targeted radiotracers with high sensitivity. PET imaging provides detailed quantitative information about many diseases and is often used to evaluate cancer, with segmentation playing a principal role. Image contrast enhancement is an essential pre-processing stage in image segmentation [1]. For several years, great effort has been devoted to the study of image enhancement techniques; wavelet-contourlet transform [2], iterative denoising and partial volume correction [3], and iterative deconvolution [4] are a few among them.
Segmentation can be thought of as two consecutive processes: recognition and delineation. Recognition is determining where the targeted object is in the image, while delineation is defining the spatial extent of the recognized region [5]. The authors of [6], [7] demonstrated that manual segmentation is time-consuming, labor-intensive, operator-dependent, and subjective, which makes it less precise and reproducible. In the recognition process, regions of high uptake of tracer are identified either manually or automatically [8].
Although the number of PET image segmentation publications has always been lower than for both CT and MRI [6], there have been some: graph cut and locally connected conditional random field via energy minimization [9], and a binary and Gaussian filtering regularized level set method capable of detecting weak tumor boundaries [10] were developed. In addition, [11] developed k-means and fuzzy c-means clustering based segmentation; however, clustering was applied on image pixels directly, and this in turn increases the execution time.
PCA-based analysis of the internal statistics of image patches gives tremendous insight into recognizing patterns in an image [12], and has been applied to detect salient objects in natural images.
This paper presents the implementation of an unsupervised automatic PET image segmentation system to detect tumor regions from PET scans. Section 2 presents the mathematical formulation and implementation of the proposed approach, which comprises contrast enhancement, superpixels, and PCA followed by k-means clustering to recognize the cancerous superpixels. Section 3 is devoted to discussion and evaluation of the simulation results. Finally, Section 4 concludes the paper.
II. IMPLEMENTATION
The workflow of the proposed approach is divided into three stages: preprocessing, feature extraction, and clustering segmentation, where the second stage can be divided into three sub-steps and the third into two, as shown in Figure 1.
A. Preprocessing
Image enhancement is a subjective process to make the image suitable for the next step.
In this paper, piecewise contrast enhancement was applied during the preprocessing stage. Upon extensive observation of different images, it was found that stretching pixel values greater than 110 to the gray-value range 200 to 255 using piecewise linear stretching makes the image easy to cluster. This is expressed in equation (1) below:
I_enh = (200/110)·I for 0 ≤ I ≤ 110, and I_enh = 200 + (55/145)·(I − 110) for 110 < I ≤ 255, (1)
where I is the input image and I_enh is the contrast-enhanced image.
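A minimal NumPy sketch of this stretch is shown below; the mapping of the upper interval follows the description in the text, while the handling of the lower interval is an assumption, since the paper specifies only the stretch above 110.

```python
# One plausible implementation of the piecewise linear stretch described in
# the text (pixels above 110 mapped to [200, 255]); the lower-interval
# mapping is an assumption, not taken from the paper.
import numpy as np

def piecewise_stretch(img: np.ndarray, thresh=110, lo_out=200, hi_out=255):
    img = img.astype(np.float64)
    out = np.where(
        img <= thresh,
        img * (lo_out / thresh),                                  # [0,110] -> [0,200]
        lo_out + (img - thresh) * (hi_out - lo_out) / (255 - thresh),  # (110,255] -> (200,255]
    )
    return np.clip(out, 0, 255).astype(np.uint8)
```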
B. Feature Extraction
Feature extraction is a process of simplifying the content of a large set of data in order to describe it efficiently, for the purpose of facilitating further processing, reducing storage requirements, and reducing dimensionality. In this paper, features are extracted using superpixels and Principal Component Analysis (PCA) as described below.
A superpixel is a group of pixels in proximity that have similar intensity. The Simple Linear Iterative Clustering (SLIC) algorithm [13] is applied due to its fast computational time [14], [15]. The sizes of the original superpixels extracted from SLIC differ, as there might be a small number of pixels near each other with similar pixel values in some regions of the image (most of the time in the tumor region), while in the non-tumor part of the image their size will be large. However, we need superpixels of the same size in order to apply PCA. This problem is solved as follows: 1) We compute the average size of the superpixels as shown in equation (2): M = (1/N) Σ_{i=1}^{N} n_i, (2) where N is the number of superpixels, n_i is the number of pixels in the i-th superpixel, and M is the average number of pixels per superpixel.
2) Then, the size of each superpixel is made equal to the average by padding the smaller superpixels and truncating the larger ones. Instead of appending random intensity values to smaller superpixels, we pad by repeating the last pixel value of the superpixel itself. Finally, the superpixel matrix is generated as shown in equation (3): Y = [S_1, S_2, ..., S_N], an M × N matrix, (3) where each column holds the pixels of one superpixel, M is as in equation (2), and N is the number of superpixels. A code sketch of this size-equalization step is given below.
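The following sketch illustrates equations (2) and (3), under the assumption that scikit-image's SLIC is an acceptable stand-in for the SLIC implementation used in the paper; the segment count and compactness values are placeholders.

```python
# Sketch of the superpixel-equalization step (equations (2)-(3)). Assumes a
# grayscale 2D image; n_segments and compactness are placeholder values.
import numpy as np
from skimage.segmentation import slic

def superpixel_matrix(img: np.ndarray, n_segments: int = 700) -> np.ndarray:
    labels = slic(img, n_segments=n_segments, compactness=0.1, channel_axis=None)
    groups = [img[labels == k].astype(np.float64) for k in np.unique(labels)]
    m = int(round(np.mean([g.size for g in groups])))   # eq. (2): average size M
    cols = []
    for g in groups:
        if g.size < m:                                  # pad by repeating last value
            g = np.concatenate([g, np.full(m - g.size, g[-1])])
        cols.append(g[:m])                              # truncate larger superpixels
    return np.column_stack(cols)                        # eq. (3): M x N matrix Y
```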
As the goal is to detect pixels that are cancerous, and we know that in PET images pixels belonging to the tumor have distinct intensity due to high uptake of radioactive tracer, we need a method that analyses the internal statistics of the data and makes it easy to differentiate the cancerous superpixels. PCA is one such method. In addition, PCA reduces the dimensional space of the data [17]. In our implementation, PCA of superpixels is done as follows: 1) Compute the average superpixel: S_a = (1/N) Σ_{i=1}^{N} S_i, (4) where S_i is the i-th superpixel and S_a the average superpixel. 2) Determine the covariance of the superpixels: C_s = (1/N)(Y − Y_a)(Y − Y_a)^T, (5) where Y is the superpixel matrix after average-size padding and Y_a is the matrix whose columns all equal the average superpixel.
3) Calculate the eigenvectors and eigenvalues of the covariance matrix: C_s = P Σ P^T, (6) where P is the matrix with the eigensuperpixels (principal components) as columns and Σ is the diagonal matrix of eigenvalues. 4) Project the superpixels onto the principal components that contain most of the variance of the data. Here, the number of principal components equals the number of superpixels. As stated in [19], the eigenvectors or principal components that contain at least 95% of the variance of the superpixels can represent the whole image with confidence. This reduces the dimensional space, as most of the information is contained in the first two or three largest eigenvalues.
In our implementation, 95% of the variance of the superpixels was contained in the top two principal components for most of the images. Once the K dominant vectors are found for feature extraction (distance), the superpixel matrix is projected onto these dominant eigensuperpixels (eigenvectors) using equation (7): P_proj = P_k^T (Y − Y_a), (7) where P_k is the eigenvector matrix that contains at least 95% of the variation in the image and P_proj is the projection of the superpixel matrix onto P_k. 5) Calculate the distance of each superpixel with respect to the average superpixel.
While computing the distance, we should consider the distribution of superpixels in the principal component coordinate system [12]. To incorporate this concept, we compute the distance along the principal components. Mathematically, this is the L1 norm distance in the principal components coordinate system, as shown in equation (8): D(S_i) = ||S~_i||_1 = Σ_k |S~_i(k)|, (8) where S~_i is the coordinate vector of S_i relative to S_a in the principal component coordinate system and D(S_i) is the L1 norm distance.
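Putting equations (4) through (8) together, a compact NumPy sketch of the feature-extraction step could look as follows; the exact centering convention in equation (5) is our reading of the text, so this is illustrative rather than the paper's MATLAB implementation.

```python
# Sketch of the PCA feature step (equations (4)-(8)): project mean-centered
# superpixels onto components holding >= 95% variance, then take the L1
# distance of each superpixel from the average in that coordinate system.
import numpy as np

def pca_l1_distance(Y: np.ndarray, var_keep: float = 0.95) -> np.ndarray:
    s_avg = Y.mean(axis=1, keepdims=True)        # eq. (4): average superpixel
    Yc = Y - s_avg                               # center each column
    C = (Yc @ Yc.T) / Y.shape[1]                 # eq. (5): covariance matrix
    w, P = np.linalg.eigh(C)                     # eq. (6): eigendecomposition
    order = np.argsort(w)[::-1]                  # sort by decreasing variance
    w, P = w[order], P[:, order]
    k = int(np.searchsorted(np.cumsum(w) / w.sum(), var_keep)) + 1
    proj = P[:, :k].T @ Yc                       # eq. (7): k x N projection
    return np.abs(proj).sum(axis=0)              # eq. (8): L1 distance per superpixel
```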
C. Tumor Detection and Contouring
Currently, there are a variety of PET segmentation methods. The most commonly used are Fuzzy Locally Adaptive Bayesian (FLAB), classification/clustering, and mixtures of them. As stated in [6], there is a growing need for research on clustering based methods, as they have the capability of detecting tumors with complex shapes in heterogeneous PET images. In our work, after the distance vector is calculated in the principal components coordinate system, k-means clustering is applied. K-means is an algorithm that clusters a set of data based on a distance measure. In our case, it separates the superpixels into tumor and non-tumor, a binary classification, using the minimization problem shown in equation (9): argmin_c Σ_{i=1}^{2} Σ_{x∈c_i} ||x − μ_i||^2, (9) where c_i is the set of points that belong to cluster i, μ_i is the center of the i-th cluster, x ranges over the distance vector extracted above, and ||x − μ_i||^2 is the square of the Euclidean distance. Morphological operations (erosion and dilation) are then applied to delineate the spatial scope of the tumor.
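A minimal sketch of this clustering step, using scikit-learn's KMeans on the one-dimensional distance vector, is shown below; the paper's implementation is in MATLAB, so this is only illustrative. It also encodes the heuristic mentioned in Section 3 that the tumor cluster is the one farther from the average superpixel.

```python
# Sketch of the final clustering step (equation (9)) on the distance vector;
# the cluster with the larger mean distance is taken as the tumor cluster.
import numpy as np
from sklearn.cluster import KMeans

def label_tumor_superpixels(dist: np.ndarray) -> np.ndarray:
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dist.reshape(-1, 1))
    tumor_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return km.labels_ == tumor_cluster           # True for tumor superpixels
```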
III. RESULT AND DISCUSSION
Figure 2 shows the input image, with the corresponding enhanced image in Figure 3. It can be clearly seen that the contrast between tumor and non-tumor regions of the image is enhanced. Figure 5 illustrates the L1 norm distance of superpixels from their average along the principal components coordinate system. The horizontal axis represents the superpixel index and the vertical axis the distance from the average superpixel. For the input image in Figure 2, the size of the distance vector is 692, which is far smaller than the size of the image (233×328). This is the reason why the execution time of the proposed approach is so low. Non-tumor superpixels (represented by green points) are located near the average superpixel, while tumor superpixels (red stars) are far from the average. In addition, in the heat map of the distance of superpixels from the average in the image space, the tumor is shown in yellow, clearly distinguishable from the other superpixels by its large distance, as depicted in the color bar. The internal statistics of tumor superpixels are so different from the average that the distance will be very large. This classification can fail if the cancerous part of the image is larger than the normal part. In case this happens, we have included another step that checks some pixel values from each class so that identification of the cluster to which the tumor belongs can be more accurate.
Figure 7 shows the final result of our segmentation algorithm. As depicted in the figure, the cancer region is identified and delineated correctly.
Table 1 contains information about the sizes of 3 sample images (2D) obtained from the 3 scans in the dataset used. Although [16] and [18] tackled a problem similar to the one presented in our work, they did not provide any measurements of the execution time of their algorithms. The main concern of our paper was to design a fast PET tumor segmentation method. As can be seen from the table, the execution time of our proposed approach is very short for the following reasons: first, there is usually a small number of superpixels compared to the number of pixels in the image; second, PCA further reduces the dimension of the data that is then fed as the input to classification. In addition, MATLAB's vectorization capability has been extensively exploited throughout our implementation.
Additionally, the Dice similarity of our algorithm was 84.2%, which is a comparable and competitive value with respect to the works in [16] and [18], which obtained Dice similarity measures of 80%-85% and 92%, respectively.
IV. CONCLUSION
In this paper, we describe and evaluate a PET image segmentation method to extract the cancerous part of the image. Piecewise contrast enhancement was first applied to the input image. Then, superpixel extraction and PCA were performed to extract features for segmenting the image. After that, K-means clustering was applied to classify the image region into cancerous and non-cancerous parts. The experimental results show that the proposed approach is capable of providing robust segmentation with fast execution time.
One of the major challenges encountered was the unavailability of public PET datasets on which to test the algorithm's performance, which is why the algorithm was tested on only a small number of PET images. Testing and tuning the algorithm's parameters on other PET datasets would therefore help increase its generalizability.
Figure 2: Input image. For the input image in Figure 2, it was found that 95% of the variance of the superpixels is contained in the top two eigensuperpixels.
Figure 4: Scatter plot of the projection of the superpixels of the enhanced image onto the principal components.
Figure 6: Heat map plot of superpixel distances in image space and superpixel K-means clustering.
Figure 7: Final result of the proposed segmentation algorithm.
Figure 8: Input images and tumor segmentation results of some test PET images. The first column contains the input images, with the corresponding segmented images in the second column.
Table 1: Scan sample sizes, numbers of superpixels, sizes of distance vectors, execution times, and each scan's average Dice similarity.
"Computer Science"
] |
Detailed Placement and Global Routing Co-optimization with Complex Constraints
With several divided stages, placement and routing are the most critical and challenging steps in VLSI physical design. To ensure that physical implementation problems remain manageable and converge in a reasonable runtime, placement/routing problems are usually further split into several sub-problems, which may cause conservative margin reservation and mis-correlation. Therefore, it is desirable to design an algorithm that can accurately and efficiently consider placement and routing simultaneously. In this paper, we propose a detailed placement and global routing co-optimization algorithm that considers complex routing constraints to avoid conservative margin reservation and mis-correlation in the placement/routing stages. Firstly, we present a fast preprocessing technique based on R-trees to improve the initial routing results. After that, a BFS-based approximate optimal addressing algorithm in 3D is designed to find a proper destination for cell movement. We propose an optimal region selection algorithm based on the partial routing solution to jump out of locally optimal solutions. Further, a fast partial net rip-up and reroute algorithm is used in the process of cell movement. Finally, we adopt an efficient refinement technique to reduce the routing length further. Compared with the top 3 winners on the 2020 ICCAD CAD contest benchmarks, the experimental results show that our algorithm achieves the best routing length reduction for all cases with a shorter runtime. On average, our algorithm improves on the first, second, and third place by 0.7%, 1.5%, and 1.7%, respectively. In addition, we can still obtain the best results after relaxing the maximum cell movement constraint, which further illustrates the effectiveness of our algorithm.
Introduction
In recent years, with the rapid development of integrated circuit manufacturing processes, the geometric dimensions of integrated circuits have been continuously reduced, and the integration level has continued to increase. Coupled with the limitations of storage space and the packaging process, the scale of very large scale integration (VLSI) design has increased dramatically. Physical design is one of the key aspects of VLSI design and is the core of electronic design automation (EDA) tools. It mainly includes the following stages: partitioning, floorplanning, placement, and routing [1].
Placement and routing are the most critical and challenging steps in VLSI physical design. They are typical large-scale NP-hard problems that significantly impact the performance indicators of integrated circuits. To ensure that physical implementation problems remain manageable and converge in a reasonable runtime, placement/routing problems are usually split into several sub-problems: global placement, legalization, detailed placement, global routing, and detailed routing. The global placement stage finds a location for each cell to optimize some performance metric (for example, the total wirelength) while ignoring some cell overlaps. The legalization stage eliminates all overlaps while preserving the global placement result as much as possible. The detailed placement stage further optimizes the result of legalization by moving cells. In the global routing stage, all nets are routed on a coarse grid map, and the approximate routing of all nets is determined; that is, a routing range is allocated for each net. Guided by the global routing result, the detailed routing stage determines the specific routing of each net while satisfying all design rules.
Previous Works
Detailed placement is a discrete optimization problem that is also crucial to the quality of the placement solution. By legally relocating the movable cells, detailed placement can improve the solution while satisfying some design constraints, such as routing congestion or placement density [2]. One of the most commonly used methods for detailed placement is the sliding window technique. The branch-and-bound placer [3] reorders adjacent cell groups in a row by the sliding window technique, where the cells are optimally reordered in each window. Another important method is cell matching. NTUplace3 [4] proposes to find a set of exchangeable/independent cells in a given window and formulates a bipartite matching problem by assigning cells to available slots in the window. The cell moving/swapping technique is also a beneficial and effective method for detailed placement. FastPlace-DP [5] moves/swaps cells to their optimal locations without overlapping or changing other cells. After finding the optimal region, the cell is exchanged with other cells or white space in the optimal region. The overlap penalty is estimated by the distance needed to shift the surrounding cells to a legalized position. The difference between the total wirelength before and after the exchange, together with the penalty charged on the increased overlap, is the measure for selecting the cell or space in the optimal region. In addition, some detailed placers try to improve routability while reducing wirelength. For example, RippleDP [6] uses congestion-aware FastPlace-DP to avoid swapping/moving cells into possibly congested routing regions. After moving cells to the optimal HPWL regions, the locations can be locally improved by inter-row moves, cell reordering, and compaction. However, these methods seldom consider routability, and there may still be greater congestion in the subsequent global routing stage.
Traditionally, global routers route a path for each net on the fixed placement result of a detailed placer. There are two strategies for performing the global routing process on the 3-dimensional structure. One is to solve the routing problems on the 3D routing grids directly. FGR [7], which is based on the discrete Lagrange multipliers technique, can obtain a good 3D routing result at the cost of an extremely long runtime. GRIP [8] applies integer programming to minimize wirelength and via cost simultaneously without a layer assignment phase. GRIP also consumes too much runtime to be practical. Recently, CUGR [9] has made great use of the 3D structure of the grid graph with a probability-based cost scheme, 3D pattern routing, and multi-level 3D maze routing. The other approach is to transform the 3D routing grids into 2D grids. FLUTE [10] is conventionally employed to decompose each multi-pin net into a set of two-pin nets to generate an initial solution. After performing 2D global routing, the 2D solutions are extended to 3D solutions with layer assignment techniques. Most global routers adopt this two-step routing strategy and achieve high-performance routing results, such as NCTU-GR 2.0 [11], FastRoute 4.0 [12], NTU-GR [13] and NTHU-Route 2.0 [14]. However, these routers route on a fixed placement result that does not allow cell movement. Thus, global routing information cannot be fed back to the placement to optimize the wirelength further.
However, this divide-and-conquer approach may cause information asymmetry between sub-problems. For example, a placer should systematically guide a router to avoid congestion and achieve high routability by considering cell density or pin density. But the cell density or pin density of the placement stage may not accurately depict the actual track density of the routing congestion problem. To bridge the gap between placement and routing, previous works on IPR [15], GRPlacer [16], CRISP [17] and FastRoute [18] all combine a fast global router within their placer to offer accurate wirelength estimation. SRP [19] considers routing and placement simultaneously based on a given placement and global routing result to relocate cells that obstruct routability. The work [20] proposes an ILP-based cell movement method to move cells and route nets at the same time after global routing. In that work, the median point of all the cells in the connected nets is chosen as the candidate location, and an integer linear programming (ILP) model is constructed according to the possible routings. In the model, cells that do not belong to the same net are allowed to move at the same time. By dividing the region, the size of the ILP model can be reduced, and the independent areas can benefit from parallel processing. The wirelength can be improved significantly, even when only 2% of the cells are moved. However, there are two major drawbacks of their proposed algorithm: (1) the runtime of ILP is sensitive to the quality of the initial solution according to their experimental results, so an inferior initial routing solution and placement can greatly increase the runtime of their algorithm; and (2) their method has poor scalability due to the high complexity of solving ILP, and the method is also time-consuming, even when only 2% of the cells are moved and the problem is handled region by region.
Furthermore, to alleviate the misalignment between placement and routing, the 2020 ICCAD [21] held a CAD contest called routing with cell movement, which explored how placement and global routing could cooperate to optimize the routing length further. Cell movement is allowed during the global routing process instead of routing a path for each net on a fixed placement result. Namely, within the time limit of the contest, the global router can move certain cells from one grid to another if all the given routing constraints can still be satisfied while the wirelength is further reduced. This makes the problem more complicated, and how to solve it efficiently is a huge challenge. The work [22] proposes an incremental 3D global routing engine considering cell movement and complex routing constraints to relocate cells and reroute nets. Firstly, Ref. [22] uses a congestion-aware 3D global router to reconnect all the pins of each net with minimized wires and vias. Then, a wirelength-driven movement evaluation method is proposed to find the desired locations for movable cells. Finally, cell-movement-driven incremental routing moves and routes all candidate positions in parallel and determines the desired routing paths that consume the minimum routing resources without any routing violation.
Our Work
In this paper, we propose an effective cell movement method with efficient incremental routing, which can co-optimize detailed placement and global routing simultaneously to obtain an optimal solution. The main contributions of our work are summarized as follows: • We propose an improved batch scheduling method which speeds up the scheduling of nets into disjoint batches by 70× on the contest benchmarks. Further, by combining FLUTE and maze routing, we propose a fast and effective preprocessing and refinement strategy; • To find a proper destination for cell movement, a BFS-based approximate optimal addressing algorithm in 3D is designed. Further, we propose an optimal region selection algorithm based on the partial routing solution to jump out of locally optimal solutions; • According to the requirements of our work, four partial rip-up strategies for routing length optimization are presented to make a trade-off between quality and efficiency.
Unlike previous works, we present a new routing cost function to consider this problem better. In addition, to improve the rerouting efficiency, we use the A* and the multi-source multi-sink maze routing algorithms to perform partial rerouting operations jointly; • Compared with the top 3 winners according to the 2020 ICCAD CAD contest benchmarks [21], experimental results show that our algorithm achieves the best routing length reduction for all cases with a shorter runtime. On average, our algorithm can improve 0.7%, 1.5%, and 1.7% for the first, second, and third place, respectively. In addition, we can still get the best results after relaxing the maximum cell movement constraint, which further illustrates the effectiveness of our algorithm.
The remainder of this paper is organized as follows. Section 2 describes the problem statement and our algorithm flow. Section 3 gives the preprocessing scheme of the initial routing result. Section 4 introduces our partial rip-up, destination selection and partial reroute algorithm. Section 5 presents our refinement approach. Section 6 shows the experimental results. Finally, conclusions are made in Section 7.
Problem Description
In the detailed placement stage, the placement result is usually improved by moving or swapping cells while maintaining the legality between cells. In this paper, we consider the cell movement problem on a given placed and routed design, as presented in the ICCAD'20 CAD Contest [21]. In this problem, routing resources, including pins and nets, are abstracted onto a 3D grid graph called gGrids (global grids), on which cell movement and 3D routing are performed. The numbers of rows N_r and columns N_c of the gGrids are the same for all routing layers and are given. The number of routing layers is given as N_l, and a via (vertical interconnect access) is simply modeled as z-direction routing.
The capacity c(u) is defined as the maximum number of routing tracks that can cross the gGrid u. Given the default capacity value of the gGrids on each layer, the capacity of certain gGrids may be increased or decreased relative to the default. Traditionally, the demand d(u) is defined as the actual number of routing tracks crossing the gGrid u. In this problem, the demand d(u) of a gGrid u is the sum of four parts: routing segment demand, all blockage demands, extra demand in the same gGrid, and extra demand in adjacent horizontal gGrid(s). (1) Routing demand is calculated as the number of nets that have a routing segment in this gGrid. It should be noted that the number of routing segments of a single net crossing one gGrid has no additional effect on the routing demand (it always counts as one demand); (2) The blockage demand of a cell is added to the gGrid where the cell is located, and changes as the location of the cell changes; (3) When a certain pair of cells is placed in the same gGrid, extra demand is needed for this gGrid; (4) When a certain pair of cells is placed in adjacent horizontal gGrids, both adjacent gGrids need extra demand. Congestion happens when the demand d(u) exceeds the capacity c(u) assigned to the gGrid u. The resource r(u) is defined as the difference between the routing capacity and the demand, i.e., r(u) = c(u) − d(u). If r(u) < 0, there are insufficient resources in gGrid u, which is called routing overflow.
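The bookkeeping described above is straightforward to express in code. The sketch below models one gGrid with the four demand components and the derived resource/overflow quantities; the field names are illustrative assumptions rather than the contest's data format.

```python
from dataclasses import dataclass

@dataclass
class GGrid:
    capacity: int        # c(u): max routing tracks crossing this gGrid
    routing_demand: int  # number of nets with a routing segment here
    blockage_demand: int # blockage demand of the cells located here
    extra_same: int      # extra demand from cell pairs in the same gGrid
    extra_adjacent: int  # extra demand from cell pairs in adjacent gGrids

    def demand(self) -> int:            # d(u): sum of the four parts
        return (self.routing_demand + self.blockage_demand
                + self.extra_same + self.extra_adjacent)

    def resource(self) -> int:          # r(u) = c(u) - d(u)
        return self.capacity - self.demand()

    def overflow(self) -> bool:         # r(u) < 0 means routing overflow
        return self.resource() < 0
```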
According to the given initial global routing result and the circuit netlist N, the movable cells can be moved from one gGrid to another, and the broken routing paths of the connected nets can then be re-connected incrementally, with all the given routing constraints satisfied and the total routing length minimized. The routing length is calculated as the number of gGrids that all nets span (a via counts the same as routing in the other directions). The given routing constraints of the problem that should be satisfied are listed as follows. • Maximum cell movement constraint C3: In order to maintain the information of the given placement result and avoid generating a completely altered placement, the total number of cells moved during cell movement is constrained to 30% of all cells; • Net-based minimum layer constraint C4: The net e_j may have a minimum layer routing constraint min_{l,j}. Pins whose z-coordinates are smaller than the minimum layer constraint need to be connected to the minimum layer through vias, and the H/V-direction routing of this net may only be on or above the given minimum layer; • Layer routing direction constraint C5: The routing direction is horizontal on the first layer M1, and it differs between any two adjacent layers. In other words, H/V-direction routing must route on the odd/even layers, respectively. Figure 1 shows the overall flow of the proposed approach, which consists of three major stages: R-tree-based fast preprocessing, incremental rerouting with cell movement, and routing length driven refinement. In the preprocessing stage, we first present improved scheduling for parallel routing based on R-trees. After that, a greedy selection strategy is used to accept solutions with routing length reduction. During the incremental rerouting with cell movement stage, four partial rip-up strategies are proposed to make a trade-off between quality and efficiency while removing a cell. According to the different partial rip-up strategies, a BFS-based approximate optimal addressing algorithm in 3D and an optimal region selection based on the partial routing solution are proposed to find the candidate destinations of the removed cell. A partial rerouting algorithm that hybridizes A* and multi-source multi-sink maze routing is proposed to find the optimal destination of cell movement in parallel. Finally, an efficient refinement is adopted to reduce the routing length further.
R-Tree-Based Fast Preprocessing
In the global routing stage, complex net structures, unreasonable routing, or infeasible rip-ups result in closed loops and needless nodes. Such redundant routing results increase the routing length and make the region congested. Firstly, we mark all the net points in the bounding box as unvisited, and the topology of the tree is built as in Figure 2a. A-F in Figure 2 are the grids that the net passes through. Secondly, the depth-first search (DFS) technique is used to mark the visited nodes in Figure 2b, and the nodes that have no pin are removed in the process of backtracking in Figure 2c,d. After the above operations, the closed loops of nets are broken and the redundant nodes are deleted. More importantly, the routing length and congestion are significantly improved.
According to the bounding boxes of the given initial routing result, we build R-trees [23] and later query nets with disjoint borders from the R-trees. Similar to [24], we schedule all the batches in our work using Algorithm 1. Since conflicts are more likely to occur between large nets, line 1 sorts all nets in decreasing order of bounding-box size. Nets are assigned one after another by joining an existing batch or building a new batch (lines 2-18), thus minimizing the number of batches. R-trees are used to judge the overlap between a net bounding box and a candidate batch. In practice, we found that most of the R-tree queries in the later stage of the original algorithm failed, which wasted a lot of time. Therefore, lines 9-11 add criteria to judge whether enough nets have been added to a batch. Since nets with shorter wirelength have a smaller solution space and a larger number of pins makes a net difficult to route, line 19 reorders the batch list. In this way, the total scheduling runtime can be improved by 70× (detailed comparisons are shown in Section 6.2). Figure 3 shows an example of our scheduling, where red and green rectangles represent different batches.
Algorithm 1 Improved Batch Scheduling.
Input: Nets;
Output: BatchList;
1: Sort all nets in decreasing size of the bounding boxes;
2: for each net e_i do
3:   for each batch b_j in BatchList do
4:     if batch b_j is full then
5:       continue;
6:     end if
7:     if the bounding box of e_i has no overlap with b_j then
8:       Add e_i into b_j;
9:       if nums(b_j) ≥ n_b or A_cur/A_total > t then
10:        b_j ← full;
11:      end if
12:      break;
13:    end if
14:    if e_i has not been assigned to any batch then
15:      Build a new batch and add e_i;
16:    end if
17:  end for
18: end for
19: Sort the BatchList with shorter wirelength and a larger number of pins first;

Compared with the congestion-aware 3D global routing in the work [22], we use a greedy method mixing FLUTE and maze routing in each batch to optimize the initial solution. Our greedy preprocessing algorithm for the initial global routing result is shown in Algorithm 2. Firstly, line 4 uses a very fast and accurate rectilinear Steiner minimal tree (RSMT) algorithm called fast lookup table estimation (FLUTE) [10]. A net-breaking technique is used for high-degree nets to reduce the net size until the lookup table can be used. In addition, line 5 uses an edge shifting technique to direct routing demand away from congested regions by moving some tree edges without increasing wirelength [25]. After that, all Steiner trees are broken into 2-pin nets, which give better results in the 2D layout. Thus, we use L-shaped pattern routing and layer assignment to rapidly obtain a reasonable 3D routing result (lines 6-8). During multi-layer global routing, Ref. [26] adopted dynamic programming to find a layer assignment result such that the via cost is minimized while the given congestion constraints are satisfied. Lines 9-12 accept the result if the solution has no overflow and is shorter than the initial result. Otherwise, we use maze routing [25] to reroute the whole net within the 3D boundary. Maze routing is the most popular and powerful technique in global routing for finding a path while avoiding congestion. Guided by a cost function, maze routing finds the shortest path connecting two pins through the fewest congested grids. The cost function will be introduced in Section 4.3.
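To make the batch construction concrete, here is a minimal sketch of the non-overlap test using one R-tree per batch. It assumes the Python rtree package and a simple (net_id, bounding box) input; the batch-size cap stands in for the fuller criteria of lines 9-11 (the area-ratio test and the final reordering of line 19 are omitted).

```python
from rtree import index  # libspatialindex wrapper; one R-tree per batch

def schedule_batches(nets, max_batch_size=24):
    """Greedy batch scheduling in the spirit of Algorithm 1: a net joins
    the first batch whose members' bounding boxes it does not overlap.
    nets: list of (net_id, (minx, miny, maxx, maxy)) pairs (hypothetical)."""
    nets = sorted(nets, key=lambda n: (n[1][2] - n[1][0]) * (n[1][3] - n[1][1]),
                  reverse=True)                    # big nets first (line 1)
    batches, trees = [], []
    for net_id, bbox in nets:
        for b, tree in enumerate(trees):
            if len(batches[b]) >= max_batch_size:  # batch marked full
                continue
            if next(tree.intersection(bbox), None) is None:  # no overlap
                tree.insert(net_id, bbox)
                batches[b].append(net_id)
                break
        else:                                      # open a new batch
            t = index.Index()
            t.insert(net_id, bbox)
            trees.append(t)
            batches.append([net_id])
    return batches
```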
Incremental Rerouting with Cell Movement
In this section, we introduce our partial rip-up, destination selection, and partial rerouting algorithm. The specific process is as follows. Firstly, for each cell we calculate the bounding-box wirelength that can be saved by moving it to its optimal region [5], and sort the cells in decreasing order of this value. For each cell, we partially rip up the connected nets and find the candidate destinations. For each candidate destination, we first update the extra demand and check the no-overflow constraint C1, and then reroute the remaining routing paths to the destination gGrid to obtain the routing length reduction. Since at most one destination will be selected, these rerouting processes can run in parallel.
Partial Net Rip-Up with Cell Removal
For cell movement, the nets that connect to the pins of the relocated cell need to be ripped up before being rerouted. However, dismantling the entire net inevitably brings a lot of unnecessary recalculation, because some parts of the nets are not directly connected with the pins of the removed cell or are away from the congested region. It saves time and computation to retain those parts of the net that have little effect on the rerouting of the relocated cell. Therefore, for different conditions, we propose a novel method that reuses different parts of the routing paths, which is suitable for our problem. This method is more comprehensive than the previous works [22] and SRP [19], where [22] keeps the remaining wires in one connected component and [19] does not consider the impact of the Steiner points. For convenience, we introduce these schemes in this section. When we delete the routing path connected to the pin on the cell to be moved, four cases are considered, as follows (the detailed illustration is shown in Figure 4). For simplicity, we only show the 2D rip-up cases. For the 3D case, vias above the minimum layer are treated as normal paths, and vias below the minimum layer may be removed together with the pin (vias used by other pins can still be reserved). Assuming that there are n nodes in a single net, by recursively traversing the nodes, we can dismantle the unwanted part of the net in O(n) time.
Case R1: In Figure 4a, grid (7, 3) contains two pins. After removing the red removed pin, we do not rip up any paths connecting to the grid, as in Figure 4b. If there is a minimum layer constraint on this net and there is a via in this grid, only the via of the other pin is reserved.
Case R2: In Figure 4c, grid (0, 3) contains only a removed pin, and we delete the connected paths from this pin until we reach a grid that contains a pin or a Steiner point, as in Figure 4d. In this case, the remaining paths may be divided into multiple subnets, whose number equals the degree of this pin. We will connect these subnets and the relocated pin after finding the new location.
Case R3: In Figure 4e, grid (0, 3) contains a removed pin whose degree is larger than one. When the remaining paths need to stay connected (which is discussed in Section 4.2.1), we do not rip up any paths connecting to the grid, as in Figure 4f.

Case R4: In Figure 4g, grid (3, 0) contains a removed pin, and we delete the connected paths from this pin until reaching a grid that contains a pin, or until the second passed Steiner point, as in Figure 4h. Compared with case R2, this case destroys the local topology, and we believe that the construction of the first passed Steiner point largely depends on the position of the removed pin. For example, if the removed pin were located at grid (3, 4), the Steiner point in grid (3, 3) would not guarantee the shortest length of the net.
Destination Selection of Cell Movement
In our work, we select one cell to remove at a time, and find an optimal moving position to obtain the maximum routing length reduction. Our goal differs from the previous work [19], where the purpose of SRP is to optimize routability. In this contest, we need to optimize the routing length as much as possible without causing routing overflow. To achieve this goal, we propose the following two candidate destination selection schemes.
BFS-Based Approximate Optimal Addressing Algorithm in 3D
To reduce the number of routing operations, which are extremely time-consuming, we need to approximate the routing process as accurately as possible when selecting the destination. Based on the routing constraints (layer direction, minimum layer, via reuse, overflow), we propose a breadth-first search (BFS)-based approximate optimal addressing algorithm in 3D in Algorithm 3. In this algorithm, we divide the routing range into two parts: the part on and above the minimum layer uses a 3D search strategy, while the part below the minimum layer is computed directly, which significantly reduces the number of search calculations.
Obviously, if the cell is moved beyond the outer border of its current routing paths, the routing length is almost impossible to reduce. The range [x_l, y_b] × [x_r, y_t] is therefore obtained from the bounding box of all paths in the connected nets E_i. For each net e_j ∈ E_i, lines 5-9 first execute different rip-up strategies according to the situation to obtain the remaining paths. Since multiple subnets are searched together, it is challenging to ensure efficiency while considering the Steiner points. Therefore, if the net e_j has another pin in the same grid (x_i, y_i, z_min_j) (where z_min_j denotes the pin's z-coordinate on the minimum layer of net e_j), or the degree of this pin is greater than one, we need to ensure that each remaining path is still connected in this method (see Figure 4b,f). Otherwise, we delete the connected paths from this pin until reaching a grid that contains a pin or a Steiner point. Furthermore, line 10 calculates the routing length rl_j of the removed paths and the removed vias Δvia_{j,(x_i,y_i)} that have no overlap with other vias in this net.
For each net e_j, the z-range [z_b, z_t] is obtained from the bounding box of the z-direction of e_j on the minimum layer, where z_t must be larger than z_b because of the different layer directions (if the congestion is severe, we extend z_t = z_t + 1). Lines 12-16 add the remaining paths to the queue q, mark them as visited, and set the cost dis_j of each such gGrid p to 0. Lines 17-30 pop the gGrids from the queue one by one and search the adjacent gGrids according to the direction of the layer. If an adjacent gGrid is unvisited, it is marked as visited and its cost is increased by 1; it is then added to the queue unless its demand equals its capacity (which means that no path can pass through this gGrid). The operation is repeated within the given search range [x_l, y_b, z_b] × [x_r, y_t, z_t] until the queue is empty. Finally, line 31 takes the cost dis_{j,z_min_j} of the layer where the pin z_min_j is located. Then, we add the length of the required vias to each destination, and deduct the length of the overlapping part if the vias of other pins can be reused. After searching all the nets E_i, the cost dis(x, y, z) represents the total routing length when the cell moves to destination (x, y, z), and rl is the total ripped-up routing length. Even though lines 24-26 account for congested areas that a single net cannot pass through, multiple routing paths may pass through an almost-overflowing area at the same time, so the actual routing length can be larger than dis(x, y, z). Therefore, when dis(x, y, z) is less than rl, line 35 adds grid(x, y, z) to the priority queue C.
Algorithm 3 The 3D BFS-Based Approximate Optimal Addressing Algorithm.
Input: Removed cell i, the connected nets E_i, rip-up routing length rl = 0;
Output: Candidate destination priority queue C;
1: x_i, y_i ← the origin location of removed cell i;
2: x_l, x_r ← the left, right borders of all paths in nets E_i;
3: y_b, y_t ← the bottom, top borders of all paths in nets E_i;
4: for net e_j ∈ E_i do
5:   if another pin is in grid(x_i, y_i, z_min_j) or degree > 1 then
6:     ripupSet_j ← keep all paths by R1 or R3;
...
12:  for p ∈ ripupSet_j do
13:    q.push(p);
14:    visited(p) ← true;
15:    dis_j(p) ← 0;
16:  end for
17:  while q ≠ ∅ do
18:    for grid_cur ∈ q do
19:      grid_cur.pop();
20:      for grid_adj ∈ direction(grid_cur, z_cur) do
21:        if grid_adj ∈ [x_l, y_b, z_b] × [x_r, y_t, z_t] && !visited(grid_adj) then
22:          visited(grid_adj) ← true;
23:          dis_j(grid_adj) ← dis_j(grid_cur) + 1;
24:          if d(grid_adj) < c(grid_adj) then
25:            q.push(grid_adj);
...
35: Add (x, y, z) to C;
36: end if
37: end for

An example of this algorithm is shown in Figure 5. In the figure, we want to search for the candidate destination of the removed pin, whose z-coordinate is M1 and whose minimum layer is M5, within the bounding box of the existing routing path. In Figure 5a, the red and green lines represent the routing paths on the minimum layer and the vias, respectively. In our algorithm, we separate the routing region by the minimum layer to improve efficiency. On the minimum layer, we set the routing paths in Figure 5a to 0, and use BFS to find the distance while the layer direction is satisfied, as in Figure 5b. After that, since the minimum layer constraint for the removed pin is M5, we take M5's distance map and add the length of the required vias to each gGrid in Figure 5c. In particular, when a via can be reused, only the length of the newly added non-overlapping part needs to be added. In this algorithm, the nets in E_i are unrelated to each other and can be processed in parallel. In addition, dividing the search range according to the minimum layer eliminates a lot of search space, which makes the algorithm more efficient. Different from the distance formulation in the previous work [22], our direct search method is closer to the real routing process. Even though our method spends more time than [22], a more accurate destination selection may reduce the time for subsequent reroutes.
Here, we analyze the complexity of the algorithm. For each net e_j, there are V = (x_r − x_l) × (y_t − y_b) × (z_t − z_b) gGrids in the search region; since the BFS visits each gGrid at most once, the cost per net is O(V).
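For illustration, the core BFS over the search region (lines 12-30 of Algorithm 3) can be sketched as follows. The grid interface (coordinate tuples with demand/capacity dictionaries) is a hypothetical simplification; handling of the minimum layer, via lengths, and via reuse is left out.

```python
from collections import deque

def bfs_min_layer_cost(seeds, lo, hi, demand, capacity):
    """BFS over the 3D search region. seeds: gGrids of the remaining paths,
    given cost 0; lo, hi: region corners (x_l, y_b, z_b) and (x_r, y_t, z_t);
    demand/capacity: dicts keyed by (x, y, z). Horizontal moves on odd layers
    and vertical moves on even layers follow constraint C5; vias move in z.
    All names are illustrative."""
    dist = {p: 0 for p in seeds}
    q = deque(seeds)
    while q:
        x, y, z = q.popleft()
        steps = [(0, 0, 1), (0, 0, -1)]                  # vias
        steps += [(1, 0, 0), (-1, 0, 0)] if z % 2 == 1 else [(0, 1, 0), (0, -1, 0)]
        for dx, dy, dz in steps:
            nxt = (x + dx, y + dy, z + dz)
            if not all(lo[i] <= nxt[i] <= hi[i] for i in range(3)) or nxt in dist:
                continue
            dist[nxt] = dist[(x, y, z)] + 1              # lines 22-23
            if demand.get(nxt, 0) < capacity.get(nxt, 0):
                q.append(nxt)        # lines 24-25: full gGrids get a cost
    return dist                      # dis_j(x, y, z)    # but are not expanded
```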
Optimal Region Selection Using Partial Routing Solution
In the previous section, due to the limitation of the search method, we required the remaining paths to be connected. This makes the new destination of the cell depend on the previous topological structure, and it is easy to fall into a locally optimal solution. In most cases, the structure of the first Steiner point directly connected to the removed cell is largely related to the cell location. Therefore, we adopt the R4 strategy, which deletes the connected paths from this pin until reaching a grid that contains a pin or until the second passed Steiner point, in the hope of constructing a better topology according to the new location of the cell. In this case, a net may be divided into multiple disconnected subnets. Therefore, we improve the optimal region technique of the previous work [5] to find the candidate destination of the cell.
In the previous work [5], if only one cell i is allowed to move, the region with the optimal wirelength after placing the cell is defined as the "optimal region" of this cell. This region is determined by the median idea of the work [27]. Figure 6a shows the optimal region obtained by this method. For the movable cell i, we traverse all the connected nets and find their bounding boxes (not including this cell). For each net j, the left, right, lower, and upper boundaries are denoted by x_l^j, x_r^j, y_l^j, and y_u^j, respectively. In the figure, there are three nets connecting to cell i. There are 5, 4, and 3 cells (denoted by diamonds) in nets 1, 2, and 3, respectively. The bold dotted boxes are the bounding boxes of the nets excluding cell i. Following [27], the optimal region [x_r^2, y_l^2] × [x_l^3, y_u^2] is given by the medians of the x-series (x_l^1, x_l^2, x_r^2, x_l^3, x_r^3, x_r^1) and the y-series (y_l^3, y_u^3, y_l^2, y_u^2, y_l^1, y_u^1) of the bounding boxes. At any gGrid in this optimal region, the sum of the distances to the bold dotted bounding boxes is equal, and smaller than at any other gGrid.
In the previous work, the optimal region was only related to the cell's position, and the minimum estimated wirelength may have a large gap from the actual routing length. In this work, we have access to the cell's position as well as the actual routing solution. The information in the routing paths usually reflects routing constraints, such as layer direction and congestion. For example, in Figure 6b, we consider the routing paths on the basis of Figure 6a. In the figure, the straight lines represent the remaining paths, and the dashed lines represent the paths removed when removing cell i. In net 1, the cells at y_u^1 are routed downward instead of being connected by a horizontal line because they are affected by the layer direction constraint. In this case, the optimal region is [x_r^2, y_l^1] × [x_l^3, y_l^2], which is smaller than the region in Figure 6a. The best moving destination of the removed cell is (x_r^2, y_l^2) in both figures. In some complex situations, the original method may even miss the correct location. In particular, we prioritize the gGrids where vias can be reused in the optimal region. In general, this improved method considers routing constraints as much as possible, and the runtime is not increased while the results are optimized. Figure 6. The optimal region obtained by: (a) the method presented in the work [5]; (b) the improved method in our work.
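A minimal sketch of the baseline median construction is given below (the routing-aware shrinking of the region, which is the contribution of this subsection, is not modeled). The input format, one (x_l, x_r, y_l, y_u) bounding box per connected net, is an illustrative assumption.

```python
from statistics import median_low, median_high

def optimal_region(net_bboxes):
    """Median-based optimal region for a single movable cell, following the
    construction of [5,27]. net_bboxes: per-net bounding boxes
    (x_l, x_r, y_l, y_u) of the connected nets, each computed excluding the
    cell itself. Returns the two opposite corners of the region."""
    xs = sorted(v for x_l, x_r, _, _ in net_bboxes for v in (x_l, x_r))
    ys = sorted(v for _, _, y_l, y_u in net_bboxes for v in (y_l, y_u))
    # with an even number of values, the two middle elements span the region
    x_lo, x_hi = median_low(xs), median_high(xs)
    y_lo, y_hi = median_low(ys), median_high(ys)
    return (x_lo, y_lo), (x_hi, y_hi)
```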
Partial Rerouting by A* and Maze Routing Algorithms
A complete routing tree is built by rerouting the several disconnected subnets together. Before presenting the routing algorithm, we first give our cost function and briefly explain some basic routing operations. In our problem, vias are simplified to routing in the z-direction. Thus, the cost function presented in the work [9] is given in Equation (1), where wl(u) is the wirelength cost, and the term on the right forms the congestion cost. d(u)/c(u) and r(u) represent the possibility of overflow and the resource, respectively. α determines the weight of the congestion term, and the variable β of the logistic function determines the global router's sensitivity to overflow. In this problem, there is already a legal initial routing solution, and the objective of rerouting is to reduce the routing length without causing routing overflow. To make solving easier, we route with multiple iterations each time until we obtain a solution without overflow. The cost function in our work is modified as in Equation (2), where iter ∈ {1, 2, 3} is the iteration of the routing process, γ is a penalty factor that avoids routing through a gGrid that is about to overflow, and θ is a positive integer that controls the available capacity. We remove d(u)/c(u) because none of the grids overflow (so d(u) ≤ c(u) always holds) in this problem. To reduce the routing length as much as possible, we should not treat gGrids differently as long as there are sufficient resources. We only need to avoid crossing gGrids where the demand is close to the capacity. To avoid unnecessary searches, we only reroute inside the bounding box that the original routing path of the net passed through. Since the pins are usually on the lower layers, the higher metal layers are usually not used for 3D routing. Therefore, the congestion of the lower layers is greater than that of the higher layers. If a solution without overflow cannot be found, we expand the search range in the z-direction as the iteration increases. We do not expand in the x, y-directions because we prefer routing with less congestion when the same routing length would be added.
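The display forms of Equations (1) and (2) did not survive extraction here, so the following sketch is only one plausible reading of the modified cost, reconstructed from the surrounding prose; the flat penalty shape and all constants are assumptions, not the paper's definitive formula.

```python
def modified_cost(wl_u, r_u, iteration, gamma=50.0, theta=2):
    """One plausible reading of Equation (2): unit wirelength cost plus a
    flat penalty gamma on gGrids whose remaining resource r(u) is below a
    threshold controlled by theta, relaxed as the iteration count grows so
    that later iterations may use more of the capacity. The penalty shape
    and the constants here are assumptions (see the lead-in text)."""
    threshold = max(theta - (iteration - 1), 1)   # iteration in {1, 2, 3}
    return wl_u + (gamma if r_u < threshold else 0.0)
```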
Among the current global routing tools, the most popular technique is maze routing with multiple sources and multiple sinks. In this problem, the goal is to connect the removed cell and multiple subnets (in most cases, no more than 3). We use multi-source multi-sink maze routing [25] to generate good routing solutions for the multi-pin nets. The time complexity is O(V log V), where V is the number of gGrid points in the search region. This method considers the existing routing tree instead of restricting the two endpoints of the routing path to be the original endpoints of the edge being routed. We treat the removed cell as the source and all the gGrid points on the remaining paths as sinks. Similar to Dijkstra's algorithm, when a gGrid point is extracted from the priority queue, its cost is the shortest distance from the sources to this gGrid point. Once a gGrid point of a sink is extracted from the priority queue, a new source set is constructed from the old sources, the shortest path, and the encountered subnet. The search process is performed again until all the gGrid points are connected.
However, in our work, the difference is that we only partially rip up the net. For example, we adopt the R2 rip-up strategy when rerouting to the candidate destinations obtained in Section 4.2.1 (the worst case is to connect the disconnected subnets as in the R3 situation and then connect to the target gGrid; the result will therefore be no worse than the estimate); for the candidate destinations obtained in Section 4.2.2, the R4 rip-up strategy is adopted. For the case where a cell connects to a single subnet, we can use the A* algorithm [28] to improve efficiency. The A* algorithm has been applied to global routing [29]. It is the most effective direct search method for solving the shortest path in a static road network, and it is also a practical algorithm for many search problems. The closer the estimated distance is to the actual value, the faster the search. In our method, we use a priority queue to select the gGrid (x, y, z) with the current lowest cost, and then use the heuristic function Cost_astar of Equation (3) to guide the search direction of the algorithm: Cost_astar = Cost_predict + (Cost_cur + Cost_step(x, y, z)), where Cost_predict, Cost_cur, and Cost_step(x, y, z) represent the minimum cost estimate to the target gGrid, the current cost, and the step cost from the current gGrid to the next gGrid, respectively. If Cost_predict is smaller than the actual routing length, the optimal solution can be obtained, but the search range is large and the efficiency is low. If Cost_predict is equal to the actual routing length, the search efficiency is the highest and the solution is optimal. Cost_predict is obtained by 3D distance estimation. In the x and y directions, the distance is estimated by the Manhattan distance between the current gGrid and the target gGrid. In the z direction, the distance is estimated by the following equation.
The estimated result is the minimum routing length that satisfies the layer direction constraint, which must be no greater than the actual routing length. Therefore, while ensuring the quality of the solution, it ensures that the search proceeds in the direction of the target point, which is clearly better than the directionless search of Dijkstra's algorithm. A simple illustration of the 2D routing process is shown in Figure 7, and the 3D case is similar. In the figure, red points represent the subnet and the removed pin to be connected, yellow rectangles are obstacles where demand equals capacity, and green points represent the grids traversed during the search. In our algorithm, we restrict the search range to the bounding box of the existing paths. Unlike the complete search of Dijkstra's algorithm in Figure 7b, the A* algorithm is directional, which avoids a large number of unnecessary searches, as in Figure 7a.
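Below is a minimal sketch of this directed search, assuming the simplified grid interface from the BFS sketch above and a unit step cost. The heuristic follows the 3D estimate described in the text (Manhattan distance in x/y; in z, at least the layer difference, or 2 when the endpoints share a layer but are not aligned), which keeps it admissible; all names are illustrative.

```python
import heapq

def astar_to_subnet(start, sinks, lo, hi, passable):
    """A* from the removed pin to the nearest gGrid of a subnet.
    passable(g) should reject gGrids whose demand is too close to capacity;
    lo, hi bound the search region; layer direction follows constraint C5."""
    sinks = set(sinks)

    def est(g, t):                      # lower bound on the g -> t length
        dz = abs(g[2] - t[2])
        if dz == 0 and g[0] != t[0] and g[1] != t[1]:
            dz = 2                      # a via pair is needed to turn
        return abs(g[0] - t[0]) + abs(g[1] - t[1]) + dz

    def h(g):                           # Cost_predict: nearest-sink estimate
        return min(est(g, t) for t in sinks)

    g_cost = {start: 0}
    open_q = [(h(start), 0, start)]
    while open_q:
        f, g, cur = heapq.heappop(open_q)
        if cur in sinks:
            return g                    # length of the cheapest connection
        if g > g_cost.get(cur, g):
            continue                    # stale queue entry
        x, y, z = cur
        steps = [(0, 0, 1), (0, 0, -1)]
        steps += [(1, 0, 0), (-1, 0, 0)] if z % 2 == 1 else [(0, 1, 0), (0, -1, 0)]
        for dx, dy, dz in steps:
            nxt = (x + dx, y + dy, z + dz)
            if not all(lo[i] <= nxt[i] <= hi[i] for i in range(3)) or not passable(nxt):
                continue
            if g + 1 < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = g + 1
                heapq.heappush(open_q, (g + 1 + h(nxt), g + 1, nxt))
    return None                         # no overflow-free path found
```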
Routing Length Driven Refinement
When the number of moved cells reaches the prescribed maximum, we stop looking for cells to move. However, due to the movement order, some cells that have already been moved could be optimized again. In addition, due to the partial rip-up and reroute of nets in the previous section, some nets may not have the optimal topology. Therefore, in this section, we further optimize the results. In this stage, θ in Equation (2) is set to one, using as much of the capacity as possible.
If a cell that has already been moved is encountered, it can be moved again. Therefore, we propose a similar but faster 2D BFS scheme to move cells in this section. Similar to the process of Algorithm 3, we ignore some routing constraints and perform a breadth-first search over a 2D range. The distance in the z-direction is replaced by the minimum layer distance between the subnet and the removed pin: if they are on the same layer but not on the same straight line, the distance in the z-direction is 2. After considering the reuse of vias, the estimated distance for the cell to move to any point in the range is obtained. Since this strategy ignores some routing constraints, it yields slightly more candidate locations than the 3D search. At this stage, there are fewer destinations to which a cell can move while reducing the routing length. We adopt the R4 rip-up method and terminate rerouting as soon as a location that reduces the length is found.
After that, we reroute each net to obtain a better topology, as in Algorithm 4. In the algorithm, line 4 first reroutes with FLUTE, as shown in lines 4-8 of Algorithm 2. If the number of pins does not exceed 9, FLUTE usually finds the optimal solution. Otherwise, even if the FLUTE solution Sol_f achieves a smaller length than the initial solution, we still use maze routing to obtain a solution Sol_m. In lines 14-18, when the minimum length of these two solutions is larger than that of the initial solution, we restore the initial routing state. Otherwise, we choose the solution with the smaller routing length. It should be noted that when rerouting is unsuccessful or the solution has routing overflow, the routing length rl is set to INT_MAX.
Experimental Results
In this section, we first introduce our experimental setup and benchmarks. Then, we study the parallel technology used in this paper to show its impact on performance. After that, we compare our results with the top 3 winners of the ICCAD'20 CAD contest. Finally, we change the maximum cell movement constraint to demonstrate the performance of our proposed algorithm further.
Experimental Setup and Benchmarks
We implemented our routing with cell movement algorithm in the C++ programming language on a 64-bit CentOS Linux workstation with an Intel(R) Xeon(R) CPU, 128 GB of memory, and 8 threads. All the experiments were based on the benchmark suite of the CAD contest from ICCAD 2020 [30]. Table 1 shows the statistics of the released benchmarks, where "#gGrids", "#Layers", "#CellInsts", "#Nets", and "Initial #Routes" represent the number of gGrids, routing layers, cells, nets, and initial routes, respectively. "Initial Length" denotes the total routing length of the initial routes. "Max Move" is the maximum cell movement constraint, which is limited to 30% of all cells in the contest. In these benchmarks, the scales of case1 and case2 are too small, and they are only used as initial examples in the contest, so the subsequent experiments exclude these two cases.
Parallel Technology
In this subsection, we study the parallel technology used in this paper to show its impact on performance. Firstly, we show the comparison results of the simultaneous maze rerouting of all nets with the batch scheduling strategy in Table 2. In the table, "RL-Red.", "B-Times", and "R-Times" denote the routing length reduction, the batch scheduling runtime (seconds), and the routing runtime (seconds), respectively. The difference between the improved batch scheduling and the original method in [24] is shown in line 9 of Algorithm 1, where n_b and t are set to 24 and 0.5, respectively. On average, our parallel rerouting achieves a 2.629× faster routing runtime compared with serial rerouting, and the improved batch scheduling strategy speeds up the original process by 73×. As the number of nets in each batch increases, the routing length reduction decreases because the ordering of nets is destroyed, which also reduces the routing efficiency. In Section 4.2.1, we proposed a 3D BFS-based approximate optimal addressing algorithm to find the candidate destinations for the relocated cell. According to the minimum layer constraint, the space is divided into upper and lower parts in our algorithm. The upper part uses the search strategy, and the lower part is computed directly. In addition, we assume that the nets are unrelated to each other, so they can be processed in parallel. In Figure 8, "M1" represents the method of directly searching the layer where the pin is located, and "M2" represents our algorithm. In the figure, we can see that the parallel processing of the connected nets reduces the running time by about half. In addition, our method achieves different degrees of efficiency improvement according to the proportion that the minimum layer occupies among the layers the net passes through. This method of dividing the routing range into two parts according to the minimum layer constraint is also applied in our routing algorithm. In Section 4.2, the routing length reduction from each cell move is more significant in the early iterations, which also means that there are a large number of candidate destinations. Therefore, we select at most the first n_s candidate destinations with the lowest cost. For example, we obtain a priority queue that estimates the routing length reduction in the 3D BFS-based approximate optimal addressing algorithm. Only gGrids with estimated reductions greater than 0 are added to this priority queue. If their number is greater than n_s, only the first n_s items are taken. In the optimal region selection algorithm, if the number of gGrids in the optimal region is greater than n_s, we give priority to locations where vias can be reused or that have enough r(u). This method is similar to the top-k candidate positions in the work [22]; the difference is that our available candidate destinations may be fewer than n_s. To obtain a trade-off between solution quality and runtime, we set n_s to 8 or 16 according to whether the number of gGrids is larger or smaller than 40,000 in this work. These n_s destinations can be rerouted in parallel, and finally the destination with the maximum routing length reduction is selected.
In the entire algorithm, the more time-consuming operations mainly include preprocessing, rip-up, destination selection, partial rerouting for routing length estimation, restoration (when the routing length is not reduced) or actual routing (when it is reduced), and refinement. Parallel technology can be used in some of these operations, but there are still certain bottlenecks. For example, in preprocessing and refinement, it is possible to divide the area and perform rerouting simultaneously, but it is difficult for the large nets that dominate the rerouting time to be independent of each other. In destination selection, we can search different nets simultaneously. However, the number of nets connected to each cell is usually not very large, and the time is mainly determined by the net with the most search layers in the z-direction. In partial rerouting for routing length estimation, compared with the number of threads, the candidate gGrids are not numerous, and this number continues to decrease as the number of moved cells increases. Therefore, the time mainly depends on the gGrid with the longest rerouting time. Combining the above-mentioned technologies, we show the impact of our parallel technology on performance in Figure 9. As a result, our proposed algorithm obtains an average speedup of 2.15× using 8 threads.
Comparison of Results with the Top Three Winners
To demonstrate the performance of our proposed algorithm, we compared it with the top 3 winners of the 2020 ICCAD CAD contest [21]. In this contest, the evaluation score is calculated by summing the routing length reduction over all nets. The ranking of the contest is based on the sum of the scores, with the runtime limited to 1 hour for each case. Table 3 shows the comparison results of the total routing length reduction and runtime between our algorithm and the top three winners. In the table, "RL-Red.", "Times", and "Normalized" represent the routing length reduction, the runtime in seconds, and the ratios normalized to our algorithm. The best result for each benchmark is marked in bold. As shown in the table, our algorithm achieves the best results on all released benchmarks. On average, our algorithm improves on the first, second, and third place by 0.7%, 1.5%, and 1.7%, respectively, with comparable runtime.
Results with Relaxed Max Cell Movement Constraint
In this contest, most of the constraints are hard constraints; that is, a legal routing result cannot be produced if they are violated. In practical applications, the maximum cell movement constraint C3 may not necessarily be limited to 30%. The 2020 ICCAD contest also reports the reduced routing length when the maximum cell movement limit is changed to 0%, 5%, 10%, 30%, and 100%, respectively. Since the contest does not report runtimes, we compare the routing length reduction in Figure 10, with the runtime of our results kept within the 1 hour limit. The black, green, red, and blue lines in the figure represent our method and the top three winners, respectively. The horizontal axis is the percentage of maximum cell movement, and the vertical axis is the routing length reduction. As can be seen from the figure, the black line representing our method is always at the top. This illustrates the effectiveness not only of our routing algorithm but also of our cell movement strategy.
Conclusions
To resolve the conservative margin reservation and mis-correlation problems of the divide-and-conquer place-and-route approach, we design an effective and efficient algorithm to co-optimize detailed placement and global routing under complex routing constraints. A fast preprocessing technique based on R-trees is presented to improve the initial routing results. During destination selection for cell movement, we propose a 3D BFS-based approximate optimal addressing algorithm and an optimal region selection using the partial routing solution to find the desired locations. A hybrid A* and multi-source multi-sink maze rerouting algorithm is proposed to find the final destination of cell movement in parallel. The experimental results show that we obtain the best results under any maximum cell movement constraint. Furthermore, with more advanced manufacturing processes, the constraints continue to increase, such as voltage area constraints, R/C characteristics in different layers, and timing-based net weights. Our proposed algorithm can be effectively extended to address these problems.
"Engineering",
"Computer Science"
] |
Catalytic Water Co-Existing with a Product Peptide in the Active Site of HIV-1 Protease Revealed by X-Ray Structure Analysis
Background
It is known that HIV-1 protease is an important target for the design of antiviral compounds in the treatment of Acquired Immunodeficiency Syndrome (AIDS). In this context, understanding the catalytic mechanism of the enzyme is of crucial importance, as the transition state structure directs inhibitor design. Most mechanistic proposals invoke nucleophilic attack on the scissile peptide bond by a water molecule. But such a water molecule coexisting with any ligand in the active site has not been found so far in the crystal structures.
Principal Findings
We report here the first observation of the coexistence in the active site of a water molecule, WAT1, along with the carboxyl-terminal product (Q product) peptide. The product peptide has been generated in situ through cleavage of the full-length substrate. The N-terminal product (P product) has diffused out and is replaced by a set of water molecules, while the Q product is still held in the active site through hydrogen bonds. The position of WAT1, which hydrogen bonds to both catalytic aspartates, is different from when there is no substrate bound in the active site. We propose WAT1 to be the position from which the catalytic water attacks the scissile peptide bond. Comparison of the structures of HIV-1 protease complexed with the same oligopeptide substrate, but at pH 2.0 and at pH 7.0, shows interesting changes in the conformation and in the hydrogen bonding interactions of the catalytic aspartates.
Conclusions/Significance
The structure is suggestive of the repositioning, during substrate binding, of the catalytic water for activation and subsequent nucleophilic attack. The structure could be a snapshot of the enzyme active site primed for the next round of catalysis. It further suggests that, to achieve the goal of designing inhibitors mimicking the transition state, the hydrogen-bonding pattern between WAT1 and the enzyme should be replicated.
Introduction
Human Immunodeficiency Virus (HIV) is the causative agent of Acquired Immunodeficiency Syndrome (AIDS) [1,2]. Inhibitors of the viral enzyme HIV-1 protease (EC 3.4.23.16) are important components of Highly Active Antiretroviral Therapy (HAART) for HIV/AIDS [3,4]. The emergence of HIV-1 protease mutants resistant to inhibitor action necessitates continuous improvement of existing drugs as well as the design of new inhibitors. Understanding the catalytic mechanism and the structure and interactions of the transition state would contribute significantly to the development of novel inhibitors. Based on computational [5][6][7][8], biochemical [9][10][11], and structural results [12][13][14][15][16], two types of proposals have been made in the past for the catalytic mechanism: direct and indirect (reviewed in [17,18]). In the direct type, championed mostly by computational studies, the nucleophilic attack on the carbonyl carbon atom of the scissile peptide bond is made directly by a carboxyl oxygen atom of the catalytic aspartates. In the indirect type, the attack is by a water molecule [19]. The position and hydrogen-bonding pattern of this water molecule at the time of attack differ among the various proposals for the catalytic mechanism; therefore, knowing the location and interactions of the nucleophilic water molecule would be a step toward establishing the correct mechanism for this enzyme.
HIV-1 protease is a homodimeric enzyme in which the active site is located at the subunit interface, with each subunit contributing one aspartic acid to the catalytic center. The active site is covered on top by two flaps, which become ordered into a closed conformation whenever a substrate or inhibitor is bound. During virus maturation, HIV-1 protease cleaves viral polyproteins at nine different sites of varying amino acid sequence. A water molecule found symmetrically hydrogen bonded to carboxyl oxygen atoms of both catalytic aspartates in the high-resolution crystal structures of the unliganded enzyme (PDB Ids 1LV1 and 2G69) is believed to be the nucleophile. This belief has recently been questioned [20] on the grounds that, in the crystal structures of enzyme-ligand complexes, this water molecule has not been found to coexist with the ligand. Thus the location of the nucleophilic water in the active site of HIV-1 protease is still an open question. In this respect, we have been pursuing crystallographic studies on active HIV-1 protease complexed with different substrate peptides [21][22][23]. We have been able to carry out such studies because of our discovery of the closed-flap conformation of the enzyme in hexagonal crystals of HIV-1 protease even when the enzyme is unliganded [24][25]. Complexes with oligopeptide substrates could then be prepared by soaking these native crystals in aqueous solutions of the substrates. The chemical conditions of these solutions, pH for example, could be varied to try to trap the reactants at different stages of the reaction. In the present study, native crystals were soaked in a solution of the substrate of amino acid sequence His-Lys-Ala-Arg-Val-Leu*-NPhe-Glu-Ala-Nle-Ser (where * denotes the cleavage site and NPhe & Nle denote p-nitrophenylalanine and norleucine, respectively) at pH 7.0. It was found that the full-length substrate was cleaved at the specific cleavage site (Leu-p-nitro-Phe). The N-terminal product peptide (P product) had diffused out, leaving behind only the C-terminal product peptide (Q product) still bound in the enzyme active site. A set of water molecules had moved into the region vacated by the P product peptide. One of these water molecules (WAT1) is optimally positioned to be the nucleophile. In this position, the water molecule does not accept any hydrogen bond through its lone pair and is a donor in two strong hydrogen bonds, two features that contribute significantly towards activation of the water molecule for nucleophilic attack [26]. This position is shifted by about 1.4 Å from that observed in all unliganded structures of HIV-1 protease. The position of WAT1 overlaps exactly with the hydroxyl group of the picomolar transition-state mimic inhibitor KNI-272. Adachi et al. have suggested this hydroxyl oxygen of KNI-272 to be an ideal position for a water molecule to launch a nucleophilic attack on the scissile peptide bond [27]. Thus the present report of an HIV-1 protease product complex is the first observation of the putative catalytic water coexisting with the product peptide. This structure further suggests that transition-state mimics, such as KNI-272, should be designed so that they bind the catalytic aspartates with a hydrogen-bonding pattern similar to that of WAT1.
Results
The Model of the complex
The HIV-1 protease tethered dimer used here contains a five-residue linker, GGSSG, linking the N-terminus of the second monomer to the C-terminus of the first monomer [28]. Residues in the first monomer are numbered 1-99 and those in the second monomer are numbered 1001-1099. Residues of the linker are numbered 101-105. Crystal and intensity data statistics are given in Table 1. On refinement of the protein structure, difference density was found in the active site region of the enzyme (Figure 1), and this difference density represented the soaked-in substrate cleaved at the linkage connecting the Leu and NPhe residues in the sequence. As per convention, residues in the C-terminal product (Q product), counted from the scissile bond, were designated P1'-P5', and those in the N-terminal product (P product) P1-P6. The density for residues P1-P6 was very weak, suggesting that the P product peptide had diffused out, leaving behind only the Q product peptide still bound in the enzyme active site. A set of water molecules had substituted the P product peptide. Electron density for residues beyond P2' in the Q product was also very weak. The Q product and the water molecules were placed in two orientations, consistent with the pseudo-symmetry of the HIV-1 protease active site. The lowest Rfree was obtained when the occupancies for the two orientations were 0.7 and 0.3. The B-factor averaged over all atoms of the product peptide was 43.7 Å² and 42.2 Å², respectively, for the two orientations. The electron density suggested that the side chains of a few protein residues existed in multiple conformations in the crystal. Alternate conformations were modeled for residues Val 82, Ile 84, Val 1082, and Ile 1084. There was no visible density in the 2Fo-Fc map for the linker region between residues 99 and 1001 of the tethered dimer under study, suggesting that the linker region was not ordered in the crystal. The final molecular model thus consisted of 1514 protein atoms, 181 water molecules, and the Q product peptide bound in two orientations with occupancies of 70% and 30%, respectively. Conformationally, more than 90% of non-glycine residues were in the most favored regions of the Ramachandran plot. The final refined 2Fo-Fc maps for the P1' p-nitrophenylalanine and P2' glutamic acid residues and the active site water molecules in the two orientations are shown in Figure 2.
Protease-Q Product peptide interactions
Hydrogen bonding interactions between the P1' and P2' residues of the Q product and the protein residues in the active site are shown in Figure 3a. The Q product is held in the active site through 11 hydrogen bonds, some of which are through bridging water molecules. The terminal nitrogen of the product peptide in both orientations forms a hydrogen bond to the outer oxygen (OD2) of ASP-1025/ASP-25 (Figure 3a). The side chain of P2' GLU forms hydrogen bonds with the main-chain amide nitrogen and the side-chain carboxyl oxygen of ASP-30 or ASP-1030, depending upon the orientation. One very well ordered water molecule forms a bridge between the product peptide and the amide group of Ile 50/Ile 1050. One of the oxygens of the P1' nitro group forms a hydrogen bond with Arg 8, while the other oxygen is bridged by two water molecules to the carbonyl oxygen of Gly 49.
Water molecules in the active site
A set of water molecules had substituted the P product peptide. These water molecules are held in place through hydrogen bonds among themselves and also with the protein (Figure 3b). One of these water molecules, WAT1, which is within hydrogen bonding distance of the oxygens of both catalytic aspartates, may be of functional importance. The OMIT density for this water molecule is shown in Figure 4. WAT1 also makes a short hydrogen bond with the N atom of the Q product peptide. WAT1 is shifted by about 1.4 Å from the corresponding water molecule coordinating both catalytic aspartates in the unliganded structures (PDB Ids 1LV1 and 2G69). This water molecule is at an average distance of about 2.7 Å from the scissile carbon of the modeled substrate peptides (Figure 4).
Position of attacking water molecule
In the hydrolysis reaction catalyzed by HIV-1 protease there are two substrates: 1) an oligopeptide of appropriate amino acid sequence and 2) the nucleophilic water molecule. At the start of the reaction, both are bound in the active site, leading to formation of the Michaelis complex. At the end of the reaction, but before product release, the nucleophilic water has been used up and hence should not be present in the active site. The presence of WAT1 in the active site places the present structure somewhere near the beginning steps of the reaction. Presently there is no crystal structure report of a Michaelis complex between active HIV-1 protease and a substrate peptide. However, the present structure can be considered a close approximation to the Michaelis complex, since a part of the substrate peptide is present in the active site along with the water molecule. We have earlier reported the structures of HIV-1 protease complexed with two different substrate oligopeptides corresponding in amino acid sequence to the junctions RH-IN [21] and RT-RH [22] in the polyprotein substrate. While the substrate is converted into a tetrahedral intermediate in the complex with RH-IN, the RT-RH peptide is cleaved, with both product peptides still bound in the active site. The water molecule WAT1 is at a distance of 0.9 Å from one gem-diol hydroxyl in the tetrahedral intermediate complex.
Similarly, WAT1 is at a distance of about 1.0 Å from one of the carboxyl oxygens in the product peptide complex (PDB Id 2NPH) (Figure 5). Because of these proximities, we suggest that the water molecule serving as the nucleophile in peptide bond hydrolysis does so from the position WAT1 observed in the present structure. Such a hypothesis would be consistent with the principle of least nuclear motion for chemical reactions [29]. To further explore this idea, we have investigated by molecular modeling whether the scissile peptide bond of a substrate bound in the active site would be accessible to WAT1 for attack. We have separately superposed the present complex on the reported complexes between the inactive D25N enzyme and two different substrate oligopeptides (PDB Ids 1KJH and 3BXR) [30,31]. Using only protein Cα atoms for the structural superposition, equivalent positions of the substrate molecules were derived. Figure 4 shows the derived positions relative to WAT1. It is clear that the scissile peptide bond is optimally accessible to WAT1 for nucleophilic attack, the WAT1…C-O and WAT1…C-N angles being 69° and 104°, respectively. Further, the distance of WAT1 to the scissile carbon atom is 2.7 Å, which is reasonable for a nucleophilic attack. Figure 4 also shows the position of the catalytic water observed in the structure of unliganded HIV-1 protease (PDB Id 1LV1). The separation of this water molecule from the scissile carbon atom is only 1.9 Å, which is too short a distance for the water molecule to stay in this position along with the substrate. Unlike in the unliganded structures, the position of WAT1 is asymmetric with respect to the catalytic aspartates. WAT1 forms two short hydrogen bonds to the outer and inner carboxyl oxygens of ASP-25 and ASP-1025, respectively. Further, WAT1 does not accept any hydrogen bond and is a donor (see below) in two strong hydrogen bonds with the catalytic aspartates. Both these features should increase the nucleophilicity of WAT1 [26]. From all these considerations, WAT1 appears to be a reasonable position for the water molecule from which nucleophilic attack takes place during bond breakage. This hypothesis is also consistent with the structure of the HIV-1 protease/KNI-272 complex reported recently [27]. KNI-272 is one of the very few highly selective and potent inhibitors of HIV-1 protease with a picomolar inhibitory constant. Its high potency is suggested to be due to its pre-organized rigid structure that very closely resembles the transition state. The structure of the complex has been determined to very high resolution using X-ray and neutron diffraction techniques. According to the authors of this study, the position of the hydroxyl group in the hydroxymethylcarbonyl part of KNI-272 is ideal to mimic the location of the attacking water molecule in catalysis. Figure 6 shows the superposition of the present structure with the HIV-1 protease/KNI-272 complex mentioned above (PDB Id 3FX5). It is very interesting that WAT1 perfectly overlaps with the hydroxyl group of KNI-272 in the complex. Since this overlap guarantees adherence to the principle of least nuclear motion, KNI-272 is a very potent inhibitor of HIV-1 protease. In addition to hydrogen bonds to the catalytic aspartates, WAT1 is hydrogen bonded to the terminal N atom of the Q product peptide. Once the Q product leaves the active site, WAT1 will move back to the position observed in the structures of unliganded HIV-1 protease.
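As a minimal illustration of the distance and angle criteria invoked above (for example, the ~2.7 Å WAT1-to-scissile-carbon separation and the WAT1…C-O / WAT1…C-N angles of 69° and 104°), the short Python sketch below computes a distance and a vertex angle from Cartesian coordinates. The coordinates listed are invented placeholders rather than values from PDB Id 2WHH; in practice they would be taken from the refined model.

```python
import numpy as np

def distance(a, b):
    """Euclidean distance between two atoms (coordinates in Angstroms)."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def angle(a, b, c):
    """Angle A-B-C in degrees, with atom B at the vertex."""
    ba = np.asarray(a) - np.asarray(b)
    bc = np.asarray(c) - np.asarray(b)
    cos_t = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Placeholder coordinates (Angstroms) standing in for the WAT1 oxygen, the
# scissile carbonyl carbon, its carbonyl oxygen, and the leaving-group nitrogen.
wat1 = [0.0, 0.0, 0.0]
c_scissile = [2.5, 0.9, 0.0]
o_carbonyl = [3.2, 1.2, 1.0]
n_leaving = [3.1, 1.6, -1.1]

print("WAT1...C distance:", round(distance(wat1, c_scissile), 2), "A")
print("WAT1...C-O angle:", round(angle(wat1, c_scissile, o_carbonyl), 1), "deg")
print("WAT1...C-N angle:", round(angle(wat1, c_scissile, n_leaving), 1), "deg")
```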
The relative positions of the substrate, nucleophile, and catalytic aspartates at different stages of the cleavage reaction, according to our proposal, are shown in Figure 7 (a-f).
Figure 5. Superposition with the tetrahedral intermediate complex [21] and the product peptide complex [22]: stereo diagram showing the ligand atoms at the catalytic centre along with the catalytic aspartates. Protein Cα atoms are used in the structural superposition. WAT1 is within 1 Å of an oxygen atom in the newly generated gem-diol [21] or carboxyl group [22]. doi:10.1371/journal.pone.0007860.g005
Protonation state of catalytic aspartates
In the process of inhibitor design, it is important both to structurally mimic the transition state intermediate and to maximize interactions between the inhibitor and the catalytic aspartates. In this context, it is essential to know the protonation states of the catalytic aspartates so that appropriate functional groups are chosen in the inhibitor being designed. Even though hydrogen atoms are not located in the present study, the observed strong hydrogen bonds involving the catalytic aspartates provide a clue to the protonation states of the aspartates. Only O-O/O-N separations shorter than 2.8/2.9 Å are considered definite hydrogen bonds [32]. There are four such distances at the catalytic centre in the present structure: i) ASP-25 OD1…ASP-1025 OD1, ii) WAT1…ASP-25 OD1, iii) WAT1…ASP-1025 OD2, and iv) ASP-1025 OD2…N-terminus of the P1' residue. The angle ASP-25 OD2-WAT1-ASP-1025 OD1 is 101°, which is very close to the H-O-H angle (104°) in a water molecule, indicating that in the hydrogen bonding to the water molecule WAT1, the aspartate oxygens act as acceptors. Since the substrate is already cleaved, the N-terminus of the P1' residue is already protonated and would be a donor in the hydrogen bond with the ASP-1025 OD2 atom. Thus the aspartic dyad is monoprotonated, with the proton shared between the inner oxygens of the two aspartates. We therefore suggest that just prior to the formation of the transition state the aspartates are in this state of protonation. Since on inhibitor binding the protonation state is not likely to change, the hydrogen-bonding groups on the inhibitor should be chosen appropriately to maximize interactions with the aspartates in this state of protonation.
Effect of pH on conformation and interactions from ASP-25 and ASP-1025
HIV-1 protease is known to be active over a wide range of pH values. In our earlier study of the crystal structure of HIV-1 protease complexed with the undecapeptide substrate (His-Lys-Ala-Arg-Val-Leu*-NPhe-Glu-Ala-Nle-Ser) at a pH value of 2.0, the substrate bound in the active site had transformed into a tetrahedral intermediate through nucleophilic attack by a water molecule [21]. In contrast, in the present study carried out at pH 7.0, the substrate molecule of the same sequence is found cleaved at the correct scissile bond, and the N-terminal P product peptide has diffused out of the enzyme active site. The conformations and interactions of the catalytic aspartates at the two pH values are compared in Tables 2 and 3. The changes in the conformations around the main-chain and side-chain torsions of the two aspartates are very small, but these small changes have synergistically caused differences in the interaction distances, which could be significant. The hydrogen bonds from the inner oxygen (OD1) atoms to the N atom of the corresponding Gly-27/1027 residues have become longer for both aspartates at pH 7.0. The distance between the two inner oxygen atoms, on the other hand, has changed in the opposite direction, that is, to a shorter value, at pH 7.0. If the length of a hydrogen bond is assumed to reflect its strength, the changes in lengths mentioned above appear to preserve the total hydrogen-bonding ability of each OD1 atom. There is a significant change in the virtual dihedral angle OD2 (25)-OD1 (25)-OD1 (1025)-OD2 (1025), which is a measure of the co-planarity of the two aspartic acid side chains [15]. The two side chains tend toward being more co-planar at pH 7.0. There also appears to be a correlation between the co-planarity of the two aspartates and the strength of the hydrogen bond between the OD1 atoms of the catalytic aspartates, with higher co-planarity leading to a stronger hydrogen bond. In the structure of the HIV-1 protease product complex [22], determined at a pH of 6.2, the aspartates are more co-planar, with a virtual dihedral angle of 22°, while the OD1…OD1 distance of the postulated hydrogen bond is only 2.3 Å.
Product release
The patterns of product inhibition depend on the enzyme mechanism. Based on product inhibition and solvent isotope effects, the product peptides in the cleavage reaction by HIV-1 protease are proposed to be released in an ordered manner, with the P product peptide released first [11]. The presence of only the carboxyl-terminal product in the present structure is consistent with this expectation. The Q product peptide also appears to be diffusing out of the active site, albeit more slowly, since the distance between the Cα atom of the P1' residue and the Cγ atom of the distal aspartate has increased from 5.0 Å in the tetrahedral intermediate structure [21] to 5.5 Å in the present structure (Figure 5). It is interesting that it is the N-terminal P product which is bound when active HIV-1 protease is cocrystallised with a constrained hexapeptide [31]. This difference may be due to the different approach taken for preparing the crystalline enzyme/substrate complex. The constrained hexapeptide is cleaved during cocrystallisation, and from among the two products released into solution, the P product is selectively bound in the active site because of its increased hydrogen-bonding ability coming from the newly formed carboxyl group. Similarly, on cocrystallisation, the presence of an amino group in the product peptide PIV-CONH2 resulted in binding of PIV-CONH2 in the active site of HIV-1 protease in an unexpected mode [13,31].
Conclusion
Native crystals of active tethered HIV-1 protease were soaked in an undecapeptide substrate solution at pH 7.0. The three-dimensional crystal structure, determined to 1.69 Å resolution, shows that the Q product peptide generated within the crystal is still bound in the active site of HIV-1 protease along with a set of water molecules. One of these water molecules, WAT1, which is activated through hydrogen bonds to the catalytic aspartates, is located at a distance of 2.7 Å along the direction perpendicular to the scissile peptide bond. Assuming the present structure to be a close approximation to the Michaelis complex, we propose that the incoming substrate pushes the nucleophilic water from the position observed in the unliganded protease to the WAT1 position, from which it attacks the scissile peptide bond. Once the Q product also diffuses out, the catalytic water molecule can move back to the position observed in unliganded structures of HIV-1 protease. Comparison of geometries at the catalytic centre shows systematic changes in the conformation and interactions of the catalytic aspartates at pH 2.0 and 7.0. The structure reported here also suggests that, in the design of effective inhibitors of HIV-1 protease, it is important to duplicate the hydrogen-bonding pattern of WAT1 with the catalytic aspartates.
Protein expression, crystallization and soaking
The HIV-1 protease tethered dimer used in the present study contains a five-residue linker, GGSSG, covalently linking the two monomers [28]. Expression, purification, and crystallization of the HIV-1 protease tethered dimer followed the procedures reported earlier [24][25]. Briefly, BL21 (DE3) cells carrying the plasmid with the HIV-1 protease tethered dimer insert were grown at 37 °C to an OD600 of 0.6. Protease expression was induced by adding 1 mM IPTG. Two hours after induction, cells were harvested and lysed by sonication to prepare the inclusion bodies. Inclusion bodies were thoroughly washed with Tris-EDTA buffer and the protein was extracted in denatured form with 67% acetic acid. The extract was diluted and dialyzed overnight against water. This was followed by dialysis against a refolding buffer of pH 6.5, containing 20 mM PIPES, 100 mM NaCl, 1 mM dithiothreitol, and 10% glycerol.
The cloned insert contains 57 extra codons at the beginning, which are part of the N-terminal polyprotein of the pol gene. Therefore the inserted gene product is a 29 kDa precursor protein containing a natural cleavage site for HIV-1 protease, which after self-cleavage results in a mature protein of 22 kDa. Crystals were grown by the hanging drop vapour diffusion method. Equal volumes of protein (5 mg/ml in 50 mM sodium acetate, pH 4.5, containing 1 mM dithiothreitol) and reservoir solution (1% saturated ammonium sulfate, 200 mM sodium phosphate, and 100 mM sodium citrate at pH 6.2) were mixed on a cover slip and sealed over the reservoir well at room temperature.
The 11-residue substrate peptide of amino acid sequence His-Lys-Ala-Arg-Val-Leu-NPhe-Glu-Ala-Nle-Ser was synthesized at the National Institute for Research in Reproductive Health, Parel, Mumbai, using an automatic peptide synthesizer. The peptide was dissolved in water to prepare a 5 mM stock solution. This stock solution was diluted 5-fold into the reservoir solution (pH 7.0) to prepare the soaking drop. The protease crystal was first transferred to a fresh drop of reservoir solution (pH 7.0) to wash the crystal and then to the soaking drop using a cryoloop. The cover slip was inverted and sealed over the same reservoir well in which the crystals had been grown.
X-ray data collection and refinement
At the end of 72 h of soaking at room temperature, the crystal was equilibrated in the cryo-protectant (25% glycerol and 75% reservoir buffer) before flash freezing for exposure to X-rays on the FIP-BM30A beamline [33]. The crystals diffracted to 1.69 Å resolution. The diffraction data were indexed, integrated, and scaled using the computer program XDS [34].
The computer program Phaser [35][36] from the CCP4 suite was used to obtain a molecular replacement solution using the structure 1LV1 [25,37] as the search model. The structure was refined in the Crystallography and NMR System (CNS) using standard simulated annealing protocols and the amplitude-based maximum likelihood target function [38][39]. A test set containing 5.0% of randomly chosen reflections was reserved for determination of Rfree [40], which is an indicator of gainful refinement. Occupancies of the ligand molecules in the two orientations were systematically varied, in steps of 0.1, subject to the constraint that their sum be 1.0. Electron density maps of all types were calculated using CNS. All interactive model building and molecular superpositions were carried out using the graphics software O [41]. Structural comparisons are based on superpositions of protein Cα atoms. All figures were drawn using the program PyMOL [42]. Atomic coordinates and structure factors have been deposited in the Protein Data Bank under PDB Id 2WHH.
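The occupancy refinement described above, varying the occupancies of the two ligand orientations in steps of 0.1 under the constraint that they sum to 1.0 and keeping the pair that gives the lowest Rfree, amounts to a small one-dimensional grid search. The Python sketch below shows only that loop structure; refine_and_get_rfree is a hypothetical stand-in for a full CNS refinement run, not a real API.

```python
def refine_and_get_rfree(occ_a, occ_b):
    """Hypothetical placeholder: run a refinement with the given ligand
    occupancies and return the resulting Rfree. In the actual work this
    step was performed with CNS; here it is mocked for illustration."""
    # Mock response with an arbitrary minimum near (0.7, 0.3).
    return 0.20 + 0.05 * abs(occ_a - 0.7)

def scan_occupancies(step=0.1):
    """Grid-search paired occupancies summing to 1.0; keep the lowest Rfree."""
    best = None
    n_steps = int(round(1.0 / step))
    for i in range(n_steps + 1):
        occ_a = round(i * step, 2)
        occ_b = round(1.0 - occ_a, 2)          # constraint: occupancies sum to 1.0
        rfree = refine_and_get_rfree(occ_a, occ_b)
        if best is None or rfree < best[0]:
            best = (rfree, occ_a, occ_b)
    return best

rfree, occ_a, occ_b = scan_occupancies()
print(f"lowest Rfree {rfree:.3f} at occupancies {occ_a}/{occ_b}")
```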
"Chemistry",
"Medicine"
] |
SYMBIOAUTOTHANATOSIS: SCIENCE AS SYMBIONT IN THE WORK OF LYNN MARGULIS
Lynn Margulis’s writing about symbiosis has profoundly influenced contemporary evolutionary theory, as well as continental and analytic philosophy of science, the materialist turn, and new materialism. Nonetheless, her work, and all symbiosis or evolution, is founded on a paradox: symbiosis fictionalizes customary accounts of the origin and evolution of species, yet it is impossible to speak of symbiosis (cross-species association) unless species-boundaries have been posited in advance. Thus, a tension is legible throughout Margulis’s work between the drive to surpass the limits of species-definitions as they have been traditionally understood, and a need to maintain them in order that there can be “sym-biosis” at all. Margulis criticized neo-Darwinian accounts of evolution in part because she saw symbiogenesis as debunking the theory that life was defined by individualistic competition. More recently, Myra Hird has suggested that the gift, such as it has been theorized by certain anthropologists and philosophers, could adequately figure symbiosis and the ethical relations founded on it. I turn to Derrida’s writing on the gift to suggest that, if a gift worthy of the name chances to happen, it necessarily exceeds scientific, theoretical, and philosophical knowledge.
At the 1933 meeting of the American Society of Parasitologists, questions of nomenclature were raised that required the formation of a Committee on Terminology. Foremost among them was uncertainty about the usage of symbiosis and a family of related terms. Was symbiosis a neutral term that referred to any sort of close association among the living, or did it refer only to those unions that were mutually advantageous? It had been used, throughout what was already a long history, both as an umbrella term for parasitism, commensalism, and mutualism, and as a synonym for mutualism. The committee traced the word to what they thought was its origin, though they made a common mistake, attributing it to Anton de Bary rather than an 1877 paper by Albert Bernhard Frank (Sapp 1994, 6, 131-32). Regardless, they found it was originally a neutral term, but still felt that the current state of ambiguity made a simple decision on their part impossible: "the present confusion necessitates the definition of the term whenever it is used" (Committee on Terminology 1937, 328).
Even where definitions have been given, something like this confusion has persisted, which is perhaps a sign that we are dealing with something more than a simple question of terminology. Or, that terminology is not a domain admitting of linear borders and voluntaristic decision. Symbiosis has grown today into a program of research that has transformed the understanding of life and its evolution, while also providing a novel biological model for human or "posthuman" ethico-political life. Thinkers from multiple, inter- or syn-disciplinary fields, including Lynn Margulis, Donna Haraway (2008; 2016), and Zakiyyah Iman Jackson (2020), have found an impetus for thought in symbiosis, I would argue, precisely because of this undecidability between the neutral and the good. Mutual benefit seems at once to offer an example of the generosity of the living, exceeding the economy of instinctual survival, and yet is entirely circumscribable within the logic of competitive survivalism (each organism seeking its own gain). Anticipating somewhat, I will say that the many theorists of the life sciences who have turned to anthropological and philosophical studies of the gift to try to figure a symbiosis beyond economy, do so as an effect of this undecidability.
Margulis's re-elaboration of evolutionary theory, which made symbiotic union (close, cross-species association) the engine of life's transformations, has been enormously influential, not only for contemporary biology but for much of feminist materialist thought today. Margulis's work came at the forefront of a growing dissatisfaction with the dominant trends of twentieth-century evolutionary theory. The Modern Synthesis union of genetics and natural selection presupposed that everything of relevance to heredity was received at birth from one's natural parents. Something like a paradigm shift has taken place in evolutionary theory over the past several decades, as symbiogenesis, epigenetics, developmental systems theory, niche construction theory, and plasticity have broadened our conception of heredity; it is now recognized that life has been formed and transformed through chance encounters with and even intentional cultivation of its biotic and abiotic environment, well beyond the nuclear family.2 Within this field, the clearest result of Margulis's influence is the focus on the holobiont (a term coined by Margulis to encompass a traditionally conceived organism together with its symbionts) as a model organism and unit of selection (Gilbert, Sapp, and Tauber 2012; McFall-Ngai et al. 2013; Gilbert 2019).3 Margulis was also an early critic of the tendency toward mathematical abstraction in population genetics, arguing that it abandoned engagement with the actual complexity of life's ecological relations. Though genetics claimed to be discovering the factors that determined the development of particular traits, it was frequently criticized for ignoring the study of development altogether. For the most part, "phenotypic" adult traits were correlated with genetic differences, while the ideology of a "genetic program" served as a blanket answer to how those traits might develop.4 The study of development has led to an increasing recognition of the plasticity of the organism, whose development is responsive to a milieu that includes complex interactions with its environment and symbionts. On the one hand, this challenged many of the assumptions of those who pictured development following from a deterministic program written in an individual's genes. On the other hand, as much as the work of Margulis and other scientist-heretics challenged assumptions that dominated twentieth-century evolutionary
2 For reasons that are perhaps essential, and which I hope to explore more fully elsewhere, these competing frameworks, while they have radically changed the study of life, have not unified around a single theoretical conception of the object of biological or evolutionary study. Several authors have attempted theoretical syntheses of the sometimes cooperating and sometimes competing approaches to evolution today, for example Oyama 2000, West-Eberhard 2003, Jablonka and Lamb 2014, Gilbert and Epel 2015. The work of Scott Gilbert perhaps most clearly demonstrates the influence of Margulis's thought (Gilbert, Bosch, and Ledón-Rettig 2015).
3 Margulis (1990; 1991) introduced the term "holobiont" in two essays, though she defined it slightly differently in each case (Suárez 2018, 86-87).
4 There is more than one way to narrate this complex history, in which genetics seems alternately to dismiss and usurp development (Keller 2002, 73-102; Amundson 2005).
theory and met with the resistance of population geneticists, there is a complicity or undecidability between these internalizing and externalizing representations of heredity that remains to be explored. In either case, if anything about the living is to be scientifically understood or predictable, if the scientist is able to say anything besides "who knows?" about the possibilities of life, it necessarily must be made to fit a form of programmaticity, however networked its inputs and nested its if-thens.
In what follows, I examine Margulis's interventions in evolutionary theory to explore these complicities with the theories she rejects. The rush to declare oneself free of certain inherited errors or sins perhaps unites the "paradigm shift" that is today sometimes called the "Extended Evolutionary Synthesis," and the realist and materialist philosophies that have risen to prominence among humanists and in interdisciplinary science studies. Without pretending that the myriad works marching under the banner of a "turn" or "return" to matter today could all be summarized as sharing a single theoretico-philosophical impetus or essence, one can identify widespread tendencies in their basic view of the natural world and its relationship to scientific discourse that are recognizable as well in Margulis's theorizing: 1) The critique of the mechanistic view of nature and life: in keeping with what Latour and many others frame as the overcoming of a dichotomy instituted by Descartes, Margulis sees her work as discovering or recovering a non-mechanistic life that today would likely be called agential, vibrant, and so on. In short, if genetics saw the organism as passively shaped into a survival machine (guided by the cybernetic program of its genes), Margulis hopes to recover the possibility of understanding life as actively and responsively shaping itself. 2) Nature as pure production: it follows directly from the critique of mechanism that nature should be understood not as obedient to a programmatic set of laws but as a source of invention, creativity, novelty, and becoming. For Margulis, this is most visible in her drive to recapture symbiogenesis as the origin of species and speciation, an origin that she argues population genetics has forsaken. In turn, this allows her work to harmonize with the enormous influence of Deleuze and Guattari on continental science studies and materialism. In fact, A Thousand Plateaus frequently invokes symbiosis (though it does not cite Margulis) as an instance of rhizomatic, non-filiational becoming.5 3) Posthumanism: true to a tendency that is perhaps more traditional than it lets on, Margulis argues that her opponents' scientism has proven false because it imposed unnatural concepts on nature (such as competition), concepts derived from "anthropocentric" cultural relations, whereas she hopes to produce a universally and transhistorically valid theory. In this, her argumentation is the perfect mirror of, for example, Meillassoux's (2009) anti-"correlational" realism, which pretends to oppose philosophy by positing the most traditional ideal of universalizing scientific thought such as could be derived from the philosophical tradition itself.6 The drive for ontology (the desire to suppress "epistemological" questions of knowledge's fallibility) that is recognizable in many fields today is continuous with this tendency.
5 "Finally, becoming is not an evolution, at least not an evolution by descent and filiation. Becoming produces nothing by filiation; all filiation is imaginary. Becoming is always of a different order than filiation. It concerns alliance. If evolution includes any veritable becomings, it is in the domain of symbioses that bring into play beings of totally different scales and kingdoms, with no possible filiation" (Deleuze and Guattari 1987, 238; oddly, the edition of the French text I have, printed in 2016, does not have italics in this passage,
It would not be possible here to examine every text in which these implicit or explicit similarities can be observed. Rather, I hope to intervene in the field where these tendencies have become commonplace by returning to Margulis's texts and reading the faltering step of these operations. In short, wherever a science, theory, philosophy, ism, or ontology hopes to oppose pure productivity to mechanism it necessarily reinstates the differences it hopes to suppress or overcome. The porosity of this threshold prevents "new" approaches to materiality (as vibrant, agential, alive, inventive) from being purely and simply distinct from "old" approaches to nature and matter (as mechanistic, passive, inert). There is no creative origin located in a pure beyond of economic relations (of life as competition for survival and reproduction), nor is there an economic system without excess, but rather an undecidability of production and reproduction that makes the origin descend from its derivation, even in the form of its all-too-human scientization or mathematicization.
Though it is thought to surpass the arbitrary imposition of groundless concepts on nature, symbiosis only comes to pass where the very species-identities it fictionalizes have been posited in advance. Far from overcoming the economy of identity or individualism and vertical filiation, there could be no symbiosis without this economization. Thus, evolution is not simply symbiogenesis in the sense of the crossing of genealogical branches, but is
Though it is thought to surpass the arbitrary imposition of groundless concepts on nature, symbiosis only comes to pass where the very speciesidentities it fictionalizes have been posited in advance.Far from overcoming the economy of identity or individualism and vertical filiation, there could be no symbiosis without this economization.Thus, evolution is not simply symbiogenesis in the sense of the crossing of genealogical branches, but is --------------------------------------------though the original printing appears to, as well as other translations).Deleuze and Guattari have had an enormous influence on science studies and contemporary materialism, which I would argue is due as much to the consideration of biological themes in their work as it is to the shared investment of all these authors in the desire for the new.For examples of Deleuzean materialism or biophilosophy see Braidotti 2011;De Landa 1997;Ansell-Pearson 1999;Bennett 2010;Colebrook 2010;Shaviro 2010;Grosz 2011;Protevi 2013;Roy 2018.For a deconstruction of the theme of production as it crosses Deleuzean, Marxist, and biological discourse, see Thomas Clément Mercier's (2021) "Re/pro/ductions: Ça déborde." 6 I have considered these tendencies in the work of other authors associated with new materialism and speculative realism in earlier essays (Basile 2018b;2019;2020)
SYNOEDIPAL RIDDLES: MARGULIS'S ENDOSYMBIOSIS
The work of Lynn Margulis represented an event in the scientific community's view of symbiosis (Margulis [Sagan] 1967; Margulis 1998). Beginning in the late sixties, she undertook to prove that certain organelles unique to the lineage of eukaryotic cells, including mitochondria and chloroplasts, were originally independent unicellular organisms that united symbiotically with a proto-eukaryotic host.8 A cell capable of feeding on another cell's waste is incorporated within its partner, and eventually exports most of its genes (and vital functions) to the nucleus of its host cell. Ultimately, the pair comes to reproduce as one. The debates surrounding Margulis's advocacy of this theory were settled in the minds of many biologists once it was discovered that these organelles retained their own DNA, closely related to that of prokaryotic cells, and that these relatives were thought to be phylogenetically distinct prior to the origin of the eukaryotic cell (Gray and Doolittle 1982; Gray 1992).9
7 I thank Thomas Clément Mercier and Eszter Timár, whose conversations, readings, and thoughts inhabit every word of this text, including this word "symbioautothanatosis," which I believe was first spoken by two or three of us in unison. They have been so generous that it would be impossible to identify the individual gifts that make up this symbiotic text, this non-appearance perhaps being the condition of a true gift.
8 Eukaryotic cells, which are defined by the possession of a true nucleus, make up not only a class of single-celled organisms, but the entire kingdoms of fungi, plants, and animals. It is impossible to narrate the history of a science or to define its terms without feigning the unity of figures that have been in flux throughout their history, today more than ever. While symbiosis is now understood as the origin of the eukaryotic cell, it has also been that cell's dissolution, at least in a theoretical sense. That is, the firm boundary line that once distinguished prokaryotic cells (bacteria and archaea) from eukaryotic cells and the multicellular organisms formed of eukaryotic cells has been displaced by the very force that gave birth to their lineage. Today, it is recognized that symbioses, including those with prokaryotic cells, are essential to eukaryotic life (that is to say, even if these cells can be distinguished, there are no purely "eukaryotic organisms"). The vast majority of cells on or within our skin are prokaryotic, as well as the majority of the genetic material within that space. These symbionts are increasingly understood to be essential to our health and life. Philosopher of biology John Dupré (who offers one example of Margulis's influence on analytic philosophy) has placed in question the very concepts of a monogenomic and even a unicellular organism, on the basis of the prevalence of such symbioses (2012).
9 This last piece of evidence was decisive in the minds of certain biologists, but Margulis rejected it for the same reasons (explored below) that she rejected the creation of the domain Archaea (O'Malley 2017, 35).
In other words, not only did these organelles have genes and other features in common with prokaryotic cells thought to be much older than the eukaryotic cell, but these prokaryotic predecessors were understood to have separated from the lineage that led to the eukaryotic cell long before its rise. This cast doubt on what is, in logical terms, the only possible competitor theory to the Serial Endosymbiosis Theory which Margulis championed (1998, 33-49). Either mitochondria and chloroplasts arrived from outside the cell, or else they must have arisen within it. This inside/outside binary saturates the logical and topological space of possibility.10 The "autogenous" or "direct filiation" theory of the organelle would imply that in the course of its reproductions, a predecessor to those cells we today know as eukaryotic retained in its cytoplasm a primitive form of itself, which then specialized into the metabolic functions it now performs there. It was even hypothesized, in the course of these debates, that prokaryotic cells, such as we know them today, could have originated from these organelles, rather than vice versa; all of the arguments connecting the two structures necessarily succumb to this symmetry (Sapp 1994, 187). An "origin" can only be the feigned or dissimulated product of its own traces read from the imbricated surface of what is not simply a present contemporaneity. The legibility of this presence or present depends not only on its inscription with a conceptuality and taxonomy that implies heterochronous temporalities, but this inscription is itself received as a "phylogenetic" inheritance by a science or scientist who can only be the legatee of the origin they propose to master. I am not trying to recuperate the endogenous or autogenous theory, but to think symbiosis otherwise.11 Whether we imagine the "autogenous" origin or "xenogenous" return of these organelles, a degree of misrecognition is required to maintain any theory of genesis. It is no less the case, according to the exogenous theory typically associated with symbiosis, that the cell returning to live within its host is a product of the same lineage. Both of these cells, according in principle to any phylogenetic analysis, originated
10 "There are really only two ways such genomic partitioning can be explained" (Gray and Doolittle 1982, 2).
11 What follows should make clear that it is more or less irrelevant to the movement I am attempting to trace whether the independent phylogeny of eukaryotic cell and organelle can be proven or disproven. In either case, the decision rests on an ungroundable definition of "eukaryotic" and its others. Still, it is worth remembering that these phylogenies can only be the contingent and revisable products of the distribution of identities they seek to ground. In the case of the phylogeny that some take as proof of Serial Endosymbiosis Theory, it was formed on the basis of similarities and differences that could be identified among 16S rRNA. While this produced a surprisingly robust set of experimental confirmations regarding the common qualities of the families of cells so identified, it was also based on the since disproven conviction that this structure would not be affected by horizontal gene transfer (Kitahara and Miyazaki 2013).
from the same "universal common ancestor."12 It is only by means of a misrecognition, such as the one that plagued Oedipus, that the return of this familial endowment comes to appear as a foreigner and guest. At the same time, even the "endogenous" story does not necessarily imply familial and filial unity. That one of these theories represents the Darwinian or neo-Darwinian self-propagation of a lineage while the other represents the intrusion and displacement of that lineal continuity depends on a common source or theory that is in a sense the origin of the origin. Whether one imagines the continuity or rupture of these pure lines, the idea that a lineage is formed by the proper reproductions of a bounded cell is a presupposition both explanations hold in common. If it were possible, without any visible barrier even needing to be crossed, for the innate possibilities of an organ or organism to transform or transgress the given, for possibility and impossibility to trade places, then "life" would be exposed to a syn- or hetero-nomy older than any encounter with its near or distant relatives.
DARWIN DISPLACED
For all of the transformations it introduced into evolutionary history and theory, Margulis's work nonetheless betrays its dependence on these limits of phylogenetic thought. Everywhere that she places in question the genetic grounding of life's innate possibilities, it is only to locate innate possibilities of being one step above or below their traditional locus. She puts forward criticisms that would place in question the very scientificity of science, but only to critique particular representations of population genetics, without recognizing that these same criticisms would apply to the symbiotic and symbiolic representation of life that she champions. The fictionality of phylogeny, and of the notions of species and nature with which it is entwined, is not a circumstantial limit of a particular representation of evolution, any more than it is the mark of a fault or sin that distances human knowledge from the tree of life; rather, species, nature, and everything attached to the value of origin depends on the artifice that makes it impossible.
12 Though Margulis and Sagan are critical of the unifying logic of the common ancestor (often referred to as the Last Universal Common Ancestor, LUCA), they grant it in the same breath: "The long-term symbiosis that led to species origin by symbiogenesis requires integration of at least two differently named organisms. No visible organism or group of organisms is descended 'from a single common ancestor'" (2002, 7). They can only challenge the unity of ancestry for "visible organisms" (a visibility that they, not without reason, take as definitive of the species, the eidos) by positing a unified life (the "differently named organism") underneath them of which they are the re-composition.
Margulis's criticism of "neo-Darwinian" population genetics follows this pattern of incorporating the "errors" it denounces. All of Margulis's criticisms of the field are apt, but fundamentally they apply just as well to her preferred formalisms. Population genetics treats life as a calculable set of genes. Values can be assigned to represent the effects of factors such as breeding tendencies, selection, mutation, migration, and drift on the intergenerational transmission of these genes, and the resulting formulae can be used to predict the change of genotype frequencies in a population. Margulis argues that this formalization creates a "mechanistic" picture of life that eschews empirical study and deals only with idealized quantifications that are not "directly measurable."13 This critique of mechanism suggests a transformed picture of evolution and of life:
My view is that neo-Darwinist fundamentals, derivative from the mechanistic life science worldview, are taught as articles of true faith that require pledges of allegiance from graduate students and young faculty members. I include as examples of such fundamentals a nonautopoietic definition of life; a bodiless, linear concept of evolution; and an uncritical acceptance of the mesmerizing concept of adaptation. (Margulis and Sagan 1997, 271-72)
Margulis's critique of "adaptationism" focuses on the passive role in which it places the living. Adaptation implies an organism honed through random events of differential survival, and thus a purely mechanical or efficient image of causality.14 What appears as an adaptation, in Margulis's view, is both the constant self-maintenance of an autopoietic individual, and self-maintenance within an environmental context that is also autopoietically created
and maintained by the living.15 This connects her theory of symbiotic evolution to her endorsement of James Lovelock's Gaia hypothesis, according to which the earth is itself a metabolically self-sustaining, living individual (Margulis and Sagan 1997, 127-44).16 Regardless of how we assess each of these interventions in evolutionary thought, it remains to ask what can be done if Margulis's theory of evolution and anything that could count as theoretical or scientific must depend in turn on something like mechanistic modeling.
13 Margulis includes tables in this essay that contrast neo-Darwinist terms she argues are mere groupthink to those she claims are "independent of language and culture" (Margulis and Sagan 1997, 275). Somewhat ironically, given her critiques of efficient causality and of the "physicomathematics envy" of population geneticists (1997, 266), the tables of culturally contingent neo-Darwinist terms include any term that implies a purpose or final causality (such as cooperation), while her table of "universal science" includes the basic properties of physics and chemistry (mass, length, volume, velocity, pressure, etc.). Without attempting in any way to recuperate the self-evidence or cultural independence of neo-Darwinian concepts, I would nonetheless posit that Margulis's work depends on just as contingent and deconstructible a set of assumptions.
14 "The mechanistic worldview has many problems, one of which is the failure of neo-Darwinist biologists to think physiologically in general and to recognize the principles of autopoiesis in particular. Biologists are failing to embrace alternatives to a mechanical universe run by their supposed superiors: physicists, chemists, and mathematicians" (Margulis and Sagan 1997, 267).
To know whether one has escaped the orbit of mathesis and efficient causality, one would have to know the essence of these categories. It may not be as simple as avowing the absence of arithmetical symbols to prove that there is no "mathematical" residue to one's thinking.17 If a symbiotic union can lead to a new organism or way of life, then a formula that predicts or postdicts random genetic mutations will tell us
15 Several obstacles stand in the way of offering a straightforward definition of the term "autopoiesis" as it circulates in Margulis's discourse. Though she frequently invokes the term and does define it, she nonetheless attributes quite heterogeneous and contradictory values to it. The term is used, on the one hand, to grant a basic, elemental status and even "self"-hood to the bacterial cell, on the grounds that it self-produces. To an extent, this is coherent with Maturana and Varela's use of the term, who understand the cell as the most basic autopoietic unit and multicellular organisms to be built of these units. On the other hand, Margulis equates this self-production with a spontaneous originality of life that extracts it from all mechanism, while Maturana and Varela explicitly claim they are creating a mechanistic account of life (Margulis and Sagan 1997, 267; Maturana and Varela 2012, 75-76). Maturana and Varela also correctly argue that their theory can explain nothing about the origin of the variations that shape evolution, while Margulis takes it as a return to originality itself (Maturana and Varela 1992, 115). Moreover, while Maturana and Varela cite Margulis's work and invoke symbiosis, their definition of it is incompatible with many of her formulations (Maturana and Varela 1992, 87-88). For Maturana and Varela, only endosymbiosis would count as symbiosis, while other interactions of cells and organisms would not. One could even go as far as to say that the notion of operational closure Maturana and Varela attribute to the autopoietic system, which assumes that it has, in fact, no environment or world but rather only pre-programmed possibilities of plasticity, guarantees that there can be no true evolution, no genesis or symbiogenesis at all. This is a generalizable problem of scientific thought, which must posit the end as a possibility present from the beginning in order for a cause to be knowable.
16 Lovelock's Gaia hypothesis has recently received well-deserved criticism for its rejection of the environmentalist thinking and activism that was nascent at the time of its formation. Lovelock used the idea of a metabolic planet to argue that Earth could maintain its equilibrium much better than environmentalists such as Rachel Carson suggested. This research was a direct result of funding Lovelock received from the Royal Dutch Shell corporation, and was used to give scientific sanction to polluters and their enablers in government (Aronowsky 2021). Margulis's work on the Gaia hypothesis repeats the same criticisms of environmentalist thought (Margulis and Sagan 1997, 129; Cf. Margulis 1996, 140).
nothing about this source of novelty in the course of evolution (unless one invoked the convenient hypothesis that a genetic mutation caused this choice of symbiotic lifestyle). Without recourse to an innate repository programming the future of life, it would be necessary to turn to history to decipher the contingency of an event without law.18 This is the logic of Margulis's project, and however compelling it may be as an apparent antidote to the stifling project of Modern Synthesis genetics, it depends on principles that are not straightforwardly opposed or opposable to their supposed opposites. The "same" force that, in its population genetic guise, stifled and excluded life, this very same force gives life to every symbiotic union. The mathematical models of the population geneticist missed life entirely, according to Margulis, because they programmatically preserved isolated reproductive lineages that could never capture the apparently spontaneous irruption of the contingency of symbiotic union, the breath of life. But it can always be demonstrated that spontaneity or freedom only exists where some calculation fails; it depends just as much on the program that it appears to outstrip. So, wherever the unprogrammability of life appears to manifest itself by means of symbiosis, it is a priori given that this crossing of borders depends on the very borders it places in question. If one truly abandoned all thinking in terms of genetic lineages, one would lose the sym- along with the auto-. The very chance and hope of life is only there where that which corrupts it in principle, the mechanical and programmatic, the stifling or dead, accompanies and cultivates it. Life and death are symbiotic.19 Though no word could capture this duplicitous movement, if word implies the unity of a meaning, conceptual content, or essence, I would propose to let this non-self-identity inhabit, with a sort of parasitic alterity, the term symbioautothanatosis.
17 Nothing about symbiosis is immune or opposed to mathematical modeling. In fact, there can be scientific knowledge and knowledge at all only where a certain formalizability that will always be mathematically expressible holds sway. For an example of an attempt to treat "holobiont" evolution mathematically, see Roughgarden et al.
I deploy this term, symbioautothanatosis, to trouble the sense of union or reunion that accompanies the thought of symbiosis. Something that cannot be identical with life, because it is not identical to itself, a contingent and
18 Margulis does not articulate the opposition between mathematicism and history as clearly as Stephen Jay Gould will, for instance. For her, the outside of mathematics is, somewhat confusingly, the dynamic modelling of autopoiesis. Still, turning from the computer screen to "nature" is part of her project: "Neo-Darwinists, closet neo-Darwinists, and non-neo-Darwinists argue among themselves about 'who selects' and 'what is selected.' [...] Dover (1988), for example, attempts to extricate us from some of these evolutionary tangles when he writes: 'The study of evolution should be removed from teleological computer simulations, thought experiments and wrong-headed juggling of probabilities, and put back into the laboratory and the field'" (Margulis and Sagan 1997, 271).
19 One could just as well write: life death is symbiotic. "Life death" is the term introduced in Derrida's recently published 1975-76 seminar to describe life as neither opposed nor identical to death (Derrida 2020, 1-24; Vitale 2018).
revisable definition or self-definition of the essence and form (the life and species) of the living, is the precondition of what it makes impossible, the origin of life.For the scientist (who cannot simply exclude themselves from the domain to be defined, the domain of the living), this means that the possibility of symbiotic union, and any other form of horizontal involution in the tree of life, fictionalizes the concept of the isolated reproductive lineage that grounds the thought of species.Nonetheless, without these impossible and impossibly pure reproductive filiations, the very phenomenon of horizontality or symbiosis could not appear.The overcoming of living (natural, original) boundaries implied by the sym-bio-derives from the artificial (instituted, non-natural) positing of fictional divisions, from auto-thanatosis.Moreover, this non-oppositional heteronomy cannot be restricted to the "epistemological" failings of a supposedly unique being who has eaten from the arbor scientiae.The specular alterity that makes scientificity possible and impossible is life itself, if there is any; life has always defended and cultivated and defined itself through acts of recognition and reproduction that necessarily depend on the reading, writing, and re-inscription of traces that are symbioautothanatotic, neither living nor dead, neither self nor other.
BACTERIA: AUTOS REGAINED
The necessity of symbioautothanatosis makes itself explicit in Margulis's work.The dependence of her theory of symbiogenesis on conventional and instituted notions of taxonomy, and thus on the formalism of mathesis, is just as legible in her theory as it is in the work of those geneticists she disparages: The life-centered alternatives to mechanistic neo-Darwinism recognize that, of all the organisms on Earth today, only prokaryotes (bacteria) are individuals.All other live beings ("organisms"-such as animals, plants, and fungi) are metabolically complex communities of a multitude of tightly organized beings.That is, what we generally accept as an individual animal, such as a cow, is recognizable as a collection of various numbers and kinds of autopoietic entities that, functioning together, form an emergent entity-the cow."Individuals" are all diversities of coevolving associates.Said succinctly, all organisms larger than bacteria are intrinsically communities.(Margulis and Sagan 1997, 273) That is, in order to preserve the "life-centered" communalism of symbiosis, it is necessary to insist, just a single rung down in the vital ladder, on the "individuality" of bacteria.Thus, symbiosis has a cause which is perhaps internal to its communitarian body, "endosymbiotic," but nevertheless external and efficient or mechanical in the sense that its source resides in the encounter of originally exclusive bodies or agencies (nothing but an inner will and purpose would transcend efficient causality, which means that nothing but mechanism ever presents itself to scientific knowledge); moreover, it is formal-mathematical in the sense that the unity of these units is defined by a deconstructible model of life and its heredity, not to be confused with life "itself," if there is any.It retains the mark or stain of what Margulis denounced when it appeared in the corpus of population genetics: 1) mechanism and 2) formalism, mathematicism.
Even if Margulis's discourse would not be recognizable to a mathematician as part of their field, that does not guarantee that it is free of all mathematicity.Without pretending that this term could be defined, without pretending that it sheltered anything like an essence, we can at least ask if the defects it has been accused of by Margulis do not return in her own discourse.If the autopoietic living individual is the uniquely productive bacterium, this can easily be indicated by an X, and the formula for a minimal symbiosis could be written as X + X.Whether or not it is written as an algebraic formalism, every concept that allows for effects of scientific or theoretical unity must admit of something like a logic that can be abstracted from and re-deployed within the varying contexts of its application.The difference from nature "itself" that Margulis decried in the form of population genetical formalism necessarily reappears in any discourse that purports to do more than marvel before an unspeakable singularity.If one speaks the word "bacterium," or lets any other term or germ insinuate itself into discourse or consciousness, one is already working with iterable and operationalizable traces.It is not the case that these traces simply repress or falsify the true and vital world of singular things; rather, there is no world without them.There is nothing without the at once repressive-productive trace.The trace allows for every effect of formalizability and de-contextuality, precisely by "producing" the residue of a situated and material or singular world.If a technology like the term bacterium allows for the heterogeneity both outside and within each "individual" to be dissimulated or dissolved in a family resemblance, it is nonetheless only on the basis of such an iterable and abstractable formalization that the singular and its corruptibility or contamination can first appear.
The same analysis or deconstruction that turned the eukaryotic cell and everything based on it (fungus, plant, animal) into a consortium of bacteria can be turned on the "bacterium" itself.If it is possible to observe some unity here, and even to claim that this unity is autopoietic or self-made, that is only the case where it is constantly threatened and even constitutively compromised by forces that can no longer simply be attributed to the bacterium itself, if this is to name anything like a positive entity.I would attempt to draw this partitioning back before the classical topos of philosophical thought that identifies life where parts operate or cooperate in the service of the internal purposes of a whole (as opposed to the external purpose of a machine).Certainly, a bacterium is decomposable, intellectually or materially, into constituents whose unity has the character of a temporary détente.These parts, just in order to count as parts and parts of the whole, are necessarily riven by the trace that places the "whole" in question.Some may prove extrinsic and dispensable to the formation, or some more necessary than others, and each may have a different or differing history, age, and origin, if not quite a history of their own.Ultimately, this deconstruction of the "autopoietic cell" can only demonstrate dis-unity by feigning the unity of parts, attributing them identities and something like a life that can then be given or shared symbiotically.I insist on it not to pretend that those parts are the true unities underlying everything else, but to recognize the necessary transition or translatability of frames that inaugurates the legibility of these possibilities while also harboring their dissolution.
A trace that is neither natural nor artificial functions as the hidden border that interminably re-frames, re-writes and re-reads the dispensation of vitality.It is on the basis of this non-present and non-self-identical trace that "bacteria" and the distinction of separate bacterial lineages can function both as the name for the self-originating source of life and all symbiosis, and as an artificially imposed term that has been repeatedly displaced and deconstructed in the past decades. 20One figure for this disruption of the integrity and conceptual stability of life is the virus.Despite its apparent exteriority to its hosts, it can reveal a re-apportioning of the boundaries of the living that is no more internal than external.It draws boundaries within the distribution of species as they have been known, parasitizing some while benignly occupying others-though these mutualisms may be just as violent, creating a marked lineage that brings death to its former kin.In any case, the mark or trace of species-identity is revealed to be a non-present vulnerability to what arrives long after birth or the origin.As such, it is not simply the case that the virus, as a positive entity with its own identity, befalls the cell, but that its possibility reveals something like a virality that sets in motion every apparently living unity, animating something like the circulation of its ------------------------------------------- -20 The most influential displacement of the category of bacteria has come from Carl Woese's invention or discovery of the domain Archaea (Sapp 2009, 162-313).
economy or membrane.This virality is the non-unity of everything, and yet is the condition of all unity present and to come.The borders of a cell or its species are haunted by a non-apparent vulnerability to something like an infection that can arise from within or without and re-partition the lines or lineages of life by means of an a-filiation without reproduction-a sans-biosis.Every appearance of individual-or species-identity harbors the threat or promise of a subversion that could not simply be attributed to another life, present and self-made.
Perhaps for this reason, Margulis struggles to fit the virus within her theory of autopoiesis and symbiosis. It does not count, in her estimation, as autopoietic or even as living (Margulis and Sagan 2002, 39-40). Nonetheless, she acknowledges that it can be the donor of acquired genes or genomes, and thus of heritable traits and even speciation (Margulis and Sagan 2002, 73-75). The virus is part of the origin of species or evolutionary novelty that she seeks, without itself being part of life.
REPRO-TRADUCTIONS: NATURAL MODELS
The symbiotic theorist, both when identifying her own theoretical act, and when identifying its object, symbiotic union, is guilty of a certain misrecognition.Margulis does not see that her own theory, like the population genetics she criticizes, necessarily has the status of a model or metaphor, something that cannot simply be identified with the "nature" it makes accessible.Yet, like a symbiont that is both foreign to itself and a family relation of even the most unrecognizable other, model-metaphors are not simply external to and imposed on what they describe."Nature" is nothing at all without its model, or at least the designation of the outside-the-model is itself part of any logic, system, science, or model of thought.This referral or renvoi beyond itself is a line that runs within every term or observable in one's system-one never simply arrives at this outside only because one is always there already. 21 The desire for nature itself manifests in Margulis's thought as a search for the origin of nature's pure productivity.She points out that Darwin's Origin of Species lacked a theory of precisely that-the origin of speciation (Margulis and Sagan 2002, 3).Darwin theorized that a variable population inheriting certain traits from its parents with differential survival and reproduction would undergo natural selection, but gave no theory of the source of --------------------------------------------21 On the renvoi, see Derrida 1976, 46-49;Senatore 2017.variability.The Modern Synthesis attempted to fill this gap by placing random genetic mutation as the driver of variation, but this leaves something to be desired.Margulis only points out that random genetic mutation is typically deleterious-rather than leading to new species, it tends toward debilitation and death (Margulis and Sagan 2002, 10-11).One could carry this critique a step further, however.Genetic study must begin from relatively intuitive, observed differences among the living, at which point the genetic difference correlated with that variation can be sought out.The most one ever arrives at by this method (whose limitation applies to any search for causes) is a knowledge of difference, rather than origin.In other words, it is never a positive term, this "gene" here, to which causality can be attributed, but a difference whose positive faces are interminably deconstructible.It will always remain possible to find that an organism which develops at a different temperature, or with more or less nutrition, or with the internal presence of another "gene," or the external presence of a new symbiont, no longer expresses the same phenotypic difference, and thus that this "gene," as a supposedly positive entity, was never a pure origin or causal source.
Even the most univocal geneticists acknowledge the primacy of difference (Huxley 2010, 18-19;Dawkins 2006, 281;Schwartz 2000).It is not true that "gene A causes trait B," but that an unknown network whose inaccessible contours may be broader than the world allows something to operate as a "gene," while always harboring the threat or promise of taking back what it has given.
This difference-or differance-at-the-origin certainly necessitates that genetics will remain a deconstructible science.An "epigenetic inheritance system," or even an external feature of the biotic or abiotic "environment" can function as a "gene" or decisive difference just as well as the organism's "genetic" endowment.However, this "same" deconstructibility, even if it places every "genetics" in question, can never be overcome by another science, under another heading, model, or metaphor.Symbiogenesis (and epigenetics, niche construction theory, developmental systems theory, ecoevo-devo, or any other participant in the "Extended Evolutionary Synthesis") will necessarily remain just as deconstructible in their representations or reconstructions of causality.One can supplant a model-metaphor, but only with another model-metaphor.Thus, we can read with a certain suspicion everything in Margulis's discourse that aspires for a return to the pure productivity of nature, that is, to nature itself without intermediary.Symbiosis is posited as the "source" of evolutionary variation and "novelty," the origin of species or speciation that Darwin missed (Margulis and Sagan 2002, 11-12).Though it is attributed to individuals that are already living, they are purely "productive," "autopoietic" or self-making bacteria, offering the gift of nature itself.
"Bacterium" is just as revisable and revolutionizable a category as any model or metaphor.It is perhaps to avoid facing this artificiality of nature, and of the bacteria in which she had entrusted nature "itself," that Margulis fought against Carl Woese's widely accepted division of "prokaryotes" into the two domains Bacteria and Archaea (Sapp 2009, 198). 22One symptom of this desire to return to a pure and simple origin in and as nature appears in Margulis and Sagan's framing of a deceptively simple inquiry: "where new species come from" (Margulis and Sagan 2002, 3).Phrasing the question in this way leads the reader (and the authors) toward only one side of the Janus-faced answer.Margulis will argue: new species come from symbiosis, from acquiring genomes.What this answer leaves out is that "new" species, symbiogeneses, necessarily depend as well on a prior, revisable, revolutionizable, or deconstructible model-metaphor of species.Someone must decide what "species" means or where to draw its lines."New" species arise where someone or something, often a scientist, deploys a species-concept or species-decision and achieves some degree of communal consensus around their decision or discission.The source or "origin" of nature is divided; "nature" is born from this supplement of artificiality, from symbioautothanatosis.
Anton de Bary's definition of symbiosis, which Margulis frequently invokes, makes this supplementary structure explicit. Symbiosis is "the living together of unlike named organisms" (qtd. in Sapp 1994, 7). 23 Without the nomination that creates the apparent family resemblance or effects of conceptual unity of a species, no crossing of borders or symbiogenesis could be possible. Where do new species come from? From the scientist, who is at once the most fertile and sterile of creatures.
The dependence on a prior, deconstructible inscription is legible in Margulis's very "definition" of species (which, given this circularity, cannot hope to name a simple concept).The "symbiogenetic definition of species" --------------------------------------------22 Margulis advocated for a five-kingdom taxonomy, which combined all prokaryotes in kingdom Monera.She advocated for this taxonomy in a work coauthored with Karlene Schwartz (1988), Five Kingdoms, and it formed the organizing principle of another work she co-authored with Dorion Sagan (1995), What Is Life?In "Big Trouble in Biology," when Margulis contrasts scientific terms that in her estimation are nothing more than groupthink with those she claimed were "independent of language and culture" and represented "universal science," she placed "Archaeobacteria" and other terms related to Woese's research in the disparaged category (Margulis and Sagan 1997, 276).
23 "The long-term symbiosis that led to species origin by symbiogenesis requires integration of at least two differently named organisms" (Margulis and Sagan 2002, 7).groups together those organisms "composed of the same set of integrated genomes" (Margulis and Sagan 2002, 6).What is a genome but the idealized genetic endowment of a species? 24
ALL TOO HUMAN SCIENCES
Margulis repeats the gesture of accusing the other of a fault or gift they share in common when she describes the concepts of population genetics as "anthropocentric:" Symbiosis, merger, body fusion, and the like cannot be reduced to replacing "competition" as a major motive force in evolution with "cooperation."Ultimately, an anthropocentric term like "competition" has no obvious place in the scientific dialogue[...]Vogue terms like "competition," cooperation," "mutualism," "mutual benefit," "energy costs," and "competitive advantage" have been borrowed from human enterprises and forced on science from politics, business, and social thought.(Margulis and Sagan 2002, 15-16) We have returned to the debate with which we started, over whether symbiosis should be thought as a mutually beneficial association, or a neutral one that perhaps exists beyond the reaches of economic calculation. 25Everything I have said thus far, however, should lead us to question whether any concept, whatever its name and position relative to this economy, could truly remain pure of all "anthropocentrism."If every concept retains some mark of artificiality, then it will always appear detachable from its "natural" context, received from a relatively naïve and extrinsic intuition (even when it originates "within" the sciences), and thus as bearing the stain of its contingent dependence on the apparently human investigator.At the same time, this very contingency is what allows or constrains the "human" to duplicitously reach beyond itself.Nature "itself" is only ever the after-effect of this conceptuality or this re-contextualization of relatively "anthropocentric" --------------------------------------------24 If one took "genome" in this definition to simply mean whatever DNA sequences were present in a single organism, then only genetic clones would belong to the same species.The examples Margulis gives make clear that this is not how she intends her definition to be taken.
25 I hoped to consider the work of Donna Haraway and Zakiyyah Iman Jackson as part of this essay, but constraints of length require that I devote a separate text to these ethicopolitical reflections on symbiosis.Both authors waver between descriptions of symbiosis that recognize it as irreducibly ethically fraught, and those which treat it as the good itself.In this, their projects are the ethical mirror of those debates between symbiosis as mutualism and symbiosis as beyond economy that have plagued every author who takes up the subject.
concepts-if anything at all will appear to us as beyond our limits and definitions, arriving to us as a pure gift of the other, it could only be as a nonpresent restlessness within the circulation of these economic concepts. 26 Margulis may be sensing a deeper risk in the idea of "cooperation," given its circumscription by the "reciprocal altruism" of population genetics and gene-selection discourse.This theory of reciprocity attempts to explain every apparently "altruistic" behavior among the living within the confines of population genetics, and thus of individualistic struggle (Trivers 1971;Dawkins 2006, 166-88).Everywhere that apparent selflessness can be observed, the organism that risks or sacrifices its own benefit must either expect some good deed in return, or be acting for the sake of organisms with which it shares some genes.This conclusion absolves itself of its cynicism with a simple calculation-any behavior that was truly self-sacrificial, in terms of reproductive potential, would be drowned out by those who looked out for their own advantage (this depends on the idea that all behavior is determined through vertical inheritance).In other words, if we define selfinterest as reproductive success, it is more or less tautologous to conclude that a behavior that sacrifices reproduction will cease to exist, while any behavior that augments it will become more prevalent.A system organized to mechanically act out of self-interest could arrive at these apparent performances of "altruism." 27Thus, if one interprets symbiosis as mutual benefit ("cooperation"), one has not in fact challenged the principle of neo-Darwinian thinking, which is Margulis's ultimate goal-to return to a life whose pure generosity is not yet circumscribed within an economy of differential survival.Can such a thing be the subject of scientific knowledge?It would require, according to Margulis and Sagan, a novelty that only the symbiosis of disciplines could provide: Such evolution requires new thought processes.New metaphors to reflect on permanent associations are needed[...]we would propose a new search in the social sciences for terms to replace the old, tired --------------------------------------------26 On the interdependence of anthropocentrism and its excess, see Derrida's (1982) "The Ends of Man."On the suppression of this interdependence (and of deconstruction) in contemporary materialist and realist thought, see my ( 2018) "Misreading Generalised Writing." 27 That is not to say that these are the only possible interpretations of "altruism."The undecidability of its concept or figure demands the search for an interpretation that can never quite satisfy the impetus setting it in motion.One could compare, for instance, Sober and Wilson's (1999) Unto Others, which does not attempt to reduce altruism to individualistic competition, but nonetheless does not take the radical departure from adaptationism that Margulis is proposing.social Darwinist metaphors.If survival is owed to symbiosis, rather than overemphasized intraspecific competitive struggles, what then are the consequences for nonbiologists interested in evolution?(Margulis and Sagan 2002, 15) There is more than an appearance of contradiction in this reasoning which sees "politics, business, and social thought" as anthropocentric but "social sciences" as a potential source for the influx of life itself.This impasse is symptomatic of the desire to exceed anthropocentrism while remaining scientific.
Myra J. Hird, whose project in The Origins of Sociable Life is largely inspired by Margulis's work, seems to discover such a concept from a perhaps unlikely source.She takes the idea of a gift beyond economy not only from anthropologists such as Marcel Mauss (who could be the sort of "social scientist" Margulis envisioned) but from the work of Jacques Derrida as well (Hird 2009, 77-90). 28For Hird, the gift offers a figure, necessary for biological as well as social thought, of a relationship with the other that would take place outside of economic exchange: It is this excess of the gift-a compromising of the self-that interests me.[...] I argue that there is much in gifting that circumvents descriptions of the 'self'/'nonself' dichotomy in terms of a closed economy in which resources are exchanged without excess or remainder.[...] My interest is to bring together these two literatures, the former concerned with the philosophical and the latter with the biological.This bringing together attempts an analogy between the biological and the economical self.To do this, I will suggest that the models of self produced by each discipline have developed in directions that suggest an appreciation of the self's excess produced through intraaction.I argue this excess (especially in terms of its unpredictability and unintended consequences) may be usefully illustrated by the bioevolutionary phenomenon of symbiogenesis.(Hird 2009, 77-78) The figure of the gift, as it circulates in Hird's text, largely follows the contours of Margulis's argument regarding symbiosis: the symbiotic gift is extracted from economic relations of either benefit or detriment, but at the same time is figured as the good itself.One could say that an enormous credit is extended to the symbiotic gift.The problem with letting the gift circulate --------------------------------------------28 Joost van Loon's "Epidemic Space" also turns to Derrida's work on the gift in the context of a discussion of symbiotic phenomena (2005,41).in discourse in this way is that the gift is neither a simple "thought process" nor a "metaphor" (which is what Margulis hoped the social sciences would provide us with).It has no essence and no figure, but loses itself in the "same" gesture that grants it, as Hird's syntax demonstrates with an uncanny insistence: "Corporeal generosity escapes neoliberal notions of a closed economy, and reminds us that, whatever cultural notions of autonomy and free will to which we might aspire, we are all corporeally inter-dependent" (2009,88).Can one escape "closed economy" (and with it neoliberalism as well as what Hird elsewhere calls "Western" thought and society) by recognizing an inter-dependence?That is, by acknowledging one's place in a system of indebtedness?The gift depends on the relations of self and other that it nonetheless places in question, and thus undoes itself in the same breath that offers it.This gesture that declares its freedom from the circularity of identity only by trapping itself within the circle is structurally identical to the claim: "Symbionts all the way down means that we are, ancestrally, made up of bacteria" (Hird 2009, 84).
If the gift depends on what it is not-economy-then it will never be a present entity or process (bacteria, corporeal inter-dependence), nor even a theoretical ideality.Derrida's intervention in anthropological studies of the gift is to demonstrate that everywhere the anthropologist speaks of and celebrates the gift they locate it within economy-that a doctrine of knowledge or positive science of the gift is an impossible project.Thus, he cannot simply be arrayed with Mauss as yet another theorist, scientist, or philosopher of the gift.The problem is not that the gift simply isn't these economic manifestations or circulations.That would leave open the possibility that a gift was something or somewhere else.Rather, gift and economy depend on each other while making each other impossible.They have neither a relation of simple exteriority nor identity.One can give no content to the idea of a gift in any recognizable logic or grammar unless someone gives something to someone else, yet these are precisely the conditions that undermine the gift (Derrida 1992, 11).If a recipient recognizes that they have received a "gift," then they are immediately indebted or obligated in an at least symbolic economy that might require from them gratitude and other recompense.Even if a giver knows (consciously or unconsciously) that they have given in secret, this recognition is enough to annul the generosity of the gift, to bring it within an economy of self-congratulation.The conditions of possibility of the gift are its conditions of impossibility.
Still, if we were to say that there could be gift only under the cloak of an absolute unconsciousness (more radical, Derrida specifies, than that Freudian or Lacanian unconscious that forgets nothing and whose letter always arrives), we would have to admit that what was lost was the possibility of a knowledge or science of the gift.We would never know when a gift had been given, how or by whom, nor could we ever exclude the possibility.And yet, if the gift is the good itself, the only chance of a good beyond economy, it would follow that ethics as such would be the science of the gift, would consist in the commandment that one give and know how to give, and know how to give thanks when a gift has been received.It is imperative that we know, precisely where knowledge makes its own order impossible (its field and its command).
The objective of my or Derrida's texts are not to deny the gift, nor to insist on a unity and saturation of economy that would be just as illusory.As I tried to show above, there could be no life or evolution at all without a gift that is nonetheless impossible as a positive presence.Only by insisting on the most extreme and exacerbated non-self-identity of gift and economy can one give the gift the only chance or risk it will ever have.Otherwise, if one is willing to describe economy while calling it gift, with a self-satisfied credulity, what hope is there?In a word, gift cannot be captured within that economy of knowledge we call science, at least not any science worthy of the name.Life or "evolution" cannot be the gift that "bacteria" give to each other or their hosts in a symbiotic union-not if we hope to pretend that we know anything at all by these names.If one knows what one means and what one says by naming them, can point them out with surety and agree on observations among a community of scientists, can provide logically consistent discourses in which these names circulate, and if one believes or acts as if these terms indicated ideal and self-identical unities, then they will never be gifts or givers.They can circulate in a (mechanistic, mathematical) economy, they can exchange credit and debit, but they cannot give.
As I attempted to show above, it was not where "bacteria" or "symbiogenesis" was invoked as the self-originating origin of life that the gift shone through in Margulis's texts.Rather, it was where a certain arbitrariness marked the deconstructibility of these terms, without offering anything like a secure foothold for alterity, that something like an impossible gift infected their economy.Margulis's text, like Derrida's and I hope my own, is written on the gift, in every sense of the phrase-which certainly does not mean that we can purely thematize and objectify it, but rather that we are already part of and engaged in its sending, before we can even hope to speak its name.Such texts cannot simply belong to the category of "science," though they cannot be opposed to science, placed under another heading, either: For finally, if the gift is another name of the impossible, we still think it, we name it, we desire it.We intend it.And this even if or because or to the extent that we never encounter it, we never know it, we never verify it, we never experience it in its present existence or in its phenomenon.The gift itself-we dare not say the gift in itself-will never be confused with the presence of its phenomenon.Perhaps there is nomination, language, thought, desire, or intention only there where there is this movement still for thinking, desiring, naming that which gives itself neither to be known, experienced, nor lived-in the sense in which presence, existence, determination regulate the economy of knowing, experiencing, and living.In this sense one can think, desire, and say only the impossible, according to the measureless measure of the impossible.If one wants to recapture the proper element of thinking, naming, desiring, it is perhaps according to the measureless measure of this limit that it is possible, possible as relation without relation to the impossible.One can desire, name, think in the proper sense of these words, if there is one, only to the immeasuring extent [dans la mesure démesurante] that one desires, names, thinks still or already, that one still lets announce itself what nevertheless cannot present itself as such to experience, to knowing: in short, here a gift that cannot make itself (a) present [un don qui ne peut pas se faire présent].This gap between, on the one hand, thought, language, and desire and, on the other hand, knowledge, philosophy, science, and the order of presence is also a gap between gift and economy.This gap is not present anywhere [...].(Derrida 1992, 29) "Living" and "science," which are here at least grammatically arranged on the side of "presence" or the "order of presence," are not simply opposable to the thought, language, or desire that exceeds the economy of presence.It is not that life or science must or even could be given up in the name of the gift.Rather, only the movement of risking and re-appropriating science and life holds open the hope and the faith that the most unanticipatable monstrosity could emerge from the economy of nature such as it has been known thus far. | 14,009.8 | 2021-12-28T00:00:00.000 | [
"Philosophy"
] |
A start-to-end optimisation of CLEAR for an inverse Compton scattering experiment, using RF-Track
The CERN Linear Electron Accelerator for Research (CLEAR) has been operating as a user facility since 2017, providing beams for various experiments. This paper describes a start-to-end optimisation of the CLEAR beamline as a driver for X-ray generation through inverse Compton scattering. The novel particle tracking code RF-Track was used to simulate the electron beam from the bunch generation at the cathode up to the interaction with a laser beam. Figures of merit of the scattered photon beam were computed in RF-Track, and optimised by tuning the beam parameters at injection and quadrupole strengths across the beamline. The aim of the optimisation was to maximise the scattered photon flux, and minimise the effects from static and dynamic imperfections. The start-to-end model of the CLEAR beamline was used to derive the impact of jitter on flux.
Introduction
Recent advancements in the average power of commercial lasers have led to a resurgence in the R&D of inverse Compton scattering (ICS) sources as a compact alternative to synchrotrons. In particular, the energy tunability of ICS sources can be exploited to generate high-intensity gamma beams, which can be used for various applications, including protein crystallography [1], nuclear resonance fluorescence [2], or tomography, such as K-edge subtraction [3,4].
ICS sources based on storage rings have been extensively used due to their high repetition rate [5,6,7]. However, besides a significant footprint, these sources are also limited by a large emittance, which decreases the average brilliance of the scattered photon beam. Sources based on linear accelerators can offer high-quality beams for pulsed operation [8,9] in considerably smaller facility sizes. The low repetition rate of linacs can be compensated by using a burst mode-operated Fabry-Perot cavity, where Joule-level laser effective energies are stored [10].
Linac-based ICS sources can benefit from developments in high gradient acceleration and high repetition rate injectors developed, for example, in the context of the Compact Linear Collider (CLIC) [11].
The CLEAR beamline
The CLEAR facility provides high-quality electron beams for activities comprising R&D on accelerator components for current and future accelerators, electron-based irradiation, and novel accelerating technologies [12].
Electron Beam Parameters
The CLEAR linac accelerates electrons up to 200 MeV. The facility can provide bunches with a length from 0.1 ps to 10 ps and a charge from 5 pC to 3 nC. The train repetition rate varies from 0.83 Hz to 10 Hz. The maximum total pulse charge is 30 nC. The photoinjector laser sets the bunch repetition frequency within a train to 1.5 GHz, which can be increased to 3 GHz through an optical double-pulse system. The relative energy spread of the electron bunch is under 0.2% RMS.
Laser Beam Parameters
In the first phase of the ICS experiment, the photoinjector would provide the laser beam. This simplifies the temporal matching with the electron beam. The laser wavelength is 1047 nm, with a pulse length of 4.7 ps, a burst repetition rate of 10 Hz, and a micropulse repetition rate of 1.5 GHz. The micropulse energy ranges from 10 µJ to 15 µJ. Given 150 incident pulses, the burst energy becomes 2.3 mJ, and the average power 23 mW.
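As a quick cross-check of these figures, the burst energy and average power follow directly from the micropulse energy, the number of pulses per burst, and the burst repetition rate. The short sketch below simply reproduces the quoted values, using the upper 15 µJ micropulse energy; all numbers are taken from the text above.

```python
# Cross-check of the quoted laser burst parameters (values from the text above).
micropulse_energy = 15e-6   # J, upper end of the 10-15 uJ range
pulses_per_burst = 150
burst_rate = 10.0           # Hz

burst_energy = micropulse_energy * pulses_per_burst   # ~2.3 mJ
average_power = burst_energy * burst_rate             # ~23 mW
print(f"burst energy = {burst_energy * 1e3:.2f} mJ, average power = {average_power * 1e3:.1f} mW")
```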
Interaction Region
Considering the linear Compton regime with round Gaussian laser and electron intensity distributions and neglecting the hourglass effect, the number of photons generated through ICS in a bunch crossing is

N_γ = σ_T N_e N_laser cos(ϕ/2) / [2π σ_y √(σ_x² cos²(ϕ/2) + σ_z² sin²(ϕ/2))],    (1)

where σ_T is the Thomson cross section, N_e is the number of electrons in a bunch, N_laser is the number of laser photons in a pulse, ϕ is the crossing angle between the laser and electron beam, and σ_i with i = x, y, z is the convolution of the electron and laser transverse and longitudinal sizes at the interaction point (IP) [13].
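A minimal numerical sketch of Eq. (1) is given below. The Thomson cross section and the photon energy (hc/λ) are standard constants, while the electron and laser spot sizes are placeholder values chosen only for illustration and are not the optimised CLEAR parameters.

```python
import numpy as np

SIGMA_T = 6.6524587e-29  # Thomson cross section [m^2]
HC = 1.98645e-25         # Planck constant times speed of light [J m]

def ics_photons_per_crossing(n_e, n_laser, phi, sig_e, sig_l):
    """Eq. (1): linear ICS photon yield per bunch crossing for round Gaussian
    beams, neglecting the hourglass effect. sig_e and sig_l are the
    (sigma_x, sigma_y, sigma_z) rms sizes of the electron bunch and laser pulse [m]."""
    sx, sy, sz = np.sqrt(np.asarray(sig_e) ** 2 + np.asarray(sig_l) ** 2)  # convolved sizes
    c, s = np.cos(phi / 2), np.sin(phi / 2)
    return SIGMA_T * n_e * n_laser * c / (2 * np.pi * sy * np.hypot(sx * c, sz * s))

# Illustrative numbers only (not the optimised CLEAR values)
n_e = 1e-9 / 1.602e-19            # electrons in a 1 nC bunch
n_laser = 15e-6 / (HC / 1047e-9)  # photons in a 15 uJ pulse at 1047 nm
n_gamma = ics_photons_per_crossing(n_e, n_laser, phi=0.0,
                                   sig_e=(86e-6, 26e-6, 0.6e-3),
                                   sig_l=(50e-6, 50e-6, 1.4e-3))
print(f"photons per crossing ~ {n_gamma:.2e}")
```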
To maximise the outgoing flux from Eq. (1), one needs to design a source with strongly focused head-on collisions of high-density electron and laser pulses. To achieve this, the relevant electron beam parameters need to be optimised at injection, and the quadrupole strengths tuned to allow for a small waist at the IP. Optimising the CLEAR beamline required software capable of tracking the electron beam through a custom set-up and simulating ICS.
RF-Track simulation of ICS
A start-to-end simulation of the CLEAR beamline was performed using RF-Track [14]. RF-Track is a tracking code developed at CERN to simulate beam transport under the simultaneous effect of space-charge forces and wakefields. Linear ICS has been recently implemented and benchmarked against CAIN [15], a standard simulation code of ICS [16]. In RF-Track, the beam can be tracked from the cathode to the interaction point, where the ICS effect is computed. The scattered photon beam is then tracked up to the detector's location, where its figures of merit are evaluated. The start-to-end simulation also allowed for studying the flux sensitivity to various sources of jitter.
The normalised beam emittance and the relative energy spread of the electron bunch are fixed at injection. Since these two parameters contribute to the quality of the scattered photon beam, a realistic simulation of the injector, based on the field maps of the CLEAR photoinjector and travelling wave structures, was implemented in the RF-Track model. At high bunch charge, beam loading effects in the RF gun and the linac become relevant. A beam-loading module has been developed in RF-Track to include these effects [17,18].
Parameters such as the RF phase, the gradient, and the laser spot size on the photoinjector cathode were tuned to minimise emittance and energy spread at the end of the injector. This allowed for an increase in the average brilliance and a decrease in the bandwidth of the scattered photon beam.
To optimise the electron beam parameters at the interaction point, a model of the CLEAR optics was implemented in RF-Track. Using the simplex algorithm with an appropriate merit function, the quadrupole strengths were tuned to reduce the beam size and obtain a waist at the IP. To avoid beam loss from the interaction of the electron beam with the beam pipe walls, additional weighting was introduced to limit the maximum electron beam size to 10% of the beam pipe aperture. The limit was applied from the linac exit to the last dipole, after which the electron beam is separated from the scattered photon beam and sent to the beam dump. The IP coordinates were chosen based on the available focusing power and the impact of the beam jitter on the scattered photon flux.
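The optimisation logic can be illustrated with a toy model. The sketch below uses a one-dimensional thin-lens lattice and SciPy's Nelder-Mead (simplex) routine to minimise the spot size at a mock IP while penalising any upstream beam size above an aperture limit. The lattice, beam parameters, and penalty weight are invented for illustration and do not correspond to the actual CLEAR optics or the RF-Track merit function.

```python
import numpy as np
from scipy.optimize import minimize

def thin_quad(k):
    """Thin-lens quadrupole with integrated strength k = 1/f [1/m]."""
    return np.array([[1.0, 0.0], [-k, 1.0]])

def drift(length):
    return np.array([[1.0, length], [0.0, 1.0]])

def beam_sizes(ks, sigma0, lattice):
    """Propagate a 2x2 sigma matrix through drifts ("D") and thin quads ("Q"),
    returning the rms beam size after each element."""
    sizes, sig, ki = [], sigma0.copy(), iter(ks)
    for kind, length in lattice:
        m = drift(length) if kind == "D" else thin_quad(next(ki))
        sig = m @ sig @ m.T
        sizes.append(np.sqrt(sig[0, 0]))
    return np.array(sizes)

# Toy lattice: three quadrupoles separated by drifts, with the "IP" after the final drift
LATTICE = [("D", 1.0), ("Q", 0.0), ("D", 0.5), ("Q", 0.0),
           ("D", 0.5), ("Q", 0.0), ("D", 2.0)]
SIGMA0 = np.array([[(1e-3) ** 2, 0.0], [0.0, (1e-4) ** 2]])  # 1 mm spot, 0.1 mrad divergence
APERTURE_LIMIT = 2e-3  # stand-in for "10% of the beam pipe aperture" [m]

def merit(ks):
    sizes = beam_sizes(ks, SIGMA0, LATTICE)
    scraping = 1e3 * np.sum(np.clip(sizes[:-1] - APERTURE_LIMIT, 0.0, None))
    return sizes[-1] + scraping  # small waist at the IP, heavy penalty for scraping

result = minimize(merit, x0=[0.5, -0.8, 0.5], method="Nelder-Mead")
print("optimised quad strengths 1/f [1/m]:", np.round(result.x, 3))
```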
Beam parameters at injection
The optimisation of the CLEAR beamline began with the choice of beam parameters at injection. The optimisation of the linac parameters allowed for a normalised emittance of 12 mm mrad and an energy spread of 2‰ at the start of the first quadrupole triplet. To maximise the number of electrons per bunch from Eq. (1), the bunch charge was increased to 1 nC. Simulations showed that a further increase in bunch charge led to a saturation of the photon flux. A larger bunch charge corresponds to an increase in emittance under space charge effects. This leads to weaker focusing and a larger spot size at the IP. Additionally, previous laser wire studies of the CLEAR injector showed that using a bunch charge over 1 nC led to a significantly larger background in the ICS photon detector [19]. Given a total pulse charge of 30 nC at CLEAR, the number of electron bunches per train was maximised to 30. To match the longitudinal distribution of the incident laser pulse, the bunch length was set to 2 ps. The laser micropulse energy was increased to 15 µJ, which maximised the number of photons per laser pulse. A summary of the electron and laser parameters is given in Table 1. Simulations of the crossing angle showed that an increase in ϕ to 5° leads to a loss in flux of 40%. In linac-based ICS sources, head-on collisions can be achieved by using parabolic mirrors with drilled holes to allow for the passage of the electron beam [8].
Quadrupole focusing at the IP
The evolution of the electron bunch beam size through the optimised quadrupoles at CLEAR is shown in Fig. 1. The location of the IP was set after the third quadrupole triplet. This placement allowed for a strong final focusing and minimised the impact of jitter on the scattered photon flux. The distance from the IP to the detector was minimised to 3.7 m.
A horizontal and vertical electron beam size of 86 µm and 26 µm, respectively, was obtained at the interaction point. The electron beam size was kept below 10% of the beam pipe radius, except at the exit of the quadrupole doublet. This might lead to the production of secondary radiation, which can be mitigated by either scraping the electron beam with a set of collimators or installing polyethylene boards around the X-ray beamline, attenuating the secondary neutron radiation and enabling a clear path for the scattered photons [9]. A summary of the gamma-ray parameters is given in Table 1. The ICS photon spectrum through a 1.5 mrad aperture is shown in Fig. 2, with the Compton edge at 715 keV.
Impact of beam jitter on flux
The RF-Track model of CLEAR was used to evaluate the impact of beamline jitter on the scattered photon flux in a 1.5 mrad cone. The jitter amplitudes reported at CLEAR comprise an energy jitter of 1%, a timing jitter of 300 fs, a position (angle) jitter of 10% of the beam size (divergence), a magnet displacement jitter of 1 µm, and a bunch charge jitter of 1%. A realistic estimate of the scattered photon flux was obtained by implementing all the sources of jitter into the RF-Track model. This was done by randomly varying each jitter source according to its amplitude and tracking an electron bunch through the modified beamline.
After a thousand runs, where each run is equivalent to one bunch crossing, the distribution of the change in flux through a 1.5 mrad cone due to jitter was calculated with respect to the nominal value, as shown in Fig. 3. These simulations showed that jitter in the beam energy has the most significant impact on flux. This is likely due to chromatic effects introduced by the quadrupoles preceding the IP. The simulations also showed that an energy jitter with an amplitude less than 0.25% would be required to keep changes in flux under 10%. A gamma distribution was used to fit the histogram in Fig. 3, and the mean and σ of the flux in a 1.5 mrad cone were determined to be 4 × 10⁵ ± 5% ph/s.
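The structure of such a jitter study can be sketched with a simple Monte Carlo, shown below. The flux model is a toy surrogate (chromatic spot-size growth with energy error, Gaussian overlap loss for transverse offset and timing slip, linear charge dependence) with invented sensitivity coefficients; it only illustrates how per-crossing jitter values are sampled and how the distribution of the fractional flux change is accumulated, and it is not a substitute for the RF-Track tracking.

```python
import numpy as np

rng = np.random.default_rng(1)

# Jitter amplitudes (1 sigma) as reported in the text
JITTER = {"energy": 0.01, "timing_fs": 300.0, "position_frac": 0.10, "charge": 0.01}

def flux_factor(d_energy, d_t_fs, d_x_frac, d_charge, chrom=40.0, sigma_t_fs=4700.0):
    """Toy surrogate for the relative flux of one bunch crossing.
    Assumed scalings (not RF-Track output): chromatic spot growth with energy
    error, Gaussian overlap loss for offset and timing, linear charge dependence."""
    spot_growth = np.sqrt(1.0 + (chrom * d_energy) ** 2)
    overlap_x = np.exp(-0.5 * d_x_frac ** 2)              # offset in units of the beam size
    overlap_t = np.exp(-0.5 * (d_t_fs / sigma_t_fs) ** 2)  # longitudinal slip
    return (1.0 + d_charge) * overlap_x * overlap_t / spot_growth

runs = np.array([
    flux_factor(rng.normal(0.0, JITTER["energy"]),
                rng.normal(0.0, JITTER["timing_fs"]),
                rng.normal(0.0, JITTER["position_frac"]),
                rng.normal(0.0, JITTER["charge"]))
    for _ in range(1000)
])
frac_change = runs - 1.0
print(f"mean change {frac_change.mean():+.2%}, spread {frac_change.std():.2%}")
```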
Further work
The optimisation of the CLEAR beamline was performed as a prerequisite for an eventual ICS experiment at the user facility. Most of the results obtained, including the photon spectrum and the flux sensitivity to jitter, can be experimentally determined and benchmarked against the RF-Track simulation.
This paper was focused on the optimisation of the electron beam for ICS. However, later stages of the experiment will require the design of a burst mode-operated Fabry-Perot cavity, which will allow for a significant increase in flux. A preliminary study indicated that increasing the number of electron bunches per train to 150 will produce a maximum effective gain in the cavity of 116, maximising the effective laser energy and scattered photon flux.
The experimental apparatus will comprise an inorganic scintillator, which will detect the outgoing photons, and an interaction chamber housing the alignment equipment and optical mirrors, which will focus the laser beam at the IP. A GEANT4 [20] simulation of the detector is being developed to determine the expected signal produced by the gamma beam on the detector. This result will be used to identify background sources of radiation and improve the detector signal-to-noise ratio.
Conclusion
An optimisation of the electron beamline at the CLEAR user facility was performed in RF-Track for an inverse Compton scattering experiment. The electron and laser beam parameters were chosen to maximise the scattered photon flux. The magnetic strength of the quadrupoles was tuned to ensure a small beam size at the IP and to avoid scraping of the beam pipe walls. A bunch satisfying the required conditions was tracked through the set-up, and figures of merit were determined for the scattered photon beam. The implementation of the full beamline in RF-Track allowed for determining the impact of jitter on the scattered photon flux. The sensitivity studies showed that jitter in the beam energy has the largest impact on the flux.
Figure 1 :
Figure 1: Tracking of the electron beam size (sigma) along the CLEAR beamline. The position of the quadrupoles is marked in green. The interaction point is circled in black.
Figure 3 :
Figure 3: Histograms of the flux sensitivity for the total, beam energy, and magnet displacement jitter. Counts were normalised to the maximum bin height. The fractional change in flux was defined relative to the nominal value.
Table 1 :
Parameters of the Electron Beam, Laser, and Scattered Photon Beam Obtained from the Optimisation. | 2,921.8 | 2024-01-01T00:00:00.000 | [
"Physics",
"Engineering"
] |
Fabrication and Characterization of Biomedical Ti-Mg Composites via Spark Plasma Sintering
The fabrication of Ti-Mg composite biomaterials was investigated using spark plasma sintering (SPS) with varying Mg contents and sintering pressures. The effects of powder mixing, Mg addition, and sintering pressure on the microstructure and mechanical properties of the composite materials were systematically analyzed. Uniform dispersion of Mg within the Ti matrix was achieved, confirming the efficacy of ethanol-assisted ball milling for consistent mixing. The Young's modulus of the composite materials exhibited a linear decrease with increasing Mg content, with Ti-30vol%Mg and Ti-50vol%Mg demonstrating reduced modulus values compared to pure Ti. Based on density measurements, compression tests, and Young's modulus results, it was determined that the sinterability of Ti-30vol%Mg saturates at a sintering pressure of approximately 50 MPa. Immersion tests in physiological saline showed that Ti-30vol%Mg maintained a compressive strength above that of cortical bone for 6 to 10 days of immersion, with mechanical integrity improving under higher sintering pressures. These findings support the development of Ti-Mg composite biomaterials with tailored mechanical properties, with the potential to improve biocompatibility and osseointegration for a wide range of biomedical applications.
Introduction
In recent years, with the increase in demand for medical care due to aging and longevity, there is an expectation for the development of high-functioning and high-value-added biomaterials. Metallic biomaterials are often used as load-bearing components, taking advantage of their high fracture toughness and fatigue strength. Alongside existing biomaterials such as stainless steel (SUS316L), Co-Cr alloys, commercially pure titanium, and titanium alloys, functional metallic biomaterials such as shape memory alloys, functionally graded materials, and biodegradable Mg materials have been developed [1,2]. Among these, commercially pure titanium and titanium alloys are widely used in dental and surgical implants due to their high specific strength and excellent corrosion resistance. While the Young's modulus of titanium alloys (approximately 110 GPa) is significantly smaller than that of other metallic biomaterials such as stainless steel or Co-Cr alloys, it is still much larger than the Young's modulus of cortical bone (7-30 GPa) [2][3][4]. This difference leads to stress shielding, where a significant portion of the stress is preferentially borne by the implant, potentially causing an inhibition of bone growth and a decrease in bone density [5]. Therefore, efforts have been made to develop materials such as β-type titanium alloys and porous materials to decrease the Young's modulus [6,7].
It has been found that the Young's modulus of β-type titanium alloys is lower compared to α-type titanium alloys, and the non-toxic Ti-29Nb-13Ta-4.6Zr (TNTZ) alloy, which exhibits the lowest Young's modulus of approximately 60 GPa, has been developed [2,8,9]. Additionally, porous materials have a lower apparent Young's modulus and can enhance osseointegration, where implants bond with living bone at the optical microscope level, allowing for a sufficient fixation of implants [10]. However, there are challenges with β-type titanium alloys, such as the high cost and high melting point of alloying elements like Nb, Ta, Mo, or Zr [11]. On the other hand, porous materials often experience stress concentration in pore regions, leading to inferior mechanical properties, making them more suitable for low-load environments [12].
To address these challenges, research on Ti-Mg composite materials has been conducted [13][14][15]. Magnesium is an essential element in the body and has a modulus of 41 GPa, which is closer to the modulus of cortical bone than that of other metallic biomaterials, and it is cost-effective [16,17]. Since the modulus of composite materials is roughly proportional to the volume fraction, compounding magnesium, which has a lower modulus than titanium, results in Ti-Mg composite materials having a lower modulus than pure titanium and titanium alloys [18,19]. Moreover, when biodegradable magnesium dissolves in the body, the originally magnesium-containing parts transform into pores, changing the Ti-Mg composite material into porous titanium [20,21]. Therefore, Ti-Mg composite materials initially have superior strength compared to porous titanium due to the presence of magnesium during implantation. As the magnesium dissolves during bone recovery and growth, the material transforms into porous titanium with low modulus and excellent osseointegration with living bone [22][23][24]. The density and melting point of Ti are 4.506 g/cm³ and 1668 °C, respectively, while the density and melting point of Mg are 1.738 g/cm³ and 650 °C, respectively. Due to this significant difference in density and melting point, obtaining Ti-Mg composite materials with a uniform structure using conventional casting methods is difficult [25]. Powder metallurgy is practical for manufacturing such metal composite materials with significant differences in properties. Powder metallurgy is a method of producing dense materials by diffusion of metal atoms between powder particles, eliminating the need to melt the metal and allowing for material fabrication at lower temperatures compared to casting methods [26]. In manufacturing Ti-Mg composite materials, the Spark Plasma Sintering (SPS) method has several advantages over other methods [27], such as liquid Mg infiltration, as shown in Table 1, making it suitable for biomedical applications. Specifically, SPS allows for the precise control of sintering temperature and pressure, ensuring uniform microstructural development and enhanced densification. This accurate control minimizes thermal gradients and results in superior mechanical properties. Additionally, SPS significantly reduces processing time compared to liquid infiltration methods and allows for the fine-tuning of microstructural characteristics through adjustable sintering parameters. This flexibility enables tailoring mechanical properties to meet specific biomedical requirements, such as desired Young's modulus and strength [28]. SPS combines spark discharge between powder particles, heating by Joule heating, and pressure application, effectively shortening the sintering time due to rapid heating rates [29,30]. Moreover, materials fabricated using SPS typically exhibit high-density uniform structures, and the short sintering time reduces the likelihood of grain coarsening, resulting in sintered bodies with excellent mechanical properties [28,31]. For the Ti-Mg composite materials produced by changing the composition and sintering pressure, their mechanical properties were evaluated by observing the microstructure, compression tests, and Young's modulus measurements. How changes in the amount of Mg added and sintering pressure affect the microstructure and mechanical properties was investigated. Additionally, Mg was dissolved by immersing the Ti-Mg composite materials in a physiological saline solution. Mg toxicity typically results from an excessive intake of
medications containing magnesium or impaired kidney excretion [32]. Therefore, it is crucial to regulate and monitor its solubility carefully. The compression test was performed on samples immersed for different durations to investigate the mechanical integrity of the composite materials.
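The rule-of-mixtures argument invoked above can be made concrete with a short estimate. The sketch below uses the Voigt (volume-weighted) upper bound with assumed handbook moduli of roughly 110 GPa for Ti and 41 GPa for Mg; the measured moduli of the sintered composites differ from such estimates because of residual porosity and interfacial effects.

```python
# Voigt (rule-of-mixtures) estimate of the composite Young's modulus.
# E_Ti ~ 110 GPa and E_Mg ~ 41 GPa are assumed handbook values, not data from this study.
def voigt_modulus(vol_frac_mg, e_ti=110.0, e_mg=41.0):
    return (1.0 - vol_frac_mg) * e_ti + vol_frac_mg * e_mg

for v in (0.0, 0.30, 0.50):
    print(f"Ti-{v:.0%}Mg -> E ~ {voigt_modulus(v):.0f} GPa")
```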
Materials and Methods
Using pure Ti powder (purity 99.98%, spherical, maximum particle size 45 µm) and pure Mg powder (purity 99.5%, irregular shape, average particle size 180 µm) as raw materials, three types of powder were prepared: pure Ti, Ti-30vol%Mg, and Ti-50vol%Mg. Pure Ti remained pure Ti powder, while a pulverizing ball mill (Pulverisette 7 classicline, Fritsch, Germany) was used to mix the composite powder. A stainless-steel mixing container was filled with 50 wt% ZrO2 balls with a diameter of 1 mm and 5 wt% ethanol as the mixing solvent. After mixing for 5 min at 500 rpm, the rotation was stopped for 5 min, then reversed, and mixing was continued for another 5 min at 500 rpm. After mixing, the ZrO2 balls were separated from the mixed powder using a 600 µm sieve. The purpose of the ball mill in this study is to prevent powder aggregation by the solvent and to aim for a more uniform mixture rather than particle crushing. The powder-mixing operation was conducted in a glove box filled with Ar gas. Before the sintering process, X-ray diffraction (XRD, MiniFlex600 by Rigaku, Tokyo, Japan) analysis was performed, with angle conditions set at 20°-90° and a step size of 0.01°, on three types of powders (pure Ti powder, pure Mg powder, and Ti-30Mg mixed powder), to verify their composition and crystallinity [33,34]. Carbon paper with a thickness of 0.2 mm was placed on the bottom and sides of the graphite die (NJS-Japan, Tokyo, Japan), and each powder was filled into the graphite die inside a glove box. The powders were then compressed using a hydraulic pump at 20 MPa for 1 min to obtain a compacted body. The sintering container, now containing the compacted body, was carefully installed in the chamber of the SPS device (511S, SPS Syntex, Tokyo, Japan). The vacuum level inside the chamber was maintained at 50 Pa. The sintering temperature, a critical parameter that directly influences the final properties of the sintered samples, was set to 580 °C. Similarly, the sintering pressure, another key parameter that significantly affects the sintering process, was set to 25, 50, 75, and 100 MPa, respectively. After sintering, the samples were unloaded without waiting for cooling and cooled while maintaining the vacuum inside the chamber. Table 2 shows the Mg content and sintering conditions of the fabricated samples. The sintered specimen was cut using a microcutting system (Accustom-5, Struers, Tokyo, Japan), and the cross-section was polished with #500 to #4000 SiC emery papers, followed by polishing with 3 µm diamond spray as the abrasive. After ultrasonic cleaning with isopropanol for 5 min, observations were made using an optical microscope (OM) (DMI3000M, Leica Microsystems, Wetzlar, Germany) and field emission scanning electron microscopy (FESEM) (JSM-7200F, JEOL, Tokyo, Japan), and an elemental analysis and element distribution map (EDM) were performed using energy dispersive spectroscopy (EDS) (JED-2300 Analysis Station Plus, JEOL, Tokyo, Japan). The powder, fracture surface, and post-infiltration structure were also observed using FESEM without polishing.
The sintered specimens were cut into dimensions of 4 mm × 4 mm × 8 mm and polished using #1000 SiC emery paper. The vertical, horizontal, and height dimensions of the samples were measured three times using a micrometer, and the average values were used to determine the dimensions. The weight of each sample was measured to calculate its density. Compression tests were conducted using an Autograph universal testing machine (AG-1 1000 kN, Shimadzu, Tokyo, Japan). Nominal strain and nominal stress were calculated using the above measurement values in the compression tests. The tests were performed three times for each specimen, and stress-strain curves were plotted based on the results. To measure Young's modulus, an ultrasonic pulse velocity test was conducted. This is a non-destructive technique that involves propagating ultrasonic pulses through the sample using longitudinal and transverse wave transducers. Young's modulus was then determined based on the velocity of the ultrasonic pulses.
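For readers who wish to reproduce this kind of post-processing, the following minimal Python sketch illustrates how bulk density and nominal stress-strain values can be computed from the specimen dimensions, mass, and force-displacement data. All numerical values in the snippet are placeholders for illustration and are not measured data from this study.

```python
import numpy as np

# Illustrative post-processing of a compression test on a 4 mm x 4 mm x 8 mm specimen,
# following the usual nominal (engineering) stress/strain definitions.
width, depth, height = 4.0e-3, 4.0e-3, 8.0e-3      # specimen dimensions (m)
mass = 0.47e-3                                      # measured mass (kg), hypothetical value

area_0 = width * depth                              # initial cross-sectional area (m^2)
volume_0 = area_0 * height                          # initial volume (m^3)
density = mass / volume_0                           # bulk density (kg/m^3)

force = np.array([0.0, 1.0e3, 2.0e3, 3.0e3])        # compressive force (N), placeholder
displacement = np.array([0.0, 0.02e-3, 0.05e-3, 0.10e-3])  # crosshead displacement (m), placeholder

nominal_stress = force / area_0                     # engineering stress (Pa)
nominal_strain = displacement / height              # engineering strain (-)

print(f"density = {density:.0f} kg/m^3")
for e, s in zip(nominal_strain, nominal_stress):
    print(f"strain = {e:.4f}, stress = {s / 1e6:.1f} MPa")
```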
Using SiC emery paper ranging from #500 to #4000, the bottom of the sintered specimens was polished to be parallel and then polished with a diamond spray (3 µm). After preparing cylindrical specimens with a diameter of approximately 20 mm and a thickness of about 9 mm, the thickness of the samples was measured at seven points using a micrometer, and the average of five data points, excluding the maximum and minimum values, was taken as the thickness of the sample. For the measurements, an ultrasonic flaw detector (USM35X, GE Measurement and Control, MA, USA), a longitudinal wave probe (G5KB, GE Measurement and Control, MA, USA), and a transverse wave probe (B2C10SN, ITeS Corporation, Tokyo, Japan) were used. Measurements were conducted seven times for both longitudinal and transverse waves, and the average of five data points, excluding the maximum and minimum values, was taken as the velocity used to calculate Young's modulus. Because the immersion test samples were too small for the ultrasonic method, an alternative approach was taken to estimate their Young's modulus [35,36]: the slope of the elastic region of the stress-strain curve obtained from the compression test was used. However, it should be noted that the Young's modulus calculated from this slope in the compression test is lower than the usual Young's modulus. Therefore, the change in Young's modulus with immersion time was evaluated as a relative decrease, with the value at 0 days of immersion set to 100%.
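As a hedged illustration of the ultrasonic pulse method, the snippet below applies the standard isotropic elastic-wave relations (shear modulus from the transverse velocity, Poisson's ratio from both velocities) to obtain Young's modulus. The density and velocity values are assumed for demonstration only and are not taken from the measurements reported here.

```python
def youngs_modulus_from_velocities(rho, v_l, v_t):
    """Isotropic Young's modulus (Pa) from density (kg/m^3) and the
    longitudinal/transverse ultrasonic wave velocities (m/s).
    These are the standard elastic-wave relations, not the authors' own code."""
    nu = (v_l**2 - 2.0 * v_t**2) / (2.0 * (v_l**2 - v_t**2))  # Poisson's ratio
    g = rho * v_t**2                                           # shear modulus (Pa)
    return 2.0 * g * (1.0 + nu)

# Illustrative values only: density roughly that of a Ti-30Mg composite,
# wave velocities hypothetical.
rho = 3.7e3   # kg/m^3
v_l = 5.5e3   # longitudinal wave velocity, m/s
v_t = 3.0e3   # transverse wave velocity, m/s
print(f"E = {youngs_modulus_from_velocities(rho, v_l, v_t) / 1e9:.1f} GPa")
```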
For the immersion test, two types of sintered materials were prepared, for compression testing and observation, respectively, and each sintered material was cut into rectangular shapes of 4 mm × 4 mm × 8 mm. The sides were polished with #1000 SiC emery paper, and then the height, width, and length were each measured three times using a micrometer. Finally, ultrasonic cleaning was conducted for 300 s using isopropanol. A physiological saline solution was prepared by adding 18 g of sodium chloride to 2 L of distilled water. The specimens were immersed in 50 mL of physiological saline solution per 10 mm² of sample surface area, and the temperature was maintained at 37 °C using a muffle furnace (FO-60P, Glass Kiki Co., Ltd., Tokyo, Japan) [37]. After the immersion test, the samples were washed with distilled water for 60 s, followed by compression testing. Additionally, the observation samples were immersed in a mixture of chromic acid and nitric acid for 60 s to remove corrosion products, then washed with distilled water before observation.
Evaluation of Powders
Figure 1a,b show the appearance of the Ti and Mg powders, respectively, observed by FESEM. The appearance of the Ti-30Mg mixed powder obtained by mixing these raw powders is shown in Figure 1c. The regions indicated by the white arrows in this figure are Mg particles, and it was observed that large, irregularly shaped Mg particles are dispersed within the relatively small, spherical Ti powder. Mechanical mixing was performed using a ball mill; however, there was no significant difference in the size of the Mg particles compared with the Mg raw powder. Due to the short mixing time of 10 min, most of the Mg particles appear to have remained uncrushed. Generally, the effective pore diameter for bone cells to penetrate and proliferate within porous biomaterials is reported to be between 100 µm and 400 µm [38,39]. Since the Mg particles remained uncrushed at an average size of 180 µm under these mixing conditions, good osteoinductivity of the porous body after Mg dissolution can be expected. The surfaces of the as-received Mg particles were smooth; after mixing, however, the Mg particles were found to be covered with fine attachments. EDS point analysis of these attachments mostly revealed pure Mg, while some white attachments showed an oxygen concentration of 53.41 at%, indicating magnesium oxide. The temperature of the mixing vessel rises due to collisions between the vessel walls, the ZrO2 balls, and the raw powder during ball milling, promoting the reaction between ethanol and the Mg particles and resulting in the formation of such compounds. Figure 1d presents the XRD results for the powders. No peaks other than Ti and Mg were observed for pure Ti and pure Mg. Although peaks of Ti and Mg were observed simultaneously for Ti-30Mg, no oxide peaks were observed. Although the presence of reaction products was identified by FESEM and EDS analysis, they were present only on the surface of the Mg particles and in small amounts overall; hence, no corresponding peaks appeared in the XRD patterns.
Microstructure of Sintered Composites
Figure 2a-c depict micrographs of Pure Ti, Ti-30Mg, and Ti-50Mg by optical microscope, respectively. The vertical direction represents the compression direction during sintering. Although 580 °C is a relatively low sintering temperature for Ti, sintering of Pure Ti resulted in only a small amount of porosity. According to the density measurements, the relative density was 91%, meaning the porosity was 9%, indicating sufficient densification of Pure Ti even at 580 °C, as shown in Table 3. With a sintering pressure of 50 MPa, it can be observed that Mg is uniformly dispersed within the Ti matrix in Ti-30Mg and Ti-50Mg. Mg particles appear slightly flattened perpendicular to the compression direction, indicating deformation due to the sintering pressure. Based on the calculation of porosity using relative density, the addition of Mg resulted in a decrease in porosity due to its effect of filling the pores between Ti particles. The lowest pressure sample, Ti-30Mg (TM25), exhibited a porosity of 6.4%, while TM50-TM100 showed results close to 0%.
Specimen            Porosity of Sample (%)
Pure Ti             9.0
Ti-30Mg             0
Ti-50Mg             0
Ti-30Mg (TM25)      6.4
Ti-30Mg (TM75)      0
Ti-30Mg (TM100)     0

Furthermore, an increase in porosity within the Ti matrix compared to Pure Ti can be observed. Figure 2d presents an enlarged image of the microstructure of Ti-30Mg, where, as indicated by the red circle, the regions without Mg show well-progressed sintering with neck growth between Ti particles, whereas, as noted in the yellow circle, Ti particles near the Mg particles do not exhibit neck growth and maintain the shape of the raw powder. This suggests that insufficient sintering stress is applied to Ti particles near Mg particles due to Mg deformation, resulting in delayed sintering progress. The increase in porosity within the Ti matrix is also presumed to be due to the deformation of Mg during sintering. Various parameters affect the structure and mechanical properties in the production of specimens by powder sintering, such as composition, powder particle size, powder mixing method, and sintering conditions. Especially in the case of spark plasma sintering, changes in temperature, holding time, heating rate, pressure, discharge pulse interval, electric current, etc., alter the sintering characteristics. In this study, the sintering temperature was limited to a maximum of 650 °C so that sintering was conducted at a temperature where Mg does not melt. Therefore, to enhance the sintering characteristics of the Ti particles, the effect of sintering pressure on the sintering properties of the Ti-30vol%Mg composition was investigated in the present study.
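The porosity values above follow directly from the measured and theoretical densities. A minimal sketch of this calculation is given below, assuming a volume-fraction rule of mixtures for the theoretical density of the Ti-Mg mixtures; using the TM25 density of 3.44 g/cm³ quoted in the next paragraph, it reproduces the 6.4% figure, while the higher-pressure samples come out close to 0%.

```python
# Porosity estimate from relative density, assuming a volume-fraction rule of mixtures
# for the theoretical (fully dense) density of the Ti-Mg composite.
RHO_TI, RHO_MG = 4.506, 1.738   # g/cm^3, elemental densities quoted in the introduction

def porosity(measured_density, mg_volume_fraction):
    theoretical = (1.0 - mg_volume_fraction) * RHO_TI + mg_volume_fraction * RHO_MG
    # Clamp small negative values that can arise from measurement scatter.
    return max(0.0, 1.0 - measured_density / theoretical)

print(f"Ti-30Mg (TM25, 25 MPa): porosity = {porosity(3.44, 0.30) * 100:.1f} %")
print(f"Ti-30Mg (TM50, 50 MPa): porosity = {porosity(3.70, 0.30) * 100:.1f} %")
```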
Figure 3a-c depict the microstructures of specimens TM25, TM75, and TM100, respectively, which were sintered under different pressures of 25, 75, and 100 MPa using the Ti-30vol%Mg mixed powder as raw material. TM100, sintered at the highest pressure of 100 MPa, showed fewer pores within the Ti phase. Through the microstructure, it was possible to confirm that sinterability improves with increasing sintering pressure. To quantify sinterability, the density of each specimen was measured, and the density changes with pressure variations are represented in Figure 3d. The density of TM25 showed a smaller value (3.44 g/cm³) compared to the other specimens, indicating the presence of many pores within the specimen. While the density increased significantly up to 3.70 g/cm³ in TM50, there was only a slight increasing trend among TM50, TM75, and TM100, with no significant difference in density. When manufacturing Ti-Mg composites using SPS, the porosity of the composite material can be influenced by several factors in addition to sintering pressure, such as sintering temperature, Mg composition, and the size and shape of the powders [15,40]. Specifically, as the Mg composition increases, diffusion of Mg, which has a relatively lower melting point, occurs rapidly, leading to a decrease in porosity and an increase in density [15,41].
The sintering process consolidates powder particles into a solid mass by applying heat and pressure. Several mechanisms govern this process, including powder particle interactions, diffusion kinetics, and microstructural evolution [42]. During sintering, powder particles come into contact with each other due to the applied pressure. At the contact points, known as necks, atomic bonds form between particles, facilitating the consolidation process. The initial stage involves surface diffusion, where atoms migrate along the particle surfaces to form bonds [43][44][45][46]. As sintering progresses, bulk diffusion becomes dominant, with atoms diffusing through the lattice of the particles to further densify the material. Diffusion plays a critical role in sintering, as it governs the movement of atoms within the powder compact. The diffusion rate depends on factors such as temperature, pressure, and the chemical composition of the powder [43]. High temperatures increase atomic mobility, promoting faster diffusion and densification. Pressure reduces the activation energy required for diffusion, accelerating the sintering process. Additionally, the chemical composition of the powder influences the diffusion kinetics, as different elements diffuse at varying rates [47]. As sintering progresses, the microstructure of the material undergoes significant changes. Initially, pores between powder particles are eliminated as necks form and grow. As sintering continues, the pores decrease in size and number, increasing the material density. The microstructure evolves from a network of interconnected pores to a solid, dense structure. However, excessive sintering can result in grain growth and the formation of large pores, negatively impacting the mechanical properties of the material. The microstructural analysis revealed a uniform dispersion of Mg within the Ti matrix, which was achieved through mechanical mixing and spark plasma sintering. This uniform distribution is critical for ensuring consistent mechanical properties throughout the composite. The formation of fine Ti-Mg intermetallic phases suggests successful bonding between the titanium and magnesium particles, which is crucial for enhancing the composite's mechanical performance. The presence of these intermetallic phases can be attributed to the high heating rates and localized temperature spikes inherent in the spark plasma sintering process, promoting rapid diffusion and reaction between Ti and Mg.
Mechanical Properties Depending on Mg Contents
Figure 4a shows the stress-strain curves obtained from compression tests of Pure Ti, Ti-30Mg, and Ti-50Mg. In the compression test of Pure Ti, the specimen did not fracture and the stress continued to increase, so the test was stopped at the point exceeding the fracture strain of Ti-30Mg and Ti-50Mg. The compressive strength of cortical bone is estimated to be approximately 180 MPa, and all specimens exhibited higher strengths than cortical bone. When comparing the stress-strain curve of Pure Ti, in which no Mg compounds are present in the powder, with those of Ti-30Mg and Ti-50Mg, in which Mg compounds had formed on the surface of the Mg particles, no significant decrease in mechanical properties due to the compounds in the powder was observed. The yield stresses were 311 MPa for Pure Ti, 334 MPa for Ti-30Mg, and 239 MPa for Ti-50Mg, with Ti-30Mg showing a higher value than Pure Ti. This is believed to be due to the solid solution of oxygen in Ti. According to the Ti-O binary phase diagram, the solubility limit of oxygen in Ti is approximately 33% at both the sintering and room temperatures, which is significantly high. Oxygen uptake is considered to have occurred through exposure of the powder to air during transportation until installation in the chamber of the SPS device, as well as from oxygen atoms in ethanol, the mixing solvent.
On the other hand, the yield stress of Ti-50Mg was lower than that of Pure Ti and Ti-30Mg. This is attributed to the decrease in strength caused by the reduced volume fraction of Ti outweighing the solid solution strengthening by oxygen in Ti. Figure 4b illustrates the relationship between Mg content and Young's modulus measured by the ultrasonic pulse method. The Young's modulus of typical bulk pure Ti is 106 GPa, whereas that of the sintered Pure Ti in this study was 92 GPa, which is believed to be due to the 9% porosity in the sintered specimen. The Young's moduli of Ti-30Mg and Ti-50Mg were 81 GPa and 75 GPa, respectively, confirming that the Young's modulus decreases with increasing Mg content. The microstructure and phase composition can significantly affect the stress-strain response observed in compression tests of Ti-Mg composites. The microstructure influences the load-bearing capability and deformation mechanisms of the composite [48,49]. A well-sintered microstructure with fewer pores and a uniform distribution of Mg within the Ti matrix tends to exhibit better mechanical properties. Pores act as stress concentrators and can initiate cracks, reducing the material's strength and ductility [50]. The Mg content in the composite has a direct effect on Young's modulus because of the lower modulus of Mg compared to Ti. Generally, as the Mg content increases, the overall modulus of the composite decreases, because the modulus of a composite material is roughly proportional to the volume fractions of its constituent phases [51]. Therefore, with a higher volume fraction of Mg (modulus of 41 GPa) relative to Ti (modulus of about 110 GPa), the composite's modulus is lower than that of pure Ti.
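A minimal sketch of the volume-fraction rule of mixtures referred to above is given here, using the nominal moduli of 110 GPa for Ti and 41 GPa for Mg. This is only a first-order estimate; the measured values (81 GPa for Ti-30Mg, 75 GPa for Ti-50Mg) also reflect porosity and incomplete sintering, so exact agreement is not expected.

```python
# First-order (Voigt) rule-of-mixtures estimate of the composite modulus,
# using the nominal elemental moduli quoted in the text.
E_TI, E_MG = 110.0, 41.0  # GPa

def rule_of_mixtures(mg_volume_fraction):
    return (1.0 - mg_volume_fraction) * E_TI + mg_volume_fraction * E_MG

for f in (0.30, 0.50):
    print(f"Ti-{int(f * 100)}Mg: estimated E = {rule_of_mixtures(f):.0f} GPa")
```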
Figure 5 depicts SEM images of the fracture surface after compression testing of Ti-30Mg and the results of element distribution mapping by EDS analysis. The fracture surface of the Mg portion exhibits a river line crack pattern, indicating brittle fracture in the Mg portion. In contrast, the Ti portion maintains the shape of the raw powder, suggesting that the fracture in the Ti portion mainly occurred at the particle boundaries of the powder. From these observations, it can be understood that atomic diffusion at the Ti particle boundaries did not sufficiently progress. Optimizing the sintering conditions is expected to enhance the overall mechanical properties of the composite material by improving the sinterability of Ti [52].
Figure 6a plots the changes in uniaxial compressive strength (UCS) and yield strength with variations in sintering pressure, obtained from the stress-strain curves of TM25, TM50, TM75, and TM100 from the compression test results. Only TM25 showed distinctly different compression test results compared to the other specimens, with minimum compressive strength and yield strength. TM50, TM75, and TM100 showed no significant differences in compression behavior, compressive strength, or yield strength, similar to density. Additionally, Figure 6b presents the results of Young's modulus measurements at each sintering pressure. Young's modulus increased with increasing sintering pressure, from 57 GPa in TM25 to 91 GPa in TM75 and TM100. Since this study aims to produce Ti-Mg composite materials with low Young's modulus, a lower modulus is desirable. However, considering that Mg dissolves after insertion into the body, leading to porous Ti, and prioritizing the sinterability of the matrix Ti, an increase in Young's modulus due to increased sintering pressure is considered a favorable outcome. Since only TM25 showed significantly lower values in the density measurements, and there were no significant differences among TM50, TM75, and TM100, it can be assumed that the sinterability of Ti-30vol%Mg saturates around 50 MPa.
The significant effect of sintering pressure on Young's modulus, compared to UCS or yield strength, can be attributed to the microstructural changes that occur during the sintering process. Higher sintering pressures lead to better densification of the material, reducing porosity and increasing the contact area between grains [28]. This improved grain boundary contact enhances the material's ability to resist deformation, thereby increasing Young's modulus. Sintering pressure also helps in achieving a more homogeneous microstructure. A uniform microstructure with fewer defects and voids contributes to a higher Young's modulus because the material can deform elastically more uniformly under stress. Young's modulus is more sensitive to changes in microstructure and density than UCS or yield strength [53]. While UCS and yield strength are influenced by factors such as grain size and the presence of flaws, Young's modulus is directly related to the stiffness of the material, which is significantly affected by the degree of densification and the quality of grain boundaries. During sintering, pressure aids the formation of stronger bonds and larger necks between particles. These stronger inter-particle bonds contribute to a higher elastic modulus, as the material can better resist elastic deformation.
The Mg content and the sintering parameters significantly influence the mechanical properties of the Ti-Mg composites. The addition of Mg reduces the overall density of the composites, making them lighter than pure titanium. This reduction in density is advantageous for biomedical implants, as it can lead to less stress on the surrounding bone and tissue. The Young's modulus of the composites can be tailored by adjusting the Mg content. Composites with 5-15 wt% Mg exhibited Young's modulus closer to that of natural bone, which is beneficial in minimizing stress-shielding effects. This characteristic addresses a significant drawback of conventional titanium implants, which have a much higher modulus than bone, leading to stress shielding and bone resorption over time. The compressive strength of the composites decreases with increasing Mg content. However, the values obtained are still within acceptable ranges for load-bearing applications. This trade-off between modulus and strength must be carefully balanced to optimize implant performance.
Immersion Test in Physiological Saline
Figure 7 shows the microstructure of Ti-30Mg after immersion in physiological saline for 1 day. The observation was conducted after immersion for 1 min in a mixed solution of chromic acid (H2CrO4) and silver nitrate (AgNO3) to remove corrosion products. In Figure 7a, the dissolution of Mg and the formation of pores on the sample surface were observed. The micrograph in Figure 7b, magnified inside the pores, shows that most Ti particles maintain the shape of the raw powder. Here, as in the cross-sectional OM micrographs, it can be observed that the sintering of the Ti particles around the Mg powder was not fully completed. Figure 7c shows the element distribution maps of the pore region after the dissolution of Mg. Mg was detected on the pore surface, and oxygen was distributed together with Mg.

Figure 8a presents the stress-strain curves of compression tests after immersion tests for various durations, while Figure 8b shows the changes in UCS and fracture strain according to immersion time. After 6 days of immersion, the compressive strength was 214 MPa, and Ti-30Mg maintained a strength higher than that of cortical bone (approximately 180 MPa [54]) until 6 days after immersion. Both compressive strength and fracture strain decreased nonlinearly with increasing immersion time. In both cases, the experimental results closely matched a sigmoidal curve based on the Boltzmann function. Figure 8c shows the stress-strain curves obtained from compression tests of TM100 (Ti-30Mg), fabricated with a sintering pressure of 100 MPa, for various immersion times, while Figure 8d illustrates the changes in compressive strength and fracture strain during immersion. Similar to the compression test results for TM50 (Ti-30Mg) in Figure 8b, the UCS and fracture strain decreased with increasing immersion time due to the dissolution of Mg within the composite material. The UCS on the 9th and 10th days of immersion were 250 MPa and 276 MPa, respectively, maintaining strengths exceeding that of cortical bone until the 10th day. Figure 9a plots the variation in UCS with increasing immersion time for both TM50 (Ti-30Mg) and TM100 (Ti-30Mg), with a green dash-dot horizontal line representing the cortical bone strength. It can be inferred that the mechanical integrity improves with increasing sintering pressure, as TM100 maintains strength above that of cortical bone for a longer duration than TM50.
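For illustration, the sigmoidal (Boltzmann) trend noted above for the UCS versus immersion time data can be fitted as sketched below. The data points are rough illustrative values (only the value of about 214 MPa after 6 days is quoted in the text), and this parameterization of the Boltzmann function is a common convention rather than the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(t, a1, a2, t0, dt):
    """Boltzmann sigmoid: plateaus a1 (early) and a2 (late), midpoint t0, width dt."""
    return a2 + (a1 - a2) / (1.0 + np.exp((t - t0) / dt))

# Illustrative UCS data (MPa) versus immersion time (days); not the authors' dataset.
days = np.array([0, 1, 2, 4, 6, 8, 10], dtype=float)
ucs = np.array([520, 450, 380, 280, 214, 195, 190], dtype=float)

popt, _ = curve_fit(boltzmann, days, ucs, p0=[520, 190, 3, 1.5])
print("fitted parameters (A1, A2, t0, dt):", np.round(popt, 1))
print("predicted UCS at day 6:", round(float(boltzmann(6, *popt)), 1), "MPa")
```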
Mechanical property changes were relatively significant in the early stages of immersion. However, as time increased, the changes became relatively small, and there was almost no change in fracture strain after the 6th day. This is believed to be because Mg dissolved from the entire sample surface during the early stages of immersion, whereas subsequently, dissolution of Mg from the center of the sample was required. The dissolution of Mg occurs through the chemical reactions below [55], resulting in the generation of hydrogen gas and corrosion products where Mg is dissolved.
Mg + 2H2O → Mg(OH)2 + H2

Mg(OH)2 + 2Cl⁻ → MgCl2 + 2OH⁻

When hydrogen gas or corrosion products are generated inside the pores, it takes time for the solution to penetrate to the center of the specimen. Therefore, as the immersion time passes, it is expected that the changes in mechanical properties will diminish, especially after a certain period has elapsed.
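As a back-of-the-envelope consequence of the first reaction, each mole of dissolved Mg releases one mole of hydrogen gas. The sketch below estimates the corresponding gas volume under ideal-gas assumptions; the amount of dissolved Mg is chosen purely for illustration and is not a measured value from this study.

```python
# Hydrogen evolution estimate from the stoichiometry Mg + 2H2O -> Mg(OH)2 + H2.
M_MG = 24.305    # molar mass of Mg, g/mol
V_MOLAR = 25.4   # ideal-gas molar volume (L/mol) at ~37 C and 1 atm

def hydrogen_volume_ml(mg_dissolved_g):
    return mg_dissolved_g / M_MG * V_MOLAR * 1000.0  # mL of H2 released

print(f"{hydrogen_volume_ml(0.10):.0f} mL of H2 per 0.10 g of dissolved Mg")
```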
At the outset, the immersion phase is crucial, as it sets the stage for the corrosion process. Here, the Mg in the Ti-Mg composite swiftly interacts with the saline solution, leading to the creation of Mg(OH)2 and hydrogen gas [56,57]. These reactions initiate the formation of a porous Mg(OH)2 layer on the surface, which initially provides some protection but gradually transforms into more soluble MgCl2 in the presence of chloride ions from the saline solution. As immersion continues, the protective Mg(OH)2 layer gradually breaks down, releasing Mg ions into the solution. The generated hydrogen gas can form bubbles at the interface, further promoting Mg degradation [19,58]. This stage is characterized by a significant reduction in mechanical properties as the Mg content diminishes. Over time, the corrosion rate slows down as the more easily accessible Mg is depleted and the remaining Mg is less exposed, lying deeper within the composite. The changes in mechanical properties, mainly compressive strength, therefore decrease at this stage [18]. Our immersion tests yielded significant findings. Ti-30vol%Mg maintained its compressive strength above that of cortical bone for a considerable period of up to 10 days. This behavior suggests that the composite can retain its mechanical integrity for a sufficient duration, potentially supporting the initial bone healing processes. The release of Mg ions plays a critical role in the biodegradation process. Mg ions are known to be biocompatible and can enhance osteogenesis. However, the localized increase in pH due to the formation of OH⁻ ions needs to be managed to prevent adverse effects on the surrounding tissues.
Figure 9b illustrates the variation in Young's modulus during immersion for the TM50 (Ti-30Mg) and TM100 (Ti-30Mg) specimens. Here, the reduction in Young's modulus was evaluated with respect to the specimen at 0 days, before immersion, which was set to 100%. It can be observed that Young's modulus decreases with increasing immersion time for both specimens. For TM50 (Ti-30Mg), Young's modulus decreased by 40% after 6 days of immersion compared to before immersion, corresponding to a modulus of approximately 49 GPa, given that the modulus of Ti-30Mg was measured to be 81 GPa. In the case of TM100 (Ti-30Mg), the modulus decreased overall by 31% after 10 days of immersion, indicating a modulus of approximately 63 GPa on day 10 of immersion, given that the modulus of TM100 measured by the ultrasonic pulse method was 91 GPa. Increasing the sintering pressure was observed to improve the sinterability and mechanical properties. However, in the cases where the specimens maintained strengths above that of cortical bone (6 days for TM50 and 10 days for TM100), the modulus after extended immersion was higher for TM100 than for TM50. To suppress stress shielding, it is ideal for the modulus after immersion to be as small as possible and closer to that of cortical bone. By adjusting multiple conditions, such as increasing the sintering pressure to enhance sinterability and increasing the Mg content to reduce the modulus, it should be possible to produce Ti-Mg composite materials with higher strength and lower modulus.
Conclusions
Uniform Ti-Mg composite materials were fabricated by mixing pure Ti powder with low-modulus, biodegradable Mg powder and consolidating the mixtures by spark plasma sintering. The powder mixing process was evaluated, and the influence of Mg addition and sintering pressure on the properties of the Ti-Mg composite materials was investigated systematically.
1. Mg was uniformly dispersed in Ti-30Mg and Ti-50Mg, confirming the effectiveness of powder mixing by ethanol-assisted ball milling for producing uniform Ti-Mg composite materials.
2. The Young's moduli of pure Ti, Ti-30Mg, and Ti-50Mg were 92 GPa, 81 GPa, and 75 GPa, respectively, indicating that the Young's modulus of the Ti-Mg composite materials decreases with Mg addition.
3. TM50 (Ti-30Mg) showed a decrease in Young's modulus from 81 GPa to 49 GPa after 6 days of immersion, while TM100 (Ti-30Mg) showed a decline from 91 GPa to 63 GPa after 10 days of immersion, indicating a reduction of the stress shielding phenomenon.
4. During immersion in physiological saline, TM50 maintained a compressive strength above that of cortical bone for 6 days and TM100 for 10 days. This confirms that the mechanical integrity of Ti-30vol%Mg improves with increasing sintering pressure.
Figure 6. (a) Changes in UCS and yield strength with variations in sintering pressure. (b) Relationship between sintering pressure and Young's modulus.

Figure 7. (a) A FESEM micrograph of the surface of the Ti-30Mg sample, (b) a magnified micrograph for pore observation after 1 day of immersion in saline solution, (c) element distribution maps of O, Mg, and Ti by EDS.

Figure 8. Compressive stress-strain curves of Ti-30Mg samples with (a) 50 MPa and (c) 100 MPa sintering pressures immersed in saline for various times up to 10 days. Changes in UCS and fracture strain of Ti-30Mg with (b) 50 MPa and (d) 100 MPa sintering pressures with increasing immersion time in saline for up to 10 days.

Figure 9. (a) Relationship between UCS and immersion time of TM100 (Ti-30Mg) and TM50 (Ti-30Mg). (b) Changes in the ratio of Young's modulus to the initial Young's modulus with increasing immersion time for TM100 (Ti-30Mg) and TM50 (Ti-30Mg).
Table 1. Comparison between spark plasma sintering (SPS) and liquid Mg-infiltration methods.

Table 2. Specimen conditions for Mg content and sintering pressure.
Table 3. Porosity data after sample preparation by SPS. | 11,580.8 | 2024-07-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Medicine"
] |
Feasibility of Using Laser Imaging Detection and Ranging Technology for Contactless 3D Body Scanning and Anthropometric Assessment of Athletes
The scope of this pilot study was to assess the feasibility of using the laser imaging detection and ranging (LiDAR) technology for contactless 3D body scanning of sports athletes and deriving anthropometric measurements of the lower limbs using available software. An Apple iPad Pro 3rd Generation with embedded LiDAR technology was used in combination with the iOS application Polycam. The effects of stance width, clothing, background, lighting, scan distance and measurement speed were initially assessed by scanning the lower limbs of one test person multiple times. Following these tests, the lower limbs of 12 male and 10 female participants were scanned. The resulting scans of the lower limbs were complete for half of the participants and categorized as good in quality, while the other scans were either distorted or presented missing data around the shank and/or the thigh. Bland–Altman plots between the LiDAR-based and manual anthropometric measures showed good agreement, with the coefficient of determination from correlation analysis being R2 = 0.901 for thigh length and R2 = 0.830 for shank length, respectively. The outcome of this pilot study is considered promising, and a further refinement of the proposed scanning protocol and advancement of available software for 3D reconstruction are recommended to exploit the full potential of the LiDAR technology for the contactless anthropometric assessment of athletes.
Introduction
Subject-specific anthropometric measurements are needed in a broad context across ergonomics, engineering, design research, health and sports sciences. Traditionally, subject-specific body measurements are obtained by hand according to the standards of the International Society for the Advancement of Kinanthropometry (ISAK) and used for, e.g., designing new consumer goods, assessing patient characteristics or monitoring training progress [1].
In recent years, automatic 3D scanners have provided new means to capture body surface data of individual subjects, and these are contactless, with high repeatability and speed [2].Particularly, a new light detection and ranging (LiDAR) sensor for depth sensing was introduced in 2020 by Apple (Apple Inc., Cupertino, CA, USA) into their high-end mobile devices, which has opened the way for convenient 3D scanning outside the laboratory.The LiDAR technology works by emitting arrays of infrared light pulses from a series of transmitters into the environment, which are reflected from the surface of the target object and re-captured by integrated photodetector sensors.The sensors detect the frequency of the reflected light, which is then used to calculate travel time and distance to the target surface [2].The Apple LiDAR technology has been adopted for, e.g., forensic 3D documentation [3], large animal assessments in agriculture [4] and to estimate tree diameter in the context of forest management [5].Furthermore, 3D scanning using commercial mobile devices has proven useful for preoperative and postoperative analysis of facial structures by plastic surgeons [6], as well as the estimation of body segment parameters for biomechanical analysis [7].Yet, the potential of the Apple LiDAR technology for 3D body scanning and anthropometric assessment of sports athletes has not yet been demonstrated.
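As a minimal illustration of the direct time-of-flight principle underlying such depth sensors, the distance to the target follows from half the round-trip travel time of the emitted light pulse multiplied by the speed of light. The snippet below is a conceptual sketch only and does not reflect any specific Apple API; the travel time is a hypothetical value.

```python
# Direct time-of-flight distance estimate: half the round trip at the speed of light.
C = 299_792_458.0  # speed of light (m/s); propagation in air treated as in vacuum

def tof_distance(round_trip_time_s):
    return C * round_trip_time_s / 2.0  # divide by 2 for the out-and-back path

print(f"{tof_distance(5.0e-9):.3f} m")  # a 5 ns round trip corresponds to ~0.75 m
```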
The goal of this pilot study was to assess the feasibility of using the Apple LiDAR technology for contactless anthropometric measurement of strength-training athletes outside the dedicated laboratory, and of extracting anthropometric measures from the 3D data using available iOS software (Version 15.5). It was hypothesized that the LiDAR technology allows for the contactless measurement of shank and thigh length based on 3D body surface scanning in a training-specific setting, using manual measurements according to ISAK standards as reference values.
Materials and Methods
Ethical approval for this study was given by the regional ethics committee (Kantonale Ethikkommission Bern, Nr: 2021-00403).A total of 22 healthy, recreationally active subjects (n = 12 M/10 F, age = 29 ± 4.7, height = 1.64 ± 0.38 m; body mass = 76 ± 12 kg) gave written informed consent to participate in this study.An Apple iPad Pro 3rd Generation with embedded LiDAR technology was used in combination with the iOS application Polycam for 3D scanning, visualization and analysis (https://poly.cam, accessed on 1 September 2022).
The effects of leg distance, clothing, background, lighting, scan distance and measurement speed were initially assessed by scanning the lower limbs of one test person multiple times. Based on this initial testing, the measurement protocol for scanning the lower limbs was defined as follows: (1) uniform background and lighting, (2) participant wearing tight, single-coloured clothing or only presenting naked skin, (3) participant standing in a T-position with a standardized leg distance of 25-30% body height, (4) examiner moving at constant, moderate speed on a circular path around the target, (5) keeping the iPad as stable as possible, perpendicular to the plane of motion and (6) keeping a constant distance of 50-100 cm from the target.
Each subject was scanned according to the above protocol, whereby all scans were performed by one examiner.The same examiner also performed the test scans in order to familiarize herself with the technology.Additionally, anthropometric data of all participants, including size, weight, thigh and shank lengths and circumferences, were manually measured by a trained practitioner according to ISAK standards.Each measure was taken twice to be averaged.Data acquisition was not randomised.For all participants, the LiDAR scan was firstly obtained, followed by manual anthropometric measurements.
The 3D models from LiDAR scanning were visually assessed and analysed using the iOS application Polycam. Measurements of thigh and shank length were extracted as study outcome parameters from the 3D models using the integrated linear measuring tool. The 3D models were further categorized into 'poor', 'moderate' and 'good' depending on the completeness of the body surface data. Particularly, 3D models were categorized as 'poor' if the extraction of neither thigh nor shank length was possible and as 'moderate' if only the thigh or the shank length could be individually extracted.
The length measures from the left and right leg of all participants were combined and statistically compared between the manual and the LiDAR-based data. The comparison was limited to thigh and shank length measurements due to the constraints of the Polycam software (Version 3.2.7), which only allowed for linear measurements to be extracted. Prior to statistical analysis, data were checked for normal distribution using the Shapiro-Wilk test. Student's paired t-tests were then used to determine whether the differences in length measures between the manual and the LiDAR-based measures were statistically significant, with the level of significance set at p < 0.05. Furthermore, the correlation and agreement between the length measures from the manual versus the LiDAR-based assessment were analysed by calculating the coefficient of determination (R2) and visualizing the data using Bland-Altman plots, with the confidence interval set at 95% limits of agreement [8].
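A hedged sketch of this statistical workflow (paired t-test, coefficient of determination, and Bland-Altman bias with 95% limits of agreement) is shown below using SciPy. The arrays are hypothetical placeholder values, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder paired measurements (cm); replace with real manual and LiDAR-based lengths.
manual = np.array([42.1, 44.0, 40.5, 45.2, 43.3, 41.8])
lidar = np.array([41.5, 44.6, 40.1, 45.9, 42.8, 42.3])

t_stat, p_value = stats.ttest_rel(manual, lidar)   # paired t-test
r, _ = stats.pearsonr(manual, lidar)               # Pearson correlation
r_squared = r ** 2

diff = lidar - manual
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # 95% limits of agreement half-width

print(f"paired t-test p = {p_value:.3f}, R^2 = {r_squared:.3f}")
print(f"Bland-Altman bias = {bias:.2f} cm, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] cm")
```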
Results
The data of all 22 participants were included in the evaluation. The average thigh and shank lengths of both legs from the manual and the LiDAR-based assessments, including statistical results, are given in Table 1. The scans of six participants were either too distorted or contained missing data, so that neither shank nor thigh lengths could be extracted (i.e., poor scans), and the scans of another five participants only allowed thigh length measures to be derived (i.e., moderate scans). The scans of eleven participants were categorized as good, allowing for the extraction of shank and thigh lengths of both legs from the 3D point clouds. Consequently, this categorization yielded 16 thigh length measurements (from the 'moderate' and 'good' groups, n = 5 + 11) and 11 shank length measurements (from the 'good' group, n = 11). A representative sample of LiDAR scans is given in Figure 1, and the results from the correlation analysis and Bland-Altman plots are shown in Figure 2.
Table 1. Average thigh and shank lengths of each participant from manual assessment compared to LiDAR-based assessment, with p-values from Student's paired t-test, the coefficient of determination (R2) from correlation analysis, as well as the bias, upper and lower limits from the Bland-Altman analysis, respectively. For each participant (i.e., n = 16 thigh, n = 11 shank), the length measures of the left and right leg were combined for statistical analysis.
Discussion
Given the growing popularity of LiDAR technology as a consumer electronic device, the goal of this pilot study was to provide guidelines and praxis-oriented insights into the potential of the Apple LiDAR technology for convenient anthropometric assessment in a sport-specific setting.Based on initial testing, the measurement protocol was defined to ensure uniform background and lighting, with the examiner moving at a constant moderate speed around the subject and keeping the iPad stable and perpendicular to the plane of motion at a constant distance to the target.Nevertheless, half of the resulting scans were only moderate or poor in quality, with distortions or missing data especially between the legs and closer to the floor (Figure 1).
Inconsistent lighting between the legs, as well as around the shank and ankle close to the floor, may have contributed to the poor 3D reconstruction in these areas. Unfortunately, no decisive conclusion could be drawn regarding the best choice of garment, colour and/or bare skin to improve scan quality. In similar work on facial scanning, it was also found that areas with inconsistent lighting and increased specular reflectivity (e.g., nose and chin) led to higher inaccuracies [6]. Likewise, the influence of skin type on scan outcome was inconsistent in previous work [6]. Further experiments with additional adjustments to the present scan protocol are thus highly recommended, including other software packages for reconstruction.
The quality of scans using LiDAR technology is largely dependent on the 3D reconstruction capability of the chosen software. In the present work, the iOS application Polycam was used for 3D scanning and data visualization. Unfortunately, the integrated software tool only allowed for the extraction of linear measurements from the 3D point cloud, i.e., shank and thigh lengths but not circumferences. There are a few fitness-specific stationary 3D scanners available, as well as mobile applications to estimate body dimensions based on RGB images from different views [2]. Yet, to the authors' knowledge, there is no software available for anthropometric assessment based on LiDAR data. Further software development is highly encouraged and may help to improve analysis results.
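Since the available software only supports linear measurements, the sketch below shows one possible way a segment length could be derived from a 3D point cloud once two landmark points (e.g., knee-joint and ankle markers) have been identified. The landmark coordinates are hypothetical, and this is not the workflow implemented in Polycam.

```python
# Minimal sketch: a segment length as the Euclidean distance between two
# manually identified landmark points in a 3D point cloud. The coordinates
# below are hypothetical; real landmarks would be picked from the scan.
import numpy as np

# Hypothetical landmark coordinates in metres (x, y, z)
knee_joint = np.array([0.12, 0.45, 0.88])
ankle      = np.array([0.10, 0.41, 0.47])

shank_length_m = np.linalg.norm(knee_joint - ankle)
print(f"Shank length: {shank_length_m * 100:.1f} cm")
```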
This pilot study was part of a larger study to improve the safety and efficiency of strength training by means of mobile technology [9,10]. The present results indicate a degree of reliability (Table 1), but further validation studies with a repeated measurement design and larger sample sizes are needed to draw decisive conclusions regarding the validity and reliability of LiDAR technology for contactless anthropometric assessment. The potential advantages of using 3D scanning technology compared to manual anthropometric assessment are the measurement speed, with a scan taking less than 1 min, and the possibility for a layperson to obtain an accurate measurement of body dimensions without prior training. Additionally, larger and more diverse body-scanning datasets may become publicly available, which, together with advances in deep learning algorithms and optimization techniques, could further improve the monitoring of physical training and rehabilitation progress.
Conclusions
Advancements in LiDAR technology, as embedded in mobile devices, are opening the doors for convenient 3D body surface scanning and anthropometric assessment in a sport-specific setting. Despite challenges with inconsistent lighting across body parts and remaining software limitations, the outcomes of this pilot study are considered promising. Further advancements of the proposed scanning protocol and available software for 3D reconstruction are highly recommended to exploit the full potential of the LiDAR technology. For validation purposes, future studies should consider a repeated measurement design with larger sample sizes to substantiate the present preliminary results with scientific rigor. The ability to conveniently assess subject-specific body dimensions using mobile devices outside the dedicated laboratory is expected to help in the monitoring of training and rehabilitation outcomes to the benefit of athletes and patients alike.
* indicates a significant difference between ISAK and LiDAR-based assessment with p < 0.05.
Figure 1 .
Figure 1. Representative sample of LiDAR scans of the lower limbs from frontal (top row) and side view (bottom row). (a) Good scan of a female participant, (b) good scan of a male participant, (c) incomplete scan of a male participant, (d) incomplete and distorted scan of a female participant. Visualisation and stillshots of the 3D point clouds were done using the iOS application Polycam.
Figure 2 .
Figure 2. Correlation (top) with coefficient of determination (R 2 ) and Bland-Altman plots (bottom) with the confidence interval at 95% limits of agreement between LiDAR-based and manual anthropometric measures of thigh length (left) and shank length (right). Length measures from the left and right leg of each participant (i.e., n = 16 thigh, n = 11 shank) were taken into account.
MAGIC detection of GRB 201216C at $z=1.1$
Gamma-ray bursts (GRBs) are explosive transient events occurring at cosmological distances, releasing a large amount of energy as electromagnetic radiation over several energy bands. We report the detection of the long GRB~201216C by the MAGIC telescopes. The source is located at $z=1.1$ and thus it is the farthest one detected at very high energies. The emission above \SI{70}{\GeV} of GRB~201216C is modelled together with multi-wavelength data within a synchrotron and synchrotron-self Compton (SSC) scenario. We find that SSC can explain the broadband data well from the optical to the very-high-energy band. For the late-time radio data, a different component is needed to account for the observed emission. Differently from previous GRBs detected in the very-high-energy range, the model for GRB~201216C strongly favors a wind-like medium. The model parameters have values similar to those found in past studies of the afterglows of GRBs detected up to GeV energies.
INTRODUCTION
Gamma-ray bursts (GRBs) are sources exhibiting bright electromagnetic emission in two phases called prompt and afterglow. The former peaks at hard X-ray and soft gamma-ray energies, lasting between a fraction of a second and hundreds of seconds. In particular, the prompt temporal behavior shows short-time-scale variability down to milliseconds. Although its origin is not completely understood (for a review, see Kumar & Zhang 2015), recent evidence points to a synchrotron origin (Zhao et al. 2014; Zhang et al. 2016; Oganesyan et al. 2017, 2018, 2019). The afterglow radiation partly overlaps with the prompt and evolves over longer timescales, up to several months after the GRB onset. The emission in this phase decays smoothly with time as a power law; it can be detected in several energy bands, from radio up to gamma rays, and is interpreted as synchrotron and inverse Compton emission mostly from electrons accelerated in the external shock (Sari et al. 1998; Panaitescu & Kumar 2000).
GRBs are classified as short and long depending on whether their duration in terms of T90, the time interval containing 90% of the total photon counts, is shorter or longer than two seconds. While this observational definition is widely adopted, a more physical classification comes from the progenitor system at the origin of the bursts. In this context, short GRBs are thought to be produced as the result of the merger of binary systems of compact objects involving at least one neutron star (NS). The only confirmation of such an association is the short GRB 170817A, which was detected in coincidence with a gravitational wave signal generated by a NS-NS merger (Abbott et al. 2017; Goldstein et al. 2017). On the other hand, long GRBs are often associated with supernovae of type Ib/c, when detectable (e.g. if the redshift is ≲ 1). The supernova emission peaks several days after the GRB onset, when it outshines the decaying optical afterglow of the burst itself (Woosley & Bloom 2006).
The afterglow phase of GRBs has been studied in detail over several wavelength bands thanks to numerous instruments both ground-based (covering the radio and optical wavelengths, and VHE gamma rays) and space-based (detecting X-rays and gamma rays). Such observations have made it possible to trace the origin of the multiwavelength afterglow emission to the synchrotron process (Mészáros 2002; Piran 2004). Such radiation is mostly produced by electrons accelerated at the so-called forward shock, when the GRB jet decelerates by interacting with the interstellar or circumstellar medium. Until recently, the afterglow was detected up to GeV energies by the Fermi-LAT instrument, with some hints of a possible tail extending to higher energies (Ackermann et al. 2014), where imaging atmospheric Cherenkov telescopes (IACTs) are more sensitive. The presence of emission in the very-high-energy (VHE, > 100 GeV) range in the afterglow phase of GRBs was predicted, even before the operation of Fermi-LAT and IACTs, in several theoretical models involving either leptonic or hadronic processes. A breakthrough was achieved in 2019, when the detection of VHE emission in the afterglow of three long GRBs was reported. The MAGIC collaboration first reported the detection of GRB 190114C (Mirzoyan et al. 2019; MAGIC Collaboration et al. 2019a,b), followed by GRB 180720B and GRB 190829A detected by the H.E.S.S. telescopes (Abdalla et al. 2019; H. E. S. S. Collaboration et al. 2021). The detection of such sources with IACTs confirmed the presence of emission in the VHE range. In particular, the spectral and temporal analysis of GRB 190114C showed that such emission is associated with a component, separate from the synchrotron one, well explained by synchrotron-self Compton (SSC) radiation from electrons accelerated at the forward shock. A similar conclusion can be drawn for GRB 180720B (see e.g. Wang et al. 2019), even though the available multi-wavelength data were not sufficient to perform a proper modeling. An unusual and controversial interpretation was put forward in the case of GRB 190829A. In H. E. S. S. Collaboration et al. (2021) the authors suggested that the emission could be attributed to a single synchrotron component, which extends over nine orders of magnitude in energy up to the TeV domain. This requires an acceleration mechanism that is able to overcome the so-called burn-off limit for the energy of synchrotron photons (de Jager et al. 1996; Piran & Nakar 2010).
The studies on this small sample of events show that the understanding of the afterglow phase in the VHE range is far from complete. Currently only a few events have a detection at VHE (or evidence for one, as in GRB 160821B, see Acciari et al. 2021), and different interpretations have been proposed. However, the SSC scenario proved to be flexible and applicable to all three GRBs detected at VHE. In order to investigate whether such an interpretation may be universal in explaining VHE afterglows, we present here the detection of the long GRB 201216C with the MAGIC telescopes. We use the available multi-wavelength data to model the broadband emission in the SSC scenario. We find that the SSC model provides a satisfactory interpretation of the MAGIC light curve and spectrum.
The paper is organized as follows. In Section 2, we summarise all the observations available for GRB 201216C. In Section 3 we discuss the MAGIC observations and data analysis. The results are presented in Section 4. In Section 5 we present the analysis of optical observations taken with the Liverpool Telescope and the other multiwavelength observations that we use to model the emission with a synchrotron and SSC scenario (discussed in Section 6). Finally, in Section 7 we summarise and discuss our findings.
OBSERVATIONS OF GRB 201216C
GRB 201216C was detected by Swift-BAT on December 16th 2020 at 23:07:31 UT (Beardmore et al. 2020), hereafter T0. The burst was also detected by other space-based instruments including Fermi-GBM, ASTROSAT and Konus-Wind. The light curve measured by Swift-BAT shows a multi-peaked structure from T0 − 16 s to T0 + 64 s, with a main peak occurring at ∼ T0 + 20 s.
Observations at different times by the VLT, FRAM-ORM and the Liverpool Telescope confirmed the presence of the optical afterglow. The position of the optical counterpart is consistent with the refined position provided by Swift-XRT. VLT X-Shooter spectroscopy at ∼ T0 + 2.4 hours, covering the wavelength range 3200-22 000 Å, allowed the measurement of the redshift, estimated to be z = 1.1. Based on the VLT photometry, the steep photon index of the optical data suggests significant extinction, making GRB 201216C a dark GRB (Vielfaure et al. 2020).
The afterglow was also detected in the X-ray band by Swift-XRT. The X-ray afterglow decay can be described as a power law with a temporal decay index of 1.75 ± 0.09.
At higher energies, the burst was observed by HAWC starting at T0 + 100 s and up to T0 + 3600 s, without a significant detection (Ayala 2020).
Detection of radio emission was reported by Rhodes et al. (2022) from 5 to 56 days after the burst, from 1 to 10 GHz. The radio flux at the time of detection is already decaying, although at a slow rate, except for the flux at 1 GHz, which is increasing between 30 and 40 days.
Finally, the burst was observed by the MAGIC telescopes in the VHE range.Details of such observations are given in the following section.
MAGIC OBSERVATION AND DATA ANALYSIS
MAGIC is a stereoscopic system of two 17-m diameter IACTs situated at the Observatorio del Roque de los Muchachos (ORM), La Palma, Canary Islands. For short observations, such as the ones usually performed for GRBs, the integral sensitivity achieved by MAGIC in 20 min is about 20% of the Crab Nebula flux above 105 GeV for low zenith angles (see Aleksić et al. 2016 for details on the telescopes' performance).
MAGIC received the alert for GRB 201216C at 23:07:51 UT (T0 + 20 s) from the Swift-BAT instrument. The MAGIC telescopes automatically reacted to the alert and, after a fast movement, they reached the target at 23:08:27 UT (T0 + 56 s). The observation was carried out in the so-called wobble mode around the coordinates provided by Swift-BAT, RA: 01h05m26s Dec: +16d32m12s (J2000). In local coordinates, the observation started at a zenith angle of 37.1° and lasted up to 01:30:08 UT, reaching a zenith angle of 68.3°. The weather conditions were very good and stable during all the data taking, with a median atmospheric transmission at 9 km a.g.l. from LIDAR measurements of 0.96, with 1 being the transmission of a clear atmosphere (see Fruck et al. 2022; Schmuckermaier et al. 2023 for a description of the LIDAR instrument and the correction of VHE data). The observation was performed under dark conditions.
MAGIC continued the observation on the second night for 4.1 h from T0 + 73.8 ks. The observational conditions were optimal, with an average transmission above 0.9 at 9 km and dark conditions. The zenith angle changed from 17.0° to 46.3°, with culmination at 11.7°. The data on the second night were taken with the analog trigger system Sum-Trigger-II (described in Dazzi et al. 2021), which was not available during the first night of data taking. Sum-Trigger-II improves the sensitivity of MAGIC in the low-energy range below ∼ 100 GeV. In particular, the trigger efficiency, compared to the standard digital trigger, is two times larger for Sum-Trigger-II at 40 GeV.
The data analysis is performed using the standard MAGIC Reconstruction Software (MARS; Zanin et al. 2013). In order to retain as many low-energy events as possible, an algorithm (Shayduk 2013; MAGIC Collaboration et al. 2020) in which the calibration and the image cleaning are performed in an iterative procedure was adopted. This image cleaning was applied to the GRB data, gamma-ray Monte Carlo data, and to a data sample taken on sky regions without any gamma-ray emission (used for the training of the particle identification algorithm). Data analysis beyond this level is performed following the prescriptions described in Aleksić et al. (2016). The usage of Sum-Trigger-II, combined with the optimized cleaning algorithm, allows for a collection area an order of magnitude larger around 20 GeV when compared with the one obtained with the standard digital trigger.
RESULTS FROM THE VERY-HIGH-ENERGY DATA
In this section we show the results of the analysis performed on the data collected by MAGIC on GRB 201216C.
Detection and sky map
Fig. 1 shows the distribution of the squared angular distance, θ², for the GRB and background events (red circles and blue squares, respectively) for the first 20 minutes of data (from T0 + 56 s to T0 + 1224 s). The significance of the VHE gamma-ray signal from GRB 201216C is 6.0 σ, following the prescription of Li & Ma (1983), confirming the significant detection of the GRB. For the computation of the significance, we apply cuts on θ² and hadronness. The former is the squared angular distance between the reconstructed direction of the events and the nominal position of the source, taken from Swift-BAT for GRB 201216C. The latter is a parameter which discriminates between gamma-like and background-like events, with gamma rays having hadronness values close to zero. The cuts on θ² and hadronness were optimized for a source with an intrinsic power-law spectrum with index −2, later corrected considering the absorption by the extragalactic background light (EBL) according to the model by Domínguez et al. (2011), hereafter D11. For the signal significance evaluation, the intrinsic spectral index for the cut optimization was chosen to be similar to the one found in the other GRBs detected at VHE, i.e. without any prior knowledge of the actual value for this specific GRB (see Section 4.2). The corresponding energy threshold of the optimized cuts is 80 GeV, defined by the peak of the energy distribution of the surviving simulated events.
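As a reference for the significance quoted above, the sketch below evaluates equation 17 of Li & Ma (1983) for illustrative ON/OFF counts; the count values and the OFF-normalisation alpha are placeholders, not the actual numbers behind the 6.0 σ result.

```python
# Illustrative evaluation of the Li & Ma (1983) significance (their eq. 17).
# N_on, N_off and alpha below are placeholder values, not the GRB data.
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Significance of an ON/OFF excess; alpha is the ON/OFF exposure ratio."""
    term_on = n_on * np.log(((1 + alpha) / alpha) * (n_on / (n_on + n_off)))
    term_off = n_off * np.log((1 + alpha) * (n_off / (n_on + n_off)))
    return np.sqrt(2.0 * (term_on + term_off))

print(li_ma_significance(n_on=130, n_off=270, alpha=1/3))  # ~3.4 sigma for these toy counts
```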
Fig. 2 shows the test-statistics map in sky coordinates for the first 20 minutes of data. The same event cuts as for Fig. 1 are used. Our test statistic is equation 17 of Li & Ma (1983), applied to a smoothed and modeled background estimation. Its null-hypothesis distribution mostly resembles a Gaussian function, but in general can have a somewhat different shape or width. In the sky map, the peak position around the center is consistent with the one reported by Swift-XRT within the statistical error. The peak significance is above 6 σ, which corroborates the detection.
Average spectrum
The average spectrum for the first 20 minutes of observation is shown in Fig. 3. The data points are the result of an unfolding procedure following the prescription of the Bertero method described in Albert et al. (2007). The best fit to the points is instead provided by the forward folding method (Piron et al. 2001). For the event-cut optimization, the adopted spectrum is an intrinsic power law with an index of −3, which is close to the final estimated value (see below), later attenuated by the EBL assuming the model D11 and z = 1.1. Because of the strong EBL absorption, the observed spectrum has a steep power-law index of −5.32 ± 0.53 (stat. only) above 50 GeV. The intrinsic (EBL-corrected) spectrum is consistent with a simple power-law function and shows a harder index of −3.15 ± 0.70 (stat. only). The normalization factor at 100 GeV is (2.03 ± 0.39) × 10^−8 TeV^−1 cm^−2 s^−1 (stat. only). The highest energy bin around 200 GeV is a 2σ upper limit due to a relative flux error of about 100%.
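For clarity, the spectral model underlying the numbers quoted above can be written as an intrinsic power law attenuated by the EBL; the notation below is ours, with the normalisation energy of 100 GeV taken from the text:

\[
\left.\frac{dN}{dE}\right|_{\rm obs} = f_{0}\left(\frac{E}{100~\mathrm{GeV}}\right)^{\Gamma_{\rm int}} e^{-\tau(E,\,z)},
\]

where Γ_int ≃ −3.15 and f0 ≃ 2.03 × 10^−8 TeV^−1 cm^−2 s^−1 are the fitted intrinsic index and normalization, and τ(E, z = 1.1) is the optical depth from the D11 EBL model.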
The obtained spectrum suffers from systematic uncertainties coming from different sources. For such a steep observed spectrum, the uncertainty of the energy scale significantly affects the computed fluxes. We estimated the flux variation by shifting the light scale in the simulations during the forward folding procedure, assuming the EBL model D11. We adopted a ±15% shift as prescribed in Aleksić et al. (2016). The results are shown in Table 1. When the energy scale is shifted by −15%, the observed spectrum is shifted to the low-energy side, resulting in a lower flux. The spectral index of the intrinsic spectrum is softened due to the smaller attenuation by the EBL at lower energies. In the case of the +15% shift, the flux and the spectral index are shifted in the opposite direction. The obtained power-law index ranges from −3.19 in the −15% case to −2.17 in the +15% case. The normalization factor instead varies by a factor of 3. The spectral uncertainty originating from the energy scale is therefore significantly larger than the statistical errors.
The VHE flux of GRB 201216C is also affected by the choice among available EBL models. At such a high redshift, z = 1.1, EBL models show large differences in the predicted attenuation factors. We compared the spectra calculated with four EBL models, including D11, with the same unfolding method as the one used for Fig. 3. The three models besides D11 are Franceschini et al. (2008), Finke et al. (2010), and Gilmore et al. (2012) (hereafter F08, FI10, and G12, respectively). The results are shown in Table 1. The power-law index ranges from −3.19 in the F08 case to −2.45 in the G12 case, and the normalization factor varies by a factor of 2. Also in this case, the systematic uncertainty on the parameters due to the EBL models is larger than or equal to the statistical errors.
At z = 1.1, D11 and F08 have similar attenuation values below 200 GeV, which is the maximum energy in our analysis. The attenuation discrepancy between D11 and G12 is a factor of 2 at 100 GeV and a factor of 5 at 200 GeV. Thus, the intrinsic spectrum has a larger normalization and is harder in the G12 case than in the D11 and F08 cases, as seen in Table 1.
Light Curve
The VHE energy-flux light curve between 70 GeV and 200 GeV is shown in Fig. 4. The energy flux of each time bin is obtained by integrating the EBL-corrected forward-folded spectrum with the D11 model, so that the spectral variability with time is taken into account. For each time bin, the event cut is based on the signal survival fraction of simulated events in order to increase the statistics in such short time bins. The corresponding energy threshold is around 70 GeV for all the time bins. The light curve is compatible with a power-law decay. The best-fit decay index up to the 5th bin, excluding upper limits, is −0.62 ± 0.04.
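Written out, the power-law decay referred to above corresponds to a flux evolution of the form below; the reference time t0 is arbitrary and the notation is ours rather than taken from the original:

\[
F_{70-200~\mathrm{GeV}}(t) \;\propto\; \left(\frac{t}{t_{0}}\right)^{-\alpha}, \qquad \alpha = 0.62 \pm 0.04 .
\]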
Upper limits are calculated for bins where the relative flux errors are larger than 50%, using the method described in Rolke & López (2001). The excess-count upper limit at 95% confidence level is calculated for each such bin and converted into energy-flux units by assuming a power-law spectrum with an index of −3 attenuated with the D11 model.
The systematic uncertainties considered in Sec. 4.2 also affect the flux points in the light curve to a similar extent. However, since the spectral shape is not expected to change significantly during the short period of each bin of the light curve, the relative flux error is similar among all the bins. Therefore, the temporal decay index should be independent of the uncertainties as long as the spectrum is assumed to be stable. In fact, we could not detect any significant spectral changes larger than the statistical error during the time interval where the light curve was produced.
From the analysis of the data on the second night, which span from T0 + 20.5 h to T0 + 24.6 h, we found no significant excess around the position of the GRB, using both the cut adopted in Sec. 4.1 and a conventional cut optimized for the Crab Nebula.
We calculated the flux upper limit on the second night assuming an intrinsic power-law spectrum with an index of −3 and the EBL model D11. The event cut applied is the same one as used for the light curve on the first night. The upper limit on the EBL-corrected flux is shown in Fig. 4. We note that the VHE luminosity of GRB 201216C implied by the MAGIC observations is fainter (by a factor of 10-30) than the luminosity predicted by Zhang et al. (2023) on the basis of their afterglow modeling at lower frequencies.
MULTI-WAVELENGTH DATA FROM RADIO TO GAMMA-RAY
In this Section we give an overview of the data at lower energies collected from the literature or analysed in this work, and later used for modeling and interpreting the overall emission (see Section 6).
The data are shown in Figs. 5 and 6.
Radio observations
We collected radio observations from Rhodes et al. (2022). These late-time observations have been performed with e-MERLIN, the VLA, and MeerKAT, and cover the ∼ 1-10 GHz frequency range. There are no simultaneous detections available at higher frequencies at the time of the radio detections, which span the temporal window ∼ 5-56 days after T0. Rhodes et al. (2022) argue that the emission detected in the radio band is dominated by a different component as compared to the emission detected at earlier times in the optical band and in X-rays, and they suggest radiation from the cocoon as a possible explanation. In our analysis we also find that the radio data cannot be easily explained as synchrotron radiation from the forward shock driven by the relativistic jet; see the discussion in Section 6. We nevertheless include the radio data in our analysis (star symbols in Figs. 5 and 6), verifying that the estimate of the synchrotron flux from the jet given by the modeling lies below the observed radio emission.
Optical observations: the Liverpool Telescope and VLT
The 2-m fully robotic Liverpool Telescope (LT) autonomously reacted (Guidorzi et al. 2006) to the Swift-BAT alert, and started observations about 178 s after the burst with the IO:O optical camera in the SDSS-r band (Shrestha et al. 2020). The light curve of the GRB 201216C optical counterpart initially displayed a flat behaviour (see Fig. 5) followed by a steepening, as revealed by VLT data gathered at 2.2 hours post-burst (Izzo et al. 2020) and by the non-detection of the afterglow in deeper LT observations at 1 day post-burst. We note that the LT photometry was calibrated using a common set of stars present in the field of view, selected from the APASS catalog.
X-ray observations
Swift-XRT started to collect data on GRB 201216C only 2966.8 s after the burst onset due to an observing constraint (Beardmore et al. 2020). Observations continued up to T0 + 4325.4 s. The unabsorbed X-ray flux integrated in the 0.3-10 keV energy range is shown in Fig. 5 (blue data points). At around 0.1 days XRT and optical data are simultaneously available and we built the spectral energy distribution (SED) around this time (Fig. 6). The XRT spectrum has been derived by analysing data between 8900 and 9300 s with the XSPEC software. Source and background spectra have been built using the automatic analysis tool. We modeled the spectrum with an absorbed power law, accounting for both Galactic and intrinsic metal absorption with the corresponding XSPEC absorption components. The Galactic contribution is fixed to the value N_H,G = 5.04 × 10^20 cm^−2 (Willingale et al. 2013), while the column density in the host galaxy is a free parameter. We find that the best-fit photon index is −1.67 ± 0.19 and the intrinsic column density is N_H = (1.48 ± 0.52) × 10^22 cm^−2. The spectral data, rebinned for plotting purposes and de-absorbed for both Galactic and intrinsic absorption, are shown in Fig. 6 (black crosses).
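Schematically, the X-ray fit described above corresponds to a power law attenuated by Galactic and intrinsic (redshifted) photoelectric absorption. The functional form below is a generic sketch of such a model with our own notation, not a verbatim quote of the analysis setup:

\[
F(E) = K \left(\frac{E}{1~\mathrm{keV}}\right)^{\Gamma} \, e^{-N_{\mathrm{H,G}}\,\sigma(E)} \, e^{-N_{\mathrm{H}}\,\sigma\left[E(1+z)\right]},
\]

where σ(E) is the photoelectric absorption cross-section, Γ = −1.67 ± 0.19 is the best-fit photon index, and N_H,G and N_H are the Galactic and host-galaxy column densities quoted above.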
Gamma-ray observations by Fermi-LAT
Fermi-LAT observations started from T0 + 3500 s and continued until the GRB position was no longer visible (T0 + 5500 s). No signal is detected during this time window. Assuming a photon index of −2, the estimated upper limit in the energy range 0.1-1 GeV is 3 × 10^−10 erg cm^−2 s^−1 (Bissaldi et al. 2020). This upper limit is included in our analysis (orange arrow in Fig. 5).
All the light curves at different frequencies are shown in Fig. 5. The Swift-BAT prompt emission light curve is also included in the figure (grey data points). The BAT flux is integrated in the 15-50 keV energy range and the points are rebinned using a signal-to-noise ratio (SNR) criterion equal to seven. The vertical colored stripes mark the times at which SEDs are built. The SEDs are shown in Fig. 6, where the MAGIC spectrum integrated between 56 s and 1224 s is also shown.
MODELING
In this Section, we discuss the origin of the emission detected by MAGIC and its connection to the afterglow emission at lower energies, from radio to X-rays. In particular, we test an SSC scenario from electrons accelerated at the forward shock. We consider a relativistic jet with initial Lorentz factor Γ0 ≫ 1, opening angle θ_jet, and a top-hat geometry. The (isotropic-equivalent) kinetic energy of the jet E_k is related to E_γ,iso ∼ 6 × 10^53 erg (see Sec. 2) through the efficiency for the production of prompt radiation. The details of the equations adopted to describe the dynamics, the particle acceleration, and the radiative output can be found in Miceli & Nava (2022) and are also reported in Appendix A. We summarize here the general model and the main assumptions.
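The relation between the two energies is not written out explicitly above. A standard definition of the prompt efficiency, consistent with the kinetic energy and the ≃ 60% efficiency quoted later in this section, is the following (our reconstruction, not a verbatim quote of the original formula):

\[
\eta = \frac{E_{\gamma,\rm iso}}{E_{\gamma,\rm iso} + E_{\rm k}}, \qquad \text{i.e.} \qquad E_{\rm k} = \frac{1-\eta}{\eta}\, E_{\gamma,\rm iso},
\]

which gives η ≃ 6/(6+4) ≃ 0.6 for E_γ,iso ∼ 6 × 10^53 erg and the best-fit E_k = 4 × 10^53 erg.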
The jet expands in an ambient medium characterised by a density described by a power-law function n(R) ∝ R^−s. We consider the density to be either constant (s = 0 and n(R) = n0) or shaped by the progenitor stellar wind: n(R) = A R^−2 (s = 2), where A is related to the mass-loss rate Ṁ of the progenitor star and to the velocity of the wind v_w by A = Ṁ/(4π m_p v_w) (m_p is the mass of the proton). We normalize the value of A to a mass-loss rate of 10^−5 solar masses per year and a wind velocity of 10^3 km s^−1: A = 3 × 10^35 A★ cm^−1. We assume that ambient electrons are accelerated at the forward shock into a power-law distribution dN/dγ ∝ γ^−p between γ_min and γ_max. The bulk Lorentz factor of the fluid just behind the shock is assumed to be constant (Γ = Γ0) before the deceleration and described by the solution given by Blandford & McKee (1976) (Γ = Γ_BM) during the deceleration (note that the equation given by Blandford & McKee (1976) describes the Lorentz factor of the shock Γ_sh, which we relate to the Lorentz factor of the fluid using Γ = Γ_sh/√2). The two regimes are smoothly connected to obtain the description of the bulk Lorentz factor of the fluid just behind the shock as a function of the shock radius.
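As a quick numerical check of the wind normalisation quoted above, the sketch below evaluates A = Ṁ/(4π m_p v_w) for the reference mass-loss rate of 10^−5 solar masses per year and wind velocity of 10^3 km s^−1; the physical constants are standard CGS values.

```python
# Quick check of the wind-density normalisation A = Mdot / (4 pi m_p v_w)
# for Mdot = 1e-5 Msun/yr and v_w = 1e3 km/s (reference values in the text).
import math

M_SUN_G = 1.989e33          # solar mass [g]
YEAR_S = 3.156e7            # year [s]
M_PROTON_G = 1.673e-24      # proton mass [g]

mdot = 1e-5 * M_SUN_G / YEAR_S   # mass-loss rate [g/s]
v_wind = 1e3 * 1e5               # wind velocity [cm/s]

A = mdot / (4 * math.pi * M_PROTON_G * v_wind)   # [cm^-1]
print(f"A = {A:.2e} cm^-1")   # ~3e35 cm^-1, matching A = 3e35 * A_star
```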
To infer the particle distribution and the photon spectrum at any time we numerically evolve the equations describing the electron and photon populations, including adiabatic losses, synchrotron emission and self-absorption, inverse Compton emission, and γγ annihilation and pair production. To relate the comoving properties computed by the code to the observed ones, we assume that the emission received at a given observer time is dominated by electrons moving at an angle θ from the line of sight to the observer such that cos θ = β, where β is the velocity of the shocked fluid.
Before presenting the results of the numerical modeling, we discuss some general considerations that can be inferred using the analytic approximations of Granot & Sari (2002). Fig. 5 shows that the optical flux is nearly constant up to at least 5 × 10^−3 d. At later times this behaviour breaks into a steeper temporal decay. We take as reference value for the break time t_b ∼ 10^−2 d. This behavior of the optical light curve can be explained if the break frequency ν_m (i.e. the typical photon energy emitted by electrons with Lorentz factor γ_min) is crossing the band. The nearly constant flux before the crossing time is indicative of a wind-shaped external medium (i.e., s = 2). The preference for a wind-like medium is also supported by the lack of a phase of increasing flux in the MAGIC observations, which start as early as ∼ 60 s after the onset of the prompt emission. Since ν_m ∝ t^−1.5, we expect ν_m ∼ 1 GHz at t ∼ 50 d. Observations at 1.3 GHz do not allow us to constrain the peak time, but we notice that they are consistent with the presence of a peak around 50 days (a zoom on the radio observations can be found in figure 1 of Rhodes et al. 2022 and shows that, considering the errors, the flux is consistent with being constant at about 50 days). The radio SED at this time (see Fig. 6 and also Rhodes et al. 2022) shows that the self-absorption frequency ν_sa must be below 1 GHz. Since ν_sa ∝ t^−3/5, this implies that during the time spanned by the observations ν_sa < ν_m. The fast increase of the 1.3 GHz flux (with temporal index ≳ 5, as reported in Rhodes et al. 2022) however implies that observations at this time are below the self-absorption frequency (otherwise the flux at 1.3 GHz should be constant), constraining ν_sa(50 d) ∼ 1 GHz.
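The extrapolation from the optical crossing time to the radio band can be checked with a one-line scaling. The sketch below assumes ν_m crosses the r′ band (≈ 4.8 × 10^14 Hz) at t_b ≈ 10^−2 d and applies ν_m ∝ t^−3/2; this is our own order-of-magnitude arithmetic rather than a calculation reported in the paper.

```python
# Order-of-magnitude check: nu_m ~ t^(-3/2) from the optical crossing time
# (t_b ~ 1e-2 d at roughly the r'-band frequency) extrapolated to t = 50 d.
nu_optical_hz = 4.8e14   # approximate r'-band frequency [Hz]
t_break_d = 1e-2         # time at which nu_m crosses the optical band [d]
t_late_d = 50.0          # epoch of the radio observations [d]

nu_m_late = nu_optical_hz * (t_late_d / t_break_d) ** (-1.5)
print(f"nu_m(50 d) ~ {nu_m_late / 1e9:.1f} GHz")   # ~1 GHz, as stated in the text
```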
To summarise, the scenario implied by the optical and radio observations invokes a jet expanding in a wind-like density profile and producing a synchrotron spectrum with ν_sa < ν_m, with ν_m crossing the optical band at ∼ 10^−2 d and the 1 GHz frequency at ∼ 50 d. We now check the consistency of this interpretation with the X-ray observations. Imposing ν_m(54 d) = ν_sa(54 d) = 1 GHz and the flux F(ν_sa, 54 d) = 2 × 10^−18 erg cm^−2 s^−1, and using the equations for the break frequencies and flux in a wind-like medium from Granot & Sari (2002), it is possible to derive the values of E_k, ε_B, and A★ as a function of ε_e, for fixed values of p. For p = 2.2 we find E_k,52 ≃ ε_e, ε_B ≃ 2.3 × 10^−6 ε_e^−5 and A★ ≃ 8.8 ε_e^2. This shows that the requirement that the spectrum peaks at ν_sa = 1 GHz at 54 days limits the energy to a low value E_k < 10^52 erg, inconsistent with the large flux detected in the X-ray afterglow. In particular we find that the X-ray band is always above the cooling frequency ν_c for different assumptions on ε_e. Moreover, in this regime, the predicted flux is at least one order of magnitude below the detected X-ray flux. This statement is quite robust, as the flux in this band depends weakly on ε_B, does not depend on A★, and is proportional to E_k ε_e. Pushing ε_e to large values (close to one) improves the situation, at the expense of a very small ε_B, implying a large SSC component. This solution is ruled out by the MAGIC observations.
Being unable to find a scenario that explains all the available data as synchrotron and SSC emission from the forward shock driven by a relativistic jet, we consider the possibility that the late-time radio emission is dominated by a different component, as also concluded by Rhodes et al. (2022), who identify a wider, mildly (or non-) relativistic cocoon as the origin of the radio emission. We then restrict the modeling to the MAGIC, X-ray and optical data, requiring that the flux at 1-10 GHz from the narrow relativistic jet is below the observed flux.
We performed numerical calculations of the expected synchrotron and SSC radiation and their evolution in time for wide ranges of values of the parameters E_k, ε_e, ε_B, p, n(R), θ_jet, and Γ0. The investigated range of values for each parameter is reported in Table 2. The numerical calculations confirm the considerations derived from the analytic estimates. In particular, we find neither a solution for a homogeneous medium nor a complete description of the radio to GeV observations. Assuming a wind-like density profile, we find that the observations can be well described as synchrotron and SSC radiation. In particular, once the requirement to model also the radio observations with forward-shock emission from the relativistic jet is abandoned, the X-ray flux can be explained by increasing the assumed value of the jet energy, which also moves the self-absorption frequency to lower energies. An example of modeling is provided in Figs. 5 and 6, where the observations (corrected for absorption in the optical and X-ray bands) are compared to the light curves and spectra predicted with the following parameters: E_k = 4 × 10^53 erg, ε_e = 0.08, ε_B = 2.5 × 10^−3, A★ = 2.5 × 10^−2, p = 2.1, Γ0 = 180, and θ_jet = 1°; the values are also listed in Table 2. The jet opening angle is broadly constrained by the need not to overproduce the radio flux. The inferred value points to a narrow jet, with an opening angle in the low-value tail of the distributions of inferred jet opening angles for long GRBs (Chen et al. 2020). We note that a similarly small value of the jet opening angle (θ_jet ∼ 0.8°) has been inferred for the TeV GRB 221009A (LHAASO Collaboration et al. 2023). The inferred jet kinetic energy implies an efficiency of the prompt emission of ≃ 60%. In agreement with the steep optical spectrum reported by Vielfaure et al. (2020), this model implies an extinction of 4.6 magnitudes in the r′ band, which is well in excess of the Galactic contribution (E(B−V) = 0.05). As can be seen from Fig. 5, the onset of the deceleration occurs at t_obs ≲ 200 s, where the X-ray and TeV theoretical light curves steepen from an almost flat to a decaying flux.
The steepening of the optical light curve instead occurs at ∼ 10^3 s because, as already commented, it is determined by the ν_m frequency crossing the band. In this interpretation, the frequency ν_m is initially above the optical band (see the brown SED in Fig. 6) and then moves to lower frequencies, crossing the optical band and explaining the steepening in the light curve. X-ray observations lie just above the cooling frequency, but the X-ray spectrum remains harder than expected due to the role of the Klein-Nishina cross-section. We also computed the expected SED averaged between 56 s and 1224 s, where the MAGIC spectrum (see Fig. 3) is computed. The model SED is reported in Fig. 6 (green curve, to be compared with the MAGIC data, green circles). We find that the γγ internal absorption plays a minor role in shaping the spectrum: the flux reduction at 200 GeV is about 25%. In the same figure it is also possible to see the expected location of the maximum energy of synchrotron photons, initially located at 10 GeV at the time of the first SED, and then moving towards lower energies. Even assuming diffusive shock acceleration proceeding at the maximum rate, a synchrotron origin for the photons detected by MAGIC is therefore ruled out.
CONCLUSIONS
In this paper, the MAGIC analysis results on GRB 201216C and their interpretation were presented. The GRB afterglow was observed at early times (∼ 10^2 − 10^3 s) by MAGIC for a total of ∼ 2.5 h during the first night and detected at the level of 6 σ in the first 20 minutes. This is the second firm detection of a GRB with the MAGIC telescopes after GRB 190114C, and also the farthest VHE source detected to date. Both the observed and intrinsic average spectra can be well described by a power law. A time-resolved analysis was also performed, in order to evaluate the temporal behavior of the VHE emission. The obtained light curve shows a monotonic power-law decay, indicating a probable afterglow origin of the VHE emission. Multi-wavelength data were also collected by other ground- and space-based instruments. Unfortunately, most of them are not contemporaneous to the first MAGIC observation time window. In other cases, as for Fermi-LAT, the GRB could not be detected. As for other GRBs detected in the VHE range, the multi-wavelength data were used to perform a modeling of the broadband emission. In this manuscript a synchrotron and SSC radiation model at the forward shock in the afterglow was considered. SEDs built at different times show that synchrotron photons can reach a maximum energy of 10 GeV about three minutes after the GRB onset. The emission detected by MAGIC reaches higher energies, and can therefore be explained by the SSC component of the model. By comparing analytic estimates and the numerical modeling, evidence for the need of a different component at the origin of the late-time radio emission is found, in agreement with previously published studies on this GRB. Both the observations and the modeling support a wind-like medium, as expected in the case of a long GRB. The best-fit model parameters are found to be consistent with those estimated in previous studies of GRB afterglows without VHE detection. This proves the flexibility of the SSC scenario in describing the VHE emission of GRBs.
Like other VHE-detected GRBs (GRB 180720B and GRB 190114C), GRB 201216C was a bright GRB, allowing for a detection in spite of the high redshift. Once again, the rapid response and low energy threshold of the MAGIC telescopes to GRB alerts were crucial to detect the VHE emission in the early afterglow phase. Altogether, the detection by MAGIC and other experiments of several bursts so far suggests that VHE emission is common both in high- and low-luminosity GRBs. Other VHE-detected GRBs showed a correlation between the intrinsic emission in the X-ray and VHE bands, where a similar time decay and flux value were observed. In the case of GRB 201216C such a direct comparison cannot be performed given the lack of contemporaneous data in the two bands. The extrapolation of the X-ray flux into the first MAGIC time window, assuming a smooth power-law behavior typical of the afterglow phase, shows that the VHE flux is lower than the X-ray one. However, one should take into account the rather narrow energy range of the VHE detection due to the large absorption caused by the EBL.
ACKNOWLEDGEMENTS
Number JP21K20368. L. Nava acknowledges partial support from the INAF Mini-grant 'Shock acceleration in Gamma Ray Bursts'. The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council (STFC) under UKRI grant ST/T00147X/1. Manisha Shrestha and Iain Steele thank UKRI/STFC for financial support (ST/R000484/1). We would like to thank the STARGATE collaboration for the confirmation of the redshift of the source via private communication.
Figure 1 .
Figure 1. θ² distribution for the first 20 minutes of observation; see the main text for the definition. Both GRB (red circles) and background events (blue squares) are shown. The vertical dashed black line shows the value of the cut in θ² used for the calculation of the significance.
Figure 2 .
Figure 2. Test-statistics sky map for the first 20 minutes of observation.The cross marker shows the position of GRB 201216C reported by Swift-XRT.The white circle shows the MAGIC point spread function corresponding to 68% containment.
Figure 3 .
Figure 3. Observed and EBL-corrected spectra for GRB 201216C as measured by the MAGIC telescopes during the first 20 minutes of observations, denoted by white and blue filled points respectively. The highest energy bin is a 2σ upper limit in each spectrum. The solid black and dashed grey lines represent the forward folding fits to the data points. The solid grey line is obtained from the intrinsic spectrum fit (black solid line) after the absorption by the EBL is taken into account, using the D11 model.
Figure 4 .
Figure 4. EBL-corrected energy-flux light curve between 70 GeV and 200 GeV from T0 + 56 s to T0 + 40 min (first night, divided into five time bins) and from T0 + 20.5 h to T0 + 24.6 h (second night). Upper limits are calculated at 95% confidence level for the bins with relative errors >50%.
Figure 5 .
Figure 5. Multi-wavelength light curves of GRB 201216C. Both the X-ray and optical observations have been corrected for absorption. MAGIC data points are EBL-corrected. Upside-down triangles represent upper limits. Solid curves show the best-fit model obtained in a synchrotron - SSC forward-shock scenario. Different colors refer to the different wavelengths where observations are available (see the legend). The modeling is obtained with the following parameters: E_k = 4 × 10^53 erg, ε_e = 0.08, ε_B = 2.5 × 10^−3, A★ = 2.5 × 10^−2, p = 2.1, Γ0 = 180, and θ_jet = 1°. Vertical lines mark the times at which SEDs have been built (see Fig. 6).
Figure 6 .
Figure 6. SEDs of GRB 201216C at different times. Different colors for curves and data points refer to different times (see the legend). The times at which the SEDs are calculated are also marked in Fig. 5 with vertical stripes. Solid curves show the synchrotron and SSC theoretical spectra for the same parameters used in Fig. 5. De-absorbed optical data in the r′ filter are marked with square symbols, while star symbols are observations at 1.3 GHz and 10 GHz at 54.5 d and 53 d, respectively (from Rhodes et al. 2022). The XRT spectral data points estimated around 9000 s are also shown. Green circles show the MAGIC spectrum averaged between 56 and 1224 s (Fig. 3). The theoretical SED to be compared with the MAGIC spectrum is the green curve, which shows the predicted spectrum (synchrotron + SSC) averaged in the same time window (56-1224 s).
Table 1 .
Fitted power-law spectral parameters of the 20-minute average spectrum using different scales of the Cherenkov light amount and different EBL models. The tested EBL models are D11, F08, FI10, and G12 with the nominal light scale. The tested light scales are nominal, −15%, and +15% with the D11 EBL model. The normalization energy is fixed to 100 GeV. The errors are statistical only. The resulting systematic errors are reported in the main text.
Table 2 .
List of the input parameters for the afterglow model. For each parameter, the range of values investigated by means of the numerical model is listed in the second column. Solutions are not found for a homogeneous density medium (s = 0). The last column lists the values that best fit the observations and that are used to produce the model light curves and model SEDs in Figs. 5 and 6.
"Physics"
] |
Transcriptional Regulation of the Human Sterol 12α-Hydroxylase Gene (CYP8B1) ROLES OF HEPATOCYTE NUCLEAR FACTOR 4α IN MEDIATING BILE ACID REPRESSION
Abstract Sterol 12α-hydroxylase catalyzes the synthesis of cholic acid and controls the ratio of cholic acid over chenodeoxycholic acid in the bile. Transcription of CYP8B1 is inhibited by bile acids, cholesterol, and insulin. To study the mechanism of CYP8B1 transcription by bile acids, we have cloned and determined 3389 base pairs of the 5′-upstream nucleotide sequences of the human CYP8B1. Deletion analysis of CYP8B1/luciferase reporter activity in HepG2 cells revealed that the sequences from −57 to +300 were important for basal and liver-specific promoter activities. Hepatocyte nuclear factor 4α (HNF4α) strongly activated human CYP8B1 promoter activities, whereas cholesterol 7α-hydroxylase promoter factor (CPF), an NR5A2 family nuclear receptor, had much less effect. Electrophoretic mobility shift assay identified an overlapping HNF4α- and CPF-binding site in the +198/+227 region. The human CYP8B1 promoter activities were strongly repressed by bile acids, and the bile acid response element was localized between +137 and +220. Site-directed mutagenesis of the HNF4α-binding site markedly reduced promoter activity and its response to bile acid repression. On the other hand, mutation of the CPF-binding site had little effect on promoter activity and bile acid inhibition. A negative nuclear receptor, small heterodimer partner (SHP), markedly inhibited transactivation of CYP8B1 by HNF4α. A mammalian two-hybrid assay confirmed that HNF4α interacted with small heterodimer partner. Furthermore, bile acids and farnesoid X receptor reduced the expression of nuclear HNF4α in HepG2 cells and rat livers and its binding to DNA. Bile acids and farnesoid X receptor also inhibited mouse HNF4α gene transcription. In summary, our data revealed the critical roles HNF4α plays in CYP8B1 transcription and its repression by bile acids. Bile acids repress human CYP8B1 transcription by reducing the transactivation activity of HNF4α through interaction of HNF4α with SHP and reduction of HNF4α expression in the liver.
High serum cholesterol contributes to atherosclerosis and cardiovascular diseases (1). The conversion of cholesterol to bile acids is the most significant pathway for cholesterol disposal and occurs exclusively in the liver (2)(3)(4). The neutral pathway of bile acid synthesis is subject to bile acid feedback inhibition of the rate-limiting step catalyzed by cholesterol 7α-hydroxylase (CYP7A1) (5). CYP8B1 catalyzes the synthesis of cholic acid and controls the ratio of cholic acid (CA) over chenodeoxycholic acid (CDCA) that determines the hydrophobicity of the bile acid pool (6). CYP8B1 was purified from rabbit livers in 1992 (7). Recently, the cDNAs and the genes encoding CYP8B1 were cloned in the rabbit, rat, mouse, and human (8-10). Interestingly, the CYP8B1 gene has no intron.
Cholesterol feeding or thyroid hormone repress CYP8B1 expression (8,9,11), in contrast to their stimulatory effect on CYP7A1. In streptozotocin-induced diabetic rats, the CYP8B1 activity and mRNA levels were elevated, which could be suppressed by insulin administration (12). An increase in CYP8B1 transcription may explain the increased synthesis of cholic acid in diabetes.
Recently, two overlapping CPF-binding sites have been identified in the rat CYP8B1 promoter (31). Mutation of the CPF-binding sites abolished CYP8B1 transcriptional activity, such that the bile acid inhibition of the CYP8B1 promoter activity could not be determined. We analyzed the nucleotide sequences of the putative BAREs in the CYP7A1 and CYP8B1 promoters of different species and unveiled a general feature: the BAREs contain overlapping binding sites for CPF and HNF4α despite low sequence identity between the BAREs of these two genes (4,31). We hypothesize a general mechanism in which SHP interacts with either HNF4α or CPF and inhibits the genes regulated by bile acids. HNF4α (NR2A1) is an orphan nuclear receptor that binds to the direct repeat with one base spacing (DR1) motif as a homodimer and regulates the liver-specific expression of many genes in lipoprotein and glucose metabolism. HNF4α has constitutive activity and is able to transactivate genes without ligand binding. HNF4α has been shown to activate CYP7A1 transcription (32).
The goal of this research was to investigate the mechanism of transcriptional regulation of the human CYP8B1 promoter by bile acids. In the present study, we characterized the promoter of the human CYP8B1. Site-directed mutagenesis, reporter gene assays, and EMSA were used to study the effects of CPF, HNF4α, and SHP. Since CPF was used in this study, the term CPF will be used in this work unless specified otherwise. It should be mentioned that FTF is the name recommended by the Genomic Data Base Nomenclature Committee (accession number 9837397) (26,29). We demonstrated that HNF4α plays a major role in regulating human CYP8B1 transcription and mediating bile acid repression by interacting with SHP. In addition, bile acids also inhibited CYP8B1 transcription by inhibiting HNF4α transcription.
EXPERIMENTAL PROCEDURES
Cloning of the Human CYP8B1 Gene-Based on the available human CYP8B1 sequence, the sequence from nucleotide −514 to +300 relative to the transcription initiation site was amplified by polymerase chain reaction (PCR) using human liver genomic DNA as a template. One genomic clone, H8B5, was isolated from the FIX II genomic library (Stratagene, La Jolla, CA) using the end-labeled genomic fragment as a hybridization probe. Southern blot and PCR analyses confirmed that this clone (15 kb) contained the entire coding sequence of the human CYP8B1 and about 8 kb of the 5′-flanking sequences. A PstI/SacI restriction fragment containing 3.2 kb of the 5′-flanking sequences was subcloned into the pBluescript II SK+ vector (Promega, Madison, WI) and designated as pBSK/h8B1/3.5K. Nucleotide sequencing revealed 3389 base pairs of the 5′-upstream sequence (GenBank accession number AF226627).
Construction of Human CYP8B1/Luc Reporters-The −514/+300 fragment obtained by PCR was cloned into the SacI and SmaI sites of the luciferase reporter, pGL3 basic vector (Promega). The phCYP8B1−514/+300Luc obtained was then digested with KpnI and SpeI to release a fragment covering the sequence from −514 to +172 and replace it with a KpnI and SpeI fragment (−3064 to +172) released from pBSK/h8B1/3.5K. The resulting construct was designated phCYP8B1−3064/+300Luc. The phCYP8B1−514/+300Luc was digested with HinfI at −111. The sticky ends produced by HinfI were blunted with the Klenow polymerase. The linearized plasmid was subjected to XhoI digestion and subsequently cloned into the pGL3 vector digested with SmaI and XhoI. The phCYP8B1−514/+300Luc was also used as a template for the construction of deletion mutants by PCR to generate phCYP8B1−434/+300Luc, phCYP8B1−164/+300Luc, and phCYP8B1−57/+300Luc. All of these constructs had an MluI site built in at the 5′-end and an XhoI site at the 3′-end and were cloned into the MluI- and XhoI-digested pGL3 vector. To construct the 3′-deletion mutants, the phCYP8B1−514/+300Luc was digested with BglII and SpeI or StuI to release fragments from +180 to +300 and +248 to +300, respectively. The sticky ends were then blunted with the Klenow polymerase, and the linearized plasmids were religated to generate phCYP8B1−514/+180Luc and phCYP8B1−514/+248Luc. The phCYP8B1−514/+76Luc was constructed by digesting phCYP8B1−514/+300Luc with HindIII to release the fragment from +76 to +300 and religating the linearized plasmid. Other 3′-deletion mutants were obtained by PCR to construct phCYP8B1−514/+220Luc, phCYP8B1−514/+200Luc, and phCYP8B1−514/+137Luc. These three constructs had SacI at the 5′-end and SmaI sites at the 3′-end.
Mutations were introduced into the reporter constructs by PCR-based site-directed mutagenesis using the QuikChange Site-Directed Mutagenesis Kit (Stratagene). Two complementary oligonucleotide sets were designed as PCR primers: primer M4 (+194 to +234) was used to mutate the HNF4α site and introduce a CPF site in the reverse strand (+209 tttaccttga +218); primer M5 (+200 to +245) was used to introduce a consensus 3′-HRE (+215 aGgtCA +220). Reaction mixtures were set up according to the manufacturer's instructions using 50 ng of template DNA and 125 ng of primers. Cycling parameters: denaturing at 95°C for 30 s, followed by 18 cycles of 95°C for 30 s, 55°C for 1 min, and 68°C for 12 min. The reaction was subjected to DpnI digestion for 2 h. The plasmids were then transformed into XL1-Blue supercompetent cells. The sequences of all constructs were confirmed by DNA sequencing.
Mammalian Two-hybrid Assay-Gal4/HNF4α fusion construct (provided by M. Crestani (33)) was prepared by inserting the full-length HNF4α coding region (amino acid residues 1-455) into plasmid pcDNA3X(−) (Invitrogen) at the BamHI site. VP16-SHP (provided by D. Mangelsdorf (24)) contained a full-length mouse SHP coding region ligated into pCMX-VP16. The CheckMate™ mammalian two-hybrid assay kit was obtained from Promega. The pBIND vector contains the yeast Gal4 DNA binding domain, and the pACT vector contains the herpes simplex virus VP16 activation domain. Gal4/Id and VP16/MyoD provided in the kit were used as a positive control. The reporter plasmid pG5luc contains five copies of Gal4-binding sites fused upstream of the firefly luciferase gene (luc).
Transient Transfection Assay-Confluent cultures of HepG2 cells grown in 24-well tissue culture plates were transfected with plasmids by the calcium phosphate coprecipitation method. The reporter construct, receptor expression plasmid, and pCMV β-galactosidase plasmid (CLONTECH, Palo Alto, CA; one-tenth of reporter plasmid, as an internal control for transfection efficiency) were transfected in each well. The pcDNA3 vector was added to normalize the amounts of DNA transfected in each assay. Cells were overlaid with serum-free media containing the indicated concentrations of bile acids. Cells were lysed 40 h after transfection. Each data point is the average of triplicate assays. Each experiment was repeated three times. Luciferase activity was assayed using the Luciferase Assay System (Promega), and luminescence was determined using a Lumat LB 9501 luminometer (Berthold Systems, Inc., Pittsburgh, PA). Luciferase activities were normalized for transfection efficiencies by dividing the relative light units by β-galactosidase activity expressed from the cotransfected pCMV β-galactosidase plasmid. A human CYP7A1 reporter gene, phCYP7A1−372/+25Luc, was constructed previously. Mouse HNF4α/Luc reporter (pDGT43) containing 744 base pairs of 5′-upstream sequence was provided by Dr. T. Leff (Warner-Lambert/Parke-Davis, Ann Arbor, MI).
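The normalization described above (relative light units divided by β-galactosidase activity, with triplicate wells and fold induction relative to a control condition) is straightforward to script. The sketch below uses invented readings purely to illustrate the calculation; it is not the authors' analysis code.

```python
import statistics

def normalized_luciferase(rlu, beta_gal):
    """Normalize raw luciferase counts by beta-galactosidase activity per well."""
    return [l / b for l, b in zip(rlu, beta_gal)]

def fold_induction(treated, control):
    """Mean fold induction of normalized activity over the control condition."""
    return statistics.mean(treated) / statistics.mean(control)

# Hypothetical triplicate readings (arbitrary units), for illustration only.
control_rlu, control_bgal = [12000, 11500, 12500], [0.95, 1.00, 1.05]
hnf4a_rlu, hnf4a_bgal = [98000, 105000, 101000], [1.02, 0.98, 1.00]

control_norm = normalized_luciferase(control_rlu, control_bgal)
hnf4a_norm = normalized_luciferase(hnf4a_rlu, hnf4a_bgal)
print(f"fold induction: {fold_induction(hnf4a_norm, control_norm):.1f}")
```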
Nuclear Extract Preparation-Confluent HepG2 cells were detached with trypsin and washed twice with cold phosphate-buffered saline (PBS). Cells were then resuspended in hypotonic buffer and swelled for 10 min on ice. The cells were broken using a Dounce homogenizer with a tightly fitting pestle. One-tenth volume of 75% sucrose buffer was added and homogenized. The homogenate was spun for 30 s at 16,000 × g at 4°C. The viscous nuclear pellet was then lysed in nuclear resuspension buffer containing 0.4 M ammonium sulfate and centrifuged at 2°C for 90 min at 150,000 × g to pellet nuclear debris and chromatin. Solid ammonium sulfate was added to precipitate the nuclear protein from the supernatant. The pellet was dissolved in nuclear dialysis buffer and dialyzed overnight at 4°C. Protein concentration was quantitated using the Coomassie Plus Protein Assay Reagent Kit (Pierce), and the nuclear extracts were stored at −70°C in aliquots. Nuclear extracts were also isolated from the livers of rats treated with a diet supplemented with CA (1%), CDCA (1%), deoxycholic acid (DCA) (0.25%), ursodeoxycholic acid (UDCA) (1%), cholestyramine (5%), or cholesterol (1%) for 2 weeks. Animals were housed in a room with a reversed dark/light cycle (3 a.m. to 3 p.m. dark, 3 p.m. to 3 a.m. light) and sacrificed at 9 a.m.
Electrophoretic Mobility Shift Assay (EMSA)-Double-stranded synthetic probes for EMSA were prepared by heating equal molar amounts of complementary oligonucleotides to 95°C in 2× SSC (0.5 M NaCl, 15 mM sodium citrate, pH 7.0) and cooling to room temperature. The resulting double-stranded fragments were labeled by filling in the overhang incorporated in the synthetic oligonucleotides with [α-32P]dCTP (3000 Ci/mmol) using the Klenow fragment of DNA polymerase I. Oligonucleotides filled in with non-labeled dNTPs were used as cold competitors. Labeled fragments were purified through two G-50 spin columns. Binding reactions were initiated by the addition of 3 µg of nuclear extract to 100,000 cpm of labeled oligonucleotide probe dissolved in 20 µl of buffer containing 12 mM HEPES, pH 7.9, 50 mM KCl, 1 mM EDTA, 1 mM dithiothreitol, 15% glycerol, and 2 µg of poly(dI-dC). Samples were incubated for 20 min at room temperature. Four percent polyacrylamide gels were prepared and pre-run for 30 min at 200 V. Electrophoresis was performed at room temperature at a constant 200 V for 1.5-2 h. The gel was dried and autoradiographed using a PhosphorImager 445Si (Molecular Dynamics, Sunnyvale, CA). The images were analyzed using IP Lab Gel software (Signal Analytics Corp., Vienna, VA). Antibody supershift was carried out by adding the antibody (1-2 µl) to the nuclear extract and incubating for 15 min before mixing with the labeled probe. Oligonucleotides used for competition assays were HNF4 from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA), SP-1 from Promega (Madison, WI), and CPF synthesized according to Gilbert et al. (30).
Immunoblot-To measure HNF4α protein in the nuclei, 3 µg of nuclear extract was run on 10% SDS-polyacrylamide gel electrophoresis and transferred electrophoretically to a nitrocellulose membrane (Hybond ECL, Amersham Pharmacia Biotech). Membranes were blocked with 5% (w/v) non-fat milk in Tween-PBS (T-PBS) overnight at 4°C and incubated with the antibody against HNF4α (Santa Cruz Biotechnology) at a dilution of 1:5000 in T-PBS for 2 h at 4°C. Membranes were washed three times with T-PBS and incubated with a secondary antibody (horseradish peroxidase-conjugated anti-goat IgG) at a dilution of 1:3000 at 4°C for 2 h. Immunodetection was carried out using an enhanced chemiluminescence kit (Amersham Pharmacia Biotech). Membranes were imaged using a Kodak Imaging Station 440.
RESULTS
Functional Analysis of the Human CYP8B1 Gene Promoter-To determine the contribution of the 5′-flanking sequences to human CYP8B1 promoter activity, we performed transient transfection assays of CYP8B1/luciferase reporters in HepG2 cells and CHO cells. Fig. 1A shows that sequential deletion of the nucleotide sequence from the 5′-direction did not alter reporter activity much in HepG2 cells. In contrast, deletion of sequences from the 3′-direction markedly reduced reporter activity in HepG2 cells (Fig. 1B). The sequence from +248 to +300 apparently was very important for promoter activity, because deletion of this region reduced the activity by 75%, relative to the promoter activity of phCYP8B1−514/+300Luc. Deletion of the sequence from +180 to +248 further reduced the promoter activities to 10%. Transfection assays of these deletion mutants were also done in CHO cells. Deletion of the sequence from the 5′-end did not change the promoter activities, as observed in the HepG2 cells (data not shown). The loss of promoter activity was much less in CHO cells than in HepG2 cells when the regions between +180 and +300 were deleted (Fig. 1B). It appeared that the sequence between +180 and +300 was important for the liver-specific transcription of the human CYP8B1. Fig. 2 shows the nucleotide sequence and putative transcription factor-binding sites of the proximal promoter of the human CYP8B1. The transcription start site is located 325 base pairs upstream of the translation start codon (9). The TATA box is located at −56/−51. Sequences downstream of the transcription start site contain consensus binding sites for both ubiquitous and liver-specific transcription factors. The region from +248 to +300, which is critical for basal promoter activity, contains several NF-1-binding sites. The region upstream of +180 contains a cluster of putative binding sequences for liver-specific factors, HNF3, CEBP, and DBP. HNF3- and CEBP-binding sites are similar to the insulin response sequence, T(G/A)TTTTG, found in the phosphoenolpyruvate carboxykinase and insulin-like growth factor-binding protein-1 genes, which has been implicated in mediating the repression by insulin (34). The DBP plays a role in the diurnal rhythm of CYP7A1 and other clock genes (35). Interestingly, a sterol response element-3 (SRE-3)-like palindromic sequence (CACTAGTG), a SRE/Sp1 (TGCGGCCAC), and an E box (CAGGTG) are located in this region. These sequences are potential SREBP-binding sites. SREBPs are helix-loop-helix-leucine zipper transcription factors that regulate the genes in cholesterol and fatty acid synthesis (36,37). The sequence from +208 to +220 contains overlapping HNF4α- and CPF-binding sites. In addition, two E boxes, an HRE half-site, AGGTCA, preceded by an A/T-rich sequence (a binding site for the monomeric nuclear receptor Rev-erbα), and an SRE are located further upstream.
FIG. 1. Deletion analysis of the human CYP8B1 promoter. The 5′- and 3′-sequential deletions of the human CYP8B1 upstream sequence were cloned into the pGL3 basic reporter vector. Chimeric reporter plasmids (1 µg) were transfected into confluent HepG2 or CHO cells. Schematic representation of the deletion constructs with numbering at the left indicates the nucleotides covered relative to the transcription start site (TS), which is indicated by a dotted line. The luciferase reporter activities of the deletion constructs were expressed relative to the reporter activities of plasmids hCYP8B1−57/+300Luc and hCYP8B1−514/+300Luc, which were set as 1 in the 5′- (A) and 3′-deletion (B) analyses, respectively. The error bars represent the standard deviation from the mean of triplicate assays of a representative experiment. Experiments were repeated three times.
Mapping of the HNF4α- and CPF-binding Sites-Sequences from +198 to +227 of the human CYP8B1 contain a CPF-binding site (GCAAGGTCC, Fig. 3A), which is similar to that identified in the rat CYP8B1 (31). The rat CYP8B1 contains two overlapping CPF-binding sites and a DR1 shown to be a weak PPARα-binding site (38). We also noticed a DR1 motif (AGGGCAaGGTCCA) overlapping with a CPF-binding site in the human CYP8B1. This feature is similar to the bile acid response element II we identified previously in the rat and human CYP7A1 (Fig. 3A) (4, 23). HNF4α and CPF have been shown to bind the BARE-II (27,32). To identify transcription factors bound to the sequence from +198 to +227 of human CYP8B1, we performed EMSA using HepG2 cell nuclear extracts (Fig. 3B). Four DNA-protein complexes were obtained. The strongest band was further identified as an HNF4α-DNA complex by competition assay using unlabeled HNF4α consensus probe, in vitro synthesized HNF4α protein, and antibody supershift assays. A faster moving band was identified as a CPF-DNA complex using in vitro synthesized CPF and competition assay using unlabeled CPF oligonucleotides. These two complexes were competed out by a 100-fold excess of unlabeled CPF or HNF4α oligonucleotide, respectively, but an unrelated SP-1 oligonucleotide could not compete out the complexes. Fig. 3C shows that in vitro synthesized CPF binds to this probe, but RARα/RXRα, PPARα/RXRα, and RXRα homodimer were unable to bind to this sequence. These data indicate that HNF4α and CPF specifically bind to the +198/+227 region. HNF4α- and CPF-binding sites were further studied by mutagenesis. We designed mutant oligonucleotides to alter nucleotide sequences located upstream of the putative HNF4α site (M1, M2, and M3), the HNF4α site (M3, M4, M6, and M7), and the putative CPF-binding site (M5 and M7) (Fig. 4A). Mutation in M4 altered the HNF4α site but created a new CPF site in the reverse strand. M5 was designed to alter the 3′-HRE (GGTCCA) to a consensus HRE half-site, AGGTCA, and mutated the CPF site. M6 was designed to mutate the DR1 by deleting the spacing between the two HREs and to mutate the downstream sequences so that the DR1 motif was altered to a DR0. M7 was designed to mutate both HNF4α and CPF sites.
Mutant probes were then labeled and used for EMSA to study binding specificity with in vitro synthesized HNF4α and CPF (Fig. 4, B and C). Mutations of the sequences upstream of the HNF4α/CPF site (M1, M2, and M3) reduced both HNF4α and CPF binding. Mutations of the HNF4α-binding site (M4 and M7) totally abolished HNF4α binding. M5 showed stronger HNF4α binding because the 3′-HRE was mutated to a consensus HRE. Mutations of the core (AAGG) of the CPF-binding site (M5 and M7) abolished CPF binding. M4, which had the HNF4α site mutated and a CPF site created in the reverse strand, bound CPF more strongly than the wild-type probe. M6, which had the HNF4α-binding site altered but not the CPF site, bound CPF. Fig. 4D shows EMSA using HepG2 nuclear extracts. Essentially the same results were obtained as with in vitro synthesized HNF4α and CPF. These data confirmed the HNF4α- and CPF-binding sites in this region. However, sequences upstream of this overlapping site are also involved in the binding of HNF4α and CPF, because mutations of these sequences reduced their binding affinity.
Transcriptional Regulation of the Human CYP8B1 by HNF4α and CPF-We then studied the regulation of human CYP8B1 transcription by HNF4α and CPF using transient transfection assays of CYP8B1/Luc reporter genes in HepG2 cells. HNF4α dose-dependently stimulated the reporter activity by 20-fold (Fig. 5A). Under the same assay conditions, CPF had much less effect, up to 2-fold stimulation (Fig. 5B). We also did reporter assays in HEK293 cells (Fig. 6A). As in HepG2 cells, CPF (0.5 µg) did not affect CYP8B1/Luc reporter activity, whereas HNF4α (0.5 µg) strongly stimulated reporter activity by 5-fold. Cotransfection with both HNF4α and CPF did not potentiate the reporter activity stimulated by HNF4α. We did the same assay in 293 cells with a human CYP7A1/Luc reporter as a comparison. CPF or HNF4α stimulated CYP7A1/Luc reporter activity by 2-fold (Fig. 6B). Cotransfection with both CPF and HNF4α stimulated the human CYP7A1 promoter by 4-fold. Thus these two liver-specific nuclear receptors regulate human CYP7A1 and CYP8B1 differently; both of them synergistically regulate human CYP7A1, but only HNF4α regulates CYP8B1.
We then introduced mutations into the human CYP8B1/Luc reporter based on the EMSA results. As shown in Fig. 7, mutation of the HNF4α site, which created a CPF site in the reverse strand (M4), markedly reduced reporter activity and abolished the stimulatory effect of HNF4α. Mutation of the 3′-HRE of the HNF4α site to a consensus HRE mutated the CPF core sequence (M5) but did not alter basal reporter activity and maintained HNF4α stimulation. These results suggest that the HNF4α-binding site is critical for basal promoter activity, whereas CPF does not have much effect on human CYP8B1 transcription despite its ability to bind to the gene.
Suppression of Human CYP8B1 Transcription by Bile Acids Was Mediated through HNF4α-We then studied the effects of different bile acids and FXR on the activity of the human CYP8B1/Luc reporter in transfection assays in HepG2 cells (Fig. 8A). Addition of DCA or CDCA (25 µM) repressed the reporter activity by 50-70%, respectively. Cotransfection with the human liver Na+/taurocholate cotransport peptide (NTCP) was required for all taurine-conjugated bile acids to repress reporter activity in HepG2 cells, except taurolithocholic acid. Taurolithocholic acid is highly hydrophobic and may be toxic to HepG2 cells even at low concentrations. However, transfection with FXR/RXRα somewhat stimulated basal activity but did not enhance the inhibitory effect of bile acids on the human CYP8B1. We found previously that FXR enhanced the inhibitory effect of bile acids on CYP7A1 transcription. We interpreted this to mean that factors induced by bile acids in HepG2 cells might be sufficient for bile acid inhibition of CYP8B1; thus CYP8B1 may be more sensitive to bile acid inhibition than CYP7A1. This is consistent with the reported potency of bile acid inhibition, CYP8B1 > CYP7A1 > CYP27A1 (11). It is also possible that somewhat different mechanisms may be involved in bile acid repression of these genes.
FIG. 3. Bile acid response elements of rat and human CYP7A1 and CYP8B1 and electrophoretic mobility shift assays of HNF4α and CPF binding to a probe containing nucleotide sequences of the human CYP8B1 from +198 to +227. A, alignment of rat and human CYP8B1 sequences from +198 to +227. Putative HNF4α- and CPF-binding sites are indicated. Corresponding sequences of the rat and human CYP7A1 are shown for comparison. B, EMSA performed with α-32P-labeled double-stranded probe H8B+198/+227. Nuclear extracts isolated from HepG2 cells and in vitro synthesized HNF4α and CPF were used for EMSA. Reaction mixtures contained 3 µg of protein of nuclear extracts or in vitro synthesized HNF4α (5 µl) or CPF (5 µl). Unlabeled oligonucleotides were added in 100× excess for competition assays. Anti-HNF4α antibody (2 µl) was added to the reaction 30 min prior to the addition of the probe for supershift assays. C, EMSA of CPF and HNF4α binding to the human CYP8B1 +198/+227 probe. TNT lysates programmed with CPF, HNF4α, PPARα, and RXRα were used for EMSA. HNF4α oligonucleotide probe, CTCAGCTTGTACTTTGGTACAACA; CPF oligonucleotide probe, TAGGCCTCAAGGTCGGTCG; SP1 oligonucleotide probe, ATTCGATCGGGGCGGGGCGAG.
We then used deletion mutants of phCYP8B1−514/+300Luc to map the region conferring bile acid repression. Deletion from −514 to −57 did not affect the bile acid repression (Fig. 8B). When we deleted the sequences from the 3′-direction, between +137 and +220, bile acid responses as well as promoter activities were greatly reduced (Fig. 8C). This region contains the HNF4α- and CPF-binding sites. We subsequently studied the bile acid effect on mutant CYP8B1/Luc reporters in HepG2 cells. As shown in Fig. 9, the promoter activities of the wild type and mutant construct M5, which had the CPF site mutated but the HNF4α site maintained, were suppressed by CDCA in a dose-dependent manner. When the HNF4α site was mutated but a CPF site was created in the reverse strand (M4), CDCA did not affect the reporter activities. These results demonstrated that HNF4α binding is necessary and sufficient for mediating bile acid repression of human CYP8B1 transcription, and CPF might not be involved in mediating bile acid repression of the gene.
FIG. 5. Effects of HNF4α and CPF on human CYP8B1 reporter activity. A, dose-responses of HNF4α on phCYP8B1−514/+300Luc reporter activity. Reporter (1 µg) was cotransfected with the indicated amounts of HNF4α expression plasmid into HepG2 cells. B, dose-responses of CPF on phCYP8B1−514/+300Luc reporter activity. Reporter (1 µg) was cotransfected with the indicated amounts of CPF expression plasmids into HepG2 cells. Reporter activities were determined as described under "Experimental Procedures" and as in Fig. 1.
FIG. 6. Effects of HNF4α and/or CPF on human CYP8B1 and CYP7A1 transcription. A, effect of HNF4α and CPF on human CYP8B1−514/+300Luc reporter activity. phCYP8B1−514/+300Luc reporter (1 µg) was cotransfected with 0.5 µg of HNF4α and/or CPF into HEK293 cells. B, effect of HNF4α and CPF on human CYP7A1/Luc reporter activity. phCYP7A1−372/+25Luc (1 µg) was transfected with HNF4α and/or CPF expression plasmid (0.5 µg) into HEK293 cells. Control was transfected with pcDNA3 vector (0.5 µg). The empty plasmid was added to compensate for the total amount of DNA transfected in each assay. The reporter activities were expressed as the relative luciferase activities normalized by β-galactosidase activities.
SHP Interacted with HNF4α and Repressed Human CYP8B1 Transcription-Bile acids repress CYP7A1 transcription by induction of SHP, which interacts with CPF and represses CYP7A1 transcription (24,25). Since HNF4α was found to play a major role and CPF had a much lesser effect on CYP8B1 transcription, we studied the effect of SHP on HNF4α or CPF regulation of CYP8B1 transcription. Fig. 10A shows that cotransfection of HNF4α (1:5 of reporter) strongly stimulated human CYP8B1/Luc reporter activity by 8-fold, and transfection of SHP alone did not have any effect on CYP8B1 reporter activity. When cotransfected with both HNF4α and SHP, reporter activity was the same as the control, which was transfected with the pcDNA3 empty vector. Thus SHP repressed the reporter activity stimulated by HNF4α. Fig. 10B shows that HNF4α dose-dependently stimulated human CYP8B1/Luc reporter activity, and cotransfection with increasing amounts of SHP strongly repressed reporter activity in a dose-dependent manner. CPF stimulated CYP8B1/Luc reporter activity by up to 2-fold when transfected with a 2-fold excess of the receptor plasmid over reporter plasmid (Fig. 10C). SHP repressed the reporter activity stimulated by CPF by only about 50%, much less than its marked inhibitory effect on CYP8B1/Luc reporter activity stimulated by HNF4α.
We then employed a mammalian two-hybrid assay system to study the interaction between HNF4α and SHP in HepG2 cells (Fig. 10D). As a positive control for the two-hybrid assays, cotransfection with Gal4/Id and VP16/MyoD hybrid constructs resulted in a strong stimulation of luciferase reporter (pG5Luc) activity. Cotransfection with both VP16/SHP and Gal4/HNF4α hybrid plasmids resulted in stimulation of reporter activity by 4-fold over cotransfection of Gal4/HNF4α with either the VP16 empty vector (pACT) or VP16/MyoD. We also did two-hybrid assays with Gal4/SHP and VP16/HNF4α (data not shown). A strong stimulation of reporter activity, by 26-fold, was obtained. Thus both Gal4/HNF4α and VP16/SHP were required for stimulation of reporter activity in HepG2 cells, indicating that HNF4α and SHP did interact, as demonstrated by the mammalian two-hybrid assays.
Bile Acids Inhibited HNF4α Expression-It is possible that bile acid repression of human CYP8B1 transcription may also be due to inhibition of HNF4α binding to CYP8B1 or inhibition of HNF4α expression in hepatocytes. We first examined the effect of CDCA on HNF4α and CPF binding to the +198 to +227 probe (Fig. 11A). HepG2 cells were treated with CDCA (25 µM) with or without cotransfection with RXRα/FXR. Nuclear extracts were isolated from HepG2 cells and used for EMSA. When nuclear extracts of HepG2 cells treated with CDCA (25 µM) were used for EMSA, the HNF4α-DNA complex was reduced, and the CPF-DNA complex was abolished. Interestingly, nuclear extracts of HepG2 cells cotransfected with FXR/RXRα generated the same gel shift pattern as nuclear extracts isolated from untreated cells. However, when nuclear extracts isolated from HepG2 cells transfected with FXR/RXRα and treated with CDCA were used, the band shifts were almost completely abolished. When the HNF4α consensus sequence was used as a probe for EMSA with the same nuclear extract preparations (Fig. 11B), similar results were obtained. These results suggested that FXR and CDCA reduced HNF4α binding to DNA.
We wanted to determine whether a decreased expression level of HNF4α was responsible for the decreased HNF4α binding to DNA. We examined the nuclear HNF4α protein level in HepG2 cells by immunoblot assay (Fig. 12A). The HNF4α protein level in HepG2 cells was dramatically decreased by the CDCA treatment. Cotransfection of FXR/RXRα and treatment with CDCA completely eliminated HNF4α protein expression. We also treated rats with a diet supplemented with CDCA (1%), DCA (0.25%), CA (1%), UDCA (1%), cholestyramine (5%), or cholesterol (1%) for 2 weeks. Nuclear extracts were isolated from rat livers for EMSA. Fig. 12B shows that CDCA, DCA, and CA treatments markedly reduced the nuclear HNF4α protein levels. Cholestyramine and cholesterol did not alter the HNF4α protein levels. We then studied the effect of CDCA on mouse HNF4α/Luc reporter activity in transfection assays in HepG2 cells (Fig. 12C). CDCA (25 µM) repressed HNF4α reporter activity by 60%. Cotransfection with FXR/RXRα reduced reporter activity by 20%, and addition of CDCA further repressed reporter activity by 80%. These data revealed that bile acid repression of HNF4α transcription might also contribute to the inhibition of CYP8B1 transcription by bile acids, in addition to the repression by SHP/HNF4α interaction.
DISCUSSION
It has been revealed recently that FXR is a highly specific bile acid receptor that is activated by hydrophobic bile acids at physiological concentrations to directly stimulate the transcription of the genes in bile acid transport, absorption, and reverse cholesterol transport but indirectly inhibit CYP7A1 transcription (23-25). FXR may play a pivotal role in cholesterol metabolism by regulating the reverse cholesterol transport from the peripheral tissues to the liver for its conversion to bile acids. This mechanism may regulate the bile acid pool size in the liver, thus protecting liver cells from the cytotoxic effect of bile acids. It appears that the bile acid-activated FXR induces a negative nuclear receptor, SHP, which interacts with CPF and down-regulates CYP7A1 transcription (24,25,39). It was suggested that the same mechanism may also regulate CYP8B1 transcription and coordinately regulate bile acid biosynthesis (24,25). However, these investigators did not provide any experimental evidence that SHP interacted with CPF and inhibited CYP8B1 transcription. We demonstrated for the first time in this study that HNF4α was necessary and sufficient in mediating bile acid repression of human CYP8B1 transcription. The bile acid response elements of CYP7A1 identified previously and of CYP8B1 identified here share a common characteristic, i.e. they contain an overlapping CPF- and HNF4α-binding site. Furthermore, the nucleotide sequences of the bile acid response elements of the rat and human CYP8B1 are different in that the rat CYP8B1 contains two overlapping CPF-binding sites (31), whereas the human CYP8B1 contains only one CPF site. This may explain our observation that CPF stimulated but HNF4α had little effect on the rat CYP8B1,2 in contrast to the strong stimulatory effect of HNF4α but weak effect of CPF on human CYP8B1 transcription. Thus SHP interacts with HNF4α or CPF and represses human or rat CYP8B1 transcription, respectively. Therefore, CPF and HNF4α differentially regulate CYP7A1 and CYP8B1 transcription in a species- and gene-specific manner.
In this study we further demonstrated that SHP interacted with HNF4α by mammalian two-hybrid assay. Our result is consistent with the report by Lee et al. (40) that SHP interacts with HNF4α and stimulates reporter activity by 4-fold in two-hybrid assays in HepG2 cells. In contrast, Goodwin et al. (25) observed no interaction between SHP and HNF4α in a mammalian two-hybrid assay in CV-1 cells. SHP lacks a DNA binding domain and functions predominantly as a negative factor that heterodimerizes with many nuclear receptors (41). It has been reported that SHP either directly inhibits the transactivating activity of nuclear receptors or competes for their coactivators (40).
It has been suggested that CPF (LRH) is a competence factor that potentiates the sterol response of rat CYP7A1 transcription by the oxysterol receptor, liver X receptor (LXR) (24). In contrast, CPF potentiates HNF4α stimulation of human CYP7A1. It is also apparent that CPF is a weak transcription factor that stimulates human CYP7A1 (27) and rat CYP8B1 reporter (31) activity when cotransfected at high levels in non-liver cells. We found previously that CPF could function as a negative factor that inhibited human CYP7A1 transcription in transfection assays in HepG2 cells (39). CPF apparently is not important in regulating human CYP8B1 transcription, as shown by the mutagenesis analysis reported here.
FIG. 10. Effects of HNF4α, CPF, and SHP on human CYP8B1 transcription. Different amounts of HNF4α and SHP expression plasmids were cotransfected with the phCYP8B1−514/+300Luc reporter (1 µg) in HepG2 cells. A, effect of HNF4α and SHP on CYP8B1−514/+300Luc reporter activity in HepG2 cells. B, dose-dependent effects of HNF4α and SHP on phCYP8B1−514/+300Luc reporter activity. C, dose-dependent effects of CPF and SHP on CYP8B1−514/+300Luc reporter activity. Experimental conditions were the same as under Fig. 1. D, mammalian two-hybrid assay of SHP and HNF4α interaction in HepG2 cells. HepG2 cells were transfected with 0.5 µg of the reporter vector pG5Luc and 0.5 µg each of the hybrid constructs or empty plasmids indicated. The pBIND and pACT are Gal4 and VP16 empty vectors, respectively. Hybrid plasmids Gal4/Id and VP16/MyoD were used as a positive control for the two-hybrid assay (left panel). The right panel shows two-hybrid assays of Gal4/HNF4α and VP16/SHP. Normalized luciferase reporter activities from triplicate samples are presented. The numbers indicate fold induction of reporter activity over the control assay, which was transfected with Gal4/HNF4α and pACT.
In this study we revealed that bile acids were able to repress nuclear HNF4α protein expression in HepG2 cells transfected with FXR and treated with CDCA and also in bile acid-treated rat livers. HNF4α reporter activity was strongly repressed by CDCA and FXR. The rat HNF4α gene is regulated by CPF (26). Therefore, interaction of SHP with CPF may repress HNF4α gene transcription. We reported previously (42) that HNF4α protein expression was reduced by PPARα and the ligand Wy14,643, which may explain the inhibition of CYP7A1 transcription by fibrates. De Fabiani et al. (33) reported that bile acids could suppress CYP7A1 transcription by reducing the transactivation activity of HNF4α through a mitogen-activated protein kinase cascade. All these studies support our finding that HNF4α plays a pivotal role in regulating bile acid synthesis genes. FXR, HNF4α, CPF, and SHP are liver-enriched nuclear receptors with similar tissue expression patterns. They interact with each other and regulate gene expression during liver development and differentiation. It is intriguing that a specific bile acid receptor, FXR, induces a nonspecific receptor, SHP, that then interacts with other nuclear receptors to repress gene transcription. The expression of these nuclear receptors in hepatocytes must be tightly controlled to regulate liver gene expression during development and under different physiological and pathophysiological states.
CYP8B1 transcription is strongly inhibited by bile acids, cholesterol, and insulin. The expression of CYP8B1 activity may regulate the bile acid hydrophobicity in the bile, which ultimately regulates the overall rate of bile acid synthesis by feedback inhibition of CYP7A1 transcription. It has been suggested that a lithogenic diet containing cholic acid and cholesterol induces gallstone formation by facilitating the absorption of cholesterol in the intestine (43). A high cholesterol diet is known to stimulate CYP7A1 but reduce CYP8B1 transcription, resulting in increased hydrophobicity of the bile. We suggest that FXR and LXR differentially regulate CYP7A1 transcription in different species (44). When rabbits are fed a high cholesterol diet, the bile acid hydrophobicity and pool size increase such that the negative effect of FXR may dominate over the positive effect of LXR and repress CYP7A1 transcription. On the other hand, the positive effect of LXR may dominate over the negative effect of FXR and result in the stimulation of CYP7A1 transcription in rats fed a high cholesterol diet. Insulin strongly inhibits both CYP7A1 and CYP8B1 transcription and results in decreased conversion of cholesterol to bile acids. Insulin is known to increase the synthesis of cholesterol and triglyceride by stimulating sterol response element-binding protein isoform 1c (SREBP-1c) transcription and may contribute to hyperlipidemia in patients with type II diabetes (45). Mutations of the HNF4α gene have been identified in patients with maturity onset diabetes of the young (MODY1) (46). Interestingly, mutations of the SHP gene have been identified in obese Japanese subjects with early onset diabetes (47). These investigators suggest that SHP is a candidate MODY gene that may regulate HNF4α activity in the pancreas and control energy metabolism and body weight. In diabetes, bile acid synthesis and pool size increase in association with an increase in CYP8B1 activity (48,49). This is consistent with our results that SHP and HNF4α play important roles in regulating human CYP8B1 transcription. Thus, understanding the molecular regulation of CYP8B1 transcription by bile acids, cholesterol, and insulin is important for elucidating the mechanisms of lipid metabolism and the pathogenesis of diabetes.
FIG. 12. A, nuclear extracts of HepG2 cells from different treatments were separated by SDS-polyacrylamide gel electrophoresis (10%) and transferred to a nitrocellulose membrane. A goat polyclonal antibody raised against HNF4α was used to detect HNF4α protein. B, immunoblot of nuclear proteins using liver nuclear extracts isolated from rats. Rats were treated with different bile acids, cholestyramine, or cholesterol as described under "Experimental Procedures." Each lane contained 3 µg of nuclear proteins. C, effects of CDCA and FXR/RXRα on mouse HNF4α/Luc reporter activity in HepG2 cells. A mouse HNF4α/Luc reporter (1 µg) was transfected into HepG2 cells that were treated with 25 µM CDCA with or without the cotransfection of FXR and RXRα (0.25 µg). The β-galactosidase activity was used to normalize luciferase activities.
In summary, we have unveiled a unique mechanism of HNF4α-mediated bile acid repression of human CYP8B1 transcription. Bile acids are signaling molecules that may regulate many genes involved in lipid metabolism. New drugs targeted to nuclear receptors, including FXR, LXR, and HNF4α, may modulate the transcription of the genes involved in bile acid synthesis, transport, and absorption and lead to the reduction of serum cholesterol levels and the prevention of cholestasis and other liver diseases.
"Biology",
"Chemistry"
] |
Three-dimensional simulation for fast forward flight of a calliope hummingbird
We present a computational study of flapping-wing aerodynamics of a calliope hummingbird (Selasphorus calliope) during fast forward flight. Three-dimensional wing kinematics were incorporated into the model by extracting time-dependent wing positions from high-speed videos of the bird flying in a wind tunnel at 8.3 m s⁻¹. The advance ratio, i.e. the ratio between flight speed and average wingtip speed, is around one. An immersed-boundary method was used to simulate the flow around the wings and bird body. The results show that both downstroke and upstroke in a wingbeat cycle produce significant thrust for the bird to overcome drag on the body, and that such thrust production comes at the price of negative lift induced during upstroke. This feature might be shared with bats, while being distinct from insects and other birds, including the closely related swifts.
Introduction
Hummingbirds are distinguished and extremely agile flyers among birds. They are capable of not only sustained hovering flight, but also fast forward flight and various rapid manoeuvres. Recent studies of fluid dynamics have mainly focused on hummingbirds' unique hovering capability and the unsteady aerodynamics associated with wings that move in a relatively horizontal plane [1][2][3][4][5][6].
Model configuration and simulation approach
2.1. Reconstruction of the wing kinematics
A female calliope hummingbird (Selasphorus calliope) was the study subject, whose basic morphological data are provided in table 1. The experimental study was conducted to obtain the wing kinematics at a sustained flight speed of U = 8.3 m s⁻¹, at which the wingbeat frequency is 45.5 Hz and the stroke plane angle between the stroke plane and the horizontal is 67.9°. In the experiment, the bird was placed in an open-circuit, variable-speed wind tunnel with a feeder at the middle of the tunnel, and it was trained to adapt to the wind while feeding. We recorded flight kinematics of the hummingbird using three high-speed cameras distributed outside the wind tunnel: one Photron SA-3 (Photron USA Inc., San Diego, CA, USA) and two Photron 1024 PCI, all with electronically synchronized shutter timing. Two cameras were placed dorsally to the bird, and one was placed laterally, as shown by the views in figure 1. Video recordings were made at 1000 Hz with a shutter speed of 1/10 000 s. The working section of the tunnel is 85 cm in length and square in cross section, 60 × 60 cm² at the inlet and increasing to 61.5 × 61.5 cm² at the outlet to accommodate boundary-layer thickening [28]. Maximum deviations in velocity within a cross section are less than 10% of the mean; the boundary layer is less than 1 cm thick and turbulence is 1.2%. After the videos were taken, a custom MATLAB program [29] was used to track the markers frame by frame and to extract their three-dimensional coordinates. These markers were pre-labelled on the wings using non-toxic paint, and they included five points on the leading edge, one at the wingtip and three on the trailing edge, as shown in figure 2, where a comparison of the reconstructed model with the camera view shows that the instantaneous wing position and deformation are well captured. In a similar study of the hovering hummingbird [5], a principal components analysis verified that these points are sufficient to characterize the wing motion. The wing geometry reconstruction process is similar to that in the previous study [5]. The wing profile was constructed using spline interpolation through the marker points, and the wing surface was then built using triangular elements within the profile. The bird body was reconstructed using the camera views of the hummingbird. In the current reconstruction, a single wing consists of 1335 elements and 718 nodes, while the body surface consists of 3560 elements and 1782 nodes. A total of 13 cycles of wingbeats during steady flight were captured, and each cycle contains approximately 22 frames. To increase the time resolution of the wing position, the trajectories of the wing mesh nodes were also refined by spline interpolation in time. Seven cycles of wingbeats were reconstructed from the imaging data and used for the simulation. Figure 3 shows a sequence of wing positions within a cycle and also the wingtip trajectory (see the electronic supplementary material for an animation). The chord angle, ψ, is defined as the angle between the chord and the flight direction. The angle of attack, α, is defined as the angle between the chord and the relative flow direction that combines both the freestream velocity and the translational velocity of the chord at the leading edge. These two angles are plotted in figure 5 for two chords and five cycles, one proximal chord at dimensionless location r̄ = r/R = 0.15 and one distal chord at r̄ = 0.9, which are denoted by subscripts p and d, respectively. From these plots, we can see large differences between the proximal chord and the distal chord.
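The temporal refinement of the digitized marker trajectories by spline interpolation can be illustrated with a few lines of SciPy. The trajectory below is synthetic; only the frame rate (1000 Hz), the wingbeat frequency (45.5 Hz) and the idea of resampling onto a much finer time grid come from the text, and this is not the authors' MATLAB tracking code.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Frames captured at 1000 Hz; one wingbeat (~45.5 Hz) spans roughly 22 frames.
fps, wingbeat_hz = 1000.0, 45.5
t_frames = np.arange(0, 1.0 / wingbeat_hz, 1.0 / fps)   # ~22 samples in one cycle

# Synthetic wingtip z-coordinate standing in for a digitized marker trajectory.
z_marker = 0.02 * np.sin(2 * np.pi * wingbeat_hz * t_frames)

# Fit a cubic spline and resample at the (much finer) simulation time step.
spline = CubicSpline(t_frames, z_marker)
dt_sim = 5e-6                                            # 5 microsecond CFD time step
t_fine = np.arange(t_frames[0], t_frames[-1], dt_sim)
z_fine = spline(t_fine)

print(f"{len(t_frames)} video frames refined to {len(t_fine)} simulation samples")
```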
For the proximal chord, the chord angle ψ_p and angle of attack α_p are both positive during the entire cycle. For the distal chord, these angles change sign and vary significantly. During the downstroke, ψ_d is negative, i.e. the leading edge tilts downward, but α_d is positive owing to fast translation of the chord. During upstroke, ψ_d is positive, i.e. the leading edge tilts upward, but α_d is negative, indicating that the pressure surface and the suction surface are swapped at that moment. Wing twist can be described by the difference between the two chord angles, ψ_d − ψ_p, which is plotted in figure 6. It is shown that the twist angle reaches its extreme value during mid-downstroke and mid-upstroke; however, it is more pronounced during upstroke (near 40°) than during downstroke (near 25°). These differences between the proximal section and the distal section lead to a highly non-uniform pressure distribution on the wing surface, as shown later.
Simulation set-up and verification
In the model, the Reynolds number, defined as Uc̄/ν, is set to Re = 3000, where c̄ is the average chord length and ν is the kinematic viscosity. The flow is assumed to be governed by the viscous incompressible Navier-Stokes equations, which are solved by an in-house code that adopts a second-order immersed-boundary finite-difference method. The code is able to handle large displacements of the moving boundaries [30]. A fixed, non-uniform, single-block Cartesian grid is employed to discretize the domain, and a mesh of several million points is used for the baseline simulation. A finer mesh is also used in the simulation to verify grid convergence. Both of these meshes have maximum resolution around the wing, which is 1/60 cm in all three directions for the baseline case and 1/70 cm for the refined case. The simulation was run in parallel using domain decomposition and the Message Passing Interface (MPI). The time step is Δt = 5 µs, which leads to approximately 4400 steps per wingbeat cycle. A multigrid method was employed to accelerate convergence of the Poisson solver. A total of 96 processor cores were used for the baseline case, and 128 cores for the refined case.
Table 2. Comparison of the force coefficients for the wings and body from the two different meshes, where C_Z and C_T are for the vertical force and thrust of one wing, respectively, and C_Z,b and C_D,b are for the vertical force and drag of the body, respectively.
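The time-stepping and grid numbers quoted above are easy to sanity-check. The snippet below reproduces the "approximately 4400 steps per wingbeat cycle" figure from the stated wingbeat frequency and time step, and converts the quoted grid spacings to millimetres; nothing else is assumed.

```python
# Sanity-check the simulation set-up numbers quoted in the text.
wingbeat_hz = 45.5          # wingbeat frequency, Hz
dt = 5e-6                   # time step, seconds
period = 1.0 / wingbeat_hz  # one wingbeat cycle, ~0.022 s

steps_per_cycle = period / dt
print(f"steps per wingbeat cycle: {steps_per_cycle:.0f}")   # ~4396, i.e. ~4400

# Finest grid spacing near the wing, quoted as 1/60 cm (baseline) and 1/70 cm (refined).
dx_baseline_cm, dx_refined_cm = 1.0 / 60.0, 1.0 / 70.0
print(f"baseline spacing: {dx_baseline_cm*10:.3f} mm, refined: {dx_refined_cm*10:.3f} mm")
```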
The vertical force F_Z and thrust F_T = −F_X generated by one wing are normalized by the fluid density, ρ, the flight speed, U, and the surface area of the wing, S, according to C_Z = F_Z/(½ρU²S) and C_T = F_T/(½ρU²S). The lift and drag on the bird body, F_Z,b and F_D,b, are normalized in the same manner. The aerodynamic power coefficient of one wing is defined as C_P = P/(½ρU³S), where the aerodynamic power P is obtained by integrating f · u over the wing surface, f is the stress on the wing surface, and u is the velocity of a point on the wing in the body-fixed coordinate system. The simulation results for two wingbeat cycles from both meshes are shown in figure 7 and table 2 for comparison. In figure 7 and also other figures from here on, the shaded area indicates downstroke, while the white area indicates upstroke. These results include the time-averaged lift and thrust of one wing and also the lift and drag of the bird body. From the table, we see that the maximum difference of all the forces is less than 5%. Thus, the baseline resolution is deemed satisfactory for the current study.
Aerodynamic forces
The force and power coefficients are shown in figure 8, which includes both instantaneous and phase-averaged data. The cycle-averaged data are listed in table 3 for the entire cycle and for downstroke/upstroke separately. Figure 8a shows that the weight support is mostly generated during downstroke, where C_Z is positive. Mid-downstroke corresponds to the maximum lift production. During supination and early upstroke, the wings are still able to generate some weight support. Around mid-upstroke, the vertical force becomes negative, even though its amplitude is not particularly high.
On the other hand, figure 8b shows that thrust is mostly positive during both downstroke and upstroke. Furthermore, thrust has a greater peak during upstroke than during downstroke. However, the data in table 3 show that downstroke on average produces more thrust than upstroke. Figure 8c shows that the power consumption during both half-strokes is significant. However, the power requirement is greater for downstroke, about twice as high as for upstroke. This feature is similar to hovering, where downstroke power is nearly 2.8 times the upstroke power according to Song et al. [5], who studied the ruby-throated hummingbird. Using the equation P = ½C_P ρU³(2S) for the aerodynamic power, P, we have P = 94.5 mW for the calliope hummingbird, and a body mass-specific power of 34 W kg⁻¹. Thus, the mass-specific power output of the hummingbird is within the range reported for larger bird species. For example, cockatiel power output ranges from 17 W kg⁻¹ at 5 m s⁻¹ to 47 W kg⁻¹ at 14 m s⁻¹, and dove power output ranges from 31 W kg⁻¹ at 7 m s⁻¹ to 54 W kg⁻¹ at 17 m s⁻¹ [31]. Compared with the hovering ruby-throated hummingbird at 55 W kg⁻¹ [5], forward flight in the calliope hummingbird requires less power, which is expected since hovering generally is more energy-demanding than forward flight. On the other hand, forward flight at 8.3 m s⁻¹ is not necessarily the minimum-power speed for the hummingbird, as the mechanical power output of birds can be described as a U-shaped function of the flight speed [31][32][33].
Table 3. Force production and power of upstroke, downstroke and entire cycle (i.e. average between upstroke and downstroke).
Using the present force coefficients and the equation F_total = ½(2C_Z + C_Z,b)ρU²S, we obtain the total vertical lift produced by the bird, which is around 94% of the bird's weight. The bird body itself generates about 22.2% of body weight. This result will be discussed later. The thrust generated by the two wings together is 152% of the body drag. The imbalance of the vertical and horizontal forces could have been caused by several factors: (i) the absence of camber in the wing model, (ii) error in digitization of the wing position, (iii) beak-feeder interaction as the bird was attempting to feed during the recording, (iv) interaction between the bird's body and the wake of the feeder, and (v) an underestimate of the drag coefficient for the body.
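The two algebraic relations used above, F_total = ½(2C_Z + C_Z,b)ρU²S for the total vertical force and P = ½C_P ρU³(2S) for the aerodynamic power, can be evaluated directly once the coefficients and wing area are known. The script below is only a sketch: the coefficient values, wing area and air density are placeholders (they are not given in this excerpt), and the only numbers taken from the text are the flight speed, the 94.5 mW power and the 34 W kg⁻¹ mass-specific power, from which an approximate body mass follows.

```python
# Evaluate the force and power relations quoted in the text.
# Wing area S, coefficients and air density are placeholder values;
# only U = 8.3 m/s, P = 94.5 mW and 34 W/kg are taken from the text.
rho = 1.2                              # kg m^-3, air density (assumed)
U = 8.3                                # m s^-1, flight speed (from the text)
S = 8.5e-4                             # m^2, single-wing area (placeholder)
C_Z, C_Zb, C_P = 0.30, 0.07, 0.25      # placeholder force/power coefficients

q = 0.5 * rho * U**2                   # dynamic pressure
F_total = q * (2 * C_Z + C_Zb) * S     # total vertical force, N
P = 0.5 * C_P * rho * U**3 * (2 * S)   # aerodynamic power of both wings, W
print(f"vertical force: {F_total*1e3:.1f} mN, power: {P*1e3:.1f} mW")

# The text's 94.5 mW at 34 W/kg implies a body mass of roughly 2.8 g:
print(f"implied body mass: {0.0945 / 34 * 1e3:.2f} g")
```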
Force production mechanism
Overall force production of the hummingbird can be explained from the wing kinematics as viewed from the global coordinate system, i.e. the coordinate system fixed with the ambient air. Figure 9 shows the proximal and distal chord moving in the global coordinate system with their trajectories traced out. During downstroke, the angle of attack is positive for both the proximal chord and the distal chord. Therefore, both wing sections generate weight support. Since the leading edge of the distal section tilts downward, the aerodynamic lift has a forward component that leads to thrust generation during downstroke.
During upstroke, both wing sections move forward in air, even though the stroke plane angle is less than 90° and the wings move backward with respect to the body. Nevertheless, positive thrust is generated during this half cycle by the distal section. As shown in figure 9, the angle of attack of the distal chord is negative at upstroke, and the overall force on the section points downward and forward. Figure 10 shows the pressure distribution within four selected vertical slices at mid-downstroke and mid-upstroke. It can be seen that the roles of the distal section and proximal section are different. For both downstroke and upstroke, the proximal wing has its pressure surface on the ventral side and suction surface on the dorsal side. Thus, its main role is vertical force generation. However, the distal wing flips its angle of attack between the two half cycles. Thus, positive (negative) pressure is distributed on the ventral (dorsal) side during downstroke, and the opposite is true during upstroke. This pressure differential leads to weight support during downstroke only, but thrust production during both downstroke and upstroke.
Vortex structures
Vortex structures, which are induced by the wing motion and dominate the wake, have been a focal point in the study of force production of flapping wings and fish fins. They can also be used to evaluate whether a bird adopts slow gait or fast gait [22]. As has been pointed out by previous researchers, at slow gait the trailing-edge vortices (TEVs) form rings after each downstroke, and the sequence would look like a series of smoke rings [34,35]; while at fast gait, the tip vortices (TVs) form undulating vortex tubes from the tip, and the TEVs form cylinders from the trailing edges, both being convected downstream [24,25,36].
In the current study of hummingbird flight, we used iso-surfaces of a scalar quantity to show the vortex structures. The scalar is defined as the maximum value of the imaginary part of the eigenvalues of the velocity gradient tensor ∇u, and it describes the strength of the local rotation of the fluid [37]. It is found that the TVs are continuously shed from the wingtip, and the TEVs shed in the shape of separate cylinders. Such ladder-type vortex structures were also observed in the measurement of forward flight of swifts [25], which are closely related to hummingbirds and, like hummingbirds, do not flex their wings during flight. As pointed out by the authors [25], the ladder-type wake is formed by continuous shedding of the spanwise vortices from the wing surface and is different from an earlier speculation that, for swifts and hummingbirds, a ladder-like wake would be generated by a distinct vortex in each of downstroke and upstroke [38]. Even though they both generate continuous ladder-like vortex structures, swifts do not forego weight support on upstroke as we report for hummingbirds.
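The vortex-identification scalar described here, the largest imaginary part of the eigenvalues of the velocity gradient tensor (often called the swirling strength), can be computed pointwise from any velocity field. The sketch below evaluates it for a single 3×3 velocity-gradient tensor; the solid-body-rotation example is purely illustrative and is not taken from the paper.

```python
import numpy as np

def swirling_strength(grad_u: np.ndarray) -> float:
    """Largest imaginary part of the eigenvalues of a 3x3 velocity-gradient tensor.

    A positive value indicates locally swirling (vortical) motion; iso-surfaces of
    this quantity are what the text uses to visualize the wake structures.
    """
    eigvals = np.linalg.eigvals(grad_u)
    return float(np.max(np.abs(eigvals.imag)))

# Velocity-gradient tensor of a simple solid-body rotation about z (illustrative):
omega = 50.0  # rad/s
grad_u = np.array([[0.0, -omega, 0.0],
                   [omega, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
print(swirling_strength(grad_u))   # equals omega for pure rotation
```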
Several snapshots of the flow field are shown in figure 11. These snapshots show roughly the shape of the TVs that follow the trajectory of the wingtips. In addition, vortex shedding from the trailing edge is evident. Formation of the leading-edge vortices (LEVs) during both downstroke and upstroke is visible, and the LEVs are stable for both downstroke and upstroke. The behaviour of the LEVs has to do with both the instantaneous angle of attack and the pitching rotation of the wing [39]. From figure 5d, the angle of attack of the distal section keeps a maximum value of around 25° for a significant period of time, which would have caused LEV shedding and stall if the wing simply translated without changing its pitch. However, the wings also perform rapid pitching around their axes, as seen from the variation of the chord angle plotted in figure 5b. That is, the chord angle magnitude decreases during downstroke after t/T > 0.2, and quickly increases during upstroke before mid-upstroke. Such rotational motion has been known to maintain the stability of LEVs and to enhance lift production of the wings [39].
Forces on the bird body
Lift and drag on the bird body are affected by the orientation of the bird during flight. In general, the inclination angle of the body decreases with increasing flight speed [12,23,27,40,41]. In the current study, the body angle of the hummingbird is χ_b = 12°, which is close to the angle of the rufous hummingbird at a speed of 8 m s⁻¹, where χ_b = 11° [12]. Figure 12 shows both the instantaneous and phase-averaged data of the forces on the body. Downstroke-, upstroke- and cycle-averaged data are listed in table 3. These results show that the lift on the body provides 22.2% of the weight support. Furthermore, lift on the body during downstroke is 1.76 times that during upstroke. In figure 12a, lift on the body oscillates significantly during a wingbeat cycle. On the other hand, drag on the body does not vary much in a cycle and is nearly equal on average between downstroke and upstroke, as reflected in figure 12b and table 3.
The high percentage of lift produced by the body and the oscillations of the body lift in a cycle may be attributed to aerodynamic interaction between the wings and the body, which is assumed to be stationary in the current study. To verify this possibility, we also simulated separately the same flow around the isolated body without the wings attached. The shape and orientation of the body remain the same in the test. Figure 13a shows the pressure distribution on the bird body from the isolated-body simulation, which can be compared with the result from the full-body simulation shown in figure 13b and c for mid-downstroke and mid-upstroke, respectively. For the isolated body, even though a high-pressure zone is established below the body and near the head, the flow passes around the body in the absence of the wings and merges behind and above the body, where the pressure is partially recovered. As a result, the overall lift on the body is small. When the wings are present, the flow from below is prevented from passing around the body by the wings. Furthermore, the wing-wing interaction mechanism, similar to that proposed by Lehmann et al. [42], apparently has played a role here. That is, when the two wings are separating from one another from pronation to mid-downstroke, they create a low-pressure zone above the bird body, as shown in figure 13b, thus leading to a net upward force. This mechanism also explains why during upstroke the low-pressure zone above the body, as plotted in figure 13c, is significantly smaller compared with downstroke. The present result shows that the bird body makes a significant contribution to the overall weight support. In comparison, previous experimental studies of insects and other birds indicated that body lift is only a small portion of the animal weight, as shown in table 4. However, we point out that in those previous studies the force was measured for the isolated animal body only, while in the current study the wings are present and are in constant motion. For the isolated hummingbird body, we also observed low lift production. As shown in table 4, lift of the isolated hummingbird body is only 8% of the weight and is comparable with previous data for insects and also birds (e.g. 15-20% during flexed-wing bounds in zebra finch). (Sources for table 4: [48]; bumblebee: [15,17]; rufous hummingbird: [12]; magpie, pigeon and zebra finch: [40,41]; hawk moth: [49]; barn swallow: [50]; bat: [51].)
Comparison of hummingbirds with other flying animals
The advance ratio J and stroke plane angle β are two primary factors that affect the force production of flapping wings during forward flight. These two variables differ largely among animal species. Figure 14 shows a few species on the β-J map, with the data directly collected or derived from various sources. It can be seen that hummingbirds largely fall within the range of the insects but also extend into the range of other birds. For all species, the stroke plane angle increases with the advance ratio, which is expected since at fast flight speed the animals not only reduce the body angle for drag reduction, which would naturally cause the stroke plane angle to increase, but may also tilt the stroke plane further to enhance thrust generation. For small insects like bumblebees and fruit flies, the advance ratio is usually less than one [14]. For such slow flight, lift production is predominant over thrust production. Since the back-sweeping velocity of the distal wing exceeds the forward flight speed at upstroke [15,20,33], the wingtip trajectory traced out in the global coordinate system is highly backward-skewed at upstroke, which is shown in figure 15 for a bumblebee at J = 0.6. In this case, downstroke is mainly for lift production, while upstroke is mainly for thrust production. If the flight speed is further reduced, with a more skewed trajectory, upstroke may even produce lift as well. Overall, this strategy of using upstroke is also known as the 'backward-flick' [18,23,31]. An exception is the fruit fly, which a recent study has shown uses a paddling mode during upstroke to produce drag-based thrust [20].
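For reference, the advance ratio used throughout this discussion can be estimated as the flight speed divided by the mean wingtip speed, which is commonly approximated as 2ΦfR with stroke amplitude Φ, wingbeat frequency f and wing length R. In the sketch below, U and f come from the text, whereas the wing length and stroke amplitude are assumed values chosen only to show how J ends up near one at this flight speed; they are not the bird's measured morphology.

```python
import math

def advance_ratio(U: float, f: float, R: float, phi_deg: float) -> float:
    """Advance ratio J = flight speed / mean wingtip speed, with the mean wingtip
    speed approximated as 2 * stroke amplitude (rad) * frequency * wing length."""
    phi = math.radians(phi_deg)
    return U / (2.0 * phi * f * R)

# U and f are from the text; R and phi_deg are assumed, illustrative values.
print(f"J = {advance_ratio(U=8.3, f=45.5, R=0.045, phi_deg=115):.2f}")
```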
For most birds, the advance ratio is significantly greater than one, and the stroke plane angle is close to 90°. Thus, the wingtip trajectory in the global coordinate system becomes much less backward-skewed and the wavelength-to-amplitude ratio becomes greater, as shown in figure 15 for a pigeon. In this case, upstroke is not suitable for thrust generation. Instead, the wings are either feathered with little force produced [24,25], or swept on upstroke with some lift production (e.g. pigeon) [26,27]. On the other hand, a powerful downstroke is used to produce both lift and thrust.
For the hummingbird in the current study, the advance ratio is between that of insects and other birds. The wingtip trajectory is moderately skewed as shown in figure 15. Therefore, with proper angle of attack, the wings can still produce thrust during upstroke. However, since the overall force points downward, some lift has to be sacrificed. From this figure, it can be seen that thrust can be produced when the wing speed at upstroke is comparable to or possibly even lower than the flight speed. This thrust mechanism is analogous to a sail that moves against wind and thus is termed a 'sail mode' in this work. On the other hand, downstroke of the hummingbird is similar to that of big birds, as shown in figure 15, where both lift and thrust are generated. It should be pointed out that at slow flight speeds, force production of the hummingbird still appears close to that of insects. As shown by Tobalske et al. [12], when J is below 0.7, the stroke plane angle of the hummingbird is small and the wingtip trajectory is also highly skewed like that of insects. Similarly, some insects can also perform fast flight at J > 1, e.g. hawk moth, as seen from figure 14. It would be interesting to see whether their force production mechanism is similar to that described here for the hummingbird.
There is significant similarity between the hummingbird and bats, both being capable of hovering and forward flight at J = 1 [52]. Recent flow visualization studies of bats have suggested that aerodynamic function of upstroke changes with forward flight speed [16]. At hovering and slow speeds, bat wings are inverted during upstroke to aid weight support; and at fast speeds, downstroke produces weight support and thrust, but upstroke may generate extra thrust at cost of negative lift. These features are similar to what we have reported for the hummingbird, which is interesting given that, unlike hummingbirds, bats flex their wings during upstroke.
It is also interesting to point out the similarities and differences between the hummingbird and the swift, as both keep their wings extended during the entire wingbeat cycle regardless of flight speed. This is highly unusual for birds, as most birds flex their wings during the upstroke [53]. Despite the wings being kept extended, the combination of spanwise twist and stroke plane angle reported here for the hummingbird is a mechanism that permits flight over a wide range of speeds. For the swift, it is believed that wing twist is an important factor leading to optimal efficiency of flapping flight [54]. On the other hand, the swift's flight style is dedicated to cruising flight: it is not adept at hovering, and some swift species are even unable to accelerate into flight from a standing start without first dropping in altitude to gain velocity [55]. Therefore, one conclusion could be that the swift's pattern of force development is optimized for more continuous weight support during cruising flight in a way that the closely related hummingbird does not achieve, while the hummingbird's evolutionary trajectory may have favoured the capacity for some thrust production at the expense of the more constant weight-support style of the swift [54]. This hypothesis merits testing in a broader phylogenetic context, as it is not appropriate to infer evolutionary trajectories from two-species comparisons [56].
Conclusion
A three-dimensional computer simulation has been performed to study the aerodynamics of a hummingbird in fast forward flight, whose wing motion was captured by filming the bird in a wind tunnel. The findings place hummingbirds in an interesting position relative to insects and large birds. At a speed of 8.3 m s−1, the advance ratio of the hummingbird is between those of typical insects and large birds, and the simulation results show that the hummingbird uses a different strategy for lift and thrust production. In particular, its powerful downstroke generates both weight support and thrust, just like other birds, but its upstroke further enhances thrust by setting the distal wings at a proper angle of attack with respect to the oncoming air, even though such thrust enhancement comes at the cost of some negative lift during the upstroke. These features are thus similar to those of bats flying at J = 1. Ultimately, caution is necessary in interpreting our results, as they emanate from the study of a single bird of one species. New studies in a phylogenetic context will be useful for understanding the evolutionary trajectories and selective pressures that drove the hummingbird flight style.
Ethics. All protocols associated with hummingbird care and experimentation were approved by the University of
Data accessibility. Data available from the Dryad Digital Repository: http://dx.doi.org/10.5061/dryad.8ch1b. | 6,223 | 2016-06-01T00:00:00.000 | [
"Physics"
] |
EXPLORING STUDENTS' PROFICIENCY THROUGH PERSONAL CHARACTERISTICS IN MATH LOGIC COURSES USING PEER TEACHING FLIPPED CLASSROOM
Students must understand the concept of mathematical logic before they can apply it to solve problems. Students can be divided into two major personality types, extroverts and introverts, and their personality type influences their study process. This study was carried out to examine the impact of students' personality types on their conceptual understanding abilities. Personality questionnaires and concept-understanding tests were employed as research instruments. This is identified as correlational research, conducted on students in math logic courses in the mathematics education department at Universitas PGRI Sumatera Barat. The research sample was 40 students, with 20 students in a peer teaching flipped classroom (PTFC) and 20 in a conventional class. The research was conducted in September–December 2022. According to the findings, when using the peer teaching flipped classroom model, students' conceptual understanding in math logic courses is strongly affected by personality category. The results show that, in the peer teaching flipped classroom, extrovert students have better conceptual understanding skills than introvert students. Moreover, students who use the PTFC model perform better than students who learn with the conventional model. Creating videos and organizing class discussions has a big impact on outgoing students: they can improve their skills while also sharing their knowledge and information with their classmates.
INTRODUCTION
According to Maulyda (2020), all students should be taught mathematics, since it can help pupils acquire logical, analytical, systematic, critical, and creative thinking skills. In line with this, Ozdamli and Asiksoy (2016) assert that, since mathematics supports anticipation and various forms of forecasting, it needs to be taught in a way that connects it to everyday life. Students are expected to comprehend a problem and apply mathematical understanding to resolve it. Mathematical logic is a required subject at several educational levels. Mathematical logic is the art of thinking (Fezile, Kocakoyun, Sahin, & Akdag, 2016; Kurtz, Tsimerman, & Steiner-Lavi, 2014). The way that mathematical logic is taught in schools also affects pupils' thought processes (Muttakhidah, 2015). Mathematical logic is a branch of cognitive science that is extremely relevant to daily life. To be able to apply mathematical logic to solve problems in everyday life, students must first understand the concept of mathematical logic.
Students' conceptual understanding is measured by their capacity to define, comprehend, restate, and apply particular ideas to effectively solve mathematical problems. Personality types can affect how well students learn mathematical concepts, and these personality variations influence how well students understand and evaluate situations (Desriyanto, Yunita, & Muslim, 2020). Eysenck (in Ramadoni & Mustofa, 2022) contended that personality affects IQ as well; this intelligence refers to the students' comprehension of the topic being taught or their aptitude for problem-solving. Studies have verified that extrovert and introvert personality types affect students' conceptual understanding (Moffett & Mill, 2014; Uzunboylu & Karagozlu, 2015).
Students with introverted personalities are characteristically calm and anxious, prefer to be by themselves, enjoy reading, tend to plan ahead, are considerate, and control their impulses. They are also less aggressive, trustworthy, pessimistic, and hold high ethical standards. Students with an extrovert personality tend to be very sociable, have many friends, need someone to talk to, dislike reading, seek out excitement, are changeable and unpredictable in their actions, and are typically impulsive. They also enjoy light humor, are cheerful and optimistic, enjoy laughing and having fun, are active and involved in many activities, and tend to be aggressive (Tanner & Scott, 2015; Ulwiyah & Djuhan, 2021).
Flipped classrooms that incorporate peer teaching have become popular in recent years as a method for helping students better understand the concepts being taught (Ramadoni & Mustofa, 2022; Sukma, Ramadoni, & Suryani, 2022). The peer teaching flipped classroom has two main learning phases: before class and in class. Before class, learners construct instructional videos, evaluate videos, and provide feedback. The phases that occur in class are numbering, submitting, thinking, responding, assessing, and reaching a conclusion. On this basis, the researcher applied the peer teaching flipped classroom model in math logic courses and evaluated it according to the students' personalities (Ramadoni & Chien, 2023).
The difference from previous studies is that, whereas earlier research subjects focused on calculus courses, this study focuses on math logic courses. In addition to other studies that only explore the impact of peer teaching flipped classrooms on conceptual understanding, this study also examines the personal characteristics of introvert and extrovert students in relation to their learning outcomes.
METHOD
This is identified as correlational research, conducted on students in math logic courses in the mathematics education department at Universitas PGRI Sumatera Barat. The research sample was 40 students, with 20 students in a peer teaching flipped classroom (PTFC) and 20 in a conventional class. To establish each student's personality, a specific personality questionnaire was administered. Meanwhile, students' performance was derived from assessments of their conceptual understanding of math logic courses. The test analysis was carried out using an analytic rubric with a scale of 0–4, and each student's total score was calculated. The student personality type questionnaire was administered as a closed questionnaire and was scored. The data analysis technique used was a factorial design, as sketched below.
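As a rough illustration of this analysis pipeline (and not the authors' actual code), the sketch below runs Levene's test for homogeneity of variances and a 2x2 factorial ANOVA; all column names and score values are made-up placeholders.

# Hypothetical sketch of the 2x2 factorial analysis (Levene's homogeneity
# test + two-way ANOVA); the data values below are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import levene

df = pd.DataFrame({
    "score": [84, 80, 88, 62, 60, 65, 54, 52, 55, 65, 63, 67],
    "model": ["PTFC"] * 6 + ["Conventional"] * 6,
    "personality": (["extrovert"] * 3 + ["introvert"] * 3) * 2,
})

# Levene's test across the four cells; p > .05 suggests homogeneous variances
cells = [g["score"].values for _, g in df.groupby(["model", "personality"])]
print(levene(*cells))

# Two-way factorial ANOVA: main effects of model and personality + interaction
fit = smf.ols("score ~ C(model) * C(personality)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))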
RESULTS AND DISCUSSIONS
Based on the data obtained from the questionnaire, 9 students fell into the introvert category and 11 students into the extrovert category in the PTFC class, while the conventional class contained 10 introvert and 10 extrovert students. Across the two sample classes as a whole, there were thus 21 extrovert students and 19 introvert students. A conceptual comprehension test was then administered, and hypothesis testing was performed using SPSS. This can clearly be seen in Table 1. Based on Table 2, it can be seen that the conceptual understanding of extrovert students in the PTFC class (M=84.22) is higher than that of extrovert students in the conventional class (M=53.60). On the other hand, the conceptual understanding of introvert students in the conventional class (M=64.90) was higher than that of introvert students in the PTFC class (M=62.27).
In further review, when extrovert and introvert students within the PTFC class are compared, extrovert students' conceptual understanding (M=84.22) is higher than introvert students' (M=62.27). On the other hand, when extrovert and introvert students within the conventional class are compared, introvert students' conceptual understanding (M=64.90) is higher than extrovert students' (M=53.60).
Before testing the hypothesis with a factorial design, a homogeneity test was carried out, as shown in Table 3. Based on Table 3, there was homogeneity of variances in the scores for the extrovert–introvert and PTFC–conventional groups as assessed by Levene's test for equality of variances (p = .527). It can therefore be concluded that the data have a homogeneous distribution, so the factorial design test could proceed. Based on Table 4, it is concluded that there is a significant effect of the learning model, with F(1,36) = 4.21, p = .048 < .05. The learning model used is thus a factor that significantly increases conceptual understanding. The data in the table reveal that the mean of the PTFC class is higher than that of the conventional class, and the learning model provides a significant effect. Thus, it can be concluded that the PTFC class is better than the conventional class at increasing students' conceptual understanding.
There are differences in results between the two learning models and personality types: extrovert students tend to score higher with the PTFC model, while introvert students tend to score higher with the conventional model. The data can be seen in Table 5. Based on Table 5, extrovert students like the PTFC model because they are given the opportunity to make videos, teach peers, discuss, and present the results of their discussions; introvert students, on the other hand, tend to be more passive, prefer to receive knowledge from the teacher alone, and prefer to study alone rather than in groups. Based on Table 6, there is no significant difference between extrovert and introvert students, with F(1,36) = .609, p = .44 > .05. Although the average for extrovert students (M=68.91) is higher than for introvert students (M=63.59), the difference is not significant. The data can be seen in Table 7. Based on Table 7, a comparison of the two learning models used in the study was carried out, as shown in Table 8. The average for the PTFC model (M=73.25) is higher than for the conventional model (M=59.25), and the difference is significant. The data can be seen in Table 9. Based on Table 9, using the peer teaching flipped classroom model in math logic courses, it was found that extrovert students' understanding of the concept was significantly better than introvert students', with a p-value of .019. Extrovert students have an average score of 80.22, while introvert students score 58.45. Furthermore, the standard deviation for introvert students is greater than that for extrovert students, indicating that the range of scores among introvert students is wider. This is because extrovert students enjoy the video-creation stage of the peer teaching flipped classroom process more than introvert students do, as shown in Figure 1.
Figure 1. Extrovert students' videos
Based on Figure 1, extrovert students are better at explaining than introverted students; they teach the material in much the same way that teachers teach their students.
Meanwhile, introverted students are more inclined to simply read the PowerPoint slides they generate. Students with extroverted personalities are more open, sociable, and able to communicate (Prayitno & Ayu, 2018). In Figure 2, readers can see an example of an introverted student's video.
Figure 2. Introvert students' videos
Figure 2 demonstrates that students with introverted personality types merely read out their own PowerPoint slides. However, a closer examination reveals that introverted students are much more capable of utilizing technology than extrovert students.
Peer teaching flipped classrooms can improve students' conceptual understanding abilities and learning outcomes (Ramadoni & Chien, 2023). There are differences in the conceptual understanding and learning outcomes of extrovert and introvert students. Another finding gleaned from field notes was that extrovert students were better at discussing during group class discussions. In-class student interactions differ depending on student personality type (Ozdamli & Asiksoy, 2016; Widya Zulfa Ulwiyah & Muhammad Widda Djuhan, 2021). Introvert personalities are quiet and reserved and only want to be listeners, so they get less practice than extrovert personalities (Kurtz et al., 2014; Uzunboylu & Karagozlu, 2015). Extrovert students work well in groups, whereas introvert students prefer to work independently and passively. Extroverts are also more likely than introverts to present the results of their discussions in front of the class, whereas introvert students tend to be quiet, reserved, and hesitant to explain in front of the class. The introvert personality is self-conscious and finds it challenging to adapt to new situations (Moffett & Mill, 2014; Ramadoni & Mustofa, 2022). Students with introverted personalities generally struggle with verbal communication, specifically expressing what is in their hearts, as opposed to extroverted personalities (Meika & Sujana, 2017; Zubaidah, 2018). Extrovert personality traits involve being more open and sociable (Azizah & Maulana, 2018). These two major factors have a significant impact on students' conceptual understanding in math logic courses using the peer teaching flipped classroom model.
CONCLUSIONS AND SUGGESTIONS
In accordance with the research findings, personality category influences students' conceptual understanding in math logic courses when using the peer teaching flipped classroom model. In the peer teaching flipped classroom, the results indicate that extrovert students have better conceptual understanding skills than introvert students. Moreover, students who use the PTFC model perform better than students who learn using the conventional model. Creating videos and holding class discussions have a big impact on students with extroverted personalities: they can enhance their abilities while also sharing knowledge and information with their classmates. A suggestion for the future is for teachers and researchers to consider learning models and student personality categories in developing students' understanding of concepts. | 2,947 | 2023-06-25T00:00:00.000 | [
"Mathematics",
"Education",
"Psychology"
] |
Controlling spin supercurrents via nonequilibrium spin injection
We propose a mechanism whereby spin supercurrents can be manipulated in superconductor/ferromagnet proximity systems via nonequilibrium spin injection. We find that if a spin supercurrent exists in equilibrium, a nonequilibrium spin accumulation will exert a torque on the spins transported by this current. This interaction causes a new spin supercurrent contribution to manifest out of equilibrium, which is proportional to and polarized perpendicularly to both the injected spins and the equilibrium spin current. This is interesting for several reasons: as a fundamental physical effect; due to possible applications as a way to control spin supercurrents; and timeliness in light of recent experiments on spin injection in proximitized superconductors.
There are four relevant species of quasiparticles in the systems that we will consider: namely electrons and holes, which each have two distinct spin projections. These have the densities $n_{e\sigma} = \langle \Psi^\dagger_\sigma \Psi_\sigma \rangle$ and $n_{h\sigma} = \langle \Psi_\sigma \Psi^\dagger_\sigma \rangle$, where $\Psi^\dagger_\sigma$ and $\Psi_\sigma$ are standard creation and annihilation operators. For comparison, the propagators are defined as [S1-S3]: where the subscripts σ and σ′ denote possible spin projections. Combining these definitions, we see that the quasiparticle densities are directly related to the equal-coordinate propagators: These expressions can be used to calculate the spin-resolved density of electrons and holes, respectively. Note that holes carry both opposite charge and opposite spin compared to electrons [S26]. The charge and spin accumulations are then found by multiplying each quasiparticle density with the respective charge or spin, and summing up the contributions, e.g. $\rho_e = \frac{e}{2}\left[ n_{e\uparrow} + n_{e\downarrow} - n_{h\uparrow} - n_{h\downarrow} \right]$ (S10), where we use the convention that e is the electron charge (e < 0). The prefactors 1/2 are required to prevent double-counting, and can be explained as follows. If we add one physical electron to the system, then the charge of the system increases by e. However, the number of electrons increases by one, and the number of holes decreases by one, meaning that the difference between electrons and holes increases by two. Thus, when the charge density $\rho_e$ is described in terms of both electrons and holes, we need an extra factor 1/2 to get the right physical charge. The same logic applies to the spin density $\rho_z$. We can rewrite the results in terms of the propagators above, and recognize the remaining sum as a trace over spins: There is nothing special about the spin-z axis, so it is straightforward to generalize this result to arbitrary spin projections: where σ = (σ₁, σ₂, σ₃) is the Pauli vector. From the definition of the Keldysh propagator above, we can also use the identity $\langle AB\rangle^* = \langle B^\dagger A^\dagger\rangle$ to show that $G^{K*}_{\sigma\sigma} = -G^{K}_{\sigma\sigma}$. This means that $G^{K}_{\sigma\sigma}$ is imaginary, which makes $\rho_e, \rho_s \sim iG^K$ manifestly real. For later convenience, we will therefore write this out explicitly:
B. Quasiparticle currents
Now that we know the charge and spin accumulations, the next step is to find the corresponding currents. To derive these, we go back to the quasiparticle densities defined in Eq. (S1). To rigorously derive expressions for the charge and spin currents, we will use the definitions above to look for quasiparticle continuity equations of the form $\partial_t n_{\tau\sigma} + \nabla\cdot\mathbf{j}_{\tau\sigma} = q_{\tau\sigma}$, where $\mathbf{j}_{\tau\sigma}$ is the particle- and spin-resolved current density we are interested in, while $q_{\tau\sigma}$ represents possible source terms. We start by differentiating the densities with respect to time: We can rewrite the above using the Heisenberg equation of motion for the field operators. Note that any contributions to the continuity equation arising from non-derivative terms in the Hamiltonian (such as a superconducting gap or an exchange field) can be incorporated into the source term q. Thus, for the purposes of deriving current equations, it is sufficient to consider only derivative terms. Whether or not the currents we derive are conserved currents can be checked at the end of the derivation, by substituting the Usadel equation into the final quasiclassical current equations [S4, S5]. If we for simplicity disregard gauge fields for now, the equations reduce to: We then substitute these back into the equations for $\partial_t n_{\tau\sigma}$: Thanks to cancellation of cross-terms, these can be factorized: Comparing this to Eq. (S20), we conclude that: As a mathematical trick, let us now use different coordinates $\Psi_\sigma = \Psi_\sigma(\mathbf r, t)$ and $\Psi^\dagger_\sigma = \Psi^\dagger_\sigma(\mathbf r', t')$ for the field operators, where we let $\mathbf r' \to \mathbf r$ and $t' \to t$ in the end. In this case, the differential operators acting on the field operators can be factored out of the expectation value without ambiguity: We are now ready to define the charge and spin current densities. In correspondence with Eq. (S10), we define these as: Substituting Eq. (S31) into Eqs. (S33) and (S34), comparing the results to the Keldysh propagator in Eq. (S7), and recognizing the results as traces in spin space, we conclude: Generalizing to all spin projections, we obtain the final results: We wish to point out that these currents are manifestly real. From the definition of the Keldysh propagator, we see that: But which set of coordinates we chose to call $(\mathbf r, t)$ and $(\mathbf r', t')$ was arbitrary, and should not affect the physical results, since we are considering the limit $\mathbf r', t' \to \mathbf r, t$ anyway. This means that we can interchange the coordinates $(\mathbf r, t)$ and $(\mathbf r', t')$ on the right-hand side of the equation, as long as we do this consistently for every factor simultaneously. The coordinate interchange leads to a sign flip in $(\nabla - \nabla')$ which cancels the minus sign inside the brackets, and makes the two sides of the equation equal. This lets us conclude that $(\nabla - \nabla')G^{K*} = (\nabla - \nabla')G^{K}$, which in turn implies that the charge and spin currents are real. For later convenience, we can therefore rewrite the above as
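For orientation, the standard single-particle form that this kind of derivation yields (with gauge fields and non-derivative Hamiltonian terms absorbed into the source term, and ħ = 1) is the textbook expression below; this is a generic illustration, not the paper's numbered equations.

% Generic continuity equation and current density for one quasiparticle
% species (hbar = 1); an illustration, not the paper's Eqs. (S20)/(S31).
\begin{align}
  \partial_t n_{\tau\sigma} + \nabla\cdot\mathbf{j}_{\tau\sigma} &= q_{\tau\sigma}, \\
  \mathbf{j}_{\tau\sigma} &= \frac{1}{2mi}
     \left\langle \Psi^{\dagger}_{\sigma}\nabla\Psi_{\sigma}
     - \left(\nabla\Psi^{\dagger}_{\sigma}\right)\Psi_{\sigma} \right\rangle
   = \frac{1}{m}\,\mathrm{Im}\left\langle \Psi^{\dagger}_{\sigma}\nabla\Psi_{\sigma}\right\rangle .
\end{align}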
C. Quasiclassical and diffusive limits
To derive equations we can use together with the Usadel equation, we now follow the standard prescription for taking the quasiclassical and diffusive limits [S1–S3, S6]. The net changes to the Keldysh propagator and its derivative are then: where v = p/m is interpreted as the quasiparticle velocity, $N_0$ is the density of states at the Fermi level, and ⟨···⟩_F refers to the average over the Fermi surface. From the derivation of the Usadel equation, we also know that in the diffusive limit the Fermi-surface averages can be written: where ∇̃ is a gauge-covariant derivative including the electromagnetic vector potential and spin-orbit interactions [S6, S7, S20], ǧ_s is the isotropic propagator, and D is the diffusion constant. We drop the subscripts on the isotropic propagators ǧ_s, and substitute the above into the accumulations and currents: where we have reintroduced the matrix current Ǐ = D(ǧ∇̃ǧ). Note that these equations only depend on the "electronic" part of the propagators in Nambu space, which in reality contains information about both the electrons and holes in the system.
All these results can be written as integrals over only positive energies using the symmetries of the Nambu-space matrices. In other words, the negative-energy contributions can be recast in terms of the lower-right blocks; and since we take the real part of the results, the complex conjugations are irrelevant. The remaining structure can be recognized as a trace over Nambu space, yielding the final quasiclassical transport equations. Note that σ̂Î^K should be interpreted as an outer product between two vectors, which results in a rank-2 tensor. This is because a general description of spin transport requires both a direction of transport ∼Î^K and a spin orientation ∼σ̂.
D. Higher-order gauge contributions
The equations of motion for the field operators also include first-order derivative terms in systems with electromagnetic [S6, S9] or spin-orbit [S7, S10, S11, S20] gauge fields. If we ignore all other terms in the Hamiltonian, these derivative terms give the following Heisenberg equations: where we implicitly sum over the spin index σ′. Going through the same kind of derivation as without the gauge fields, we find that we basically just have to make the following replacement in the results right before taking the quasiclassical limit: Note that the gauge fields also affect charge and spin transport in a different way, since they also appear as covariant derivatives.
A. Supercurrents vs. resistive currents
As shown in previous sections, the total spin current $\mathbf J_s$ can in the quasiclassical limit be calculated as an energy integral, where the spectral spin current is $j_s = \mathrm{Re}\,\mathrm{Tr}[\hat\tau_3\hat{\boldsymbol\sigma}\hat I^K]/8$ and the matrix current is Ǐ = Dǧ∇̃ǧ. If we substitute the parametrization $\hat g^K = \hat g^R \hat h - \hat h \hat g^A$ into the definition of the matrix current, we find that its Keldysh component can be expanded as: The terms on the first line may be finite even for a homogeneous distribution function ĥ, and produce spin currents even in equilibrium. Furthermore, they are sensitive to the phase-winding of the superconducting condensate via $\hat g^R\nabla\hat g^R$ and $\hat g^A\nabla\hat g^A$. We therefore identify this as a supercurrent contribution. The terms on the second line, however, are proportional to ∇ĥ. This current contribution both requires an inhomogeneous distribution function and is insensitive to the phase-winding of the superconducting condensate, and therefore has to be a resistive current.
In this work, we are primarily interested in generating a spin supercurrent from a nonequilibrium spin accumulation. We therefore limit our attention to systems with a position-independent distribution function ĥ that has an excited spin mode. Since we assume ∇ĥ = 0, the second line of Eq. (S59) disappears, and only the supercurrent contribution remains: As for the distribution function, it can be parametrized as: where $\mathbf h_s$ points along the net quantization axis of the accumulated spins, and the magnitudes of the modes above are given in Eq. (S63). Note that the energy mode $h_0$ and spin mode $h_s$ are odd and even functions of energy, respectively. We have parametrized the spin mode in terms of a spin voltage $V_s = (V_\uparrow - V_\downarrow)/2$, where $V_\sigma$ are the effective potentials experienced by spin-σ quasiparticles [S12–S14]. The spin mode $h_s$ is related to the spin accumulation in Eq. (S52) by an energy integral, where N(ε) is the density of states [S13].
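A common way to realize such a spin-voltage parametrization, consistent with the stated symmetries ($h_0$ odd and $h_s$ even in energy), is sketched below; the exact prefactors and conventions of the paper's Eqs. (S62)–(S63) may differ.

% A standard spin-voltage form of the distribution modes, consistent with
% h0 odd and hs even in energy (conventions may differ from (S62)-(S63)).
\begin{align}
  h_0(\epsilon) &= \tfrac{1}{2}\left[\tanh\frac{\epsilon + eV_s}{2k_B T}
                 + \tanh\frac{\epsilon - eV_s}{2k_B T}\right], \\
  h_s(\epsilon) &= \tfrac{1}{2}\left[\tanh\frac{\epsilon + eV_s}{2k_B T}
                 - \tanh\frac{\epsilon - eV_s}{2k_B T}\right].
\end{align}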
B. Expansion in Pauli matrices
Once we substitute Eq. (S61) into Eq. (S60), there are a few subtleties to be careful about. To handle these, without yet introducing all the details of the singlet/triplet decomposition, we first expand $\hat g^R\nabla\hat g^R$ directly in terms of Pauli matrices: The first four terms parametrize a general block-diagonal matrix, while the last term represents the off-block-diagonal parts. Since the distribution ĥ can always be chosen to be block-diagonal, this last term does not contribute to the trace in Eq. (S60). The other coefficients are found by taking appropriate traces: We parametrize $\hat g^{R\dagger}\nabla\hat g^{R\dagger}$ using a second set of (underlined) coefficients α, β, γ, δ that are defined in the same manner as above.
We will now argue that the parameter δ is identically zero. By differentiating the normalization condition $(\hat g^R)^2 = 1$, one can show that the retarded propagator anticommutes with its gradient, $\{\hat g^R, \nabla\hat g^R\} = 0$. This identity can be rewritten: Let us now trace both sides of the equation, and use the cyclic rule $\mathrm{Tr}[\hat A\hat B] = \mathrm{Tr}[\hat B\hat A]$ on the right-hand side: Since $\hat\sigma_0\hat\tau_0$ is an identity matrix, we see from Eq. (S65) that: In other words, δ = 0 is always satisfied, as any other conclusion would violate the normalization condition $(\hat g^R)^2 = 1$. Next, to clarify another subtlety, we need to derive some trace identities. By explicitly writing out the matrix products and using σ̂ = diag(σ, σ*), one can show that: Products of spin matrices in general satisfy $(\mathbf a\cdot\boldsymbol\sigma)(\mathbf b\cdot\boldsymbol\sigma) = (\mathbf a\cdot\mathbf b) + i(\mathbf a\times\mathbf b)\cdot\boldsymbol\sigma$; multiplying by σ and taking the trace, we find the associated trace rule $\mathrm{Tr}[(\mathbf a\cdot\boldsymbol\sigma)(\mathbf b\cdot\boldsymbol\sigma)\boldsymbol\sigma] = +2i(\mathbf a\times\mathbf b)$. However, if we complex-conjugate before taking the trace, we uncover another identity, $\mathrm{Tr}[(\mathbf a\cdot\boldsymbol\sigma^*)(\mathbf b\cdot\boldsymbol\sigma^*)\boldsymbol\sigma^*] = -2i(\mathbf a\times\mathbf b)$. A geometric motivation for the sign difference is that if the basis σ = (σ₁, σ₂, σ₃) defines a right-handed coordinate system, then σ* = (σ₁, −σ₂, σ₃) has to define a left-handed one, and this inverts the right-hand rule that cross-products usually satisfy.
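The two sign conventions are easy to verify numerically; the following quick check (ours, not the paper's) confirms both trace identities for random real vectors a and b.

# Quick numerical check of the two Pauli trace identities quoted above:
# Tr[(a.s)(b.s)s_k] = +2i(a x b)_k, and the conjugated basis flips the sign.
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

adots = sum(a[i] * s[i] for i in range(3))   # a . sigma
bdots = sum(b[i] * s[i] for i in range(3))   # b . sigma

lhs  = np.array([np.trace(adots @ bdots @ s[k]) for k in range(3)])
lhsc = np.array([np.trace(adots.conj() @ bdots.conj() @ s[k].conj())
                 for k in range(3)])

print(np.allclose(lhs,  +2j * np.cross(a, b)))   # True
print(np.allclose(lhsc, -2j * np.cross(a, b)))   # True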
With the aid of the results above, we see that: This is the subtle trap alluded to above: due to the way we define σ̂ = diag(σ, σ*), the generalization of the Pauli cross-product identity to matrices in Nambu space requires an extra factor $\hat\tau_3$ in the trace to produce a nonzero result. We now substitute Eqs. (S61) and (S64) into Eq. (S60). With the identities above, we see that the only contributions are: By multiplying Eq. (S66) by appropriate Pauli matrices, taking traces, and using $\mathrm{Tr}[\hat A^\dagger] = \mathrm{Tr}[\hat A]^*$, one can show that the underlined coefficient satisfies α̲ = −α*. This makes α − α̲ real and α + α̲ imaginary, so both contributions are compatible with the normalization condition. We could also use this information to eliminate the underlined coefficients, but this would make it harder to see how mixed singlet/triplet terms cancel later in the derivation. Interestingly, all spin supercurrent contributions depend on the same coefficient α, and do not couple to the other traces of $\hat g^R\nabla\hat g^R$. The physically observable spin supercurrent is found by integrating the spectral current over all positive and negative energies. We also know that $h_0$ and $h_s$ are odd and even functions of energy, respectively. We can therefore let α(+ε) → ∓α(−ε) = ∓α*(+ε) in the spectral current without changing the total spin supercurrent: This form of the result will be useful later, as it makes it clearer which parts of the non-underlined and underlined coefficients cancel for symmetry reasons. Conveniently, this also makes the $h_0$ and $h_s$ contributions take very similar forms.
We now proceed with an expansion of the propagators in terms of physically meaningful components. Following the same kind of parametrization as Ref. S15, we can write: Here, $f_s$ represents the spin-singlet pair amplitude, while $\mathbf f_t$ is the spin-triplet amplitude. On the other hand, we can interpret $g_s$ and $\mathbf g_t$ as the spin-independent and spin-dependent parts of the density of states, respectively [S15]. In our notation, this means that the density of states for particles with spin-projection $\mathbf p$ is given by $N = N_0\,\mathrm{Re}[g_s + \mathbf g_t\cdot\mathbf p]$. In equilibrium, the spin accumulation is found by integrating $h_0\mathbf g_t$ over energies, giving another interpretation of $\mathbf g_t$. Outside of equilibrium, we of course get another kind of spin accumulation due to a nonzero spin mode $h_s$, which is what we are interested in here. Using Eq. (S75) and the identity $\sigma_2\boldsymbol\sigma\sigma_2 = -\boldsymbol\sigma^*$, we find that the diagonal components of $\hat g^R\nabla\hat g^R$ in Nambu space are: where the subscripts [···]_{i,j} are matrix indices in Nambu space.
Using the identity $(\mathbf a\cdot\boldsymbol\sigma)(\mathbf b\cdot\boldsymbol\sigma) = (\mathbf a\cdot\mathbf b) + i(\mathbf a\times\mathbf b)\cdot\boldsymbol\sigma$ and its conjugate $(\mathbf a\cdot\boldsymbol\sigma^*)(\mathbf b\cdot\boldsymbol\sigma^*) = (\mathbf a\cdot\mathbf b) - i(\mathbf a\times\mathbf b)\cdot\boldsymbol\sigma^*$, we can sort the above into spin-independent and spin-dependent terms. Since we define σ̂ = diag(σ, σ*), Eq. (S65) tells us that the coefficient α that we require can be expressed as: Together with the expansion of $\hat g^R\nabla\hat g^R$ above, and standard trace identities for Pauli matrices, we then obtain: Let us now calculate the corresponding coefficient α̲ from the matrix $\hat g^{R\dagger}\nabla\hat g^{R\dagger}$. Taking the complex-transpose of Eq. (S75), we see how $\hat g^{R\dagger}$ changes compared to $\hat g^R$ (S81): Other than these transformations, the parametrization is clearly identical, and the derivation of α̲ becomes identical as well. If in the end results we also choose to let ε → −ε, corresponding to a combination of complex-conjugation and tilde-conjugation, the net transformation rules become: We can therefore simply perform the changes above on Eq. (S79) to get the corresponding equations for α̲*: We are now ready to calculate the spectral spin supercurrent in terms of the singlet/triplet decomposition. Adding up Eqs. (S79) and (S83), we see that all mixed singlet/triplet terms drop out, and we are left with only the cross-product terms: Substituting this into Eq. (S74), we immediately see that: We have shown earlier in the derivation that both contributions are compatible with the normalization condition. The fact that they did not cancel during the last simplification above shows that both contributions are compatible with the energy symmetries of $h_0$ and $h_s$. Finally, we know that the contents of the brackets, $\mathbf g_t\times\nabla\mathbf g_t - \mathbf f_t\times\nabla\mathbf f_t$, can be nonzero, since this is the source of equilibrium spin currents.
The final result shows that if one has a spin supercurrent $\mathbf j_s^{\mathrm{eq}}$ in equilibrium, then a nonequilibrium spin mode $\mathbf h_s$ gives rise to a new component $\mathbf j_s^{\mathrm{neq}} \sim \mathbf j_s^{\mathrm{eq}} \times \mathbf h_s$. This can intuitively be interpreted as the injected spins $\mathbf h_s$ exerting a kind of torque on the spins transported by the equilibrium current $\mathbf j_s^{\mathrm{eq}}$, thus producing a component $\mathbf j_s^{\mathrm{neq}}$ that is spin-polarized in a direction perpendicular to both. This analogy is not perfect: it leaves out the Im and Re operations in Eq. (S86), and the fact that the cross-product relation is between spectral currents and accumulations. However, the intuition provided by this picture is sufficient to explain the results in the main manuscript.
III. NUMERICAL MODEL
As summarized in the main article, we perform the numerical calculations using the Usadel formalism [S1–S3, S13, S16]. This is formulated in terms of the 8×8 quasiclassical propagator ǧ, which satisfies the identities $\hat g^K = \hat g^R\hat h - \hat h\hat g^A$ and $\hat g^A = \hat\tau_3\hat g^{R\dagger}\hat\tau_3$. Together, these identities show that we have to determine two 4×4 matrices to know ǧ: the retarded propagator $\hat g^R$, which determines the spectral properties of a material, and the distribution function ĥ, which describes the occupation numbers of the states in the material. Both of these matrices can be functions of position r and quasiparticle energy ε.
In general, the distribution functionĥ follows from solving a kinetic equation that can be derived directly from the full 8 × 8 Usadel equation. We present a complete derivation of a kinetic equation and relevant boundary conditions in Ref. S12, which is valid for quite general superconducting structures. The result is formulated as an explicit and linear differential equation, which can be easily and efficiently implemented in a numerical Usadel solver. Related derivations can be found in Refs. [S2, S3, S13, S17-S19]. However, instead of solving the kinetic equation explicitly, we have made two simplifying assumptions about the distribution functionĥ. The first is that it is roughly constant throughout the superconductor, which is reasonable as long as the superconductor is not too thick compared to its spin relaxation length. The second is that the distribution function can be modelled using a spin voltage, which we justify in the Discussion in the main article. These assumptions imply that we can treatĥ as a constant parameter, which simplifies our model system from a 2D to 1D geometry, thus making it much more feasible to attack numerically.
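As a toy analogue of the alternate-until-convergence strategy used by this kind of solver (solve for the propagator, update the gap, repeat; described in detail below), the snippet iterates the zero-temperature BCS gap equation to self-consistency. It is a schematic illustration only, far simpler than the paper's Riccati-parametrized Usadel solver.

# Toy illustration of gap self-consistency (not the paper's solver): iterate
# Delta = lam * integral_0^wc d(xi) Delta/sqrt(xi^2 + Delta^2) to convergence.
import numpy as np

def bcs_gap(lam=0.3, omega_c=1.0, tol=1e-10, max_iter=500):
    xi = np.linspace(0.0, omega_c, 20001)
    delta = 0.1 * omega_c                     # initial guess for the gap
    for _ in range(max_iter):
        new = lam * np.trapz(delta / np.sqrt(xi**2 + delta**2), xi)
        if abs(new - delta) < tol:            # converged to a fixed point
            return new
        delta = new
    return delta

# The fixed point matches the analytic answer omega_c / sinh(1/lam).
print(bcs_gap(), 1.0 / np.sinh(1.0 / 0.3))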
In the numerical simulations, we also approximate the effect of inelastic scattering using a Dynes parameter: ε → ε + 0.01iΔ₀. The ferromagnetic insulators in our model system were treated as spin-active interfaces, i.e. boundary conditions that account for spin-dependent phase shifts for quasiparticles in the superconductor that are reflected from the insulating interface [S21–S24]. This boundary condition takes the form: where the plus and minus signs describe boundary conditions at the left and right interfaces of the superconductor, respectively. Here, L is the length of the superconductor along the x-axis, G is the bulk conductance of the superconductor in its non-superconducting state, $G_\varphi$ describes the effect of spin-dependent phase shifts when quasiparticles are reflected from a magnetic interface, and m is a unit vector that describes the magnetization direction at the same interface. In addition to the equations above, we need to solve a self-consistency equation for the order parameter Δ(x) [S20]. Here, $f^K_s$ is the singlet component of the anomalous part of the Keldysh propagator $\hat g^K$. In practice, it can be evaluated at all energies from the calculated $\hat g^K$ at positive energies using identities such as $f^K_s(x, -\epsilon) = \frac{1}{4}\,\mathrm{Tr}\!\left[(+i\hat\sigma_2)(\hat\tau_1 + i\hat\tau_2)\,\hat g^K(x, +\epsilon)^*\right]$. (S92)
These follow from parametrizing $\hat g^K$ in a similar manner as Eq. (S75), then invoking the definition of tilde-conjugation, $\tilde f_s(x, \epsilon) \equiv f^*_s(x, -\epsilon)$, and finally using standard trace identities. The actual numerical implementation was done using the Riccati parametrization of the retarded propagator [S20, S25], in which γ is a 2×2 complex matrix and $N = (1 - \gamma\tilde\gamma)^{-1}$. This matrix structure is defined in a way that automatically satisfies the normalization condition $(\hat g^R)^2 = 1$, and accounts for the particle-hole symmetries of the propagator, thus reducing the number of independent variables one has to solve for numerically. This parametrization also has the additional benefit that the Riccati parameter γ is single-valued and has a bounded norm, resulting in a more stable numerical solution procedure. How to reformulate the equations for $\hat g^R$ in terms of γ is described in e.g. Ref. S20. In practice, we alternate between calculating γ from Eqs. (S88) and (S89), and updating Δ(x) using Eqs. (S90)–(S92), until they converge to a satisfactory degree. The simulation code itself is publicly available from github.com/jabirali/ . | 5,194 | 2018-10-19T00:00:00.000 | [
"Physics"
] |
Numerical computation of the main gas-dynamic bearing static characteristics for the ball gyroscope
Questions of numerical computation and analysis of the engineering calculation of the main static characteristics of the gas-dynamic bearing for an experimental ball gyroscope model are considered. The construction of the developed gyroscopic device is presented. These investigations were carried out to evaluate the possibility of realizing a sensitive element based on the developed device. It is supposed that this sensitive element can be used in information-measuring systems of various applications. The main calculated data, such as the load-bearing capacity and other characteristics of the hemispherical gas-dynamic bearing, are presented. According to the attained results, preliminary conclusions are stated. The basic problems for further research are determined.
Introduction
One of the most important advantages of gas bearings is the ability to use the ambient air directly as the lubricant. Bearings in which gas is drawn into the lubricating gap by the motion of the lubricated surfaces, with no additional sources of pressurized gas, are known as self-acting or gas-dynamic. This type of bearing was taken as the basis for the designed device [1].
Gas lubrication has special features which are typical of its nature. In comparison with liquids, gas has much lower viscosity. Ambient temperature has only a small effect on gas properties, and ambient pressure affects gases even less. Such viscous stability, together with the low viscosity itself, opens a wide variety of gas bearing applications in devices operating at high speed over a wide range of operating temperatures and pressures [1]. Therefore, gas bearing application is determined by these unique advantages in cases where the traditional types of bearings lose their properties [2].
At this stage, the main project purpose is to define the mathematical model for calculating the main static gas-dynamic bearing characteristics. This is necessary to evaluate the possibility of using this kind of bearing as a suspension for the gyroscope ball rotor. It is planned that the developed gyroscopic device can be used as a position sensor in information-measuring systems working under severe climatic and mechanical conditions. Due to this fact, such a sensor has to meet requirements of efficiency, reliability, and accuracy [1,3].
Device construction
Figure 1 shows the construction of the developed gyroscopic device with gas-dynamic bearings. The main elements of the experimental model are two hemispherical bowls 7, which serve as the bearings, and rotor 1, which is placed in the cavity of these bowls. The rotor is a standard bearing ball with an additional axial bore. The moving parts of the dual-axis angle transducer 4 are pressed into the pole areas of the axial bore, and the transducer mating part 3 is placed in one of the bowls. The diameter of the sphere forming the bowls is 5–10 μm larger than the actual diameter of the ball rotor, which is 28.587 mm. This difference provides the initial gap necessary for the operational regime.
The ball rotor is rotated by three-phase asynchronous motor 2, with a supply voltage of 40 V at a frequency of 500 or 1000 Hz.
Position-control elements of the ball rotor are not shown in the general view.
Modeling procedure
Experimental and, especially, theoretical research plays a crucial role in assessing gas-dynamic bearing characteristics. At the design stage, mathematical modeling comes to the fore in place of experiment for units operating on gas lubricant, owing to the difficulty and expense of high-precision manufacturing of the suspension parts. The principal characteristics for estimating the efficiency and reliability of units with gas lubrication are the load-bearing capacity, the bearing stiffness, and the viscous and dry friction moments (the last being important only during the initial start-up) [3].
A number of factors, such as the bearing radius, clearance, presence of grooves, parameters of the gaseous medium (viscosity, free-path length of the gas molecules, pressure, and temperature), magnetic traction, geometric errors of the "contact" surfaces, injection capability, bearing microprofile, and other parameters, have a strong impact on the gas-dynamic bearing characteristics in general [4].
At present, there are a number of universal software tools used to solve problems of gas dynamics, including LS-DYNA, ABAQUS, STAR-CD, the ANSYS modules CFX and Fluent, FlowVision, and others [5].
Among them, the most appropriate software package is ANSYS 15.0 (Fluent and CFX modules). The mathematical model is based on solving a system of equations consisting of the fundamental laws of mass, momentum, and energy conservation. The system is closed by initial and boundary conditions, as well as defining relations. Effects not accounted for by these equations are taken into account by additional turbulence equations. The resulting system is the Navier-Stokes equations, the general equations of laminar viscous gas flow.
Initial data for the computation are shown in Table 1, which contains the geometric parameters, the gaseous medium parameters, and the rotation speed. Using the ANSYS 15.0 Fluent software package, a two-dimensional model of the "rotor-bearing" system was formed, and the operating pressure in the bearing gap was calculated. The simulation results are presented in Figure 2. As the clearance increases, the pressure in the gap decreases, which reduces the load-bearing capacity. The highest pressure is obtained when the gap magnitude is less than 2 microns.
In reality, the minimum gap value that can be provided by the existing component base of the experimental model is about 5 microns.
Table 2 presents the mathematical modeling results and estimated gas layer parameters at a 5-micron gap, computed at the two nominal rotation speeds given in Table 1.
Table 2. Gas layer parameters at a 5-micron gap (values at rotor speeds of 850 and 1675 rad/s, respectively).
Rotor speed, rad/s: 850 / 1675
Ball rotor weight, N: 0.8
Reference suspension area, cm²: 4
Operating pressure, Pa: 1749.2 / 2067.3
Load-bearing capacity, N: 0.7 / 0.83
Maximum viscous moment, N·m: 0.0012 / 0.0023

The calculated values of the load-bearing capacity show the following. When the gap magnitude is 5 microns and the rotor speed is 850 rad/s, the bearing will not operate in the gas lubrication regime at the given load (its own weight): the load-bearing capacity is less than the ball rotor weight.
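As a quick arithmetic check of Table 2 (our own sketch, assuming the load-bearing capacity is simply the operating pressure times the reference suspension area):

# Quick check: load-bearing capacity = operating pressure x reference area,
# compared against the ball rotor weight (values from Table 2).
AREA_M2 = 4e-4      # reference suspension area: 4 cm^2
WEIGHT_N = 0.8      # ball rotor weight, N

for speed, pressure in [(850, 1749.2), (1675, 2067.3)]:
    capacity = pressure * AREA_M2
    print(f"{speed} rad/s: capacity = {capacity:.2f} N, "
          f"floats: {capacity >= WEIGHT_N}")
# 850 rad/s  -> 0.70 N < 0.8 N: no gas lubrication regime
# 1675 rad/s -> 0.83 N > 0.8 N: the bearing floats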
The results differ from those of the full-scale experiment. It is assumed that geometric and technological errors of the working surfaces of the hemispherical bowls increase the injection capability. In the theoretical model such effects were not observed, since the injection capability was not included: the hemispherical bowls were considered smooth and perfectly shaped [6].
At a rotor speed of 1675 rad/s, the bearing worked in the gas lubrication regime, which was confirmed experimentally.
The viscous friction moment affects the rotor acceleration, determines the required electric power, characterizes heating, and influences the device sensitivity. The larger the bearing gap, the smaller the viscous friction moment; at the same time, the load-bearing capacity and stiffness become lower, and the device sensitivity decreases too [6]. A decrease in the latter values adversely affects the possibility of using this type of bearing as the suspension for a gyroscopic sensor. Therefore, the main way to reduce the influence of the viscous friction moment is to feed in a gaseous medium other than air. This reduces the aerodynamic drag and accelerates heat removal, and allows the design of a reliable sensory system with satisfactory accuracy [7].
Conclusion
Investigations of the main static characteristics of the gas-dynamic bearing for the ball gyroscope show convergent results between the mathematical and physical experiments [6]. It is found that the effectiveness of this bearing realization is not high in terms of load-bearing capacity and, consequently, of the device sensitivity as an inertial sensor. It should be noted that geometric errors of the rotor and hemispherical bowl surfaces were not taken into account in the theoretical model. Further research will focus on improving the physico-mathematical model and optimizing its parameters [8], as well as on addressing other issues related to cost and achieving acceptable results.
Figure 2. Dependence of the pressure distribution on the bearing gap at rotor nominal speeds of 850 rad/s and 1675 rad/s.
Table 1. Initial data for parameter computation of the working gas-dynamic bearing variant (unshaped). | 1,808.8 | 2016-01-01T00:00:00.000 | [
"Engineering"
] |
A classical picture of subnanometer junctions: an atomistic Drude approach to nanoplasmonics
The description of the optical properties of subnanometer junctions is particularly challenging. Purely classical approaches fail, because the quantum nature of the electrons needs to be considered. Here we report on a novel classical fully atomistic approach, ωFQ, based on the Drude model for conduction in metals, classical electrostatics, and quantum tunneling. We show that ωFQ is able to reproduce the plasmonic behavior of complex metal subnanometer junctions with quantitative fidelity to full ab initio calculations. Besides the practical potentialities of our approach for large-scale nanoplasmonic simulations, we show that a classical approach, in which the atomistic discretization of matter is properly accounted for, can accurately describe nanoplasmonics phenomena dominated by quantum effects.
Introduction
A cornerstone of nanoscience is that systems at the nanoscale have properties neither of the molecular nor of the macroscopic length scales. 1,2 Nanoplasmonics is a beautiful example of this: localized surface plasmons supported by metal nanostructures disappear in clusters with few atoms, and acquire different properties (surface plasmon polaritons) on extended surfaces. [1][2][3][4] The enormous progress of nanoscience has permitted a targeted control of the morphology of nanostructures at the nanometer and even subnanometer scales, thus allowing several applications in plasmonics and nanooptics. [5][6][7][8][9] Most properties of plasmonic nanostructures follow from the tunability of their optical response as a function of their shape and dimensions; in case interparticle gaps are formed, the so-called "hot-spot" regions occur, in which localized surface plasmons can interact with molecules placed in the junctions, allowing single molecule detection. [10][11][12][13][14][15] The optical properties of nanostructures are generally treated, independent of the system's size/shape, by resorting to classical approaches. [16][17][18][19][20][21][22][23][24][25][26][27][28][29] However, when the size of the particles or junctions is only a few nanometers or smaller, the quantum nature of electrons emerges, activating quantum tunneling effects across subnanometer interparticle gaps. 19,20,[30][31][32][33][34][35][36][37][38][39][40][41][42][43][44] Tunnelling effects are not considered in classical models, so that quantum corrected approaches need to be applied. 37,41 The theoretical study of atomic-scale features in nanojunctions is still an almost unexplored field, because most phenomenological classical models do not address quantum effects. In fact, as reported by Urbieta et al., 45 an appropriate description of atomic-scale effects would require a full quantum framework, accounting for the atomistic structure of the nanoparticles and the wave nature of the electrons building up the plasmonic excitation.
By starting from the above considerations, in this paper we report on a fully atomistic classical model based on three very basic ingredients, i.e. the Drude model for conduction in metals, classical electrostatics and quantum tunneling, which is able to reproduce with quantitative fidelity the optical properties of subnanometer junctions. In our approach, which we will call ωFQ (frequency dependent Fluctuating Charges), each atom of the nanostructure is endowed with an electric charge, which is not fixed but can vary as a response to the externally applied oscillating electric field.
Remarkably, here we go a step further with respect to other classical approaches. In fact, we are not using any experimental frequency-dependent dielectric constant (possibly corrected for non-locality and electron scattering at the surface); instead, we allow the dielectric response of the nanosystem to arise from atom-atom conductivity. Quantum tunnelling effects originate from a geometrical damping imposed on the atom-atom conductivity regime. The model is challenged to accurately reproduce complex ab initio simulations on paradigmatic subnanometer junctions (see Results and discussion).

† Electronic supplementary information (ESI) available: Detailed derivation of the ωFQ model. Details on the calculation of the electric current. Model parametrization on single Na nanoparticles. Comparison of ωFQ and ab initio computational costs. Test applications to silver nanorods of different sizes. Dependence of ωFQ absorption cross sections on model parameters. ωFQ MEP maps for plasmon excitation of selected structures. DFT values of the electric current as a function of the elongation distance. See DOI: 10.1039/C8NR09134J
Theoretical model
The model we are introducing here, ωFQ, is built on the Fluctuating Charges (FQ) force field, which is usually adopted for describing molecular systems. [48][49][50][51][52][53] FQ places on each atom of a molecular system a charge, which is not fixed but allowed to vary as a result of differences in atomic electronegativities. Charges are regulated by the atomic chemical hardness, which plays the role of an atomic capacitance. From the mathematical point of view, FQ charges are obtained by minimizing the functional defining the energy of the system (see section S1 given as the ESI†). ωFQ extends the basic formulation of the FQ model to take into account the interaction of the system with an external oscillating electric field E(ω). In particular, each atom is assigned a charge, which is allowed to vary as a response to the polarization sources, which also include the external field E(ω). Thus, because the electric field is a complex quantity, the calculated ωFQ charges become complex; their imaginary part is in quadrature with the field (if the field is real) and is related to the absorption phenomenon. To build up the ωFQ approach, the time response of the charges has to be related to the external polarization sources. To this end, two response regimes are set: (i) a conductive regime, in which the exchange of electrons between contiguous atoms is governed by the dynamics of the delocalized conduction electrons, giving rise to a damping; (ii) an alternative regime, in which the exchange of electrons is also mediated by quantum tunneling effects. In this section we briefly discuss the main physical aspects of ωFQ; more details on the derivation of the equations and their implementation are given as the ESI† (see sections S2 and S3).
The first regime is described by reformulating the Drude model of conductance [54] to treat charge redistribution between atoms. The key equation of the Drude model reads [54] $\frac{d\mathbf{p}}{dt} = -e\mathbf{E} - \frac{\mathbf{p}}{\tau}$, where p is the momentum of the electron and τ is a friction-like constant due to scattering events. The total charge derivative on atom i can be written as: where $A_{ij}$ is an effective area dividing atom i from atom j, $n_i$ is the electron density on atom i, ⟨p⟩ is the momentum of an electron averaged over the trajectories connecting i and j, and $\mathbf{l}_{ji} = -\mathbf{l}_{ij}$ is the unit vector of the line connecting j to i. By assuming the total charge on each atom to be only marginally changed by an external perturbation, we can set $n_i = n_j = n_0$. Therefore: where $\langle\mathbf{p}\rangle\cdot\mathbf{l}_{ji}$ needs to be estimated. To this end, it is convenient to consider a monochromatic applied electric field, so that eqn (3) translates to: To proceed further, $\langle\mathbf{E}(\omega)\rangle\cdot\mathbf{l}_{ji}$ (the total electric field averaged over the line connecting j to i) needs to be connected to atomic properties. This can be done by assuming $\langle\mathbf{E}(\omega)\rangle\cdot\mathbf{l}_{ji} \approx (\mu^{el}_j - \mu^{el}_i)/l_{ij}$, where $\mu^{el}_i$ is the electrochemical potential of atom i and $l_{ij}$ is the distance between atoms i and j. Therefore, eqn (4) becomes: where $n_0 = \sigma_0/\tau$, with $\sigma_0$ being the static conductance of the considered metal. Note that the dependence of both $\sigma_0$ and τ on temperature can in principle be considered; however, in this paper the effects of such dependence have not been investigated. Eqn (5) can be rewritten by collecting $K^{\mathrm{dru}}_{ij} = \frac{2n_0}{1/\tau - i\omega}\,\frac{A_{ij}}{l_{ij}}$ into a $\mathbf{K}^{\mathrm{dru}}$ matrix (see eqn (5)). In order to make the model physically consistent, i.e. not to allow electron transfer between atoms that are too far apart, the pairs of atoms considered in eqn (5) have to be selected by exploiting a geometrical criterion based on $l_{ij}$, i.e. limiting the interactions to the nearest neighbors only.
To avoid any issue related to the specific definition of the nearest neighboring atoms, a Fermi-like damping function $f(l_{ij})$ is introduced as a weight of the Drude conductive mechanism: $f(l_{ij}) = \frac{1}{1+\exp\left[-d\left(\frac{l_{ij}}{s\,l^{0}_{ij}}-1\right)\right]}$ (7). In eqn (7), $l^{0}_{ij}$ is the equilibrium distance between two nearest neighbors in the bulk, whereas d and s are parameters determining the steepness of the curve and the position of its inflection point. Note that in the case of systems composed of different atomic species, the Fermi-like function defined in eqn (7) needs to be adjusted so as to take into account the specificities of the considered material.
Eqn (6) finally defines the ωFQ model. Whenever $f(l_{ij}) = 0$, the purely Drude conductive regime is recovered. For $f(l_{ij}) > 0$, the Drude mechanism exponentially turns off as $l_{ij}$ increases, making electron transfer enter a second, alternative regime. In this regime, the electric current exponentially decreases upon increasing the interatomic distance; therefore, the typical functional form of tunneling exchange is recovered. 37 Once the ωFQ frequency-dependent charges are obtained by solving eqn (6), the complex polarizability ᾱ is easily calculated. From such a quantity, the absorption cross section is recovered as $\sigma_{\mathrm{abs}}(\omega) = \frac{4\pi\omega}{c}\,\mathrm{Im}\,\bar\alpha(\omega)$, where c is the speed of light, ω is the external frequency, and Im(ᾱ) is the imaginary part of the complex polarizability ᾱ.
The ωFQ approach has been implemented in a standalone Fortran 77 package. Eqn (6) is solved for a set of frequencies given as input. All computed spectra reported in the manuscript were obtained by explicitly solving the linear response equations in steps of 0.01 eV. For all the studied Na nanosystems, the parameters entering eqns (5)-(7) were extracted from physical quantities recovered from the literature or numerically tested on single Na nanoparticles (see the ESI † for more details). The parameters finally exploited are the following: τ = 3.2 × 10^−14 s, 55 σ 0 = 2.4 × 10^7 S m^−1 , 56 A ij = 3.38 Å^2 , l 0 ij = 3.66 Å, 56 d = 12.00, s = 1.10.
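The overall workflow just described (a linear-response solve per frequency, followed by evaluation of the absorption cross section) can be sketched as follows. The `solve_charges` callable is a placeholder standing in for the ωFQ solve of eqn (6), whose explicit assembly is not reproduced here; the unit conversions are standard atomic-unit constants, and the cross-section expression is the one written above.

```python
import numpy as np

def absorption_spectrum(solve_charges, coords, e_min=0.0, e_max=3.0, de=0.01,
                        field_amp=1e-4, axis=2):
    """Frequency sweep mirroring the workflow described in the text.

    solve_charges : callable(omega_au, E_vec) -> complex charges q_i(omega);
                    placeholder for the ωFQ linear-response solve (eqn (6)).
    coords        : (N, 3) atomic positions in bohr.
    The polarizability along 'axis' is alpha = sum_i q_i * r_i / E, and the
    absorption cross section is sigma(omega) = (4 pi omega / c) Im(alpha)
    in atomic units (c ~ 137.036).
    """
    c_au = 137.036
    ha_to_ev = 27.2114
    energies = np.arange(e_min, e_max + 0.5 * de, de)   # eV grid, 0.01 eV steps
    sigma = np.empty_like(energies)
    E_vec = np.zeros(3)
    E_vec[axis] = field_amp
    for k, e_ev in enumerate(energies):
        omega = e_ev / ha_to_ev                          # frequency in a.u.
        q = solve_charges(omega, E_vec)                  # complex charges
        alpha = np.dot(q, coords[:, axis]) / field_amp   # alpha_zz(omega)
        sigma[k] = 4.0 * np.pi * omega / c_au * np.imag(alpha)
    return energies, sigma
```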
Before presenting the results and applications of ωFQ, some aspects of the newly developed approach need to be clarified. The equations specifying our approach are defined in the quasistatic regime, i.e. when the dimension of the studied system is much smaller than the wavelength of the external radiation. However, since the equations defining the model (see eqn (3)) can be rewritten in terms of Maxwell's sources (such as the electric current densities), its extension to the fully electrodynamical regime could be considered. This will be addressed in future communications and will permit the application of ωFQ to the calculation of the plasmonic response of nanostructures with a dimension comparable to the external radiation wavelength.
As stated before, here we focus on complex nanojunctions made of Na particles. Nevertheless, the model can be used in its present form for any metal at frequencies far from interband transitions. To back up this point, in section S4.2 in the ESI, † ωFQ is applied to selected silver nanorods of different sizes, for which the absorption maxima are far from interband transitions. 42 Almost perfect agreement with the reference ab initio data 42 is obtained. The inclusion of the effect of interband transitions is underway; in particular, we are exploring both the inclusion of a Lorentz term in the conductivity and the explicit introduction of d-electron polarizability via an induced point dipole on each atom.
Furthermore, ωFQ has the potential to be coupled to electrodynamics models based on the metal permittivity, such as the Boundary Element Method (BEM), because the elements in both approaches interact in the same way, i.e. via electromagnetic fields (see eqn (S11) given as the ESI †). In particular, this can be done by following the same approach that has been used to couple a polarizable MM layer to the Polarizable Continuum Model in the case of molecular systems. 57,58 The coupling of ωFQ with electrodynamical models will permit the study of much bigger systems, for instance by treating the core of nanoparticles with a continuum approach and retaining the atomistic description only for their surface.
Results and discussion
In order to test our newly developed ωFQ method, based on the Drude model for conductivity in metals, classical electrodynamics and quantum tunneling, we shall compute the optical response of Na aggregates which are characterized by subnanometer gaps. We shall compare the results obtained by exploiting our model with those calculated at the ab initio level.
In particular, the optical absorption spectra of a metal nanorod pulled beyond the breaking point 46 and those of two small metal nanoparticles brought into contact 47 are studied because they are paradigmatic of a class of nanoplasmonic problems where ab initio simulations seem mandatory.
As stated before, the ωFQ equations are solved for each frequency given as input. In the two considered cases, the absorption cross sections for all structures were calculated by considering 300 frequencies (from 0.0 to 3.0 eV, step 0.01 eV) for the stretched nanorod and 450 frequencies (from 0.0 to 4.5 eV, step 0.01 eV) for the nanoparticle dimer brought into contact. The total computational cost for each considered structure of the two systems is 11 seconds (11 MB RAM) and 22 minutes and 55 seconds (112 MB RAM), respectively. The calculations were performed on a MacBook Pro 2011, 2.3 GHz Intel Core i5, 4 GB RAM; 4 cores were used for OpenMP parallelization. While we have no data on the time and memory requirements of the original ab initio calculations, supercomputer resources are certainly required for them. This is an important practical advantage of the present approach compared to ab initio methods. To further substantiate this point, in section S4 of the ESI † we compare the ab initio vs. ωFQ requirements for a Na 59 nanoparticle (see Table S3 †). There we also show that the developed approach, although not fully algorithmically optimized, is already able to treat a 10 nm nanoparticle (13 803 atoms) with reasonable computational effort.
Stretched sodium nanorods
In this section the ωFQ approach (see Section 2) is applied to a challenging system, i.e. a mechanically stretched sodium nanorod, which has been recently studied at the ab initio level. 46 For such a system, absorption cross sections as a function of the elongation distance at the full ab initio level have been reported, 46 and such data are taken in this paper as reference values to evaluate the quality of our fully atomistic, but classical, ωFQ approach. It is worth noting that upon increasing the elongation distance, a sub-nanometer junction region forms, in which quantum tunneling effects play a crucial role in determining the spectral features. 32,41,43 Therefore, the application of our model to such a challenging system will highlight its potential and limitations in describing such effects.
The nanorod structures, eight of which are depicted in Fig. 1, were kindly provided to us by the authors of ref. 46. A total of 60 different structures were obtained from an initially perfect Na 261 nanorod, which was adiabatically stretched, by allowing the atomic positions of the central region to relax (see ref. 46 for more details).
For all 60 structures, absorption cross sections were calculated by exploiting the ωFQ model; Fig. 2 reports the absorption spectra of 30 selected structures as a function of the elongation distance.
As depicted in Fig. 1, the sodium nanorod is elongated and the atoms in the nanojunction region are relaxed until the structure breaks for distances longer than 26 Å, where the limit of mono-atomic junctions is reached.
These structural features are reflected by the calculated spectra (see Fig. 2(a)); in fact, a clear discontinuity is evident at d = 26 Å (structure G). Let us focus on elongation distances d < 26 Å. The pristine nanorod structure (structure A) presents one intense excitation at 1.5 eV (dubbed Local Plasmon LP) and a less intense peak at 2.8 eV (LP2). Our atomistic model allows the identification of the nature of such LPs, for instance by graphically plotting the imaginary part of atomic charges for each transition. Maps of the molecular electrostatic potential (MEP) obtained from such charges are reported in panel (a) of Fig. 3 for structures A-H. The comparison of data in panels (a) of Fig. 2 and 3 clearly shows a first charge-transfer excitation and a second transition with a dipolar character. Therefore, by exploiting the same nomenclature used for metal dimers, LP will be renamed as a Charge Transfer Plasmon (CTP), whereas LP2 as a Boundary Dipolar Plasmon (BDP). [59][60][61] As the elongation distance increases, both CTP and BDP significantly redshift, and this feature is particularly evident for CTP. In addition, they behave in a completely different way: CTP intensity slowly decreases, whereas BDP becomes more and more predominant. Such a behavior is commonly identified in most nanoplasmonic dimers. [45][46][47]62,63 When the elongation distance reaches 26 Å (structure G) a monoatomic junction is obtained, which is the limiting structure occurring just before the structure breaks. Such features are reflected by the absorption cross section. The nature of the lowest energy transition is different from the previous CTP because the node is not placed at the geometrical center of the nanostructure, although it preserves the CT character (see Fig. 3, panel a). For this reason, such a plasmon is called CTP2. CTP2 occurs at about 0.5 eV and shows a very low intensity, because electrons can only transfer through a single atom. The BDP excitation becomes the most intense and shifts at 2.2 eV. The inspection of panel (b), structure G in Fig. 3 shows that such an excitation has now an octupolar character, characterized by 3 nodes. At such an elongation distance, a third excitation, which is actually already visible at 25 Å, arises at about 1.4 eV. The analysis of the MEP map suggests this transition to be due to an additional dipolar plasmon, BDP2.
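The plasmon assignment described above relies on mapping the molecular electrostatic potential generated by the imaginary part of the ωFQ charges. A minimal sketch of such a map on a plane is given below; the point-charge potential expression is standard, and the grid choices are purely illustrative.

```python
import numpy as np

def mep_map(charges, coords, plane_z=0.0, extent=20.0, npts=200):
    """Electrostatic potential of the imaginary part of atomic charges on a
    z = plane_z plane: V(r) = sum_i q_i / |r - r_i| (atomic units).
    Used only to visualise the nodal structure of a plasmon."""
    q = np.imag(charges)                      # imaginary part -> absorptive response
    xs = np.linspace(-extent, extent, npts)
    ys = np.linspace(-extent, extent, npts)
    X, Y = np.meshgrid(xs, ys)
    V = np.zeros_like(X)
    for qi, (xi, yi, zi) in zip(q, coords):
        r = np.sqrt((X - xi) ** 2 + (Y - yi) ** 2 + (plane_z - zi) ** 2)
        V += qi / np.maximum(r, 1e-6)         # avoid division by zero on top of atoms
    return X, Y, V
```

Counting the sign changes (nodes) of V along the nanostructure axis distinguishes charge-transfer-like from dipolar or higher-multipolar plasmons, as done in Fig. 3.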
We move now to comment on the spectra for d > 26 Å. CTP disappears, as expected, because the gap between the two moieties suppresses any net charge transfer across the junction. The ωFQ spectra (Fig. 2) compare extremely well with the ab initio data, and only minor discrepancies are present. The ab initio spectra for structures with 13 < d < 26 Å show a very weak peak at about 0.5 eV, associated with CTP2. Such a peak, which can hardly be identified in the ab initio density maps (see panel (b) of Fig. 3), is reproduced by ωFQ only for the structures with 24 < d < 26 Å. Thus, ωFQ is not able to reproduce the relative intensity between CTP and CTP2, which are both present in the ab initio results. In addition, the ab initio BDP transition results in a narrower band. Such a difference can be justified by the atomistic nature of our model, which results in a kind of inhomogeneous broadening due to transitions with different nodal structures at the atomic scale but corresponding to plasmons of similar nature. Note that such inhomogeneous band broadening is not reported in the ab initio data, probably because of the broadening applied in the real-time TDDFT simulations. Furthermore, with reference to structure G, ωFQ predicts BDP to have a multipolar character (with 5 nodes), whereas its ab initio counterpart shows such a character only at higher energies (see Fig. 3, panels a and b). Such a discrepancy shows that for structure G, ωFQ tends to overestimate the 5-node plasmon character. However, it is worth pointing out that such a qualitatively different description is reported only in the case of structure G. Despite such minor discrepancies, the agreement between ωFQ and DFT spectra is impressive: not only excitation energies but also relative intensities are correctly reproduced over the whole elongation range, as well as the discontinuities reported for both CTP and BDP. Furthermore, ωFQ reproduces the ab initio calculated discontinuity (redshift) in the spectrum of the structure with d = 7 Å. Such a behavior is probably due to the structural rearrangement of the nanorod as a result of the ab initio geometry relaxation. The peculiar atomistic nature of ωFQ makes it capable of capturing such effects, which result from tiny deformations of the nanostructure.
To further analyze our results, in Fig. 4 the excitation energies and integrated intensities calculated by exploiting our model are shown for the three plasmons. Many discontinuity points are observed for both CTP and BDP as a result of the stretching of the nanostructure. In particular, at about 7.5 Å the energies of CTP and BDP decrease by 0.1 eV. The integrated intensities also present a discontinuity point at such a distance. Such shifts and discontinuities, which as stated before are due to structural rearrangements, are also reported by DFT calculations (see Fig. S6 given as the ESI †). 46 Our results, which also in this case are quantitatively comparable to DFT, show once again that our classical atomistic approach gives a correct description of the underlying physical phenomena.
Sodium NP dimer: approaching and retracting processes
As a second test to analyze the performances of ωFQ, the latter is challenged with the description of the optical properties of two Na 380 icosahedral nanoparticles which are approached and retracted (see Fig. 5). This system has been recently studied at the full ab initio level by Marchesin et al., 47 who kindly provided us with the full set of model structures.
Two processes will be considered. First, the two Na 380 nanoparticles are placed at a distance of 16 Å (a distance that guarantees that they do not interact) and are then drawn closer until they fuse (see Fig. 5, panel a). Then the two fused Na 380 nanoparticles are retracted until the structure separates (see Fig. 5, panel b), giving rise to a process which is similar to the case presented in the previous section. Both the approaching and the retracting processes are considered because, as already reported in ref. 47, the two processes are physically different.
Let us start the discussion by considering the approaching process (Fig. 5, panel a). The imaginary part of the longitudinal polarizability, i.e. the component parallel to the dimer axis, has been computed as a function of the inter-nanoparticle distance; its values are reported in panel (a) of Fig. 6.
The calculated 2D plots present a clear discontinuity between the nominal gap sizes of 6.1 Å and 6.2 Å, i.e. between structures B and C in Fig. 5 panel a, which corresponds to a jump-to-contact instability. At larger inter-nanoparticle distances, the plots are dominated by a single peak, which is placed at 3.19 eV at d = 16 Å, i.e. when the two nanoparticles are far apart. This band can be attributed to BDP. The induced charges and the corresponding MEP maps are depicted for four selected significant structures in Fig. 7. We clearly see that for structure A, BDP is a dipolar plasmon. As expected, as the distance between the two nanoparticles decreases, BDP redshifts due to the increase of the electrostatic interactions. When the two nanoparticles fuse (structure C), a clear discontinuity appears and, for d ≤ 6.1 Å, the 2D plot is characterized by two main peaks, namely CTP (1.66 eV at d = 6.1 Å) and CTP′ (3.15 eV at d = 6.1 Å), corresponding to the plasmon excitations represented for structures C and D in Fig. 7. As the distance further decreases, CTP blueshifts whereas CTP′ remains almost unchanged.
The inspection of the MEP maps in Fig. 7 shows that the higher order CTP′ has a dipolar character, similar to the BDP occurring for structures with d > 6.1 Å. The jump-to-contact structural instability is confirmed by the appearance of CTP, which is characterized by a net flux of charge between the two (fused) nanoparticles. Such a flux gives rise to a conductive regime, resulting in an electric current. Note that, as previously reported by Marchesin et al., 47 the sudden occurrence of the junction bypasses the distance regime where quantum tunneling effects are relevant. Moving back to Fig. 6, we note the very good agreement between the results obtained by exploiting the ωFQ approach and their ab initio counterparts. Qualitatively, the DFT results are perfectly reproduced: all three CTP, CTP′ and BDP bands are described, their behavior as a function of the distance is correctly reproduced, and the relative intensities of the bands are qualitatively well described. Some minor discrepancies are present from the quantitative point of view. In fact, the behavior of BDP as a function of the distance is not perfectly described, e.g. the ωFQ intensities remain almost constant along the approaching process. Also, the CTP intensities are overestimated and the CTP′ band appears broader. Such findings are in line with what was found in the previous section and can be ascribed to the atomistic nature of our approach, which does not smooth out inhomogeneities on the atomic scale.
We move now to study the two fused Na 380 nanoparticles which are retracted until the structure breaks (see Fig. 5 panel b for representative structures). As is evident, the breaking process occurs gradually: a monoatomic junction arises (structure F), which breaks as the distance increases further. Therefore, tunneling effects are expected to be relevant, resulting in a different behavior of the calculated spectrum with respect to what was reported in the previous paragraphs for the approaching process, as also found at the ab initio level. 47 Indeed, this is confirmed by the ωFQ calculated values of the imaginary part of the longitudinal polarizability, i.e. the component parallel to the dimer axis; such data are reported in panel (a) of Fig. 8 as a function of the elongation distance.
Starting from the fused structure A, we notice that, as expected, the spectrum consists of two bands, which can be related to the CTP and CTP′ excitations. Their nature can be understood by referring to Fig. 9: CTP occurs at 1.84 eV and corresponds to a charge flux between the two nano-moieties, whereas CTP′ (3.18 eV) shows the anticipated dipolar character when looking at the single nanoparticles.
As the elongation distance increases, both CTP and CTP′ redshift, and this is particularly evident for CTP. In addition, the CTP band narrows and its intensity decreases, whereas CTP′ shows the opposite behavior, i.e. its intensity increases and the band broadens. Small discontinuities, characterized by sudden red- or blue-shifts of the excitation, are visible. This behavior is similar to what was found in the previous section for the stretched Na 261 nanorod (see Fig. 4, panel (a)), and can reasonably be ascribed to the structural relaxation and the resulting thinning of the conductive channels as the structure stretches. As the limiting structure F is reached (d = 32.1 Å), a monoatomic junction arises (see Fig. 9), with the CTP band occurring at 0.25 eV and the CTP′ band at 2.89 eV. The inspection of the corresponding MEP maps shows that the nature of the associated plasmons is unchanged with respect to the initial structure A. Suddenly, the structure breaks (structure G, d = 32.3 Å), resulting in the disappearance of CTP and the convergence of CTP′ towards BDP. The associated MEP, depicted in Fig. 9, shows a multipolar character. Moving back to Fig. 8, also for the elongation process a very good agreement between the results obtained by exploiting our classical atomistic ωFQ approach and the reference ab initio data 47 is noted. Qualitatively, the DFT results are perfectly reproduced: all three CTP, CTP′ and BDP bands are described, their behavior as a function of the distance is correctly reproduced, and the relative intensities of the bands are qualitatively well described. The ωFQ intensities for the CTP band are slightly overestimated, and remain higher also as the nominal gap size increases. Furthermore, ωFQ well reproduces the discontinuities in the spectra, and specifically those marked as α, β and γ in Fig. 8 panel (b). As already pointed out in the previous section, such a behavior can be due to the structural rearrangement of the nanostructure as a result of the ab initio geometry relaxation. The classical but atomistic nature of our approach makes it capable of correctly describing such effects.
The ωFQ imaginary charges for structures before and after the spectral jumps α, β and γ are depicted in Fig. S8, given as the ESI. † The structural change associated with each spectral jump is reflected by differences in the corresponding plasmons, i.e. by changes in the charges of the junction atoms. Remarkably, our data are in agreement with the DFT density distributions around the junction reported in ref. 47, showing once again the reliability of our classical atomistic model. Note again that most of the physical findings were fully disclosed in the reference ab initio study. 47 To end the discussion and to further analyze the performance of the model, we report in Fig. 10 the calculated ωFQ absolute values of the electric current through the plasmonic nanojunctions as a function of the elongation distance. Both the approaching and retracting processes are considered. The reported values were obtained at the excitation energies of each plasmon.
As expected, for the approaching process no current flux is observed when the two nanoparticles do not interact, i.e. when the spectra are dominated by BDP. As the jump-to-contact instability is reached, a discontinuity in the current arises, i.e. a net current flux is established. The current further increases as the inter-nanoparticle distance decreases.
For the retracting process, the CTP plasmon clearly dominates the charge flux. As the system is stretched, the current intensity slowly decreases, until it vanishes when the system breaks (structures F and G). Several discontinuities in the CTP current are present, similarly to what was already observed for the stretched nanorod in the previous section. Remarkably, the α, β and γ spectral jumps can easily be identified in the current plot. Note that the ωFQ plot reported in Fig. 10 can be directly compared to its ab initio counterpart depicted in Fig. S9 given as the ESI. †
Conclusions
In the present work, a novel atomistic model, ωFQ, based on textbook concepts (Drude theory, electrostatics, and quantum tunneling) has been proposed. In such a model, the atoms of complex nanostructures are endowed only with an electric charge, which can vary according to the external electric field. The electric conductivity between nearest atoms is modeled by adopting the simplest possible assumption, i.e. the Drude model, reformulated here in terms of electric charges. Thus, only a few physical parameters define our equations. Furthermore, the dielectric response of the system arises naturally from the atom-atom conductivity. Remarkably, such a feature permits one to avoid the use of any experimental frequency-dependent dielectric constant, which is instead adopted in quantum corrected models. 37 Moreover, ωFQ also takes quantum tunneling effects into account by exponentially switching off the conductivity between atoms as their distance increases.
The ωFQ model was challenged to reproduce the optical response of complex Na nanoclusters which have been investigated previously at the ab initio level 46,47 and for which a QM description has been considered mandatory. The capability of our approach to reproduce the results of such complex simulations has a relevant practical consequence: due to its classical formulation, ωFQ can be applied to model nanoplasmonic systems of size well beyond what can currently be treated at the ab initio level. Moreover, the good agreement between the ab initio simulations and the ωFQ results shows that the physics it encompasses (the Drude model, electrostatics and a quantum tunneling correction), properly ported to the atomistic level, dominates the nanoplasmonic phenomena also in this small-scale regime.
In this work, only Na clusters have been considered. However, ωFQ, properly extended to account for the atomic core polarizability that characterizes d-metals, has the potential to treat a great variety of plasmonic materials. Also, the formulation of the model in terms of electric charges and its demonstrated reliability show that ωFQ has the potential to be coupled to fully QM molecular simulations within a QM/MM framework, so as to allow the modeling of the spectral enhancement of molecules adsorbed on plasmonic nanostructures. These aspects will be treated in future communications.
Conflicts of interest
There are no conflicts to declare.
"Physics"
] |
Costing the economic burden of prolonged sedentary behaviours in France
Abstract Background There is strong evidence showing that sedentary behaviour increases the risk of developing several chronic diseases and of premature death. The economic consequences of this risk have never been evaluated in France. The aim of this study was to estimate the economic burden of prolonged sedentary behaviour in France. Methods Based on individual sedentary behaviour time, relative risks of developing cardiovascular disease, colon cancer and breast cancer, and of all-cause premature mortality, were identified. From the relative risks and the prevalence of prolonged sedentary behaviour, a population attributable fraction approach was used to estimate the yearly number of cases for each disease. Data from the National Health Insurance were used to calculate the annual average costs per case for each disease. Disease-specific and total healthcare costs attributable to prolonged sedentary behaviour were then calculated. Indirect costs from productivity loss due to morbidity and premature mortality were estimated using a friction cost approach. Results In France, 51 193 premature deaths per year appear related to prolonged daily sedentary behaviour. Each year, prolonged sedentary behaviour costs the national health insurance 494 million €. Yearly productivity loss due to premature mortality attributable to prolonged sedentary behaviour costs 507 million €, and yearly productivity loss due to morbidity costs between 43 and 147 million €. Conclusion Significant savings could be achieved and many deaths avoided by reducing the prevalence of prolonged sedentary behaviour in France. To address this issue, strong responses should be implemented to tackle sedentary behaviour, complementary to physical activity promotion.
Introduction
Over the last decades, the modernization and urbanization of society have changed the population's lifestyle, decreasing physical activity (PA) levels and increasing sedentary behaviour (SB) time. 1,2 PA is defined as 'any bodily movement produced by skeletal muscles that requires energy expenditure', 2 whereas SB is defined as 'any waking behaviour characterized by an energy expenditure ≤1.5 metabolic equivalents, while in a sitting, reclining or lying posture'. 3,4 In 2020, the World Health Organization (WHO) provided for the first time public health guidelines to address the health risks of prolonged SB. 1,2 The amount of time spent being sedentary should be reduced as much as possible, and it should be replaced by PA of any duration and any intensity. 1,2 The association of prolonged SB with premature mortality and non-communicable diseases (NCDs) such as cardiovascular disease (CVD), colon cancer and breast cancer is now well established. [5][6][7][8] Despite this evidence, there are still few policy interventions aiming at reducing SB, complementary to health-enhancing physical activity (HEPA) promotion. [9][10][11] Yet, meta-analyses have shown an interdependent relationship between SB and PA. [12][13][14][15] Thus, meeting the threshold of the WHO PA recommendations 1 is not enough to attenuate the detrimental influence of a high amount of prolonged daily SB on premature mortality. 5 However, recent studies have suggested that breaking up every hour of sedentary time with a few minutes of moderate or vigorous PA has a significant impact on health outcomes. 13,16,17 The difference between physical inactivity, 'an insufficient PA level to meet present PA recommendations', 2 and SB, and their respective consequences for health, can still be confusing, 3,18 especially for policy-makers and the general population.
Moreover, the mechanisms and correlates of SB and PA differ. 17,19-21 Thus, reducing SB requires specific responses, complementary to PA promotion. 9,22,23 Informing policy-makers about the consequences of health issues such as physical inactivity and/or SB, and providing evidence on how to address them, may push the topic onto the policy agenda and may help policy-makers in their decisions to define and implement policy solutions. 24,25 In France, based on the INCA 3 study, 26 the Agency for Food, Environmental and Occupational Health & Safety (ANSES) has highlighted the need to tackle prolonged SB. 27 Based on a nationally representative sample, this study on the lifestyle habits of the French population collected data on the SB of 2682 adults through the Recent PA Questionnaire for adults. 26 According to the INCA 3 study, almost 25% of people between 18 and 79 years of age were at high risk for health conditions, with more than 8.6 h of daily SB. 26 Unlike for physical inactivity, 28 the number of premature deaths due to SB has never been estimated in France. Likewise, few studies have evaluated the economic burden of physical inactivity, 29,30 and the economic consequences of SB in France have never been costed. The aim of this study is to estimate the premature deaths and the economic burden attributable to prolonged SB in France. In this study, prolonged SB is defined as a daily SB time exposure that increases the risk of premature death or of developing NCDs.
Methods
Direct and indirect economic consequences of prolonged SB in France were estimated in this study. Annual direct healthcare expenditures attributable to prolonged SB in France were calculated using a prevalence-based, population attributable fraction (PAF) approach. 30,31 Indirect costs from productivity loss due to morbidity and to premature mortality were estimated using a friction cost approach. 30,32

Quantify direct costs of prolonged SB

Annual direct healthcare expenditures attributable to prolonged SB in France were calculated in four steps: (i) identification and quantification of the increased risk of all-cause premature mortality and of developing NCDs due to prolonged SB; (ii) estimation of the number of health conditions attributable to prolonged SB using the PAF approach 30 ; (iii) calculation of the annual average healthcare costs for each disease; and (iv) estimation of the disease-specific and total healthcare costs attributable to prolonged SB.
Identification and quantification of the increased risk of all-cause premature mortality and of developing NCDs due to prolonged SB

From meta-analyses or large cohorts, we identified the relative risks (RR) of all-cause premature death and of developing NCDs after adjustment for co-variables including PA (table 1). To this end, the PubMed and Google Scholar databases were used to identify the most suitable studies in order to extract RRs with their respective confidence intervals (CI). Studies were selected according to the following criteria: recent (<10 years) meta-analyses or large cohorts (≥50 000 participants), daily SB time exposure indicated, disease-free participants at baseline, adult participants, and RRs adjusted for levels of PA. When several studies met the criteria, those using accelerometer measurements were preferred over those using questionnaires.
Estimation of the number of health conditions attributable to prolonged SB using the PAF approach

From the studies selected in Step 1, the RRs adjusted for prolonged SB were extracted together with their thresholds in hours per day for all-cause premature mortality and NCDs (table 2). Then, the prevalence of prolonged SB in the French population was calculated for each health risk (Supplementary Material S1) using two 2016 open-data collections: the INCA 3 study on the lifestyle habits of the French population 33 and the causes of death from the French National Institute of Statistics and Economic Studies (INSEE). 34 From these calculations, PAFs were computed to estimate the yearly number of cases of premature all-cause mortality and of each disease (table 2). For this purpose, the following formula was used 30,31 : PAF = P 1 (RRs − 1)/[P 1 (RRs − 1) + 1], where P 1 is the prevalence of prolonged SB at baseline and RRs is the RR for prolonged SB compared with no prolonged SB, adjusted for confounding factors (table 1). For each PAF, a 95% CI was computed.
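The PAF expression written above is the standard Levin formula used in comparable burden studies. The short sketch below applies it and propagates the RR confidence bounds; the numerical inputs are illustrative placeholders, not the study's actual figures.

```python
def paf(prevalence, rr):
    """Population attributable fraction: PAF = p*(RR - 1) / (p*(RR - 1) + 1)."""
    x = prevalence * (rr - 1.0)
    return x / (x + 1.0)

def attributable_cases(total_cases, prevalence, rr, rr_low, rr_high):
    """Yearly cases attributable to prolonged SB, with bounds driven by the RR CI."""
    point = total_cases * paf(prevalence, rr)
    low = total_cases * paf(prevalence, rr_low)
    high = total_cases * paf(prevalence, rr_high)
    return point, low, high

# Illustrative only: 24.9% prevalence of prolonged SB and a hypothetical RR of 1.25
cases, cases_lo, cases_hi = attributable_cases(100_000, 0.249, 1.25, 1.10, 1.40)
```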
Calculation of the annual average costs of healthcare expenditures for each disease
Each year, the French National Health Insurance makes available data on the prevalence and healthcare expenditures of specific disease groupings. 35 For each disease, the data include hospital care expenditures, drug expenditures, physician care expenditures, other health professionals' care expenditures, biological tests, transport and daily sickness allowances. 35 From this open database, the 2016 annual average costs per case for each disease were extracted (Supplementary Material S2).
Estimation of disease-specific and total healthcare costs attributable to prolonged SB

To estimate the total healthcare costs attributable to prolonged SB, we first multiplied the yearly number of cases of each disease, and their 95% CI, by the annual average healthcare costs. Then, the costs related to each disease were summed to estimate the total healthcare costs attributable to prolonged SB in 2016 (table 3).
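The direct-cost aggregation is a simple multiply-and-sum over diseases; the sketch below makes that explicit, with hypothetical case counts and per-case costs rather than the values actually extracted from the national database.

```python
def direct_costs(cases_by_disease, avg_cost_by_disease):
    """Disease-specific and total healthcare costs attributable to prolonged SB.

    cases_by_disease    : dict disease -> yearly attributable cases
    avg_cost_by_disease : dict disease -> annual average cost per case (euros)
    """
    per_disease = {d: cases_by_disease[d] * avg_cost_by_disease[d]
                   for d in cases_by_disease}
    return per_disease, sum(per_disease.values())

# Hypothetical inputs, not the study's figures
per_disease, total = direct_costs(
    {"CVD": 40_000, "breast cancer": 9_000, "colon cancer": 2_500},
    {"CVD": 8_000.0, "breast cancer": 15_500.0, "colon cancer": 13_500.0},
)
```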
Quantify indirect costs of prolonged SB
A friction cost approach 32 was used to estimate the productivity losses due to mortality attributable to prolonged SB (≥8.6 h/day). As in Ding et al., 30 a friction period of 3 months was used. However, we replaced the total number of deaths with the total number of premature deaths due to NCDs in France. 36 Thus, the following formula was used: indirect costs of productivity losses = total number of premature deaths due to NCDs in 2016 × PAF for all-cause mortality × proportion of deaths occurring at age 15 years or above × employment rate among the population aged 15 years and above × (gross domestic product per person employed in 2016 × 0.25).
To obtain the proportion of deaths occurring at age 15 years or above, the employment rate among the population aged 15 years and above, and the gross domestic product (GDP) per person employed, we used data from INSEE, 34 the World Bank 37 and the OECD. 38 We performed a sensitivity analysis on the friction period, using 1.5 months as the lower limit and 4.5 months as the upper limit, as recommended by Ding et al. 30 Productivity losses from workdays lost due to NCDs attributable to prolonged SB were also computed. Data from the study by Vuong et al. 39 were used to estimate the number of workdays lost due to NCDs (Supplementary Material S3). Then, the following formula was used to calculate these productivity losses: indirect costs of productivity losses = number of NCD cases attributable to prolonged SB × mean workdays lost per year due to the disease × employment rate among the population aged 15 years and above × (GDP per person employed in 2016 / 360).
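The two indirect-cost formulas above can be sketched as follows; the example values are hypothetical and serve only to illustrate the friction-period sensitivity analysis (1.5 to 4.5 months).

```python
def mortality_productivity_loss(ncd_deaths, paf_all_cause, prop_deaths_15plus,
                                employment_rate, gdp_per_worker,
                                friction_months=3.0):
    """Friction-cost estimate of productivity lost to premature mortality,
    mirroring the formula above (friction period expressed in months)."""
    return (ncd_deaths * paf_all_cause * prop_deaths_15plus
            * employment_rate * gdp_per_worker * friction_months / 12.0)

def morbidity_productivity_loss(cases, workdays_lost_per_case,
                                employment_rate, gdp_per_worker):
    """Productivity lost to workdays missed because of an SB-attributable NCD
    (GDP per worker spread over 360 days, as in the formula above)."""
    return (cases * workdays_lost_per_case * employment_rate
            * gdp_per_worker / 360.0)

# Hypothetical example: sensitivity on the friction period
low = mortality_productivity_loss(400_000, 0.06, 0.99, 0.65, 80_000.0, friction_months=1.5)
high = mortality_productivity_loss(400_000, 0.06, 0.99, 0.65, 80_000.0, friction_months=4.5)
```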
Results

Direct costs of SB
A total of four studies meeting the criteria were selected in order to extract RRs adjusted for prolonged SB (table 1). Among these studies, only that of Ekelund et al. 5 investigated dose-response associations based on accelerometry-measured daily SB time, and it was used for all-cause mortality (table 2).
Each year, prolonged SB costs the National Health Insurance almost 494 million € (95% CI: 147-777), including 317 million € for CVD, 142 million € for breast cancer and 34 million € for colon cancer (table 3).
Indirect costs of SB
Yearly productivity loss due to premature mortality attributable to prolonged SB costs 507 million € (95% CI: 305-636), whereas yearly productivity loss due to morbidity costs between 43 and 147 million € (table 4).
Discussion
The results of this study suggest that, in 2016, 51 193 (95% CI: 35 317-63 699) deaths might have been avoided if the 24.9% of the French adult population with prolonged SB had had a daily SB time under the threshold of 8.6 h/day. Moreover, this study provides for the first time a global estimation of the economic burden of prolonged SB in France. The direct and indirect economic consequences of prolonged SB for French society cost more than 1 billion € in 2016. In a context where the French population is ageing and the prevalence of NCDs will continue to rise, 40 this economic burden might continue to increase in the forthcoming decades if the prevalence of prolonged SB is not reduced.
For comparison, in the UK, Heron et al. 41 estimated that 69 276 deaths could be avoided each year if prolonged SB were eliminated. There, the direct healthcare costs of prolonged SB were estimated at £0.8 billion in 2016-17 (almost 0.94 billion € in 2017). In that study, the authors considered prolonged SB as spending at least 6 h of waking time sedentary. This threshold was based on their national health survey, which estimated that 30% of adults on weekdays and 37% of adults on weekends had prolonged SB. Thus, the higher direct healthcare costs of prolonged SB in the UK were notably due to a higher prevalence of SB than in France. They could also be explained by the choice of studies included in the analysis and by differences between the national health systems.
It should be emphasized that our results are probably underestimates. Direct healthcare costs and part of the indirect costs were estimated only for CVD, colon cancer and breast cancer, whereas moderate to strong evidence has shown that prolonged SB is associated with several other NCDs. 42,43 However, we did not identify recent meta-analyses or large cohorts using daily SB time exposure to estimate the RR for other health outcomes. Some studies have investigated RRs by comparing the least sedentary individuals with the most sedentary ones, without expressing SB exposure in hours per day. 42,43 Yet, the available data on the SB of the French population only use daily SB time exposure. Moreover, some large cohorts or meta-analyses on the association between SB and NCDs did not take into account important confounding factors in their RR adjustment, and we therefore did not include these studies in our analysis. For example, Stamatakis et al. 44 showed that sitting behaviour was not associated with incident diabetes over 13 years once the RR was adjusted for body mass index at baseline. Furthermore, some of our methodological choices might also underestimate the economic consequences of prolonged SB in France. In this study, we used a PAF approach 30,31 to estimate the direct healthcare costs attributable to prolonged SB. According to Ding et al. 30 in their economic analysis of the consequences of physical inactivity, and by comparison with the study by Carlson et al., 45 the PAF approach provides lower estimates than more direct approaches, such as econometric approaches linking the risk factor to healthcare expenditures at the individual level. 30 Moreover, the prevalence of prolonged daily SB in the INCA 3 study was calculated from data obtained with a self-reported questionnaire. 27 This prevalence and its economic consequences could be underestimated because accelerometers appear to measure SB more accurately than questionnaires, which generally underestimate daily SB time. 46,47 In contrast, the indirect costs of productivity losses due to mortality and morbidity estimated with a friction cost approach 32 might be overestimated. This approach considers that, in the event of illness or death of a worker, the company loses productivity for the entire duration of the absence. Although sensitivity analyses were performed on the friction period, in the real world productivity could be partially maintained through a reorganization and optimization of the means of production during the employee's absence. To estimate the productivity loss from absenteeism due to morbidity, we used absenteeism data for American workers 39 because no French data were available; yet, work absenteeism due to illness may differ substantially across countries. 48 Nevertheless, our results show that prolonged SB has a significant economic impact on the French health system and on employers. The workplace could be a particularly effective setting in which to implement policy interventions aiming to reduce SB and promote PA. 49,50 According to a study by Said et al. 51 describing the SB of 35 444 French workers, adults spent a mean of 4.17 h of SB per day in the work setting. Moreover, when people are sedentary at work, they are more likely to also be sedentary outside of work. 51 A global policy promoting active travel as often as possible might also be effective in reducing SB and might generate substantial savings. 52,53 French workers spend on average 1.1 h/day sitting in transport. 51 Moreover, it seems that the development of teleworking increases daily SB time. 54
There is now strong evidence showing that replacing SB with any duration and any intensity of PA has a significant impact on health. 13,42 For this reason, the policies that need to be implemented to tackle SB should be complementary to HEPA promotion.
This study has certain limitations. Although our analysis was based on RRs adjusted for PA, for the RRs of NCDs we were not able to use studies investigating dose-response associations between accelerometry-measured PA and daily SB time, such as Ekelund et al. 5 for all-cause mortality. Moreover, this study did not include in the analysis all RR studies showing an increased risk of developing NCDs. In addition, some RRs were extracted from large cohorts in which the association between SB and NCDs was observed in several countries, and not specifically in France. As described above, the indirect costs of productivity loss due to morbidity were computed from absenteeism data for American workers. Our study concerned only the French adult population; yet, data from ANSES 33,55 show that most French children and adolescents do not meet the WHO SB guideline. 2 To conclude, this study shows that many deaths could be avoided by reducing the prevalence of prolonged SB in France. Moreover, the direct healthcare costs attributable to SB-related diseases represent a high economic burden for the French health system and, for employers, the prolonged SB of workers leads to significant productivity losses. To address these issues, strong responses should be implemented to tackle SB, complementary to PA promotion. Further prospective studies covering all age cohorts should be developed to analyze the association between PA, prolonged SB and the risk of developing NCDs from youth to older age; they would allow more accurate economic analyses of prolonged SB. Further studies should also investigate the economic consequences of prolonged SB in specific population groups, such as people in disadvantaged socio-economic conditions, in order to help policy-makers target their policies to reduce SB.
Supplementary data
Supplementary data are available at EURPUB online.
Funding
This research was supported by the French Ministry of Sport.
Conflicts of interest: None declared.
Key points
• Each year, many deaths might be avoided if the prevalence of prolonged sedentary behaviour (SB) were reduced.
• In France, the direct and indirect economic consequences of prolonged SB cost more than 1 billion € in 2016.
• These results can help policy-makers and employers in their decisions to invest in policies aiming to reduce prolonged SB.
Honeycomb‐Like Magnetosheath Structure Formed by Jets: Three‐Dimensional Global Hybrid Simulations
Magnetosheath jets with enhanced dynamic pressure are common in the Earth's magnetosheath. They can impact the magnetopause, causing deformation of the magnetopause. Here we investigate the 3‐D structure of magnetosheath jets using a realistic‐scale, 3‐D global hybrid simulation. The magnetosheath has an overall honeycomb‐like 3‐D structure, where the magnetosheath jets with increased dynamic pressure surround the regions of decreased dynamic pressure resembling honeycomb cells. The magnetosheath jets downstream of the bow shock region with θBn ≲ 20° (where θBn is the angle between the upstream magnetic field and the shock normal) propagate approximately along the normal direction of the magnetopause, while those downstream of the bow shock region with θBn ≳ 20° propagate almost tangential to the magnetopause. Therefore, some magnetosheath jets formed at the quasi‐parallel shock region can propagate to the magnetosheath downstream of the quasi‐perpendicular shock region.
Introduction
The interaction between the super-magnetosonic solar wind and the Earth's magnetic field forms the magnetosphere, whose outer boundary is the magnetopause. The magnetosheath is located between the magnetopause and the bow shock that decelerates the solar wind from super-magnetosonic to sub-magnetosonic (Fairfield, 1971; Peredo et al., 1995). According to the angle (θ_Bn) between the shock normal direction and the interplanetary magnetic field (IMF), the bow shock is categorized into quasi-perpendicular (θ_Bn ≳ 45°) and quasi-parallel (θ_Bn ≲ 45°). Solar wind particles reflected by the quasi-parallel shock can travel far upstream along the magnetic field lines, generating ion beam instabilities that excite ultra-low-frequency (ULF) waves (Hao et al., 2021; Lembege et al., 2004; Lu et al., 2020; Omidi, 2007; Quest, 1988; Su et al., 2012; Wu et al., 2015). These waves are carried to the magnetosheath by the solar wind and cause turbulence therein.
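As a concrete illustration of the θ_Bn classification used throughout the paper, the hedged sketch below computes the angle between the upstream magnetic field and a local shock-normal vector and labels the shock; in practice the normal would come from a bow-shock model or from the simulation itself, and the example vectors are only illustrative.

```python
import numpy as np

def theta_bn(b_upstream, shock_normal):
    """Angle (degrees, 0-90) between the IMF and the local shock normal."""
    b = np.asarray(b_upstream, float)
    n = np.asarray(shock_normal, float)
    cosang = abs(np.dot(b, n)) / (np.linalg.norm(b) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

def classify_shock(b_upstream, shock_normal, threshold=45.0):
    """Quasi-parallel if theta_Bn < threshold, quasi-perpendicular otherwise."""
    return ("quasi-parallel" if theta_bn(b_upstream, shock_normal) < threshold
            else "quasi-perpendicular")

# Example with a nearly radial IMF and a subsolar (x-aligned) shock normal
print(classify_shock([3.72, 0.13, 0.21], [1.0, 0.0, 0.0]))   # -> quasi-parallel
```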
Magnetosheath jets, also known as high-speed jets, were first reported by Němeček et al. (1998) at Earth and are frequently observed in the magnetosheath downstream of the quasi-parallel shock (Archer et al., 2012; Plaschke et al., 2013). Within the magnetosheath jets, the dynamic pressure increases and the ion velocity often exceeds the local Alfvén speed, while the plasma temperature is lower and more isotropic than in the surroundings (Archer & Horbury, 2013; Plaschke et al., 2013). It is well known that magnetosheath jets can be formed by the interaction between upstream waves and the quasi-parallel bow shock (Hietala et al., 2009; Palmroth et al., 2018; Raptis et al., 2022; Ren et al., 2023; Suni et al., 2021). Magnetosheath jets can drive bow waves inside the magnetosheath (Hietala et al., 2009; Liu et al., 2019; Ren et al., 2024), and impact the magnetopause to trigger localized magnetopause indentation (Shue et al., 2009; Yang et al., 2024), reconnection (Hietala et al., 2018), and magnetopause surface waves (Archer et al., 2019). Magnetosheath jets are more likely to reach the magnetopause when the IMF is quasi-radial (LaMoury et al., 2021). Guo et al. (2022) suggested that the alignment between the IMF and the solar wind velocity favors the formation of large magnetosheath jets. Further, Ren et al. (2023) found that large magnetosheath jets form when upstream compressive structures continuously interact with the bow shock at specific regions. These large magnetosheath jets transport more mass and energy from the solar wind and thus have a more significant influence on the Earth's magnetosphere (Plaschke et al., 2016).
To statistically analyze the scale sizes of magnetosheath jets, Plaschke et al. (2020) assumed the magnetosheath jets to be cylinder-like, with axial directions parallel to their propagation directions. They suggested that magnetosheath jets have median scale sizes of 0.12 R_E and 0.15 R_E in the parallel and perpendicular directions, respectively. Using multi-spacecraft observations, Karlsson et al. (2012) found that plasmoids, which are structures related to magnetosheath jets, have pancake-like structures ("flattened flux tubes") with one dimension shorter than the others. Omelchenko et al. (2021) also demonstrated pancake-like jets using three-dimensional (3-D) global hybrid simulations, where the jets have three characteristic sizes: 4 R_E in the parallel direction, 6 R_E in the dawn-dusk direction, and 0.6 R_E in the north-south direction. However, their simulations reduce the scale of the Earth's magnetosphere, which may lead to unrealistic results.
In this study, we conducted a realistic-scale, 3-D global hybrid simulation to demonstrate that the magnetosheath has a honeycomb-like 3-D structure in which jets with increased dynamic pressure surround magnetosheath cavities with decreased dynamic pressure (Guo et al., 2022; Katırcıoğlu et al., 2009; Omidi et al., 2016). Our results also indicate that magnetosheath jets formed downstream of the quasi-parallel shock can propagate to the magnetosheath downstream of the quasi-perpendicular shock, which may be a source of jets downstream of the quasi-perpendicular shock.
Simulation Model
This study utilizes a three-dimensional global hybrid simulation model (Lin & Wang, 2005). In hybrid simulations, ions are treated as particles while electrons are treated as a massless, charge-neutralizing fluid. The displacement current is neglected, the electric field is solved through Ohm's law, and the magnetic field is advanced by Faraday's law. The simulation is performed in a spherical coordinate system (r, θ, φ), encompassing a simulation domain of geocentric distance 3 R_E ≤ r ≤ 30 R_E, polar angle 10° ≤ θ ≤ 190°, and azimuth angle 20° ≤ φ ≤ 160°. The simulation grid consists of N_r × N_θ × N_φ = 720 × 420 × 540 cells. Within the inner magnetosphere (r ≤ 6.5 R_E), a cold, incompressible ion fluid is filled in to represent the plasmasphere.
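For readers unfamiliar with the hybrid field solve, the sketch below shows a minimal, generalized Ohm's law evaluation for a massless electron fluid on a uniform Cartesian grid. It is only an illustration of the idea: the actual model works on a nonuniform spherical grid and includes additional physics (e.g. the current-dependent anomalous resistivity mentioned below), so this is not the authors' implementation.

```python
import numpy as np

MU0 = 4e-7 * np.pi            # vacuum permeability (SI)
QE = 1.602176634e-19          # elementary charge (SI)

def electric_field(ui, B, n, pe, dx):
    """Generalized Ohm's law for a massless electron fluid (SI units):
        E = -ui x B + (J x B)/(e n) - grad(pe)/(e n),  with  J = curl(B)/mu0.
    ui, B : arrays of shape (3, nx, ny, nz); n, pe : (nx, ny, nz); dx : grid step.
    Uniform Cartesian grid and centred differences are simplifying assumptions."""
    def curl(F):
        dFx = np.gradient(F[0], dx, edge_order=2)
        dFy = np.gradient(F[1], dx, edge_order=2)
        dFz = np.gradient(F[2], dx, edge_order=2)
        return np.array([dFz[1] - dFy[2],
                         dFx[2] - dFz[0],
                         dFy[0] - dFx[1]])
    J = curl(B) / MU0
    grad_pe = np.array(np.gradient(pe, dx, edge_order=2))
    ne = QE * n
    return (-np.cross(ui, B, axis=0)
            + np.cross(J, B, axis=0) / ne
            - grad_pe / ne)
```

The magnetic field would then be advanced with Faraday's law, ∂B/∂t = −∇×E, using the same finite-difference curl.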
The grid spacing in the r direction is nonuniform, with Δr ≃ 0.02 R_E within 8 R_E ≤ r ≤ 14 R_E and larger elsewhere. This setup keeps a high resolution in the magnetosheath region while reducing the computational costs. Outflow (open) boundary conditions are applied at all boundaries for particles (fields), except for a conductive field boundary at the inner boundary (r = 3 R_E) and the injection of solar wind particles at the outer boundary (r = 30 R_E). The simulation results are presented in geocentric solar-magnetospheric (GSM) coordinates, with the x-axis pointing from the Earth's center to the Sun, the z-axis aligned with the Earth's dipole axis, and the y-axis completing the right-handed coordinate system. The simulation time step is Δt = 0.02 Ω_i^-1, where the ion gyrofrequency Ω_i is determined by the solar wind magnetic field. The initial state involves about 8 × 10^9 macroparticles, and a small, current-dependent collision frequency is used to simulate anomalous resistivity and trigger magnetic reconnection. In the solar wind, the plasma number density is N_i = 3.2 cm^-3, and the magnetic field is B = (3.72, 0.13, 0.21) nT. The plasma beta values for ions and electrons are β_i = β_e = 0.22, and the solar wind velocity is V_SW = (-466.48, 12.86, 14.31) km/s. The Alfvén Mach number is therefore M_A = 12.27. In our simulation, for the first time, a realistic magnetosphere scale is used, where 1 R_E = 50 d_i0, with d_i0 representing the ion inertial length in the solar wind. Additionally, to study the effect of reducing the magnetosphere scale on the 3-D structure of magnetosheath jets, another simulation is performed with a reduced scale where 1 R_E = 10 d_i0 (5 times smaller than reality), while the other parameters are kept identical to those of the realistic-scale case described above. Because the Alfvén speed, which sets the evolution rate of kinetic effects, is larger relative to the magnetosphere scale size in the reduced-scale case, the magnetosphere evolves faster than in the realistic-scale case. For a direct comparison of the two cases, the simulation time in the reduced-scale case is presented multiplied by a factor of 5.
Simulation Results
An overview of the realistic-scale case under a radial IMF is shown in Figure 1. The bow shock results from the interaction between the solar wind and the geomagnetic field, with the magnetosheath situated downstream of the bow shock. The bow shock is quasi-parallel and rippled around the subsolar region, while it is quasi-perpendicular around the flank region. Downstream of both the quasi-parallel and quasi-perpendicular bow shock, many magnetosheath jets with high dynamic pressure are observed, and some of them impact and dent the magnetopause. In Figure 1b, elliptical magnetosheath cavities with low dynamic pressure are surrounded by magnetosheath jets, illustrating an overall honeycomb-like 3-D structure in the magnetosheath, where the cavities and jets resemble honeycomb cells and their edges. Once the jets form downstream of the quasi-parallel shock, they propagate toward the flank region along the magnetosheath plasma flow.
The magnetosheath jets downstream of the bow shock with θ_Bn < 20° are shown in Figure 2. Figures 2b-2e plot the dynamic pressure and ion temperature of one magnetosheath jet ("J1") at t = 671.5 s in two perpendicular slices (pink and cyan slices in Figure 2a). Inside "J1," the ion temperature is decreased and more isotropic than in the surroundings, because the solar wind plasma inside magnetosheath jets is less heated by the bow shock. The parallel scale size of "J1" is about 3 R_E, and its perpendicular scale sizes are about 3 R_E and 0.4 R_E in the pink and cyan slices, respectively. This indicates a pancake-like localized structure for "J1," formed at the shared edge of two magnetosheath cavities (Figure 1b). Additionally, some magnetosheath jets are located at the vertices of multiple magnetosheath cavities, and their localized 3-D structures are approximately cylinder-like (Figure 1b). "J1" impacts the magnetopause along the normal direction of the magnetopause, causing a localized magnetopause indentation. The magnetopause is dented more severely as "J1" continuously compresses it at later times, and it expands back beyond its original location afterwards (not shown), as suggested by Němeček et al. (2023). Moreover, "J1" is meandering in the cyan slice (Figures 2c and 2e), which may be caused by the Kelvin-Helmholtz instability (Guo et al., 2022).
Figure 3 shows magnetosheath jets downstream of the bow shock with 20° < θ_Bn < 45°. The magnetosheath jet "J2" forms at the bow shock where θ_Bn is about 20°, and propagates toward the flank region along the background plasma flow. Similar to "J1," the ion temperature inside "J2" also decreases (Figures 3d and 3e). The parallel scale size of "J2" is about 9 R_E and its perpendicular scale sizes are about 2.8 R_E and 1 R_E, indicating a ribbon-like localized 3-D structure. Within the honeycomb-like magnetosheath structure, the jets in the realistic-scale simulation can thus have localized structures different from the simple cylinder or pancake shapes suggested by previous studies (Omelchenko et al., 2021; Plaschke et al., 2016, 2018). "J2" propagates almost tangentially to the magnetopause, and there is almost no inward pressure exerted on the magnetopause by "J2" (Figure 3f). Therefore, no obvious magnetopause indentation is caused. Magnetosheath jets like "J2" that form downstream of the bow shock with larger θ_Bn (>20° in our simulation) may have minor effects on the magnetopause. Some magnetosheath jets can propagate long distances and even enter the magnetosheath downstream of the quasi-perpendicular bow shock. Figure 4 shows some magnetosheath jets downstream of the bow shock with θ_Bn > 45°. The magnetosheath jets "J3"-"J5" are identified by a dynamic pressure exceeding 1.5 times the magnetosheath background value. "J3"-"J5" also have ribbon-like localized structures and decreased temperature (Figures 4f-4i), but are weaker than those downstream of the quasi-parallel bow shock. At t = 559.6 s (Figures 4b and 4f), "J3" is located in the magnetosheath downstream of the quasi-parallel bow shock. "J3" then propagates along the magnetosheath plasma flow (Figures 4c and 4d) and enters the magnetosheath downstream of the quasi-perpendicular bow shock. At t = 559.6 s (Figure 4b), the magnetosheath jets "J4" and "J5" are formed at the bow shock where θ_Bn is about 45°. "J4" and "J5" also propagate long distances (about 10 R_E and 5 R_E, respectively) in the magnetosheath and have entered the magnetosheath downstream of the quasi-perpendicular bow shock by t = 671.5 s.
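For reference, the jet-identification criterion used above (dynamic pressure exceeding a multiple of the local magnetosheath background) can be sketched as follows; how the background is defined (e.g. a running spatial median) and which threshold is applied are choices left to the user here.

```python
import numpy as np

PROTON_MASS = 1.6726219e-27   # kg

def dynamic_pressure(n, v):
    """P_d = rho * v^2 = n * m_p * |v|^2 (SI); n in m^-3, v as (..., 3) in m/s."""
    return n * PROTON_MASS * np.sum(np.asarray(v) ** 2, axis=-1)

def flag_jets(p_dyn, background, factor=2.0):
    """Boolean mask of jet cells: dynamic pressure above factor x background.
    factor=2 follows Archer & Horbury (2013), as in Figure 1; factor=1.5 is
    the looser threshold used for J3-J5 in the text."""
    return p_dyn > factor * np.asarray(background)
```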
Discussion
Although magnetosheath jets have been studied for over a decade, their 3-D structure has remained under debate (Karlsson et al., 2012; Omelchenko et al., 2021; Plaschke et al., 2016). By performing a 3-D global hybrid simulation with realistic scale, we find that magnetosheath jets and cavities form an overall honeycomb-like 3-D magnetosheath structure, while the localized 3-D structure of the jets can be pancake-like, cylinder-like, ribbon-like, etc. To demonstrate the significance of the realistic scaling, a reduced-scale simulation with 1 R_E = 10 d_i0 is also performed for comparison. Figure 5 shows the magnetosheath jets in the reduced-scale case at t = 840.2 s. There are only 3 jet-surrounded magnetosheath cavities in this case (Figure 5a), while there are about 16 in the realistic-scale case (Figure 1b). The maximum diameters of the magnetosheath cavities are about 8 R_E and 3 R_E in the reduced-scale case and the realistic-scale one, respectively. Moreover, in Figures 5b-5e, the parallel size of "J1′" is about 4 R_E, with perpendicular sizes of about 5.2 R_E and 0.8 R_E. Therefore, in a reduced-scale case, like the ones performed by Omelchenko et al. (2021), the numbers of magnetosheath jets and cavities are underestimated, while their scale sizes are overestimated.
The small and numerous jets in the realistic-scale case can increase turbulence in the magnetosheath and lead to more magnetopause indentations. Moreover, jets with ribbon-like localized structures downstream of the bow shock with θ_Bn ≳ 20° are difficult to identify in the reduced-scale case. Only in the realistic-scale case do we find jets formed downstream of the quasi-parallel shock that propagate downstream of the quasi-perpendicular shock. Therefore, global simulations with a reduced-scale magnetosphere may not effectively capture physical processes related to the bow shock. The scale sizes of many foreshock structures, such as spontaneous hot flow anomalies (Omidi et al., 2013; Zhang et al., 2013), magnetosheath cavities (Guo et al., 2022; Katırcıoğlu et al., 2009; Omidi et al., 2016), and foreshock bubbles (C. Wang et al., 2021; B. Wang et al., 2020), may be related to the scale size of the magnetosphere. Using a 3-D global hybrid simulation with a reduced-scale magnetosphere (1 R_E = 12 d_i0), Ng et al. (2023) showed that kinetic structures like foreshock cavitons can be seen through soft X-ray imaging. However, it is possible that the scale size of the foreshock cavitons was overestimated in their simulation results.
Conclusions
In this study, we investigate the 3-D structure of magnetosheath jets using a realistic-scale, 3-D global hybrid simulation. The magnetosheath has an overall honeycomb-like 3-D structure, where the magnetosheath jets surround magnetosheath cavities like honeycomb cells. The magnetosheath jets downstream of the bow shock with θ Bn ≲ 20° propagate approximately along the normal direction of the magnetopause, while those downstream of the bow shock with θ Bn ≳ 20° propagate almost tangentially to the magnetopause. Moreover, some magnetosheath jets formed downstream of the quasi-parallel shock can propagate to the magnetosheath downstream of the quasi-perpendicular shock and become a source of jets therein, which is shown in the realistic-scale simulation but not in the reduced-scale one. Our results highlight the necessity of realistic-scale simulation models to study structures related to the bow shock.
Figure 1. Overview of the honeycomb-like magnetosheath structure driven by jets at t = 671.5 s. (a) Dynamic pressure P d in the noon-midnight meridian plane. The gray surface is the magnetopause identified by the boundary of open/closed magnetic field lines. The pink contour indicates where the dynamic pressure is two times the background value in the magnetosheath, as defined in Archer and Horbury (2013). (b) Dynamic pressure in the magnetosheath. The white and red curves indicate the bow shock with θ Bn = 20° and 45°, respectively.
Figure 2. Magnetosheath jets downstream of the bow shock with θ Bn < 20° at t = 671.5 s. (a) 3-D view of the dynamic pressure P d of the jets. The white and red curves indicate the bow shock with θ Bn = 20° and 45°, respectively. The origin of both the pink and cyan slices is (12.8, 0.77, 2.07) R E, while their normal directions are (0.07, 0.697, 0.713) and (0.153, 0.715, 0.683), respectively. (b-c) Dynamic pressure P d in the pink slice (b) and cyan slice (c). (d-e) Ion temperature T i in the pink (d) and cyan (e) slices. The solid white curves indicate the magnetopause, and the dashed white curves indicate the bow shock.
Figure 3. Magnetosheath jets downstream of the bow shock with 20° < θ Bn < 45° at t = 671.5 s. (a) 3-D view of the jets. The origin of both the pink and cyan slices is (10.6, 6.05, 4.56) R E, while their normal directions are (0.050, 0.656, 0.753) and (0.580, 0.594, 0.557), respectively. (b-c) Dynamic pressure P d, (d-e) ion temperature T i, and (f-g) r component of the dynamic pressure P d,r. The variables are taken from the pink (b, d, f) and cyan (c, e, g) slices. The solid white curves indicate the magnetopause, and the dashed white curves indicate the bow shock.
Figure 4. Magnetosheath jets downstream of the bow shock with θ Bn > 45°. (a) 3-D view of the jets at t = 671.5 s. The origin of the pink slice is (4.85, 6.09, 12.6) R E, and its normal direction is (0.143, 0.911, 0.386). (b-e) Dynamic pressure P d in the pink slice at t = 559.6 s, 615.5 s, 643.5 s, and 671.5 s. (f-i) Ion temperature T i in the pink slice at the same times. The solid white curves indicate the magnetopause, and the dashed white curves indicate the bow shock.
Figure 5. Magnetosheath jets at t = 840.2 s obtained from the reduced-scale case where 1 R E = 10 d i0. (a) Dynamic pressure P d in the magnetosheath. The white and red curves indicate the bow shock with θ Bn = 20° and 45°, respectively. (b-c) Dynamic pressure in the equatorial plane (b) and noon-midnight meridian plane (c). The solid white curves indicate the magnetopause, and the dashed white curves indicate the bow shock.
"Physics",
"Environmental Science"
] |
A broadband, self-powered, and polarization-sensitive PdSe 2 photodetector based on asymmetric van der Waals contacts
Self-powered photodetectors with broadband and polarization-sensitive photoresponse are desirable for many important applications such as wearable electronic devices and wireless communication systems. Recently, two-dimensional (2D) materials have been demonstrated as promising candidates for self-powered photodetectors owing to their advantages in light-matter interaction, transport, electronic properties, and so on. However, their performance in speed, broadband response, and multifunction is still limited. Here, we report a PdSe 2 photodetector with asymmetric van der Waals (vdWs) contacts formed by using a homojunction configuration. This device achieves a high responsivity approaching 53 mA/W, a rise/decay time of about 0.72 ms/0.24 ms, and a detectivity of more than 5.17 × 10 11 Jones in the visible-near infrared regime (532-1470 nm). In addition, a linear polarization-sensitive response can be observed with an anisotropy ratio of 1.11 at 532 nm and 1.62 at 1064 nm. Furthermore, the strong anisotropic response endows this photodetector with the capability for contrast-enhanced polarization imaging.
Introduction
Self-powered photodetectors are a key requirement for realizing low power consumption in integrated systems [1]. Traditional photodetectors require a high external bias to obtain a detectable photocurrent; thus, their performance in suppressing dark current, eliminating circuit noise, and low-power operation is usually limited [2][3][4]. Generally, several methods can be considered for constructing self-powered photodetectors: (i) utilizing the photovoltaic effect in p-n junction (including heterojunction and homojunction) photodetectors [5][6][7]; (ii) generating photovoltaic signals in a Schottky junction [8,9]; (iii) using a temperature gradient to drive the carriers based on the photothermoelectric (PTE) effect [10][11][12]; (iv) designing self-powered photodetectors based on ferroelectric materials assisted by spontaneous polarization [13][14][15]. On the other hand, different mechanisms lead to different photoresponse speeds, which are important for practical applications like optical communication, imaging systems, and high-speed optical chips [15][16][17]. In terms of speed, a device based on a Schottky junction usually responds faster than other types of phototransistors, in which defect-induced minority-carrier trapping slows the response [18]. Various methods can be applied to enhance the self-powered and high-speed properties of a photodetector. Among them, coupling a basic device with asymmetric contacts has received much attention owing to its great potential for achieving self-powered photodetectors with a large difference between the two Schottky contacts. It has been shown that such photodetectors exhibit high performance in their electric and optoelectrical properties [19,20].
Self-powered photodetectors based on a heterojunction structure usually require complicated fabrication processes, which may limit their further applications [21,22]. By forming two different contacts at the two terminals of the active material, asymmetric contacts for photodetectors can be realized, which exhibit outstanding properties such as low power consumption, broadband detection, and high responsivity [23][24][25][26]. Different metal electrodes with different work functions can construct Schottky barriers of different heights at the two terminals, thus generating a self-driven net photocurrent. In addition, the geometry of the contacts at the two terminals can also be made asymmetric to produce a self-powered photodetector [22]. Furthermore, electrodes are not necessarily limited to metals; other 2D materials like graphene, with extraordinary electric and transport properties, are promising candidates and can also be assembled as electrodes. Self-powered photodetectors based on this structure exhibit high performance in photodetection [10]. However, the traditional methods for constructing asymmetric contacts, like sputtering and evaporation, are strongly affected by the Fermi level pinning effect, which makes it difficult to form an asymmetric contact with a strong photovoltaic effect at the interface [27]. Recently, van der Waals (vdWs) contacts, as a new fabrication method to suppress the Fermi level pinning effect, have received great interest. 2D materials can form vdWs contacts with metals or 2D semimetals through vdWs forces owing to their pristine interfaces free of dangling bonds. Recently, extensive research efforts have been devoted to investigating the unique electrical and optoelectrical properties of various 2D materials, which are promising for light-matter interaction [28,29], transport [30], and electronics [31]. Transition-metal dichalcogenide (TMD) materials, with their layered crystal structure [32], strong anisotropic absorption [33,34], and tunable bandgap [35,36], appear to be promising candidates for fabricating advanced photodetectors. To date, various TMD materials have been studied to enhance the photoresponse of photodetectors, such as MoS 2 [37,38], WSe 2 [22,39], and WS 2 [40,41]. As an emerging TMD material, PdSe 2 has strong interaction between the material layers [42], an indirect energy band structure [43], and outstanding anisotropic properties [35]. Similar to other TMD materials, the band gap of PdSe 2 nanoflakes is strongly influenced by the material thickness [44,45]. However, an important difference between PdSe 2 and many other 2D materials is that PdSe 2 has a tunable bandgap ranging from 0 eV for the bulk to 1.3 eV for the monolayer, which means that PdSe 2 can be tuned from semiconducting to semimetallic by controlling the nanoflake thickness [46,47]. This paves a new way to construct unique vdWs contacts between semiconductors and semimetals.
Herein, a PdSe 2 photodetector with asymmetric vdWs contacts was successfully fabricated by using a PdSe 2 homojunction and a bottom Au electrode. Leveraging the intrinsic interlayer vdWs force, one vdWs contact is formed at the interface between the thick and thin PdSe 2 layers. The other vdWs contact is formed at the interface between the thin PdSe 2 layer and the bottom Au electrode. Due to the different work functions of Au and the semimetallic thick PdSe 2 flake, asymmetric Schottky barriers were achieved at the two vdWs contacts at the two terminals of the thin PdSe 2 region. Under global illumination, this PdSe 2 photodetector achieves high-performance photodetection at zero bias. Significantly, broadband detection from the visible to the near-infrared regime (532 nm-1470 nm), a fast response speed with a rise/decay time of 0.72 ms/0.24 ms, a high responsivity (53 mA/W at 730 nm), and a high detectivity of over 5.17 × 10 11 Jones were achieved under zero bias in this PdSe 2 photodetector. Its anisotropy sensitivity was verified using lasers with different wavelengths (532 nm and 1064 nm), demonstrating promising polarized-light detection ability, with an anisotropy ratio reaching 1.11 at 532 nm and 1.62 at 1064 nm. In particular, polarization imaging with a contrast-enhanced degree of linear polarization (DoLP) was demonstrated with this device, showing excellent polarization imaging capabilities.
Results and discussion
Device architecture of the PdSe 2 photodetector. As illustrated in Figure 1(a), the PdSe 2 nanoflake composed of a thin region and a thick region was fabricated on a Si/SiO 2 substrate through mechanical exfoliation. For the thick region of the nanoflake with multilayer PdSe 2, a layered crystal structure can be observed, with a strong vdWs force combining the layers [48]. Here, the thin PdSe 2 nanoflake works as the active material, which is asymmetrically vdWs contacted with a thick PdSe 2 flake and a bottom Au electrode (10 nm Ti/50 nm Au). The optical microscope image of the fabricated photodetector with asymmetric vdWs contacts is presented in Figure 1(b). The apparent color difference indicates the coexistence of thin and thick regions in this nonuniform-thickness PdSe 2 nanoflake. In addition, Raman spectrum measurements were carried out to study the thickness of the nonuniform PdSe 2 flake. As shown in Figure 1(c), four Raman intensity peaks of the thin PdSe 2 (red line) and thick PdSe 2 (blue line) were observed, corresponding to the A 1 g , A 2 g , B 2 1g , and A 3 g Raman modes. It can be noticed that all Raman peaks of the thin PdSe 2 region are blue-shifted (deviate slightly to higher frequency) compared with the thick region, which can be ascribed to the strong interlayer coupling of the PdSe 2 nanoflake and the special layer hybridization [49]. The thin and thick PdSe 2 have different bond lengths between the atoms in each layer, leading to a slight modification of the vibration modes, so the blue shift in the Raman spectra verifies the coexistence of thin and thick regions in this PdSe 2 nanoflake [50]. Furthermore, the thickness of the nonuniform PdSe 2 nanoflake was determined exactly using atomic force microscopy (AFM), as presented in Figure 1(d). The line profiles along the dashed green and orange arrows were utilized to measure the thicknesses of the two different regions of the nonuniform PdSe 2 nanoflake, and the measured thicknesses are 6.5 nm (about 10 layers) and 98.3 nm for the thin and thick regions, respectively (Figure 1(e)).
Photoresponse mechanisms of the PdSe 2 photodetector. To figure out the photoresponse mechanism of the PdSe 2 photodetector with asymmetric vdWs contacts, local laser-induced photocurrent mapping was carried out, as presented in Figure 2(a). The distribution of photocurrent (I ph) excited by 532 nm laser radiation is illustrated at zero bias. The black dashed line indicates the interface between the thick and thin PdSe 2 nanoflakes. As shown in Figure 2(a), positive photocurrents appear in the contact region of the thin PdSe 2 region and the Au electrode, while negative photocurrents are generated in the lateral homojunction region between the thin and the thick PdSe 2 nanoflakes. In addition, a line profile of the photocurrent along the green dashed arrow in the device channel is extracted and shown in Figure 2(b). Points A and B correspond to the largest positive and negative photocurrents, as marked in Figure 2(a). The photocurrent near the "thin PdSe 2-Au contact" region (point A) has a much higher absolute value than that in the "thin PdSe 2-thick PdSe 2 contact" region (point B). As a result, under global illumination there is a positive net photocurrent because of the dominant contribution from the Schottky region between the thin PdSe 2 and the Au electrode. This photoresponse mechanism in our device is different from that of previously reported thickness-based lateral heterojunction photodetectors [51]. To clarify the photoresponse mechanism of the photodetector based on asymmetric vdWs contacts, the energy band diagrams were calculated and analyzed (Supplementary Note 1 and Figure S1a). Based on the theoretical bandgap and the reported experimental parameters of PdSe 2, the bulk PdSe 2 can be treated as a semimetal with a near-zero bandgap (∼0.03 eV); thus, an Ohmic contact is realized between the Au electrode and the thick PdSe 2 [52]. The absence of a potential barrier in this region means that no separation of electron-hole pairs occurs there and it does not contribute to the final output photocurrent, which can be observed directly in Figure 2(a). In contrast to the near-zero bandgap of the bulk PdSe 2, the 10-layer PdSe 2 has a bandgap of about 0.8 eV [53]. The Schottky barrier height influences the efficiency of the photoresponse based on the photovoltaic effect [54,55]. The band diagram of the 10-layer and bulk PdSe 2 was calculated after contact equilibrium (Figure S1b). After the contact, a built-in potential (∼0.41 eV) is formed between the electrode and the 10-layer PdSe 2, while a smaller built-in potential (∼0.38 eV) is generated between the 10-layer and bulk PdSe 2 (see Supplementary Information Note 1). In addition, to better understand the response origin of this photodetector with asymmetric vdWs contacts, the energy band diagram of this PdSe 2 device was simulated, as presented in Figure 2(c). Two asymmetric Schottky barriers are formed at the two asymmetric vdWs contacts owing to the different work functions of Au and the thick semimetallic PdSe 2 nanoflake. Moreover, the simulated electric field along the channel of the PdSe 2 device is depicted in Figure 2(d). Two electric field peaks can be observed, in the "Au-thin PdSe 2" contact region and the "thin-thick PdSe 2" contact region. The higher value (4 × 10 5 V cm −1) of the electric field at the "Au-thin PdSe 2" interface arises from the higher built-in potential discussed above. Once the local laser illuminates the "thin PdSe 2-Au" contact region, a strong photocurrent can be collected, which
indicates that a separation of the electron-hole pairs happens in this region. The electrons then diffuse to the left electrode and are collected. When the laser is localized on the "thin-thick PdSe 2" contact region, there is also a separation of electron-hole pairs, and the electrons diffuse to the thick PdSe 2 region. Therefore, a reversed and weaker photocurrent can be measured, which matches the experimental results in Figure 2(a). According to this band picture, the detailed working mechanism can be attributed to the asymmetric photovoltaic effect originating from the asymmetric vdWs contacts, which is dominated by the Schottky junction between the Au electrode and the thin PdSe 2. This enables the photodetector to realize a fast photoresponse with zero applied bias. In addition, a PdSe 2 /PdSe 2 homojunction device (Figure S7a) was fabricated and tested under a 532 nm laser, with no apparent photocurrent generated (Figure S7b). This further confirms that the dominant photoresponse should be ascribed to the photovoltaic effect in the contact region between the thin PdSe 2 and the Au electrode.
It is also notable that the PTE effect can be excluded as the origin of the photoresponse in this device, since an appreciable photocurrent appears only when the laser illuminates the interfaces of the junction regions. This is a typical feature of the photovoltaic effect, whereas a PTE current would be generated over a larger area, not limited to the junction regions [56]. We can also exclude a photocurrent generated in the thin PdSe 2 between the two contact regions, because the main photoresponse mechanism in an Au/thin PdSe 2 /Au device has been shown to be the photoconductive effect, which requires an external bias to generate a photocurrent [3]. The strong photocurrent generated at zero bias between the electrodes further rules out the photoconductive effect in this device, since a source-drain voltage is necessary for photoconductive photodetectors to produce an output photocurrent.
Broadband photodetection characterization of the PdSe 2 photodetector. To further evaluate the broadband photodetection performance of this PdSe 2 photodetector, a systematic investigation was carried out using lasers with different wavelengths ranging from the visible (532 nm) to the near-infrared (1470 nm). All the measurements were carried out at room temperature. It is noticeable that the diameters of the laser spots at the various wavelengths are all much larger than the size of the device. This means that global illumination was realized during the whole testing procedure, with the final output signal being a combination of the photoresponse in the asymmetric vdWs contact regions.
The I-V curves of this PdSe 2 photodetector are shown in Figure 3(a), measured in the dark and under a 532 nm laser with incident power changing from 0.39 nW to 6.12 nW. The linearity of the I-V curves demonstrates that a low barrier exists in both the "electrode-thin PdSe 2" contact region and the "thin PdSe 2-thick PdSe 2" contact region. Besides, a slight rectification characteristic can be observed, indicating the asymmetric Schottky barriers in this device (Figure S2, Supplementary Information), which is consistent with the analysis of the energy band diagram and the photoresponse mechanism (Figure 2). Then, the temporally resolved photoresponses of the device at zero bias were measured, as shown in Figure 3(b). During the test, a chopper was utilized to modulate the on/off states of the laser with a period of 5 s. As presented in the curve, a repeatable photoresponse that increases with light power can be observed. Based on the experimental results, I ph, the responsivity (R), and the specific detectivity (D *) can be calculated by the formulas R = I ph /P and D * = R A 1/2 /(2qI dark ) 1/2 , where I ph indicates the photocurrent, P represents the effective light power at the device, A is the active area of the photodetector, q is the elementary charge, and I dark is the dark current. The calculated photocurrent increases almost linearly with the light power, and the responsivity changes from 38 to 43 mA/W, as shown in Figure 3(c). The relationship between incident light and photocurrent can be fitted by the power law I ph ∝ P α , where P is the actual incident power illuminating the photodetector and α is the ideality factor; fitting the photocurrent against the incident power gives α = 0.99. Based on the responsivity R = I ph /P, an almost constant responsivity can therefore be expected with increasing incident power. It is noticeable that we measured the photoresponse in a small power range that lies within the linear dynamic range (LDR) of the photodetector, so a linear relationship between photocurrent and incident laser power is demonstrated; as the laser power goes beyond the LDR, a strongly power-dependent R would be measured [57,58]. Figure 3(d) shows a detailed measurement of the rise/decay time. By fitting the time-resolved photoresponse with an exponential decay function, a rise/decay time of 0.72 ms/0.24 ms is obtained directly. The result demonstrates a fast response speed in this PdSe 2 photodetector, ascribed to the photovoltaic effect, which is comparable to traditional 2D materials-based photodetectors [3,26,59].
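As a concrete illustration of these figures of merit, the short Python sketch below computes R and the shot-noise-limited D * from a photocurrent-versus-power series and extracts the power-law exponent α from a log-log fit. The numerical values (device area and photocurrent series) are placeholders, not the measured data of this device.

import numpy as np

q = 1.602e-19        # elementary charge [C]
A = 25e-8            # assumed active area [cm^2] (~25 um^2; placeholder)
I_dark = 1.54e-11    # dark current quoted in the text [A]

# Placeholder photocurrent vs. incident power (illustrative values only)
P = np.array([0.39, 1.2, 2.5, 4.0, 6.12]) * 1e-9             # [W]
I_ph = np.array([0.015, 0.048, 0.10, 0.16, 0.245]) * 1e-9    # [A]

R = I_ph / P                                       # responsivity [A/W]
D_star = R * np.sqrt(A) / np.sqrt(2 * q * I_dark)  # shot-noise-limited detectivity [Jones]

# Power-law fit I_ph ~ P**alpha from a straight-line fit in log-log space
alpha, _ = np.polyfit(np.log(P), np.log(I_ph), 1)
print("R [mA/W]:", np.round(R * 1e3, 1))
print("D* [Jones]:", D_star)
print("alpha:", round(alpha, 2))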
To further characterize the broadband photodetection capability of this photodetector, another laser source with a wavelength of 1064 nm was utilized as the light source, as shown in Figure S3, Supporting Information. The I-V curves of the photodetector under a 1064 nm laser with incident power ranging from 8.8 nW to 52.3 nW are presented in Figure S3a, showing a linear characteristic similar to the result measured under a 532 nm laser. The photoresponse of the photodetector under pulsed laser radiation with powers from 8.8 nW to 52.3 nW was then collected, as illustrated in Figure S3b. With increasing incident light power, the corresponding photocurrent increases from 0.05 nA to 0.3 nA, presenting a nearly linear relation with the illumination power. The calculated photoresponsivity changes from 3.55 mA/W to 5.12 mA/W, demonstrating a high sensitivity to near-infrared illumination (Figure S3c). Besides, to verify the photoresponse of this device to light at other wavelengths, two laser sources with wavelengths of 730 nm and 1470 nm were also utilized to determine the sensitivity of this photodetector, as demonstrated in Figure S4, Supporting Information. The photoresponse under pulsed laser radiation is shown in Figure S4a (730 nm) and Figure S4b (1470 nm), with incident light powers of 48.9 nW and 33.4 nW, respectively. A strong photoresponse with a generated photocurrent of 0.65 nA was measured under the 730 nm light, showing the strong ability of this photodetector to detect visible light.
A photocurrent of ∼25 pA could also be collected under 1470 nm laser illumination, demonstrating the broadband spectral response of this photodetector.
Based on the photoresponse investigation of this photodetector, the calculated R and D * are presented in Figure 3(e), as a function of the illumination wavelength.
The peak responsivity of 53 mA W −1 and the peak detectivity of 5.17 × 10 11 Jones are both achieved at a wavelength of 730 nm. However, other types of noise besides the shot noise from the dark current contribute to the total noise of the photodetector, such as thermal noise and 1/f noise. In order to further study the noise in the device, we calculated the thermal noise in our photodetector based on the Johnson-Nyquist formula i thermal = (4k B T∆f/R) 1/2 , where k B is the Boltzmann constant, T is the temperature, ∆f is the bandwidth, and R is the resistance of the device. The calculated thermal noise current is about 7.61 × 10 −14 A, roughly three orders of magnitude smaller than the dark current of 1.54 × 10 −11 A. We also measured the 1/f noise spectra in the low-frequency region, and the 1/f noise current proved to be much smaller than the dark current (Figure S8). Therefore, for simplicity, the shot noise from the dark current is assumed to be the dominant contribution to the total noise in our device [60]. Compared with the performance of recently developed photodetectors based on the photovoltaic effect, this broadband PdSe 2 photodetector shows attractive and comparable photoresponse properties (see Supporting Information Table S1 for details) [3,26,46,54,59,[61][62][63][64]. Besides, the durability and stability of this photodetector were also investigated. As shown in Figure 3(f), a stable photoresponse over a long duration, from the freshly fabricated device to 3 months later, has been observed. We also conducted a repeated illumination test over 360 cycles based on a homemade testing system (Figure S5a, Supporting Information).
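The thermal-noise estimate above can be reproduced in a few lines. In the sketch below, the device resistance and measurement bandwidth are assumptions (neither value is quoted in the text); the resistance is simply chosen so that the computed current is close to the ∼7.6 × 10 −14 A quoted above.

import numpy as np

k_B = 1.381e-23      # Boltzmann constant [J/K]
T = 300.0            # temperature [K]
R = 2.9e6            # assumed device resistance [ohm]; placeholder chosen so that the
                     # computed value is close to the ~7.6e-14 A quoted in the text
df = 1.0             # assumed measurement bandwidth [Hz]
I_dark = 1.54e-11    # dark current quoted in the text [A]

i_thermal = np.sqrt(4 * k_B * T * df / R)   # Johnson-Nyquist thermal noise current
print(f"thermal noise ~ {i_thermal:.2e} A vs dark current {I_dark:.2e} A")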
A chopper with a fixed frequency was applied to control the on/off status of the illumination, and good repeatability of this photodetector was confirmed (see Supporting Information Figure S5b for details).
The polarization sensitivity of the PdSe 2 photodetector. 2D materials have recently been discovered to have attractive anisotropic properties [65,66], and photodetectors based on them have demonstrated high performance in polarized-light detection. Non-polarized photodetectors are typically sensitive only to the intensity and wavelength of the input light, whereas polarized photodetectors can directly detect the polarization of light in addition to its intensity and wavelength. Using polarization-sensitive photodetectors, the information and state of the polarized incident light can be distinguished and extracted [67,68]. Applications such as optical communications, optical switching, polarization sensing systems, and optical radar all depend on the ability of the photodetector to detect polarized light [69]. Among the different anisotropic 2D materials, PdSe 2, with its anisotropic crystal structure, can make the device sensitive to the polarization of the incident light. To identify the polarized-light response properties of this PdSe 2 photodetector, the I ph generated in this device under linearly polarized light was measured and collected. The measurements were carried out by changing the linear polarization angle of the incident light using a homemade polarization measurement setup, as illustrated in Figure 4(a). The parallel I-V curves in Figure 4(b) indicate that this PdSe 2 photodetector outputs different electrical signals for light with different polarization angles. A systematic investigation of the polarized photoresponse was then carried out. As presented in Figure 4(c) and (e), I ph was collected under incident lasers with wavelengths of 532 nm and 1064 nm, and a periodic change can be observed as the polarization angle varies. Besides, the polar plots of I ph under 532 nm and 1064 nm polarized light are shown in Figure 4(d) and (f), respectively. The polarization-dependent curves have a two-lobed shape and are fitted by the (a + b cos 2θ) function. The anisotropy ratio of the fitted ellipses is calculated to be 1.11 and 1.62 at 532 and 1064 nm, respectively. These results prove that this PdSe 2 device can achieve a broadband polarized-light response, which makes it possible to work in practical applications like polarization imaging. The discrepancy in the anisotropy ratio under the 532 nm and 1064 nm lasers can be explained by the fact that the polarized absorption of PdSe 2 is strongly affected by the incident wavelength, which leads to different anisotropy ratios under illumination at different wavelengths [70,71].
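The polar fits mentioned above follow the standard cos 2θ dependence of a linearly dichroic response. The sketch below, using synthetic data in place of the measured photocurrents, fits I ph (θ) = a + b cos 2(θ − θ0) and reports the anisotropy ratio I max /I min; all numerical values are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def dichroic(theta_deg, a, b, theta0_deg):
    # Polarization-angle dependence of the photocurrent: a + b*cos(2*(theta - theta0))
    t = np.deg2rad(theta_deg - theta0_deg)
    return a + b * np.cos(2.0 * t)

# Synthetic data standing in for the measured photocurrent vs. polarization angle
theta = np.arange(0, 360, 10)
rng = np.random.default_rng(0)
I_meas = dichroic(theta, 1.0, 0.24, 15.0) + 0.01 * rng.standard_normal(theta.size)

popt, _ = curve_fit(dichroic, theta, I_meas, p0=[1.0, 0.1, 0.0])
a, b, theta0 = popt
anisotropy_ratio = (a + abs(b)) / (a - abs(b))   # I_max / I_min of the fitted curve
print(f"anisotropy ratio = {anisotropy_ratio:.2f}, fast axis at {theta0:.1f} deg")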
Polarization imaging. Realistic imaging requires both high sensitivity and detectivity [72], and high-quality polarized imaging relies on the polarized photoresponse of photodetectors. The high sensitivity, fast photoresponse speed, and excellent polarization photoresponse demonstrated in this work give this device the potential to realize polarized imaging. Based on its outstanding polarized photoresponse, high-quality polarization imaging with an enhanced imaging contrast can be expected from this device. To further investigate the polarization imaging capabilities of this photodetector, a homemade imaging test setup was designed, as illustrated in Figure 5(a). The linearly polarized light directly illuminated the fabricated photodetector, the output photovoltage was collected and amplified by an amplifier, and the final polarized images were processed and presented on a computer. A metallic object with "NTU EEE" letters was utilized as the pattern, and the polarized imaging was acquired with the photodetector by changing the pattern location. The imaging results under linearly polarized light with polarization angles of 0°, 45°, 90°, and 135° were obtained, as shown in Figure S6a, Supporting Information. Then, the spatial distribution of the degree of linear polarization (DoLP) was calculated by Eqs. (4)-(7) (Figure S6b, Supporting Information), where I θ (x,y) refers to the generated I ph measured at a polarization angle of θ [72,73]. The calculated imaging results, including S 0, S 1, S 2, and DoLP, are presented in Figure 5(b). Furthermore, the imaging contrasts of the S 0, S 1, S 2, and DoLP images were calculated based on the 8-neighbors contrast calculation method and presented in Figure 5(c). The results show that an enhanced imaging contrast for the target was observed in the DoLP image compared with S 0, S 1, and S 2. This demonstrates that this PdSe 2 photodetector can be a promising candidate for polarization imaging applications [72].
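Eqs. (4)-(7) are not reproduced in this excerpt, but the Stokes-imaging relations they refer to are standard. The sketch below computes S 0, S 1, S 2, and DoLP pixel by pixel from four intensity maps taken at 0°, 45°, 90°, and 135°, using conventional definitions that may differ from the paper's exact normalization; the input images are random placeholders.

import numpy as np

def dolp_from_polarized_images(I0, I45, I90, I135, eps=1e-12):
    # Stokes images and degree of linear polarization from four photocurrent
    # maps measured at polarization angles 0, 45, 90, and 135 degrees.
    S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
    S1 = I0 - I90                        # horizontal/vertical preference
    S2 = I45 - I135                      # diagonal preference
    dolp = np.sqrt(S1**2 + S2**2) / (S0 + eps)
    return S0, S1, S2, dolp

# Example on random placeholder images (stand-ins for the scanned photocurrent maps)
rng = np.random.default_rng(1)
imgs = [rng.uniform(0.5, 1.0, size=(64, 64)) for _ in range(4)]
S0, S1, S2, dolp = dolp_from_polarized_images(*imgs)
print("mean DoLP:", dolp.mean())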
Conclusions
In summary, a broadband, polarization-sensitive, and self-powered PdSe 2 photodetector with asymmetric vdWs contacts was realized. A strong asymmetric photovoltaic effect is generated between the Au-thin PdSe 2 and the thin-thick PdSe 2 contact regions, which supports the self-powered photodetection. This self-powered photodetector exhibits a promising photoresponse for visible to near-infrared light in a broad region from 532 nm to 1470 nm, with a peak responsivity of over 53 mA/W and a high detectivity of 5.17 × 10 11 Jones at 730 nm. In addition, benefiting from the high operation speed of the photovoltaic effect, an attractive response speed with a rise/decay time of about 0.72 ms/0.24 ms is achieved. Besides, due to the excellent anisotropic optical properties of the PdSe 2 nanoflake, this photodetector exhibits a polarization-sensitive photoresponse with anisotropy ratios of 1.11 and 1.62 at 532 nm and 1064 nm, respectively. Considering the fast response speed, high sensitivity, and polarization sensitivity, polarization imaging using our proposed device is further demonstrated, and contrast-enhanced degree of linear polarization imaging is realized. All these results show the tremendous potential of this type of device for achieving a self-powered, broadband, and polarization-sensitive photoresponse, with a potential application in polarization imaging.
Experimental section
Device Fabrication. The PdSe 2 flakes were mechanically exfoliated from bulk PdSe 2 material. A SiO 2 /Si wafer was used as the substrate; it was cleaned in advance in an ultrasonic cleaner with isopropanol, acetone, ethanol, and deionized water, separately. Conventional UV lithography and e-beam evaporation techniques were applied to form two parallel electrodes (10 nm Ti/50 nm Au) on the Si/SiO 2 substrate. Then, the previously exfoliated PdSe 2 nanoflake with non-uniform thickness was transferred onto the substrate and aligned with the Au electrodes by a polydimethylsiloxane (PDMS)-assisted dry transfer method. The thin and thick regions of the PdSe 2 nanoflake were contacted with these two electrodes, respectively. Material Characterization. The optical image of the sample was characterized by a Nikon optical microscope. A WITec (Alpha 300) micro-Raman spectrometer system was utilized to measure the Raman spectra of the PdSe 2 nanoflake, pumped by a 532 nm laser source. The exact thicknesses of the two different parts of this PdSe 2 nanoflake were measured by atomic force microscopy (Bruker Dimension Icon).
Device Characterization. For the spatial photocurrent mapping, the Raman system with a scanning micromotion platform was applied, excited by a focused 532 nm laser scanning over the device. During the mapping test, an amplifier was coupled with the device to amplify and extract the output signal. In the electrical measurements of the device in the dark and under illumination, a Keysight B2912A digital source meter with two highly sensitive channels was used, and the output current and photovoltage were collected at the same time. For the transient photoresponse test, a timer mode with a time resolution of 5 μs was set in the source meter and the laser source was modulated by a chopper. Four different laser sources with fixed wavelengths of 532, 730, 1064, and 1470 nm (MIDL-III-532, 730, 1064, and 1470) were applied to illuminate the active region of the photodetector. The diameters of the laser spots were ∼5.6, 5.8, 6.3, and 9.2 mm for the 532, 730, 1064, and 1470 nm lasers, respectively. The laser power of these laser sources was measured by a Thorlabs PM100D power meter.
Polarized light detection measurements. In the polarized light detection measurements, a half-wave plate was employed to modulate the polarization angle. By changing the rotation angle of the plate, the polarization angle of the incident light can be modulated. The laser polarization direction initially parallels the x-axis of our devices, and the rotation step of the half-wave plate is 5°, which corresponds to a step of 10° for the polarization angle. The laser signal with different polarization angles was then detected by the photodetector and the output photocurrent was collected by the source meter.
Polarization imaging. The polarization imaging was demonstrated with a homemade scanning system. The light source we chose is a single-wavelength laser with a fixed wavelength of 532 nm. In the polarized light imaging measurements, a half-wave plate was placed between the laser source and the imaging target to set the linear polarization state of the light on the target. The polarization angle of the laser was set to 0°, 45°, 90°, and 135°. All the corresponding polarized images were collected by the homemade imaging system. Two step motors (along the x-axis and y-axis) control the location of the text pattern; by changing the pattern location, a photocurrent signal is acquired for each pixel.
Figure 1: Structure characterization of the photodetector with asymmetric vdWs contacts. (a) The schematic diagram of a PdSe 2 photodetector. (b) The optical image of the PdSe 2 photodetector with asymmetric vdWs contacts. The scale bar is 5 μm. (c) The Raman spectra of the PdSe 2 nanoflake measured at the blue and red points marked in (b). The thin and thick PdSe 2 have different Raman peak positions corresponding to the A 1 g , A 2 g , B 2 1g , and A 3 g Raman modes.
Figure 2: The photoresponse mechanism of the PdSe 2 photodetector. (a) The local laser-induced photocurrent mapping exhibits ambipolar photocurrent-generation origins at the "thin-Au" contact and the "thin-thick" interface. The black dashed line indicates the outline of the PdSe 2 nanoflake in the channel. (b) The photocurrent distribution measured along the green dashed line in (a) presents an asymmetric ambipolar photocurrent along the channel. (c) The simulated energy band diagram of the thin PdSe 2 photodetector with asymmetric vdWs contacts. (d) The simulated electric field distribution along the device channel.
Figure 3: The performance of the PdSe 2 photodetector. (a) The I-V curves of the photodetector in the dark and under global illumination with a 532 nm laser. The power changes from 0.39 nW to 6.12 nW. (b) Time dependence of the photoresponse under pulsed laser radiation with powers ranging from 0.39 nW to 6.12 nW at zero bias. (c) The power dependence of the photocurrent and responsivity at a wavelength of 532 nm. The red solid line is a power-law fit, demonstrating a linear relationship between photocurrent and incident light power. (d) Illustration of the rise (0.72 ms) and decay (0.24 ms) times of the photoresponse, indicating a fast response speed. All data were measured under a 532 nm laser with a fixed power. (e) The spectral responsivity and detectivity of the PdSe 2 photodetector as a function of the incident laser wavelength. (f) The photocurrent as a function of time measured with a pulsed 532 nm laser. The same test was repeated after 3 weeks and 3 months, respectively, indicating the long-term stability of this photodetector.
Figure 4: Polarized photoresponse of the PdSe 2 photodetector. (a) Schematic illustration of the measurement system for analyzing the linear-polarization-sensitive response. (b) The I-V curves of the device under 1064 nm illumination with linear polarization angles of 0° and 90°. (c) and (e) The linear-polarization-angle dependent photocurrents under 532 nm (c) and 1064 nm (e) light illumination. Here 0° corresponds to the angle of the incident light when the laser polarization direction is aligned with the x-direction of the asymmetric-thickness PdSe 2. (d) and (f) The generated photocurrent as a function of the polarization angle of the laser. All the dots in (d) and (f) are fitted with the (a + b cos 2θ) function.
Figure 5: Polarization imaging measurements using the PdSe 2 photodetector. (a) Schematic demonstration of the setup of the polarized imaging test system. (b) The calculated normalized contrast constant of the S 0 and DoLP images. (c) The calculated S 0, S 1, S 2, and final DoLP results, with a large contrast constant obtained with the 8-neighbors contrast calculation method, indicating the extraordinary polarization detection ability of this device.
"Physics"
] |
Dynamically important magnetic fields near the event horizon of Sgr A*
We study the time-variable linear polarisation of Sgr A* during a bright NIR flare observed with the GRAVITY instrument on July 28, 2018. Motivated by the time evolution of both the observed astrometric and polarimetric signatures, we interpret the data in terms of the polarised emission of a compact region ('hotspot') orbiting a black hole in a fixed, background magnetic field geometry. We calculated a grid of general relativistic ray-tracing models, created mock observations by simulating the instrumental response, and compared predicted polarimetric quantities directly to the measurements. We take into account an improved instrument calibration that now includes the instrument's response as a function of time, and we explore a variety of idealised magnetic field configurations. We find that the linear polarisation angle rotates during the flare, which is consistent with previous results. The hotspot model can explain the observed evolution of the linear polarisation. In order to match the astrometric period of this flare, the near horizon magnetic field is required to have a significant poloidal component, which is associated with strong and dynamically important fields. The observed linear polarisation fraction of $\simeq 30\%$ is smaller than the one predicted by our model ($\simeq 50\%$). The emission is likely beam depolarised, indicating that the flaring emission region resolves the magnetic field structure close to the black hole.
Introduction
There is overwhelming evidence that the Galactic Centre harbours a massive black hole, Sagittarius A* (Sgr A*, Ghez et al. 2008; Genzel et al. 2010), with a mass of M ∼ 4 × 10 6 M ⊙ as inferred from the orbit of the star S2 (Schödel et al. 2002; Ghez et al. 2008; Genzel et al. 2010; Gillessen et al. 2017; Gravity Collaboration et al. 2017, 2018a, 2020b; Do et al. 2019a). Due to its close proximity, Sgr A* has the largest angular size of any existing black hole that is observable from Earth, and it provides a unique laboratory to investigate the physical conditions of the matter and the spacetime around the object.
Using precision astrometry with the second generation beam combiner instrument GRAVITY at the Very Large Telescope Interferometer (VLTI) operating in the NIR (Gravity Collaboration et al. 2017), we recently discovered continuous clockwise motion associated with three bright flares from Sgr A* (Gravity Collaboration et al. 2018b, 2020c). The 30−50 µas scale of the apparent motion is consistent with compact orbiting emission regions ('hotspots', e.g. Broderick & Loeb 2005; Hamaus et al. 2009) at 3−5 R S , where R S = 2GM/c 2 ≃ 10 µas is the Schwarzschild radius. In each flare, we also find evidence for a continuous rotation of the linear polarisation angle. The period of the polarisation angle rotation matches what is inferred from astrometry. An orbiting hotspot sampling a background magnetic field can explain the polarisation angle rotation, as long as the magnetic field configuration contains a significant poloidal component. For a rotating, magnetised fluid, remaining poloidal in the presence of orbital shear implies a dynamically important magnetic field in the flare emission region.
Here, we analyse the GRAVITY flare polarisation data in more detail, accounting for an improved instrument calibration that now includes the VLTI's response as a function of time (Section 2). We find general agreement with our previous results of an intrinsic rotation of the polarisation angle during the flare by using numerical ray tracing simulations (Section 3); we created mock observations by folding hotspot models forward through the observing process. We compare this directly to the data to show that the hotspot model can explain the observed polarisation evolution as well as to constrain the underlying magnetic field geometry and viewer's inclination (Section 4). Matching the observed astrometric period and linear polarisation fraction requires a significant poloidal component of the magnetic field structure on horizon scales around the black hole as well as an emission size that is big enough to resolve it. We discuss the implications of our results and limitations of the simple model in Section 5.
GRAVITY Sgr A* flare polarimetry
GRAVITY observations of Sgr A* have been carried out in split-polarisation mode, where interferometric visibilities are simultaneously measured in two separate orthogonal linear polarisations. A rotating half-wave plate can be used to alternate between the linear polarisation directions P 00 -P 90 and P −45 -P 45 . As a function of these polarised feeds, the Stokes parameters, as measured by GRAVITY, are I = (P 00 + P 90 )/2, Q = (P 00 − P 90 )/2, and U = (P 45 − P −45 )/2. The circularly polarised component V cannot be recorded with GRAVITY.
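The mapping from the polarised feeds to the measured Stokes parameters is a simple linear combination and can be written down directly; the trivial sketch below uses placeholder flux values for the four feeds rather than actual GRAVITY measurements.

import numpy as np

def stokes_from_feeds(P00, P90, P45, Pm45):
    # GRAVITY-measured Stokes parameters from the four linearly polarised feeds
    I = 0.5 * (P00 + P90)
    Q = 0.5 * (P00 - P90)
    U = 0.5 * (P45 - Pm45)
    return I, Q, U

# Placeholder feed fluxes (arbitrary units)
I, Q, U = stokes_from_feeds(P00=1.10, P90=0.90, P45=1.05, Pm45=0.95)
print(f"I = {I:.2f}, Q/I = {Q/I:.2f}, U/I = {U/I:.2f}")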
We relate on-sky (unprimed) polarised quantities to their GRAVITY-measured (primed) counterparts by S′ = M S (1), where S and S′ are the on-sky and GRAVITY Stokes vectors, respectively, and M is a matrix that characterises the VLTI's optical beam train response as a function of time, taking into account the rotation of the field of view during the course of the observations and birefringence. The former was calculated from the varying position of the telescopes during the observations and calibrated on sky by observing stars in the Galactic Centre (Gravity Collaboration et al. 2018b). The latter is newly introduced in the analysis here and was obtained from modelling the effects of reflections along the long optical path through the individual UT telescopes and the VLTI. During 2018, GRAVITY observed several NIR flares from Sgr A* (Gravity Collaboration et al. 2018b). Figure 1 shows the linear polarisation Stokes parameters for four of them as measured by the instrument. On the top left, top right, and bottom left, the flares on May 27, June 27, and July 22 are shown, respectively. Only Stokes Q was measured on those nights. For the July 28 flare (bottom right), both Q and U were measured. All of the flares observed during 2018 exhibit a change in the sign of the Stokes parameters during the flare, which is consistent with a rotation of the polarisation angle with time. The linear polarisation fractions are 10−40%, in agreement with past measurements (Eckart et al. 2006; Trippe et al. 2007; Eckart et al. 2008a). Polarisation angle swings have also previously been seen in NIR flares with NACO (e.g. Zamaninasab et al. 2010). The smooth polarisation swings in the flares and the single loop in U versus Q on July 28 ('QU loop', Marrone et al. 2006; Figure 2) support the astrometric result of orbital motion of a hotspot close to event horizon scales of Sgr A*.
Two assumptions have been made in the calculation of this loop. First, since GRAVITY cannot register both linear Stokes parameters simultaneously, one has to interpolate the value of one quantity while the other is measured. In the case of Figure 2, this has been done by linearly interpolating between the median values over each exposure of 5 min. Second, no circular polarisation data are recorded (Stokes V). This implies that transforming the GRAVITY-measured Stokes parameters (primed) to on-sky values (unprimed) not only requires a careful calibration of the instrument systematics (contained in the matrix M, Eq. 1), but also an assumption on Stokes V. In Figure 2, the assumption is that V = 0. While in theoretical models Stokes V = 0 is well justified for synchrotron radiation from highly relativistic electrons, birefringence in the VLTI introduces a non-zero V. It is therefore important to characterise it properly.
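The interpolation step described above can be sketched in a few lines. The example below, with invented per-exposure median values standing in for the real data and an assumed alternating 5-minute schedule, linearly interpolates Q/I and U/I onto a common time axis to trace the QU loop.

import numpy as np

# Placeholder per-exposure medians (5-min exposures, alternating Q and U)
t_Q = np.array([0., 10., 20., 30., 40.])                  # minutes with Q/I medians
q = np.array([0.15, 0.05, -0.12, -0.05, 0.10])
t_U = np.array([5., 15., 25., 35., 45.])                  # minutes with U/I medians
u = np.array([0.10, 0.18, 0.02, -0.15, -0.02])

# Interpolate each Stokes fraction onto a common time axis
t = np.linspace(5., 40., 100)
q_i = np.interp(t, t_Q, q)
u_i = np.interp(t, t_U, u)
loop = np.column_stack([q_i, u_i])                        # (Q/I, U/I) points of the loop vs. time
print(loop[:3])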
In this work, we adopt a forward modelling approach. We take intrinsic Stokes parameters Q and U from numerical calculations of a hotspot orbiting a black hole in a given magnetic field geometry, transform them to the GRAVITY observables Q′ and U′ following Eq. (1), and compare them to the data. This not only allows us to fit the July 28 polarisation data directly, without having to make assumptions on Stokes V or interpolate across gaps due to the lack of simultaneous measurements of the Stokes parameters, but also to make predictions for Q′ when it is the only quantity measured, as is the case for the other 2018 flares.
Polarised synchrotron radiation in orbiting hotspot models
An optically thin hotspot orbiting a black hole produces time-variable polarised emission, depending on the spatial structure of the polarisation map (Connors & Stark 1977). For the case of synchrotron radiation, the polarisation traces the underlying magnetic field geometry (Broderick & Loeb 2005). We first discuss an analytic approximation to demonstrate the polarisation signatures generated by a hotspot in simplified magnetic field configurations, before describing the full numerical calculation of polarisation maps used for comparison to the data.
Analytic approximation
We define the observer's camera centred on the black hole with impact parameters $\hat{\alpha}$ and $\hat{\beta}$, which are perpendicular and parallel to the spin axis, with a line-of-sight direction $\hat{k}$ (Bardeen 1973). In terms of these directions and assuming flat space, the Cartesian coordinates are expressed by $\hat{x} = \hat{\alpha}$, $\hat{y} = \cos i\,\hat{\beta} - \sin i\,\hat{k}$, $\hat{z} = \sin i\,\hat{\beta} + \cos i\,\hat{k}$, where i is the inclination of the spin axis to the line of sight. Equivalently, $\hat{\alpha} = \hat{x}$, $\hat{\beta} = \cos i\,\hat{y} + \sin i\,\hat{z}$, $\hat{k} = -\sin i\,\hat{y} + \cos i\,\hat{z}$.
A&A proofs: manuscript no. GRAVITY_flare_pol Fig. 3. Lab frame diagram of a hotspot orbiting in thexŷ plane with position vectorh = R 0r , wherer is the unit vector in the radial direction. We note thath makes an angle ξ(t) withx. The magnetic fieldB is a function of ξ and consists of a vertical plus radial component. The strength of the latter is given by tan θ, θ, the angle between the vertical, andB. The observer's camera is defined by impact parametersα,β, and a flat space line of sightk. The line of sight makes an angle i with the spin axis of the black hole. The observer's view is shown on the right. Lastly,φ is the unit vector in the azimuthal direction.
Fig. 4. The colour gradient denotes the periodic evolution of the hotspot along its orbit over one revolution. The varying width of the curves is for visualisation purposes only. Top: completely vertical magnetic field (θ = 0); Q and U are constant in time and have static values in QU space. Bottom: significantly radial magnetic field with θ = 80°; Q and U oscillate and trace two QU loops in time whose amplitudes change with inclination. High inclination counteracts the presence of QU loops.
When face-on, $\hat{k}$ points along $\hat{z}$ and $\hat{\beta}$ points along $\hat{y}$. When edge-on, $\hat{k}$ points along $-\hat{y}$ and $\hat{\beta}$ points along $\hat{z}$. Let a hotspot be orbiting in the $\hat{x}\hat{y}$ plane (Figure 3). In terms of $\hat{\alpha}$, $\hat{\beta}$, and $\hat{k}$, the hotspot's position vector is $\bar{h} = R_0\,\hat{r}$, where $\hat{r}$ is the canonical radial vector, $R_0$ is the orbital radius, and ξ is the angle between $\hat{\alpha}$ and $\hat{r}$. Let us consider a magnetic field with vertical and radial components, $\bar{B} = B_0\,(\sin\theta\,\hat{r} + \cos\theta\,\hat{z})$, where $B_0$ is the magnitude of $\bar{B}$ and θ is the angle between $\hat{z}$ and $\bar{B}$. In flat space and in the absence of motion (no light bending or aberration), the polarisation is given as $\bar{P} = \hat{k} \times \bar{B}$ (Eq. 6). The polarisation angle on the observer's camera is $\tan\psi = (\bar{P}\cdot\hat{\beta})/(\bar{P}\cdot\hat{\alpha})$ (Eq. 7), so that, with $\psi = \tfrac{1}{2}\arctan(U/Q)$, the normalised Stokes parameters as a function of the polarisation angle are $Q \propto \cos 2\psi$ and $U \propto \sin 2\psi$ (Eq. 8).
It is important to note that a single choice of i and θ returns Q=Q(ξ) and U=U(ξ). Assuming a constant velocity along the orbit, the angle ξ can be mapped linearly to a time value by setting the duration of the orbital period and an initial position where the ξ = 0.
Additionally, inclinations of i = i 0 < 90° and i = 180° − i 0 produce the same polarised curves, but reversed in ξ with respect to each other. This is expected since, for an observer at i = i 0 and one at i = 180° − i 0 , the hotspot samples the same magnetic field geometry but appears to move in opposite directions. This means that the relative order in which the peaks in Q and U appear is reversed between observers at i = i 0 and at i = 180° − i 0 .
Given that light bending is not considered in this approximation, in a significantly vertical field (θ ≈ 0, top of Fig. 4) the polarisation remains constant in ξ (and time), proportional to −sin i. In QU space, this means a static value as the hotspot goes around the black hole. A particular case of this is $\bar{P} \approx 0$ at i ≈ 0, since $\hat{k}$ and $\bar{B}$ are parallel. As θ → π/2, tan θ → ∞ (bottom of Fig. 4) and the magnetic field becomes radial. In this case and at low inclinations, the polarisation configuration is toroidal ($\bar{P} \propto \hat{\phi}$, the azimuthal canonical vector, Eq. B.1). As the hotspot orbits the black hole, Q and U show oscillations of the same amplitude. In one revolution, two superimposed QU loops can be traced. If the viewer's inclination increases, one of the loops decreases in size more than the other and eventually disappears at very high inclinations, leaving only one behind. Increasing inclination therefore counteracts the presence of QU loops in an analytical model with a vertical plus radial magnetic field. It is noted that the normalised polarisation configurations of a completely radial magnetic field and a toroidal one are equivalent, with just a phase offset of 90° in ξ (Eq. B.2 in Appendix B).
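The analytic approximation above is easy to evaluate numerically. The sketch below is a minimal implementation of the flat-space, no-light-bending picture (it is not the grtrans calculation used later): the polarisation vector is built as $\hat{k} \times \bar{B}$, projected onto the camera, and converted to Q and U. The amplitude normalisation is arbitrary and the chosen angles are only examples.

import numpy as np

def analytic_qu(xi, inc_deg, theta_deg):
    # Normalised Stokes Q and U for a hotspot at orbital phase xi (radians), viewed
    # at inclination inc_deg, in a vertical-plus-radial field tilted by theta_deg
    # from the vertical. Flat space, no light bending or aberration.
    i = np.deg2rad(inc_deg)
    th = np.deg2rad(theta_deg)
    alpha = np.array([1.0, 0.0, 0.0])                 # camera basis in (x, y, z)
    beta = np.array([0.0, np.cos(i), np.sin(i)])
    k = np.array([0.0, -np.sin(i), np.cos(i)])
    zhat = np.array([0.0, 0.0, 1.0])
    q, u = [], []
    for x in np.atleast_1d(xi):
        rhat = np.array([np.cos(x), np.sin(x), 0.0])  # radial unit vector at phase xi
        B = np.sin(th) * rhat + np.cos(th) * zhat     # unit magnetic field direction
        P = np.cross(k, B)                            # polarisation vector, P = k x B
        psi = np.arctan2(np.dot(P, beta), np.dot(P, alpha))   # polarisation angle on camera
        p = np.linalg.norm(P)                         # polarised amplitude (arbitrary units)
        q.append(p * np.cos(2.0 * psi))
        u.append(p * np.sin(2.0 * psi))
    return np.array(q), np.array(u)

xi = np.linspace(0.0, 2.0 * np.pi, 200)                       # one hotspot revolution
Q, U = analytic_qu(xi, inc_deg=20.0, theta_deg=80.0)          # mostly radial field: two QU loops
Qv, Uv = analytic_qu(xi, inc_deg=20.0, theta_deg=0.0)         # vertical field: constant Q, U
print("radial-field Q range:", Q.min(), Q.max(), "| vertical-field Q:", Qv[0])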
Ray-tracing calculations
Next, we use numerical calculations to include general relativistic effects. We used the general relativistic ray tracing code grtrans (Dexter & Agol 2009; Dexter 2016) to calculate synchrotron radiation from orbiting hotspots in the Kerr metric.
The hotspot model is taken from Broderick & Loeb (2006), and it consists of a finite emission region orbiting in the equatorial plane at radius R 0 . The orbital speed is constant for the entire emission region, and it matches that of a test particle at its centre. The maximum particle density n spot ∼ 2 × 10 7 cm −3 falls off as a three-dimensional Gaussian with a characteristic size of R spot . The magnetic field has a vertical plus radial component. Its strength is taken from an equipartition assumption, where we further assume a virial ion temperature of kT i = (n spot /n tot )(m p c 2 /R), with (n spot /n tot ) = 5, where n tot is the total particle density in the hotspot. For the models considered here, a typical magnetic field strength in the emission region is B ∼ 100 G. We calculated synchrotron radiation from a power-law distribution of electrons with a minimum Lorentz factor of 1.5 × 10 3 and considered a black hole with a spin of zero. The model parameters for field strength, density, and minimum Lorentz factor were chosen as typical values for models of Sgr A* which can match the observed NIR flux. Other combinations are possible.
Example snapshots of a hotspot model in a vertical field (θ = 0) and the resulting polarisation configuration are shown in Figure 5. The effects of lensing can be appreciated in the form of secondary images. It can also be seen that, as the hotspot moves along its orbit around the black hole, it samples the magnetic field geometry in time, so that the time-resolved polarisation encodes information about the spatial structure of the magnetic field. Figure 6 shows the numerical calculations of hotspot models with the same magnetic field angles as those in the analytic approximation. The inclination and θ are key parameters for the observed number and shape of QU loops. In contrast to the analytic case, in a significantly vertical field (θ ≈ 0, top of Fig. 6) the polarisation is not zero. This is mainly due to light bending, which introduces an effective radial component to the wave-vector in the plane of the observer's camera. This radial component of $\hat{k}$ leads to an additional azimuthal contribution to $\bar{P}$. The θ = 0 cases show that this effect alone is able to generate QU loops. We see again that increasing inclination leads to a change from two QU loops per hotspot revolution at low inclinations to a single QU loop at high inclinations.
The cases where θ → 90° (bottom of Fig. 6) show that increasing this parameter also leads to scenarios with two QU loops per hotspot orbit. The shape of the numerical Q and U curves is similar to the analytic versions. The differences are due to the inclusion of relativistic effects in the ray-tracing calculations. We note that numerical models with a vertical plus toroidal magnetic field show features and behaviour similar to those in the vertical plus radial case (see Appendix C).
Model fitting
We calculated normalised Stokes parameters Q/I and U/I from ray tracing simulations of a grid of hotspot models, folded them through the instrumental response (Eq. 1), and compared them to GRAVITY's measured Q′/I′ and U′/I′. The parameters of the numerical model are the orbital radius R 0 , the size of the hotspot R spot , the viewing angle i, and the tilt angle of the magnetic field direction θ. We understand qualitatively how the hotspot size and the orbital radius affect the Q and U curves. 'Smoother' curves, where the amplitude of the oscillations is reduced, are produced either with increasing hotspot sizes at fixed orbital radius or with decreasing R 0 at a fixed hotspot size, due to beam depolarisation (see Appendix F). Since performing full ray tracing simulations is computationally very expensive, and since the curves change smoothly and gradually with R 0 and R spot , we chose to fix their values to R 0 = 8 R g and R spot = 3 R g , with R g the gravitational radius. We then scaled them in both period and amplitude to match the data better in the following manner.
Given the duration of a flare ∆t, we can scale a hotspot's period by a factor nT that sets the fraction of orbital periods that fit into this time window. The new radius of the orbit is then R ∝ (∆t/nT) 2/3 . This rescaling introduces only small changes in fit quality compared to re-calculating new models within our parameter range of interest (see Appendix E). We absorbed the effect of beam depolarisation into a factor s that scales the overall amplitude of both Q and U, and therefore the linear polarisation fraction as well.
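The R ∝ (∆t/nT) 2/3 scaling follows from Kepler's third law. The sketch below uses the Newtonian circular-orbit period with an assumed black-hole mass of 4 × 10 6 M ⊙ and a flare duration of 70 minutes (example values only; the exact relation used in the fit may include relativistic corrections) to convert a choice of nT into an orbital radius in gravitational radii.

import numpy as np

G = 6.674e-11         # gravitational constant [SI]
c = 2.998e8           # speed of light [m/s]
M_sun = 1.989e30      # solar mass [kg]

M = 4.0e6 * M_sun     # assumed mass of Sgr A*
t_g = G * M / c**3    # gravitational time scale GM/c^3 [s]

def radius_from_period(dt_flare_s, nT):
    # Circular-orbit radius (in gravitational radii) whose Keplerian period is
    # dt_flare_s / nT, using P = 2*pi*(GM/c^3)*(R/R_g)^(3/2) for spin zero.
    period = dt_flare_s / nT
    return (period / (2.0 * np.pi * t_g)) ** (2.0 / 3.0)

dt = 70 * 60.0        # assumed flare duration of ~70 minutes [s]
for nT in (1.0, 1.1, 1.2, 1.3):
    print(f"nT = {nT:.1f} -> R = {radius_from_period(dt, nT):.1f} R_g")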
Given a hotspot's period, the relative phase reflects the hotspot position relative to an initial position measured at some initial time, where the phase is defined to be zero. We chose the initial position of the hotspot based on the astrometric measurement of the orbital motion of the flare in Gravity Collaboration et al. (2020c). Specifically, we chose the initial phase ξ to match the initial position of the best-fit orbital model to the astrometry.
Application to the July 28 flare
The observed Q /I and U /I were measured by fitting interferometric binary models to the GRAVITY data. The binary model measures the separation of Sgr A* and the star S2, which were both in the GRAVITY interferometric field of view (∼50 mas) during 2018. For more details, see Gravity Collaboration et al. (2020a). We measured polarisation fractions assuming that S2's NIR emission is unpolarised. The 70 minute time period analysed is limited by signal-to-noise: binary signatures are largest when Sgr A* is brightest. As a result, we focused on data taken during the flare. We fitted to data binned by 30 seconds since the flux ratio can be rapidly variable. We further adopted error bars on the polarisation fractions using the rms of measurements within 300 s time intervals, since direct binary model fits generally have χ 2 > 1 and, as a result, underestimate the fit uncertainties.
We computed a grid of models with i, θ, s, and nT as parameters, with i ∈ [0 − 180] in increments of ∆i = 4°, θ ∈ [0 − 90] with ∆θ = 5°, s in steps of ∆s = 0.05, and nT such that the allowed range of radii for the fit is R = 8 − 11 R g with ∆R = 0.2. We have included this prior on the radii to match the constraint from the combined astrometry of the three bright GRAVITY 2018 flares (Gravity Collaboration et al. 2020c). The best fit parameters and corresponding polarised curves are shown in Figure 7. We find that the curves qualitatively reproduce the data and that the statistically preferred parameter combination for July 28, with a reduced χ 2 ∼ 3.1, favours a radius of 8 R g and moderate i and θ values (left panel of Figure 7). In QU space, these parameters produce two intertwined and embedded QU loops of very different amplitudes in time (right panel of Figure 7). The outer one is fairly circular, centred approximately around zero, and with an average radius of 0.18. The inner one has a horizontally oblate shape with a QU axis ratio of approximately 2:1, does not go around zero, and represents a much smaller fraction of the orbit than the larger loop. These moderate values of θ imply that a magnetic field with significant components in both the radial and vertical directions is favoured.
The hotspot is free to trace a clockwise (i > 90°) or counterclockwise (i < 90°) motion on-sky. At fixed θ, this change in apparent motion results in an inversion of the order in which the maxima of the Q and U curves appear. This effect is due to relativistic motion (Blandford & Königl 1979; Bjornsson 1982). When the magnetic field is purely toroidal (velocity parallel to B), the polarisation angle is independent of the velocity. When there is a field component perpendicular to the velocity (poloidal field), relativistic motion induces an additional swing of the polarisation angle in the direction of movement, whose magnitude depends on the velocity. We ignore this effect in the analytic approximation above, but it is included in our numerical calculations.
The data favour models where the maxima in U /I precede those of Q /I . This behaviour is observed in the case of clockwise motion (i > 90°) with θ ∈ [0° − 90°] and in counterclockwise motion (i < 90°) with θ ∈ [90° − 180°]. In fact, model curves at a given i > 90° and θ ∈ [0° − 90°] are identical to those with the 'mirrored' values 180° − i and 180° − θ. In our analysis, we consider θ ∈ [0° − 90°], which favours a clockwise motion. However, we cannot uniquely determine the apparent direction of motion of the hotspot due to this degeneracy.
Our models overproduce the observed linear polarisation fraction by a factor of ∼1.7 (scaling factor s ≈ 0.4 < 1). The maximum observed polarisation fraction is ∼30%, while it is ∼50% in our models. The degree of depolarisation introduced by the VLTI is not substantial enough to reduce the model linear polarisation fraction to the observed one. Moreover, in the NIR, there are no significant depolarisation contributions from absorption or Faraday effects. As a result, we conclude that the low observed polarisation fraction is likely the result of beam depolarisation. The observed low polarisation fraction implies that the flare emission region is big enough to resolve the underlying magnetic field structure. In the context of our model, this could imply a larger spot size. It could also indicate a degree of disorder in the background magnetic field structure, for example as a result of turbulence.
Application to the July 22 flare
July 28 is the only night with an observed infrared flare for which GRAVITY recorded both Stokes Q and U. Since a single polarisation channel is insufficient to constrain the full parameter space used in our numerical models, we restricted ourselves to the night of July 22, as this observation has the highest precision astrometry, and fixed the viewer inclination and magnetic field geometry to be the same as in the best fit model to the July 28 data. We scaled the curves in amplitude with s ∈ [0.05 − 0.35], ∆s = 0.05.
The initial position on sky for both flares is constrained by astrometric data and, therefore, so is the phase offset between both curves. With a fixed phase difference between the curves and a free range of radii, we find that the July 22 data favour extremely large values of R 0 > 20 R g , which are outside of the allowed range obtained from astrometric measurements. Allowing the phase difference to be free and constraining the radii to 8 − 11 R g , with ∆R = 0.2, we find that the data tend to values of R 0 ∼ 11 R g and a phase difference between curves of 0° (Figure 8). This phase difference value (and the position difference associated with it) is outside of the allowed uncertainties in the initial position indicated by the astrometric data. The fact that the magnetic field parameters that describe the July 28 flare fail to adequately fit the data from July 22 may indicate that the background magnetic field geometry changes on a several-day timescale.

Fig. 8. Fit to the July 22 NIR flare without restricting the phase difference between this night and that of July 28. The colour gradient denotes the evolution of the hotspot as it completes one revolution. The viewer's inclination, magnetic field geometry, and orbital direction have been fixed to the values found for the July 28 flare. The fit favours values of R 0 ∼ 11 R g and no initial phase difference between the nights (no difference in starting position on-sky), which is outside the allowed uncertainty range for the astrometry.
Summary and discussion
In this work, we present an extension of the initial analysis of polarisation data performed in Gravity Collaboration et al. (2018b). We forward modelled Q and U Stokes parameters obtained from ray-tracing calculations of a variety of hotspot models in different magnetic field geometries, transformed them into quantities as seen by the instrument, and fitted them directly to the polarised data taken with GRAVITY. This allowed us to not only fit data directly without making assumptions about Stokes V or the interpolation of data in nonsimultaneous Q and U measurements, but also to predict the behaviour in time of the polarised curves and loops for the cases where only one of the parameters was measured.
We have shown that the hotspot model serves to qualitatively reproduce the features seen in the polarisation data measured with GRAVITY. A moderate inclination and a moderate mix of vertical and radial fields provide the best statistical fit to the data. Consistent results are found by fitting the data with a vertical plus toroidal field component (Appendix C). We note that this result does not rely on the assigned strength of the magnetic field, since the model curves are scaled in amplitude, but only on the geometry of the field. Magnetic fields with a non-zero vertical component fit the data statistically better. This supports the idea that there is some amount of ordered magnetic field in the region near the event horizon with a significant poloidal field component. The presence of this component is associated with magnetic fields that are dynamically important, and it confirms the previous finding of strong fields in Gravity Collaboration et al. (2018b). Spatially resolved observations at 1.3 mm also found linear polarisation structure consistent with a mix of ordered and disordered magnetic field (Johnson et al. 2015).
Matching the clockwise direction of motion inferred by the astrometric data would require that θ ∈ [0 • − 90 • ]. Under this assumption, the results are also in accordance with the angular momentum direction and orientation of the clockwise stellar disc and gas cloud G2 (Bartko et al. 2009;Gillessen et al. 2019;Pfuhl et al. 2015;Plewa et al. 2017).
We have chosen the bright NIR flare on July 28, 2018 since it is the only one for which both linear Stokes parameters have been measured. Naturally, increasing the number of full data sets in future flares will be useful in constraining the parameter range more.
Our models overproduce the observed NIR linear polarisation fraction of ∼ 30% by a factor of ∼ 1.7, and they must be scaled down to fit the data. In the compact hotspot model context, this implies that an emission region size larger than 3 R g is needed to depolarize the NIR emission through beam depolarisation. Including shear in the models would naturally introduce depolarisation since a larger spread of polarisation vector directions (or equivalently, the magnetic field structure) would be sampled at any moment (e.g. Gravity Collaboration et al. 2020c; Tiede et al. 2020). However, this might smooth out the fitted curves and would probably change the fits. In any case, the observed low NIR polarisation fraction means that the observed emission region resolves the magnetic field structure around the black hole.
Though simplistic, the hotspot model appears to be viable for explaining the general behaviour of the data. It would be interesting to study the polarisation features of more complex, total emission scenarios explored in other works. Ball et al. (2020) study orbiting plasmoids that result from magnetic reconnection events close to the black hole, where some variability in the polarisation should be caused by the reconnecting field itself. Dexter et al. (2020) find that material ejected due to the build-up of strong magnetic fields close to the event horizon can produce flaring events where the emission region follows a spiral trajectory around the black hole. In their calculations, ordered magnetic fields result in a similar polarisation angle evolution as we have studied here. Disorder caused by turbulence reduces the linear polarisation fraction to be consistent with what is observed.
Spatially resolved polarisation data are broadly consistent with the predicted evolution in a hotspot model. This first effort comparing these types of models directly to GRAVITY data shows the promise of using the observations to study magnetic field structure and strength on event horizon scales around black holes.
Appendix A: Vertical plus radial field in Boyer-Lindquist coordinates
In the Boyer-Lindquist coordinate frame, a magnetic field with vertical plus radial components can be written in terms of its contravariant components B µ , with δ c ≡ B r /B θ (Eq. A.1). The magnetic field must satisfy the orthogonality and normalisation conditions of Eq. (A.2), where u µ are the contravariant components of the four-velocity, B is the magnitude of B, and g µν are the covariant components of the Kerr metric. In Boyer-Lindquist coordinates with G = c = M = 1, the non-zero components of the metric are given in Eq. (A.3), where a is the dimensionless angular momentum of the black hole. Using Eqs. (A.1), (A.2), and (A.3), it follows that the Boyer-Lindquist contravariant components of the magnetic field satisfy B t = − [(δ c g rr u r + g θθ u θ )/(g tt u t + g tφ u φ )] B θ and B r = δ c B θ (A.4), with B θ fixed by the normalisation condition, and that δ LNRF , the ratio of the radial and poloidal magnetic field components in the locally non-rotating frame (LNRF, Bardeen 1973), follows from the LNRF components B (µ) of B (Eq. A.5); the expression to the far right of Eq. (A.5) is obtained by assuming r ≫ a (as is the case for the hotspot). The variable δ used in the main text (Eq. 5) corresponds to δ LNRF defined here, calculated using the r ≫ a approximation.
Appendix B: Analytic approximation with a vertical plus toroidal magnetic field
In the case of a vertical plus toroidal magnetic field, the magnetic field can be written as B ∝ ẑ + λφ̂, where λ ∝ tan θ T is the strength of the toroidal component, θ T is the angle measured from the toroidal component to the vertical component (θ T = 0 denotes a completely toroidal field), and φ̂ is the canonical unit vector in the azimuthal direction (Figure 3). We note that r̂ · φ̂ = 0. The polarisation vector in flat space, given by k̂ × B, is then P ∝ −(sin i + λ cos i cos ξ) α̂ − λ sin ξ β̂ (B.2), and the polarisation angle is obtained from the ratio of its two components. It can be seen from expression (B.2) that at low inclinations, or when λ ≫ 1 (completely toroidal magnetic field), the polarisation has a radial configuration (P ∝ r̂, Eq. 4). This is geometrically equivalent to the polarisation having a toroidal configuration (similar to the one generated by a completely radial magnetic field, see Section 3) with a phase offset of π/2 in Q and U. In this case, we would expect to have two superimposed QU loops in one revolution of the hotspot. Figure B.1 shows a comparison between the analytic (top) and numeric (bottom) calculations for a vertical plus toroidal magnetic field (Appendix C). As expected, in the analytic case, there are always two superimposed loops in QU space for a completely toroidal field. In the numeric calculations, this is also the case, given that light bending favours the presence of loops. As a vertical component is introduced in the field, the loops no longer overlay each other, and this effect increases with viewer inclination. It can also be seen that the completely toroidal and completely radial configurations produce the same Q and U curves at low inclinations, save for a phase offset and a scaling factor.
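A minimal numerical sketch of this analytic approximation (our own, assuming a constant polarised flux and ignoring all relativistic effects) shows how the direction given by Eq. (B.2) translates into QU loops:

```python
import numpy as np

def analytic_qu(incl_deg, lam, n_phase=400, p0=1.0):
    """Normalised Q, U for the flat-space vertical-plus-toroidal approximation.

    The polarisation direction follows
    P ~ -(sin i + lam*cos i*cos xi)*alpha_hat - lam*sin xi*beta_hat (Eq. B.2);
    we assume a constant polarised flux p0, so only the angle evolves.
    """
    i = np.radians(incl_deg)
    xi = np.linspace(0.0, 2.0 * np.pi, n_phase)       # orbital phase
    p_alpha = -(np.sin(i) + lam * np.cos(i) * np.cos(xi))
    p_beta = -lam * np.sin(xi)
    chi = np.arctan2(p_beta, p_alpha)                  # polarisation angle
    return p0 * np.cos(2 * chi), p0 * np.sin(2 * chi)

# A strongly toroidal field (lam >> 1) at low inclination traces two
# superimposed QU loops per hotspot revolution.
q, u = analytic_qu(incl_deg=20.0, lam=10.0)
```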
Appendix C: Vertical plus toroidal field in Boyer-Lindquist coordinates
In the Boyer-Lindquist coordinate frame, a magnetic field with vertical plus toroidal components can be written in terms of its contravariant components B µ , with η c ≡ B θ /B φ (Eq. C.1). Just as in the vertical plus radial case, the magnetic field must satisfy Eqs. (A.2). Using Eqs. (C.1), (A.2), and (A.3), the Boyer-Lindquist contravariant components of the magnetic field follow, where η LNRF = B (θ) /B (φ) = tan θ T is the ratio of the poloidal and toroidal magnetic field components in the LNRF (Eq. (A.5)), and θ T is the angle measured from the toroidal component to the vertical (θ T = 0 implies a completely toroidal field, Appendix B). We fitted the July 28 data considering this magnetic field geometry. Just as in the vertical plus radial case, we computed a grid of models with i, θ, s, and nT as parameters: i ∈ [0 − 180] in increments of ∆i = 4°; θ T ∈ [0 − 90], ∆θ T = 5°; s ∈ [0.4 − 0.8], ∆s = 0.05, and nT such that the allowed range of radii for the fit is R = 8 − 11 R g with ∆R = 0.2. The best fit is shown in Figure C.1. Though a better reduced χ 2 is found at a somewhat higher inclination than for the best fit with a vertical plus radial magnetic field (Fig. 7), the presence of a poloidal component in the magnetic field is still needed. Considering θ T ∈ [0° − 90°], a clockwise motion is preferred (i > 90°). Identical curves can be obtained when the direction of motion is counterclockwise (i < 90°) and the magnetic field angle is replaced by 180° − θ T . Figure C.2 presents a model with a vertical plus toroidal magnetic field with parameters similar to those of the vertical plus radial best fit.
Appendix D: Spin effects
We present the effects of spin on our calculations. Figure D.1 shows three models with the best fit parameters found for the July 28 flare, at three different dimensionless spin values a = 0.0, 0.9, and −0.9. The corresponding reduced χ 2 values are reported in Table D.1.
Appendix E: Scaling period effects
We explore the effects of scaling the period of model curves. Figure E.1 shows the best fit model found for the July 28 flare and one calculated at R = 11 R g scaled down to match the period at 8 R g , with the rest of the parameters fixed to those of the best fit. The corresponding reduced χ 2 values are reported in Table E.1. It can be seen that the curves show similar behaviours. Scaled models might have a better reduced χ 2 than their non-scaled versions, but they are still not better than the best fit.
Appendix F: Qualitative beam depolarisation
In the absence of other mechanisms, such as self-absorption or Faraday rotation and conversion, infrared emission from an orbiting hotspot is depolarised by beam depolarisation.

Fig. E.1. Models calculated at R = 8 R g and at R = 11 R g , the latter scaled down to match the orbital period at 8 R g . The rest of the parameters are those found for the best fit for the July 28 flare. The reduced χ 2 values are reported in Table E.1. For better clarity, the R = 11 R g non-scaled model fit is not shown, but its χ 2 is reported.

Fig. F.1. Comparison of three numerical calculations with all identical parameters, except for R spot : 1, 3, and 5 R g . As the hotspot size increases, the curve features are smoothed by beam depolarisation, which samples larger magnetic field regions and averages out the different polarisation directions in time.
Table E.1. Reduced χ 2 of models calculated at R = 8 R g and at R = 11 R g , the latter was scaled down to match the orbital period at 8 R g .
Beam depolarisation increases both with the size of the emitting region that samples the underlying magnetic field and with the degree of disorder in the field itself. Given the simple magnetic field geometries considered in this work, there is no disorder at small scales. We therefore discuss qualitatively the impact of the emission size in the following.
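The following toy calculation (our own construction, not taken from the paper's pipeline) makes the wedge-averaging argument developed below quantitative by averaging Stokes vectors over an azimuthal wedge in which the polarisation angle rotates:

```python
import numpy as np

def wedge_depolarisation(chi_of_phi, half_width, n=2001):
    """Fractional polarisation left after averaging over an azimuthal wedge.

    chi_of_phi: function returning the local polarisation angle at azimuth phi.
    half_width: half opening angle of the wedge (roughly R_spot / R_0).
    """
    phi = np.linspace(-half_width, half_width, n)
    q = np.mean(np.cos(2 * chi_of_phi(phi)))
    u = np.mean(np.sin(2 * chi_of_phi(phi)))
    return np.hypot(q, u)   # 1 = no depolarisation, 0 = fully washed out

# Toy example: a polarisation angle that rotates with azimuth (e.g. a toroidal pattern).
chi = lambda phi: phi
for ratio in (1.0 / 8.0, 3.0 / 8.0, 5.0 / 8.0):   # R_spot/R_0 increasing
    print(ratio, wedge_depolarisation(chi, ratio))  # polarisation fraction drops
```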
As the hotspot goes around the black hole, it samples a wedge of angles in the azimuthal direction with an angular extent of roughly R spot /R 0 . Larger beam depolarisation occurs as this ratio increases. Figure F.1 shows example curves of numerical calculations at a moderate inclination and magnetic field tilt, where only the hotspot size has been changed. As expected, with increasing R spot at a fixed orbital radius, not only does the amplitude of the polarised curves and QU loops diminish (and with it, the linear polarisation fraction), but the features in them are smoothed out as well. Within the hotspot model, beam depolarisation can therefore be used to constrain the size of the emitting region as a function of the observed linear polarisation fraction. | 9,126.6 | 2020-09-03T00:00:00.000 | [
"Physics"
] |
Feature separation and adversarial training for the patient-independent detection of epileptic seizures
An epileptic seizure is the external manifestation of abnormal neuronal discharges, which seriously affects physical health. The pathogenesis of epilepsy is complex, and the types of epileptic seizures are diverse, resulting in significant variation in epileptic seizure data between subjects. Feeding epilepsy data from multiple patients directly into a model for training therefore leads to underfitting. To overcome this problem, we propose a robust epileptic seizure detection model that effectively learns from multiple patients while eliminating the negative impact of the data distribution shift between patients. The model adopts a multi-level temporal-spectral feature extraction network to achieve feature extraction, a feature separation network to separate features into category-related and patient-related components, and an invariant feature extraction network to extract essential feature information related to categories. The proposed model is evaluated on the TUH dataset using leave-one-out cross-validation and achieves an average accuracy of 85.7%. The experimental results show that the proposed model is superior to the related literature and provides a valuable reference for the clinical application of epilepsy detection.
Introduction
Epilepsy is a chronic disorder caused by the sudden abnormal discharge of nerve cells in the brain, resulting in temporary brain dysfunction. Epilepsy is the second most common neurological disorder after headache, affecting approximately 70 million people worldwide. The clinical manifestations of epileptic seizures are complex, and the types of epileptic seizures are varied. The clinical manifestations may include impaired consciousness, limb spasms, urinary incontinence, frothing, and other symptoms. Although epileptic seizures have little impact on patients in the short term, long-term frequent seizures severely affect the physical, mental, and intellectual health of patients (Rakhade and Jensen, 2009; Rasheed et al., 2021). Most people with epilepsy can control their condition with medication and surgery, but about 30% of patients have intractable epilepsy that cannot be adequately controlled with medication (Kwan and Brodie, 2000), posing a severe threat to the life and health of patients and a heavy burden to their families and society. The pathogenesis of epilepsy is complex, and the types of epileptic seizures are varied. The characteristics of EEG (electroencephalogram) data during the seizure period are related to the original location and cause of epilepsy. Different diseases of the nervous system or various conditions of the brain can cause different epileptic seizures, and the same condition of the nervous system can cause more than one type of epileptic seizure. Previous studies have pointed out that about 7% of neurons are ignited during subclinical seizures, about 14% when an aura appears, and about 36% during clinical seizures. Therefore, in the same patient, the intensity, type, location, and duration of each seizure may be the same or different. Across multiple patients, the differences are even more marked (Babb et al., 1987; Fisher et al., 2017).
Most existing epileptic seizure detection methods focus on the patient-dependent scenario, in which a patient's epileptic seizures are detected by learning from that patient's own historical records; this approach is easy to implement and achieves high detection accuracy. In contrast, patient-independent methods have the advantage of being applicable to new, previously unseen patients, but are easily corrupted by inter-patient variability. Most existing studies fail to eliminate the significant differences between patients (mainly caused by factors such as physical condition, pathogenesis, seizure intensity, and seizure type). When a model is trained directly on data from multiple patients, it easily underfits, and detection performance drops sharply on new patients. For these reasons, we propose a new method that uses a feature extraction network and a feature separation network to improve the discriminability of features, and that aligns the marginal and conditional distributions of features to enhance the ability to extract patient-invariant features.
The main contributions of our study can be summarized as follows: (1) We propose a novel domain generalization model based on feature disentanglement and adversarial training to enhance the ability of extracting patient invariant features, so the generalization ability of the model is improved. (2) We verify the proposed model through extensive experimental evaluations. The experimental results show that our proposed approach has significant potential to provide an optimal epileptic seizure detection method, and it also provides a valuable reference for clinical application.
The remainder of this paper is organized as follows. Section "2. Related work" reviews the related work on epileptic seizure detection. Section "3. Methodology" proposes a patient-independent epileptic seizure detection model. Section "4. Experiments" presents experiments and results on a benchmark dataset. Section "5. Discussion" analyzes the effectiveness of the proposed method. Finally, conclusions are given in section "6. Conclusion."
Related work
As a subclass of machine learning, deep neural networks have made remarkable progress in computer vision, natural language processing, and other fields, and researchers have proposed a variety of network models and methods for specific application scenarios. In the research of domain generalization methods, the following two approaches are usually adopted: (1) The method based on experience and knowledge is designed to extract universal features that can perform good detection on new patients. (2) The domain adaptive technology is used to extract invariant features of multiple patients to improve the generalization ability of the model.
For the first approach, Ansari et al. (2021) proposed an automated seizure onset detection system that used power spectrum features and some statistical features to detect seizure onset, achieving a mean latency of 0.9 s and 1.02 false detections per hour. Liu et al. (2022) proposed a novel patient-independent approach that used wavelet decomposition, a convolutional neural network (CNN), a bidirectional long short-term memory (Bi-LSTM) network, and a novel channel perturbation technique, achieving mean accuracies of 97.51 and 93.70%. Sridevi et al. (2019) proposed a patient-independent approach that used spectral entropy, spectral energy, and signal energy as features, achieving a good classification effect.
For the second approach, Zhao et al. (2021) proposed a domain-adaptive method in which the domain shift from the source domain to the target domain can be eliminated, achieving better performance. Li et al. (2021) proposed a bi-hemisphere domain adversarial neural network that achieved good recognition performance in EEG emotion recognition. Tang and Zhang (2020) applied a conditional adversarial domain adaptation neural network to motor imagery EEG decoding and achieved a better classification effect.
In epilepsy detection, Zhang et al. (2020) used feature separation and adversarial representation learning to decompose the data into category-related (seizure vs. normal) features and patient-related features, achieving an average accuracy of 80.5% on the TUH EEG dataset. Dissanayake et al. (2021) used a CNN structure and a Siamese network structure, and achieved an accuracy of 88.81% on the CHB-MIT dataset.
To the best of our knowledge, the above methods do not completely eliminate the effects of the data distribution shift between patients, so in this study, we propose a robust approach to address this problem.
The proposed network
The proposed patient-independent epileptic seizure detection model is illustrated in Figure 1 and includes three subnets: (1) a multi-level temporal-spectral feature extraction network, (2) a feature separation network, and (3) an invariant feature extraction network. The feature extraction network extracts temporal and frequency-domain feature information from the EEG data and enhances the representation with a Squeeze-and-Excitation network (Hu et al., 2018), so that the extracted features are discriminable; the feature extraction network is illustrated in Figure 2. The feature separation network disentangles the features into category-related features and patient-related features. Finally, the invariant feature extraction network extracts the invariant patient-independent features by aligning the marginal distribution and the conditional distribution, so that the generalization ability of the model is improved.

Figure 1. The architecture of the proposed network.

Figure 2. The architecture of the multi-level temporal-spectral feature extraction network.
Multi-level temporal-spectral feature extraction network
Electroencephalogram data are two-dimensional data similar to images and are subject to uncertainty and noise; therefore, it is necessary to preprocess the original data. We use min-max normalization to rescale the data; see also Rahim et al. (2016) and Versaci and Morabito (2021) for preprocessing approaches.
As convolution operators are essentially equivalent to a low-pass filter (Azimi et al., 2019), the embedding block, that is, successive temporal convolution and batch normalization (BN) operations, is first adopted to infer an optimal filter-band for the subsequent analysis. After stacking the original data and the output embeddings with a channel-wise concatenation function, the embedding block yields a sub-band matrix, which provides the subsequent network with adaptive sub-band responses as well as the original data. Finally, the data are fed into the multi-level spectral feature extraction module and the multi-level temporal feature extraction module for feature extraction.
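A minimal sketch of such an embedding block, assuming a PyTorch implementation with illustrative channel counts and kernel size (the authors' exact configuration is not specified), could look as follows:

```python
import torch
import torch.nn as nn

class EmbeddingBlock(nn.Module):
    """Temporal convolution + BN, concatenated channel-wise with the raw input.

    Shapes, kernel size and channel counts are illustrative guesses only.
    Input: (batch, 1, electrodes, time).
    """
    def __init__(self, out_channels=8, kernel_size=15):
        super().__init__()
        self.conv = nn.Conv2d(1, out_channels, kernel_size=(1, kernel_size),
                              padding=(0, kernel_size // 2))  # temporal filtering
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, x):
        emb = self.bn(self.conv(x))
        # channel-wise concatenation of the raw EEG and its filtered embeddings
        return torch.cat([x, emb], dim=1)

x = torch.randn(4, 1, 22, 250)        # 4 segments, 22 channels, 1 s at 250 Hz
print(EmbeddingBlock()(x).shape)      # torch.Size([4, 9, 22, 250])
```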
In the multi-level temporal-spectral feature extraction network, in order to prevent the deformation of the boundary data caused by zero padding in the convolution operation, the head and tail of the data are padded according to formula (1), where | is a concatenation operator, x(i) is the i-th element of the input x, and R is the kernel size of the convolution operation.
In order to reduce the computation time, the proposed method adopts convolution operations to perform the multi-level wavelet decomposition, defined in formula (2), where ⊗ is the convolution operation, g and h represent a pair of scaling and wavelet filters, s is the stride of the convolution operation, y A (i) denotes the approximation (low-pass) coefficients, and y D (i) denotes the detail (high-pass) coefficients.
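As a hedged illustration, one level of this convolution-based wavelet decomposition can be written with the Db4 filters from PyWavelets; boundary handling and phase conventions are simplified here and may differ from the authors' implementation:

```python
import numpy as np
import pywt

def dwt_step(x, wavelet="db4", stride=2):
    """One wavelet-decomposition level written as strided convolutions (cf. Eq. 2)."""
    w = pywt.Wavelet(wavelet)
    g, h = np.asarray(w.dec_lo), np.asarray(w.dec_hi)   # scaling / wavelet filters
    pad = len(g) - 1
    xp = np.pad(x, (pad, pad), mode="symmetric")        # pad head and tail (cf. Eq. 1)
    yA = np.convolve(xp, g, mode="valid")[::stride]     # approximation coefficients
    yD = np.convolve(xp, h, mode="valid")[::stride]     # detail coefficients
    return yA, yD

# Repeated application of the low-pass branch peels off the gamma, beta, alpha,
# theta and delta sub-bands of a 250 Hz EEG segment.
x = np.random.randn(250)
yA, yD = dwt_step(x)   # roughly the 0-62.5 Hz and 62.5-125 Hz halves
```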
In the multi-level spectral feature extraction module, to extract the wavelet coefficients corresponding to the standard physiological sub-bands δ (0∼4 Hz), θ (4∼8 Hz), α (8∼16 Hz), β (16∼32 Hz), and γ (32∼64 Hz), we select the Daubechies order-4 (Db4) wavelet, since previous studies reported that the Db4 mother wavelet is useful for epileptiform transient detection due to its high correlation with the epileptic spike signal (Indiradevi et al., 2008); this module yields the frequency-domain features. In the multi-level temporal feature extraction module, considering the data distribution shift between subjects, we use five independent convolution, batch normalization, and exponential linear unit (ELU) operations to capture multi-level temporal feature information with different receptive fields. The convolution kernel size is set to [S, 1], where S takes the values {k, k, k/2, k/4, k/8} with k = 2^5; this module yields the temporal features. To further extract discriminative feature information, the features extracted by the multi-level spectral feature extraction module and the multi-level temporal feature extraction module are concatenated along the feature dimension, and the combined features f all are fed into the Squeeze-and-Excitation network to enhance feature discrimination.
Feature separation network
The feature information (category information, patient information, etc.) is contained in each dimension and intertwined. If the features can be disentangled by the feature separation network, the separability and discriminability of the features will be improved. Therefore, according to the prior knowledge, we separate the features which are obtained from the feature extraction network into two parts, the first half of the features is the category-related component, which is recorded as F category_related , the second half of the features is the patient-related component, which is recorded as F patient_related . In addition, to ensure the first half of the features are the category-related component, the category classifier and cross-entropy loss function are used, to ensure the second half of the features are the patient-related component, the patient classifier and cross-entropy loss function are used, to ensure better separation of the features of the two parts, the maximum divergence loss function is used to ensure the maximum separation of the category-related component and the patient-related component (Bui et al., 2021).
The category classifier and the patient classifier are both trained with cross-entropy losses, where N is the number of samples, x i is a data sample, G f is the feature extraction network, G c1 is the category classifier, G p is the patient classifier, L is the cross-entropy loss function, y i is the category label (seizure or normal), d i is the patient label, and D s ∈ D 1 ∪ D 2 ∪ ... ∪ D n (D 1 , D 2 , ..., D n are the data of each patient).
To separate the category-related component (F category_related ) and the patient-related component (F patient_related ), we use the maximum divergence loss function, which enforces maximal separation between the two components; the separated components are then recombined to create new features.
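A possible sketch of the feature split and its associated losses is given below; the squared-cosine divergence term is only a stand-in for the maximum-divergence loss of Bui et al. (2021), whose exact form may differ, and the classifier heads are placeholders:

```python
import torch
import torch.nn.functional as F

def separation_losses(features, y_category, y_patient, cat_head, pat_head):
    """Split features into category- and patient-related halves and score them."""
    half = features.shape[1] // 2
    f_cat, f_pat = features[:, :half], features[:, half:]

    loss_cat = F.cross_entropy(cat_head(f_cat), y_category)   # seizure / normal
    loss_pat = F.cross_entropy(pat_head(f_pat), y_patient)    # patient identity
    cos = F.cosine_similarity(f_cat, f_pat, dim=1)
    loss_sep = (cos ** 2).mean()        # push the two halves to be dissimilar
    return loss_cat, loss_pat, loss_sep

feat = torch.randn(8, 64)
cat_head, pat_head = torch.nn.Linear(32, 2), torch.nn.Linear(32, 14)
print(separation_losses(feat, torch.randint(0, 2, (8,)),
                        torch.randint(0, 14, (8,)), cat_head, pat_head))
```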
Invariant feature extraction network
The feature separation network effectively disentangles the features and improves their discriminability, but the resulting features are not yet invariant across patients. To improve the generalization ability of the model, the proposed method builds on DANN (domain-adversarial training of neural networks) (Ganin et al., 2016; Yu et al., 2019) and MADA (multi-adversarial domain adaptation) (Pei et al., 2018) to achieve better invariant feature learning. The global patient discriminator aligns the features of each patient according to the marginal distribution, while the local patient discriminator aligns the features of each category according to the conditional distribution. In the global and local adversarial loss functions, L is the cross-entropy loss function, G f is the feature extraction network, G g and G k l (k = 1, 2) are the patient discrimination networks, d i is the patient label, y k i (k = 1, 2) are the first and second components of the one-hot encoded original label, and D s ∈ D 1 ∪ D 2 ∪ ... ∪ D n is the patient sample set.
In the category classifier, a center loss is adopted to pull the features of each class toward their class center (Wen et al., 2016), where c y i is the center of category y i . Through the above operations, the marginal distribution and conditional distribution of the features are aligned, and the features are gathered around the center point of each category, so that invariant features are obtained. The category classifier itself is trained with a classification loss (Rahim et al., 2015; White et al., 2020; Versaci et al., 2022; Waheed et al., 2023), where G c2 is the category classifier and y i is the category label.
Training details
We adopt an adversarial training strategy to train all the loss functions jointly (Matsuura and Harada, 2020), with λ = 0.1. The parameters θ g , θ 1 l , and θ 2 l are trained through a special layer called the gradient reversal layer (GRL); the GRL acts as an identity during forward propagation and reverses the gradient during backpropagation.
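A gradient reversal layer of this kind is commonly implemented as a custom autograd function; the following PyTorch sketch (ours) shows the identity forward pass and the reversed, λ-scaled gradient:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda backwards."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=0.1):
    return GradReverse.apply(x, lambd)

# Features passed to a patient discriminator go through the GRL, so minimising the
# discriminator loss pushes the feature extractor toward patient-invariant features.
feat = torch.randn(4, 32, requires_grad=True)
grad_reverse(feat, lambd=0.1).sum().backward()
print(feat.grad[0, :3])   # gradients are reversed and scaled by lambda
```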
Finally, we search for the optimal parameters that satisfy the minimax conditions of the adversarial objective, where θ f are the parameters of the multi-level temporal-spectral feature extraction network, θ c1 are the parameters of the category classifier in the feature separation network, θ p are the parameters of the patient classifier in the feature separation network, θ c2 are the parameters of the category classifier in the invariant feature extraction network, θ g are the parameters of the global patient discriminator in the invariant feature extraction network, and θ 1 l , θ 2 l are the parameters of the local patient discriminators in the invariant feature extraction network.
During training, if the samples were processed in mini-batches, the features of all training samples could not be obtained at the same time, so we feed all the training samples into the network as a single batch. The Adam optimizer is used for the model with a learning rate of 0.005; the center loss is optimized with the stochastic gradient descent (SGD) optimizer with a learning rate of 0.05; the number of training epochs is 200. The hyperparameters were set using a grid search.
Dataset
The proposed approach is evaluated on a benchmark dataset, the TUH corpus (Obeid and Picone, 2016), a neurological seizure dataset of clinical EEG recordings with 22 channels placed according to the international 10/20 system. We form a subset of the TUH corpus with 14 subjects by selecting the subjects with more than 250 s of seizure data. For each subject, we use 500 s (half normal and half seizure) of EEG signals with a sampling rate of 250 Hz. Each EEG fragment has 250 sample points (lasting 1 s), and adjacent fragments overlap by 50%. Each EEG fragment belonging to the epileptic seizure state is labeled as 1, while each fragment belonging to the normal state is labeled as 0. The sample set is then divided into a training set and a test set.
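A simple segmentation routine consistent with this description might look as follows; the rule used to label a window (here, the label of its centre sample) is our assumption, since the paper does not specify it:

```python
import numpy as np

def segment_eeg(recording, labels, fs=250, win_s=1.0, overlap=0.5):
    """Cut a (channels, samples) recording into overlapping windows with labels."""
    win = int(win_s * fs)
    step = int(win * (1.0 - overlap))
    segments, seg_labels = [], []
    for start in range(0, recording.shape[1] - win + 1, step):
        segments.append(recording[:, start:start + win])
        seg_labels.append(int(labels[start + win // 2]))   # label of the centre sample
    return np.stack(segments), np.array(seg_labels)

# 22-channel recording, 500 s at 250 Hz -> 999 one-second segments with 50% overlap
rec = np.random.randn(22, 500 * 250)
lab = np.zeros(500 * 250, dtype=int)
lab[250 * 250:] = 1                       # second half is seizure
X, y = segment_eeg(rec, lab)
print(X.shape, y.mean())
```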
Evaluate metrics
The experiment used accuracy (ACC), sensitivity (SN), and specificity (SP) to quantify the performance of the algorithm (Yang et al., 2023).
ACC = (TP + TN)/(TP + TN + FP + FN), SN = TP/(TP + FN), and SP = TN/(TN + FP), where TP (true positive) is the number of positive samples judged to be positive, TN (true negative) the number of negative samples judged to be negative, FP (false positive) the number of negative samples judged to be positive, and FN (false negative) the number of positive samples judged to be negative.
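These quantities follow directly from the confusion-matrix counts; the snippet below uses illustrative counts of our own, chosen to roughly match the sensitivity and specificity reported later for patient 6:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn)    # sensitivity: fraction of seizure segments detected
    sp = tn / (tn + fp)    # specificity: fraction of normal segments kept as normal
    return acc, sn, sp

# Illustrative counts only (not the authors' raw numbers): SN = 98.4%, SP = 100%
print(metrics(tp=123, tn=125, fp=0, fn=2))
```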
Baselines
The adopted baseline models include: • Zabihi et al. (2013) applied Discrete Wavelet Transform (DWT) and calculated metrics such as relative scale energy and Shannon entropy as features; SVM is used for data classification.
• Fergus et al. (2015) applied Power Spectral Density (PSD) and calculated metrics such as peak frequency and max frequency as features; KNN is used for data classification.
• Schirrmeister et al. (2017) applied convolutional neural networks to distinguish seizure segments by decoding task-related information from EEG signals.
• Kiral et al. (2018) designed a deep neural network for seizure diagnosis and further developed a prediction system on a wearable device.
• Zhang et al. (2020) proposed an adversarial representation learning strategy, which achieves robust and explainable epileptic seizure detection.
• Dissanayake et al. (2021) used the CNN network structure and Siamese network structure to improve the generalization ability of the model.
The six comparison methods and our experiment used the same data segment length on the TUH dataset with leave-one-out cross-validation; the comparison results are given in Table 1. Through comparative analysis, the methods in the literature (Schirrmeister et al., 2017; Kiral et al., 2018) only used a deep neural network trained on the data of multiple patients together, without considering the negative impact of inter-patient differences on the trained model, resulting in poor detection accuracy when applied to new patients. In Zabihi et al. (2013), relative scale energy and Shannon entropy, among others, were used as features, and in Fergus et al. (2015), peak frequency and max frequency, among others, were used as features; these methods were able to extract obvious common features but not deeper common features, so their detection accuracy was higher than the results in Schirrmeister et al. (2017) and Kiral et al. (2018) and lower than the results in Zhang et al. (2020) and Dissanayake et al. (2021). For the methods in the literature (Zhang et al., 2020; Dissanayake et al., 2021), which applied neural networks to eliminate the negative impact of the data distribution shift between patients, the results were higher than those of methods that did not consider eliminating this negative impact. The method proposed in this paper, which uses feature separation and adversarial training to disentangle features in the latent space while learning domain-invariant features, thereby mitigating the influence of inter-patient differences, achieves the best experimental results, with an average detection accuracy of 85.7% under leave-one-out cross-validation.
In addition, the confusion matrix and the receiver operating characteristic (ROC) curve with the area under the curve (AUC) value are shown for a closer look at the detection results. The results of one of the best-performing subjects (patient 6) are illustrated in Figure 3. From the confusion matrix we can see that our approach achieves a sensitivity of 98.4% and a specificity of 100%.
Discussion
To analyze the effectiveness of the proposed method, first, we removed the feature separation network while leaving the other settings unchanged. Then we tested on the TUH dataset using leave-one-out cross-validation. The results of the tests are shown in Table 2: By comparison, the average accuracy of the comparison method in which the feature separation network is removed is 81.6%. The proposed method ensures feature separability and improves feature discrimination, thus improving detection performance.
Second, for the invariant feature extraction network, since DANN only aligns the marginal distribution of features across patients and MADA only aligns the conditional distribution, we propose to align the marginal distribution and the conditional distribution of each patient's features at the same time. As the label of each training sample, y k i (k = 1, 2) in the MADA-style branch is replaced with the one-hot encoding of the original label. The model is then trained in the respective adversarial networks, so that the invariant features of each category can be obtained.
To compare the advantages of the proposed method, this paper trains and tests networks that only use DANN and only use MADA. By comparing with the proposed method, the proposed method has the best performance. The results of performance comparison are shown in Table 3.
For a clear illustration, we further use the t-SNE method (Maaten and Hinton, 2008) to visualize the feature distribution of the comparison methods, the feature distribution is illustrated in Figure 4. It can be seen that DANN only tries to align the marginal distribution. Still, due to the shift in data distribution between patients, it is difficult to align the marginal distribution, resulting in features in a decentralized state. MADA uses the aligned conditional distribution and different features are mixed together. In the proposed method, the features are clustered by category and can be discriminated. It is shown that the proposed method has advantages in learning invariant features.
The reasons are as follows: first, DANN, which uses a global domain-adversarial method, aligns the marginal distribution of features without regard to the data category; second, MADA, which uses a local domain-adversarial method, aligns the conditional distribution of features according to the data category, but y k i (k = 1, 2) in MADA is not the true category information but the output of the classification network, so the features of each category cannot be aligned accurately. The proposed method aligns the marginal distribution and the conditional distribution simultaneously and uses the accurate labels of the training set as y k i (k = 1, 2), which improves the quality of the feature alignment. Therefore, the proposed method has the best performance.
For future work, we suggest the following three points: First, in the proposed method, the data features are divided into category-related features and patient-related features. In future work, the features could be divided into more fine-grained components, and new network structures and loss functions could be used for feature extraction to improve the algorithm's performance.
Second, the proposed method uses adversarial training to learn the invariant features, but the results of adversarial training are not stable; there are significant differences between each training epoch; therefore, new invariant feature learning methods can be studied in the future to improve the stability of training.
Thirdly, the experiments of the proposed method are all conducted on the existing public dataset and not verified on the real clinical dataset, therefore, we need to cooperate with the clinical hospital to obtain the clinical data of epilepsy and verify the actual effect.
Conclusion
In the proposed method, a domain generalization model based on feature separation and adversarial training is proposed for the case where there is a significant shift in the data distribution between patients in the epilepsy dataset. The model includes a feature extraction network, a feature separation network, and an invariant feature extraction network. The multi-level temporal-spectral feature extraction network extracts valuable features using a convolutional operation and attention mechanism. The feature separation network is used to improve feature discrimination. The invariant feature extraction network is used to align the marginal distribution and conditional distribution of features to make the features more discriminable and general. We use the TUH dataset of 14 patients and leave-one-out cross-validation, and compared with the related literature, the proposed method achieves the best result; therefore, the proposed method can provide some reference for the clinical application of epilepsy detection.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ supplementary material.
Ethics statement
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. | 5,747 | 2023-07-19T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Coupling and electrical control of structural, orbital and magnetic orders in perovskites
Perovskite oxides are already widely used in industry and have huge potential for novel device applications thanks to the rich physical behaviour displayed in these materials. The key to the functional electronic properties exhibited by perovskites is often the so-called Jahn-Teller distortion. For applications, an electrical control of the Jahn-Teller distortions, which is so far out of reach, would therefore be highly desirable. Based on universal symmetry arguments, we determine new lattice mode couplings that can provide exactly this paradigm, and exemplify the effect from first-principles calculations. The proposed mechanism is completely general, however for illustrative purposes, we demonstrate the concept on vanadium based perovskites where we reveal an unprecedented orbital ordering and Jahn-Teller induced ferroelectricity. Thanks to the intimate coupling between Jahn-Teller distortions and electronic degrees of freedom, the electric field control of Jahn-Teller distortions is of general relevance and may find broad interest in various functional devices.
which exhibit a complex structural ground state including different Jahn-Teller distortions. Using first-principles calculations, we reveal in AA'V 2 O 6 superlattices an unprecedented orbital ordering and purely Jahn-Teller induced ferroelectricity. We demonstrate that this enables an electric field control of both JT distortions and magnetism. Since JT distortions are intimately connected to electronic degrees of freedom 20 , such as magnetism, orbital orderings and metal-insulator phase transitions to name a few, the proposed mechanism may find broader interest for novel functional devices outside the field of magnetoelectrics.
Bulk A 3+ V 3+ O 3
Whilst the V 4+ perovskites (e.g. SrVO 3 21 ) have been studied mainly for their interesting metallic properties, the V 3+ perovskites are Mott insulators. A 3+ V 3+ O 3 compounds have attracted much attention since the fifties when they were first synthesized 22 . During this time, many studies began to determine their magnetic, electronic and structural properties [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39] . A central theme at the core of these properties in vanadates is the so-called Jahn-Teller (JT) distortion. The famous Jahn-Teller theorem claims that a material with degenerate electronic states will be unstable towards undergoing a structural distortion lowering its symmetry to remove the electronic degeneracy. In other words, the Jahn-Teller effect is an electronic instability that can cause a structural and metal-insulator phase transition. For instance, in the cubic perovskite symmetry, the crystal field effect splits the d electron levels into a lower lying degenerate three-fold t 2g and a higher lying degenerate two-fold e g state. Hence in 3d 2 systems such as the rare-earth vanadates, a Jahn-Teller distortion is required to split the t 2g levels in order to form a Mott insulating state. We note here the distinction between the Jahn-Teller effect and what we call the Jahn-Teller distortion in this study. Here we define the Jahn-Teller distortion by the symmetry of the atomic distortion as shown in Fig. 1b,d. Whilst a distortion of this symmetry will by definition remove the d electronic degeneracy, the origin of such a distortion does not necessarily need to appear from the Jahn-Teller effect. An important result of this study is that the Jahn-Teller distortion can instead be induced by structural anharmonic couplings, being therefore not only restricted to Jahn-Teller active systems 40 .
In the vanadates, two different JT distortions are observed [24][25][26]33 , each consisting of two V-O bond length contractions and two elongations, often labelled as a Q 2 distortion 41 . The corresponding distortions are displayed in Fig. 1, where they are compared to the antiferrodistortive (AFD) motions. The AFD motions can be viewed as oxygen octahedra rotations around an axis going through the B cations, while the Jahn-Teller distortions in the present case correspond to oxygen rotations around an axis going through the A cations. Both JT and AFD motions can be either in-phase (Fig. 1a,b) or anti-phase (Fig. 1c,d) between consecutive layers and therefore appear at the M or R points of the Brillouin zone, respectively. Consequently, we label the Jahn-Teller distortions as + Q 2 ( + M 3 mode) and − Q 2 ( − R 3 mode). While AFD motions do not distort the BO 6 octahedra, JT motions lift the degeneracy of the d levels through octahedra deformations. According to such distortions, the V 3+ 3d 2 occupation consists of either a (d xy , d xz ) or a (d xy , d yz ) t 2g 2 configuration in an ideal picture. Nearest-neighbor vanadium sites within the (xy)-plane develop opposite distortions and hence alternating d xy /d xz and d xy /d yz occupations, as shown in the top panel of Fig. 2. Along the c axis, the octahedra deformations, and hence the orbital ordering, are either in phase (C-type orbital order) or anti-phase (G-type orbital order) for the + Q 2 or − Q 2 Jahn-Teller distortion, respectively (see Fig. 2, bottom panel). Crucially, the orbital ordering determines the magnetic ordering through superexchange interactions [42][43][44] . Strongly overlapping and parallel orbitals between neighboring sites favor antiferromagnetic superexchange interactions. With this in mind, the + Q 2 motion favors a purely antiferromagnetic solution called G-AFM, whilst the − Q 2 motion favors (xy)-plane antiferromagnetic alignment and ferromagnetic out-of-plane alignment, called C-AFM. In other words, a C-type orbital ordering (C-o.o.) is linked to a G-type antiferromagnetic ordering (G-AFM), while a G-type orbital ordering (G-o.o.) is linked to a C-type antiferromagnetic ordering (C-AFM). Experiments indeed observe both G-AFM and C-AFM magnetic phases in the vanadates, with each magnetic ordering favoring a certain structural symmetry 25,26,33 .

Fig. 1. Octahedra for the plane at z = 0 are plotted in red and those for the plane at z = c/2 in blue. The AFD motions can also appear around the y and z axes (not shown), whereas the Jahn-Teller motions only manifest around the z axis in the vanadates.
At room temperature, all rare-earth A 3+ V 3+ O 3 vanadates crystallize in a Pbnm structure [24][25][26]33 . With decreasing temperature, they undergo an orbital ordering phase transition to a G-type orbital ordered (G-o.o.) phase between 200 K and 150 K (depending on the A-cation size). This transition is accompanied by a symmetry lowering from Pbnm to P2 1 /b. A magnetic phase transition from a paramagnetic to an C-AFM antiferromagnetic state occurs within this phase at a slightly lower temperature between 150 K and 100 K. Finally, for the smallest A cations (A = Yb-Dy, Y), another orbital ordering phase transition to a purely C-type (C-o.o.) arises and is accompanied by a structural phase transition from P2 1 /b back to Pbnm, and a magnetic phase transition from C-AFM to G-AFM. For medium A cations (A = Tb-Nd), a coexistence of P2 1 /b (G-oo) and Pbnm (C-oo) phases is reported 26,45 . No further transitions are found for larger A cations (A = Pr, Ce and La).
To better understand the distorted structures of vanadates, we perform a symmetry mode analysis 46,47 of the allowed distortions with respect to a hypothetical cubic phase on three different compounds, covering a wide range of A-cation sizes: YVO 3 , PrVO 3 and LaVO 3 . The analysis is performed on experimental structural data, and the amplitudes of distortions are summarized in Table 1.
In the Pbnm phase (a − a − c + in Glazer's notation 48 ), all three vanadates develop two unique antiferrodistortive (AFD) motions, φ − xy (a − a − c 0 ) and φ + z (a 0 a 0 c + ). Table 1 shows that the magnitudes of these AFD motions strengthen with decreasing A-cation size, as expected from simple steric arguments 49 . Within this Pbnm tilt pattern, the + Q 2 lattice motion is already compatible and does not require any symmetry lowering to appear 38 . This latter observation is in agreement with the sizeable + Q 2 lattice distortion extracted from our analysis of the room temperature structures, despite the fact that no orbital ordering has yet been reported for this temperature range 25,26 . The Pbnm phase then appears to always be a pure + Q 2 phase. Additionally, an anti-polar − X 5 mode whose motion is in the (xy)-plane is allowed in the Pbnm symmetry (see supplementary materials).
Going to the P2 1 /b symmetry, a subgroup of Pbnm, the aforementioned AFD motions are still present, but the − Q 2 distortion is now allowed and would lead to a G-o.o. phase. However, the P2 1 /b phase is never an exclusive − Q 2 phase but always coexists with the + Q 2 distortion, even for the larger A cations (A = Pr, La). A mixed C-o.o. and G-o.o. should then manifest for all P2 1 /b structures, independent of orthorhombic/monoclinic phase coexistence. Additionally, another anti-polar − X 3 mode, whose motion is now along the z direction, arises in this new phase (see supplementary materials).
In order to understand the origin of and coupling between these distortions, we can perform a free energy expansion (see methods) around a hypothetical cubic Pm-3m phase with respect to the different distortions. Among all the possible terms in the Pbnm phase, two trilinear couplings are identified: F tri Pbnm ∝ (φ − xy)(φ + z)( − X 5 ) + (φ − xy)( − X 5 )( + Q 2 ) (Eq. 1). Within the Pbnm symmetry, when φ − xy and φ + z are non-zero in magnitude, the free energy of the system is automatically lowered by the appearance of − X 5 due to the first trilinear term of Eq. 1. Similarly, through the appearance of − X 5 , the free energy is again lowered by forcing the appearance of the + Q 2 motion thanks to the second trilinear coupling. This explains the presence of the + Q 2 distortion in the Pbnm phase of vanadates, even at room temperature. This demonstrates that, in addition to its possible appearance as an electronic instability, it may also appear as a structural anharmonic improper mode within the Pbnm phase (whose strength depends on the coupling constant), even in non-Jahn-Teller-active materials 40,50 . Going to the P2 1 /b phase, two additional trilinear couplings are identified (Eq. 2). The orbital-ordering phase transition to a G-o.o. phase experimentally observed between 150 K and 200 K for all vanadates 25,26 manifests itself through the appearance of a − Q 2 distortion. Consequently, through the third trilinear coupling of Eq. (2), both JT distortions produce the additional anti-polar − X 3 motion. This is in agreement with the experimental data of Table 1. Finally, an extra φ − z (a 0 a 0 c − ) AFD motion arises in the P2 1 /b phase through the last trilinear coupling, yielding a rare a − a − c ± tilt pattern with both in-phase and out-of-phase AFD motion around the c axis. This tilt pattern has previously been predicted to appear within this space group 38 .
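To make the improper mechanism concrete, the following toy minimisation (our own construction, with invented coefficients and mode amplitudes in arbitrary units) shows that once the AFD amplitudes are frozen in, the trilinear couplings of Eq. (1) force non-zero − X 5 and + Q 2 even though both modes are individually stable:

```python
import numpy as np
from scipy.optimize import minimize

# Invented Landau-type coefficients for illustration only.
phi_xy, phi_z = 1.0, 0.8          # frozen AFD amplitudes
a_x, a_q = 0.5, 0.6               # positive (stable) quadratic coefficients for X5- and Q2+
c1, c2 = -0.3, -0.2               # trilinear coupling strengths

def free_energy(v):
    x5, q2 = v
    return (a_x * x5**2 + a_q * q2**2
            + c1 * phi_xy * phi_z * x5        # phi_xy . phi_z . X5  (first coupling)
            + c2 * phi_xy * x5 * q2)          # phi_xy . X5 . Q2+   (second coupling)

res = minimize(free_energy, x0=[0.0, 0.0])
print(res.x)   # both X5- and Q2+ relax to non-zero values once the AFD modes are present
```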
Therefore, within this P2₁/b phase, both Jahn-Teller distortions coexist, but likely with different origins. The Q₂⁺ mode is "pinned" into the system through an improper anharmonic coupling with the robust AFD motions, while the Q₂⁻ mode may appear through the traditional Jahn-Teller electronic instability. It is interesting to note that this coexistence is allowed by the improper appearance of Q₂⁺, despite there likely being a competition between the two JT distortions. This competition can be understood as having an electronic origin, favoring one type of orbital ordering over the other and producing a biquadratic coupling with a positive coefficient in the free energy expansion. In light of the abundance of Q₂⁺ phases with respect to Q₂⁻ phases across the perovskites, we then ask whether it is this improper appearance of Q₂⁺ via the robust AFD motions that helps favor this phase universally. The vanadates would then be a special case where the Q₂⁻ instability is robust enough to appear despite this competition. This universal symmetry analysis and free energy expansion rationalizes the origin of the coexisting orbital-ordered phase in the P2₁/b symmetry, as observed in vanadates both experimentally and theoretically 25,26,37,38 .
Table 1. Amplitudes of distortions (in Å) on experimental structures of vanadates at different temperatures. In the P2₁/b symmetry, both φ⁻xy and φ⁻z AFD motions belong to the same irreducible representation (R₅⁻ mode), even if the φ⁻z amplitude is likely very small. The reference structure was chosen as a cubic structure whose lattice vector corresponds to the pseudo-cubic lattice vector associated with the room-temperature Pbnm phase.
The coexistence of both Jahn-Teller motions in the vanadates will also clearly affect the orbital ordering and, consequently, the magnetic ordering. One might expect a complex canted magnetic ordering to occur, resembling partly C-AFM and partly G-AFM, as indicated experimentally from neutron scattering on several vanadates. While a pure G-type AFM ordering is observed in the Pbnm phase of YVO3, with magnetic moments lying along the c axis, a non-collinear spin arrangement is observed in the P2₁/b phase 29,30,51,53 . Indeed, the spin arrangement corresponds to a C-AFM ordering with magnetic moments located in the (ab)-plane plus a weaker G-AFM ordering with magnetic moments along the c axis.
(AVO3)1/(A'VO3)1 layered structures.
Magneto-electric multiferroics are widely studied due to their intriguing coupling between ferroelectricity and magnetism (electric field control of magnetism and conversely), and are proposed as promising candidates for lower energy consumption spintronic devices 7,9 . However, materials combining both ferroelectric and (anti)-ferromagnetic order parameters are elusive in nature and the identification of new single phase multiferroics remains a challenge for modern day research 54 .
Hybrid improper ferroelectricity, in which a polar distortion is driven by two non-polar motions, emerged recently as a possible new mechanism to induce ferroelectricity in otherwise non-ferroelectric compounds [10][11][12][13] . When considering magnetic compounds, the trilinear coupling between polar and non-polar lattice distortions achieved in such systems appeared moreover as a promising pathway to achieve enhanced magneto-electric coupling 13,14,18,19,55 . Rondinelli and Fennie clarified 12 the emergence of rotationally driven ferroelectricity in ABO 3 /A'BO 3 superlattices, providing concrete rules for the design of new hybrid improper ferroelectrics.
Following the same spirit, we consider (AVO3)1/(A'VO3)1 structures with planes of different A cations layered along the [001] direction. This structure can either appear naturally, as in the double perovskites, or be produced through single-layer-precision epitaxial deposition techniques. The free energy expansion around a P4/mmm layered reference structure (equivalent to Pm-3m in bulk) then becomes:

F_tri^Pb = λ₁ φ⁻xy φ⁺z Pxy + λ₂ φ⁻xy Pxy Q₂⁺ + λ₃ Pz Q₂⁺ Q₂⁻ + λ₄ φ⁻z φ⁺z Pz   (3)

The first observation is that the symmetry breaking due to the A-cation layering turns the X antipolar modes into polar modes, i.e. an in-plane (110) Pxy and an out-of-plane (001) Pz 10,12,54,56,57 . The first and fourth trilinear couplings of Eq. (3) correspond to the rotationally driven hybrid improper ferroelectricity mechanism [10][11][12] . The second trilinear term links the in-plane polarization to both an antiferrodistortive (AFD) and a Jahn-Teller (JT) distortion, as already observed in reference 18. However, we identify in Eq. (3) a new trilinear term Pz Q₂⁺ Q₂⁻ coupling the out-of-plane polarization Pz to both JT distortions. Since JT distortions are intimately connected to orbital orderings and particular magnetic states, as discussed in the previous section, we can expect a direct and strong coupling between polarization and magnetism from this term.
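The role of the new Pz Q₂⁺ Q₂⁻ term can likewise be illustrated with a toy model: with Q₂⁺ frozen in by the tilt pattern, an induced Pz drags Q₂⁻ (and hence the associated orbital ordering) along with it, which anticipates the electric-field control discussed below. The coefficients in the sketch are illustrative only.

```python
import numpy as np

# Toy model of the P_z*Q2+*Q2- coupling of Eq. (3): with Q2+ frozen in by the
# tilts, an electric field E along z (entering the energy as -E*P_z) induces
# P_z and, through the trilinear term, drags Q2- along with it.

a_p, a_q = 1.0, 1.2      # harmonic stiffnesses of P_z and Q2-
lam = -0.7               # trilinear coupling constant
Q2p = 0.8                # frozen-in Q2+ amplitude

def induced(E):
    # Stationarity of F = a_p*Pz**2 + a_q*Q2m**2 + lam*Pz*Q2p*Q2m - E*Pz
    A = np.array([[2 * a_p, lam * Q2p],
                  [lam * Q2p, 2 * a_q]])
    b = np.array([E, 0.0])
    Pz, Q2m = np.linalg.solve(A, b)
    return Pz, Q2m

for E in (0.0, 0.1, 0.2):
    Pz, Q2m = induced(E)
    print(f"E = {E:.1f}:  P_z = {Pz:.3f},  Q2- = {Q2m:.3f}")
```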
In the present work, we have performed first-principles calculations in order to show that (AVO3)1/(A'VO3)1 layered structures are indeed ferroelectric and develop both in-plane and out-of-plane polarizations. On the one hand, Pxy appears as a slave to the rotations and is indirectly linked to magnetism through the modification of the superexchange path, as in the usual rotationally driven ferroelectrics 13 . On the other hand, Pz appears thanks to an electronic instability manifested as a particular orbital and magnetic ordering. Finally, we demonstrate that an electric control of the magnetic state is indeed possible, providing a novel paradigm for the elusive magnetoelectric multiferroics.
In order to test the above hypothesis, we considered two different superlattices: (PrVO3)1/(LaVO3)1 (PLVO) and (YVO3)1/(LaVO3)1 (YLVO). First-principles geometry relaxations (see method section) of the superlattices converged to two metastable states: a C-AFM ordering is found in a Pb structure (equivalent to P2₁/b in bulk), while a G-AFM ordering is found in a Pb2₁m symmetry (equivalent to Pbnm in bulk). We find that PLVO adopts a Pb C-AFM ground state while YLVO adopts a Pb2₁m G-AFM ground state. The symmetry-adapted modes and computed polarizations of all metastable phases are presented in the supplementary material. As predicted, the Pb2₁m ground state of YLVO only exhibits a Pxy polarization, whose magnitude is 7.89 μC·cm⁻². However, the Pb ground state of PLVO develops both Pxy and Pz polarizations, of 2.94 and 0.34 μC·cm⁻² respectively. The Pz contribution indicates a Jahn-Teller-induced ferroelectricity (third term of Eq. (3)). Below we explore the origin of Pxy and Pz in more detail.
Bulk vanadates exhibit a Pbnm phase at room temperature and hence both superlattices should first go to the equivalent Pb2₁m intermediate phase. We therefore begin by providing insight on the driving force yielding the various distortions within this phase. For this purpose, we condense different amplitudes of distortions (see methods) within the metastable Pb2₁m state of the PLVO superlattice, starting from an ideal P4/mmm structure (for each potential, see supporting information). Four main distortions are then present in this Pb2₁m phase: φ⁻xy, φ⁺z, Q₂⁺ and Pxy. As expected, the two antiferrodistortive motions are strongly unstable (approximately 1 eV of energy gain for each) and are the primary order parameters of this Pb2₁m symmetry. Pxy and Q₂⁺ present single wells, which are the signature of an improper anharmonic appearance 55 . Therefore, the Pxy polarization appears through a hybrid improper mechanism driven by the two rotations through the first term of Eq. (3). Furthermore, as predicted in the first section, this analysis suggests that Q₂⁺ appears through a structural hybrid improper mechanism rather than an electronic instability in this compound.
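The distinction drawn here between unstable (proper) modes and improperly induced ones can be visualized with simple one-dimensional potentials: a genuine instability gives a symmetric double well, whereas a mode appearing only through an anharmonic coupling to already-condensed distortions gives a single well whose minimum is shifted away from zero. The polynomial coefficients below are arbitrary illustrative numbers, not the computed energy surfaces.

```python
import numpy as np

# Illustrative one-dimensional potentials:
#   unstable mode : E(Q) = -a*Q**2 + b*Q**4                  -> double well
#   improper mode : E(Q) = +a*Q**2 - c*Q (c from a trilinear
#                    coupling with already-condensed modes)  -> shifted single well

a, b, c = 1.0, 0.5, 0.6
Q = np.linspace(-2, 2, 401)

E_unstable = -a * Q**2 + b * Q**4
E_improper = a * Q**2 - c * Q

q_min = Q[np.argmin(E_unstable)]
print("double-well minima at Q =", q_min, "and", -q_min)
print("single-well minimum at Q =", Q[np.argmin(E_improper)])   # ~ c/(2a), non-zero
```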
Having considered the intermediate Pb2₁m phase, we next turn our attention to the phase transition of PLVO to its Pb ground state. Curiously, a phonon calculation on the intermediate Pb2₁m phase did not identify any unstable modes, indicating that no lattice motions can be responsible for the phase transition. Clearly, the system has to switch from G-AFM to C-AFM, and therefore, in an attempt to understand this phase transition, we performed the following two sets of calculations. The atomic positions were fixed to the intermediate Pb2₁m structure and the energy was computed (i) with and (ii) without imposed Pb2₁m symmetry on the electronic wavefunction, each within the two possible magnetic states. While for the G-AFM calculations no energy difference is observed between calculations with and without symmetry, the C-AFM calculation without symmetry leads to a lower energy (by around 4.5 meV) than the one with imposed symmetry. The only difference between the two calculations is that the electronic structure is allowed to distort and consequently break the symmetry. We discover that, even with the atoms fixed in centrosymmetric positions along the z axis, the electronic instability creates an out-of-plane polarization Pz of 0.04 μC·cm⁻².
In order to understand the nature of this electronic instability, we plot the projected density of states on the vanadium atoms in Fig. 3. Starting from the projected density of states with Pb2₁m symmetry, consecutive atoms along the z direction (V1 and V3, V2 and V4 in Fig. 2) exhibit identical densities of states. Consequently, the orbital ordering appears to be of C-type. When allowing the electronic structure to distort, several changes appear in the orbital occupations. Consecutive atoms along the z direction now prefer to occupy either more of the dxz or more of the dyz orbital, which results in a mixed G-type (G-o.o.) plus C-type (C-o.o.) orbital ordering. The G-o.o. that appears, despite the absence of the Q₂⁻ motion, is allowed via the Kugel-Khomskii mechanism 44 . This mixed orbital ordering produces an asymmetry between the VO2 planes, as indicated by the two magnitudes of the magnetic moments in each layer (1.816 ± 0.001 μB and 1.819 ± 0.001 μB). The mixed orbital ordering also appears in the bulk vanadates, such as in the G-o.o. + C-o.o. ground state of LaVO3 or PrVO3 (previously thought to be purely G-o.o. from experiments) 25,26 . In the bulk, however, it is not enough to break the inversion symmetry along the z axis and therefore yields no out-of-plane polarization. The second necessary ingredient is the symmetry breaking due to the A and A' ordering along the [001] direction in the superlattices. The combination of both effects (in the AO and VO2 planes) is required to break inversion symmetry along the z axis and to produce the out-of-plane polarization. The result is an orbital-ordering-induced ferroelectricity in vanadate superlattices.
Interestingly, the direction of the orbital-ordering-induced ferroelectric polarization is found to be arbitrary, and both +0.04 and −0.04 μC·cm⁻² are observed. Each state displays a reversal of the magnitude of the magnetic moments of the two VO2 planes. Starting from these two possibilities, we performed geometry relaxations, which ended in the previously identified Pb ground state with both possibilities (up and down) for the out-of-plane polarization. We note that the difference in magnetic moment between the two VO2 planes is more pronounced (1.820 ± 0.001 μB and 1.828 ± 0.001 μB) after the geometry relaxation. Three new lattice distortions develop to reach the Pb phase: Pz, Q₂⁻ and φ⁻z. To understand the nature of their appearance, we plot in Fig. 4 each potential as a function of the distortion amplitude. All potentials present single wells, more or less shifted through an improper coupling with the electronic instability. This confirms that the electronic instability is the primary order parameter driving the phase transition. Moreover, the Q₂⁻ motion presents an energy gain one to two orders of magnitude larger than that of the φ⁻z motion, indicating that Q₂⁻ couples more strongly with the electronic instability, as might be expected. Consequently, once the electronic instability condenses, the Q₂⁻ lattice distortion is forced into the system, which in turn produces the lattice part of the polarization through the structural hybrid improper coupling. This Jahn-Teller-induced ferroelectricity amplifies the electronic out-of-plane polarization by one order of magnitude. The sign of the three lattice distortions is again imposed by the initial sign (up or down) of the electronic polarization. Consequently, the reversal of Pz through the application of an external electric field would require the reversal of Q₂⁻ and φ⁻z, as well as of the magnitude of the magnetic moments of the two VO2 planes. The saddle point midway through this reversal (all three modes equal to zero, i.e. the Pb2₁m phase) is of the order of 10 meV higher in energy, which represents a reasonable estimate of the ferroelectric switching barrier. Compared to the rotationally driven ferroelectricity Pxy, whose energy barrier is of the order of 0.1 to 1 eV 12,14,15 , this Jahn-Teller-induced ferroelectricity is therefore very likely to be switchable. The large difference between the two energy barriers is due to two different energy landscapes involving (i) the robust AFD motions inducing Pxy and (ii) the relatively soft distortions inducing Pz.
Finally, we discuss a novel route to create the technologically desired electrical control of magnetization. Starting from a Pb2₁m phase with a G-AFM magnetic ordering, the application of an external electric field E along z will induce Pz in the system through the dielectric effect. As a result, the Q₂⁻ distortion is automatically induced through the Pz Q₂⁺ Q₂⁻ trilinear term. This electric-field-induced Q₂⁻ distortion is a general result for any (ABO3)1/(A'BO3)1 superlattice consisting of two Pbnm perovskites. Since Q₂⁻ distortions are intimately connected to the G-o.o. and the C-AFM magnetic ordering, for a finite value of E the system may switch from the initial G-AFM phase to the C-AFM phase. In reality, the C-AFM phase should exhibit a net weak magnetization arising from a non-collinear magnetic structure, as observed in several bulk vanadates of P2₁/b symmetry 28,30,53 . Therefore, the application of an electric field may not only switch between AFM orderings, but also produce a net magnetic moment in the material. However, for illustrative purposes, even at the collinear level of our calculations, we can look at the relative stability of the two magnetic states under an external electric field in the YLVO superlattice, which presents the desired Pb2₁m ground state (8 meV lower than the Pb phase). Figure 5 (top panel) plots the internal energy U(D) of the Pb and Pb2₁m phases as a function of the amplitude of the electric displacement field D applied along the z axis. In such a graph, the switching E-field at which the Pb phase with the G-o.o./C-AFM ordering becomes more stable than the Pb2₁m phase with the C-o.o./G-AFM ordering is given by the slope of the common tangent between the two curves and is evaluated to be around 6.54 MV·cm⁻¹. This corresponds to a voltage of 0.50 V for one bilayer (c ≈ 7.8 Å). This critical electric field could be further decreased by reducing the energy difference between the two phases at zero field, which can be achieved by changing the A cations or applying biaxial epitaxial strain (see supplementary material).
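The common-tangent construction used here to extract the switching field can be reproduced with a minimal model of the two U(D) curves. In the sketch below, the two phases are represented by quadratic wells offset in energy and in spontaneous polarization; the common-tangent slope is then the field at which the Legendre-transformed energies cross. All parameters are illustrative model numbers, not the computed YLVO values.

```python
import numpy as np

# Minimal model of the common-tangent construction.  Each phase has a
# quadratic internal energy
#   U_i(D) = U0_i + (D - P_i)**2 / (2 * chi_i),
# and the slope of the common tangent between the two curves is the field E
# at which F_i(E) = min_D [U_i(D) - E*D] become equal.

P1, chi1, U01 = 0.00, 1.0, 0.000    # phase 1 (Pb2_1m-like: stable at D = 0)
P2, chi2, U02 = 0.30, 1.0, 0.008    # phase 2 (Pb-like: polar, higher at D = 0)

E = np.linspace(0.0, 0.2, 2001)
F1 = U01 - chi1 * E**2 / 2 - P1 * E     # analytic Legendre transform of U1
F2 = U02 - chi2 * E**2 / 2 - P2 * E     # analytic Legendre transform of U2

E_switch = E[np.argmin(np.abs(F1 - F2))]
print(f"estimated switching field ~ {E_switch:.4f} (model units)")
```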
As illustrated in Fig. 5, what we demonstrate more generally in the present study is an electric field control of the JT distortions, mediated by their coupling with the polar mode. Since this mechanism arises from universal symmetry relations, we can expect this effect to also appear in other perovskite superlattices, such as nickelates, fluorites or manganites to name a few. This effect may find applications outside the field of magnetoelectrics such as for tunable band gaps and metal-insulator transitions, since the JT distortion affects the electronic structure in general.
Conclusions
In conclusion, we have identified novel lattice mode couplings in the vanadates, helping to clarify the origin of the unusual coexisting Jahn-Teller phase, and indeed the role of Jahn-Teller distortions in perovskites in general. These findings have enabled the prediction of a novel paradigm for the elusive magnetoelectric multiferroics, based on a Jahn-Teller/orbital-ordering-induced ferroelectricity. Due to the intimate connection of Jahn-Teller distortions and orbital ordering with magnetism, this unprecedented type of improper ferroelectric facilitates an electric field control of both orbital ordering and magnetization. The rationale is completely general, and a challenge for applications will be to identify new materials with a magnetic and coexisting Jahn-Teller phase at room temperature. The demonstration of an electric field control of Jahn-Teller distortions may find more general applications for novel functional devices, outside the field of multiferroics. We hope these discoveries will help motivate future studies that will further unlock the potential of vanadate perovskites and other Jahn-Teller systems, such as fluorites, nickelates and manganites.
Methods
The basic mechanism we propose here is based solely on symmetry arguments. Symmetry mode analyses of experimental data were performed using AMPLIMODES 46,47 . The free energy expansions of Eqs. (1-3) were performed using the INVARIANTS software from the ISOTROPY code 58 . The results from these symmetry considerations do not depend on the technical parameters of the first-principles calculations; the latter are only there to illustrate the effect on a concrete basis and to quantify it. First-principles density functional theory calculations were performed using the VASP package 59,60 . We used a 6 × 6 × 4 Monkhorst-Pack k-point mesh to model the Pbnm (P2₁/b) phase and a plane-wave cutoff of 500 eV. Optimized projector augmented wave (PAW) potentials with the PBEsol exchange-correlation functional were used in the calculations. The polarization was computed using the Berry phase approach as implemented in VASP. The study was performed within the LDA + U framework 61,62 , which has already been shown to be sufficient to reproduce the ground state of vanadates 37,63 . The U parameter was first fitted on bulk compounds in order to correctly reproduce the ground state of the bulk vanadates; a value of U = 3.5 eV was obtained (see Table 2 and Table 3 in the supporting information). Phonon calculations were performed using density functional perturbation theory. We used a collinear approach to model the magnetic structures. Structural relaxations were performed until the maximum forces were below 5 μeV·Å⁻¹ and the energy difference between conjugate gradient steps was less than 10⁻⁹ eV. The superlattices were relaxed starting from four different initial guesses: two magnetic orderings (C-AFM and G-AFM) and two space groups (Pb2₁m and Pb, subgroups of Pbnm and P2₁/b respectively for the layered structures). Lattice distortion potentials were plotted as a function of the fractional amplitude of each mode appearing in the ground state, taken separately. In order to determine the electric field required to switch from the Pb2₁m to the Pb phase, the internal energy at fixed D was estimated for each phase as follows. First, the polar atomic distortion pattern ξ associated with the linear response of the system to an electric field E along z was determined from the knowledge of the phonon frequencies and oscillator strengths. Second, the Kohn-Sham energy well U_KS(ξ) in terms of the amplitude of ξ was computed, yielding a model U_KS(P₀) restricted to the subspace spanned by ξ, using P₀ = Z*ξ/Ω, where Z* is the Born effective charge associated with ξ and Ω the unit-cell volume. Third, U(D) was deduced as U(D) = U_KS(P₀) + Ω(D − P₀)²/(2ε₀ε∞) 64 , where ε₀ is the permittivity of vacuum and ε∞ the optical dielectric constant. | 7,212.8 | 2014-09-30T00:00:00.000 | [
"Physics"
] |
Design of intelligent temperature control system for parking vehicle based on solar energy
Because of high summer temperatures, the temperature inside a parked vehicle rises sharply. This makes the interior parts age more quickly and release harmful gases, and the driver cannot adapt to the high temperature when entering the vehicle again; the high interior temperature is a threat to human health. In order to reduce the temperature inside the vehicle, this paper presents the design of a solar-based intelligent temperature control system. After the vehicle's engine stops working, the system monitors the temperature inside the vehicle in real time and can reduce it to an appropriate level when it is too high. The system has the advantages of a simple structure and convenient use, and it requires no modification of the vehicle. Real-vehicle tests show that it achieves the intended effect of reducing heat accumulation in the vehicle.
The hazards of high temperatures in vehicles
In recent years, vehicle ownership has increased significantly in China, and vehicles have become an important means of transportation for daily travel. When driving in hot summer weather, the temperature inside the vehicle rises and people use the air conditioning to cool down. However, when the automobile stops working, the air conditioning has no power source and stops working as well. Under strong solar radiation, the temperature inside the vehicle then rises rapidly. McLaren and colleagues measured the temperature inside parked vehicles and found that when the outside temperature is 22 °C, the compartment temperature of a parked vehicle can reach 47 °C. When the driver re-enters the vehicle to drive, the high interior temperature makes the driver very uncomfortable and is not conducive to good health.
In addition, the high temperature inside a parked vehicle causes many other hazards. In recent years there have been many related news reports: children forgotten in vehicles have suffered brain damage or even death from the high interior temperature; perfume bottles, lighters, and other items left in the vehicle have burst in the heat; bottled beverages left in the vehicle can produce substances harmful to health at high temperature; and leather trim ages more quickly, among many other problems.
Research status and significance
At present, two treatment methods are commonly adopted. The first is to open all the doors of the vehicle and use air convection to reduce the interior temperature; bringing the temperature down to an appropriate level in this way takes at least about 15 minutes or more.
The other method is to start the engine and run the vehicle's air conditioning at its highest forced-cooling setting. This method also requires some waiting time for the vehicle temperature to drop, and it has several disadvantages: long idling greatly increases fuel consumption (experimental data show that fuel consumption can increase by 14.2%); at idle the fuel is not completely burned, so carbon builds up around the valves; and the vehicle emits a large amount of harmful exhaust, which pollutes the environment and is not conducive to environmental protection.
In recent years, most research on the application of solar energy to vehicle cooling has focused on connecting solar cells to the vehicle's air conditioning system, so that after the vehicle is parked the solar cells drive the air conditioning to achieve cooling. This approach requires the vehicle to be redesigned or modified.
In view of the above problems, it is of practical significance to design an intelligent, solar-based temperature monitoring and cooling device that can automatically reduce the temperature inside the vehicle without changing the automobile's original structure.
This device can make the driver feel comfortable when entering the vehicle; at the same time, it can extend the service life of the vehicle interior and effectively reduce the series of hazards caused by high in-vehicle temperatures.
2 Overall design scheme of the system
Collection and analysis of environment temperature and interior temperature data
In summer, vehicles are exposed to the sun after they are parked. Most of the vehicle body is metal, so the body absorbs and conducts heat. Under sunshine, the body gathers a large amount of heat, which is conducted to the interior, and sunlight also shines through the windows into the inside of the vehicle. As the parking time increases, the temperature inside the vehicle rises rapidly. Figure 1 shows the environmental temperature measured in Changchun on one day and the corresponding curve of the temperature inside the vehicle; the test condition was that the doors and windows were closed and the front windshield was not shielded. The maximum environmental temperature was 33 °C, the maximum temperature inside the compartment reached 57 °C, and the maximum temperature difference between the inside and outside of the compartment reached 26.9 °C. As can be seen from Figure 1, with increasing parking time the temperature inside the compartment continues to rise even though the environmental temperature changes little.
Fig1. The temperature curve of environment and interior
Because of the large area of the front windshield, more sunlight enters the vehicle through the front windshield than through the other windows. Therefore, the change in interior temperature was compared and analyzed with the front windshield shielded and unshielded. The test conditions were as follows: after the vehicle was parked, the doors and windows were shut and the front windshield was shaded by a sun visor. Under these conditions, the temperature data in the vehicle were measured. The environmental temperature and the temperature inside the vehicle are shown in Figure 2, where the red bars represent the environmental temperature and the purple bars represent the vehicle temperature. Figure 2 shows that when the environmental temperature is around 30 °C, the vehicle temperature increases as the parking time is extended, and the highest temperature in the vehicle exceeds 50 °C.
As can be seen from the comparison between Figure 1 and Figure 2, when the environmental temperature is roughly the same, shielding the front windshield makes the temperature inside the vehicle rise more slowly and slightly lowers the maximum temperature. Therefore, shielding the front windshield helps to reduce the temperature inside the vehicle. Wang Weijian et al. studied the influence of solar irradiation on the interior temperature through measurements in Guangzhou: when the environmental temperature is above 35 °C, the temperature near the dashboard can reach above 90 °C, and darker front windshields make the temperature rise more slowly, which is consistent with the data measured in Changchun. Based on this comparison of measured temperature data, shielding the front windshield is effective in reducing the temperature inside the vehicle, so the cooling device is placed on the front windshield.
Functions to be realized by the system
In order to reduce the temperature in the vehicle after it is parked, the cooling device designed in this paper needs to realize the following functions: first, to block solar radiation entering through the front windshield; second, to use fans to create air flow between the inside and outside of the vehicle; third, to use semiconductor refrigeration chips to produce a cooling effect; and fourth, to monitor the temperature in the vehicle compartment in real time. Since the engine stops working after the vehicle is parked, the power supply is provided by solar panels, which have the advantages of low cost and environmental friendliness as well as a certain degree of flexibility, making them convenient to place on curved surfaces.
System structure
The main hardware components of the system are solar panels, temperature control switches, a temperature sensor, fans, semiconductor refrigeration chips, and a sunshade baffle, which can be divided into two modules: a temperature monitoring module and a temperature regulation module. The overall control structure of the system is shown in Figure 3.
A number of solar panels are placed on a sunshade, as shown in Figure 4; the panels are pasted into the corrugations of the visor. While driving, the sunshade is rolled up by a retracting device and attached to the top of the front windshield so that it does not block the view.
Fig3. Structure diagram of control system
When the vehicle is parked, the sunshade can be deployed over the front windshield by the retracting device.
It not only blocks direct sunlight; the electricity generated by the solar cells, routed through the temperature control switch, also drives the fans and the semiconductor cooling chips, creating air circulation between the inside and outside of the vehicle to achieve the cooling purpose.
Working principle of the system
The working principle of the solar cooling system designed in this paper is as follows: when sunlight is sufficient, the solar panels generate electricity and drive two small fans, placed inside and outside the vehicle and running in opposite directions, to exchange air and thus bring down the temperature inside the vehicle. The cooling system mainly consists of the following parts: a signal acquisition system, a control system, a ventilation system, and a display system. They work as follows: the temperature sensor, powered by the solar cells and placed on the vehicle's instrument panel, monitors the compartment temperature in real time and transmits the temperature signal to the Arduino controller.
The control principle of the control system is shown in Figure 5. When the temperature in the compartment is below t1 (t1 is defined as the starting temperature of the fan in low gear), the temperature control switches remain open and the temperature control system does not work. When the compartment temperature satisfies t1 < t < t2 (t2 is defined as the starting temperature of the fan in high gear), temperature control switch one (low speed) closes and the control system runs the fan at low speed. When the compartment temperature exceeds t2, temperature control switch two (high speed) closes; the control system runs the fan at high speed and the semiconductor refrigeration unit starts working at the same time.
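The threshold logic above maps directly onto a small piece of controller code. The sketch below is a minimal Python rendering of the same decision rules (the actual controller in this work is an Arduino); the default thresholds t1 = 40 °C and t2 = 50 °C follow the values used later in the real-vehicle tests, and the example readings are illustrative.

```python
# Minimal Python sketch of the two-threshold control logic described above.
# The actual controller is an Arduino; this version only mirrors the rules.

def control_step(temperature_c, t1=40.0, t2=50.0):
    """Return actuator states for one control cycle given the cabin temperature."""
    if temperature_c < t1:
        return {"fan": "off", "cooler": "off"}   # both switches open: standby
    elif temperature_c < t2:
        return {"fan": "low", "cooler": "off"}   # switch one closed: low-speed fan
    else:
        return {"fan": "high", "cooler": "on"}   # switch two closed: high speed + cooling chip

for reading in (35.0, 46.0, 60.0):               # illustrative cabin temperatures
    print(reading, "->", control_step(reading))
```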
Fig5. Control schematic diagram of the control system
The system starting temperature t1 and the fan switching temperature t2 can be defined according to actual needs.
The ventilation system is mainly composed of the fans and the semiconductor refrigeration unit. The semiconductor refrigeration unit is either on or off, while the running speed of the fan is controlled by the temperature control switch; the control system adjusts their working states according to the measured temperature.
The display system can display the real-time temperature in the compartment and the working status of the system.
Software design of control system
In order to realize good temperature detection and adjustment in the vehicle, an Arduino-based control system was designed and a software program was written. The workflow of the software system is as follows: after the vehicle is parked, the suction-mounted cooling device is placed on the front windshield. The solar cells absorb sunlight, output a voltage, and supply power to the system. When the driver issues the startup command, the temperature sensor collects the temperature signal inside the vehicle and transfers it to the controller; the controller receives the signal, processes it, and sends instructions to the fan and the semiconductor refrigeration unit. The workflow of the software system is shown in Figure 6.
The analysis of system commissioning and real vehicle test
The functions of the cooling system were tested in a real vehicle. The working states of the solar panel, the temperature monitoring system, and the temperature regulation system were tested respectively, and problems with unstable output voltage, temperature detection sensitivity, and the software program were solved. The test vehicle was a hatchback, and the test setup is shown in Figure 7. The experimental scheme and results are as follows.
Fig7. Actual vehicle test diagram of the system
The test conditions were as follows: the environmental temperature was 28 °C, the starting temperature t1 of the refrigeration system was set to 40 °C, and after exposure to the sun the temperature inside the vehicle reached 35 °C.
The working state of the system under these conditions was as follows: the output voltage of the solar cell was 12 V; the temperature shown on the display was consistent with a thermometer placed in the vehicle, indicating that the temperature sensor was working normally; and since the temperature in the vehicle did not reach the 40 °C starting temperature of the refrigeration system, the fan and semiconductor refrigeration chip did not operate and the system remained in standby, indicating that the hardware and software of the control system were working normally.
The system was tested again at an environmental temperature of 33 °C, with the startup temperature t1 set to 40 °C and the high-speed fan startup temperature t2 set to 50 °C. After the vehicle had been exposed to sunlight until the interior temperature reached 46 °C, the fan operated at low speed, the semiconductor refrigeration unit did not work, and the interior temperature was stabilized within 40 °C. The system was tested once more at an environmental temperature of 38 °C, again with t1 set to 40 °C and t2 set to 50 °C. After exposure to the sun until the interior temperature reached 60 °C, the fan operated at high speed and the semiconductor refrigeration unit worked, and the interior temperature was again stabilized within 40 °C.
The above tests of the cooling device at different environmental temperatures show that the cooling system achieves the expected functions and has a good cooling effect.
Conclusions
The real-vehicle test results show that the cooling system designed in this paper uses solar energy as its power source and a cooling curtain to block part of the solar radiation. The solar cells collect solar energy, output a voltage, and supply power to the system, driving air circulation between the inside and outside of the vehicle through the fans and semiconductor refrigeration chips, thereby cooling the vehicle and achieving the expected functions.
In particular, the device is simple and reliable in design, easy to operate, safe, and environmentally friendly. The suction cups allow it to be attached to the front windshield without any modification of the vehicle, giving it broad application prospects. | 3,551.8 | 2020-11-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
INTEGRATION OF SCIENCE AND GLOBALIZATION PROCESSES
The purpose of this work is to show the possible influence of integration processes in science on the strategies and results of globalization. In this regard the following are considered: the main strategies of globalization; the role of science in globalization; the genesis of scientific knowledge; the current state of science disintegration; possible ways and methodology for science integrating. It has been shown that the main problem of modern science is the disunity of scientific branches, which is the result of the simultaneous use of scientific paradigms developed at various stages of the development of science. Attention is paid to the development of humanitarian knowledge that is lagging behind natural science. This lag is due to the fact that humanities still use the paradigm of classical science and its mechanistic ideas about man, while natural science is based on the paradigm and methodology of post-non-classical science. Since the education system broadcasts scientific knowledge, this situation of science disintegration by means of education only worsens over time. As a way out of this situation, the need to use the paradigm of modern post-non-classical science and its main methodology - integral vision - in all scientific branches is justified. To accelerate the processes of science integration, it is necessary to implement at all levels of the educational system a transdisciplinary concept of the educational disciplines interaction, according to which all disciplines should be built into a multilevel hierarchical system with a common paradigm and axiomatics.
Introduction
Globalization processes are a powerful attractor, drawing into their orbit many other processes that stand at lower levels of the world's procedural hierarchy. The reason is that, at the level of consciousness (the subjective), humanity is initially integrated within itself and with the world as a whole, as quantum physics and Vernadsky's doctrine of the noosphere tell us, whereas at the level of objective, bodily existence a person, for the most part identifying himself with his bodily form, sees himself as an individual, a separate whole with boundaries, set apart from the surrounding world. Hence two strategies arise in his life: on the one hand, cooperation; on the other, confrontation, opposition, competition and enmity. Hence also the two concepts of globalization: one of joining forces for the common good, the other of conquering world space to satisfy clan interests. Because the future evolution of mankind provides for its integral unity not only in the subjective aspect but also in reality - in the world of objectively recorded forms and systems - processes of such unification will inevitably develop in all spheres of human activity, science among them. Since science, as a source of objective knowledge, is a significant system for the development of society and social relations, its internal integration within the general integration process is an extremely urgent task; without it, the contribution of science to a globalization aimed at uniting humanity for the purpose of cooperation cannot be significant enough.
Problem Statement
What prevents science from achieving its inner unity? First of all, there are the same human qualities that the late spiritual leader of India, Bhagavan Sri Satya Sai Baba, called the six villains: lust, anger, attachment, hatred, greed and pride (Bhagavan, 2012). It is these qualities that lead to what Spengler described: many of the great ideas of alien cultures we let die, perceiving them as false, unnecessary or meaningless (Spengler, 2014). To this we can only add that this now happens not only in relation to the ideas of "alien cultures," but also in relation to ideas belonging to someone else in general, for example to representatives of another scientific school. The very fact that commissions to combat pseudoscience exist suggests that nothing has changed in this aspect of scientific life. Another, no less important reason is the irreparable insufficiency of scientific knowledge, since science functions in the space of the objective, the space of the world of reality (the world of things), the world of forms and phenomena of essence, but not of essence itself. Actuality as a whole is inaccessible to it, since objectively existing tools cannot explore the subjective space, the space of individual and collective consciousness, in which there are no boundaries (Wilber, 2004). Scientific knowledge is always expressed in a text with a finite alphabet, and by finite means it is impossible to directly express the infinite - the essence of things, their consciousness, their controlling system. Thus, the problem is the following: how, given these reasons, can one nevertheless move along the path of intra-system integration of the institute of science, without which it is impossible to develop effectively and to influence the processes of globalization positively.
Research Questions
This problem can be presented in the form of a combination of its individual questions, the main ones of which, in our opinion, are the following.
Genesis of the problem.
Current state of scientific knowledge.
Methodological approaches and specific solutions
Purpose of the Study
The purpose of the study is to show and substantiate the methodology of systemic integration of the Institute of Science.
Research Methods
Deductions, integral vision, orienting generalizations
Findings
A historical excursion at the time of the birth and beginning of the development of modern science using deduction methods and orienting generalizations (Wilber, 1997) allows you to see the following.
Modern civilization has passed, in the foreseeable time period, a number of fairly pronounced stages of its development, characterized by the actualization and activation of individual subsystems in the general system of man's perception of his internal world and the world of the external, i.e. the habitat in its entire integrity. At the moment, the following components of the general system of human perception are known and to varying degrees have been scientifically investigated. This is a kind of "eyes of cognition," as they are called in the key works of the author of the integral vision methodology Ken Wilber (Wilber, 1997;Wilber, 2000;Wilber, 2011): the eye of body -human sensory systems; the eye of mind and reason -fragmentation of the world and a vision of meaning in semantic systems; the eye of soul -perception of the energy state of space (sensual and supersensitive perception); the eye of spiritcontrolled intuition, vision through space and time, that is, the ability to visually perceive objects of the world and events removed in space and in time.
Each person has his own unique system of perception, characterized by the degree of activity of his eyes of cognition. At the same time there are also general, historically shown stages of development of human mind, reason, consciousness (Wilber, 1997;Wilber, 2017), the consideration of which allows you to see the genesis of the problem under study.
Archaic stage of development: man is fused with nature and does not need means of assistance; eye of mind and reason in rudimentary state, other subsystems are activated; man unites in collective communities, which are quite enough for life that nature gives them; the process of cognition has an intuitive-sensual character.
Magical stage of development: the beginning of actualization of the mind and reason, in connection with which a desire arises and is realized to control elements of nature by their own psycho-https: //doi.org/10.15405/epsbs.2021.11.153 Corresponding Author: Nepomnyashchiy Anatoly Vladimirovich Selection and peer-review under responsibility of the Organizing Committee of the conference eISSN: 1155 energy means; the magician relies on his previous experience of archaic, not losing contact with nature, with the habitat, but mastering the functions of a manager; horticulture and gardening arise in human communities; the knowledge of the world is carried out in a subjective way, by identifying the subject of cognition with the object of cognition (Aristotle's organon).
Mythical stage of development: a person realizes that there is someone above his capabilities; the idea of a personified God arises; increase of body eye and mind eye activity with decrease of soul eye and spirit eye activity; the emergence of science and scientific myths; the experience of archaic and magician by man is forgotten, due to the closure of the eye of soul and the eye of spirit as a result of the use of psychoactive substances (alcohol, smoking, etc.); a person gravitates to the use of means of assistance in his activities; the development of preclassical science and industrial production.
Rational stage of development: complete closure of the eye of soul and the eye of spirit with the dominance of the eye of mind and the eye of body; man becomes an atheist, because he has nothing to see God; the development of classical science based on Bacon's "New Organon," that is, an exclusively objective way of knowing using auxiliary tools; all scientific industries are integrated into one whole, because they are based on one basis, a paradigm whose role is played by the law of conservation of massthe mass of matter in the universe is constant; a mechanistic idea of a person identified with his bodily form; psychology denies the existence of its subject of study -the soul; man sees science finite in space and time; development of technospheric thinking, consolidation of capital and means of production.
The pluralistic stage of development: the most developed part of humanity and the scientific community becomes clear that everything in the world has the right to exist, given to it by the Creator; the human being at this stage of development awakens the eye of the soul and discovers for himself a number of world universals (universal laws of the universe) proving the existence of the Creator, in particular the second beginning of thermodynamics, which suggests that unruly systems can strive only for chaos, and not for evolutionary development; science discovers a new substance (energy), clarifying the law of conservation in which both the mass of matter and energy are now included; the development of quantum physics takes science to the next stage of development -non-classical science; since the main contingent of the world of science is still at a rational level, classical science continues to exist, maintaining leading positions in the world of science and especially in its humanities; the simultaneous existence of two paradigms leads to a significant disintegration of the institute of science, the emergence of separate scientific branches with their private paradigms, a pluralistic view of the world on the principle of "each has its own truth and its own paradigm".
The cholistic stage of development: the avant-garde of science and philosophy reveals that quantum effects are not local, but therefore the world is something holistic, and all its components are totally interconnected; all the same vanguard reveals such a substance of the universe as information (not as a collection of data, but as a creative force); the law of conservation takes the trinitarian form of "matter-energy-information" (Kuhn, 1962), perfectly consistent with the representation of the eternal philosophy of the trinitarian body-soul-spirit structure of man (Huxley, 1946); those who move science forward understand that science cannot distance itself from any source of knowledge (being, cultural, philosophical or contemplative) and unite with these sources on the principle of a union of mind and heart; post-non-classical science arises and its main methodology is "integral vision," developed and https: //doi.org/10.15405/epsbs.2021.11.153 Corresponding Author: Nepomnyashchiy Anatoly Vladimirovich Selection and peer-review under responsibility of the Organizing Committee of the conference eISSN: 1156 proposed to the world by the outstanding philosopher and psychologist Ken Wilber (Wilber, 1997;Wilber, 2004;Wilber, 2011); the old in science does not want to give way to his positions either in the worldview or administratively; the avant-garde is accused of apostasy, betrayal of science and fascination with mysticism; Anti-pseudoscience commissions and committees are established; the internal disunity of the institute of science reaches its apotheosis.
Thus, in its genesis, science has approached its modern state, the main characteristic of which is disintegration. Why in the heyday of technology there is a state of disintegration in science, giving rise to an abundance of simulacrums and simulations in the humanities of science (Baudrillard, 1994;Gazzaniga, 2005) leading humanity to oblivion?
The stages of development of human consciousness and, accordingly, science are not tied to a specific calendar time. Studying Bhagavatgita, Einstein discovered there a description of nuclear weapons and the results of their use, which in this work, written more than a thousand years ago, is called the Brahmo weapon. That is, what we now call non-classical science was already lived by a person thousands of years ago, or perhaps more, which follows from those archaeological discoveries that are now not officially accepted to be published and discussed. What could be the methodological approaches and specific solutions to the problem under consideration? The answer to this question is simple enough: what disengages the institute of science should be eliminated. That is, the following is necessary.
A single common paradigm should be established for all branches of knowledge, corresponding to post-non-classical science, that is, a conservation law that takes into account the existence of a minimum of three substances (matter, energy and information) and the possibility of their mutual transformation.
The paradigms of the classical and non-classical stages of science can be studied and investigated only in the context of considering the history of science and its delusions.
Since future scientists receive basic knowledge in the educational system, it first of all requires the widespread introduction at all levels of education of a transdisciplinary concept of the interaction of educational disciplines (Jantsch, 1972), according to which all disciplines studied should be built into a multi-level hierarchical system with a common paradigm and axiomatics set by the discipline of the https: //doi.org/10.15405/epsbs.2021.11.153 Corresponding Author: Nepomnyashchiy Anatoly Vladimirovich Selection and peer-review under responsibility of the Organizing Committee of the conference eISSN: 1157 highest level of this structure. Only in this case, the reproduction of misconceptions, simulacrums and simulations broadcast by the education system to new generations can be stopped.
It is necessary to restore the quality of the system of general education and higher education, since the latter has long been turned into a system of vocational training aimed not at the comprehensive development of the individual, but at the training of "literate users" for the needs of the economy. It is necessary to eliminate the situation in which "we pretend to teach, they pretend to learn" (Collier, 2013).
As a rule, real scientists (discoverers of the new one) do not leave the system of vocational training, only artisans come out, which satisfies many interested in preserving disintegration and science, and society.
All this cannot be realized based on the methodology of the early stages of the development of science. It is necessary to use the main methodology of post-non-classical science -the methodology of integral vision, which allows to create a holistic, integral picture of the world and, accordingly, the theory of everything (Wilber, 1997;Wilber, 2011). On its basis, it is only possible to integrate the institute of science, stopping, first of all, its division into the sciences of natural and unnatural (humanities), which is the main disintegrating factor that inhibits the development of the entire institute of science.
An integral vision implies a view of the object and subject of study by all four eyes of cognition, that is, to conduct research not only in an objective way, but also subjective. In other words: science should use not only Bacon's "new organon" (objective means of measurement and observation) as a universal means of cognition, but also Aristotle's "organon" -the use of the subject of cognition as a tool of cognition, by identifying the mind of the person who knows with the object of cognition.
The need for such an approach is perfectly illustrated by the four-sector model of integral vision proposed by Wilber (1997), according to which any individual in the universe (and a person in particular) exists in four spaces of being -objective, subjective, individual and collective. If a scientist looks at a person only as an object, he begins to mistakenly identify it with the biological body -with the form of existence. If a particular denomination looks at a person only as a "spark of God," this can lead to fanaticism, asceticism, and other misconceptions that interfere with integral human development. If a researcher looks only into the space of an individual, he has the idea of human isolation, from where most human problems go up to wars. If a researcher looks only into the space of a collective, he begins to imagine a person as a product of social, from where the misconception of psychology that individuals are not born, but become. Thus, the disintegration of the human sciences is carried out, as a result of which personality psychology proudly declares that in its understanding, personality is not what social psychologists talk about. The other side responds with the same pride. Both do not even think about looking at the problem integrally.
At the same time, in the sphere of subjective experience, there are excellent integral models of man, which make it possible to reconcile everyone and show them their shortcomings. For example, the seven-level model proposed by the layman brother of the Order of Rosicrucians Handel at the beginning of the last century (Heindel, 1911) very clearly and consistently shows the integral structure of a man in which there are seven bodies with a detailed description of the functionality of each of them. Yes, this is also a model, like all theoretical objects produced by science, but it has an integral character and, as a result, a huge explanatory potential that the models developed by science at the previous stages of its development do not have at all. https://doi.org/10.15405/epsbs.2021.11.153 Corresponding Author: Nepomnyashchiy Anatoly Vladimirovich Selection and peer-review under
Conclusion
Modern science throughout the Earth's world is politicized, since its existence and successful functioning in any country are totally dependent on economics and politics.
The fragmentation of the interests of the economy and politics of leading countries, which arose as a result of the implementation of the principle of "divide and rule" known from antiquity, gives rise to the ideological and methodological disintegration of all social institutions and science, and religion, among other things.
As a result, globalization processes still continue to develop mainly on the basis of a strategy of counteraction, as evidenced by the competition in the development and use of means of conquering space, tactile, energy, information and hybrid wars.
Hope that the integration of humanity will arise on its own or by the will of external management is possible, but not necessary, as evidenced by the experience of previous dead civilizations and their high-tech remains. The desire and aiming to survive and not lag behind the process of general evolutionary development in space should come from humanity itself, since it is granted freedom of will by the Creator, and so far, no one is going to take it away from man.
There are two social institutions on earth that claim a leading role in society in terms of understanding the world: the first, confessional, because it considers itself closer to the Creator (subjective perception of the world by the eye of soul and the eye of spirit); the second is science, because she considers herself the holder of objective truth, knows more about the world order (objective perception of the world by the eye of body and the eye of mind and reason).
The time has come to follow the counsel of the Teachers of mankind to create a "union of mind and heart." This call to mankind was heard and broadcast by many, including Max Planck, who at the beginning of the twentieth century said that science and religion were moving from different directions, but to one goal -to the knowledge of the Creator. In this regard, the pervasive forced retention of the humanities in the paradigm of classical science is a crime against humanity, as it impedes this alliance.
One way to accelerate globalization processes aimed at cooperation rather than self-destruction is for all social institutions, especially education, religion, and culture, to popularize the achievements of post-non-classical science, which embraces not only objective truth but also knowledge gained in culture, philosophy, and religion. | 4,733.4 | 2021-11-29T00:00:00.000 | [
"Political Science",
"Philosophy",
"Economics"
] |
Non-Contact Smartphone-Based Monitoring of Thermally Stressed Structures
The in-situ measurement of thermal stress in beams or continuous welded rails may prevent structural anomalies such as buckling. This study proposed a non-contact monitoring/inspection approach based on the use of a smartphone and a computer vision algorithm to estimate the vibrating characteristics of beams subjected to thermal stress. It is hypothesized that the vibration of a beam can be captured using a smartphone operating at frame rates higher than conventional 30 Hz, and the first few natural frequencies of the beam can be extracted using a computer vision algorithm. In this study, the first mode of vibration was considered and compared to the information obtained with a conventional accelerometer attached to the two structures investigated, namely a thin beam and a thick beam. The results show excellent agreement between the conventional contact method and the non-contact sensing approach proposed here. In the future, these findings may be used to develop a monitoring/inspection smartphone application to assess the axial stress of slender structures, to predict the neutral temperature of continuous welded rails, or to prevent thermal buckling.
Introduction
Columns, beam-like structures, cables, and rails are common engineering structures subjected to axial stress. For some of these structures, the stress is cyclic, i.e., tension-compression, and may lead to buckling. The most common example is the stress in continuous welded rails (CWRs), which are track segments welded together to form a continuous miles-long rail. When anchored, a CWR is pre-tensioned to counteract the thermal expansion occurring on warm days, but the pre-tension cannot be too high because the rail may break in winter due to contraction. Typically, the pre-tension is such that the rail neutral temperature (RNT) T N , i.e., the temperature at which the net longitudinal force is zero, is between 32 °C and 43 °C. However, over time the neutral temperature "physiologically" decreases and becomes unknown, increasing the risk of extreme compression on hot days, when the compressive force may buckle the rail. Buckling occurs when the actual temperature T R in the material reaches the Euler temperature T E , which is related to the Euler stress σ E as [1]: T E = T N + σ E /(Eα) (1). In Equation (1), E and α represent the Young's modulus and the coefficient of thermal expansion of rail steel, respectively. As σ E , E, and α are typically known, buckling may be prevented using a reliable nondestructive methodology that enables the measurement of the thermal stress or the inference of T N .
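As a quick numerical illustration of Equation (1), the short Python sketch below estimates the rail temperature at which buckling would be expected; the rail-steel properties, the Euler stress, and the neutral temperature are assumed illustrative values, not figures from this study:

```python
import numpy as np

# Assumed, illustrative rail-steel properties (not values from the paper)
E = 200e9        # Young's modulus [Pa]
alpha = 1.15e-5  # coefficient of thermal expansion [1/degC]

def euler_temperature(sigma_E, T_N):
    """Equation (1): temperature at which the thermal stress E*alpha*(T_E - T_N) reaches sigma_E."""
    return T_N + sigma_E / (E * alpha)

# e.g., a rail anchored at a neutral temperature of 35 degC with an assumed
# Euler stress of 80 MPa would be expected to buckle near 70 degC:
print(euler_temperature(sigma_E=80e6, T_N=35.0))  # ~69.8
```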
The current noninvasive methods to estimate the RNT or the axial stress have advantages and limitations and there is no uniform consensus about the best technique. Some methods such as the lift method [1,2] require track closure. Others require long-term wayside installation [3]. New techniques such as those based on electromechanical impedance [4,5], nonlinear ultrasonics [6,7], or highly nonlinear solitary waves [8][9][10][11] are at a research stage and have not been commercialized yet.
In the present study, we proposed an approach based on structural dynamics and the non-contact detection of vibration modes using a smartphone operating at frame rates higher than the conventional 30 Hz. This approach follows recent lines of research in which high-speed cameras were proposed to replace accelerometers or laser vibrometers for the non-contact measurement of the dynamic parameters of structures in structural health monitoring (SHM) applications [12][13][14][15][16][17][18][19][20]. This emerging noncontact vision-based technique is facilitated by the widespread diffusion of affordable high-speed consumer-grade video cameras and high-performance smartphones, and by the rapid development of image processing algorithms [21][22][23][24][25][26][27][28][29][30][31][32][33]. Some of these algorithms use phase-based video motion processing [29][30][31] and video motion magnification techniques [21][22][23][24][25]27,28,[31][32][33][34]. They require prior knowledge of the frequency range of interest to accurately identify the modal shapes and corresponding frequencies. To avoid this requirement, blind identification processes were developed [32,33,35] in which the video frames are analyzed with a multi-scale image processing method that extracts the local pixel phases and obtains the local structural vibrations using spatiotemporal filters. All of these image processing algorithms require complex image transformations and may be computationally expensive. In addition, they work better with high-end cameras that provide high-contrast images.
A viable alternative is provided by multi-threshold techniques [36], in which a small number of pixels within a pre-defined region of interest (ROI) is used to identify the vibration characteristics of a given structure. The movement of the objects within the ROI is assessed by accounting for their luminance changes using a local multi-threshold technique. In general, the frames associated with a vibrating structure have periodic levels of illumination, and this periodicity is identified to ascertain the frequency of vibration. To do so, the method analyzes the number of bright pixels at different thresholded luminance levels and then combines this information to obtain the main vibration frequency of the movement. The combination of the different thresholded levels permits cancelling the noise in the image while enhancing the main signal. The method can also be applied to a whole scene by dividing all the frames into small overlapping ROIs, as was demonstrated in [37].
In the study presented in this paper, we implemented the multi-threshold technique in MATLAB ® [38] to extract the natural frequency of steel beams subjected to thermal load. The study improved and expanded the work published by Ferrer et al. [36] because, for the first time, the multi-threshold technique was applied to a structure that is thermally loaded with the aim of measuring axial stress. Two fixed-fixed steel beams were monitored with a smartphone. In contrast to existing studies [21,[23][24][25]28,32] where high-cost, high-frame-rate (fps) cameras were adopted to identify the vibration characteristics, in the present work, a smartphone camera was used together with an ad hoc video processing algorithm. The scope and main novelty of the paper is proving that the imaging algorithm, originally proposed in [36], can be used to process videos taken with a smartphone instead of a regular camera at a frame rate higher than 30 Hz, with the purpose of solving the engineering challenge of thermal stress measurement. To the best of our knowledge, this is the first study in which the measurement of thermal stress was addressed by using a video-based non-contact and noninvasive approach. It is noted that the use of a smartphone may have practical advantages with respect to high-end cameras because smartphones are widely available and the proposed methodology may be easily replicated/adopted by the scientific community to monitor structures of interest.
Image Processing Algorithm
The algorithm used here was proposed by Ferrer et al. in [36] to track subpixel movements of objects in a video scene to obtain the main frequency of vibration. The algorithm is based on the analysis of the luminance variation between two consecutive frames of the sequence. Therefore, no initial guess about the object shape or the movement frequency is needed. The method is based on the evidence that any movement of an object can be detected by a camera if it produces any change in the luminance levels registered by the camera sensor. For small movements, on the subpixel scale, these changes will be of one or two luminance levels and will only affect a few pixels, as shown in the simulation depicted in Figure 1. The effect is more noticeable at the edges of complex-shaped objects, even though it is produced along the whole object at different positions as the objects move. To enhance the detection capabilities, the luminance can be analyzed at thresholded levels, i.e., all pixels in the image above a certain luminance level are set to black while the others are set to white. If the luminance of a pixel is affected by the movement, it may eventually cross the threshold, so the change will become evident. In the case of periodic movements, the variation of the binary pattern will also reproduce this feature and thus the main frequency can be obtained by counting the number of pixels that are active in each frame, thus obtaining a temporal signal representing the movement.
Looking at the thresholded objects depicted in Figure 1c,d and counting the number of active (white) pixels, it is found that the first case (Figure 1c) has nine more active pixels than the second case (Figure 1d). If it is assumed that the object moves periodically between these two positions, i.e., it has a harmonic oscillation of amplitude 0.1 px, and the total number of white pixels is represented in the time domain, a periodic oscillation in the number of active pixels will be observed, thus revealing the movement of the object. Notice that the direction of the movement is unknown, but the pixel count will vary at the same rate as the object moves, revealing the frequency of the vibration. Examination of a single thresholded level may reveal the object movement but also noise coming from different sources that may affect the level of a single pixel. To avoid the detrimental effects of noise, the analysis is extended to a group of levels. This situation is represented in Figure 2, where three thresholded levels obtained from the original grayscale object (multilevel thresholding) are shown. Each of the thresholded versions is treated as a separate sequence and its movement is analyzed following the algorithm explained here.
The signals obtained for each thresholded level can be combined (i.e., added or even multiplied) to reinforce the periodic signal and cancel out non-harmonic components. In Figure 3, the Fourier transform of the signal obtained from the analysis of eight thresholded levels of a vibrating tuning fork is shown. The frequency of 440 Hz was detected at all levels, although with different intensities. The combination of all the levels reinforces the major peak while cancelling the secondary peaks, which may come from image flickering or vibrations of the camera. As can be deduced from Figure 3, the method is implemented on a small ROI that might contain the whole object. If one is interested in the vibrating frequency of a particular point of a scene showing a variety of moving objects, the analysis must be performed on a small ROI around the target point. The size of the ROI has to be small enough to exactly locate the vibration points, but wide enough to allow small drifts of the camera or the specimen within the selected area and thus not miss the target. One can also extend the method to the analysis of a whole scene by dividing the whole frame in the video sequence into small ROIs and analyzing all of them separately (see [37]). This is useful in the case that the scene shows different objects vibrating simultaneously, but the high computational cost of such an approach makes it inappropriate for many applications.
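A minimal Python sketch of the pixel-counting idea described above (not the authors' MATLAB implementation; the array layout, the number of levels, and the way the per-level spectra are combined are illustrative assumptions):

```python
import numpy as np

def dominant_frequency(frames, fps, n_levels=8):
    """Multi-threshold sketch: binarize the ROI at several luminance levels,
    count active pixels per frame to build a time signal for each level, and
    combine the spectra so the common vibration peak is reinforced while
    uncorrelated noise cancels out.

    frames : ndarray (n_frames, height, width), grayscale ROI sequence
    fps    : video frame rate [Hz]
    """
    thresholds = np.linspace(frames.min(), frames.max(), n_levels + 2)[1:-1]
    combined = None
    for t in thresholds:
        signal = (frames > t).sum(axis=(1, 2)).astype(float)  # active pixels per frame
        signal -= signal.mean()                                # drop the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        combined = spectrum if combined is None else combined * spectrum
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[np.argmax(combined[1:]) + 1]                  # skip the zero-frequency bin
```

With a 240 fps video, the returned peak is resolved up to the 120 Hz Nyquist limit, which covers the first-mode frequencies discussed below.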
All the processes described in this section have been implemented in Matlab and a test example can be downloaded from [39]. The website includes test software for the selection and calculation of vibration frequencies in four regions simultaneously, along with a test video sequence.
Experimental Setup
In the study presented in this article, two beams, hereinafter referred to as the thin beam and the thick beam, respectively, were examined. The former was 41.34 mm × 10.12 mm × 1402 mm and made of Type 416 steel, whereas the thick beam was 127 mm × 15.88 mm × 1395 mm and made of A36 steel. The geometry of the thick beam was chosen because it resembles a large rail web. The mechanical and the geometric properties of the specimens are listed in Table 1. Figure 4 shows the experimental setup. The beams were clamped to an MTS machine (MTS Corporation, Model LVDT) with an ultimate capacity of 1780 kN, operating in displacement control. Owing to the fixed-fixed boundary conditions, the Euler load P cr was [40]: P cr = 4π²EI/L² (2), where I and L represent the moment of inertia and the free length, respectively, of the beam. For our specimens, the critical load P cr and the corresponding stress σ cr are listed in Table 1. They were obtained by considering the free length L shown in the third line of the table. Each beam was instrumented with a PCB 356808 accelerometer (see Figure 4c,d) connected to a signal conditioner which, in turn, was connected to an oscilloscope sampling at 10 kHz. The accelerometer was placed at 1/3 of the free length of the beam to record the first two modes. The specimens were heated with a commercial thermal tape (BriskHeat BSAT 301010, BriskHeat®, Columbus, OH, USA) secured along the whole length of the beam to impart uniform heat. For the thin specimen, the width of the thermal tape was 25.4 mm, whereas, for the thick beam, a 76.2 mm wide heat tape was used.
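A small Python sketch of the fixed-fixed Euler load of Equation (2); only the thin-beam cross-section is taken from the text, while the Young's modulus and the free length are assumed placeholders (the actual properties are those listed in Table 1):

```python
import numpy as np

def euler_load_fixed_fixed(E, I, L):
    """Critical buckling load of a fixed-fixed column: P_cr = 4*pi^2*E*I / L^2."""
    return 4.0 * np.pi**2 * E * I / L**2

E = 200e9                    # assumed Young's modulus [Pa]
b, h = 41.34e-3, 10.12e-3    # thin-beam cross-section from the text [m]
L = 1.2                      # assumed free length [m]
I = b * h**3 / 12.0          # weak-axis moment of inertia [m^4]
P_cr = euler_load_fixed_fixed(E, I, L)
sigma_cr = P_cr / (b * h)    # corresponding critical stress [Pa]
```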
The temperature was measured with a thermocouple (ExTech Instruments Type J/K Thermometer) and an infrared camera, model FLIR SC660 (Figure 4b) (FLIR, Inc., Wilsonville, OR, USA). The emissivity of the camera was set to 0.85 in accordance with the Type 416 stainless steel beam. Finally, the vibration of the specimens was triggered with a hammer and recorded with a smartphone operating at a 240 Hz frame rate. The smartphone, a Samsung Note 8, was secured to a tripod and its shutter was activated with a remote Bluetooth shutter. Two lamps (Commercial Electric) were used to increase the signal-to-noise ratio of the videos. The implementation of the image processing algorithm described in the previous section was done considering eight different thresholded levels of the sequence. The combination of these eight signals minimized any detrimental effects of noise and non-periodic movements.
Thin Beam
For the thin specimen, the following protocol was used. Pre-tension was initially applied at about 5% of the beam's yield load. Heat was imparted while the machine, operating in displacement control, held the beam. When the surface temperature was about 70 °C, the beam was cooled naturally until the initial temperature was reached. Three heating-cooling cycles were completed. At every ∆T = 4 °C step, vibration was induced with a hammer, and a 5 s video and the time waveform of the accelerometer were recorded. At those instants, the load displayed in the control panel (MTS FlexTest SE) of the MTS machine (MTS®, Eden Prairie, MN, USA), the thermocouple reading, and a snapshot of the IR camera were taken. Examples of these infrared images are displayed in Figure 5, which shows the thin (a,b) and thick (c,d) beams at the initial and the highest temperature in the third heating ramp. The rectangular frame emphasizes the area of the beam considered to compute the average temperature and includes the entire free length (see Table 1) of the specimens.
Using the formulation commonly described in solid mechanics textbooks, the thermal load P T and the corresponding stress σ T imparted to the beam were calculated as: P T = EAα(T f − T 0 ) (3a) and σ T = Eα(T f − T 0 ) (3b). In Equation (3a), A is the cross-sectional area, and T 0 and T f represent the initial and final temperatures of the beam, respectively. The critical temperature occurs when P cr = P T . From this identity, the temperature rise ∆T cr necessary to induce buckling is equal to: ∆T cr = P cr /(EAα) (4), equivalent to: ∆T cr = σ cr /(Eα) (5). For the thin beam, ∆T cr was equal to 30.31 °C, whereas, for the thick specimen, ∆T cr was equal to 54.08 °C. The value of 30.31 °C was lower than the temperature rise imparted on the beam, for reasons that are explained below. Figure 6a shows the axial stress recorded from the MTS as a function of the temperature for the three thermal cycles completed in the experiment. Although the cycles overlap very well, the heating and cooling ramps do not overlap and show a plateau between 52 °C and 60 °C, and between 30 °C and 40 °C, respectively. This behavior is attributed to local elastic deformation of the reaction plates placed at the ends of the beam and to small, yet relevant, adjustments in the MTS machine during the transition from tension to compression, and vice versa. These uncontrollable phenomena resulted in a significant difference between the average temperature of the beam and the analytical stress expected from Equation (3b). This is shown in Figure 6b, where the expected stress is presented as a function of the beam temperature. This difference was such that the analytical critical temperature predicted using Equations (4) and (5) was much lower than the empirical temperature that the beam could withstand without buckling. Another contributing factor to this discrepancy was the presence of the clamps, which acted as heat sinks and cannot guarantee a truly uniform heat distribution along the beam and through its thickness. Nonetheless, we demonstrate below that the factors discussed above did not affect the objective of the study and the validation of our research hypothesis. The local multi-threshold technique [36] was applied to extract the natural frequencies of vibration of the beam. The ROI in Figure 7 was considered and consisted of a square frame of 15 × 15 pixels. The pixel size was 1.4 µm. The maximum and minimum luminance in the area were determined and eight thresholds were applied, thus producing eight binarized sequences. A temporal signal was obtained for each sequence and the Fourier transform of each signal was computed. By averaging the frequencies obtained from the eight thresholded levels, the main frequency peak of the vibrating beam in the considered ROI was obtained. From the natural frequency f n , the axial stress P was extracted using classical structural dynamics concepts [40][41][42], and in particular the equation: f n = ((β n L)²/(2πL²)) √(EI/(ρA)) √(1 + PL²/((β n L)²EI)) (6), where ρ is the density of the material, P is positive if the force is in tension and vice versa, and β n L is the n-th root of the differential equation of the vibration of the single-span beam applied to a given boundary condition. For the first mode (n = 1) and fixed-fixed support, β 1 L = 4.73 [40]. Figure 8 shows the accelerometer readings and the corresponding FFTs measured at the beginning and the end of the third heating ramp, when the average temperature of the specimen estimated with the IR camera was 21.7 °C and 69.6 °C, respectively.
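Assuming the approximate frequency-axial-load relation written above as Equation (6) (reconstructed here, with β 1 L = 4.73 for a fixed-fixed beam), the axial force can be recovered from a measured first-mode frequency with a few lines of Python; the material and section properties below are placeholders, not the Table 1 values:

```python
import numpy as np

def axial_force_from_frequency(f_n, E, I, rho, A, L, betaL=4.73):
    """Invert f_n = (betaL^2/(2*pi*L^2)) * sqrt(E*I/(rho*A)) * sqrt(1 + P*L^2/(betaL^2*E*I))
    for the axial force P (positive in tension). Illustrative sketch only."""
    return (2.0 * np.pi * f_n) ** 2 * rho * A * L**2 / betaL**2 - betaL**2 * E * I / L**2

# Placeholder thin-beam properties (see Table 1 for the actual values):
E, rho = 200e9, 7800.0               # Pa, kg/m^3
b, h, L = 41.34e-3, 10.12e-3, 1.2    # m
A, I = b * h, b * h**3 / 12.0
sigma = axial_force_from_frequency(30.0, E, I, rho, A, L) / A  # stress for a 30 Hz reading [Pa]
```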
The FFTs reveal the frequencies of the first two modes: 39.91 Hz and 101.1 Hz at ambient temperature, and 28.08 Hz and 85.71 Hz when the beam was nearly 70 °C. Higher modes were not visible because the accelerometer was placed close to one of the nodal points of the beam.
The results of the image processing are presented in Figure 9 and refer to the same instants discussed above. As the frame rate was 240 Hz, the graphs extend to 120 Hz, corresponding to the Nyquist frequency of the videos. The computation that yielded Figures 8 and 9 was applied to all measurements taken during the three thermal cycles. The results are presented in Figure 10 where the frequency of the first mode measured from the accelerometer and the video algorithm is plotted against the axial stress, measured through the MTS machine. For the sake of clarity, each cycle is presented separately. The figures demonstrate the excellent agreement between the accelerometer-based and the video-based results and the repeatability of the setup. Any small discrepancy between the two noninvasive approaches can be reduced by minimizing the noise in the video recordings, e.g., by increasing the illumination and/or by improving the spatial resolution of the ROI. The use of the smartphone makes the measurements more agile and non-contact, without the need for a signal conditioner and an oscilloscope. Figure 10 also demonstrates that both NDE methods reveal the linear relationship between frequency and true stress as predicted by Equation (6), despite the setup constraints that became apparent only during the post-processing analysis. All experimental data relative to the smartphone-based videos are presented in Figure 11a, in which the frequency extracted from the image processing is plotted against the axial stress recorded with the MTS machine. The data are very well interpolated with a linear function. The line interpolation suggests that the natural frequency of the stress-free beam, i.e., at beam's neutral temperature, is equal to 34.43 Hz. This value is only 5.3% different from the theoretical value of 36.27 Hz found using Equation (6) and the properties listed in Table 1. To ease the comparison between empirical results and analytical prediction, the latter is presented in Figure 11b where the expected natural frequency of the fundamental mode is plotted as a function of the axial stress. For convenience, the vertical axis is left identical to the corresponding axis of Figure 11a. The comparison shows the close correlation between experimental results and theoretical prediction.
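The extrapolation to the stress-free natural frequency described above is a simple linear fit; the sketch below uses hypothetical (stress, frequency) pairs in place of the measured data:

```python
import numpy as np

# Hypothetical (stress, frequency) pairs standing in for the smartphone data
stress_mpa = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])   # axial stress from the MTS [MPa]
freq_hz    = np.array([30.1, 32.4, 34.5, 36.5, 38.4])    # first-mode frequency from the video

slope, intercept = np.polyfit(stress_mpa, freq_hz, 1)
print(f"stress-free natural frequency ~ {intercept:.2f} Hz")  # frequency at zero axial stress
```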
The ability to capture the true stress of the beam using the accelerometer-based and the video-based data is emphasized in Figure 12, which shows the estimated stress from Equation (6) as a function of the stress measured with the MTS. The graphs quantify the linear interpolation of the experimental data and display a very small divergence from the ideal case y = x, which would indicate a perfect match between the estimated and the true stresses.
Thick Beam
For the thick beam, one thermal cycle was completed. Figure 13 shows the axial stress, recorded with the loading machine, as a function of the average beam temperature. Similar to what was observed in the slender beam test, the heat imparted to the specimen was much higher than anticipated by the analytical prediction. At about 65 °C, the same kind of plateau observed in the slender beam is visible. The reason the stress did not show any plateau during the cooling phase is unclear. However, the slope of the stress-temperature data is lower than predicted by Equation (3b). Figure 14 shows the estimated stress obtained from Equation (6) as a function of the stress recorded through the MTS. The graph demonstrates the excellent agreement between the accelerometer data and the smartphone data. The two noninvasive approaches are compared quantitatively in Figure 15, where the experimental data are interpolated linearly. The slope of the video-based data (Figure 15a) is identical to the slope of the accelerometer data (Figure 15b). Both datasets diverged only marginally from the ideal result y = x.
Conclusions
In this article, a study about the use of a smartphone to capture some vibration characteristics of simple structures such as beams was presented. One thin and one thick beam were held in tension and heat was imparted to induce axial load. Transverse vibration was triggered by using a hammer and recorded with a conventional accelerometer attached to the beam and a smartphone operating at 240 frames per second. Measurements were taken at discrete temperature intervals and the frequency of the fundamental mode of vibration was extracted from both contact (accelerometer) and non-contact (video) monitoring methods. From the value of the natural frequency of vibration, axial stress in the beams was extracted. The results of the experiments clearly demonstrated that the new non-contact method can reliably replace conventional accelerometers as the frequency found with both methods matched very well. The results also proved that the combined use of a smartphone and the proposed imaging algorithm can assess the axial stress thermally induced in the specimens. As such, the proposed non-destructive, non-contact evaluation method may be considered in the future, after proper research and development, for the field measurement of thermal stress in continuous welded rails.
It is acknowledged that the camera of a smartphone does not add any new feature to a high-end, high-speed video camera, and that the main advantage of a smartphone is its versatility: a typical user-end camera is a device limited to capturing videos, which must then be transferred to a computer for further analysis. In the future, a medium-end smartphone may be able to capture images, process them, and eventually combine them with other information provided by the embedded accelerometer and GPS, for example. Additionally, data taken with a smartphone can be immediately shared.
Future studies shall validate the repeatability of the proposed methodology, widen the stress range being monitored, and improve the image processing algorithm to be embeddable in the smartphones for rapid field assessment of critical infrastructures. | 5,626.6 | 2018-04-01T00:00:00.000 | [
"Engineering"
] |
Identification of additional body weight QTLs in the Berlin Fat Mouse BFMI861 lines using time series data
The Berlin Fat Mouse Inbred line (BFMI) is a model for obesity and metabolic syndrome. The sublines BFMI861-S1 and BFMI861-S2 differ in weight despite high genetic similarity and a shared obesity-related locus. This study focused on identifying additional body weight quantitative trait loci (QTLs) by analyzing weekly weight measurements in a male population of the advanced intercross line BFMI861-S1 x BFMI861-S2. QTL analysis, utilizing 200 selectively genotyped mice (GigaMUGA) and 197 males genotyped for top SNPs, revealed a genome-wide significant QTL on Chr 15 (68.46 to 81.40 Mb) for body weight between weeks 9 to 20. Notably, this QTL disappeared (weeks 21 to 23) and reappeared (weeks 24 and 25) coinciding with a diet change. Additionally, a significant body weight QTL on Chr 16 (3.89 to 22.79 Mb) was identified from weeks 6 to 25. Candidate genes, including Gpt, Cbx6, Apol6, Apol8, Sun2 (Chr 15) and Trap1, Rrn3, Mapk1 (Chr 16), were prioritized. This study unveiled two additional body weight QTLs, one of which is novel and responsive to diet changes. These findings illuminate genomic regions influencing weight in BFMI and emphasize the utility of time series data in uncovering novel genetic factors.
weight, hepatic fat storage, low insulin sensitivity, and impaired glucose tolerance. In contrast, S2 is insulin sensitive despite being obese 5 .
In a previous study, we used an advanced intercross population (AIL) between the BFMI861-S1 line and the reference strain B6N to discover further regions involved in body weight regulation. By applying a variation of multiple QTL mapping approaches (MQM) which adjusts for the large effect of jObes1 (by including it as a cofactor in the model), we were able to identify a hidden body weight QTL on Chr 6 10 .
In another study, an AIL population was used, which was generated from an initial cross between the BFMI861 lines S1 and S2 (AIL BFMI861-S1 x BFMI861-S2), to identify more genetic loci accounting for the observed phenotypic difference in traits of the metabolic syndrome of the S1 line 11 . The advantage of crossing the two BFMI lines is that it naturally corrects for the significant effect of the jObes1 locus on body weight, as both lines carry the high allele. Furthermore, this population may reveal hidden minor QTLs that contribute to weight variability and have not been discovered previously. As a result, three novel QTLs for traits of the metabolic syndrome (Chr 3: gonadal adipose tissue weight, blood glucose; Chr 15: gonadal adipose tissue weight; Chr 17: gonadal adipose tissue weight, liver weight, blood glucose concentration, liver triglycerides) and one QTL for body weight on Chr 16 were successfully identified using end point measurements at week 25 11 .
In the current study, we focused not only on the endpoint measurement but on time series body weight data that were collected in this AIL population weekly from week 3 until week 25. These data allowed us to identify additional body weight QTLs that contribute to the overall obese phenotype peculiar to the BFMI lines, in addition to the known major QTL jObes1 on Chr 3 and the other QTLs on Chr 6 and 16.
Mouse population
We used male mice from the 10th generation of the AIL population BFMI861-S1 x BFMI861-S2, which originated from an initial breeding between an S1 male and an S2 female, followed by successive rounds of random mating in each subsequent generation 5 . The randomization of mating pairs was done using the RandoMate program 12 .
Animal husbandry and phenotyping
The German Animal Welfare Authorities granted approval for all experimental treatments involving mice under the reference number G0235/17, and the experiments are reported in accordance with the ARRIVE guidelines. All methods were performed in accordance with the relevant guidelines and regulations. The mice were maintained in standard conditions, with a 12-h light-dark cycle (lights turned on at 0600 h), and at a controlled temperature of 22 ± 2 °C. Furthermore, the mice were provided with ad libitum access to both food and water. Mice received a standard diet until week 20 (16.7 MJ/kg of metabolizable energy, 11% from fat, 26% from protein, and 53% from carbohydrates, V1534-000, ssniff EF R/M; Ssniff Spezialdiäten GmbH, Soest, Germany), followed by two weeks of a high-fat, low-carbohydrate diet (16.9 MJ/kg of metabolizable energy, 34% from fat, 19% from protein, and 47% from carbohydrates, C1057; Altromin Spezialfutter GmbH & Co. KG, Lage, Germany) to increase obesity but protect β-cells, and finally three additional weeks of high-fat, high-carbohydrate diet feeding until week 25 (21.9 MJ/kg of metabolizable energy, 28% from fat, 20% from protein, and 40% from carbohydrates) 13 to enhance metabolic differences 11 . Body mass was recorded weekly using a standard laboratory scale between the age of 3 (after weaning) and 25 weeks (Supplementary File 1). Outliers were defined as individuals with a measurement that deviates from the population mean by more than three standard deviations and were removed from the data.
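The three-standard-deviation outlier rule used for the weekly weights is straightforward to express; a minimal Python sketch (the weight vector is hypothetical) is:

```python
import numpy as np

def remove_outliers(weights, k=3.0):
    """Drop measurements deviating from the population mean by more than k standard deviations."""
    w = np.asarray(weights, dtype=float)
    mu, sd = np.nanmean(w), np.nanstd(w)
    return w[np.abs(w - mu) <= k * sd]

# Hypothetical week-20 body weights [g]; the function returns the values within +-3 SD of the mean
weekly_week20 = np.array([38.2, 41.5, 39.9, 44.1, 40.3])
cleaned = remove_outliers(weekly_week20)
```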
Genotyping
Among the 397 male mice subjected to phenotyping, 200 mice, representing the extreme ends of the phenotypic distributions for gonadal adipose tissue weight and liver weight, were chosen for genotyping using the GigaMUGA array 11 . Due to the high genetic similarity of the parental lines S1 and S2, only 5,171 (distribution: Supplemental Fig. 1B in 14 ) out of 143,259 SNPs on the array were informative and passed the quality control 11 (Supplementary File 2). Genomic positions are given according to the Mouse Genome Version MM10, GRCm38.p6.
To reduce the potential bias in estimating allele effect sizes caused by selective genotyping, the remaining 197 males of the AIL population were genotyped for two top markers identified in an initial QTL scan as being associated with body weight (see QTL mapping section). For these markers, KASP genotyping assays were developed (Supplementary Table 1).
QTL mapping and candidate gene prioritization
QTL mapping was conducted for each body weight time point in a two-step process. Initially, a QTL scan was carried out using the 200 AIL males genotyped with the GigaMUGA array. Subsequently, a final QTL scan was conducted, incorporating all male animals (genotyped with both GigaMUGA and KASP methods).
Multiple testing correction was performed using the Bonferroni method 15 and the number of independent SNPs as determined by simpleM 16 , which was estimated as an m Eff of 849 using a window size of 820. P-values were converted to LOD scores using LOD = −log10(p-value). LOD scores exceeding 4.9 and 4.2 were considered highly significant (p < 0.01) and significant (p < 0.05), respectively. To establish the 95% confidence interval for a QTL, a 1.5 LOD drop from the top SNP position was applied 17 . For each week of body weight measurement, the start and end positions of this interval were defined as the positions of the first SNP upstream or downstream of the 1.5 LOD-drop confidence interval. The final QTL interval was defined by taking the smallest start and highest end point across all measured weeks. For this, the 1.5 LOD drop was calculated considering only the markers from the 200 males genotyped with the GigaMUGA array.
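The thresholds and the 1.5-LOD-drop interval follow directly from these definitions; a minimal Python sketch (the array names are hypothetical) reproduces the LOD cut-offs of 4.2 and 4.9 and outlines the confidence-interval step:

```python
import numpy as np

m_eff = 849                                   # effective number of independent SNPs (simpleM)
for alpha in (0.05, 0.01):
    lod_threshold = -np.log10(alpha / m_eff)  # Bonferroni-corrected threshold on the LOD scale
    print(alpha, round(lod_threshold, 1))     # -> 0.05: 4.2, 0.01: 4.9

def lod_drop_interval(positions_bp, lods, drop=1.5):
    """Confidence interval: markers (on the QTL chromosome) within `drop` LOD of the peak SNP."""
    keep = lods >= lods.max() - drop
    return positions_bp[keep].min(), positions_bp[keep].max()
```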
Detection of ChoRE motifs
Due to the diet-sensitive nature of the QTL on Chr 15, positional candidate genes were further scanned for the presence of carbohydrate-response elements (ChoRE). Based on previous research, it has been shown that the expression of genes with a ChoRE motif can be induced by glucose, adenosine-containing molecules, and other physiological cues 19 . To investigate the presence of ChoRE motifs, all genes present in the chromosome 15 QTL were determined using biomaRt. The R package GenomicFeatures and associated R data packages containing the MM10 mouse genome sequence and annotation (TxDb.Mmusculus.UCSC.mm10.ensGene) were used to extract 2000 bp upstream of the genes' transcription start sites. Two ChoRE motif weight matrices were defined based on the identified ChoRE-a CACGAG(N) 5 CACGAG and ChoRE-b CACACC(N) 5 CACGCG motifs determined by Yu and Luo 19 . Using the R function matchPWM (min.score = 90%, R package Biostrings), the 2000 bp upstream of each gene were scanned to identify the presence of the ChoRE-a and ChoRE-b motifs.
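As a rough illustration of this motif scan (a simple exact-match stand-in for the PWM scoring done with matchPWM; the function and sequence names are hypothetical), the two ChoRE patterns with their 5-bp spacer can be searched on both strands of a 2000-bp promoter sequence:

```python
import re

CHORE_A = re.compile(r"CACGAG[ACGT]{5}CACGAG")  # ChoRE-a with a 5-bp spacer
CHORE_B = re.compile(r"CACACC[ACGT]{5}CACGCG")  # ChoRE-b with a 5-bp spacer

def has_chore(promoter_seq):
    """Return True if either ChoRE motif occurs in the sequence or its reverse complement.
    Assumes an upper-case A/C/G/T string; a PWM scan with a 90% score cut-off is more permissive."""
    rc = promoter_seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]
    return any(p.search(s) for p in (CHORE_A, CHORE_B) for s in (promoter_seq, rc))
```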
QTL mapping
QTL mapping was performed for body weight of the AIL population with data collected once a week from week 3 until the end of the experiment at week 25. QTL analysis on the 200 selectively genotyped AIL males revealed significant loci on Chr 15 and 16. The follow-up QTL analysis after KASP genotyping, including all 397 males, confirmed the two QTLs and provided true estimates for the genetic effect size (Table 1).
In detail, a genome-wide significant QTL for body weight from week 9 to week 20 was mapped on Chr 15 between 68.46 and 81.40 Mb (Fig. 1A). The most significant SNP of this region was UNC25922623 at week 20 (77,362,610 bp; LOD = 7.81, Fig. 2 top). At this locus, the S1 allele increased body weight (Table 1). At the top SNP, homozygous S1 mice showed a 5.40 g (13.8%) higher body weight (44.41 ± 4.04 g) compared to homozygous S2 mice (39.01 ± 3.76 g) and a 3.24 g (7.9%) higher body weight compared to heterozygous mice (41.17 ± 4.47 g). Interestingly, this QTL was significant between weeks 9 and 20 (standard diet); then the diet was changed at week 20 (first dietary switch: high-fat, low-carbohydrate diet) and the LOD dropped below the significance level before it rose again, reaching significance again for weeks 24 and 25. Between weeks 20 and 22, homozygous S2 mice gained 1.96 g, whereas homozygous S1 mice gained only 0.75 g (heterozygous: 1.78 g). During the second dietary switch (high-fat, high-carbohydrate diet, weeks 23 to 25), homozygous S2 mice gained 5.15 g, whereas homozygous S1 mice gained 4.18 g (heterozygous: 4.46 g). This QTL region contains 199 protein-coding genes.
Another genome-wide significant QTL for body weight from week 6 to week 25 was mapped on Chr 16 between 3.89 and 22.79 Mb (Fig. 1B). The most significant SNP of this region was UNCHS041907 at week 18 (16,995,303 bp; LOD = 11.84, Fig. 2 bottom). At this locus, the S1 allele also increased body weight (Table 1). Homozygous S1 mice had an 11.8% higher body weight (40.45 ± 3.15 g) compared to homozygous S2 mice (36.19 ± 3.85 g) and a 6.0% higher body weight when compared to heterozygous mice (38.15 ± 3.39 g). This region contains 213 protein-coding genes.
Candidate genes in the QTL regions
Within the confidence interval of the significant QTL on Chr 15, 10,410 SNPs and 199 potential protein-coding candidate genes are located. Due to the close relatedness between the parental lines S1 and S2, only 165 of these genes contain polymorphic DNA variants. For further analysis, 1000 bp up- and downstream of the genes were also considered. For Chr 16, this region harbours 78 SNPs and 213 genes, of which 16 contain polymorphic DNA sequence variants. Mutations in these genes were scored for their potential functional effects on gene transcripts (missense mutations including SIFT score information, mutations in splice sites, UTRs, promoter, CTCF binding sites and enhancers), expression level of the encoded protein, and contribution to KEGG pathways as described in the decision tree by Delpero et al. 11 (Supplementary Table 2). None of the candidate genes carries a loss-of-function mutation. Nevertheless, different mutations influencing protein sequence or gene regulation occur.
Considering the QTL on Chr 15, Gpt (glutamic pyruvic transaminase, soluble, upstream of the top marker: 8.12 Mb), Cbx6 (chromobox 6, upstream of the top marker: 11.25 Mb), Apol6 (apolipoprotein L 6, upstream of the top marker: 8.47 Mb), and Apol8 (apolipoprotein L 8, upstream of the top marker: 9.18 Mb) ranked as top candidates (Table 2). Gpt was expressed at lower levels in S1 versus S2 mice in both gonadal adipose tissue (p = 1.23 × 10 −7 ) and liver (p = 2.79 × 10 −6 ). Furthermore, in S1, Gpt possesses a deleterious missense variant in a functional domain plus several variants in UTRs, CTCF binding and splice sites, and the promoter. The deleterious missense variant is caused by an amino acid exchange from isoleucine to methionine located at amino acid position 418 out of a total length of 496 amino acids. Cbx6, which is the second top candidate in the region on Chr 15, harbors a deleterious missense variant in a functional domain plus several variants in UTRs, CTCF binding and splice sites, and the promoter in S1 mice. The deleterious missense variant results in an amino acid exchange from tyrosine to cysteine located at amino acid position 124 out of 127 amino acids (transcript ENSMUST00000148358). No expression data for this gene was available using the Clariom S assay for mouse. The apolipoproteins Apol6 and Apol8 were ranked third. Apol6 was expressed at a lower level (p = 3.87 × 10 −6 ) in gonadal adipose tissue of S1 mice. In S1, Apol6 carries a deleterious missense variant in a functional domain plus several variants in UTRs, an enhancer and a splice site. The deleterious missense variant results in an amino acid exchange from glutamic acid to aspartic acid located at amino acid position 122 out of a total length of 329 amino acids. Furthermore, Apol6 carries a splice donor variant (rs239965506, 15:77045317_T/G) that is classified as high impact. Apol8 was not differentially expressed in adipose tissue or liver. Besides a deleterious missense variant in a functional domain, Apol8 in S1 mice harbours several variants in the UTRs, promoter, CTCF binding site and splice sites. The deleterious missense variant causes an amino acid substitution, changing aspartic acid to glycine at position 61 out of a total length of 78 amino acids (transcript ENSMUST00000229445). Five other genes in S1 mice were found to carry deleterious missense variants in the QTL on Chr 15: Recql4 (RecQ protein-like 4, substitution of alanine to valine at protein position 967 of 1216, upstream of the top marker: 8.13 Mb), Adgrb1 (adhesion G protein-coupled receptor B1, substitution of arginine to histidine at position 379 out of 1582, upstream of the top marker: 5.93 Mb), Fam135b (family with sequence similarity 135 member B, threonine to proline at position 487 of 1403, upstream of the top marker: 2.84 Mb), Fam227a (family with sequence similarity 227 member A, substitution of leucine to proline at position 80 of 115 (transcript ENSMUST00000191401), upstream of the top marker: 11.03 Mb), and Apol9a (apolipoprotein L 9a, substitution of valine to methionine at position 151 of 310, upstream of the top marker: 8.83 Mb). Due to the diet-responsive nature of the QTL on Chr 15, genes were further scanned for ChoRE motifs, which can induce gene expression in the presence of glucose 20 . Only one gene, Sun2 (Sad1 and UNC84 domain containing 2), carries a ChoRE-b motif (79,742,515-79,742,531 bp, motif on the reverse strand CACACTCGGCCACGCG). Depending on the respective transcript of Sun2, this motif is either in the 5'UTR (transcripts Sun2-201 and -202),
10-60 bp upstream (transcripts Sun2-203, -205, and -208) or more than 150 bp upstream (all other transcripts). In S1 mice, Sun2 harbors a tolerated SNP in a functional domain accompanied by several variants in the promoter, CTCF binding site and enhancer.
For the QTL on Chr 16, Trap1 (TNF receptor-associated protein 1, downstream of the top marker: 7.23 Mb), Rrn3 (RRN3 homolog, RNA polymerase I transcription factor, upstream of the top marker: 2.48 Mb) and Mapk1 (mitogen-activated protein kinase 1, upstream of the top marker: 5.68 Mb) ranked highest (Table 2). The top candidate genes Trap1 (p = 5.50 × 10 −5 ) and Rrn3 (p = 2.05 × 10 −6 ) were expressed at lower levels in gonadal adipose tissue, and Rrn3 was additionally significantly lower expressed in the liver (p = 1.28 × 10 −6 ) of S1 mice compared to S2. In contrast, Mapk1 had significantly higher expression in gonadal adipose tissue of S1 mice (p = 4.74 × 10 −2 ). S1 mice carry a tolerated missense variant in a functional domain of Trap1 and Rrn3. All three top candidates carry numerous SNPs in regulatory regions potentially contributing to expression differences.
Table 1. Mean body weight in g and LOD of body weight QTLs for the top SNP identified in the AIL (BFMI861-S1 × BFMI861-S2) in up to 397 mice between weeks 3 and 25. LOD ≥ 4.2 is significant and is highlighted in bold. The true top markers for weeks marked with an asterisk "*" are JAX00063853 (76,873,588 bp, LOD = 6.89) for the QTL on Chr 15 at week 12, and UNCHS041714 (11,120,784 bp) for the QTL on Chr 16 at week 24 (LOD = 7.10) and week 25 (LOD = 8.17).
Discussion and conclusion
To gain a deeper understanding of the differences in body weight in the two sublines of the Berlin Fat Mouse, BFMI861-S1 and BFMI861-S2, which show 96.4% genetic similarity 11 , we investigated an advanced intercross population of the initial cross between the BFMI861 mouse lines S1 and S2. Besides being genetically closely related, these two BFMI lines share the known juvenile obesity locus on Chr 3, which explains 40% of the overall variance in obesity in all BFMI lines 6 .
Performing QTL mapping on time series body weight data, we identified a QTL for body weight on Chr 15 which accounts for 9.2% of the variance in the AIL population in week 20 and another QTL for body weight on Chr 16 which explains 11.9% of the variance in week 18.
The Chr 15 locus influencing body weight has not been previously detected in BFMI mice. This QTL is genome-wide significant between weeks 9 and 20, is not significant afterwards, and rises again during the last two weeks of the experiment. Remarkably, the drop in LOD in weeks 21, 22, and 23 coincides with a change in the diet of the mice in week 20. Until week 20, the mice received a standard diet. During weeks 21 and 22 the mice were fed a diet with high fat but very low carbohydrate content, followed subsequently by a diet high in fat and carbohydrate content for the final three weeks of the experiment from week 23 on. Under the standard diet, homozygous S1 mice gain more weight than homozygous S2 mice. However, on the high-fat, low-carbohydrate diet this effect is the opposite. The final rise in the LOD indicates that the homozygous S1 animals catch up with the homozygous S2 animals in body weight gain again and re-establish the initial difference between the two genotypes. This LOD drop and rise could indicate different responses to the diet change depending on the genotype, especially during the first diet switch. The gene Sun2 on Chr 15 is an interesting candidate to implement this diet responsiveness via its ChoRE motif. Although no SNPs between S1 and S2 were located directly in the ChoRE motif, the gene encompasses many SNPs in the promoter, enhancer or CTCF region, potentially leading to an altered gene regulation or transcript variant mediating differences in diet responsiveness. Female mice with a homozygous Sun2 knockout show a significant decrease in lean body mass (https://www.mousephenotype.org). It can be speculated that the homozygous S1 mice are less flexible in substrate metabolization and probably need both fat and carbohydrates to increase body weight further. This is in line with the fact that inbred S1 mice are lipodystrophic with lower adipose tissue weight and elevated liver weight and liver fat content 10 . Therefore, this QTL likely contributes to the metabolic difference between the parental lines S1 and S2. The analysis of time-series body weight data allowed the mapping of QTLs which act during specific time periods only and that could be hidden at later age or under specific conditions such as dietary changes. Such developmental stage-dependent gene activity could play an important role in adult body weight variation. The QTL on Chr 16 had been mapped previously for body weight at the endpoint of week 25, at exactly the same position, in the same AIL population BFMI861-S1 × BFMI861-S2 11 . In the current study, we associated this QTL also with body weight at younger age, from week 6 until week 25. The highest LOD score in this study was 11.8 in week 18, compared to the previous study with 7.1 at 25 weeks, where the same AIL population was used 11 . This indicates, as postulated and shown in other studies 21 , that the effect varies over time and that QTLs can fade or remain undetected if single time points are investigated.
Table 2. Top candidate genes (Gpt, Cbx6, Apol6, Apol8, Trap1, Rrn3, Mapk1) after applying the prioritization criteria, including the type of mutation (e.g., deleterious missense variant in a functional domain). Bold indicates significant differences. The p-values are corrected according to Benjamini-Hochberg. FC, fold change; Not det., not determined.
The top candidate genes for the novel QTL on Chr 15 associated with body weight are Gpt, Cbx6, Apol6 and Apol8.
Gpt encodes alanine aminotransferase 1 (ALT1), which plays a crucial role in the transamination of amino acids, channeling them into gluconeogenesis and the urea cycle. A deleterious missense variant in a functional domain could result in impaired ALT1 function in S1 mice, potentially disrupting amino acid metabolism in various tissues, such as the liver and adipose tissue, which, in turn, might have repercussions for growth and body weight. Cbx6 (chromobox 6) is predicted to be involved in the regulation of transcription. Cbx6 knockout mice have an increased lean mass and a decreased blood glucose level (https://www.mousephenotype.org), pointing towards an involvement in growth and metabolism. A deleterious missense variant in a domain of Cbx6 in S1 mice could increase the body weight of S1 mice. The gene Apol6 encodes the lipid-binding protein APOL6 (apolipoprotein L6), which acts extracellularly as part of high-density lipoproteins and intracellularly by affecting lipid transport and binding to organelles 22 . Overexpression of Apol6 induces mitochondria-mediated apoptosis 23 . Further, APOL6 is described to be involved in the regulation of the differentiation of 3T3-L1 adipocytes 24 . Thereby, dysfunctional APOL6 and its reduced expression in adipose tissue of S1 mice could contribute to a malfunctional adipose tissue in these mice, resulting in the metabolically unhealthy phenotype with elevated liver weight of S1 mice. Apol8 is a metabolically less studied member of the apolipoprotein family, which was shown to be involved in neuronal differentiation 25 and to be differentially expressed in stretched myocytes 26 .
For the QTL on Chr 16, Trap1 (TNF Receptor Associated Protein 1) and Rrn3 (RNA polymerase I transcription factor homolog) were ranked as the top two candidate genes, which have been previously described in Delpero et al. 11 to be associated with final body weight at week 25. The protein TRAP1 is localized to the mitochondria and regulates metabolic reprogramming and mitochondrial apoptosis 27 . Furthermore, Trap1-knockout mice show reduced body weight 28 , indicating that altered Trap1 regulation could be involved in metabolic changes in S1 mice and thereby alter body weight development. Rrn3 is highly conserved between yeast and mammals 29 . In yeast, RRN3 is required for the transcription of rRNA by RNA polymerase I 30 , and in mammals its phosphorylation regulates ribosome biogenesis 31 . Thereby, sequence variation of Rrn3 could affect protein synthesis and result in body weight differences. Furthermore, Mapk1 (mitogen-activated protein kinase 1) is ranked next. MAPK1 is an extracellular signal-regulated kinase (ERK) and acts in a wide variety of cellular processes including proliferation, differentiation and transcription. MAPK1 is regulated by phosphorylation; different splice isoforms exist 32 , and sequence variation, e.g. in the 5'UTR, may result in differently spliced transcript variants. Consequently, sequence variation has the potential to modify protein function, potentially leading to changes in body weight.
While the juvenile obesity QTL on Chr 3 is responsible for juvenile obesity until about 8 weeks in all BFMI lines 6 , the QTL on Chr 16 in our current study affects the persistence of obesity in the BFMI861-S1 mouse line from week 6 until week 25. Further studies are needed to clarify whether and how these genes and their regulation could influence body weight gain.
In the current study, we could identify one novel QTL and one previously identified QTL for body weight in our population by using time series data. The identification of these two QTLs, which are significant over a wide range of ontogenetic development, helps us to unravel the genetic puzzle that is driving the higher body weight observed in the BFMI lines over time.
Body weight is influenced by many factors, including environmental as well as genetic factors. While some genes are known for their significant influence on body weight when they are dysfunctional (e.g., leptin, leptin receptor, MC4R, POMC), most genes have a small impact and many more are still undiscovered 2 . Genes with smaller effects are harder to track, in particular when their effect is time-dependent so that they mainly act during certain time periods such as puberty. Obesity is a complex trait driven by multiple genetic and environmental factors. While many environmental factors are well known, the contribution of many genetic factors and the interaction between genetic determinants and the environment is currently under intense investigation in the field of nutrigenetics 33 . Obesity is a disease that develops over time, and puberty and young adulthood are very sensitive phases for disease onset and progression 34 . QTL mapping under different conditions, such as time series data, genetic background and dietary condition, and the subsequent identification of genomic regions and candidate genes influencing obesity in both mice and humans are important to help understand the genetic contribution to this common complex human disease and its interaction with environmental factors.
Figure 2. Boxplots for 397 mice of the AIL (BFMI861-S1 x BFMI861-S2) in generation 10, aged 3 to 25 weeks, and curves depicting body weight development. For every time point, boxplots for all three genotype classes (S1, homozygous S1; H, heterozygous; S2, homozygous S2) are shown for SNP UNC25922623 located at the top position on Chr 15 (top) and for SNP UNCHS041907 located at the top position on Chr 16 (bottom). | 5,871.6 | 2024-03-14T00:00:00.000 | [
"Biology",
"Agricultural and Food Sciences"
] |
Prevention of allograft rejection in heart transplantation through concurrent gene silencing of TLR and Kinase signaling pathways
Toll-like receptors (TLRs) act as initiators and conductors responsible for both innate and adaptive immune responses in organ transplantation. The mammalian target of rapamycin (mTOR) is one of the most critical signaling kinases that affects broad aspects of cellular functions including metabolism, growth, and survival. Recipients (BALB/c) were treated with MyD88, TRIF and mTOR siRNA vectors 3 and 7 days prior to heart transplantation and 7, 14 and 21 days after transplantation. After siRNA treatment, recipients received a fully MHC-mismatched C57BL/6 heart. Treatment with mTOR siRNA significantly prolonged allograft survival in heart transplantation. Moreover, the combination of mTOR siRNA with MyD88 and TRIF siRNA further extended allograft survival. Flow cytometric analysis showed an upregulation of FoxP3 expression in spleen lymphocytes and, in splenic dendritic cells of MyD88, TRIF and mTOR siRNA-treated mice, a concurrent downregulation of CD40 and CD86 expression and an upregulation of PD-L1 expression. T cell exhaustion was significantly upregulated in T cells isolated from tolerant recipients. This study is the first demonstration of preventing immune rejection of allogeneic heart grafts through concurrent gene silencing of TLR and kinase signaling pathways, highlighting the therapeutic potential of siRNA in clinical transplantation.
In addition to the MyD88 and TRIF adaptor molecules, TLR agonists also activate the PI3K-Akt-mTOR pathway, another important downstream kinase cascade responsible for the modulation of TLR-induced proinflammatory and immune responses. Based on our previous study, in which the combination of rapamycin and MyD88/TRIF siRNA significantly prolonged heart graft survival, we postulate that inhibition of the PI3K-Akt-mTOR pathway may enhance immune suppression; concurrently inhibiting mTOR may synergize with MyD88/TRIF siRNA in tolerance induction in heart transplantation.
Specific silencing of genes using small interfering RNA (siRNA) is an advanced method of RNA interference that is more potent and specific in the knockdown of gene expression than conventional blocking methods 7 . Combined knockdown of TLR adaptor molecules and mTOR may reduce the innate and adaptive immune response to the allograft and thus prolong allograft survival. In this study, we administered MyD88, TRIF and mTOR siRNA expression vectors to the recipients to examine whether this could significantly prolong cardiac allograft survival.
Results
mTOR, MyD88 and TRIF gene silencing in vitro in DCs. TLR and mTOR act as important regulators of dendritic cell (DC) maturation and function and play crucial roles in modulating both the innate and adaptive immune systems 6,8,9 . To confirm siRNA gene silencing efficacy, we transfected cultured C57BL/6 mouse bone marrow DCs with siRNA specifically targeting the mTOR, MyD88 and TRIF genes. Forty-eight hours after transfection, the expression of the mTOR, MyD88 and TRIF genes was detected in the DCs by quantitative real-time RT-PCR (Fig. 1A). mTOR, MyD88 and TRIF gene expression was significantly knocked down, by 75-80%, when compared with DCs transfected with scrambled siRNA or untransfected negative control DCs (Fig. 1A). Therefore, we confirmed the gene silencing efficacy of siRNAs specifically targeting the mTOR, MyD88 and TRIF genes.
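The study does not state how the knockdown percentages were computed, but the standard 2^-ΔΔCt calculation for relative qRT-PCR quantification yields figures of this order; a small illustrative Python sketch (all Ct values are hypothetical):

```python
def knockdown_percent(ct_target_si, ct_ref_si, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method and the resulting % knockdown.
    Ct values are for the target gene and a reference gene in siRNA-treated
    versus scrambled-control DCs (illustrative numbers only)."""
    ddct = (ct_target_si - ct_ref_si) - (ct_target_ctrl - ct_ref_ctrl)
    return 100.0 * (1.0 - 2.0 ** (-ddct))

# e.g. a ddCt of 2 cycles corresponds to roughly 75% knockdown:
print(knockdown_percent(26.0, 18.0, 24.0, 18.0))  # 75.0
```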
Concurrent silencing of the TLR and mTOR pathways has a synergistic effect in reducing DC maturation and increasing negative regulator PD-L1 expression. TLRs on DCs identify specific structures of microorganisms (pathogen-associated molecular patterns, PAMPs), recruit the intracellular adaptors MyD88 and TRIF, and lead to DC maturation. We demonstrated that silencing both the MyD88 and TRIF genes resulted in reduced DC maturation 3 . It has been reported that rapamycin, an mTORC1 inhibitor, reduces DC co-stimulatory molecule expression and impairs DC function 10 . We therefore explored whether concurrent silencing of both the TLR and mTOR signaling pathways has a synergistic effect in reducing DC maturation. DCs were cultured from bone marrow progenitor cells and then transfected with mTOR siRNA alone, MyD88 and TRIF siRNA, or a mixture of mTOR, MyD88 and TRIF siRNA. DCs were transfected with scrambled siRNA as a control. Twenty-four hours after transfection, the transfected DCs were stimulated with LPS overnight. We tested the expression of the costimulatory molecules CD40 and CD86 on the DCs by flow cytometry in the different treatment groups. Control DCs that were transfected with scrambled siRNA highly expressed CD40 (94.6%) and CD86 (88.7%), suggesting that these DCs were mature (Fig. 1B). Compared to control DCs, transfection with MyD88 and TRIF siRNA or mTOR siRNA alone reduced CD40 (52.9%, 59.3% vs 94.6%) and CD86 (56.1%, 52.3% vs 88.7%) expression. Concurrent silencing of the MyD88, TRIF and mTOR genes had a synergistic effect, leading to a further reduction of CD40 (34.5%) and CD86 (48.0%) expression (Fig. 1B).
DCs act as professional antigen-presenting cells and provide positive or negative signals to regulate T cell function. Programmed death ligand 1 (PD-L1) expressed on DCs binds to programmed cell death protein 1 (PD-1) on T cells, negatively regulates T cell activity and results in a lack of T cell response to the antigen 11 . Rosborough et al. reported that Torin1-conditioned DCs, in which both mTORC1 and mTORC2 are blocked, expressed elevated levels of PD-L1 12 . We also found by flow cytometry that silencing mTOR in DCs significantly increased PD-L1 expression at the protein level compared with control siRNA and MyD88/TRIF silenced DCs (93.3% vs 61.9% and 78.6%). The combination of MyD88, TRIF and mTOR siRNA silencing had an additive effect on PD-L1 expression in DCs, and PD-L1 expression in the triple-silenced group was 95.1% (Fig. 2A). The results were also confirmed by real-time RT-PCR, in which PD-L1 expression increased 2.9- and 4.1-fold at the mRNA level in DCs silenced with mTOR siRNA alone or combined with MyD88/TRIF siRNA, compared with scrambled siRNA-treated DCs (Fig. 2B).
These data show that concurrent silencing of both the TLR and mTOR signaling pathways has a synergistic effect in reducing DC maturation and increasing negative regulator PD-L1 expression.
TLR and mTOR silenced DCs suppress allogeneic T cell proliferation and induce Treg generation. We next sought to determine the function of DCs after gene silencing of the TLR and mTOR signaling pathways, using a mixed lymphocyte reaction (MLR) to test the allogeneic T cell stimulatory ability of siRNA-treated DCs. DCs cultured from C57BL/6 mice were transfected with MyD88 and TRIF siRNA, or mTOR siRNA, alone or in combination, and were used as stimulators. DCs transfected with scrambled siRNA were used as controls. These DCs were cocultured with allogeneic T cells from BALB/c mice. The results demonstrated that, compared with scrambled siRNA-transfected DCs, DCs silenced with mTOR siRNA alone reduced the level of allogeneic T cell proliferation. Silencing both MyD88 and TRIF using siRNA significantly inhibited allogeneic T cell proliferation. Combined silencing of the TLR and mTOR pathways showed a synergistic effect in restraining allogeneic T cell proliferation (Fig. 3A).
We further explored the ability of siRNA-silenced DCs to induce Treg. Compared with scrambled siRNA-transfected DCs, mTOR siRNA alone and MyD88 plus TRIF siRNA-transfected DCs increased Treg induction: the percentage of the FoxP3+ CD25+ population in CD4+ cells was 14.5% and 13.4%, respectively, while in allogeneic T cells cocultured with scrambled siRNA-transfected DCs the percentage of FoxP3+ CD25+ cells was only 3.7%. Concurrent silencing of the MyD88, TRIF and mTOR genes in DCs had a synergistic effect on the induction of Treg, as 20.4% of CD4+ cells were FoxP3+ CD25+ (Fig. 3B). These results suggest that silencing both the TLR and mTOR signaling pathways in DCs significantly reduced their ability to stimulate allogeneic T cells and induced more Treg generation.
mTOR and TLR adaptor silenced DCs induce allogeneic T cell exhaustion. Co-stimulatory and co-inhibitory receptors play key roles in T cell activation or dysfunction 13,14 . T cell exhaustion is a state of T cell dysfunction, and exhausted T cells lose robust immune response functions 15,16 . PD-1, one of the T cell exhaustion markers, together with its ligands PD-L1 and PD-L2, constitutes one of the critical inhibitory pathways for inducing allograft tolerance in murine transplantation models 17 . As our results demonstrated that silencing DCs with mTOR siRNA significantly increased PD-L1 expression in DCs, we further explored whether mTOR siRNA-treated DCs induce allogeneic T cell exhaustion. DCs cultured from C57BL/6 mice were transfected with MyD88 siRNA and TRIF siRNA, or mTOR siRNA, alone or in combination, and were cocultured with allogeneic T cells from BALB/c mice for 5 days. DCs treated with scrambled siRNA were used as controls. The cells were collected and PD-1 expression was detected by flow cytometry. The results demonstrated that, compared with scrambled siRNA-transfected DCs, mTOR or MyD88 plus TRIF siRNA-transfected DCs increased PD-1 expression in cocultured allogeneic T cells (19.6% and 14.8% vs 11.4%). DCs treated with the siRNA combination further increased PD-1 expression to 29.0% (Fig. 4A).
T cell immunoglobulin and mucin domain-containing protein 3 (Tim-3) is also an inhibitory receptor expressed on the surface of exhausted T cells. A previous study showed that CD80/CD86 lo DCs promoted expression of both PD-1 and TIM-3 18 . Our results demonstrated that DCs treated with mTOR, MyD88 plus TRIF siRNA increased TIM-3 expression in allogeneic T cells after 5 days of coculture, compared with DCs transfected with scrambled siRNA. TIM-3 expression in allogeneic T cells increased to 19.5% when cocultured with DCs treated with the combined siRNAs (Fig. 4A). The results were also confirmed by real-time RT-PCR: the mRNA expression of PD-1 and TIM-3 in allogeneic T cells cocultured with combined siRNA-transfected DCs increased 3.0- and 4.5-fold, respectively, compared with T cells cocultured with scrambled siRNA-transfected DCs (Fig. 4B). Taken together, these results indicate that mTOR and TLR adaptor silenced DCs can induce allogeneic T cell exhaustion and may promote tolerance in transplantation.
Prevention of cardiac allograft rejection by silencing both TLR adaptor and mTOR genes with siRNA expression vectors. We previously reported that interruption of the TLR signaling pathway together with low-dose rapamycin can increase allograft survival 3 . Rapamycin is a potent mTORC1 inhibitor and acts as an immunosuppressant and anti-cancer agent 19 . Our in vitro results showed that concurrent silencing of the TLR and mTOR genes has a synergistic effect in reducing DC maturation and increasing PD-L1 expression (Figs 1B and 2A,B), and inhibits allogeneic T cell proliferation while promoting more Treg generation (Fig. 3A,B) and T cell exhaustion (Fig. 4A,B). We therefore hypothesized that blocking both the TLR and mTOR signaling pathways might induce long-term allograft survival. To test this, we treated BALB/c recipients with MyD88, TRIF and mTOR siRNA expression vectors before fully MHC-mismatched transplantation of C57BL/6 hearts was performed. In the control group of recipients treated with scrambled siRNA, the allografts survived only 5-8 days. Treatment with either MyD88 and TRIF siRNA or mTOR siRNA alone significantly prolonged cardiac allograft survival (36.7 ± 2.1 days and 39.2 ± 2.5 days) (Fig. 5). Furthermore, combined silencing of the MyD88, TRIF and mTOR genes further increased allograft survival (95.8 ± 4.6 days); 85.7% of recipients achieved acceptance of their allografts (Fig. 5).
Knockdown of TLR adaptor molecules and the mTOR signaling pathway induces more Treg generation in vivo. Treg plays a critical role in inducing and maintaining tolerance in organ transplantation 20 . TLRs are expressed on DCs and T cells, and they can modulate Treg generation directly or indirectly through DCs [21][22][23] . It has been reported that inhibition of mTOR promotes Treg generation [24][25][26] . We presumed that treatment of the recipients with MyD88, TRIF and mTOR siRNA expression vectors to prolong allograft survival may be accompanied by more Treg generation. To test this, we detected Treg in the spleen and lymph nodes (LN) of recipients given the different treatments. The results demonstrated that, compared with recipients treated with scrambled siRNA, recipients treated with mTOR siRNA alone or MyD88 plus TRIF siRNA had an increased percentage of CD4+ CD25+ FoxP3+ T cells in both the spleen and LN (Fig. 6A,B). Concurrent silencing of both TLR adaptors and the mTOR pathway had a synergistic effect, inducing significant Treg generation and long-term allograft survival (Fig. 6A,B).
DCs in tolerant recipients are immature and suppress allogeneic T cell proliferation.
In transplantation, DCs play a key role in directing the alloimmune response, and the state of the DCs determines whether the outcome is allograft tolerance or rejection 27 . To determine the state of DCs in recipients, CD40, CD86 and PD-L1 expression was examined. In rejecting recipients, there was high expression of CD40 (60.2%) and CD86 (63.0%), whereas in tolerant recipients CD40 (39.6%) and CD86 (41.1%) expression was significantly decreased. By contrast, tolerant recipients had a higher level of PD-L1 expression than rejecting recipients (67.2% vs 29.3%, Fig. 7A,B). DCs with low CD40 and CD86 and high PD-L1 expression induced antigen-specific Treg generation. We next tested the function of the splenic DCs. Splenic DCs from recipients with rejected allografts vigorously stimulated allogeneic T cell proliferation. In contrast, in tolerant recipients in which both TLR adaptors and the mTOR signaling pathway had been silenced, splenic DCs significantly inhibited allogeneic T cell proliferation in an MLR (Fig. 7C). These data suggest that concurrent silencing of both the TLR adaptor and mTOR signaling pathways generates more potent tolerogenic DCs, as they suppress the allogeneic T cell response and may provide the conditions to generate more antigen-specific Treg and induce immune tolerance.
T cell exhaustion was increased in tolerant recipients. We detected the T cell exhaustion markers PD-1 and TIM-3 in tolerant and rejecting recipients. At the endpoint of the experiment, splenic T cells were isolated from the recipients. Levels of PD-1 and TIM-3 were detected by quantitative real-time RT-PCR. Tolerant mice treated with both TLR adaptor and mTOR siRNA showed elevated levels of PD-1 and TIM-3 gene expression compared with rejecting recipients (Fig. 8).
Discussion
Induction of immune tolerance that results in permanent acceptance of allogeneic grafts without immune rejection is a lofty goal of transplantation. Many attempts have been made to generate transplant tolerance, and limited success has been achieved in animal models. In the past decade, we have developed multiple regimens of transplant tolerance to prevent graft rejection through immune modulation and gene silencing. A series of immune modulatory events can induce different states of T cell dysfunction, including tolerance, exhaustion, anergy, senescence, deletion and ignorance, leading to transplant tolerance 28 . These different types of T cell dysfunction can occur simultaneously in transplant tolerance 15 . On the other hand, activation of naive T cells is highly dependent on three signals between antigen-presenting cells (APC) and T cells: antigenic stimulation through the T cell receptor (TCR) and major histocompatibility complex II (MHC II) on the APC, costimulatory molecules, and inflammatory cytokines. Positive costimulatory pathways, including CD28:B7, CD40:CD154 and OX40:OX40L, promote complete T cell activation and the development of effector function 29 . When the positive costimulatory pathways are blocked, TCR signaling alone results in T cell dysfunction and prolonged allograft survival in transplantation 30 . T cell exhaustion was initially described as dysfunction of T cells during chronic infections and cancer. Induction of T cell exhaustion is a recently recognized mechanism of transplant tolerance which may contribute significantly to transplant survival 15,28,32 . Both extrinsic negative regulatory pathways (such as immunoregulatory cytokines) and cell-intrinsic negative regulatory pathways (such as PD-1) play key roles in T cell exhaustion 33 . Exhausted T cells are characterized by the expression of several transcription factors and inhibitory receptors (iRs), such as PD-1, TIM-3, BTLA, CTLA-4, lymphocyte-activation gene 3 (LAG3), 2B4, CD160 and killer cell lectin-like receptor subfamily G member 1 (KLRG1), contributing to their poor functional state [34][35][36][37][38] . In this study, we demonstrated that gene silencing of the TLR adaptors (MyD88/TRIF) and mTOR can delay immune rejection in heart transplantation in association with T cell exhaustion (Fig. 8), suggesting that T cell exhaustion may at least partially contribute to immune tolerance and that synergized immune modulation occurs through the interaction of TLR adaptor and kinase signaling pathways.
The prolonged and/or high expression of multiple upregulated iRs on exhausted T cells plays an important role in autoimmunity and transplant tolerance [39][40][41][42] . The PD-1 and PD-L1/PD-L2 pathway is a major and the best studied iR pathway involved in T cell exhaustion. PD-1 is a type I transmembrane receptor and a member of the immunoglobulin gene superfamily. It is also a member of the CTLA-4 family of T cell regulators and is expressed on the surface of T cells, B cells, macrophages and DCs. PD-1 interacts with two ligands, PD-L1 and PD-L2, which are expressed on APC and other immune cells. The PD-1 and PD-L1 interaction inhibits T cell activation and cytokine production. PD-L1 is a transmembrane protein and has been presumed to play a critical role in transplant tolerance and autoimmune disease. In murine transplantation models, administration of anti-PD-L1 antibodies or the lack of PD-L1 on donor tissue accelerated allograft rejection or abrogated tolerance induced by CTLA4Ig 43 . Rosborough et al. reported that, by contrast with rapamycin, inhibition of both mTORC1 and mTORC2 in DCs elevated PD-L1 expression 12 . In agreement with this notion, we used mTOR siRNA to transfect DCs and found that PD-L1 expression in mTOR-silenced DCs was increased at the protein and mRNA levels (Fig. 2A,B). A recent study showed that increasing donor antigen-specific T cell exhaustion may provide a novel strategy to prolong allograft survival and induce transplantation tolerance 44 .
Tim-3 is also expressed on exhausted T cells and may regulate alloimmune responses and significantly prolong allograft survival in heart and skin transplantation through the Tim-3:Galectin-9 inhibitory pathway 45,46 . Bauer et al. reported that activation of cytotoxic T lymphocytes (CTLs) by Treg-conditioned CD80/86 lo DCs increased expression of both TIM-3 and PD-1 18 . In our study, we knocked down TLR adaptor and mTOR gene expression in DCs, observed lower CD40 and CD86 expression, and found that these DCs promoted increased expression of both PD-1 and Tim-3 on T cells in vitro (Fig. 4A,B). In tolerant recipients, we also found that both PD-1 and Tim-3 expression was increased compared with rejecting recipients. These results further confirm that T cell exhaustion may be an important part of the mechanism of transplant tolerance and also imply the involvement of iR ligands in the immune tolerance induced by the blockade of mTOR in heart transplantation.
In our previous study, we demonstrated that silencing of the MyD88 and TRIF genes impairs DC maturation, inhibits allogeneic T cell proliferation and promotes Treg generation, and that combined treatment with rapamycin prolonged allograft survival in heart transplantation 3 . Rapamycin is a specific inhibitor of mTORC1. Studies have reported that mTORC1 is important for Th1 and Th17 differentiation, whereas mTORC2 is critical for Th2 differentiation [47][48][49] . Blocking both mTORC1 and mTORC2 not only inhibited the differentiation of Th1, Th2 and Th17 cells but also promoted more FoxP3+ Treg generation 12,50,51 . Compared with rapamycin, which selectively inhibits mTORC1, full mTOR inhibition suppressed effector T cell activation and promoted Treg generation, but did not affect the function and homeostasis of Treg 6 . mTOR inhibition suppresses IL-4 dependent mouse bone marrow DC maturation, and inhibition of both signaling complexes decreases positive costimulatory molecule expression in DCs 52 . It has been reported that a new generation of mTOR kinase inhibitors that block both mTORC1 and mTORC2 have a more potent immunosuppressive function and prolong allograft survival in rodent organ transplantation models 51,53,54 . In our study, we used siRNA targeting the mTOR gene to knock down mTOR, which is a component of both mTORC1 and mTORC2, and consequently inhibited not only mTORC1 but also mTORC2. From this aspect, mTOR siRNA appears more powerful than rapamycin for inducing tolerance in transplantation. We found that silencing the mTOR gene in DCs decreased CD40 and CD86 expression (Fig. 1B) and increased PD-L1 (Fig. 2A,B), and that combined knockdown of the TLR adaptor genes and mTOR had a synergistic effect in decreasing the expression of positive costimulatory molecules. Moreover, our results showed that there was more Treg generation in recipient mice treated with the mTOR siRNA vector than in scrambled siRNA vector-treated mice (Fig. 6A).
Many studies have revealed that long-term use of mTOR inhibitors, including rapamycin, produces numerous side effects such as oral mucositis, stomatitis, diarrhea, noninfectious pneumonitis, diabetes, nephrotoxicity, delayed graft function and gonadal toxicity [55][56][57][58][59][60] . Compared with immunosuppressive drugs, siRNA has been reported to have lower toxicity, which makes it suitable for clinical therapy 61 . Furthermore, our previous study demonstrated an inhibitory feedback loop between tolerogenic DCs and Treg cells in vitro and in vivo 62 . In this study, we administered siRNAs up to 3 weeks after transplantation to knock down the mTOR and TLR adaptor genes, resulting in the induction of tolerogenic DCs and Tregs. The generated tolerogenic DCs and Tregs formed a self-maintaining inhibitory loop and induced donor-specific immune tolerance, which obviates the long-term use of immunosuppression and minimizes the side effects of systemic immune inhibition. Nevertheless, future studies on the potential toxicity of siRNA are needed in order to translate this research finding into the clinic.
In conclusion, our study demonstrates that silencing of the TLR adaptor and mTOR genes impairs DC maturation, promotes Treg generation, increases PD-L1 expression and induces T cell exhaustion, thereby preventing immune rejection in heart transplantation. These results highlight the therapeutic potential of siRNA in clinical transplantation.
Methods and Materials
Mice and heterotopic cardiac transplantation. Male C57BL/6 (H-2b) and BALB/c (H-2d) mice (Charles River Canada, Saint-Constant, Canada) weighing 25 to 30 g were used as donors and recipients, respectively. All experiments in the study were performed in accordance with the guidelines established by the Canadian Council of Animal Care and were approved by the Animal Care Committee of the University of Western Ontario.
Recipients (BALB/c) were treated with MyD88, TRIF and mTOR siRNA expression vectors 7 and 3 days prior to heart transplantation and 7, 14 and 21 days after transplantation by hydrodynamic injection. Fifty micrograms of each of the MyD88, TRIF and mTOR siRNA vectors were diluted in 1.6 ml of PBS and rapidly injected into the mice through the tail vein within 5-7 s. Recipient BALB/c mice were subjected to intra-abdominal allogeneic cardiac transplantation using hearts from fully MHC-mismatched C57BL/6 mice according to our laboratory's routine procedure. Pulsation of the cardiac grafts was monitored daily by direct abdominal palpation in a double-blind manner to determine survival/rejection of the cardiac graft.
DCs culture and transfection. DCs were cultured from bone marrow progenitor cells as previously described 63 .
Briefly, bone marrow cells were flushed from the femurs and tibias of C57BL/6 mice, then washed and cultured in 6-well plates in medium supplemented with 10 ng/ml of recombinant granulocyte-macrophage colony-stimulating factor (GM-CSF) and recombinant mouse IL-4 (Peprotech, Rocky Hill, NJ, USA). All cultures were incubated at 37 °C in humidified 5% CO 2 . Non-adherent cells were removed on day 2 and fresh medium was added. The medium was changed every 2 days until day 6, when the cells were used for transfection. The DCs were transfected with siRNA using Lipofectamine 2000 (Life Technologies, Burlington). Twenty-four hours after transfection, lipopolysaccharide (LPS, 100 ng/ml) was added to the medium overnight, and the cells were then collected for further experiments.
MyD88, TRIF and mTOR siRNA and siRNA expression vector constructs. For in vitro studies, MyD88, TRIF and mTOR siRNAs were synthesized by Dharmacon (Ottawa, ON). The sequences of the MyD88 and TRIF siRNAs are as previously described 3 . mTOR siRNA was purchased from Cell Signaling Technology (Whitby, ON, cat#6332S).
For in vivo studies, the siRNA expression vector was constructed as previously described 64,65 . The oligonucleotides containing target-specific sense and anti-sense sequences of MyD88, TRIF and mTOR mRNAs were synthesized, annealed and inserted into the pRNAT H1.1 siRNA expression vector utilizing restriction enzyme sites at the end of the strands (Genscript, Piscataway, NJ) to express the siRNAs.
Mixed lymphocyte reaction (MLR).
For the in vitro MLR, T cells (2 × 10 5 /well) from naïve BALB/c mice were cocultured with DCs cultured and transfected from C57BL/6 mice at different DC:T cell ratios. For the in vivo MLR, splenic DCs were isolated from tolerant or rejecting recipients (BALB/c) using CD11c MACS beads (Miltenyi Biotec) and cocultured with T cells (2 × 10 5 /well) from C57BL/6 mice in 200 μl of complete RPMI 1640 medium (Life Technologies). Cells were cultured at 37 °C in a humidified atmosphere of 5% CO 2 for 3 days and pulsed with 1 μCi of [ 3 H] thymidine (PerkinElmer, Woodbridge, ON) for the last 18 h of culture. Cells were harvested and the incorporated radioactivity was quantified using a Wallac Betaplate liquid scintillation counter. Results were expressed as mean ± SEM cpm of triplicate cultures.
Statistical analysis. In this study, data are reported as the mean ± SEM. Allograft survival among experimental groups was compared using the log-rank test. Quantitative real-time PCR data were analyzed using one-way ANOVA or Student's t-test. Differences with P values less than 0.05 were considered significant. | 5,763.8 | 2016-09-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
Ranking methodology for Islamic banking sectors – modification of the conventional CAMELS method
The state of banking systems is an important issue. The purpose of this paper was to test whether the well-known CAMELS microeconomic methodology, generally used for ranking banks, is applicable to evaluating Islamic banking systems. The hypothesis was tested by implementing the method for a particular case: public, free data – from 2013 till the first quarter of 2018 – on Islamic banking systems from the “Islamic Financial Services Board” (IFSB) database. As expected, modifications were necessary: first, because of the lack of data (in Islamic databases, no data refer to the management (“M”)), and second, to avoid the subjectivity of the five-degree method and to achieve more sensitivity. Thus, a hundred-level (standardized) rating system was introduced – “CAELS 100”, where “100” refers to the number of levels. The other part of the methodology – creating a simple average of the (now 100-level) ratings of the raw indicators to get the letters of CA(M)ELS in the relevant period – remained unchanged. After the data cleaning, only six countries (Bahrain, Egypt, Kuwait, Oman, Turkey, and the United Arab Emirates) were able to participate in the analysis. The results showed that Egypt, Turkey and Kuwait were the best ones, respectively. Thus, it was concluded that this “CAELS 100” methodology is suitable for evaluating Islamic banking systems.
Acknowledgment. The research was supported by the project “Intelligent specialization program at Kaposvár University”, No. EFOP-3.6.1-16-2016-00007.
INTRODUCTION
The rating of banking systems is an important issue. There are techniques for that, but most of them were developed for conventional banks and have not been used to rate Islamic banking systems.
The interest-free banking system began about fifty years ago, when the first Islamic bank was founded in Dubai in 1975. This type of banking has spread widely to several non-Arabic countries (Pakistan, Malaysia, Indonesia, Turkey) and even to non-Islamic countries like the USA and the UK (Karapinar & Dogan, 2015). One of the largest markets for Islamic finance is Indonesia, where Bank Muamalat was established in 1992 and the government improved banking regulations.
After the 2008 financial crisis, more attention was paid to Islamic banking, as these banks had almost no 'toxic' assets because they run safer operations than conventional banks (Széles, 2015). The research question of this paper is whether the CAMELS microeconomic bank rating methodology is suitable for evaluating Islamic banking systems.
LITERATURE REVIEW AND ANALYSIS
The topic of Islamic banking is still poorly represented in the European literature. Similarly, in Islamic countries, there are few publications referring to "conventional" banking.
Islamic banks must operate under the Islamic principles of Shari'ah rules; paying interest is prohibited. According to Islam, money is just a simple instrument that has no value by itself; it is merely used to measure the value of things, as the principles of the Muslims' holy book, the "Holy Quran", and the "Sunnah" state. Islamic finance emphasizes partnership and cooperation. The institutions, firms, and tools base their operations on interest-free transactions and profit and loss sharing. The parties share the risks, returns, and losses. Tabash and Dhankar (2014) pointed to the double importance of Islamic banking that comes from its remarkable growth and stability during crises.
The Islamic banking sector is growing dynamically. The data from the free database of the Islamic Financial Services Board (IFSB) show that the growth rate between 2013 and 2018 was about 50% in terms of total assets (Table 1). A similar tendency can be found for other indicators in the same table.
CAMELS – the methodology intended to be used – was introduced in 1979 by the US banking supervisors to analyze the financial performance of banks. It was adopted by North American banking supervisors to assess the financial and managerial reliability of commercial lending institutions. There are several other techniques for analyzing banks' performance, but this is the most widespread one, according to the literature (Baka et al., 2012). It is "a useful tool to examine the safety and soundness of banks, and help mitigate the potential risks, which may lead to bank failures" (Dang, 2011, p. 2), even after the banking crisis (Dang, 2011, p. 16). CAMELS is a subjective grading method that uses six criteria; the acronym comes from Capital Adequacy, Asset Quality, Management, Earnings, Liquidity, and Sensitivity to Risk. This model assesses the overall condition of a bank, its strengths and weaknesses. The composite ranges of the CAMELS rating system consist of five groups (a small illustrative mapping of composite scores to ratings is sketched after the list below):
• Rating 1 (composite range 1-1.49): a strong position, working well in every respect, resistant to external economic and financial disturbances, no cause for supervisory concern.
• Rating 2 (composite range 1.5-2.49): a satisfactory position, stable and able to withstand business fluctuations well; supervisory concerns are limited to the extent that findings are corrected.
• Rating 3 (composite range 2.5-3.49): a fair position, with financial, operational, or compliance weaknesses ranging from moderately severe to unsatisfactory; the position easily deteriorates if actions are not effective in correcting the weaknesses.
• Rating 4 (composite range 3.5-4.49): a marginal position, with an immoderate volume of serious financial weaknesses; without correction there is a high potential for failure, and these conditions could develop further and impair future viability.
• Rating 5 (composite range 4.5-5.00): an unsatisfactory position, with a high immediate or near-term probability of failure; without immediate corrective actions, liquidation is likely.
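To make the grouping above concrete, here is a minimal, purely illustrative Python sketch (added for clarity; it is not part of the original CAMELS regulation). It assigns the rating group from a composite score computed as the simple average of the six component ratings; the bucket boundaries are exactly the composite ranges listed above, and the component values in the example are hypothetical.

```python
def camels_rating(composite: float) -> int:
    """Map a composite CAMELS score (1.00-5.00) to one of the five rating groups."""
    if not 1.0 <= composite <= 5.0:
        raise ValueError("composite score must lie between 1.00 and 5.00")
    if composite < 1.5:
        return 1  # strong
    if composite < 2.5:
        return 2  # satisfactory
    if composite < 3.5:
        return 3  # fair
    if composite < 4.5:
        return 4  # marginal
    return 5      # unsatisfactory

# Example with hypothetical component ratings C, A, M, E, L, S:
components = [2, 1, 2, 3, 2, 2]
composite = sum(components) / len(components)  # = 2.0
print(camels_rating(composite))                # -> 2 (satisfactory)
```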
The literature review showed that the CAMELS methodology can be implemented for ranking the Islamic banking sectors of countries.
HYPOTHESIS AND METHOD
The hypothesis of this paper is that the CAMELS method, after modification, fulfils the needs for evaluating Islamic banking systems based on data from the IFSB database.
The methodology for testing the hypothesis is the implementation of the (modified) method for a particular case. The test data come from the free, public IFSB (Islamic Financial Services Board) database, referring to the time period from 2013 till the first quarter of 2018. Rating the Islamic banking systems of the available countries will be a new by-product result.
Relationship between the CAMELS and IFSB indicators
CAMELS was invented for conventional banks, but the aim of this study is to investigate the systems of Islamic banks. A difficulty occurred because the indicators are not the same in these two banking systems. All indicators of the IFSB database will be presented, but only those that can participate in a CAMELS-type analysis are described in detail. The names of the indices remain original; thus, the numbering of the indicators in the analysis is not consecutive (Table 2).
The Capital Adequacy Ratio (C) measures the safety and stability of banks. The equity capital shows the financial situation of a bank and allows it to write off losses if something goes wrong. CAR determines the ability of a bank to meet its obligations on time and to cover other risks, such as credit risk. All core indicators correspond to equivalent IMF Financial Soundness Indicators (FSIs), except for the Net Profit Margin and the Cost-to-Income ratio, which are commonly used banking indicators (IFSB, 2019b).
In most countries, the calculation of the capital adequacy ratio is regulated according to the Basel (I, II and III) recommendations. According to Basel II, the capital covers three types of risk: credit risk (the risk of loss due to a counterparty defaulting on a contract), market risk (the risk of losses on on- and off-balance sheet positions arising from movements in market prices, interest rates, and exchange rates), and operational risk (the risk of the non-perfect operation of the banking system).
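For reference, the capital adequacy ratio regulated by the Basel recommendations can be written in its standard textbook form (added here only for clarity; the IFSB database reports the ratio directly):

```latex
\mathrm{CAR}=\frac{\text{Tier 1 capital}+\text{Tier 2 capital}}{\text{Risk-weighted assets (RWA)}}\times 100\%,
\qquad \text{with a Basel minimum of } 8\%.
```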
For Capital Adequacy, both indicators are directly proportional (denoted by "+" after the short name of the indicator).
Asset Quality (A) is the second area of a CAMELS analysis. Its main focus is lending quality. Lending activities are particularly important for banks, so it is essential to analyze the quality of assets in terms of a bank's successful operation and efficiency. Classified loans, especially non-performing loans (NPL), are the indicators mainly analyzed in conventional banks, while non-performing financing (NPF) is mainly analyzed in Islamic banks. The NPL ratio provides information about the level of non-performing loans in the total loan portfolio. Non-performing assets are usually bad debts that are in default or close to default.
All Asset Quality (A) indicators are inversely proportional -denoted by "-" after the short name of the indicator.
The evaluation of Management (M) for conventional banks is mainly based on the share price and the income-cost ratio of the relevant bank. Theoretically, it is possible to collect this information for every bank and include it in the analysis. However, at the level of countries' banking systems, this technique is impractical and unachievable due to the huge number of banks and the predicted lack of data.
In the IFSB database, there is no official indicator referring to Management. A possible methodological explanation is the following. The role of the management and the attitude of customers and owners – there are many government-owned or government-supported banks – is different in Islamic banks. The consideration of the share price (inter alia for religious reasons) is also different in Islam. Thus, the performance of management in the two banking systems is really incomparable.
Some information about the management is contained in other indicators, such as capital adequacy and earnings. Thus, even if the letter M is omitted, the ranking is based on the performance of the management as well.
Earnings (E): banks need to generate sufficient earnings to stay in the market for a longer period. The profitability indicators refer to management effectiveness. Return on Equity (ROE) shows how equity produces profit; it reflects the efficiency and profitability of a bank and how efficiently the bank uses its capital. Return on Assets (ROA) gives information about a bank's assets; ROA avoids the volatility of earnings linked with unusual items and measures the bank's profitability.
The Net profit margin is equal to a bank's total interest income minus total interest expenses. The cost-to-income ratio is calculated by dividing the operating expenses by the operating income generated, i.e. net interest income plus other income. The Earnings indices are directly proportional, except the cost-to-income ratio, which is inversely proportional, as the lower the cost-to-income index, the better the operational efficiency of the bank.
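Written out as formulas, the profitability indicators described above take their standard textbook form (these definitions are added only for clarity; they are consistent with, but not copied from, the IFSB documentation):

```latex
\mathrm{ROE}=\frac{\text{Net profit}}{\text{Total equity}},\qquad
\mathrm{ROA}=\frac{\text{Net profit}}{\text{Total assets}},\qquad
\text{Cost-to-income}=\frac{\text{Operating expenses}}{\text{Operating income}}.
```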
Liquidity (L) is the ability of a firm to convert its financial assets into cash rapidly or in quick succession. The indicators in the liquidity group answer the question of how well a bank can fulfill its short-term liabilities using its current assets. The liquidity indicator shows how fast a bank's financial instruments can be converted to cash without losses, i.e. to what extent the bank can meet its short-term liabilities with short-term assets. The higher the index value, the more liquid a bank can be considered. The liquidity rate was calculated using cash, central bank deposits, loans to other banks, and the sum of securities, compared to the balance sheet total. There is not enough information on the indicators of the new Basel III system, the LCR (liquidity coverage ratio) and the NSFR (net stable funding ratio), for the relevant period of 2013-2018. Both liquidity indicators are directly proportional, since the growth of liquidity means an upward trend.
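Based on the verbal definition above, the liquid assets ratio used here can be reconstructed as follows (an interpretation of that description, not an official IFSB formula):

```latex
\text{Liquid assets ratio}=\frac{\text{Cash}+\text{Central bank deposits}+\text{Loans to other banks}+\text{Securities}}{\text{Balance sheet total}}.
```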
Sensitivity to risk (S) covers the interest rate, operational and financial risks, such as changes in interest rates, foreign exchange rates, and prices, which affect a bank's earnings. Of course, in the case of Islamic banks, there is no interest rate risk. In the IFSB database, there are three indicators for this field: CP17 refers to the Net foreign exchange open position to capital (see below), CP18 to the "Large exposures to capital", and CP19 to the Growth of financing to the private sector. Due to the lack of data, the last two had to be deleted and only CP17 remained. This index is inversely proportional, since the lower it is, the better the bank's position.
CAELS 100 new methodology
At the beginning of the study, it became obvious that the original CAMELS methodology had to be modified and somewhat improved, for the following reasons. First, there were no data on the performance of the management in the IFSB database, so the "letter M" had to be left out, as was explained earlier. The second reason was the lack of sensitivity, and the third one was that the method allows subjectivity. The last reason is a well-known property of the CAMELS analysis, which might be an advantage if the evaluator wants to add some subjectivity, but a disadvantage when objectivity is the target of the research. Given the proportionality of the criteria, a hundred-level evaluation was introduced, which can be considered a ratio (percentage, %) or a standardization. It avoids subjectivity and solves the lack-of-sensitivity problem. If a variable is directly proportional, the maximum gets 100 and the minimum 0, and vice versa – if it is inversely proportional, the minimum gets 100 and the maximum 0; the composite indicators were then created as the simple mathematical average of these standardized values. To handle the remaining lack-of-data situations, the "values of the letters" were created using an adaptive average technique: the available data were used and the missing ones were left out of the average construction – a simple mathematical average without any weights. For example, letter C (for Capital) is the simple mathematical average of the CP01, CP02, and CP03 indicators (later, the indicator CP03 was left out because of the lack of data).
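The scoring step described in this paragraph can be summarized in a short Python sketch (a minimal illustration of the procedure under the stated rules; the indicator codes, sample values and letter grouping below are hypothetical placeholders rather than the actual IFSB columns):

```python
import pandas as pd

# Hypothetical raw indicator values: rows = countries, columns = indicators.
raw = pd.DataFrame(
    {"CP01": [17.0, 14.0, 18.5], "CP04": [6.0, 9.5, 4.0], "CP07": [1.2, 0.8, 1.5]},
    index=["Country A", "Country B", "Country C"],
)
direction = {"CP01": +1, "CP04": -1, "CP07": +1}        # +1 directly, -1 inversely proportional
letters = {"C": ["CP01"], "A": ["CP04"], "E": ["CP07"]}  # illustrative letter grouping

def standardize(col: pd.Series, sign: int) -> pd.Series:
    """Min-max scale a column to 0-100; reverse the scale for inversely
    proportional indicators. Assumes the column is not constant (max > min)."""
    scaled = 100 * (col - col.min()) / (col.max() - col.min())
    return scaled if sign > 0 else 100 - scaled

scores = pd.DataFrame({c: standardize(raw[c], direction[c]) for c in raw.columns})

# Adaptive (missing-aware) simple averages: per-letter averages, then the overall score.
letter_scores = pd.DataFrame(
    {k: scores[cols].mean(axis=1, skipna=True) for k, cols in letters.items()}
)
caels = letter_scores.mean(axis=1, skipna=True)

print(caels.sort_values(ascending=False))  # ranking of countries by CAELS 100 score
```

In the actual analysis, the same skipna averaging is what handles the country-specific data gaps listed later (for example, the missing sensitivity indicator for Bahrain).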
After having the CAELS average, the ranking of countries can be created, since it is part of the original CAMELS methodology.
The free-of-charge database of the IFSB (Islamic Financial Services Board) was used for the investigation. It is an available, comprehensive, systematic collection of Islamic banking data. The focus of the study was only on the countries with Islamic banking systems, not on Islamic windows. Fifteen countries were involved in the analysis: Bahrain, Brunei, Egypt, Indonesia, Iran, Jordan, Kuwait, Lebanon, Malaysia, Nigeria, Oman, Pakistan, Sudan, Turkey, and the United Arab Emirates. Due to the lack of data, it was necessary to delete not only some main or sub-indicators, but also some countries.
The database had 19 available indicators. The time series started with the average for 2013 and continued quarterly until the first quarter of 2018, which amounted to 18 time-series data points.
The original three-dimensional data cube contained 19 × 15 × 18 data points (19 indicators, for 15 countries, for 18 time periods). Two of them – CP11 (Capital to assets (balance sheet definition), Tier 1 capital, Total assets) and CP12 (Leverage (regulatory definition), Tier 1 capital, Exposure) – related to the leverage of the banking system and were not part of the CAMELS methodology, so they were omitted first. Five additional indicators had to be omitted because a huge amount of their data was missing:
• indicator CP03 "Common Equity Tier 1 (CET1) capital to RWA";
• indicator CP15 "Liquidity coverage ratio (LCR)";
• indicator CP16 "Net stable funding ratio (NSFR)";
• indicator CP18 "Large exposures to capital";
• indicator CP19 "Growth of financing to the private sector".
The LCR and NSFR have only recently been required as indicators by the Basel III system, so it is not surprising that there were no data for them.
In the raw data tables of the CAELS 100 analysis (Tables A1-A6 in the Appendix), the names of the indicators remained the same as used in the IFSB database. Thus, one can easily relate them to the original IFSB data columns.
Fortunately, the withdrawal of these five indicators from the analysis did not cause significant difficulty, as the technique of creating the "letter average" only from the available indicators was applied. In every group, at least one sub-indicator remained.
After omitting the indicators and countries, there were still some particular missing data, which are listed below together with the methodology for processing them. The cleaned data are in Tables A1-A6 of the Appendix. The CAELS raw data of the six countries are listed with the original numbering from the IFSB database, together with the proportionality of each variable ("+" or "-" for directly or inversely proportional).
The CAELS averages were added as a new column (remark: as only one variable refers to sensitivity, it is titled "S average" as well, instead of duplicating the column). In the proportionality row of the average columns, "N.A." is written, as the proportionality was not applied to these variables. If all of the raw variables taking part in a certain average have the same proportionality, the relevant sign ("+" or "-") appears in brackets, just as information not used for anything. The cells of the missing data remained empty, and the averages were created without them. The details of the implemented techniques are given below:
• In the case of Bahrain, there were no data available for the Net foreign exchange open position to capital in the Sensitivity to Risk group. This was handled by creating the CAELS average from 5 criteria instead of 6.
• Also for Bahrain, the entire column of CP14 "Liquid assets to short-term liabilities" was practically absent. For such cases, the CP13 Liquid assets ratio formed the average of "L". For the period 2017Q4, the situation changed: there were data for CP14, but CP13 was missing. The creation of "L" was always consistent with data availability.
• In the case of Egypt, there are no data for CP05 "Net non-performing financing (net NPF) to capital", so the average of the two remaining asset quality indicators had to be created.
• In the case of Egypt, CP14 "Liquid assets to short-term liabilities" has no values, so the average for "L" was created based on CP13.
• For Oman, six data points were missing in column CP06. For these time periods, the average was generated without these values.
RESULTS
Time averages of the CAELS 100 indices are presented in Table 3. Based on them, a ranking of countries can be compiled; it is given in the last column.
Looking at the capital adequacy (C) values in Table 3, the extremely high score of Oman stands out. Although the stability of a banking system is an important issue, this extremely high value reflects very low risk-taking by banks in Oman. Looking beyond this fact, an investigation of the original time-series data of the indicators (CAR and Tier 1 capital to RWA) and a graph of them are needed (Figure 1). It was found that the original indicators were extremely high at the beginning of the period (81%) and later reached a value of 15%, which is a common value in other countries.
The maximum values for the other countries are 22% and 21%, and the minimum values are 11% and 7%, for CAR and Tier 1 capital to RWA, respectively. The variance of these indicators for Oman is bigger than 22%, while for the others it is smaller than 2%. To sum up, the high average for Oman is due to the high values in the past; by now, the country has reduced its CAR and Tier 1 indicators to the general level for the region. In contrast to the capital adequacy ratio, there is significant data dispersion in the asset quality, profitability and liquidity indicators. In terms of asset quality, Oman also plays a leading role with the best value (89.25%). It is followed by Egypt and Kuwait, with scores of about 70%, while the remaining three countries are below 58% (remark: these values are scores on the 100-level rating system, not the original values of the indices).
With regard to profitability, Egypt is first and Oman is last. This is not surprising, since Oman operated in the most risk-avoiding way, being more than secure enough; thus, the country has the least profitable banking system. The performance of the other four countries is very close, from 53% to 55%; they produced almost the same relative profitability.
As for liquidity, Egypt leads the field with 53.28%, Turkey follows with 35.05%, and Kuwait, Oman, and Bahrain are in the middle with 16%-21.5%. The worst liquidity situation is in the United Arab Emirates, with a relative value of 7.1%.
For the last value, which refers to Sensitivity, a two-fold situation occurred. While the United Arab Emirates (34.6%) and Oman (53.2%) are at the bottom of the ranking, the other four countries have rather high scores, in the range of 88%-94%. Figure 2 shows the CAELS-based performance of the countries of the Islamic banking sector (there is no value for the variable "S" for Bahrain, so its line ends at the letter L). Figure 2 shows more clearly how close the countries' scores are to each other – even with the more sensitive CAELS 100 indices. This fact forms the basis of the grouping.
DISCUSSION
The final result of the new CAELS 100 method can be seen in Table 3. In the second-to-last column, one can find the standardized average for the period 2013-2018, on the basis of which the countries are ranked. Based on this result, four groups can be created, as some of the scores are very close to each other, which means that the performance of those banking systems is nearly at the same level: • Egypt entered the first "group", ranking first with an average relative score of 58.22%.
Egypt is first in three variables: "A", "E", and "L". It ranks second in terms of "S" (Sensitivity to risk), just one relative point apart from Turkey. But Egypt is last in the indicator "C" (Capital adequacy), which indicates stability or risk-taking by banks. It can be stated that Egyptian banks take risks and cope with them successfully, given this time period. Their success is evidenced by the high values of the other indicators.
Perhaps a more detailed investigation into the reasons for each Islamic banking system's particular performance could be carried out but, apart from the page limit, the authors do not consider themselves empowered to analyze the detailed banking and economic policies of these countries.
In summary, it can be said that the hypothesis – that CAMELS can be used to rank the Islamic banking systems of countries – can be accepted, with the remark that methodological modification is needed, for example, deleting the "letter M" that refers to management and creating a 100-level evaluation.
CONCLUSION
Evaluation and comparison of banking systems is an important issue not only for conventional but also for Islamic banks. In the banking analysis literature, the use of the microeconomic CAMELS methodology is very common for evaluating banks. In this paper, it was not used in that way; instead, CAMELS was implemented at the macro level for the aggregated indices of countries with Islamic banking systems. This idea, together with the hundred-level evaluation and the interpretation of the management indicator, makes this publication a novelty.
Hypothesis testing was based on free-access IFSB data, which contain aggregated data on the Islamic banking sectors of countries.
The conclusion from this study is that CAMELS – after some modification – can be applied to rank Islamic banking sectors. The modified technique can be called "CAELS 100", because the letter "M", the indicator referring to management, had to be deleted, since there were no data for it in the IFSB database. The name "100" refers to the number of grading levels, which makes the method much more sensitive than the five grades of the original CAMELS methodology. These were the novelties in the methodology.
As an additional conclusion of this study, a ranking of selected Islamic banking systems was compiled. The selection was based on data availability.
Egypt has the best Islamic banking system; the medium level comprises Turkey, Kuwait, and Oman; and the weakest are the United Arab Emirates and Bahrain. These groups were created because the indices – despite the more sensitive methodology – were very close to each other. The ranking of the Islamic banking sectors of these countries for the period 2013-2018 is also a novelty of this publication.
"Economics",
"Business"
] |
A Statistical Study of the Forestry in Ukraine
The article is devoted to the analysis of the forestry in Ukraine as the reference point for further development of the framework for constructing the national forest account, allowing for a description of interactions between economic activities and forests as a natural environment, and for consistent and comprehensive integration of environmental and economic problems in this field. The study covers the existing statistical definitions, classifications and the available statistical information about forests, selected forestry indicators for Ukraine, and the existing sources of data for the analysis of forestry, and proposes the necessary steps for further applications of forest accounting tools in order to construct the forest account. It is pointed out that the forestry is represented by two large groups of institutional units: physical persons or groups of physical persons in the form of households; and legal entities, established and operated in keeping with the law, irrespective of what persons or entities may be their owners or managers. The main categories of legal entities are corporations, non-commercial organizations, and public administration bodies. It is determined that the main sources of data about the forest fund and forest resources of Ukraine are as follows: (i) statistical information based on the data from enterprises, obtained from official statistical observations of the State Statistics Service of Ukraine; (ii) administrative data based on the data from enterprises, obtained by public administration bodies (the State Service of Ukraine on Geodesy, Cartography and Cadastre, the State Agency of Forest Resources of Ukraine, the State Customs Service, the State Taxation Service) as part of their functional responsibilities; (iii) the data of the national inventory of forests, obtained by the authorized bodies. The latest official data of the national forest inventory for Ukraine are available as of January 01, 2011, but these data have not yet been published in a proper manner. It is demonstrated that the official statistics cover a limited set of statistical data about the forestry due to institutional constraints. A dynamic and structural analysis of the forest lands is explored, outlining the main problems related to improving methodological approaches to the formation of the forestry statistics. The analysis allowed for determining the main areas of improvement in forestry accounting and for proposing the necessary steps to solve the problems of statistical studies of this industry.
Importance of the research theme.
Ukraine is a country with one of the largest forest areas in Europe and with old traditions of forestry. Due to their diverse structure and intensive natural rehabilitation, forests are a category of resources that are the key to the future development of this country. Ukraine, like all other European countries with rich forests, faces tremendous challenges related to preparing forests for future climate change, on the one hand, and has specific problems of developing the forestry and wood industry in a sustainable and efficient way, on the other. In the era of limited resources, wood is becoming a raw material of primary importance. Logging and sales of roundwood, i.e. operational and fuel wood, allow for stable profits. However, when raw wood is processed, its price can increase several fold and become a main source for the rapid development of the domestic wood industry and alternative energy generation.
The Association Agreement signed by the EU and Ukraine in 2014 opened up a new phase in the development of contractual relations between the EU and Ukraine, aimed at political association and economic integration. The association offers a step forward on the way to EU accession. According to Article 355 of the Association Agreement, to be harmonized with European norms and standards, the national statistical system has to rely upon the UN Fundamental Principles of Official Statistics, with consideration of the EU acquis in the field of statistics, in particular the European Statistics Code of Practice [1]. The acquis in the field of statistics are set forth in the annually updated compendium of statistical requirements, considered by the Parties as Annex XXIX to the Agreement. The latest available version of this compendium can be found on the website of the Statistical Office of the EU (Eurostat) [2].
The description of section 3.1.1 "Forest Statistics and Accounts" of the above-mentioned compendium shows that the countries of the EU and the European Free Trade Association supply annual data on the output and trade of wood and wood products on the basis of the Joint Forest Sector Questionnaire (JFSQ) [3], which provides the guidelines for the United Nations Economic Commission for Europe, the Food and Agriculture Organization, Eurostat and the International Tropical Timber Organization (ITTO) at the global scale. The economic data on forestry and logging are collected by means of another questionnaire: the European Forest Accounts, as part of the Integrated Environmental and Economic Accounting for Forests (IEEAF) [4].
A review of the current forest law in the EU shows that Ukraine needs to considerably expand its regulatory effort in harmonizing its legislation to meet the requirements of EU regulatory acts in all the thematic areas of the new EU Forest Strategy [5]. At the same time, the set of measures on the adaptation of the environmental and forest statistics related to forestry, which is fixed by the Association Agreement, is a necessary set of urgent actions to achieve the conformity of the national forestry law with the respective EU framework. Hence, to have the provisions of the Agreement implemented, Ukraine needs to meet, as soon as possible, the requirements for collection and aggregation of the data on Ukrainian wood exports in keeping with the European questionnaires, and to use European approaches to forest accounting.
Forestry practices in Ukraine are based on principles and methods established 30-40 years ago or even earlier. Given the cardinal change in the external conditions (political system, policy, economy, and climate) and the emergence of new knowledge and technologies, a large part of the existing practices have become obsolete and hinder development. The traditional conservatism of the forestry industry and the lack of necessary competencies among persons responsible for decision-making make the forestry unreceptive to new approaches in all the functional areas. A statistical study of the forestry in Ukraine will be conducive to the dissemination of knowledge about the European assessment in the field, and will help systematize the data required for constructing the forest account of Ukraine.
Literature review. As natural systems, forests are located at the crossroads of many environmental and economic problems, including climate change, loss of biodiversity, soil erosion, water stress and the stability of highland areas. The products that can be derived from forests are quite diverse and capable of meeting a wide range of needs, including food, industry, dwelling and energy, but the assessment of the forestry impact on economic development requires an appropriate statistical base. At the same time, the forest sector of Ukraine, which covers forestry, sawmilling, the pulp and paper industry and bio-energy, still remains inadequately explored due to a number of problems caused by the lack of complete and reliable official statistical information on forests and forestry performance.
The issues of forestry, forestry studies, forest planting and cultivation, forest melioration, monitoring in this field, radiology, selection of wood species, etc. have been investigated by Ukrainian researchers such as I. Buksha, M. Hordiienko, V. Krasnov, O. Mihunova, V. Pasternak and others. Methodological, organizational and practical aspects of the development and formation of the environmental statistics components have been dealt with by V. Danylko, A. Yerina, O. Osaulenko, N. Parfentseva and others. However, a large part of the issues concerned with the integrated environmental and economic accounting of forests and its development prospects in Ukraine have been out of focus and call for further elaboration, especially through the prism of the statistical monitoring of these processes. Studies attempting to find solutions for these issues are becoming very significant for the current official statistics of Ukraine and the overall development of statistical science and practice, because they largely determine the quality of information support for management at each level.
The article's objective is to monitor the main forestry indicators in Ukraine, to outline the problems of accounting and statistical studies of forests, and to find their solutions.
Results. The organizing structure of the subsection "Forestry" is a difficult issue. Currently, the normative and legal regulation of the forestry is the responsibility of the Ministry for Protection of Environment and Natural Resources of Ukraine. The relations concerned with the forestry involve public administration bodies and local self-government bodies, legal entities and citizens [6]. The economic competence of public administration bodies and local self-government bodies is implemented by respective public or communal departments (legal entities) [7].
The forestry is represented by two large groups of institutional units: physical persons or groups of physical persons in the form of households; and legal entities, established and operated in keeping with the law, irrespective of what persons or entities may be their owners or managers. The main categories of legal entities are corporations, non-commercial organizations, and public administration bodies.
More than 90% of the legal entities engaged in the forestry belong to the sector of corporations, including non-financial corporations created specifically for the commercial production of goods and services that are sources of income or other financial benefit for their owners. A part of the legal entities engaged in the forestry belong to the general government, which includes ministries, public services, agencies, inspections, public committees, public administration bodies with special status, and regional or local public administration bodies. The Classification of Institutional Sectors of the Economy states: "The main distinction between legal entities of corporations and public administration bodies stems from the distinctions in the objectives for which the production is performed. Corporations make products for the market and seek to sell them at economically significant (market) prices. Public administration bodies organize and finance the supply of goods and services to selected households and to the community on the whole, and they incur costs on the final consumption. The products made in this sector are usually provided either free of charge or at prices that are set on the basis of decisions not related to market mechanisms. The activities of non-commercial organizations are aimed at the achievement of economic, social or other results in the field of forestry, without the receipt of profit to be further distributed between the parties involved" [8].
The main sources of data on the forest fund and forest resources of Ukraine are as follows: 1) statistical information based on data from enterprises, obtained through official statistical observations of the State Statistics Service of Ukraine; 2) administrative data based on data from enterprises, obtained by public administration bodies (the State Service of Ukraine on Geodesy, Cartography and Cadastre, the State Agency of Forest Resources of Ukraine, the State Customs Service, the State Taxation Service) as part of their functional responsibilities; 3) data from the national forest inventory, obtained by the authorized bodies. The latest official national forest inventory data for Ukraine are available as of January 1, 2011, but these data have not yet been properly published.
The forest statistics of Ukraine are formed on the basis of the above-mentioned data sources. The resulting information can be used for analyses and further computations, as well as in publications of public administration bodies and local self-government bodies, by business circles (institutions, enterprises, organizations), scientists and researchers, mass media, citizens and international organizations such as the United Nations Food and Agriculture Organization (FAO), whose website displays the global assessment of forest resources of various countries, including Ukraine, for 2015.
The share of forests and other forest lands in Ukraine ranges from 15 to 18% of the total country area. On average, there are 0.2 hectares of forest and other forest lands per country resident. According to the State Statistics Service of Ukraine, there are no large business entities in the forestry of Ukraine. Nearly 80% of the staff are employed in medium-sized business entities, and the rest work in small businesses. Employment in forestry businesses fell by nearly 9% in 2018 compared with 2010, to 65.5 thousand persons. The overwhelming majority of employees in forestry (more than 70%) are engaged in logging. However, sales of goods and services by business entities in forestry increased nearly fourfold in 2018 compared with 2010, amounting to 22.6 billion UAH [9].
Capital investment in forestry and logging in Ukraine grew from 177.8 to 980.3 million UAH over 2010-2018 (Figure 1, constructed from data in [10]). Unfortunately, capital investment fell to 548.7 million UAH in 2019. Meanwhile, more than 80% of capital investment in forestry and logging comes from the internal funds of business enterprises and organizations. The share of capital investment in the forestry of Ukraine is minor, making up 0.1% of total capital investment.
A core indicator of the sustainable development of forests across the world, and the main parameter of the maintenance of forest lands, is the forest area. It is designed to reflect positive and negative change, to identify areas of deforestation and to review regional patterns of change.
According to Article 13 of the Constitution of Ukraine, land, its subsoil, air, water and other natural resources located within the boundaries of Ukraine's territory, and the natural resources of its continental shelf and exclusive (marine) economic zone, are subject to the ownership right of the Ukrainian people [11]. The forests of Ukraine can be in public, communal or private ownership.
According to international definitions, the main distinguishing line is drawn between forests in public and in private ownership. For data consistency at the international level, the forest area and forest lands in Ukraine are divided into two groups by ownership form: those in public (including communal) ownership and those in private ownership. While the right of public (including communal) ownership of forests can be gained and exercised by the government without limitations on their area, limitations on private forests are not applied only in the case of degraded or unproductive lands, whereas the forests created as part of rural households, farms or other entities have an upper limit of 5 hectares. In times of socialism, private ownership of forests was ignored by national forest policy leaders. Compared with the rather intensive management of public forests, private forests were neglected by forest policy managers and private owners. The data of land books and cadastres are not entirely accurate, as part of the information was destroyed in times of extreme circumstances. Forest lands are for the most part subject to family inheritance, but in many cases the procedure for formal transfer of the property right has not been officially completed. More often than not, land cannot be appropriated by individual physical persons, because the procedure of land division is expensive and labour-consuming, whereas the potential benefit of one owner from his inherited part of the land tends to be lower than the costs. Therefore, in many cases a forest asset is owned on a shared basis by a group of people (usually members of one family), who know the local boundaries and use the land mostly sporadically and for their own needs. Fuel wood for household needs is the prevailing end use of private forests, and only 20% of private owners are market-oriented, selling either firewood or wooden boards.
According to the data of the State Agency of Forest Resources of Ukraine, the share of forests in private ownership is less than 0.1% of the total forest area, the rest being public (including communal) forests [12]. However, the share of private forests has been slightly increasing over the last 20 years. This is caused, by and large, by the overall tendency towards spontaneous afforestation of abandoned agricultural lands in rural areas.
In the Central Framework of the Environmental-Economic Accounting, adopted by the United Nations Statistical Commission in 2012 as an international statistical standard, forests are considered a form of land cover, with forestry representing a category of land use [13].
According to Appendix 3 to the Procedure for maintaining the State Land Cadastre, approved by the Decree of the Cabinet of Ministers of Ukraine of 17.10.2012 No 1051 "Approval of the Procedure for the State Land Cadastre" (with amendments), agricultural purpose is one of the main purposes of lands recorded in the land cadastre, which are further subdivided by purpose, as shown in Table 1. In international practice, the definition of forest lands as a classification category of land use is almost fully harmonized between Eurostat, FAO, OECD, and UN ECE. Forests as part of forest lands are separated from other forest lands on the basis of various parameters, such as the percentage of tree crown cover, the minimal area, etc. A forest is defined as land with tree crown coverage of more than 10% and an area larger than 0.5 hectares; upon maturity, the trees on the place of growing are expected to be able to reach a minimal height of 5 meters. Other forest lands are defined as lands with 5 to 10% tree crown coverage, capable of reaching a height of 5 meters on the place of growing upon maturity, or with more than 10% tree crown coverage but not capable of reaching a height of 4 meters upon maturity (such as dwarf trees), as well as zones covered by shrubs or bushes on an area of more than 0.5 hectares and with a width of more than 20 meters. The area of forests and other forest lands, as defined in international forestry statistics, does not cover all land with trees. This implies that some "land with trees" is to be excluded from the land-use category "forests and other forest lands". Areas important as recreation places for city residents (municipal parks or gardens) or as ecosystems (scattered trees, etc.) need to be recorded in other categories of the land classification. For example, in France such areas account for about 5 to 10% of the total forest land [4].
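To make the classification criteria above concrete, the following is a minimal illustrative sketch (not part of any official methodology) that applies the crown-cover, area and mature-height thresholds quoted from the international definitions; the function name and input fields are hypothetical.

```python
# Minimal sketch of the forest / other-forest-land criteria described above.
# The thresholds (10% crown cover, 0.5 ha, 5 m mature height) are taken from
# the text; the record fields and function name are hypothetical.

def classify_land(crown_cover_pct: float, area_ha: float,
                  mature_height_m: float, is_shrub_zone: bool = False,
                  width_m: float = 0.0) -> str:
    """Classify a land parcel per the international definitions cited above."""
    if area_ha <= 0.5:
        return "not forest land"          # below the minimal-area criterion
    if crown_cover_pct > 10 and mature_height_m >= 5:
        return "forest"
    if 5 <= crown_cover_pct <= 10 and mature_height_m >= 5:
        return "other forest land"
    if crown_cover_pct > 10 and mature_height_m < 4:
        return "other forest land"        # e.g. dwarf trees
    if is_shrub_zone and width_m > 20:
        return "other forest land"        # shrub or bush zones
    return "not forest land"

print(classify_land(crown_cover_pct=35, area_ha=2.0, mature_height_m=18))  # -> forest
```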
The main source of data on forests is forest inventories, but because they do not use a uniform methodology or thresholds at the country level, cross-country comparisons, or even comparisons of various statistical data for one country, need to be made with great caution. Nevertheless, international definitions need to be preserved, as they offer a common ground for reconciling data at the global level.
The purpose of the statistical classification of forests and other forest lands is to integrate economic aspects of forest accounting through two main groupings: (1) forests available for wood supply (operational): "forests and other forest lands where neither legal nor economic nor specific environmental limitations can have an essential impact on wood supply. This group includes areas with no logging, in spite of the absence of the above limitations, such as lands included in long-term plans of use or intentions"; (2) forests not available for wood supply: "forests and other forest lands where some kind of legal, economic or specific environmental limitation is intended to prevent wood supply". The latter includes: a) forests and other forest lands with legal limitations or limitations due to other political decisions, which ultimately exclude or essentially limit wood supply, in particular due to considerations of environmental or biodiversity preservation (such as forests under protection, national parks, reservations or other protected areas); b) forests and other forest lands where the physical productivity or the quality of wood is too low, or the costs of logging and transportation are too high, to plan logging.
In the case of forests available for wood supply, the forest accounting system involves the following division: exploited (including planted) forests that are actively managed for economic purposes (economic forests), and unexploited (natural) forests that are beyond active management (non-economic forests).
Forests unavailable for wood supply are grouped into: • protective forests, whose function is to protect soils from erosion caused by water or wind, to prevent desertification, and to reduce the risk of avalanches or rockfalls; • protected forests, i.e. forests of special purpose, which are extremely rare by their nature or have special cultural, religious or historical significance, including national parks, natural parks, reservations, forests intended for leisure, sports, recreation, training and scientific research, climatic and other resorts, hunting grounds, as well as forests of special interest for national defense or as sources of drinking water. Because many forests fulfill protective and productive functions at the same time, this grouping is rather ambiguous, and the distinguishing line is usually drawn for protected forests, but not for protective ones.
According to the data of the State Forest Cadastre [15], the total area of forest lands in Ukraine as of January 1, 2011, was 10378.7 thousand hectares, of which 9573.9 thousand hectares, or 92%, were forest areas covered by forest plants. The latest data from the forest inventory are shown in Table 3 (compiled from data in [16]). Of the total forest area, 6441.9 thousand hectares, or 62%, is accounted for by forests with a special mode of use (nature protection, scientific, historic and cultural purpose, recreation and wellness, etc.), with the remaining 3936.8 thousand hectares, or 38%, being operational forests.
The available forest cadastre data were regrouped to enable international comparisons. As a result of the regrouping, the forest area with a special mode of use available for exploitation (1718.4 thousand hectares) and the area of operational forests, including those beyond active management (574.0 thousand hectares), are classified as forests available for wood supply. The remaining forest areas fall into the category of forests not available for wood supply. Results of the regrouping are shown in Figure 2 (constructed by the authors from data in [16]). Data on forest areas in Ukraine, disseminated via official publications and websites, differ, but tables establishing the correspondence between the existing definitions of the forest land area and other lands covered by forests, and the area of lands with forestry purposes broken down by individual purpose, are not available (Table 4, compiled from data in [16; 17]). The existing differences in the definition and the legal status of forest lands available or not available for wood supply do not allow a sound comparison of data and require future improvements and standardization of the definitions.
The total estimated stock of forest stands as of January 1, 2011, was 2100 million m3, which corresponds to 202 m3 per hectare of the total area of forest lands, or 219 m3 per hectare of the area of forest lands covered by forest plants [11].
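As a quick check of the arithmetic behind these per-hectare figures, the short snippet below reproduces them from the totals quoted above (all values are taken from the text; this is purely illustrative).

```python
# Quick check of the per-hectare stock figures quoted above (values from the text).
total_stock_m3 = 2_100_000_000           # 2100 million m3
total_forest_land_ha = 10_378_700        # 10378.7 thousand hectares
covered_by_plants_ha = 9_573_900         # 9573.9 thousand hectares

print(round(total_stock_m3 / total_forest_land_ha))   # ~202 m3 per hectare
print(round(total_stock_m3 / covered_by_plants_ha))   # ~219 m3 per hectare
```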
The forest has traditionally been seen by Ukrainians as a source of prosperity. Today, more than 85% of the wood logged in Ukraine is roundwood (operational roundwood and fuel wood). Its sale in raw form can bring stable incomes. There has been an upward tendency in logging output since 2005. In 2019, the total logging output was 20.7 million m3, including 17.9 million m3 of roundwood, which is 22% and 17% higher, respectively, than in 2005 (Figure 3, constructed by the authors from data in [18]). The existing system for classifying roundwood in Ukraine does not allow reliable data to be obtained on logging by end-use category, which has long been adopted in output statistics. Throughout 2019 the State Agency of Forest Resources of Ukraine attempted to adopt national standards of roundwood accounting harmonized with the European standards, but because regulations dating from Soviet times are still incorporated in them, the problem of breaking roundwood down into the two main groups used for international comparisons (operational wood and fuel wood) is yet to be solved. A part of the roundwood remains outside the international standards of wood accounting, and the term "fuel wood" is not covered by national regulation.
It follows that the available data on Ukraine cannot give a comprehensive picture of forestry. The State Agency of Forest Resources of Ukraine is the central executive body responsible for implementing national policy in the forestry sector, exercising official management of the forestry field in keeping with the legal and regulatory acts in force. But it has to achieve its objectives using inconsistent or even unreliable data, which destabilizes the situation in the sector under study and in related industries.
The study of forestry statistics reveals the main problems calling for improvements in the methodological approaches to their production: a) a limited set of statistical data on forest resources, such as the forest area or the growing stock: on the official website of the State Statistics Service of Ukraine (www.ukrstat.gov.ua), under the heading "Statistical information", the economic statistics section "Agriculture, forestry and fishery" contains no information on forest areas or the growing stock; b) the unsolved problem of harmonizing forestry statistics with international norms; c) unsolved methodological problems related to improving the statistical analysis of output and consumption of forest products on a regular basis and in synchronization with international practice; d) the need for constant monitoring of the information, to support the setting up and implementation of a national policy for the development of the forestry sector.
Conclusions. As the significance of the forestry sector and the forestry market grows day by day, analysis of this economic sector and monitoring of its dynamics is an urgent issue. Studies of the domestic market of forest resources draw far less attention than the external market of forest products, which is a source of currency inflow. However, the capacity of the domestic market of forest products is an important economic indicator showing how developed the domestic forestry sector is in comparison with other countries, and the forest products offered in Ukraine provide an alternative option for the development of the national wood industry.
The analysis makes it possible to outline the main areas for improvement in forestry accounting and to propose the steps necessary to address the problems of statistical studies in this field. It demonstrates that, due to institutional constraints, official statistics hold a limited set of data on the domestic forest sector, and on forestry in particular. An important area of improvement in the national statistical system with respect to solving the information problems of the forestry sector is the systematization of data on its condition and performance with common coverage, definitions and classifications in an integrated format. This should help in constructing national forest accounts in a statistically consistent and integrated form, in compliance with the accounting rules, principles and frameworks specified in the first international statistical standard of environmental-economic accounting, "The Central Framework of the Environmental-Economic Accounting" (2012).
"Environmental Science",
"Economics"
] |
Noxious pressure stimulation demonstrates robust, reliable estimates of brain activity and self-reported pain
Functional neuroimaging techniques have provided great insight into the field of pain. Utilising these techniques, we have characterised pain-induced responses in the brain and improved our understanding of key pain-related phenomena. Despite the utility of these methods, there remains a need to assess the test retest reliability of pain-modulated blood-oxygen-level-dependent (BOLD) MR signal across repeated sessions. This is especially the case for more novel yet increasingly implemented stimulation modalities, such as noxious pressure, and it is acutely important for multi-session studies considering treatment efficacy. In the present investigation, BOLD signal responses were estimated for noxious-pressure stimulation in a group of healthy participants, across two separate sessions. Test retest reliability of functional magnetic resonance imaging (fMRI) data and self-reported visual analogue scale measures was determined by the intra-class correlation coefficient. High levels of reliability were observed in several key brain regions known to underpin the pain experience, including the thalamus, insula, somatosensory cortices, and inferior frontal regions, alongside "excellent" reliability of self-reported pain measures. These data demonstrate that BOLD-fMRI derived signals are a valuable tool for quantifying noxious responses pertaining to pressure stimulation. We further recommend the implementation of pressure as a stimulation modality in experimental applications.
Experience is not static over time, and pain intensity can fluctuate. Despite variations over time, subjective measures can reliably capture the pain experience (e.g. Grafton et al., 2005; Hodkinson et al., 2013; Williams et al., 2000). For instance, a meta-analysis conducted on visual analogue scales (VAS), numerical ratings, and verbal rating scales showed high reliability for these measurements (Williamson and Hoggart, 2005). Comparatively, the robustness of noxious-induced brain activations is less clear (Bennett and Miller, 2010). A well-established measure of reliability, the intra-class correlation coefficient (ICC, 1979), has been the consistent method employed to quantify estimates of pain-induced blood-oxygen-level-dependent (BOLD) responses (Letzen et al., 2016; Letzen et al., 2014; Quiton et al., 2014; Upadhyay et al., 2015). ICC has been described in the context of consistency between ratings given by different judges; however, it is also used to assess the reliability of ratings across testing sessions and of imaging methods over time (Bennett and Miller, 2010; Caceres et al., 2009). ICCs for noxious thermal stimulation have been shown to range from "poor" to "excellent" (Fleiss et al., 2013) in pain-related regions (Letzen et al., 2014; Quiton et al., 2014; Upadhyay et al., 2015). Mechanical stimulation has also shown high repeatability in areas such as the secondary somatosensory cortex, but lower and more variable repeatability in the primary somatosensory cortex and thalamus (Taylor and Davis, 2009). Altogether, more work is needed to determine the reliability of pain-induced imaging endpoints, and the robustness of noxious pressure has yet to be assessed, despite its increasing application in clinical paradigms.
In the present study, BOLD-fMRI was employed to examine pressure pain-induced brain responses, using an evoked-response paradigm. Test retest reliability (ICC) of participants' subjective pain ratings and of group-level, pain-induced BOLD signal responses was examined across two identical sessions. Next, to provide further information regarding pressure stimulation effects on the brain, a map was constructed to assess the effect size distribution across voxels for pain-induced responses. Finally, ICC analyses were implemented to determine the intra-subject inter-session reliability of the pain-induced neural endpoints.
Participants
Twenty-three healthy pain-free participants (nine females; mean age = 26 years, SD = 5.2) were recruited for the study. Two participants from the initial twenty-three were excluded from data analysis for not completing both sessions. All participants were right-handed [as assessed by the Edinburgh handedness inventory; (Oldfield, 1971)] with normal or corrected-to-normal vision, no history of neurological or psychiatric disorder or history of substance abuse, and no MRI contraindication. Participants with a chronic pain condition, a history of hand/thumb trauma, or with a neurological condition affecting the hand were additionally excluded. Participants were asked in advance of the initial testing session whether they wore artificial fingernails and if they could be removed prior to taking part in the experiment. If removal was not possible, then these participants were excluded. Previous data have indicated that females exhibit variability in their pain responses due to the phase of the menstrual cycle (e.g. Iacovides et al., 2015; Martin, 2009; Teepker et al., 2010; Vincent and Tracey, 2010). Accordingly, female participants completed all three sessions of this study within the equivalent 10-day period of consecutive months (follicular phase; between days 1-10 of their menstrual cycle). Irregular menstrual cycles therefore constituted an exclusion criterion. Further, to minimise the influence of diurnal variations on pain and BOLD signal (Hodkinson et al., 2014; Jiang et al., 2016), participants were always tested at the same time of day. Moreover, participants were required to adhere to the following lifestyle guidelines: abstain from alcohol for 24 hrs and limit caffeine to a maximum of one caffeinated drink prior to each visit, and abstain from non-steroidal anti-inflammatory drugs or paracetamol for 12 hrs, as well as from the use of tobacco or nicotine containing products for four hours, prior to each visit. Participants gave written informed consent and the experiment was approved by the Psychiatry, Nursing and Midwifery Research Ethics subcommittee at King's College London, UK (Ethics reference: RESCM-17/18-4769).
Procedure
Participants attended three separate sessions in total. The first session was a familiarisation and sensory thresholding session conducted in a mock scanner environment. The following two sessions were conducted in an MRI scanning unit and were identical for test retest purposes. The mean interval between each testing session was 10.8 days (SD 10.6).
Session 1 (familiarisation and sensory thresholding)
At the commencement of the first session, participants underwent a Drugs of Abuse (DOA) test and breath alcohol test to assess for substance use and compliance with the study requirements. Furthermore, compliance with lifestyle guidelines was assessed. Next, participants underwent sensory thresholding for pressure stimulation. Each participant received one ascending series of pressure stimuli and one randomised series (for each hand separately). All stimulations were applied to the thumbnail using an automated, custom-made, pneumatic, computer-controlled stimulator with a plastic piston that applies pressure via a 1.13 cm² hard rubber probe (Jensen et al., 2009; Jensen et al., 2010). The thumb was inserted into a cylindrical opening and positioned such that the probe applies pressure to the nail bed. The precision of pressure applied via the piston was calibrated over four repetitions to confirm reliability of the delivered force prior to commencing data collection for this study. This same pressure device was used for both the sensory thresholding session and for the evoked-pressure paradigm in the scanner for consistency.
In the first stage of the ascending series staircase, participants received stimulation at 55 kilopascals (kPa; 2 s duration), which increased incrementally in steps of 4 kPa (4 s intervals). Participants were required to inform the experimenter when they had reached their minimum pain threshold (first score > 0 on a pain scale made up of 100 elements, anchored with 'no pain' at one side (0) and 'worst pain imaginable' at the other side (100)). Participants were additionally asked to inform the experimenter when they reached their "high" pain threshold (first score = 70). These values were then used to compute the magnitude of five different pressure intensities within the range of each participant's minimum and high threshold; e.g. if the minimum pain threshold was represented by a pressure of 200 kPa in the ascending series and the high pain threshold (score = 70) was reached with a pressure of 600 kPa, the randomised series would consist of pressures of 200, 300, 400, 500 and 600 kPa. Each of these five stimulations was repeated three times; thus, in total, 15 stimuli of 2 s duration were delivered in a pseudo-randomised order at 24 s intervals. During the interval, participants were required to rate their level of pain using a button box in the contralateral hand on a computerised pain VAS (7 s total presentation). A first-order polynomial function was used to determine each participant's representation of a score of 60, derived from the 15 ratings from the randomised series [for further details refer to Jensen et al., 2009]. This thresholding procedure was repeated for both the left and the right hand to account for differences in sensitivity between the left and right thumb.
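The calibration step described above can be illustrated with a brief sketch: a first-order (linear) polynomial is fitted to the 15 ratings from the randomised series and inverted to find the pressure corresponding to a VAS score of 60. This is an illustrative reconstruction, not the study code; the pressures and ratings below are invented.

```python
import numpy as np

# Sketch of the calibration step described above: fit a first-order polynomial
# to the 15 ratings from the randomised series and solve for the pressure that
# corresponds to a VAS score of 60. Pressures and ratings are hypothetical.

pressures_kpa = np.repeat([200, 300, 400, 500, 600], 3)        # 5 intensities x 3 repeats
vas_ratings   = np.array([ 5, 10,  8, 22, 25, 20, 38, 41, 35,
                          55, 60, 52, 68, 72, 70], dtype=float)

slope, intercept = np.polyfit(pressures_kpa, vas_ratings, deg=1)  # VAS ~ slope*pressure + intercept
target_vas = 60.0
pressure_for_vas60 = (target_vas - intercept) / slope

print(f"Estimated pressure for VAS 60: {pressure_for_vas60:.0f} kPa")
```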
Sessions 2 and 3 (imaging acquisition)
The procedure for the two imaging sessions was identical. Prior to entering the scanner, participants underwent DOA and breath alcohol tests to assess for substance use and compliance with the study requirements. Furthermore, compliance with lifestyle guidelines was assessed. Next, participants underwent structural and localiser scans followed by the evoked-response pressure paradigm (Fig. 1), utilising the pressure probe previously described. The paradigm was a block design with alternating blocks of high (noxious: score of 60 determined on an individual basis by sensory thresholding in session 1) and low-pressure stimulation (non-noxious: average score of 8, 55 kPa, across all participants). Participants were informed that pressure stimulation would vary throughout the experiment; sometimes it would be higher and sometimes it would be lower. These instructions were non-specific so that participants were not aware of the thresholded values, and further to reduce the likelihood of anchoring responses. The durations selected for event presentation were chosen for optimal design efficiency (Josephs and Henson, 1999). In each block (30.8-32 s duration) participants received a train of three pressure stimuli. Each pressure stimulus had a duration of 2 s. The first stimulus occurred at a jittered interval (0-1.2 s) after the start of the block, enabling sampling of a different point in the participant's haemodynamic response when modelling the events. Each subsequent stimulus followed at intervals of 5 s. Following each train of stimulation (three stimuli in total), at the end of each block, participants were presented with a computerised VAS of the pain scale as in the thresholding procedure (7 s duration). This was followed by a blank black screen (jittered duration of 7.8-9 s) prior to the start of the next block. In total, there were 20 blocks of stimulation per run (10 noxious and 10 non-noxious blocks; total duration = 638 s). Participants completed two runs in total (one left- and one right-hand stimulation). Two separate runs were included in the experimental design (one per hand), and each hand was thresholded separately, to maintain a high total number of trials whilst minimising the effects of sensitisation.
Fig. 1 caption: In each block (two blocks depicted), participants received a train of three pressure stimuli at either high (noxious; score of 60 determined by thresholding) or low (non-noxious; 55 kPa across all participants) intensity, in alternating blocks. Within the train, each pressure stimulus had a total duration of 2 s with an interstimulus interval of 5 s; the first stimulus occurred at a jittered interval (0-1.2 s) after the start of the block. Following the train, participants were presented for 7 s with a computerised VAS of the pain scale (100 elements, anchored with 'no pain' (0) and 'worst pain imaginable' (100)), followed by a jittered interval prior to the next block. There were 20 blocks of stimulation per run (10 noxious and 10 non-noxious), and participants completed two runs in total (left- and right-hand stimulation). Each stimulus within a train contributed to the explanatory variable for either noxious or non-noxious stimulation (block dependent).
Data acquisition
The data were collected using a 3T GE MR750 MRI scanner equipped with a 32-channel receive-only head coil (Nova Medical, USA) at the Centre for Neuroimaging Sciences, King's College London, UK. We used an echo planar imaging (EPI) acquisition sequence with the following parameters: repetition time (TR) 2000 ms; echo time (TE) 30 ms; 48 slices with a thickness of 3 mm and a 0.3 mm inter-slice gap; matrix 64 × 64; field of view 211 mm²; flip angle 75°. Slices were acquired sequentially in descending order. High-resolution T1-weighted structural images were also acquired for all participants.
Preprocessing
MRI data were preprocessed using SPM 12 (Wellcome Department of Imaging Neuroscience, www.fil.ion.ucl.ac.uk/spm) in Matlab 2015b. Functional MRI data were converted from DICOM to NIFTI format, spatially realigned to the first functional scan (within session) and slice-timing corrected. Translation and rotation parameters were determined to be in an acceptable range (< 2 mm and < 1.08°, respectively, for all participants). Structural scans were co-registered to the mean EPI and normalised, using the segment and normalise routines of SPM12, to derive the individual participant normalisation parameters. Normalised data were spatially smoothed (8 mm isotropic Gaussian kernel, full-width at half maximum) to improve signal-to-noise ratio and were additionally high-pass filtered (144 s).
Estimating BOLD signal responses to noxious stimulation
For the two primary conditions of interest (noxious and non-noxious stimulation; high and low pressure respectively) separate BOLD explanatory variables (EVs) were constructed. For both conditions (noxious and non-noxious), each of the trials began with a train of 3 stimulations, which were each modelled as having a duration of 2 s and an ISI of 5 s. Furthermore, for each condition, we constructed additional regressors encoding the period during which the VAS scores were collected. A final regressor was included for the blank black screen presented during rest, to minimise any superfluous noise in the model. All regressors were constructed separately for the two hands of stimulation. The resultant regressors were convolved with the canonical hemodynamic response function to produce ten BOLD EVs for modelling. Translation and rotation parameters (totalling 6 regressors), white matter and ventricular signal intensity were included in the model as covariates of no interest.
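As an illustration of how such an explanatory variable is formed (a sketch in Python rather than the SPM12 implementation actually used in the study), the snippet below builds a boxcar for a few hypothetical 2 s stimuli, convolves it with a canonical double-gamma HRF, and downsamples it to the TR.

```python
import numpy as np
from scipy.stats import gamma

# Illustrative sketch (not the SPM12 code used in the study) of how a stimulus
# regressor can be built: a boxcar of 2 s events is convolved with a canonical
# double-gamma HRF and sampled at the TR. Onsets below are hypothetical.

TR = 2.0                         # seconds, matching the acquisition described above
n_scans = 319                    # 638 s run / 2 s TR
dt = 0.1                         # high-resolution grid for convolution

t = np.arange(0, n_scans * TR, dt)
boxcar = np.zeros_like(t)
onsets = [10.0, 15.0, 20.0, 42.0, 47.0, 52.0]   # two example trains of 3 stimuli
for onset in onsets:
    boxcar[(t >= onset) & (t < onset + 2.0)] = 1.0   # 2 s stimulus duration

# Canonical double-gamma HRF (positive peak ~5-6 s, later undershoot)
hrf_t = np.arange(0, 32, dt)
hrf = gamma.pdf(hrf_t, 6) - (1.0 / 6) * gamma.pdf(hrf_t, 16)
hrf /= hrf.sum()

regressor_hires = np.convolve(boxcar, hrf)[: len(t)]
regressor = regressor_hires[:: int(TR / dt)]     # downsample to one value per scan

print(regressor.shape)   # (319,) -> one EV column of the design matrix
```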
To establish the most robust effects of noxious pressure stimulation and incorporate all of our data, we first produced linear contrasts of parameter estimates (COPE) for each participant for the BOLD response to noxious stimulation compared to the implicit resting baseline (main effect of noxious stimulation; both hands of stimulation combined). We then generated two additional COPEs for each participant for each hand of stimulation. Next, COPEs were generated for each participant for the following comparisons: noxious > non-noxious stimulation (i) of the left hand plus that associated with right-hand stimulation, (ii) of the left hand, (iii) of the right hand. To test for group-related responses associated with each of the first-level contrasts of interest, one-sample t-tests were carried out. The statistical height threshold was set to p < 0.001, family-wise error (FWE), Gaussian Random Field (GRF) corrected at the cluster level (p < 0.05). To provide further information on the pain-induced BOLD responses that were subsequently submitted for test retest reliability, we calculated the size of the effect at each voxel. Effect size calculations (Cohen's d) were performed for the central contrast of interest (main effect of noxious stimulation). Following previous work (Geuter et al., 2018), the effect size was computed at each voxel (v) as the mean COPE across all subjects divided by the standard deviation across subjects, d_v = mean(COPE_v) / SD(COPE_v). Groupings of effect size were based on guidelines from Cohen (Cohen and DuBois, 1999).
Fig. 2 caption: Left panel: VAS scores (scale anchored with 'no pain' (0) and 'worst pain imaginable' (100)) were higher for noxious stimulation compared to non-noxious stimulation (ANOVA); no other main effects or interactions were significant; error bars indicate standard error. Right panel: plot shows CVs by session and stimulation type (noxious, non-noxious); there was higher dispersion around the mean under non-noxious stimulation compared to noxious stimulation (main effect: ANOVA); no other main effects or interactions were significant.
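The voxel-wise effect size calculation described above amounts to a one-sample Cohen's d over the subject-level COPEs; a minimal sketch with simulated data is given below (the array shapes and values are hypothetical, with 21 subjects matching the retained sample).

```python
import numpy as np

# Sketch of the voxel-wise effect size computation described above: Cohen's d
# at each voxel is the mean contrast estimate (COPE) across subjects divided by
# its standard deviation across subjects. The COPE array here is simulated.

n_subjects, n_voxels = 21, 10_000
rng = np.random.default_rng(0)
copes = rng.normal(loc=0.3, scale=1.0, size=(n_subjects, n_voxels))   # subject x voxel

d = copes.mean(axis=0) / copes.std(axis=0, ddof=1)

# Rough bands following conventional guidelines (small ~0.2, medium ~0.5, large ~0.8)
print((np.abs(d) >= 0.8).mean(), "proportion of voxels with a large effect")
```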
2.6. ICC reliability
2.6.1. Behavioural measures
VAS scores were entered into a two-way ANOVA, with factors Stimulation Type (noxious, non-noxious) and Session (session 1, session 2). Next, coefficients of variation (CVs; SD/mean) were calculated separately for noxious and non-noxious stimulation, in each session, for each individual. As CVs were not normally distributed, we performed a log transformation (log10) prior to entering the data into a two-way ANOVA with factors Stimulation Type (noxious, non-noxious) and Session (session 1, session 2). Test retest reliability of VAS self-report pain scores was calculated between session 1 and session 2 (intra-subject, inter-session; collapsed across left and right hand). To assess reliability, the ΔVAS scores (noxious - non-noxious) and ICC(3,1) were computed using SPSS v19.0 (SPSS Inc., Chicago, IL, USA). Following previous recommendations (Fleiss et al., 2013), ICC values were categorised as follows: < 0.4 as poor, 0.4-0.59 as fair, 0.60-0.74 as good, and > 0.75 as excellent. While a value of 1.0 indicates near-perfect agreement between the values of the test and retest sessions, a value of 0.0 would indicate that there was no agreement between them.
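As a small sketch of the coefficient-of-variation step (CV = SD/mean per condition and session, log10-transformed before the ANOVA), the following uses simulated ratings for a single participant; it is illustrative only, not the SPSS analysis used in the study.

```python
import numpy as np

# Sketch of the coefficient-of-variation step described above: CV = SD / mean
# of a participant's VAS ratings within a condition and session, log10-
# transformed before the ANOVA. Ratings below are simulated for one participant.

noxious_vas     = np.array([55, 60, 48, 52, 58, 50, 62, 57, 54, 59], dtype=float)
non_noxious_vas = np.array([ 4, 10,  2,  8,  6, 12,  3,  7,  5,  9], dtype=float)

for label, ratings in [("noxious", noxious_vas), ("non-noxious", non_noxious_vas)]:
    cv = ratings.std(ddof=1) / ratings.mean()
    print(f"{label}: CV = {cv:.2f}, log10(CV) = {np.log10(cv):.2f}")
```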
Reliability of BOLD signal in response to noxious stimulation
To systematically evaluate the neural test retest performance, inter-session intra-subject reliability was estimated using the third ICC formulation, ICC(3,1) = (BMS - EMS) / (BMS + (k - 1) × EMS), where BMS is the between-target mean squares, EMS is the error mean squares, and k is the number of repeated sessions. All ICC values were calculated in MATLAB 7.1 (The Mathworks Inc.) using the locally-developed ICC toolbox (Caceres et al., 2009). Intra-subject reliability was calculated at three levels: the whole brain, the complete activation network and the activated regions of interest (ROI), using a voxel-wise t-statistic threshold of 4.5 [following Caceres et al., 2009]. The activation network was obtained using a one-sample t-test for the first session (for each contrast of interest separately). Functional ROIs were obtained in a second-level analysis, FWE (GRF) corrected at the cluster level (p < 0.05), using an initial voxel-wise height threshold of p < 0.001. The ROI masks were extracted using the MarsBar toolbox (Brett et al., 2002). The medICC is reported, which is the reliability measure obtained from the median of the ICC distribution within regions (Caceres et al., 2009).
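A minimal sketch of the ICC(3,1) computation on an n-subjects by k-sessions matrix, following the mean-squares formulation above, is given below. It mirrors the standard Shrout-Fleiss consistency formulation rather than the MATLAB toolbox actually used; the example values are simulated.

```python
import numpy as np

# Minimal sketch of ICC(3,1) for an n_targets x k_sessions matrix, following
# the mean-squares formulation quoted above (two-way mixed, consistency,
# single measure). Example values below are simulated.

def icc_3_1(data):
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # between-target SS
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # between-session SS
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols # residual SS
    bms = ss_rows / (n - 1)                                  # between-target mean squares
    ems = ss_err / ((n - 1) * (k - 1))                       # error mean squares
    return (bms - ems) / (bms + (k - 1) * ems)

# e.g. delta-VAS (noxious - non-noxious) per participant across two sessions
session1 = np.array([45, 50, 38, 60, 55, 42, 48], dtype=float)
session2 = np.array([47, 52, 35, 58, 57, 40, 50], dtype=float)
print(round(icc_3_1(np.column_stack([session1, session2])), 2))
```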
Behavioural data
VAS scores were entered into a two-way ANOVA with factors Stimulation Type (noxious, non-noxious) and Session (session 1, session 2). There was a main effect of Stimulation Type (F(1,20) = 271.5, p < 0.001) with higher VAS scores in the noxious (VAS; mean 51.1, SD = 14.7) compared to the non-noxious condition (VAS; mean 5.9, SD = 5.1). No other main effects or interactions were significant (all p values > 0.5). The VAS report is presented in Fig. 2 (left panel).
Next, we calculated CVs by stimulation type and session. A two-way ANOVA as above revealed a main effect of Stimulation Type (F(1,20) = 47.1, p < 0.001) with increased dispersion around the mean under non-noxious (CV; mean 0.57, SD = 0.61) compared to noxious stimulation (mean CV 0.17, SD = 0.13). No other main effects or interactions were significant (all p values > 0.4). Fig. 2, right panel, depicts CVs by Stimulation Type and Session.
Finally, the intra-subject inter-session ICC was obtained for our behavioural measures. An "excellent" degree of reliability was found for the ΔVAS (noxious - non-noxious) pressure scores between session 1 and session 2. The single-measures ICC was 0.75 (95% CI [0.49, 0.89]).
Evoked responses to noxious pressure
The primary aim of the presented research was to calculate the reliability of BOLD signal responses pertaining to noxious pressure stimulation (incorporating the data from both left-and right-hand stimulation). We computed the main effect of noxious pressure (compared to an implicit resting baseline), to assess the ICCs of the pain-modulated signal over two identical sessions. For comparison, we additionally computed the reliability of noxious pressure against a baseline of non-noxious stimulation, as the baseline used for subtraction has previously been shown to modulate ICC estimates ( Hodkinson et al., 2013 ). Therefore, in the following, we report data pertaining to both the main effect of, and the contrast of noxious stimulation.
Analysis of the main effect of noxious and non-noxious pressure stimulation (data from stimulation to each hand incorporated), at the recommended initial height threshold [p < 0.001; Eklund et al., 2016], revealed several large clusters reaching a size of 44,820 voxels. Accordingly, a more conservative height threshold (p < 0.0001) was adopted for these contrasts, in order to render these clusters interpretable (refer to Fig. 3 for a cluster extent comparison between the two height thresholds). At this more conservative threshold, the main effect of noxious stimulation (session 1 data) showed significant activity bilaterally across the insula, thalamus and putamen extending into the postcentral gyrus. Additional regions included the cerebellum and primary somatosensory cortex extending into the inferior frontal gyrus (IFG). Paired t-test comparisons between noxious and non-noxious stimulation were additionally computed (data from both hands incorporated). For the contrast of noxious > non-noxious stimulation (session 1 data) there was significant activity in regions including the bilateral insula extending into the thalamus, putamen, and precentral gyrus. Further regions included the cerebellum (bilateral) and primary somatosensory cortices (refer to Table 3 for peak coordinates and Fig. 4, upper panel, for session 1). Likewise, in session 2, peak activation for noxious > non-noxious stimulation was observed in the thalamus, primary somatosensory cortices, cerebellar regions and precentral gyrus, and additionally in the IFG, cingulate and supramarginal gyrus (Table 3 and Fig. 4, lower panel, for session 2). For the opposite contrast (non-noxious > noxious stimulation) there was significant activation in both sessions in occipital (e.g. lateral occipital complex) and temporal areas (e.g. superior gyrus), as well as in and around the postcentral/precentral gyrus (Table 3, lower panels, for peak coordinates).
Fig. 3 caption: In session 1 there were clusters of activation in insula, thalamus and putamen extending into the postcentral gyrus; additional regions included the cerebellum and primary somatosensory cortices extending into IFG. Peak activation in session 2 followed a similar pattern. Depicted are two overlays at initial height thresholds of p < 0.001 (blue to white) and p < 0.0001 (red to yellow).
Test retest reliability of evoked noxious pressure
ICC measures were implemented to examine test retest reliability of voxel-wise fMRI data. The results are presented in Table 4. For the main analysis of interest (main effect of noxious stimulation; stimulation to both hands incorporated), there was "fair" reliability in the brain (ICC: 0.46) and "good" reliability in the activated network (0.60). The relative number of voxels against ICC scores is plotted in Fig. 5 (left panel). Fig. 6A (main effect of noxious pressure) illustrates ICC values across the brain in the upper panel, with significant clusters of activation from the pertinent second-level analysis in the lower panel. Significant clusters of activation from session 1 ranged between "poor" (lowest ICC = 0.33; intracalcarine cortex) and "good" reliability (highest ICC = 0.74; thalamus, insula, putamen, primary somatosensory cortices, IFG, and postcentral gyrus; refer to Table 4, upper panel). Reliability estimates were additionally computed for left- and right-hand stimulation separately for comparison to the composite of both (supplementary data). For the reliability of left-hand stimulation, significant clusters ranged from "poor" (lowest ICC = 0.04; thalamus) to "good" (highest ICC = 0.68; right primary somatosensory cortex). For right-hand stimulation, clusters again ranged from "poor" (lowest ICC = 0.27; cerebellum) to "good" (highest ICC = 0.70; left supramarginal gyrus extending into the postcentral gyrus). Moreover, reliability measures were computed for the main effect of non-noxious stimulation (Table 4, middle panel, for full list of ICCs). Compared to the reliability estimates for the main effect of noxious stimulation, ICCs were lower for both the brain (0.35) and the activated network (0.52).
Fig. 4 caption: Evoked activation for noxious > non-noxious stimulation (both hands) for session 1 (upper panel) and session 2 (lower panel). In session 1 there were significant clusters of activation in regions including bilateral insula extending into the thalamus, putamen, and precentral gyrus; additional regions included the cerebellum (bilateral) and primary somatosensory cortices. Peak activation in session 2 also centred on the thalamus, primary somatosensory cortices, cerebellar regions and precentral gyrus, and additionally on the IFG, cingulate and supramarginal gyrus. The height threshold was set to p < 0.001.
Reliability was also assessed for noxious pressure stimulation with a baseline measure (subtraction) of non-noxious stimulation for comparison. The relative number of voxels against ICC scores for the brain and network are plotted in Fig. 5 (right panel). For these data, there was poor reliability overall (brain: 0.27; activated network: 0.39), but "fair" reliability across a couple of the significantly activated clusters, including a large cluster extending over the insula/thalamus/putamen and precentral gyrus (Table 4 and Fig. 6B).
Discussion
In the current study, we examined the reliability of acute noxious pressure, a now commonly implemented, but previously unassessed, stimulation modality. Group-level analysis for noxious pressure, both the main effect of, and contrasted against, non-noxious stimulation, revealed a number of regions of cortical and sub-cortical pain-related activation, in line with previous research (Apkarian et al., 2005). ICC calculations for the main effect of noxious pressure, which demonstrated large effect sizes, indicated good reliability across the activated network (0.60) as well as within significantly activated clusters (0.33-0.74). The reliability of the behavioural data was "excellent", replicating previous findings of high reliability across behavioural measures (e.g. Bijur et al., 2001). These data inform our understanding of the nature of pain-induced BOLD signal, establishing that pressure stimulation produces robust and reliable evoked activation.
A substantial body of work has been conducted on the functional localisation of responses to noxious stimulation. For instance, a meta-analysis (Duerden and Albanese, 2013) of 140 neuroimaging paradigms revealed that whilst some activations are dependent on stimulus modality (e.g. heat vs. cold), the thalamus and insula are similarly activated regardless of the type of noxious stimulus, both of which activations were observed in the present report. In comparison, the number of reports that provide quantification regarding test retest ICCs of these pain-induced responses is sparse. Nonetheless, our findings echo previous investigations that have implemented ICC calculations of acute pain and demonstrated ranges of poor to excellent reliability (Letzen et al., 2016; Quiton et al., 2014; Upadhyay et al., 2015). The present study demonstrated that noxious pressure elicits high levels of reliability, with "good" ICCs associated with regions commonly recruited during acute stimulation, including the insula, thalamus, putamen, IFG and somatosensory areas (e.g. Apkarian et al., 2005; Duerden and Albanese, 2013; Peyron et al., 2000). In these specific clusters, ICCs were observed in the range of 0.68 to 0.74, greater than the average report across disciplines (Bennett and Miller, 2010), and analogous to previous research that has examined the reliability of noxious heat. These prior studies reported coefficients within this range in the insula (Letzen et al., 2016; Quiton et al., 2014; Upadhyay et al., 2015), thalamus, inferior frontal regions, and somatosensory areas (Quiton et al., 2014; Upadhyay et al., 2015). This indicates that noxious pressure and heat have similar neural endpoints that are reliably activated over multiple sessions. However, as there is only limited data reporting ICCs of pain-induced responses, with some modalities yet to be assessed (e.g. noxious cold), future work is needed to determine the degree of stimulus-specific reliability.
In the present data we observed greater activation in the noxious compared to non-noxious condition in a wide range of regions, including insula, thalamus, posterior cingulate and inferior frontal areas. However, high levels of reliability across the significantly activated clusters were only exhibited when scrutinising the main effect of noxious pressure. Comparatively, there was lower reliability across the activated network and significantly activated clusters when employing a baseline (subtraction) of non-noxious stimulation. These findings echo previous reports (Hodkinson et al., 2013) emphasising that the selected baseline plays an important role in measures of ICC. Here, a baseline of non-noxious stimulation does not provide a highly reliable endpoint, as the stimuli are considered to be less salient and BOLD responses to non-noxious stimulation less stable across time. Note as well that within these reliability maps, as well as for those pertaining to the main effect of noxious stimulation, there were regions observed outside of the significantly activated clusters that displayed high levels of reliability. This has been previously demonstrated (Caceres et al., 2009), where highly activated regions have shown low reliability whilst some sub-threshold regions have displayed high reliability. It is not entirely unexpected that regions may convey a reliable BOLD signal without carrying significant information about the specified contrast. One reason for this is that fluctuations have been identified during both resting-state and active tasks that are believed to reflect long-distance neural synchronisation (Buzsáki and Draguhn, 2004) and that are, in addition, reliable over time (Zuo et al., 2010).
The test retest characteristics of noxious BOLD-evoked responses are on a par with reliability reports from other sensory-motor, cognitive and affective domains. In a meta-analysis of fMRI test retest data (Bennett and Miller, 2010), reliability ranged from "fair" to "good" across all disciplines, with an average ICC report of 0.5. More recently, a meta-analysis determined the average reported ICC at 0.4 (Elliott et al., 2019). In general, sensory and motor tasks tend to have high reliability; for example, high ICCs are reported for finger-tapping tasks: 0.85 (Friedman et al., 2008), 0.76 (Kong et al., 2007) and 0.72 (Gountouna et al., 2010). Comparatively, ICCs tend to be lower in the cognitive domain, such as in the case of reward-driven or n-back tasks [e.g. highest ICCs in ROIs 0.62 and 0.57, respectively; (Plichta et al., 2012)]. These findings are broadly comparable to the ICCs observed in this report for brain activity in response to noxious stimulation, and to previous work in the field reporting ICCs > 0.7 (e.g. Letzen et al., 2014). However, comparing reliability data across investigations is not straightforward, not only in view of the relatively limited number of current ICC reports and modalities assessed, but also given methodological differences between studies in paradigm design, data acquisition and analytical approaches. For instance, Friedman and Glover (Friedman et al., 2008) showed that increasing the number of experimental runs from one to four in a sensory-motor task provided a positive linear increase in ICC, leading the authors to speculate that further repeats may continue to provide additional improvement. Other methodological factors, including the test retest interval, sample size and design (e.g. blocked vs. event-related), will all play a role in reliability estimates.
It is important to consider the effects of inherent sources of noise in reliability estimates, such as variation due to motion, attention, and arousal (e.g. Cohen and DuBois, 1999 ;McGonigle et al., 2002 ). In this study translation and rotation parameters were assessed within-run and determined to be within an acceptable range. Scanning was also performed at the same time of day to minimise diurnal variation for each participant over repeated sessions ( Jiang et al., 2016 ), and a constant level of arousal and attention was maintained by restricting caffeine consumption prior to scanning acquisition ( Chen and Parrish, 2009 ;Liu et al., 2004 ); recommendations we would make for researchers considering similar studies. However, although participants received the same instructions in both sessions, it was not possible to fully control for expectancy and initial levels of saliency and anxiety, both of which have been shown to have a significant effect on an individual's level of pain perception (e.g. Baker and Kirsch, 1991 ;Brown et al., 2008 ;McGowan et al., 2009 ;Vase et al., 2005 ;Wager, 2005 ). Of note, however, is that participants' first visits were conducted in a mock scanning environment to assist in minimising these effects. Nonetheless, it is a common experimental observation that anticipating, and being anxious about upcoming pain, can exacerbate the experience ( Tracey and Mantyh, 2007 ). Therefore, one could speculate that a blocked design becomes predictable over two sessions (and thus initial anxiety, and saliency effects dissipate). In addition, as with repeated stimulation paradigms utilising visual stimuli (e.g. Parkes et al., 2004 ), noxious stimulation too decreases BOLD signal over repetitions (e.g. Bingel et al., 2007 ). However, whilst these aforementioned factors may have elicited variations over the two sessions, we utilised a blocked design as it provides maximal power, and significantly, despite the potential limitations of a blocked design, we observed high levels of reliability elicited by noxious pressure.
Intersession reliability may also vary based on the number of trials of painful stimuli delivered within-session. Here we employed a relatively large number of total trials, incorporating data from both left- and right-hand stimulation. When examining the estimates from each hand separately (effectively, utilising only half of the data), a similar spread of activation was observed, biased to the hemisphere contralateral to the stimulated side. Reliability estimates were only slightly lower for left- and right-hand stimulation considered separately. Whilst increasing the quantity of stimuli has the potential to increase power (Huettel and McCarthy, 2001) and intersession reliability, as observed here, it can additionally introduce fluctuations in BOLD response (Duann et al., 2002) that would add variability and decrease reliability, as well as contribute to habituation or sensitisation.
The problem of response adaptation and habituation occurs in all sensory modalities (Thompson and Spencer, 1966), but is additionally difficult to avoid in experimental pain studies where participants are aware of the ethical responsibilities of the experimenter to 'do no harm' (https://www.iasp-pain.org/Education/Content.aspx?ItemNumber=1213). A core component of the pain response incorporates consideration of the potential threat to homoeostasis for the individual (Melzack, 2001; Moseley, 2003). Accordingly, only paradigms with moderate to severe evoked pain, as utilised here, are likely to persist over time, and the robustness of observed brain responses to mildly or non-noxious stimulation may be reduced. That said, it remains an open theoretical question whether an appropriate isosalient non-noxious stimulus would demonstrate reliability characteristics more comparable with noxious stimulation. 'Danger appraisal' theories of pain (Moseley, 2003) suggest otherwise; pain motivates decision and action, with a greater priority to be processed compared to a highly salient non-noxious stimulus (Wiech and Tracey, 2013), resulting in consistently strong responses over multiple sessions and higher reliability. However, we accept that the contrary viewpoint exists; as pain responses are dynamic and multifaceted, including varying levels of physiological arousal (Lee et al., 2020), variance estimates of responses to pain may be higher compared to a more uniform non-noxious stimulus, which may result in comparatively reduced reliability for noxious isosalient signals.
Although the reported data demonstrated high reliability for moderate-to-severe noxious pressure stimulation, it is a reasonable speculation that paradigms utilising the same stimulation modality with clinical populations may not elicit such highly reliable endpoints. In patients there may be further variations with regard to disease-specific pain and other related factors (e.g., fatigue, depression or frequency of medication), that can be present across time ( Apkarian et al., 2011 ). To what extent these additional patient-specific fluctuations may impact on the reliability of the BOLD measures obtained is unknown. It is likely that background levels of spontaneous pain, a defining characteristic of chronic pain for many patients that waxes and wanes over time, introduces additional variability and thus lessens test-retest reliability of evoked pressure pain endpoints, compared to healthy controls. This is an important future consideration for experimental medicine research utilising pressure stimulation with an aim of developing brain-based biomarkers of acute and chronic pain states ( Borsook et al., 2011a ), and particularly in the case of 'cross-over' within-patient designs determining therapeutic responses (e.g. Svendsen et al., 2004 ).
In this work we assessed test-retest reliability using a mass univariate framework, deriving ICC on a voxel-by-voxel basis. Voxelwise approaches have been extensively employed to derive mechanistic insights into how the brain responds to noxious stimulation (Neuroimaging, 2007). By contrast, multivariate 'machine learning' (ML) methodologies have more recently been employed that consider the contribution of all brain voxels in tandem. These approaches are appealing, as spatial correlations between activated voxels can be considered, providing potential improvements in sensitivity to detect experimental effects, for example, whether an individual is experiencing pain or whether a treatment may be effective. ML approaches also offer the desirable prospect of making predictions about new, previously unseen data, referred to as 'generalisability' (van der Miesen et al., 2019). They also offer great promise in the much-needed development of brain-based biomarkers for pain. To date, ML classification of experimentally induced pain in healthy volunteers has largely predominated, for example, in prediction of responses to thermal pain (Brown et al., 2011), as opposed to studies of real-world chronic pain states (van der Miesen et al., 2019). Further reports have demonstrated that classifiers can be specific to pain as opposed to other salient stimuli (Liang et al., 2013) and can detect modulation of the pain response by analgesia (Wager et al., 2013). Accurate generalisation performance of ML algorithms inherently requires a robust and unique 'fingerprint' of pain response that is detectable across individuals, both within and beyond the test sample under consideration. Ostensibly, these qualities bear similarity to assessment of test-retest reliability; however, the reliability of ML pain classifiers, namely, the extent to which their predictions are consistent across time in each individual in the sample, remains to be established. Given the potential for ML techniques to offer heightened sensitivity to detect pain, their reliability characteristics may accord with, or even exceed, the current gold standard for investigating pain: participants' own subjective reports. This is an important next step for ML technologies if they are to be exploited as diagnostic and prognostic markers for pain.
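For readers wishing to reproduce a voxelwise reliability map, the fragment below computes a two-session ICC(2,1) (absolute agreement, two-way random effects) at every voxel. The array layout, the choice of ICC(2,1) over other ICC forms, and the simulated data are illustrative assumptions, not a description of the exact pipeline used here.

```python
import numpy as np

def icc_2_1(data):
    """Voxelwise ICC(2,1); data has shape (n_subjects, n_sessions, n_voxels)."""
    n, k, _ = data.shape
    grand_mean = data.mean(axis=(0, 1))                         # per voxel
    subj_mean = data.mean(axis=1)                               # (n, voxels)
    sess_mean = data.mean(axis=0)                               # (k, voxels)
    ss_total = ((data - grand_mean) ** 2).sum(axis=(0, 1))
    ss_rows = k * ((subj_mean - grand_mean) ** 2).sum(axis=0)   # subjects
    ss_cols = n * ((sess_mean - grand_mean) ** 2).sum(axis=0)   # sessions
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: 20 subjects, 2 sessions, 1000 voxels of simulated beta estimates
rng = np.random.default_rng(0)
true_signal = rng.normal(size=(20, 1, 1000))
maps = true_signal + 0.5 * rng.normal(size=(20, 2, 1000))
print(icc_2_1(maps).mean())
```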
ICCs of self-reported pain indicated high reliability and were slightly higher than the most reliable fMRI brain responses. These findings accord with previous work reporting high reliability for self-report of pain (e.g. Bijur et al., 2001; Gallagher et al., 2002; Hodkinson et al., 2013; Rosier et al., 2002; Williamson and Hoggart, 2005), which also demonstrated self-report ICCs exceeding those associated with imaging endpoints (e.g. Letzen et al., 2014). In the present study, the inter-session ICC for VAS ratings was 0.75, indicating excellent test-retest reliability of within-subject reports of pain intensity. If one were to view BOLD measures as a substitute for self-report, higher ICCs for self-report as compared to neuroimaging endpoints would be concerning. However, it is only through the use of a wide range of distinct methodologies that we will gain a greater understanding regarding behavioural and brain-based endpoints of pain. Self-report estimates such as VAS are one-dimensional and, used in isolation, do not adequately capture the multi-faceted experience of pain (Schiavenato and Craig, 2010; Williams et al., 2000). Further, ratings can be severely affected by cognitive factors, for example, social desirability bias (Van de Mortel, 2008). For example, it is possible that participants in this experiment actively attempted to rate consistently over the two sessions, potentially anchoring their responses to the two stimulation intensities. Note, however, that to guard against anchoring behaviours, participants were purposely not informed that there would be only two stimulation types. Moreover, our observed CVs for VAS reports in noxious and non-noxious classes indicate moderate dispersion in ratings either side of each individual's mean VAS report, arguing against widespread anchoring behaviour.
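As an illustration of the dispersion check referred to above, the following lines compute a within-participant coefficient of variation for each stimulus class; the VAS values are invented for the example.

```python
import numpy as np

def within_subject_cv(ratings):
    """ratings: (n_subjects, n_trials) VAS scores for one stimulus class."""
    return ratings.std(axis=1, ddof=1) / ratings.mean(axis=1)

noxious = np.array([[62.0, 70.0, 66.0], [55.0, 48.0, 60.0]])       # toy 0-100 VAS scores
non_noxious = np.array([[18.0, 25.0, 21.0], [30.0, 26.0, 35.0]])
print(within_subject_cv(noxious), within_subject_cv(non_noxious))
```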
Our VAS estimates were derived post-hoc and comprised a composite subjective report of three evoked stimuli. Whilst previous work has shown a close relationship between the mean and peak of real-time pain intensity ratings and post-stimulus ratings (Koyama et al., 2004), post-hoc report is unlikely to fully capture the temporal dynamics of pain. However, this design choice was adopted with the intention of avoiding the motoric and saccadic confounds on BOLD responses that would have been induced had participants rated continuously, and which could have had an additional confounding effect on cross-session reliability estimates. Like many others, we suggest that neural and behavioural endpoints have different strengths and limitations but offer added value to one another when recorded in concert; the value in imaging pain is not to obviate self-report but to provide adjunct information.
Many factors can influence an individual's experience of acute pain over time. Here we have presented a test-retest analysis of acute pressure stimulation across two fMRI sessions. ICC measures were implemented to quantify the reliability of both the brain and behavioural response to noxious pressure. The results indicate that noxious pressure elicits a reliable behavioural and pain-induced BOLD signal over two sessions. Moreover, stimulation by noxious pressure elicits activation across a wide range of regions previously shown to be fundamental to the perception of pain. These findings demonstrate that pressure stimulation is a viable method in the study of pain and are important for clinical research that is in pursuit of developing biomarkers or that assumes reliability over repeated sessions, for example within-subject cross-over designs commonly adopted in the development of novel therapeutics.
LATERAL STRESS EFFECTS ON LIQUEFACTION RESISTANCE CORRELATIONS
When the sand compaction pile (SCP) method is implemented to improve loose deposits of sandy soils, its effect is evaluated generally in terms of increase in density, which is beneficial for reducing the liquefaction potential of the deposits during earthquakes. An additional advantage can be expected to occur due to concurrent increase in lateral stress. When the resistance to liquefaction is evaluated on the basis of SPT N-value or CPT qc-value, the increased resistance to penetration due to the sand compaction has been interpreted conventionally as being associated mainly with the increase in density. Therefore, in order to properly evaluate the effectiveness of ground improvement in compacted soils, it is necessary to quantify the effect of lateral stresses on the penetration resistance and liquefaction strength. In this paper, based on the results of SPT and CPT performed in a chamber box in the laboratory, the relationships between penetration resistance, liquefaction resistance and relative density were re-examined and the influence of lateral stress, expressed in terms of KC, was investigated. Although the results indicated that generally the resistance to liquefaction increases with increasing KC–value, little difference was noted when the density of the deposit was high. Based on the results, recommended charts incorporating the effect of KC were proposed.
INTRODUCTION
Conventionally, the cyclic resistance to liquefaction of in-situ deposits is evaluated from the penetration resistance obtained using standard penetration tests (SPT) or cone penetration tests (CPT). In North America, for example, charts have been proposed by correlating the SPT N-value or CPT qc-value with estimates of the cyclic stress ratio at a number of sites which had or had not manifested evidence of liquefaction during major earthquakes in the past (e.g., Youd et al., 2001). In Japan, on the other hand, the results of cyclic shear tests on high-quality undisturbed samples from in-situ sand deposits are correlated with the penetration resistance obtained at nearby sites to establish the chart (e.g., JRA, 1996). Charts obtained from North American and Japanese practice tacitly assume that the lateral stress σ'h in the deposit is approximately equal to 0.45-0.50 times the vertical effective stress, σ'v; in other words, the lateral stress ratio, KC = σ'h/σ'v, is taken as 0.45-0.50.
When liquefaction potential is to be evaluated for natural or reclaimed sand deposits, the above assumption may be reasonable. However, when these deposits are improved by means of the sand compaction pile (SCP) method or similar methods, it has been common practice to evaluate the effect of improvement in terms of the resistance in penetration tests, which is considered mainly to reflect an increase in density of the sand deposits. However, opinions have been expressed that the pile installation and the resulting expansion of pile diameter during SCP implementation contribute to increased lateral stress in the deposits as well, which is known to be beneficial for further increasing resistance to liquefaction. Therefore, the effect of increased lateral stress needs to be properly accounted for in evaluating the liquefaction resistance of sand deposits.
In this paper, investigation is made of the effect of increased lateral stress on the cyclic resistance of sandy deposits. Since it is difficult to explicitly investigate the effects of the KC-conditions on the relation between penetration resistance and relative density for natural grounds, results of various chamber tests performed in the laboratory were used instead. Thus, an attempt is made in the present study to seek such relations by compiling the results of chamber tests that have been reported so far, in which the effect of KC-conditions was examined explicitly. By analyzing the chamber test results taking into account the effects of the KC-condition, and by establishing relations between relative density, penetration resistance and liquefaction resistance, design charts were formulated. These charts can serve as a guideline on how to incorporate the lateral stress condition in evaluating the liquefaction resistance of deposits improved by compaction methods.
EFFECTIVENESS OF SAND COMPACTION PILE METHOD
The sand compaction pile method is one of the popular methods for improving ground to resist liquefaction. As illustrated in Fig. 1, this method involves the installation of well-compacted sand piles of large diameter into the loose liquefiable sandy deposit through the process of repeated driving-down and extracting motion of a vibrating steel pipe. As the sand pile is compacted and enlarged, the adjacent ground is pushed laterally and compacted. The effectiveness of ground improvement is commonly assessed by evaluating the increase in the penetration resistance at the centre-point between the sand piles, which is considered to reflect mainly the increase in density of the ground.
To illustrate such increase in density, typical SPT N-values obtained from sites improved by both vibratory SCP and non-vibratory (Nv) SCP procedures are shown in Fig. 2(a), while examples of CPT qc-values from vibratory and non-vibratory (Nv) SCP-improved ground are illustrated in Fig. 2(b). It is observed that penetration resistances obtained between the installed sand piles are increased as the piles laterally pushed and displaced the adjacent sandy ground. Moreover, results of cases where various instruments (e.g., pressuremeters and dilatometers) were used to measure the lateral stresses before and after implementation of both vibratory and non-vibratory SCP methods are presented in Fig. 3. In the figure, the relation between the lateral stress ratio, KC, and the improvement ratio, as, is plotted 2 years and 1 month after the SCP operation. Note that the data points corresponding to as = 0 refer to the condition prior to the implementation of the SCP method. The trend shows that substantial increases in KC-values are observed after SCP implementation, with larger increases in KC-values occurring at higher as.
BASIC CONSIDERATIONS
In order to establish design charts relating the penetration resistance and the liquefaction resistance, but which would incorporate explicitly the influence of KC-conditions, the following methodology was adopted in this study: (1) Firstly, relationships between the liquefaction resistance R and the penetration resistance, expressed in terms of the penetration resistance normalised with respect to an effective overburden pressure of σ'v = 1 kgf/cm2 or 98 kPa (i.e., the SPT N1-value or qc1-value), have been proposed and used both in Japan and in North America.
As mentioned earlier, it can be assumed with good reason that these relations between R and N1 or qc1 are applicable to soil deposits consolidated under the KC = 0.5 condition. These will therefore serve as the reference relationships in investigating the effects of various KC-conditions.
(2) It was then necessary to express both R and the N1- or qc1-value as functions of the relative density, Dr. (3) Next, the results of calibration chamber tests were analyzed in which the SPT N-value or cone penetration resistance qc is measured under controlled KC-conditions for sand deposits prepared at different relative densities. Thus, the penetration resistance (qc1- or N1-values) is expressed in terms of the relative density, Dr, and the KC-value. Moreover, a relationship showing the KC-effects on liquefaction resistance (Ishihara and Takatsu, 1979) was used to establish the effects of KC-conditions on the liquefaction resistance R at different relative densities.
(4) Finally, with the above two kinds of relations ready for use, it would then be possible to eliminate the relative density between them and to obtain direct relations between the liquefaction resistance R and penetration resistance q c1 or SPT N 1 -value for different K C -values.
The details of the above procedure are summarized in the flowchart illustrated in Fig. 4. In pursuing the above approach, it is to be noted that the basic data sets used in the study were obtained from tests on samples or deposits artificially prepared in the laboratory. In comparison, there are practically no data available showing the effects of the KC-value on in-situ deposits. Therefore, the following assumptions are made to support the methodology presented above. Firstly, the outcome of torsional tests in the laboratory on reconstituted samples of clean sands by Ishihara and Takatsu (1979) and Harada et al. (2000) disclosed the relationship given below.
(R)KC = [(1 + 2KC)/3] (R)1.0    (1)
where (R)1.0 and (R)KC denote, respectively, the liquefaction resistance under KC = 1.0 (isotropic consolidation) and under other KC-conditions. Without knowing the similarly defined relationship for field conditions, it is necessary to make an assumption regarding the effects of KC-conditions, namely that the ratio (R)KC/(R)1.0 observed on laboratory samples also applies to field deposits (Eq. (2)).
This relationship may not generally hold valid, because there are a variety of factors such as ageing and cementation in field deposits. However, if a KC-condition is produced as a result of mechanical actions such as sand compaction piling in the field, the relationship given by Eq. (2) may be considered reasonable in correlating in-situ and laboratory-produced conditions. Nevertheless, tests are necessary to verify this assumption. Secondly, the soil deposit inside the calibration chamber where penetration testing was conducted was also prepared by pluviating dry sands or by sedimenting sands under water. Since no attempt has ever been made to quantify the effect of the KC-condition for in-situ deposits of sands, it is necessary to adopt a further assumption, namely that the relations between penetration resistance, relative density and KC-value established in the calibration chamber also apply to in-situ deposits.
As mentioned above, this correlation between field and laboratory conditions may be taken as reasonable, when the K C -value is changed by a mechanical action such as implementation of the sand compaction technique.
REFERENCE RELATIONS FOR NORMALLY CONSOLIDATED CONDITION
As mentioned earlier, the resistance to liquefaction of sand deposits during earthquakes has been investigated extensively and expressed in the form of charts correlating the liquefaction resistance R and penetration resistance, either by standard penetration or cone penetration tests.
In Japan, efforts have been expended towards obtaining high-quality undisturbed samples from in-situ sand deposits and testing them in the laboratory by means of cyclic triaxial test apparatus. The results of the tests were combined with the SPT N-values at respective nearby sites to establish the chart. In the formula incorporated in the Japanese code, the liquefaction resistance, R, corresponding to a shaking which is expected to occur in an earthquake involving the subduction type of plate movement (the one generally adopted in practice), is expressed as a function of the SPT N-value as follows (JRA, 1996).
In the above equation, the N1-value is the SPT N-value corrected to an effective overburden pressure of σ'v = 1 kgf/cm2 (or 98 kPa) by way of the formula N1 = 1.7N/(0.7 + σ'v). Moreover, the subscript 80 refers to the effective energy in the hammer drop, which is considered approximately equal to 80% of the theoretical energy based on Japanese practice. This correlation is an outgrowth of an enormous amount of work, including undisturbed sampling by freezing techniques and laboratory testing. In the Japanese code, the cyclic stress ratio causing 5% double-amplitude axial strain in 20 cycles of uniform loading is obtained in the triaxial test under the isotropic condition (KC = 1.0). Then, it is corrected to the KC = 0.5 condition by multiplying by (1 + 2KC)/3 = 2/3, and also to the peak value of shear stress in irregular time histories of seismic excitation by multiplying by 1.5. Thus, the value of R obtained by Eq. (4) is considered to represent the cyclic strength in terms of the peak shear stress divided by the effective overburden pressure under the KC = 0.5 condition.
In North American practice, on the other hand, this type of relation has been developed mainly on the basis of field observations at a number of sites which had or had not manifested evidence of liquefaction during major earthquakes in the past. In terms of the cyclic resistance ratio (CRR), which is commonly used there, the liquefaction resistance can also be expressed as a function of (N1)60 (Youd et al., 2001), given here as Eq. (5). Note that in this equation, which is valid for (N1)60 < 30, an approximate relation (N1)60 = 1.3(N1)80 can be incorporated to correct for the difference in energy transfer between Japanese and American SPT practice. Moreover, the correction factor 0.65 indicated in Eq. (5) was used to take into account the fact that the liquefaction resistance in the Japanese code is expressed in terms of the maximum value of acceleration, while the average value of acceleration during seismic shaking is used in American practice.
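For readers implementing these corrections, a short numerical sketch is given below; it applies the overburden correction of the Japanese practice and the approximate energy conversion quoted above, and the example input values are invented for illustration.

```python
def n1_80(n_field, sigma_v_kgf_cm2):
    """Overburden-corrected blow count (Japanese practice): N1 = 1.7*N / (0.7 + sigma'_v)."""
    return 1.7 * n_field / (0.7 + sigma_v_kgf_cm2)

def n1_60_from_n1_80(n1_80_value):
    """Approximate energy conversion between Japanese (80%) and US (60%) practice."""
    return 1.3 * n1_80_value

# Example: N = 12 measured at sigma'_v = 0.8 kgf/cm2 (~78 kPa)
n1 = n1_80(12, 0.8)
print(n1, n1_60_from_n1_80(n1))
```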
In a similar vein, the formula for the liquefaction resistance, R, as a function of the qc1-value proposed by Suzuki and Tokimatsu (2003) is given as Eq. (6). In this equation, Ic is the CPT soil behaviour type index and qc1 is the cone resistance corrected to an effective overburden pressure of σ'v = 1 kgf/cm2 (or 98 kPa) through qc1 = qc/(σ'v)^0.5. On the other hand, the relationship between R and qc1 based on North American practice is given by Robertson and Wride (1998); when expressed in SI units, the curve can be approximated by the expressions referred to here as Eq. (7). In the above equations, it is tacitly assumed that the lateral stress ratio in the sand deposit is approximately equal to KC = 0.50. Thus, Eqs. (4)-(7) will serve as the reference relations in investigating the effects of various KC-conditions.
PENETRATION RESISTANCE AND RELATIVE DENSITY RELATION
As a first step, a discussion of the relationship between penetration resistance and relative density is presented for the case of KC = 0.5. Firstly, considering various types of soils, Cubrinovski and Ishihara (1999) proposed a relationship between the mean grain size, D50, and the void ratio range, emax - emin, of the form
emax - emin = 0.23 + 0.06/D50    (8)
where emax and emin are the maximum and minimum void ratios, respectively, and D50 is in mm. In the analysis presented herein, only the results for clean sands are considered, and the sands are classified roughly as either fine sand or coarse sand. For fine sand, the mean grain diameter is taken as D50 = 0.2-0.3 mm with emax - emin = 0.4-0.5 (average = 0.45), while values of D50 = 0.3-0.5 mm and emax - emin = 0.35-0.4 (average = 0.375) are assumed for coarse sand.
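The following fragment evaluates the void ratio range of Eq. (8) for the two representative grain sizes; the functional form reproduced above is the commonly cited Cubrinovski-Ishihara expression and is assumed here rather than taken verbatim from the original chart.

```python
def void_ratio_range(d50_mm):
    """Assumed Cubrinovski-Ishihara form: e_max - e_min = 0.23 + 0.06 / D50 (D50 in mm)."""
    return 0.23 + 0.06 / d50_mm

print(void_ratio_range(0.25))   # fine sand   -> ~0.47
print(void_ratio_range(0.40))   # coarse sand -> ~0.38
```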
Corrected N-value and Relative Density
Several researchers have investigated the relationship between the SPT N-value and relative density, Dr. Among these, the data compiled by Fujita (1968) are considered the most comprehensive, and these are shown in Fig. 5(a). Additional data points are plotted from the results of chamber tests conducted by Yoshida et al. (1988), Yasuda et al. (1996) and other investigators. Since all the data were obtained from chamber tests, the energy transfer ratio in the hammer drop is considered approximately equal throughout and is assumed to be 80%. Note that because the test implementation is similar in both chamber and field conditions, it is reasonable to assume that a similar energy efficiency applies in both conditions. The blow count thus obtained is indicated as (N1)80. Both fine and coarse sands are included in the figure, as well as saturated and dry/wet soils. It may be observed that a roughly linear relationship can be established between (N1)80 and the square of Dr (expressed as a ratio, not a percentage), with fine and saturated sands showing higher gradients than coarse sands and dry/wet samples. It is to be recalled that the linearity between N1 and Dr^2 is consistent with the relation N1 = 40 Dr^2 proposed by Meyerhof (1957).
Next, the slope, CD = (N1)80/Dr^2, of the straight line connecting the origin and each data point in Fig. 5(a) is plotted in Fig. 5(b) against the void ratio range, where the relationship of Eq. (8) is used to account for the gradation effects of each sand. It can be seen in Fig. 5(b) that there is considerable scatter in the data points but, for practical purposes, the values of CD may be taken roughly as 27.5 and 35.5 for fine and coarse sands, respectively. The relation between (N1)80 and Dr may thus be expressed as
(N1)80 = CD Dr^2    (9)
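A minimal sketch of how Eq. (9) can be used in both directions is given below; the CD values are the rough averages quoted above, and the helper names are illustrative only.

```python
import math

C_D = {"fine": 27.5, "coarse": 35.5}   # rough values read from Fig. 5(b)

def n1_80_from_dr(dr, sand="fine"):
    """(N1)80 = C_D * Dr^2, with Dr expressed as a ratio (0-1)."""
    return C_D[sand] * dr ** 2

def dr_from_n1_80(n1, sand="fine"):
    """Invert Eq. (9) to estimate relative density from the corrected blow count."""
    return math.sqrt(n1 / C_D[sand])

print(n1_80_from_dr(0.7, "fine"))      # ~13.5 blows
print(dr_from_n1_80(20, "coarse"))     # Dr ~ 0.75
```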
Corrected q c -value and Relative Density
Similar to the analysis performed for the SPT N-values, the data of cone resistance corrected to an effective overburden pressure of σ'v = 1 kgf/cm2 (or 98 kPa) are shown in Fig. 6(a) versus Dr^2. These were obtained from various test results (Jamiolkowski et al., 1988; Huang and Hsu, 2005). All the data points indicated in the figure are for dry samples. Note that, unlike the case shown in Fig. 5(a), there is no significant difference between the trends for fine sands and coarse sands. The relation between the slope of this plot (qc1/Dr^2) and the void ratio range, similarly established, is shown in Fig. 6(b). For each of the data points, it may be seen that the effect of grain diameter is insignificant. Thus, similar to the empirical relation of Eq. (9), the relation between qc1 and Dr may be expressed in the form of Eq. (10), with qc1 proportional to Dr^2. It is to be noted that relations of the form Dr versus log qc1 have been proposed by Jamiolkowski et al. (1988), but the form of Eq. (10) will be used instead in this study.
From SPT Tests
The relationship between (N1)80 and Dr, given by Eq. (9), is substituted into Eqs. (4) and (5) to obtain the relationship between R and Dr. The plots thus obtained from Eq. (4) for the Japanese code are displayed in Fig. 7(a) for fine sand and coarse sand. Similar plots can be obtained from Eq. (5), but they are not shown here. It can be observed in Fig. 7(a) that the liquefaction resistance shows a sudden increase when the relative density is greater than about Dr = 90% for fine sand and when Dr is larger than about 80% for coarse sand.
From CPT Tests
Similarly, the relationship between the qc1-value and Dr given by Eq. (10) is substituted into Eqs. (6) and (7) to obtain the relationship between R and Dr. The plots thus obtained from Eq. (7) are shown in Fig. 7(b) for fine and coarse sands. The relationship from Eq. (6) can be similarly obtained, but is not shown here. It is seen in Fig. 7(b) that the liquefaction resistance shows a sudden increase when the density exceeds Dr = 60% for both fine and coarse sands.
As there is not much difference between the curves for fine and coarse sands in both figures, the average curves shown by the solid lines in Fig. 7 will be adopted hereafter.
K C -effect on SPT
There have been no data reported in the literature on the effects of KC-conditions on SPT N-values, except for recent data obtained at Tokyo Denki University. These data are shown in Fig. 8(a) in terms of CSPH, defined as the increase in standard penetration resistance associated with an increase in the KC-value, i.e., CSPH = (N1)KC/(N1)KC=0.5, which is plotted versus the relative density (expressed in %). One set of data pertains to the values of CSPH when KC was increased from 0.5 to 1.0, and another set to the case of KC = 0.5 to 1.5. From these data, the relationship given by Eq. (11) is proposed; it is shown by the lines in Fig. 8(a). Note that these lines were drawn to best fit the data points for the denser states of saturated Toyoura sand in Fig. 5(a).
KC-effect on CPT
Various relationships have been proposed based on chamber test results indicating the effect of KC on the qc-values of CPT, among them one by Salgado (1997); these are referred to here as Eqs. (12)-(14). In these expressions, CCPH indicates the increase in cone penetration resistance associated with an increase in the KC-value, i.e., CCPH = (qc1)KC/(qc1)KC=0.5, and the subscript NC represents the normally consolidated condition (i.e., KC = 0.5). Eqs. (12) and (13) give constant values for any change in the value of KC, irrespective of the relative density. However, Eq. (14) indicates that the parameter CCPH tends to decrease with increasing relative density (expressed in %), which appears more reasonable in the light of the results of recent tests.
The values of CCPH given by Eq. (14) are plotted against relative density in Fig. 8(b) using dashed lines. Also plotted in the same figure are data obtained from recent chamber tests at the Tokyo University of Science and at Chao Tung University in Taiwan. It can be seen that, for qc1, the rate of increase in penetration resistance tends to decrease with increasing Dr. Considering the new data together with the trends given by Eq. (14), it is possible to draw new lines, shown as solid lines in Fig. 8(b). This is expressed by Eq. (15).
Effect of K C on Liquefaction Resistance
As described by Eq. (1), Ishihara and Takatsu (1979) have shown that the effect of KC on liquefaction resistance can be expressed through the factor (1 + 2KC)/3; written with respect to the KC = 0.5 reference condition, this relation is referred to here as Eq. (16). By applying Eq. (16) to the average curves of the liquefaction resistance-relative density relations for fine and coarse sands (see Fig. 7), which can be considered as the reference curves (KC = 0.5), the effects of KC on R can be obtained, as shown in Fig. 9, for example.
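A sketch of this scaling is given below; it assumes that the proportionality R ~ (1 + 2KC)/3 of Eq. (1) can be applied multiplicatively to a reference resistance defined at KC = 0.5, which is the interpretation adopted above, and the numerical values are purely illustrative.

```python
def r_at_kc(r_ref_kc05, kc):
    """Scale a liquefaction resistance defined at the Kc = 0.5 reference to another Kc.

    Assumes R is proportional to (1 + 2*Kc)/3 (Eq. (1)); the reference condition
    Kc = 0.5 therefore carries a factor (1 + 2*0.5)/3 = 2/3, which cancels below.
    """
    return r_ref_kc05 * (1.0 + 2.0 * kc) / 2.0

print(r_at_kc(0.25, 1.0))   # 0.375
print(r_at_kc(0.25, 1.5))   # 0.50
```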
RELATION BETWEEN PENETRATION RESISTANCE AND LIQUEFACTION RESISTANCE
As mentioned in previous sections, it was possible to establish the basic relationship between penetration resistance and liquefaction resistance at KC = 0.5 using the relative density. Moreover, the effects of KC on penetration resistance and on liquefaction resistance have also been discussed above. Therefore, the relationship between penetration resistance and liquefaction resistance for different KC-values can now be established. The liquefaction resistance curves expressed in terms of KC-values can be formulated by substituting Eqs. (11) and (16) into Eqs. (4) and (5) for N1-values, and Eqs. (15) and (16) into Eqs. (6) and (7) for qc1-values. The plots thus obtained are shown in Figs. 10 and 11. One possible reason for the difference between the Japanese and North American reference curves is that the Japanese relation is based on cyclic test results of high-quality undisturbed samples. Another possible reason is that Japanese practice uses 20 cycles as the reference for liquefaction resistance, while 15 cycles is tacitly assumed in US practice. In contrast, the reference curve for KC = 0.5 based on the AIJ specification more or less coincides with that by Robertson and Wride (1998) in the range of qc1 smaller than about 10 MPa.
Looking at the effect of lateral stress, it can be seen from the figures that as KC increases, the liquefaction resistance also increases. It may also be seen that all the curves tend to merge at large values of penetration resistance, indicating that the effect of KC tends to diminish in denser deposits.
To explain this tendency, schematic diagrams showing the effect of KC on the relationship between liquefaction resistance and penetration resistance are shown in Figs. 12(a) and 12(b) for deposits with low and high penetration resistance, respectively. Suppose the penetration resistance is increased from the value at point a to that at point b while keeping KC = 0.5; the liquefaction resistance R is then increased from point b to point e. If the KC-value is also increased, an additional increase in R is expected, as indicated by the shift from point e to point d. It can also be observed that the gradient of the liquefaction curve at point a for KC = 0.5 is smaller than the gradient due to the combined increase in KC and N1 or qc1 (point a to d); hence, the liquefaction curve is shifted upwards.
When the ground has low penetration resistance (loose deposits), the gradient due to the increase in KC is much greater than the gradient arising from the density increase alone. This indicates that the effect of KC on R is more significant than the effect of penetration resistance for deposits in a loose state, as shown in Fig. 12(a). On the other hand, when the ground has high penetration resistance (dense deposits), the gradient of the liquefaction curve for KC = 0.5 is generally high, indicating that the effect of penetration resistance is much more significant than the effect of KC, as illustrated in Fig. 12(b). Thus, it can be said that with increasing KC-value the liquefaction resistance increases, but this effect becomes smaller at higher density.
COMPONENTS CONTRIBUTING TO INCREASED LIQUEFACTION RESISTANCE
Based on the above discussion, the increase in liquefaction resistance of ground improved by the sand compaction pile method is due to two components, i.e., the increase in penetration resistance and the increase in KC-value. To examine this in more detail, the contributions of the increased KC-value and the increased penetration resistance to the resulting increase in liquefaction resistance were analyzed quantitatively. Both the liquefaction curves based on Japanese and American practice were considered. For illustration purposes, the data for loose (pre-SCP N1-value = 5 or qc1 = 5 MPa) and medium dense deposits (pre-SCP N1-value = 15 or qc1 = 10 MPa) were evaluated for KC = 0.5, 1.0 and 1.5.
The results are illustrated in Figs. 13 and 14 for N1-values based on Japanese and American practice, respectively, while the corresponding results for qc1-values are given in Figs. 15 and 16. The left graphs in each figure correspond to low initial (pre-SCP) penetration resistances, while the right graphs are for higher penetration resistances. In the graphs, the vertical axes represent the increase in liquefaction resistance, ΔR, while the horizontal axes show the increase in penetration resistance, i.e., ΔN1 or Δqc1. The numbers indicated in the charts correspond to the contributions of the increased penetration resistance or the increased KC-value (from 0.5 to 1.0, or from 1.0 to 1.5). For example, consider the left-most graph in Fig. 13(a), representing a loose deposit (N1 = 5) prior to SCP implementation. After compaction the N1-value rose to 10, and this increase in penetration resistance alone accounted for 54% of the total increase in liquefaction resistance, while the remaining 46% was due to the increase in KC-value from 0.5 to 1.0. On the other hand, if the KC-value is increased from 0.5 to 1.5 during SCP implementation, the contribution of the increase in N1-value to the increase in R is about 39%, while the contributions of the increase in KC from 0.5 to 1.0 and from 1.0 to 1.5 are 33% and 28%, respectively.
For all figures, it can be observed that the larger the increase in penetration resistance, the higher the resulting increase in liquefaction resistance. However, compared with the contribution of the increase in KC-value, the contribution of the increase in penetration resistance is relatively more significant, accounting for about 50-80% of the increase in liquefaction resistance when the increase in penetration resistance is large (e.g., ΔN1 = 15 or 20, or Δqc1 = 7.5 or 10 MPa). Moreover, it is observed that this trend is stronger when the initial penetration resistance is high or when the penetration testing is done through CPT. Similar trends were observed for the charts correlating R and N1 or qc1 based on North American practice. In practice, the magnitude of the lateral stress in ground improved by compaction methods can be obtained using dilatometers, pressuremeters and similar equipment. Once the penetration resistance and the KC-value are known, the corresponding liquefaction resistance of the improved ground can be determined from the design charts to evaluate the liquefaction potential. Care must be taken in interpreting the results because, as observed in Fig. 3, the KC-value can change with time as a result of stress relaxation and other factors.
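The bookkeeping behind these percentage contributions can be expressed compactly as below; r_func stands in for any of the chart relations discussed above, and the toy resistance function at the end is invented solely to exercise the calculation.

```python
def delta_r_contributions(r_func, n1_before, n1_after, kc_steps=(0.5, 1.0, 1.5)):
    """Split the total increase in R into the part due to the rise in penetration
    resistance (at Kc = 0.5) and the parts due to each successive increase in Kc
    (evaluated at the post-improvement penetration resistance)."""
    total = r_func(n1_after, kc_steps[-1]) - r_func(n1_before, kc_steps[0])
    parts = {"density": r_func(n1_after, kc_steps[0]) - r_func(n1_before, kc_steps[0])}
    for k0, k1 in zip(kc_steps, kc_steps[1:]):
        parts[f"Kc {k0}->{k1}"] = r_func(n1_after, k1) - r_func(n1_after, k0)
    return {name: 100.0 * dr / total for name, dr in parts.items()}

# Toy resistance function (linear in N1, scaled by (1 + 2*Kc)/2) for illustration only
toy_r = lambda n1, kc: 0.01 * n1 * (1 + 2 * kc) / 2
print(delta_r_contributions(toy_r, n1_before=5, n1_after=10))
```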
It is worth mentioning that the proposed charts have been derived on the basis of assumptions that require further validation and of empirical correlations containing significant scatter and uncertainty. Hence, users must be aware of these limitations when employing these charts in important design projects.
CONCLUDING REMARKS
In order to examine the effects of KC-conditions on the penetration resistance and liquefaction resistance of ground improved by the sand compaction pile method, experimental data from chamber tests with controlled KC-values were compiled and arranged. From the curves relating penetration resistances (N1 and qc1) and relative density, as well as those relating liquefaction resistance and relative density, charts were formulated showing the relationship between liquefaction resistance and penetration resistance as functions of the KC-value.
Based on a detailed analysis of the charts, it was observed that if the increase in penetration resistance due to SCP is larger, the contribution of the increased KC to the increased liquefaction resistance becomes smaller. These charts can be used to quantify the effects of the increase in lateral stress on the liquefaction resistance of grounds improved by the sand compaction pile method.
Although quite straightforward, the proposed charts are limited by the assumptions used, which need further verification. Furthermore, the charts have been developed for clean sand deposits, and their applicability to ground containing some amount of fines is an issue to be pursued in more detail in future studies.
Figure 3: Example of results showing increase in KC-values due to SCP implementation.
Figure 8: Relation between relative density and increase in penetration resistance: (a) SPT data; and (b) CPT data.
Figure 10: Recommended charts correlating corrected N-value and liquefaction strength through KC.
Figure 11: Recommended chart correlating corrected qc-value and liquefaction strength through KC.
Figure 12: Schematic diagram showing the effect of KC on penetration resistance and liquefaction resistance.
Figure 13: Plots showing the contributions of increased N1-value and KC-value on the increase in R for grounds with (a) low and (b) high initial SPT N1-values (based on Japanese practice).
Figure 16: Plots showing the contributions of increased qc1-value and KC-value on the increase in R for grounds with (a) low and (b) high initial CPT qc1-values (based on American practice).
"Geology"
] |
Low-temperature plasma treatment induces DNA damage leading to necrotic cell death in primary prostate epithelial cells
Background: In recent years, the rapidly advancing field of low-temperature atmospheric pressure plasmas has shown considerable promise for future translational biomedical applications, including cancer therapy, through the generation of reactive oxygen and nitrogen species. Method: The cytopathic effect of low-temperature plasma was first verified in two commonly used prostate cell lines: BPH-1 and PC-3 cells. The study was then extended to analyse the effects in paired normal and tumour (Gleason grade 7) prostate epithelial cells cultured directly from patient tissue. Hydrogen peroxide (H2O2) and staurosporine were used as controls throughout. Results: Low-temperature plasma (LTP) exposure resulted in high levels of DNA damage, a reduction in cell viability, and colony-forming ability. H2O2 formed in the culture medium was a likely facilitator of these effects. Necrosis and autophagy were recorded in primary cells, whereas cell lines exhibited apoptosis and necrosis. Conclusions: This study demonstrates that LTP treatment causes cytotoxic insult in primary prostate cells, leading to rapid necrotic cell death. It also highlights the need to study primary cultures in order to gain more realistic insight into patient response.
Despite continual improvement and refinement, long-term treatment for prostate cancer (PCa) is still recognised as inadequate (Jemal et al, 2011). In the case of early onset, organ-confined tumours, patients may be treated with a focal therapy (Kasivisvanathan et al, 2013;Donaldson et al, 2014). Radiotherapy or photodynamic therapy (PDT), which rely on the production of reactive oxygen species (ROS) for cytotoxic effects, are two treatments of choice for localised PCa. However, around a third of patients will experience recurrence of their disease following radiotherapy (Jones, 2011). This may be due to inherent radio-resistance of a small fraction of the tumour -the cancer stem-like cells (Frame et al, 2013). Furthermore, numerous side effects are often experienced following treatment (Chen et al, 2007;Lips et al, 2008), even with more recent technological developments, such as stereotactic body radiation therapy (Cyberknife) (Woo et al, 2014).
In recent years, low-temperature plasmas (LTPs) have shown considerable potential as active agents in biomedicine. They are formed by applying a high electric field across a gas, which accelerates electrons into nearby atoms and molecules, leading to a cascade effect of multiple ionisation, excitation and dissociation processes. This creates a complex and unique reactive environment consisting of positive and negative charges, strong localised electric fields, UV radiation, reactive species, and mainly background neutral molecules.
Operated at atmospheric pressure and around room temperature, LTPs produce high concentrations of reactive oxygen and nitrogen species (RONS), including but not limited to: atomic nitrogen (Wagenaars et al, 2012) and oxygen (Knake et al, 2008; Waskoenig et al, 2010; Niemi et al, 2013), hydroxyl radicals (OH) (Ninomiya et al, 2013), singlet delta oxygen (SDO) (Sousa et al, 2011), superoxide, and nitric oxide (NO) (Ma et al, 2014). It is now widely believed that the principal mode of plasma-cell interaction is the delivery of RONS, a key mediator of oxidative damage and cell death in biological systems (Wiseman and Halliwell, 1996; Bandyopadhyay et al, 1999), generated in the plasma and transferred to the target (Graves, 2012, 2014). In contrast, cell death by PDT relies on the generation of ROS, specifically SDO, which is highly cytotoxic (Sharman et al, 1999). Nonetheless, strong treatment resistance is encountered in hypoxic tumour regions (Krzykawska-Serda et al, 2014). The limitations of both radiotherapy and PDT, combined with the fact that LTPs concurrently produce both a multitude of RONS (Murakami et al, 2013) known to be toxic to cells and potentially strong localised electric fields, promote the potential of LTP as a future cancer therapy, which we have recently reviewed (Hirst et al, 2014b).
Many studies now describe the effects of LTPs on various cancer cell lines in culture, with reported effects including reduced cell viability (Cheng et al, 2014; Plewa et al, 2014), growth arrest, and apoptotic cell death (Keidar et al, 2011; Gibson et al, 2014; Ishaq et al, 2014). We have reported induction of DNA damage by application of LTP treatment to primary prostate epithelial cells (Hirst et al, 2014a). Recent in vivo studies also revealed that LTP treatment of subcutaneous tumours (grown from cell lines) induced growth arrest and cell death, thus significantly reducing tumour volume in glioblastoma cells (Vandamme et al, 2012). Another study showed that short, daily exposure of tumours (squamous cell carcinoma) to LTP causes DNA damage leading to apoptosis. Internal treatment with LTP has also been explored using an endoscopic approach to application (pancreatic and colorectal cells), which demonstrated reduced tumour volume and also invasion capacity (Robert et al, 2013). However, the penetrative capability of LTP treatment through solid tissues leading to complete tumour eradication is yet to be established in vivo.
Here we first conducted a proof-of-principle study in order to validate the cytopathic effect of LTP treatment on two commonly used prostate cell lines derived from benign disease (BPH-1) and prostate cancer metastasis (PC-3). We then analysed in detail the effect of LTP treatment on a matched pair of primary samples. We cultured prostate epithelial cells from normal prostate and prostate cancer tissue (Gleason grade 7) retrieved from biopsies from a single patient, allowing for direct comparison of the effects of LTP on both normal and cancer cells. We present the first experimental evidence that LTP may be a suitable candidate for focal therapy treatment of patients with early onset prostate cancer through the induction of high levels of DNA damage, leading to a substantial reduction in colony-forming capacity, and ultimately necrotic cell death, in clinically relevant and close-to-patient samples.
MATERIALS AND METHODS
Culture of cell lines and primary prostate epithelial cells. Two prostate cell lines were used in this study: BPH-1 cells, derived from benign prostatic hyperplasia (BPH), and PC-3 cells, derived from PCa bone metastases. BPH-1 cells were cultured in RPMI 1640 medium supplemented with 5% foetal calf serum (FCS) and 1% L-glutamine. PC-3 cells were cultured in Ham's F12 medium, supplemented with 7% FCS and 1% L-glutamine. No antibiotics or antimycotics were added to the cell culture medium. Cells were incubated at 37 °C with 5% CO2.
Primary prostate epithelial cells were cultured from human tissue samples as described previously (Collins et al, 2005). Needle core biopsies (14 G) were taken immediately following surgical removal of the prostate. The site of each biopsy was determined by previous pathology, imaging, and palpation. Tissues were transported in RPMI-1640 with 5% FCS and 100 U ml^-1 antibiotic/antimycotic solution at 4 °C and processed within 6 h. Needle core biopsies were verified as normal or Gleason grade 7 cancer by subsequent pathology, both cores originating from the same patient undergoing radical prostatectomy. Samples were obtained with full ethical permission and patient consent. Primary cells were cultured in stem cell media, based on keratinocyte serum-free media supplemented with L-glutamine, stem cell factor, granulocyte macrophage colony stimulating factor, cholera toxin, bovine pituitary extract, epidermal growth factor and leukaemia inhibitory factor (Collins et al, 2005). Significantly, these cells are cultured in media without FCS. Cells were grown on collagen-I-coated 10-cm dishes in the presence of irradiated STO feeder cells and incubated at 37 °C with 5% CO2. No antibiotics or antimycotics were added to the cell culture medium.
LTP jet configuration and characterisation. The LTP jet consisted of a quartz glass tube of inner/outer diameter 4/6 mm, with two copper tape electrodes spaced 20 mm apart (Figure 1A). One electrode was powered (6 kV sinusoidal voltage at 30 kHz) and one grounded. Helium was used as a carrier gas at 2 standard litres per minute (SLM), fed with a 0.3% molecular oxygen admixture. Cells were exposed to the LTP jet at a distance of 15 mm from the end of the bottom electrode for a range of treatment times from 0 to 600 s, in centrifuge tubes in a volume of 1.5 ml media. The distance between the end of the glass tube and the media surface was ~2 mm. Hydrogen peroxide (H2O2, Fisher Scientific, Loughborough, UK) was used throughout as a positive cytotoxicity control at a concentration of 1 mM. Measured using a thermocouple, treatment times of up to 600 s did not raise the surface temperature of the culture media above 36.5 °C. The temperature and relative humidity of the laboratory were ~20 °C and ~25%, respectively.
Optical emission spectroscopy was performed using an Ocean Optics HR4000CG-UV-NIR spectrometer (Dunedin, FL, USA) (200-1100 nm range) and the SpectraSuite analysis software (Dunedin, FL, USA). The integration time and number of scans to average were set at 6 s and 50, respectively. A background dark spectrum was obtained and subtracted from subsequent spectra. The optical fibre was aligned directly with the core plasma region and fixed at ~2 cm from the quartz tube.
Cell viability and clonogenic recovery assays. Cell viability was determined by use of the alamarBlue assay (Invitrogen, Life Technologies Ltd, Paisley, UK). Cells were treated with LTP and then plated into black-walled 96-well plates in triplicate at a density of 5000 cells per well in 100 µl of media. At 24, 48, 72, and 96 h after treatment, 10 µl of alamarBlue reagent (DAL1025, Invitrogen) was added to each well and incubated for 2 h at 37 °C. Fluorescence was recorded at excitation/emission values of 544/590 nm using a microplate reader (Polarstar Optima, BMG Labtech, Aylesbury, Bucks, UK), with cell viability normalised to untreated samples.
Clonogenic recovery assays were used to measure cellular recovery post-treatment. Cells were treated in suspension and replated in six-well plates in triplicate at a density of 200 disaggregated cells per well. Cells were supplemented with 2 ml of growth media, which was changed every other day. In the case of primary epithelial cell cultures, STO feeder cells were also added. At 12 days after treatment, plates were stained with crystal violet (PBS, 1% crystal violet, 10% ethanol). Only colonies of >50 cells (equating to >5 population doublings) were counted (Francipane et al, 2008).
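A compact sketch of the colony-count normalisation is given below; the division by the untreated plating efficiency is the standard clonogenic-assay convention and is assumed here rather than stated explicitly in the protocol.

```python
def surviving_fraction(colonies_treated, colonies_untreated, cells_plated=200):
    """Plating efficiency of treated wells divided by that of untreated wells."""
    pe_untreated = colonies_untreated / cells_plated
    pe_treated = colonies_treated / cells_plated
    return pe_treated / pe_untreated

# Hypothetical counts: 60 colonies in untreated wells, 12 after a 600-s LTP exposure
print(surviving_fraction(12, 60))   # 0.2, i.e. ~20% recovery
```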
DNA damage. LTP-induced DNA damage was quantified using the alkaline comet assay (adapted from Sturmey et al, 2009). Cells were treated with LTP in 1.5 ml centrifuge tubes at a density of 20 000 cells in 1.5 ml media suspension. Immediately after treatment, cells were resuspended in 30 µl PBS and mixed with 225 µl low melting point agarose. This was then pipetted onto microscope slides precoated with high melting point agarose and placed into lysis buffer (2.5 M NaCl, 10 mM Tris, 1 mM EDTA, 10% DMSO, 1% Triton X-100) overnight at 4 °C. The following day, cells were placed in alkaline buffer (0.3 M NaOH, 1 mM EDTA, pH 13) on ice for 40 min, before being electrophoresed at 23-25 V, 300 mA in alkaline buffer for a further 40 min on ice. Slides were then placed into neutralising buffer (0.4 M Tris, pH 7.5) for 2 × 10 min, before DNA was stained using SYBRgold (1:10 000 in TE buffer: 10 mM Tris, 1 mM EDTA, pH 7.5). Images were acquired by fluorescence microscopy (Nikon Eclipse TE300 microscope (Nikon, Surrey, UK), ×10 objective lens) using Volocity software (Volocity 6.3, PerkinElmer Inc., Waltham, MA, USA). AutoComet software (Tritek Corp., Sumerduck, VA, USA) was used to analyse cell images, with the median percentage of DNA-in-tail values used to record DNA damage in a minimum of 100 cells per treatment.
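The summary statistic described above can be computed as in the following fragment; the simulated per-cell tail percentages are placeholders for scored comet images.

```python
import numpy as np

def median_dna_in_tail(percent_tail_values, min_cells=100):
    """Median % DNA-in-tail for one treatment group (at least 100 scored cells expected)."""
    values = np.asarray(percent_tail_values, dtype=float)
    if values.size < min_cells:
        raise ValueError("score at least 100 cells per treatment")
    return float(np.median(values))

rng = np.random.default_rng(1)
print(median_dna_in_tail(rng.uniform(0, 80, size=120)))
```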
Detection of ROS. Extracellular H2O2 formed in the culture media as a result of LTP treatment was detected and quantified using the ROS-Glo H2O2 assay (Cat. no. G8820, Promega, Southampton, UK). Cells were treated with LTP before being plated into black-walled 96-well plates at a density of 10 000 cells in 80 µl of culture media, after which the manufacturer's protocol was followed. Luminescence intensity was quantified using a microplate reader (Polarstar Optima, BMG Labtech) and normalised to untreated wells.
Caspase-Glo 3/7 assay. Cells were treated with LTP and plated into collagen-coated 96-well plates at a density of 20 000 cells per well in 100 µl. A further 100 µl of Caspase-Glo 3/7 detection reagent (Cat. no. G8090, Promega) was immediately added to each well. Cells were incubated at 37 °C, with luminescence intensity (Polarstar Optima, BMG Labtech) recorded at 24 h after treatment. Based on findings from the other results, a reduced set of LTP exposures was used for this assay.
CellTox necrosis assay. LTP-induced necrosis was quantified using the CellTox green cytotoxicity assay (Cat. no. G8741, Promega). Cells were treated with LTP and plated into collagen-coated black-walled 96-well plates at a density of 10 000 cells in 50 µl of media per well. In addition to H2O2 and staurosporine, 2 µl of lysis solution (supplied with the assay) was added to necrotic control wells. Fluorescence intensity was recorded using a plate reader (Polarstar Optima, BMG Labtech) at excitation/emission wavelengths of 485/520 nm, with readings at 2, 4, 8, 12, and 24 h after treatment. Fluorescence was normalised to untreated wells. Complementary fluorescence-brightfield merged microscopy images were also taken (Nikon Eclipse TE300 microscope, ×10 objective lens) at the same post-treatment time intervals.
Statistical analysis. All experiments were performed in triplicate, and results are expressed as the mean with associated s.e., with the exception of comet assay data, which show the median DNA damage value. Plots were constructed and statistical analyses performed using Prism 6 (GraphPad software, San Diego, CA, USA). Statistical significance was determined using the unpaired Mann-Whitney test (DNA damage results only) or the unpaired t-test with Welch's correction (assumes non-equal s.d.) and displayed on figure plots as *P<0.05, **P<0.01, ***P<0.001, and ****P<0.0001.
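The statistical comparisons and the significance annotation scheme can be reproduced as sketched below; the input values are invented and scipy is assumed to be available.

```python
from scipy import stats

def significance_stars(p):
    """Map a p-value to the annotation scheme used in the figures."""
    for threshold, stars in [(1e-4, "****"), (1e-3, "***"), (1e-2, "**"), (0.05, "*")]:
        if p < threshold:
            return stars
    return "ns"

treated = [0.21, 0.25, 0.18]
untreated = [0.95, 1.02, 0.99]
# Welch's t-test (unequal variances), as used for most endpoints
t, p = stats.ttest_ind(treated, untreated, equal_var=False)
print(p, significance_stars(p))
# Mann-Whitney U test, as used for the comet-assay data
u, p_mw = stats.mannwhitneyu(treated, untreated)
print(p_mw, significance_stars(p_mw))
```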
Reduction in cell viability is observed following LTP treatment.
The viability of cells was quantified at 24, 48, 72, and 96 h following LTP treatment (Figure 1). A reduction in viability in both BPH-1 (Figure 1B) and PC-3 (Figure 1C) cell lines was observed. BPH-1 viability was reduced to <20%, whereas viability of PC-3 cells was reduced to <40%. In addition, reduced cell viability was recorded in both normal and tumour primary cells (Figure 1D and E), with 30-s LTP exposure leading to a small decrease in viability and 600-s exposure reducing cell viability to <20%. The positive H2O2 control was less toxic to both primary samples than the longer LTP exposures (180 and 600 s). The cell lines were more susceptible to 1 mM H2O2 (up to 90% reduction) than the primary cells, which showed only a ~30% reduction with H2O2 alone. Furthermore, the duration post-treatment had little effect on viability in primary samples, with comparable results recorded at 24 and 96 h, particularly in the tumour cells (Figure 1E).
DNA damage is sustained as a result of LTP exposure. LTP-induced DNA damage was assessed using the alkaline comet assay, with the percentage of DNA-in-tail recorded for analysis. Figure 2A and B show the percentage of DNA damage in BPH-1 and PC-3 cells, respectively, for various exposure times. Each dot represents the DNA-in-tail percentage value from a single cell. Exposures as short as 30 s induced high levels of DNA damage, with a saturation of damage levels occurring from 180 s. This concurs with findings in normal and tumour primary cells (Figure 2C and D). The level of DNA damage from LTP exposure was found to be consistently comparable to the H2O2 treatment control, and the level of damage in the tumour-derived primary sample (Figure 2D) was marginally higher (but statistically significant, P<0.001) than that recorded for the normal sample (Figure 2C).
Inhibition of colony-forming capacity as a result of LTP treatment. Treatment with LTP showed a dose-dependent inhibition of cell recovery in both BPH-1 and PC-3 cells, with the cancer cell line being more resistant than the benign cell line (Figure 3A and B). Findings in primary cells showed that treatment with 600-s LTP reduced the surviving fraction to ~20% in both normal and tumour samples (Figure 3E and F). The tumour cells appeared significantly more resistant to the shorter 180-s LTP exposure and to the H2O2 control than the normal cells.
Evaluation of H2O2 formation in cell culture media. Cells in suspension were treated with LTP for a variety of times before being analysed for the presence of extracellular H2O2 in the cell culture media, as an indication of LTP-induced ROS production. It is well known that H2O2 is extremely toxic to cells (Bandyopadhyay et al, 1999), even at micromolar concentrations (Gulden et al, 2010). Figure 3C and D show an increase in the relative concentrations of H2O2 generated in the culture media with increasing LTP exposure times for BPH-1 and PC-3 cells.
LTP exposure induces different cell death pathways in cell lines and primary prostate epithelial cells. Our results indicate that LTP exposure causes necrosis in both BPH-1 and PC-3 cell lines, as seen in Figure 4A and B. It is clear that PC-3 cells are more resistant to LTP-induced necrosis than BPH-1 cells. Significantly, necrotic cell death was also observed in both normal and cancer prostate primary cells (Figure 4C and D), whereas necrosis in the H2O2 control did not present until around 12-24 h after treatment. Likewise, the staurosporine treatment induced necrosis only at 24 h, indicative of late-stage apoptosis. In addition to necrosis, a proportion of BPH-1 cells also underwent apoptosis following LTP exposure, as verified by western blotting for the presence of cleaved PARP, whereas PC-3 cells did not (Figure 5A and B). Primary cells treated with LTP did not undergo apoptosis (Figure 5C and D). This was further confirmed by assessment of caspase 3 and 7 activity in primary samples (Caspase-Glo 3/7 assay, Promega), where only staurosporine-treated positive control cells showed positive expression (Figure 5E). Indeed, LTP-treated primary cells showed apoptotic activity levels below those of the untreated control, further verifying that cell death following LTP exposure occurs through necrosis and not apoptosis.
In addition to apoptosis and necrosis, another cellular response to stress is autophagy, which can serve as a protective mechanism but can also result in cell death. Quantitation of LC3 II/I band intensity revealed that, by 24 h post-treatment, a more than two-fold and a more than four-fold increase (over untreated controls) was present in the cancer and normal samples, respectively, indicating that an autophagic response occurred following LTP exposure (Figure 5C and D).
DISCUSSION
In this work, we have shown that treatment with LTP causes DNA damage, a reduction in both cell viability and recovery, and ultimately necrotic cell death in normal and cancer primary prostate epithelial cells. The results indicate that LTP-induced H 2 O 2 in the culture media is a likely facilitator of these effects. We also observed that, unlike primary cells and the PC-3 cell line, BPH-1 cells also die through apoptotic mechanisms following plasma treatment ( Figure 6). Our findings in primary cells highlight the potential of LTP as an alternative to, or for use in conjunction with, other existing treatments for organ-confined prostate cancer. Furthermore, the differential cell death response between cell lines and primary cells stresses the need to study clinically relevant models in order to gain insight into the potential patient response.
[Figure 5 caption, partial: (C, D) primary epithelial lysates were probed for apoptosis (C-PARP) and also autophagy (LC3B I/II); β-Actin was used as a loading control throughout, and band intensity quantification was performed using the ImageJ software. (E) Further analysis of apoptotic activity was conducted in primary epithelial cells using the Caspase-glo 3/7 assay (Promega): immediately following treatment, detection reagent was added to all wells and luminescence intensity was quantified at 24 h; readings were normalised to untreated control and are expressed as mean ± s.e.]
[Figure 6. Overview of the cellular response mechanisms following LTP treatment. As a result of exposure to LTP, cells were observed to undergo autophagy, apoptosis or necrosis (or a combination of these). The relative proportions of, and differences between, cell lines (red arrows) and primary epithelial cells (green arrows) that exhibit these phenomena are emphasised. Adapted from Kepp et al (2011).]
LTP exposure is known to cause cytotoxic effects in cells via the delivery of RONS to the liquid environment (Ahn et al, 2014; Ma et al, 2014). Our results indicate that 180-s LTP treatment of prostate primary cells leads to H 2 O 2 concentrations approximately equal to that of a 1 mM H 2 O 2 control. Additionally, LTP exposure of 600 s produced statistically significant H 2 O 2 readings two- to three-fold relative to those of the control. Interestingly, following exposure to LTP, the levels of H 2 O 2 recorded in the tumour cells were found to be generally lower than those from the normal cells, resulting in an enhanced colony recovery following treatment at 30- and 180-s LTP treatment times but not at the longest exposure of 600 s. This is in keeping with recent data suggesting that cancer cells have the ability to quench the effects of ROS more effectively than normal cells (Diehn et al, 2009; Gorrini et al, 2013). Despite this, cell viability is still strongly reduced following LTP treatments of 180 and 600 s, indicating that any RONS produced initially in the culture medium remain strongly damaging to the primary cells at increased time periods post-exposure. In contrast to data from the proof-of-principle study on prostate cell lines (Figure 1B and C), the primary samples appear far more resistant to the H 2 O 2 control, yet the reduction in viability as a result of LTP exposure is comparable between the different samples (Figure 2C and D). This suggests that the enhanced effect of the plasma is likely to be due to a cumulative effect on the cells of a multitude of reactive species produced in the plasma (the presence of atomic oxygen in the plasma core was verified by optical emission spectroscopy, Supplementary Figure S1), and/or additional plasma components such as electric fields, charges, and UV radiation (Graves, 2012; Kang et al, 2014), rather than just solely due to H 2 O 2 . This may also make it unlikely for cancer cells to become resistant to treatment, as increased tolerance to a particular reactive species would not protect against the perceived multi-faceted action of LTP. Because of the added presence of reactive nitrogen species produced by some LTPs (Cheng et al, 2014; Gibson et al, 2014), this may also present an advantage over radiotherapy, which relies heavily on ROS alone (Palacios et al, 2013), and over PDT, which relies predominantly on the single reactive species SDO for its cytotoxic effect (Sharman et al, 1999).
A contribution of the cell culture media to the observed effects cannot be discounted. We measured a three-fold increase over control of H 2 O 2 production after plasma treatment in primary cells. Yet, we see that, in the BPH-1 cell line, the LTP-treated H 2 O 2 concentrations are broadly similar, and in the PC-3 cell line the H 2 O 2 concentrations are much lower than the control ( Figure 1E and F). It is known that different cell culture media can produce different amounts of H 2 O 2 (Promega Technical Services, private communication). We therefore considered treating all cell types in a buffered saline solution and re-plating the cells in their optimal culture media. However, a counter-argument is that this would not have been physiological (with respect to treating a patient) and that any cytopathic effect would be likely to be predominantly due to short-lived reactive species, and the prolonged effects of long-lived species would be lost. Significantly, both normal and cancer primary cells used in this study were cultured and treated in identical media without serum, and so media was not a variable factor and the results from these cells can be directly compared.
Differences in H 2 O 2 levels were recorded in treated media containing cells and treated media only. All plasma-treated samples showed a reduction in H 2 O 2 production in the presence of cells (vs treated media), suggesting that the cells consume, or quench, H 2 O 2 in the media (Supplementary Figure S2A). This was by far the most pronounced in primary cells, where the H 2 O 2 level following 180-s LTP exposure was reduced by 78% in the presence of cells. There was far less of a reduction in BPH-1 cells (17%) and PC-3 cells (41%). It was also found that, by 2 h following treatment, the levels of H 2 O 2 (induced by either 600-s plasma treatment or 1 mM H 2 O 2 ) were strongly reduced in both normal and tumour primary cells. This effect was more pronounced in the tumour cells and demonstrates the strong ROS-quenching capacity of the primary cells (Supplementary Figure S2B and C). The level of H 2 O 2 formed by the positive control was further reduced to that of the untreated cells by 8 h; however, there were still elevated levels of H 2 O 2 induced by plasma treatment detected at this time point.
We have found that high levels of DNA damage, which is uniform across all cell types, is inflicted after an LTP exposure of only 30 s. In addition, a reduction in colony-forming ability following LTP treatment was observed, as cells treated with 600-s LTP recovered significantly less than those treated with the H 2 O 2 control. This is despite the DNA damage values between 600 s and H 2 O 2 control differing by only a few percent across all samples, in support of the hypothesis that the cytocidal effect of the plasma on cells is not solely due to H 2 O 2 production. Therefore, in vitro, retaining the cells in treated media is necessary to realise a strong anti-proliferative effect (which we investigated and found to be the case; data not shown), as would be seen in tissues. Other LTPbased studies report a selective plasma effect (Wang et al, 2013;Guerrero-Preston et al, 2014), that is, that the plasma preferentially induces cell death in cancer cells. However, normal and tumour cell lines studied often originate from different sites or hosts or are cultured in different media. We observe similar responses in both primary prostate tumour and normal cells from the same patient, highlighting the necessity for supporting live imaging, for example, MRI, for precise targeted tumour ablation in patients (Sullivan and Crawford, 2009).
Finally, for any progression towards a patient therapy, further elucidation of the mechanism of LTP-induced cell death is required. Following a fatal stimulus, cell death can occur broadly in one of the two ways; apoptosis -a regulated chain of events involving cell shrinkage, blebbing, and ending with the formation of apoptotic bodies that retain membrane integrity (Cohen, 1997), or necrosis -an uncontrolled swelling that leads to membrane rupture and spillage of the cell contents into the surrounding environment, provoking an inflammatory response (Casiano et al, 1998). It is clear from our results that primary cells rapidly undergo necrosis, in the almost complete absence of apoptosis. A major advantage of this is that necrotic cell death has the potential to promote immune-activation against tumour cells (Melcher et al, 1999). In contrast, apoptotic cell death has been observed to promote an immune-suppressive environment (Voll et al, 1997), allowing tumour cells to evade detection by the immune system (Gregory and Pound, 2010). Our findings were common to both normal and cancer primary sample with some subtle differences. Marginally higher levels of necrosis were observed in the cancer cells following 600-s exposure, yet both samples show almost identical recovery from this treatment (20% surviving fraction). Both normal and cancer cells treated with long LTP exposures (180 and 600 s) undergo autophagy: a completely novel finding in LTP studies on human cells. This may be a survival process for cells that do not undergo necrosis. Our observation of higher levels of autophagy in primary normal cells may be attributed to the hypothesis that normal cells have a higher ROS-threshold tolerance than cancer cells (Gorrini et al, 2013).
Although this study argues that LTP could become a potential focal therapy for localised PCa, it remains possible that a reduction in metastatic tumour volume could be observed after treatment with LTP, as a result of necrotic cell death and its associated immune response as outlined earlier. Referred to as spontaneous regression, this response has been documented following necrosisinducing thermal ablation treatments for other cancers (Sanchez-Ortiz et al, 2003;Kim et al, 2008;Chu and Dupuy, 2014), but the mechanisms responsible are largely unknown. Nevertheless, a proportion of cells survive LTP treatment and are able to proliferate following exposure to LTP, as demonstrated by their residual colony-forming capacity. The reasons for this must be determined and may potentially be overcome by manipulation and optimisation of the plasma parameters (Cheng et al, 2014) and/or pretreatment with a sensitising agent (Frame et al, 2013).
Finally, the differences in response we have observed between prostate cell lines and primary cells, particularly in terms of the mechanism of cell death, highlights the importance of studying primary cultures in order to gain an insight into patient efficacy. More specifically, the cell death mechanisms that are triggered following administration of LTP should be elucidated in close-topatient models.
CONCLUSIONS
In summary, we have clearly demonstrated the potential of LTP as a future therapy option for localised prostate cancer. Through the formation of reactive species (H 2 O 2 and more than likely also others, e.g., OH, O 2 À ) in cell culture media, we observed high levels of DNA damage in primary cells cultured directly from patient tissues, together with reduction in cell viability and colony-forming ability. These ultimately lead to necrotic cell death in both normal and tumour samples. However, further optimisation of the LTP operational parameters needs to be conducted, in order to kill the proportion of cells that remain viable after treatment. In addition, although we have previously outlined a potential approach (Hirst et al, 2014b), the feasibility of physically treating patients who have PCa with LTP has yet to be established. This would require some modification of the LTP device itself to deliver the LTP to the tumour bed, sparing normal tissues, perhaps employing existing apparatus for cryotherapy and/or brachytherapy.
We believe that with appropriate imaging techniques to facilitate accurate tumour targeting and spare normal tissues, the multifaceted action of LTP will provide advantages over other focal therapies. More specifically, therapies such as PDT rely on SDO production to destroy cells, whereas plasmas are known to be able to produce a multitude of RONS that are toxic to cells. Given that LTPs can be propagated from tubes <100 mm in diameter (Kim et al, 2011), we believe that LTP therapy could be more targeted than radiotherapy and more controlled than ice-ball formation in cryotherapy. LTP would not require additional equipment such as the warming catheters used in cryosurgery. Moreover, LTP treatment should prove far more cost-effective than existing treatments.
"Medicine",
"Biology"
] |
Tunable resistivity exponents in the metallic phase of epitaxial nickelates
We report a detailed analysis of the electrical resistivity exponent of thin films of NdNiO3 as a function of epitaxial strain. Thin films under low strain conditions show a linear dependence of the resistivity versus temperature, consistent with a classical Fermi gas ruled by electron-phonon interactions. In addition, the apparent temperature exponent, n, can be tuned with the epitaxial strain between n = 1 and n = 3. We discuss the critical role played by quenched random disorder in the value of n. Our work shows that the assignment of Fermi/Non-Fermi liquid behaviour based on experimentally obtained resistivity exponents requires an in-depth analysis of the degree of disorder in the material.
The tunable resistivity of materials undergoing a metal-insulator transition (MIT) holds great promise for resistive switching applications, such as adaptable electronics and cognitive computing [1][2][3][4][5][6][7] . However, a complete understanding of the metallic phase in these strongly correlated electron systems is still one of the central open problems in condensed matter physics 8,9 . Electronic transport is generally explained by means of Boltzmann's theory, which considers a fluid of free quasi-particles that scatter occasionally. In normal metals, the resistivity increases linearly with temperature as electrons are more strongly scattered by lattice vibrations. At low temperatures, weak interactions between electrons can significantly affect the electrical properties and give rise to a T 2 dependence of the resistivity, according to Landau's Fermi liquid (FL) theory 10 . Therefore, the scaling exponent of the power-law term of the resistivity as a function of temperature (n) is often used to infer the type of interactions ruling the metal state. In materials with strong electron-electron interactions and undergoing ordering phenomena, other exponents (n ≠ 1, 2) are usually observed, the physics behind this so-called 'Non-Fermi liquid' (NFL) behaviour [11][12][13] being a subject of active discussion [14][15][16][17] .
Among strongly correlated electron materials, nickelates (RENiO 3 , with RE denoting a trivalent rare-earth element) present a very interesting case. They have attracted attention due to their MIT 18 and the possibility to tune it using different RE elements or by epitaxial strain [19][20][21][22][23][24] . Bad metallic behaviour in nickelates has also been claimed 25 . Different models for the origin of the MIT have been put forward, based on either positive or negative charge transfer as responsible for the insulating state [26][27][28][29][30][31][32][33][34] . The negative charge transfer model supports the bond disproportionation picture, and is strongly supported by recent experiments [35][36][37] . Independent from the exact microscopic picture, the origin of the MIT is a cooperative lattice distortion that reduces the symmetry from a high-temperature orthorhombic phase to a low-temperature monoclinic phase, involving two Ni sites, with the associated need for cooperative accommodation of different Ni-O bond lengths 38 . Remarkably, it has been reported that eliminating the MIT in nickelates by orbital engineering would give rise to a superconducting state 39 , with a very recent experimental achievement in this direction 40 . It becomes, then, important to have an accurate picture of the relevant electron interactions in the intermediate-and low-temperature regimes, just before the MIT takes place. However, despite the vast amount of recent works, the metallic behaviour of the nickelates is not yet fully understood.
In nickelates, different n exponents of the resistivity as a function of temperature have been reported 14,25,[41][42][43][44][45][46] . A linear dependence on temperature has been measured in the whole Nd x La 1−x NiO 3 series in ceramic pellets 41 . Liu et al. 42 obtained n = 5/3 and n = 4/3 for NdNiO 3 (NNO) films under compressive strain, while Mikheev et al. reported a crossover between FL (n = 2) and NFL (n = 5/3) in NNO films with varying epitaxial strain 43 . The need for an empirical parallel resistor model to introduce the effect of the saturation resistivity raises questions about the interpretation of the apparent (experimentally obtained) exponents, as discussed by Hussey et al. 47 .
Here, we report the evolution of the resistivity exponent of NdNiO 3 under different degrees of epitaxial strain. Strain-free (bulk-like) thin films show a linear temperature dependence of the resistivity (n = 1). The combined effect of epitaxial strain and random disorder produces a continuous departure from n = 1, in agreement with recent theoretical work by Patel et al. 48 .
Results
Tuning the resistivity-temperature exponent in the metallic phase. Crystalline NNO films have been grown by pulsed laser deposition (PLD) on <001>-oriented LaAlO 3 (LAO), NdGaO 3 (NGO), SrTiO 3 (STO) substrates and <110>-oriented DyScO 3 (DSO) substrates, using a single-phase ceramic target (see the 'Methods' for more details). Perovskite NNO possesses an orthorhombic structure with a pseudocubic lattice parameter of 3.807 Å, which is slightly larger than that of the LAO substrate (3.790 Å). Thus, the films on LAO are expected to be subjected to small compressive strain. On the contrary, the films grown on NGO (3.858 Å), STO (3.905 Å) and DSO (3.955 Å) substrates should experience increasing tensile strain. Supplementary Fig. 1 (see Supplementary Note 1) shows the typical atomic force microscope (AFM) topography image of a 5-nm NNO film grown on a LAO substrate (NNO/LAO), showing that the atom-high steps from the substrate are still visible after the deposition of the film. In situ reflection high-energy electron diffraction (RHEED) intensity oscillations recorded during the film growth indicate that at least the first 13 layers (~5 nm) of the NNO film are deposited atomic layer by atomic layer (see Supplementary Fig. 1a for NNO/LAO and NNO/STO films). The crystalline quality and strain state of the NNO films with different thickness and on different substrates were determined by X-ray diffraction (for details see Supplementary Note 1 and later discussions). Figure 1a, b shows the sheet resistance (R S ) of NNO films grown on LAO and STO substrates, respectively, as a function of temperature. The NNO films grown on LAO substrates (under small compressive strain) exhibit a sharp MIT and a pronounced thermal hysteresis, while the hysteresis is strongly reduced in the NNO/STO films, in agreement with previous reports 1 . The evolution of the first-order transition towards a continuous, percolative-like metal-insulator transition is consistent with the presence of quenched random disorder in the films grown on STO 49 . This interpretation is supported by a higher resistivity and a smaller residual-resistivity ratio in these films compared with those grown on LAO. A further distinction is observed in the evolution of the metal-insulator transition temperature (T MI ) as a function of thickness (see insets to Fig. 1a, b), which has been attributed to the opposite alteration of orbital polarisation in response to different signs of the epitaxial strain 50 .
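As a quick cross-check of the strain states quoted above, the nominal (fully coherent) lattice misfit can be computed from the pseudocubic lattice parameters given in the text; the short sketch below (Python) uses the usual definition ε = (a_substrate − a_bulk)/a_bulk and reproduces, for instance, the ~+1.3% and ~+3.9% tensile values quoted later for NGO and DSO.

```python
# Nominal in-plane misfit strain of NdNiO3 on the substrates discussed in the text,
# from the pseudocubic lattice parameters quoted above (angstroms).
A_NNO_BULK = 3.807

substrates = {"LaAlO3 (LAO)": 3.790, "NdGaO3 (NGO)": 3.858,
              "SrTiO3 (STO)": 3.905, "DyScO3 (DSO)": 3.955}

for name, a_sub in substrates.items():
    strain_pct = (a_sub - A_NNO_BULK) / A_NNO_BULK * 100.0   # >0 tensile, <0 compressive
    print(f"{name}: {strain_pct:+.2f}%")
```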
Like in most metals, the electrical resistivity in the metallic state of nickelates can be fitted using a power law, ρ(T) = ρ 0 + AT n , where A is a coefficient related to the strength of electron scattering, and n is the apparent power-law exponent. As shown in Fig. 1c, the metallic resistivity of all NNO films grown on LAO substrates in the measured temperature range (from T MI to 400 K) can be well described with a linear temperature dependence (n = 1.00 ± 0.01), independent of film thickness. This temperature dependence has been observed in other systems, ranging from cuprates to heavy fermions, in spite of their different mechanisms of electron scattering 51 . What they have in common, however, is a constant scattering rate per kelvin (≈k B / ℏ), indicating that the excitations responsible for scattering are governed only by temperature. On the other hand, in the case of NNO/STO films (Fig. 1d), the temperature-resistivity scaling of films with different thickness deviates from linearity, showing the departure from this intrinsic mechanism. The values of the n and A coefficients in both NNO/LAO and NNO/STO systems are shown, as a function of thickness, in Fig. 2a (for details on the determination of n, see Supplementary Note 2). Interestingly, n shows a clear evolution with thickness in the NNO/STO films: n decreases with increasing NNO/STO film thickness from a value of n = 3.00 ± 0.05 for a 5-nm film to an apparent linear dependence (n = 1.01 ± 0.01) for the thickest film (40 nm). To understand this behaviour, we turn to an in-depth structural characterisation of the films, with reciprocal space maps shown in Fig. 3c-h. All NNO films grown on LAO grow coherently with the substrate (with coincident in-plane reciprocal lattices of film and substrate), for all investigated thicknesses, as expected from the very similar lattice of the bulk NNO (signalled in the maps by the yellow stars) and the substrate. On the contrary, in the NNO/STO films, only the thinnest films grow coherently with the substrate, and show an in-plane lattice significantly larger than that of the bulk, due to the large differences between the bulk NNO and the STO substrate lattices. For increasing thicknesses, a gradual shift of the film peak can be observed, in agreement with the expected evolution of the lattice parameters and strain relaxation towards the bulk lattice, with increasing thickness. Thus, the observed evolution of n (Fig. 2a) corresponds to the gradually relaxed in-plane strain of the films. Figure 2b summarises the n values extracted from the NNO films as a function of the in-plane strain, ε xx , obtained from the diffraction data in Fig. 3. Data from NNO films on NGO substrates (ε xx = +1.34%) are also included. A 5-nm NNO/NGO film also shows apparent linear T scaling in the metallic phase, confirming the correlation between the magnitude of the tensile strain and n (see Supplementary Fig. 4). Similarly to the films on STO, the extended resistivity data of the NNO/NGO films (inset of Supplementary Fig. 4) also shows a reduced hysteresis compared with that of the films on LAO. Figure 2b is completed with n values reported by other authors for bulk NNO 41 and NNO films under larger compressive strains 42,43 . Indeed, we observe a clear dependence of n on the in-plane strain. Both tensile and compressive strains are expected to induce an increase of the orbital splitting between the Ni 3+ x 2 − y 2 and 3z 2 − r 2 e g levels 43 .
However, the large asymmetry observed, with a significantly stronger dependence for the tensile strain regime, points to an additional influence on n.
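The exponent n discussed above is extracted by fitting the metallic-state resistivity to the power law ρ(T) = ρ 0 + AT n ; a minimal fitting sketch (Python/SciPy) is given below, with synthetic data standing in for a measured curve (this is not the authors' fitting procedure, whose details are in Supplementary Note 2).

```python
# Minimal sketch: extract the apparent exponent n from rho(T) = rho0 + A*T**n.
# Synthetic "data" only; replace with measured resistivity above the MIT.
import numpy as np
from scipy.optimize import curve_fit

def power_law(T, rho0, A, n):
    return rho0 + A * T**n

rng = np.random.default_rng(0)
T = np.linspace(150.0, 400.0, 120)                          # metallic regime (K)
rho = power_law(T, 80.0, 0.04, 1.33) + rng.normal(0.0, 0.5, T.size)

(rho0_fit, A_fit, n_fit), pcov = curve_fit(power_law, T, rho, p0=(50.0, 0.1, 1.0))
n_err = np.sqrt(np.diag(pcov))[2]
print(f"n = {n_fit:.2f} +/- {n_err:.2f}")
```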
Interplay between strain and defect formation. In order to shed light on this behaviour, we performed scanning transmission electron microscopy (STEM) on the films. Cross-sectional specimens of the films were studied by atomic resolution STEM (for experimental details, see 'Methods'). The high-angle annular dark-field (HAADF) STEM image shown in Fig. 4a evidences the epitaxial, cube-on-cube growth of a 5-nm-thick NNO film on a LAO substrate, with a flat, atomically sharp interface. No defects or misfit dislocations are observed. The strain state of the films was determined by geometrical phase analysis (GPA) of the HAADF images; the deformation of the in-plane lattice parameter of the film with respect to the substrate (ε xx ) is depicted in Fig. 4b. ε xx is virtually zero across the 5-nm NNO film, showing a good in-plane lattice match between film and substrate, in agreement with the X-ray diffraction data. A thicker NNO film on a LAO substrate also shows ε xx = 0 across most of the film, but it starts showing small regions with Ruddlesden-Popper (RP) faults, often reported in nickelates 52 , as seen in Fig. 4c, d. Some effect of these RP defects can be seen in the electrical properties, which show a strongly decreased resistance in the insulating state (Fig. 1a), as well as an increased resistivity in the metallic state for the 40-nm films on LAO (Fig. 1c). However, the RP defects do not preclude the presence of hysteresis at the metal-insulator transition, or the apparent linear behaviour of the metallic resistivity in Fig. 1c, as will be discussed in detail later. RP faults are known to have a significantly enlarged out-of-plane lattice parameter 52 , which can explain the unusual evolution of the out-of-plane lattice parameters as a function of thickness for the NNO/LAO films, shown in Fig. 3a. Similar images for the thinnest and the thickest films on STO, shown in Fig. 5, reveal a higher abundance of RP faults, which are present even in the thinnest films. The data, thus, strongly suggest that the RP secondary phases present in the films are not correlated with the observed changes of n.
The effect of strain on n may be indirect. Planar defects, such as misfit dislocations or stacking faults, have often been observed in nickelate films 53 , and the creation of oxygen vacancies is known to be an efficient mechanism to relax tensile strain in epitaxially grown perovskites, as oxygen vacancies locally enlarge the lattice 22,24,[54][55][56][57] . In nickelate thin films, a pair of oxygen vacancies favours the reduction of the Ni ions to Ni 2+ 6,58,59 . Indeed, measurement of Seebeck coefficients on films with a thickness of 10 nm grown on LAO and STO, shown in Fig. 6a, shows that while the film on LAO displays metallic-like transport, the film on STO shows a flat temperature dependence, a characteristic of polaronic systems.
Another indication of the existence of an increased content of oxygen vacancies in our films on STO comes from the structural data. From the definition of Poisson ratio, ν, the pseudocubic lattice parameters that would correspond to the unstrained case for the different films can be estimated as a o = (2νa+(1−ν)c)/ (1+ν) 60,61 , where a and c are the in-plane and out-of-plane lattice parameters of the films, respectively, obtained from the structural data of Fig. 3, and ν = 0.30 has been used for all films. The results, in Fig. 6b, show that the films on LAO display a lattice volume close to the bulk value, while the unit-cell volume of the films on STO is significantly increased, which is consistent with a larger oxygen vacancy content that decreases with increasing thickness. Moreover, the residual-resistivity ratio (RRR), which is often used as a measurement of materials' purity, increases with increasing thickness in the films on STO (Fig. 6c), also in agreement with a lower vacancy content in the thicker films.
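The unstrained-lattice estimate just described is a one-line calculation from the Poisson relation; below is a short sketch (Python) applying a 0 = (2νa + (1 − ν)c)/(1 + ν) with ν = 0.30 as in the text, using illustrative (hypothetical) film lattice parameters rather than the measured ones.

```python
# Sketch: pseudocubic lattice parameter of the unstrained film from the Poisson relation
# a0 = (2*nu*a + (1 - nu)*c) / (1 + nu), with nu = 0.30 as assumed in the text.
def unstrained_lattice(a_inplane, c_outofplane, nu=0.30):
    return (2 * nu * a_inplane + (1 - nu) * c_outofplane) / (1 + nu)

# Hypothetical example: a film coherently strained to STO (a = 3.905 A) with a
# reduced out-of-plane parameter; compare the result with bulk NNO (3.807 A).
a0 = unstrained_lattice(a_inplane=3.905, c_outofplane=3.78)
print(f"a0 = {a0:.3f} A (bulk NNO: 3.807 A)")
```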
Our experiments, therefore, indicate that NNO films subjected to relatively small strain values display T-linear resistivity scaling.
For larger values of tensile strain, an increase of the power-law resistivity-temperature exponent with the magnitude of the strain is observed. This is related to both the effect of strain on the orbital splitting and the degree of disorder, most likely due to oxygen vacancies, whose concentration is believed to increase with increasing tensile strain. These results validate recent theoretical predictions by Patel et al. 48 . Their computational work uses the Anderson-Hubbard Hamiltonian to predict that the metallic state that arises for small and intermediate values of both the on-site Coulomb interaction of 3d electrons (U) and the disorder (V) can be continuously tuned. The calculations predict values varying from n = 1 to n = 2 by the joint action of both U and V (it is to be noticed that in our experiments, larger values up to n = 3 are also observed). Interestingly, power-law exponents varying with the degree of disorder have also been reported for SrRuO 3 thin films by Herranz et al. 62 .
Discussion
In nickelates, epitaxial strain lifts the orbital degeneracy and causes orbital polarisation of the e g band: compressive strain lowers the energy of the 3z 2 -r 2 orbitals, while tensile strain lowers the x 2 -y 2 orbitals 43 . In this sense, both compressive and tensile strain have a similar influence on U. Since the amount of defects is smaller in the films under compressive strain, the values of n under epitaxial compression should be a closer measure of the direct effect of strain in the absence of disorder. On the other hand, the introduction of oxygen vacancies in the tensile case gives rise to a combined effect of strain and disorder, which is reflected in a stronger dependence on strain in the tensile region of Fig. 2b. Actually, to directly clarify the effect of disorder on n, a plot of n versus defect density, instead of epitaxial strain as in Fig. 2b, would be more appropriate. However, an accurate quantitative estimation of the amount of defects in such thin films is very challenging and could lead to erroneous conclusions (see Supplementary Note 4). Given the relationship between strain and defect concentration demonstrated by several authors 58,63,64 , such a conservative plot is more adequate.
In addition, a direct investigation of the correlation between electrical transport properties and defect density can be achieved by tuning the concentration of oxygen vacancies of a single film by changing the annealing conditions after growth. For this, a 20nm NNO film grown on a STO substrate with different amounts of oxygen vacancies was prepared in this work (see 'Methods' and Supplementary Note 5), and the corresponding changes in structure and resistivity were characterised (see Supplementary Fig. 6). As we mentioned above, the existence of oxygen vacancies gives rise to an enlarged unit-cell volume of the films. This is an effect of chemical expansivity due to electrons being donated to σ bands. Hence, the change in the density of oxygen vacancies is correlated with a change of the lattice parameters of the films 59 . As shown in Supplementary Fig. 6a, the out-of-plane lattice parameter of the 20-nm NNO/STO film after vacuum annealing is about 3.799 Å. This value is larger than 3.782 Å of the optimised film (see Fig. 3b), which has been annealed with a 900mbar oxygen pressure, as explained in the 'Methods'. This is consistent with a larger content of oxygen vacancies for the vacuum-annealed films, as expected. As a consequence of this increase in oxygen vacancies, the metallic phase is fully suppressed, accompanied with several orders of magnitude increase in resistivity, as shown in Supplementary Fig. 6c. If the film is subsequently annealed in an oxygen-enriched environment at increasingly large temperatures, oxygen can be gradually replenished, resulting in a decrease of the out-of-plane lattice parameter and, thus, a shift of (002) diffraction peak towards larger angles. Correspondingly, the resistivity shows a decrease, and the metallic phase is recovered after annealing at sufficiently high temperature. More importantly, with the further reduction of oxygen vacancies, a clear evolution of the exponent n from 2.24 to 1.64 is also observed in the resistivity of the metallic phase (see inset in Supplementary Figs. 6c and 7), deviating from the T 1.33 dependence measured for this thickness on samples annealed with the standard procedure (see Fig. 2a). For comparison, the same annealing treatment was also employed in a 20-nm NNO/LAO film. However, only a linear T dependence of resistivity (n = 1) is found in this system after the recovery of the metallic phase, regardless of the oxygen content (see Supplementary Fig. 6b, d). These experiments reveal that the oxygen vacancy content in the films on LAO is not large enough to induce changes in the macroscopic transport through the film, while the larger oxygen vacancy content in tensile-strained nickelate films clearly affects the resistivity-temperature-scaling exponent. Next to vacuum annealing, a large enough tensile strain can also induce a large density of oxygen vacancies, and should, eventually, suppress the metallic phase. This is confirmed in films grown on DSO substrates, under +3.86% strain, for which the resistivity data can be described by a variable range hopping (VRH) conduction model for T < 70 K (see Supplementary Fig. 8) followed by a nearest-neighbour hopping (NNH) model with E a = 32 meV for temperatures above T = 70 K, as often observed in disordered solids 65,66 . It is interesting to notice that a film of the same thickness on STO shows similar behaviour in the insulating state: comparable E a in the NNH regime and comparable crossover temperature from VRH to NNH conduction (see Supplementary Fig. 9). 
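For reference, the two conduction regimes mentioned for the most disordered (insulating) films correspond to distinct temperature dependences of the resistivity; the sketch below (Python) writes them down explicitly, assuming the 3D Mott form for the variable-range-hopping regime and using illustrative parameter values (only E a = 32 meV is taken from the text).

```python
# Sketch of the two hopping-conduction laws discussed for the insulating films.
import numpy as np

K_B = 8.617e-5  # Boltzmann constant in eV/K

def rho_vrh_3d(T, rho0, T0):
    """Mott variable-range hopping (3D form assumed): rho = rho0 * exp((T0/T)**(1/4))."""
    return rho0 * np.exp((T0 / T) ** 0.25)

def rho_nnh(T, rho0, Ea_eV):
    """Nearest-neighbour (activated) hopping: rho = rho0 * exp(Ea / (kB*T))."""
    return rho0 * np.exp(Ea_eV / (K_B * T))

T_low, T_high = 40.0, 150.0                   # below / above the ~70 K crossover
print(rho_vrh_3d(T_low, rho0=1.0, T0=5.0e4))  # T0 is a hypothetical Mott temperature
print(rho_nnh(T_high, rho0=1.0, Ea_eV=0.032)) # Ea = 32 meV, as quoted in the text
```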
It is known that the presence of quenched disorder strongly impacts the transport properties, inducing percolation and changing the nature of the phase transition 49 . In such a percolation picture, a coexistence of metallic and insulating clusters could persist into the metallic phase. Indeed, the data of the films under intermediate strain (on STO) show a magnitude of the resistivity in the metallic state that is in between those of the film on DSO and the film on LAO. It is worth pointing out that oxygen vacancies can also order in nickelates, as recently shown both in thin films 53 and in crystals 67 of metallic LaNiO 3−δ . The controlled tunability of oxygen vacancies with strain, and its direct relationship with the transport properties demonstrated here, could also be of importance in the context of the bond disproportionation and negative charge transfer models 35 , as well as the recent work proposing the metal state as a bipolaron liquid and the insulating phase as its ordered (bond-disproportionated) version 37 .
To summarise, this work reports a clear evolution of the apparent scaling exponent of the resistivity-temperature characteristics (n) with strain and disorder, supporting recent theoretical predictions that show the tunability of the scaling exponents arising from the interplay between electron interactions and disorder in nickelates 48 . The overall picture helps to clarify that the underlying physics behind the observed evolution of exponents from T-linear to quadratic scaling and beyond, does not necessarily imply a crossover between FL and NFL behaviour or other exotic physics. On the contrary, for the films reported here with bulk-like in-plane lattice parameters, the contribution to the transport properties from delocalised electrons, for the intermediate-temperature region above the metal-insulator transition, is fully consistent with a classical Fermi gas ruled by electron-phonon scattering.
Methods
Materials' synthesis. Epitaxial NdNiO 3 thin films were deposited on single-crystal LaAlO 3 (LAO), NdGaO 3 (NGO), SrTiO 3 (STO) and DyScO 3 (DSO) substrates by pulsed laser ablation of a single-phase target (Toshima Manufacturing Co., Ltd.). The quality of the target is of crucial importance to attain reproducibility of the film properties, as reported in ref. 68 . Before deposition, the LAO substrates were thermally annealed at 1050°C in a flow of O 2 and etched with DI water to obtain an atomically flat surface with single terminated terraces. The NGO and STO substrates were etched with buffered NH 4 F (10 M)-HF solution (BHF), and the DSO substrates were etched with NaOH. All the substrates displayed single terminated terraces after the treatment. The substrates were heated to a temperature of 700°C, prior to the deposition of the films, and were kept at that temperature during growth. Oxygen was present in the growth chamber during deposition with an oxygen pressure of 0.2 mbar, and the laser fluence on the target was 2 J/cm 2 . After deposition, the samples were cooled down to room temperature at 5°C/min with a oxygen pressure of 900 mbar. The growth was monitored using Reflection High Energy Electron Diffraction (RHEED). The films showed a constant deposition time of about 22 s per unit cell (s/uc) for NNO/LAO and 24 s/uc for NNO/STO. Films with various thicknesses were grown by precisely tuning the deposition time. The oxygen-deficient NNO films were grown on STO and LAO substrates followed by a vacuum-annealing process at 10 −7 mbar. The concentration of oxygen vacancies in these films is tuned by annealing the specimens in tube furnace with a oxygen-enriched environment (400 cc/min) and step-by-step increased temperature. The annealing time for each step is 1 h.
Structural characterisation. The thicknesses, crystal orientation and phase purity of the films, as well as the epitaxial relation between the film and substrates, were assessed using X-ray diffraction by means of 2θ-ω scans and reciprocal space maps (RSM), respectively, on a Panalytical Xpert MRD Pro diffractometer. Cross-sectional specimens of the films were prepared and studied by scanning transmission electron microscopy (STEM) on a probe-corrected FEI Titan 60-300 microscope equipped with a high-brightness field-emission gun (X-FEG) and a CEOS aberration corrector for the condenser system. This microscope was operated at 300 kV. High-angle annular dark-field (HAADF) STEM images were acquired with a convergence angle of 25 mrad and a probe size below 1 Å. The strain state of the films was determined by geometrical phase analysis (GPA) of these HAADF images.
Electrical property measurement. Electrical transport properties were measured between 5 K and 400 K by the van der Pauw method in a Quantum Design Physical Property Measurement System (PPMS), using a Keithley 237 current source and an Agilent 3458A multimeter.
Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
[Figure 6 caption, partial: b the unstrained film lattice parameter (a 0 ) and c the reversed residual-resistivity ratio (ρ 0 /ρ 300K ) of NNO films grown on LAO and STO substrates with different thickness; the error bar was determined from the deviation of repeated measurements.]
"Physics"
] |
Structural and Magnetic Properties of Yb 0.5 Ce 0.5 Ni 5
The rare-earth magnetism in the intermetallic compound Yb 0.5 Ce 0.5 Ni 5 was studied using X-ray diffraction, magnetization, heat capacity, and electrical resistivity measurements. The effect of spin fluctuations (SF) was observed in M(T) at ~40 K. The measurement of thermal and transport properties supported the results obtained from magnetic measurements. Collected experimental data showed that Yb/Ce substitution shifts the maximum temperature for spin fluctuations to a lower temperature compared to that for pure CeNi 5 . Moreover, at low temperatures, an anomaly in the heat capacity of possible magnetic origin arising from Yb 3+ was detected. Ce atoms seemed to remain in a non-magnetic valence state at almost 4+.
Introduction
The systematic investigation of rare-earth intermetallic compounds has brought new knowledge to the field of condensed matter [1][2][3][4]. A strong correlation between electrons, due to the hybridization of f -electrons and conduction electrons, can cause a number of outstanding low-temperature features. Among the rare-earth elements, a large variety of these phenomena have been found for Yb-and Ce-based intermetallics. One of the most fascinating issues in the study of these compounds is the quantum phase transitions that take place in heavy fermions. In this case, this kind of transition results from the competition between the Kondo effect, which acts to screen the Ce (or Yb) magnetic moments, and long-range RKKY interactions, which favor an ordered magnetic state [5][6][7][8]. Compared to heavy fermions, far fewer studies have been dedicated to the transition between enhanced paramagnetic behavior, on the verge of itinerant magnetism, and ferromagnetic order. This is the case for CeNi 5 and YbNi 5 compounds. Both compounds crystallize in a hexagonal CaCu 5 form, and therefore they are prone to form a continuous solid solution (Ce,Yb)Ni 5 .
CeNi 5 is a Stoner-enhanced paramagnet with a spin fluctuation contribution [9], whereas YbNi 5 orders ferromagnetically at 0.55 K, with magnetic properties dominated by Yb 3+ ions and a negligible contribution from the Ni atoms [10]. It is worth noting that the magnetic susceptibility of CeNi 5 does not follow the Curie-Weiss law, showing a broad maximum around 100 K. This maximum originates from spin fluctuations due to hybridization, which are characteristic of systems close to the onset of magnetism. In fact, in a detailed investigation of polarized-neutron scattering, it was found that magnetization was localized exclusively on the Ni atoms, whereas Ce was found to be non-magnetic, almost in the 4+ valence state [11,12].
The effect of alloying on the ground state of CeNi 5 has been investigated in both Ce and Ni sites. Regarding the Ce site, the substitution of Ce for Pr or Nd suppresses the contribution of spin fluctuation to the electrical resistivity [13].
In this work, we focus on the Yb/Ce substitution in the Ce site, which drives the competition between spin fluctuations and magnetically ordered states. In particular, this paper presents the results of an experimental investigation on the structural and physical properties of the Yb 0.5 Ce 0.5 Ni 5 compound, which is located midway between a compound on the verge of itinerant magnetism (CeNi 5 ) and a compound where magnetism is dominated by 4f electrons (YbNi 5 ).
Experimental Details
The polycrystalline sample was prepared using the induction melting technique. Stoichiometric amounts of the elements with the purities Yb, 99.99 wt.%; Ce, 99.99 wt.%; and Ni, 99.999 wt.% were enclosed in small tantalum crucibles and sealed by arc welding under pure argon. The samples were melted in an induction furnace (homemade) under a stream of pure argon. To ensure homogeneity during the melting process, the crucible was continuously shaken. After that, the sample was annealed at 700 °C for ten days in a quartz ampule sealed in a vacuum and quenched to room temperature in cold water.
The crystal structure was studied using a Bruker D8 Advance X-ray diffractometer (Bruker Corporation, Billerica, MA, USA), located at the Department of Earth Sciences and Condensed Matter Physics at the University of Cantabria, equipped with a Lynxeye multidetector (Bruker Corporation, Billerica, MA, USA) which uses a solid-state array; the data were recorded between 20° and 100° with a 2θ increment of 0.02° at high resolution with the wavelength of 0.15418 nm, corresponding to Cu Kα radiation. The surface analysis was performed with the scanning electron microscope EVO MA 15 (Carl Zeiss, Oberkochen, Germany). The electron acceleration voltage reached 30 kV, and the magnification varied between ×7 and ×106. The system was equipped with an X-ray energy dispersive spectroscopy (EDX) system.
Magnetic measurements were performed using a Magnetic Property Measurement System (MPMS) commercial device (Quantum Design, San Diego, CA, USA), SQUID, in the temperature range 2-300 K with an applied magnetic field of up to 5 T. Heat capacity, electrical resistivity, and magnetoresistivity were measured with DynaCool (Quantum Design, San Diego, CA, USA)) and the Helium-3 refrigerator PPMS (Quantum Design, San Diego, CA, USA) in the temperature range 400 mK-300 K with an applied magnetic field of up to 9 T. Figure 1a shows the experimental X-ray powder diffraction pattern and the Rietveld refinement performed for Yb 0.5 Ce 0.5 Ni 5 compound with the FULLPROF suite package (version September 2020, open source software) [14] under the WinPlotr shell (version April 2019, open source software) [15]. The Bragg diffraction reflections were correctly identified and indexed based on the hexagonal CaCu 5 crystal structure (space group P6/mmm), while the lattice parameters obtained were a = b = 0.4869 nm and c = 0.3985 nm. These parameters were located between those of CeNi 5 and YbNi 5 . Only a couple of very weak extra peaks in the low theta range were observed. The reliability factors obtained from the Rietveld refinement were R f = 13.05%, R B = 15.13%, and Chi 2 = 2.48. In Table 1, the atomic coordinates are displayed.
The morphological analysis of the studied sample is presented in Figure 1b. From more than 10 EDX spectra (not shown) collected from the sample surface, it was determined that, within the experimental error (0.1 at. %), the small discrepancies between the synthesis compositions and the measured compositions may be due to the presence of traces of some spurious phases, as evidenced by the very weak extra peaks present in the XRD at low angles.
The temperature dependence of the magnetization in the applied magnetic fields (B = 0.1 T and B = 0.01 T) is displayed in Figure 2a. The broad maximum connected with spin fluctuations is visible at around T = 40 K. At lower temperatures, an upturn of magnetization occurs, which, in the case of CeNi 5 , was interpreted in different ways. Some authors believe that it is associated with the intrinsic properties of the material, which is on the verge of ferromagnetism [16,17]. On the other hand, it is known that even small concentrations (in the order of tens of ppm) of magnetic impurities can cause such an upturn. Earlier investigations of CeNi 5 material do not show low-temperature upturns in magnetic susceptibility measurements [18][19][20][21], indicating that this effect does not arise from intrinsic properties but from ferromagnetic impurities. Instead, the broad maximum at 100 K in the magnetic susceptibility for pure CeNi 5 is an intrinsic property mediated by spin fluctuations. The shifting of this maximum to lower temperatures observed in the magnetization of Yb 0.5 Ce 0.5 Ni 5 may be related to the competition between magnetic order and spin fluctuations present in this sample, which was prepared midway between CeNi 5 and ferromagnetic YbNi 5 [10]. By applying increasing magnetic fields, the temperature where the maximum occurs was not shifted, but its intensity decreased. From the Curie-Weiss law and its higher-temperature fit, a paramagnetic Curie temperature θ P = −32.33 K and an effective paramagnetic moment of µ eff = 4.07 µ B /f.u. were obtained (Figure 2a, inset at the top). The negative value of the paramagnetic Curie temperature indicates a dominating antiferromagnetic exchange interaction in the high-temperature range. The value of the effective paramagnetic moment (4.07 µ B /f.u.) is slightly smaller than the free Yb 3+ ion value (4.54 µ B /f.u.). The refined lattice parameters of Yb 0.5 Ce 0.5 Ni 5 lie within the range of parameters of the two binary compounds. This fact seems to indicate a scenario where the Ce and Yb valences are unchanged in the solid solution compared to the binary compounds, i.e., almost 4+ for Ce and 3+ for Yb ions. In fact, with Ce 3+ being larger than Ce 4+ , a change in Ce valence would increase the lattice parameters of Yb 0.5 Ce 0.5 Ni 5 . To verify this, one should investigate this compound using spectroscopic techniques such as XPS. On the other hand, we observed a possible magnetic order at low temperatures (see the section on heat capacity), which should correspond to Yb in the magnetic 3+ state. In fact, it is well known that if the valence of Yb decreased to, e.g., 2.9, this would be sufficient for the magnetic order to vanish. Finally, the slightly lower value of the effective moment obtained with respect to the free Yb 3+ ion value may be due to the contribution of the "almost" 4+ valence state of Ce.
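The θ P and µ eff values quoted above follow from a Curie-Weiss fit of the high-temperature susceptibility; a minimal sketch of such a fit (Python/SciPy) is shown below, using synthetic χ(T) data generated with parameters close to the reported ones (the cgs molar-susceptibility units and the µ eff = √(8C) convention are assumptions of the sketch, not statements about the authors' analysis).

```python
# Sketch: Curie-Weiss fit, chi = C / (T - theta_P), with mu_eff = sqrt(8*C) mu_B per f.u.
# (cgs molar-susceptibility convention). Synthetic data, not the measured curve.
import numpy as np
from scipy.optimize import curve_fit

def curie_weiss(T, C, theta_p):
    return C / (T - theta_p)

rng = np.random.default_rng(1)
T = np.linspace(150.0, 300.0, 60)                       # fit only the high-T range
chi = curie_weiss(T, C=2.07, theta_p=-32.0) * (1.0 + rng.normal(0.0, 0.01, T.size))

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(1.0, 0.0))
mu_eff = np.sqrt(8.0 * C_fit)                           # Bohr magnetons per formula unit
print(f"theta_P = {theta_fit:.1f} K, mu_eff = {mu_eff:.2f} mu_B/f.u.")
```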
The isothermal magnetization is plotted in Figure 2b as a function of the applied magnetic field up to 5 T at different temperatures. The tendency towards saturation is evident, and the ordering temperature is expected to be below 2 K. The Arrott plot for Yb 0.5 Ce 0.5 Ni 5 is presented as the inset in Figure 2b. The nature of the magnetic transition can be obtained by analyzing the Arrott isotherms giving M 2 as a function of B/M. The "S" shape of the Arrott plot is typical for temperatures above the critical temperature. According to the Banerjee criterion [22], this method allows us to determine the nature of the magnetic transition depending on the slope of the M 2 vs. (B/M) plots at high magnetic fields. Indeed, a positive slope indicates a second-order magnetic transition. The Banerjee criterion shows that these curves present a positive slope at high magnetic fields, implying that this compound exhibits a second-order phase transition.
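Since the Arrott construction described above is just a re-plotting of the magnetization isotherms, a brief sketch (Python, with a placeholder M(B) isotherm rather than the measured data) of how the M 2 vs B/M curves and the sign of their high-field slope (Banerjee criterion) are obtained may be useful:

```python
# Sketch: Arrott plot (M^2 vs B/M) from one magnetization isotherm; Banerjee criterion.
# The isotherm below is a placeholder, not the measured data.
import numpy as np

B = np.linspace(0.5, 5.0, 10)          # applied field (T)
M = 2.0 * B / (1.0 + B)                # hypothetical M(B) isotherm (mu_B / f.u.)

m2 = M**2
b_over_m = B / M

# Positive high-field slope of M^2 vs B/M indicates a second-order transition.
slope = np.polyfit(b_over_m[-4:], m2[-4:], 1)[0]
print("second order" if slope > 0 else "first order", f"(high-field slope = {slope:.3f})")
```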
The heat capacity measurement of Yb 0.5 Ce 0.5 Ni 5 up to 300 K (not plotted) shows a typical metallic behavior and, in the high-temperature range, it follows the Dulong-Petit law 3nR ~ 150 J/mol·K. From the C(T)/T vs. T 2 dependence (shown in Figure 3), at 6 T the magnetic order is suppressed, and it is possible to estimate the electronic Sommerfeld γ coefficient as γ 6T ~ 200 mJ/mol·K 2 . In Figure 4, the low-temperature dependence of the heat capacity is shown for various values of applied magnetic fields up to 6 T. We observe at around 0.8 K a sharp anomaly in the zero magnetic field. By increasing the magnetic fields, this anomaly evolves, with its intensity increasing, shifting to higher temperatures, and becoming broader, similar to the trends observed in other Yb systems [23]. The sharp anomaly at 0.8 K may be associated with a magnetic order, but this should be confirmed by magnetic measurements below 1 K. Figure 5 shows the electrical resistivity between 0.5 K and 300 K for different magnetic fields. The measurements detect a typical metal behavior.
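The Sommerfeld coefficient quoted above is read off from the low-temperature C/T vs T 2 plot; a minimal sketch of the standard C/T = γ + βT 2 analysis (Python/NumPy, with synthetic heat-capacity points rather than the measured 6 T data) is:

```python
# Sketch: Sommerfeld coefficient gamma from a linear fit of C/T versus T^2.
# Synthetic low-temperature data in mJ/mol units; gamma_true is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
T = np.linspace(2.0, 10.0, 25)                       # K
gamma_true, beta_true = 200.0, 1.2                   # mJ/(mol K^2), mJ/(mol K^4)
C = (gamma_true * T + beta_true * T**3) * (1.0 + rng.normal(0.0, 0.01, T.size))

beta_fit, gamma_fit = np.polyfit(T**2, C / T, 1)     # slope = beta, intercept = gamma
print(f"gamma ~ {gamma_fit:.0f} mJ/(mol K^2), beta ~ {beta_fit:.2f} mJ/(mol K^4)")
```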
Conclusions
Based on our knowledge of the compounds YbNi5 and CeNi5, we prepared a new polycrystalline sample Yb0.5Ce0.5Ni5 of the hexagonal CaCu5 type. Microstructure analysis, as well as X-ray diffraction, confirmed the good quality of the prepared sample with the desired stoichiometry. At high temperatures, Yb atoms exhibit a localized 4f electron nature with
"Physics",
"Materials Science"
] |
Heavy Flavor Enhancement as a Signal of Color Deconfinement
We argue that the color deconfinement in heavy ion collisions may lead to enhanced production of hadrons with open heavy flavor (charm or bottom). We estimate the upper bound of this enhancement.
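The quantities defined in the next paragraph belong to the standard binary-collision extrapolation of heavy-flavor (HF) yields in nucleus-nucleus (A+B) collisions from nucleon-nucleon (N+N) data; presumably, the formula referred to below as Eq. (1) has the usual form (a hedged reconstruction, since the display equation itself is not reproduced here):

```latex
N^{AB}_{HF}(b) \;=\; N^{AB}_{\mathrm{coll}}(b)\,
  \frac{\sigma_{NN \rightarrow HF+X}}{\sigma^{\mathrm{inel}}_{NN}}
\qquad (1)
```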
where N AB coll (b) is the average number of primary nucleon collisions, which is determined by the geometry of the colliding nuclei, σ N N →HF +X is the total cross section of the HF hadron pair production in N + N collisions and σ inel N N is the total inelastic cross section of N + N interaction. (Note that in high energy collisions the HF production is dominated by the creation of hadrons with open HF. The HF quarkonia correspond to a tiny fraction of the total HF yield and can be safely neglected in our consideration.) There are however some indirect indications that an essential deviation from the standard formula (1) may exist. Recent analysis of the dimuon spectrum measured in central Pb+Pb collisions at 158 A GeV by NA50 Collaboration [1] reveals a significant enhancement of the dilepton production in the intermediate mass region (1.5÷2.5 GeV) over the standard sources. The primary 2 interpretation attributes this observation to the enhanced production of open charm [1]: about 3 times above the direct extrapolation (1) from N+N data. Similar result has been recently obtained in the framework of the statistical coalescence model [3][4][5]. This model connects the multiplicities of hadrons with open and hidden charm. It was found [4,5] that an enhancement of the open charm by the factor of about 2 ÷ 4 over the direct extrapolation is needed to explain the data on the J/ψ multiplicity. It was suggested in Ref. [5] that this enhancement may appear due to the broadening of the phase space available for the open charm because of the presence of strongly interacting medium.
In the present letter we demonstrate that a deconfined medium (quark-gluon plasma (QGP) or its precursor) can make an essential influence on the hadronization of HF (anti-)quarks. This leads to an enhancement of the HF hadron production in A+B collisions in comparison to the direct extrapolation (1) from the N+N data. We restrict ourselves to a rough estimation of the upper bound of possible HF enhancement due to the color deconfined medium.
The process of production of a HF hadron pair can be subdivided into two stages: the hard production of a HF quark-antiquark pair (QQ) and its subsequent hadronization into observed particles. Therefore, there is an essential difference between HF hadron production and, e.g., hard dilepton production (the Drell-Yan process): created Q and Q can and even have to interact with the surrounding quarks and gluons to be transformed into observed HF hadrons.
To get an intuitive picture of possible medium effects let us start from the open HF production in e + e − annihilation. The HF QQ pair created at the first stage hadronizes into observed particles. The hadronization has a nonperturbative nature. Its dynamics can be qualitatively understood in the framework of the string picture. When the distance between Q and Q reaches the range of the confinement forces, a string connecting these colored objects is formed. If the e + e − center-of-mass (c.m.) energy √ s (equal to the invariant mass of the QQ pair, M QQ ) lies well above the corresponding HF meson threshold 2m M (equal to 2m D or 2m B for cc and bb quarks, respectively), Q and Q break the string into two (or more) pieces, so that the final state contains a HF hadron pair (and possibly a number of light hadrons). However, when the e + e − c.m. energy exceeds the heavy quark threshold ( √ s > 2m Q ) but lies below the corresponding HF meson threshold 2m M ( √ s < 2m M ), the string cannot be broken and the open HF hadron pair cannot be formed.
Let us imagine now the e + e − annihilation inside a deconfined medium. Due to the Debye screening, no string is formed between colored objects in this case. If the heavy Q and Q are created, they can fly apart within the medium as if they were free particles. It does not matter whether their initial invariant mass M QQ exceeds the corresponding hadron threshold or not. The created QQ pair will be able to form a HF hadron pair at the stage of QGP hadronization. This means that the e + e − annihilation inside the QGP would produce HF hadrons even if the collision energy is not sufficient for producing these hadrons in the vacuum.
In N+N or A+B collisions the HF QQ pairs are produced by hard parton interactions. Calculations in the leading order of perturbative quantum chromodynamics (pQCD) show that a large fraction of QQ pairs are created with invariant masses M_QQ below the corresponding meson threshold 2m_M, even at the largest RHIC energy. If this QQ pair creation takes place in the deconfined medium expected to be formed in high energy A+B collisions, the presence of such a medium makes the hadronization of these pairs possible. This should lead to an enhancement of HF hadron production in A+B collisions in comparison with the standard result (1) obtained by direct extrapolation of the N+N data.
There are, of course, essential differences between open HF hadron production in e⁺e⁻ annihilation and in N+N or A+B collisions. Even in N+N collisions, where no deconfined medium is expected, the created QQ pair can interact with the spectator partons and therefore has a chance to form a HF hadron pair even if its primary invariant mass was insufficient for this process. Moreover, in contrast to e⁺e⁻ annihilation, most QQ pairs are created in the color octet state and therefore have to interact with the spectators to form a color neutral final state. Instead of breaking the string, the Q and Q can form hadron states by means of coalescence with light spectator (anti-)quarks.
As no theoretical description of this complicated process exists, we restrict ourselves to a rough estimate of the upper bound of the possible HF hadron enhancement due to the color deconfined medium. We assume that:
• In the case of N+N collisions, no subthreshold QQ pairs contribute to the HF hadron production.
• In the case of A+B collisions, where a deconfined medium is formed, all QQ pairs hadronize into particles with open HF.
The first assumption looks reasonable at low collision energies, whereas to justify the second one high energies are evidently preferable. This means that by assuming the validity of both statements we overestimate the expected HF enhancement effect; the above assumptions therefore give its upper bound.
We now make the numerical estimates that follow from the above assumptions. The total cross section of heavy QQ pair production by colliding nucleons is given by the standard factorization formula (see, e.g., Ref. [6]), where s is the squared c.m. energy of the colliding nucleons, x_1 (x_2) is the fraction of the momentum of the first (second) nucleon carried by parton 1 (2), f_1 and f_2 are the fractional-momentum distribution functions (structure functions), μ_F is the factorization scale, and σ_{12→QQ}(ŝ) is the cross section of heavy quark-antiquark pair production by the interacting partons at squared center-of-mass energy ŝ. For ultrarelativistic nucleons, ŝ is given by ŝ = x_1 x_2 s. The sum on the right-hand side of Eq. (2) runs over all pairs of parton types that give a nonzero contribution to the production cross section. We restrict ourselves to the leading order of pQCD. In this case, two basic processes of heavy flavor creation have to be taken into account: gluon fusion gg → QQ and light quark-antiquark annihilation qq → QQ. The sum in Eq. (2) therefore includes (1, 2) = (g, g), (q, q̄), (q̄, q), where q runs over the light flavors q = u, d, s. The corresponding parton cross sections are given by the formulas of Ref. [6], with χ = 1 − 4m_Q²/ŝ, where μ_R is the renormalization scale and m_Q is the mass of the heavy quark. The masses of the light quarks are neglected.
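The factorization formula itself (Eq. (2) of the original) did not survive the text extraction. In the standard leading-order form implied by the description above, it presumably reads:

```latex
\sigma_{NN \to Q\bar{Q}+X}(s)
  = \sum_{1,2}\int_0^1 \! dx_1 \int_0^1 \! dx_2 \;
    f_1(x_1,\mu_F)\, f_2(x_2,\mu_F)\,
    \sigma_{12 \to Q\bar{Q}}(\hat{s};\mu_R),
  \qquad \hat{s} = x_1 x_2\, s .
```

This is a sketch of the generic LO factorization ansatz described in the text, not a verbatim reproduction of the paper's Eq. (2).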
Eq. (2) can be rewritten in a form where the differential cross section with respect to the squared invariant mass ŝ = M²_QQ of the QQ pair appears explicitly. The probability distributions of QQ pairs with respect to ŝ are shown in Fig. 1 and Fig. 2 for charm and bottom, respectively. The computations were done using the CERN library of parton distribution functions, PDFLIB [7]; the default set of structure functions, MRS (G) [8], was chosen. The HF quark masses are fixed at m_c = 1.25 GeV for charm and m_b = 4.2 GeV for bottom, and the c.m. energy of the colliding parton pair was used as both the renormalization and the factorization scale: μ_F = μ_R = √ŝ.

We now estimate the upper bound of the HF enhancement in A+B collisions. We assume that in N+N collisions the HF QQ pairs cannot hadronize unless their c.m. energy exceeds the corresponding HF hadron threshold. Therefore, to calculate the total HF hadron production cross section we cut the integral in Eq. (5) from below at the corresponding meson threshold, where m_M is the mass of the lightest meson containing the corresponding HF quark (the D meson for charm and the B meson for bottom) and m_N is the nucleon mass.
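The explicit expressions (Eqs. (5) and (6) of the original) are likewise missing from the extracted text. Based on the description, they presumably have the following form; in particular, the upper limit of the cut integral, which involves the nucleon mass m_N, is inferred from the text and should be treated as an assumption:

```latex
\sigma_{NN \to Q\bar{Q}+X}(s)
  = \int_{4m_Q^2}^{s} d\hat{s}\;
    \frac{d\sigma_{NN \to Q\bar{Q}+X}}{d\hat{s}},
\qquad
\sigma_{NN \to HF+X}(s)
  \simeq \int_{(2m_M)^2}^{(\sqrt{s}-2m_N)^2} d\hat{s}\;
    \frac{d\sigma_{NN \to Q\bar{Q}+X}}{d\hat{s}} .
```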
In contrast, when two nucleons interact inside the deconfined medium (as in a high energy A+B collision), our assumption states that all QQ pairs survive and form HF hadrons at the stage of QGP hadronization. Therefore, the cross section σ_{NN→HF+X} in formula (1) should be replaced by the cross section σ_{NN→QQ+X}. Hence, for the upper bound of the enhancement factor we use the ratio of these two cross sections (a reconstructed form is sketched after this paragraph). The behavior of E_max(s) for charm and bottom is shown in Fig. 3. It is seen that the largest effect is expected at low energies. An experimental study of the effect should therefore be done at the lowest energy at which the deconfined medium is expected to be formed and at which the inclusive cross section of HF production is still large enough to make its measurement feasible.
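The formula for the enhancement factor did not survive extraction; from the preceding sentences it is presumably the simple ratio of the full QQ production cross section to the threshold-cut cross section defined above:

```latex
E_{\max}(s) = \frac{\sigma_{NN \to Q\bar{Q}+X}(s)}{\sigma_{NN \to HF+X}(s)} .
```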
The upper bound of the open charm enhancement at SPS energy is a factor of about 5 ÷ 6. This means that the enhanced production of open charm hadrons by a factor of 2 ÷ 4 found in Ref. [1] and Refs. [4,5] can be explained by the influence of the deconfined medium.
We conclude that the deconfined medium, which is expected to be formed in nucleus-nucleus collisions, can influence the process of hadronization of heavy quarks; this leads to enhanced production of hadrons with open heavy flavors (charm and bottom). The rough estimate of the upper bound of the effect at SPS energies is found to be large enough to explain the indirect experimental data [1] and the phenomenological evaluations [4,5]. We consider the enhancement of the heavy flavor yield as a possible signal of color deconfinement. | 2,646 | 2001-03-06T00:00:00.000 | [
"Physics"
] |
The nexus between green financial development and renewable energy investment in the wake of the Covid-19 pandemic
Abstract Environmental protection has become a significant issue around the globe. The extensive use of renewable energy and green finance is considered a solution to this issue, especially during the Covid-19 lockdown. To answer this demand, the present study examines the impact of green financial development, namely green credit, green investment, and green securities, along with corporate social responsibility (CSR) reporting, on renewable energy investment based on evidence from an emerging economy. Economic growth was used as the control variable of the study. The data were gathered from the central bank and the World Development Indicators (WDI) for 1976 to 2020, and the error correction model (ECM) was used to test the nexus among the variables. The findings revealed that green credit, green investment, and green securities, along with CSR reporting and economic growth, have a significant positive nexus with renewable energy investment in the selected emerging economy. These outcomes are helpful for future researchers in this area as well as for regulators who want to formulate policies related to green finance and renewable energy usage and investment in the context of emerging and developing countries.
The Islamic Republic of Pakistan lies in the north-west zone of South Asia and extends toward Central Asia. It borders India to the east, Iran and Afghanistan to the west, and China to the north, and has a southern coastline on the Arabian Sea. It is the sixth most populated nation in the world, with an approximate population of 195.4 million. The country's economy has displayed signs of change in recent years. Gross Domestic Product (GDP) rose by 5.3% in 2017, the fastest growth since 2007. This rise is attributable to numerous factors such as increased foreign-exchange reserves, decreased budget deficits, and improved security. Two other significant factors have contributed to economic stability: a dramatic decrease in world oil prices and increased foreign remittances. Pakistan's key energy sources include natural gas, oil, hydropower, coal, and nuclear energy. Natural gas and oil accounted for 43% and 36%, respectively, of total primary energy supply (TPES) in 2015. The balance of recoverable natural gas reserves decreased from 31 trillion cubic feet in 2009 to 23.64 TSCF in June 2014; still, the share of natural gas in the energy mix is forecast to grow (Raza et al., 2019). Crude oil, for example, accounted for 28.27% of TPES in 2006, 32.04% in 2011, and 34.42% in 2015. Over 2006-2015, total annual oil use increased by 4.5%. A decline in oil prices in 2014 contributed to a fall in the import bill: crude oil accounted for USD 14.77 billion of the overall import bill in 2014, USD 12.167 billion in 2015, and USD 7.668 billion in 2016 (Huang et al., 2020).
The remaining sources include hydropower, LPG, gas, nuclear, biomass, and energy imports. Pakistan has expanded its nuclear power share, with an additional 2,880 megawatts (MW) of capacity under development. Renewables (excluding hydroelectricity) accounted for just 0.3% of TPES in 2015 (Hydrocarbon Development Institute of Pakistan, 2016). Most of the country's rural population depends on the traditional use of biomass, but the government does not officially quantify or publish these figures each year. The IEA reports that in 2014 there were 105 million Pakistanis who depended on traditional biomass. According to IRENA's final figures for 2015 on green energy use, traditional biomass usage was 8.2 Mtoe. LPG and imported power come mainly from Iran and represent a limited share of the overall energy supply. In 2015, final energy consumption (including energy production use) amounted to 41.98 Mtoe. Thirty-six percent of final energy use was in the manufacturing sector, followed by transport (32%), domestic (also called household or residential) use (24%), commercial (4%), and agricultural (2%) sectors. In 2015, only agriculture had a negative compound annual growth rate. In the past decade, capacity additions have not kept pace with population growth or with rising industrial and commercial energy demand. The country's electrification rate has risen from 54% in 2006 to 73% in 2016, and a demand-supply disparity on the grid has been emerging since the second half of 2005. The gap crossed 55 MW in 2006 (NEPRA, 2008), increased further to 4,574 MW in the 2008 financial year, and reached a historic peak of 6,758 MW in 2012. The Planning Commission forecasts that the power market will grow by 4-5% annually over the next five years (Nguyen et al., 2021). CO2 emissions in Pakistan from 2005 to 2016 are given in Table 1, with mixed increasing and decreasing trends reported.
CO2 emissions in Pakistan from 2005 to 2016 are shown in Figure 1. There was an increasing trend from 2005 to 2007, a mix of increasing and decreasing trends from 2008 to 2011, and an increasing trend in CO2 emissions from 2012 onwards.
Covid-19 is a contagious disease that has posed a serious threat to the quality of natural resources, the environment, and the health of the general public since its outbreak. It is caused by a respiratory syndrome coronavirus that spreads through the air and through contact with infected people. The disease has spread fast across the world, especially in regions that are exposed to a number of polluting factors such as carbon emissions into the atmosphere. Thus, regions that are industrialized and whose economies are mostly based on manufacturing or transportation activities are more likely to be exposed to the spread of Covid-19, leading to bans on the movement of people and a decline in economic growth (Mukherjee et al., 2020). In this difficult situation, green finance for environmental or eco-friendly programs is a powerful tool to overcome the spread of Covid-19 and related problems, as it encourages renewable energy, which is a strong response to carbon emissions into the atmosphere. This is a serious issue that requires long investigation and discussion (Chehal et al., 2020), and our study is an effort in this regard. The main objective of the current study is to examine the impacts of green credit, green investment, and green securities, along with CSR reporting and economic growth, on renewable energy investment. The study contributes to the literature in three ways. 1) The role of green finance in encouraging renewable energy production and consumption within the economy, for the sake of controlling environmental pollution, has long been analyzed, but only under normal conditions; as the current study analyzes green finance in relation to renewable energy during the Covid-19 situation, it is a valuable addition to the literature. 2) The mutual relationship between green finance, including the impacts of green credit, green investment, and green securities, and renewable energy has been the subject of many studies. However, those studies have either addressed green finance without its dimensions when determining renewable energy investment or examined the impacts of green credit, green investment, and green securities on renewable energy investment individually; the present study, which examines the impact of green credit, green investment, and green securities on renewable energy investment at the same time, contributes to the literature. 3) In the economy of Pakistan, few studies have examined the impact of green credit, green investment, and green securities on renewable energy investment; this study is an initial attempt to analyze these impacts.
The paper is composed of five sections. The second section describes the relationships among green credit, green investment, green securities, and renewable energy investment with references from past literature. The third section describes the data and methods used in the study. The results are then presented and compared with other studies. Finally, the paper ends with conclusions and implications.
Literature review
Investors are people who directly and indirectly support the economy of a country. Better investment plans and options are always beneficial for a sustainable economy. Financially stable economies of the world have clear strategies for implementation. Global warming, greenhouse effects, ozone depletion, population over-growth, and the pandemic have severely damaged economies across the whole world (Anagnostopoulos et al., 2020; Li, Chien, Hsu, et al., 2021). In the modern age, all business firms and factories must invest their funds in developing strategies to cope with such environmental crises. Governments worldwide have added a new chapter to their by-laws that supports green economy development. The green economy is based on renewable energy resources and the use of biodegradable materials such as paper in place of plastic (Sardianou & Kostakis, 2020). Novel strategies that can support the use of biomass as fertilizer and fuel for firms must be implemented. The smoke generated from these biodegradable materials is not hazardous to nature, and the waste materials or effluents are also eco-friendly; they can be recycled economically using simple machines.
Since contagious diseases like the Covid-19 began to spread in the countries across the world, all economic, social, private, and government activities have been disturbed. Thus, all economic and social organizations and private and government entities have paid attention to serious matters and try to overcome issues that may cause an increase in the cases of Covid-19 (Ali et al., 2020). Changes have been made in policies, strategies, and the rules of any economic or social sector so that the capacity of all social and economic entities to maintain the environmental sustainability can be improved. Just like other sectors of the economy, the financial sector has also been active in implementing strategies to overcome pollution and thus enable all social and economic entities to fight against Covid-19. Green financing is an initiative by financial institutions to overcome environmental pollution by encouraging renewable energy consumption during Covid-19 (Verma et al., 2021). Many studies have been conducted to analyze the impacts of green financing on renewable energy consumption during Covid-19, some of which are cited below.
The investment in renewable energy resources is not only eco-friendly, but also economically sound (Han, 2020;Nawaz, Hussain, et al., 2021). The Covid-19 pandemic has not ended, yet it has caused complexities in all situations. The use of masks made up of biodegradable materials is essential for waste management practices. These masks and other personal protective equipment are economically sound and cost-effective for manufacturing purposes (Al Asbahi et al., 2019;Shair et al., 2021). Thus, investment in renewable energy resources helps the well-being of the environment in enduring circumstances during the pandemic (Hager & Hamagami, 2020;Nawaz, Seshadri, et al., 2021). Green credit is the investment into a specific interest rate on eco-friendly business ventures. Developed countries have a well-developed and organized infrastructure to support the green economy (Mengyao, 2020). Business firms that lend loans to start new business projects based on eco-friendly approaches are well-established throughout the world. Business companies are investing more and more funds in green economic projects (Baloch et al., 2020). Developing countries like Pakistan have initiated the investment into new business ventures, but the efforts are not emerging at large scales . The Covid-19 pandemic has created great havoc for economic well-being and financial stability. The healthcare cost increment has devastated all economic sectors. Business firms are nowadays struggling to cope with the crisis. The need of the hour is to devise new and innovative ways to support green credit investment initiatives Sun et al., 2020).
Green credit initiatives are important for making the economy of Pakistan stable and well-established. Eco-friendly biofuels and the use of recyclable materials are essential for the growth of the economy (Chien, Ajaz, et al., 2021; Mohsin et al., 2021). Green credit investment options are beneficial for Pakistan's prosperity. Green securities are safe investment options that support green economic growth and development. The use of environmentally friendly materials by business firms for manufacturing purposes is imperative for the community's well-being (Al-Mutairi et al., 2020). Developed countries like China have implemented safe financing approaches for the well-being of their economies, including companies with proper strategies to support health insurance and other health-related provisions. Environmental sustainability options are essential to support prosperity and economic growth (Pisedtasalasai & Edirisuriya, 2020).
The coronavirus lockdown has disturbed the budgets of people globally and has hit developing regions such as Asia and Africa particularly hard. In these countries, the workforce largely comprises the poor and daily wagers (Zhuang et al., 2021). They cannot perform their routine activities during the lockdown, and as a result of these implications, all will have to cope with this havoc through wise planning and cooperation (Wahyuningrum et al., 2020). Developed countries such as the USA and the UK have started new projects to help native and small-scale firms scale up their production and manufacturing practices, and these initiatives will improve business conditions in developing countries (Ermakova, 2020). Healthcare costs due to corona-related issues have also increased. The need of the hour is for all business firms and industrial units to devise new strategies and plans to support their staff; such initiatives will support the firms themselves and the confidence of staff in their employers (An & Pivo, 2020).
Green investment initiatives are credit options that have future implications for the development and prosperity of the economy of a country. In developing countries like Pakistan, insurance-based options are scarce, but efforts are in progress for the economic sector's well-being . Business communities throughout the world have focused on the development of green insurance-based options. Companies are providing green investment platforms that support overall green credit loans. Companies around the world have a transparent set of social and economic development initiatives that supplement the overall green economy and investments. These initiatives support eco-friendly approaches and show the extent of social responsibility in industries. New and innovative production units generate no harmful effluents and waste materials. These plants are not only cost-effective but also supportive of the growth and development of economic growth initiatives. Biomass and agricultural wastes are abundant in developing countries like Pakistan. Pakistan can use all these wastes for the generation of bio-friendly fuels. These fuels can easily produce new and innovative products that can support the infrastructure of the country. Industrial units that have adopted eco-friendly ways of production have more production rates than traditional industrial units. The need is to enhance the number of such eco-friendly and economically sound industrial units (Dwivedi et al., 2020).
Green investment is the main theme of the green economic development program. These investments are made in renewable and eco-friendly raw materials, producing more efficient and cost-effective products (Rosefielde, 2019). Developed countries like the USA and UK are already investing in green fuels and production units (Rajiani & Ismail, 2019), which have huge benefits for environmental protection and the ethical well-being of the world's economies. In Covid-19 crisis-related situations, all companies that have started green investments could manufacture more and more products. Their businesses have flourished ten-fold compared to other manufacturing units. Developing countries like Pakistan have not modernized much in green investment areas, so most production units are dependent on imported raw materials and products (Tran et al., 2020).
In the Covid-19 crisis, as the world transportation and delivery options were stopped and banned, many businesses were not able to operate as usual Li, Chien, Ngo, et al., 2021), and they have fired most of their poor workers due to this situation (Siala & Jarboui, 2019). On the other hand, production units in developed countries have proper materials and eco-friendly manufacturing units that can produce eco-friendly products in a very minute period. Thus, green investment approaches have a very promising future. All these initiatives are cost-effective and essential for the development of countries' economies. In developed countries, green investment initiatives have provided a complete cover to all economic development efforts . Corporate social responsibility is the proper, financially stable approach for the well-being of the economy. In developed countries, all companies and business firms are responsible for showing cooperative social responsibility and planning initiatives for their economic infrastructure's well-being. Corporate social responsibility has improvised the living and health standard of all serving employees of a specific business or company. In developed countries, all employers strictly obey the government's restrictions and implications, which are devised by the government for the support of the employee's health and well-being.
In developing countries like Pakistan and other Asian countries, employers do not follow the government's proper rules and regulations. In the recent era, Covid-19 has devastated the whole world and economy of all developing countries. The hour's need is to support and respond ethically to all environmental protection initiatives implemented by the government. The social responsibility of all business firms is for the support of eco-friendly initiatives. Green economy measures have a great tendency to support the economic growth of the country. Bio-friendly approaches are important for human beings. The trickle-down effect of the green economy supportive measures also support animals and plants' well-being. The green economy ensures well-regulated economic growth that supports societies' environmental and social well-being. In the modernized world, there are many parameters to gauge economic growth, and most important of these measures is the implementation of an information technology system in the economic and health sector (Meyer & Meyer, 2020).
A proper implementation of a CIS system in healthcare practices can make the economic sector better. An improved communication system is the most helpful tool in modernized medical practices. All medical staff in the corona pandemic must have up-to-date knowledge for data collection, data management, and storage of the patient's entire medical history. The Patient Care Information System (PCIS) is a digitalized platform to collect, store, and cross-verify the entire medical history of a patient. The working professional or paramedical staff in ER and OPD must have complete knowledge of all the SOPs related Covid ICUs. All paramedics must have fundamental knowledge of effective communication and data sorting. Professional software-based knowledge is essential for the well-being of the economy as well. The knowledge of data management and nursing information systems coordinated with the budging and finance department is useful. The medical staff's responsibility in this pandemic is to maintain complete documentation of drug usage, dosage information, schedule management, and maintenance of the complete record of body temperature.
The practical implementation of all the healthcare safety practices, communication skills, utilization of all the healthcare devices, and proper knowledge of ethical standards can be improvised with continued practice (Shi et al., 2019). These efforts serve as the gateway to a new and revolutionized world of telemedicine and information technology. The world has become a digitalized hub of technology . The most innovative and effective way to keep up with the modernized world is to gain knowledge and technology expertise. The new trends of telecommunication have made this relatively easy. Tailored medicine and humanized medicine are modern terms in the medicine world. The knowledge for implementation of this digitalized technology is essential for every healthcare professional. In combination with modern medicine, bioinformatics is the future of the new and digitalized medical world after the Covid-19 pandemic (Wang et al., 2021).
Ancient knowledge and systems are based on the same mechanical principle. Modern-day systems are highly innovative and adaptable to change; for instance, natural systems are always unpredictable. The human body is like a complex organizational framework. It regulates itself according to the different inputs. The response time and nature of the response is dynamic and different for each input. Fruits and vegetables have different outcomes compared to fast food. Thus, it can be said that organizational setups are not homogenous, and they change frequently and continuously. The orthodox concept of organizations as fixed mechanical manufacturing units is changed altogether and is replaced by green management practices. Modernized organizational setups are complex entities with dynamic outcomes related to eco-friendly practices (Hussain et al., 2019).
Leadership is the essence of any organizational setup. Leaders must adopt an innovative and adaptable mindset to accept change. The notion that leaders can control the results or outcomes of a process is outdated. The new trend of leadership and an environment-friendly approach serves as clear guidance for the employees. Influential support can help employees to think independently and provide them with freedom of speech to coordinate and communicate with leaders. The notion of psychological safety and green process management practices in organizations is imperative for generating the desired outcome. Modernized companies have formulated new strategies and setups-the flexibility in managerial mindset is necessary. Organizational learning behavior provides an opportunity to integrate research and developmental strategies that are directly based on green economy-based approaches. The learning environment provides psychological security to all employees. When communicating effectively with lower managerial and clerical staff, the upper management then gets resolved quickly. The situational humility of the upper management is overcome by organizing to learn based setups. Business companies that adopt eco-friendly strategies can generate a promising future. The psychologically secure teaming environment, well-regulated green finances, and green investment-related initiatives help the staff work as a dynamic cooperative and communicative unit for a business ventures' well-being.
Material and methods
The present study examines the impact of green credit, green investment, and green securities along with CSR reporting on renewable energy investment in Pakistan.
Economic growth was used as the control variable of the study. The data were gathered from the SBP and WDI for 1976 to 2020. Based on the above-reviewed literature, the present study estimates an equation (sketched below) in which investment in renewable energy, measured as the logarithm of investment in renewable energy sector development programs, is the dependent variable. Green credit, green investment, and green securities are used as predictors representing green finance and are measured as the ratio of green credit to total credit, the ratio of public expenditure on environmental protection, and the ratio of the market value of environmental protection companies, respectively (Anh Tu et al., 2021). Because the study uses three distinct measurements for green credit, green investment, and green securities, the chances of multicollinearity are low; moreover, green credit, green investment, and green securities have also been used by past studies such as He et al. (2019) and Ren et al. (2020) without facing multicollinearity issues. CSR reporting is also used as a predictor and is measured as a 1/0 indicator variable, where 1 identifies that the country publishes a CSR report in year t (Sadiq et al., 2020), while economic growth is used as the control variable and is measured as annual GDP growth (%). These measurements are shown in Table 2.
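The estimating equation itself is not reproduced in the extracted text; based on the variable definitions above, it presumably takes a linear form along the following lines (the coefficient notation is ours, not the authors'):

```latex
IRE_t = \beta_0 + \beta_1\, GC_t + \beta_2\, GINV_t + \beta_3\, GS_t
        + \beta_4\, CSRR_t + \beta_5\, EG_t + \varepsilon_t
```

where IRE is the (log) investment in renewable energy, GC, GINV, and GS are the green credit, green investment, and green securities ratios, CSRR is the CSR reporting dummy, and EG is annual GDP growth.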
The selection of the model depends on the stationarity of the variables: pooled OLS is appropriate if all the constructs are stationary at level, the error correction model is used if all the constructs are stationary at first difference, and the autoregressive distributed lag (ARDL) model is used when some constructs are stationary at level and others at first difference. The stationarity of the constructs was checked with the Augmented Dickey-Fuller (ADF) test, whose estimation equation is given below. The stationarity of the variables was examined individually; if the probability value is less than 0.05, the variable is considered stationary, and vice versa.
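The ADF test equation referred to above is missing from the extracted text; a standard specification is sketched here for reference (whether the authors included the trend term is not visible in the extraction, so treat this as an assumption):

```latex
\Delta y_t = \alpha + \beta t + \gamma\, y_{t-1}
             + \sum_{i=1}^{p} \delta_i\, \Delta y_{t-i} + \varepsilon_t ,
```

where the null hypothesis of a unit root corresponds to \gamma = 0.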
Each variable was examined individually in the ADF procedure for checking the stationarity of the constructs; if the p-value is less than 0.05, the variable is said to be stationary. Because all the constructs are stationary at first difference, this research uses the ECM to analyze the nexus among the variables, with the error term also stationary at level; a generic form of the ECM specification is sketched below.

The findings show the descriptive statistics: the average value of IRE is 5.45 percent, while the green credit ratio is 0.377 on average. Meanwhile, the green securities ratio is on average 0.554 and the green investment ratio is on average 0.368. Finally, average economic growth is 3.556 percent annually. The minimum value of IRE is 7.634 percent while the maximum value is 13.288 percent. In addition, the minimum value of green credit is 0.202 while the maximum value is 0.435. Meanwhile, the minimum value of green securities is 0.286 while the maximum value is 0.716. Additionally, the minimum value of green investment is 0.084 while the maximum value is 0.634. Finally, the minimum value of CSR reporting is 0 while the maximum value is 1. The minimum value of economic growth is 3.556 percent while the maximum value is 3.448 percent. These values are presented in Table 3.
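The ECM specification referred to in the methods above did not survive the extraction; a generic single-equation error correction model of the kind described (our notation, not the authors') is:

```latex
\Delta IRE_t = \alpha_0 + \sum_{j} \alpha_j\, \Delta X_{j,t}
               + \lambda\, ECT_{t-1} + \mu_t ,
```

where X_j denotes the regressors (GC, GINV, GS, CSRR, EG), ECT_{t-1} is the lagged residual from the long-run relationship, and \lambda (expected to be negative) measures the speed of adjustment toward the long-run equilibrium.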
This study also presents the descriptive statistics of the variables in the form of graph that shows the minimum and maximum values along with standard deviation and means of the variables. These are shown in Figure 2.
Results and discussion
This research also presents the constructs (IRE, GC, GS, GINV, CSRR, and EG) in the form of scatterplots, which are shown in Figure 3. The correlation matrix, which exposes the nexus among the variables, is also shown in the findings section. The figures indicate that all the predictors, namely green credit, green investment, green securities, CSR reporting, and economic growth, have a positive association with investment in renewable energy. In addition, all the correlation values are less than 0.90, which is an indication that there is no multicollinearity issue in the model. This nexus is presented in Table 4.
The results also show the ADF unit root test that shows the stationarity of the variables. The statistics show that all the variables are stationary at the first difference, which indicates that ECM is appropriate for this study. These figures are highlighted in Table 5.
This study also applies the Johansen co-integration test to check for co-integration in the model. The statistics show that only one co-integrating relationship exists in the model, because the calculated statistics exceed the critical value in only one case, where the probability value is less than 0.05. These values are given in Table 6. The results also reveal that green credit, green investment, green securities, CSR reporting, and economic growth have a significant positive nexus with renewable energy investment in Pakistan. The beta values have positive signs, indicating a positive association, while the t-values are greater than 1.64 and the p-values are less than 0.05, which indicates a significant association among the variables. The R-squared value shows that 76.72 percent of the variation in investment in renewable energy is explained by the predictors used in the study. These values are highlighted in Table 7.
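As a rough illustration of how such an estimation pipeline could be reproduced, the sketch below uses Python's statsmodels in a two-step (Engle-Granger-style) fashion. The file name and column names are hypothetical, and the paper's exact specification (lag lengths, deterministic terms) is not visible in the extracted text; this is a minimal sketch of the described workflow, not the authors' code.

```python
# Illustrative sketch only: "green_finance_pk.csv" and its columns are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

df = pd.read_csv("green_finance_pk.csv")  # annual series, 1976-2020

# Step 1: ADF unit-root tests (series should be I(1) for an ECM to be appropriate)
for col in ["ire", "gc", "gs", "ginv", "csrr", "eg"]:
    p_level = adfuller(df[col].dropna())[1]
    p_diff = adfuller(df[col].diff().dropna())[1]
    print(col, "level p =", round(p_level, 3), "| first-difference p =", round(p_diff, 3))

# Step 2: long-run (cointegrating) regression and its residuals
X = sm.add_constant(df[["gc", "gs", "ginv", "csrr", "eg"]])
long_run = sm.OLS(df["ire"], X).fit()
df["ect"] = long_run.resid  # error-correction term

# Step 3: short-run ECM in first differences with the lagged error-correction term
d = df.diff().add_prefix("d_")
d["ect_lag"] = df["ect"].shift(1)
d = d.dropna()
Xs = sm.add_constant(d[["d_gc", "d_gs", "d_ginv", "d_csrr", "d_eg", "ect_lag"]])
ecm = sm.OLS(d["d_ire"], Xs).fit()
print(ecm.summary())
```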
This study also shows the nexus among the variables by using the regression plots. The results show that GC, GS, GINV, CSRR and EG have a positive association with the IRE of the study. These links are shown in Figure 4.
Robustness analysis
The results of robustness analysis also show that green credit, green investment, green securities, and economic growth have a significant positive nexus with renewable energy investment in Pakistan. The t-values are greater than 1.64 and p-values are less than 0.05 which indicates significant association among variables. The R square value has shown that 74.25 percent variation in the investments of renewable energy is due to all the predictors used by the study. These values are shown in Table 8.
Discussions
This research investigation has revealed that the implementation of green credit policy is a part of green development in finance, which has positive impacts on the investment in renewable energy enterprises. The study examines the implication of green practices in the form of credit card and the conditions of the issuance of credits encourages investment in renewable energy enterprises. These results are approved by the studies of Liu et al. (2019), which show that the essential purpose of an ecofriendly credit policy is to protect the natural environment according to environmental regulations. This provides financial support to renewable energy enterprises based on eco-friendly principles. These results are also approved by the studies of Taghizadeh-Hesary and Yoshino (2020), which reveal that the implementation of green practices into the formation of credit cards and credit policies contribute a lot to renewable energy as the primary purpose of these practices is also to encourage eco-friendly projects. The results have revealed that during the prevalence of Covid-19, the green securities policy whether it is to issue equity or debt securities puts significant positive impacts on the investment tendency into renewable energy enterprises (Hussain et al., 2018). These results are in line with past studies of Wang and Bernell (2013), which indicate that renewable energy enterprises whose objective is to remove the negative environmental impacts are encouraged to be invested with the introduction of green aspects in the policy of financial securities. These results also agree with the investigation of Berensmann and Lindenberg (2016) into the environmental performance of different economic sectors, which show the immense role of green financial securities in the growth of renewable energy projects. New and innovative environmental well-being efforts have upgraded the human living standard (Ahmed et al., 2020). The global crisis can be considered a slow-paced problem, and Covid is a fast and highly stimulated environmental problem (Khaskheli et al., 2020). The outcomes of this pandemic are not only dangerous, but also alarming for every individual in the society. The supply and demand gap has increased more than ever in the post-pandemic era (Vo, 2020). The energy-food and water supply nexus is previously considered a burden to economies, but now healthrelated issues have increased this burden even further. Cost-effective, eco-friendly, and healthy protection-related efforts are necessary for the society's well-being (Cheng et al., 2020). Environmental protection has become a significant issue around the globe with the extensive use of renewable energy and green finance is considered as the solution to this dramatic issue especially, in the Covid-19 lockdown situation.
Covid-19 has changed the situation of the economy and business completely. In recent years, the business and the economy's issues were huge, but this virus has completely devastated all things. Businesses are shackled. The transportation and trading activities between different continents are banned, so every country's economy has crashed. Covid-19 is a medical emergency and a financial one (Yanling Wang, Xu, & Wang et al., 2020). The economic losses are huge due to this virus. The unavailability of vaccination and other preventive medication has resulted in the deaths of millions of people around the globe. This pandemic has changed the standards and lifestyles of all people around the globe. Health-based risks and insurances have become more important for employees and employers. The whole situation of sanitization and related precautionary measures have transformed all industries (Pesqueux, 2020). Nowadays, employees are more concerned about health and sanitization related aspects of the firms. This global problem can only be tackled with increased planning and by making more resource-efficient infrastructures. In the recent era, everyone must make their way into the business community. The business world is a resource-based community, and a green economy is now the only solution to devise resource-efficient and smart ways to run the businesses. In developed countries, policymakers are generating new and innovative strategies that will supplement not only environment-friendly approaches but also Covid-19 preventive treatments (Khan et al., 2018). Moreover, it has been represented by the results that the issuance of green investment policies from different insurers has positively influenced the investment into renewable energy. These results match the studies of Ping et al. (2014), which check the green financial development in the emerging economies and conclude that renewable energy projects are being financially supported by insurers who intend to maintain environmental protection. The results also match with the literary works of Mills (2012), which try to elaborate the contribution of green investment in finance to make possible the establishment of renewable energy enterprises and the improvement in its environmental performance. Furthermore, the results have indicated that green investment is one of the methods of green financial development which, even in the period of Covid-19, encourages investment in the renewable energy projects as the purpose of green investment is to put money in the projects whose basic objective is to protect the natural environment. These results are in line with the past studies of Pueyo (2018), which state that the encouragement in an economy to make the investment into eco-friendly projects also brings improvement to the performance of renewable energy enterprises by providing them with a sound financial basis. These results also agree with past studies of Nesta et al. (2014), which indicate that green economic development is an essential contributor to the financial sources of renewable energy enterprises because of their shared purpose of environmental protection. Besides that, the study findings have revealed that the corporate social responsibility report issued by different companies, as observed during the pandemic of Covid-19, has a positive relationship with the investment in renewable energy projects. These results are in line with the studies of Bons on and Bedn arov a (2015). 
These studies examine the periodic corporate social responsibility reports issued by business organizations, which stress the need for sustainable environmental performance. For this purpose, organizations invest in renewable energy projects, which can reduce the emission of toxic gases and chemicals through the recycling of energy resources. These results are also supported by the studies of Szczepankiewicz and Mućko (2016), which show that the periodic issuance of a social responsibility report by a business organization encourages investment in enterprises that can renew energy resources and reduce pollution. Finally, the study findings reveal that economic growth has a positive relationship with investment in renewable energy projects. These results are in line with the research of Eren et al. (2019), which states that when an economy is growing, all economic sectors have sound financial resources that enable them to invest in projects, such as renewable energy projects, which are beneficial for their success.
CSR reporting is essential for innovative business planning initiatives (Khan & Alam, 2020). The banking sector is the backbone of every economy. Pakistan has a dwindling economy, and its major reason is corruption (Bulovsky, 2020;Khan, 2007). Corrupt politicians have made the Pakistani economy weak and unsustainable. Corporate social responsibility and related efforts are vital for economic sustainability and environmental well-being (Shabbir et al., 2020). Pollution is a major hazard in Pakistan. Air, water, and noise pollution are major problems in Pakistan ( Van et al., 2020). All policymakers must formulate innovative planning strategies for new businesses that can make the economy stable and well-flourished. The use of ecologically sound raw material in factories and waste treatment initiatives can improve overall environmental health. Financial problems can easily be overcome with honest and well-coordinated efforts of all people belonging to different walks of life (Rehmana et al., 2020).
Conclusion and policy implications
This study sheds light on the changes in financial policies and eco-friendly inscriptions in government economic policies and explores their impacts on the investment in renewable energy projects in an emerging economy that is facing the prevalence of Covid-19. The study examines how financial sources can be made prosperous for renewable energy projects, which helps the economy to recover from the energy crisis and ensure environmental protection. The study examines the rise or fall in the equity of renewable energy enterprises and projects due to the inclusion of green aspects in the financial policies like credit policy, the policy of financial securities, and investment policies in Pakistan's economy in the period of Covid-19. The higher the eco-friendly economic and investment policies that are efficiently implemented and executed in the economy, the higher is tendency of investment in the renewable energy projects as there are rich financial sources available for these projects on comfortable conditions to create a sense of environmental responsibility in business entities. Similarly, the pressure from the environmental regulations on business organizations to produce a corporate social responsibility report after specific time intervals leads to the encouragement of spending money in carrying out the projects to renew the energy resource to avoid the occurrence of financial crisis. The movement in economic growth considerably affects the initiation and performance of renewable energy projects as the change in economic development changes the financial capacity of the organizations.
Due to the occurrence and spread of the Covid-19 pandemic, people's health has been exposed to an open threat as the virus affects human beings through air, touch, or by interacting with affected people. It has adversely affected all social, economic, private, and government activities and brings a fall in the economic growth of a country. In this situation, all economic sectors including financial institutions have made amendments in their policies and strategies to overcome the issues associated with the spread of Covid-19. As green finance is an effective tool to encourage ecological friendly programs and overcome the disturbance created by the Covid-19, the institution must bring positive changes in their policies related to green finance.
Since a contagious disease like Covid-19 pandemic has started and spread to countries across the world, all economic, social, private, and government activities have been disturbed. Thus, all economic and social organizations as well as private and government entities have paid attention to serious matters in trying to overcome issues that may cause an increase in Covid-19 cases. Changes have been made in policies, strategies, and the rules of any economic or social sector so that the capacity of all the social and economic entities to maintain environmental sustainability can be improved. Just like other sectors of the economy, the financial sector has also been active in developing strategies to overcome pollution and thus enable all the social and economic entities to fight against the Covid-19. Green financing is one of the initiatives by financial institutions to overcome the environmental pollution by encouraging renewable energy consumption during Covid-19.
This investigation has great significance as it succeeds in making theoretical implications along with an empirical impact. It is of much importance if it is talked about in its theoretical essence on account of its contribution to the literature conducted on environmental protection. The study examines the development of green finance in an emerging economy and analyzes its contribution to environmentally friendly projects like renewable energy enterprises whose purpose is to protect the environment from pollutants. The implementation of eco-friendly practices inscribed in the credit, investment, and financial securities (both equity and debt securities) policies, which result in the beginning of different environmental projects, initiate the renewable energy enterprises in the economy. Similarly, the study suggests that the force from the government investment in the technologies and techniques to take care of environmental protection leads to the establishment of renewable energy enterprises by supporting them financially. Several past studies have been written which deal with eco-friendly projects, environmental-friendly economic and financial policies, and their impacts on the economy. However, this study is an initiation as it explores the same areas with reference to the economy, which threatened by the Covid-19 pandemic. The literary workout sheds light on the prevalence of Covid-19, its adverse impacts on the economic conditions, its problems, and then proposes the solution to those problems. This study is helpful for new arrivals that will investigate this area in the future, along with regulators who want to formulate policies related to green finance and renewable energy usage and investment.
The study also has empirical implications because of its considerable significance for emerging economies, especially those that have a large industrial sector and are confronted with a health crisis due to the prevalence of a pandemic like Covid-19. Such economies have to face health problems and the destruction of resources, yet resources and a healthy public are crucial for the growth of a country. The study therefore serves as a guideline for the government and environmental regulators, helping them to maintain ecological protection.
It guides them on how to encourage environment-friendly projects, such as renewable energy projects, through eco-friendly amendments to fiscal and financial policies, namely credit, securities, and investment policies. The government can maintain environmental protection by increasing the growth rate of the economy and by requiring the issuance of periodic corporate social responsibility reports. This study is also a theoretical guideline for economists on how to protect the planet and its people for future economic growth, along with improving the present, as it shows that encouraging the integration of green practices into finance boosts investment in renewable energy. Moreover, this study is useful to economists and the government because it provides guidance on how to mitigate the destructive health impacts of Covid-19 and sustain the economy through the growth of green finance.
Limitations and future directions
This study also carries several limitations, which motivate future researchers and academics to provide further insight into the subject and to take specific steps to remove these limitations. The study relies on data taken from a single source, so questions may arise about the adequacy and accuracy of the data; future scholars should address this by collecting data from multiple sources. Moreover, the implementation and execution of green practices in finance and their contribution to investment in renewable projects are examined for the economy of Pakistan, an emerging, lower-middle-income economy. These results hold for Pakistan's economy or similar economies, but they may not be as suitable for developed economies. The results relate to the introduction of environmental aspects into financial policies, such as green investment and green securities, and thereby to the establishment and development of renewable energy enterprises in an economy where Covid-19 prevails. Thus, the study has limited generalizability, which future scholars should address while examining the same associations between the underlying variables. Future studies should also address green development in different financial areas and check its influence on the financial sources of renewable energy enterprises under normal conditions, which may prove to be a guideline for economists of any era rather than only for an economy suffering from a pandemic. | 9,893.8 | 2022-02-17T00:00:00.000 | [
"Environmental Science",
"Economics"
] |
Digital Twins in Industry 4.0: A Literature Review
The Digital Twin is one of the most promising fields in Industry 4.0 due to its advantages related to real-time monitoring, performance analysis, and predictive maintenance. It is an up-to-date virtual representation of a real-world asset, system, being, or even city that is updated in real time with data from its physical counterpart. By bridging the physical and the digital, it is considered to be the innovation backbone of the future. In this contribution, we review the concept of digital twins, the development of their uses in industrial applications, and the level of integration in scientific work.
Introduction
There is no doubt that industrial revolutions have had a massive impact on every aspect of our life. On a large scale, they improved the standard of living and contributed to deep economic and social change. Beginning from 1765 through the present day, we have witnessed four industrial revolutions: from coal, gas, electronics, and nuclear to artificial intelligence, processes became mechanized and manufacturing became smarter and faster.
The world has recognized the importance of adopting Industry 4.0; digital technology has become essential for companies wishing to meet the ever-increasing needs of tomorrow and remain competitive in the future. With advanced automation, monitoring, and real-time communications, the impact of Industry 4.0 on manufacturing is driving unprecedented advances in quality, reliability, and agility.
Recently, this new paradigm has been the subject of several scientific contributions; cyber-physical systems and technologies like the Industrial Internet of Things, data mining, and cloud computing offer the potential to transform industrial fields from the factory floor to logistics. Nevertheless, current industrial practices based on CPS are prone to some limitations that can hinder several desirable objectives. This is due to their heterogeneous nature, complexity, and process of implementation.
This paper provides a systematic review that discusses the concept of the Digital Twin in the context of production science, an overview of the key enabling technologies, areas of application, and the general level of integration in scientific work. This review serves as the basis for further work in the field of the Digital Twin in Industry 4.0. It concludes with perspectives.
Overview
In an increasingly dynamic world, digital twins are powerful drivers of significant growth in the coming years, driving innovation and performance in various industries like manufacturing, the automotive industry, energy, agriculture, and healthcare [1,2].
Significant growth is expected in the digital twin market in the coming years; it will help companies improve the customer experience by better understanding customer needs, enhancing existing products and services, and driving the innovation of new business opportunities. Digital twin technology seems to be a well-suited solution for helping companies realize Industry 4.0 standards. The economic value of digital twin technology will vary widely, depending on the monetization models that drive it. For complex and expensive industrial businesses, improving utilization by reducing asset downtime and lowering overall maintenance costs will be extremely valuable, making internal software competencies critical to driving value with digital twins.
Enabling Technologies for digital twins
Digital twin technology relies on a variety of enabling technologies to support the different modules of a DT: the asset and its physical environment, the virtual representation of the asset, and the communication channel between the physical and virtual representations [4].
For the physical entity (PE), it is essential to exploit abstract knowledge and understanding of the physical world; digital twins therefore presuppose knowledge of mechanics, electromagnetism, materials science, etc. Combined with the Industrial Internet of Things (IIoT), smart sensing, and intelligent perception systems that take multi-source signals as inputs and deliver more reliable results through information fusion and adaptive learning [5,6], this makes the models more accurate and closer to reality. For the virtual entity (VE), which exhibits behavior similar to that of its physical counterpart, various modeling technologies are essential; machine-learning algorithms process the large quantities of data collected by sensors and identify patterns, and artificial intelligence provides data insights about optimization, maintenance, and efficiency.
Data is the key to the development and application of digital twin technology, and implementing digital twins introduces large computational loads. Cloud computing and big data technologies are therefore suitable solutions for improving the operational performance of data handling [7].
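To make the three modules concrete, the sketch below shows, in schematic Python, a physical entity emitting sensor readings, a virtual entity that keeps a synchronized state, and a periodic update loop standing in for the communication channel. It is only an illustration of the structure discussed above; all class and field names are invented for the example and are not part of any DT standard.

```python
# Minimal, illustrative sketch of the three DT modules discussed above:
# a physical entity emitting sensor data, a virtual entity holding the
# synchronized state, and a simple communication/update loop.
# All names (PhysicalEntity, VirtualEntity, etc.) are hypothetical.

import random
import time
from dataclasses import dataclass, field


@dataclass
class PhysicalEntity:
    """Stands in for a real asset instrumented with sensors (IIoT layer)."""
    nominal_temp_c: float = 60.0

    def read_sensors(self) -> dict:
        # In practice these values would come from smart sensing / perception systems.
        return {"temperature_c": self.nominal_temp_c + random.gauss(0, 1.5),
                "vibration_mm_s": abs(random.gauss(2.0, 0.3)),
                "timestamp": time.time()}


@dataclass
class VirtualEntity:
    """Virtual counterpart: keeps an up-to-date state and derives simple insights."""
    state: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update(self, measurement: dict) -> None:
        self.state = measurement
        self.history.append(measurement)

    def health_flag(self) -> str:
        # Placeholder for AI/ML analytics; here just a threshold rule.
        return "alert" if self.state.get("vibration_mm_s", 0) > 3.0 else "ok"


if __name__ == "__main__":
    pe, ve = PhysicalEntity(), VirtualEntity()
    for _ in range(5):                      # communication channel: periodic sync
        ve.update(pe.read_sensors())
        print(round(ve.state["temperature_c"], 1), ve.health_flag())
```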
Digital Twins in Industry 4.0
Implementation of Digital Twins
The first stage of implementing a digital twin includes analyzing the practical needs in order to make long-term strategic plans and modeling the static properties of the asset to determine the system requirements and constraints, functionalities, and behaviors, including the functional decomposition. Model data flow and communication are also considered, as are the logical structure and architecture of the asset and the technical requirements to implement the solution, including physical and software parts.
The second stage focuses on the system specifications and the design targets. The VE is meant to be designed as a mirror of the PE, so a high level of modeling precision is required; in some cases, synchronization delay and measurement error cannot be tolerated.
The third stage establishes the appropriate modeling techniques to meet the practical needs. For the geometric features, the use of mechanical drawing software for digitalization is recommended. The physical characteristics are described using first-principles knowledge, system identification, and data-driven modeling.
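As a rough illustration of how the outputs of these three stages might be recorded, the following hypothetical specification object collects the stage-1 requirements, the stage-2 tolerances (synchronization delay and measurement error), and the stage-3 modeling choices; the field names and default values are ours, not a published schema.

```python
# Hypothetical sketch of how the three implementation stages described above
# might be captured as a machine-readable specification; field names are
# illustrative, not a standard schema.

from dataclasses import dataclass, field


@dataclass
class TwinSpecification:
    # Stage 1: practical needs and static properties of the asset
    asset_name: str
    functions: list = field(default_factory=list)      # functional decomposition
    constraints: dict = field(default_factory=dict)

    # Stage 2: design targets and tolerances
    max_sync_delay_s: float = 1.0
    max_measurement_error: float = 0.05

    # Stage 3: chosen modeling techniques
    geometry_model: str = "CAD"
    physics_model: str = "first-principles"            # or "system identification", "data-driven"

    def validate(self, observed_delay_s: float, observed_error: float) -> bool:
        """Check whether an implementation meets the stage-2 tolerances."""
        return (observed_delay_s <= self.max_sync_delay_s
                and observed_error <= self.max_measurement_error)


spec = TwinSpecification(asset_name="CNC mill",
                         functions=["position tracking", "spindle-load monitoring"],
                         constraints={"sampling_rate_hz": 100})
print(spec.validate(observed_delay_s=0.4, observed_error=0.02))   # True
```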
Industrial Applications of Digital Twins
The development of digital twins started in the Aerospace Industry. However, manufacturing is exploring the technology the most. Digital twins are the key enablers of Industry 4.0 and Smart Manufacturing.
In the process of manufacturing, products go through four main phases: design, manufacture, operation, and disposal. The intervention of smart twins is possible in all four phases of the product [8].
In the first phase, digital twins offer the possibility to verify designs virtually, enabling designers to test different product versions and choose the best one. Using real-time data from products of previous generations, designers gain insight into the features that work best for consumers and those that need improvement. This makes the process of improving the design easier, more efficient, and faster.
During the second phase, raw materials are turned into the final product. Digital twins can support management, production planning, and process control by planning and executing orders automatically and by improving decision support through detailed diagnosis. They also support maintenance by evaluating and analyzing machine conditions, identifying any changes in the production system and their effects, and enabling predictive maintenance [9] by forecasting failures and the remaining useful life (RUL), thus identifying and applying the required maintenance to avoid possible breakdowns (a simple RUL sketch appears at the end of this section).
The final phase of the product can also be managed by digital twin technologies, by tracking the real-time product operation state via the DT and developing a maintenance strategy accordingly, which can improve the next-generation product.
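As a toy example of the predictive-maintenance capability mentioned above, the snippet below fits a linear degradation trend to a monitored health indicator and extrapolates it to a failure threshold to obtain a rough remaining-useful-life estimate; real digital twins rely on much richer models, and the data and threshold here are invented.

```python
# Toy illustration (not from the reviewed papers) of the kind of RUL estimate a
# digital twin can provide: fit a linear degradation trend to a monitored health
# indicator and extrapolate to a failure threshold. Real systems use far richer
# models (particle filters, survival models, deep learning).

import numpy as np

def estimate_rul(times, health, failure_threshold):
    """Remaining useful life from a linearly degrading health indicator."""
    slope, intercept = np.polyfit(times, health, 1)   # health ~ slope * t + intercept
    if slope >= 0:
        return float("inf")                           # no degradation trend detected
    t_fail = (failure_threshold - intercept) / slope  # time at which threshold is crossed
    return max(t_fail - times[-1], 0.0)

# Hypothetical health indicator (e.g., normalized bearing condition) sampled hourly
t = np.arange(0, 10)
h = 1.0 - 0.04 * t + np.random.default_rng(0).normal(0, 0.005, size=t.size)
print(round(estimate_rul(t, h, failure_threshold=0.5), 1), "hours of RUL (approx.)")
```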
Methodology
This article reviews digital twin integration in Industry 4.0, highlighting the current state of the art in scientific work.
A systematic literature review was conducted to analyze the current use of digital twins in manufacturing processes, as suggested by the guidelines for systematic literature reviews [10]. First, databases were chosen to capture the wide range of digital twin applications: we searched three multidisciplinary bibliographic databases, Scopus (ScienceDirect, Elsevier), Web of Science, and SpringerLink. The search was limited to the subject areas of engineering and computer science. To focus on current developments on this topic, the timeframe was limited to the years 2019 to 2023.
The search term "Digital Twin in Industry 4.0" returned 1918 hits. After duplicates and papers without available full text were excluded, 773 remained.
After identifying relevant, high-quality papers, a broad search strategy was used to obtain a comprehensive data set for the analysis in this research, limiting the search to the two focus areas of digital twins and Industry 4.0. To fulfill the overall aim of this paper, title and abstract screening was used to assess eligibility; irrelevant papers, duplicated topics, and in-progress research were excluded at this step. As displayed in Fig. 2 below, the PRISMA method was used for the systematic literature review, slightly adapted regarding a backward search as suggested in the guideline for reporting systematic reviews [11].
The papers found were analyzed by content and categorized according to different perspectives. First, the publications were classified by year of publication to indicate the increasing research interest in this topic; then by type, that is, whether they were case studies, concept papers, or reviews. When a publication fell into more than one category, the dominant one was chosen. Furthermore, the focus area as well as the technologies mentioned were derived from the papers' contents.
Evolution of digital twins scientific contributions
There is no doubt that advances in science and technology and their integration into the real world are channeled by scientific research. The digital twin is one of the technologies that will shape how business is done in the future. Although digital twin technology has been practiced by NASA since the 1960s, it gained recognition in 2002 after Michael Grieves's presentation on the technology at the University of Michigan. The number of contributions has been increasing since 2019; last year, 2516 contributions were published, and further growth is expected this year.
Publications type
As mentioned before, digital twin integration in scientific research is steadily increasing. The majority of the reviewed literature consists of conference papers and articles, as shown in Fig. 4 (classification by publication type).
Focused area
Regarding the focus areas of the analyzed publications, the majority concentrate on production planning and control, predictive maintenance, and product lifecycle management [12].
Qinglin Qi and Fei Tao [13] reviewed digital twins in manufacturing; their study covers the concept of the DT as well as its applications in product design, production planning, manufacturing, and predictive maintenance. To provide insight into intelligent manufacturing, Zhang et al. [14] propose a data- and knowledge-driven framework for digital twin manufacturing cells that could support autonomous manufacturing through an intelligent perceiving, simulating, understanding, predicting, optimizing, and controlling strategy. Jinsong Bao, Dongsheng Guo, Jie Li, and Jie Zhang [15] propose an approach to modeling and operations for the digital twin in manufacturing. Bazaz et al. [16] propose a comprehensive model of a digital twin approach for a manufacturing environment and related production processes. Redelinghuys et al. [17] introduce a digital twin architecture capable of exchanging data and information between remote simulations, or between simulations and the physical twin, comprising a local data layer, an IoT gateway layer, a cloud-based database, and simulation.
Concluding remarks and perspectives
Digital twins present an opportunity to merge the physical and digital worlds and thereby help address the challenges faced by Industry 4.0. With the support of digital twin techniques, Industry 4.0 encompasses a wide range of tasks covering different economic aspects.
The applications of digital twins for any product can be realized throughout its lifecycle, from design to disposal, addressing the challenges faced by Industry 4.0 through remote monitoring and predictive maintenance. There has been objective progress since the inception of digital twin technology; however, practical applications and implementations of the technology in industry remain largely uncharted territory.
This paper highlights the challenges in today's industrial practice and presents an overview of the economic benefits of adopting digital twins and their enabling technologies; digital twins will act as the backbone of Industry 4.0.
"Computer Science"
] |
New improved gamma: Enhancing the accuracy of Goodman–Kruskal’s gamma using ROC curves
For decades, researchers have debated the relative merits of different measures of people’s ability to discriminate the correctness of their own responses (resolution). The probabilistic approach, primarily led by Nelson, has advocated the Goodman–Kruskal gamma coefficient, an ordinal measure of association. The signal detection approach has advocated parametric measures of distance between the evidence distributions or the area under the receiver operating characteristic (ROC) curve. Here we provide mathematical proof that the indices associated with the two approaches are far more similar than has previously been thought: The true value of gamma is equal to twice the true area under the ROC curve minus one. Using this insight, we report 36 simulations involving 3,600,000 virtual participants that pitted gamma estimated with the original concordance/discordance formula against gamma estimated via ROC curves and the trapezoidal rule. In all but five of our simulations—which systematically varied resolution, the number of points on the metacognitive scale, and response bias—the ROC-based gamma estimate deviated less from the true value of gamma than did the traditional estimate. Consequently, we recommend using ROC curves to estimate gamma in the future. Electronic supplementary material The online version of this article (10.3758/s13428-018-1125-5) contains supplementary material, which is available to authorized users.
An important question in many domains of psychology is whether people are metacognitively accurate. One type of metacognitive accuracy is resolution, which is the degree to which a metacognitive rating discriminates between a person's own correct versus incorrect responses. For example, people may rate how confident they are in a particular response on a 1 to 6 scale (6 = highest confidence). If, on average, accurate responses are assigned higher values on the scale than inaccurate ones, then resolution is good. Resolution is best if people use the extremes of the scale to discriminate correctness. For example, someone who assigns "6" to all her accurate responses and "1" to all her inaccurate ones is demonstrating perfect resolution. The same principle applies to other metacognitive ratings, such as judgments of learning (JOLs) and feelings of knowing.
Resolution is considered important because it affects control (Nelson & Narens, 1990). For example, students writing a multiple-choice test for which errors are penalized but omissions are not face a metacognitive decision: Is the candidate answer under consideration for a question accurate or not (e.g., Higham, 2007)? If it is assessed as correct, students may well risk the penalty and offer it as a response. However, if it is assessed as incorrect, the decision may be to withhold the response. Clearly, resolution determines whether the decision to report (or withhold) the answer increases the test score. A student with perfect resolution will offer all her correct responses and withhold all her incorrect ones, resulting in the highest score possible given her knowledge. Conversely, another student with equal knowledge may score lower on the test if her resolution is poor. With poor resolution, the student may offer a portion of her incorrect candidate responses and withhold some of her correct ones, resulting in penalties and lost opportunities for points, respectively (see Arnold, Higham, & Martín-Luengo, 2013;Higham, 2007;Higham & Arnold, 2007, for discussion of the metacognitive processes involved in formula-scored tests).
Given the importance of resolution for understanding metacognitive processes and people's behavior, it is critical that it be measured properly. However, the best index of resolution has been an issue of ongoing debate (e.g., Higham, 2007, 2011; Higham, Zawadzka, & Hanczakowski, 2016; Masson & Rotello, 2009; Nelson, 1984, 1986, 1987; Rotello, Masson, & Verde, 2008; Swets, 1986). On the one hand, there are proponents of Goodman-Kruskal's gamma coefficient (Goodman & Kruskal, 1954), an ordinal measure of association ranging between -1 (perfect negative relationship) and +1 (perfect positive relationship). One such highly influential proponent was Nelson (1984), who compared a variety of different measures of association and advocated gamma for a number of reasons. First, it made no scaling assumptions beyond the data being ordinal. Second, it could achieve its highest value possible (+1) under most circumstances. Third, it could be computed from data arranged in a number of different table formats (e.g., 2 × 2 tables or 2 × R tables, where R > 2). By far, this index continues to be the most common measure of resolution in the metacognitive literature. Nelson's (1984) review of potential measures of resolution and ultimate promotion of gamma as the best one has had tremendous impact on the field since it was first published.
On the other hand, other researchers and statisticians have recommended signal detection theory (SDT) as an alternative to gamma (e.g., Benjamin & Diaz, 2008; Higham, 2011; Higham et al., 2016; Masson & Rotello, 2009; Rotello et al., 2008; Swets, 1986). Resolution is a discrimination task: people must discriminate the correctness of their own responses, so a suitable measure based on SDT seems like an obvious choice, given that this theory was designed to provide a pure measure of discrimination, free from response bias. Proponents of SDT have argued that, unlike SDT measures such as A z or d a, gamma is contaminated by response bias (e.g., Masson & Rotello, 2009). However, despite clear demonstrations of this fact, as well as other undesirable properties such as a tendency to produce Type I inferential errors (Rotello et al., 2008), gamma continues to be used pervasively throughout the metacognitive literature.
The purpose of the present article is to contribute to this debate regarding the best measure of resolution in a unique way; we highlight similarities rather than differences between the measures. By sidestepping the typically confrontational nature of this debate (see, in particular, the exchanges between Nelson and Swets in the 1980s; e.g., Nelson, 1986Nelson, , 1987Swets, 1986), we hope to encourage new insights not only regarding which measure of resolution is the best one to use in a given situation, but also to demonstrate how it is possible to translate one measure from the so-called probabilistic approach involving gamma to SDT measures, and vice versa. By emphasizing the similarities between the measures rather than their differences, we introduce a new computational formula for gamma that is based on SDT. Our simulations show that when this SDT-based formula is used instead of the one suggested by Goodman and Kruskal (1954), which is derived from concordant and discordant pairs of observations (explained next), the estimates of gamma obtained from sample data deviate far less from the true value.
Traditional gamma: Concordant and discordant pairs of observations
In this section, we briefly review the original computational formula for gamma introduced by Goodman and Kruskal (1954) and its limitations. Suppose that experimental participants are presented with a list of 50 unrelated cue-target pairs, such as digit-hungry. Following presentation of each pair, participants are asked to judge the likelihood (using a 0%-100% scale) that they will recall the target if presented with the cue in a cued-recall memory test held at the end of the experiment, a so-called judgment of learning (JOL). On the cued-recall test, suppose that one participant recalled 30 of the targets from the 50 cues on the test (60% accuracy). The participant's 30 correct and 20 incorrect recall attempts can then be tabulated contingent on the JOLs she made during study. Suppose that the JOLs, which can assume any integer value between 0 and 100, are divided into ten bins, as in Table 1. Binning data in this way is a common procedure in metacognitive research, used, for example, to construct calibration curves. To compute gamma using the original formula, one first determines the total number of concordant (C) and discordant (D) pairs of observations. These terms refer to the ordering of the two observations within the pair on the two variables. If the ordering of the two observations on one variable is the same as the ordering on the second variable, then there is a concordance (e.g., JOL a > JOL b and Recall a > Recall b, where a and b refer to items within the pair). Alternatively, if the orderings of the two observations on the two variables are opposite (e.g., JOL a > JOL b and Recall a < Recall b), then there is a discordance. In Table 1, the concordant pairs would be those for which the JOL assigned to a correct response exceeds that assigned to an incorrect response. Discordant pairs, on the other hand, are those for which the JOL assigned to an incorrect response exceeds that assigned to a correct response. The numbers of concordant and discordant pairs for the data in Table 1 are shown at the bottom of the table. Gamma is then computed as the number of concordant pairs minus the number of discordant pairs, all divided by the total number of concordant and discordant pairs, that is, G = (C - D) / (C + D) (Eq. 1). In the example shown in Table 1, gamma is equal to .904. This value corresponds to excellent resolution, since the maximum value that gamma can assume is 1.0. This excellent resolution can be intuited by noting that the JOLs for correct versus incorrect responses tend to be clustered toward the top versus the bottom of the scale, respectively. In other words, correct responses tend to be assigned high JOLs, whereas incorrect responses tend to be assigned low JOLs, showing that this participant was metacognitively accurate in predicting her future memory performance. Now consider the same participant's data divided into five bins instead of ten, a scenario depicted in Table 2. One might expect that the gamma computed from the data in Table 2 would also be .904, as it was in Table 1, given that the two tables are based on exactly the same data; the only difference between the tables is the seemingly arbitrary decision about how to bin the data. However, reducing the number of bins reduces both the number of concordant pairs (540 instead of 554) and the number of discordant pairs (24 instead of 28). This has the effect of increasing gamma from .904 to .915.
At the extreme, where there are only two bins corresponding to, say, JOL < 50 and JOL ≥ 50, producing a 2 × 2 table, there would be only 425 concordant pairs and 15 discordant pairs, to yield gamma = .932. In short, the fewer the bins for a given data set, the greater the distortion of gamma if it is computed with the original formula.
The reason why reducing the number of confidence bins distorts gamma is that it increases the total number of ties (T)-that is, pairs of observations that do not differ on one, the other, or both the JOL and recall accuracy variables. Referring to the tables again, some pairs that were either concordant or discordant in Table 1 are tied in Table 2. The number of ties can be computed by subtracting the numbers of concordant and discordant pairs from the total number of pairs (i.e., T = 0.5N[N -1] -C -D, where N equals the total number of observations). Out of the 1,225 total pairs in the data set used to generate Tables 1 and 2 (50[49]/2 = 1,225), there are 643 ties in Table 1 with ten bins, 661 in Table 2 with five bins, and 785 in the 2 × 2 case (if confidence is split at 50%). There are three types of ties (Gonzalez & Nelson, 1996): pairs that are tied on (1) the metacognitive judgment (i.e., the two JOLs are in the same bin) but not the recall test (i.e., one is correct, but the other is not); (2) the recall test (i.e., both correct or both incorrect) but not the metacognitive judgment (i.e., the two JOLs are in different bins); and (3) both variables (i.e., pairs assigned the same JOL, which are both correct or both incorrect). In Table 2, the 661 total ties are made up of 212, 36, and 413 ties of these three types, respectively. However, regardless of the particular nature of the ties caused by decreasing the number of bins, the effect on gamma is the same: Ties mean that gamma is distorted. Only in the case of no ties is the value of gamma accurate (Masson & Rotello, 2009).
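To make the computation concrete, the short sketch below (not taken from the article) implements Eq. 1 for a 2 × K confidence-by-accuracy table and also reports the number of tied pairs; the item counts are hypothetical, and collapsing the ten bins into five illustrates how coarser binning changes the pair counts and hence the estimate.

```python
# Minimal sketch (not the article's code) of Goodman-Kruskal's gamma computed
# from a binned confidence-by-accuracy table via Eq. 1, G = (C - D) / (C + D),
# together with the tie count T = 0.5 * N * (N - 1) - C - D. The counts below
# are hypothetical and only illustrate how coarser binning changes the estimate.

def gamma_pairs(correct, incorrect):
    """correct/incorrect: item counts per confidence bin, ordered low -> high."""
    c = d = 0
    n_bins = len(correct)
    for i in range(n_bins):
        for j in range(n_bins):
            if i > j:
                c += correct[i] * incorrect[j]   # correct item got the higher confidence
            elif i < j:
                d += correct[i] * incorrect[j]   # incorrect item got the higher confidence
    return c, d, (c - d) / (c + d)

correct_10 = [0, 1, 1, 2, 3, 3, 4, 4, 5, 7]       # hypothetical 10-bin counts (30 correct)
incorrect_10 = [6, 5, 3, 2, 1, 1, 1, 1, 0, 0]     # hypothetical 10-bin counts (20 incorrect)
# Collapse adjacent bins to mimic a 5-point scale
correct_5 = [a + b for a, b in zip(correct_10[0::2], correct_10[1::2])]
incorrect_5 = [a + b for a, b in zip(incorrect_10[0::2], incorrect_10[1::2])]

for corr, inc in [(correct_10, incorrect_10), (correct_5, incorrect_5)]:
    c, d, g = gamma_pairs(corr, inc)
    n = sum(corr) + sum(inc)
    t = 0.5 * n * (n - 1) - c - d                 # tied pairs
    print(f"bins={len(corr)}  C={c}  D={d}  ties={int(t)}  gamma={g:.3f}")
```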
The problem of tied observations and their effect on gamma has been known for some time. Potential solutions have been offered that typically entail including some of the tied pairs in the denominator of the computational formula for gamma, thereby reducing the overestimation (e.g., Kim, 1971;Somers, 1962;Wilson, 1974;see Freeman, 1986, for a review). The purpose of our commentary is not to adjudicate on which correction might be the most suitable. Rather, we wish to offer an alternative method for computing gamma that ) 0 0 1 2 2 4 3 3 4 11 30 Not recalled (incorrect) These are the same data as in Table 1 V: The proportion of concordant pairs Nelson (1984) described a statistic that is closely related to gamma: V, the proportion of concordant pairs. In an ideal circumstance in which there are no ties, then If there are no ties, the proportion of concordant pairs (V) and the proportion of discordant pairs are complementary, such that Nelson showed that, because gamma is equal to Eq. 2 minus Eq. 3 (i.e., the difference in the proportions of concordant and discordant pairs), The relevance of V and Eq. 4 will become apparent later.
Alternatives to gamma: Signal detection theory
Adopting a signal detection framework, Masson and Rotello (2009) showed that gamma is contaminated by response bias. In the metacognitive context, liberal versus conservative response biases would be represented in Table 1 as a clustering of observations in the bins associated with high versus low confidence values, respectively. At the extreme, maximally liberal versus maximally conservative responding would result in all observations falling into the 90-100 bin versus the 0-10 bin, respectively. At these extremes, all the observations are ties, with the number of ties reducing as the clustering is reduced. As an alternative to gamma, Masson and Rotello recommended parametric signal detection measures such as d a or A z, which are free of response bias if the parametric assumptions are met. However, as we discuss in more detail later, these measures present their own practical as well as potential theoretical problems. We now turn to the area under the receiver operating characteristic (ROC) curve, which A z estimates. ROC curves, introduced to psychology from engineering in the 1950s, are now used widely in both experimental psychology and medicine, as they provide a great deal of useful information about discrimination performance. In short, an ROC curve is a plot of the hit rate (HR) as a function of the false alarm rate (FAR) at different levels of response bias. Within the metacognitive context, the HR and the FAR are the conditional probabilities that participants identified correct and incorrect responses, respectively, as correct. There are a variety of ways that a response might be identified as correct. Participants may choose to report (rather than withhold) an answer in a formula-scored testing situation, or they may respond "yes" when asked if they are confident in their answer. However, identification of correct answers using binary responses (report/withhold or yes/no), by itself, only produces one point for the ROC curve, because it produces only one HR and FAR pair. To generate several points for the ROC, which gives a better indication of its shape, confidence ratings are commonly used.
To illustrate a confidence-based ROC curve, consider again the data in Table 1. The first step in creating an ROC curve of these data is to generate a table of the cumulative frequencies, shown in panel A of Table 3. Starting at the highest level of confidence and moving to lower confidence levels, observations are accumulated until all of the observations are represented at the lowest confidence level. The cumulative nature of the data in Table 3 is indicated by the "+" sign following each confidence level. For example, the column corresponding to "70+" includes all the correct and incorrect responses assigned a confidence level of 70 or higher. For the column "0+", all responses are assigned a confidence level of 0 or higher; hence, the values in that column match the row totals at the right-hand end of the row.
Next, the cumulative frequencies are converted into rates, shown in panel B of Table 3. Specifically, the cumulative frequencies are divided by the total number of observations of a given type. Correct responses yield HRs, whereas incorrect responses yield FARs. Note that the rates for higher confidence levels generally are smaller than those at lower confidence levels. This mapping corresponds to more conservative responding versus more liberal responding, respectively. A way to understand the table of HRs and FARs is to treat decreasing levels of confidence as decreasing levels of conservatism. That is, for confidence level "90+", it is as if participants are only identifying as correct those responses assigned 90 or higher. On the other hand, for confidence level "30+", it is as if participants are identifying as correct those responses assigned 30 or higher, which means that more items have been identified as correct (for 90+ vs. 30+, respectively: HRs, .37 vs. .97; FARs, 0 vs. .30).
The values in the rates table can then be plotted in a unit space, with FARs on the x-axis and HRs on the y-axis. The ROC curve for the data shown in panel B of Table 3 is shown in Fig. 1 (the plus signs next to the points on the ROC indicate that the rates are cumulative; HR = hit rate, FAR = false alarm rate). A number of interesting performance metrics can be gleaned from the ROC curve. Note that if participants were completely unable to discriminate between their own correct and incorrect responses, the HR and FAR would be equal to each other. In other words, correct responses would be just as likely to be identified as correct as incorrect responses. By convention, chance performance is depicted in the ROC space as the diagonal line, commonly referred to as the chance diagonal. Note, however, that the actual ROC curve is bowed away from the chance line. This bowing indicates that discrimination is above chance, because the HRs exceed the FARs at all confidence levels. Because more bowing is indicative of better discrimination, area under the curve (AUC) provides a useful measure of discrimination. A z, mentioned earlier, is a measure of this area and can be obtained from sample data using maximum-likelihood estimation if it is assumed that there are Gaussian correct and incorrect response distributions. Such an assumption may not be valid in the context of metacognitive discrimination (resolution), a point to which we will return later. A nonparametric alternative is A g, which estimates the area by connecting the points on the ROC curve (as well as the [0,0] point) with straight lines and computing the area using the trapezoidal rule (Pollack & Hsieh, 1969). In particular, the formula for A g is A g = 0.5 Σ (FAR_k+1 - FAR_k)(HR_k + HR_k+1) (Eq. 5), where the sum is taken over successive points k = 0, 1, ..., n - 1 on the ROC, point 0 is the (0,0) point, the remaining points are the cumulative HR-FAR pairs at the criteria ordered from most conservative to most liberal, and n is the number of criteria. Applying Eq. 5 to the ROC curve in Fig. 1, which is based on the data in Table 3, yields the A g estimate for this participant.
The relationship between V and area under the ROC curve
Figure 2 shows another way to depict monitoring and confidence in a signal detection model. The model assumes that there is an underlying dimension constituting the subjective evidence (for correctness). (The evidence dimension in SDT is commonly described as "memory strength." This has typically caused SDT to be rejected by metacognitive theorists because that label seems to imply that people make metacognitive judgments on the basis of direct access to the contents of memory (e.g., Koriat, 2012). However, as is discussed in Higham et al. (2016), there is no need to equate the underlying dimension with memory strength. It is better to consider the dimension as reflecting all sources of influence that participants subjectively consider relevant to correctness. These influences can be based on memory access or on myriad other metacognitive cues that are more inferential in nature, such as font size; see Luna, Martín-Luengo, & Albuquerque, 2018.) In most cases, correct items (in the current context, those that are successfully recalled) have more subjective evidence than incorrect items. The vertical lines represent different confidence criteria. Thus, for an item to be assigned 75%, it must be associated with enough evidence to equal or exceed the 75% confidence criterion, but not to equal or exceed the 100% confidence criterion (in which case it would be assigned 100%). Note that there are only five criteria in this example, rather than ten as in Fig. 1 and Table 3.
The number of criteria was reduced simply to avoid the figure seeming too busy and is not important for the present purposes.
One interpretation of the area under the ROC curve (which A g estimates) is that it is equal to the likelihood that an observation drawn at random from the correct item distribution will be higher on the subjective evidence dimension than an observation drawn at random from the incorrect item distribution. In Fig. 2, two such pairs are shown, as c (a correct item drawn at random) and i (an incorrect item drawn at random), joined by a line with two arrowheads to indicate that they are part of the same pair. In the upper example, c exceeds i. In the bottom example, the opposite is true. It is straightforward to see that as the distributions separate, such that there is less overlap, cases of c > i will increasingly prevail over cases of c < i, until P(c > i) = 1. In other words, with no overlap of the distributions, there is perfect discrimination, and AUC will also be equal to 1. Conversely, if the distributions are drawn together until they completely overlap, then P(c > i) = P(c < i) = .5, which is also equal to AUC (chance diagonal). We provide a mathematical proof that P(c > i) is equal to AUC in the supplementary materials.
There is another way to interpret these pairs of observations and how they compare on the subjective evidence dimension. Specifically, for the c > i pairs, both confidence and accuracy are higher for c than for i, making the observation pair concordant. In contrast, for the c < i pairs, c is less than i on confidence, but higher than i on accuracy, making the pair discordant. Equation 2 indicates that the proportion of concordant pairs in the entire sample (i.e., P[c > i]) is equal to V. However, above we noted that P(c > i) is equal to AUC. Thus, V = AUC. Substituting AUC for V in Eq. 4 produces a very simple formula relating gamma and AUC: G = 2 × AUC - 1 (Eq. 8). Moreover, we can estimate AUC using Eq. 5 for A g, and then A g can be substituted in Eq. 8 in order to obtain an estimate of gamma: G = 2 × A g - 1 (Eq. 9). Equation 9 provides an alternative method for computing gamma that is no more complex to compute than the original formula proposed by Goodman and Kruskal (1954), but that does not rely on the concepts of concordance and discordance. Consequently, it is not subject to the associated problem of ties. However, it is also well known that A g has its own problems under certain circumstances (e.g., Grier, 1971; Simpson & Fitter, 1973). Because the trapezoidal rule necessitates drawing straight lines between the points on the ROC curve, AUC will be underestimated if the ROC is curvilinear, which is the usual case if the underlying evidence distributions are Gaussian. In short, the trapezoidal rule yields the minimum possible area under the ROC curve for a particular set of ROC coordinates. Some measures have been offered to compensate for this problem. For example, Donaldson and Good (1996) suggested A'r, which is the average of the minimum and maximum possible areas subtended by the ROC points. However, the computational procedure for this measure is considerably more complex than that for A g, and it cannot be used for all data sets (e.g., there are slope restrictions). Consequently, for most of the remainder of this article, our aim is to compare the overestimation of true gamma caused by the concordance/discordance formula to the underestimation of true gamma caused by the trapezoidal rule, to determine which approach yields the better estimate. In the Discussion section, we will justify our nonparametric approach to this problem.
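A minimal sketch of this ROC-based route to gamma, under the assumption that the cumulative HR-FAR pairs are supplied in order from the most conservative to the most liberal criterion, is given below; the rates are invented for illustration.

```python
# Illustrative sketch of estimating gamma via the ROC: A g from the trapezoidal
# rule (Eq. 5) and then G trap = 2 * A g - 1 (Eq. 9). Assumes the cumulative
# HR/FAR pairs are ordered from most conservative to most liberal criterion.
# The rates below are made up for demonstration.

def a_g(fars, hrs):
    """Trapezoidal area under the ROC, anchored at (0,0) and (1,1)."""
    xs = [0.0] + list(fars) + [1.0]
    ys = [0.0] + list(hrs) + [1.0]
    return sum(0.5 * (xs[k + 1] - xs[k]) * (ys[k] + ys[k + 1]) for k in range(len(xs) - 1))

def gamma_trap(fars, hrs):
    return 2 * a_g(fars, hrs) - 1

fars = [0.00, 0.05, 0.10, 0.30, 0.55]   # hypothetical false alarm rates
hrs = [0.37, 0.60, 0.80, 0.97, 0.99]    # hypothetical hit rates
print(round(gamma_trap(fars, hrs), 3))
```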
Overview of the simulations
Fig. 2. The vertical lines represent the confidence criteria associated with confidence levels 0, 25, 50, 75, and 100. The c and i pairs joined by the horizontal, double-headed arrows represent pairs of observations drawn at random, one each from the correct and incorrect item distributions, respectively. In the upper case, c has more evidence than i, making the pair concordant; the opposite is true in the bottom case, making the pair discordant.
Our strategy for determining which measure provides the best estimate of gamma required us to compute each estimate for multiple simulated "participants" under a variety of circumstances and then to compare the results to a true measure of gamma. Henceforth, we refer to the estimate derived from concordant and discordant pairs as G pairs, the estimate based on ROC curves and the trapezoidal rule as G trap, and the true value of gamma as G true. G pairs and G trap were computed under conditions that simulated a variety of high-powered experiments, each with 100,000 participants and different parameter settings, as detailed later. To simulate realistic experimental conditions, each participant rated only 100 items (50 correct and 50 incorrect items; accuracy = 50%) drawn from Gaussian evidence distributions. The SD of the incorrect item distribution was fixed at 1.0 for all simulations, whereas the SD of the correct item distribution was varied. Confidence criteria were placed on the evidence dimension, and on each cycle of the simulation (corresponding to one participant), 50 items were randomly selected from each of the incorrect and correct evidence distributions and their subjective evidence values were evaluated with respect to the confidence criteria, to create a frequency table analogous to Table 1 or 2. The numbers of concordant and discordant pairs were computed from the data in the table, and G pairs was computed using Eq. 1. To compute G trap, the data in the table were converted to cumulative frequencies, and the HRs and FARs at each confidence criterion were computed (Table 3). Once these rates had been obtained, G trap was computed using Eq. 9. The end result was 100,000 estimates of both G pairs and G trap, with each estimate being based on 100 items, from which the mean of each estimate could be computed for different underlying models with a varying set of parameters.
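A rough, self-contained sketch of one cycle of this simulation is given below; the parameter values (resolution, criterion placement, number of scale points) are illustrative stand-ins for the settings described in the text rather than the authors' exact code.

```python
# Rough sketch of one cycle of the simulation described above (one virtual
# participant): 50 correct and 50 incorrect evidence values are drawn from
# Gaussians, binned by equally spaced confidence criteria, and gamma is
# estimated both ways. Parameter values are illustrative, not the article's.

import numpy as np

rng = np.random.default_rng(1)
d_resolution, sd_correct = 2.0, 1.0            # high resolution, equal variance
criteria = np.linspace(-2.0, d_resolution + 2.0 * sd_correct, 9)  # 10-point scale

correct = rng.normal(d_resolution, sd_correct, 50)
incorrect = rng.normal(0.0, 1.0, 50)
c_bins = np.digitize(correct, criteria)        # confidence category per item
i_bins = np.digitize(incorrect, criteria)

# G pairs: concordant/discordant pairs across the binned ratings (Eq. 1)
conc = np.sum(c_bins[:, None] > i_bins[None, :])
disc = np.sum(c_bins[:, None] < i_bins[None, :])
g_pairs = (conc - disc) / (conc + disc)

# G trap: cumulative HR/FAR at each criterion, trapezoidal AUC, then Eq. 9
hrs = [(c_bins >= k).mean() for k in range(len(criteria), 0, -1)]
fars = [(i_bins >= k).mean() for k in range(len(criteria), 0, -1)]
xs, ys = np.r_[0.0, fars, 1.0], np.r_[0.0, hrs, 1.0]
g_trap = 2 * np.trapz(ys, xs) - 1

print(round(g_pairs, 3), round(g_trap, 3))
```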
The next step was to compute G true so that the accuracy of G pairs and G trap could be evaluated. There are a variety of methods to estimate G true . For the simplest (2 × 2) case, Masson and Rotello (2009) randomly selected 200,000 pairs of observations, one each from the correct and incorrect item distributions. They then compared the magnitudes of these two observations across all pairs, determining whether the pair was concordant or discordant (see Fig. 2), which allowed them to compute G true . Because real-valued numbers with high precision were used in these comparisons, there were few if any ties, thereby yielding an accurate gamma estimate.
Other methods can be used to estimate G true that take advantage of the insights offered in this article regarding the relationship between AUC and G true. That is, G true could be computed by first accurately estimating AUC and then converting that estimate to gamma by using Eq. 8. For example, if thousands of confidence criteria were used to derive A g, the process of computing the area becomes analogous to integration, so any underestimation of AUC would be negligible. However, an even better area estimate can be obtained by using the population parameters rather than by trying to minimize error in the sample estimate. Specifically, A z can be computed if the ROC curve is transformed into a zROC by calculating z-scores corresponding to each HR and FAR pair plotted on the ROC. If the evidence distributions are Gaussian, as they were in all our simulations, the zROC becomes a straight line, intercepting both the x- and y-axes. If the slope and y-intercept of the population-based zROC are known, A z can then be computed with the following equation (Stanislaw & Todorov, 1999; Swets & Pickett, 1982): A z = Φ(intercept / √(1 + slope^2)) (Eq. 10), where Φ ("phi") is the function that converts z-scores into probabilities. Because we fixed the SD of the incorrect item distribution at 1.0 in all simulations, the y-intercept was equal to the standardized distance between the means of the incorrect and correct item distributions, divided by the SD of the correct item distribution. The slope of the population-based zROC was equal to one divided by the SD of the correct item distribution. G true was then calculated by substituting A z for AUC in Eq. 8. Because the y-intercept and slope in Eq. 10 were population parameters for Gaussian distributions that we defined a priori, this method provides a perfect measure of AUC, and hence a perfect measure of gamma (G true).
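The following short sketch computes A z from the population zROC intercept and slope and converts it to G true via Eq. 8; the parameter values correspond to the high-resolution (2.0), unequal-variance (SD ratio 1.0:1.25) condition used in the simulations, and the code is ours rather than the authors'.

```python
# Small sketch of the population-based G true computation: A z from the zROC
# intercept and slope (Eq. 10), then G true = 2 * A z - 1 (Eq. 8). Parameters
# match the article's high-resolution (d = 2.0), unequal-variance (1.0:1.25) case.

from math import sqrt
from statistics import NormalDist

def g_true(mean_correct, sd_correct, sd_incorrect=1.0):
    intercept = mean_correct / sd_correct          # standardized distance on the zROC
    slope = sd_incorrect / sd_correct              # zROC slope
    a_z = NormalDist().cdf(intercept / sqrt(1 + slope ** 2))
    return 2 * a_z - 1

print(round(g_true(mean_correct=2.0, sd_correct=1.25), 3))
```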
As we noted earlier, we ran a variety of simulations testing different model parameters. The first set of 18 simulations assumed equal-variance Gaussian evidence distributions, whereas the second set of 18 simulations assumed unequal variances (total = 36 simulations). Specifically, the ratios of the SDs of the incorrect and correct item distributions in the first versus the second set of simulations were 1.0:1.0 and 1.0:1.25, respectively. An SD ratio of 0.8 (1.0:1.25) was chosen because research in recognition memory has demonstrated that a zROC with a slope of 0.8 fits the data well (e.g., Wixted, 2007).
Resolution was tested under two conditions, low and high, corresponding to standardized distances between the means of the evidence distribution of 0.5 and 2.0, respectively. In all simulations, the mean of the incorrect item distribution was fixed at 0 (SD = 1) on the evidence dimension. Thus, the means of the correct item distributions were 0.5 and 2.0 for the low-and high-resolution models, respectively.
Three levels of bias were tested: liberal, unbiased, and conservative. These different bias levels were created by varying the placement of the confidence criteria on the evidence dimension.
To determine the placements, we first specified the locations of the highest and lowest criteria. The lowest, most liberal criterion for any dataset necessarily yields an HR-FAR pair corresponding to the (1,1) point on the ROC (see Figs. 1 and 2 and the bottom panel of Table 3). This occurs because confidence judgments are usually required for all items, which means that 100% of both incorrect and correct items are assigned the lowest level of confidence or higher. Because the HR and FAR are necessarily equal to 1.0 regardless of the model assumed, it was not informative to include this criterion in the simulations. Instead, the lowest criterion was associated with the second lowest value on each scale. This criterion was placed at -2.0 on the evidence dimension for the liberal and unbiased cases, and at 0.0 for the conservative case (i.e., at the mean of the incorrect item distribution). The highest criterion for the unbiased and conservative cases was equal to the resolution value (either 0.5 or 2) plus two times the SD of the correct item distribution. For the liberal case, the highest criterion was equal to the resolution value (i.e., at the mean of the correct item distribution). The remaining criteria, the number of which varied according to which type of scale was being simulated, were spaced at equal intervals between the highest and lowest criteria. This methodology ensured that criteria were spread across the full range of both distributions if responding was unbiased, regardless of resolution or the SD of the correct item distribution. It also ensured that both the lowest HR for the liberal case and the highest FAR for the conservative case were equal to 0.5, again, regardless of the other parameters that were varied.
Schematic depictions of several models with different parameters and their associated ROC curves are shown in Figs. 3 (equal-variance model) and 4 (unequal-variance model). The top panel of Fig. 3 displays the equal-variance model corresponding to unbiased responding, a 6-point scale, and low resolution. The bottom panel displays the equal-variance model corresponding to conservative responding, high resolution, and a 10-point scale. In comparing the bottom panel with the top panel, note that the ROC curve is considerably more bowed in the bottom panel, which occurred because of the higher level of resolution. Also, the confidence criteria are shifted to the right (most liberal criterion at 0 rather than -2 on the evidence dimension). This means that the points on the ROC do not represent the full range over which the items are distributed on the underlying evidence dimension. However, at high levels of resolution, this incomplete representation does not appear to affect the ROC much. That is, even though the conservative responding means that the highest FAR is only 0.5 on the ROC, the high resolution means that the HR is already close to 1.0. Now consider the schematic depictions of the unequalvariance model shown in Fig. 4. The top panel corresponds to the case of a 101-point scale, low resolution, and unbiased responding. Note that the ROC for the unequal-variance case is not symmetric with respect to the chance diagonal, unlike the ROCs associated for the equal-variance models in Fig. 3. Note also that with a 101-point scale, the distances between the points on the ROC are much smaller, which should yield an accurate estimate of G trap because very little of the true AUC is cut off by the straight lines joining the ROC coordinates. In contrast, the model in the bottom panel has a similar level of low resolution, but there are only five criteria (corresponding to a 6-point scale) and responding is liberal. Comparing the bottom panel with the top one, note that the large distance between the points on the ROC coupled with the liberal responding means that very few points represent the ROC in the conservative (bottom-left) region, where the bowing is greatest. Consequently, the straight line joining the most conservative ROC point and the (0,0) point cuts out a significant amount of area, suggesting that G trap may not be very accurate in cases of low resolution, few confidence criteria, and liberal responding. We will return to this point later.
Equal-variance model
The results of the simulations for the equal-variance model are shown in Fig. 5. The top versus bottom panels of Fig. 5 display the results for low (0.5) versus high (2.0) resolution, respectively. G true is shown as the horizontal dashed line in each panel. Note that in all cases, regardless of the resolution level, G pairs overestimated G true , whereas G trap underestimated it. Note also that as the number of points on the scale increased, the accuracy of both estimates improved (i.e., the unsigned deviation from G true was reduced). Unsurprisingly, increasing resolution had the effect of substantially increasing both G true and the two estimates of gamma.
On the other hand, the effect of bias on each estimate was less straightforward. First consider the effect of bias at low resolution (top panel of Fig. 5). For G pairs, unbiased responding led to poorer estimates than did conservative or liberal responding for the 6-point scale, equivalent estimates for the 10-point scale, and better estimates for the 101-point scale. On the other hand, for G trap, unbiased responding led to better estimates than either conservative or liberal responding regardless of the number of scale points. However, this advantage for unbiased responding increased as the number of scale points increased. Now consider the effect of bias at high resolution (bottom panel of Fig. 5). For G pairs, the pattern was similar to the pattern observed at low resolution. That is, unbiased responding led to worse estimates than either liberal or conservative responding for the 6-point scale. This difference was reduced for the 10-point scale and was slightly reversed for the 101-point scale, although all estimates with 101 scale points were close to G true. For G trap, the pattern was opposite to that observed at low resolution. That is, unbiased responding produced worse accuracy than either conservative or liberal responding for the 6-point scale, the difference was reduced for the 10-point scale, and slightly reversed for the 101-point scale. However, as with G pairs, all levels of bias produced estimates that deviated little from G true for scales with a large number of response categories.
Most important for the present purposes is the relative accuracy of G trap and G pairs . To facilitate this comparison, asterisks have been added above the data points in both panels of Fig. 5 to indicate which estimate produced the least unsigned deviation from G true . As Fig. 5 shows, G trap yielded a better estimate in eight out of nine cases for low resolution (89%) and in nine out of nine cases for high resolution (100%; total for the equal-variance model = 17/18 = 94%).
Unequal-variance model
The results of the simulations for the unequal-variance model are shown in Fig. 6. As with Fig. 5, the top versus bottom panels of Fig. 6 show the results for low (0.5) versus high (2.0) resolution, respectively, and G true is shown as the horizontal dashed line in each panel. As with the equal-variance model, G pairs tended to overestimate G true, whereas G trap tended to underestimate it. Also as before, increasing resolution increased G true and both gamma estimates. Generally speaking, increasing the number of points on the scale improved both gamma estimates, which also was true of the equal-variance model.
The effect of bias was again less straightforward. For G pairs at low resolution, liberal responding tended to give the best estimates, with the exception of the 101-point scale condition, for which unbiased responding was best. The same pattern was evident for high resolution. For G trap at low resolution, on the other hand, conservative responding tended to produce the best estimates, with the exception of the 101-point scale, for which unbiased responding was slightly better. However, at high resolution, liberal and conservative responding produced approximately equal levels of G trap accuracy, regardless of the type of scale. Compared to biased responding, unbiased responding Asterisks are again displayed in Fig. 6 to indicate which of the two gamma estimates, G pairs or G trap , was more accurate (i.e., produced the lesser unsigned deviation from G true ). For low resolution, G trap was more accurate than G pairs in six out of nine cases (67%). The exceptions were cases of liberal responding. The reason that liberal responding produced poor estimates of G trap with the unequal-variance model at low resolution can be understood by examining the bottom panel of Fig. 4. With an unequal-variance model, the ROC bows more from the diagonal in the conservative region (i.e., the region associated with low HR and FAR values) than in the liberal region (i.e., the region associated with high HR and FAR values). However, because responding is liberal, there are few (or no) points on the ROC representing that bowed region. Consequently, the straight line extending from the most conservative ROC point to the (0,0) point cuts out a significant portion of the most bowed region of the ROC, causing G trap to underestimate G true .
For high resolution, G trap was more accurate than G pairs in eight out of nine cases (89%). The exception was again a case of liberal responding in which, as with low resolution, there were few (or no) points representing the conservative region of the curve. However, as we noted earlier, the impact of this poor representation in the high-resolution case was not as great as in the low-resolution case, due to the nature of the ROC curves (i.e., the magnitude of the reversal was very small: 0.0008). The intuition for this fact can be obtained by examining the bottom panel of Fig. 3 (although Fig. 3 depicts an equal-variance model, it still highlights the point that conservative responding has little effect on G trap if resolution is high). Although there are no points representing any part of the subjective evidence dimension lower than 0 (where FAR = 0.5), the impact on G trap is small because almost the whole of the correct item distribution has been mapped out at higher evidence levels.
In other words, the HR is close to 1.0 at the most liberal confidence criterion, even though all the confidence criteria are quite far up the subjective evidence dimension.
Discussion
There has been decades-long debate between the so-called probabilistic and signal detection camps regarding the best measure of metacognitive monitoring. The former camp, mostly led by Nelson (1984, 1986, 1987), has promoted gamma computed with Goodman and Kruskal's (1954) original concordance/discordance formula. As an alternative, others have suggested using area or distance measures derived from SDT (e.g., Benjamin & Diaz, 2008; Higham, 2007, 2011; Masson & Rotello, 2009; Swets, 1986). We have provided mathematical proof that the two approaches are far more similar than has previously been assumed (see the supplementary materials). Specifically, true gamma is simply a linear function of the true area under the ROC curve (see Eq. 8). This means that both gamma and AUC in their true form are sensitive to the same metacognitive information and correlate perfectly across both participants and items. Thus, in their true form, there is no logical basis for preferring one measure over the other. If the two measures are essentially the same, why have their relative merits been a subject of contention in the literature for so long? The problem lies not with the inherent superiority of one approach over the other. Instead, the problem lies in the method used to estimate the true values. Under the probabilistic approach, gamma has traditionally been estimated using the concepts of concordant and discordant pairs. Conversely, signal detection measures have typically been derived by estimating the distance between the signal and noise distributions (e.g., d' or d a) or AUC (e.g., A z or A g). All of these measures are imperfect to varying degrees. The original gamma formula is distorted by ties and can overestimate the true gamma value quite substantially, particularly if there are only a few points on the metacognitive scale. A g underestimates the true area under the ROC curve, particularly if there are few scale points and resolution is high. A z and d a provide accurate measures of discriminability as long as the underlying distributions are normal. However, if the normality assumption is violated, these measures also become grossly inaccurate. Hence, the question that researchers must ask themselves is not whether they should compute gamma versus some signal detection measure of resolution, as if these are opposing alternatives. The question should be which method should be used to estimate the true value of gamma, distance, or AUC in a given research context.
In an attempt to address this important question, we conducted 36 simulations involving 3,600,000 virtual participants to compare the relative accuracy of gamma computed with the original concordance/discordance formula against gamma computed with ROC curves and the trapezoidal rule. In all but five of these simulations, the method of computing gamma using area under the ROC curve was superior. That is, compared to gamma estimated with the concordance/discordance formula, computing AUC with the trapezoidal rule, doubling it, and subtracting one yielded less unsigned deviation from the true gamma value in 86% of our simulations. This superiority was true for myriad conditions. Across the 36 simulations, we manipulated the relative variances of the correct and incorrect item distributions, response bias, resolution, and the number of response categories on the confidence scale. The fact that ROC curves yielded the better gamma estimate across all these different conditions suggests that gamma computed in this way can be considered, in general, to be a better estimate of resolution than gamma computed with the original formula. Consequently, the former should be favored as the method of estimating resolution except in very specific circumstances (see the Limitations section).
Although the difference in the amounts that G pairs and G trap deviated from G true may seem negligible in some cases, particularly if a large number of scale values were used, the relative deviations were not. To illustrate, we compared the unsigned deviations (from G true) for G trap and G pairs for the 31 (of 36) cases in which G trap had higher accuracy. These comparisons indicated that G trap was 3.41, 20.54, 34.56, and 4.06 times more accurate than G pairs in the equal-variance/low-resolution, equal-variance/high-resolution, unequal-variance/low-resolution, and unequal-variance/high-resolution simulations, respectively.
Other criticisms might be that researchers, for the most part, are interested in whether gamma differs between experimental conditions or whether it is significantly different from zero, not in the true value of gamma. Given these interests, why is it so important to be concerned about accurate measurement of gamma? Our response to the first criticism is that the over/underestimation of gamma is not consistent across different contexts, which could result in spurious experimental differences being reported. As our opening example in the introduction reveals, G pairs is generally greater for smaller than for larger contingency tables, even for the same data set. Thus, if gamma computed in an experimental condition with data arranged in a small contingency table (e.g., Report/Withhold × Accurate/Inaccurate) is compared to gamma in another experimental condition with data arranged in a larger contingency table (e.g., 1-6 Confidence × Accurate/Inaccurate), the former is likely to be larger than the latter purely as an artifact of the table size. Regarding the second criticism, overestimation or underestimation of gamma could produce spurious differences when gamma is compared against zero, leading researchers to conclude that gamma is above or below chance, respectively, when in fact it is not. This problem is particularly evident with small contingency tables. In our view, for these reasons and others, it is always preferable to estimate gamma as accurately as possible.
Fig. 6. Means for two gamma estimates across 18 simulations (each based on 100,000 virtual participants) assuming unequal-variance Gaussian evidence distributions (1:1.25 ratio for signal and noise standard deviations). Low resolution (standardized difference between the means of the signal and noise distributions = 0.5) is shown in the top panel, whereas high resolution (standardized difference = 2.0) is shown in the bottom panel. At each level of resolution, response bias and the number of scale points were varied. The true value of gamma is the horizontal dashed line in each panel. Asterisks indicate which of the two gamma estimates (G trap = gamma estimated via ROC curves and the trapezoidal rule, G pairs = gamma estimated by the original concordance/discordance formula) deviated the least from true gamma.
The number of points on the metacognitive scale was one of the most important factors affecting the accuracy of both G pairs and G trap . Nelson (1984) argued that, although a correction may be needed for 2 × 2 tables so that the sample gamma (G pairs ) is an unbiased estimate of the population gamma (G true ), corrections were not needed for larger tables. The simulations reported here indicate that this statement is clearly not true; a 2 × 6 table, associated with a 6-point scale, showed large overestimations for G pairs . There was also a moderate amount of overestimation for the 10-point scale (10 × 2 table). Even the 101-point scale (101 × 2 table) yielded a small amount of overestimation, particularly if there was response bias. G trap fared somewhat better but was also most distorted with the fewest scale points.
How might researchers overcome the estimation problem associated with few values on a metacognitive scale? One obvious option would be to ensure that experimental participants are provided with a full percentage scale and are encouraged to use any value between 0 and 100. Our simulations showed that these scales led to accurate estimates. One potential drawback with this approach is the introduction of measurement error: Scales with many values tend to have lower reliability than those with fewer points (e.g., Bishop & Herron, 2015). Another issue is that people tend to prefer 10-point scales (e.g., Preston & Colman, 2000). Therefore, given the opportunity, 101-point scales may be reduced to 10-point scales (i.e., participants only respond with values that are evenly divisible by 10: 10, 20, 30, etc.). To avoid these issues, an alternative approach may be to avoid explicit response categories altogether by having participants use a graphical interface to make metacognitive ratings. For example, if a computer is used to collect metacognitive ratings such as JOLs in an experimental setting, participants may be presented with a "slider" on the computer screen with labels ranging from not at all likely to remember on the far left to very likely to remember on the far right (see, e.g., Metcalfe & Miele, 2014). The number of pixels from the starting point at the far left of the scale to the point at which participants click to indicate confidence could then be calculated as a confidence measure. With modern computers, this would amount to a scale with even more points than a scale with 101 response categories and might avoid excessive measurement error and participants' tendency to simplify scales with a large number of explicit numerical values.
Variability of measures
Nelson (1984) argued that A g is too variable to be used in most metacognitive experiments because of the limited number of items. In Nelson's own words: "for nonparametric SDT to be appropriate in the feeling-of-knowing situation, it will be necessary to have many more observations per subject than currently are obtained" (pp. 122-123). Later, he argues that in most metacognitive experiments "the typical number of observations has been roughly one or two dozen per subject. . . . This number of observations, particularly when divided up via multilevel feeling-of-knowing ratings, is much too small for nonparametric SDT" (p. 123).
Thus, according to Nelson (1984), it is not possible to obtain a stable per-participant estimate of resolution unless there are 100 or more observations, due to the inherent variability of A g (and hence G trap ). However, in our view, the more appropriate approach to understanding the effect of variability would be to compare the relative variability of measures such as G pairs and G trap rather than focusing solely on one measure or the other. Our simulations allowed us to do just that; that is, it was possible to compare the between-subjects standard deviations for both G pairs and G trap across our 100,000 virtual participants in each simulation. The results of this comparison indicated that, for both the equal- and unequal-variance Gaussian models with low resolution (standardized distance between the evidence distributions = 0.5), there was less variability for G trap than for G pairs in all cases, whereas the opposite was true for all cases of high resolution (standardized distance = 2.0). However, if the magnitudes of the differences are considered, G trap was the less variable measure overall; that is, collapsing over the equal- and unequal-variance models, the mean advantage that G trap had over G pairs at low resolution was 0.023, whereas the mean advantage that G pairs had over G trap at high resolution was only 0.008, nearly a threefold difference.
One criticism with this analysis is that each of our simulations involved 100 items (50 correct, 50 incorrect), and Nelson (1984) claimed that 100 items or more would make nonparametric SDT analyses acceptable. Hence, the real question is how the variability of each gamma estimate compares when there are fewer items. To answer this question, we repeated all 36 simulations reported earlier with only 20 items per participant (10 correct, 10 incorrect). We also reduced the number of virtual participants from 100,000 per simulation to just 40. If Nelson's claims are correct, then the variability of G trap should become large and unmanageable with these parameter settings and should far exceed that of G pairs . However, although the per-participant standard deviations increased with the reduction in items, they increased for both G trap and G pairs . In terms of the comparison of the two measures, the results were very similar to the previous results; that is, there was less variability for G trap than for G pairs for both the equal- and unequal-variance Gaussian models in all cases at low resolution, whereas the opposite was true for all cases of high resolution. Again, however, if the magnitudes of the differences are considered, G trap was the less variable measure overall. As before, collapsing over the equal- and unequal-variance models, the mean advantage that G trap had over G pairs at low resolution was 0.020, whereas the mean advantage that G pairs had over G trap at high resolution was 0.018.
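As a rough illustration of this kind of variability check, the sketch below simulates a handful of virtual participants with Gaussian evidence distributions, 20 items, and a 6-point scale, then compares the between-subject standard deviations of the two estimates. The criterion placement, resolution level, and all other numeric settings are our own assumptions, not the authors' exact simulation parameters.

```python
# Hedged sketch of a small variability comparison between G_pairs and G_trap under an
# assumed equal-variance Gaussian evidence model; all settings are illustrative.
import numpy as np
rng = np.random.default_rng(1)

def gammas_from_ratings(ratings, correct):
    """Return (G_pairs, G_trap) from item-level confidence ratings and 0/1 accuracy."""
    r_c, r_i = ratings[correct == 1], ratings[correct == 0]
    diff = r_c[:, None] - r_i[None, :]
    conc, disc = np.sum(diff > 0), np.sum(diff < 0)
    g_pairs = (conc - disc) / (conc + disc) if (conc + disc) else np.nan
    # Type 2 ROC: cumulate hits/false alarms from the highest rating category downwards.
    cats = np.sort(np.unique(ratings))[::-1]
    hr = np.concatenate(([0.0], [np.mean(r_c >= c) for c in cats]))
    far = np.concatenate(([0.0], [np.mean(r_i >= c) for c in cats]))
    g_trap = 2.0 * np.trapz(hr, far) - 1.0
    return g_pairs, g_trap

# Assumed settings: low resolution (d = 0.5), 10 correct + 10 incorrect items,
# 40 virtual participants, equally spaced criteria defining a 6-point scale.
d, n_per, n_subj, edges = 0.5, 10, 40, np.linspace(-1.5, 1.5, 5)
results = []
for _ in range(n_subj):
    evidence = np.concatenate([rng.normal(d, 1.0, n_per), rng.normal(0.0, 1.0, n_per)])
    correct = np.concatenate([np.ones(n_per, int), np.zeros(n_per, int)])
    ratings = np.digitize(evidence, edges) + 1          # confidence 1..6
    results.append(gammas_from_ratings(ratings, correct))
g_pairs, g_trap = np.array(results).T
print(np.nanstd(g_pairs), np.nanstd(g_trap))            # between-subject variability
```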
Overall, these comparisons of the between-subjects standard deviations of G trap and G pairs indicate that, if anything, G trap is the less variable measure regardless of the number of items or the number of virtual participants that contribute to the estimates, at least with Gaussian evidence distributions. Hence, there is no evidence that nonparametric SDT should be rejected on the basis of high variability, as Nelson (1984) claimed, regardless of whether one is computing A g or G trap as the measure of resolution.
Parametric versus nonparametric measures of resolution
As we noted earlier, if the underlying evidence distributions are Gaussian and the true (population) values of the zROC's y-intercept and slope are entered into Eq. 10, A z is a perfect estimate of AUC. Indeed, the A z value from Eq. 10 was substituted for AUC in Eq. 8 in order to compute G true for our simulations, the gold standard against which G trap and G pairs were compared. Why, then, did we use the trapezoidal rule to estimate gamma in our simulations rather than A z , particularly since we assumed Gaussian distributions for our simulations, anyway? There were two reasons for this decision. First, very little is known about the nature of the evidence distributions in metacognition. In one of the few formal tests that have been conducted to determine the nature of these distributions, Higham (2007) found that an equal-variance Gaussian model was a good fit for the metacognitive ROC curves generated by performance on the SAT. However, whether this finding is generally true across the myriad ratings that are used in modern metacognitive research is an open question.
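For readers who want to see the parametric route in code, the short sketch below computes A z from a zROC y-intercept and slope using the standard Gaussian-model formula, which we take to be what Eq. 10 expresses, and converts it to a gamma value by doubling the area and subtracting one (our reading of Eq. 8). The intercept and slope values are arbitrary illustrations, not the ones used in the simulations.

```python
# Hedged sketch: gamma from the parametric A_z measure, assuming Gaussian evidence
# distributions. A_z = Phi(a / sqrt(1 + b^2)), with a the zROC y-intercept and b its
# slope; gamma = 2 * AUC - 1. The intercept/slope values below are illustrative only.
from scipy.stats import norm

def gamma_from_zroc(intercept, slope):
    a_z = norm.cdf(intercept / (1.0 + slope ** 2) ** 0.5)   # area under the Gaussian ROC
    return 2.0 * a_z - 1.0

print(gamma_from_zroc(intercept=0.5, slope=0.8))   # a low-resolution, unequal-variance example
print(gamma_from_zroc(intercept=2.0, slope=1.0))   # a high-resolution, equal-variance example
```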
Furthermore, some authors have suggested that signal detection measures of resolution are inappropriate in the first place, because there may be only a single distribution of items rather than two (signal and noise). The reasoning here seems to be that, unlike in tasks that lend themselves easily to signal detection analyses, such as old-new recognition, there are no distractors in the usual sense of the word in recall tasks; therefore, there is only one distribution of items (e.g., Murayama, Sakaki, Yan, & Smith, 2014, note 1). The spirit of this single-distribution assumption is captured in Jang, Wallsten, and Huber's (2012) stochastic model of JOL accuracy. However, in our view, this reasoning confuses Type 1 (stimulus-contingent) and Type 2 (response-contingent) discrimination. Metacognitive discrimination is essentially a Type 2 SDT task involving accuracy discrimination, so distractors are not defined by their stimulus characteristics (e.g., old vs. new items), but rather by their response characteristics (e.g., correct vs. incorrect responses on a criterial test). In the context of recall, then, the distractors are errors of commission or omission on the memory test (see Arnold et al., 2013; Higham, 2007, 2011). Nonetheless, for the present purposes, the important point is that there is some doubt regarding the nature of the evidence distributions. Consequently, we thought it would be hasty to jump to the conclusion that the distributions are unquestionably Gaussian. Such an assumption seems plausible, which is why we adopted it for the simulations that we reported, but it is not a certainty. Because neither G trap nor G pairs is reliant on any particular evidence distribution shape, Gaussian or otherwise, these were the measures we chose to compare. However, it should be noted that if the ROC data conform to a Gaussian model (and there are fairly straightforward statistical methods for testing this assumption; see, e.g., DeCarlo, 2003), then gamma estimated via A z would certainly be more accurate than gamma estimated via A g .
The second reason we focused on nonparametric measures is more pragmatic. Unlike recognition tasks, in which the number of targets and distractors making up the signal and noise distributions are defined a priori by the experimenter and are often equated (i.e., 50% targets, 50% distractors), the correct versus incorrect evidence distributions in metacognitive applications of SDT are determined by participants' accuracy on the criterial test. Depending on the experimental circumstances, accuracy can be extreme, which would result in only a few items populating one distribution or the other. The high variability in HRs and FARs derived from only a few items in cases of extreme accuracy can result in many zeroes and/or ones in the dataset. For example, suppose participants are engaged in a very difficult recall task with 100 items and they are informed in advance that the test will be difficult. Because the memory test is hard, suppose that accuracy is only 10%. Furthermore, because participants are told about the difficulty of the upcoming memory test, the few correct responses that are made on the test are assigned the lowest JOL. Under these circumstances, all the HRs on the metacognitive ROC (apart from the [0,0] point) would be equal to 10/10 = 1.0.
The problem with HRs and/or FARs equal to either 0 or 1 is that parametric estimates such as d', d a , and A z are undefined. Of course, some commonly used corrections can be applied to the frequencies prior to computing the HRs and FARs, to avoid 0s and 1s. However, when the frequencies underlying these rates are low, these corrections can distort the rates considerably (see Hautus, 1995, for cases of distortion caused by common corrections even when frequencies are not low). To illustrate, consider again the participant who produced only ten correct responses on a difficult recall test that were all assigned the lowest JOL. If the common 1/(2N) rule is applied, the HRs = 10/10 = 1.0 are corrected to 1.0 − 1/(2 × 10) = .95. If the participant's performance was even worse, such that there were only five correct responses (5% recall accuracy), the 1/(2N) rule would adjust the HRs from 1.0 to .90. Although these examples are extreme (i.e., very few correct responses), they illustrate the point that in the context of metacognitive discrimination, the magnitude of the correction using the 1/(2N) rule is confounded with accuracy on the criterial test. Such confounding means that the correction would greatly distort all parametric indices if accuracy were extremely high or low. The situation would be even worse if both the HRs and the FARs required correction (as in cases of HR = 1.0 and FAR = 0). Critically, however, HRs and/or FARs equal to 0 or 1 do not need to be corrected at all in order to compute either A g or G trap . For this reason, we recommend avoiding corrections altogether in the context of metacognitive research and relying on nonparametric estimates of resolution.
Negative resolution
We have focused solely on positive relationships between metacognitive ratings and accuracy. However, in rare circumstances this relationship can be negative, such as when deceptive general-knowledge questions are used (e.g., Higham & Gerrard, 2005; Koriat, 2018). With such questions, people typically respond with, and are more confident in, incorrect rather than correct answers (e.g., many people confidently, but erroneously, believe that Sydney is the capital of Australia). This results in negative resolution, and if gamma is computed with the original concordance/discordance formula, it assumes values less than 0. Is computing gamma with ROC curves still possible under these circumstances? The short answer is "yes." The ROC curves would bow below, rather than above, the chance diagonal, yielding area measures that are less than 0.5. G trap can be computed in the same way as before: doubling A g and subtracting 1, resulting in negative G trap values. To illustrate with an example, suppose that participants answer some deceptive questions and provide retrospective confidence ratings regarding the accuracy of their answers. Because they assign higher confidence ratings to incorrect than to correct responses, suppose that the area under the metacognitive ROC curve is only 0.3. If this value is doubled and 1 is subtracted from the product, the resultant gamma value would be 0.3 × 2 − 1 = −0.4. In the extreme case, AUC would be equal to 0 and gamma would be equal to −1.
Limitations
One drawback to computing G trap instead of G pairs is that G trap can only be used in situations in which there are two outcomes on the criterial test (e.g., correct vs. incorrect recall). Hence, G trap cannot be used to estimate resolution for criterial tests such as trials to criterion or reaction times. However, the vast majority of research in metacognition focuses on resolution computed with respect to correct and incorrect responses, so this is unlikely to pose a significant problem in most situations.
Our simulations showed that G trap does not perform well if there is a combination of low resolution, unequal-variance Gaussian evidence distributions, and liberal responding. With this combination of factors, G trap is a poorer estimate of G true than is G pairs . Indeed, four of the five cases in which G trap was less accurate than G pairs in our simulations occurred with the unequal-variance Gaussian model and liberal responding. The best way to identify cases such as these is to construct an ROC curve of the data, as such curves provide information pertaining to the levels of all three variables: Resolution is indicated by the extent to which the ROC curve bows from the chance diagonal; the shape of the ROC curve gives an indication of the nature of the underlying evidence distributions (and can be formally evaluated using a goodness-of-fit test); and the level of bias can be determined by where the points are clustered on the ROC. Of course, there are limitations to this analysis, as well. For example, if responding is highly biased, portions of the ROC curve will not be represented by any points, so it will be difficult or impossible to get an accurate indication of the full shape of the ROC curve. Nonetheless, if the ROC coordinates are clustered in either the bottom left (conservative) or top right (liberal) portion of the ROC, then researchers will be alerted to response bias. More generally, ROC curves usually provide an excellent visual representation of metacognitive data. In our view, constructing an ROC should be the first step researchers take when deciding on an analysis strategy.
Author note Portions of this research were presented at the 56th Annual Meeting of the Psychonomic Society, Chicago, IL.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
"Mathematics"
] |
Divergence-Based Segmentation Algorithm for Heavy-Tailed Acoustic Signals with Time-Varying Characteristics
Many real-world systems change their parameters during the operation. Thus, before the analysis of the data, there is a need to divide the raw signal into parts that can be considered as homogeneous segments. In this paper, we propose a segmentation procedure that can be applied to a signal with time-varying characteristics. Moreover, we assume that the examined signal exhibits impulsive behavior, thus it corresponds to the so-called heavy-tailed class of distributions. Due to the specific behavior of the data, classical algorithms known from the literature cannot be used directly in the segmentation procedure. In the considered case, the transition between parts corresponding to homogeneous segments is smooth and non-linear. This makes the segmentation algorithm more complex than in the classical case. We propose to apply the divergence measures that are based on the distance between the probability density functions for the two examined distributions. The novel segmentation algorithm is applied to real acoustic signals acquired during coffee grinding. Justification of the methodology has been performed experimentally and using Monte-Carlo simulations for data from the model with heavy-tailed distribution (here the stable distribution) with time-varying parameters. Although the methodology is demonstrated for a specific case, it can be extended to any process with time-changing characteristics.
Introduction
Many real-world systems change their parameters during the operation. It could be a continuously progressing change (like the start-up of a machine) or a switch from regime A to another regime B (for example, a loaded/unloaded machine). Analysis of such time-varying processes (using acquired data) is difficult. If the analyzed data have a complex structure, then before further analysis they should be divided into simpler subsignals. That requires the use of different methods. Such approaches are commonly called signal segmentation [1][2][3].
The task of segmentation considered in this paper is more general than the classical segmentation, where one is looking for a moment where characteristics of the signal have changed. Usually basic statistics (mean, variance, kurtosis, etc.) or more advanced features are used as criteria for splitting the signal into two or more homogeneous parts.
As mentioned, the reasons for using segmentation may be very different. It is commonly used as pre-processing for a non-stationary signal which consists of stationary segments. In real data, there are many situations related to this problem. A good example is a machine that may change the regime of operation or a car with a manual gearbox where changing gear may be associated with a change of internal conditions in the system. Thus, it requires separate treatment.
It is important to emphasize that, due to the nature of the signal, this procedure does not split the entire signal into homogeneous parts. Nevertheless, the method allows indicating homogeneous fragments of the time series that correspond to the parts of the signal with stabilized parameters.
In the proposed procedure, we divide the original signal into segments of a priori predefined length. For each segment, we calculate its empirical pdf and utilise the divergence (called the Jeffreys distance) to measure the distance between the calculated empirical densities. Finally, we differentiate the Jeffreys distance and use the log-likelihood ratio to detect the change point. The introduced segmentation algorithm is applied to real acoustic signals acquired during coffee grinding. The proposed procedure can be summarized in three steps: the estimation of the pdf for data corresponding to the selected segments, calculation of the divergence measure, and identification of the change points for data representing the empirical divergence using one of the classical segmentation algorithms. Justification of the methodology has been performed experimentally and using Monte-Carlo simulations for data from the model with heavy-tailed distribution (here the stable distribution) with time-varying parameters. Although the methodology is demonstrated for a specific case, we hope it is universal and can be extended to any process with time-changing characteristics. Examples of such signals could be seismic signals (vibrations with damping), speech signals, or vibrations in mechanical systems with processes such as cutting, compressing, or crushing.
The rest of the paper is organized as follows. In Section 2 we formulate the problem and provide information about the performed experiment. In Section 3 we present the methodology used in the paper, including the preliminary study, a reminder of the stable distribution and divergences. Moreover, we introduce the steps of the proposed segmentation procedure. Section 4 is devoted to the analysis of the real signals acquired during coffee grinding, while in Section 5 we present the results of the simulation study. In Section 6 we discuss the results. Section 7 concludes the paper.
Problem Formulation
In this paper, we consider a non-stationary, highly impulsive, and energetic signal with the distribution stabilizing close to Gaussian. To confirm such an assumption, some preliminary analysis of real data has been performed, see Section 3.1.
The transition between the process with property A (later called process A) and the process with property B (process B) is smooth, as there is no sharp boundary. This makes change recognition difficult. We may say that the process at time t is more A than B, or vice versa. However, as the transition is nonlinear and at some point stabilizes, it may be used as a criterion for segmentation.
As an illustration, we will use the acoustic signals captured during the grinding of coffee beans in a grinder. At the beginning, due to the cutting of beans by sharp spinning knives in the grinder, the process is very noisy and cutting is an impulsive process. Once the coffee beans are ground, the material in the grinder is more powdery than grainy. Thus, the acoustic signature of the grinding process is no longer impulsive nor energetic. The key issue is how long the grinding should be performed to achieve a satisfactory structure of the ground coffee. To validate our procedure, many experiments have been performed with various durations of grinding and after each of them a photo of the structure of coffee was taken and analyzed. A discussion on that is presented in Section 6. Finally, we proposed a model of the real signal and Monte Carlo simulations have been performed, too.
From the mathematical point of view, the problem can be formulated in the following way. Let us imagine that the given observations correspond to some theoretical time series (i.e., a process with discrete time) {X_t}_{t∈Z} defined as

X_t = Z_{t,θ(t)},  t ∈ Z,  (1)

where for each t, s ∈ Z the random variables Z_{t,θ} and Z_{s,θ} are independent and have the same distribution. Here, θ is the parameter of the distribution (in a one- or multidimensional space); however, we assume that it depends on the time point, i.e., one can write θ = θ(t).
The simplest case is when θ is a constant value. In that case, the segmentation is not needed, as the data are homogeneous and constitute a sample of an independent identically distributed (i.i.d.) time series. Another relatively simple case is when θ(t) is a constant, one- or multidimensional-valued, function on intervals. In such a case, the segmentation of the corresponding set of observations seems to be relatively easy and reduces to the identification of the points when the parameter θ(t) changes its value. In this case, we can determine the structure change point by analyzing some statistics (depending on the interpretation of the parameter θ), e.g., the estimator of the θ parameter, calculated for the given time windows. Clearly, in the case when the θ(t) parameter is a one-dimensional valued function, the segmentation is simpler than it is for a two- or even multidimensional one. In the latter case, one considers the problem of structure break point detection in the multidimensional space. The case when θ(t) is a constant function on intervals can be generalized to the case when it is any deterministic function (one- or multidimensional valued). In such a case, the segmentation (i.e., division of the set of observations into homogeneous parts) seems to be much more difficult. The segmentation gives us quasi-homogeneous parts, i.e., the parts where the θ(t) parameter has relatively small fluctuations and the behavior of θ(t) is significantly different for the segmented parts. The problem seems to be more complicated when θ(t) is a multidimensional valued function or a random variable. In that case, advanced statistical methods need to be applied to identify the significant change in the signal. The analysis of the estimator of θ(t) for a given time window may not be enough and other statistics should be examined to identify the structure change point. This is the case considered in this paper. As an example of a distribution whose parameters change in time, we consider the stable distribution, which is useful for the analysis of heavy-tailed distributed data. More details of this distribution are presented in the next section. However, this methodology can be extended to any other distribution.
Experimental Illustration of the Problem
To illustrate the problem and to validate our procedure using experimental data, we perform several experiments related to coffee bean grinding. Using a popular coffee grinder for domestic use, we prepared dozens of coffee samples with comparable volume and quality parameters. For each sample, the acoustic signal has been registered using a mobile phone, see Figure 1. It was found that approx. 30 s is enough to obtain coffee powder. Conditions of each experiment (not critical here) were approximately similar: the same amount of coffee, the same type of coffee, the same data acquisition device located in the same position and direction, ca. 1 m from the grinder. During the experiment, one may hear how a "sharp" sound related to the "cutting" of coffee beans by rotating knives in the grinder changes into a much lower level of noise with a rather narrowband character (related to the rotating knives). For several samples, grinding was stopped at T = [5, 10, 15, 20, 25, 30] s and photos of the coffee bean fragmentation phase were taken for validation purposes. After acquisition, the data were transferred to Matlab where appropriate algorithms have been applied. All numerical experiments have also been performed in Matlab.
Methodology
The acoustic signals analyzed in the paper, obtained through the experiment described in the previous section, show some special properties. It should be emphasized that the mentioned experiment was conducted many times and the nature of the data is repetitive. For illustration purposes, we selected several realizations, which are subject to preliminary investigation in the next sections. Then, on the basis of the observations made during the initial analysis, a method of data segmentation is proposed.
Preliminary Study
In the paper, we examine eight signals denoted as Signals 1-8 presented in the subsequent panels of Figure 2. The sampling frequency is equal to 44,100 Hz and the length of the trajectories is about 30 s. As can be seen in Figure 2, the data are clearly non-stationary. The characteristics of the signals change with time, i.e., the amplitude and the observation range decrease over the observed period. From a statistical point of view, we can say that the values of dispersion measures (such as standard deviation, interquartile range, and average deviation from the mean) vary over time, whereas the values of mean and median remain constant. Moreover, the data have an impulsive character, as demonstrated by the outliers present in the signals, see the zoomed fragments of the plots in Figure 2 (between 2 s and 4 s). Nevertheless, the number of impulses and their amplitude also decrease in time. In the zoomed fragments of the plots in Figure 2 (around 28 s), one can also observe some deterministic trends that are present in the data.
Following the above-mentioned remarks, we can conclude that the distribution behind the data presented in the subsequent panels of Figure 2 belongs to the class of heavy-tailed distributions for which large observations (called outliers) are more likely to appear than in the Gaussian case. A classical example of such a distribution is the stable one. Due to the Generalized Central Limit Theorem, it constitutes a natural extension of the Gaussian distribution. Moreover, its tail decays to zero according to a power-law function (slower than exponentially decaying tails in the Gaussian case) and therefore it is often used as a model for data with impulsive behavior. Here, we propose to use this distribution to describe the considered real signals presented in Figure 2. At the same time, we should take into account that, since the properties of the data change in time, the parameters of the distribution are not time-constant. More information about the stable distribution and the proposed model can be found later in this paper, see Sections 3.2 and 3.3.
The main goal of the paper is to introduce a procedure leading to the segmentation of the signals presented in Figure 2. The proposed method is designed to answer the question of when the signal's properties stabilize by decomposing the data into segments in which the probability distributions of the signal show similar features. See Section 3.5 for more details.
The Stable Distribution with Changing Parameters
The stable distribution (also called α-stable or Lévy stable), introduced by Lévy and Khinchine in the 1920s and 1930s [69,70], is considered an extension of the Gaussian distribution. It can be defined in four equivalent ways [71][72][73][74][75][76], and one of the definitions concerns the Generalized Central Limit Theorem, stating that the stable distribution is the limiting distribution for normalized sums of independent random variables with identical distribution and diverging variance. Here, we present the characteristic function of the stable random variable Z, which provides the parameters of the distribution,

φ_Z(u) = E[exp(iuZ)] = exp{−σ^α |u|^α [1 − iβ sign(u) w(u, α)] + iµu},  u ∈ R,  (2)

where

w(u, α) = tan(πα/2) for α ≠ 1 and w(u, α) = −(2/π) log|u| for α = 1.  (3)

The parameter α is called the stability index and takes values in (0, 2], σ is the scale parameter greater than 0, β is the skewness parameter in [−1, 1], and µ is the shift parameter taking values in R. For β = µ = 0 the distribution is called symmetric and the characteristic function simplifies to the following one

φ_Z(u) = exp{−σ^α |u|^α}.  (4)

One can notice that for α = 2 one obtains the Gaussian distribution with mean equal to µ and standard deviation equal to √2σ. However, it is important to emphasize that for the non-Gaussian case, the distribution differs significantly from the Gaussian one [77]. As mentioned in the previous section, for α < 2 the distribution tails decay as power-law functions and therefore the random variables take extreme values with greater probability than in the Gaussian case. Moreover, the stability index regulates the rate of tail convergence; therefore, the smaller α is, the more impulses are present in the data. Because of that, the stable distribution can be used to model signals with impulsive behavior. Moreover, for α < 2 the variance of Z diverges and the first moment is finite only for 1 < α ≤ 2. Therefore, many classical tools and techniques, e.g., the classical measures of dependence, cannot be applied to the non-Gaussian stable distribution. It is worth noticing that the pdf of a stable distributed random variable exists and is continuous; however, in general it is not given in an analytical form.
In the presented simulation study, we assume that the considered observations, after a certain pre-processing step mentioned at the beginning of Section 4, can be treated as a sequence of independent stable distributed random variables. However, since one can notice that the characteristics of the signals change in time, we cannot assume that the subsequent random variables are equally distributed. Therefore, we assume here that the distribution of the data changes in segments, which is equivalent to the fact that the parameters of the distribution, α, σ, β, and µ, change in those segments.
Signal Parameters Identification and Modelling
To identify how the parameters of the distribution change in time, we propose to fit the stable distribution to the signals in narrow windows of length L. Then, the observations within each such segment are assumed to be independent and identically distributed, and the parameters of the distribution may change as the window moves. To estimate α, σ, β and µ in each segment, we propose to use the regression-type method introduced in [78], see the results presented in Section 4. Based on the outcomes obtained for the real signals, in Section 5 we propose a model of the signal, which is used to perform the simulation study. For the simulated signal, we assume that the parameters of the stable distribution change not only in a deterministic way but the values are also disturbed by some random noise. Moreover, we assume that at a certain moment the parameters of the distribution remain unchanged. More information about the simulated signal, including its generation and the performance of the segmentation method, are provided in Section 5.
Divergence Measure
In this section, we present the statistics on which the proposed segmentation procedure relies. In probability theory, the similarity between two probability distributions can be quantified by means of divergences that measure the distance between the pdfs. However, the concept of divergence (also called contrast function) is not as strong as the notion of distance. The divergences do not have to be symmetric in the arguments nor satisfy the triangle inequality. An essential class of contrast functions are the f-divergences defined as follows [79][80][81]

D_f(p(x), g(x)) = ∫_R f(p(x)/g(x)) g(x) dx.  (5)

The functions p(x) and g(x) in Equation (5) are the pdfs corresponding to the two probability distributions under consideration and f(t) is a continuous convex real function on R+ such that f(1) = 0. The divergences defined in this form are always non-negative. Moreover, the function given in Equation (5) is equal to zero if and only if the pdfs p(x) and g(x) coincide (take the same values for all arguments), which corresponds to the case when the probability distributions are the same. More properties of f-divergences can be found, for example, in [81,82].
In this paper, to evaluate how the probability distribution of data changes over time, we use one specific measure, which belongs to the class of f-divergences defined above, called the Jeffreys distance, defined in the following way [83]

J(p(x), g(x)) = ∫_R (√p(x) − √g(x))² dx,  (6)

which corresponds to f(t) = (√t − 1)² in Equation (5). We mention here that some authors refer to the divergence defined in Equation (6) as the Hellinger distance. One can notice that the considered statistic is symmetric in the arguments, i.e., J(p(x), g(x)) = J(g(x), p(x)), and takes values between 0 and 2, with the minimum value corresponding to the case when p(x) = g(x) for each x ∈ R, and the maximum value when p(x) is equal to zero for every x for which g(x) is nonzero and vice versa. In practice, for empirical data the pdfs in Equation (6) are replaced by their estimators, denoted by p̂(x) and ĝ(x). Therefore, one calculates the empirical counterpart of the Jeffreys distance defined in Equation (6), namely

Ĵ(p̂(x), ĝ(x)) = Σ_{i=1}^{n} (√p̂(x_i) − √ĝ(x_i))² · h,  (7)

where x_1, x_2, . . . , x_n ∈ R are the arguments of the pdfs and h denotes the step, i.e., h = x_i − x_{i−1} for i = 2, . . . , n. To estimate the pdf p̂(x) (or, analogously, ĝ(x)) in Equation (7), one can use the kernel density estimator of the following form [84,85]

p̂(x) = (1/(N k)) Σ_{j=1}^{N} K((x − Y_j)/k),  (8)

where Y_1, Y_2, . . . , Y_N is the sample from which we estimate the pdf, x ∈ R, K(·) is the non-negative kernel smoothing function, and k is the bandwidth. The kernel smoothing function determines the shape of the curve used to estimate the pdf. We use the Gaussian kernel of the following form

K(x) = (1/√(2π)) exp(−x²/2),  x ∈ R,  (9)

and the bandwidth is chosen using Silverman's rule of thumb [86]. The kernel density estimator is implemented in most programming languages. We use the function "ksdensity" in Matlab.
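As an illustration, the following sketch (in Python, as a stand-in for the Matlab "ksdensity" used in the paper) computes the empirical Jeffreys distance of Eq. (7) between two samples: both pdfs are estimated with a Gaussian kernel and Silverman's bandwidth on a common grid, and the squared differences of their square roots are summed. The two samples are arbitrary illustrations.

```python
# Small sketch of the empirical Jeffreys distance of Eq. (7) between two samples,
# using kernel density estimates on a shared grid; the example samples are invented.
import numpy as np
from scipy.stats import gaussian_kde

def jeffreys_distance(sample_p, sample_g, n_grid=512):
    lo = min(sample_p.min(), sample_g.min())
    hi = max(sample_p.max(), sample_g.max())
    x = np.linspace(lo, hi, n_grid)
    h = x[1] - x[0]                                         # grid step, as in Eq. (7)
    p_hat = gaussian_kde(sample_p, bw_method="silverman")(x)
    g_hat = gaussian_kde(sample_g, bw_method="silverman")(x)
    return np.sum((np.sqrt(p_hat) - np.sqrt(g_hat)) ** 2) * h

rng = np.random.default_rng(0)
early = rng.normal(0.0, 1.0, 2500)      # e.g., a high-dispersion segment
late = rng.normal(0.0, 0.3, 2500)       # e.g., a low-dispersion segment
print(jeffreys_distance(early, late))   # clearly above 0: the distributions differ
print(jeffreys_distance(late, late))    # equal to 0: identical samples
```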
Segmentation Procedure
In this section, we describe the proposed segmentation algorithm. The methodology relies on comparing the pdfs using the Jeffreys distance presented in Section 3.4 and segmenting the Jeffreys distance increments by means of the log-likelihood ratio (LLR) method. The subsequent steps of the procedure are as follows:
1. Divide the signal into M segments of length equal to L.
2. Estimate the pdf in each segment, p̂_k(x), k = 1, . . . , M.
3. Estimate the pdf corresponding to the last L samples in the signal, p̂*(x).
4. Calculate the Jeffreys distance between the pdfs in the subsequent segments and the pdf of the last L samples,
J_k = Ĵ(p̂_k(x), p̂*(x)),  k = 1, . . . , M.  (10)
5. Calculate the increments of the Jeffreys distance,
D_k = J_{k+1} − J_k,  k = 1, . . . , M − 1.  (11)
6. Apply the LLR method to the increments (D_1, . . . , D_{M−1}) to detect the main change point i*.
7. Apply the LLR method again to the increments preceding and following i* to detect the change points i** and i***.
The procedure described above leads to detecting the indexes i*, i**, and i***, which divide the increments of the Jeffreys distance into four regimes. Interpretation of the determined results will be presented later in the paper, see Sections 4-6. Since the procedure is based on the pdfs in the subsequent segments, the method indicates the segment number, not the observation number, when the regime switches. However, in practice, we transfer that information into the number corresponding to the first observation in the determined segment, see Sections 4 and 5. It is also important to mention that the procedure described in items 1-7 can be used twofold. Namely, while dividing the signal into segments in item 1, one can consider the case of non-overlapping and overlapping windows. More precisely, for the non-overlapping case, as the first segment we take the samples with indexes from 1 to L, then we move by L samples and as the second segment we take the samples from 1 + L to 2L and so on. The overlapping window corresponds to the case when as the first segment we take the samples from 1 to L, then we move by l samples and as the second segment we take the samples from 1 + l to L + l and so on. As one can notice, when l = L we obtain the non-overlapping case. The choice of l affects the accuracy with which we indicate the moment of regime change. In Sections 4 and 5 the value l is called step.
In items 6 and 7, to segment the increments of the Jeffreys distance, we use the log-likelihood ratio method due to the fact that the scale parameter in (D_1, . . . , D_{M−1}) defined in Equation (11) changes in several regimes as the distribution of the signal varies with time, see the results presented in Sections 4 and 5. The LLR method enables detecting the moment of change in scale in the considered dataset by maximizing the log-likelihood ratio. For details of the method, we refer the readers, for example, to [87]. Several other methods designed for the same purpose are known in the literature (e.g., the Regime Variance technique [22] and the Absolute Median Deviation technique [17]); however, according to the simulations performed by the authors, the LLR method is the most powerful tool among those mentioned. The procedure described above is summarized in Algorithm 1. The most important points of the proposed procedure are the estimation of the pdf for data corresponding to the selected segments and the calculation of the divergence measures (here the Jeffreys distance). The last step is the application of the LLR method to the data representing the values of the divergence measure.
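To tie steps 1-7 together, below is an end-to-end sketch of the procedure for a one-dimensional signal (not the authors' Matlab implementation). The simple Gaussian log-likelihood ratio for a change in variance is only a stand-in for the LLR method of [87], whose exact form may differ; the window length and step mirror the values used later in the paper.

```python
# End-to-end sketch of the segmentation procedure; the LLR statistic is a simplified
# Gaussian scale-change version used here only for illustration.
import numpy as np
from scipy.stats import gaussian_kde

def pdf_on_grid(sample, x):
    # Kernel density estimate with a Gaussian kernel and Silverman's bandwidth (Eqs. (8)-(9)).
    return gaussian_kde(sample, bw_method="silverman")(x)

def jeffreys(p_hat, g_hat, h):
    # Empirical Jeffreys distance of Eq. (7) on a common grid with step h.
    return np.sum((np.sqrt(p_hat) - np.sqrt(g_hat)) ** 2) * h

def llr_change_in_scale(d):
    # Index maximizing a Gaussian log-likelihood ratio for a single change in variance.
    n = len(d)
    best_k, best_llr = n // 2, -np.inf
    for k in range(5, n - 5):                       # keep a few points on each side
        s1, s2, s0 = np.var(d[:k]), np.var(d[k:]), np.var(d)
        llr = 0.5 * (n * np.log(s0) - k * np.log(s1) - (n - k) * np.log(s2))
        if llr > best_llr:
            best_k, best_llr = k, llr
    return best_k

def segment_signal(signal, L=2500, step=250, n_grid=512):
    starts = np.arange(0, len(signal) - L + 1, step)
    lo, hi = np.quantile(signal, [0.001, 0.999])    # trim extreme outliers from the grid
    x = np.linspace(lo, hi, n_grid)
    h = x[1] - x[0]
    p_last = pdf_on_grid(signal[-L:], x)            # step 3: pdf of the last L samples
    J = np.array([jeffreys(pdf_on_grid(signal[s:s + L], x), p_last, h) for s in starts])
    D = np.diff(J)                                  # step 5: increments of the Jeffreys distance
    i_star = llr_change_in_scale(D)                 # step 6: main change point (solid red)
    i_before = llr_change_in_scale(D[:i_star])      # step 7: change before it (dotted purple)
    i_after = i_star + llr_change_in_scale(D[i_star:])   # step 7: change after it (dashed yellow)
    return starts[[i_before, i_star, i_after]]      # first observation of each detected window
```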
Results
In this section, we present the results of applying the procedure described in Section 3.5 to the real signals presented in Figure 2. In the first step, the raw data are pre-processed by removing the deterministic components present in the signals. The pre-processing procedure is analogous to the one proposed in [30]. Examples of the cleaned signals, which are analyzed in detail in this section, are presented in Figure 3. Panel (a) corresponds to Signal 1 and panel (b) corresponds to Signal 2. One can see that they visually do not differ from the raw signals presented in Figure 2 (see panels (a) and (b)), however, the deterministic components present in the data are removed.
To illustrate how the distribution changes over time, for the signals presented in Figure 3 we estimate the pdfs in non-overlapping windows of length 2500. The obtained density maps are shown in Figure 4, where panel (a) corresponds to Signal 1 and panel (b) corresponds to Signal 2. One can see that the values tend to be more concentrated around zero over time, i.e., the functions take extreme values with smaller probability. It confirms that the dispersion statistics are smaller as time goes. Moreover, we can notice that the pdfs at the end of the signals, at least visually, are very similar (or almost identical), i.e., the distribution stops changing after a certain point. In the following part, we estimate the parameters of the stable distribution, described in Section 3, in non-overlapping windows of length 2500. The results for Signal 1 are presented in Figure 5 and for Signal 2 in Figure 6. Panels (a), (b), (c), and (d) correspond to α, σ, β, and µ, respectively. One can notice that the calculated values change over time.
For both examples, the stability index α takes values between 1.6 and 2. Moreover, one can see an upward trend with time and at the end of the signal the values stabilize very close to 2, which means that the distribution is similar to the Gaussian one. Since the parameter α in panel (a) is related to the probability of occurrence of impulses, such behavior is natural as the number of impulses in the signal decays with time. This is consistent with our assumption that the distribution of the data smoothly transforms and stabilizes with time. The values of the skewness parameter presented in panel (c) also indicate that the distribution is getting close to Gaussian with time since at some point β begins to take values from the entire interval [−1, 1]. This is related to the fact that the skewness parameter becomes irrelevant for the Gaussian distribution (see Equation (4) for α = 2). The values of the scale parameter σ presented in panel (b) also decrease with time, which agrees with the results seen on the density maps. One can see the exponential-type decay, so we can assume that from a certain point σ stabilizes to a certain level. The location parameter µ presented in panel (d) is always close to zero. According to the segmentation procedure presented in Section 3.5, we now compare the pdfs of the signal in the moving windows of length 2500 with the pdf estimated based on the last 2500 samples. The comparison is done using the Jeffreys distance. For Signal 1 and Signal 2 the values of the measure and its differences are presented in Figures 7 and 8, respectively. One can notice that the values of the Jeffreys statistics decrease and therefore we can conclude that the distributions become more similar to the distribution in the last window or, in other words, they stabilize. We recall here that the Jeffreys distance equal to 0 indicates that the pdfs are the same. Moreover, the behavior of the statistics increments also changes with time, i.e., the values taken by the Jeffreys distance become more stable over time (there are fewer oscillations), which is visible in panel (b), showing the differences in the statistics. From about 25 s on, the values of the Jeffreys distance are very close to 0, and at the same time, their differences are small. Using the above observations, we propose the method described in Section 3, which relies on dividing the values of the Jeffreys distance into four separate regimes with a constant scale parameter of their differences. According to this, the point marked with the solid red line was detected first. Then the values preceding and following this point are also divided into two regimes using the same method. This leads to the designation of four regimes. For the signals presented in this section, the regimes are as follows. For Signal 1, the first regime change occurs at about 12 s. In the second-to-last regime (between the solid red and dashed yellow lines) the pdfs are also very similar; however, we observe more oscillations of the values in comparison to the last regime. This different behavior of the Jeffreys distances in the third and fourth regimes may be caused by the impulses that still occur more frequently in the part of the signal related to the third regime, despite the fact that the dispersion of the data is similar in both intervals. The division into the first and the second regimes (separated by the dotted purple line) is mainly related to the change in the rate of decline of the values taken by the Jeffreys distance.
We mention here that the proposed procedure leads to the indication of a specific window in which the behavior of the values taken by the Jeffreys distance changes. Here, as the result, we present a moment (in seconds) corresponding to the number of the first observation in the window indicated by the procedure. Since we compare the pdfs in moving windows, the accuracy of the method is related to the step in which we move windows. Therefore, we consider three cases here: a moving overlapping window of length 2500 with step equal to 250, a moving overlapping window of length 2500 with step equal to 500, and a moving non-overlapping window of length 2500 (equivalent to step equal to 2500). The results presented in Figures 7 and 8 correspond to the step equal to 250. However, Table 1 contains the results for all signals with three different steps (250, 500, 2500). For the considered signals, the detected regime change points, which are shown in Figures 7 and 8, are also plotted on the cleaned signals, see Figure 9. As mentioned, the procedure is repeated for all signals presented in Figure 2 while considering different values of steps. The results obtained for the chosen step values (see Table 1) are usually similar, although in a few cases there are larger discrepancies (purple point for Signal 2, Signal 3 and Signal 7 for step 2500; red point for Signal 3 for step 2500, and yellow point for Signal 7 for step 2500). Most often, these discrepancies appear when detecting the purple point, see also the results for the simulated signals presented in Section 5. One can notice that for different signals, the obtained regime change points are not the same. It is intuitive, because for each experiment there is a different arrangement of the grains in the grinding mill, which affects the grinding speed, and thus affects the data distribution over time. The first regime change point (purple) occurs at the earliest for Signal 5 (around 9 s); the remaining values for the individual signals can be read from Table 1.

(Table 1: regime change points, in seconds, detected for Signals 1-8 with window steps of 250, 500, and 2500 samples.)
Simulations
In this section, we apply the introduced methodology to the simulated signals constructed based on the assumptions mentioned in Sections 3.2 and 3.3, i.e., for a sequence of independent random variables from the symmetric stable distribution with stability index and scale parameter changing over time. As it was mentioned before, for the real signals considered in Section 4 we do not know the exact moments when the characteristics of the process stabilize. Here, for the simulation study purpose, we set the moments at which the parameters α and σ remain unchanged, and therefore we are able to validate the efficiency of the proposed procedure.
As an illustration, in Figure 10 we present the estimated values of α and σ for Signal 1 examined in detail in Section 4. The parameters are calculated in non-overlapping windows of length 2500. Here, the number of segments is equal to 575 and, according to the results presented in the previous section, the regimes change around segments number 200, 350, and 450. Under the taken assumptions, the stability and scale parameters change with some deterministic trends, namely, a sum of two exponential functions for α and an exponential function for σ, respectively, which are marked in red in Figure 10. In the following part of this section, we examine the simulated signals consisting of, similarly to the real signal mentioned above, 575 segments with 2500 independent symmetric stable random variables in each of them. The parameters of the distribution in the subsequent segments change according to the deterministic functions fitted to the parameters α and σ for Signal 1 (Figure 10), disturbed by zero-mean Gaussian noise with standard deviation equal to 0.03 for the stability index and 0.0005 for the scale parameter. Additionally, as mentioned before, we fix the values of α and σ at a certain point. The moments of σ stabilization and α stabilization are chosen to correspond to the change of regimes detected for Signal 1 in the previous section, i.e., the scale parameter and the stability index remain unchanged from segments number 351 and 451, respectively.
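For readers who wish to reproduce a signal of this type, the sketch below generates a segment-wise symmetric stable sequence with drifting and then stabilized parameters using scipy.stats.levy_stable. The noise standard deviations (0.03 and 0.0005) and the stabilization segments (351 for σ, 451 for α) follow the description above, whereas the particular exponential trend constants are invented, since the fitted functions for Signal 1 are not reported here.

```python
# Hedged sketch of the simulated signal: 575 segments of 2500 i.i.d. symmetric stable
# variables, with alpha and sigma following assumed exponential-type trends, perturbed
# by Gaussian noise, and frozen from segments 451 and 351, respectively.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
n_seg, seg_len = 575, 2500
t = np.arange(n_seg)

# Illustrative trends (constants are made up, not the fitted values for Signal 1):
alpha_trend = 2.0 - 0.25 * np.exp(-t / 60.0) - 0.15 * np.exp(-t / 300.0)   # sum of two exponentials
sigma_trend = 0.002 + 0.02 * np.exp(-t / 100.0)                            # single exponential

alpha = np.minimum(alpha_trend + rng.normal(0.0, 0.03, n_seg), 2.0)
sigma = np.abs(sigma_trend + rng.normal(0.0, 0.0005, n_seg))
alpha[450:] = 2.0                  # stability index fixed from segment 451 onwards
sigma[350:] = sigma_trend[350]     # scale fixed at its stabilized level from segment 351

signal = np.concatenate([
    levy_stable.rvs(a, 0.0, loc=0.0, scale=s, size=seg_len, random_state=rng)
    for a, s in zip(alpha, sigma)
])
print(signal.size)                 # 1,437,500 samples in total
```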
The construction of the sample simulated signal is presented in Figure 11. Panels (a) and (b) show the theoretical values of α and σ in the subsequent segments, whereas panel (c) presents the obtained trajectory. In panel (d) we also show the corresponding map of the pdfs calculated in the subsequent non-overlapping segments of length 2500. As one can see, the map looks similar to the ones presented for the real signals in Figure 4. The pdfs become more concentrated around zero with time: the functions take extreme values with smaller probabilities. It should be emphasized that in the plot given in panel (d) of Figure 11 there is a noticeable change in the pdfs after 350 segments (around sample number 875,000), which results from the stabilization of the scale parameter. However, the stabilization of the stability index after 450 segments (about sample number 1,125,000) is barely visible in the pdf map. Nevertheless, by using the proposed procedure, one can also detect this change. In Figure 12 we present the parameters of the stable distribution estimated in non-overlapping windows of length 2500 for the trajectory presented in Figure 11. The estimated values of α and σ are close to the theoretical ones. It is worth noticing that, from segment number 451 onwards, the estimated values of the stability index are not exactly equal to 2, which is the value set while simulating, but they are close to 2 with some oscillations. Since the generated observations are symmetric stable, the shift parameter µ is close to 0 for all segments, and the skewness parameter β from a certain point takes values in the whole interval [−1, 1], because the distribution is very close to Gaussian (or even Gaussian after 450 segments).
In Figure 13 we present the results of applying the proposed procedure to the considered sample simulated signal. Panel (a) shows the values of the Jeffreys distance and panel (b) presents the differences of Jeffreys distance on the basis of which the regime change points are determined and marked, respectively, in purple (dotted line), red (solid line), and yellow (dashed line). Additionally, the theoretical regime change points (corresponding to stabilization of σ and α) are marked with black dots. One can see that the values of Jeffreys divergence decrease with time and at the same time the rate of decline of the function changes. The last two regimes, detected using the proposed method, are related to the stabilization of the scale parameter and the stability index, respectively. The determined moments of regime changes in σ and in α coincide with the theoretical ones (red solid line and yellow dashed line, respectively). Finally, Figure 14 shows the trajectory of the simulated signal with the theoretical and detected regime change points marked with lines and dots, respectively. We mention here that the results presented in Figure 13 are obtained by comparing the successive pdfs determined in the overlapping windows of length 2500 with the pdf in the last window. The windows overlap since we shift by 250 observations when counting the successive densities. In the last step, we verify the efficiency of the proposed methodology by conducting a Monte Carlo simulation study for a number of signals generated analogously to the one with the trajectory presented in panel (c) of Figure 11. Namely, we simulate 100 signals (each one with different values of σ and α in the subsequent segments, see panels (a) and (b) of Figure 11) and to each of them, we apply the proposed procedure. As a result, we obtain 100 values corresponding to the moments of regime change: the 1st change (purple), the 2nd change (red), and the 3rd change (yellow). The outcomes are presented in the boxplots given in Figure 15, where panel (a) corresponds to the case of comparing the pdfs in non-overlapping windows of length 2500 (with step equal to 2500) and panels (b) and (c) correspond to the case of comparing the pdfs in overlapping windows of length 2500 (with step equal to 500 or 250, respectively). For the second and third regime changes, the theoretical values (i.e., the observation numbers) related to the stabilization of the scale parameter (observation of number 875,000, i.e., after 350 segments of length 2500) and of the stability index (observation of number 1,125,000, i.e., after 450 segments of length 2500) are marked with horizontal black dashed lines. We remind here that our procedure leads to detecting the window number, however, we transfer that information to the number of the first observation in the identified window. As one can see in Figure 15 for the second and the third regime change, the estimated values are close to the theoretical moments of σ and α stabilization. That can also be seen in Table 2 where we present the median, interquartile ranges, and 80% quantile intervals calculated based on the results of the Monte Carlo study. For the first regime change, for which we do not have the theoretical equivalent, we can see that the medians, IQRs, and the length of quantile intervals get smaller as the step decreases, whereas for the second and third regime change the medians are similar for all three values of the step and they are close to the theoretical moment of stabilization. 
Moreover, the IQRs and the length of quantile intervals are smaller for the second regime change than for the third one, which indicates that the moment of second regime change (related to the scale parameter) is detected with higher precision, which can also be seen in Figure 15.
Discussion and Validation of the Procedure
To validate the obtained results, we repeated the experiment of coffee grinding several times. With each repetition, we extended the grinding time and took photos of the ground coffee beans after the experiment was completed. The same amount of coffee was ground for about 5 s, 10 s, 15 s, 20 s, 25 s, and 30 s sequentially. Pictures of the product obtained after grinding are shown in Figures 16-21, respectively. One can notice that the structure of the coffee beans is clearly grainy in the photos presented in Figures 16 and 17, i.e., after 5 s and 10 s of grinding. Then, as expected, the longer the coffee beans are ground, the more powdered the product becomes. Nevertheless, in the picture shown in Figure 18, i.e., after 15 s, and even in the picture presented in Figure 19, i.e., after 20 s, one can see individual unground or half-ground coffee beans. In the last two photos in Figures 20 and 21, corresponding to a grinding time of 25 s and 30 s, the appearance of the ground product is very similar. The structure is powdered without individual coffee beans in the product. This is in line with the results presented in Section 4 for real signals, which indicate that after about 24 s, on average, the probability distribution in the acoustic real signals stabilizes (the arithmetic mean of the values in the last column in Table 1 is equal to 24.0371 s).
The preliminary evaluation of the results based on the photos is also supported by the analytical results. We remind here that the method proposed in Section 3.5, leading to the segmentation of data, is designed in such a way that the signal is first split into two regimes, and then each regime is again split into two segments. As a result, we obtain the signal segmented into four regimes within which the pdfs of the data show certain similarities. To confirm the interpretation based on real data analysis, a simulation study was carried out. The outcomes presented for the simulated signals confirmed our assumption that the regime change related to data dispersion is detected first (marked in red on the plots). Detecting this regime change initially splits the data into two parts and in the second one, the scale parameter is stabilized. Then, as mentioned above, each of the two segments is divided once again into two regimes. For the part of the signal corresponding to the regime with stabilized dispersion, this step detects the moment when the impulsivity in the data stabilizes (marked in yellow on the plots), i.e., it distinguishes between non-Gaussian and Gaussian data. For the part of the signal corresponding to the regime with non-stabilized dispersion, the above step also leads to detecting the regimes with different behavior of the pdfs (marked in purple on the plots), but it does not have an interpretation related to any parameter.
As mentioned in Section 2, the problem formulated in the paper concerns data segmentation where the parameters of the probability distribution change over time. These considerations were motivated by real data, in which the changes in the probability distribution are not sudden but occur smoothly and, at some point, the parameters stabilize at a certain level. In the analyzed signals, both real and simulated, the scale and impulsivity of the data changed over time. To segment the signals, we proposed a method based on the assessment of the similarity of the pdfs in moving windows. The proposed procedure allows us to determine the moments at which the scale parameter and the parameter responsible for the impulsivity of the data stabilize. For the analyzed real signals, the last detected moment of regime change, which marks the beginning of the segment with stabilized amplitude and stabilized impulsivity, can answer the question of how long coffee should be ground to achieve a satisfactory effect. Since the probability distribution of the acoustic signal does not change from that moment on, we can conclude that the structure of the ground product also remains unchanged and the ground coffee has obtained its final structure.
Conclusions
In this paper, an original signal segmentation procedure for a random process with time-varying characteristics is proposed. Typically, signal segmentation is related to the detection of the moment in time when process A switches to process B. The situation is relatively simple when the segments contain data described by different distributions, or when the distribution is the same but its parameters differ between the segments. The rule is simple here: the bigger the difference between the segments, the easier the segmentation. In that case, the segmentation algorithm divides the data into homogeneous parts.
The case considered in this paper is much more complicated. The analyzed input data correspond to the model given by Equation (1). In that case, the distribution of the time series is the same; however, we assume that the parameters are time-varying deterministic functions which may even be disturbed by random noise. The examined process is a specific one: it is highly non-stationary and, within a single realization, transforms from a strongly non-Gaussian (impulsive) process at the beginning to a nearly Gaussian process with much smaller amplitudes.
A stable distribution with varying parameters has been proposed to describe the data. Depending on the combination of its parameter values (α, σ), the signal can be impulsive or not, and can carry higher "energy" or be weak. This matches the analyzed case very well. However, as mentioned, the segmentation algorithm does not rely on the assumption of a stable distribution of the data. It can be generalized to any signal containing two processes with a smooth transition or a simple switch from A to B.
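As an illustration of such a model, the following sketch draws a signal from a symmetric α-stable distribution whose scale σ and stability index α drift smoothly before stabilising. The parameter ranges, stabilisation fractions and block-wise sampling are assumptions made for this example only, not the exact settings of the simulation study in the text.

```python
import numpy as np
from scipy.stats import levy_stable

def simulate_varying_stable(n=1_250_000, block=2500,
                            alpha_range=(1.5, 2.0), sigma_range=(1.0, 0.2),
                            sigma_stab=0.7, alpha_stab=0.9, seed=0):
    """Symmetric alpha-stable signal with smoothly drifting scale and stability index.

    sigma ramps linearly and stabilises after a fraction `sigma_stab` of the signal,
    alpha after `alpha_stab`; all numbers are illustrative defaults.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(n) / n
    s_ramp = np.minimum(t / sigma_stab, 1.0)
    a_ramp = np.minimum(t / alpha_stab, 1.0)
    sigmas = sigma_range[0] + (sigma_range[1] - sigma_range[0]) * s_ramp
    alphas = alpha_range[0] + (alpha_range[1] - alpha_range[0]) * a_ramp
    out = np.empty(n)
    for start in range(0, n, block):          # parameters ~constant per block
        stop = min(start + block, n)
        a = float(alphas[start:stop].mean())
        s = float(sigmas[start:stop].mean())
        out[start:stop] = levy_stable.rvs(a, 0.0, loc=0.0, scale=s,
                                          size=stop - start, random_state=rng)
    return out
```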
To identify the difference between parts of the signal, we used segmentation with an a priori predefined segment length in two versions: without and with overlapping. For each segment, the probability density function is estimated, and the difference between the densities is evaluated with a distance measure (the Jeffreys distance). The final step is to find, using the log-likelihood ratio (LLR), the change point in the differenced distance time series. Note that there is just one important parameter (the segment size): longer segments improve the quality of the pdf estimation but reduce the resolution in time.
To minimize this effect, overlapping windows are used. The method is intuitive and universal.
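A minimal version of the final change-point step could look as follows, assuming a Gaussian log-likelihood ratio for a single change in mean and variance applied to the differenced distance series; the exact LLR statistic used in the paper may differ in detail.

```python
import numpy as np

def llr_change_point(x, min_seg=5):
    """Single change point from a Gaussian log-likelihood ratio (mean/variance shift)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    full = n * np.log(np.var(x) + 1e-12)
    best_k, best_gain = None, -np.inf
    for k in range(min_seg, n - min_seg):
        left = k * np.log(np.var(x[:k]) + 1e-12)
        right = (n - k) * np.log(np.var(x[k:]) + 1e-12)
        gain = full - left - right      # twice the LLR, up to additive constants
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

# e.g. applied to the differenced Jeffreys-distance series:
# k, _ = llr_change_point(np.diff(distances))
```

The detected index refers to a window; multiplying it by the window step and adding the start of the first window converts it back to an observation number, as described above.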
To illustrate the problem and provide evidence of the effectiveness of the proposed method, two approaches have been used. First, a model of the signal has been proposed using the mentioned α-stable distribution with parameters changing in time, and we applied the Monte Carlo approach to validate the segmentation efficiency statistically; the obtained results were very good. Next, an acoustic signal acquired during the grinding of coffee beans in a grinder has been used. The process of grinding coffee beans matches our research problem perfectly. In the beginning, due to the cutting of the beans by the sharp spinning knives of the grinder, the process is very noisy; moreover, the cutting process is an impulsive one. Once the coffee beans are ground, the material in the grinder is more powdery than grainy, and the acoustic signal is much more narrow-band (no impulses), dominated by the rotating elements. The transition between the processes is smooth; thus, the segmentation is complicated and the regime change point is much more difficult to detect. To prove the quality of the results for the real data, we prepared photo documentation of the experiment. The change point identified with the proposed method corresponds to low granularity of the coffee.
Both approaches (photos and simulations) provide similar information, so we assume that the method is appropriate and effective.
We believe that the problem discussed in the paper may be important in many engineering applications (for example, impulsive noise or vibration with damping). One may also assume that other specific parameters of the process vary simultaneously. Therefore, further work might be related to validation on other real cases as well as to the generalization of the segmentation to any process with a smooth transition.
"Engineering",
"Physics"
] |
Study of $B\to\pi\ell\nu_{\ell}$ and $B^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}$ decays and determination of $|V_{ub}|$
We reassess the $B\to\pi\ell\nu_{\ell}$ differential branching ratio distribution experimental data released by the BaBar and Belle Collaborations, supplemented with all lattice calculations of the $B\to\pi$ form factor shape available to date, obtained by the HPQCD, FNAL/MILC and RBC/UKQCD Collaborations. Our study is based on the method of Pad\'{e} approximants, and includes a detailed scrutiny of each individual data set that allows us to obtain $|V_{ub}|=3.53(8)_{\rm{stat}}(6)_{\rm{syst}}\times10^{-3}$. The semileptonic $B^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}$ decays are also addressed and the $\eta$-$\eta^{\prime}$ mixing discussed.
Introduction
Quark flavour-changing transitions in the Standard Model are described by the Cabibbo-Kobayashi-Maskawa (CKM) matrix, whose elements V_ij weight the strength of the interaction. The CKM matrix satisfies unitarity, imposing Σ_i V_ij V*_ik = δ_jk and Σ_j V_ij V*_kj = δ_ik. To verify these relations, a precise determination of the magnitude of the CKM elements becomes of capital importance, since an eventual deviation from unitarity of the CKM matrix may be a hint of new physics. The most common (unitarity triangle) combination to look at is V_ud V*_ub + V_cd V*_cb + V_td V*_tb = 0, which contains the best-known side quantity V_cd V*_cb but also involves V_ub, one of the least-known elements. The inclusive, B → X_u ℓν_ℓ, and exclusive, B → πℓν_ℓ, semileptonic decays of a B meson represent an advantageous laboratory to determine the value of |V_ub|, yielding the most precise values to date. Inclusive determinations are based, for example, on the Operator Product Expansion and perturbative QCD, while exclusive determinations require knowledge of the shape of the participant meson form factor (FF) as a function of q², describing the hadronic transition. Numerically, the 2015 PDG reported values showed a 3.1σ deviation between the inclusive, |V_ub| = (4.41 ± 0.15 +0.15/−0.17) × 10⁻³, and the exclusive, |V_ub| = (3.28 ± 0.29) × 10⁻³ [1], determinations, with a resulting average of |V_ub| = (4.13 ± 0.49) × 10⁻³. The updated 2016 PDG version [2] reports, respectively, |V_ub| = (4.49 ± 0.16 +0.16/−0.18) × 10⁻³ and |V_ub| = (3.72 ± 0.19) × 10⁻³ [3] for the inclusive and exclusive decays, whose deviation, 2.6σ, has been slightly reduced due to the one-σ shift of the exclusive result. At present, the PDG reports [4], respectively, |V_ub| = (4.49 ± 0.15_exp +0.16/−0.17_th ± 0.17) × 10⁻³ and |V_ub| = (3.70 ± 0.10 ± 0.12) × 10⁻³ [5] for the inclusive and exclusive determinations. The origin of this long-standing discrepancy between the inclusive and exclusive determinations still remains unclear, demanding that the resulting combined average, |V_ub| = (3.94 ± 0.36) × 10⁻³ [4], be taken with caution. As pointed out already in Ref. [6], and recently adopted in Ref. [7], a new physics explanation of this tension is very unlikely and, therefore, it might be due to an underestimation of uncertainties in the experimental and/or theoretical analyses.
In this work, we reexamine the exclusive B → πℓν_ℓ decays and extract |V_ub| following an alternative approach to the parameterization of the participant vector form factor, slightly different from the most commonly used z-expansion and Vector Meson Dominance models, and profiting from the large set of experimental data and lattice simulations. A detailed scrutiny of each individual data set, explored bin by bin, allows us to identify agreements and tensions among them and to propose a path towards further determinations.
The hadronic matrix element of the weak current for the B → πℓν_ℓ decay can be decomposed in terms of two form factors, where q = p_B − p_π = p_ℓ + p_ν is the momentum transferred to the dilepton pair, while f_+(q²) and f_0(q²) are, respectively, the participant vector and scalar form factors encoding the dynamics of the strong interactions occurring in the heavy-to-light B → π hadronic transition. For light leptons (e and µ), one can safely take the m_ℓ → 0 limit, so that only f_+(q²) is relevant and the corresponding partial decay width distribution is given by Eq. (3), where |p_π| = sqrt((m_B² + m_π² − q²)² − 4 m_B² m_π²)/(2 m_B) is a kinematical factor accounting for the momentum of the pion in the B-meson rest frame. The main source of uncertainty in the extraction of |V_ub| lies in the vector form factor which, in turn, requires a reliable parameterization in terms of q².
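For orientation, the standard massless-lepton expression for the partial width can be coded as below; the constants are approximate PDG values and f_plus stands for whatever form factor parameterization one chooses, so this is a sketch rather than the exact layout of Eq. (3).

```python
import numpy as np

G_F = 1.1663787e-5             # Fermi constant, GeV^-2
M_B, M_PI = 5.27963, 0.13957   # meson masses, GeV (approximate PDG values)

def p_pi(q2):
    """Pion momentum in the B rest frame, in GeV."""
    lam = (M_B**2 + M_PI**2 - q2) ** 2 - 4.0 * M_B**2 * M_PI**2
    return np.sqrt(np.clip(lam, 0.0, None)) / (2.0 * M_B)

def dGamma_dq2(q2, f_plus, Vub):
    """dGamma/dq2 for B -> pi l nu with massless leptons, in GeV^-1."""
    return (G_F**2 * abs(Vub) ** 2 / (24.0 * np.pi**3)
            * p_pi(q2) ** 3 * np.abs(f_plus(q2)) ** 2)
```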
From the experimental side, the CLEO Collaboration reported the first measurement of the B → πℓν_ℓ branching fraction in 1996 [9], later updated in 2003 [10], and released the partial branching ratio distribution measured in 4 q² bins in 2007 [11]. More recently, the q² decay spectra have been measured in 6 and 12 bins of q² by BaBar in 2011 [12] and 2012 [13], respectively, and by Belle in 13 bins in 2011 [14] and in 13 and 7 bins for the B⁰ and B⁻ modes, respectively, in 2013 [15].
On the lattice QCD side, results on the form factor shape at large q² were obtained by the HPQCD Collaboration in 2007 [16] and by the FNAL/MILC Collaboration in 2008 [17], in 5 and 12 bins of q², respectively. In 2015, the RBC/UKQCD Collaboration released new results in 3 bins of q² [18] and the FNAL/MILC Collaboration presented an updated analysis [3].
In total, we have a set of five experimental measurements of the B → πℓν_ℓ decay spectra driving the form factor shape at small q² and a set of four lattice QCD simulations for the form factor dominating the large-q² region. In order to determine |V_ub| with good precision (beyond 10%), it is desirable to have a suitable parameterization of the intermediate energy region (15-20 GeV²) connecting both the small- and large-q² regions in a continuous and differentiable way, under the constraints of unitarity and analyticity.
From a theoretical perspective, parameterizations based on resonance-exchange ideas [19,20,21,22] have been widely used so far to describe the B → π FF shape. The parameterizations proposed by Bećirević-Kaidalov [23] and Ball-Zwicky [24], incorporating some properties of the FF, such as the value of the kinematical constraint at q² = 0 and the position of the B* pole in the spirit of earlier works [19,20,21,22], became rather popular in the first decade of this century. Both descriptions contain free parameters, such as additional poles that pick up the effects of multi-particle states, to be fixed from fits to experimental data. However, the choice of these ansätze induces a source of theoretical (or systematic) uncertainty that is difficult to quantify. Moreover, as argued in Ref. [17], if the reconstruction of the FF obtained only from fits to experimental data turns out to be inconsistent with the shape derived by the lattice Collaborations, one could not unveil whether experiment and theory disagree or whether simple parameterizations are insufficient. To improve on that, the so-called z-parameterization was proposed [26,27,28]. This is based on an expansion in a conformally transformed variable which guarantees unitarity constraints on its coefficients, even though in practice the constraints are rather weak.
Let us return to the values for |V_ub| from exclusive processes reported in the 2015 and 2016 PDG editions, (3.28 ± 0.29) × 10⁻³ and (3.72 ± 0.19) × 10⁻³, respectively. They were obtained from simultaneous fits to the four most precise measurements of BaBar [12,13] and Belle [14,15], together with the 2008 and 2015 MILC Collaboration lattice simulations of the FF, respectively. While the 2015 PDG value corresponded to the determination provided by the HFAG as of summer 2014 [1], the updated 2016 PDG version reports the value obtained by the MILC Collaboration in 2015 [3]. These two values have been determined using a z-parameterization as the fit function and show a sizable deviation of 1.3σ, whose origin stems mainly from the following fact. While the HFAG analysis of 2014 included the 2008 lattice FF simulations of the MILC Collaboration [17] into the fit, using only 4 of the 12 points to avoid correlations between neighboring points, the result obtained by MILC in 2015 considers their updated FF simulations [3]. Moreover, while the 7 bins of the B⁻ decay mode measurement reported by Belle in 2013 were not included in the HFAG 2014 fit, the MILC 2015 analysis includes them.
Previous PDG reported values, e.g. |V_ub| = (3.23 ± 0.30) × 10⁻³ in PDG 2012, correspond to the HFAG fit results obtained from simultaneous fits to the experimental measurements existing at the time together with the MILC form factor predictions of 2008, using 6 of the 12 points instead of the 4 of 12 used in the HFAG result of 2014. Both the choice of the number and location of the MILC 2008 bins to fit and the omission of the HPQCD form factor lattice simulations of 2007 are not entirely clear to us. Even though the FNAL/MILC lattice form factor calculation of 2015 presents several improvements with respect to their 2008 predictions, the theoretical error associated with the FF still represents the largest uncertainty in |V_ub|. In this respect, the lattice simulation provided by the RBC/UKQCD Collaboration [18] has been welcomed, obtaining |V_ub| = (3.61 ± 0.32_stat+syst) × 10⁻³ from a combined fit of their results for the form factor together with BaBar and Belle experimental data.
In 2016, the FLAG working group reported |V_ub| = (3.62 ± 0.14) × 10⁻³ from a fit to lattice and BaBar and Belle experimental data [35], while the current PDG edition reports the 2017 HFLAV value |V_ub| = (3.70 ± 0.10 ± 0.12) × 10⁻³ [4,5], obtained from an averaged q² spectrum of all BaBar and Belle data sets, constraining the χ² minimization by averaged values for the coefficients of the form factor parameterization derived by the lattice groups and by the LCSR prediction at q² = 0.
Finally, a closer look at the plots of the corresponding fit results of the different analyses reveals a discrepancy between HFAG 2014 and Ref. [29] on the one hand, and the lattice groups [3,18] on the other, regarding the position of the last experimental datum of both the BaBar 2011 and BaBar 2012 measurements.
Although the different |V ub | determinations are consistent with each other, we find the situation slightly unclear and without consensus among different groups regarding the use of experimental and theoretical data to fit.
The main purpose of this work is to reanalyze the B → πℓν_ℓ experimental data and to discuss the impact of including into the fit each of the lattice-QCD simulations of the FF shape. We use the method of Padé approximants (PAs in what follows) to parameterize the B → π transition. These provide a model-independent method, simple and user-friendly, with the important advantage of incorporating the unitarity and analyticity constraints of the FF by construction, thus allowing a systematic error to be assigned.
We have discussed the Padé method in Refs. [36,37,38] and illustrated its usefulness as a fitting function in Refs. [39,40,41,42], applied to the description of the π⁰, η and η′ transition form factors. In these cases, the approximants showed an interesting ability to connect the low- and high-energy realms while improving the description of part of the intermediate-energy regime. The method allows us here to obtain a value for |V_ub|, including both statistical and systematic uncertainties coming from the fit function, with a stamp of model independence. Constraints from the unitarity of the form factor will show up naturally and will provide a roadmap towards the next steps to follow for both theoretical and experimental studies.
Although being the most precise, B → πℓν_ℓ only amounts to ∼7% of the B → X_u ℓν_ℓ decays. Measurements of the branching fraction distributions of B⁺ → ωℓ⁺ν_ℓ and of B⁺ → ηℓ⁺ν_ℓ in 5 bins of q² were released in 2012, and the branching ratio of B⁺ → η′ℓ⁺ν_ℓ was reported, by the semileptonic charmless program of BaBar [13]. In the second part of this work, we will tackle the B⁺ → η^(′)ℓ⁺ν_ℓ decays, predicting the differential branching ratio distributions and extracting the η-η′ mixing angle, taking advantage of the B → π form factor parameterizations obtained in the first part of this work.
As a final introductory remark, we shall mention that a method based on dispersion theory to extract |V_ub| from the B_ℓ4 decay has been proposed in Ref. [43].
This article is then structured as follows: in section 2 we address the analytical structure of the participant B → π form factor and discuss the most common theoretical descriptions that have been considered in the literature so far. In this section we also present our proposal, a parameterization based on the unitarity and analyticity of the FF which allows us to use a sequence of PAs. In section 3.1, we show our fit results to the BaBar, Belle and CLEO differential branching ratio distribution experimental data, which enable us to determine the product |V_ub f_+(0)| and, subsequently, to extract |V_ub| by using the LCSR prediction for f_+(0) given in Ref. [30]. In section 3.2 we discuss the impact of including the different lattice QCD predictions for the FF shape into the analysis and determine |V_ub| directly from a simultaneous fit.
In this section we present our central fit results, evaluate the role of introducing the value of f_+(0) as an additional restriction in the χ² minimization, and perform fits to the lattice data alone. Unitarity constraints on the Padé approximants are discussed in section 3.3. In section 4 we predict the B⁺ → η^(′)ℓ⁺ν_ℓ differential branching fraction distributions and determine the η-η′ mixing. Finally, our conclusions are presented in section 5.
Preliminary results of this study have been presented in Refs. [31,33].
B → π form factor
A form factor is an analytic function everywhere in the complex plane except for isolated poles and branch cuts. Poles correspond to single-particle intermediate states, while branch cuts originate when the energy reaches a threshold for producing multiparticle intermediate states. For the B → πℓν_ℓ decay concerning us, the lightest production threshold is located at s_th = (m_B + m_π)² GeV², lying slightly above the available kinematical energy range of the decay, 0 < q² < (m_B − m_π)² GeV². A first approximation to the form factor suggests a single-pole description driven by the exchange of a ūb intermediate state, the B* meson, with mass m_B* = 5.325 GeV (and very small width) and quantum numbers J^P = 1⁻. For illustrative purposes, let us consider the dispersive representation of the form factor in terms of q², the invariant mass of the lepton pair, given in Eq. (4), with Im f_+(s) = πρ(s). The single-pole description would correspond to using the spectral function ρ(s) = f_+(0) m²_B* δ(s − m²_B*) in Eq. (4). This gives rise to the Vector Meson Dominance (VMD) model, Eq. (5), with a B* pole appearing between the available phase space and the lowest production threshold, (m_B − m_π)² < s_p < (m_B + m_π)² GeV², where f_+(0) is a normalization constant. However, this model neglects the effects of heavier vector states. Bećirević and Kaidalov (BK) [23] proposed a modification of the VMD by including a heavier effective narrow-width resonance above q² = (m_B + m_π)² GeV² through an additional contribution to Im f_+(s). Imposing that the form factor behaves as 1/q⁴ at large q², together with f_+(0) = r_1 + r_2, leads to the standard expression for the BK form factor, Eq. (7), where α fixes the position of the second fitted effective pole. Later on, Ball and Zwicky (BZ) [24,25] proposed a similar expression in terms of three parameters {f_+(0), r, α} by imposing instead that the form factor falls off as ∼1/q² at large q². The matching f_+(0) = r_1 + r_2 and r = r_2(α − 1) leads to Eq. (8), where r may be understood as a parameter which encodes the relative weight of the second effective resonance with respect to the first one. The above two parameterizations fix the position of the B* pole to its mass, m_B* = 5.325 GeV, while the remaining free parameters, {f_+(0), α} and {f_+(0), r, α} in Eqs. (7) and (8), respectively, are inferred from fits to experimental data.
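The single-pole VMD form and the BK ansatz can be written compactly as in the following sketch; the conventions follow the common ones in the literature and may differ from Eqs. (5) and (7) only by notation (the BZ form adds the third parameter r and is omitted here).

```python
M_BSTAR = 5.325  # GeV, fixes the position of the first pole

def f_vmd(q2, f0):
    """Single-pole Vector Meson Dominance form."""
    return f0 / (1.0 - q2 / M_BSTAR**2)

def f_bk(q2, f0, alpha):
    """Becirevic-Kaidalov form: B* pole plus a second effective pole set by alpha."""
    x = q2 / M_BSTAR**2
    return f0 / ((1.0 - x) * (1.0 - alpha * x))
```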
Exploiting the analyticity and positivity properties of the vacuum polarization functions, Okubo and collaborators proposed the method of unitarity bounds [26] in the context of kaon decays, which was later applied to semileptonic B decays [28,44]. This method, called the z-parameterization and reviewed in Refs. [27,45], parameterizes f_+(q²) as an expansion in a conformal complex variable z, Eq. (9), with z(q², q₀²) defined in Eq. (10), where t_+ = (m_B + m_π)² GeV² and φ(q², q₀²) is an outer function given in Ref. [27]. The function P(q²) = z(q², m²_B*) is the Blaschke factor which accounts for the pole at q² = m²_B*. The free parameter q₀² is chosen to optimize the fit. Assuming the spectral function driving the FF to be saturated by Bπ vector intermediate states, unitarity and crossing symmetry guarantee that the coefficients a_n(q₀²) satisfy Σ_{n=0}^∞ a_n²(q₀²) ≤ 1. In practice, Eq. (9) is truncated at a finite order (typically first or second order), which implies that the FF behaves as ∼1/q⁴ at large |q²| due to φ(q², q₀²), in contradiction with perturbative QCD scaling [21,22]. Moreover, as discussed in [28], the outer function has an unphysical singularity at the Bπ production threshold t_+. This unphysical singularity may distort the behavior near the upper end of the physical region, where the FF is poorly known. These considerations triggered an alternative z-parameterization proposed in [28] by Bourrely-Caprini-Lellouch (BCL), Eq. (11), where a pole included by hand ensures the correct analytic structure in the complex plane and the proper scaling, f_+(q²) ∼ 1/q² at large q². Let us comment that the z-parameterization is not a zero-preserving transformation with respect to q² unless the particular choice q₀² = 0 is made; otherwise, z → 0 does not correspond to q² → 0 but rather to a larger q² value. This poses a word of caution when using the z-parameterization to determine the behavior of the FF at low q². We shall add here that the definition of z(q², q₀²) corresponds formally to a Quadratic approximant, a well-defined extension of a PA that includes square-root terms [31,32]. As such, it is formally a PA of order given by the truncated series, either in Eq. (9) or in Eq. (11), to which the PA convergence constraints must be applied.
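The conformal variable and a truncated pole-times-series parameterization can be sketched as follows. The z map is the standard one; the series shown is a simplified BCL-like form that does not implement the constraint the published BCL expression imposes on its highest-order coefficient, so it is illustrative only.

```python
import numpy as np

M_B, M_PI, M_BSTAR = 5.27963, 0.13957, 5.325   # GeV
T_PLUS = (M_B + M_PI) ** 2                      # B pi production threshold, GeV^2

def z_map(q2, q2_0=0.0):
    """Conformal variable mapping the cut q2 plane onto the unit disk."""
    a = np.sqrt(T_PLUS - q2)
    b = np.sqrt(T_PLUS - q2_0)
    return (a - b) / (a + b)

def f_bcl_like(q2, coeffs, q2_0=0.0):
    """B* pole times a truncated series in z (simplified; the published BCL form
    additionally constrains the highest-order coefficient)."""
    z = z_map(q2, q2_0)
    series = sum(b * z**n for n, b in enumerate(coeffs))
    return series / (1.0 - q2 / M_BSTAR**2)
```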
To complete the overview, the AFHNV approach [46,47] is based on the Omnès representation, which expresses the analytic function in terms of its phase along the boundary of the analyticity domain. If one takes into account the pole at q² = m²_B*, assumes that the FF has no zeros in the complex plane, and uses Watson's theorem, by which the phase δ(t) is equal, below the first inelastic threshold, to the phase of the I = 1/2 P-wave of πB → πB elastic scattering, one obtains the corresponding representation, valid for a large number of subtractions n (i.e., a multiply-subtracted dispersion relation in which the dispersive integral is neglected altogether). Notice that, after the multiple subtractions, the exponential behavior at large q² does not correspond to the one expected from QCD and, due to the omission of the dispersive integral, the original branch cut starting at q² = t_+ is lost.
Our proposal: Padé approximants
The form factor f_+(q²) is a Stieltjes function, i.e., a function that can be represented by an integral of the form given in Eq. (13) [48], where φ(u) is any bounded and non-decreasing function. By defining R = (m_B + m_π)² GeV², identifying dφ(u) = (1/π) Im f_+(1/u) du/u, and making the change of variables u = 1/s, Eq. (13) returns the dispersive representation of the form factor given in Eq. (4). Strictly speaking, Eq. (4) defines a meromorphic function of Stieltjes type.
Since the FF, and hence its imaginary part, is generated by the vector current, Im f_+(s) is a positive function; the requirement that φ(u) be non-decreasing is therefore fulfilled and the convergence of PAs to the FF is guaranteed.
Padé Theory not only provides a convergence theorem for a sequence of PAs to Stieltjes (or Stieltjes-type) functions, i.e., lim_{N,M→∞} |P^N_M(q²) − f_+(q²)| = 0, but also its rate of convergence [48,49], given by the difference of two consecutive elements in the PA sequence. As we will see later, this error prescription returns very small theoretical uncertainties. To be more conservative, in Refs. [37,38,39,40] we designed a different method to extract such an uncertainty, which yields errors at the level of the statistical ones. Padé approximants to a given function are ratios of two polynomials (of degrees M and N, respectively), with coefficients determined by imposing a set of accuracy-through-order conditions on the function one wants to approximate, here f_+(q²), as written in Eq. (14). We would like to remark that Eqs. (5), (7), (8), and (9) can be seen as particular elements of the general sequence of PAs given in Eq. (14).
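As an illustration of how such a rational function can be fitted in practice, the sketch below adjusts a P^2_1 approximant (quadratic numerator over one free pole) to a few synthetic form-factor points with scipy; the data values, starting guesses and error treatment are invented for the example and are not the data sets analysed in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def pade_21(q2, a0, a1, a2, sp):
    """P^2_1 approximant: quadratic numerator over a single pole at q2 = sp."""
    return (a0 + a1 * q2 + a2 * q2**2) / (1.0 - q2 / sp)

# toy usage: synthetic form-factor points generated from the model itself
rng = np.random.default_rng(1)
q2_pts = np.linspace(17.0, 26.0, 8)
errs = np.full_like(q2_pts, 0.05)
f_pts = pade_21(q2_pts, 0.26, 0.002, 1e-4, 5.325**2) + rng.normal(0.0, errs)

popt, pcov = curve_fit(pade_21, q2_pts, f_pts, sigma=errs, absolute_sigma=True,
                       p0=[0.3, 0.0, 0.0, 30.0])
```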
Besides ordinary sequences of PAs, we will also consider Padé-type approximants T^M_N(q²) and partial Padé approximants P^M_{Q,N−Q}(q²) in our study. The T^M_N(q²) have the denominator fixed in advance (by imposing the location of its zeros), while for the P^M_{Q,N−Q}(q²) only Q zeros of the denominator are fixed in advance and the rest are left free. Strictly speaking, then, VMD, BK and BZ correspond, respectively, to the T^0_1(q²), P^0_{1,1}(q²) and P^1_{1,1}(q²) elements, while the z-parameterization corresponds to a polynomial in terms of a Q^1_{1,1}, as we argued before. The main advantage of fixing a pole is that the number of parameters to fit decreases by one, which typically allows one to reach higher elements of the sequence [37]. If the sequence is long enough and the position of the first singularity is accurately known, the convergence of the T^M_N(q²) is faster than the convergence of ordinary PAs for Stieltjes functions [36,48,50].
Fits to the B → π ν BaBar and Belle data
Our first analysis consists of fitting the most recent B → πℓν_ℓ branching ratio distribution experimental data released by BaBar in 2011 [12] and 2012 [13] and by Belle in 2011 [14] and 2013 [15]. We will also briefly discuss the effect of including the CLEO 2007 results [11] into the fit, which are usually neglected. In order to facilitate the reproduction of our results, we indicate from which tables of the papers the experimental data we use are taken. For CLEO 2007 we use the results reported in Table I of Ref. [11] (the CLEO 2007 result consists of measurements of partial branching fractions in only 4 unequal q² subregions, and no bin-to-bin correlation matrix is reported; for our analysis, we have placed each experimental datum at the middle of the corresponding subregion and scaled the bin values accordingly). For Belle 2011 we use the data given in Tables III, IV and V of Ref. [14]. For BaBar 2012, we use the combined analysis of both the B⁰ and B⁻ modes assuming isospin symmetry. Finally, for Belle 2013 we employ the data given, respectively, in Tables XVII, XVIII, XIX and XX of Ref. [15]. To the latter data we have added, as suggested in Table XII of [15], a systematic uncertainty of 5.0% and 5.1% of the q² bin value for the B⁺ and B⁰ modes, respectively, and assumed a systematic correlation of 49% between the two modes, as stated in the paper below that Table. For our study, we assume, for convenience, isospin symmetry to translate the Belle 2013 data on the B⁻ mode to the B⁰ one through Eq. (15), where τ_B⁰ = (1.520 ± 0.004) × 10⁻¹² s and τ_B⁻ = (1.638 ± 0.004) × 10⁻¹² s are, respectively, the mean lifetimes of the neutral and charged B mesons [2]. In all, we treat the five experimental data sets as independent measurements, i.e., neither statistical nor systematic correlations between the five different analyses are considered [3,18]. The χ² function minimized in our first fit is defined in Eqs. (16) and (17), with Γ_B the full width of the B meson, Cov_ij the corresponding covariance matrix, and P̄^M_N(q²) = P^M_N(q²)/P^M_N(0) the PA normalized to unity at the origin of energies, whose coefficients will be determined by the fit (without any loss of generality, we take b₀ = 1 for definiteness).
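Schematically, the correlated χ² entering the minimization can be assembled as below, one covariance-weighted block per (assumed independent) data set; the normalization by the product |V_ub f_+(0)|, the bin integration and the lifetime rescaling of the B⁻ mode are left out of this sketch.

```python
import numpy as np

def chi2_block(theory, data, cov):
    """Correlated chi-square of one data set: r^T C^{-1} r."""
    r = np.asarray(theory, float) - np.asarray(data, float)
    return float(r @ np.linalg.solve(cov, r))

def chi2_total(theory_sets, data_sets, cov_sets):
    """Sum over data sets treated as mutually independent."""
    return sum(chi2_block(t, d, c)
               for t, d, c in zip(theory_sets, data_sets, cov_sets))
```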
We start fitting with ordinary PAs of the type P^M_1(q²) and P^M_2(q²), where the poles are left free to be fitted, and we reach M = 2 and M = 0, respectively. Then, we proceed to fit with sequences of the type T^M_1(q²) and P^M_{1,1}(q²) by fixing the B* pole to m_B* = 5.325 GeV, reaching, respectively, M = 2 and M = 1. In Fig. 1, we provide a graphical account of the fit as obtained with P^2_1(q²) compared to data, while our fit results are collected in the first row of Table 1. From the plot we observe that the uncertainty associated with the fit, given by the gray error band, is slightly larger in the low-q² region, while from the Table we read that the values for |V_ub| determined with approximants with two poles, i.e. P^1_2(q²) and P^1_{1,1}(q²), give results identical to the single-pole ones, P^2_1(q²) and T^2_1(q²). Then, we add the CLEO 2007 experimental data into the χ² minimization of Eq. (16) and report the corresponding fit results in the second row of Table 1; from comparison with the results shown in the first row we conclude that the effect of including these data into the fit is tiny.
Figure 1: Simultaneous fit to BaBar [12,13] and Belle [14,15] B → πℓν_ℓ experimental data as obtained from the χ² minimization of Eq. (16) with a P^2_1(q²) approximant (black solid line). CLEO data [11] are not included in the fit and are shown only for comparison.
In order to further improve on what can be learned from the experimental data, we have also fitted the data of each Collaboration separately, an exercise that will prove very illustrative for the determination of |V_ub|.
The individual fits are displayed in Fig. 2 and the corresponding results are shown, respectively, in the third (CLEO07), fourth (BaBar11), fifth (BaBar12), sixth (Belle11) and seventh (Belle13) rows of Table 1. From our set of fits collected in this Table, the diagonal and near-diagonal P^2_2(q²), P^1_{1,(1)}(q²) and P^2_{1,(1)}(q²) approximants deserve special attention. For these approximants we find some extraneous poles that are either placed far away from the origin (marked with † in the Table) or pair up with a close-by zero of the numerator, becoming what is called a defect or Froissart doublet (marked with †† in the Table), in accordance with the Nuttall-Pommerenke convergence theorem [36,48]. We would like to point out that, when individual fits to the BaBar 2012 data are performed, the zeros of the numerator tend to lie within the radius of convergence in the region of negative q², a region we expected to be free of zeros. This feature may explain why the corresponding distribution is more rounded and shows a sizable negative fall-off at the origin in comparison with the other three individual fits. We also note that a Froissart doublet appears at q² = −1 GeV².
Table 1: The product |V_ub f_+(0)| as obtained from fits to B → πℓν_ℓ data, depending on whether the B* pole is left as a free parameter or fixed at m_B* = 5.325 GeV. The corresponding |V_ub| value extracted using f_+(0) = 0.261 +0.020/−0.023 [30], the pole(s) of the approximants and the χ²_dof are also shown. Poles placed far away from the origin and Froissart doublets are denoted by † and ††, respectively. Errors are only statistical and symmetrized in the last column.
Recall that a PA to a Stieltjes function is itself a Stieltjes function [48]. As such, it must have a positive-definite imaginary part. This is not what we obtain in some of our fits, in disagreement with what is expected from a Stieltjes function. All zeros and poles of our approximants must lie along the unitarity branch cut in order to fulfill the unitarity requirements that the FF imposes. If a particular PA does not show this feature, it means that the fitted data set does not fulfill the unitarity requirements it should. Thus, both defects and the appearance of poles and zeros outside the unitarity branch cut are indications of a violation of unitarity to a certain degree. We shall come back to this point later (see section 3.3).
We would like to note that individual fits to the CLEO data lead to unrealistic results except for P^0_{1,1}(q²). Also note that the fits to the BaBar11 experimental data lead to the worst χ²/dof, in agreement with Ref. [3], and to the largest values for |V_ub|, in line with Ref. [30] but in contradiction with Refs. [3,18]. These two features are somewhat reflected in the top-left panel of Fig. 2, both in the error band and in the value of the branching ratio distribution at q² = 0, which are, respectively, wider and larger than in the other three panels of the figure. On the contrary, the fits to the BaBar12 data give the best χ²/dof and tend to give smaller |V_ub| values.
From each of the individual fits shown in Table 1 we can order the experimental Collaborations according to their |V_ub| values, from lowest to highest: BaBar12, Belle11, Belle13 and BaBar11. This ordering is in line with the corresponding |V_ub| values reported by the experimental groups from fits to their own experimental data.
The net effect of fitting all experimental data sets together, with respect to fitting the data of each Collaboration separately, can be seen in Fig. 3, where we represent the number of σ deviations of each experimental datum with respect to the corresponding fits. In this figure, solid markers account for the fit given in Fig. 1, while empty markers stand for the fits shown in Fig. 2. This allows us to order the four experimental data sets according to their increasing degree of soundness with respect to the common fit, i.e., BaBar11, Belle13, Belle11 and BaBar12. Clearly, the BaBar11 data points suffer the largest deviation when the other data sets are included in the fit (see the top-left panel in Fig. 3), while, on the contrary, the BaBar12 and Belle11 data points seem to drive the χ² minimization and dominate the fit (see the top-right and bottom-left panels, respectively, in Fig. 3). The Belle13 experimental data points show some oscillatory scatter, lying in between the BaBar11 and BaBar12/Belle11 cases.
Incorporating form factor lattice calculations
In the previous section we have not accessed the description of the form factor itself but rather its version normalized to unity at q² = 0. In order to achieve a parameterization of the form factor, we include the form factor shape predictions at large q² obtained on the lattice as new data sets to be fitted. In particular, we consider the HPQCD 2007 [16] and FNAL/MILC 2008 [17] simulations, the RBC/UKQCD results [18] and, finally, the updated analysis of the FNAL/MILC Collaboration of 2015 [3]. (While the FF predictions obtained by HPQCD 2007, MILC 2008 and RBC/UKQCD in Refs. [16,17,18], respectively, are publicly available and the corresponding results are given in the papers, the updated 2015 results of MILC are not. However, we have generated the FF from the fit given in Table XIV of Ref. [3]; for the sake of comparison with their former 2008 predictions, we have generated 12 data points placed at the same q² bins. For the extracted data points, the interested reader may contact the corresponding authors; we would like to thank Elvira Gámiz for correspondence along these lines.) The main advantage of performing a simultaneous fit to all measured q² spectra supplemented by lattice QCD results on the FF shape is that not only |V_ub| but also f_+(0) can be determined directly from the fit, since the lattice data drive the height of the curve of the decay spectra. The χ² function to be minimized is given in Eq. (18), where χ²_data corresponds to Eq. (16) and σ_i is the uncertainty corresponding to the i-th lattice bin.
The results derived from the minimization of Eq. (18) are collected in Table 2. In contrast to Table 1, Table 2 collects the results obtained with each element of the corresponding sequences, going up to P^2_1, P^2_2, T^2_1 and P^2_{1,1}, respectively. The final results, given in the last column, include both statistical uncertainties, from the fit, and systematic uncertainties from the truncation of the PA sequence, estimated as the difference between the central values of the element at which we stop the sequence and the preceding one. Notice that the systematic uncertainty increases when the B* pole is fixed.
The impact of including lattice data into the fit is evident and allows us to determine |V_ub| with improved precision, reducing the associated statistical uncertainty by ∼80% with respect to the case when only the decay spectra are fitted (cf. Table 1). In addition, the P^M_2 sequence has been enlarged by one element. Again, we find some extraneous poles for the diagonal P^2_2 and P^2_{1,1} elements. In the former, we find a complex-conjugate (c.c.) pole with a small imaginary part (see the dedicated discussion in section 3.3), while in the latter we find that one pole tends to pair up with a close-by zero of the numerator.
As an example, we gather the coefficients of the Padé approximant P^2_1(q²) in Table 3. In this Table we also provide the series coefficients b_n corresponding to the BCL parameterization (cf. Eq. (11)), obtained by matching the Taylor series expansion of P^2_1(q²) with the power series expansion of the BCL parameterization at O(q⁴). The coefficients thus obtained are not directly fitted to data but rather reconstructed from our rational function. They lie in the ballpark of the most recent RBC/UKQCD and FNAL/MILC lattice determinations [18,3], shown, respectively, in the fifth and sixth columns of the Table, and are in nice agreement with the HFLAV fit values [5] given in the last column. A graphical account of the corresponding P^2_1(q²) combined fit result is depicted in Figs. 4 and 5, compared to the decay spectra and to the FF lattice data, respectively. In the latter, our prediction for the BCL parameterization is also shown (purple dashed curve); it accommodates all lattice data rather well except for the last datum and is in nice agreement with the P^2_1(q²) element (black solid curve) it is reconstructed from. A closer look at the FF shape displayed in Fig. 5 reveals that the lattice simulations derived by the FNAL/MILC Collaboration in 2015 seem to dominate the large-q² region (cf. Table 5).
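The series-matching step can be reproduced symbolically, for instance with sympy as sketched below: the Taylor coefficients of the rational approximant around q² = 0 are read off and can then be equated, order by order, to the expansion of another parameterization. The expansion order and symbol names are illustrative; the matching quoted in the text is performed at O(q⁴) against the BCL series.

```python
import sympy as sp

q2, s_pole, a0, a1, a2 = sp.symbols('q2 s_pole a0 a1 a2')

# P^2_1 Pade approximant and its Taylor expansion around q2 = 0
pade = (a0 + a1 * q2 + a2 * q2**2) / (1 - q2 / s_pole)
taylor = sp.series(pade, q2, 0, 3).removeO()

# Taylor coefficients up to q2^2; equating them, order by order, to the expansion
# of another parameterization (written in powers of q2) fixes its coefficients
coeffs = [sp.simplify(taylor.coeff(q2, k)) for k in range(3)]
print(coeffs)   # [a0, a1 + a0/s_pole, a2 + a1/s_pole + a0/s_pole**2]
```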
Our preferred values for |V_ub| and f_+(0) from the simultaneous fit results shown in Table 2, quoted in Eq. (19), correspond to P^2_1(q²) with the pole left as a free parameter in the fit. This choice is based on the fact that the second pole of the P^M_2-type sequence appears to be spurious, indicating that a single-pole behavior of the form factor seems favored.
To compare on the same footing regarding the number of free parameters, we choose |V_ub| = 3.53(8)_stat(5)_syst × 10⁻³ and f_+(0) = 0.264(10)_stat(5)_syst, Eq. (20), corresponding to the partial Padé P^2_{1,1} with the B* pole fixed. Notice that the corresponding systematic uncertainties are large enough to cover the difference with P^2_2 and T^2_1, respectively. The impact of including the value of the FF at q² = 0, f_+(0) = 0.261 +0.020/−0.023 [30], as an external restriction in the χ² of Eq. (18) is probed through the fits displayed in Table 4 for the best fit sequences discussed previously when f_+(0) was not included in the minimization (cf. Table 2), repeated in the second column of Table 4 for ease of comparison. The corresponding fits are almost identical, which guarantees the independence of our results from the model calculation of f_+(0).
We have also performed fits including the CLEO 2007 data [11] in the χ² and found that their impact on the global fit is marginal; we hence refrain from showing them. We have further explored the effect of fitting all experimental data together with certain groups of lattice FF simulations. We have considered three groups, HPQCD+RBC/UKQCD, HPQCD+RBC/UKQCD+MILC08 and HPQCD+RBC/UKQCD+MILC2015, and collected the results, respectively, in the second, third and fourth columns of Table 5 for the P^2_1(q²) (the other sequences yield almost identical results). While the second and third columns lead to similar results, the fourth column, including the updated FNAL/MILC form factor simulation of 2015, clearly shifts the |V_ub| (f_+(0)) value upwards (downwards) by about 1.3σ, yielding smaller statistical uncertainties and slightly enlarging the χ²_dof.
Table 2: |V_ub| and f_+(0) values as obtained from a simultaneous fit to B → πℓν_ℓ decay data (BaBar and Belle [12,13,14,15]) and lattice QCD form factor simulations. The pole(s) of the approximants and the χ²_dof are also shown. Poles placed far away from the origin and Froissart doublets are denoted by † and ††, respectively, while c.c. stands for a complex-conjugate pole with a small imaginary part. The results in the last column include a systematic error coming from the difference of central values of the last two elements of the corresponding PA sequences. The errors are symmetrized.
Table 3: Coefficients of the Padé approximant P^2_1(q²), with the pole left as a free parameter, and of the reconstructed BCL parameterization, where the pole is fixed to the B*. The latter are compared with the fitted coefficients determined by the RBC/UKQCD and FNAL/MILC lattice groups [18,3] and with the HFLAV results [5].
Figure 4: Differential branching ratio distribution for B → πℓν_ℓ decays as obtained from a combined fit to experimental data and lattice predictions on the form factor shape with a P^2_1(q²) approximant (black solid curve). CLEO data [11] are excluded from the fit and shown only for comparison.
Upon comparison with the last column, we conclude that the FNAL/MILC simulations of 2015 drive the form factor while disagreeing by more than 1σ with all the other lattice simulations (including their own 2008 determination). This fact explains why the value reported in the 2016 PDG has been shifted by +1σ with respect to the earlier edition.
We close this section by performing fits to the lattice FF predictions alone and extracting f_+(0). This kind of exercise is new and, as a byproduct, allows us to determine |V_ub| by equating the corresponding expression for the branching ratio (BR) to the measured one, BR(B⁰ → π⁻ℓ⁺ν_ℓ) = (1.45 ± 0.05) × 10⁻⁴ [4]. We only obtain reliable results when at least the HPQCD 2007 and MILC 2015 predictions are included in the fitted data sets and for approximants with two poles. The corresponding fit results are gathered in Table 6.
Unitarity constraints on the PA fits
All zeros and poles of our approximants must lie along the unitarity branch cut in order to fulfill the unitarity requirements that the FF imposes [48]; defects and the appearance of poles and zeros outside this cut indicate a violation of unitarity to a certain degree. Since we have performed a dedicated analysis Collaboration by Collaboration, bin by bin, and since we have found some cases which slightly violate these two statements, mostly when the BaBar 2012 data are involved, we are able to identify the source of the unitarity deviation.
Figure 5: B → π form factor as obtained from a combined fit to experimental data and lattice predictions on the form factor shape with the approximant P^2_1(q²) (black solid curve). Our prediction for the BCL parameterization is also shown (purple dashed curve).
We find either complex-conjugate poles with a small imaginary part or zero(s) within the radius of convergence for the P^0_2(q²) and P^{1,2}_1(q²) elements, respectively, when fitting the BaBar 2012 data set individually. In particular, the complex-conjugate pole of P^0_2(q²) is found at 5.72 ± i0.53 GeV, while the zeros of the numerator are placed at −4.88 GeV and −4.40 GeV for the P^1_1(q²) and P^2_1(q²) elements, respectively (the second zero of P^2_1(q²) is placed at 12.97 GeV, far away from the origin). A complex-conjugate pole with a small imaginary part also shows up in the P^2_2(q²) element when performing the joint fit to data and lattice. In order to further explore the origin of these extraneous poles and zeros, we have also performed fits removing one experimental datum from each Collaboration, e.g. those with more tension according to our Figures 2 and 3, to see what can be learned.
In particular, we remove the fifth datum of BaBar 2011, the tenth of BaBar 2012 and of Belle 2011, and the bin located at 9 GeV² of Belle 2013. By doing this, we find that the zeros tend to move away from the radius of convergence while the complex-conjugate poles become cancelled by a close-by zero of the numerator, i.e., a Froissart doublet.
The impact of these four points is remarkable: removing them induces a positive shift of |V_ub| by about Δ|V_ub| = 0.05 × 10⁻³, a 0.4σ deviation.
The breaking of unitarity thus reduces the value of |V_ub|, enlarging the discrepancy between the inclusive and exclusive determinations. In view of this fact, and given the difficulty of deciding the best strategy to take this unitarity violation into account when dealing with experimental data (strategies other than removing bins could be envisaged), we have decided to add the difference Δ|V_ub| in quadrature as an extra source of error in our final determination of the CKM parameter, so that Eq. (19) becomes |V_ub| = 3.53(8)_stat(6)_syst × 10⁻³. This error could be removed as soon as the experimental Collaborations take our observation into account and systematically explore the potential unitarity violation within their data sets.
B⁺ → η^(′)ℓ⁺ν_ℓ decays and η-η′ mixing
In the previous section, the B → π form factor f_+(q²) has been parameterized using PAs to fit experimental data on the B → πℓν_ℓ differential branching ratio distribution, with and without lattice FF simulations. In this section, we take advantage of these parameterizations to describe the B⁺ → η^(′)ℓ⁺ν_ℓ decays, as discussed in the following.
The expression for the differential B⁺ → η^(′)ℓ⁺ν_ℓ decay width is given by the same expression as for the B → πℓν_ℓ decay mode in Eq. (3), replacing the final-state pion by the η^(′), where now f^{B⁺η^(′)}_+(q²) represents the hadronic B⁺ → η^(′) transition form factor. What the B⁺ → η^(′) transition probes is the light-quark content of the η^(′) mesons, since the ss̄ component can only be accessed via a B_s meson decay. This is so because, from the quark-flavour perspective, the η^(′) mesons are an admixture of uū, dd̄ and ss̄ components. Defining |η_q⟩ = (1/√2)|uū + dd̄⟩ and |η_s⟩ = |ss̄⟩ in this quark-flavour basis, one can relate the mathematical |η_{q,s}⟩ states to the physical |η^(′)⟩ ones through the rotation |η⟩ = cos φ |η_q⟩ − sin φ |η_s⟩, |η′⟩ = sin φ |η_q⟩ + cos φ |η_s⟩, Eq. (21), where the mixing angle φ gives the degree of admixture.
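In the single-angle quark-flavour scheme assumed here, the rotation between the flavour and physical bases can be written as in the sketch below; the sign conventions follow the common FKS-type choice and should be checked against Eq. (21) before use.

```python
import numpy as np

def eta_mixing(phi_deg):
    """Rotation from the flavour basis (eta_q, eta_s) to the physical (eta, eta')."""
    phi = np.radians(phi_deg)
    return np.array([[np.cos(phi), -np.sin(phi)],
                     [np.sin(phi),  np.cos(phi)]])

# light-quark (eta_q) content of the physical eta and eta' for phi = 38.3 degrees
R = eta_mixing(38.3)
cos_phi, sin_phi = R[0, 0], R[1, 0]   # weights multiplying the eta_q component
```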
Contrary to f^{Bπ}_+(q²), there are, to the best of our knowledge, no lattice FF simulations of f^{B⁺η^(′)}_+(q²), and only a few calculations at q² = 0 exist [8,24,54].
Therefore, we relate f^{B⁺η^(′)}_+(q²) to f^{Bπ}_+(q²) using the quark-flavour basis. Assuming isospin symmetry between the u and d quarks, the form factor f^{B⁺η^(′)}_+(q²) can be related to f^{B⁺π⁰}_+(q²) through Eq. (23) [55], identifying the light-quark (uū + dd̄) component of the η^(′) with the π⁰ as in Refs. [40,56]. From Eqs. (21) and (23), and taking f^{Bπ}_+(q²) from any of the descriptions given in Table 2 together with the corresponding values for |V_ub|, we can describe the differential branching ratio distribution of the B⁺ → η^(′)ℓ⁺ν_ℓ decays by setting the numerical value of the η-η′ mixing angle to, for example, φ = 38.3(1.6)° [40]. Our prediction for the B⁺ → ηℓ⁺ν_ℓ differential branching fraction distribution is shown and compared with the BaBar 2012 measurements in 5 bins of q² in Fig. 6 for P^2_1(q²) (black solid curve). Our description is in fair accordance with the data, although the second experimental datum seems to be slightly in tension. In Fig. 6 we also show our prediction for the B⁺ → η′ℓ⁺ν_ℓ branching ratio distribution (blue solid curve), in this case without any experimental data to compare with.
The corresponding results are collected in Table 7. We observe that the central values show some scatter, though they all agree within errors due to the large uncertainties entering Eq. (25). In order to extract the mixing angle φ from B → η^(′) transitions with more precision, measurements of these decays with higher precision are required.
As a final exercise, we would also have liked to fit the individual BaBar 2012 B⁺ → ηℓ⁺ν_ℓ experimental data, or to perform a combined fit to the B⁺ → π⁰ℓ⁺ν_ℓ and B⁺ → ηℓ⁺ν_ℓ decays, with the goal of providing an alternative semileptonic charmless B-decay determination of |V_ub|. However, due to the poor experimental situation in the case of B⁺ → ηℓ⁺ν_ℓ, we decided to postpone this analysis to the future.
Conclusions
In this paper we have reexamined the B → πℓν_ℓ decays to extract the CKM parameter |V_ub| based on experimental data, lattice calculations and unitarity constraints on the participant form factor. Contrary to the most commonly used z-expansion and Vector Meson Dominance models, we perform our analysis based on the method of Padé approximants, after realizing that most of the recent previous analyses belong to Padé Theory, even though none of them mentions it. Thus, the rules and constraints imposed by the convergence theorems for Padé approximants to the form factor, so far neglected, are fully exploited here, allowing us to ascribe to our final result a new source of systematic or truncation error.
From our dedicated analysis we obtain |V_ub| = 3.53(8)_stat(6)_syst × 10⁻³. This quantity includes both statistical uncertainties, from the fitted data, and systematic ones, from the truncation of the Padé sequence, and has been obtained while guaranteeing independence from the model calculation of f_+(0) used as an external constraint.
In a first stage, after a detailed review of the state-of-the-art experimental data, determinations of |V_ub| and theoretical representations of the analytical structure of the form factor, we have analyzed the measured q² differential branching ratio distribution experimental data released by the BaBar and Belle Collaborations. Our fitting strategy started by performing a combined analysis of all data sets using different types of Padé sequences. We have thus first determined the product |V_ub f_+(0)| directly from the fits and then extracted the CKM element |V_ub| by invoking external theoretical information on f_+(0). The resulting fits are presented in Table 1 and a graphical account is provided in Fig. 1. We have then carried out a detailed analysis Collaboration by Collaboration. The outcome of the individual fits is displayed in Fig. 2, and the net effect on each experimental datum of fitting all experimental data together, with respect to fitting the data of each Collaboration separately, is shown in Fig. 3. This exercise allows us to classify the four differential experimental data sets according to their increasing degree of robustness: BaBar 2011, Belle 2013, Belle 2011 and BaBar 2012.
In a second stage, we have included in the analysis the four available lattice QCD predictions for the form factor shape. These data dominate the large-q² region and are essential for a precise determination of |V_ub|. The corresponding fit results are collected in Table 2, indicating that the statistical uncertainty associated with |V_ub| is reduced by ∼80% after the inclusion of lattice data. We have also found that, out of the four lattice form factor simulations, the predictions released by the MILC Collaboration in 2015 tend to drive the form factor (see Table 5), while slightly enlarging the χ²_dof. As a byproduct of our analysis, we have predicted the BCL form factor series coefficients obtained by matching the corresponding Taylor series expansions. The coefficients thus obtained are shown and compared with the determinations given by the lattice groups in Table 3, while the q² shape of the reconstructed BCL parameterization is displayed in Fig. 5, demonstrating the ability of the Padé approximants to describe this transition.
In a third stage, motivated by the impact of the lattice data, we have also explored fits to the lattice predictions alone. The fit results are shown in Table 6, reflecting that only those approximants with two poles have the ability to first extract f_+(0) and then determine |V_ub| by equating the theoretical expression for the branching ratio to the corresponding experimental measurement.
Our central result, |V_ub| = 3.53(8)_stat(6)_syst × 10⁻³, is presented and compared with other determinations, using other methods and fitted data sets, in Fig. 7. We would like to remark on two features concerning this value that are related to the use of Padé approximants. The first one is that the central value tends to fall slightly below the values determined with the z-expansion parameterization in the studies carried out in recent years. The second one is that the method allows us to ascribe a systematic uncertainty from the truncation of the Padé sequence. In fact, the z-parameterizations also allow one to attribute a systematic error following the same reasoning; however, in practice, it has not usually been considered so far. For example, based on our criterion, the result obtained by the FNAL/MILC Collaboration in 2015 would read |V_ub| = 3.72(16)_stat(9)_syst × 10⁻³, where the systematic uncertainty stems from the differing results for N = 3, 4 (cf. Eq. (11)). In our study, the ascribed systematic uncertainty includes, for the first time, an additional conservative source of error due to the unitarity constraints discussed in section 3.3. These constraints have to do with the appearance of extraneous poles and zeros outside the unitarity branch cut and might indicate, to a certain degree, violations of unitarity.
As a final concluding remark for the B → πℓν_ℓ decays, we would like to point out that, contrary to the z-expansion and VMD models where the B* pole position is fixed to 5.325 GeV in advance, a very competitive value for |V_ub| can be extracted without imposing any information on its position, as we have shown along the lines of our detailed analysis.
In the second part of this work, we addressed the B+ → η^(') ℓ+ν decays, taking advantage of the B → π form factor parameterizations derived in the first part. In particular, we relate the participating Bη^(') form factors to the Bπ one by a single Euler-angle rotation, assuming that the light-quark component of the η^(') is, to a large extent, a qq̄ pion. Under this simple assumption, we obtain a reliable prediction for the differential branching ratio distribution of the B+ → η ℓ+ν decay, shown in Fig. 6 and compared to the BaBar measurement in 5 bins of q^2 released in 2012. As a byproduct of our study, we have also extracted the η-η' mixing angle. This quantity, however, carries a large statistical error due to the large uncertainty on the measured B+ → η^(') ℓ+ν branching ratios. Regarding our prediction for the B+ → η' ℓ+ν decay distribution, there is so far no experimental data to compare with. In order to go beyond the simple quark-flavour basis decomposition and extract the η-η' mixing angle more precisely, we would like to encourage experimental groups to measure these semileptonic B+ → η^(') ℓ+ν transitions with improved precision.

[Fig. 7 caption: |Vub| determinations from Ref. [53] and this work, from B → ω ℓν and B → ρ ℓν (Bharucha 2015 [58]), from Λb → p µν_µ (LHCb [59]), and from indirect fits (UTfit 2016 [60] and CKMfitter 2015 [61]); the solid and dashed error bars account, respectively, for the statistical and systematic uncertainties.]
Molecular biomarkers in Batagay megaslump permafrost deposits reveal clear differences in organic matter preservation between glacial and interglacial periods
Abstract. The Batagay megaslump, a permafrost thaw feature in north-eastern Siberia, provides access to ancient permafrost up to ∼ 650 kyr old. We aimed to assess the permafrost-locked organic matter (OM) quality and to deduce palaeo-environmental information on glacial-interglacial timescales. We sampled five stratigraphic units exposed on the 55 m high slump headwall and analysed lipid biomarkers (alkanes, fatty acids and alcohols). Our findings revealed similar biogeochemical signatures for the glacial periods: the lower ice complex (Marine Isotope Stage (MIS) 16 or earlier), the lower sand unit (sometime between MIS 16-6) and the upper ice complex (MIS 4-2). The OM in these units has a terrestrial character, and microbial activity was likely limited. Contrarily, the n-alkane and fatty acid distributions differed for the units from interglacial periods: the woody layer (MIS 5), separating the lower sand unit and the upper ice complex, and the Holocene cover (MIS 1), on top of the upper ice complex. The woody layer, marking a permafrost degradation disconformity, contained markers of terrestrial origin (sterols) and high microbial decomposition (iso- and anteiso-fatty acids). In the Holocene cover, biomarkers pointed to wet depositional conditions, and we identified branched and cyclic alkanes, which are likely of microbial origin. Higher OM decomposition characterised the interglacial periods. As climate warming will continue permafrost degradation in the Batagay megaslump and in other areas, large amounts of deeply buried ancient OM with variable composition and degradability are mobilised, likely significantly enhancing greenhouse gas emissions from permafrost regions.
Introduction
Rapid warming of the terrestrial Arctic leads to widespread permafrost thaw. This can mobilise organic matter (OM) and results in greenhouse gas release, which contributes to the permafrost-carbon climate feedback (Schuur et al., 2015). The global permafrost region contains roughly half of the world's soil carbon (3350 Gt) and, in addition, a large deep permafrost carbon pool (> 3 m), which is often not accounted for and whose amount is uncertain (∼ 500 Gt) (Strauss et al., 2021). While it has been estimated that gradual permafrost thaw might contribute up to 208 Gt of carbon to the atmosphere by 2300 (McGuire et al., 2018), abrupt permafrost thaw processes, such as the formation of retrogressive thaw slumps and thermokarst development, could release an additional 80 ± 19 Gt of carbon into the atmosphere (Turetsky et al., 2020). Abrupt thaw processes occur on local to regional scales and are difficult to capture, which is why they have not yet been implemented in climate models.
Retrogressive thaw slumps are a result of slope failure following the thaw of ice-rich permafrost. They develop rapidly and can displace large quantities of ice and/or water, sediments and OM (Lewkowicz, 1987; Lantuit and Pollard, 2005; Tanski et al., 2017). Thaw slumps typically consist of a nearly vertical headwall, a slump floor and a lobe and are often situated along rivers or coasts. Triggers for their formation include lateral or thermal erosion by water (Kokelj et al., 2013); active layer detachment following heavy rainfall (Lacelle et al., 2010); and human activity such as road construction, mining or deforestation. Once initiated, thaw slumps can develop very rapidly due to the constant removal of thawed material by meltwater streams and due to changes in vegetation, snow cover and albedo, leading to further intense permafrost degradation.
The Batagay megaslump in East Siberia is the largest known retrogressive thaw slump on Earth (roughly 1.8 km long and 0.9 km wide in 2019) and developed over the last ∼ 5 decades (Kunitsky et al., 2013). The megaslump provides access to ancient permafrost deposits, with stratigraphical discordances, including the second-oldest directly dated permafrost in the Northern Hemisphere. This makes the large slump headwall an ideal target for palaeo-environmental studies, including cryostratigraphy, sedimentology and chronology (Ashastina et al., 2017; Murton et al., 2017, 2022); ground ice stable isotopes (Vasil'chuk et al., 2020); pollen and plant macroremains (Ashastina et al., 2018); and ancient DNA (Courtin et al., 2022).
The study of lipid biomarkers has proven useful in previous work to characterise permafrost OM and carbon cycling as well as to trace permafrost thaw (Zech et al., 2010; Strauss et al., 2015; Elvert et al., 2016; Stapel et al., 2016; Jongejans et al., 2018, 2020; Martens et al., 2020; Bröder et al., 2021; Yao et al., 2021). With the present study we aim (1) to explore the source and preservation of biomarkers in permafrost on geologic timescales during several glacial and interglacial periods and (2) to deduce the past floral and microbial sources of the still preserved OM in order to characterise palaeo-environments of OM deposition. To our knowledge, we present the first OM signatures, i.e. biomarkers, preserved in ancient permafrost since about 650 ka.
Study site
The Batagay megaslump (67.58° N, 134.77° E) close to the village of Batagay is located in the Yana Uplands, part of the Yana-Oymyakon mountain region (interior Yakutia; Fig. 1a). This region is characterised by the most continental climatic conditions of the Northern Hemisphere, manifesting in an extreme climate with a mean winter (December to February) temperature of −40.0 °C, a mean summer (July to August) temperature of 13.7 °C and a mean annual temperature of −12.4 °C (period 1988-2017). For the same time period, mean annual precipitation was 203 mm, with mean summer precipitation of 106 mm. Since the mid-20th century, both temperature and precipitation have significantly increased. The permafrost in this region is continuous and ∼ 200 to 500 m thick with mean annual ground temperatures of −8.0 to −5.5 °C. The seasonally thawed uppermost (active) layer is between 0.2 and 1.2 m thick, depending on vegetation type. The modern vegetation is dominated by open larch forest (Larix gmelinii), and Siberian dwarf pines (Pinus pumila) and birch trees (Betula exilis, B. divaricata and sparse B. pendula) are common. The ground is covered by a thick layer of lichens and mosses, and almost no grasses and herbs are present (Ashastina et al., 2018; Murton et al., 2022).
The Batagay megaslump is located on an east-facing hillslope and developed after anthropogenic disturbance of the protective vegetation cover in the middle of the 20th century (Kunitsky et al., 2013). A gully formed in the 1960s that grew progressively wider and deeper and developed into a retrogressive thaw slump. In spring 2019, the slump diameter, which was determined using a UAV survey (Jongejans et al., 2021b), was about 890 m. Growth rates are fast, with spatially and temporally varying headwall retreat rates of 7 to 30 m yr −1 (Kunitsky et al., 2013; Günther et al., 2015; Vadakkedath et al., 2020). The ∼ 55 m high headwall and the slopes of the slump provide access to stratigraphically discontinuous ancient permafrost deposits dating back to the Middle Pleistocene. The headwall consists of six stratigraphical units from bottom to top: the lower ice complex (Marine Isotope Stage (MIS) 16 or earlier); the lower sand unit (sometime between MIS 16 and 6); the woody layer (MIS 5), which was present as lenses up to 3 m thick; the upper ice complex (MIS 4-2), also called Yedoma; the upper sand unit (MIS 3-2); and the Holocene cover on top (MIS 1) (Ashastina et al., 2017; Murton et al., 2022). It should be noted that there are large hiatuses (marked by erosional surfaces below and above the lower sand unit) and dating uncertainties in the chronostratigraphy. While the ancient permafrost buried deep in the ground has survived multiple interglacials, the region has been subject to repeated permafrost thaw and sediment removal by thermo-erosional processes, amplified in recent decades.
Sample collection
The slump headwall was sampled during a spring expedition to Batagay in March and April 2019 (Fig. 1b and c) (Jongejans et al., 2021b). The samples were taken by rappelling with a rope from the top of the slump headwall to each sample location and then using a hole saw (diameter 57 mm, 40 mm deep) mounted on a handheld power drill to sample small horizontal cores of frozen sediments exposed in the headwall. Sample depth is given in metres below the surface (m b.s.) (Fig. S1 in the Supplement). At each sampled depth, three cores were taken next to each other for biomarker, sedimentological and ancient DNA analyses. Sampling resolution was 0.5 m in the top 10 m and 1 m below. Due to the presence of large ice wedges, profile 1 consisted of four subprofiles (Figs. 1c and A1). Using a hammer, axe and chainsaw, more profiles were sampled at the lower part of the headwall from the slump bottom (profile 2; Fig. A2), as well as at two large permafrost blocks at the slump bottom that had fallen from the headwall (profiles 3 and 4; Figs. A3 and A4, respectively), and at a baidzherakh (thermokarst mound) in the north of the slump (profile 5) (Figs. 1b and c and A5). All samples were stored in sterilised glass jars and kept frozen until laboratory analyses at the Alfred Wegener Institute (AWI) Potsdam. A total of 30 samples (19 from profile 1; 5 from profile 2; and 2 each from profiles 3, 4 and 5) were selected for biomarker analysis. With these profiles, we covered five of the six stratigraphical units (all but the upper sand unit, which is not exposed in the central headwall). As we have no detailed sample depth information from the blocks and the baidzherakh, we report the results according to the respective stratigraphic units.
Laboratory analyses
The samples were freeze-dried, and after homogenisation of the samples, the total carbon (TC), the total organic carbon (TOC; vario TOC cube elemental analyser) and the total nitrogen (TN) contents were measured (rapid MAX N exceed elemental analyser) and expressed in weight percent (wt %).
Samples were treated for biomarker analysis as described by Jongejans et al. (2021a): after extraction of the OM (Dionex ASE 350) and removal of asphaltenes, four internal standards were added and the extracts were separated by medium-pressure liquid chromatography (MPLC; Margot Köhnen-Willsch Chromatography, Jülich) into aliphatic, aromatic and polar NSO (nitrogen-, sulfur- and oxygen-containing) fractions (for details see Radke et al., 1980). We selected 10 samples for further separation of the NSO fraction into an acid and a neutral polar fraction using a KOH-impregnated silica gel column (Schulte et al., 2000). This sample selection was based on the biogeochemical parameters, as well as on the aim of covering the entire profile.
We measured alkanes, fatty acids (FAs) and alcohols using a TRACE 1310 gas chromatograph coupled to a TSQ 9000 mass spectrometer (Thermo Scientific), following the same method and settings as described in Jongejans et al. (2021a). Prior to the measurements, the fatty acid fraction was methylated using diazomethane and the alcohol fraction was trimethylsilylated using N-methyl-N-(trimethylsilyl)trifluoroacetamide (MSTFA). We quantified the compounds relative to the internal standards from full-scan mass spectra (m/z 50-600 Da, 2.5 scans s −1) using the software Xcalibur.
We calculated indices from the n-alkane and n-FA concentrations (Table 1) to obtain insights into OM origin and preservation: the average chain length (ACL), the proxy for aquatic OM (P aq), the carbon preference index (CPI), the ratio of iso- and anteiso-branched relative to long-chain n-FAs (IA), and the higher-plant fatty acid (HPFA) index. The ACL can be used as an indicator of OM source, since long-chain n-alkanes (> 25) are mostly produced by terrestrial higher plants (Poynter and Eglinton, 1990; Ficken et al., 1998; Zech et al., 2010). Variations in the ACL can be caused by different plant type material and by climatically induced changes in the environmental conditions. For example, different temperature and wetness conditions as well as the length of the vegetation period can influence the long-chain n-alkane distribution (e.g. Sachse et al., 2006). P aq shows the share of OM derived from aquatic plants, which are thought to contain more C 23 and C 25 n-alkanes compared to terrestrial plants, which generally have longer chains (Ficken et al., 2000). In addition, Sphagnum mosses are also dominated by n-C 23 and n-C 25. The CPI expresses the ratio of the odd over even n-alkane chains and decreases with OM decomposition (Marzi et al., 1993). We calculated the IA using the iso- and anteiso-branched FAs C 15 and C 17, representing bacterial biomass, relative to long-chain n-FAs representing the terrestrial OM. This ratio is thought to reflect changes in microbial abundance (and presumably activity) with respect to the terrestrial background biomass, where a higher ratio may correspond to microbial membrane adaptation to warmer environmental conditions (Rilfors et al., 1978; Stapel et al., 2016). Finally, the HPFA index was used to indicate the level of OM degradation: due to the presence of the polar carboxyl group, FAs are more vulnerable to biological and chemical degradation (Killops and Killops, 2013) compared to the respective n-alkanes, leading to decreased HPFA values with decomposition.
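Table 1 lists the exact equations behind these indices; as a hedged illustration of how such ratios are typically computed, the short sketch below implements conventional forms of ACL, P aq and a Marzi-type CPI from n-alkane concentrations keyed by carbon number. The chain-length windows and the example concentrations are illustrative assumptions and may differ from the definitions used in Table 1.

```python
# Minimal sketch: biomarker indices from n-alkane concentrations (ug per g TOC).
def acl(conc, chains=range(23, 34, 2)):
    """Average chain length over odd long-chain n-alkanes."""
    return sum(i * conc.get(i, 0.0) for i in chains) / sum(conc.get(i, 0.0) for i in chains)

def p_aq(conc):
    """Aquatic-plant proxy after Ficken et al. (2000)."""
    num = conc.get(23, 0.0) + conc.get(25, 0.0)
    return num / (num + conc.get(29, 0.0) + conc.get(31, 0.0))

def cpi(conc, lo=25, hi=33):
    """Carbon preference index (odd-over-even dominance), Marzi-type form."""
    odd = sum(conc.get(i, 0.0) for i in range(lo, hi + 1, 2))
    even_a = sum(conc.get(i, 0.0) for i in range(lo - 1, hi, 2))
    even_b = sum(conc.get(i, 0.0) for i in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_a + odd / even_b)

# Made-up example concentrations, not measured Batagay values
alkanes = {23: 20, 24: 5, 25: 35, 26: 6, 27: 60, 28: 8, 29: 90, 30: 9,
           31: 70, 32: 6, 33: 25, 34: 4}
print(round(acl(alkanes), 1), round(p_aq(alkanes), 2), round(cpi(alkanes), 1))
```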
Lower ice complex
This lowermost exposed sediment sequence consisted mostly of sandy silt to silty sand. The lower ice complex (profile 2, 53.1-52.0 m b.s.) contained partly truncated ice wedges and composite wedges. A reddish erosional layer containing gravel marked the top of the lower ice complex. In places, a similar layer cuts through the lower ice complex at an angle. Here, we found pool ice and wooden remains. The TOC (0.69 wt %-0.83 wt %) and the TN (0.10 wt %-0.11 wt %) were very low in this unit (Figs. 2 and S1), and the C/N ratio ranged from 6.4 to 7.5 (Jongejans et al., 2022c). The concentrations of short n-alkanes (47-75 µg g −1 TOC), long n-alkanes (213-405 µg g −1 TOC), and branched and cyclic alkanes (46-161 µg g −1 TOC) were also quite low in this unit. The ACL ranged between 28.5 and 29.2 and the P aq from 0.14 to 0.23. The CPI varied between 6.4 and 7.6. The main ...

Table 1. Abbreviations and equations of calculated indices from n-alkane and n-fatty acid (n-FA) concentrations (columns: Index, Name, Equation).
Lower sand unit
The lower sand unit (profile 2, 51.5-51.0 m b.s.; profile 1, 49.4-38.4 m b.s.; one sample of profile 4) was characterised by narrow chimney-like composite ice-sand wedges. The TOC was higher (0.65 wt % to 1.36 wt %) compared to the lower ice complex, and the TN was comparably low (< 0.10 wt %-0.13 wt %). The C/N ratio ranged from 7.6 to 10.8; it could only be calculated for the samples with a TOC and TN content above the detection limit. The alkane concentrations ranged between 13 and 145 µg g −1 TOC for the short n-alkanes, 140 and 1329 µg g −1 TOC for the long n-alkanes, and 41 and 553 µg g −1 TOC for the branched and cyclic alkanes. The ACL and P aq ranged from 28.6 to 29.2 and 0.11 to 0.20, respectively. The CPI ranged between 7.2 and 11.5. The concentrations of short-chain n-FAs spanned a large range from 130 to 432 µg g −1 TOC, and the long n-FAs ranged from 214 to 447 µg g −1 TOC. The IA was at the low end (0.04 to 0.08), and the HPFA index was between 0.18 and 0.58.
Woody layer
The woody layer (profile 1, 33.5-31.7 m b.s.; one sample each of profiles 3 and 4) was present in lenses up to 3 m thick. This debris layer was abundant in organic remains, peat lenses, roots and wood. The TOC (1.47 wt % to 4.93 wt %) and TN (0.12 wt % to 0.40 wt %), as well as the C/N ratio (12.4 to 16.7), were highest in this unit. Here, the short n-alkanes and branched and cyclic alkanes were scarce (13-71 and 16-132 µg g −1 TOC, respectively; Fig. S2), but the long-chain n-alkanes covered a large range (194-1841 µg g −1 TOC). The ACL (28.3-30.4) had its maximum in this unit and the P aq (0.07-0.25) its minimum (both at 31.7 m b.s. in profile 1). The CPI was moderate to high (6.5 to 18.3). In this unit, we analysed the neutral fraction of one sample: the sample at 31.7 m b.s. from profile 1. In this sample, the iso- and anteiso-FAs (as well as the unsaturated FAs) were most abundant (Fig. S3 in the Supplement) and, therefore, the IA value was the highest (0.41). The FA concentrations were 328 µg g −1 TOC for the short-chain and 313 µg g −1 TOC for the long-chain n-FAs. The HPFA index was very low (0.09). Furthermore, we found many different sterols and triterpenoids in this sample (Table 2). The gas chromatogram and molecular structures can be found in the Supplement (Figs. S4 and S5). In the samples from the other units (n = 9), we found only the sterols campesterol and β-sitosterol.
Upper ice complex -Yedoma
The upper ice complex (profile 1, 30.7-4.2 m b.s.; one sample of profile 3; profile 5) contained large (up to a few metres wide) syngenetic ice wedges. The TOC (0.66 wt %-2.36 wt %) and TN (< 0.10 wt %-0.24 wt %) contents were moderately high compared to the other units. The C/N values (7.4-11.7) were very similar to those of the lower sand unit. Alkane concentrations spanned a wide range in the upper ice complex: 16-497 µg g −1 TOC for the short-chain n-alkanes, 68-1620 µg g −1 TOC for the long n-alkanes, and 8-1302 µg g −1 TOC for the branched and cyclic alkanes. The ACL and P aq spanned quite a wide range (28.6 to 29.2 and 0.11 to 0.20, respectively). The CPI was low to moderate in this unit (5.11 to 12.3). The n-FA concentrations were also quite variable, with the short-chain n-FAs ranging between 144 and 262 µg g −1 TOC and the long-chain n-FAs between 294 and 666 µg g −1 TOC. The IA index was very low (0.03 to 0.05) and the HPFA index low to medium (0.13 to 0.36).
Holocene cover
The Holocene cover unit (profile 1, 2.0-0.2 m b.s.) seemed quite organic-rich and contained a variety of cryostructures (e.g. massive, porphyric, basal, belt-like and layered). Nevertheless, the TOC (0.39 wt % to 0.63 wt %) and TN (< 0.10 wt %) values were very low. Due to the TN values below the detection limit, we could not calculate the C/N values of this unit. Especially the branched and cyclic alkanes were very abundant (790-1422 µg g −1 TOC), whereas the short-chain (211-295 µg g −1 TOC) and long-chain (669-972 µg g −1 TOC) n-alkanes were moderately high. The ACL (27.5-28.4) was lowest in this unit and the P aq the highest (0.24-0.37) in all profiles. The CPI was also the lowest and ranged from 3.7 to 5.7.
Discussion
Variations in the TOC contents and fossil biomolecule concentrations along the sedimentary succession provide insights into quantitative differences in the buried OM deposited over time. These differences are mainly caused by changes in the depositional regime (e.g. water availability, temperature, accumulation rates), the associated bioproductivity (autochthonous signal) and transport processes of the OM (allochthonous signal) across different climatic periods (e.g. glacial and interglacial periods). Additionally, qualitative variations in the fossil biomolecules, captured for example by the biomarker indices ACL and P aq, can give insight into different OM sources. Indicative biomarkers are a useful tool in these old sediments as they are generally very well preserved, even on geological timescales, compared to, for example, sugars, proteins and DNA.
Biogeochemical legacy of glacial periods
In the Batagay dataset, we found generally only minor variations in the biogeochemical and biomarker parameters for the lower ice complex, lower sand unit and upper ice complex. This suggests that the OM signal representing permafrost deposits since about 650 ka is qualitatively similar, indicating that vegetation patterns might have been similar over time in glacial periods. These observations fit well with the palaeovegetation records of Ashastina et al. (2018). They found that meadow-steppe vegetation persisted throughout most of the reconstructed period (i.e. lower sand unit and upper ice complex) and argued that fossil plant macro-remains mirror mostly changes in the relative abundance of plant communities rather than complete changes in plant species compositions over time (Ashastina et al., 2018). Such relative quantitative variations in the vegetation might be responsible for the observed variability in individual biomolecule markers (e.g. n-alkanes and FAs). For the MIS 3 and MIS 2 deposits, Courtin et al. (2022) confirmed the open steppe-tundra landscape by sedimentary DNA analyses; they revealed that herb communities dominated the glacial vegetation, and they found traces of megaherbivores corresponding to this landscape. The generally higher ACL (> 28) and lower P aq in these units indicate a higher-plant and less aquatic or mossy character of the OM in these deposits. This corroborates the strong continentality and dry conditions, especially during the cold stages, as found by isotopic and palaeo-ecological analyses (Ashastina et al., 2018; Opel et al., 2019). The relatively low IA index presumably points to lower microbial activity during the glacial periods.
Cryostratigraphic observations and isotopic findings suggest that the lower ice complex sediments might have been deposited under relatively wet conditions, providing enough snowmelt water to form huge ice wedges. These findings suggest that these sediments were deposited during a glacial period. In contrast, shotgun DNA analyses from sediments taken in 2017 from the upper part of the lower ice complex just below the erosional surface (sample B17-D3) point to an interglacial origin of the deposited OM (Courtin et al., 2022). Courtin et al. (2022) suggested that the environment was characterised by forested vegetation but that there were also more open, herb-dominated areas with large herbivores. Pollen findings (Andrei A. Andreev, unpublished data) of the same samples from the lower ice complex at its transition into the above-lying erosional surface point to woodland and steppe vegetation, characteristic of an interglacial period that might have induced thermo-erosion and permafrost thaw that partly degraded the lower ice complex from above. In the sediments above the erosional surface, in the lower sand unit (sample B17-D5), Courtin et al. (2022) detected small mammals and forest-specific insect families supporting dense forest vegetation. Furthermore, they found signs of strong microbial activity related to soil decomposition, such as members of the fungus Pseudogymnoascus, which are related to decaying roots or plants, and aerobic bacteria (Nocardioidaceae and Clostridia), which are considered to be consumers of OM. In contrast to this transition layer, the samples of the underlying lower ice complex taken in 2019 cover the entire exposed sequence, and our biochemical and biomarker results do not differ among the lower ice complex, the lower sand unit and the upper ice complex. Therefore, we assume that all three sequences formed during glacial periods. Moreover, we found relatively low values for the IA index in the lower ice complex deposits, suggesting low microbial activity. Possibly the samples from the lower ice complex (2017 and 2019) represent a transition from a glacial to an interglacial period, the latter of which is represented in the erosional surface topping the lower ice complex. Apart from the erosional surface above the lower ice complex (Fig. 4 from Opel et al., 2019), there were signs of erosion events within the lower ice complex, as indicated by pockets of wooden remains (Jongejans et al., 2021b). In any case, the complicated permafrost formation and degradation history might also explain the mixed signal in the OM: the C/N ratio and HPFA index show opposite results for the lower ice complex. The high HPFA index might be influenced by the high long-n-FA concentration. The low C/N could point to the deposition of older transported OM. The CPI was strongly correlated with the ACL (r = 0.74, p < 0.01) and P aq (r = −0.70, p < 0.01) across all units (Table S2 in the Supplement). This suggests that the CPI is highly influenced by the OM source, and therefore its use as an OM quality indicator might be restricted. However, general CPI values above 5 might indicate that the OM is still of relatively good quality. A deeper insight into the quality might be provided by the FA concentrations as they are indicators for more labile biomolecules. The FA data show quite variable values within the individual glacial periods (Fig. 3). In addition to a mixed OM source, this might also indicate a heterogeneous level of OM decomposition, which is also supported by variable HPFA values.
Thus, the data point to an overall variable OM quality in the glacial deposits.
The occurrence of narrow composite sand-ice wedges in the lower sand unit compared to the large ice wedges in both ice complex units suggests very high accumulation rates in the lower sand unit. Furthermore, there was likely more snowmelt water available during the ice complex formation that allowed the formation of huge ice wedges as present in the lower and the upper ice complex units. Nevertheless, these changes in available winter moisture are not reflected in the biomarker record of, for example, the ACL and P aq values.
Biogeochemical legacy of interglacial periods
In contrast, the woody layer and the Holocene cover differ in their biogeochemical and biomarker parameters from the other stratigraphic units. Compared to the glacial units, we found not only distinct differences in the n-alkane and FA distribution for the Holocene cover and the woody layer but also some specific biomarkers in these sediments such as branched and cyclic alkanes, stenols, stanols, and pentacyclic triterpenoids. We discuss the characteristics of the OM in these sediments and the sources and implications of these compounds in the woody layer and the Holocene cover below.
The woody layer samples show wide variability among all determined biogeochemical and biomolecular parameters, indicating a layer of high inhomogeneity. Ashastina et al. (2017) found high TOC and C/N values, as well as low δ13C values, for the woody layer. Similarly, we found variable but overall higher TOC contents in these sediments, pointing to high OM accumulation in this layer, and, compared to the other units, a higher C/N ratio and ACL 23-33, suggesting a strong higher-plant contribution to the deposited OM. However, a variable input of aquatic or mossy organic biomass is indicated by the P aq index. The higher OM accumulation could result from higher productivity, as is typical of warmer conditions during interglacial periods. However, the fact that the woody layer marks a disconformity related to massive permafrost degradation and erosion suggests that the OM can also stem from remobilisation of older material, redistribution and accumulation in erosional forms.
The sediments of the woody layer had a distinctly different n-alkane and FA distribution compared to the other studied sediment units. The woody layer almost completely lacked the short n-alkanes and branched and cyclic alkanes, and the high ACL and low P aq suggest drier conditions (Ficken et al., 1998, 2000). Apart from the distinct n-alkane and FA distribution, the sediments from the woody layer (sample at 31.7 m in profile 1) also contained specific stenols, stanols and pentacyclic triterpenoids (Table 2). While it is thought that C 27 and C 28 sterols dominate in algae and zooplankton, C 29 sterols are generally abundant in vascular plants (Volkman, 1986). Furthermore, many of the compounds identified in the Batagay sediments were found to be typical of higher land plants: campesterol, stigmasterol, β-sitosterol, stigmastanol, β-amyrin, α-amyrin, oleanenone and lupeol (Brassell et al., 1983; Peters et al., 2005; Killops and Killops, 2013). The presence of these markers points to a strong terrestrial signal of OM, which is partly corroborated by the high ACL and lower P aq values in the woody layer sediments. These findings match those of Ashastina et al. (2017), who found no aquatic or wetland plants for this unit but only terrestrial plant remains.
The woody layer accumulated in an erosional gully, which is indicated by the presence of organic-rich lenses and abundant trash wood in the headwall. Similar "forest beds" associated with the Last Interglacial were found in non-glaciated Yukon and Alaska (Hamilton and Brigham-Grette, 1991; Reyes et al., 2010). In the woody layer, a mixture of different autochthonous and allochthonous organic biomass was transported and accumulated. Thermo-erosional processes such as the formation of gullies (the combined mechanical and thermal action of moving water) (van Everdingen, 2005) are associated with running or standing water that can transport sediments and organic remains. However, aquatic markers are present only in minor abundance but might be represented by short-chain FAs and sterols such as brassicasterol (Killops and Killops, 2013). In addition, Ashastina et al. (2018) reconstructed dry conditions during the Last Interglacial with a herb-rich light coniferous taiga and a pronounced plant litter cover. They argued that this could be related to the low ice content of the underlying lower sand unit, providing little meltwater from thawing permafrost. Furthermore, they found that plant and insect species composition pointed to frequent fire disturbances in the Last Interglacial. The high abundance of iso- and anteiso-FAs (IA index) as well as high quantities of branched and unsaturated short-chain FAs (Fig. S3) suggests increased microbial activity for this interval (Stapel et al., 2016). Together with the very low HPFA index, this indicates an increased level of microbial transformation of the OM and thus a lower quality of the OM in the woody layer.
In the Holocene cover sediments, the relatively low ACL and high P aq values suggest an increasing number of aquatic plants formed under wet conditions or mosses (Ficken et al., 1998, 2000). In the sediments of the Holocene cover and some samples from the upper ice complex, the short n-alkanes were abundant. Especially in these sediments, we found branched and cyclic alkanes. The branched alkanes, among which are the diethylalkanes and the ethyl-methylalkanes, have one or two quaternary carbon atoms (branched aliphatic alkanes with a quaternary substituted carbon atom, BAQCs). Kenig et al. (2005) argued that the BAQCs are widespread in sediments and sedimentary rocks due to their low biodegradability but have not been identified often or have been misidentified before. The source of these, as well as of the cyclic alkanes (alkylcyclohexanes and alkylcyclopentanes) and methylalkanes, has been a topic of debate (e.g. Shiea et al., 1990; Greenwood et al., 2004; Kenig et al., 2005). The strong positive correlation (r > 0.97) between the concentrations of the BAQCs and cyclic alkanes suggests similar sources for these compounds. Previous studies have also found the co-occurrence of these compounds (e.g. Ogihara and Ishiwatari, 1998; Kenig et al., 2005). Several studies have proposed a microbial origin, such as cyanobacteria (Shiea et al., 1990), non-photosynthetic sulfide-oxidising bacteria (Kenig et al., 2003), thermophilic acidophilic bacteria (Ogihara and Ishiwatari, 1998), or microbes exploiting redox gradients or involved in either the sulfur or the nitrogen cycle (Greenwood et al., 2004). Zhang et al. (2014) suggested that the long-chain cyclic alkanes could be produced by the reduction of FAs. Cyanobacteria could have been present in polygonal ponds, running water or even liquid pore water. However, we did not find a correlation with concentrations of certain FAs that are major components produced by cyanobacteria, such as 16:0, 16:1ω7 and 18:1ω9 (Piorreck et al., 1984). Nevertheless, these FAs are not very specific and thus can be a signal of different sources, preventing a direct correlation to the alkylated and cyclic alkanes. Plastic contamination was also proposed as the source of BAQCs by Brocks et al. (2008), but we would expect that previous studies in which sediment samples were prepared in a similar way would have found these compounds as well (e.g. Strauss et al., 2015; Jongejans et al., 2018, 2020, 2021a). Further, petroleum contamination can be ruled out as we did not find corresponding oil-related geothermally transformed compounds such as hopanes and steranes. Further research is needed to narrow down the possible sources. Generally, we assume a microbial origin for the branched and cyclic alkanes. This is corroborated by the strong positive correlation between the branched and cyclic alkanes and the short n-alkanes (r = 0.90, p < 0.01). Also, even though the correlation was not significant when looking at the complete sample set, higher concentrations of branched and cyclic alkanes did match lower ACL and higher P aq values. These findings suggest that these alkanes are also produced under relatively warm and wet conditions, which fits the Holocene origin of these samples very well. The low TOC contents and lowest CPI values suggest a higher degradation level and thus lower quality for the Holocene OM.
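The source attribution above leans on Pearson correlations between compound-class concentrations across samples. A minimal sketch of that calculation is given below; the concentration vectors are hypothetical placeholders with one value per sample, not the measured Batagay data.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-sample concentrations (ug g-1 TOC), for illustration only
baqcs      = np.array([46.0, 161.0, 553.0, 1302.0, 790.0, 1422.0])
cyclic     = np.array([40.0, 150.0, 500.0, 1250.0, 760.0, 1380.0])
short_nalk = np.array([47.0, 75.0, 145.0, 497.0, 211.0, 295.0])

for name, y in [("cyclic alkanes", cyclic), ("short n-alkanes", short_nalk)]:
    r, p = pearsonr(baqcs, y)
    print(f"BAQCs vs {name}: r = {r:.2f}, p = {p:.3f}")
```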
Our findings point to drier conditions during the Last Interglacial compared to the Holocene, as well as more bioproductivity and microbial degradation, indicating higher temperatures. This fits nicely with the findings of Kienast et al. (2008).
Altogether, it would be expected that there is a distinct difference between the upper ice complex and the Holocene cover. Still, it is likely that the uppermost part of the upper ice complex was degraded during the Holocene. This might have led to a rather gradual transition of the biogeochemical and biomarker parameters within the Holocene cover sediments and into the upper ice complex.
Modern organic matter mobilisation in the Batagay megaslump
Using satellite imagery, Vadakkedath et al. (2020) analysed the expansion of the thaw slump for the past 3 decades and found increasing expansion rates over time with a mean of 2.6 ha yr −1. This means that an enormous quantity of sediments and OM is mobilised every year. Following the thaw of the ice-rich sediments (especially of the lower and upper ice complex units), the mobilised material can be transported by the meltwater rapidly downslope through a gully network into the Batagay River and further into the Yana River. The OM in these sediments can be decomposed by microbes upon thaw, leading to greenhouse gas emission from the sediments directly (Vonk et al., 2013) or from rivers. Intense permafrost thaw occurred during interglacials, and we found stratigraphic discordances above the lower ice complex, the lower sand unit and the upper ice complex. Nevertheless, the presence of large ice wedges in the lower and the upper ice complex and composite wedges in the lower sand unit shows that the sediments that are still exposed in the Batagay megaslump were affected only in their upper parts and remained largely undisturbed. Hence, OM decomposition was presumably limited. Previous studies have shown the high lability of OM in permafrost and especially in the MIS 4-2 Yedoma Ice Complex sediments (Vonk et al., 2013; Jongejans et al., 2021a). Although the biomarkers indicate variable OM quality for the different sedimentary intervals, we expect that a large amount of biodegradable OM is still mobilised from the Batagay thaw slump every thawing season. From the glacial and Holocene deposits, mostly mineral OM is mobilised, whereas from the woody layer, well-preserved OM including wooden remains and detritus is mobilised, which can be readily decomposed upon thaw. The increased formation of retrogressive thaw slumps that has been observed over the past decades in many Arctic regions (e.g. Lacelle et al., 2010; Lewkowicz and Way, 2019) is likely to continue with ongoing climate warming, and the mobilisation of large quantities of previously frozen sediments and OM will likely lead to higher greenhouse gas release from thawing permafrost (Bröder et al., 2021; Mann et al., 2022; Yao et al., 2021).
Multiple studies have pointed to accelerating rapid degradation of ice-rich permafrost landscapes by thaw slumping, including not only regions with buried glacial ice but also regions with large syngenetic Yedoma ice wedges (Lantz and Kokelj, 2008; Lacelle et al., 2010; Kokelj et al., 2017; Lewkowicz and Way, 2019; Runge et al., 2022). In their study of thaw slumps in north-western Canada, Lacelle et al. (2015) found 189 active slumps, of which 10 exceeded 20 ha. However, recent remote sensing work on thaw slumps (e.g. Kokelj et al., 2015; Runge et al., 2022) has suggested that megaslumps (up to 52 ha or larger) have so far been rather rare. Therefore, at this point the Batagay thaw slump is unique in its size and is the largest such feature as far as we know. As the initial disturbance of the Batagay megaslump is possibly anthropogenic, it represents an outstanding example of rapid permafrost thaw that is promoted but was not originally caused by Arctic warming.
Conclusions
Biogeochemical analyses provide valuable information on palaeo-environments. Here, for the first time, ancient permafrost that formed about 650 kyr ago in NE Siberia was studied for carbon and nitrogen contents and lipid biomarker characteristics. Our findings show that there was no substantial vegetation change of the prevailing meadow steppe over large glacial periods during MIS 16, sometime between MIS 16 and MIS 6, and MIS 4-2, which are represented in the exposed strata of the Batagay megaslump by the lower ice complex, lower sand unit and the upper ice complex, respectively. The interglacial woody layer (MIS 5), a layer of eroded and accumulated material, showed a high content of higher-plant OM and strong microbial decomposition. In the Holocene cover, we found relatively wet depositional conditions. For the interglacial periods, the biomolecule inventory indicates a higher microbial OM transformation and thus a decreased OM quality. In contrast, in the glacial periods a variable but overall higher OM quality is suggested by the biomolecules compared to the interglacial periods. Thus, microbial decomposition was likely limited during the glacial periods. Therefore, a substantial amount of less decomposed OM is mobilised in the Batagay thaw slump every year, in particular since the thaw slump process allows access to deeply buried OM. Our biomarker analyses of ancient permafrost sediments contribute to a better understanding of how OM is incorporated and preserved in permafrost deposits during glacial and interglacial periods. Furthermore, they help to improve our comprehension of possible consequences resulting from future permafrost thaw and OM mobilisation.

Data availability. The alkane and fatty acid data as well as the biogeochemical data (TC, TOC, TN) are freely accessible in the PANGAEA (https://www.pangaea.de/, last access: 31 August 2022) data repository (Jongejans et al., 2022a, b, c).
Financial support. This research has been supported by the Deutsche Bundesstiftung Umwelt (PhD Scholarship); the Leverhulme Trust (grant no. RPG-2020-334); the Lomonosov Moscow State University (grant no. 121051100164-0); and the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (baseline funding).
The article processing charges for this open-access publication were covered by the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI).
Review statement. This paper was edited by Florent Dominé and reviewed by Jack Hutchings and two anonymous referees.
Numerical solution of systems of differential equations using operational matrix method with Chebyshev polynomials
ABSTRACT In this study, we introduce an effective and successful numerical algorithm for obtaining numerical solutions of systems of differential equations. The method combines the operational matrix method with a truncated Chebyshev series that represents the exact solution. The method reduces the given problem to a set of algebraic equations in the Chebyshev coefficients. Some numerical examples are given to demonstrate the validity and applicability of the method. In the examples, we give some comparisons between the present method and other numerical methods. The obtained numerical results reveal that the given method provides a better approximation than the other methods. Moreover, a model of the spread of a non-fatal disease in a population is solved numerically. All examples were run in the mathematical programme Maple 13.
Introduction
Differential equations and systems of differential equations are very useful tools both for mathematical modelling and for deriving certain mathematical equations. For instance, some Fredholm and Volterra integral equations can be transformed into nonlinear differential equations with conditions. Recently, many problems in applied science have been modelled mathematically by systems of ordinary differential equations, for example, pollution modelling and its numerical solutions [1], kinetic modelling of lactic acid [2], the prey and predator problem [3,4], modelling of the epidemiological model for computer viruses [5], modelling of mosquito dispersal [6], modelling a thermal explosion [7], dynamical models of happiness [8], stagnation point flow and Lorentz force [9], non-spherical particles sedimentation [10], boundary layer analysis of micropolar dusty fluid with TiO2 nanoparticles in a porous medium [11], boundary layer flow of an Eyring-Powell non-Newtonian fluid over a linear stretching sheet [12], heat transfer [13], and two-phase Couette flow analysis [14].
For these reasons, the solutions of these equations are of great importance to scientists. Obtaining exact solutions of systems of differential equations by classical methods is very difficult or impossible, especially for nonlinear forms. Therefore, many scientists need reliable, quick and easy numerical methods. Motivated by this, many numerical methods have been studied for solving systems of differential equations, such as the Runge-Kutta method [11], DTM and DQM [14], collocation methods [12,13,15,16], the homotopy perturbation method [17], the Adomian decomposition method [18], the pseudospectral method [19] and others [20-22].
In this study, a numerical algorithm is presented to obtain numerical results for the following systems of differential equations, for i = 1, 2, ..., m,

$$\sum_{j=1}^{m} P_{ij}\left(x,\, y_j,\, y_j^{(1)},\, y_j^{(2)},\, \ldots,\, y_j^{(m_{ij})}\right) = f_i(x),$$

using the operational matrix method together with the truncated first-kind shifted Chebyshev polynomials with (N + 1) terms,

$$y_{jN}(x) = \sum_{r=0}^{N}{}' a_r^{j}\, T_r^{*}(x),$$

where T_r^*(x) denotes the shifted Chebyshev polynomials of the first kind, a_r^j are the unknown Chebyshev coefficients, N is any chosen positive integer, and f_i(x) and P_ij are analytic functions.
Nomenclature
T_n(x): the first kind Chebyshev polynomials
T*_n(x): the first kind shifted Chebyshev polynomials
a_n^j: the unknown coefficients
y_{jN}(x): the approximate solutions
N_1: the absolute error
These polynomials have the following properties [26-28]:

(i) [23] T*_{n+1}(x) has exactly n + 1 real zeros on the interval [0, 1]. The i-th zero x_i is

$$x_i = \frac{1}{2}\left(1 + \cos\frac{(2i-1)\pi}{2(n+1)}\right), \qquad i = 1, 2, \ldots, n+1.$$

(ii) [23] It is known that the relation between the powers x^n and the shifted Chebyshev polynomials T*_n(x) is

$$x^{n} = 2^{1-2n} \sum_{j=0}^{n}{}' \binom{2n}{n-j}\, T_j^{*}(x),$$

where ′ denotes a sum whose first term is halved.
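The following short snippet is a hedged numerical check, under the reconstruction written above, of the power-to-shifted-Chebyshev relation for n = 3; it is illustrative only and uses numpy's Chebyshev utilities rather than anything from the paper.

```python
# Quick numerical check of property (ii) for n = 3 on the interval [0, 1].
import numpy as np
from math import comb
from numpy.polynomial import chebyshev as C

n = 3
x = np.linspace(0.0, 1.0, 5)
lhs = x ** n
terms = []
for j in range(n + 1):
    c = np.zeros(j + 1); c[j] = 1.0
    Tj = C.chebval(2 * x - 1, c)              # shifted T*_j(x) = T_j(2x - 1)
    w = 0.5 if j == 0 else 1.0                # primed sum: first term halved
    terms.append(w * comb(2 * n, n - j) * Tj)
rhs = 2.0 ** (1 - 2 * n) * sum(terms)
print(np.allclose(lhs, rhs))                  # expected: True
```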
A given function y(x) ∈ L^2[0, 1] can be approximated as a sum of shifted Chebyshev polynomials. Now, we consider the truncated shifted Chebyshev polynomials with the first (N + 1) terms as in Equation (3). The matrix representation of the approximate solution, Equation (3), and of its k-th derivatives are given accordingly. From expression (5) and for n = 0, 1, ..., N, we find the corresponding matrix relation. Then, with the aid of (8), we get the related relations. To obtain the matrix X^(k)(x) in terms of the matrix X(x), a further relation can be used. Consequently, substituting the matrix forms (8) and (12) into (7), we have the matrix relation of the approximate solution.
Method of solution
In this section, we introduce the numerical solution method for Equation (1) with the initial conditions, Equation (2). We suppose that f_i(x) can be expanded in shifted Chebyshev polynomials. Using the matrix representation of the approximate solution and its derivatives, Equation (1) can be written in matrix form, as in Equation (17). The residual R_i(x) for Equation (17) can then be written down. Using the Tau method [17, 29-33], Equation (15) can be converted into m(N − (m_ij − 1)) linear or nonlinear equations, and the initial conditions are written in matrix form as well. Hence, by Equations (17) and (18), we have m(N + 1) equations in m(N + 1) unknowns. We implemented the algorithm in Maple 13 and solved the m(N + 1) equations in the m(N + 1) unknowns, so that the approximate solutions y_{jN}(x) can be calculated.
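As a hedged illustration of the overall idea (truncate the solution in shifted Chebyshev polynomials and turn the differential system into algebraic equations for the coefficients), the sketch below solves a simple linear test system by collocation rather than by the Tau procedure used in the paper, and in Python rather than Maple 13. The test problem, truncation degree and collocation points are choices made for illustration only.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 10  # truncation degree of the shifted Chebyshev series

def basis(xs, N):
    """Return T*_r(x) and d/dx T*_r(x) (r = 0..N) evaluated at the points xs."""
    xs = np.atleast_1d(xs)
    T = np.zeros((xs.size, N + 1))
    dT = np.zeros((xs.size, N + 1))
    for r in range(N + 1):
        c = np.zeros(N + 1)
        c[r] = 1.0
        T[:, r] = C.chebval(2 * xs - 1, c)                  # T*_r(x) = T_r(2x - 1)
        dT[:, r] = 2 * C.chebval(2 * xs - 1, C.chebder(c))  # chain rule: factor 2
    return T, dT

# Collocation points: Chebyshev-Gauss nodes mapped to [0, 1]
x = 0.5 * (np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * (N + 1))) + 1)
T, dT = basis(x, N)
T0, _ = basis(0.0, N)

# Illustrative test system: y1' = y2, y2' = -y1, y1(0) = 1, y2(0) = 0
A = np.block([[dT, -T],
              [T, dT]])
b = np.zeros(2 * (N + 1))
# Replace one collocation row per unknown function with its initial condition
A[N] = np.concatenate([T0[0], np.zeros(N + 1)]);         b[N] = 1.0
A[2 * N + 1] = np.concatenate([np.zeros(N + 1), T0[0]]); b[2 * N + 1] = 0.0
a = np.linalg.solve(A, b)                                 # Chebyshev coefficients

y1_at_1 = basis(1.0, N)[0] @ a[:N + 1]
print(float(y1_at_1[0]), np.cos(1.0))  # approximate vs exact y1(1) = cos(1)
```

For this test system the exact solution is y1(x) = cos(x), so the two printed numbers should agree to many digits; the same coefficient-vector structure carries over when the algebraic system is built by the Tau conditions instead of collocation rows.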
Error estimation
We assume that y(x) is a sufficiently smooth function on [0, 1] and that I_N(x) is the polynomial interpolating y at x_i, where x_i, i = 0, 1, ..., n, are the Chebyshev-Gauss grid points; the standard interpolation error bound then follows [17, 29, 33]. Theorem: Suppose that the known functions in Equation (1) are real, (N + 1)-times continuously differentiable functions on [0, 1] and that the exact solution admits a shifted Chebyshev polynomial expansion. Let y_N(x) be the approximate solution obtained by the proposed method; then there exists a real number a such that the error is bounded in terms of the coefficient vectors A = (a_0, a_1, ..., a_N) of the approximate solution and of the exact expansion. Proof: Let ȳ_N(x) be the real-valued polynomial of degree ≤ N that is the best approximation of y(x). Using (9), and then combining (19), (20) and (21), the stated error bound is found.
Example 4.2: Secondly, we take the following nonlinear stiff problem [21,22] with initial conditions y_1(0) = y_2(0) = 1. The exact solutions of this problem are known. Solving this problem with the present method for N = 6, 7, 8, we give the absolute errors in Table 1. In Table 2, we give a comparison between some numerical methods and the present method. We plot these numerical results in Figure 1 for y_1(x) and Figure 2 for y_2(x). Table 3 displays the maximum norm errors ||y_{1N} − y_1|| and ||y_{2N} − y_2||.
Example 4.3:
We consider the following linear differential equation system with initial conditions y_1(0) = 1 and y_2(0) = 0. The exact solutions of this problem are known. The numerical results are given in Table 4 for y_1(x) and y_2(x). Moreover, the obtained numerical results are displayed in Figures 3 and 4.
The obtained results are displayed in Figures 5-7. The comparison of the numerical results for the susceptible, infective and recovered populations is displayed in Figures 5-7 for N = 6, 8, 10, respectively. These results are consistent with those in [38].
Conclusion
In this article, the Chebyshev operational matrix method has been applied to numerically solve systems of differential equations. The given problem has been transformed into a system of algebraic equations in the unknown coefficients of the Chebyshev series. The given algorithm has been written in Maple 13 in order to simplify solving the given examples. Several examples are given to demonstrate the effectiveness and accuracy of the numerical method. The obtained results are compared with exact solutions and also with the solutions obtained by some other numerical schemes in the literature. The results show that the present method gives acceptably accurate results for the tested problems. In Example 4.5, we numerically solve the model of the spread of a non-fatal disease in a population. From Example 4.5, each figure shows that the susceptible population decreases while the infective and recovered populations increase over time t.
Multiple Regulatory Roles of the Mouse Transmembrane Adaptor Protein NTAL in Gene Transcription and Mast Cell Physiology
Non-T cell activation linker (NTAL; also called LAB or LAT2) is a transmembrane adaptor protein that is expressed in a subset of hematopoietic cells, including mast cells. There are conflicting reports on the role of NTAL in the high affinity immunoglobulin E receptor (FcεRI) signaling. Studies carried out on mast cells derived from mice with NTAL knock out (KO) and wild type mice suggested that NTAL is a negative regulator of FcεRI signaling, while experiments with RNAi-mediated NTAL knockdown (KD) in human mast cells and rat basophilic leukemia cells suggested its positive regulatory role. To determine whether different methodologies of NTAL ablation (KO vs KD) have different physiological consequences, we compared under well defined conditions FcεRI-mediated signaling events in mouse bone marrow-derived mast cells (BMMCs) with NTAL KO or KD. BMMCs with both NTAL KO and KD exhibited enhanced degranulation, calcium mobilization, chemotaxis, tyrosine phosphorylation of LAT and ERK, and depolymerization of filamentous actin. These data provide clear evidence that NTAL is a negative regulator of FcεRI activation events in murine BMMCs, independently of possible compensatory developmental alterations. To gain further insight into the role of NTAL in mast cells, we examined the transcriptome profiles of resting and antigen-activated NTAL KO, NTAL KD, and corresponding control BMMCs. Through this analysis we identified several genes that were differentially regulated in nonactivated and antigen-activated NTAL-deficient cells, when compared to the corresponding control cells. Some of the genes seem to be involved in regulation of cholesterol-dependent events in antigen-mediated chemotaxis. The combined data indicate multiple regulatory roles of NTAL in gene expression and mast cell physiology.
Introduction
Activation of mast cells upon exposure to antigen (Ag) is one of the major events in the allergic reaction. It is initiated by Ag-mediated aggregation of the high-affinity immunoglobulin (Ig) E receptor (FcεRI) armed with Ag-specific IgE, and results in degranulation leading to the release of a number of preformed allergy mediators such as histamine, serotonin, proteases, preformed cytokines, and proteoglycans. Mast cell activation also leads to the synthesis and release of numerous compounds like cytokines and those formed by arachidonic acid metabolism [1]. The first biochemically well-defined step in FcεRI signaling is tyrosine phosphorylation of the immunoreceptor tyrosine-based activation motifs (ITAMs) in the FcεRI β and γ subunits by the Src family kinase LYN [2,3]. Phosphorylation of the ITAMs leads to the recruitment and activation of SYK kinase, which phosphorylates tyrosine residues of numerous proteins involved in the intracellular signaling pathways, including two transmembrane adaptor proteins (TRAPs), linker for activation of T cells (LAT) and non-T cell activation linker (NTAL; also called linker for activation of B cells or LAT2). Both these TRAPs possess multiple sites of tyrosine phosphorylation and act as scaffolds for recruitment of various cytosolic adaptors and effector proteins [4-6].
NTAL is expressed in hematopoietic cells such as B cells, natural killer cells, dendritic cells, monocytes, and mast cells, but not in resting T cells. NTAL is the product of the human WBSCR5 gene located on chromosome 7, encoding a 243-amino-acid protein. Its murine ortholog contains 203 amino acids, has a molecular weight of approximately 25 kDa, and is encoded by a gene located on chromosome 5 [7,8]. NTAL contains a short extracellular domain, a transmembrane domain and a cytosolic tail which possesses a CxxC motif responsible for palmitoylation of the protein and its targeting to detergent-resistant plasma membrane microdomains. The cytoplasmic domain contains 10 tyrosines which are potential targets for tyrosine kinases. NTAL is structurally similar to another TRAP, LAT; after phosphorylation both molecules are capable of binding a number of cytoplasmic signaling molecules including GRB2, SOS1, GAB1 and C-CBL. NTAL, unlike LAT, is however unable to directly bind phospholipase Cγ1 [7,8].
Previously we and others showed that bone marrow-derived mast cells (BMMCs) from Ntal−/− mice were hyper-responsive to FcεRI stimulation [9,10], whereas BMMCs from Lat−/− mice were hypo-responsive [11]. Interestingly, loss of both NTAL and LAT caused a stronger inhibitory effect on FcεRI-mediated degranulation than loss of LAT alone. This suggested that NTAL could also have a positive regulatory role in FcεRI signaling, manifested only in the absence of LAT [9,10]. In contrast to studies with cells from mice with NTAL knock out (KO), NTAL knockdown (KD) by RNAi in human mast cells [12] and also in rat basophilic leukemia cells [13] resulted in impaired degranulation; this implies that NTAL has positive regulatory roles in these cells even in the presence of LAT.
To rigorously examine the regulatory role(s) of NTAL in murine mast cell signaling and to test the contribution of compensatory developmental alterations in mast cells from NTAL KO mice, we prepared BMMCs with NTAL KO or KD and the corresponding controls and cultured them under comparable well-defined conditions. For functional comparison of mast cells with NTAL KO or KD we examined several parameters characteristic of FcεRI signaling, including degranulation, calcium mobilization, tyrosine phosphorylation of LAT and ERK, depolymerization of filamentous (F) actin, and chemotaxis. The results obtained with the NTAL KD BMMCs were very similar to those of NTAL KO cells and thus support the notion that in murine mast cells NTAL is predominantly a negative regulator of FcεRI signaling and that compensatory developmental alterations do not contribute to this phenotype.
To gain a better understanding of the genes that are regulated through NTAL-dependent pathways, we further examined the gene expression profiles of resting and Ag-activated BMMCs with NTAL KO or KD and the corresponding controls. Several genes were identified that differ by a factor of 1.8 or more in their expression in resting and FcεRI-activated NTAL-deficient cells when compared to wild-type (WT) cells. Through gene ontology analysis we identified a subset of NTAL-dependent genes related to metabolism and biosynthetic processes. Further analysis showed that some of these genes could be involved in the regulation of cholesterol-dependent events in chemotaxis towards antigen.
Cells and their activation
Bone marrow cells were isolated from femurs and tibias of 8-12-week-old WT or NTAL KO mice (males and females) of C57BL/6 background [9]. Mice were bred and maintained in the specific-pathogen-free facility of the Institute of Molecular Genetics and used in accordance with the Institute guidelines. The protocol, including killing mice by decapitation, was approved by the Institutional Animal Care and Use Committee (Permit number 12135/2010-17210). All efforts were made to minimize suffering. The cells were cultured for 6-8 weeks in mast cell medium [Iscove's modified Dulbecco's medium supplemented with 10% fetal calf serum (FCS), penicillin, streptomycin, 2-mercaptoethanol, recombinant interleukin (IL)-3 (20 ng/ml; Peprotech), and mouse stem cell factor (SCF; 40 ng/ml; Peprotech)]. In some experiments BMMCs were cultured for the indicated time intervals in mast cell medium supplemented with 10% cholesterol-depleted FCS (see below) instead of FCS. For activation, BMMCs (6 × 10^6/ml) were sensitized in medium without SCF and IL-3, but supplemented with trinitrophenyl (TNP)-specific IgE (IGEL b4.1 monoclonal antibody; 1 μg/ml). After 4 hours the cells were washed in buffered salt solution (BSS; 20 mM HEPES, pH 7.4, 135 mM NaCl, 5 mM KCl, 1.8 mM CaCl2, 5.6 mM glucose, and 1 mM MgCl2) supplemented with 0.1% bovine serum albumin (BSA) and stimulated with various concentrations of Ag (TNP-BSA conjugate) and/or SCF. The degree of degranulation was determined by measuring the release of β-glucuronidase from the activated cells as previously described [14].
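The degranulation readout described above is usually expressed as the percentage of total β-glucuronidase released into the supernatant. The exact formula and background handling used in the study are not stated, so the short Python sketch below is only an illustrative assumption in which the total enzyme content is taken as supernatant plus cell-lysate activity:

    def percent_degranulation(supernatant_activity, lysate_activity):
        """Percent of total beta-glucuronidase released by activated cells.

        Assumes total cellular content = supernatant + lysate activity;
        background subtraction from unstimulated cells is omitted here.
        """
        total = supernatant_activity + lysate_activity
        return 100.0 * supernatant_activity / total

    # Example: 120 arbitrary units released, 280 units remaining in the cells
    print(percent_degranulation(120, 280))  # -> 30.0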
Cholesterol-depleted FCS and cholesterol determination
FCS was cleared of cholesterol and other lipids by organic extraction as described [15]. Briefly, 100 ml of FCS was mixed with 200 ml of a mixture of n-butanol and diisopropylether at a 40:60 (v/v) ratio. After incubation at room temperature (22°C) for 1 hour in the dark, the mixture was centrifuged at 22°C for 15 minutes at 6,000 rpm in a JA-10 rotor (Beckman Coulter). The bottom phase containing the delipidated serum was recovered and lyophilized. The resulting dry pellet was resuspended in 100 ml deionized H2O and filter-sterilized through a 0.22-μm filter. The concentration of cholesterol in serum and cell samples was determined with the Amplex Red Cholesterol Assay kit (Life Technologies) according to the manufacturer's instructions. Using this kit, no remaining cholesterol was detectable in the delipidated serum, indicating that the cholesterol concentration was reduced from approximately 80 μg/ml to approximately 15 ng/ml. For determination of cellular cholesterol, a frozen cell pellet containing 0.35 × 10^6 cells was lysed in 30 μl of ice-cold lysis buffer (10 mM EDTA, 100 mM NaCl, 10 mM Tris-HCl, pH 7.5, 0.2% SDS, 0.5% Nonidet P-40, 0.5% sodium deoxycholate) and 2-μl aliquots were analyzed for cholesterol content using the same kit as above. Protein content in the lysates was determined with the BCA protein assay kit (Pierce Chemical Co.) and the amounts of cholesterol were normalized to protein content.
Measurement of free cytoplasmic Ca2+
The concentration of free cytoplasmic Ca2+ ([Ca2+]i) was determined using cells labeled with Fura-2-AM (Molecular Probes) as described [9]. Ca2+ levels were monitored with an Infinite M200 fluorescence reader (Tecan) at excitation wavelengths of 340 and 380 nm and an emission wavelength of 510 nm.
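Fura-2 is a ratiometric probe, so changes in [Ca2+]i are commonly followed as the background-corrected ratio of emissions obtained with 340-nm and 380-nm excitation. Whether the study reports raw ratios or calibrated concentrations is not stated; the Python sketch below, with hypothetical background values, only illustrates the ratio calculation:

    import numpy as np

    def fura2_ratio(f340, f380, bg340=0.0, bg380=0.0):
        """Background-corrected 340/380 excitation ratio, a proxy for [Ca2+]i."""
        f340 = np.asarray(f340, dtype=float) - bg340
        f380 = np.asarray(f380, dtype=float) - bg380
        return f340 / f380

    # Example time course (arbitrary fluorescence units)
    ratio = fura2_ratio([1200, 1500, 1900], [900, 850, 800])
    print(ratio)  # a rising ratio indicates increasing cytoplasmic Ca2+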
Lentiviral vectors and gene transduction
A set of 5 shRNA constructs designed to target murine NTAL (GenBank accession number NM_020044) and cloned into the pLKO.1 vector was purchased from Open Biosystems to prepare BMMCs with NTAL KD. Of these five shRNA constructs (TRCN0000127239, NTAL KD 1; TRCN0000127240, NTAL KD 2; TRCN0000127241, NTAL KD 3; TRCN0000127242, NTAL KD 4; TRCN0000127243, NTAL KD 5), NTAL KD 3 and NTAL KD 5 reproducibly showed the highest reduction of NTAL protein expression in target cells and were used in most of the experiments. In all experiments in this study we obtained similar data with these two constructs; the data were therefore pooled and are referred to as NTAL KD. For microarray gene expression analysis and the related qPCR validation, cells with the NTAL KD 5 construct were used.
Lentiviral transduction was performed as described previously [17]. Briefly, 21 μl ViraPower packaging mix (Invitrogen Life Technologies) and 14 μg NTAL shRNA or pLKO.1 empty vector as a negative control [in 1.4 ml Opti-MEM medium (Invitrogen)] were co-transfected into 293T/17 packaging cells in the presence of 84 μl Lipofectamine 2000 (Invitrogen) or 105 μl polyethylenimine (25 kD, linear form, 1 mg/ml; Polysciences). After 2-3 days, the culture supernatants were centrifuged to pellet the viruses, which were then used to infect NTAL WT BMMCs. Stable transfectants were selected in puromycin (5 μg/ml; InvivoGen). After one week of selection, cells were analyzed for NTAL expression by immunoblotting. Before the tests, cells were transferred for 2-3 days into fresh media without puromycin.
F-actin assay
The total amount of F-actin in nonactivated and activated cells was determined by flow cytometry. Cells in 96-well plates (50,000 cells per well) were exposed to various stimuli at 37°C, fixed with 3% paraformaldehyde in phosphate-buffered saline and then permeabilized and stained in a single step with a mixture of lysophosphatidylcholine (200 μg/ml) and Alexa Fluor 488-phalloidin diluted 1:1000 (Molecular Probes) in phosphate-buffered saline. Fluorescence intensity was measured with an LSR II flow cytometer (Becton Dickinson). Acquired data were analyzed using FlowJo software (Tree Star, Inc.).
RNA preparation
Total RNA was isolated from 3 × 10^6 resting or Ag-activated (100 ng/ml TNP-BSA in BSS-0.1% BSA, 37°C, 2 hours) BMMCs using the RNeasy mini kit (Qiagen) according to the manufacturer's protocol. Three biological replicates were carried out with each cell type: NTAL KO, WT, NTAL KD, and mock (empty pLKO.1 vector) infected WT cells (referred to as WT pLKO). Cells in each group were cultured in parallel for 24 hours in complete media deprived of SCF and then sensitized with TNP-specific IgE in IL-3- and SCF-deprived medium for 4 hours. After removal of unbound IgE by washing, cell suspensions were divided into 2 aliquots; one was activated for 2 hours with Ag (100 ng/ml) and the other was incubated without Ag (nonactivated control cells; 0 hours). RNA was isolated from all 24 samples and processed under identical conditions. For RNA quantification, the absorbance at 260 nm was measured using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies).
Microarray gene-expression profiling and data analysis
Preparation of cRNA, hybridization and gene expression profiling were done by an Affymetrix authorized service provider (AROS Applied Biotechnology A/S) using the Affymetrix GeneTitan HT MG-430 PM 24-array plate with the 3' IVT Express labeling kit according to the manufacturer's protocol. Briefly, following fragmentation, 6.5-μg aliquots of cRNA were hybridized for 16 hours at 45°C on the Affymetrix array plate using the Affymetrix GeneTitan system. The array plate was washed, stained and scanned using the Affymetrix GeneTitan system with GCOS 1.4 software. One of the 24 samples analyzed, activated NTAL KO replicate 2, failed during the hybridization, wash and scan step and was removed. Data analysis was carried out by importing raw data CEL files into Partek Genomics Suite 6.4 software (version 6.09.0602), where Robust Multichip Analysis was used for background correction. Using the same software, principal component analysis (PCA) of the normalized microarray expression values was performed as a visualization technique to determine the similarity in the data. Lists of significantly upregulated or downregulated gene transcripts were created based on a change greater than 1.8-fold and a false discovery rate (FDR) <0.1, with one exception in which activated cells were compared with nonactivated cells (fold change >4; FDR <0.05). Only well-annotated probe sets (Affymetrix annotation version from July 2011) are listed in the tables. The microarray study was performed according to the standards of the Microarray Gene Expression Society. Data complying with the Minimum Information About Microarray Experiments (MIAME; [19]) were uploaded to the NCBI Gene Expression Omnibus (GEO) database and are available under the accession number GSE40731.
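The transcript selection described above (fold change greater than 1.8 and FDR < 0.1) can be illustrated with a small Python/pandas sketch. This is not the Partek workflow used in the study; the column names and the example table of per-probe-set signed fold changes and FDR-adjusted P values are assumptions made for illustration:

    import pandas as pd

    def select_degs(df, fc_col="fold_change", fdr_col="fdr",
                    fc_cutoff=1.8, fdr_cutoff=0.1):
        """Return probe sets passing the |fold change| and FDR cut-offs.

        fold_change is assumed to be signed (negative = down-regulated).
        """
        mask = (df[fc_col].abs() > fc_cutoff) & (df[fdr_col] < fdr_cutoff)
        return df[mask].sort_values(fdr_col)

    # Hypothetical example table
    df = pd.DataFrame({
        "probe_set": ["A", "B", "C"],
        "fold_change": [2.1, -1.9, 1.2],
        "fdr": [0.01, 0.05, 0.30],
    })
    print(select_degs(df))  # keeps A (up-regulated) and B (down-regulated)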
Reverse transcription quantitative PCR (RT-qPCR)
cDNA was synthesized using Moloney murine leukemia virus reverse transcriptase (Invitrogen) according to the manufacturer's instructions. For reverse transcription, 0.3-μg aliquots of total RNA were used from the same samples that were used for microarray analysis. qPCRs were performed using a PCR master mix supplemented with 0.2 M trehalose, 1 M 1,2-propanediol and SYBR Green I as described [20]. Ten-μl reaction volumes in 384-well plates sealed with LightCycler 480 sealing foil (Roche Diagnostics) were processed in a LightCycler 480 (Roche Diagnostics) under the following cycling conditions: initial 3-minute denaturation at 95°C, followed by 50 cycles at 95°C for 10 s, 60°C for 20 s and 72°C for 20 s. Melting curve analysis was carried out from 72°C to 97°C with 0.2°C increments; Ct values for each sample were determined by automated threshold analysis. Primer pairs used for cDNA amplification are listed in Table 1. Data were normalized to GAPDH mRNA as a housekeeping control. qPCRs for each of the biological triplicates were performed in quadruplicate.
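Normalization of the qPCR data to GAPDH can be illustrated with a standard 2^-ΔΔCt calculation. The study does not state which quantification model was used, and 100% amplification efficiency is assumed here, so the following Python sketch is only a generic example:

    def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
        """2^-ddCt relative expression of a target gene normalized to GAPDH.

        *_ref values come from the reference (e.g. WT nonactivated) sample;
        100% amplification efficiency is assumed for both assays.
        """
        d_ct_sample = ct_target - ct_gapdh
        d_ct_ref = ct_target_ref - ct_gapdh_ref
        return 2.0 ** -(d_ct_sample - d_ct_ref)

    # Example: target Ct 24.0 vs GAPDH 18.0 in KO cells, 25.5 vs 18.2 in WT cells
    print(relative_expression(24.0, 18.0, 25.5, 18.2))  # ~2.5-fold higher in KO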
Chemotaxis
The migration of IgE-sensitized BMMCs towards Ag as a chemoattractant was determined in a 24-well Transwell system with inserts containing polycarbonate filters with 8-μm-diameter pores (Corning). TNP-specific IgE-sensitized BMMCs (0.3 × 10^6) in 120 μl of chemotaxis medium (RPMI-1640 supplemented with 20 mM HEPES, pH 7.4, and 1% BSA) were added into each Transwell insert, and Ag (250 ng/ml TNP-BSA) in 600 μl of chemotaxis medium was added into the lower wells of the Transwell system. Cells passing through the polycarbonate filter were counted 8 hours later in 50-μl aliquots with an Accuri C6 flow cytometer.
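Because only a 50-μl aliquot of the 600-μl lower-well content is counted, migrated-cell numbers have to be scaled to the full well volume before being related to the cell input. The scaling and normalization in the Python sketch below are assumptions for illustration, since the study does not spell out the exact calculation:

    def percent_migrated(cells_counted, aliquot_ul=50.0, well_ul=600.0,
                         cells_loaded=300_000):
        """Estimate the % of loaded cells that migrated into the lower well."""
        migrated_total = cells_counted * (well_ul / aliquot_ul)
        return 100.0 * migrated_total / cells_loaded

    # Example: 2,500 cells counted in the 50-ul aliquot
    print(percent_migrated(2500))  # -> 10.0 % of the 0.3 x 10^6 input cells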
Statistical analysis
Statistical significance of differences was evaluated by Student's t-test, except for microarray gene-expression profiling, in which intergroup differences were evaluated by ANOVA.
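For the pairwise comparisons evaluated by Student's t-test, an equivalent calculation in Python could look like the following sketch; the data values are hypothetical and the two-sample, equal-variance form of the test is assumed:

    from scipy import stats

    # Hypothetical degranulation values (% release) for two groups
    wt = [18.2, 20.1, 17.5, 19.0]
    ntal_kd = [27.4, 25.9, 28.8, 26.3]

    t_stat, p_value = stats.ttest_ind(wt, ntal_kd)  # Student's two-sample t-test
    print(t_stat, p_value)  # p < 0.05 would be called significant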
Efficient NTAL KD in BMMCs
In order to obtain mast cells with a stable reduction of NTAL expression by the KD approach, five different shRNAs were introduced into Ntal+/+ BMMCs by lentiviral-mediated infection, followed by selection in puromycin. To minimize the effect of variables other than the presence of NTAL, the target Ntal+/+ cells were the same cells that served as WT littermate controls for BMMCs isolated from Ntal-/- (KO) mice. Controls for NTAL KDs were the same Ntal+/+ BMMCs infected with empty pLKO.1 vector and selected in puromycin. Immunoblotting with an NTAL-specific monoclonal antibody confirmed that all five shRNAs inhibited NTAL expression to different degrees (Figure 1A, B). Two of them, namely NTAL KD 3 and NTAL KD 5, showed the highest (>90%), reproducible and highly significant inhibition of NTAL expression and were therefore selected for further experiments. No decrease in NTAL expression was observed in cells infected with empty pLKO.1 vector (WT pLKO; Figure 1A, B). Flow cytometry analysis showed that BMMCs with NTAL KD expressed FcεRI (>95% cells positive) and KIT (>95% positive) at levels comparable to those in WT cells and WT pLKO cells (not shown).
NTAL KD results in enhanced degranulation and Ca2+ response
As described in the Introduction, there are conflicting reports on the role of NTAL in mast cell signaling in mammalian systems. To address these discrepancies, we compared under well-defined conditions the effect of NTAL KO and KD on mast cell signaling. First we evaluated degranulation. The cells were sensitized with IgE and stimulated with various concentrations of Ag. Degranulation was then determined as the amount of β-glucuronidase released into the supernatant. Data presented in Figure 2A indicate that BMMCs with both NTAL KD and NTAL KO showed enhanced degranulation when compared to the corresponding controls (WT pLKO and WT). The difference between NTAL-deficient cells (KD or KO) and controls (WT pLKO and WT) was more pronounced at suboptimal concentrations of Ag (100 and 200 ng/ml) and was not significant at the supraoptimal concentration (1000 ng/ml). Our results show a similar trend for both NTAL KD and KO BMMCs. To check for possible off-target effects of lentiviral infection and puromycin selection, we also examined antigen-induced degranulation of NTAL KO cells transduced with NTAL shRNA vectors and found no significant difference between infected and noninfected cells (data not shown).
It is known that degranulation is enhanced in cells simultaneously triggered via FcεRI and KIT, a receptor for SCF [21]. We found degranulation not only after exposure of the cells to IgE-antigen complexes, but also after triggering with SCF alone. This is probably due to the fact that the mast cells were differentiated from their precursors in the presence of IL-3 and SCF [22]. NTAL KO has no effect on degranulation induced by SCF alone, and SCF enhances Ag-induced degranulation in both WT and NTAL KO cells [18]. We therefore investigated degranulation of cells with NTAL KD and the corresponding controls activated by Ag and/or SCF. Compared to separate activation by Ag (100 or 500 ng/ml) or SCF (40 ng/ml), simultaneous activation by Ag and SCF raised degranulation in Ntal+/+ controls (WT and WT pLKO; Figure 2B). In cells with NTAL KD, the enhanced degranulation induced by Ag was further increased if the cells were simultaneously activated with SCF (Ag + SCF), even though the difference between NTAL KD and control cells transduced with empty pLKO was not significant. Similar data were obtained in BMMCs from NTAL KO mice. Calcium mobilization is another hallmark of mast cell activation. We therefore activated IgE-sensitized BMMCs with Ag in the presence of extracellular Ca2+ and evaluated calcium mobilization by means of the Ca2+-sensitive fluorophore Fura-2-AM. Both Ntal+/+ controls (WT and WT pLKO) showed a comparable increase in [Ca2+]i peaking at 50-60 s after exposure to Ag (Figure 2C). NTAL KO cells showed the expected [9,10] long-lasting increase in Ca2+ mobilization. Within 120-600 s this response significantly (P < 0.01) differed from that seen in WT cells.
A significant increase in the calcium response was also observed in NTAL KDs between 180-450 s. No significant difference in the Ca2+ response between NTAL-deficient cells and controls was observed after stimulation with SCF (Figure 2D). Exposure of all cell types to a mixture of Ag and SCF resulted in an accelerated increase in [Ca2+]i, and again, BMMCs with NTAL KO and KD showed higher Ca2+ mobilization than the controls, WT and WT pLKO (Figure 2E). These data indicate that the negative regulatory roles of NTAL in FcεRI-mediated degranulation and the Ca2+ response are due to the absence of NTAL rather than to possible compensatory developmental changes induced in the NTAL KO.
NTAL depletion and deletion enhance tyrosine phosphorylation of ERK and LAT
Mast cell activation is initiated by tyrosine phosphorylation of the β and γ subunits of FcεRI, followed by phosphorylation of numerous substrates, including ERK and LAT [23,24]. It has been suggested that the enhanced tyrosine phosphorylation of LAT and some other substrates in Ntal-/- cells could reflect better accessibility of kinases to LAT in the absence of competition between NTAL and LAT as substrates [9]. This process could be subject to compensatory developmental alterations. We therefore decided to determine the phosphorylation of ERK and LAT in cells with NTAL KD. Immunoblotting experiments showed an increase in tyrosine phosphorylation of ERK (Figure 3A) and LAT (Figure 3B) in NTAL KD cells when compared to WT cells. WT pLKO cells showed a phosphorylation profile similar to that of WT cells (data not shown). Since the same enhanced phosphorylation of LAT and ERK was observed in NTAL KO and NTAL KD cells, developmental compensation mechanisms are unlikely to be responsible for the enhanced phosphorylation of these targets.
Effect of NTAL on cell spreading and chemotaxis
Activation through FcεRI or KIT results in enhanced spreading of BMMCs on fibronectin [25,26]. Our previous studies with NTAL KO BMMCs showed that full spreading on fibronectin was dependent on the presence of NTAL in FcεRI-activated, but not KIT-activated, cells [18]. Spreading on fibronectin requires expression of intact integrins and signaling pathways, which could be developmentally regulated. Therefore, we analyzed spreading of BMMCs on fibronectin in controls and in cells with NTAL KD after exposure to Ag and/or SCF. Data presented in Figure 4A show that, in relation to WT and WT pLKO cells, cells with NTAL KD exhibited decreased spreading after activation with Ag. Activation with both Ag and SCF also reduced the spreading of cells with NTAL KD, which showed a response similar to that of cells from NTAL KO mice. No inhibition of spreading was observed in NTAL-deficient cells after activation with SCF. Quantitative analysis of the data obtained is shown in Figure 4B. The area of individual cells was measured and normalized to that of nonactivated cells. Compared to the corresponding controls, NTAL KDs and KOs exhibited a significant decrease in surface area after triggering with Ag. Similarly, a clear inhibition of cell spreading was observed in both NTAL KOs and KDs activated with Ag + SCF. The difference between NTAL KDs and the WT pLKO control stimulated with Ag + SCF was, however, not significant, mainly because of a slight decrease in the spreading of WT pLKO cells.
We also tested the chemotactic response of NTAL-deficient cells in comparison to WT cells. Data presented in Figure 4C indicate that BMMCs with NTAL KD exhibited significantly enhanced Ag-mediated chemotaxis, similar to cells with NTAL KO. There was no significant difference between the two cell types.
NTAL KD increases F-actin depolymerization
FcεRI-induced activation of BMMCs is accompanied by rapid F-actin depolymerization [27]. To determine whether F-actin depolymerization is similarly regulated in NTAL KDs, we activated cells with NTAL KD or NTAL KO and the corresponding controls with Ag and/or SCF for the indicated time intervals, and determined the amount of F-actin by flow cytometry. Data presented in Figure 5A show that triggering with Ag stimulated both NTAL KOs and KDs to significantly higher F-actin depolymerization when compared to WT cells. We also observed that SCF activation induced a clear increase in F-actin formation, rather than actin depolymerization, and no difference between NTAL KOs and KDs was noticed (Figure 5B). Cells activated by both activators (Ag + SCF) responded with stronger depolymerization than cells activated with Ag alone, and again no difference between cells with NTAL KO and KD was observed (Figure 5C).
Transcriptome profiles of cells with NTAL KO or KD
To better understand the role of NTAL in mast cell physiology and Ag-induced signaling pathways, we compared gene expression profiles of nonactivated and Ag-activated NTAL-deficient BMMCs with the corresponding controls. Four groups of cells (NTAL KO, NTAL KD, WT, and WT pLKO) were prepared and maintained under comparable culture conditions. Each group consisted of BMMCs isolated from three mice to account for variability of cell donors and procedures of BMMC isolation. RNA was isolated from IgE-sensitized nonactivated cells or cells activated for 2 hours with Ag. The same RNA was used for microarray analysis and for later confirmation of gene expression by qPCR. First, we compared expression profiles of nonactivated NTAL KO cells with nonactivated littermate WT controls; of 209 differentially expressed genes (258 probe sets), 70 showed more than 1.8-fold upregulation in NTAL KO cells (Table S1). When the genes were sorted according to their biological processes and molecular functions (Table 2), a substantial fraction was related to metabolism and biosynthetic processes. Interestingly, among the differentially expressed genes was Idi1, which was downregulated in NTAL KO cells.
Numbers of differentially expressed genes in nonactivated NTAL KO and KD cells and their overlaps are schematically depicted in Figure 6A. Expression levels of the overlapping genes were verified by qPCR. The data presented in Figure 6B show that of the 9 differentially expressed genes overlapping between KO and KD, 5 genes were upregulated in NTAL KO cells (Spink4, Plau, Otub2, Dusp5 and Sdf4). Two of them were also upregulated in NTAL KD cells (Plau and Dusp5); one gene, Otub2, was downregulated in NTAL KD cells, and Spink4 and Sdf4 gave results in NTAL KD cells that were not consistent between microarray analysis and qPCR. The downregulated genes included Mlec, Slain1, Idi1 and Nt5dc2, and they were comparably reduced in both NTAL KO and KD cells.
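The comparison of the KO- and KD-dependent gene lists described above amounts to tabulating, for each overlapping gene, whether it changed in the same direction in both cell types. The direction labels below are taken from the qPCR results quoted in the text; treating them as Python dictionaries is only an illustration of the bookkeeping, not the actual analysis pipeline:

    # Direction of change (qPCR) for the 9 genes overlapping between KO and KD;
    # None marks results that were not consistent between microarray and qPCR.
    ko = {"Spink4": "up", "Plau": "up", "Otub2": "up", "Dusp5": "up", "Sdf4": "up",
          "Mlec": "down", "Slain1": "down", "Idi1": "down", "Nt5dc2": "down"}
    kd = {"Spink4": None, "Plau": "up", "Otub2": "down", "Dusp5": "up", "Sdf4": None,
          "Mlec": "down", "Slain1": "down", "Idi1": "down", "Nt5dc2": "down"}

    concordant = [g for g in ko if kd[g] == ko[g]]
    print(sorted(concordant))  # genes regulated in the same direction in KO and KD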
Analysis of gene expression in Ag-activated cells revealed 194 genes (235 probe sets) differentially expressed between NTAL KO cells and WT cells (Table S3). Among them, 83 genes showed more than 1.8-fold upregulation. Most of these genes are involved in transcription; others are involved in metabolic processes, in the production and function of cytokines, and in cytoskeleton organization and function. Analysis of Ag-activated NTAL KDs revealed 165 differentially expressed genes (203 probe sets; Table S4).
Numbers of differentially expressed genes in Ag-activated NTAL KO and KD cells and their overlaps are schematically shown in Figure 6C. Expression levels of the overlapping genes were verified by qPCR. Data presented in Figure 6C and D show that of the 5 overlapping genes in activated KD and KO cells, 4 showed transcriptional regulation in the same direction. In NTAL KO cells, the N4bp2l1 gene was upregulated, but Klhl24 gave results that were not consistent between microarray analysis and qPCR. In NTAL KD cells both these genes were upregulated. Mki67 and Tmcc3 were downregulated in both NTAL KO and KD cells.
We also looked at changes in gene expression profiles after FcεRI triggering of NTAL KO, NTAL KD, WT and WT pLKO BMMCs. With a more stringent cut-off point of >4-fold up- or down-regulated gene expression and with FDR <0.05, we obtained a list of 308 probe sets representing 244 genes, which are shown in Table S5. It is noteworthy that when performing PCA using all probe sets, the differences between activated NTAL-deficient cells and the corresponding nonactivated controls were preserved. The strongest clustering, according to treatment, was found along the first principal component (Figure 7; PC#1), demonstrating that activation of mast cells is a robust process with a high impact on transcriptional changes. Weaker clustering according to the type of cells was seen along the second principal component (Figure 7; PC#2).
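The PCA described above can be reproduced in outline with scikit-learn. The sketch below assumes a samples-by-probe-sets matrix of normalized expression values (randomly generated here as a stand-in); it only illustrates the projection onto the first two principal components shown in Figure 7, not the Partek implementation:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Hypothetical matrix: 23 samples x 1,000 probe sets of normalized expression
    expr = rng.normal(size=(23, 1000))

    pca = PCA(n_components=2)
    scores = pca.fit_transform(expr)  # sample coordinates on PC#1 and PC#2
    print(scores.shape, pca.explained_variance_ratio_)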
In this context it was also of interest to determine the contribution of the lentiviral infection and selection procedure. We therefore focused on differences in gene expression between WT and WT pLKO cells. In nonactivated cells we found about 100 genes with different expression at the cut-off of >1.8-fold change (FDR <0.1), but no difference was found between Ag-activated WT and WT pLKO cells. This was also corroborated by PCA as closer clustering of activated WT and WT pLKO cells (Figure 7).
NTAL-cholesterol crosstalk in regulation of Ag-mediated chemotaxis
Detailed analysis of the microarray data and gene sorting with the help of Gene Ontology Molecular Function and Biological Process (a module incorporated in the Partek software) suggested that NTAL KO led, among others, to decreased expression of several genes involved in cholesterol synthesis. The genes included isopentenyl-diphosphate delta isomerase 1 (Idi1), farnesyl diphosphate synthase (Fdps), lanosterol synthase (Lss) and phosphomevalonate kinase (Pmvk; Table S1 and Figure 6B). Decreased expression of these genes in NTAL KO cells was confirmed by RT-qPCR (Figure 6B). In further experiments we therefore investigated whether NTAL-deficient cells exhibit any change in the amount of cellular cholesterol. Using the Amplex Red Cholesterol Assay kit we found, however, no significant differences in the total amount of cholesterol between NTAL-deficient cells and the corresponding control cells, whether or not the cells were activated (data not shown).
Experiments with macrophages showed that local redistribution of cholesterol from the inner to the outer leaflet of the plasma membrane is of key significance for chemotaxis [28]. We therefore compared chemotaxis of NTAL-deficient and control cells cultured for 66 h in media supplemented with FCS or cholesterol-depleted FCS. This latter approach has been previously shown to decrease the cholesterol level in BMMCs by ~25% ([29] and our unpublished data). We found (Figure 8A) that if WT cells grew in media containing cholesterol-depleted FCS, they exhibited lower chemotaxis towards antigen than cells cultured in cholesterol-containing medium (decrease to 78.8% ± 10.5%, mean ± SD; n = 8). When NTAL KO cells were used, the inhibitory effect of cholesterol deprivation was more pronounced (decrease to 66.8% ± 7.1%, mean ± SD; n = 8). The observed difference in the chemotaxis decrease between WT cells and NTAL KO cells was significant (P = 0.004). These data indicate that chemotaxis of NTAL-deficient cells is more sensitive to decreased cholesterol levels than chemotaxis of WT cells.
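The percentages quoted above express, for each experiment, migration in cholesterol-depleted serum relative to the paired control in normal serum. A minimal Python sketch of this normalization and of the subsequent WT-versus-KO comparison is shown below; the raw counts are hypothetical and the use of an unpaired t-test is an assumption:

    import numpy as np
    from scipy import stats

    def percent_of_control(depleted_counts, control_counts):
        """Per-experiment chemotaxis in depleted serum as % of the paired control."""
        return 100.0 * np.asarray(depleted_counts) / np.asarray(control_counts)

    # Hypothetical migrated-cell counts from paired experiments
    wt_pct = percent_of_control([790, 820, 760], [1000, 1010, 980])
    ko_pct = percent_of_control([680, 650, 670], [1000, 990, 1020])
    print(wt_pct.mean(), ko_pct.mean(), stats.ttest_ind(wt_pct, ko_pct).pvalue)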
In parallel experiments we compared chemotaxis of NTAL-deficient and control cells after treatment with various concentrations of methyl-β-cyclodextrin (MβCD), a compound which has been previously shown to reduce cellular cholesterol in mast cells [14,30]. In accord with previous findings (Figure 4C), the Ag-driven chemotactic response was higher in NTAL-deficient cells than in WT cells (Figure 8B). When NTAL KO BMMCs were exposed to increasing concentrations of MβCD (0.1-2.5 mM), a significant decrease in the chemotactic response was observed at all concentrations of MβCD tested (Figure 8B). In contrast, WT cells showed no decrease in chemotaxis after exposure to 0.5-2.5 mM MβCD. When exposed to 0.1 mM MβCD, even a small but significant increase in the chemotactic response to Ag was observed in WT BMMCs. These data suggest that low concentrations of MβCD change the distribution of plasma membrane cholesterol in NTAL KO cells in such a way that their chemotactic response is reduced.
Discussion
This study was initiated because of long-standing discrepancies in published data indicating that NTAL in mouse mast cells is a negative regulator of FcεRI signaling [9,10], whereas in human or rat mast cells it is a positive regulator [12,13]. However, it was not clear whether these discrepancies reflect the different methods/strategies used for NTAL down-regulation (NTAL KO in mice, versus NTAL KD in human and rat mast cells) and developmental alterations in KO mice, as described in other systems where the absence of a given gene is compensated for by enhanced transcriptional activity of other genes [31][32][33][34]. In an attempt to understand the contribution of such compensatory mechanisms, we investigated for the first time the properties of mouse BMMCs with NTAL KD and compared them with BMMCs from mice with NTAL KO and well-matched controls. Several lines of evidence obtained in this study indicate extensive similarities between the properties of BMMCs with NTAL KD or KO, and support the concept that NTAL is mostly a negative regulator of FcεRI signaling, independently of possible compensatory developmental alterations.
First, BMMCs with both NTAL KO and NTAL KD showed a comparable increase in degranulation induced by FcεRI triggering. Compared to WT cells, NTAL KDs showed the highest increase in degranulation at suboptimal concentrations of Ag, similarly to NTAL KOs. At optimal and supraoptimal Ag concentrations the differences were less pronounced. Interestingly, activation through KIT was not potentiated by the absence of NTAL, even though NTAL is tyrosine phosphorylated in KIT-activated mast cells [12,35] and activation through KIT enhances degranulation of FcεRI-activated WT cells, and even more so of cells with NTAL KO or KD.
Second, Ag-activated BMMCs with NTAL KD exhibited a higher Ca2+ response when compared to WT pLKO cells, but a lower one when compared to NTAL KO cells. Similarly to degranulation, down-regulation of NTAL had no effect on the Ca2+ response after KIT triggering, even though KIT activation enhanced the Ca2+ response in Ag-activated WT cells, and even more so in NTAL-deficient cells.
Third, when compared to WT cells, Ag activation of cells with NTAL KD resulted in enhanced tyrosine phosphorylation of ERK and LAT. Similar enhancement was also observed in Ag-activated NTAL KOs ([9,10] and this study). These data support the hypothesis that competition between NTAL and LAT as kinase substrates could attenuate the response in WT cells through decreased tyrosine phosphorylation of LAT, followed by decreased binding and activation of phospholipase Cγ1 and subsequent events [6,9,36].
Fourth, BMMCs with both NTAL KD and NTAL KO exhibited enhanced F-actin depolymerization after stimulation with Ag alone and even more after simultaneous triggering with Ag + SCF. F-actin depolymerization precedes degranulation [27,37], and the observed decrease in the amount of F-actin could account for the higher degranulation observed in NTAL-deficient cells than in WT cells after simultaneous activation with Ag + SCF.
Fifth, cells activated through FcεRI or KIT exhibited enhanced spreading on fibronectin. In cells with NTAL KD, spreading was significantly decreased after activation with antigen, but was unaffected after SCF triggering. These data suggest that the positive regulatory role of NTAL in Ag-mediated spreading ([18] and this study) is not the result of developmental compensatory events. Rather, spreading could be related to the transient actin depolymerization which was observed in Ag-activated WT cells and even more in NTAL-deficient cells, but not in SCF-activated cells, whether WT or NTAL-deficient.
Sixth, BMMCs with NTAL KD exhibited migration towards Ag comparable with that seen in NTAL KO cells, and significantly higher than in WT cells. We recently showed that the level of active RhoA in resting NTAL KO BMMCs is at least twice as high as in WT cells [18]. Although active RhoA transiently decreased after FcεRI triggering, more in NTAL KO cells than in WT cells, it is likely that differences in the regulation of RhoA activity in NTAL-deficient and WT cells are responsible for the enhanced, NTAL-regulated chemotaxis. It should be stressed that previous reports have shown that RhoA regulates chemotaxis in other cell types, such as neutrophils [38][39][40], macrophages [41], dendritic cells [42] and lymphocytes [43].
The data presented in this study, together with those obtained in mice experiencing systemic anaphylaxis [9], indicate that in mouse mast cells NTAL is a negative regulator of FcεRI signaling. In contrast to mouse cells, NTAL in human mast cells and rat basophilic leukemia (RBL)-2H3 cells was described as a positive regulator of mast cell signaling [12,13,35]. The observed differences could have several causes. First, NTAL could play different roles in mast cells of different origin. It has been shown that human mast cells differ from mouse mast cells in cytokine production, immunoglobulin receptor expression, and the ability of different stimuli to cause degranulation and release of mediators [44]. Furthermore, when total tyrosine-phosphorylated proteins were compared between RBL-2H3 cells and freshly isolated peritoneal and pleural rat mast cells, dramatic differences were observed [45]. These differences could reflect the tumor origin of RBL-2H3 cells and could be responsible for the observed properties of NTAL. Importantly, mouse and human mast cells were obtained after differentiation under different cell culture conditions, which could modify their responsiveness. Mouse BMMCs were obtained by culturing bone marrow precursors in the presence of IL-3 and SCF (this study; [9]) or IL-3 alone [10], whereas human mast cells were derived from CD34+ pluripotent peripheral blood progenitors cultured in the presence of human SCF, IL-6 and IL-3 [12,35]. A previous study showed that differentiation of mast cells from their precursors in the presence of various cytokines could result in different responsiveness of the cells to various activators [22]. Finally, one cannot exclude the possibility that the silencing vectors used for NTAL KD in human and/or RBL-2H3 mast cells exhibited off-target effects, which modified the responsiveness of the cells to FcεRI triggering.
To clarify the role of NTAL in FcεRI signaling and to find out whether absence or decreased expression of NTAL has any effect on the transcriptional regulation of genes, we compared under thoroughly controlled conditions the RNA expression profiles of resting and Ag-activated BMMCs with NTAL KO or KD and the corresponding controls. We found that a number of genes were up- or down-regulated in BMMCs with NTAL KO or KD when compared to WT cells; most of these genes were not related to known immunoreceptor signaling pathways. The exact mechanisms and pathways through which NTAL causes changes in the transcription of these genes remain to be determined. As expected, FcεRI activation induced robust changes in gene expression in all four types of mast cells studied (NTAL KO, NTAL KD, WT and WT pLKO). At the given cut-off level (>1.8-fold difference from the corresponding controls), 209 genes showed different expression in nonactivated NTAL KO cells. It is remarkable that no differences in gene expression were noticed between Ag-activated WT and WT pLKO cells when similar criteria for the analysis of differential gene expression were used. This confirms that infection and puromycin selection had no significant effect on the data obtained from lentivirally infected and activated cells. This is in marked contrast with the comparison of RNA from activated cells with NTAL KO vs WT and NTAL KD vs WT pLKO, where 194 and 165 genes, respectively, were found to be differentially expressed.
When comparing expression levels in the various cell types we noticed that the degree of overlap between nonactivated and activated NTAL KO and KD cells was rather modest. This could be due to methodological differences in the production of NTAL-deficient cells. However, it should be kept in mind that although lentiviral infection itself and puromycin selection caused differential expression of some genes, as can be deduced from the observed differences in gene expression between nonactivated WT and WT pLKO cells, this difference disappeared in activated cells. Thus, lentiviral infection and puromycin selection did not contribute to the differences observed, at least in activated cells.
A hypothetical simplified model of the role of NTAL in mast cell activation and transcriptional regulation in WT and NTAL-deficient cells is shown in Figure 9. In nonactivated WT cells both adaptor proteins, NTAL and LAT, as well as the FcεRI β and γ subunits, are only weakly tyrosine phosphorylated, because of the equilibrium between kinases and phosphatases and/or decreased access of the kinases to their substrates [46]. Quiescent cells also exhibit low [Ca2+]i and standard gene expression (Transcription profile 1; Figure 9A). After Ag-mediated activation there is enhanced tyrosine phosphorylation of the FcεRI β and γ subunits by LYN and SYK kinase. Activated SYK phosphorylates NTAL and LAT, and this leads to further propagation of the activation signal, increased [Ca2+]i and dramatic changes in gene expression by mechanisms that are so far not fully understood (Transcription profile 2; Figure 9B). In cells with decreased expression of NTAL due to NTAL KO or NTAL KD, gene expression is changed when compared to WT cells (Transcription profile 3; Figure 9C). After activation of NTAL-deficient cells, LAT is phosphorylated by SYK. However, because of the absence of NTAL, LAT is more phosphorylated than in WT cells. This leads to increased [Ca2+]i and transcriptional regulation which differs from that in WT cells (Transcription profile 4; Figure 9D). These processes contribute to the enhanced response to Ag in NTAL-deficient cells, including enhanced degranulation, calcium response, chemotaxis and depolymerization of F-actin.
Unexpected findings in this study were the NTAL-dependent changes in the expression of a number of genes related to metabolism and biosynthetic processes. A subgroup of these genes was involved in lipid metabolism, including the synthesis of cholesterol. Although decreased transcription of several genes involved in cholesterol synthesis was confirmed by RT-qPCR, no significant difference in the total amount of cellular cholesterol was detected between WT and NTAL-deficient cells. Yet, surprisingly, we found that pretreatment of BMMCs with MβCD had different effects on NTAL KO and WT cells. In NTAL KO cells MβCD significantly inhibited chemotaxis at all concentrations tested (0.1-2.5 mM), whereas in WT cells MβCD either slightly but reproducibly increased chemotaxis at a low concentration (0.1 mM) or had no significant effect at higher concentrations (0.5-2.5 mM). MβCD is known to remove cholesterol from cells [47,48], and one can therefore hypothesize that the enhanced chemotaxis of NTAL-deficient cells is regulated in part by the distribution of plasma membrane cholesterol. The molecular mechanism of the cholesterol-dependent regulation of chemotaxis is poorly understood, but it could be related to differences in the synthesis and/or distribution of cholesterol into plasma membrane sheets. One such possible mechanism has recently been described in macrophages with a defect in the ATP-binding cassette transporters ABCA1 and ABCG1, which are involved in the movement of cholesterol from the inner to the outer leaflet of the plasma membrane and play a role in chemotaxis towards the chemoattractant C5a [28]. Regulation of the chemotactic response by cholesterol has been described in other cell types including T cells [49], monocytes [50] and neutrophils [51]. The molecular mechanisms of the cross-talk between NTAL and cholesterol remain to be determined.
In summary, the results of functional studies of BMMCs with NTAL KD and the corresponding controls indicate that NTAL is a negative regulator of FcεRI-mediated signaling pathways. Because similar findings were obtained in BMMCs with NTAL KD or KO, compensatory developmental alterations do not appear to play a significant role in FcεRI signaling in BMMCs from Ntal-/- mice. Expression profiles of nonactivated or FcεRI-activated BMMCs with NTAL KO, NTAL KD, and the corresponding controls identified several genes which were up- or down-regulated in NTAL-deficient cells. The data indicate that some of these genes could be involved in the regulation of cholesterol-dependent events in Ag-mediated chemotaxis.
Supporting Information
Table S1 Differentially expressed gene transcripts in nonactivated NTAL KO cells compared with nonactivated WT cells. The table represents a list of probe sets for the corresponding genes that were up- or down-regulated in nonactivated (0 h) NTAL KO cells (KO) when compared to the corresponding nonactivated WT cells (WT) and passed the filter of FDR <0.1.
Spin-Off and Commercialization of University Researches
This paper analyzes the university spin-off, which is considered a new solution for exploiting and commercializing current university research. Since the function of universities is research and training, not production, their research results or inventions are often academic and fundamental in nature. The need to convert research results into machines or equipment that can be used in industrial production has led universities to create spin-off enterprises. A spin-off is one of the bases for commercializing the research results of universities. Using the method of analyzing and synthesizing results from related studies, the study provides recommendations for effective spin-off development as a useful solution for improving the commercialization of research results in universities.
Introduction
Currently, the efficiency of commercializing research results in universities is not high. The reason is that universities almost exclusively focus on training and scientific research tasks. In addition to these core activities, modern universities around the world also focus on other tasks such as linking with the business sector, and they gradually play an important role in the economic development of the nation. Accordingly, many universities have focused on the commercial exploitation of scientific research activities, which not only contributes greatly to economic growth and development but also provides cutting-edge technology upgrades to industry players. Among the models for commercializing scientific research, the establishment of subsidiary enterprises by universities is considered an important mechanism to improve this efficiency. This business model first appeared in England at the end of the nineteenth century and quickly gained attention and application in developed countries around the world.
Definition of Spin-Off (or Spin-Off Enterprise)
According to the Oxford Dictionary (online), a spin-off is "A subsidiary of a parent company that has been sold off, creating a new company." Similarly, the Cambridge Advanced Learner's Dictionary & Thesaurus (online) defines a spin-off as "A new business created by separating part of a company, or the act of creating such a business." In this study, a spin-off is a new company formed by selling or distributing shares of an existing business or division of the parent company. Furthermore, spin-offs are expected to be more valuable as independent entities than as parts of a larger business. A spin-off is a type of divestiture and is also known as a spin-out. The university is not a manufacturing enterprise, so it has to form spin-off businesses to turn ideas, models, and results of scientific research into machinery and equipment, or to cooperate with external units in production and technology transfer. There are three different types of spin-offs with associated characteristics: equity carve-outs, split-offs, and split-ups.
- Under an equity carve-out, a portion of the subsidiary's shares is offered for sale to the public. This has the effect of injecting money into the parent company without loss of control.
- A split-off occurs when shareholders exchange their parent stock for shares of the subsidiary. These transactions give the company the opportunity to dispose of a subsidiary in a tax-free manner, and even to remove an unwanted shareholder.
- Under a split-up, the parent company distributes shares of each subsidiary, and the parent company liquidates and ceases to exist.
Thus, spin-off operations are likely to yield some new results such as:
- The generation of new revenue from scientific research at universities in addition to tuition fees.
- The transformation of a passive perspective into an active one concerning the productive activities of society.
- The implementation of real market orientation in the university's activities.
- The addition of experience from practical activities and relevant updates to lectures.
Some of the top spin-offs in the world are shown in Table 1.
Commercialization of University Researches
According to modern economic theories, national development is associated with scientific and technological capacity. National competitiveness is expressed by the preeminent characteristics of products in the market and all products are the result of scientific research and technological innovation.
Therefore, economic development must be closely linked to national technological research and innovation, and at the national level the research of universities and academies plays a decisive role in research and innovation. However, the research results of universities are often solutions and applications, while industry wants new machinery, equipment, and technological lines whose features and effectiveness can be evaluated. To meet these industry requirements, the university must either cooperate with investors or set up a new business on its own (a spin-off enterprise) to convert or transfer the results of academic research (Iacobucci & Micozzi, 2015). This is a relatively new research problem for Vietnamese universities, and lecturers and managers in Vietnamese universities have little experience in production and business activities, so quantitative research through questionnaire surveys may not be satisfactory. Therefore, the study uses the method of analysis and synthesis of results from related studies at home and abroad. On the basis of previous research results and the current situation of our country, we propose solutions to develop spin-offs in order to commercialize the scientific research results of universities and meet the requirements of industry.
Macro Environment
Each country defines its own development strategy in accordance with its own characteristics, and this always includes a strategy for the development of Science, Technology and Innovation (STI), because STI determines the competitiveness of countries in the world (Broughel & Thierer, 2019). The adoption of new technologies can improve productivity, innovation makes processes more efficient, and companies can provide higher-quality goods and services. Investments in research and development (R&D) and innovation boost production capacity and support overall growth. Much empirical and theoretical work focuses on research and development as an important factor for economic growth. R&D spending leads to growth through its positive effects on innovation and total factor productivity (TFP). Because of the decisive role of science and technology, nations have to compete to win the global innovation race, and in doing so they face different challenges. In fact, the nation that can effectively manage the three sides of the innovation success triangle is likely to be the nation that wins the race and reaps the rewards of greater economic growth and prosperity (Atkinson, 2020).
In recent years, international trade has expanded rapidly due to the rise of global value chains (GVCs). Innovation activity in any country is embedded in the national innovation system (NIS), and innovation involves not only science and technology but also many other factors, including economic institutions and other political and social influences that affect innovation (e.g., policies and regulations, financial systems, business organizations, labor markets, higher education systems, culture and tax incentives).
More than ever, economic stability, good governance, sound regulations and the provision of basic infrastructure remain essential to attracting the investments needed to increase regional production and support economic integration. In particular, strategic business clusters and the establishment of special economic zones, in which governments provide access to quality infrastructure and reliable regulation, can further support industrialization and regional specialization.
University
The higher education community can contribute solutions to strengthen leadership (Ivano, 2018).
Universities play an essential role in the growth of the nation. University research and innovation close technology gaps, encourage investment, promote exports, and create a thriving economy. University research activities train a highly skilled and innovative workforce that underpins the success of the knowledge-based economy. The UK's fastest growth stems from boosting productivity and increasing socioeconomic efficiency. This result is rooted in world-leading research and innovation by universities and in policies that support the most effective collaboration between universities and industry.
The UK has created a remarkable and diverse research and innovation ecosystem that is a key driver of growth and productivity: IP revenue alone generated £86.6 million in 2012-2013, £376 million came from graduate start-ups, and another £2.7 billion came from partnerships with businesses.
Cooperation Model
National innovation programs that create competitive advantage across the country require continual changes in the factors affecting value-added chains, such as policies and regulations, institutional actors, and the interactions and relationships among them. These include cutting-edge research capabilities, external partnerships, the quantification of scientific knowledge and output, and collective entrepreneurship that facilitates growth. To improve the quality and efficiency of their production, businesses need to partner with academic institutions to tackle technological challenges and share industry knowledge. The government acts as a supporting organization with the right policies, and venture capitalists act as evaluators and investors that fund operations. In that context, technology innovation cooperation becomes an activity linking universities, industry, and government under the "Triple Helix" model in order to maintain the sustainable development of a country (Etzkowitz, 2021). Because the university (the parent organization) does not have a production function, it is necessary to set up a business, called a spin-off enterprise, to develop ideas, solutions, and inventions into specific machines and equipment to put into production (Benassi et al., 2017).
The establishment of a spin-off is a very important activity that transforms the university mindset into an enterprise mindset in meeting industry requirements by transferring technology from the university to industry. A spin-off may include not only members of the school but also outside members such as investors or venture capital funds. Although there are many types of spin-offs, the importance of university spin-offs as a technology transfer mechanism for creating and sustaining regional economic growth is widely recognized. Universities have established organizational mechanisms and procedures to encourage their development.
Both the spin-off business and the parent institution (university) perceive spin-off formation as a win-win situation. Spin-offs sometimes raise concerns about the ownership of intellectual property. In the United States, the relevant legislation allows universities, not-for-profit organizations, and small businesses to retain certain IP rights related to inventions made via federally supported R&D. Serving as the statutory foundation facilitating federally supported R&D technology transfer, the Act was designed to promote commercialization of innovations arising from such R&D through cooperation between the research community, industry, and state and local governments.
Investors
In the beginning, although there is a lot of growth potential from the exploitation of inventions and from technology transfer that generates high profits, spin-offs may face difficulties in raising business capital. Moreover, it is difficult for them to get loans from banks and credit institutions due to limited collateral and business documentation. Meanwhile, venture capital has been investing in small and medium-sized enterprises (SMEs) operating in the high-margin innovation sector.
With the business philosophy of "high risk and big return", venture capital funds have invested in many manufacturing industries and achieved outstanding business results in developed countries. Venture capital has created a business
Status of Innovation Activities in Vietnam
In recent years, Vietnam's innovation activity has grown, and in 2019 Vietnam ranked first among the group of 29 lower-middle-income economies. The results are shown in Table 2.
The results of scientific research have contributed good solutions such as Ton That Tung's liver surgery method, the frozen-semen method in in vitro fertilisation (IVF), organ transplantation from donated tissues, stem cells from umbilical cord membranes, hematopoietic stem cell transplantation to treat blood diseases, fetal intervention techniques, wheelchairs controlled through human thought, fire suppression by sound waves, water measurement methods in agriculture, etc. However, the scientific research of universities is not regular and has not met the requirements of society and national development, and its added value and contribution to GDP are still very small. University staff are not familiar with the flexible operating mechanisms of enterprises. The number of scientific and technological (S&T) research groups and the investment in equipment are insufficient. The necessary policies and legal regulations to develop S&T activities are lacking and inconsistent. The results of S&T research have not been proven in production practice, so businesses have not been persuaded to use them.
Government
First of all, the Government should incorporate spin-off development into the national strategy for science, technology and innovation. Finally, it is vital to expand cooperation with developed countries in scientific research and to engage leading researchers in key government projects.
Enterprises
Enterprises should formulate a development strategy in accordance with the new requirements of the world and the national strategy, including orientations for sustainable development and green production.
In addition, it is necessary for enterprises to actively participate in the development programs of the national innovation system, including the triple helix model and the spin-off model, and to strengthen cooperation with universities' scientific research as well. Besides, enterprises should dynamically take part in
Universities
Firstly, university leaders should develop a growth strategy for the university in line with the national innovation system development strategy and actively participate in the development of the triple helix model, as well as in the establishment of spin-offs and a technology transfer office (TTO). In addition, it is important for universities to consider scientific research not only as a brand-building activity but also as a second income-generating activity. Furthermore, universities should provide material and legal support for spin-off activities and determine an appropriate distribution of income among stakeholders in exploiting research results and technology transfer according to the provisions of law.
Besides, it is necessary for universities to expand spin-off operations backed by the university's guarantees, relationships and reputation, and to resolve conflicts arising with the spin-off through negotiation on the basis of long-term benefits.
At the same time, universities should strengthen connections with industry associations to promptly grasp innovation requirements and should periodically organize thematic activities with industries to jointly solve problems arising from enterprises.
Next, it is critical for universities to engage with national laboratories in scientific research, to expand cooperation with developed countries across multiple channels in order to share scientific research experience, and to promulgate regulations on attracting venture capital investment in exploiting research results and technology transfer. Moreover, universities should rationally allocate faculty activities between teaching and scientific research responsibilities and apply a special regime for top talents in science and technology in the formation and operation of strong research groups.
Finally, it is urgent for universities to train the next generation of human resources in science and technology through practical research, and to expand relationships with businesses, industry associations, provinces, cities, ministries, national programs and projects, etc., to meet the development requirements of society.
Conclusion
The COVID-19 pandemic has disrupted supply chains around the world and created new international trade relations and sustainable development trends that require improved production conditions, technological innovation, investment in environmentally friendly production, and reduction of CO2 emissions in line with the United Nations' 2030 objectives. Green economic development requires controlling industrial pollution, using natural resources efficiently, and doing no harm to the environment and nature. Vietnam promotes exports through trade agreements, and Vietnamese enterprises must innovate technologically to survive in the international market. In addition, technological innovation improves productivity, helping to avoid the middle-income trap and export barriers, and limits future dependence on foreign technology. In the coming period, research to develop and innovate technology in industry is an urgent requirement for research institutions and enterprises. To put research results into production quickly, it is necessary to form many spin-offs with strong technology transfer capacity and effective operation, so as to increase connectivity and respond quickly to market requirements. In the chain of value-creating activities implementing Vietnam's STI strategy, spin-off activities play a very important role; at present, however, they are a bottleneck in the formation of a technology innovation ecosystem, and resolving this bottleneck effectively will promote coordination between universities and industry and develop the country's economy in the integration period. Finally, achieving these goals requires improved and synchronized coordination among three parties: the government, businesses, and research institutions, including universities. | 3,652.6 | 2022-01-01T00:00:00.000 | [
"Business",
"Engineering",
"Education"
] |
Eigenvalues of the Laplacian on the Goldberg-Coxeter constructions for $3$- and $4$-valent graphs
We are concerned with spectral problems of the Goldberg-Coxeter construction for $3$- and $4$-valent finite graphs. The Goldberg-Coxeter constructions $\mathrm{GC}_{k,l}(X)$ of a finite $3$- or $4$-valent graph $X$ are considered as "subdivisions" of $X$, whose numbers of vertices increase at order $O(k^2+l^2)$, but which nevertheless have bounded girth. It is shown that the first (resp. the last) $o(k^2)$ eigenvalues of the combinatorial Laplacian on $\mathrm{GC}_{k,0}(X)$ tend to $0$ (resp. tend to $6$ or $8$ in the $3$- or $4$-valent case, respectively) as $k$ goes to infinity. A concrete estimate for the first several eigenvalues of $\mathrm{GC}_{k,l}(X)$ in terms of those of $X$ is also obtained for general $k$ and $l$. It is also shown that specific values always appear as eigenvalues of $\mathrm{GC}_{2k,0}(X)$ with large multiplicities, almost independently of the structure of the initial graph $X$. In contrast, some dependence of the multiplicities of these specific values on the graph structure of $X$ is also studied.
Introduction
The Goldberg-Coxeter construction is a subdivision of a 3- or 4-valent graph, and it is defined by Dutour-Deza [4] for a plane graph, based on simplicial subdivisions of regular polytopes in [1,7]. In [4], it is pointed out that it often appears in chemistry and architecture, and its combinatorial and algebraic structures are investigated. Goldberg-Coxeter constructions of regular polyhedra generate a class of Archimedean polyhedra and infinite sequences of polyhedra, which are called Goldberg polyhedra. For example, a Goldberg-Coxeter construction of a dodecahedron generates a truncated icosahedron, which is known as the fullerene C 60 [10,17]. Goldberg-Coxeter constructions are also applied to Mackay-like crystals and explain large-scale spatial fullerenes [14,16]. Mathematical modeling of self-assembly in nature is also widely studied in [1,11]. Recently, Fujita et al. have synthesized molecular structures with 4-valent Goldberg polyhedra, and they explain their self-assembly from the viewpoints of chemistry and biology [6].
On the other hand, the stability of a molecule is explained by the eigenvalues of the finite graph expressing the molecular structure via the Hückel method [2]. Hence, studies of the eigenvalues of Goldberg-Coxeter constructions are worth pursuing. The Goldberg-Coxeter construction GC k,l (X) of a 3- or 4-valent graph X has parameters k and l, both of which are integers and which can be regarded as indicating a point in the triangular or square lattice, respectively. We are then concerned with the behavior of the eigenvalues of GC k,l (X) as k and l tend to infinity.
Throughout this paper, unless otherwise indicated, a graph is always assumed to be connected, finite and simple. For a graph X, let us denote by V(X) the set of vertices of X, and by E(X) the set of undirected edges of X. For p ∈ V(X), the set of its neighboring vertices is denoted by N X (p). The combinatorial Laplacian ∆ X , simply called the Laplacian, of a graph X acts on the set R V(X) of functions on V(X) and is defined as (∆ X f )(p) = deg(p) f (p) − ∑ q∈N X (p) f (q) for f ∈ R V(X) and p ∈ V(X), where deg(p) = 3 or 4 provided X is respectively a 3- or 4-valent graph. As is well-known, the eigenvalues of ∆ X for a regular graph X of degree r necessarily lie in the interval [0, 2r].
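To make the definition concrete, the following short Python sketch builds the combinatorial Laplacian of a small 3-valent graph (the tetrahedron K4, chosen purely for illustration) and checks that its eigenvalues lie in [0, 2r]; the adjacency-list representation is an assumption of this example, not notation from the paper.

import numpy as np

def laplacian(adjacency):
    # adjacency: dict mapping each vertex to the list of its neighbours
    vertices = sorted(adjacency)
    index = {p: i for i, p in enumerate(vertices)}
    L = np.zeros((len(vertices), len(vertices)))
    for p, nbrs in adjacency.items():
        L[index[p], index[p]] = len(nbrs)      # deg(p) on the diagonal
        for q in nbrs:
            L[index[p], index[q]] -= 1         # -1 for each neighbour q of p
    return L

K4 = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}   # tetrahedron, 3-valent
eigs = np.linalg.eigvalsh(laplacian(K4))
print(np.round(eigs, 6))   # [0, 4, 4, 4]; all eigenvalues lie in [0, 2r] = [0, 6]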
The definition of the Goldberg-Coxeter constructions extends to a general 3- or 4-valent graph X = (V(X), E(X)) equipped with an orientation at each vertex, in the sense that, for each p ∈ V(X), the set of edges emanating from p is ordered, and the following is proved. Theorem 1.1. Let X = (V(X), E(X)) be a connected, finite and simple 3- or 4-valent graph equipped with an orientation at each vertex, let X′ = GC k,l (X) be the Goldberg-Coxeter construction of X, where k ≥ l ≥ 0 and k ≠ 0, and let 0 = λ 1 (X) < λ 2 (X) ≤ · · · ≤ λ |V(X)| (X) and 0 = λ 1 (X′) < λ 2 (X′) ≤ · · · ≤ λ |V(X′)| (X′) be the eigenvalues of their Laplacians ∆ X and ∆ X′ , respectively. Then there exist integers µ(k, l) and ν(k, l), depending only on k and l, satisfying the estimate (1.1) for i = 1, 2, . . . , |V(X)|. When X is 3-valent, µ(k, l) satisfies the bound (1.2); when X is 4-valent, ν(k, l) satisfies an analogous bound. As shall be explained later (cf. Proposition 2.2), if, in particular, X is "appropriately" embedded in an oriented surface, then X is endowed with a natural orientation at each vertex and GC k,l (X) remains embedded in the same surface. Thus (1.1) also gives an upper bound for such a graph X.
There is a long line of works on upper bounds for the (especially, first nonzero) eigenvalues of general planar or genus g finite graphs (see [12,18] and the references therein). In [13], it is proved that the i-th eigenvalue of a graph embedded in an oriented surface of genus g is estimated from above by O((g + 1) log 2 (g + 1)i/n), where n is the number of the vertices. Our estimate (1.1) is different from their estimate on the point that (1.1) is independent of the genus.
On the other hand, as for the last several eigenvalues of GC k,0 (X) the following holds.
Theorem 1.2. The last |V(X)| eigenvalues of GC k,0 (X) tend to 6 (in the 3-valent case) or 8 (in the 4-valent case) as k → ∞; that is, the convergence (1.3) holds for i = 1, 2, . . . , |V(X)|. If X is a bipartite 3-valent graph, then the convergence (1.3) remains valid also for arbitrary GC k,l (X). Furthermore, for a fixed k, the last |V(X)| eigenvalues of the n-th iterated Goldberg-Coxeter constructions GC n k,0 (X) converge to 6 or 8 exponentially fast as n → ∞. As the following theorems show, the Goldberg-Coxeter constructions also have steady eigenvalues. Theorem 1.3. Let X be a connected, finite and simple 3-valent graph equipped with an orientation at each vertex, and let GC 2k,0 (X) be its Goldberg-Coxeter constructions for k ∈ N.
Here ⌈x⌉ denotes the smallest integer ≥ x, and ⌊x⌋ the largest integer ≤ x.
Theorem 1.4. Let X be a connected, finite and simple 4-valent graph equipped with an orientation at each vertex, and GC 2k,0 (X) be its Goldberg-Coxeter constructions for k ∈ N.
Problems on eigenvalues of the combinatorial Laplacian on regular graphs have been extensively investigated. In particular, an explicit formula for the limit density of the eigenvalue distributions of certain sequences of regular graphs was obtained in [15], and a geometric proof using a trace formula is given in [9] (see also [3]). One of the key points in these works is that the sequence {X n } of q-regular graphs with number of vertices |X n | → ∞ as n → ∞ is assumed to have large girths, g(X n ) → ∞ as n → ∞. Under this assumption, the graphs X n become locally similar, as n → ∞, to the universal covering graph, namely a q-regular tree, and a trace formula can then be applied. The girths of the Goldberg-Coxeter constructions {GC k,l (X)} k,l with an initial graph X are uniformly bounded with respect to the parameters k and l, and hence it would not be so straightforward to apply a trace formula to obtain a limit of the eigenvalue distributions. Indeed, several numerical results suggest that the limit distributions of the eigenvalue distributions of Goldberg-Coxeter constructions are not universal. This speculation is also supported by the following results.
Theorem 1.5. Let X be a connected, finite and simple 3-valent graph which is embedded in a plane. Assume that the number of edges surrounding each face is divisible by 3. Then the following hold.
(2) For any k ∈ N, both GC k,0 (X) and GC k,k (X) have eigenvalue 4 (resp. 2), whose multiplicity is at least k/2 (resp. k/2).
This paper is organized as follows. In Section 2, after giving the precise definition of the Goldberg-Coxeter constructions GC k,l (X), we study their structure in relation to the spectral problems. In Section 3, we obtain some comparisons of the eigenvalues of X and GC k,l (X) to prove Theorem 1.1. In Section 4, we first present proofs of Theorems 1.3 and 1.4. At the end of this paper, we give a few criteria for a 3-valent plane graph X so that some GC k,0 (X)'s have eigenvalues 2 or 4, which proves Theorem 1.5.
Goldberg-Coxeter constructions
This section studies the structure of Goldberg-Coxeter constructions, which shall be necessary in the subsequent sections.
The notion of Goldberg-Coxeter constructions is defined, due to Deza-Dutour [4,5], for a plane graph. The definition can be extended to a nonplanar graph X; indeed, X only has to be equipped with an "orientation at each vertex", and if, in particular, X is "appropriately" embedded on an oriented surface, then the constructions can be done on the surface (see Proposition 2.2). Here Z[ω] gives the triangular lattice on C having 0, 1 and ω as its fundamental triangle, while Z[i] gives the square lattice on C having 0, 1, 1 + i and i as its fundamental square. Definition 2.1 (cf. Deza-Dutour [4,5]). Let X be a connected, finite and simple 3- or 4-valent (abstract) graph equipped with an orientation at each vertex in the sense that, for each p ∈ V(X), the set of edges emanating from p is ordered. For (k, l) ∈ Z 2 with (k, l) ≠ (0, 0), the Goldberg-Coxeter construction of X with parameters k and l is defined through the following steps.
(i) Let us first consider the equilateral triangle △ = (0, z, ωz) in Z[ω] having the vertices 0, z = k + lω and ωz (resp. the square □ = (0, z, (1 + i)z, iz) in Z[i] having the vertices 0, z = k + li, (1 + i)z and iz). (ii) Let us take all the small triangles in Z[ω] (resp. squares in Z[i]) intersecting with △ (resp. □) in its interior and join the barycenters of the neighboring small triangles (resp. squares) to obtain a graph, which is, as an (abstract) graph associated with p for each p ∈ V(X), denoted by △(p) (resp. □(p)). Let us assign each of the edges emanating from p to exactly one edge of the triangle △ (resp. square □) so that the orientation at p coincides with the standard orientation of △ in Z[ω] (resp. □ in Z[i]). Note that △(p) (resp. □(p)) has the 2π/3-rotational symmetry (resp. the π/2-rotational symmetry).
Proposition 2.2. Let X be a connected, finite and simple 3- or 4-valent graph which is embedded in an oriented surface M in such a way that the closure of each face is simply connected. Then for (k, l) ∈ Z 2 with (k, l) ≠ (0, 0), GC k,l (X) is well-defined and is also embedded in M.
Proof. The oriented tangent plane to M at p ∈ V(X) defines the orientation at p, and GC k,l (X) is defined. The notion of faces is also well-defined. Since each face of X is simply connected, we can take a dual graph D X of X in M, all of whose faces are simply connected triangles (resp. rectangles) for the 3-valent case (resp. 4-valent case). The dividing step (ii) and the gluing step (iii) in Definition 2.1 are well done in M via respective appropriate local charts.
A Goldberg-Coxeter construction GC k,l (X) for a 3-valent (resp. 4-valent) graph X inserts some hexagons (resp. squares), according to its parameters k and l, between each pair of original faces of X. The most famous example is the fullerene C 60 , also called a buckminsterfullerene or a buckyball, which is nothing but GC 1,1 (Dodecahedron). This construction owes its name to the pioneering work [7] by M. Goldberg, where a so-called Goldberg polyhedron (a convex polyhedron whose 1-skeleton is a 3-valent graph consisting of hexagons and pentagons with rotational icosahedral symmetry) is studied and is proved to be of the form GC k,l (Dodecahedron) for some k and l. Goldberg-Coxeter constructions for 3- or 4-valent plane graphs occur in many other contexts; see [4] and the references therein. Several examples of Goldberg-Coxeter constructions for nonplanar 3-valent (infinite or finite quotient) graphs, such as for carbon nanotubes and Mackay-like crystals, are provided in [14].
The following proposition summarizes a few fundamental properties of Goldberg-Coxeter constructions.
Proposition 2.3 (Deza-Dutour [4,5]). Let X = (V(X), E(X)) be a 3-valent (resp. 4-valent) graph equipped with an orientation at each vertex. Then the following hold.
(1) If X is embedded in an oriented surface in such a way that the closure of each face is simply connected, and the orientation at each vertex coincides with the one of the surface, then GC z (GC z′ (X)) = GC zz′ (X) for any z, z′ ∈ Z[ω] (resp. z, z′ ∈ Z[i]). (2) For any (k, l) ∈ Z 2 with (k, l) ≠ (0, 0), we have graph isomorphisms identifying GC k,l (X) with a Goldberg-Coxeter construction whose parameters satisfy k ≥ l ≥ 0. In consideration of Proposition 2.3 (2), in the rest of this paper, we assume that k is a positive integer and l is a nonnegative integer satisfying k ≥ l ≥ 0 and k ≠ 0.
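The composition rule GC z (GC z′ (X)) = GC zz′ (X) can be illustrated numerically: writing z = k + lω (with ω 2 = ω − 1) in the 3-valent case, or z = k + li in the 4-valent case, the cluster sizes k 2 + kl + l 2 and k 2 + l 2 are the norms of z and therefore multiply under composition. The small Python sketch below only illustrates this arithmetic; the function names are ours and not taken from the paper.

def compose_3valent(k1, l1, k2, l2):
    # (k1 + l1*w)(k2 + l2*w) in Z[w], using w^2 = w - 1
    return (k1 * k2 - l1 * l2, k1 * l2 + l1 * k2 + l1 * l2)

def compose_4valent(k1, l1, k2, l2):
    # (k1 + l1*i)(k2 + l2*i) in Z[i]
    return (k1 * k2 - l1 * l2, k1 * l2 + l1 * k2)

def norm3(k, l): return k * k + k * l + l * l   # cluster size, 3-valent case
def norm4(k, l): return k * k + l * l           # cluster size, 4-valent case

k, l = compose_3valent(2, 1, 1, 1)
assert norm3(k, l) == norm3(2, 1) * norm3(1, 1)   # 21 == 7 * 3
k, l = compose_4valent(2, 1, 1, 1)
assert norm4(k, l) == norm4(2, 1) * norm4(1, 1)   # 10 == 5 * 2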
Clusters for Goldberg-Coxeter constructions.
A cluster is the central notion in this paper. Its definitions shall be given below in two different cases: where X is 3-valent and where X is 4-valent.
For each p ∈ V(X), we construct an appropriate subgraph X(p) = (V(p), E(p)) of △(p), called the (k, l)-cluster, so as to have k 2 + kl + l 2 vertices and the 2π/3-rotational symmetry of △(p). For this, we just have to define V(p) by the set of vertices x of △(p) (considered as the graph on △ ⊆ Z[ω]) satisfying one of the following conditions: (i) x ∈ △(p) corresponds to a triangle in Z[ω] whose barycenter lies in the interior of △ = (0, z, ωz), where z = k + lω; (ii) x ∈ △(p) corresponds to an upward triangle in Z[ω] whose barycenter lies on an edge of △. Here we mean by an upward triangle ▲(a, b) the triangle in Z[ω] with vertices a + bω, a + 1 + bω and a + (b + 1)ω for a, b ∈ Z (see Figure 1). We also denote by ▽(a, b), called a downward triangle, the triangle with vertices a + bω, a + (b + 1)ω and a − 1 + (b + 1)ω. In the case that l = 0, X(p) is nothing but △(p) itself, has k 2 vertices and has the dihedral symmetry D 3 (of order 6) (see Figure 1).
In the case that k = l > 0, it is easily seen that there are 3(k 2 − k) vertices satisfying (i) and 3k vertices satisfying (ii). The obtained subgraph X(p) has 3k 2 vertices and has the 2π/3-rotational symmetry because upward triangles are mapped to upward triangles by the rotation (see Figure 1).
The following lemma makes clear, among the remaining cases, the cases where there is a barycenter lying on an edge of △.
Moreover, in the case above, each edge of △ passes through exactly 2m = 2 gcd(k, l) barycenters. Among these 2m vertices, exactly m vertices, corresponding to upward triangles, have just two adjacent triangles with barycenters lying in △. The combined 3m vertices on the three edges of △ are located in symmetric positions under the rotation of △ by 2π/3.
Lemma 2.4 shows that the subgraph X(p) has (k − l) 2 + 3kl = k 2 + kl + l 2 vertices and also has the 2π/3-rotational symmetry in the remaining case that k > l > 0.
Here we can prove the following proposition, which guarantees that the bipartiteness is kept after a Goldberg-Coxeter construction.
Proposition 2.5. Let X be a 3-valent bipartite graph equipped with an orientation at each vertex. Then for any (k, l) ∈ Z 2 with (k, l) ≠ (0, 0), GC k,l (X) is also bipartite. In particular, the spectrum of GC k,l (X) is symmetric with respect to 3.
Proof. Let a bipartition of X be given and either black or white be assigned to each vertex p ∈ V(X).
Each vertex x of each (k, l)-cluster X(p) can be colored according to the following rule: if p is white, then • paint x black, provided the triangle in Z[ω] corresponding to x is upward; • paint x white, provided the triangle in Z[ω] corresponding to x is downward; and if p is black, then exchange black and white above. A white vertex is adjacent only to black vertices in X, and two adjacent clusters X(p) and X(q) are positioned, in Z[ω], at a π-rotation around the midpoint of an edge of △, which switches upward and downward triangles. So, the rule above gives a bipartition of GC k,l (X).
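As a sanity check of the bipartiteness statement, one can verify a concrete example with a generic two-colouring test; the following Python sketch (with the 3-valent cube graph as an assumed test case) is only an illustration and is not part of the paper's argument, which uses the explicit upward/downward colouring above.

from collections import deque

def is_bipartite(adjacency):
    # Greedy 2-colouring by breadth-first search; a conflict means an odd cycle.
    colour = {}
    for start in adjacency:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in adjacency[p]:
                if q not in colour:
                    colour[q] = 1 - colour[p]
                    queue.append(q)
                elif colour[q] == colour[p]:
                    return False
    return True

cube = {i: [i ^ 1, i ^ 2, i ^ 4] for i in range(8)}   # 3-valent and bipartite
print(is_bipartite(cube))   # True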
Similarly as in the 3-valent case, we construct for each p ∈ V(X) an appropriate subgraph X(p) = (V(p), E(p)) of □(p), still called the (k, l)-cluster, so as to have k 2 + l 2 vertices. To this end, we need to clarify the cases where a barycenter of a small square in Z[i] lies on an edge of □. Moreover, if this is the case, each edge of □ passes through exactly m barycenters.
Unlike the 3-valent case, we cannot choose a cluster X(p) with k 2 + l 2 vertices having the π/2-rotational symmetry in the case where k 1 ≢ 0 (mod 2), k 1 ≡ l 1 (mod 2) and m ≢ 0 (mod 2), because no vertex of □(p) is positioned at the barycenter of □ and k 2 + l 2 = m 2 ((k 1 − l 1 ) 2 + 2k 1 l 1 ) is not divisible by 4. Even in such cases, X(p) only has to have the same number of outward edges in each of the four directions toward every adjacent cluster.
Lemma 2.7. Let X be a 4-valent graph equipped with an orientation at each vertex. Then there exists an Euler circuit ε of X which turns either left or right at every vertex of X.
Proof. As is well-known, any 4-valent graph X has an Euler circuit, which is by definition a closed path in X that visits every edge exactly once. Let us take an Euler circuit ε of X and suppose that ε goes straight ahead at a vertex p ∈ V(X). The circuit ε comes back to p again from one of the other directions after it goes straight ahead at p (because X is 4-valent). By traversing this intermediate interval in the opposite direction, we obtain a circuit that goes straight ahead one time fewer than ε. This proves Lemma 2.7.
The Euler circuit ε obtained in Lemma 2.7 assigns a direction to each edge of X such that the direction alternates between inward and outward at each vertex of X.
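The starting point of Lemma 2.7, the existence of an Euler circuit in a connected graph all of whose degrees are even, can be computed with Hierholzer's algorithm; the Python sketch below implements only this classical step (the left/right-turn refinement of the lemma additionally requires the cyclic edge ordering at each vertex and is not reproduced here). The octahedron test graph is an assumption of the example.

from collections import defaultdict

def euler_circuit(edges):
    # edges: list of undirected edges (u, v); the graph is assumed connected,
    # with every vertex of even degree (e.g. 4-valent).
    remaining = defaultdict(list)
    for idx, (u, v) in enumerate(edges):
        remaining[u].append((v, idx))
        remaining[v].append((u, idx))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        p = stack[-1]
        while remaining[p] and used[remaining[p][-1][1]]:
            remaining[p].pop()                 # skip edges already traversed
        if remaining[p]:
            q, idx = remaining[p].pop()
            used[idx] = True
            stack.append(q)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# Example: the 4-valent octahedron graph K_{2,2,2}
octahedron = [(0,2),(0,3),(0,4),(0,5),(1,2),(1,3),(1,4),(1,5),(2,4),(2,5),(3,4),(3,5)]
print(euler_circuit(octahedron))   # a closed walk using every edge exactly once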
Now we can define V(p) as the set of vertices x of □(p) satisfying one of the following conditions: (i) x corresponds to a square in Z[i] whose barycenter lies in the interior of □; (ii) x corresponds to a barycenter lying on one of the two opposite edges of □ which correspond to the outward edges of X with respect to the Euler circuit ε in Lemma 2.7.
(C) V(X) can be colored by two colors, say black and white, with the following properties: (C-i) a black vertex is adjacent to three white vertices; (C-ii) a white vertex is adjacent to exactly one black vertex, so the other two adjacent vertices are white; (C-iii) for any pair of black vertices x, y ∈ V(X) which are three vertices away from each other, there is a path from x to y either turning left twice or turning right twice.
The coherent edge numbering (CN) implies the condition (N); indeed, let p ∈ V(X) and let e 1 , e 2 and e 3 be three edges of X emanating from p. We assign 0 to p regarded as a vertex of GC 2,0 (X), and, for i = 1, 2 and 3, assign i to the vertex of GC 2,0 (X) positioned at the "opposite-side" to e i . The resulting numbering of vertices of GC 2,0 (X) satisfies (N-i) and (N-ii) (see Figure 3). Moreover, as is easily proved, (N) implies the condition (F). So the following proposition shows that (F), (CN) and (N) are mutually equivalent. where H 1 (X, Z) is the 1-dimensional homology group of X. Now any γ ∈ H 1 (X, Z) can be written as γ = f : face of X a f ∂ f , where a f ∈ Z and ∂ f is the cycle consisting of edges around f . Our assumption implies that ϕ(∂ f ) = 3 for any face f of X. Hence we conclude that ϕ ≡ 3, which implies that ϕ ≡ 3 on CP(X, e 0 ).
A relation between (F) and (C) is stated as follows.
Proposition 2.9. Let X be a 3-valent plane graph satisfying (F). Then X has a vertex coherent coloring satisfying (C-i), (C-ii) and (C-iii).
Proof. let p 0 ∈ V(X) be an arbitrary fixed vertex and color it black. Every vertex which is accessible by either turning left twice or turning right twice from a black vertex is, one after another, colored in black until no more vertices can be colored in black. The remaining vertices are colored in white. Now we have to check that (C-i) and (C-ii) are satisfied (while (C-iii) is necessarily satisfied). It is easily seen that a white vertex is adjacent to at least one black vertex; otherwise, all vertices of X must be white. It is also easily checked that if a white vertex is adjacent to two or more black vertices, then two other black vertices are necessarily adjacent somewhere else. So, it suffices to show that any pair of black vertices cannot be adjacent. Suppose that there is a pair of adjacent black vertices, say p, q ∈ V(X). From our way of the coloring, there is a path γ from p to q which is a sequence of either twice turning left or twice turning right between black vertices. Then γ ∪ (q, p) is a closed path, which surrounds a finitely many faces, say f 1 , f 2 , . . . , f n , after removing back-trackings. Now if n = 1, then γ consists of a circuit on the boundary ∂ f 1 of a face f 1 and of some back-trackings with black base points on ∂ f 1 , which is a contradiction because the total of τ defined by (2.3) is 0 (mod 3) after the crossing just prior to a lap of γ ∪ (q, p). So assume that n ≥ 2. There are just two possibilities of paths along the boundary of n i=1 f i connecting a pair of black vertices with distance 3, as indicated in Figure 4. In either case, we can replace γ ∪ (q, p) by a closed path which does not surround a face f i (by ignoring back-trackings), and is still a sequence of either twice turning left or twice turning right between black vertices. Therefore the conclusion for the case where n ≥ 2 can be deduced from the discussion given for the case n = 1.
Examples 2.10.
(1) The tetrahedron and any of its Goldberg-Coxeter constructions satisfy all the conditions above. (2) GC 2,0 (X) for any 3-valent plane graph X always satisfies (C-i), (C-ii) and (C-iii); indeed, we just have to color only the "center" of each (2, 0)-cluster black, and the others white. (3) GC 1,1 (X) for any 3-valent plane graph X also always satisfies (C-i), (C-ii) and (C-iii); indeed, we just have to color in accordance with the rule shown in Figure 6.
3.1. The case where X is 3-valent. Let p ∈ V(X), q ∈ N X (p), and set up the notation of (3.1). Note that, for any x ∈ V(p) and q ∈ N X (p), there are at most two edges emanating from x to V(q).
Proof of Theorem 1.1. Since there is nothing to discuss when (k, l) = (1, 0), we only consider the other cases. Let c = 1/√|V(p)| = 1/√(k 2 + kl + l 2 ) and define a linear map Q : R V(X) → R V(X′) by (Q f )(x) = c f (p) for f ∈ R V(X) , p ∈ V(X) and x ∈ V(p). The transpose t Q : R V(X′) → R V(X) of Q is then written as ( t Q g)(p) = c ∑ x∈V(p) g(x) for g ∈ R V(X′) and p ∈ V(X). It then follows that, for any f ∈ R V(X) and for any p ∈ V(X), ( t Q ∆ X′ Q f )(p) splits into three terms; the second term equals −3c 2 |V 0 (p)| f (p), and the third term is computed by using the symmetry of X(p). Therefore we obtain a comparison between t Q ∆ X′ Q and ∆ X in which µ(k, l) appears, where µ(k, l) is the number of edges in X′ connecting two clusters and depends only on k and l.
(1.1) of Theorem 1.1 now immediately follows from the following.
Theorem 3.1 (Interlacing property, see for example [2]). Let Q be a real n × m matrix satisfying t QQ = I m and let A be a real symmetric n × n matrix. Then the eigenvalues of t QAQ interlace those of A: if the eigenvalues of A and t QAQ are arranged in increasing order, the i-th eigenvalue of t QAQ lies between the i-th and the (i + n − m)-th eigenvalues of A.
The equality (1.2) for l = 0 or k = l > 0 is easily proved. Let us estimate the number of edges crossing the edge E = 0z when k > l > 0. Notice first that there is at most one crossing edge emanating from an upward triangle ▲(a, b), and that there are at most two crossing edges emanating from a downward triangle ▽(a, b). For each c ∈ Z, the "zigzag path" obtained by joining the barycenters of the upward and downward triangles at (a, b) and (a + 1, b − 1) for all a, b ∈ Z with a + b = c crosses the edge E = 0z exactly once provided 0 ≤ c ≤ k + l − 1 and does not cross E otherwise. Also, the line passing through a ∈ Z with slant 1 + ω crosses E exactly once provided 0 ≤ a ≤ k − l and does not cross E otherwise. Therefore the number of edges crossing E is at most k + l + (k − l − 2) = 2k − 2. (See Figure 7 for an example.) Figure 7. (k, l) = (9, 3): 15 edges cross the edge E = 0z (z = 9 + 3ω).
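The interlacing statement of Theorem 3.1 can be checked numerically; the following Python sketch verifies the standard Cauchy interlacing inequalities for a random symmetric matrix and a random isometry (the matrix sizes are arbitrary choices made for this example, and the inequality is the textbook formulation rather than a quotation from the paper).

import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2        # real symmetric n x n
Q, _ = np.linalg.qr(rng.standard_normal((n, m)))          # Q^T Q = I_m
lam = np.sort(np.linalg.eigvalsh(A))                      # eigenvalues of A, increasing
mu = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))             # eigenvalues of Q^T A Q
# Cauchy interlacing: lam[i] <= mu[i] <= lam[i + n - m] for i = 0, ..., m-1
assert all(lam[i] <= mu[i] + 1e-10 and mu[i] <= lam[i + n - m] + 1e-10 for i in range(m))
print("interlacing verified")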
The assertion in Theorem 1.2 for a bipartite graph X is an immediate consequence of Theorem 1.1 and Proposition 2.5. The former assertion in Theorem 1.2 follows from the following. The following remark shall be used repeatedly in the sequel: by assigning the same function u to the other clusters, we obtain a global function u : GC k,0 (X) → R, which is an eigenfunction of ∆ GC k,0 (X) with eigenvalue λ; indeed, (i) ∆ X(p) u = λu is equivalent to a Neumann problem of the form (3.3). Theorem 3.2 is an immediate consequence of the following Lemmata 3.4 and 3.5.
Proof. Let us replace c in (3.2) by u : GC k,0 (X) → R which is obtained from a D 3 -invariant eigenfunction on the (k, 0)-cluster. We may assume that x∈V(p) u(x) 2 = 1/|V(X)|, so that t QQ = id R V (X) . After a straightforward computation using (i) and (ii) in Definition 3.3 for u, we can obtain the following equality: for any f ∈ R V(X) and any p ∈ V(X), where q ∈ N X (p) is an adjacent vertex to p. Again from Theorem 3.1, the desired inequality is proved. Proof. Let us first construct all the eigenfunctions on a hexagonal lattice with toroidal boundary condition. If we set m := (1 + ω)/3, where ω = e πi/3 , then the discrete set is naturally regarded as a hexagonal lattice. For a fixed k ∈ N, let us consider the equations for a function v on the parallelogram where a and b in (3.7) are considered modulo k, such as for the former equation of (3.7) with a = b = 0. So if v solves (3.7), then it gives an eigenfunction with eigenvalue λ on the finite 3-valent graph T (k) with 2k 2 vertices obtained by adding edges between a and m + a + (k − 1)ω, and between bω and m + k − 1 + bω for each a, b = 0, 1, . . . , k − 1.
We now claim that (the real part of) the average u := σ∈D 6 σv + 1,0 under an action of D 6 on T (k) gives a function on the (k, 0)-cluster {a+bω ∈ P(k) | a+b ≤ k−1}∪{m+a+bω ∈ P(k) | a+b ≤ k−2} satisfying (i) and (ii) in Definition 3.3 with λ = λ + 1,0 . Here D 6 is a dihedral group of order 12 generated by the three automorphisms on T (k) induced from • the rotation by 2π/3: • the reflection along a diagonal line of the parallelogram: • and the reflection along the other one: where a and b are again considered modulo k and these maps are considered as P(k) → P(k). Since the Laplacian on a graph is equivariant under the action of an automorphism and the Neumann boundary condition as in (3.3) is satisfied by the definition of u, u satisfies ∆ X(p) u = λ + 1,0 u. Moreover it is easily checked by computing the total sum of v + 1,0 along the "boundary" of P(k) and the "diagonal line between (m+) k − 1 and (m+) (k − 1)ω" of P(k) that u is not identically zero (except in the case k = 1). This proves that λ + 1,0 (k) is a D 3 -invariant eigenvalue for the (k, 0)-cluster (k ≥ 2). Since λ ± s,t = 6 if and only if (s, t) = (0, 0) and the sign is positive, and since a D 3 -invariant eigenvalue is necessarily an eigenvalue of T (k), λ + 1,0 above is the largest D 3 -invariant eigenvalue for the (k, 0)-cluster.
3.2. The case where X is 4-valent. The same notation as (3.1) is used also in the 4-valent case. The notion of D 4 -invariant eigenvalue is also defined exactly in the same way as in the 3-valent case. Since the proof of Theorem 1.1 for the 4-valent case is almost similar as that for the 3-valent case, we omit it. Moreover, (3.4) is valid also for a 4-valent graph; indeed, the same equality as in (3.5) holds, whose proof is also omitted. A corresponding result to Lemma 3.5 is stated as follows (its proof is omitted again).
Lemma 3.7. Let k be an integer with k ≥ 2. If k is even (resp. odd), then λ = 4 + 4 cos(2π/k) (resp. λ = 4 + 4 cos(π/k)) is the second largest (resp. the largest) D 4 -invariant eigenvalue for the (k, 0)-cluster, and it converges to 8 as k tends to infinity.
On the eigenvalues 2 and 4 for Goldberg-Coxeter constructions
This section provides proofs of the theorems on multiplicities of eigenvalues 2, 4 stated in Section 1. In the first two subsections, we shall prove Theorems 1.3 and 1.4. As is seen below, a reason for large multiplicities of eigenvalues 2 or 4 of GC 2k,0 (X) is that the (2k, 0)-clusters also have large multiplicities of eigenvalues 2 or 4. On the other hand, it is considered that the structure of an initial graph X would affect the eigenvalue distribution of its Goldberg-Coxeter constructions. A few remarkable examples shall be provided in Section 4.3, where a proof of Theorem 1.5 is also included.
Proof. The function given in Figure 8 (a), where α ∈ R is arbitrary, is a D 3 -invariant eigenfunction with eigenvalue 4 for the (2, 0)-cluster. Also, the function given in Figure 8 (b) is a D 3 -invariant eigenfunction with eigenvalue 2 for the (4, 0)-cluster.
Proof of Lemma 4.2. We introduce the coordinate in (3.6) on the vertex set V(p) of the (2k, 0)-cluster X(p). For each positive integer n, we set V n as the vertex set of an (n, 0)-cluster. The subgraph of the hexagonal lattice induced by V n is denoted by X n , which is identified with an (n, 0)-cluster. Also, for each a + bω, m + c + dω ∈ V n , we label the corresponding equation of ∆ X(p) u = 4u as E(a + bω), E(m + c + dω), where u is supposed to take the same value at a vertex outside V n as at the unique adjacent vertex of V n ; for instance, E(0) : u(0) + u(m) + u(0) + u(0) = 0, E(1) : u(1) + u(m + 1) + u(m) + u(1) = 0. Let us first discuss the solvability of the following families of equations and the T 1 -invariance of the solutions, where T 1 : C → C is defined by T 1 (z) := ωz.
(1-a) {E(a + bω) | a + b = l, m ≤ a ≤ l − m}; assume that u is defined on and that u is invariant under T 1 on this set.
(1-b) {E(m + a + bω) | a + b = l, m ≤ a ≤ l − m}; assume that u is defined on and that u is invariant under T 1 on this set.
It is easily proved that (1-a) for any l and (1-b) for l odd are uniquely solvable and that each u of the solutions is invariant under T 1 on the set where u is newly defined. In the case where l is even, it is also proved from the T 1 -invariance of u that (1-b) is uniquely solvable if and only if u(l) = u(l + 1), and that the solution is invariant under T 1 . Let us define T 2 : C → C by T 2 (z) = T k 2 (z) := (1 − ω)z + (2k − 1)ω and T 3 : C → C by T 3 (z) = T k 3 (z) := −z + 2k − 1. We denote by (2-a) and (2-b) the families of equations transferred via T 3 from (1-a) and (1-b) respectively, and by (3-a) and (3-b) via T 2 . Then similar arguments as above (or simply symmetry of V 2k ) show the solvability and the T 2 -invariance (resp. T 3 -invariance) of the solutions of (2-a) and (2-b) (resp. (3-a) and (3-b)) provided u(2k −l−2+(l+1)ω) = u(2k −l−1+lω) (resp. u((2k − l − 1)ω) = u((2k − l − 2)ω)). We shall finish the proof by using the solvability and the symmetry of the solutions of (1-a)- (3-b) in an appropriate order.
Let us start by (i-b) with l = 2s − 1, m = 0 and with (4.2) for each i = 1, 2, 3. So far u is defined on with j = 0, on which u is invariant under the D 3 -action on V 4s+2 , and on which E is satisfied except on "the inside boundary": We assign α to the six large black vertices.
with j = 0. Now we assume that on (4.3) with j replaced by j − 1 ( j ≥ 1), u is defined and is invariant under D 3 -action on V 4s+2 and E is satisfied except on the inside boundary (4.4) with j replaced by j − 1. Then, by symmetry we can solve (i-a) with l = 2s − 1 + j, m = 2( j − 1) for i = 1, 2, 3, whose solution has the desired symmetry. It then follows the solvability and the symmetry of the solution of (i-b) with l = 2s − 1 + j, m = 2( j − 1) for i = 1, 2, 3. What we have to see is the solvability of (4.5) which is valid because of u(2a) = u(2a+1) and u(2aω) = u((2a+1)ω) for a = 0, 1, . . . , 2s. Now we conclude that u is defined on (4.3), where u is invariant under D 3 -action on V 4s+2 and E is satisfied except on the inside boundary (4.4).
A very similar proof works for eigenvalue 2 and the following is obtained. Lemma 4.3. A (2k, 0)-cluster has D 3 -invariant eigenvalue 2, whose multiplicity is exactly k/2 .
(1) The function given in Figure 11 (a), where α ∈ R is arbitrary, is a D 4 -invariant eigenfunction with eigenvalue 4.
Figure 11. D 4 -invariant eigenfunctions with eigenvalue 4: (a) the (4, 0)-cluster; (b) the (10, 0)-cluster.
(2) Since the proof is again almost the same as that of Theorem 1.3, let us explain part of the differences. We introduce the same coordinate Z[i] as before on the vertex set V(p) of the (2k, 0)-cluster X(p). For each positive integer n, we set V n accordingly; the subgraph induced by V n is denoted by X n . Also, for each a + bi ∈ V n , we label the corresponding equation of ∆ X(p) u = 4u as follows: E(a + bi) : u(a + 1 + bi) + u(a + (b + 1)i) + u(a − 1 + bi) + u(a + (b − 1)i) = 0, where u is supposed to take the same value at a vertex outside V n as at the unique adjacent vertex of V n . As in the proof of Theorem 1.3, we can construct a D 4 -invariant eigenfunction u on X 2k satisfying u(2a) = −u(2a + 1) by an inductive argument. Let us omit the remaining proof. (See Figure 11 (b) for an example.) | 8,822.8 | 2018-07-28T00:00:00.000 | [
"Mathematics"
] |
Two-dimensional distributed-phase-reference protocol for quantum key distribution
Quantum key distribution (QKD) and quantum communication enable the secure exchange of information between remote parties. Currently, the distributed-phase-reference (DPR) protocols, which are based on weak coherent pulses, are among the most practical solutions for long-range QKD. During the last 10 years, long-distance fiber-based DPR systems have been successfully demonstrated, although fundamental obstacles such as intrinsic channel losses limit their performance. Here, we introduce the first two-dimensional DPR-QKD protocol in which information is encoded in the time and phase of weak coherent pulses. The ability of extracting two bits of information per detection event, enables a higher secret key rate in specific realistic network scenarios. Moreover, despite the use of more dimensions, the proposed protocol remains simple, practical, and fully integrable.
The phase encoding is performed using a phase modulator (PM), where a random phase of either 0 or π is applied between sub-blocks. By combining the effect of the IM and the PM, Alice prepares states from the quaternary alphabet: |0〉 = |α〉 |vac〉 |α〉 |vac〉 , |1〉 = |α〉 |vac〉 |− α〉 |vac〉 , |2〉 = |vac〉 |α〉 |vac〉 |α〉 , |3〉 = |vac〉 |α〉 |vac〉 |− α〉 (1). Bob may distinguish unambiguously between these states by employing an unbalanced interferometer which interferes pulses in adjacent sub-blocks separated by T = 2/ν, where ν is the laser repetition rate. Depending on the time of arrival (t e or t l in Fig. 1) and on which detector fired (D 1 or D 2 ), Bob can decide which of the four states was prepared. We would like to point out that, due to the interferometer delay used, no interference occurs in the case of a transition sequence, such as |± α〉 |vac〉 |vac〉 |± α〉 .
It is important to note that, analogous to the differential phase shift (DPS) protocol, each sub-block may participate in defining up to two states 14 . For instance, the sequence: |α〉 |vac〉 |− α〉 |vac〉 |α〉 |vac〉 |vac〉 |α〉 |vac〉 |− α〉 encodes the states: |1〉 |1〉 − |3〉 . Here, the '− ' indicates a change of the temporal sequence over the sub-block separation, in which case Bob is not able to interfere the non-empty pulses in his interferometer (for a detailed example, see Supplementary information).
To minimize the number of transition sequences, Alice and Bob may benefit from repeating the temporal encoding over long pulse intervals (i.e. only preparing |0〉 and |1〉 , or |2〉 and |3〉 ). However, doing so permits a potential eavesdropper, Eve, to gain partial information on a given state by measuring the time-of-arrival of pulses in adjacent sub-blocks. This effectively means that the time-of-arrival information is more vulnerable to eavesdropping. To counteract this potential attack, Alice introduces the concept of blocks. Each block consists of N pulses (counting both empty and non-empty), within which the temporal sequence is repeated independently from the previous block (the sequences |0〉 |1〉 |1〉 and |3〉 |2〉 are examples of blocks with N = 8 and N = 6). The value of N is for each block chosen randomly from a uniform distribution: N ∈ {4, 6, … , N max }. In contrast, if the value of N was fixed at e.g. N = 6, then Eve would know exactly for which sequences of pulses the temporal encoding was repeated. The modification of random block lengths, means that both Bob and Eve are essentially unaware of the positions of the block separations. Whereas this is of no importance to Bob (see section 'Protocol definition'), it is fundamental to Eve.
The security of DPTS relies on the same principle as other DPR protocols: the coherence between non-empty pulses 20,21 . In fact, the DPS aspect of the DPTS protocol makes it very robust against attacks such as the intercept-resend attack and the photon-number splitting attack 21,22 . Eve can not perform a measurement on any finite number of states without at some point breaking coherence between successive pulses. This is specifically true for the DPTS protocol as Eve is not able to predict the positions of the transition sequences. However, since coherence is distributed across sub-block separations whereas the temporal information lies within sub-blocks, a sophisticated Eve can address each sub-block separately trying to just learn the time-of-arrival information (i.e. is a state |0〉 , |1〉 or is it |2〉 , |3〉 ). Doing so, she only breaks coherence within sub-blocks, and thus Bob, who only checks coherence across sub-blocks, is not able to reveal her presence. To counter this attack, Alice introduces decoy sequences with probability p 1 decoy , in which blocks consist of N non-empty pulses 20 . Interestingly, this decoy is just a DPS sequence in which the phase encoding is carried between every second pulse (as measured by Bob). Consequently, if Eve probes one or more sub-blocks containing two non-empty pulses, she inevitably disturbs the phase relation between these pulses 11 . As a result, there are cases where Eve introduces phase errors into the communication. Protocol definition. We now describe in detail how Alice and Bob establish a common key using the DPTS protocol: • Alice prepares states for transmission in the quantum channel using her phase-and intensity modulators. We assume that Alice chooses equally and randomly between the four different states {0, 1, 2, 3}. The temporal sequence is repeated within each block of random length, N ∈ {4, 6, … , N max }, whereas the phase difference between each sub-block is randomly chosen to either 0 or π. • Once Bob has received a photon in one of the two detectors, he reveals over a public classical channel the subtime (the number of the sub-block) instances of his recorded detection events. • Alice reports back by telling which of the events corresponded to an overlap between adjacent blocks with opposite temporal sequence (a block separation was present in that instance). Bob must discard these events. • For each of the remaining detection events, Alice and Bob establish two bits of information for their key: Alice easily figures out the detection time from her sent temporal sequence, and infers from her phase encoding which detector clicked at Bob's side. • After estimating the quantum bit error rate (QBER), Alice and Bob perform standard error reconciliation and privacy amplification [23][24][25] . At the end of the process Alice and Bob share a secure identical key.
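For concreteness, the following Python sketch generates a toy version of Alice's pulse train following the block structure described above; the representation of pulses as signed amplitudes, the per-sub-block phase assignment and the specific parameter values are assumptions of this illustration, not part of the protocol specification.

import random

def dpts_sequence(n_blocks, n_max=12, p_decoy=0.02, alpha=1.0):
    pulses = []
    for _ in range(n_blocks):
        N = random.choice(range(4, n_max + 1, 2))   # block length N in {4, 6, ..., N_max}
        decoy = random.random() < p_decoy           # decoy block: every pulse non-empty
        time_bit = random.randint(0, 1)             # temporal bit, repeated within the block
        for _ in range(N // 2):                     # one sub-block = two pulse slots
            amp = alpha * random.choice([1, -1])    # sub-block phase 0 or pi encoded as a sign
            if decoy:
                pulses += [amp, amp]
            else:
                slot = [0.0, 0.0]
                slot[time_bit] = amp                # data block: one non-empty pulse per sub-block
                pulses += slot
    return pulses

print(dpts_sequence(3)[:8])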
Secret key rate. To further describe the proposed protocol, let us consider the maximum extractable secret key rate R sk 11 . For the DPTS protocol this quantity reads as in equation (2), where μ is the mean photon number of non-empty pulses, t represents the quantum channel transmission coefficient, η d is the (common) detector efficiency, and p d is the dark count probability. The prefactor (〈N〉 − 1)/〈N〉, where 〈N〉 is the average block length, takes into account the fraction of Bob's detection events that is assigned to the key string. The unused fraction 1/〈N〉 is due to detections associated with adjacent sub-blocks of different temporal sequences. In these cases, the clicks are randomly distributed between the two detectors, and so the instances are discarded.
The mutual information between Alice and Bob is expressed in terms of the Shannon entropy as I(A:B) = H(A) − H(A|B) 26 . Alice has a total of four different states to choose from, and by assuming that she prepares each state with equal probability, one finds H(A) = 1. Note that we, for convenience, measure information using a base-4 logarithm rather than the common base 2 [in units of bits one acquires H(A) = 2]. Furthermore, the conditional entropy H(A|B) is expressed in terms of the error probabilities given in equation (4), where V represents the visibility of the interferometer used by Bob and p D1 (p D2 ) represents the probability of detection in detector D 1 (D 2 ). Note that, in the definition of the error probabilities, the visibility appears in only two of the four terms, since an interferometer error does not alter the time of arrival. Thus, since the time-of-arrival information remains correct, the DPTS protocol suffers less from interferometer imperfections in comparison with the DPS protocol, which solely relies on relative phase measurements. On the other hand, the higher dimensionality of the DPTS protocol renders it more vulnerable to detector dark counts: each dark-count occurrence results in two random bits rather than one. This effectively makes the DPTS protocol less useful at longer communication distances, where the dark count rate becomes comparable with the signal rate.
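The base-4 information quantities can be illustrated with a short Python sketch; since the explicit error probabilities of equation (4) are not reproduced here, the conditional distribution passed to the function below is a placeholder supplied by the user rather than the paper's error model.

import numpy as np

def entropy4(p):
    # Shannon entropy with a base-4 logarithm, ignoring zero-probability outcomes.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p) / np.log(4)).sum())

def mutual_information(p_b, p_a_given_b):
    # p_b: probabilities of Bob's four outcomes; p_a_given_b[b]: distribution of Alice's state given b.
    p_a = np.einsum('b,ba->a', p_b, p_a_given_b)
    h_a = entropy4(p_a)
    h_a_given_b = float(np.dot(p_b, [entropy4(row) for row in p_a_given_b]))
    return h_a - h_a_given_b

# Noiseless example: Bob identifies the state perfectly, so I(A:B) = H(A) = 1
# in base-4 units, i.e. two bits per detection.
print(mutual_information(np.full(4, 0.25), np.eye(4)))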
In order to evaluate the achievable secret key rate for Alice and Bob, we next introduce an upper bound on the information that a potential eavesdropper might obtain by performing the most basic attack; the beam-splitting attack. In the family of collective attacks, Eve is assumed to be able to interact with the same strategy on a predefined number of pulses. She can store the photons and try to extract the largest possible information after Alice and Bob has performed post-processing. A complete analysis would concentrate on I BE since Eve is clueless about detection events resulting from imperfections at Bob's side (see equation (2)). However, as a first attempt to estimate her information, we restrict ourselves to the more simple analysis of I AE .
Security analysis. This section presents an analysis of security based on the collective beam-splitting attack (BSA) and follows the method used in ref. 27 for the DPS and COW protocols. In the BSA, Eve replaces the quantum channel connecting Alice and Bob by a lossless line. Using a beam-splitter to simulate the losses of the quantum channel, Eve acquires a fraction 1 − t of the signal without disturbing the state sent by Alice. Thus, the BSA belongs to the family of zero-error attacks, and is therefore undetectable by Alice and Bob 28 . The states prepared by Alice consist of sequences ⊗ k |α k 〉 with α k ∈ {+ α, vac, − α}, so by performing the BSA, Eve receives states of the form ⊗ k |√(1 − t) α k 〉. At this point we assume that Eve stores the states in her quantum memory for measurement after Bob reveals his detection events. Indeed, for such a collective attack, the maximum information she may extract is given by the Holevo quantity (which must be maximized with respect to the strategies available to Eve, though here we only consider the BSA) 11,29 : χ AE = S(ρ E ) − ∑ j p j S(ρ E|j ). Here, S(ρ) = − Tr {ρ log 4 (ρ)} is the von Neumann entropy, ρ E = ∑ j p j ρ E|j is a density operator with p j being the probability of Alice preparing the four states j ∈ {0, 1, 2, 3}, and ρ E|j being Eve's state conditioned on preparation of state j.
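The Holevo quantity χ AE = S(ρ E ) − ∑ j p j S(ρ E|j ) with the base-4 von Neumann entropy can be evaluated numerically as in the Python sketch below; the example density matrices are illustrative placeholders and do not model Eve's actual beam-split coherent states.

import numpy as np

def von_neumann_entropy4(rho):
    # S(rho) = -Tr[rho log_4 rho], computed from the eigenvalues of rho.
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-(eigvals * np.log(eigvals) / np.log(4)).sum())

def holevo(probs, cond_states):
    rho_bar = sum(p * rho for p, rho in zip(probs, cond_states))
    return von_neumann_entropy4(rho_bar) - sum(
        p * von_neumann_entropy4(rho) for p, rho in zip(probs, cond_states))

# Example: four orthogonal pure states with equal probability give chi = 1 in
# base-4 units, i.e. Eve could at most learn the full two bits in this extreme case.
states = [np.outer(e, e) for e in np.eye(4)]
print(holevo([0.25] * 4, states))   # 1.0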
As mentioned earlier, we consider only the balanced situation where Alice prepares each state with a probability p j = 1/4. In the current protocol each value in the quaternary alphabet is encoded in four consecutive pulses. It follows that Eve's states conditioned on Alice's preparation can be written explicitly and diagonalized, with eigenvalues λ n . The resulting Holevo quantity presents an upper bound on the information Eve can obtain by trying to distinguish between the four different states after Bob announces a detection event.
In the cases where Eve fails to get a conclusive measurement, she may instead try to establish partial information about the state Alice and Bob agreed upon. She can do this by trying to measure the temporal position (i.e. is a state |0〉 , |1〉 or |2〉 , |3〉 ) of the pulse in a sub-block adjacent to the sub-block corresponding to Bob's detection. In general for the considered block lengths, the probability of this measurement to be correct (if conclusive) exceeds 1/2 (for details, see Supplementary information), and thereby effectively provides Eve with information on the state. Since this additional attack by Eve is conditioned on her not getting a conclusive result in the primary measurement, the corrected Holevo quantity becomes where χ AE (1) is derived and given in the Supplementary information. Note however, that Eve is essentially ignorant about the position of block separations. Therefore, making conclusions based on this secondary attack will result in errors for Eve, effectively reducing the gained information.
Numerical results. Combining the results of the previous sections (equations (2)–(8)), we obtain the secret key rate for DPTS, where a factor of two stems from the conversion from a quaternary to a binary alphabet. This expression enables us to plot a first upper bound on the secret key rate under the assumption of collective attacks. Specifically, Fig. 2 shows R sk versus communication distance at the optimized values of the mean photon number μ. To assess the performance of the DPTS protocol, we have included plots for both COW and DPS. The secret key rates for COW and DPS are obtained from the corresponding expressions of ref. 27, where R is defined below equation (2). These equations are derived under the same assumptions as made for the DPTS protocol to allow for a fair comparison. As a result, the COW protocol does not exhibit any visibility dependence (see Fig. 2(b)).
In comparison, the DPTS protocol has a similar performance as the other protocols under the realistic condition of non-ideal visibilities (as example we have used V = 0.9). Noteworthy, the DPTS protocol displays a less critical dependence on the visibility when compared to the DPS protocol.
In a more realistic situation, the comparison of the protocols must take into account the detector dead times. For example, considering the case of commercial InGaAs infrared single-photon detectors (the most used in fiber links and the most promising thanks to the non-cryogenic requirement), they generally exhibit a dead time in excess of 1 μs 30,31 . Thus, in any scenario where the detector dead time significantly influences the key generation rate, the ability to extract two bits of information per detection event grants the DPTS protocol an advantage. To illustrate this effect, Fig. 3 shows an example of the secret key rate in bits s −1 , after inclusion of the dead-time dependency.
Discussion
The main figure of merit in a QKD system is the achievable secret key rate. Therefore, to assess the performance of DPTS, Fig. 2 displays this quantity for DPTS in comparison with the standard COW and DPS protocols. The comparison shows very similar behavior of the three DPR protocols. Considering more specifically the case of DPTS, the final key rate is influenced by the length of the blocks N prepared by Alice. Even though a higher value of N allows an increased sifted key rate, it is necessary to consider a trade-off between the length of blocks and the information leakage to Eve. In the case of long-distance links (in excess of 100 km), the behavior of the three protocols is maintained, but as the DPTS protocol is more severely influenced by dark count events, it is generally limited to shorter distances. On the other hand, as seen from Fig. 2(b), the DPTS protocol is less dependent on the interferometer visibility. This fact permits the proposed protocol to achieve a more stable secret key generation rate in comparison with the DPS protocol.
Figure 3. Secret key rate in a real-case scenario. Different secret key rates achievable in a medium-length link scenario, where the detector dead times play an important role. We use mean photon numbers for the different protocols of μ DPTS = 0.23, μ DPS = 0.19, and μ COW = 0.52, at repetition rate ν = 2 GHz, and an average block length of 〈N〉 = 6. The detectors are specified by dark-count probability p d = 2 × 10 −8 , a dead time of t d = 2 μs, and efficiency η d = 0.1. We assume V = 0.98, and a decoy-sequence probability of p decoy = 0.02 for COW and DPTS.
In implementing a QKD protocol, it is necessary to consider the limitations set by the optical and electronic devices [32][33][34] . An important example is the single-photon detector dead time t d , which sets an upper limit on the key generation rate. This parameter is important in a short-or medium-length link scenario, where the average wait time between detection events is of the same order of magnitude as t d (which is typically on the order of microseconds). In Fig. 3, it is shown that DPTS may achieve a significant increase in the secure key rate at distances where the detector dead time is a limiting factor. This potential arises due to the ability of the DPTS protocol to extract two bits of information per detection event.
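A rough way to see the dead-time advantage is the following Python sketch; the non-paralyzable saturation model r/(1 + r t d ) and the parameter values are assumptions of this back-of-the-envelope estimate, not results taken from the paper.

def saturated_rate(click_rate, dead_time):
    # Non-paralyzable detector model: the measured rate saturates at 1/dead_time.
    return click_rate / (1.0 + click_rate * dead_time)

nu = 2e9                        # pulse repetition rate (Hz)
mu, t, eta = 0.2, 0.1, 0.1      # mean photon number, channel transmission, detector efficiency
raw_clicks = nu * mu * t * eta  # crude expected click rate before dead time
detected = saturated_rate(raw_clicks, dead_time=2e-6)
print(f"clicks/s ~ {detected:.3g}; DPS bits/s ~ {detected:.3g}; DPTS bits/s ~ {2 * detected:.3g}")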
The use of multiple degrees of freedom in transmission of information, intuitively increases the complexity of the scheme in comparison with protocols dealing with each individual degree of freedom. Despite DPTS not being an exception to this rule of thumb, the complexity overhead in comparison to DPS or COW is not crucial. On the other hand, DPTS does exhibit two significant practical advantages. Firstly, the COW protocol requires a monitoring line to check for the presence of an eavesdropper. However, such a monitoring line is unnecessary for DPTS, as an interferometer is directly used in the data line, and hence implements the necessary coherence check. Thus, the decrease in rate related to monitoring of the data line in COW, is not a limitation for DPTS. Secondly, the stability of the interferometer over time, is a considerable challenge in implementations of the DPS protocol in non-stable environments. The performance of the DPTS protocol is inherently more resilient against fluctuating interferometer visibilities, because the temporal bit remains unaffected by such inefficiencies. This entails, that DPTS might be better suited in cases where it is difficult to maintain the interferometer visibility above a certain required operation threshold.
Finally, DPTS can potentially play an important role in QKD networks spanning from metropolitan to intercity distances [35][36][37][38][39] . Interestingly, the required measurement apparatus is identical to the one used in DPS, and in fact, the receiver does not need to know a priori whether the signals arise from a DPS or a DPTS encoding. This compatibility suggests that a versatile network encompassing the use of both the DPS and DPTS protocols is feasible.
In conclusion, we have proposed a novel kind of distributed-phase-reference protocol for quantum key distribution. Utilizing both the time-and phase degrees of freedom, this protocol provides a significant step towards realization of fast, reliable, and practical quantum communication. Future directions include a finite-key analysis and a real-time field implementation. | 4,289.6 | 2016-06-27T00:00:00.000 | [
"Computer Science"
] |
A Two-stage Sieve Approach for Quote Attribution
We present a deterministic sieve-based system for attributing quotations in literary text and a new dataset: QuoteLi3. Quote attribution, determining who said what in a given text, is important for tasks like creating dialogue systems, and in newer areas like computational literary studies, where it creates opportunities to analyze novels at scale rather than only a few at a time. We release QuoteLi3, which contains more than 6,000 annotations linking quotes to speaker mentions and quotes to speaker entities, and introduce a new algorithm for quote attribution. Our two-stage algorithm first links quotes to mentions, then mentions to entities. Using two stages encapsulates difficult sub-problems and improves system performance. The modular design allows us to tune for overall performance or higher precision, which is useful for many real-world use cases. Our system achieves an average F-score of 87.5 across three novels, outperforming previous systems, and can be tuned for precision of 90.4 at a recall of 65.1.
Introduction
Dialogue, representing linguistic and social relationships between characters, is an important component of literature. In this paper, we consider the task of quote attribution for literary text: identifying the speaker for each quote. This task is important for developing realistic character-specific conversational models (Vinyals and Le, 2015; Li et al., 2016), analyzing discourse structure, and literary studies (Muzny et al., 2016). But identifying speakers can be difficult; authors often refer to the speaker only indirectly via anaphora, or even omit mention of the speaker entirely (Table 1).
Table 1: Quotes in literary text from 3 novels.
Prior work has produced important datasets labeling quotes in novels, providing training data for supervised methods. But some of these model the quote-attribution task at the mention-level (Elson and McKeown, 2010;O'Keefe et al., 2012), and others at the entity-level (He et al., 2013), leading to labels that are inconsistent across datasets.
We propose entity-level quote attribution as the end goal but with mention-level quote attribution as an important intermediary step. Our first contribution is the QuoteLi3 dataset, a unified combination of data from Elson and McKeown (2010) and He et al. (2013) with the addition of more than 3,000 new labels from expert annotators. This dataset provides both mention and entity labels for Pride and Prejudice, Emma, and The Steppe.
Next, we describe a new deterministic system that models quote attribution as a two-step process that i) uses textual cues to identify the mention that corresponds to the speaker of a quote, and ii) resolves the mention to an entity. This system improves over previous work by 0.8-2.1 F1 points and its modular design makes it easy to add sieves and incorporate new knowledge.
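To illustrate the two-stage idea, the following Python sketch links a quote to a nearby mention and then resolves that mention to an entity through an alias table; the specific heuristic, the names and the data structures are invented for this example and are far simpler than the sieves actually used in the system.

import re

def mention_sieve(text, quote_span, character_names):
    """Stage 1: pick a character mention in the narration immediately after the quote."""
    start, end = quote_span
    trailing = text[end:end + 80]                 # look only in the trailing context
    for name in character_names:
        if re.search(r'\b' + re.escape(name) + r'\b', trailing):
            return name
    return None

def entity_resolver(mention, alias_table):
    """Stage 2: map the chosen mention to a canonical entity via an alias table."""
    return alias_table.get(mention, mention)

text = '"I am not afraid," said Lizzy, smiling.'
quote_span = (0, text.index('," said') + 2)       # span of the quoted material
aliases = {"Lizzy": "Elizabeth Bennet", "Miss Bennet": "Jane Bennet"}
mention = mention_sieve(text, quote_span, aliases.keys())
print(mention, "->", entity_resolver(mention, aliases))   # Lizzy -> Elizabeth Bennet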
In summary, our contributions are: • A unified dataset with both quote-mention and quote-speaker links labeled by expert annotators. • A new quote attribution strategy that improves on all previous algorithms and allows the incorporation of both rich linguistic features and machine learning components. • A new annotation tool designed with the specifics of this task in mind. We freely release the data, system, and annotation tool to the community. 2
Related Work
Early work in quote attribution focused on identifying spans associated with content (quotes), sources (mentions), and cues (speech verbs) in newswire data. More recent work by Almeida et al. (2014) performed entity-level quote attribution and showed that a joint model of coreference and quote attribution can help both tasks.
In the literary domain, Glass and Bangay (2007) did early work modeling both the mention-level and entity-level tasks using a rule-based system. However, their system relied on identifying a main speech verb to then identify the actor (i.e. the mention) and link to the speaker (i.e. the entity) from a character list. This system worked very well but was limited to explicitly cued speakers and did not address implicit speakers at all.
Elson and McKeown (2010) took important first steps towards automatic quote attribution. They formulated the task as one of mention identification in which the goal was to link a quote to the mention of its speaker. Their method achieved 83.0% accuracy overall, but used gold-label information at test time. Their corpus, the Columbia Quoted Speech Corpus (CQSC), is the most well-known corpus and was used by follow-up work. However, a result of their Mechanical Turk-based labeling strategy was that this corpus contains many unannotated quotes (see Table 4).
O'Keefe et al. (2012) also treated quote attribution as mention identification, using a sequence labeling approach. Their approach was successful in the news domain but it failed to beat their baseline in the literary domain (53.5% vs. 49.8% accuracy). This work quantitatively showed that quote attribution in literature was fundamentally different from the task in newswire.

Table 4: Coverage of the CQSC labels.
Quote Types     Emma          The Steppe
with mention    546 (74.4%)   371 (59.6%)
with speaker    491 (66.9%)   258 (41.5%)
We compare against He et al. (2013), the previous state-of-the-art system for quote attribution. They re-formulated quote attribution as quote-speaker labeling rather than quote-mention labeling. They used a supervised learner and a generative actor topic model (Celikyilmaz et al., 2010) to achieve accuracies ranging from 82.5% on Pride & Prejudice to 74.8% on Emma.
Data: The QuoteLi3 Corpus
We build upon the datasets of He et al. (2013) and Elson and McKeown (2010) to create a comprehensive new dataset of quoted speech in literature: QuoteLi3. This dataset covers 3 novels and 3103 individual quotes, each linked to speaker and mention for a total of 6206 labels, more than 3000 of which are newly annotated. It is composed of expert-annotated dialogue from Jane Austen's Pride and Prejudice, Emma, and Anton Chekhov's The Steppe.
Previous Datasets
The datasets described in section 2 are valuable but incomplete and hard to integrate with one another given their different designs.
The Columbia Quoted Speech Corpus is a large dataset that includes both quote-mention and quote-speaker labels (see Table 2). It suffers from problems often associated with crowdsourced labels and the use of low-accuracy tools. In this corpus, quote-mention labels were gathered from Mechanical Turk, where each quote was linked to a mention by 3 different annotators. Elson and McKeown (2010) report that 65% of the quotes in CQSC had unanimous agreement and that 17.6% of the quotes in this corpus were unlabeled. To generate quote-speaker labels, an off-the-shelf coreference tool was used to link mentions and form coreference chains. We find that 57.8% of the quotes in this corpus either i) have no speaker label (48.1%) or ii) the speaker cannot be linked to a known character entity (9.7%). O'Keefe et al. (2012) find that 8% of quotes with speaker labels are incorrectly labeled. Our analysis of the relevant part of CQSC for this work is shown in Table 4.

Table 2: Quote-mention (q-mention) and quote-speaker (q-speaker) annotation coverage in He et al. (2013), CQSC (Elson and McKeown, 2010), and QuoteLi3 for Pride and Prejudice, Emma, and The Steppe.
The data from He et al. (2013) includes high-quality speaker labels but lacks quote-mention labels. There is no overlap in the data provided by He et al. (2013) and CQSC, but this work did evaluate their system on a subset of CQSC. This dataset assumes that all quoted text within a paragraph should be attributed to the same speaker. While this assumption is correct for Pride and Prejudice, it is incorrect for novels like The Steppe, which use more complex conversational structures. This assumption leads to a problematic method of system evaluation in which all quotes within a paragraph are considered in the gold labels to be one quote, even if they were in fact uttered by different characters. We refer to this strategy as having "collapsed" quotes in our evaluations and present it for the purpose of providing a faithful comparison to previous work.
In QuoteLi3 we add the annotations that are missing from both datasets and correct the existing ones where necessary. A summary of the annotations included in this dataset and comparison to the previous data that we draw from is described in Table 2. Our final dataset is described in Table 3. It features a complete set of annotations for both quote-mention and quote-speaker labels.
Annotation
Two of the authors of the paper were the annotators of our dataset. They used annotation guidelines consisting of an example excerpt and a description, which is included in the supplementary materials §A.5. The annotators were instructed to identify the speaker (from a character list) for each quote and to identify the mention that most directly helped them determine the speaker. Unlike Elson and McKeown (2010), mentions can be pronouns and vocatives, not just explicit name referents. Mentions that were closer to the quote and speech verbs were favored over indirect mentions (such as those in conversational chains). Figure 1 shows an example from Pride and Prejudice.
Annotation was done using a browser-based annotation tool that we designed for this task. One problem with the CQSC annotations was that the annotators were shown short snippets that lacked the context to determine the speaker, and no character list. We designed our tool to provide context and a character list including name, aliases, gender, and description of the character. Similar to CHARLES (He et al., 2013), the character list is not static and the annotator can add to the list of characters. Our tool also features automatic data consistency checks, such as ensuring that all quotes are linked to a mention.

Our expert annotators achieved high inter-annotator agreement, with a Cohen's κ of .97 for quote-speaker labels and a κ of .95 for quote-mention labels. To preserve the QuoteLi3 data for the training, development, and test sets, we calculated this inter-annotator agreement on excerpts from Alice in Wonderland and The Adventures of Huckleberry Finn containing 176 quotes spoken by 10 characters, chosen to be similar to the data found in QuoteLi3. The reported agreement is the average of the Cohen's kappas from these passages.

Table 3 shows the statistics of our annotated corpus. Unlike He et al. (2013), we do not assume that all quotes in the same paragraph are spoken by the same speaker. To compare with the dataset used by He et al. (2013), we provide the collapsed statistics as well. As Table 3 shows, we have roughly the same number of annotated quotes for Pride and Prejudice as He et al. (2013). For Emma and The Steppe, which were taken from the CQSC corpus, we have considerably more quotes because of our added annotations (see Table 4).
The Quote Attribution Task
The task of quote attribution can be summarized as "who said that?" Given a text as input, the final output is a speaker for each uttered quote in the text. We assume that all quotes have been previously identified. O'Keefe et al. (2012) find that regular-expression approaches to quote detection yield over 99% accuracy for clean English-language data. A number of other approaches to quote detection have been studied in recent years for more complex data (Pouliquen et al., 2007; Pareti et al., 2013; Muzny et al., 2016; Scheible et al., 2016). Following He et al. (2013), we assume that there is a predefined list of characters available, with the name, aliases, and gender of each character. (Character lists are available on sites like sparknotes.com; the automatic extraction of characters from a novel has been identified as a separate problem (Vala et al., 2015).) Some key challenges in quote attribution are resolving anaphora (i.e., coreference) and following conversational threads. Literature often follows specific patterns that make some quotes easier to attribute than others. Therefore, an approach that anchors conversations on easily identifiable quotes can outperform approaches that do not. Figure 1 shows an example of a complex conversation at the beginning of Pride and Prejudice. This example illustrates the spectrum of easy to difficult cases found in the task: simple explicit named mention (lines 9, 13, 21), nominal mentions (lines 7, 19, 27), and pronoun mentions (line 5). Sometimes explicitly named mentions embedded in more complex sentences can still be challenging as they require good dependency parses. This example also illustrates a conversational chain with alternating speakers between Mrs. Bennet and Elizabeth Bennet (lines 7 to 11), and between Mr. Bennet and Mrs. Bennet (lines 27 to 34). In this case, vocatives (expressions that indicate the party being addressed) are cues for who the other speaker is (lines 9, 23, 31). When the simple alternation pattern is broken, explicit speech verbs with the speaking character are specified. To summarize, there are several explicit cues and some easy cases in a conversation that can be leveraged to make the hard cases easier to address.
First, consider the quote→mention linking subtask. This is an inherently ambiguous task (i.e., any mention from the same coreference chain is valid), but we know that if the target quote is linked to the annotated mention, this is one correct option. This means that the evaluation of the quote→mention stage is a lower bound. In other words, since a given quote may have multiple mentions that could be considered correct, our system may choose a "wrong" mention for a quote but link it to the correct speaker in the end. Thus, if our mention→speaker system could perfectly resolve every mention to its correct speaker, our overall quote attribution system would be guaranteed to get at minimum the same results as the quote→mention stage.
The quote→speaker task can be tackled directly without addressing quote→mention, but identifying a mention associated with the speaker allows us to incorporate key outside information. Another advantage of this approach is that we can then separately analyze and improve the performance of the two stages.
Therefore we evaluate both subtasks to give a more complete picture of when the system fails and succeeds. We use precision, recall, and F1 so that we can tune the system for different needs.
Approach
Our model is a two-stage deterministic pipeline. The first stage links quotes to specific mentions in the text and the second stage matches mentions to the entity that they refer to.
By doing both quote→mention and mention→entity linking, our system is able to leverage additional contextual information, resulting in a richer, labeled output. Its modular design means that it can be easily updated to account for improvements in various sub-areas such as coreference resolution. We use a sieve-based architecture because having accurate labels for the easy cases allows us to first find anchors that help resolve harder, often conversational, cases. Sieve-based systems have been shown to work well for tasks like coreference resolution (Raghunathan et al., 2010; Lee et al., 2013), entity linking (Hajishirzi et al., 2013), and event temporal ordering (Chambers et al., 2014).
Quote→Mention
The quote→mention stage is a series of deterministic sieves. We describe each in detail in the following sections and show examples in Table 5.
Trigram Matching This sieve is similar to patterns used in Elson and McKeown (2010). It uses patterns like Quote-Mention-Verb (e.g., "..." she said), where the mention is either a character name or pronoun, to isolate the mention. Other patterns include Quote-Verb-Mention, Mention-Verb-Quote, and Verb-Mention-Quote.
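As a concrete illustration, a minimal sketch of this kind of pattern matching in Python is shown below; the speech-verb list, the mention pattern, and the window handling are simplified placeholders rather than the system's actual rules.

```python
import re

# Illustrative word lists; the system's actual speech-verb and character lists differ.
SPEECH_VERB = r"(?:said|replied|cried|answered|observed|added|thought|called)"
MENTION = r"(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*|he|she)"

# Trigram patterns around a quote: Quote-Mention-Verb, Quote-Verb-Mention,
# Mention-Verb-Quote, Verb-Mention-Quote.
AFTER_PATTERNS = [
    re.compile(rf"^[\s,]*(?P<m>{MENTION})\s+{SPEECH_VERB}\b"),   # "...," she said
    re.compile(rf"^[\s,]*{SPEECH_VERB}\s+(?P<m>{MENTION})\b"),   # "...," replied he
]
BEFORE_PATTERNS = [
    re.compile(rf"(?P<m>{MENTION})\s+{SPEECH_VERB}[\s,]*$"),     # Elizabeth said, "..."
    re.compile(rf"{SPEECH_VERB}\s+(?P<m>{MENTION})[\s,]*$"),     # said Elizabeth, "..."
]

def trigram_sieve(before_text, after_text):
    """Return the mention string isolated by a trigram pattern, or None.

    before_text / after_text are the few dozen characters of narration
    immediately before and after the quotation marks.
    """
    for pat in AFTER_PATTERNS:
        m = pat.search(after_text)
        if m:
            return m.group("m")
    for pat in BEFORE_PATTERNS:
        m = pat.search(before_text)
        if m:
            return m.group("m")
    return None

# Example: trigram_sieve("", " replied he.") -> "he"
```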
Dependency Parses The next sieve in our pipeline inspects the dependency parses of the sentences surrounding the target quote. We use the enhanced dependency parses (Schuster and Manning, 2016) produced by Stanford CoreNLP (Chen and Manning, 2014) to extract all verbs and their dependent nsubj nodes. If the verb is a common speech verb and its nsubj relation points to a character name, a pronoun, or an animate noun (the list of animate nouns is from Ji and Lin (2009)), we assign the quote to the target mention.

Table 5: Example text handled by each quote→mention sieve.
Trigram Matching: "They have none of them much to recommend them," replied he.
Dependency Parses: Mrs. Bennet said only, "Nonsense, nonsense!"
Single Mention Detection: ...Elizabeth impatiently. "There has been many a one, I fancy, overcome in the same way. I wonder who first discovered the efficacy of poetry in driving away love!"
Vocative Detection: "My dear Mr. Bennet,..." "Is that his design in settling here?"
Paragraph Final Mention Linking: After a silence of several minutes, he came towards her in an agitated manner, and thus began, "In vain have I struggled..."
Supervised Sieve / Conversation Detection: "Aye, so it is," cried her mother ... "Then, my dear, you may have the advantage of your friend, and introduce Mr. Bingley to her." "Impossible, Mr. Bennet, impossible, when I am not acquainted with him myself; how can you be so teazing?"
Loose Conversation Detection: "I will not trust myself on the subject," replied Wickham; "I can hardly be just to him." Elizabeth was again deep in thought, and after a time exclaimed, "To treat in ... the favourite of his father!" She could have added, "A young man, too,... being amiable", but she contented herself with, "and one, too, ... in the closest manner!" "We were born in the same parish, within the same park; the greatest part of our youth was passed together;..."
Single Mention Detection
If there is only a single mention in the non-quote text in the paragraph of the target quote, link the quote to that mention.
Vocative Detection If the preceding quote contains a vocative pattern (see supplemental section A.2), link the target quote to that mention. Vocative detection only matches character names and animate nouns.
Paragraph Final Mention Linking If the target quote occurs at the end of a paragraph, link it to the final mention occurring in the preceding sentence.
Conversational Pattern If a quote in paragraph n has been linked to mention m_n, then this sieve links an unattributed quote two paragraphs ahead, n + 2, to mention m_n if they appear to be in conversation. We consider two quotes "in conversation" if the paragraph between them is also a quote, and the quote in paragraph n + 2 appears without additional (non-quote) text.
Loose Conversational Pattern
We include a looser form of the previous sieve as a final, high-recall step. If a quote in paragraph n has been linked to mention m_n, then this sieve links quotes in paragraph n + 2 to m_n without restriction.
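Putting the stage together, the sketch below shows the generic cascade logic that such a sieve pipeline follows: apply the most precise sieves first, and let later sieves only touch still-unattributed quotes. The sieve function names in the comment are hypothetical stand-ins for the sieves described above, not the system's actual API.

```python
from typing import Callable, Dict, List, Optional

# A sieve takes a quote record (with its document context) and returns a mention or None.
Sieve = Callable[[Dict], Optional[str]]

def run_sieve_cascade(quotes: List[Dict], sieves: List[Sieve]) -> None:
    """Apply high-precision sieves first; later sieves only see unattributed quotes."""
    for sieve in sieves:                       # ordered from most to least precise
        for quote in quotes:
            if quote.get("mention") is None:
                mention = sieve(quote)
                if mention is not None:
                    quote["mention"] = mention
                    quote["attributed_by"] = sieve.__name__

# Hypothetical ordering mirroring the sieves described above:
# sieves = [trigram_sieve, dependency_sieve, single_mention_sieve,
#           vocative_sieve, paragraph_final_sieve,
#           conversational_sieve, loose_conversational_sieve]
```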
Mention→Speaker
The second stage of our system involves linking the mentions identified in the first stage to a speaker entity. We again use several simple, deterministic sieves to determine the entity that each mention and quote should be linked to. A description of these sieves and example mentions and quotes that they are applied to appears in Table 6. For the following sieves, we construct an ordered list of top speakers by counting proper name and pronoun mentions around the target quote. If gender for the target quote's speaker can be determined either by the gender of a pronoun mention or the gender of an animate noun (Bergsma and Lin, 2006), this information is used to filter the candidate speakers in the top speakers list.
We use a window size from 2000 tokens before the target quote to 500 tokens after the target quote. If no speakers matching in gender can be found in this window, it is expanded by 2000 tokens on both sides.
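A minimal sketch of this candidate-ranking step is given below, assuming mentions have already been tagged with character ids; the data layout and the exact expansion rule are illustrative assumptions, not the system's implementation.

```python
from collections import Counter

def top_speakers(tokens, quote_start, quote_end, characters, gender=None,
                 before=2000, after=500):
    """Rank candidate speakers by how often they are mentioned near the quote.

    tokens: list of (token, speaker_id_or_None) pairs, where speaker_id is set
            for tokens that are proper-name or resolved-pronoun mentions.
    characters: dict speaker_id -> {"gender": ...}; gender optionally filters
            candidates. If nothing matches, the window grows by 2000 tokens per side.
    """
    while True:
        lo = max(0, quote_start - before)
        hi = min(len(tokens), quote_end + after)
        counts = Counter(sid for _, sid in tokens[lo:hi] if sid is not None)
        ranked = [sid for sid, _ in counts.most_common()
                  if gender is None or characters[sid]["gender"] == gender]
        if ranked or (lo == 0 and hi == len(tokens)):
            return ranked
        before += 2000
        after += 2000
```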
Exact Name Match If the mention that a quote is linked to matches a character name or alias in our character list, label the quote with that speaker.
Coreference Disambiguation If the mention is a pronoun, we attempt to disambiguate it to a specific character using the coreference labels provided by BookNLP (Bamman et al., 2014).
Conversational Pattern
As in the quote→mention stage, we match a target quote to the same speaker as a quote in paragraph n + 2, if they are in the same conversation and it is labeled. Next, we match it to the quote in paragraph n − 2 if they are in the same conversation and it is labeled. This sieve receives gender information from the mention that the target quote is linked to.
Family Noun Vocative Disambiguation If the target quote is linked to a vocative in the list of family relations (e.g. "papa"), pick the first speaker in top speakers that matches the last name of the speaker of the quote containing the vocative.
Majority Speaker If none of the previous sieves identified a speaker for the quote, label the quote with the first speaker in the top speakers list.
Experiments
In all experiments, we divide the data as follows: Pride and Prejudice is split as in He et al. (2013) with chapters 19-26 as the test set, 27-33 as the development set, and all others as training. Emma and The Steppe are not used for training.
Baseline
As a baseline, for the quote→mention stage we choose the mention that is closest to the quote in terms of token distance. This is similar to the approach taken in BookNLP (Bamman et al., 2014), in which quotes are attributed to a mention by first looking for the closest mention in the same sentence to the left and right of the quote, then before a hard stop or another quote to the left and right of the target quote. For the mention→speaker stage, we use the Exact Name Match and Coreference Disambiguation sieves.

Table 9: Breakdown of the accuracy of our system per type of quote (see Table 3) in each test set.

Table 7 shows a direct comparison of our work versus the previous systems. We replicate the test conditions used by He et al. (2013) as closely as possible in this comparison. In this comparison, the evaluations based on CQSC are of non-contiguous subsets of the quotes that are also not necessarily the same between our work and the previous work. As discussed in section 3, CQSC provides an incomplete set of quote-speaker labels. In this work we follow the same methodology as He et al. (2013) to extract a test set of unambiguously labeled quotes by using a list of character names to identify those that are unambiguously labeled. In section 7, we analyze The Steppe and Emma more thoroughly, showing that this method results in an easier subset of the quotes in these novels.
Comparison to Previous Work
Our preferred evaluation, shown in Table 8, differs from previous evaluations in four important ways. We hope that this work can establish consistent guidelines for attributing quotes and evaluating system performance to encourage future work.
• Each quote is attributed separately.
• The test sets are composed of every quote from the test portion of each novel; no subsets are used because of incomplete annotations.
• No gold data is used at test time.
• Precision and recall are reported in preference to accuracy for a more fine-grained understanding of the underlying system.

Table 8: Precision, recall, and F-score of our systems on un-collapsed quotations and the fully annotated test sets from the QuoteLi3 dataset.
Adding a Supervised Component
To test how orthogonal our two-stage approach is to previous systems, we experiment by adding a supervised sieve to the quote→mention stage. We train a binary classifier, using a maxent model, to distinguish between the correct and incorrect candidate mentions.
Candidate Mentions We take as candidate mentions all token spans corresponding to names, pronouns, and animate nouns in a one-paragraph range on either side of the quote. Names are determined by scanning for matches to the character list. We restrict pronouns to singular gendered pronouns, i.e. 'he' or 'she'.
Features We featurize each (quote, mention) pair based on attributes of the quote, mention, and how far apart they are from one another. These features largely align with previous work and can be found in supplemental section A.3 (Elson and McKeown, 2010;He et al., 2013).
Prediction At test time our model predicts for each quote whether each candidate mention is or is not the correct mention to pair with that quote. If the model predicts more than one mention to be correct, we take the most confident result. This sieve goes just before the conversation pattern detection sieves in the quote→mention stage (see Table 5). This forms our +supervised system.
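A minimal sketch of this train/predict step is shown below; it uses scikit-learn's logistic regression as a stand-in for the maxent model (the two are equivalent formulations), and the feature matrices are assumed to come from a featurizer like the one listed in the supplementary material.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_mention_classifier(X_train: np.ndarray, y_train: np.ndarray):
    """Binary classifier over (quote, candidate mention) feature vectors.
    Logistic regression is the usual maximum-entropy formulation of this classifier."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)                 # y: 1 = correct mention, 0 = incorrect
    return clf

def supervised_sieve(clf, candidate_features, candidate_mentions):
    """Return the most confident positively classified mention, or None."""
    if not candidate_mentions:
        return None
    probs = clf.predict_proba(candidate_features)[:, 1]   # P(correct)
    best = int(np.argmax(probs))
    return candidate_mentions[best] if probs[best] >= 0.5 else None
```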
Creating a High-Precision System
One advantage of our sieve design is that we can easily add and remove sieves from our pipeline. This means that we can determine the combination of sieves that result in the system that achieves the highest precision with respect to the final speaker label. We use an ablation test to find the combination of sieves with the highest precision (95.6%) for speaker labels on the development set from Pride and Prejudice. These results are achieved by removing the Loose Conversation Detection sieve for the quote→mention stage and keeping only the Exact Name Match and Coreference Disambiguation sieves for the mention→speaker stage. Together, these sieves create a system that we call +precision that emphasizes overall precision rather than F-score or accuracy.
Results
We show that a simple deterministic system can achieve state-of-the-art results. Adding a lightweight supervised component improves the system across all test sets. The sieve design allows us to create a high precision system that might be more appropriate for real-world applications that value precision over recall.
The results in Table 8 confirm that the subset of test quotes from Emma and The Steppe used in previous work were an easier subset of the whole set of quotations. When evaluating on the whole set of quotations, we lose 0.2 and 11.1 points of accuracy for Emma and The Steppe, respectively. As we show in Table 4, The Steppe is missing a significant portion (50.9%) of the annotations whereas Emma is missing 28.6%. Our error analysis shows us that The Steppe features more complicated conversation patterns than the novels of Jane Austen, which makes the task of quote attribution more difficult.
One type of error analysis we performed was inspecting the accuracy of our system by quote type. As seen in Table 9, the main challenge lies in identifying anaphoric and implicit speakers. We find that resolving non-pronoun anaphora is much more challenging for our system than pronouns. This is because the only mechanism for dealing with these mentions is the Family Noun Vocative Disambiguation sieve; otherwise, the only information we gather from them is gender information. This indicates that adding information about the social network of a novel and attributes of each character (such as job and relationships to other characters) would further increase system performance.
Conclusion
In this paper, we provided an improved, consistently annotated dataset for quote attribution with both quote-mention and quote-speaker annotations. We described a two-stage quote attribution system that first links quotes to mentions and then mentions to speakers, and showed that it outperforms the existing state-of-the-art. We established a thorough evaluation and showed how our system can be tweaked for higher precision or refined with a supervised sieve for better overall performance.
A clear direction for future work is to expand the dataset to a more diverse set of novels by leveraging our annotation tool on Mechanical Turk or other crowdsourcing platforms. This work has also provided the background to see the pitfalls that a dataset produced in such a way might encounter. For example, annotators could label mentions and speakers separately, and examples with high uncertainty could be transferred to expert annotators. An expanded dataset would allow us to evaluate how well our system generalizes to other novels and also allow us to train better models. Another interesting direction is to eliminate the use of predefined character lists by automatically extracting the list of characters (Vala et al., 2015).
A Supplemental Material
A.1 Nested Conversation Example

Figure 2: An example paragraph that contains multiple speakers, from The Steppe.

Figure 2 shows a screenshot of our annotation tool displaying a paragraph with a complex conversational structure from The Steppe.
A.3 Supervised Classifier Features
We used the following features in our supervised classifier:
• Distance: token distance, ranked distance (relative to mentions), paragraph distance (left paragraph and right paragraph separate).
• Mention: number of quotes in the mention paragraph, number of words in the mention paragraph, the order of the mention within the paragraph (compared to other mentions), whether the mention is within conversation (i.e., no non-quote text in the same paragraph), whether the mention is within a quote, POS of the previous and next words.
• Quote: the length of the quote, the order of the quote (i.e., whether it is the first or second quote in a paragraph), the number of words in the paragraph, number of names in the paragraph, whether the quote contains text in it, whether the present quote contains the name of the mention (if the mention is a name).
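A toy featurizer over a (quote, mention) pair might look like the sketch below; the dictionary keys and the fields assumed on the quote and mention records are illustrative, not the exact feature set listed above.

```python
def featurize(quote: dict, mention: dict) -> dict:
    """Toy featurization of a (quote, mention) pair; field names are illustrative."""
    return {
        "token_distance": abs(mention["token_index"] - quote["end_token"]),
        "paragraph_distance": mention["paragraph"] - quote["paragraph"],
        "mention_order_in_paragraph": mention["order_in_paragraph"],
        "mention_inside_quote": int(mention["inside_quote"]),
        "quote_length": quote["end_token"] - quote["start_token"],
        "quote_is_first_in_paragraph": int(quote["order_in_paragraph"] == 0),
        "num_names_in_paragraph": quote["num_names_in_paragraph"],
        "quote_contains_mention_name": int(
            mention.get("name", "").lower() in quote["text"].lower()
        ),
    }
```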
A.4 Word Lists
Common Speech Verbs Similar to He et al. (2013), we use say, cry, reply, add, think, observe, call, and answer, present in the training data from Pride and Prejudice.
Family Relation Nouns ancestor aunt bride bridegroom brother brother-in-law child children dad daddy daughter daughter-in-law father father-in-law fiancee grampa gramps grandchild grandchildren granddaughter grandfather grandma grandmother grandpa grandparent grandson granny great-granddaughter greatgrandfather great-grandmother great-grandparent great-grandson great-aunt great-uncle groom half-brother half-sister heir heiress husband ma mama mom mommy mother mother-in-law nana nephew niece pa papa parent pop second cousin sister sister-in-law son son-in-law stepbrother stepchild stepchildren stepdad stepdaughter stepfather stepmom stepmother stepsister stepson uncle wife
A.5 Annotation Guidelines
• Each quote should be annotated with the character that is that quote's speaker. • Each quote should be linked to a mention that is the most obvious indication of that quote's speaker.
-Quotes can be linked to mentions inside other quotes. -Multiple quotes may be linked to the same mention. • Mentions should also be annotated with the character that they refer to.
-If a character's name is meaningfully associated with an article (e.g. "...," said the Bear), include that article in the mention. | 6,930 | 2017-04-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
GW/BSE Nonadiabatic Dynamics Simulations on Excited-State Relaxation Processes of Zinc Phthalocyanine-Fullerene Dyads: Roles of Bridging Chemical Bonds†
In this work, we employ electronic structure calculations and nonadiabatic dynamics simulations based on many-body Green function and Bethe-Salpeter equation (GW/BSE) methods to study excited-state properties of a zinc phthalocyanine-fullerene (ZnPc-C 60 ) dyad with 6-6 and 5-6 configurations. In the former, the initially populated locally excited (LE) state of ZnPc is the lowest S 1 state and thus, its subsequent charge separation is relatively slow. In contrast, in the latter, the S 1 state is the LE state of C 60 while the LE state of ZnPc is much higher in energy. There also exist several charge-transfer (CT) states between the LE states of ZnPc and C 60 . Thus, one can see apparent charge separation dynamics during excited-state relaxation dynamics from the LE state of ZnPc to that of C 60 . These points are verified in dynamics simulations. In the first 200 fs, there is a rapid excitation energy transfer from ZnPc to C 60 , followed by an ultrafast charge separation to form a CT intermediate state. This process is mainly driven by hole transfer from C 60 to ZnPc. The present work demonstrates that different bonding patterns (i.e. 5-6 and 6-6) of the C−N linker can be used to tune excited-state properties and thereto optoelectronic properties of covalently bonded ZnPc-C 60 dyads. Methodologically, it is proven that combined GW/BSE nonadiabatic dynamics method is a practical and reliable tool for exploring photoinduced dynamics of nonperiodic dyads, organometallic molecules, quantum dots, nanoclusters, etc.
I. INTRODUCTION
Donor-acceptor dyads have attracted a broad range of research interests owing to their successful applications in photo-fuel production, photo-electricity conversion, optoelectronic devices, etc. [1][2][3][4]. Their optoelectronic performance heavily depends on the initial photoinduced charge separation and subsequent carrier dynamics. Clearly, correctly understanding these dynamical processes is very important for rationally regulating and designing new types of donor-acceptor dyads with superior optoelectrical properties. Thus, numerous experimental and theoretical works have been carried out in the past decades to study excited-state properties of dyads with distinct kinds of donor-acceptor interfaces [5][6][7][8][9][10][11].
One of the most popular kinds of donor-acceptor dyads is formed by combining zinc phthalocyanine (ZnPc) molecules and fullerenes (C 60 ), e.g. ZnPc-C 60 . These ZnPc-C 60 complexes are overall categorized into two types according to whether or not they are formed by chemical bonds. Non-covalently bonded ZnPc-C 60 complexes are primarily formed through van der Waals interaction. Alternatively, these ZnPc-C 60 dyads can be generated via chemical bonds connecting both donor and acceptor fragments. In these complexes, ZnPc and C 60 usually act as electron donor and acceptor, respectively, and their unique optoelectronic properties have been extensively studied in the past years [12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. Upon excitation of the donor fragment, i.e. ZnPc, singlet excitons are first generated and then migrate toward the ZnPc-C 60 interface, near which a series of nonadiabatic transitions such as internal conversion and intersystem crossing processes could occur. Finally, these excitons are converted into charge-transfer excitons followed by charge separation to generate free electrons and holes. Furthermore, these photoinduced interfacial dynamical processes are very complicated and strongly dependent on the relative spatial orientations of ZnPc and C 60 in noncovalently bonded dyads, and on the properties of the chemical bonds connecting ZnPc and C 60 in covalently bonded dyads, as observed in many experimental and theoretical studies [22, 24-26, 28, 29]. In order to uncover chemical bonding effects on excited-state properties and charge separation dynamics of covalently bonded ZnPc-C 60 dyads, several ZnPc-C 60 dyads have been synthesized and their photochemical and photophysical properties have been explored [30][31][32][33][34]. Nevertheless, mechanistic details on excited-state dynamics, in particular effects of chemical bonds connecting both donor and acceptor fragments, remain elusive.
Theoretically, there are few works carried out to address excited-state properties and photo-driven dynamical processes of ZnPc-C 60 dyads. Santos and Wang carried out Ehrenfest dynamics simulations to explore photoinduced electron and hole transfer dynamics between ZnPc and C 60 in two different covalently bonded ZnPc-C 60 complexes [29]. Their results demonstrated that the chemical bonds connecting ZnPc and C 60 have remarkable influences on photoinduced charge separation dynamics. However, molecular-orbital-based mean-field methods are not suitable for simulating excitation energy transfer due to the lack of electron-hole interactions. Later, our theoretical works based on the linear-response time-dependent density functional theory (LR-TDDFT) method revealed that excited-state electronic structures and relaxation dynamics of ZnPc-C 60 dyads are significantly regulated by spatial orientation in noncovalently bonded dyads and by chemical bonding patterns in covalently bonded ones [28,35]. However, in previous works, only C−O linking groups were considered. Whether other chemical linkers, such as C−N groups, have similar effects is not clear. Motivated by this question, we here developed and employed GW/BSE (combined many-body Green function and Bethe-Salpeter equation) nonadiabatic dynamics simulations to study excited-state properties and photoinduced dynamics of ZnPc-C 60 dyads linked by two C−N bonds but with different chemical bonding patterns.
A. GW and BSE methods
The GW and BSE methods are derived from many-body perturbation theory (MBPT) [36][37][38][39][40][41][42]. The core object in MBPT is the time-ordered Green function G, which is expressed as

G(x, t; x', t') = -i \langle \Phi | T[\hat{\psi}(x, t)\, \hat{\psi}^{\dagger}(x', t')] | \Phi \rangle,   (1)

where Φ represents a ground-state wavefunction and T is a time-ordering operator. \hat{\psi}^{\dagger}(x', t') and \hat{\psi}(x, t) represent electron creation and annihilation operators in the Heisenberg picture, t represents time, and x = (r, σ) collects space and spin coordinates (r: space; σ: spin). G describes the time-dependent propagation of electrons and holes and only depends on x and t. After Fourier transformation to the frequency domain, the one-particle Green function G_σ(r, r', ω) depends on frequency ω and can be expressed with the one-electron wavefunctions ψ_m^σ(r) and energies ε_m^σ calculated in the mean-field potential of the electrons,

G_\sigma(r, r', \omega) = \sum_i \frac{\psi_i^\sigma(r)\, \psi_i^{\sigma *}(r')}{\omega - \varepsilon_i^\sigma - i\eta} + \sum_a \frac{\psi_a^\sigma(r)\, \psi_a^{\sigma *}(r')}{\omega - \varepsilon_a^\sigma + i\eta},   (2)

where the indexes i and a represent occupied and unoccupied one-electron states with spin number σ, and η is a small positive number. The non-interacting polarization χ_0 is related to the one-particle Green function G_σ(r, r', ω),

\chi_0(r, r', \omega) = -\frac{i}{2\pi} \sum_\sigma \int G_\sigma(r, r', \omega + \omega')\, G_\sigma(r', r, \omega')\, d\omega',   (3)

and the dielectric function ε(r, r', ω) within the random-phase approximation (RPA) is finally given by

\varepsilon(r, r', \omega) = \delta(r - r') - \int v(r, r'')\, \chi_0(r'', r', \omega)\, dr'',   (4)

in which v represents the Coulomb interaction. With ε, one can get the screened Coulomb interaction W(r, r', ω) as

W(r, r', \omega) = \int \varepsilon^{-1}(r, r'', \omega)\, v(r'', r')\, dr'';   (5)

then, the self-energy is written as

\Sigma_\sigma(r, r', \omega) = \frac{i}{2\pi} \int G_\sigma(r, r', \omega + \omega_1)\, W(r, r', \omega_1)\, e^{i\eta\omega_1}\, d\omega_1,   (6)

where e^{iηω_1} is merely used to enforce a correct time-ordered form of the self-energy. With these definitions, the GW quasiparticle energies ε_n^{GWσ} can be obtained from

\varepsilon_n^{GW\sigma} = \varepsilon_n^\sigma + \langle \psi_n^\sigma | \Sigma_\sigma(\varepsilon_n^{GW\sigma}) - v_{xc} | \psi_n^\sigma \rangle,   (7)

which is also called the one-shot G_0W_0 scheme. In addition, one can also further iterate the GW equations by updating Eqs. (2)−(7) with the new quasiparticle eigenvalues, while the electron wavefunctions are kept fixed in this procedure. This procedure is called eigenvalue-only self-consistent GW, i.e. evGW or G_nW_n in the literature, in which n means the number of iteration cycles for the Green function and the screened Coulomb interaction. Of course, there are GW methods with updated eigenvalues and wavefunctions; however, these are beyond the scope of our present work. Interested readers are referred to recent literature [43][44][45][46][47][48][49].

On the basis of the GW results, the BSE can be used to calculate excitation energies. It is expressed as a Dyson-like equation for the two-particle correlation function L,

L = L_0 + L_0 K L,   (8)

where L_0 represents the non-interacting counterpart of L and the electron-hole interaction kernel K(r_3, r_5; r_4, r_6) is written as

K(r_3, r_5; r_4, r_6) = \delta(r_3 - r_4)\, \delta(r_5 - r_6)\, v(r_3, r_6) - \delta(r_3 - r_6)\, \delta(r_4 - r_5)\, W(r_3, r_4).   (9)

Then, after further mathematical derivation, one can get the excitation (X^l) and de-excitation (Y^l) amplitudes of the electron-hole transition pairs for the lth excited state, defined through

| \Phi_l \rangle = \sum_{ia} \left( X_{ia}^{l}\, \hat{a}_a^{\dagger} \hat{a}_i + Y_{ia}^{l}\, \hat{a}_i^{\dagger} \hat{a}_a \right) | \Phi_0 \rangle.   (10)

Thus, the BSE excitation energies can be obtained by solving the coupled eigenvalue equations

A X_l + B Y_l = \Omega_l X_l,   (11)

B^{*} X_l + A^{*} Y_l = -\Omega_l Y_l,   (12)

which are similar in form to those of the LR-TDDFT method, i.e. the Casida equation [50]. Here Ω_l represents the excitation energy of the lth excited state, and X_l and Y_l are the corresponding excitation and de-excitation amplitudes. The matrix elements of A and B are expressed as

A_{ia,jb} = (\varepsilon_a^{GW} - \varepsilon_i^{GW})\, \delta_{ij}\, \delta_{ab} + \alpha^{S/T} (ia|jb) - W_{ij,ab}   (13)

and

B_{ia,jb} = \alpha^{S/T} (ia|bj) - W_{ib,aj},   (14)

in which i, j represent occupied states while a, b represent unoccupied states, and α^{S/T} is a constant which equals 0 for triplet states and 2 for singlet states.
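As a small numerical illustration of Eqs. (11)-(14), the sketch below solves the coupled excitation/de-excitation eigenvalue problem by brute-force diagonalization of the corresponding block matrix; the A and B matrices are assumed to be given as dense arrays (here filled with toy values), which is only practical for very small model spaces.

```python
import numpy as np

def bse_excitation_energies(A: np.ndarray, B: np.ndarray):
    """Solve the Casida/BSE-like eigenvalue problem (Eqs. (11)-(12)) by brute force.

    A and B are the (occ x vir)-dimensional matrices of Eqs. (13)-(14).
    Returns the positive excitation energies, sorted.
    """
    n = A.shape[0]
    M = np.block([[A, B], [-B.conj(), -A.conj()]])   # eigenvalues come in +/- Omega pairs
    evals = np.linalg.eigvals(M)
    omega = np.sort(evals.real[evals.real > 1e-10])
    return omega[:n]

# Tiny sanity check: with B = 0 the excitation energies are the eigenvalues of A.
A = np.diag([1.0, 2.0, 3.0])
B = np.zeros((3, 3))
print(bse_excitation_energies(A, B))   # -> [1. 2. 3.]
```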
B. Fewest-switches surface-hopping method
Nonadiabatic molecular dynamics simulations play important roles in understanding ultrafast photochemical and photophysical processes of molecules, biological systems, and materials. Nowadays, there are a lot of simulation methods proposed and widely applied [10,11]. One of the most popular kinds of simulation algorithms is trajectory-based surface hopping [73,74]. The main idea of the surface hopping method is that nuclei propagate on a specific adiabatic potential energy surface and have probabilities of hopping to other adiabatic ones. The fewest-switches surface hopping method, proposed by Tully et al. [73,74], is one of the most popular surface hopping methods. Recently, we have developed and implemented this surface hopping simulation method at the LR-TDDFT level, and successfully applied it to simulate many complex photoinduced processes [28,35,[75][76][77][78]]. However, TDDFT-calculated excited-state electronic properties are sometimes heavily dependent on the density functionals used; they could vary qualitatively from one functional to another. By contrast, the GW/BSE method based on MBPT is more robust, in particular evGW/BSE, which provides reliable results for excited states. Thus, in this work, we develop and implement the fewest-switches surface hopping method combined with the GW/BSE method.
Here we give a brief introduction. The time-dependent Schrödinger equation is written as

i\hbar \frac{\partial}{\partial t} \psi(R, r, t) = \hat{H}_0(R, r)\, \psi(R, r, t),   (16)

where \hat{H}_0(R, r) is the zero-order electronic Hamiltonian while r and R represent the spatial coordinates of electrons and nuclei, respectively. Here, the time-dependent total electronic wavefunction ψ(R, r, t) is expanded as

\psi(R, r, t) = \sum_i c_i(t)\, \Phi_i(R, r),   (17)

where c_i(t) is the time-dependent expansion coefficient of the ith electronic state and Φ_i(R, r) is the corresponding adiabatic electronic wavefunction at nuclear configuration R. Inserting Eq. (17) into Eq. (16) and multiplying by ⟨Φ_i(R, r)| from the left side, one can get

i\hbar\, \dot{c}_i(t) = c_i(t)\, E_i - i\hbar \sum_j c_j(t)\, \mathbf{d}_{ij} \cdot \mathbf{v}(t),   (18)

in which d_ij and v(t) are the adiabatic derivative couplings and nuclear velocities, respectively, and E_j is the eigenvalue of \hat{H}_0(R, r) with corresponding eigenfunction Φ_j(R, r). Eq. (18) is the final electronic propagation equation. The fewest-switches criterion then computes the electronic transition probability from state i to state j as

P_{i \to j} = \max\left[ 0,\; \frac{2\,\Delta t\, \mathrm{Re}\!\left( c_i^{*}(t)\, c_j(t)\, \mathbf{v}(t) \cdot \mathbf{d}_{ij} \right)}{|c_i(t)|^2} \right].   (19)
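A minimal sketch of how Eq. (19) is typically used in practice is given below; it only computes the hopping probabilities and draws a stochastic hop decision, and omits the integration of Eq. (18) itself as well as velocity rescaling and decoherence corrections.

```python
import numpy as np

def fssh_hop_probabilities(c, current, nacs, velocity, dt):
    """Fewest-switches hopping probabilities out of the current state (Eq. (19)).

    c:        complex amplitudes c_i(t) of the adiabatic states
    current:  index i of the currently occupied state
    nacs:     array with nacs[i, j] = d_ij, shape (nstates, nstates, 3N)
    velocity: nuclear velocity vector, shape (3N,)
    dt:       time step
    """
    pop = abs(c[current]) ** 2
    probs = np.zeros(len(c))
    for j in range(len(c)):
        if j == current:
            continue
        coupling = np.dot(velocity, nacs[current, j])            # v(t) . d_ij
        flux = 2.0 * dt * np.real(np.conj(c[current]) * c[j] * coupling)
        probs[j] = max(0.0, flux / pop)
    return probs

def attempt_hop(probs, rng=np.random.default_rng()):
    """Stochastically select a target state (or -1 for no hop)."""
    xi, acc = rng.random(), 0.0
    for j, p in enumerate(probs):
        acc += p
        if xi < acc:
            return j
    return -1
```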
C. Numerical algorithm for nonadiabatic couplings
A numerical algorithm for calculating nonadiabatic couplings has been developed and implemented by us at the LR-TDDFT level [28,35,[75][76][77][78]]. Since the BSE is similar to LR-TDDFT in mathematical form, it is easy and straightforward to modify the algorithm to calculate nonadiabatic couplings at the GW/BSE level. Similarly, the Kth excited-state wavefunction Φ_K at the BSE level is written as a linear combination of singly excited configurations ψ_i^a built from the GW molecular orbitals,

\Phi_K = \sum_{i,a} \omega_i^{a,K}\, \psi_i^a,   (20)

where ω stands for the linear combination coefficients of ψ_i^a, and i, a represent occupied and unoccupied states, respectively. The nonadiabatic couplings τ_KJ between states K and J can be written as

\tau_{KJ} = \left\langle \Phi_K \middle| \frac{\partial}{\partial t} \middle| \Phi_J \right\rangle = \sum_{ia,jb} \omega_i^{a,K*}\, \omega_j^{b,J} \left\langle \psi_i^a \middle| \frac{\partial}{\partial t} \middle| \psi_j^b \right\rangle,   (21)

in which i, j represent occupied states while a, b represent unoccupied states. Note that the mathematical expression of τ_KJ within the GW/BSE framework is the same as that within the LR-TDDFT one [75,79]. After further mathematical derivation, the final expression of τ_KJ (Eq. (22)) reduces to time-derivative overlaps between molecular orbitals weighted by the amplitudes ω, where P_ij is an additional phase factor which depends on the ordering convention of the molecular orbitals. The time differentiation of the molecular orbitals can be obtained by the finite-difference scheme

\left\langle \phi_p(t) \middle| \frac{\partial}{\partial t} \middle| \phi_q(t) \right\rangle \approx \frac{ \langle \phi_p(t) | \phi_q(t+\Delta t) \rangle - \langle \phi_p(t+\Delta t) | \phi_q(t) \rangle }{2\,\Delta t},   (23)

in which ϕ_p(t) and ϕ_q(t+Δt) represent MOs at times t and t+Δt, respectively. Because the GW/BSE calculations used here do not involve updating the molecular orbitals, molecular orbitals from converged DFT calculations are used in the GW/BSE calculations as ϕ_p(t) and ϕ_q(t+Δt) in Eq. (23).
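The finite-difference step of Eq. (23) can be sketched as follows; the MO coefficient matrices and the AO overlap handling are simplified (a single AO overlap matrix is assumed, whereas a rigorous implementation uses the mixed-geometry AO overlap between the two time steps).

```python
import numpy as np

def mo_time_derivative_overlaps(C_t, C_tdt, S_ao, dt):
    """Finite-difference matrix <phi_p(t)|d/dt|phi_q(t)> as in Eq. (23).

    C_t, C_tdt: MO coefficient matrices (AO x MO) at times t and t + dt
    S_ao:       AO overlap matrix (here a single matrix as an approximation)
    """
    s_forward = C_t.conj().T @ S_ao @ C_tdt    # <phi_p(t)|phi_q(t+dt)>
    s_backward = C_tdt.conj().T @ S_ao @ C_t   # <phi_p(t+dt)|phi_q(t)>
    return (s_forward - s_backward) / (2.0 * dt)
```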
D. Simulation details
Geometries of the ZnPc-C 60 dyad were optimized using the B3LYP+D3 method [80][81][82][83][84]. The C, H, O, and N atoms were described with the cc-pVDZ basis sets [85]. For the Zn atom, LANL2DZ basis sets were employed to describe outer-valence electrons while innercore electrons were treated with pseudopotential [86]. Geometry optimization was performed with Gaussian 16 [87]. Energy decomposition analyses (EDA) were performed at the B3LYP+D3/TZP level of theory with ADF2018 [88,89]. Ground-state molecular dynamics simulations were performed at the PBE level with DZVP-MOLOPT-SR-GTH basis sets and Goedecker-Teter-Hutter pseudopotentials [90][91][92][93][94], which were implemented in the QUICKSTEP module of CP2K-5.1 [95][96][97]. Starting from optimized structures, we first heated the system to 300 K and equilibrated it for about 1 ps with a 1 fs time step, during which the Nosé-Hoover chain thermostat technique (chain length: five) was employed [98,99]. After that, 2 ps microcanonical dynamics simulations were performed. All GW and BSE calculations are performed with MOLGW-2.C [100] with cc-pVDZ basis sets used for all atoms except cc-pVDZ-PP basis sets and pseudopotentials for the Zn atom [85,101]. In evGW/BSE calculations, GW calculations with two iteration steps were found to give converged results in Table S2 in Supplementary materials, which was thus employed in all the evGW/BSE calculations. The required molecular orbitals and initial eigenvalues were calculated with the DFT method. All nonadiabatic dynamics simulations were conducted using our developed GTSH package [59]. Electronic transition density analyses on evGW/BSE results were calculated using MULTIWFN3.6 [102,103].
III. RESULTS AND DISCUSSION
As the first step to decipher excited-state properties of the C-N linked ZnPc-C 60 complex, we first optimize the two ground-state structures with different linking patterns, i.e. 5-6 and 6-6 types, at the B3LYP+D3 level. In the former, two N atoms of ZnPc are bonded to two C atoms shared by one pentagonal and one hexagonal rings of C 60 ; while, in the latter, two N atoms are bonded to two C atoms shared by two hexagonal rings of C 60 (see FIG. 1). As can be seen, these two structures are overall similar to the previously studied C−O linked 5-6 and 6-6 configurations. Additionally, the energy decomposition analysis based on natural orbitals from chemical valence method is performed with respect to both ZnPc and C 60 fragments. The results are listed in Table I. Total interaction energy E tot is composed of four kinds of contributions, i.e. E tot =E Pauli +E ele +E orb +E dis , in which E Pauli is exchange repulsion energy due to Pauli principle among fragments, E ele represents electrostatic interaction energy among fragmental charge densities, E orb is energy due to fragmental orbital mixing, and E dis is dispersion correction energy among fragments (see more details and discussion in Supplementary materials). As with the C−O linked ZnPc-C 60 dyad [35], E tot of the 6-6 configuration is larger than that of the 5-6 configuration (116.1 vs. 93.6 kcal/mol), which indicates that the 6-6 configuration is more stable than the 5-6 one. This difference is mainly originated from E orb , which is −540.2 kcal/mol for the 6-6 configuration and −517.3 kcal/mol for the 5-6 configuration. In comparison, the other three kinds of interactions (i.e. E Pauli , E ele , and E dis ), contribute less to the difference.
To elucidate excited-state properties of these two ZnPc-C 60 complexes, we perform evGW/BSE calcu- lations to obtain their excitation energies and electronic transition densities of the lowest 10 singlet excited states, as shown in FIG. 2, FIG. S2 and Table S3 in Supplementary materials. As discussed above, molecular orbitals from converged DFT calculations are input as initial wavefunctions and will not change in evGW/BSE calculations. In order to check the influence of exchange-correlation functionals on evGW/BSE results, we have tested several exchangecorrelation functionals and found that evGW/BSE calculated excited-state properties are not sensitive to the functionals used (see Tables S4 and S5 in Supplementary materials). Therefore, the B3LYP functional is chosen in all the evGW/BSE calculations. In Table S3, one can see that excitation energies from S 2 to S 10 for the 5-6 configuration range from 1.649 eV to 2.131 eV and energy difference between all the nearby states is less than 0.15 eV (i.e. 3.5 kcal/mol). Among these states, the S 2 state is of pure CT character and its electron is localized on C 60 while hole is localized on ZnPc. The S 7 and S 8 states are of mixed LE and CT character (see Table S6 in Supplementary materials). All the other states are pure LE states of either C 60 or ZnPc (see Table S3 in Supplementary materials). In addition, oscillator strengths of the lowest six excited states are very small. In contrast, excited states from S 7 to S 10 have larger oscillator strengths. As mentioned before, ZnPc is usually chosen as electron donor and the lowest LE state on ZnPc is chosen as initial populated singlet state in the present non-adiabatic dynamics simulations.
As to the 6-6 bonding configuration, the LE state of ZnPc is the lowest singlet excited state indicating that both electron and hole localize on ZnPc (see Table S3 in supplementary materials). Therefore, if this state is populated in the Franck-Condon region, there is small probability for ultrafast charge separation taking place. The same situation is also seen in the C−O bonded ZnPc-C 60 dyad studied recently [35]. According to these results, we will next focus on photoinduced dynamics of the 5-6 bonding configuration.
Despite valuable results from electronic structure calculations, the photoinduced dynamical processes of the present ZnPc-C 60 dyad remain unclear. In order to figure out the details of the photodynamics, we carried out nonadiabatic dynamics simulations based on the fewest-switches surface hopping method at the evGW/BSE level. The classical path approximation is widely used for simulating photoinduced processes without large conformational changes or bond closing/breaking [28,35,[59][60][61][62][75][76][77][78]]. In the dynamics simulations, the lowest bright LE state of ZnPc is chosen as the initial state; 300 initial conditions are randomly chosen from a pre-defined microcanonical trajectory, and for each initial condition, 200 surface hopping trajectories are propagated for 500 fs. The results are finally averaged over 300×200 trajectories.
We first plot time-dependent populations of the lowest 10 excited states, as shown in FIG. 3(a), in which weights from S 4 to S 10 are summed together since the LE state of ZnPc varies among these states due to their close energies at different initial conditions. It is clear that the population of these excited states decays to zero in an ultrafast way within ca. 400 fs. Fitting the population curve with a simple mono-exponential function gives an effective excited-state population decay rate constant of 159 fs (S 4 to S 10 ). At the same time, the S 3 weight first increases to a maximum of 0.35 at 150 fs and then decays to nearly zero at 500 fs. The S 3 population rising and decay rate constants are predicted to be 116 and 193 fs, respectively. In contrast, both S 1 and S 2 populations increase monotonously within 500 fs and do not reach their maxima. The S 2 weight increases to 0.85 while that of the S 1 state increases slowly to 0.15 at 500 fs (see FIG. 3(a)). In a similar way, their population rising rate constants are estimated to be 223 and 1386 fs, respectively. Careful analysis reveals that the ultrafast decay of the S i (i=4−10) states to the S 3 state is due to their extremely small energy differences and large nonadiabatic coupling terms, as shown in FIG. 3(c) and FIG. 4, and Table S7 in supplementary materials. As can be seen, most energy gaps E i+1 −E i (i≥3) are less than 5 kcal/mol, and the average energy gap is 1 kcal/mol (see FIG. 3(c)). On the other hand, the nonadiabatic coupling terms between the i+1 and i states (i≥3) are very strong, with average values larger than 75 ps −1 (see FIG. 4). Once arriving at the S 3 state, a slightly larger energy gap between S 3 and S 2 (an average value of 2.5 kcal/mol) delays further nonadiabatic transition to the S 2 state (see FIG. 3(b)). Nevertheless, it still results in a fast decay of the S 3 state within 400 fs because the energy gap is still small. Finally, unlike the fast population of the S 2 state, its de-population dynamics to the S 1 state is very slow due to the large energy gap between them.

Since different CT and LE (ZnPc or C 60 ) electronic states are involved in the nonadiabatic dynamics simulations, nonadiabatic transitions will drive exciton, electron, and hole transfer between the ZnPc and C 60 fragments. With the fragment-based exciton analysis method proposed in our previous works [28,35,75,76,78], we have plotted time-dependent electron and hole amounts on both the ZnPc and C 60 fragments in FIG. 5. One can see that at the beginning of the nonadiabatic dynamics simulations, electron and hole are mainly located on ZnPc with weights of 0.8, which agrees with the |ZnPc * -C 60 ⟩ exciton weight at the initial simulation time (see FIG. 6(a)). Then, both the electron and hole amounts located on ZnPc sharply decrease and become stable at 200 fs. The corresponding electron and hole population decay rate constants are estimated to be 60 and 64 fs, respectively, according to fitting the curves with exponential functions. In this process, electron and hole transfer are completely synchronous and in the same direction from ZnPc to C 60 , which is consistent with the feature of excitation energy transfer. This is also supported by FIG. 6 (a) and (b), where the |ZnPc * -C 60 ⟩ exciton decrease is accompanied by the |ZnPc-C 60 * ⟩ exciton increase (ca. 100 fs). The corresponding population decay and rising rate constants are fitted to be 60 and 54 fs, which agree very well with the above hole and electron population decay rate constant of 60 fs.
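The kind of fit used to extract these time constants can be sketched as follows; the population trace below is synthetic (generated to mimic the ~159 fs decay), since the actual trajectory-averaged data are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp_decay(t, tau):
    # Simple mono-exponential population model, P(t) = exp(-t / tau).
    return np.exp(-t / tau)

# Hypothetical averaged population of the initially excited states vs. time (fs).
t = np.linspace(0.0, 500.0, 251)
population = np.exp(-t / 159.0) + 0.01 * np.random.default_rng(0).normal(size=t.size)

tau_fit, _ = curve_fit(mono_exp_decay, t, population, p0=[100.0])
print(f"effective decay time constant: {tau_fit[0]:.0f} fs")   # ~159 fs
```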
Afterwards, the remaining electron amount on ZnPc is gradually transferred to C 60 until the end of simulations (see FIG. 5(a) and (b)); while, the hole amount, already transferred from ZnPc to C 60 , starts to move back to ZnPc at 100 fs, because of exciton transfer from |ZnPc-C 60 * ⟩ to |ZnPc + -C 60 − ⟩.
Its hole population then becomes stable at 400 fs with a rising rate constant of 111 fs. At 500 fs, both ZnPc and C 60 possess equal hole amounts revealing a mixture of |ZnPc-C 60 * ⟩ and |ZnPc + -C 60 − ⟩ (see FIG. 6(b) and (c)).
Then, it must be noted that due to the large energy gap between S 2 and S 1 , there are comparable S 2 and S 1 populations at the end of the 500 fs simulations (see FIG. 3(a)). Nevertheless, according to the electronic structure calculations, further nonadiabatic decay to S 1 is thermodynamically allowed and must happen, but on a longer timescale. Therefore, one can safely postulate that the hole localized on ZnPc will finally transfer back to C 60 and the pure LE exciton |ZnPc-C 60 * ⟩ will be eventually populated. Interestingly, in the entire dynamics simulations, no |ZnPc − -C 60 + ⟩ charge-transfer exciton from C 60 to ZnPc is observed, which is consistent with the preceding electronic structure calculations. As shown in FIG. 2, there is no such kind of CT state from C 60 to ZnPc available among the low-lying excited states.
Finally, the time-dependent exciton size is useful for a better understanding of the time-dependent excited-state properties of dyads. At the beginning of the nonadiabatic dynamics simulations, the exciton size is 5.84 Å. It is nearly unchanged (5.60 Å) during the first energy transfer process of about 50 fs, due to the local character of both the |ZnPc * -C 60 ⟩ and |ZnPc-C 60 * ⟩ excitons. After that, the |ZnPc-C 60 * ⟩ exciton with LE character hops to the |ZnPc + -C 60 − ⟩ exciton with CT character, which results in a significant increase of the exciton size from 5.60 Å to 8.58 Å at 500 fs due to charge separation.
The excited-state dynamics of the C−N bonded ZnPc-C 60 dyad is generally similar to that of the C−O bonded ZnPc-C 60 dyad. For example, for the 6-6 bonding configuration, the LE state of ZnPc is the lowest S 1 state [35]. Thus, initial photoexcitation populates this S 1 state, and its further charge separation is relatively slow and cannot complete within several hundred femtoseconds, as demonstrated by the corresponding dynamics simulations. However, for the 5-6 bonding configuration, the S 1 state is the LE state of C 60 with very small oscillator strength, while that of ZnPc is much higher in energy. Importantly, there are some CT states between the LE states of ZnPc and C 60 . As a result, one can see an obvious charge separation process during the excited-state relaxation from the higher-lying excited states [35]. In addition, the energy gap between S 2 and S 1 is relatively smaller in the C−O bonded complex than in the C−N bonded one. Thus, at 500 fs, the S 1 population is close to 0.7 in the former, while it is only 0.15 in the latter. This difference makes the exciton size first increase and then decrease in the C−O bonded complex; the decrease is not seen within 500 fs in the C−N bonded complex (see FIG. 7). In short, the different properties of the chemical bonds connecting ZnPc and C 60 in the ZnPc-C 60 dyad have remarkable influences on its excited-state properties and nonadiabatic dynamics, and thereby on its optoelectronic properties.
IV. CONCLUSION
In this work we have studied the excited-state properties and photoinduced dynamics of both the 5-6 and 6-6 bonding configurations of a ZnPc-C 60 dyad with accurate evGW/BSE calculations and related nonadiabatic dynamics simulations. It is found that different chemical bonding patterns between ZnPc and C 60 have remarkable influences on excited-state properties and subsequent nonadiabatic dynamics. In the 6-6 bonding configuration, the S 1 state is the LE state of ZnPc with large oscillator strength and is thus the initially populated excited state. However, further charge separation from this S 1 state is difficult. In stark contrast, in the 5-6 bonding configuration, the LE state of ZnPc is much higher in energy, being the S 9 state at the Franck-Condon point. Below this bright state, there are several excited states with obvious CT character between ZnPc and C 60 , or LE character on either ZnPc or C 60 . Therefore, relaxation from the higher S 9 state to the S 1 state via several intermediate states is thermodynamically allowed. These points are supported by the 500 fs dynamics simulations. In the first 50 fs, one can see an ultrafast excitation energy transfer from ZnPc to C 60 (concerted electron-hole pair transfer). This process is followed by a fast charge separation driven by hole migration back to ZnPc from C 60 within 500 fs. The final nonadiabatic transition to the S 1 state is not observed due to the limited simulation time, but is allowed thermodynamically in terms of the electronic structure calculations. In combination with our previous work [35], we have proven that different chemical bonding patterns and chemical linkers can be used to tune the excited-state properties and optoelectronic properties of covalently bonded ZnPc-C 60 dyads. Therefore, if new ZnPc-C 60 like dyads with ultrafast energy transfer properties are expected, the 5-6 bonding configuration might be preferred. Methodologically, we demonstrate that the combined evGW/BSE nonadiabatic surface-hopping dynamics method is reliable and accurate, which will encourage more theoretical simulations of ultrafast photophysical and photochemical processes of molecules, clusters, etc.
Supplementary materials: Additional theoretical methods, evGW/BSE results with different DFT wavefunctions, evGW/BSE results with different iterative cycles, additional data for averaged NAC and for the 6-6 bonding configuration, and Cartesian coordinates are available. | 6,654.2 | 2021-01-01T00:00:00.000 | [
"Chemistry",
"Physics"
] |
A Fresh View for Maxwell's Equations and Electromagnetic Wave Propagation
Equations related to wave propagation are reexamined because, in certain circumstances, the law of conservation of energy appears not to be fulfilled, even though this is usually explained with the help of Heisenberg's uncertainty principle. Recently, attempts have also been made to understand certain discrepancies in optical phenomena like diffraction or interference. The purpose of the present investigation, therefore, is to overcome some of these discrepancies by introducing constants of integration in Maxwell's equations. It turns out that the presence of vibrating strings (or stored energy) in the medium becomes essential to understand several details of the wave propagation.
Introduction
A set of Maxwell's equations [1] is fundamental in electricity and magnetism, and these equations were developed on the basis of numerous experimental data. Almost all theoretical work is based directly or indirectly on this set of equations together with the equation for the Lorentz force. Transformers, inductors, and many types of electrical motors and generators are based on these principles. Moreover, a significant property of these equations is that they give rise to the wave equation for the electric and magnetic fields, denoted by E and B, respectively. In spite of the importance of this set of equations, the origin and the basic mechanism behind these principles have not received due attention. Recently, some aspects of the Lorentz force [2] and Faraday's law [3] were evaluated by Joshi on the basis of the presence of strings in the form of a compact liquid. It turns out that fluid dynamics explains several aspects of Maxwell's equations, as mentioned earlier [4]. The purpose of the present investigation, therefore, is to examine details of these equations related to electromagnetic wave propagation and the variation of the electric field (E) with respect to the magnetic field (B) in free space.
Theoretical Developments
It is known that wave equations are obtained from Faraday's law, and for a wave propagating along the x direction they are given by [1]

$$\frac{\partial^{2} E_{y}}{\partial x^{2}} = \frac{1}{c^{2}}\frac{\partial^{2} E_{y}}{\partial t^{2}}, \qquad \frac{\partial^{2} B_{z}}{\partial x^{2}} = \frac{1}{c^{2}}\frac{\partial^{2} B_{z}}{\partial t^{2}} \quad (1a, 1b)$$

Here E_y and B_z are the strengths of the electric and magnetic fields in the Y and Z directions, respectively. The standard solution of this equation is generally given by

$$E_{y}(x,t) = E_{0}\sin(kx - \omega t) \quad (2)$$

where ω is the angular frequency of the oscillation, $k = 2\pi/\lambda$, and λ is the wavelength. In this solution of the wave equation (or equations of this type), the constant of integration is always neglected or taken as 0, and the wave propagation is explained perfectly. However, if constants of integration are added, then the solution of Equation (1a) takes a more general form, Equation (3), in which n is a positive integer.
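As a quick numerical illustration of the conventional case (the constants of integration set to zero; the wavelength and evaluation point below are arbitrary choices, not values from the paper), the following sketch checks by finite differences that the standard solution satisfies Equation (1a):

```python
import numpy as np

c = 299_792_458.0                 # speed of light (m/s)
lam = 0.5                         # wavelength in metres (arbitrary choice)
k = 2 * np.pi / lam
omega = k * c
E0 = 1.0

def E(x, t):
    # Standard plane-wave solution, Equation (2)
    return E0 * np.sin(k * x - omega * t)

# Central finite differences for the second derivatives.
x, t = 0.123, 4.56e-9
dx, dt = 1e-4, 1e-13
d2E_dx2 = (E(x + dx, t) - 2 * E(x, t) + E(x - dx, t)) / dx**2
d2E_dt2 = (E(x, t + dt) - 2 * E(x, t) + E(x, t - dt)) / dt**2

# Both sides of the 1-D wave equation should agree to high accuracy.
print(d2E_dx2, d2E_dt2 / c**2)
```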
In fact, the introduction of the constants of integration, the term containing J and the constant A, does not affect or alter the second-order partial differential equation, but they are necessary for examining it. The contribution from nλ/2 takes into account the periodicity of the motion, and its importance will be discussed later. These constants make a substantial difference, and their consequences are evaluated in the present investigation.
At the start, when x and n are both 0, the electromagnetic energy has a constant value given by A. This suggests that the medium through which the wave propagates has energy, which is in agreement with string theory, according to which space is filled with vibrating strings [4]. Moreover, several experimental data confirm that the vacuum has energy and that it can even be converted into electromagnetic energy [5]. Some patents have already been registered for converting space energy into electrical energy for practical applications [6].
Recently, it has been noted that several electromagnetic phenomena can be explained on the basis of strings behaving as a compact, non-viscous, surface-tension-free liquid [2]-[4]. From the point of view of quantum field theory, this forms a system of coupled harmonic oscillators; the oscillatory motion is clearly not limited to a single oscillator but rather constitutes a collective oscillating system of vibrating strings. A coupled oscillating system forms a vibrating line of strings [7]. It has been found that there is little difference between a continuous and a discrete line: the smaller the dimension of the vibrating elements, the more the difference between the continuous and discrete descriptions is reduced. Moreover, the system has another advantage, namely that it is flexible. The vibrational properties of the strings are not known in detail and, consequently, no precise formalism exists to estimate the vibrational properties of the system; hence the constant J needs to be introduced to capture the basic mechanism of the collective system. The negative sign is chosen so that J represents the force constant corresponding to a harmonic oscillator, or to a system of coupled oscillators. Moreover, the J-containing term indicates that every pulse of the wave contributes to the activation of the harmonic oscillator. The variation of the electromagnetic field in the x direction directly creates the compression of the system of strings; therefore, J represents the restoring force constant for the corresponding displacement. According to quantum field theory, a close association between fields and harmonic oscillations is well established. This is the reason why the J-containing term has been introduced as a constant of integration. It is worth mentioning that Equation (3) satisfies the wave equation given by Equation (1a).
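A standard textbook result (not derived in the present paper) makes the continuum statement above quantitative: for a chain of identical masses m coupled by springs of force constant J and spacing a, the dispersion relation is

$$\omega(k) = 2\sqrt{\frac{J}{m}}\,\left|\sin\frac{ka}{2}\right| \;\approx\; \left(a\sqrt{\frac{J}{m}}\right)k \quad \text{for } ka \ll 1,$$

so as the spacing a of the vibrating elements shrinks, the discrete chain becomes indistinguishable from a continuous medium with a linear, wave-like dispersion.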
The other important reason is that field quantization is based on the assumption that there exists a discrete field quantum, and therefore each field is connected to a discrete quantum oscillator. The quantum features of the fields have been extensively discussed earlier, and it is established that they can be expressed in the form of a harmonic oscillator. The Hamiltonian for the harmonic oscillator is given by [7,8]

$$H = \frac{p^{2}}{2m} + \frac{1}{2}m\omega^{2}q^{2}$$

which is used in the formalism of quantum field theory for electromagnetic fields.
Here q is a generalized coordinate and p is the momentum operator, given by [7,8]

$$p = -i\hbar\frac{\partial}{\partial q}$$

It is known that the non-interacting Hamiltonian of the electromagnetic fields participates in the creation and annihilation of particles; the corresponding operators $a^{+}$ and $a$ are given by [7]

$$a = \sqrt{\frac{m\omega}{2\hbar}}\left(q + \frac{ip}{m\omega}\right), \qquad a^{+} = \sqrt{\frac{m\omega}{2\hbar}}\left(q - \frac{ip}{m\omega}\right)$$

In this case, the Hamiltonian for the harmonic oscillator turns out to be [7,8]

$$H = \hbar\omega\left(a^{+}a + \frac{1}{2}\right)$$

Here $\hbar\omega/2$ is the zero-point energy, and it is excluded from further discussion as it is not relevant.
The creation and annihilation of particles can also be viewed as an increase or decrease in the excitation of the harmonic oscillator. Therefore, it is necessary to introduce the presence of oscillators in the system. Moreover, it is found that in the wave propagation process [7], the conversion of fields into particles (or vice versa) is necessary to conserve the total energy of the system [8]. Therefore, the constant J, related to the creation-annihilation process, has been introduced in Equation (3).
Relation between Electric and Magnetic Field
Now, let us consider electromagnetic wave motion in which the electric field varies along the x direction, and apply Faraday's law, namely

$$\frac{\partial E_{y}}{\partial x} = -\frac{\partial B_{z}}{\partial t} \quad (8)$$

By using Equations (8) and (3), we obtain Equation (9); since ω/k is the velocity of light c, Equation (9) becomes Equation (10). Here, k is the wave vector, indicating that the direction of the photon flux coincides with the direction of propagation of the wave. Equation (10) is very significant, and it differs from the conventional result because it includes an extra term incorporating J, corresponding to the creation and annihilation of particles; thus both the field and the particle approaches are taken into account. If this aspect is neglected, and if it is accepted that J = 0, then Equation (9) takes the conventional form

$$E_{y} = cB_{z} \quad (11)$$

An additional term is necessary to balance the equation, since E is directly proportional to B: the electric and magnetic fields increase (or decrease) simultaneously without any mechanism for adjusting the conservation of energy. Instead of providing a suitable explanation, this observed phenomenon is usually justified (not rigorously) with the help of Heisenberg's uncertainty principle. This aspect has already been brought to notice by Joshi [8,9]. According to quantum field theory, the free non-interacting Hamiltonian of the field plays the role in the creation and the annihilation of the particle. Therefore, the insertion of the integration constant J is necessary. Thus, at every point in space and time, the sum of the energies associated with the electric field, the magnetic field, and the photons (created or annihilated) is conserved in the radiation propagation process. Now let us examine Equation (10), which describes the flow of energy of the electromagnetic fields. As mentioned earlier, the J-containing term represents the force that excites the quantum oscillator in one dimension, originating from the vibrating string. As J is a restoring force constant, the total energy U, or the Hamiltonian of the harmonic oscillator corresponding to the vibrating string system, is given by Equation (12), where Ω is the amplitude of the vibrating system. It is not possible to estimate the exact elements of the vibrating system or the nature of the interaction between them. Moreover, the total energy U of the vibrating string system is difficult to evaluate, as part of it is stored as a buffer and is not reflected in the amplitude; only the fraction f of the energy can be associated with the amplitude (Equations (13) and (14)). Therefore, Equation (10) can be written as Equation (15). Equation (15) clearly indicates that as the electric field increases, the magnetic field also increases, but the energy is conserved by the annihilation process, which is expressed by the second term. Similarly, when the electric and magnetic fields start decreasing, particles are created, and their density becomes maximum when the electric field becomes zero. It also takes into account the possibility of energy storage in the form of an increasing excitation of the oscillator. Thus, the energy is conserved in the process, and when the pulse passes, it is stored and reused for the generation and the flow of the next pulse of the electromagnetic field. The solution of the wave equation, Equation (3), then becomes Equation (16). The second term contributes to the annihilation or creation of the particles, and the free (non-interacting) Hamiltonian is a determining factor for this process. The term A corresponds to the buffer energy of the vibrating system. Thus, the medium through which the wave is propagating is excitable and continuous, and it can also be used as a source of
buffer energy, meaning that energy can be stored and reused. In such a system, the flow of energy is easy to understand: when the electric field (or magnetic field) is oscillating, the total energy of the system is conserved without invoking Heisenberg's uncertainty principle.
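For reference, the standard field-energy bookkeeping that the above argument regards as incomplete is the textbook result (not taken from the present derivation):

$$u = \frac{\varepsilon_{0}E^{2}}{2} + \frac{B^{2}}{2\mu_{0}}, \qquad E = cB \;\Rightarrow\; \frac{\varepsilon_{0}E^{2}}{2} = \frac{B^{2}}{2\mu_{0}},$$

so the electric and magnetic contributions are equal at every instant and vanish together at the nodes of the wave; in the conventional description this oscillating energy density is simply carried along with the wave, whereas the present work attributes the local bookkeeping to particle creation and annihilation and to the excitation of the string oscillators.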
It is worth mentioning that Equation (11) is derived from Equation (8), which is based on Faraday's law. Recently, the origin of this important law has been examined on the basis of the presence of strings, and it has been concluded that a medium is required for converting the variation of the magnetic field into an electric field. This view also supports the approach presented here.
In short, the insertion of the two constants J and A into the well-accepted solution of the wave equation suggests that there must be an excitable medium through which the electromagnetic wave propagates, and that the mechanism of conversion of the free field into particles (and vice versa), based on quantum field theory, helps to conserve the energy during the entire process. This supports the presence of vibrating elements in space. It is worth mentioning that not all aspects of string theory are considered here; only the presence of vibrating elements is assumed.
Discussion
The above approach clearly indicates that when the electric field (or magnetic field) is at its maximum, the free Hamiltonian is zero and hence the rate of particle creation is zero. Meanwhile, when the intensity of the electric field starts decreasing, the operators start creating particles. From the point of view of quantum field theory, the creation operator raises the energy level of an eigenstate of the harmonic oscillator; that is, the oscillators become excited. When the electric field is zero (at nλ/2), the density of the particles created by the operators is maximum, and the harmonic oscillator is then de-excited, converting the energy back into photons. Thus, a Huygens wavefront is formed and the process of propagation continues.
This point of view is in agreement with earlier work in which some discrepancies were reported regarding the propagation of electromagnetic waves described with the help of the Huygens wavefront. Moreover, a different approach to the diffraction phenomenon has been suggested, in which the separation between two wavefronts is not infinitesimally small but is found to be λ/2, where the density of photons or particles is maximum. This has successfully explained the observed diffraction pattern for a circular aperture without ad hoc assumptions [9] and can be generalized to similar interference and diffraction patterns.
The other important aspect is that the present approach also helps in understanding the wave motion when it is expressed in a spherical polar coordinate system. The solution of the wave equation in polar coordinates is given by [10]

$$E(r,t) = \frac{K}{r}\sin(kr - \omega t) \quad (17)$$

where K is a constant corresponding to the intensity of the electric field.
According to Equation (17), when r = 0 the amplitude becomes infinite, and the wave propagation at the origin cannot be explained. This difficulty is overcome with the present view, as at time t = 0 the propagation starts with the particle nature of the radiation [10]. The details of the dual nature of the radiation and its validity during the propagation of the wave have been pointed out earlier. The new point of view presented here satisfactorily explains several optical phenomena such as diffraction, the role of the Huygens wavefront, Kirchhoff's correction factor, etc. [9]. The foundation given in this paper and in earlier publications supports the inference expressed by Einstein that radiation does not only propagate in the form of a wave but is also self-trapped, giving origin to the quanta [11].
Conclusion
In the present investigation, wave equations are reexamined by introducing constants of integration. It is noted that the equation E = cB needs an additional term to balance the energy. Moreover, the presence of the constants strongly suggests that wave propagation needs an excitable, continuous medium in which energy can be stored and reused. The situation is explained with the help of a system of vibrating strings in the form of a compact liquid.
The molecular machinery of meiotic recombination
Meiotic recombination, a cornerstone of eukaryotic diversity and individual genetic identity, is essential for the creation of physical linkages between homologous chromosomes, facilitating their faithful segregation during meiosis I. This process requires that germ cells generate controlled DNA lesions within their own genome that are subsequently repaired in a specialised manner. Repair of these DNA breaks involves the modulation of existing homologous recombination repair pathways to generate crossovers between homologous chromosomes. Decades of genetic and cytological studies have identified a multitude of factors that are involved in meiotic recombination. Recent work has started to provide additional mechanistic insights into how these factors interact with one another, with DNA, and provide the molecular outcomes required for a successful meiosis. Here, we provide a review of the recent developments with a focus on protein structures and protein–protein interactions.
Introduction
Meiosis, a specialised form of cell division, is key to generating the diversity of life. This process, culminating in the generation of haploid gametes such as eggs, sperm, or spores, facilitates subsequent syngamy, the fusion of these gametes, during fertilisation to create a new euploid organism (Figure 1A). The reduction in genome size during meiosis is achieved through a unique sequence of events: a single round of DNA replication followed by two distinct rounds of chromosomal segregation. Meiosis I segregates homologous chromosomes, while meiosis II, akin to mitosis, segregates sister chromatids.
Physical linkages between chromosomes allow tension to be generated across the bivalent, facilitated by the forces generated by the spindle, and ultimately satisfy the spindle assembly checkpoint [1]. Therefore, such linkages are essential for the faithful segregation of chromosomes. Sister chromatids, segregated during mitosis and meiosis II, are linked by cohesive cohesin that is loaded during DNA replication. During meiosis I, homologous chromosomes do not have intrinsic linkages, and therefore inter-homologue connections must be established prior to the chromosome segregation event during meiosis I, in order that homologous chromosomes be properly sorted and segregated at anaphase I. This will ensure the formation of viable gametes, and subsequent healthy euploid offspring.
Most sexually reproducing species use recombination to link homologous chromosomes, initiated through the programmed formation of double-stranded DNA breaks (DSBs). This mechanism solves the mechanistic conundrum of how to organise homologous chromosomes in meiosis I and simultaneously introduces genetic diversity by reshuffling parental haplotypes. Interestingly, some organisms, like the model nematode Caenorhabditis elegans, have decoupled homologous pairing from recombination [2], while others, such as male fruit flies [3], eschew recombination entirely. However, this review will concentrate on meiotic recombination during 'canonical' meiosis I, a process common to fungi, plants, and vertebrates.
One defining feature of meiosis I is the formation of a distinct chromosomal architecture: a proteinaceous axis from which loops of chromatin emerge. DSBs, essential for recombination, are introduced in these DNA loops, with the break-forming machinery localised to the axis [4], initiating the process of pairing followed by synapsis. These breaks are repaired using the homologous chromosome rather than the sister chromatid, initiating the process of synapsis, defined by the progressive development of zipper-like connections forming along the chromosome axis. Certain repair intermediates mature into crossovers (COs), the critical junctures where homologous chromosomes exchange arms [5]. Cohesive cohesin complexes between sister chromatids distal to CO sites then provide the necessary physical links between homologues.
This minireview aims to concisely delineate our current understanding of the molecular machinery with an emphasis on recent advances and, in particular, protein-protein interactions vital for meiotic recombination (summarised in Figure 2).
The meiotic axis
At the onset of meiotic prophase, chromosomes undergo a morphological change, as a proteinaceous axis forms along their length, from which chromatin loops emerge [13]. For instance, in Saccharomyces cerevisiae (budding yeast), loops are ∼25 kb long [14], while in mice, they range from 1 to 2 Mb [15]. The precise structural organisation of the meiotic axis has not yet been fully elucidated, but it generally comprises at least Rec8-containing cohesin, condensin, a Red1-type axial filament protein (such as Red1 in budding yeast or SYCP2 in mice [16,17]), and one or more HORMA domain proteins such as Hop1 in yeast. The axis is believed to play a pivotal role in determining the proper placement and number of meiotic DSBs, modulating DNA repair, particularly in favouring inter-homologue bias, and in the formation of the synaptonemal complex (SC).
While the composition of meiotic cohesin varies between species, there seems to invariably exist at least one meiosis-specific kleisin variant, Rec8 [18]. It is thought that Rec8 cohesin contributes to the formation of chromatin loops, through loop extrusion activity [19], but that it also directly recruits Red1-type proteins. Red1 co-IPs with Rec8 [20], and co-localises with cohesin in ChIP-chip [4] and super-resolution microscopy [21], but formal proof of a direct interaction is lacking. Red1/SYCP2 contains an N-terminal globular domain (consisting of an ARM-like and PH domain [22]) of unknown function (Figure 3A). The C-terminal region of Red1/SYCP2 contains a coiled-coil region with the ability to form both tetramers and higher-order filaments [17]. Red1 is presumed to recruit Hop1-like HORMA domain proteins. HORMA domains are dynamic domains that can adopt two topologically distinct conformations, open and closed. The transition to the energetically more stable closed state is catalysed by the interaction with a closure motif [27]. Unlike other HORMA domain proteins, the Hop1-like meiotic HORMA domains contain self-binding cis closure motifs in their C-terminal region (Figure 3A) [28]. Meiotic HORMAs can also interact with closure motif(s) in trans (for example, in Red1-like proteins, Figure 3A, right), but this presumably requires an active remodelling of the HORMA domain through the AAA+ ATPase Pch2 (TRIP13 in mammals) [29]. Mammals have two meiotic HORMA domain proteins, HORMAD1 (Figure 3A, left) and HORMAD2 [30], and HORMAD2 has been shown to bind to the SYCP2 closure motif [17].
Hop1-like proteins in many species (though notably not in mammals) contain an additional chromatin binding domain (CBR) consisting of at least a winged helix-turn-helix domain, in some cases combined with a PHD domain [31]. In yeast, this region can specifically bind to nucleosomes, which provides a second recruitment pathway for Hop1 and Red1 [31]. In budding yeast, one function of the CBR appears to be to enhance DSB formation on small chromosomes through localisation of Hop1 to nucleosome-rich islands [32]. Inversely, the removal of Hop1 is an important mechanism for the suppression of DSB formation in, for example, rDNA repeat regions [33,34]. Thus, regulating levels of chromosomal Hop1 locally appears to be a fundamental mechanism for regulating DSB formation.
Meiotic DNA break formation and initial repair
DSBs, essential to initiate meiotic recombination, are catalysed by the topoisomerase-like enzyme Spo11. The complexity of DSB formation is underscored by its dependence on numerous additional factors; at least nine proteins (in addition to Spo11) in budding yeast are needed for DSB formation [35]. This regulatory framework likely reflects a balance between preventing genome instability due to uncontrolled DSB formation and ensuring sufficient breaks for reliable homologue linkage. Spo11 is similar to the TopoVI family of type II DNA topoisomerases, which require an 'A' subunit (here, Spo11) and a 'B' subunit for full functionality [36]. De Massy and co-workers discovered the Spo11 'B' subunit, TOPOVIBL, in mice [37], while at the same time the Grelon laboratory reported the discovery of the plant Spo11 'B' subunit, MTOPVIB, which was found to bind to both plant Spo11 proteins [38]. These discoveries allowed the realisation that a 'B' subunit in budding yeast is encoded in the Rec102 protein [37].
Recent work made use of a recombinant yeast Spo11 'core complex' (Spo11, Rec102, Rec104, and Ski8), molecular modelling and mass spectrometry to confirm the role of Rec102 as a 'B' subunit that functions together with Rec104 [39]. Importantly, this work also showed that the Spo11 complex contains only one copy of the Spo11 subunit; two catalytically active Spo11 subunits would be required to break the backbone of double-stranded DNA. This is in line with the idea that a key role for the additional Spo11-associated factors is to accommodate the dimerization (or multimerization) of Spo11 to activate it [40,41]. A recent breakthrough from the Keeney laboratory has taken the work with the recombinant Spo11 complex a step further and revealed the cryoEM structure of the Spo11 core complex in complex with dsDNA (Figure 3B) [24]. Ski8, canonically involved in regulating the RNA exosome, 'moonlights' as part of the yeast Spo11 core complex, where it interacts with Spo11 through the same motif that is also found in Ski3 [42,43], but the role of Ski8 in the Spo11 complex is thought to be restricted to yeasts.
What types of DNA sequence are cleaved by Spo11 complexes? Several extrinsic factors guide the Spo11 machinery to DNA break 'hotspots'. This includes the concentration of axial proteins (see above), the chromatin state [44] and post-translational modifications on nucleosomes (reviewed in [45]). Recombinant Spo11 core complexes have a preference for binding to bent DNA [39]. Consistent with this, in vivo it was also observed that Spo11 has a preference for sequences that match a DNA bending site. Moreover, the periodicity of the break sites observed is consistent with Spo11 cutting on the same face of underwound DNA. Finally, DSB sites correlated with TopoII binding sites, strongly indicating a role for topological stress in DSB site preference [46]. Additional factors play a further role in modulating DSB site selection. The PHD domain protein Spp1, canonically part of the COMPASS methyltransferase complex [47], also targets the meiotic DSB forming machinery to promoter regions through an interaction with H3K4me3 nucleosomes and the Spo11 accessory factor Mer2 [48,49]. In vertebrates, the protein PRDM9 recognises certain DNA sequences via a C-terminal Zn-finger array and also targets the DSB machinery to these loci [50][51][52].
The association of the Spo11 core complex with the meiotic axis is a key aspect of its functionality (Figure 1C). This interaction is thought to be facilitated by Mer2, a protein capable of binding directly to Hop1 within the chromosome axis [7]. The mammalian ortholog of Mer2, IHO1, also binds directly to the axial protein HORMAD1 [53], and this is facilitated by DDK phosphorylation of the C-terminus of IHO1 [54], consistent with the previously described role of DDK phosphorylation of Mer2 [55][56][57][58]. Initially, Mer2 was identified as a component of a complex alongside Rec114 and Mei4, termed the RMM complex [59]. Rec114 and Mei4, including their mammalian counterparts REC114 and MEI4, form a stoichiometric 'RM' complex characterised by two Rec114 molecules bound to Mei4 [60]. In mice, the factor ANKRD31 was shown to be a direct interactor of REC114, necessary for normal DSB patterning and essential for recombination in the X/Y pseudoautosomal region (PAR) [61,62].
The interaction between Mer2 and the RM complex adds a layer of complexity to this system. Experiments have shown that in yeast, both Mer2 and the Rec114-Mei4 complex can independently form nucleoprotein condensates on DNA in the presence of a crowding agent [60]. Interestingly, mutations impairing Mer2's ability to bind DNA result in the loss of in vivo foci formation and a subsequent decrease in Spo11-induced DSBs [60]. Recent studies have demonstrated that in mice, IHO1 can bind directly to the REC114-MEI4 complex, even in the absence of condensate formation [60,63]. This was also shown for yeast, but the assembly showed low affinity [60]. These discrepancies could come from the differing need for specific post-translational modifications in different species, or might indicate that the stoichiometric mouse RMM complex represents an intermediate stage in the formation of higher-order nucleoprotein condensates. How is the Spo11 core complex recruited to the RMM complex? Rec102 and Rec104 are known to bind with Rec114, as established through yeast two-hybrid (Y2H) assays [43]. The importance of this interaction is underlined by the observation that mutations in Rec114's N-terminal PH domain disrupt its association with Rec102 and Rec104, as seen in Y2H assays. This disruption is associated with a decrease in Spo11-initiated DSB formation [60]. Further elucidating these interactions, De Massy, Robert, Kadlec, and colleagues have recently demonstrated a direct physical connection between the C-terminus of TOPOVIBL and the N-terminal PH domain of REC114 in mice. Disrupting this interaction results in a loss of DSB formation in female mice and delayed formation in males [64].
Interestingly, both ANKRD31 [61,62] and IHO1 [63] bind to the PH domain of REC114 in a mutually exclusive manner. This presents an apparent paradox for the function of the REC114 PH domain in mice. A more complex assembly might occur through a series of compatible or cooperative interactions. Considering that IHO1 and Mer2 both exist as tetramers and the REC114-MEI4 complex shows a 2:1 stoichiometry, a single IHO1 tetramer might ostensibly recruit four REC114-MEI4 complexes. This arrangement would leave the four PH domain binding sites available for ANKRD31 and TOPOVIBL.
The MRX complex (Mre11, Rad50, Xrs2; Figure 3C) is required for the first steps in the DNA damage response and for telomere maintenance in mitotically dividing cells; it is also required for the creation of meiotic DSBs in budding yeast [65,66] and in C. elegans [67], but not in plants [68] or fission yeast [69]. The physical connection between MRX and the Spo11 complex appears to also be mediated by Mer2. Mer2 was shown to interact with Xrs2 in a Y2H experiment [43], and Mer2 was recently shown to interact with Mre11 in a manner dependent on several conserved N-terminal residues in Mer2 [7]. Similarly, in Arabidopsis, Mer2 (PRD3) also interacts with Mre11 [70].
Regarding the regulation of the above-described interactions and their precise roles in Spo11 activation, our understanding, particularly of the latter, is still developing. It is anticipated that future structural studies, biochemical assays, and genetic analyses will provide deeper insights. We do have more information about the regulation of these interactions. For instance, Mer2 requires phosphorylation by both Cdk and DDK kinases at its N-terminal region to recruit Rec114 and Mei4, which is essential for DSB formation [4,56,71]. This phosphorylation might enhance Mer2's binding to the Rec114 PH domain. Additionally, the recent discovery that a Mer2 residue crucial for Mre11 interaction undergoes significant SUMOylation in meiosis [72] suggests that SUMOylation might be a key regulator of Mer2's interaction with the MRX complex.
DNA repair and inter-homologue bias
There are many excellent reviews on the detailed mechanisms of homologous recombination [73][74][75][76]. Briefly, once DSB resection has been initiated by Mre11 and Sae2 (CtIP), long-range resection occurs via Exo1 (EXO1), generating long tracts of ssDNA. This ssDNA is initially coated in RPA, before it is exchanged for one of two recombinases: Rad51, which is active in both the soma and the germline, or Dmc1, which is a germline-specific recombinase. ssDNA coated with recombinases is known as a presynaptic filament (Figure 3C), and these filaments are competent to invade dsDNA (generating a displaced ssDNA) to interrogate these regions for sequence homology. In budding yeast, the meiosis-specific factors Mei5-Sae3 promote the exchange of RPA for Dmc1 [77]. Dmc1 seems to localise preferentially to the cut ends of resected DNA, and Rad51 to the opposite end [78]. Hop2-Mnd1 promotes the strand exchange activity of Dmc1/Rad51 [79,80].
The replicated sister chromatid is an ideal template to repair DSBs, with repair from the sister in somatic cells being some 4-fold more frequent than from the homologue [92]. However, inter-sister recombination events are non-productive for the formation of inter-homologue COs, and during meiosis repair from the homologue is ∼5-fold more frequent than from the sister [93,94]. This inversion of DNA repair frequency is known as inter-homologue bias.
The axial proteins play an important role in the establishment of inter-homologue bias. Removal of Red1 reverts the meiotic bias and gives rise to a mitotic-like DNA repair [95]. The S/T kinase Mek1 is recruited to the axis in response to DSB formation, by binding to the ATM/ATR phosphorylation site on Hop1 [96][97][98][99]. Mek1 phosphorylates both Rad54 and Hed1, which attenuate Rad51 activity [100,101]. How might this enable Mek1 to contribute to inter-homologue bias? One model proposes that Mek1 kinase activity is spatially restricted to the axis, and DNA repair is suppressed in a zone of influence around the initial break site. This zone of DNA repair suppression includes the proximal (and aligned) sister chromatid, but the homologous chromosome is presumably outside this Mek1 sphere of influence [102]. One issue with this model is that Mek1 also phosphorylates global targets, especially the transcription factor Ndt80, which prevents binding to target sequences and thus prevents progression through meiosis until DNA damage has been resolved [103,104].
ZMM proteins in crossover formation
The formation of germline COs is crucial in most organisms to establish the specific physical linkages between homologous chromosomes necessary for satisfying the spindle assembly checkpoint. However, COs are generally detrimental in somatic cells and are thus typically disfavoured [105]. Nascent DNA repair intermediates are often disassembled by the STR (Sgs1-Top3-Rmi1) complex (Figure 1C). A group of meiosis-specific proteins, collectively known as ZMM, play a pivotal role in stabilising DNA repair intermediates and channelling them towards pathways more likely to result in COs (refer to Table 1 for details). These ZMM proteins were identified due to the shared impact of mutations on CO formation and synapsis [121,122]. We will briefly explore what is currently known about the different ZMM factors.
Zip2, Zip4, and Spo16 together form a complex known as ZZS [9]. Within this complex, Zip2 and Spo16 interact to form a heterodimer [116], structurally akin to the XPF-ERCC1 nuclease (Figure 3E), albeit lacking endonuclease activity [9,116]. In vitro studies reveal that the Zip2-Spo16 complex has an affinity for DNA, particularly structured or bent DNA forms. Zip4, characterised by its TPR repeat structure, binds to the N-terminal region of Zip2 [9]. TPR repeat proteins are often structural scaffolds that interact with a wide range of peptide motifs [123]. Consistent with this function, it was found that Zip4 not only interacts with the axial protein Red1 in Y2H assays but Red1 also shows strong enrichment in Zip4 IP-MS experiments [9]. Furthering our understanding, recent work has shown that Zip4 also directly binds to central element proteins of the SC, specifically through Ecm11 [124]. This study from the Borde laboratory is particularly significant as it for the first time elucidates the physical connection between ZMM proteins and the central element of the SC.
Mer3 is a helicase with many extra domains beyond its helicase core. It is most closely related to the spliceosomal RNA helicase Brr2 [11,125]. The helicase activity of Mer3 has previously been suggested to expand nascent D-loops, thus stabilising them. Indeed, in vitro Mer3 clearly has a strong preference for D-loop DNA [11,126]. The preference for D-loop DNA binding suggests that Mer3 may bind to early recombination intermediates. This is supported by in vivo data showing Mer3 foci forming early in meiotic prophase [127], with a higher number of foci than the COs that subsequently form [128].
In vivo, mutations that abrogate the helicase activity of Mer3 result in mild CO phenotypes, in contrast with the deletion of Mer3 [126,129]. In the spliceosome, the extra domains of Brr2 contribute to protein-protein interactions, and it seems likely to be similar for Mer3; the Ig-like domain of Mer3 contributes to the direct binding of Mlh1-Mlh2 (MutLβ) [126]. The Mer3-MutLβ complex functions to constrain D-loop extension through binding to, and inhibiting, Pif1 helicase, thus reducing the size of gene conversion tracts [126,130]. We recently discovered that Mer3 can also bind to the meiotic recombinase Dmc1, and to the Top3-Rmi1 complex, which is involved in the disassembly of DNA repair intermediates [11]. It is currently unclear whether Mer3 has further direct physical connections to the ZMM proteins, or if this is mediated through DNA substrates.
Msh4 and Msh5, collectively known as MutSγ, form a heterodimer that is structurally and functionally akin to the bacterial DNA mismatch repair factor MutS, which is characterised by its ring-like structure [118]. In vitro studies demonstrate MutSγ's preference for binding to double Holliday junctions (dHJs), though it generally exhibits high affinity for a variety of DNA repair intermediates [119,120]. This binding is believed to physically entrap two duplexes of double-stranded DNA, thereby stabilising the recombination intermediate. However, super-resolution microscopy data suggest that the Msh4/5 complex may only embrace one dsDNA in the recombination intermediate [131].
After they have been established, dHJs need to be resolved prior to the removal of cohesive cohesin from chromosomal arms at anaphase I. The resolution of dHJs can result in either non-crossover (NCO) or CO formation. In most model organisms, the majority of meiotic COs are generated through the activity of the MutLγ endonuclease, a complex of Mlh1 and Mlh3, the activity of which exclusively generates COs [132]. MutLγ is not a structure-specific endonuclease [133], though it does preferentially bind Holliday junctions [134]. How then does MutLγ only generate COs? Two recent studies from the Hunter and Cejka laboratories revealed that MutLγ endonuclease activity is stimulated in vitro by EXO1, PCNA, and RFC [135,136]. These findings lead to a model which proposes that the asymmetry of PCNA retained at joint molecules might provide a signal that stimulates MutLγ endonuclease to generate COs.
Significant gaps in our understanding of the function of ZMM proteins remain, leaving fundamental questions unanswered. Key among these are the mechanisms by which specific DSB sites are 'selected' by ZMM proteins, the exact order of binding events among these proteins, and the intricate details of how the temporal and spatial organisation of the ZMM interactome is controlled, particularly in relation to post-translational modifications.
Synapsis and crossover distribution
In the study of meiosis across a broad range of organisms, a common observation is the simultaneous occurrence of CO formation and the physical 'zippering', or synapsis, of homologous chromosomes. This process is mediated by the SC, a structure integral to this pairing. Synapsis typically begins at DSB sites and progresses along the chromosomal axis [137]. The COs, as discussed earlier, occur in the context of the chromosome axis, which connects to and forms the axial element of the SC. The intricate structure, function, and implications of the SC in disease have been comprehensively reviewed recently [115]. The SC is composed of three gross morphological elements: the central element, which runs along the midline of the SC; the axial element, which is a remodelled meiotic axis in the context of the SC; and the transverse filaments, which link the axial and central elements. Numerous recent structural studies from the Davies laboratory have provided insight into the detailed organisation of the SC. Highlights include the revelation that part of the central element can polymerise by itself, forming intermediate filament-like structures [138], and details of the tetramerization regions of SYCP1 that form the gross structural arrangement of the transverse filaments [26] (Figure 3F).
The distribution of COs, a topic of current interest in meiotic research, was thoroughly reviewed in this journal [139]. However, a brief summary is pertinent. In most organisms, CO distribution is not random. The occurrence of a CO at one locus typically reduces the probability of another CO nearby, a phenomenon termed 'crossover interference'. The ZMM proteins are primarily responsible for generating these interfering, or class I, COs. Among other factors, a group of key regulators in this process are RING E3 ligases belonging to two related families, Zip3 and HEI10. In S. cerevisiae only Zip3 is present [6], whereas plants and Sordaria macrospora only have the HEI10 member [140]. Mammals have both HEI10 and the Zip3-related RNF212 [141,142] (and the paralog RNF212B) (Table 1). On synapsed chromosomes, HEI10 forms foci that exhibit 'coarsening', where their size increases as their number decreases [117,143,144]. The mechanisms and regulatory processes behind HEI10 coarsening are currently under active investigation, promising to unveil further insights into the complex orchestration of CO distribution, and perhaps offering the possibility of exogenously manipulating CO numbers.
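The qualitative behaviour, fewer but larger foci over time, can be illustrated with a deliberately crude toy simulation. This is not one of the published coarsening models, and every number below is arbitrary: foci exchange material through a shared pool, larger foci recapture slightly more than they release, and foci that shrink below a threshold dissolve.

```python
import numpy as np

rng = np.random.default_rng(1)

n_foci = 30
sizes = rng.uniform(0.5, 1.5, n_foci)   # arbitrary initial focus sizes
growth_bias = 0.5                        # larger foci absorb material faster

for _ in range(2000):
    # Material detaches from every focus and re-attaches preferentially
    # to larger foci (a crude stand-in for diffusion-mediated exchange).
    released = 0.01 * sizes
    sizes = sizes - released
    weights = sizes ** (1 + growth_bias)
    sizes = sizes + released.sum() * weights / weights.sum()
    # Foci that shrink below a threshold dissolve completely.
    sizes = sizes[sizes > 0.05]

print(len(sizes), sizes.mean())   # fewer, larger foci than at the start
```

More realistic treatments couple the foci through explicit exchange of HEI10 along the synapsed chromosome, which this sketch deliberately ignores; it only captures the winner-takes-more character of coarsening.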
Conclusions and outlook
Meiotic recombination is an essential process, required at the organismal level to facilitate the proper segregation of homologous chromosomes, and at the species level to continually generate new allele combinations. The fundamental mechanism of meiosis, the breaking and subsequent modified repair of the genome, is a high-risk strategy. Ensuring the necessary outcome without compromising genome integrity requires the temporal and spatial coordination of a variety of meiosis-specific factors that modify and act in concert with somatic factors.
One of the challenges of studying any cellular process is pleiotropic mutant effects, especially with the use of deletions. Thus, one goal must be the generation of separation-of-function mutants. The recent protein structure prediction revolution, spearheaded by AlphaFold2 [145], has considerably lowered the barriers to high-resolution structural information necessary for point mutant design. From this, separation-of-function mutants can be used to study the details of meiotic recombination, as has been demonstrated recently [63,108,110]. At the time of writing, advanced prediction algorithms like AlphaFold2 do not yet possess the capability to model post-translational modifications, small molecule ligands, or nucleic acids. However, given the rapid advancements in this field, it is plausible to anticipate that these features will be integrated shortly. Far from rendering in vitro biophysical and biochemical studies obsolete, the advent of AlphaFold2 underscores their importance. These studies are crucial not only for validating the predictions of such algorithms but also for providing detailed input for complex components and stoichiometries, which are essential for accurate modelling.
Perspectives
• Meiotic recombination is at the very centre of the continuation and diversity of eukaryotic life.
• It will be necessary to explore the relationships between different subcomplexes of the meiotic machinery and understand the contributions made by various post-translational modifications.
• Large-scale biochemical reconstitutions will explore the role of each meiotic factor in a reductionist approach, while CryoET will provide detailed images of meiotic machines in situ.
Open Access
Open access for this article was enabled by the participation of Max Planck Digital Library in an all-inclusive Read & Publish agreement with Portland Press and the Biochemical Society under a transformative agreement with MPDL.
Figure 1. Overview of the key stages in meiosis. (A) Cartoon overview of meiosis in the context of the generation and continuation of eukaryotic life. On the right-hand side, cartoon chromosomes are considered to be homologues. The stages of meiosis I are shown as DSB formation, crossover formation and the segregation of homologues. The outcome of meiosis II is shown at the bottom as four genetically distinct haploid gametes. (B) Inset of meiotic DSB formation. The axial proteins Hop1 and Red1 recruit the RMM complex proteins. The RMM proteins, together with the MRX, recruit and activate the Spo11 core complex that catalyses meiotic DSB formation in loops of chromatin emerging from the axis. (C) The ZMM group of proteins functions to promote meiotic crossover formation by antagonising the activity of anti-crossover factors such as the STR (Sgs1-Top3-Rmi1) complex.
Figure 3. Examples of key experimental protein structures of the meiotic machinery. (A) Structures of meiotic axis proteins. Left, X-ray structure of human HORMAD1 (Hop1 in budding yeast); the N-terminal HORMA domain of HORMAD1 physically entraps the C-terminal closure motif (red) due to the movement of the safety belt (maroon) [23]. Right, the crystal structure of the mouse SYCP2 N-terminus consisting of the ARM-like (ARML) and PH domains [22]; C-terminal to the PH domain is a closure motif [17]. (B) CryoEM structure of the S. cerevisiae Spo11 core complex (Spo11, orange; Rec102, pale orange; Rec104, pale yellow; Ski8, grey) in complex with dsDNA (pink); the catalytic tyrosine of Spo11 (Y135) is highlighted proximal to the sugar-phosphate backbone of the dsDNA [24]. (C) CryoEM structure of human RAD51 bound to single-stranded DNA (pink) [25]. (D) Crystal structure of the human SYCP1 (Zip1 in S. cerevisiae) αN-end head-to-head assembly region [26].
Table 1. Core recombination proteins. The proteins discussed in this minireview are described here and, where known or present, the orthologous proteins from S. cerevisiae, Mus musculus, and Arabidopsis thaliana are shown. 1 As outlined in the text, Zip1 is functionally a 'ZMM' but also the major component of the transverse filament of the synaptonemal complex.
An Innovative Compact Split-Ring-Resonator-Based Power Tiller Wheel-Shaped Metamaterial for Quad-Band Wireless Communication
A split-ring resonator (SRR)-based power tiller wheel-shaped quad-band ε-negative metamaterial is presented in this research article. This is a new compact metamaterial with a high effective medium ratio (EMR) designed with three modified octagonal split-ring resonators (OSRRs). The electrical dimension of the proposed metamaterial (MM) unit cell is 0.086λ × 0.086λ, where λ is the wavelength calculated at the lowest resonance frequency of 2.35 GHz. A dielectric RT6002 material of standard thickness (1.524 mm) was used as the substrate. The Computer Simulation Technology (CST) Microwave Studio simulator shows four resonance peaks at 2.35, 7.72, 9.23 and 10.68 GHz with magnitudes of −43.23 dB, −31.05 dB, −44.58 dB and −31.71 dB, respectively. Moreover, negative permittivity (ε) is observed in the frequency ranges of 2.35–3.01 GHz, 7.72–8.03 GHz, 9.23–10.02 GHz and 10.69–11.81 GHz. Additionally, a negative refractive index is observed in the frequency ranges of 2.36–3.19 GHz, 7.74–7.87 GHz, 9.26–10.33 GHz and 10.70–11.81 GHz, with near-zero permeability noted in the vicinity of these frequency ranges. The effective medium ratio (EMR), an indicator of medium effectiveness, of the proposed MM is an estimated 11.61 at the lowest resonance frequency of 2.35 GHz. The simulated results of the anticipated structure are validated by processes such as array orientation, HFSS simulation and an ADS equivalent electrical circuit model. Given its high EMR and compact dimensions, the presented metamaterial can be used in S-, C- and X-band wireless communication applications.
Introduction
A metamaterial is an assembly of artificial physical structures designed to achieve advantageous and uncommon electromagnetic properties. The effective properties of metamaterials are defined and measured in terms of permittivity (ε) and permeability (µ) [1,2]. A hypothetical ε-negative and µ-negative metamaterial, termed a DNG or LHM metamaterial, was introduced in 1968 by the Russian physicist Victor Veselago [3]. The unique properties of this metamaterial have drawn the attention of scientists all over the world for various applications in the microwave frequency range [4][5][6]. Nowadays, microwave-based applications are used in filtering [7], hidden cloaking [8], SAR reduction [9], absorber design [10], bandwidth enhancement [11], etc. A unit cell by itself does not act as a complete metamaterial; rather, a metamaterial is a systematic periodic array of metal-dielectric-metal or dielectric-metal unit cells upon a host substrate [12]. An S-shaped metamaterial with an EMR of 4.8 was designed for sensing applications in the microwave range [13]. A dual-band flexible metamaterial was designed on a nickel aluminate (NiAl2O4) substrate with a 42% aluminum concentration and dimensions of 12.5 × 10 mm², covering the X and Ku bands [14]. Recently, a metamaterial was reported that contained a rectangular-shaped SRR. This metamaterial was utilized to sense concrete, temperature and humidity [15]. Islam et al. in [16] introduced an SNG metamaterial that shows triple-band resonance for microwave applications. Moreover, Smith et al. proposed a three-dimensional metamaterial built on thin wires, along with a split-ring resonator [17]. In numerical simulation, the MM exhibited a double-negative characteristic with a wideband spectrum. A triple-band polarization-dependent MM with dimensions of 8 × 8 mm² was designed on an RT6002 substrate and resonated at 0.92 GHz, 7.25 GHz and 14.83 GHz, covering the S, C and Ku bands [18]. A tri-band MM with dimensions of 10 × 10 mm² and a Greek key shape was designed on an RT 5880 dielectric. In numerical simulation, it showed triple resonance peaks at 2.40, 3.50 and 4.0 GHz [19]. An epsilon-negative, delta-shaped metamaterial comprising an SRR (square ring resonator) exhibited tri-resonance crests that covered the C and X bands [20]. Another triple-band metamaterial with dimensions of 5 × 5 mm² was presented by Liu et al. in 2016 [21] with an RCER (reformed circular electric resonator). This MM, with a low (5.45) effective medium ratio (EMR), was resonant in the frequency ranges of 9.70 GHz to 10.50 GHz and 15 GHz to 15.70 GHz.
A different metamaterial with a pie-shaped metallic resonator surrounded by an SRR was presented in [22]. This tri-band MM was designed on a substrate with dimensions of 8 × 8 mm² and covered the microwave S, C and X bands. An SRR-based triple-band metamaterial was designed with a double circular ring [23]. This multi-unit-cell-based MM was resonant at 5.6 GHz for Wi-MAX and 2.45 GHz for WLAN. In 2019, Almutairi et al. [24] designed a metamaterial based on a CSRR (complementary split-ring resonator) with dimensions of 5 × 5 mm². It showed resonance at 7.5 GHz with an EMR of 8. Moreover, an SNG metamaterial with dimensions of 5 × 5 × 1 mm³, comprising a concentric ring along with a cross line, was designed on an FR-4 substrate [25]. It exhibited dual resonance peaks at 13.9 GHz and 27.5 GHz and was used to enhance the performance of a microstrip transmission line. A metamaterial was designed on an elliptical graphene nanodisk with a periodic pattern on a thin SiO2 dielectric layer, as reported in [26]. Recently, two ceramic dielectrics were synthesized using MGa2O4 (M = Ca, Sr) and LiF to enhance the gain and performance of antennae [27,28]. An MM was designed using critical coupling at the gaps of two SRRs for total broadband transmission of electromagnetic (EM) waves in a C-band application [29]. A cadmium sulfide (CdS) nanocrystalline coating with conducting polyaniline was designed to synthesize polyaniline-coated CdS nanocomposites, characterized by UV-vis absorption [30]. In 2022, Amali et al. designed a nanocomposite using a potentiostatic method, which offered excellent electrocatalytic activity for nitrite oxidation [31].
In this research article, we present a new metamaterial that is an aggregation of three modified octagonal rings, along with a split-ring resonator. This power tiller wheel-shaped MM is compact in size, with an EMR of 11.61. In numerical simulation, it exhibits quad-band resonance peaks at 2.35, 7.72, 9.23 and 10.68 GHz, covering the S, C and X bands. Moreover, it also exhibits negative permittivity (ε) and a negative refractive index (n), with simultaneous near-zero permeability (µ). Such characteristics can be applied to realize various electronic components with different features and utilities. The main aim of this simple but novel design is to target versatile uses in wireless communication. The simulated results are verified by validation processes, confirming the reliability, consistency and efficiency of the proposed metamaterial. The ADS simulated results using a circuit model and the Ansys HFSS 3D high-frequency structure simulator results show excellent agreement with the CST results.
Design Parameters of the Metamaterial and Simulation Setup
Figure 1a shows the front view of the unit cell, which is labeled with symbols. It is a new combination of three different octagonal rings surrounded by a split-ring resonator (SRR). The popular dielectric Rogers RT6002, with dimensions of 11 × 11 mm² and a thickness of 1.524 mm, is used as the substrate. The dielectric constant, thermal conductivity and tangent loss of RT6002 are 2.94, 0.6 W/m/K and 0.0012, respectively. Copper (annealed) with an electrical conductivity of 5.96 × 10⁷ S/m is used for all resonators of the upper layer. The outer and inner radii of the first octagon are R1 = 4.3 mm and R2 = 3.8 mm, respectively, whereas the radii of the second octagon are r1 = 3.3 mm and r2 = 2.8 mm, respectively. Each split gap (g) of the octagons is 0.40 mm. The outer and inner radii of the smallest octagon are r3 = 1.5 mm and r4 = 0.75 mm, respectively. These three octagons (OSRRs) are placed at the center of an SRR with dimensions of 10.40 × 10.40 mm² and a split gap (G) of 0.50 mm. The three octagons are attached to each other by four metal strips with a length of 3 mm and a width of 0.40 mm. It is noteworthy that the width of the SRR (t), as well as that of the first two octagons, is 0.50 mm, whereas the width of the smallest octagon (e) is 0.75 mm. The perspective view and the simulation setup of the proposed MM are depicted in Figure 1b,c, respectively. The symbolic design parameter values of the proposed unit cell are given in Table 1. Proper boundary conditions are applied to attain the expected results from the proposed metamaterial design: the electromagnetic radiation propagates along the z coordinate, whereas the perfect electric conductor (PEC) and the perfect magnetic conductor (PMC) boundaries are applied along the x coordinate and y coordinate, respectively.
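The compactness figures quoted for this cell follow directly from these dimensions. Taking the usual definition of the effective medium ratio, EMR = λ/L, with L the largest unit-cell dimension and λ evaluated at the lowest resonance:

$$\lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{2.35\ \mathrm{GHz}} \approx 127.7\ \mathrm{mm}, \qquad \mathrm{EMR} = \frac{\lambda}{L} = \frac{127.7\ \mathrm{mm}}{11\ \mathrm{mm}} \approx 11.6, \qquad \frac{L}{\lambda} \approx 0.086,$$

consistent with the EMR of 11.61 and the electrical size of 0.086λ × 0.086λ reported above.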
Extraction Process of Medium Parameters
To extract the various properties of the material, the S-parameter model of the post-processing module of CST can be deployed [32]. This software is applied to obtain information associated with the three important characteristics of permittivity (εr), permeability (µr) and refractive index (nr) of the proposed metamaterial unit cell in order to realize its EM properties [33]. Moreover, the refractive index, the S parameters (reflection and transmission coefficients) and the impedance can be correlated with the help of Equations (1)-(5) of the robust retrieval method described in [34]:

$$S_{11} = \frac{R_{01}\left(1 - e^{2jnk_{0}d}\right)}{1 - R_{01}^{2}e^{2jnk_{0}d}} \quad (1)$$

$$S_{21} = \frac{\left(1 - R_{01}^{2}\right)e^{jnk_{0}d}}{1 - R_{01}^{2}e^{2jnk_{0}d}}, \qquad R_{01} = \frac{z-1}{z+1} \quad (2)$$

Here, impedance is expressed as:

$$z = \pm\sqrt{\frac{(1+S_{11})^{2} - S_{21}^{2}}{(1-S_{11})^{2} - S_{21}^{2}}} \quad (3)$$

where S11 = reflection coefficient, and S21 = transmission coefficient. Then,

$$e^{jnk_{0}d} = \frac{S_{21}}{1 - S_{11}\dfrac{z-1}{z+1}} \quad (4)$$

$$n = \frac{1}{k_{0}d}\left\{\left[\ln\left(e^{jnk_{0}d}\right)\right]'' + 2m\pi - j\left[\ln\left(e^{jnk_{0}d}\right)\right]'\right\} \quad (5)$$

where k0 is the free-space wavenumber, d is the substrate thickness, m is the branch integer, and (·)' and (·)'' denote the real and imaginary parts, respectively. The electromagnetic wave is set to propagate along the z direction, while the perfect electric and perfect magnetic boundary conditions are applied along the x and y directions, respectively. Additionally, the relative permittivity (εr) and relative permeability (µr) can be derived from Equations (6) and (7), respectively, using the Nicolson-Ross-Weir (NRW) technique [35].
Permittivity: ε_r = (c/(jπfd)) · ((1 − V₁)/(1 + V₁)), with V₁ = S21 + S11    (6)
Permeability: µ_r = (c/(jπfd)) · ((1 − V₂)/(1 + V₂)), with V₂ = S21 − S11    (7)

where c = speed of light, f = frequency and d = the thickness of the substrate. MATLAB codes are written based on Equations (6) and (7). The values of the material parameters extracted through the NRW technique are verified and compared with the results of numerical simulation.
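As a rough illustration of how Equations (6) and (7) can be evaluated (a minimal sketch only, not the authors' MATLAB post-processing; the S-parameter values below are placeholders rather than simulated data):

```python
import numpy as np

c = 3e8        # speed of light (m/s)
d = 1.524e-3   # substrate thickness (m)

def nrw_parameters(f, s11, s21):
    """Simplified NRW retrieval of relative permittivity, permeability and
    refractive index from complex S-parameters at frequency f (Hz)."""
    v1 = s21 + s11
    v2 = s21 - s11
    eps_r = (c / (1j * np.pi * f * d)) * (1 - v1) / (1 + v1)   # Eq. (6)
    mu_r = (c / (1j * np.pi * f * d)) * (1 - v2) / (1 + v2)    # Eq. (7)
    n_r = np.sqrt(eps_r * mu_r)   # branch/sign choice ignored in this sketch
    return eps_r, mu_r, n_r

# placeholder S-parameters near the first resonance (not CST output)
print(nrw_parameters(2.35e9, 0.1 - 0.2j, 0.05 + 0.9j))
```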
Design Hierarchy
The chronological development of the proposed metamaterial unit cell is shown in Figure 2. The design architecture and its morphology are set up to achieve the highest performance possible. An iterative method is applied, recording the response of the unit cell at each step in order to determine the transmission coefficient (S21). Design 2a comprises a split-ring resonator (SRR) along with an octagonal ring on the substrate layer. It yields resonance at 2.44 GHz, 8.67 GHz and 10.84 GHz. Another, comparatively smaller octagonal ring of the same width is added to the first design, as shown in Figure 2b. In CST simulation, it exhibits quad-band resonance peaks at 2.51 GHz, 8.55 GHz, 9.50 GHz and 11.02 GHz. Again, to test the enhancement of the bandwidths, a small octagon with a width of 0.75 mm is placed at the center of the previous structure, which is shown in Figure 2c. The simulated S21 results for the design steps of Figure 2a-e are summarized in Table 2, and Figure 3 shows the numerical S21 results for all design steps, covering the S, C and X bands.
Effect of Substrate Materials
Proper dielectric selection is an important task for any metamaterial design. An investigation is conducted to observe the response of different substrate materials. Commercially available flame-retardant FR-4 material, along with two Rogers dielectrics, RT 5880 and RT 6002, are taken into consideration. Three individual substrates are simulated by keeping the resonator structure unchanged. First, dielectric FR-4 shows resonance at 2.04 GHz, 6.66 GHz, 7.97 GHz and 9.42 GHz, with very low magnitudes. Secondly, Rogers RT 5880 yields triple-band resonance peaks at 3.7 GHz, 8.67 GHz and 11.33 GHz, whereas RT6002 shows quad-band resonance peaks at 2.35 GHz, 7.72 GHz, 9.23 GHz and 10.68 GHz, with satisfactory magnitudes and moderate bandwidths. The simulated results cover the S, C and X bands. The transmission coefficients (S21) for the three substrate materials are shown in Figure 4.
Unit Cell Dimension Optimization
Various sizes of unit cell for the same dielectric (RT6002) and for the same metal (copper annealed) are inspected to select the appropriate size of the proposed metamaterial. First, the unit cell is simulated with substrate dimensions of 13 × 13 × 1.524 mm 3 , exhibiting quad-band resonance at 2.58 GHz, 7.60 GHz, 9.47 GHz and 10.36 GHz. Secondly, it is simulated with a unit cell with dimensions of 12 × 12 × 1.524 mm 3 , showing quad-band resonance peaks with a small decrement of resonance frequencies. Lastly, it is simulated for dimensions of 11 × 11 × 1.524 mm 3 , showing quad-band resonance at 2.35 GHz, 7.72 GHz, 9.23 GHz and 10.68 GHz, with a better progression of bandwidths. Figure 5 demonstrates the simulated results for the selected sizes of the unit cell.
The Effect of Field Propagation Direction
A change in transmission coefficient (S21) is observed with varying electric field and magnetic field direction. Figure 6 demonstrates the simulation setup for changing the field propagation. Initially, the electric field (Ex) propagates along the X direction, and the magnetic field (Hy) is applied to the Y direction. The simulation result shows quad-band resonance peaks at 2.35 GHz, 7.72 GHz, 9.23 GHz and 10.68 GHz. If the fields are interchanged with each other, the simulated results show two resonance peaks at 4.58 GHz and 8.38 GHz. Figure 7 illustrates the simulated results for propagation in the ExHy and HxEy directions.
Analysis of Electromagnetic Field and Surface Current
The upper layer of the proposed metamaterial unit cell contains resonant assemblies composed of split gaps and metallic conductors. The split gaps and conductors play the roles of capacitors and inductors, respectively. Electromagnetic force is exerted on the resonators due to the interaction between time-varying EM fields and the unit cell. The induction current flows from one resonator to another through the capacitive split gaps, which are smaller than the wavelength of the incident EM wave. Produced electric and magnetic moments influence the transmission ability and change the material characteristics such as permittivity and permeability. The surface current distribution of the presented MM is illustrated in Figure 8, predicting that at a low-resonance frequency of 2.35 GHz, the outer ring subsidizes a higher amount of current. At the lower frequency, the inductive reactance is also low because the outer ring contributes a low impedance route. A significant amount of current flow decreases in the first outer ring at the second resonance frequency of 7.72 GHz because an increase in impedance occurs with the increase in frequency. At this frequency, non-uniform and random movement of current is detected in bars connecting the octagons, which eventually reduces the overall current flow. In the two inner octagons, current flow is reduced because of the neutralization of two opposite flows. For the same reasons, current flow becomes insufficient at a resonance frequency of 9.23 GHz, and high current flow is observed through the edges of all horizontal sides of all rings compared to the previous two positions. It is also noteworthy that a substantial amount of current is contributed by the two horizontal sides of the outer ring, owing to lower impedance applied by the split gap at a resonance frequency of 10.68 GHz.
Time-varying charge flow is mainly responsible for generating the magnetic field according to Ampère's law in association with Faraday's law of induction, which, in turn, produces an electric field due to electromagnetic interaction [36]. The induced E field and H field can be inspected using Maxwell's curl Equations (8)-(12), as presented in [37].
Curl of the magnetic field (Ampère's law):

∇ × H = J + ∂D/∂t    (8)

Electric field produced by the time-varying magnetic field (Faraday's law):

∇ × E = −∂B/∂t    (9)

where the vector operator is expressed as:

∇ = î ∂/∂x + ĵ ∂/∂y + k̂ ∂/∂z    (10)

Equations (8) and (9) are not sufficient to explain why the two fields interact with materials. Two more constitutive equations are required to overcome these limitations [38]:

D = εE    (11)
B = µH    (12)
The material properties of permittivity (ε) and permeability (µ) in Equations (11) and (12) are complex parameters, and they are real in the case of an isotropic lossless material. A vivid observation of the magnetic field (H) and electric field (E) for the four resonance frequencies (2.35, 7.72, 9.23 and 10.68 GHz) is illustrated in Figures 9 and 10, respectively. The intensity and polarity of the magnetic field depend on the amount of current and its flow direction. The H-field distribution in Figure 9 shows that at locations in the unit cell where the current density is high, the magnetic field is also high. As shown by the patterns of H-field and E-field distribution in Figures 9 and 10, where the magnetic field increases, the electric field changes inversely. The changing tendency of the magnetic and electric fields is determined according to Equations (8) and (9). Furthermore, as every split gap of the unit cell of the proposed MM acts as a capacitor, the electric field intensity in the split gaps is increased.
Equivalent LC Circuit of the Unit Cell
An estimated electrical equivalent circuit is drawn and executed in ADS to validate the CST results of the proposed metamaterial. The unit cell is designed with a combination of metal strips and split gaps. Every metal strip represents an inductor, whereas every split gap represents a capacitor [39]. In the microwave band, the metallic conductor copper can be treated as a perfect conductor, so ohmic losses can be ignored [40]. Therefore, the whole unit cell is represented by an LC resonance circuit. The inductance and the capacitance are the main parameters of an LC circuit, denoted by L and C, respectively. Using these two parameters, the resonance frequency f can be calculated by applying Equation (13):

f = 1/(2π√(LC))    (13)
The quasi-static theory can be applied to estimate the capacitance across a distance or in a split gap in a circuit [41]:

C = ε₀ε_r A/d  (F)    (14)

where ε₀ is the permittivity of free space, ε_r is the relative permittivity, A is the cross-sectional area of the conducting strip and d is the split gap. The inductance of a rectangular metal bar can be calculated according to Equation (15) [42]:
L(nH) = 2 × 10⁻⁴ l [ ln( l/(w + t) ) + 1.193 + 0.2235 (w + t)/l ] · K_G    (15)

where K_G is the correction factor, w is the width, l is the length and t is the thickness of the strip (dimensions in micrometers). An equivalent LC circuit of the proposed MM is illustrated in Figure 11. The whole equivalent circuit, comprising eleven inductors (L1 to L11) and twelve capacitors (C1 to C12), is simulated in ADS software. The first resonator on the upper layer is the split-ring resonator (SRR), represented by (L1, C1) and (L2, C2), contributing the first resonance frequency of 2.39 GHz, whereas (L3, C3) and (L4, C4) are used for the first octagon, which belongs to 7.23 GHz. The second octagon is represented by (L5, C5) and (L6, C6), which partially contribute to the frequency of 9.21 GHz. C7 and C8 are the coupling capacitors. The joining metal bars and associated gaps are represented by (L9, C10) and (L10, C11), whereas L11 is used for the small central octagon. These components are jointly responsible for the resonance frequency of 10.72 GHz. A comparison between the transmission coefficients determined by CST and ADS is shown in Figure 12.
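A minimal numerical sketch of Equations (13) and (14) is given below (illustrative only; the inductance and capacitance values are hypothetical and are not the fitted ADS component values):

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity (F/m)
EPS_R = 2.94       # RT6002 dielectric constant

def resonance_frequency(L, C):
    """LC resonance frequency, f = 1 / (2*pi*sqrt(L*C))  (Eq. 13)."""
    return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

def gap_capacitance(area, gap):
    """Quasi-static split-gap capacitance, C = eps0 * eps_r * A / d  (Eq. 14)."""
    return EPS0 * EPS_R * area / gap

# Hypothetical lumped values (not the fitted ADS components):
# ~4 nH with ~1.15 pF reproduces a resonance near the first CST peak.
print(f"f  = {resonance_frequency(4e-9, 1.15e-12) / 1e9:.2f} GHz")

# Quasi-static estimate for a 0.40 mm split gap between 0.50 mm wide, 35 um thick strips.
print(f"Cg = {gap_capacitance(0.50e-3 * 35e-6, 0.40e-3) * 1e15:.2f} fF")
```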
Results and Discussion
CST Microwave Studio is deployed to simulate the proposed metamaterial in the frequency range 1-14 GHz. Figure 13 demonstrates the scattering parameters (reflection and transmission coefficients). Numerical simulation yields four resonance frequencies at 2.35, 7.72, 9.23 and 10.68 GHz with magnitudes of −43.23 dB, −31.05 dB, −44.58 dB and −31.71 dB, respectively. These frequency bands cover the S, C and X bands. Moreover, the reflection coefficient (S11) shows minima at 3.33 GHz, 7.92 GHz and 10.39 GHz with magnitudes of −36.30 dB, −13.84 dB and −15.23 dB, respectively. Every resonance of the transmission coefficient (S21) is accompanied by bandwidths of 0.36 GHz, 0.46 GHz, 1.42 GHz and 0.30 GHz in the concerned S, C and X bands. It is also evident that each resonance of the transmission coefficient (S21) is tracked by a reflection coefficient (S11) minimum, and the frequency of each S21 minimum is always lower than that of the corresponding S11 minimum. In this regard, it can be concluded that every resonance can be treated as electrical resonance in the proposed metamaterial unit cell [43]. The permittivity (ε), permeability (µ) and refractive index (n) extracted by applying the RRM (robust retrieval method) in CST and the NRW technique in MATLAB are shown in Figures 14-16, respectively. Figure 14 shows that the permittivity of the designed MM varies from positive to negative values. The S21 resonances begin where the magnitude of the permittivity swings from its maximum to its minimum values. Moreover, the first positive minimum value of µ occurs at the lowest resonance frequency, as shown in Figure 15, and this tendency continues over the whole resonance frequency range. A graph of the refractive index is presented in Figure 16, which reveals negative refractive indices over certain frequency ranges. The negativity of the refractive index is a function of frequency that can be utilized to increase the gain and directivity of antennas, whereas the ε-negative property is deployed to enhance the bandwidth [44,45]. Finally, a brief comparison on the basis of some important parameters is given in Table 3.
Array Metamaterial Results
Different types of array combinations are also simulated to test the coupling effect and to verify the consistency of the results, which is the best way to achieve the expected electromagnetic features. Arrays of the proposed MM with dimensions of 1 × 2 and 2 × 2 are shown in Figure 17. These two designs are simulated by the CST and the reflection coefficient (S 11 ), and transmission coefficient (S 21 ) results are presented in Figure 18. The variations of resonance frequencies among the unit cell and the 1 × 2 and 2 × 2 arrays are given in Table 4, which confirms the consistency of the results.
Validation Using HFSS
In order to verify the reliability and consistency of the performance of the proposed MM, the CST result for the transmission coefficient (S21) is authenticated by Ansys HFSS. The simulated result obtained with this software also shows quad-band resonance peaks, with amplitudes remaining nearly unchanged and resonance frequencies in close agreement with the CST results.
Conclusions
In this research article, a quad-band power tiller wheel-shaped ENG metamaterial for S-, C- and X-band applications is presented. The proposed MM unit cell has dimensions of 11 × 11 × 1.524 mm³ and is based on an RT6002 dielectric substrate. CST Microwave Studio is used to simulate the unit cell, showing quad-band resonance peaks at 2.35 GHz, 7.72 GHz, 9.23 GHz and 10.68 GHz with amplitudes of −43.23 dB, −31.05 dB, −44.58 dB and −31.71 dB, respectively. The simulated results are also validated by processes such as an equivalent electrical circuit model, high-frequency structure simulator (HFSS) software and various array orientations. The response and contribution of the various resonators of the unit cell are inspected by analyzing the E-field, H-field and surface current distribution for the propagated electromagnetic radiation. The important features of permittivity, permeability and refractive index of the metamaterial are extracted using MATLAB. The EMR of the proposed MM is 11.61, which indicates its reliability. The unit-cell length (L) is smaller than λ/4 at the lowest resonance, highlighting the compactness of the unit cell. This innovative MM can be deployed to enhance the efficiency of different microwave devices, owing to its NRI and epsilon-negative characteristics. Moreover, the S-, C- and X-bands are recurrently used for satellite and radar applications. | 8,489.6 | 2023-01-28T00:00:00.000 | [
"Physics"
] |
Important amino acid residues of potato plant uncoupling protein (StUCP)
Chemical modifications were used to identify some of the functionally important amino acid residues of the potato plant uncoupling protein (StUCP). The proton-dependent swelling of potato mitochondria in K+-acetate in the presence of linoleic acid and valinomycin was inhibited by mersalyl (Ki = 5 μM) and other hydrophilic SH reagents such as Thiolyte MB, iodoacetate and 5,5'-dithio-bis-(2-nitrobenzoate), but not by hydrophobic N-ethylmaleimide. This pattern of inhibition by SH reagents was similar to that of brown adipose tissue uncoupling protein (UCP1). As with UCP1, the arginine reagent 2,3-butadione, but not N-ethylmaleimide or other hydrophobic SH reagents, prevented the inhibition of StUCP-mediated transport by ATP in isolated potato mitochondria or with reconstituted StUCP. The results indicate that the most reactive amino acid residues in UCP1 and StUCP are similar, with the exception of N-ethylmaleimide-reactive cysteines in the purine nucleotide-binding site.
The protein chemistry of StUCP has not been studied to the same extent as has UCP1. A 32-kDa StUCP has been characterized as a hydrophobic protein which is not retained on hydroxylapatite in the detergent micellar solution (1,6,7). Chemical modifications of reactive amino acid residues, the cleavage pattern produced by proteases, and ligand binding (except for studies with 8-azido-ATP (13)) have not been studied in StUCP. In the present study, we examined the effects of several chemical modifiers on StUCP-mediated transport as well as StUCP inhibition by purine nucleotides. Our results clearly show that the pattern of reactive amino acid residues in StUCP is similar to that of UCP1, with the exception that no N-ethylmaleimide (NEM)-reactive cysteines were found in the purine nucleotide-binding site of StUCP.
Isolation of mitochondria and protein determination
Potato mitochondria were isolated as described previously (5,8,9) in medium containing 250 mM sucrose, 10 mM HEPES, pH 7.2, and 0.3 mM EGTA. The protein concentration was 30-40 mg/ml, as determined by the biuret method. A crude fraction was used for swelling studies and for most of the isolations. For some isolations, a Percoll gradient centrifugation was used to remove contamination by plastid proteins, starch and other substances. Qualitatively, transport measurements using the crude fraction gave identical results as those performed with Percoll-purified mitochondria.
Swelling assay of StUCP transport function
Proton-dependent swelling of potato mitochondria (0.2 mg protein/ml) in K+-acetate (55 mM K+-acetate, 5 mM K+-HEPES, 0.2 mM Tris-EDTA, 0.1 mM Tris-EGTA, pH 6.9) initiated by valinomycin in the presence of linoleic acid (16 µM) has been used as a standard assay for StUCP-mediated transport (5). Since valinomycin allows the uniport uptake of K+ and neutral acetic acid is able to penetrate the lipid bilayer, an efflux of H+ is necessary to induce swelling. In our assay, this H+ efflux was concomitant with linoleic acid cycling, which allowed swelling since StUCP mediated the uptake of the linoleic acid anion, while protonated linoleic acid passed spontaneously through the lipid bilayer by a flip-flop mechanism and released H+ externally. Hexanesulfonate uniport was assayed as valinomycin-induced swelling in medium containing 51.1 mM Na+-hexanesulfonate, 30.8 mM K+-HEPES, pH 7.2, 190 µM Tris-EDTA and 95 µM Tris-EGTA. The side effects caused by the chemical modifiers used, including the induction of mitochondrial swelling without the addition of ionophore and membrane stiffening, were controlled by performing a swelling assay in K+-acetate containing nigericin, which does not depend on protein carriers. When a decrease in this rate (v_Nig[c]) was observed at a given concentration [c] of modifier, the rates of valinomycin-induced StUCP-mediated swelling were corrected by multiplying by a factor derived from v_Nig[c].
Chemical modifications of potato mitochondria
For the modification reactions, mitochondria were resuspended in the sucrose isolation medium (5 mg protein/ml), and aliquots of stock solutions (aqueous or in dimethylsulfoxide) of the various reagents were added and incubated for 1 h (unless otherwise indicated) at 0 °C. For NEM, DTNB and phenylglyoxal, the pH was raised to 8.2 by adding 20 mM Tris-HEPES, pH 8.4, to the stock solution, and 2 µM propranolol was added.
Effect of hydrophilic SH reagents on StUCP-mediated transport in mitochondria
Proton-dependent swelling of potato mitochondria initiated by valinomycin in K+-acetate containing linoleic acid was reversibly inhibited by the organomercurial SH reagent mersalyl with an apparent Ki of 5 µM (Figure 1A, only 10-s preincubations). This type of swelling reflected the ability of StUCP to translocate linoleic acid anions (5). The effect of mersalyl can be considered as a specific inhibition, since swelling independent of a protein carrier, i.e., the nigericin-induced swelling in K+-acetate, was not affected up to 100 µM mersalyl (Figure 1A). Above 100 µM, and above 40 µM in the presence of linoleic acid, mersalyl induced nonspecific permeability changes which were observed as mitochondrial swelling without the ionophore. Some mitochondrial preparations were more sensitive to mersalyl, and this made measurements with them more difficult.
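As an aside, dose-response fits such as the mersalyl curve above (reported with a Hill coefficient of 2 and an apparent Ki of 5 µM; see the legend of Figure 1) can in principle be reproduced with a simple Hill-type inhibition model. The sketch below is illustrative only: the function, parameters and data points are hypothetical and are not the measured swelling rates.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ki, n):
    """Fraction of uninhibited activity remaining at inhibitor concentration conc."""
    return 1.0 / (1.0 + (conc / ki) ** n)

# hypothetical data points (inhibitor in uM, fraction of control swelling rate)
conc = np.array([0.5, 1, 2, 5, 10, 20, 40])
rate = np.array([0.99, 0.96, 0.86, 0.52, 0.21, 0.06, 0.02])

(ki, n), _ = curve_fit(hill_inhibition, conc, rate, p0=[5.0, 2.0])
print(f"apparent Ki = {ki:.1f} uM, Hill coefficient = {n:.1f}")
```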
To avoid the interference of nonspecific permeability changes, we used Thiolyte MB, a covalently interacting SH modifier. Mitochondria were preincubated for 1 h with increasing Thiolyte MB doses (Figure 2). The IC50 for Thiolyte MB was around 500 nmol/mg protein. Carrier-independent swelling was not significantly affected by Thiolyte MB, indicating that the modification of the SH groups in StUCP inhibits the transport activity of this protein. Carboxymethylation by iodoacetate (which also affects SH groups) also inhibited StUCP transport activity at higher doses (IC50 of 100 µmol/mg protein), but only with 10-s preincubations (Figure 1B). Ellman's reagent (DTNB) inhibited the activity by 18 and 31% at 1000 and 3000 nmol/mg protein, respectively, after a 2-h incubation, as calculated from the rates corrected for the nonspecific effect (incubations at pH >8 lead to preswelling after a few hours). In contrast, NEM and other hydrophobic SH reagents (eosin maleimide, phenylarsine oxide) were not inhibitory up to 10 µmol/mg protein. Hexanesulfonate uniport via StUCP was partially inhibited by hydrophilic SH reagents, e.g., by 1000 nmol Thiolyte MB/mg protein.
Effect of arginine reagents on ATP inhibition of StUCP-mediated transport
Reagents specific for other amino acid residues did not inhibit transport or prevent the inhibition by ATP at doses up to 10 µmol/mg protein. The reagents tested included DIDS, TNBS and lysine-specific pyridoxal phosphate. Only an arginine-specific reagent, 2,3-butadione, completely prevented the inhibition of linoleic acid transport by 4 mM ATP (Figure 3) at doses above 100 nmol/mg protein (see inset in Figure 3). Thus, a 1-h incubation with 4000 nmol/mg protein 2,3-butadione shifted the ATP dose-response curve so that the extrapolated apparent Ki was much greater than 10 mM (Figure 3). Surprisingly, phenylglyoxal, a more bulky arginine reagent, had no effect at doses up to 10 µmol/mg protein. NEM, which prevented nucleotide inhibition of UCP, also had no effect on ATP inhibition of StUCP (data not shown).
Confirmation of the effects of 2,3-butadione and Thiolyte MB using reconstituted StUCP
The effect of 2,3-butadione on StUCP reconstituted into proteoliposomes after premodification by 2,3-butadione in mitochondria was identical to that found in potato mitochondria. 2,3-Butadione prevented purine nucleotide inhibition of H+ efflux, including inhibition by GTP, when the H+ efflux was monitored with the fluorescent probe PBFI concomitant with K+ influx (Figure 4). The inhibitory effect of Thiolyte MB was also confirmed for isolated StUCP reconstituted into liposomes, for which linoleic acid uniport or concomitant H+ efflux was detected by TES quenching of the fluorescent probe SPQ (Figure 5). Reconstituted Thiolyte MB-modified StUCP showed no transport activity (Figure 5).
Discussion
The pattern of reactive amino acid residues in StUCP was surprisingly similar to that of mammalian UCP1 (29-33; for reviews, see 17-19). This suggests that the structures of StUCP and UCP1 are very likely to be closely related, despite only about 40% identity in their sequences (14,15).
The chemical modification of reactive amino acid residues in proteins has been widely used to study protein structure/function relationships. Site-directed mutagenesis has shown that the identification of a residue as essential for a given function is not a straightforward task. In many cases, the effects of modifiers differ from the phenotypes of the corresponding substitution mutants. Interference by the reagent and/or the mutation with the protein function may indicate that i) the residue is essential for that function, i.e., is involved in the required functional interactions (in this case, the substitution mutants have an identical phenotype), ii) the modification of the residue produces steric hindrances which are the actual cause of the altered function (substitution mutations show no such effect), or iii) the residue is important for maintaining a proper conformation of the protein and cannot retain this position after being modified or mutated. With UCP1, case (i) is valid for its Arg 276, whereas case (ii) has been indicated for its cysteine residues. When Arg 276 was either substituted in a mutated UCP1 protein (34) or modified by phenylglyoxal and 2,3-butadione (32), purine nucleotide binding and gating were absent. Since the proximal third matrix segment was photolabeled at three different positions with 8-azido-, 2-azido- and 3-O-(5-fluoro-2,4-dinitrophenyl) adenosine 5'-triphosphate (FNDP-ATP) (35), and since the deletion of residues 261-269 resulted in the lack of nucleotide inhibition (36), it was concluded that the main location of the nucleotide-binding site in UCP1 was between the fifth and sixth transmembrane segments. This site probably forms a water-filled cavity which penetrates deeply into the membrane close to the opposite surface (35). This cavity in UCP1 is lined with SH residues (C213, C224, C253, C287, C304, and possibly C188). Studies on these residues identified case (ii) described above, since SH substitution mutants of UCP1 have no disrupted binding or transport (33).
The modification of UCP1 by hydrophobic and hydrophilic SH reagents drastically reduces inhibition by GDP (31). In contrast to UCP1, NEM did not prevent ATP inhibition of transport in StUCP. However, transport was inhibited by the arginine reagent 2,3-butadione. These findings suggest a probable difference between the purine nucleotide-binding sites of UCP1 and StUCP and indicate that StUCP does not contain modifiable SH groups at or close to the nucleotide-binding site. Alternatively, SH groups may not be important for maintaining the integrity of the StUCP conformation. These findings agree with the amino acid sequence of potato plant UCP (14,16). Thus, C188 of UCP1 is conserved in UCP2 and UCP3, but is substituted by A197 in StUCP (14). Of the two cysteines conserved in the fifth α-helix of UCP1, 2 and 3, the first, C234, is shifted two residues towards the matrix in StUCP, such that the corresponding position in the α-helix is occupied by F231. The second SH (C213 of UCP1) is not conserved in StUCP and is substituted by T220. Hydrophilic, but not hydrophobic, SH reagents were good inhibitors of UCP1-mediated FA-induced H+ transport (30). Similarly, in StUCP only hydrophilic SH reagents inhibited StUCP-mediated transport of linoleic acid and hexanesulfonate, while hydrophobic SH reagents, arginine, lysine and other modifiers had no effect. Hence, inhibition by hydrophilic SH reagents is common to StUCP and UCP1. This inhibitory effect on UCP1 has not yet been fully explained. The SH groups which maintain the integrity of the translocation pathway or, alternatively, participate directly in the translocation mechanism, are probably distinct from those which interact with NEM (in UCP1) and interfere with nucleotide binding after modification (31). These SH groups are probably located at as yet unknown, similar positions in the StUCP sequence. In addition, the type of interference by SH reagents with the StUCP translocation mechanism is likely to be the same as for UCP1. A possible candidate for such a residue is C90, located in the second α-helix of StUCP, which does not have any counterpart in the sequences of UCP1, 2 and 3. Residue C24 of UCP1, absent in StUCP, may serve a function similar to that of C90 in StUCP.
Figure 1 - Inhibition of proton-dependent swelling of potato mitochondria by mersalyl (A) and iodoacetic acid (B) in K+-acetate buffer. The inhibition by mersalyl of StUCP-mediated transport (filled circles) and nigericin-mediated, protein-independent swelling (open squares) are specific and nonspecific effects of mersalyl, respectively. The solid line represents the fit of the data using the Hill equation with a Hill coefficient of 2, yielding an apparent Ki of 5 µM. B, The iodoacetate dose-response curve, yielding an IC50 around 100 µmol/mg protein, has already been corrected for the nonspecific effect produced by this compound. The correction and other details of the measurements are described in Material and Methods.
Figure 3 -
Figure 3 - Prevention of ATP inhibition of StUCP-mediated transport following modification of potato mitochondria with 2,3-butadione. The inhibition by ATP of StUCP-mediated proton-dependent swelling in K+-acetate buffer vs log [ATP] is shown for unmodified potato mitochondria (triangles) and mitochondria premodified with 4000 nmol/mg protein 2,3-butadione (diamonds). Inset, inhibition by 4 mM ATP vs butadione dose in the preincubations. The assay conditions are described in Material and Methods.
Figure 5 -
Figure 5 - Lack of H+ efflux in proteoliposomes containing Thiolyte MB-modified StUCP. StUCP from mitochondria treated with Thiolyte MB (1000 nmol/mg protein) was isolated and reconstituted into vesicles (trace a). The response of normal reconstituted StUCP (control) is shown in trace b. H+ efflux was monitored by TES quenching of the fluorescent probe SPQ. The addition of 53 µM linoleic acid (LA) caused internal acidification of the vesicles, resulting in the flip-flop of neutral fatty acids into the inner lipid leaflet and subsequent dissociation in the internal medium. StUCP function was seen as an H+ efflux (internal alkalinization, indicated by the decrease in SPQ fluorescence), initiated by 1.3 µM valinomycin (val). This efflux was suppressed in Thiolyte MB-modified samples. Vesicles (25 µl per assay) contained 84.4 mM TEA sulfate, 28.85 mM TEA-TES, pH 7.2 ([TEA] was 9.2 mM) and 0.6 mM Tris-EGTA. In the external medium, 84.4 mM K2SO4 replaced TEA sulfate.
The similarity of the purine nucleotide-binding site in StUCP and UCP1 is reflected by the effect of 2,3-butadione, which probably interacts with the conserved arginines in UCPs (and in the mitochondrial carrier gene family as a whole), such as R276 of UCP1 (37), which corresponds to R281 and R278 in StUCP and AtUCP, respectively (14,15). | 3,190.6 | 2000-12-01T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Observation of two zeros of the real amplitude in pp scattering at LHC energies
Elastic scattering of charged hadrons is described by the combination of nuclear and Coulomb amplitudes. It is well known that in the very forward range the nuclear real and Coulomb parts play a crucial role in the determination of the magnitude of the real part at |t| = 0. However, beyond |t| = 0 the real and imaginary nuclear amplitudes have different t dependencies, and we show that at LHC energies the zeros formed by the combination T_C(t) + T_R^N(s,t) = 0 in the pp process can potentially be observed when the background due to the imaginary part is removed. This observation constrains the real part in this forward range.
I. INTRODUCTION
The complex amplitudes in quantum mechanics are not directly measurable quantities, living in the abstract Hilbert space, while the measurable absolute square gives the probability of finding particles according to a certain distribution. In elastic scattering of charged hadrons, the strong (complex-valued) and electromagnetic (purely real) forces interplay, and the interference between these quantities can in principle be observed in experiments, constraining the amplitudes. Also, some theorems associated with unitarity and analyticity of the nuclear amplitudes constrain the behaviour of these objects.
In the very forward range, the Coulomb amplitude drops down fast, and the differential cross section becomes dominated by the nuclear parts. The optical theorem, which involves the extrapolation of the imaginary part to |t| = 0, relates the magnitude of this amplitude to the total cross section. In 1970, before the experimental results pointing to the rise of the pp and p̄p total cross sections, Cheng and Wu, based on a massive electrodynamics, predicted that σ should saturate the Froissart bound at infinite energies [1]. For an increasing cross section such as σ ∼ C log² s, the dispersion relations predict the parameter ρ ∼ π/log s, where ρ is the ratio of the real and imaginary amplitudes at |t| = 0. Recently, Martin and Wu formally proved [2] that at large energies, if the total cross section goes monotonically to infinity, the real amplitude is positive in the forward direction. In addition, it was also proved by Martin that if the differential cross sections for crossing symmetric processes dσ/dt(ab → ab) and dσ/dt(āb → āb) tend to zero for s → ∞ in a strip 0 < |t| < |t̄|, where |t̄| is arbitrarily small, and if σ(ab → ab) and/or σ(āb → āb) tend to infinity for s → ∞, the real part cannot have a constant sign [3], which means that the real part must cross zero at some |t_R| within the diffractive cone. This zero of the real nuclear part (|t_R|) is dubbed Martin's zero. In the 1970s it was shown that for high energies, in the geometric scaling regime, the real nuclear part of a crossing symmetric amplitude in the forward range has a zero approximately at |t_R| ∼ 1/log s [4], i.e., it is shrinking with increasing energy.
Extending the ideas of crossing symmetric amplitudes in the forward range, a phenomenological model for pp and p̄p amplitudes was proposed describing the scattering data from ISR to LHC energies, and the analytical form for |t_R(s)| was suggested [5]. The model satisfies dispersion relations since it connects the real and imaginary parts analytically. According to the model, at ISR energies the real nuclear amplitude for both pp and p̄p is always smaller than the absolute value of the Coulomb amplitude. However, when the energy increases, as can be seen in Fig. 1, eventually the real nuclear part equates |T_C| at some |t| and, for larger energies, say in the LHC range, exceeds it over a finite |t| interval. Using the above ingredients, in the present work we show that there might exist some critical energy s_c such that beyond it T_R^N(s,t) > |T_C(t)| in some region 0 < |t| < |t_R|, and if this condition is satisfied, we prove that in the pp case the sum of the pure real and Coulomb amplitudes has two zeros within the diffractive cone. We then suggest how these zeros in pp scattering could be extracted from the data and where they are expected to be observed in the LHC range.
II. EMERGENCE OF TWO ZEROS IN THE REAL AMPLITUDE
We wish to show that the existence of a region where possibly T_R^N(s,t) > |T_C(t)| is not a particular feature of one model and in principle can be observed in different models with independent real and imaginary nuclear amplitudes. For this purpose, let us consider a moving point |t_m(s)| inside the cone, defined through a constant t_R0 determined from the phenomenology and an arbitrary constant η_0. Since |t_R| is shrinking as ∼ 1/log s, it is natural to expect that |η| ∼ 1/log s also shrinks, in order to satisfy 0 < |η| < |t_R|. On the other hand, since the Coulomb amplitude for pp is T_C(t) ∼ −α/|t|, at |t_m(s)| we have |T_C(t_m)| ≃ α/|t_m(s)|, which grows like log s. As discussed in the introduction, for large energies, if the total cross section behaves as σ ∼ C log² s, then, since σ ∝ T_I^N(s, t = 0) and ρ ∼ π/log s, the real amplitude at the origin behaves as T_R^N(s, 0) = ρ T_I^N(s, 0) ∝ log s. Very close to the origin the real amplitude falls as an exponential, much more slowly than the Coulomb part. In this sense, at |t_m| the magnitude of the real part is slowly varying, and we can safely compare the real part at t = 0 with the absolute value of the Coulomb amplitude at |t_m|; beyond some critical energy the former can exceed the latter. Although not explicitly mentioned in our previous works, where we study a broad t range, this dominance also occurs there [6,7].
Starting from the region where T_R^N(s,t) > |T_C(t)|, when |t| approaches |t| = 0 the Coulomb amplitude decreases very fast towards −∞, so that |T_C(t)| eventually exceeds T_R^N(s,t). This means that at some non-zero value |t| = |t_ξ1| < |t_R| the real amplitude crosses the absolute value of the Coulomb amplitude, T_R^N(s, t_ξ1) = |T_C(t_ξ1)|. On the other hand, as a consequence of the existence of the zero |t_R|, the real part of the nuclear amplitude T_R^N decreases monotonically as a function of |t| from the origin towards |t_R|. Thus, after crossing the region where T_R^N(s,t) > |T_C(t)|, and since |T_C(t)| will never cross zero, eventually T_R^N(s,t) will reach the absolute value of the Coulomb amplitude again, at some value |t_ξ2| < |t_R|. The above arguments can be summarized as follows: let T_R(s,t) be the real part of the sum of the nuclear and Coulomb pp amplitudes; then, for s large, if T_R^N(s,t) > |T_C(t)| in some region 0 < |t| < |t_R|, the combined amplitude T_R(s,t) has two zeros, at |t_ξ1| and |t_ξ2|, within the diffractive cone. In Fig. 2 we represent the above proposition, showing the situation where T_R has two zeros. Despite the simplicity of the arguments about the existence of the first zero in pp scattering, it is not completely clear yet at which energy the first zero emerges. The possibility of the existence of the first zero was pointed out previously in Ref. [8].
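To make the qualitative argument concrete, the short sketch below locates the two zeros numerically for a toy parametrization. The functional forms and numbers (an exponential real nuclear cone with a linear zero at |t_R| = 0.15 GeV², and a point-like Coulomb term −α/|t| without form factor or phase) are purely illustrative assumptions and are not the model of Ref. [5]:

```python
import numpy as np
from scipy.optimize import brentq

ALPHA = 1.0 / 137.0

def t_coulomb(t):
    """Point-like pp Coulomb amplitude, T_C(t) ~ -alpha/|t| (form factor and phase ignored)."""
    return -ALPHA / t

def t_real_nuclear(t, a=1.5, b=10.0, t_r=0.15):
    """Toy real nuclear amplitude: exponential cone with a (Martin-like) zero at |t| = t_r."""
    return a * np.exp(-b * t) * (1.0 - t / t_r)

def combined_real(t):
    return t_real_nuclear(t) + t_coulomb(t)

# bracket and locate the two zeros of the combined real amplitude (|t| in GeV^2)
t_xi1 = brentq(combined_real, 1e-4, 0.02)
t_xi2 = brentq(combined_real, 0.02, 0.149)
print(f"|t_xi1| ~ {t_xi1:.4f} GeV^2, |t_xi2| ~ {t_xi2:.4f} GeV^2")
```

With these toy inputs the first crossing lands near |t| ≈ 0.005 GeV² and the second near 0.13 GeV², illustrating the sharp-versus-shallow character of the two zeros discussed below.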
For the sake of simplicity, in the above arguments we neglect the effects of the hadronic electromagnetic form factor and the relative Coulomb phase; including these ingredients, the conclusions should qualitatively remain the same. In our previous works we already discussed the emergence of the zeros |t_ξ1| and |t_ξ2| at LHC energies [9,10] and also tested the effects of the Coulomb phase.
In this letter we wish to point out that unlike in the ISR energies, the LHC range shows some evidence for the existence of these zeros.
III. AMPLITUDES AND OBSERVABLES
The differential cross section is formally written as

dσ/dt ∝ |T_C(t) e^{iΦ(s,t)} + T^N(s,t)|²,

where T^N(s,t) is the complex nuclear amplitude and Φ(s,t) is the relative Coulomb phase.
The Coulomb amplitude is standard; for pp it can be written as

T_C(t) ∝ −(α/|t|) G²(t),

where α = 1/137 is the fine structure constant, G(t) = (1 + |t|/Λ²)⁻² is the proton electromagnetic form factor and Λ² = 0.71 GeV² is a momentum scale.
The nuclear amplitude depends on the model, but in the forward range, since the differential cross section decreases approximately as an exponential, it is natural to parametrize the nuclear amplitudes with exponential forms. However, it is now clear that the real and imaginary amplitudes must have different t dependencies, and since the imaginary part is much larger than the real amplitude in the forward range, in order to observe such subtle effects, i.e., the existence of the two zeros, one needs to remove the background due to the imaginary amplitude.
In the LHC analyses, in order to expose non-trivial behaviour in the data, it has been common to present the very forward region by subtracting a reference from the differential cross section and then dividing by the reference, defined simply as Ref(t) = A e^(−B|t|), where A and B are obtained by fitting the very forward data. As a result, the subtracted quantity shows some non-linear behaviour as a function of t.
In our approach, instead of using a simple exponential reference function, we subtract and then divide the data by the square of the imaginary amplitude, which of course depends on the chosen model. The interpretation is that the remainder is the square of the sum of the pure real and Coulomb amplitudes divided by the square of the imaginary amplitude,

R(s,t) = [dσ/dt − (T_I^N)²] / (T_I^N)² ≈ [T_R^N(s,t) + T_C(t)]² / (T_I^N(s,t))²,    (12)

in the normalization where dσ/dt is the sum of the squared imaginary and combined real parts. In Fig. 3 we show the Totem [11-13] and Atlas [14-16] LHC data subtracted as in Eq. (12), using for T_I^N the model in Ref. [5], pointing to the possibility of observing the two zeros. The first zero |t_ξ1| is more subtle to observe, since it occurs in a region where the magnitude of the Coulomb amplitude decreases fast with |t| compared to the nuclear real part, which means that this zero should be sharp and less model dependent, and only more precise experimental data would clearly show this phenomenon. Unfortunately, all LHC experiments, except Totem at 13 TeV, have large error bars and/or strong fluctuations in the region where this zero could be observed. In the Totem experimental data at 13 TeV one can see a trend of this dip being formed in Fig. 4 at |t| ≈ 0.0055 GeV², which is exactly the position expected for |t_ξ1|. On the other hand, the second zero at |t_ξ2| can be clearly seen in all experiments when the non-constant curvature touches zero. However, since in this region the difference in the slopes of the pure real and Coulomb amplitudes is smaller, the behavior of the zero is shallow and its position is more model dependent. We expect that elastic scattering models with explicit real and imaginary amplitudes valid for a broad t range (beyond the diffractive dip) should also present two zeros.
To summarize, from the experimental point of view, we believe that with larger statistics and better resolution in the very forward region one could observe the first zero |t_ξ1|, constraining even more the nuclear real and the Coulomb amplitudes and also the role played by the Coulomb phase. Besides, as advocated by Donnachie and Landshoff [17], in the present TOTEM analysis at 13 TeV the Coulomb phase is forcing the ρ values to be rather small, and according to their model the presence of the Coulomb phase will have a negative impact on the description of lower energies. In our previous work, we showed that at 13 TeV the relative phase reduces the value of the parameter ρ and, as a consequence, the magnitude of the real nuclear part near the origin becomes smaller than the magnitude of the Coulomb part, mitigating the existence of the first zero [10]. A similar feature was recently noticed by Selyugin [18], who observed that a smaller value of ρ leads to a wrong determination of the t dependence of the real amplitude. The existence of |t_ξ1| may be important in the determination of the forward parameter ρ, since it is determined in a region where there is a strong interplay between T_R^N and T_C. From the point of view of the models, it would be interesting to see how their real parts compare with the subtracted data. This could be an important test to constrain their real nuclear amplitudes.
FIG. 1. Amplitudes. We show the evolution of the nuclear real amplitude from ISR to LHC energies [5] and the absolute value of the Coulomb amplitude. We observe that around some critical energy √s_c = 0.15 TeV there is a dominance of the real nuclear amplitude over the Coulomb part at |t_m|, where 0 < |t_m| < |t_R|. The existence of such dominance is present in several models of elastic scattering.
FIG. 2 .
FIG. 2. We show the nuclear real, the Coulomb, the absolute value of the Coulomb and the combined nuclear real and Coulomb amplitudes.Within the region |t ξ 1 | < |t| < |t ξ 2 | we represent T N R > |TC | and as we prove, the combined T N R + TC has two zeros, one at |t ξ 1 | and the other at |t ξ 2 |.
FIG. 4 .
FIG. 4. Subtracted Totem data at 13 TeV on a log-log scale. Since negative values appear when T_I^N is subtracted from the Totem data, a factor of 0.015 is added to the subtracted data in order to make the log-log plot possible, avoiding negative values. In this representation the trend of the existence of the two zeros is more apparent.
FIG. 3. Totem and Atlas data, subtracted as in Eq. (12) using the model of Ref. [5], up to |t| = 0.12 GeV², compared with lines calculated from the model proposed in Ref. [5]. For each experiment we add a factor to unstack the data sets. The open circles represent the positions of the zeros |t_ξ1| and |t_ξ2|. One can see that the experimental points at all three energies, for both Totem and Atlas, present the shallow zero |t_ξ2| close to 0.05 GeV², which slowly approaches the origin with increasing energy. However, in the region where the first zero should occur all the data sets present larger errors and/or strong fluctuations, but Totem at 13 TeV shows a trend precisely at 0.0055 GeV², where |t_ξ1| is expected to be observed. | 3,438.4 | 2022-11-16T00:00:00.000 | [
"Physics"
] |
High-speed two-photon laser scanning stereomicroscopy for tracking multiple particles simultaneously in three dimensions
In this paper, we will describe a video rate two-photon laser scanning stereomicroscopy for imaging-based three-dimensional particle tracking. Using a resonant galvanometer, we have now achieved 30 volumes per second (frame size 512 × 512) in volumetric imaging. Owing to the pulse multiplexing and demultiplexing techniques, the system does not suffer the speed loss for taking two parallax views of a volume. The switching time between left and right views is reduced to several nanoseconds. The extremely fast view switching and high volumetric imaging speed allow us to track fast transport processes of nanoparticles in deep light-scattering media. For instance, in 1%-intralipid solution and fibrillar scaffolds, the tracking penetration depth can be around 400 μm.
Introduction
Particle tracking has become a powerful tool to investigate molecular transport and biochemical dynamics in cells and tissues. To be able to track particles in tissues, commonly considered as scattering media, imaging methods with a high spatiotemporal resolution and a deep penetration depth are desired. For high spatiotemporal resolution imaging, camera (CCD or CMOS) based optical wide-field microscopes would be a preferred choice because of their high imaging speed. However, the imaging depth of camera-based microscopies is limited by the mean free path (MFP, 1/scattering coefficient), which is a tissue-dependent depth beyond which scattering overwhelms the imaging signals (Ntziachristos, 2010; Leigh et al., 2014). Confocal and multiphoton laser scanning microscopies are commonly used to overcome the MFP limit (Ntziachristos, 2010). Note that two-photon scanned light-sheet microscopy (LSM) (Spille et al., 2012), multifocal plane microscopy (MPM) (Ram et al., 2008) and selective plane illumination microscopy (SPIM) have been developed for particle tracking. However, these methods still use cameras as recording devices; the imaging depth is still at the level of the MFP.
To solve these problems, two-photon (2p) laser scanning microscopy has become a standard method for imaging deep light-scattering tissue, but only a few microscope developments for 3D particle tracking based on two-photon excitation have been reported. By using 2p laser scanning microscopy, the imaging depth can be larger than the mean free path. For example, the optical phase-locked ultrasound lens (OPLUL) and the tunable acoustic gradient (TAG) lens have achieved fast continuous volumetric imaging at speeds of tens of hertz in two-photon laser scanning microscopy (Kong et al., 2015; Hou et al., 2020). However, the continuous imaging method decreases the exposure time, leading to a lower S/N when it is applied to particle tracking.
More recently, engineered 3D point spread function (PSF) laser scanning microscopy has tracked fluorescent particles with 3D super-resolution (Wang et al., 2017; Wang et al., 2020). However, this phase-mask-based engineered PSF method has a limited axial tracking range of 3 μm (Shuang et al., 2016). Moreover, two-photon laser scanning microscopies using an active feedback tracking method (Ram et al., 2008; Ding and Li, 2016) provide high temporal resolution and high particle-tracking localization precision, but cannot track multiple particles in one experiment. Our future application aims to investigate the 3D dynamic transport of nanoparticles in biological tissues. Based on the ergodic hypothesis (Bel and Barkai, 2005) in particle tracking, we need to track multiple trajectories in one experiment.
We have previously reported a two-photon laser scanning stereomicroscope operating at 1.4 volumes per second (Yang et al., 2016). Extended depth of field (EDF) can be used in two-photon laser scanning fluorescence microscopy to improve the volumetric imaging speed for observing fast particle transport in three dimensions. By scanning extended beams with photomultiplier detection, we project particle motions into two views of a volume and recover the depth information. However, using conventional galvo scanning, the stereoscopic imaging approach with successive frames for the left and right views resulted in a time delay of milliseconds between different views and tens of milliseconds between different volumes. This delay destroys the simultaneity of stereoscopic imaging and decreases the temporal resolution for particle tracking. For instance, Abbas et al. developed kilohertz frame-rate two-photon tomography using a scanned line angular projection beam (Ding and Li, 2016). Currently there is no single solution to all of the above issues.
To solve these problems, we report video-rate two-photon laser scanning stereomicroscopy (vLSSM, Figure 1A), a new approach that achieves fast volumetric imaging based on our previous two-photon stereomicroscope. Owing to the pulse multiplexing and demultiplexing technique, our system switches between the left and right views within several nanoseconds. The temporally identified views enable the system to deal with overlapping moving objects. Herein, we demonstrate that the vLSSM can track the motions of tens of particles at 30 Hz by using resonant scanning. The observation depth in intralipid solution and in tissue-engineered fibrillar scaffolds can be around 400 μm with 2p laser scanning excitation. Tank et al. applied a V-shaped extended PSF to volumetric two-photon imaging of neurons using stereoscopy (vTwINS) (Song et al., 2017) at 30 Hz. However, resolving overlapping moving objects in particle tracking was challenging with these methods. Our reported system can avoid this pitfall with its two temporally separated parallax views.
Stereomicroscopy setup
The three essential requirements for constructing a stereomicroscope are an extended depth of field (EDF) to keep a large volume of the sample in focus, two viewing angles to form a parallax, and an appropriate simultaneity of imaging of the two views to co-register the features and enable successful depth perception. To meet these requirements, it is beneficial to generate the parallax by tilting Gaussian beams with a resonant mirror because of its fast scanning speed. The X-axis resonant mirror (SC-30, Electro-Optical Products Corp.) and the Y-axis galvo mirror (GVS001 single-axis motor/mirror, GHS001 post adapter, Thorlabs), shown in Figure 1A, rotate synchronously and are kept parallel during scanning. Since the scanning speed of the resonant mirror is about 8 kHz for small angles, the imaging speed can reach video rate (30 volumes/s). The stereo-scanner (the resonant and galvo scanner pair) can replace the conventional X-Y galvo scanner in a two-photon microscope and delivers the fast, stable, and flexible scanning behavior that the vLSSM needs. Another remarkable feature of the vLSSM is the short time delay between the left and right views. This delay can be reduced to the nanosecond level by switching views pulse-by-pulse instead of line-by-line. Figure 1B shows the multiplexing and demultiplexing scheme, which recombines the extended-focus beams of the left and right views and separates the resulting PMT pulses with a discriminator (ORTEC 935, ORTEC). As shown in Figure 1B, using 80 MHz veto synchronization signals from the pulsed laser, the discriminator separates the multiplexed fluorescence signals into two channels. The gating frequency of the discriminator can be as fast as 80 MHz, sufficient to handle the two channels.
In order to achieve high-speed volumetric imaging, an ultrafast Ti:Sapphire laser (Chameleon Ultra, Coherent) was tuned to 864 nm for excitation. The galvo control waveform was generated by an analog output channel of a DAQ board (PCI-6363, National Instruments). The emitted fluorescence was collected by a high-NA objective lens (N16XLWD, 0.8 NA, Nikon). The fluorescence then passed through a dichroic mirror (FF662-FDi01, Semrock, Rochester, NY) and a lowpass filter (FF01-665/SP-25, Semrock, Rochester, NY), and was finally focused onto a PMT (PMC-100-4, Becker & Hickl GmbH). The PMT events were digitized with a high-speed digitizer (NI 5732, National Instruments). An FPGA detection module (PXIe 7961R, National Instruments) assembled the digital signals into images, and the synchronized resonant line waveform was detected by a digital input channel. Data acquisition from the FPGA module was synchronized with the laser pulses. The PXI chassis provides the mechanical control and high-speed data acquisition that real-time imaging requires in our system; the high-speed data acquisition and digital/analog modules are inserted in the PXI chassis. Based on SciScan (Scientifica), the control program was written in LabVIEW (National Instruments).
3D Depth reconstruction and particle trajectory detection
To track the trajectories of particle motions in 3D, we first need to recover the stereoscopic depth of the particles from the stereo-images. In conventional two-photon laser scanning microscopy (TPM), the depth is directly related to the different layers of a z-stack image. In the vLSSM, however, the depth information is encoded in the stereo-pair. If the fluorophore distribution is sparsely dispersed in 3D and presents as recognizable features in the EDF images, the depth information can be reconstructed by feature-based correspondence algorithms. As shown in Figure 2, the scheme in brief is: 1) We first create the stereo-pair from the left-view and right-view images. 2) We denoise the images with a low-pass filter and deconvolve them with the Lucy-Richardson algorithm (Solomon and Breckon, 2011) to increase the contrast of the particles. 3) The particles are then segmented by an adaptive threshold value. Morphological features such as simple geometric shape, intensity, and position of single particles are extracted from the left-view and right-view images. 4) We compare all features in adjacent rows between the left and right images, and find the best match based on a combined factor of feature position, size, and encircled energy. The circular Hough transform (Davies, 2005) is used to find circular objects in the stereo-pair captured by the vLSSM. 5) We calculate the depth of each matched particle from the displacement of the same object between the left and right views by a linear transform under parallel projection geometry. The details were described in our previous paper (Yang et al., 2016).
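As an illustration only, the short Python sketch below mirrors steps 2-5 above using common open-source image-processing routines (NumPy, SciPy, scikit-image). The filter sizes, Hough radii, and the disparity-to-depth gain are illustrative assumptions, not the parameters used in this work.

```python
# Hypothetical sketch of the stereo depth-recovery steps: denoise,
# deconvolve, detect circular particles, match them between views,
# and convert the parallax displacement to a relative depth.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.restoration import richardson_lucy
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_particles(view, psf, radii=np.arange(3, 12)):
    """Return (x, y, r) of circular particles found in one EDF projection."""
    img = gaussian_filter(view.astype(float), sigma=1.0)   # low-pass denoising
    img /= img.max()
    img = richardson_lucy(img, psf, 20)                    # Lucy-Richardson deconvolution
    edges = canny(img, sigma=1.5)
    accum = hough_circle(edges, radii)                     # circular Hough transform
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=50)
    return np.column_stack([cx, cy, r]).astype(float)

def stereo_depths(left_pts, right_pts, gain_um_per_px=1.0, max_row_gap=3.0):
    """Match particles row-by-row and convert disparity to relative depth (um)."""
    depths = []
    for x_l, y_l, r_l in left_pts:
        cand = right_pts[np.abs(right_pts[:, 1] - y_l) < max_row_gap]  # same row
        if len(cand) == 0:
            continue
        best = cand[np.argmin(np.abs(cand[:, 2] - r_l))]   # most similar size
        disparity = x_l - best[0]                          # parallax shift (pixels)
        depths.append(disparity * gain_um_per_px)          # linear transform to z
    return np.array(depths)
```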
Tracking, or particle linking, is necessary to rebuild the trajectories of one or several particles as they move over time. Their positions are reported in each frame, but their identities are yet unknown; we do not know which particle in one frame corresponds to which particle in the previous frame. Tracking algorithms aim at providing a solution to this problem. As shown in Figure 2, the scheme in brief is: 1) The Hungarian linker (Liu et al., 2021; Stevens and Sciacchitano, 2021; Oleksiienko et al., 2022) starts with a frame-to-frame linking step, where links are first created between each pair of consecutive frames. 2) Then a second iteration is done through the data, investigating the linking distance between frames until the track ends.
3) Finding the trajectories of particles and bridging gaps between frames is realized by minimizing the linking distance. If a track beginning is found close to a track end in a subsequent frame, a link spanning multiple frames can be created to restore the track. Source-to-target assignment is based on the well-known Hungarian algorithm, implemented in MATLAB (MathWorks) (Tinevez, 2022). The depth reconstruction and particle tracking algorithms run on a desktop computer (Intel i9-7920X CPU). The overall 3D reconstruction and tracking process takes 5-10 min for each image sequence with 30-50 particles.
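For readers who prefer an open-source starting point, a minimal frame-to-frame linking step can be sketched with SciPy's optimal-assignment solver, as below. This is not the MATLAB linker used here (Tinevez, 2022); the gating distance `max_link_dist` and the simple handling of new tracks are illustrative assumptions.

```python
# Minimal sketch of frame-to-frame particle linking with the Hungarian
# (optimal assignment) algorithm, assuming SciPy is available.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def link_frames(pts_prev, pts_next, max_link_dist=5.0):
    """Match particle positions (N x 3 arrays, in um) between two frames.

    Returns (i_prev, j_next) index pairs; assignments farther than
    max_link_dist are treated as track ends/starts instead of links.
    """
    cost = cdist(pts_prev, pts_next)            # Euclidean distances
    rows, cols = linear_sum_assignment(cost)    # Hungarian assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_link_dist]

def build_tracks(frames, max_link_dist=5.0):
    """Chain frame-to-frame links into trajectories (lists of positions)."""
    tracks = [[p] for p in frames[0]]
    open_idx = list(range(len(tracks)))         # track id of each particle in the last frame
    for prev, nxt in zip(frames[:-1], frames[1:]):
        links = link_frames(prev, nxt, max_link_dist)
        new_open = [None] * len(nxt)
        for i, j in links:
            tracks[open_idx[i]].append(nxt[j])
            new_open[j] = open_idx[i]
        for j, owner in enumerate(new_open):    # unlinked detections start new tracks
            if owner is None:
                tracks.append([nxt[j]])
                new_open[j] = len(tracks) - 1
        open_idx = new_open
    return tracks
```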
Localization precision of 3D reconstruction
For a precise characterization of the vLSSM, the whole excitation pathway was measured to have a focal volume of about 110 μm × 110 μm × 100 μm by scanning the tilted Gaussian beams. The focal volume was measured with an in-situ camera after the objective. By moving a reflective mirror along the z-axis with the digital stage, the axial FWHM of the Gaussian foci was reconstructed as 100 μm. The x-axis and y-axis focal area was likewise calibrated by moving a resolution target with the digital stage, giving 110 μm × 110 μm. It is possible to change the FWHM (full width at half maximum) of a beam such that the lateral resolution and the depth of field are both tunable. The two-photon excitation PSF (PSF2p) of the Gaussian beams was measured using small tilt angles (≈28.1°).
We first demonstrated the imaging performance of the vLSSM using 4.46 μm diameter fluorescent beads (Cart21637, Polysciences), as shown in Figure 3. The high-resolution stereo-pair frames clearly demonstrate the vLSSM's depth-recovery ability based on two-photon excitation of Gaussian beams. With the feature-based depth reconstruction method, the fluorescent beads, shown in the red and cyan channels in Figure 3A, can simply be recognized as circular objects, with their sizes and locations determined by the circular Hough transform. We identified 13 circular objects and obtained their disparities between the left and right views. The relative depths of the objects were calculated from their disparities by triangulation. To determine the accuracy of the recovered depths, we acquired a 3D stack of the same volume with conventional two-photon microscopy to establish ground-truth depth values, which were used to evaluate the accuracy of the depths recovered from the stereo-pair in Figure 3E. Figure 3D shows all 13 objects with their depth errors labelled with a color map that spans from green to red. Figure 3E gives the histogram of the depth error. About 42.9% of the objects are localized axially with a depth error of less than 2 μm, and 100% with less than 5 μm. The depth error cannot be fully described by a normal distribution, depicted as the superposed red line; the mean depth error is 2.4836 ± 0.2 μm. To characterize the localization precision of the system, we measured the standard deviation of the positions of particles (4.46 μm in diameter) along imaging sequences, shown in Figure 3F. It is 0.1 and 0.08 μm along the X and Y axes, respectively, and 1.1 μm along the Z axis.
Multiple particle tracking in 3D
We then demonstrated the performance of the vLSSM for particle tracking in light-scattering media such as intralipid/phosphate-buffered saline (PBS) solution. The depth information is maintained without distortion thanks to the high-speed volumetric imaging. A time sequence (4 s) of stereo-pairs was recorded and processed by the tracking algorithm. Figure 4A shows the trajectories of nanoparticles with a 500 nm diameter in 1% intralipid/PBS solution at three penetration depths: 200, 300, and 400 μm (see Supplementary Video S1). The corresponding square displacement with respect to time was calculated. There were 120 recorded time-sequential steps, and the corresponding imaging speed was 30 volumes/s with a frame size of 512 × 512 × 2 pixels. Driven by gravity, the particles fell along the Z-axis with a mean rate of ≈10 μm²/s. Compared to our previous stereomicroscopy, the volumetric imaging speed of the vLSSM provides more time steps and allows the recovery of particle trajectories. The reduced scattering coefficient (μs') and scattering coefficient (μs) of 1% intralipid/PBS solution under 864 nm excitation are around 0.85 and 3 mm⁻¹, respectively (Grabtchak et al., 2012). For 2p laser scanning microscopy, the depth of imaging is expressed in terms of the transport mean free path (TMFP = 1/μs') and the mean free path (MFP = 1/μs) (Leigh et al., 2014). The theoretical penetration depth therefore ranges from the mean free path (0.3 mm) to the transport mean free path (1.2 mm). For camera-based wide-field microscopy, by contrast, the theoretical penetration depth is limited to less than the mean free path (0.3 mm). The penetration depth of the vLSSM in 1% intralipid/PBS can be around 0.4 mm.
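A quick back-of-the-envelope check of these bounds, assuming the scattering coefficients quoted above:

```python
# MFP/TMFP bounds implied by the stated coefficients for 1% intralipid/PBS at 864 nm.
mu_s = 3.0          # scattering coefficient, mm^-1
mu_s_prime = 0.85   # reduced scattering coefficient, mm^-1

mfp = 1.0 / mu_s            # mean free path ~ 0.33 mm (camera-based limit)
tmfp = 1.0 / mu_s_prime     # transport mean free path ~ 1.18 mm (2p upper bound)
print(f"MFP = {mfp:.2f} mm, TMFP = {tmfp:.2f} mm")
# The reported ~0.4 mm tracking depth sits between these two bounds.
```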
We then tracked nanoparticle (500 nm in diameter) transport in both randomly distributed and aligned gelatin scaffolds (see Supplementary Video S2), which were fabricated with an electrospinning (ES) machine, at a depth of 400 μm, as shown in Figure 4B. The image processing and analysis are the same as for the intralipid/PBS solution. The transport behavior of particles in tissue-engineered materials depends on the particle size and the fibrillar orientation. During material preparation, the fibrillar orientation is controlled by the rotation speed of the electrospinning motor.
From the scanning electron microscopy (SEM) images shown in Figure 4B, we measured the fibrillar orientation distribution. The analysis algorithm is based on DiameterJ, an open-source plugin created for ImageJ. In the aligned scaffolds, the peak orientation direction is 90° with respect to the X-axis at a normalized frequency of 0.05. In the randomly distributed fibrillar scaffolds, the peak orientation direction is 40° with respect to the X-axis at a normalized frequency of 0.02. Because of structural sliding and gravity driving in the aligned gelatin scaffolds, the particles migrated along the fiber alignment direction. In the randomly distributed scaffolds, the particles migrated randomly, with no preferred direction.
Discussion and conclusion
The current sensitivity and localization precision of our vLSSM cannot follow nanoparticles smaller than 500 nm in deep scattering media. However, the system can be improved, yielding higher tracking localization precision, by the following strategies: increasing the collection efficiency of the system, increasing the exposure time of the particle objects, and increasing the brightness of the tracked fluorescent particles. The collection efficiency of the microscope scales with the square of the numerical aperture (NA). By increasing the objective NA, the sensitivity of the microscope can be greatly improved. However, the major drawback of this approach is that the z-axis size of the focal spot also decreases: with a higher numerical aperture, the same system has a smaller DOF (Thériault et al., 2014; Zong et al., 2015). Moreover, by decreasing the scanning scale we can decrease the scanning speed, and the slower scanning increases the exposure time of the particle objects. The brightness of particles depends on the excited fluorescent core material, such as polymer, carbon, or inorganic sulfides with stabilizing ligands. Generally, quantum dots composed of inorganic sulfides with stabilizing ligands are brighter than the polymeric spheres used in this paper. However, compared with carbon dots, quantum dots raise toxicity concerns, although carbon dots lack size-dependent emission wavelengths (Cao et al., 2012; Liu et al., 2015). In the future, we may track 100-500 nm carbon dots in live deep tissues. In summary, all the above strategies have the potential to overcome the 500 nm scale limitation of our tracking method in deep biological tissues.
In this paper, we have developed a video-rate two-photon laser scanning stereomicroscope for 3D particle tracking using only two parallax frames. By stereoscopically scanning extended Gaussian beams in the excitation pathway, the particle depth could be reconstructed with 1.1 μm z-axis localization precision. We demonstrated the ability of the vLSSM to track multiple nanoparticles in three dimensions in deep intralipid medium and in aligned and randomly distributed scaffolds. The fiber alignment was strongly reflected in the tracked particle trajectories.
With resonant-galvo scanners, the system achieves a speed of thirty volumes per second and a high S/N, allowing multiple trajectories to be tracked in three dimensions even in deep light-scattering media. Moreover, with the superior penetration ability of two-photon excitation, it is possible to capture fast dynamic events in deep biological tissues by using two-photon stereomicroscopy. The stereoscopic technique described herein is anticipated to be implemented as an add-on imaging mode on a standard two-photon fluorescence microscope to serve various imaging applications. Three-dimensional fluorescent molecular motions in deep biological tissues could then be viewed directly in real time by wearing 3D glasses.
FIGURE 2. Flowchart of the 3D depth reconstruction and particle trajectory algorithm.
FIGURE 1. Schematic of the vLSSM. (A) Setup of the video-rate two-photon laser scanning stereomicroscope. M1-M5: mirrors; DIC: dichroic mirror; PBS: polarizing beamsplitter; G: galvo scanner; R: resonant scanner; PMT: photomultiplier tubes. Reflectance mirrors M2 and M3 keep the two parallax beams parallel. The X-axis resonant mirror and Y-axis galvo mirror scan a projection volume by X-Y scanning. Two raw individual frames contain the left and right views corresponding to the two parallax beams, respectively. (B) Multiplexing and demultiplexing scheme. In multiplexing, laser pulses split into two pathways, and pathway 2 is delayed. The 6.25 ns delay in pathway 2 results in perfectly interleaved pulses upon recombination. In demultiplexing, PMT pulses are equally divided into two pathways. The second pathway is delayed by 6.25 ns. A veto signal width of 6.25 ns is applied to the two detection pathways, resulting in two demultiplexed pulse streams.
FIGURE 3. Characterizations of the vLSSM. (A) Stereo-pairs of fluorescent beads (4.46 μm) captured in stereo mode. Red indicates the left view and cyan the right view. (B) Sum projection of the stack along the z-axis. Color bar: 1 frame. (C) 4 selected images from an image stack consisting of 101 slices taken in standard two-photon mode. (D) 3D map of objects with their depths recovered from the stereo-pair in (A). Higher depth error is indicated in red. (E) Histogram of the depth error with a superposed normal distribution (red line). (F) Localization positions along the X, Y and Z axes over a time sequence of 2 s.
FIGURE 4. 3D particle tracking reconstruction with (X, Y, T); red indicates the left view and cyan the right view. (A) At depths of 200, 300, and 400 μm, the trajectories of nanoparticles in 1% intralipid/PBS solution were tracked with respect to time. (B) The total trajectories of nanoparticles in aligned and random gels, tracked with the projection of time flow. Scanning electron microscopy (SEM) images of the fibrillar structure of both aligned and random gelatin at 3.0k magnification. (C) Partial particle trajectories reconstructed from the frame series. Scale bar: 5 μm. | 4,530.4 | 2022-09-06T00:00:00.000 | [
"Physics"
] |
The “3 Genomic Numbers” Discovery: How Our Genome's Single-Stranded DNA Sequence Is “Self-Designed” as a Numerical Whole
This article proves the existence of a hyper-precise global numerical meta-architecture unifying, structuring, binding and controlling the billion triplet codons constituting the single-stranded DNA sequence of the entire human genome. Beyond evolution and erratic mutations such as transposons within the genome, it is as if the memory of a fossil genome with multiple symmetries persists. This recalls the “intermingling” of information characterizing the fractal universe of chaos theory. The result leads to a balanced and perfect tuning between the masses of the two strands of the huge DNA molecule that constitutes our genome. We show here how the codon populations forming single-stranded DNA sequences can constitute a critical approach to the understanding of junk DNA function. Then, we suggest revisiting certain methods published in our 2009 book “Codex Biogenesis”. In fact, we demonstrate here how the universal genetic code table is a powerful analytical filter for characterizing the single-stranded DNA sequences constituting chromosomes and genomes. We can then show that any genomic DNA sequence is characterized by three numbers, which describe it and its 64 codon populations with correlations greater than 99%. The number “1” is common to all sequences, expressing the second law of Chargaff. The other 2 numbers are specific to each DNA sequence, characterizing the species. For example, the entire human genome is characterized by the three remarkable numbers 1, 2, and Phi = 1.618, the golden ratio. With each of these three numbers we can associate an axis of symmetry, and then “imagine” a kind of hyperspace formed by these codon populations. We then revisit the value (3-Phi)/2, which is probably universal and common to both the quark and atomic scales, balancing and tuning the whole human genome codon population. Finally, we demonstrate a new kind of duality between “form and substance” overlapping the whole human genome: we show that, simultaneously with the duality between genes and junk DNA, there is a second layer of embedded hidden structure overlapping all the DNA of the whole human genome, dividing it into a second type of duality, information/redundancy, involving golden ratio proportions.
Introduction
"The beginning (1) is the middle (2) of the whole (Phi)."Here is my interpretation of this famous sentence from Pythagoras [#].In line with the last Sergey Petoukhov paper published in Symmetrion [1], we show here how codon populations forming the single-stranded DNA sequences can constitute a critical approach to the understanding of junk DNAfunction.Having devoted an entire book "Codex Bio-genesis" [2]-a French edition-to the analysis of single-stranded DNA codon populations of the entire human genome [3], after improving various methods of analysis, it seemed interesting to revisit a subset of these methods.The reader will find a summary of these methods in [4], and in [5] particularly.
Indeed, this article focuses on the study of the diversity of genomes and chromosomes by analyzing them through their codon populations. For this, we comprehensively consider the single-stranded DNA sequences forming chromosomes and genomes.
Until now, genomic diversity has been studied at other genetic scales, most often by analyzing the variability: -At the gene level.
-Then across the respective proportions of the nucleotides TCAG, based on Chargaff's second law; we recall here the following statement: the second parity rule holds that both %A ≈ %T and %G ≈ %C are valid for each of the two DNA strands. This describes only a global feature of the base composition in a single DNA strand [6] (a small numerical sketch of this per-strand check is given just after this list).
-Among other approaches, we can mention, for example, the original research of Professor Giorgio Bernardi on "isochores" [7,8].
-Finally, we mention some other original research such as the "Z curve" approach: the Z curve is one such tool available for visualizing genomes. It is a unique three-dimensional curve representation of a given DNA sequence, in the sense that each can be uniquely reconstructed from the other [12].
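As referenced in the list above, here is a minimal sketch (Python, with a toy sequence rather than genomic data) of the per-strand check behind Chargaff's second rule:

```python
# Per-strand proportions of T, C, A, G for one DNA strand; on long genomic
# strands, Chargaff's second rule predicts %A ~ %T and %G ~ %C.
from collections import Counter

def chargaff_second_rule(seq):
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "TCAG")
    return {b: counts[b] / total for b in "TCAG"}

print(chargaff_second_rule("ATTGCCGTAAGCATCGTTAACGGATC"))  # toy example
```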
The benefits resulting from a chromosome- and genome-level codon analysis are as surprising as they are significant: 1) Junk DNA and DNA strand atomic mass tuning: our paper points to a strong utility for the still unexplained junk DNA. We show that this role most likely contributes to the fine tuning of the atomic masses of the huge double-stranded DNA molecule.
2) Universal genetic code table "lens" and "matrix": second, everyone knows that the main function of the universal genetic code table is the correspondence between the 64 codons of DNA and RNA, on the one hand, and the 20 possible amino acids, on the other hand. Yet, as shown in our 2010 paper [13] and then in Professor Petoukhov's research [1], we demonstrate throughout this article a second, equally important function: its role as a "filter" or "matrix" which determines the relative proportions of each of the 64 codons in the single-stranded DNA sequences of chromosomes or genomes.
3) Numerical DNA constraints: third, our results demonstrate that the relative proportions of codons in DNA are "forced", constrained and controlled, one might even say "weighted" and "fine-tuned", by laws of a numerical, mathematical nature, which is radically innovative.
4) The 3 genomic numbers and species diversity: fourth, the analysis of codon populations obeys three numbers characterizing each specific chromosome or genome: "the 3 genomic numbers". This law is universal.
5) Human genome and chromosome genomic number diversity: fifth, the methods and results presented here apply simultaneously to the scale of whole genomes and to each chromosome considered individually. This again is a universal character of these laws. In particular, in the human genome case, this dual level of strong mathematical constraints leads to remarkable genomic numbers across all 24 chromosomes as well as across the entire genome.
6) Some 3-D speculations: sixth, we consider possible conceptualizations and materializations of the billion codons of the human genome unfolding in three-dimensional mathematical spaces determined by the values of the three genomic numbers.
7) "Form and Substance", "information and redundancy" in the human genome (Figure 1): seventh, we will demonstrate that, simultaneously with the duality between genes and junk DNA, there is a second hidden level of structure spanning all the DNA of the human genome, dividing it into a second type of duality, information/redundancy. 8) Another, even stranger genomic number: is (3-Phi)/2 a universal value? Finally, the tuning of the whole human genome to the outstanding value of (3-Phi)/2 leads us to the question of a possible universality of this number, well beyond genomics.
Symmetries and Numerical Structures of the Whole Human Genome
In his 2013 paper [1], Prof. Petoukhov computes whole-human-genome codon populations, illustrating his Symmetry Principle No. 3. He then provides evidence of Chargaff's second rule at the scale of the whole human genome, and introduces the T <==> A and C <==> G symmetry operator. Meanwhile, in [2], chapter 6, and in [13], we demonstrate from the same data (the S0 level in Petoukhov's matrix genomics) the existence of 3 other singular whole-human-genome symmetries. In Figure 2, showing the 64 sorted codon frequencies, there is evidence of 3 facts: -the first, as reported by Petoukhov in his paper, is the extended diversity range of these frequencies.
-the second strange fact is the perfect symmetry of the codons, which appear sorted in pairs.
-the third, even stranger fact is a perfect symmetry of the codons within each pair.
This twin-codons curve is very informative because it already contains the "trace" and the precursors of the 3 genomic numbers and of their association with the 3 axes of symmetry that we uncover progressively throughout this paper.
Thus, apart from the first mirror symmetry (§2.2), emerging from the side-by-side partition between the 32 master codons and their 32 linked twin codons (this is the first of three symmetries), we successively find: a second, vertical symmetry dividing the ranked, sorted codon populations into the 32 most frequent codons and the 32 least frequent codons. If we combine the first 2 partitions, we obtain a clustering into 4 quartiles, as shown in §2.3.
A third symmetry appears when observing the twin-codons curve: there is a sharp break between the first 56 codons and the last 8 codons (Figure 2). These 8 codons form the last "octave" of this hierarchy. By exploiting this information, we discover the rich potential of a partition of the 64 codon populations into 8 octaves of 8 codons each.
But before examining these three nested symmetries, we introduce a very curious discovery ([2], chapter 1), which we state as follows: the ratio between the combined population of the 64 codons and the population of the two most frequent codons (TTT + AAA) is equal to the number "13". This ratio holds for each of the three codon reading frames.
The corollary is, of course, that the ratio between the cumulative population of the 62 codons other than TTT and AAA and the combined population of these two codons, AAA and TTT, is the number "12". But many other surprises awaited us in this study… Notes: for the readability of this article, I must add here Professor George Church's (http://arep.med.harvard.edu/gmc) advice: "you are using terminology in a way that may confuse biologists. For example using the words "triplet" and "codon" interchangeably (rather than restricting the latter to "reading frames" = genomic regions known to be translated by ribosomes). Also using the term "mirror" instead of "reverse-complement". Mirror typically means same sequence but different chirality". So, I consider here triplets of nucleotides as codon reading frames overlapping the whole genomic DNA, independently of the gene-coding world restricted to amino acid translations.
"13 and 144": The TTT and AAA Fibonacci Symmetry
In [2], Chapter 5, Figure 5.1 presents the 3 sets of 64 codon populations, one for each of the 3 possible codon reading frames. Observe that the 3 cumulated codon totals for these 3 reading frames are 947803867, 947803881, and 947803864. On the other hand, the 2 × 3 populations of the codons TTT and AAA for these 3 reading frames are summarized in Table 1. Speculations on these results: the reader will observe that 13 and 144 are both Fibonacci numbers. In addition, in the Fibonacci sequence 1 2 3 5 8 13 21 34 55 89 144 ..., the steps from 1 to 13 and from 13 to 144 are equidistant (5 positions apart), corresponding to a factor close to Phi^5 (with Phi the golden ratio). See the details in the synthesis of Table 2.
The First Symmetry Axis
First, when we analyze the detailed values of these codons, a perfect "mirror codon property" appears: each codon within a pair has a complementary mirror (reverse) codon; example: TTC <==> GAA. In fact, we extend Chargaff's second rule from the domain of single TCAG nucleotides to the larger domain of codon triplets; please see the details in [2], particularly in Chapter 6.
In Figure 2, we see that there is indeed a formal relationship between each odd-ranked codon and its even-ranked alter ego: in the first line TTT faces AAA, in the second line AAT faces ATT, in the third line AGA faces TCT, and so on for each of the 32 pairs of codons. The formal relationship between each codon and its associated mirror codon is so simple that it can be stated as an algorithm. Consider the "master codon" and the "mirror codon" of any one of the 32 pairs of codons matched by mirror symmetry; we compute the function mirror = F(master), for example TCG <==> CGA, as follows: Step 1: reverse the master codon on itself (e.g. TCG ==> GCT). Step 2: complement the result of Step 1 using the Watson/Crick law of complementary bases (e.g. GCT ==> CGA). The final result is the mirror codon.
To summarize: TCG ==> GCT (Step 1) ==> CGA (Step 2). Thus, each mirror codon is obtained simply by reversing the codon on itself (e.g. TCG <==> GCT) and then applying the Watson/Crick complementarity of bases, T <==> A and C <==> G (e.g. GCT <==> CGA). Table 3 then shows the evidence of these "codon mirrors" emerging from the population of the 64 codons of the human genome, with the odd-ranked codons listed on the left and their even-ranked mirror codons on the right.
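The two-step rule above is easy to express in a few lines of code; the sketch below (Python, with test codons taken from the examples in the text) is simply a restatement of that rule:

```python
# "Mirror codon" (reverse-complement) rule: reverse the triplet, then
# complement each base (T<->A, C<->G).
COMPLEMENT = {"T": "A", "A": "T", "C": "G", "G": "C"}

def mirror_codon(codon):
    """Return the mirror (reverse-complement) of a 3-letter codon."""
    reversed_codon = codon[::-1]                           # Step 1: TCG -> GCT
    return "".join(COMPLEMENT[b] for b in reversed_codon)  # Step 2: GCT -> CGA

assert mirror_codon("TCG") == "CGA"
assert mirror_codon("TTC") == "GAA"
assert mirror_codon("TTT") == "AAA"
```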
The cumulated odd and even codon populations are 474,337,193 and 473,466,674, so the odd/even ratio is 474337193/473466674 = 1.001838607. It is a real "partition" of the whole human genome: as shown in Table 3, the two respective codon populations forming the two partitions of the genome are correlated at 99.9995%.
The Second Symmetry Axis
Furthermore, this ratio equals 2; Table 4 and the summary Table 5 show how the various ratios combining these four quartiles highlight several notable integers. We can already conclude that there is evidence of a high level of numerical constraints structuring the codon populations of the whole human genome.
As demonstrated by Table 4, the population of the 32 most frequent codons is very nearly twice as large as the population of the 32 remaining, least frequent codons. The exact ratio is 631430091/316373776 = 1.995835745. In other words, if we consider 2 clusters of 32 codon populations each, the most frequent (Q1 + Q2) is about twice as numerous as the 32 least frequent codons (Q3 + Q4); the exact ratio is 1.995859355.
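A one-line check of these figures, using the totals quoted in the text:

```python
# Quartile-ratio check from the codon-population totals given above.
top32 = 631_430_091      # 32 most frequent codons
bottom32 = 316_373_776   # 32 least frequent codons

print(top32 / bottom32)  # ~1.9958..., close to the integer 2
print(top32 + bottom32)  # 947_803_867, the whole-genome codon total
```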
The "Human Genome's PEACE SYMBOL" or "Cross of Nero"
An immediate consequence of this discovery: we provide here, for the first time in the history of genetics, the formal proof of the existence of a global mathematical organizational law of a whole genome: the human genome. This law is both numerical (revealing and accurately reproducing adjusted integers) and symbolic (through the graphical analogy with the universal and highly symbolic "PEACE SYMBOL" of Figure 3).
The Third Symmetry Axis
CG SYMMETRY: the next table, Table 6, focuses on the last 2 × 4 = 8 codons of the sorted codon populations from the single-stranded DNA of the human genome, all containing the dinucleotide CG. We will come back to this point later in this article, when we reorganize the 64 codon populations into 8 successive octaves of 8 codons each.
Predicting Genome Level Codon Populations: The 3 Genomic Numbers
In Chapter 19 of the "Codex Biogenesis" book [2], we show how the combined population of the 24 chromosomes of the human genome can be modeled with correlations over 99% (99.99% in the case of human genome) from three characteristic numbers: we call these numbers "the three genomic numbers".In [14], the researcher Jordi Sola Soler from IBEC Barcelona summarizes and reproduces this very colorful and educational demonstration.However, his results were based on a very redun- dant version of the genome in which we had accumulated 12 = 2 × × 3 DNA strands: 2 reading directions, the two strands of the molecule and the three reading frames of the codons.It was only natural that some level of redundancy emerged from this type of analysis.We will show here how and why the analysis of one single-stranded DNA, representing the concatenation of the 24 chromosomes, also produces-exactly the same 3 genomic numbers characterizing the human genome.Now, we consider the single-stranded DNA corresponding to the first of three codon reading frames.The concatenation of the 24 human chromosomes is 947,803,867 combined codons.The Table 7 below shows the 64 populations of codons corresponding to this first basic codon reading frame.
The codon populations are analysed through the well-known universal genetic code matrix of Figure 4. Table 7 corresponds to the conventional representation of the genetic code; it contains 16 rows and 4 columns. We then split the table between rows 8 and 9 (rows 1 and 9 in bold), take the entire second half, and place it to the right of the top half of the table. The resulting new Table 8 is square and contains 8 rows and 8 columns. Here is, for example, the first row, which we call "octave 1": TCT TTT TAT TGT ATT ACT AAT AGT.
For each of the eight octaves thus built, we cumulate the values of the eight columns in each of the 8 rows. Observe the values obtained for the 8 octaves (Table 9).
Observe also the high level of symmetry emerging from the global structure of these 8 octaves when sorting them into 3 clusters, "low/medium/high" (Table 10). Indeed, Table 10 provides evidence of fractal-like embedded symmetries between these 8 octave-scale, long-range structures of codon populations.
By analyzing these eight values, we see that they can be reduced to only 3 numbers: O1 ≈ O3, O2 ≈ O4 ≈ O5 ≈ O7, and O6 ≈ O8. We also note that these values are remarkable because their proportions are very close to 1, 2, and Phi, the golden ratio: O1/O3 = 1.008758196 ≈ 1, O3/O6 = 2.024032673 ≈ 2, O1/O7 = 1.621070512 ≈ Phi. To be more specific, we can do the same calculations on the average of each of these three "numerical attractors".
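A hedged sketch of this octave construction is given below (Python). The input `counts_16x4` stands for the Table 7 codon counts, which are not reproduced in this text, ordered as in the universal genetic code table; the folding into an 8 × 8 table and the three ratios follow the description above.

```python
# Fold the 16 x 4 genetic-code table of codon counts into an 8 x 8 table,
# sum each row to get 8 "octaves", and compare O1/O3, O3/O6 and O1/O7
# with 1, 2 and Phi.
import numpy as np

PHI = (1 + 5 ** 0.5) / 2

def octave_ratios(counts_16x4):
    counts = np.asarray(counts_16x4, dtype=float).reshape(16, 4)
    # rows 9-16 are placed to the right of rows 1-8 (split between rows 8 and 9)
    table_8x8 = np.hstack([counts[:8, :], counts[8:, :]])
    octaves = table_8x8.sum(axis=1)              # O1 ... O8
    return {
        "O1/O3 (~1)":   octaves[0] / octaves[2],
        "O3/O6 (~2)":   octaves[2] / octaves[5],
        "O1/O7 (~Phi)": octaves[0] / octaves[6],
        "Phi":          PHI,
    }
```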
We discuss this redundant nature of genomic DNA in the Conclusions, §5.7.
Secondly, in chapter 19 of the book "Codex Biogenesis" [2], we show the algorithm, based on a cellular automaton, which automatically computes the 64 modeled codon populations from only the set of 3 genomic numbers: indeed, if the 8 octaves can be modeled from only 3 numbers, then what about the ratios between the 3 significant octaves, the 5 remaining (redundant) octaves, and the whole (8 octaves)?
Thus, at the level of cumulated codon populations, the ratios among the (Fibonacci) numbers 3, 5, 8 follow the golden-ratio-based scale 1, Phi, Phi^2. We revisit this strange property in the conclusions, §5.7. Then, if in Table 11 we replace the 9 real values by the 9 ideal model values, there is a strong correlation between the 9 elements of these two vectors: 0.9999791052.
Finally, we show here 3 ways to build the 3 × 3 matrix:
-the above Table 11, studying one single DNA strand (Table 11, the famous 3 × 3 human genomic numbers matrix; in bold, the minimum values for each of the three clusters);
-the below Table 12, unifying the 12 DNA strand reading directions;
-as shown in Table 13 below, Dr Jordi Sola Soler [14] presents analogous results on his "Phi and music in DNA" website (referenced there as Table 12).
We can therefore conclude that three numbers are sufficient to completely characterize the interrelationships between the respective populations of the 64 codons. For each chromosome, each genome, and each species, these numbers vary, with the exception of one: the number "1", always present, which expresses Chargaff's second law. Thus, in the human genome, while the three numbers are 1, 2, and Phi across the entire genome, they differ significantly when considering each of the 24 individual chromosomes; see details in [5]. The summary array (Figure 5) shows the calculated values of the 3 "genomic numbers" for the genomes of 12 different species. In Figure 5, we list all the data related to the various whole genomes: genome label, chromosome number, total bases, the values of each of the 3 genomic numbers, and the model/real genomic correlation as defined in [2], chapter 19.
Despite the great diversity of the genomes studied, we observe that: -In all cases, the correlation between the model prediction and the actual measurement is greater than 99%, and often up to 99.999% (the HIV1 and H5N1 viruses, yeast, the plant Arabidopsis, and Plasmodium falciparum).
-For each of the genomes, two of the three genomic numbers are specific to the genomes and species.
On the other hand, the chimpanzee is as remarkable as man: it shares the same 2 genomic numbers (2, Phi and more) with him, and the model/reality correlation is even better than for man. There is also evidence that some species share exactly the same model, that is to say the same pair of genomic numbers: if this seems "natural" between humans and chimpanzees, it seems "strange" between the plant Arabidopsis and the worm C. elegans...! Finally, it seems (though this would require further refinement) that the 2 genomic numbers specific to each genome are always a simple expression related to the golden ratio, Phi.
The ratios of cumulated populations of groups of eight codons taken alternately correspond to the Golden Ratio, to a perfect octave, and to the quotient between both.
Figure 5. Generalizing the "3 GENOMIC NUMBERS" to the cases of various main complete genomes and species (copyright [2]).
In [13], we showed that the populations of the 64 codons of the whole human genome, when the universal genetic table is reorganized using the successive transforms of the fractal "dragon curve", self-organize around 2 attractors: 1 and (3-Phi)/2 = 0.6909830056. When publishing this article, I was very interested in the presence of the golden ratio, Phi: I already thought that this precise tuning certainly corresponded to an overall balance at the whole-genome scale. You will understand my surprise when I discovered that my article, and especially the value 0.6909830056, is quoted on a web site dedicated to the intimate structure of atoms, quarks and the Higgs boson [15] (please visit http://quarks-divided.over-blog.fr/pages/Pi_e_Phi_and_1381976-7937512.html). We find this value of 0.69098 in various other quark studies: CERN, and the University of Washington TeraScale project (see http://www.phys.washington.edu/groups/ptuw/FlavorWorkshop.html). But this astonishment turned into amazement when I discovered that "my" constant (3-Phi)/2 is related not only to Phi but also to Pi and "e" (Euler's number), all 3 universal constants. In addition, it would be connected to a key value of the geometric structure of the atom (see http://quarks-divided.over-blog.fr/pages/Pi_e_Phi_and_1381976-7937512.html). I then had the intuition that this constant perhaps hides an even greater universality, hence the need to revisit its role in DNA and the genome… The reader can verify for himself, as noted by Dr Gielen (see http://quarks-divided.over-blog.fr/pages/Pi_e_Phi_and_1381976-7937512.html), that if AB = Pi * e * Phi = 13.817580227... then R = 6.9087901135... Details: to synthesize this, a main radius in the theory of quarks is R = 6.9087901135 because it matches the radius of a sphere of volume = 1381.976... and a lot of other geometric properties (surface, etc.). One then finds that the ra-
Conclusions
To conclude, we emphasize the following points. The benefits resulting from a chromosome- and genome-scale codon analysis are as surprising as they are significant: -Naturally, the tuning of the whole human genome to the outstanding value of (3-Phi)/2 leads us to the question of a possible universality of this number, well beyond genomics.
We now conclude with the following seven other notable results: 1-Junk DNA and DNA strand atomic mass tuning; 2-The universal genetic code table as "lens" and "matrix"; 3-Numerical DNA constraints; 4-The 3 genomic numbers and species diversity; 5-Human genome and chromosome genomic number diversity; 6-Some 3-D speculations; 7-"Form and substance" in the human genome.
Junk DNA and DNA Strands Atomic Mass Tuning
Our paper points to a strong utility for the still unexplained junk DNA. We show that this role most likely contributes to the balance and fine tuning of the atomic masses of the huge double-stranded DNA molecule. And what if this perfect balance of codon populations were the ultimate goal, ensuring "the optimal balance of masses" of the DNA double helix within whole chromosomes and genomes? It is interesting to look now at the huge DNA molecule comprising the human genome in search of this balancing and tuning of atomic masses, perhaps even at the quantum level [17]... Finally, I state that the multiple equilibria we have explored in this article all serve one main goal: securing and maintaining, through these thousand tricks, each as beautiful as the other, THE BALANCE OF THE WEIGHTS simultaneously across the huge but fragile double-stranded DNA molecule, and across chromosomes and the whole genome... We find some evidence of this subtle balance in my book "Codex Biogenesis", Chapters 12 and 13; in particular, Table 19 below (from page 156 of [2]) reports the perfect balance that we calculated by comparing the atomic masses of the two DNA strands accumulated throughout the human genome.
Universal Genetic Code Table "Lens" and "Matrix"
Secondly, everyone knows that the main function of the universal genetic code table is the correspondence between the 64 codons of DNA and RNA, on the one hand, and the 20 possible amino acids, on the other hand. Yet, as shown in our 2010 paper [10] and then in Professor Petoukhov's research [1], we demonstrate throughout this article a second, equally important function: its role as a "filter" or "matrix" that determines the relative proportions of each of the 64 codons in the single-stranded DNA sequences of chromosomes or genomes. The GENETIC CODE, like "coherent sunlight", reveals the GENOMIC DNA's "holographic-like" CODING. A main conclusion is then the following: in the difficult process of finding possible structures in the single-stranded DNA sequences forming chromosomes and genomes, the universal genetic code table can play a central role as a "filter" or "matrix", revealing the HIDDEN CODES of DNA. So any method, particularly that of Prof. Petoukhov, will "reveal" fragments and "views"; DNA genomes remain this kind of unattainable, holographic-like "information hydra". By analogy, the projections onto the 64 codons of the genetic code table, through their numerical consistency, play the role of coherent laser light. It is therefore sufficient that the method of analysis be mathematically consistent, as is the case in Petoukhov's genomatrix method. Finally, the synthetic Table 20 below shows how we successively "sailed" between the various dimensions of exploration of this hyperspace formed by the population of these 947803881 codons of the whole single-stranded DNA human genome.
Numerical DNA Constraints
Third, our results demonstrate that the relative proportions of codons in DNA are "forced", constrained and controlled, one might even say "weighted" and "fine-tuned", by laws of a numerical, mathematical nature, which is radically innovative. Now, take a step back: the universal genetic code table acts as a kind of "filter" or "genomic lens" to explore and discover the many dimensions and "views" of the genome.
Here we limit ourselves to the study of the human genome. In this study, we first considered a kind of hyperdimensional space of 64 dimensions, given by the respective populations of each of the 64 codons constituting the entire human genome. This revealed relationships and a remarkable symmetry of codon mirrors, but also curious ratios, such as the partition into the same four (4) parts that characterize the famous figure of the "peace symbol". The key then appeared when the 64 codons were sorted in descending order of their populations (Figure 2). We then discovered, from the ranking of the 64 populations, the clustering into 8 ("eight") octaves of cumulated codon populations (Table 9). 21; 8-to be published [18].
The 3 Genomic Numbers Species Diversity
Fourth, the analysis of codon populations obeys three numbers characterizing each specific chromosome or genome: "the 3 genomic numbers". This law is universal; in light of what has been demonstrated here, we can state the following three laws. The First Law, the law of "genome computability": the codon composition of any genome is "computable". The Second Law, the "3 genomic numbers law": we discovered that three numbers determine the genomic relationships between the specific codon populations identified in the 64 positions of the universal genetic code array. This universal predictive model, running a cellular automaton, is correlated with the real codon populations, revealing correlations above 99% (and often 99.999%) for all genomes analyzed (we recall that the technological consensus error of DNA sequencers is of the same order: one TCAG nucleotide false or indeterminate in 10000). For example, for the entire human genome (24 chromosomes and 3 billion TACG bases), the 3 genomic numbers are 1, 2, and Phi = 1.618033…; these three numbers generate a square modeling matrix with 64 codon positions, with an accuracy of 0.9999695973 compared to the real codon populations! For the Arabidopsis thaliana plant genome (5 chromosomes and 120 million TACG bases), these genomic numbers become the triplet [1, 2Phi] ... and the accuracy of the model is 0.9999910311... To simplify, this means that the respective populations of the 64 codons of any genome are calculable from 3 numbers, 2 of which are specific to the genome. Finally, faced with the evidence of such a strong DETERMINISM of the HUMAN GENOME in particular, and of all genomes in general, we even went on to explore whether the codon populations of the human genome could be reduced to solving a system of linear or non-linear equations... or inequalities? Unfortunately, if the system can be put into equations, the equations, I realized, may be redundant, superfluous and overdetermined. This line of research, "a system of equations of the human genome", is very promising. It will be explored and deepened. I am sure of its potential; imagine: "The system of equations of the human genome"!
Human Genome and Chromosome's Genomic Numbers Diversity
Fifth, the methods and results presented here apply simultaneously to the scale of whole genomes and to each chromosome considered individually. This again is a universal character of these laws. In particular, in the case of the human genome, this dual level of strong mathematical constraints leads to remarkable genomic numbers across all 24 chromosomes as well as across the entire genome. This result is quite remarkable. Thus, the ratio (3-Phi)/2 now appears to us as unifying the UNIVERSAL billion codons of the single-stranded DNA genome world. But what happens to these ratios at the individual level of each of the 24 chromosomes?
In [8], we generalized the codon population analysis to all 24 human chromosomes. The result appears extremely DIVERSIFIED across these 24 chromosomes. We were able to establish an order structure, a hierarchy, among these 24 chromosomes. Curiously, the genomic ratios range, with great precision, from 1/Phi (chromosome 4) to 1/Phi + 1/Pi (chromosome 19); the amplitude of the variability is equal to 1/Pi. In a forthcoming paper [18], we explore the extraordinary properties of chromosome 4, which seems to be completely built around the Golden ratio, Phi... Figure 6 below illustrates the variability between the codon populations of the 24 chromosomes. In particular, chromosomes 4 and 19 constitute the two end terminals of the hierarchy, whose amplitude is 1/Pi (see details in [8]).
It also shows how the genomic numbers of chromosomes 4 and 19 adjust to new values. The remarkable fact about the human genome is that it tunes its codon populations simultaneously at the individual level of each of its 24 chromosomes, on the one hand, and at the overall scale of the whole genome, on the other.
From "the 3 genomic numbers" to "the MASTER GENOMIC NUMBER"... We computed each of the 3 genomic numbers for the two most extreme chromosomes in this classification: chromosomes 4 and 19. See the details in Table 21 below.
We then discover that the "master genomic number" and the "3 genomic numbers" are linked by the following formula: if G1, G2 and G3 are the 3 genomic numbers, and MG is the "master genomic number", then MG = (G1 + G2 - G3)/G2.
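As a quick illustrative check (the assignment of the triplet values to G1, G2, G3 below is my reading of the text, not taken from Table 21): plugging the whole-genome triplet [1, 2, Phi] into this formula gives (3 - Phi)/2, and the chromosome-4 triplet [1, Phi, Phi] gives 1/Phi, matching the values quoted above for the whole genome and for chromosome 4.

```python
# Illustrative check of MG = (G1 + G2 - G3) / G2 (triplet orderings assumed).
PHI = (1 + 5 ** 0.5) / 2

def master_genomic_number(g1, g2, g3):
    return (g1 + g2 - g3) / g2

print(master_genomic_number(1, 2, PHI))    # 0.690983... = (3 - Phi)/2, whole genome
print(master_genomic_number(1, PHI, PHI))  # 0.618033... = 1/Phi, chromosome 4
```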
Another example. One note: we observe that the master genomic number corresponds, at the scale of nucleotide populations, to the ratio (T + A)/(C + G). We can easily verify, for example on the genome of Arabidopsis, that this ratio is also 2/Phi^2. The 3 genomic numbers, instead, reflect a more subtle level of organization that expresses a balance between codon populations.
Some 3-D Speculations
Sixth, we thought about possible conceptualizations and materializations of the billion codons of the human genome unfolding in three-dimensional mathematical spaces determined by the values of the three genomic numbers. Three-dimensional considerations... I suggest thinking now about a possible three-dimensional representation of the space of codons. Of course, the 3 genomic numbers will guide us in this sketch. For example, an "egg" can be represented by the numbers [1, 1, Phi], which correspond to its three axes of symmetry: 1 and 1 for its circular cross-sections in two directions, and Phi, the golden number, for the proportions of the third section. We mention the analysis of Joost Gielen [15] on this topic (see http://quarks-divided.over-blog.fr/pages/PiePhi_3_Pifie_the_egg-8265900.html). Similarly, we could draw this hyperspace for the entire human genome, [1, 2, Phi], or for chromosome 4, [1, Phi, Phi]... Now, we return for a moment to the three genomic numbers governing the whole human genome, the triplet [1, 2, Phi], and I suggest considering the strong links that could connect each of these three genomic numbers with the different symmetries encountered at the beginning of this article: -for the first of the three dimensions, it seems realistic to associate the number "1" with the 32/32 twin mirror codon symmetry. I recall here the remarkable ratio balancing the odd/even populations over the 32 pairs of codons: 1.004090619. -for the second of the three dimensions, it seems natural to associate the number "2", obtained by forming the partition between the 32 most frequent codons and the 32 least frequent codons. Remember this ratio: -the accumulated population of the first 32 codons is 631,430,091.
The ratio between this population and that of the remaining 32 codons is 1.995835745, very nearly the number "two".
Finally, for the third of the three dimensions, it seems realistic to associate the number "Phi"... How?
We could note that the ratio 5/3, involving two successive Fibonacci numbers, is in fact obtained by computing the ratio between the last 3 quarters (the last 24 lines of twin codons in Table 3) and the first quarter of the codon population (the first 8 lines of twin codons in Table 3). This gives the following proportion: (2nd, 3rd, 4th quarters)/(1st quarter) = 1.668509717, which is very close to the ratio 5/3 = 1.66666666. But nearly the same proportion is also obtained by calculating the ratio of the first quarter (the first 8 rows of twin codons in Table 3) to the third quarter of the codon population (lines 17 to 24 of twin codons in Table 3): quarter 1/quarter 3 = 1.661511389... Strange, isn't it? But it is more natural to propose the following approach: in §3, we showed how to calculate each of the three genomic numbers. The ratio Phi = 1.618 was obtained by computing the ratio between the cumulated octaves 1 and 7 (Figure 4, with four columns by 16 lines). We will let each reader try to imagine the projection of this golden ratio onto the chessboard of 64 codon squares of the famous universal genetic code map... This is the third and last of the three dimensions of the hyperspace of codons, the 3 genomic footprints of the whole human genome's set of 3 genomic numbers!
"Form and Substance", "Information and
Redundancy" in the Human Genome We will demonstrate that-simultaneously with the dual- ity between genes and junk DNA-there is a second hidden level of structure sharing all the DNA of the human genome, dividing it into a second type of duality information/redundancy (background).Table 23 revisits the eight values of 8 octaves Table 9.
Here we have eight (8) numbers that can be reduced to three (3) major numbers; the remaining five (5) minor numbers are redundant.
We then had the intuition to calculate three populations corresponding to this trilogy of values.
At first, we used the minimum value for each of the three sets of redundant values. The result is: WHOLE, cumulating the 8 octaves: 947,803,867; FORM, cumulating the 3 minimums, one from each set of octaves: 363,101,865; SUBSTANCE, the 5 remaining octaves: 947803867 - 363101865 = 584,702,002. The ratio of substance over form is 584702002/363101865 = 1.610297435. This value is very close to Phi, the Golden ratio; the error is Phi - 1.610297435 = 0.007736554. Similarly, the ratio between the whole of the 8 octaves (947803867) and the form of the 3 significant octaves (363101865) is 947803867/363101865 = 2.610297435, which is very close to Phi^2 = 2.61803399. The reader may verify that performing the same calculation with the maximum values, or the mean values, in each of the three redundant sets leads to similar results.
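A quick check of these ratios, using only the whole-genome and "form" totals quoted above (the per-octave values themselves are in Table 9, not reproduced here):

```python
# Form/substance ratio check against Phi and Phi^2.
PHI = (1 + 5 ** 0.5) / 2

whole = 947_803_867      # cumulated population of the 8 octaves
form = 363_101_865       # cumulated population of the 3 "form" octaves
substance = whole - form # the 5 remaining "substance" octaves

print(substance / form)  # 1.6102..., close to Phi   = 1.6180...
print(whole / form)      # 2.6102..., close to Phi^2 = 2.6180...
```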
So we come to this fascinating result. It was thought for many years that the significant "form" encoding human life was reduced to the 2% to 5% of DNA encoding genes. Then the scientific world gradually discovered that the 98% of junk DNA (i.e., the "substance") had a function, particularly in the case of cancer cells [18,19]. We could then say that, of the entire DNA housed in the genome, the substance corresponds to the 98% of junk DNA, while the form consists of the 2% housed in genes. What we find here forces us to revisit the fundamental question of the redundancy of information in DNA in general and in the human genome specifically. Indeed, it appears that only THREE of the EIGHT octaves convey the meaningful information. The remaining FIVE octaves repeat, like a sort of harmonic waveform-like "echo" [20], the same modulated information.
(Table 23 note: in bold, the minimum values for each of the three clusters. The reader will note the central symmetry (octave 4 <==> octave 5), and even a near-perfect cyclic symmetry of the relative values of these eight octaves, octave 8 ==> octave 1 ==> octave 2, etc.; see the symbolic fractal-like folding plotted in Table 10.)
What we find there is another "partition" between the substance and the form, controlling the entire human genome: the billion codons forming our single-stranded DNA genome, when partitioned according to the eight octaves through the matrix of the universal genetic code table, bring out a harmonic structure which can be summarized as follows: the eight (8) octaves are divided into three (3) Form octaves and five (5) Substance (background) octaves.
The ratio of substance over form is adjusted to the Golden ratio Phi.
The ratio of the whole over the form is adjusted to the square of the Golden ratio, Phi² = 2.618. This is absolutely fascinating: as described in Douglas Hofstadter's major book "Gödel, Escher, Bach" [21], the great painter M. C. Escher, after Kurt Gödel in mathematics and Johann Sebastian Bach in music, had the genius to think about the paradoxical relationship of substance and form (Figure 1) in painting [22]. "Form and substance" here is the next step in our long research path, running for over 24 years between DNA, the Golden ratio, genomes, and Fibonacci numbers [23][24][25][26][27][28][29]. Phi, the Golden ratio: are the Human and the Nautilus very close? Although not a Golden spiral, the shape of the Nautilus shell exhibits multiple Golden ratio harmonics in its design [30]; does the human genome too?
Figure 2. Evidence of a gradation of codon pairs (odd/even) in the hierarchy frequency of 64 codons throughout the whole single-stranded DNA human genome (copyright [2]).
Figure 4. The universal genetic code starting matrix. These results from [2], Chapter 19, cumulate codon populations related by only one single-stranded DNA codon reading frame.
Figure 6. The variability of the genomic ratio for each human chromosome.
First, best thanks to 2008 Medicine Nobel prize Professor | 8,890 | 2013-09-30T00:00:00.000 | [
"Biology",
"Physics",
"Computer Science"
] |
Performance Evaluation of Machine Learning-Based Channel Equalization Techniques: New Trends and Challenges
Department of Computer Engineering, Bahria University Islamabad, Pakistan Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea Department of Electrical Engineering, Government College University, Lahore 54000, Pakistan Gofa Camp, Near Gofa Industrial College and German Adebabay, Nifas Silk-Lafto, 26649 Addis Ababa, Ethiopia Institute of Mathematical Sciences, Faculty of Science, University of Malaya, Kuala Lumpur 50603, Malaysia
Introduction
In wireless communication systems [1][2][3], performance may be severely degraded by wireless channel impairments. The transmitted signal passes through the communication channel and faces various impairments such as Intersymbol Interference (ISI), Doppler shift, and fading effects. All of these degrade and limit the data throughput during communication [4]. To achieve higher data rates, it is mandatory to mitigate the effects of channel-induced impairments. This requires an adaptive filter for equalization to nullify the effects of the wireless channel and recover the originally transmitted data. Recently, the use of Machine Learning (ML) [5,6] techniques, especially Artificial Neural Network- (ANN-) based methods, has gained interest due to their remarkable success in the fields of Computer Vision (CV), speech recognition, and Natural Language Processing (NLP). These techniques, although invented in the mid-20th century, were not very popular due to the lack of the required computational power. The availability of high-speed computational resources and the success of ML in various other fields have motivated its application to the development of robust communication systems [7]. Many researchers have proposed the use of ML for designing communication systems and have demonstrated improved results in terms of Bit Error Rate (BER). However, some concerns and questions still require answers, such as the following: (i) What will be the maximum performance gain in terms of BER from using NNs and their variants, such as the Multilayer Perceptron (MLP), Radial Basis Functions (RBFs), the Functional Link Artificial Neural Network (FLANN), Support Vector Machines (SVM), and Long Short-Term Memory (LSTM)? (ii) Is it possible to train an NN to estimate a wireless channel in real time, as required by modern-day channel equalizers? Typically, an equalizer is required to train its taps in less than a few microseconds; what possible methods can be used to achieve this? During data transmission, the signal experiences various types of impairments, such as path loss (which attenuates the signal), AWGN, and multipath effects caused by reflections of the electromagnetic waves from various obstacles. The input digital data is fed into the source encoder, which effectively transforms the bitstream into a compressed form by using Huffman encoding. The input can be an audio source, text, binary, or any other sensor input, which may require A/D conversion before feeding the source encoder block [8].
The digital data at this stage can also be secured using encryption algorithms.
The resulting data sequence at the output of the source encoder is passed to the channel encoder which adds redundancy in a controlled manner, to help the receiver to detect and correct the channel-induced errors. This step should make the data robust against harsh channel conditions. In the next step, the output of a channel encoder is given to a modulator that applies digital modulation methods such as BPSK, QPSK, or some variants of FSK. The output of the modulator is fed to the frequency upconverter which translates the baseband signals to passband frequency, and finally, the signal is amplified to the appropriate levels and then transmitted through the antenna. The motivation of this research work is as follows.
(i) To identify the performance metrics for the existing channel estimation and equalization techniques
(ii) To identify an improved channel equalization technique for the selected wireless channel
To critically assess the performance of various channel equalization techniques by performing simulations, the mathematical formulation presented in [8] for the communication system is used, where s(t) is considered to be the transmitted signal. Figure 1 shows the transmitter and receiver blocks. The transmitted signal is represented mathematically as
s(t) = Re{x(t) e^(j ω_c t)},   (1)
where x(t) is the baseband signal and ω_c = 2πf_c is the center frequency of the passband signal. The received signal is given as
r(t) = Σ_{m=0}^{N−1} γ_m(t) e^(j ω_c τ_m) x(t − τ_m) + w(t),   (2)
where γ_m(t) represents the complex amplitude of the channel, τ_m is the delay of the m-th multipath, N represents the total number of multipaths, and w(t) represents the AWGN. The resulting received signal can be written as
r(t) = h(τ, t) * x(t) + w(t),   (3)
where h(τ, t) = Σ_{m=0}^{N−1} γ_m(t) e^(j ω_c τ_m) δ(τ − τ_m) is the impulse response of the time-varying channel. It is the main goal of wireless communication systems to estimate h(τ, t), the channel impulse response, to the desired level of performance.
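As a small illustration of this tapped-delay-line channel model, the following Python sketch passes complex baseband symbols through a toy multipath channel and adds AWGN. The gains, delays, and SNR are arbitrary example values, not the channel configuration used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def multipath_channel(x, gains, delays, snr_db):
    """Apply a tapped-delay-line channel y(n) = sum_m gains[m] * x(n - delays[m]) + w(n)."""
    y = np.zeros(len(x), dtype=complex)
    for g, d in zip(gains, delays):
        y[d:] += g * x[:len(x) - d]
    # Add complex AWGN at the requested SNR
    p_sig = np.mean(np.abs(y) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10)
    w = np.sqrt(p_noise / 2) * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))
    return y + w

# Example: QPSK-like symbols through a two-tap channel at 15 dB SNR
x = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
r = multipath_channel(x, gains=[1.0, 0.5 * np.exp(1j * 0.3)], delays=[0, 1], snr_db=15)
```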
1.1. Performance Issues in Wireless Communications. One of the primary goals while designing a communication system is to achieve performance as close as possible to Shannon's capacity [9], given by
C = B log2(1 + γ),   (4)
where C is the capacity of the wireless channel, B represents the bandwidth, and γ represents the Signal to Noise Ratio (SNR). This theorem gives the fundamental bound on the achievable capacity of the wireless channel. All communication systems tend towards Shannon's capacity; as of today, this goal has not been fully achieved, for many reasons. Assuming the availability of the required bandwidth, these objectives must be served to achieve the desired performance in wireless communication. SNR and link budget can be improved using high-gain antennas, more transmit power, and better antennas; however, the effects caused by the channel require more sophisticated handling and must be mitigated using channel equalization [10]. In [10], an estimation technique was presented in which the transmitter sends a known sequence of data symbols, called pilot symbols, to the receiver. The receiver estimates the channel with the help of the received pilots using mathematical techniques.
In [11], the receiver has no information about the channel input signal. This technique uses the data symbols for channel estimation by employing precoding of the symbols at the transmitter. The receiver knows the parameters of the precoding used at the transmitter and then uses correlation-based methods to estimate the channel information [12][13][14]. In [15], the authors used both the pilot symbols and the demodulated symbols for channel estimation. In the absence of bit errors, the demodulated symbols can be used for estimation of the channel impairments and effectively act as pilot symbols. This technique proves to be more efficient than purely pilot-based channel estimation because it reduces the bandwidth overhead by reducing the number of pilots required.
Channel Equalization.
Channel equalization and channel estimation are interdependent. The inverse of the channel estimate can be used for channel equalization. The performance of the equalizer is proportional to the accuracy of the channel estimation.
The equalization mechanism can be divided into two modes: a training mode and a decision-directed mode. In the first mode, the equalizer is trained by sending a training sequence, which is known a priori to the receiver; the equalizer weights are learned using this training sequence. In the second mode, the equalizer operates in a decision-directed manner, using its own symbol decisions to keep tracking the channel. Various types of equalizers are used in digital communication receivers. Figure 2 depicts the classification of the equalizers [16].
Equalization is generally divided into two categories: linear equalizers and nonlinear equalizers. Linear equalizers employ only a feedforward path and do not use the output of the equalizer in the equalization process. Nonlinear equalizers, on the other hand, use the output of the equalizer in the determination of the future samples. Both linear and nonlinear equalizers employ adaptive algorithms such as LMS, NLMS, RLS, and Kalman filtering for the adaptation of the equalizer weights. Amongst the nonlinear equalizers is the Maximum Likelihood Sequence Estimator (MLSE). This type of equalizer does not use a filter for equalizing the channel but instead uses the Viterbi algorithm to decode the sequence and chooses the sequence with maximum probability as the output.
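As a concrete illustration of the adaptive weight update mentioned above, the following Python sketch implements a complex LMS-trained FIR (linear) equalizer operating in training mode; the tap count and step size are arbitrary, and in decision-directed mode the desired symbols would be replaced by the equalizer's own decisions.

```python
import numpy as np

def lms_equalizer(received, desired, n_taps=11, mu=0.01):
    """Linear FIR equalizer trained with the complex LMS rule.
    received: channel output samples; desired: known training symbols (same length)."""
    w = np.zeros(n_taps, dtype=complex)
    out = np.zeros(len(received), dtype=complex)
    for n in range(n_taps - 1, len(received)):
        x = received[n - n_taps + 1:n + 1][::-1]   # tapped delay line, newest sample first
        y = np.vdot(w, x)                          # equalizer output y = w^H x
        out[n] = y
        e = desired[n] - y                         # error against the training symbol
        w += mu * x * np.conj(e)                   # stochastic gradient (LMS) update
    return w, out
```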
Machine Learning-Based Channel Equalization Technique Results
ML is a subfield of computer science that focuses on the development of algorithms that learn to solve complex problems. Unlike the traditional approach, it does not use predefined models or a set of equations to solve the given problem; instead, it learns to solve the problem. An ANN consists of human brain-like neurons termed "perceptrons." A perceptron is a simple mathematical model (function) that maps a set of inputs to a set of outputs and performs three basic operations: multiplication, summation, and activation. Each input value is multiplied by its corresponding weight.
The weighted inputs are then summed and passed through the activation function, which determines the output of the neuron with respect to its input. Commonly used activation functions are the threshold, linear, sigmoid, and ReLU functions. Mathematically, a perceptron can be defined as
y = φ(w^T x + b),
where w is a weight vector, b is a bias, w^T x is the dot product of w and x, and φ(·) is the activation function. The sigmoid and ReLU functions are defined as
φ(z) = 1/(1 + e^(−z))   (sigmoid),
φ(z) = max(0, z)   (ReLU).
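A minimal Python sketch of a single perceptron with these two activations (the input, weight, and bias values are arbitrary examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(0.0, z)

def perceptron(x, w, b, activation=sigmoid):
    """y = phi(w^T x + b): weighted sum of the inputs passed through an activation."""
    return activation(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 0.3])     # inputs
w = np.array([0.8, 0.1, -0.4])     # weights
print(perceptron(x, w, b=0.2))                      # sigmoid output
print(perceptron(x, w, b=0.2, activation=relu))     # ReLU output
```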
Related Work
NNs are capable of processing nonlinear data and can produce complex decision regions. A new framework based on feature selection and neural network techniques has been proposed for identifying focal and nonfocal electroencephalogram signals in the TQWT domain [17]. NNs can therefore be employed for equalization purposes to overcome the difficulties associated with channel nonlinearities [18][19][20]. The performance of NN-based equalizers has been reported as superior to other conventional adaptive equalizers. In the recent past, the use of NNs has gained popularity in the design of software-defined radios, where DNN, CNN, and RNN have been applied to classical radio operations [21][22][23][24]. In [25], deep NNs have been used for the estimation of doubly selective channels, which experience variations both in time and frequency. The deep learning-based algorithm is trained in three steps: a pretraining stage, a training stage, and a testing stage. During the first two stages, the model is developed offline using training data; during the testing stage, the channel is estimated and equalized. The results show improved BER performance as compared to the Linear Minimum Mean Square Error (LMMSE) estimator. In [26], ML and NN have been used in a Frequency Division Duplexing (FDD) system over a doubly selective channel, and the results showed improvements in terms of MSE in the prediction of the channel.
In [27], NN and DL methods have been used to predict the behavior of the Rayleigh channel, and it has been reported through simulations that the MSE performance improved compared with traditional algorithms. In [22,28], DL has been thoroughly investigated and a review of the various ML-based techniques for wireless communication provided. It has been shown that traditional theories do not meet the higher data rate requirements of communication and limit efficiency due to complex, poorly defined channel requirements, fast processing, and a limited block structure. On the other hand, AI-based communication systems face some challenges that need to be addressed, including the availability of a large amount of data and how easily such systems can be integrated into classical infrastructure [29]. Similarly, ML has been applied to the physical layer for modulation recognition and classification [26,[30][31][32][33].
An MLP is a feedforward NN that consists of an input layer, a hidden layer, and an output layer. It has nonlinear decision-making capabilities. The training of MLP is done through the backpropagation algorithm [34]. The MLP is the first neural network used for channel equalization [19,20,[35][36][37][38]. Gibson et al. [20] introduced an MLP-based nonlinear equalizer structure and demonstrated its superior performance over the linear equalizer (LMS). The major drawback of the MLP network is its slow convergence [39]. This is due to the backpropagation algorithm which operates based on first-order information. A genetic algorithm [40] can be used to solve this problem. The convergence can be improved by using the second-order data like the Hessian matrix, which is defined as the second-order partial derivatives of the error performance. In [41], the authors proposed an MLP-based DF equalizer with a lattice filter to overcome the convergence problem to improve the performance of MLP. However, this improvement increased the complexity of the MLP structure.
The RBFNN is a three-layer network that comprises an input layer, a nonlinear hidden layer, and a linear output layer. The input layer contains the source symbols. In the hidden layer, the input space is transformed into a high-dimensional space by using nonlinear basis functions. The output layer linearly combines the output of the previous layers. RBFNN provides an appealing alternative to MLP for channel equalization.
Many techniques have been developed to solve the equalization problem using RBF [42][43][44]. In 1991 [19], the authors used RBFNN for equalization. Similarly, an RBF-based equalizer has been reported which showed satisfactory performance [45,46]. Another work has demonstrated the use of RBFNN for equalization and found an improvement in BER [47]. The performance of RBFNN is compared with the Maximum Likelihood Sequence Estimator (MLSE) over the Rayleigh fading channel [45,48,49]. Simulations have confirmed that RBFNN is a reasonable choice with low computational complexity. The authors in [50,51] proposed a complex RBF (CRBF) network, and improved performance is observed. The drawback of RBFNN is that it is not suitable for hardware implementation. The network needs a large number of hidden nodes to achieve the desired performance.
In recent years, FLANN has become very popular [52]. It is a single-layer NN that can form complex decision boundaries. FLANN provides lower computational complexity and greater convergence speed than other traditional NNs. From the perspective of hardware implementation, FLANN has a simple design, low computational complexity, and high computation performance [53,54].
The input dimension is expanded by using nonlinear functions, which may lead to better nonlinear approximation. The expansion is done using three commonly used function families: trigonometric, Chebyshev, and Legendre expansions. A traditional FLANN uses trigonometric functions, whereas the other two expansions are based on Legendre [55,56] and Chebyshev [57] polynomials. Ch-FLANN is another computationally efficient network; it has many applications in functional approximation [58], nonlinear dynamic system identification [59,60], and nonlinear channel equalization [61]. In these networks, the expansion is performed using Chebyshev polynomials. RNN is a popular DL technique that was first introduced for processing sequential data [24] and has gained a lot of attention in the recent past. RNNs have been proven better than traditional signal processing methods in modeling and predicting nonlinear time series [62] in a wide variety of applications ranging from speech processing to adaptive channel equalization [63][64][65][66][67].
Unlike an ANN, which does not have memory and cannot deal with temporal data, an RNN has feedback loops, which makes it attractive for the equalization of nonlinear channels. This means data can be fed back to the same layers. It has been demonstrated through simulations that a reasonably sized RNN can model the inverse of the channel. RNNs are known to outperform FLANN, MLP, and RBF [68,69]. In [70], the authors reported that equalizers based on CNN and RNN not only reduce the channel's fading effects but also increase the overall coding gain by more than 1.5 dB.
RNNs suffer from the exploding and vanishing gradient problem [71], which arises when there are long dependencies in a sequence. To solve this problem, LSTM was proposed [72]. LSTM is slightly different from RNN: it has some special units, called memory cells, in addition to standard units. These units can retain information for a long period, which means that LSTM can detect patterns even in long sequences. Sequence problems can be efficiently solved by LSTM, including the channel equalization problem. In this case, future samples can be predicted by taking previous symbols into account, which means that variations in a channel can be easily tracked. We can specify the number of samples that the LSTM holds for the prediction of future sequences; if it is selected according to the delay spread of the channel, more accurate results may be observed.
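As a rough sketch of how such an LSTM equalizer can be set up (using PyTorch rather than the MATLAB toolchain used later in this paper; the window length, hidden size, and real/imaginary feature layout are illustrative assumptions):

```python
import torch
import torch.nn as nn

class LSTMEqualizer(nn.Module):
    """Maps a window of received samples (real/imag as 2 features) to a QPSK symbol class."""
    def __init__(self, hidden_size=16, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, window, 2)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # decision taken from the last time step

model = LSTMEqualizer()
dummy = torch.randn(32, 8, 2)             # 32 windows of 8 received samples each
logits = model(dummy)                     # (32, 4) class scores for the 4 QPSK symbols
```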
SVM lies in the category of supervised learning. Originally developed for binary classification, it has since been extended to regression and multiclass classification problems [73][74][75]. It has the potential to generalize well in classification problems by maximizing the margin. The trained classifier contains the support vectors on the margin boundary, which summarize the information required to separate the data. It uses a parametric learning algorithm, in which a model has fixed learnable parameters that are adapted during the training process. Once the model is trained, these parameters are used exclusively for testing while discarding all the training examples.
This makes the SVM computationally efficient. On the other hand, NNs are nonparametric in the sense that the number of parameters increases with the number of layers. An NN introduces nonlinearity by using a nonlinear activation function, whereas an SVM uses kernel methods that implicitly transform the input space into higher dimensions; the RBF kernel is the most commonly used. The SVM has been suggested to address a number of digital communication issues due to its nonlinear processing capability. A DFE based on SVM has been proposed, and it is observed that the performance of this equalizer is superior to the LMMSE DFE [76]. Similar work is done in [77].
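A minimal sketch of SVM-based equalization treated as symbol classification, using scikit-learn's SVC with an RBF kernel; the toy channel, window length, and hyperparameters are illustrative assumptions and not the configuration reported later in Table 4.

```python
import numpy as np
from sklearn.svm import SVC

# Treat equalization as classification: map windows of received samples to transmitted symbols.
rng = np.random.default_rng(1)
n, window = 2000, 5
symbols = rng.integers(0, 4, n)                       # QPSK symbol indices
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))   # unit-energy QPSK constellation
rx = np.convolve(tx, [1.0, 0.4 + 0.3j], mode="same")  # toy dispersive channel
rx += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Features: real/imag parts of a sliding window around each symbol
X = np.array([np.r_[rx[i:i + window].real, rx[i:i + window].imag] for i in range(n - window)])
y = symbols[window // 2:n - window + window // 2]

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X[:1500], y[:1500])
print("accuracy:", clf.score(X[1500:], y[1500:]))
```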
This section provided a comprehensive overview of channel estimation and equalization techniques, and different neural network structures were discussed in the context of channel equalization. The MLP network is simple to implement, but training takes a long time. The main disadvantage of the FLANN structure is its computational and time complexity, which gradually increases as the number of input nodes increases. The RBF-based neural network equalizer is an interesting alternative and has been successfully used for blind equalization. LSTM equalizers are superior to feedforward NNs, including MLPs, RBFs, and FLANNs.
Performance Comparison of NN-Based Channel Equalization Schemes
The channel equalization methods of the respective systems are highlighted and a critical review of the methods is provided. All the methods are found to perform well in Rayleigh communication channels. However, there is a need to compare the schemes and highlight the best possible NN scheme for channel equalization. To the best of the authors' knowledge, such a comparison has not been carried out in the literature. In this work, the selected NNs are used for channel equalization and their performance is compared.
Implementation of NN-Based Equalizers. ML techniques
are setting a path to replace the conventional communication techniques, and the combination of these two fields has led to a lot of successful work. NNs are capable of processing nonlinear data and can produce complex decision regions. Therefore, NNs can be employed for equalization to overcome the difficulties associated with channel nonlinearities [18][19][20]. The simulation setup is depicted in Figure 3. A typical NN-based channel equalizer is depicted in Figure 4. The transmitter first transmits the training symbols which are known to both the receiver and transmitter and then transmits the actual data. The equalizer uses the received training symbols to learn the equalizer weights. The optimization criterion is to minimize the MSE. Figure 4 shows the NN-based equalizer.
Data Generation and QPSK Modulation.
Data is randomly generated using the MATLAB rand function, which generates uniformly distributed data between 0 and 1. The data is QPSK-modulated and then passed through the channel filter. QPSK uses two signals, I and Q, where I is the in-phase signal and Q is the quadrature signal; the two signals are at a 90° phase difference. This modulation is popular due to its simple design and efficient hardware realization.
The following steps are performed to produce a QPSK-modulated signal.
(i) The incoming digital data is converted into two streams: one stream contains the odd bits, and the other takes the even bits from the original stream.
(ii) The streams are then pulse-shaped using root-raised cosine (RRC) pulses. The duration of the pulse determines the data rate of the transmitter. In this phase, the incoming data is first upsampled by a factor "N", which corresponds to the symbol duration, and then convolved with the RRC pulse. The resulting signal is termed the baseband signal.
(iii) The resulting I and Q streams are then multiplied with the I/Q carrier signals; in other words, these streams amplitude-modulate the I/Q signals.
(iv) Finally, the two modulated signals are summed up to form a QPSK-modulated signal. In QPSK, two bits are used in one symbol.
Mathematically, QPSK modulation can be derived as follows.
Let m_k represent the message signal, where m_k = x_i + j y_i is the complex representation of the i-th message symbol; it represents a group of bits, one bit as the real part and one as the imaginary part. The message signal is QPSK-modulated as
s_i(t) = x_i cos(ω_c t) − y_i sin(ω_c t),
where x_i = ±0.7071A and y_i = ±0.7071A are the amplitudes of the pulses. Substituting the values of x_i and y_i and using trigonometric relations, the expression simplifies so that the four reference constellation points of QPSK modulation lie at the phases π/4, 3π/4, 5π/4, and 7π/4. The received signal is demodulated as follows. The received QPSK signal is multiplied with local oscillators that are at a 90° phase difference (the I and Q branches). The resulting signals are low-pass filtered using the RRC filters, which recovers the baseband pulses; these are further downsampled by N, and the transmitted symbols are recovered.
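A compact baseband-level sketch of the QPSK mapping and hard-decision demapping described above (pulse shaping and carrier up/downconversion are omitted; the Gray mapping choice is an assumption, while the 0.7071 amplitude follows the text):

```python
import numpy as np

def qpsk_modulate(bits, A=1.0):
    """Map bit pairs to QPSK symbols x_i + j*y_i with x_i, y_i = ±0.7071*A."""
    bits = bits.reshape(-1, 2)
    i = (1 - 2 * bits[:, 0]) * 0.7071 * A        # in-phase component
    q = (1 - 2 * bits[:, 1]) * 0.7071 * A        # quadrature component
    return i + 1j * q

def qpsk_demodulate(symbols):
    """Hard decisions: the signs of I and Q recover the two bits per symbol."""
    b0 = (symbols.real < 0).astype(int)
    b1 = (symbols.imag < 0).astype(int)
    return np.column_stack([b0, b1]).reshape(-1)

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 1000)
s = qpsk_modulate(bits)
assert np.array_equal(qpsk_demodulate(s), bits)   # noiseless round trip
```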
The received signal can be expressed mathematically as
r(t) = h(t) * s(t) + n(t),   (17)
which shows that the received signal r(t) is the convolution of the channel impulse response h(t) with the transmitted signal s(t), with noise n(t) added.
Wireless Channel
Model. The wireless channel model describes the underlying communication medium, and the performance of the communication system depends on the condition of the channel. Rayleigh and Rician fading channel models are widely used to simulate the channel in a realistic wireless environment. The Rayleigh fading channel [78][79][80] is a conceptual model that assumes there are several objects in the environment; due to these objects, the transmitted signal may be dispersed and replicated, and it is also presumed that there is no direct path between the transmitter and the receiver. On the other hand, the Rician channel [78,79,81] assumes that there is a direct path between the transmitter and the receiver; the received signal contains both the direct and the scattered (or reflected) paths, with the scattered (or reflected) paths being weaker than the direct path.
We have considered the complex-valued multipath channel mentioned in [51], whose coefficients are defined as c = [1 − 0.3434j, 0.5 + 0.2912j]. The NN-based equalizers are configured as depicted in Figure 4, and results are obtained. These configurations and the respective results are discussed in the sequel. The primary performance criterion used is the BER; loss function analysis and the computational complexity are also evaluated. The detailed results are compared and discussed in the later sections. The flowchart of the NN-based equalizer is depicted in Figure 5.
MLP-Based
Equalizer. MLP is a simple three-layer network that maps the input to the output. MLP is designed using the "nntraintool" of MATLAB. It comprises an input layer, a hidden layer, and an output layer. The input layer contains two vectors. One vector is the real part of the input signal (X), and another is the complex part of the signal. The output layer generates four vectors "Y0" to Y3. The MLP is trained with these parameters as shown in Table 1.
RBFNN.
RBFNN is a three-layer network that comprises an input layer, a nonlinear hidden layer, and a linear output layer. Radial functions are used as the activation function; they are special functions whose output increases or decreases monotonically with distance from a center. The K-means algorithm is used to find the centers. So first, the centers of the clusters are determined in an unsupervised manner, and then classification is performed to recover the signal. We have implemented this work [51] and observed the improved BER. The simulation parameters of RBFNN are shown in Table 2.
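A minimal sketch of the two-stage construction described above (K-means for the centers, then a linear output layer); the Gaussian width, number of centers, and least-squares output training are illustrative assumptions rather than the Table 2 settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centers, sigma):
    """Gaussian radial basis responses for every (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def train_rbf_equalizer(X, targets, n_centers=16, sigma=1.0):
    """Unsupervised center placement with K-means, then a linear least-squares output layer."""
    centers = KMeans(n_clusters=n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = rbf_design_matrix(X, centers, sigma)
    W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)   # linear output weights
    return centers, W

def rbf_predict(X, centers, W, sigma=1.0):
    return rbf_design_matrix(X, centers, sigma) @ W
```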
FLANN.
FLANN is a single-layer neural network. The main concept of FLANN is to convert the input data to a higher dimension by using different functional expansions. Due to the absence of hidden layers, these networks have low computational complexity with very few adjustable parameters, which brings the following advantages:
(i) Faster training time
(ii) A simple design that can be implemented in hardware
Using the work in [54,55], we have implemented the FLANN-based equalizer. The block diagram of the equalizer is shown in Figure 6.
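To make the functional-expansion idea concrete, the following sketch implements the trigonometric and Chebyshev expansions that feed the single adaptive layer of a FLANN or Ch-FLANN; the expansion orders are arbitrary examples.

```python
import numpy as np

def trig_expansion(x, order=2):
    """Trigonometric functional expansion used in a traditional FLANN:
    [x, sin(pi x), cos(pi x), sin(2 pi x), cos(2 pi x), ...]."""
    feats = [x]
    for k in range(1, order + 1):
        feats += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]
    return np.concatenate(feats, axis=-1)

def chebyshev_expansion(x, order=3):
    """Chebyshev polynomial expansion (Ch-FLANN): T0..T_order evaluated element-wise."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2 * x * T[-1] - T[-2])           # recurrence T_n = 2x T_{n-1} - T_{n-2}
    return np.concatenate(T, axis=-1)

# The expanded features feed a single adaptive linear layer (e.g., trained with LMS).
x = np.array([[0.2], [-0.7]])
print(trig_expansion(x).shape, chebyshev_expansion(x).shape)
```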
The simulation parameters of FLANN, Le-FLANN, and Ch-FLANN are given in Table 3.
SVM-Based Channel
Equalizer. SVM is a supervised algorithm used for classification problems. Channel estimation can be treated as a classification problem, so SVM can be used to deal with nonlinear channel effects. In this work, we have implemented a basic SVM model for equalization. The simulation parameters of the SVM are shown in Table 4. The generalization error computed during simulations is 0.00001, which indicates the best performance.
5.1.6. LSTM Channel Equalizers. LSTM is a popular RNN-based DL technique. It differs from a feedforward NN, which does not have memory and cannot deal with temporal data. The simulation parameters are given in Table 5. The training model of LSTM is illustrated in Figure 7.
Simulation
Analysis. All the simulations are executed and compared in this section. Figure 8 depicts the BER comparison of all the simulated NNs. Generally, the trend verifies already established theory: as the SNR increases, the BER performance improves. The performance of FLANN is slightly worse than the rest of the schemes due to its single-layer architecture, and the performance of the traditional LMS algorithm is the worst. In [51], similar results are observed. All the other ML-based schemes have the same BER performance.
In Figure 9, a zoomed version of the BER graph is depicted. The LSTM has a slightly higher BER than the SVM- and RBF-based ML methods. The performance of FLANN is almost 4 dB poorer than the rest, and the performance of LSTM is about 0.7 dB poorer than that of RBF, SVM, and MLP. The loss function is an important parameter of the optimization and is therefore discussed: the lower the value of the loss function, the better the performance. Table 6 lists the values of the loss function for all the algorithms used in this text and shows that all the NNs are well trained.
The minimum value of the loss function achieved is in the case of the SVM, where the value is 0.00001. The BER results depicted in Figure 9 are very much in line with these results. The loss function values of RBF, FLANN, and LSTM can be further reduced by using more training data and better optimization algorithms.
Computational Complexity.
A computational complexity analysis of the algorithms is presented, indicating the number of computational resources required by each algorithm. Table 7 presents the computational complexity of the various algorithms: the number of additions, multiplications, and other computational resources such as exponentiations, powers, and trigonometric functions is listed [82]. This analysis is useful for hardware implementations and for estimating the computational requirements of embedded systems.
The computational complexity analysis of the mentioned algorithms is verified by timing the MATLAB® implementations. The execution time of all the algorithms used in this work is measured using the MATLAB® built-in function "timeit". The number of iterations performed for each algorithm is 10^6. The machine used for the computation is a DELL® 7920 running MATLAB® 2019b, with an Intel® Xeon® Silver 4116 CPU running at 2.1 GHz. The times are listed in Table 8 and endorse the computational complexity given in Table 7. The minimum computational time achieved is for the SVM; SVM is running a KNN algorithm that is computationally efficient, and its BER results are also amongst the best. RBF and MLP show good performance, but their computational time is higher.
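An analogous timing measurement can be scripted in Python with the standard timeit module; the snippet below times a toy FIR operation as a stand-in and is not the MATLAB timeit setup used for Table 8.

```python
import timeit
import numpy as np

def time_equalizer(fn, *args, repeats=20):
    """Average wall-clock time of one call to fn(*args), analogous to MATLAB's timeit."""
    total = timeit.timeit(lambda: fn(*args), number=repeats)
    return total / repeats

# Example: time a toy 11-tap filtering pass over 10^6 complex samples
rx = np.random.randn(10**6) + 1j * np.random.randn(10**6)
taps = np.ones(11) / 11
print(time_equalizer(np.convolve, rx, taps, "same"))
```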
Conclusions
The communication system is an ever-evolving, well-established field of research and has shown major advances in signal estimation, equalization, and other fields such as channel coding. Channel equalization is very critical for achieving high data rates and improved spectral efficiency and has been achieved using the traditional theory of least squares estimation and minimum mean squares estimation techniques such as LMS, NLMS, RLS, and Kalman filtering. The use of NN- and SVM-based channel equalization methods is currently under research and is proving to perform better than the conventional methods mentioned above.
In this article, we have addressed the application of ML-based methods, comprising neural network and SVM techniques, for channel equalization. It was revealed that the methods used in traditional communication systems are difficult to understand and implement as compared to the ANN-based methods. Channel equalization, when treated as a classification problem using ANN techniques, resulted in simpler receiver structures, especially in the case of OFDM. The results achieved are also found to be improved in terms of BER. Another advantage of ANN-based methods is that they offer a relatively simple way to understand communication systems, so that computer scientists who are not well versed in communication system theory can also attempt to develop better communication systems by using their computer science and software development skills.
This work can be extended in many ways; the following are possible emerging research areas. Computational complexity analysis and computing platform optimization of the algorithms are mandatory for efficient implementation on hardware platforms such as ARM processors, FPGAs, and GPUs; the preliminary computational complexity analysis worked out here can be extended when these algorithms are implemented on FPGAs or optimized for microcontrollers and DSP processors. Two-dimensional treatment of the received signal, similar to time-frequency analysis where several frames are gathered and then processed as a block, would enable the use of advanced neural network methods such as CNN, DNN, and RNN; existing frameworks such as AlexNet may also be used. Currently, the performance evaluation is performed using QPSK modulation; performance evaluation using higher-order constellations such as 16QAM, 64QAM, and 8PSK may be carried out in the future. Validation by developing hardware may also be carried out.
Data Availability
No data were used to support this study. | 7,196 | 2022-01-06T00:00:00.000 | [
"Computer Science"
] |
Personalized local SAR prediction for parallel transmit neuroimaging at 7T from a single T1‐weighted dataset
Purpose Parallel RF transmission (PTx) is one of the key technologies enabling high quality imaging at ultra‐high fields (≥7T). Compliance with regulatory limits on the local specific absorption rate (SAR) typically involves over‐conservative safety margins to account for intersubject variability, which negatively affect the utilization of ultra‐high field MR. In this work, we present a method to generate a subject‐specific body model from a single T1‐weighted dataset for personalized local SAR prediction in PTx neuroimaging at 7T. Methods Multi‐contrast data were acquired at 7T (N = 10) to establish ground truth segmentations in eight tissue types. A 2.5D convolutional neural network was trained using the T1‐weighted data as input in a leave‐one‐out cross‐validation study. The segmentation accuracy was evaluated through local SAR simulations in a quadrature birdcage as well as a PTx coil model. Results The network‐generated segmentations reached Dice coefficients of 86.7% ± 6.7% (mean ± SD) and showed to successfully address the severe intensity bias and contrast variations typical to 7T. Errors in peak local SAR obtained were below 3.0% in the quadrature birdcage. Results obtained in the PTx configuration indicated that a safety margin of 6.3% ensures conservative local SAR estimates in 95% of the random RF shims, compared to an average overestimation of 34% in the generic “one‐size‐fits‐all” approach. Conclusion A subject‐specific body model can be automatically generated from a single T1‐weighted dataset by means of deep learning, providing the necessary inputs for accurate and personalized local SAR predictions in PTx neuroimaging at 7T.
transmit channels, thereby enabling optimization of the spin excitation process. This flexibility comes at the cost, however, of an increased range of potential local RF power absorption levels in the body, for which in Europe regulatory limits are defined by the IEC in terms of the peak 10 g-averaged specific absorption rate (SAR).
Although global SAR metrics such as head-averaged SAR can be adequately monitored via the RF input power, as is commonly done in single-channel (i.e., non-PTx) systems, local SAR cannot be measured and is generally a complex function of both system characteristics as well as the subject-specific anatomy. 5 Depending on the excitation pattern of the RF transmit array, local SAR can vary by as much as 600% for a given RF input power. 6 This aspect can be accounted for in the local SAR model by employing the so-called Q-matrix formalism, 7 often compressed to a smaller set of virtual observation points with a pre-defined safety factor to account for the compression loss. 8 Additionally, local SAR is known to vary by up to 70% depending on the anatomy of the subject, including aspects such as tissue distribution as well as positioning within the RF coil. [9][10][11] This intersubject variability is typically estimated offline, by evaluating multiple generic body models, and accompanied with conservative safety margins to ensure compliance in all subjects. This "one-size-fits-all" approach inevitably compromises the RF performance and limits the utilization of PTx at ultra-high fields, as well as limits our insight into the actual RF exposure levels imposed by ultra-high field MRI systems.
Several groups have previously demonstrated subject-specific approaches to SAR prediction by establishing a subject-specific anatomical model from MR data which is then evaluated in an electromagnetic solver. 12,13 This builds on the principle that local SAR depends predominantly on the geometry of electrically distinct tissues, rather than their exact dielectric properties. 14,15 To address the time-consuming process of image segmentation, techniques based on semi-automatic segmentation, 12 image registration, 13 computer vision 16,17 and deep learning have been proposed. 18 The resulting synthesized body model can then facilitate both subject-specific calculations of local SAR as well as tailored PTx pulse designs, both key to the ultra-high field MR workflow. 19,20 As these studies are typically based on 3T data which are relatively free from image artifacts, the resulting image segmentation methods are not directly suited to handle 7T data due to the increased level of image shading and contrast non-uniformity, which would lead to segmentation errors and inaccuracies in the resulting SAR predictions. Addressing these inaccuracies would require either time-consuming manual corrections or, alternatively, an additional MR examination at 3T.
In this work, we present a method based on deep learning to generate a subject-specific numerical body model for local SAR prediction automatically from a single 3D T1-weighted neuroimaging dataset acquired at 7T, which can be run in a few minutes and is standard in almost all neuroimaging protocols. The network is trained using a custom set of segmented body models derived from multi-contrast 7T data to serve as the ground truth. By using the original T1-weighted data as input for training, RF-induced image nonuniformities and artifacts typical to 7T are automatically accounted for by the network. Finally, the accuracy of the network-generated body models is evaluated in terms of the 10 g-averaged SAR in both a quadrature birdcage RF coil model as well as a PTx configuration and compared to the conventional "one-size-fits-all" approach.
METHODS
The approach for developing the custom set of body models and deep learning segmentation method is schematically illustrated in Figure 1 and described in more detailed in the following sections. Healthy volunteers were scanned under a protocol approved by the local institutional review board. Signed informed consent was obtained from all volunteers.
MR protocol
A multi-contrast MR protocol was acquired in 10 healthy volunteers (5 male, 5 female, age 26.9 ± 9.7) on a 7T MR system (Achieva, Philips Healthcare, Best, the Netherlands) equipped with a quadrature birdcage head coil and a 32-channel receive coil array (Nova Medical, Wilmington, MA). The imaging protocol started with image-based B 0 shimming up to third-order and image-based receive coil sensitivity calibration in the entire head and neck region using vendor-supplied routines. All anatomical data were acquired at an isotropic spatial resolution of 1 mm 3 and a field of view of 192 × 256 × 256 mm 3 in a sagittal orientation covering the head and neck.
The MR protocol included a T1w 3D MP-RAGE sequence (TR/TE/TI = 4.9/2.3/1050 ms, shot interval = 2500 ms, 69 shots, flip angle = 5°, sensitivity encoding (SENSE) factor = 1.5 × 2 [AP × RL], acquisition time = 2 min 54 s), a T2w 3D fast spin echo (FSE) sequence (TR/TE/TE eq = 2500/205/132 ms, echo train length (ETL) = 128, refocusing angle = 70°, SENSE factor = 2 × 2, partial Fourier factor = 6/8, number of signal averages = 2, acquisition time = 4 min 5 s), and a PDw 3D spoiled gradient echo sequence (TR/TE = 3.7/1.97 ms, flip angle = 10°, acquisition time = 2 min 39 s). Additionally, a three-point multi-acquisition 3D Dixon sequence was acquired for water/fat separation (TR/TE 1 /ΔTE = 6.3/3.0/0.33 ms, flip angle = 15°, SENSE factor = 2 × 2, acquisition time = 5 min 21 s), and B 1 + mapping was performed using a multislice DREAM sequence (in-plane resolution = 4 × 4 mm 2 , slice thickness = 4 mm, TR/TE = 4.0/1.97 ms, STEAM/imaging flip angle = 50°/10°, acquisition time = 13 s). 21 All image reconstructions were performed twice, with intensity normalization of the receive coils first calibrated to the volume coil and subsequently calibrated to a sum-of-squares combination of the receive elements, using vendor-supplied reconstruction routines. This results in having an intensity bias imprinted on the data that is similar to that obtained either in a transmit/receive RF coil or a receive-only RF coil array, respectively.
FIGURE 1. Schematic illustration of the multi-contrast data used for generating the custom set of body models (N = 10) to serve as ground truth, of which the T1-weighted data is used as input for training the deep learning method. Whereas the semi-automatic segmentation process involves many steps with elaborate user interaction, the deep learning method produces the body model from the original T1-weighted data automatically.
Semi-automatic segmentation for ground truth generation
The image data were segmented into eight distinct tissue types to ensure accurate predictions of local SAR, 15 using a semi-automatic segmentation pipeline involving Matlab 9.10 (MathWorks, Natick, Massachusetts, USA), FSL 6.0 (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/) and 3D Slicer (https://www.slicer.org/). 22,23 The target tissue types extended those suggested by the study of Buck et al. 15 and included internal air, bone, muscle, fat, white matter, gray matter, cerebrospinal fluid, and eye tissue. This resulted in 10 three-dimensional body models with corresponding T1w image data to serve as pairs of ground truth and input data in the development of the deep learning segmentation method. The approach is graphically illustrated in Figure 1.
The semi-automatic segmentation procedure started with a custom intensity bias correction procedure based on the DREAM data to correct for the RF-induced nonuniformities in the 7T image data. 24 The underlying stimulated-echo and FID images were first used to derive B 1 + and M 0 B 1 − maps based on the corresponding signal expressions, 21 which were subsequently fitted onto a spherical function basis to remove the M 0 component and noise. 25 The fitted maps were then used to generate bias field estimates by using the signal equations corresponding to gradient-recalled (GRE) and spin-echo sequences, 26
which were applied to the corresponding datasets. The bias correction procedure is graphically illustrated and compared to conventional N4 bias correction in Figure 2. After intensity correction, all datasets were co-registered using the rigid registration procedure from the Elastix toolbox in 3D Slicer. 27 Body tissues were distinguished from bone and internal air by thresholding the PDw data, followed by manual correction of image artifacts such as eye motion or residual intensity bias. The PDw data were then median filtered and paranasal sinuses identified within the corresponding cranial bone sections by means of thresholding. Care was taken to ensure that the bone wall around the sinuses was no less than 2 mm thick. Brain extraction and segmentation were performed on the T1w data using the BET and FAST toolboxes within FSL. 28 The T2w data were used to segment the eyes using a region growing algorithm in 3D Slicer. The remaining body tissues were segmented into fat and muscle based on the fat fraction maps that were derived from the Dixon data. Finally, a 1 mm layer of skin was enforced by replacing fat voxels in the outer layer of the body model with muscle.
FIGURE 2. Custom intensity bias correction procedure based on DREAM data. Bias fields for gradient-recalled (GRE) and fast spin echo (FSE) sequences were estimated by fitting the DREAM-generated B 1 + and M 0 B 1 − maps to a spherical function basis (A), which were subsequently used to correct the image data (B).
Deep learning segmentation
A convolutional neural network was designed based on the ForkNET topology 18 and implemented using Tensorflow 29 in Python. The network architecture consists of multiple U-net structures with one common encoder and nine parallel decoders, each output corresponding to one of the tissue segments in addition to one for the background. As 3D convolutional neural networks often pose demanding memory requirements, a 2.5D approach was adopted by training three independent 2D networks for each of the three orthogonal slice orientations. The network topology had a total of 23 layers, of which 6 were pooling layers. The first layer encoded eight feature maps, and this number doubled after each of the pooling layers. This yielded a total number of 5 million trainable network parameters per 2D network. All convolutions were performed using a kernel size of 3 × 3, stride of 1 × 1 and padding of 1. All deconvolutions and max pooling steps were performed using a kernel size of 2 × 2. Batch normalization was performed with a momentum of 0.9 and a stability parameter of ε = 0.001. After summing the three network outputs, tissue labels were assigned according to the maximum output channel.
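For illustration, a strongly reduced Keras sketch of the shared-encoder/parallel-decoder idea is given below; it uses only two pooling stages instead of six, omits batch normalization, and its filter counts are placeholders, so it is a structural sketch rather than the 23-layer network described above.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def forknet_2d(input_shape=(192, 256, 1), n_outputs=9, base_filters=8):
    """Simplified 2D network: one shared encoder and one decoder branch per tissue output."""
    inp = layers.Input(shape=input_shape)

    # Shared encoder
    e1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D(2)(e1)
    e2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D(2)(e2)
    bottleneck = layers.Conv2D(base_filters * 4, 3, padding="same", activation="relu")(p2)

    # Parallel decoders (tissue classes plus background)
    outputs = []
    for i in range(n_outputs):
        d2 = layers.Conv2DTranspose(base_filters * 2, 2, strides=2, padding="same")(bottleneck)
        d2 = layers.Concatenate()([d2, e2])            # U-net style skip connection
        d2 = layers.Conv2D(base_filters * 2, 3, padding="same", activation="relu")(d2)
        d1 = layers.Conv2DTranspose(base_filters, 2, strides=2, padding="same")(d2)
        d1 = layers.Concatenate()([d1, e1])
        d1 = layers.Conv2D(base_filters, 3, padding="same", activation="relu")(d1)
        outputs.append(layers.Conv2D(1, 1, activation="sigmoid", name=f"tissue_{i}")(d1))

    return Model(inp, outputs)

model = forknet_2d()
```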
In the case when none of the channels generated an output (i.e., all outputs being equal to zero), which would result in a void voxel within the model, a neighborhood majority vote was applied. 18 A cross-validation study was performed to test the performance of the deep learning segmentation method on independent data which were not used for training the network. To achieve this, all training was performed in a leave-one-out manner, in which the test subject (i.e., the 3D dataset that was used for testing the network) was excluded from the entire training stage. The network was then trained using randomized 2D slices of the original T1w data (i.e., without any pre-processing) as input, and corresponding 2D slices of the semi-automatic segmentations as the ground truth, in which 90% of the dataset was used for training and 10% for validation. This means that the transverse and coronal networks were trained with 2304 slices of 192 × 256 pixels in size and that the sagittal network was trained with 1728 slices of 256 × 256 pixels in size. Either the T1w data with volume coil or sum-of-squares intensity normalization were used as input data, yielding a dedicated network for either reconstruction setting. Training was performed using batches of 10 randomized training images per iteration in 40 epochs using the ADAM optimizer. 30 The Dice coefficient, also known as Dice similarity index, was used to measure segmentation quality and employed as a loss function for training. One epoch took approximately 114 s on a GPU (Tesla K40c, NVIDIA, Santa Clara, CA), which resulted in a total training time of approximately 4 h per test subject. After training and testing, the network was re-initialized with random weights, and the procedure was repeated on the following test subject such that the accuracy of the method could be evaluated in all datasets (N = 10) in an independent manner.
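The Dice metric used here both for evaluation and (as 1 − Dice) for the training loss can be written in a few lines; the snippet below is a plain NumPy version operating on hard binary masks, whereas training would use the soft network outputs.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-6):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def dice_loss(pred, truth):
    """Loss used for training: 1 - Dice."""
    return 1.0 - dice_coefficient(pred, truth)

a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[1:3, 2:4] = 1
print(dice_coefficient(a, b))   # 0.5 for two half-overlapping 2x2 squares
```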
RF field simulations
RF field simulations on the ground truth and network-generated body models were obtained at 300 MHz using XFdtd (version 7.4, Remcom Inc., State College, PA) to evaluate the B 1 + and 10 g-averaged SAR distribution (SAR 10g ). Literature values for the dielectric properties and density were assigned to each of the tissue types. 31 SAR averaging was performed using a custom region growing algorithm, which ensures correct averaging around the outer borders of the model. 32 All simulations were performed in a 2 mm uniform discretization grid with a sinusoidal excitation at 300 MHz on an Intel Xeon 2.80 GHz processor equipped with a GPU (Tesla K40c, NVIDIA, Santa Clara, CA), and all custom post-processing was implemented in Matlab (version 9.10, MathWorks, Natick, MA). First, a single-channel RF exposure assessment was performed on each of the body models in a shielded 16-rung high-pass birdcage model driven in quadrature mode using fixed excitation ports at each of the capacitor gaps. The rungs of the birdcage were 18 cm long and 2.5 cm wide, the inner diameter was 30 cm and the outer diameter of the shield was 36 cm. The birdcage RF coil model was validated experimentally in a head-sized phantom through B 1 + mapping as well as MR thermometry. 33 Simulations in the birdcage model took approximately 130 s to reach a steady state with −40 dB of convergence, owing to the non-resonant nature of the coil model, and the resulting field data were normalized to 1 W of RF input power.
A PTx RF exposure assessment was finally carried out on each of the body models by evaluating 1000 random RF shims in a generic eight-channel unshielded loop array coil with an inner diameter of 30 cm. The loop elements had a 6 cm width and 24 cm length and had six tuning capacitor breaks. The RF coil was simulated using excitation ports at each of the 48 capacitor gaps and tuned using a circuit co-simulation method which involved a custom optimization procedure aimed to minimize both the input reflection coefficients and worst case coupling between channels. 34,35 The tuning process was performed by loading the coil with a reference body model "Duke" from the Virtual Family, 36 and yielded tuning capacitances of 3.6 pF and a series matching capacitor of 5.9 pF. All input reflection coefficients were below −12 dB, hence the coil model did not require retuning when different body models were inserted. After tuning the coil in the circuit co-simulation domain, field data were combined to produce the B 1 + and electric field response for each of the channels. The electric field data were then combined to construct Q-matrices, 37 which were averaged over 10 g of tissue and converted into a vectorized format to allow for efficient evaluation of the local SAR in arbitrary RF shim settings. 19,20 A series of 1000 random RF shims was finally evaluated in both the ground truth as well as the network-generated body models by assigning random phases and amplitudes to all RF channels and comparing the resulting SAR 10g distributions. All PTx simulation results were normalized to a total input power of 1 W. Port-wise simulations of the PTx coil model took around 30 s per port and post-processing (i.e., circuit co-simulation and averaging of the Q-matrices) took around 100 s. In all, the PTx exposure analysis in a single body model took approximately 25 min.
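The RF-shim evaluation step can be sketched as follows: for each complex shim vector v, the 10 g-averaged SAR associated with every averaged Q-matrix (or virtual observation point) is v^H Q v, and the peak over all matrices is retained. The NumPy sketch below uses randomly generated positive semi-definite matrices as stand-ins for the simulated field data, so only the evaluation logic, not the numbers, is meaningful.

```python
import numpy as np

def peak_sar10g(Q, shim):
    """Peak 10g-averaged SAR over all averaged Q-matrices for one complex shim vector.
    Q: (n_matrices, n_ch, n_ch); shim: (n_ch,) complex excitation vector."""
    sar = np.real(np.einsum("c,vcd,d->v", np.conj(shim), Q, shim))   # v^H Q v per matrix
    return sar.max()

rng = np.random.default_rng(0)
n_ch, n_vop = 8, 500
# Toy positive semi-definite Q-matrices standing in for the real averaged field data
A = rng.standard_normal((n_vop, n_ch, n_ch)) + 1j * rng.standard_normal((n_vop, n_ch, n_ch))
Q = 1e-3 * np.einsum("vij,vkj->vik", A, np.conj(A))

# Evaluate 1000 random RF shims normalized to 1 W total input power
peaks = []
for _ in range(1000):
    shim = rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)
    shim /= np.linalg.norm(shim)               # unit total power
    peaks.append(peak_sar10g(Q, shim))
print(max(peaks))
```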
Deep learning segmentation
The segmentation results of the leave-one-out cross-validation study are shown in Figure 3. The network-generated models showed a strong similarity with the ground truth models, indicating that the network was able to account for the non-uniform intensity and contrast variations within the head as well as the strong signal drop-off towards the neck. In particular, the paranasal sinuses and bone segments were correctly distinguished despite having a very similar signal intensity in the T1w data, indicating the leverage obtained through the deep learning approach. Some models showed some undersegmentation in distal neck regions where SNR was very low, however this may not be problematic as local SAR is typically low here as well. On average, around 158 voxels within the 3D model were not classified by any of the decoder branches and were generated using the neighborhood majority vote rule. Results obtained for the sum-of-squares intensity normalized data were essentially the same. The Dice coefficients for the different tissue segments in the cross-validation study are shown in Figure 4, showing an overall Dice coefficient of 86.7% ± 6.7% (mean ± SD). Median Dice coefficients were greater than 80% in all segments, with fat reaching the lowest overall accuracy. We note that this metric reflects segmentation errors in the entire field of view, including areas where the SAR 10g is typically low, for example in the neck where the gross anatomy is expected to be more relevant than the local tissue properties. Structures with a well-defined MR contrast and shape, such as white matter and eye tissues, reached the highest overall dice coefficients.
FIGURE 3. Leave-one-out cross-validation results comparing ground truth and deep learning-based segmentations in all volunteers. Shown are sagittal cross-sections of the T1-weighted data (top), ground truth segmentations (middle), and network-generated segmentations (bottom). The deep learning method shows to account for the nonuniform contrast and severe drop-off in intensity towards the neck. In each of these evaluations, the test subject was excluded from the training data to ensure generalizability.
RF field simulations
The accuracy of the network-generated body models was evaluated by comparing simulations and measurements of the B 1 + field in the quadrature birdcage RF coil model, which are shown in Figure 5. The simulated B 1 + shows a high degree of correspondence with the measured B 1 + data, both in terms of the relative distribution as well as in terms of peak transmit efficiency. Simulations of the SAR 10g distribution in the ground truth and network-generated body models obtained in the quadrature birdcage model are shown in Figure 6. The bottom row shows the voxel-wise underestimation error obtained by subtracting the SAR 10g data obtained in the network-generated model from those obtained in the ground truth model. In other words, underestimation of SAR 10g (i.e., undesired from a safety compliance point of view) corresponds to a positive underestimation error. The peak SAR 10g values obtained in the network-generated body models were within 3.0% of those obtained in the corresponding ground truth body models, for all subjects. This is considerably lower than the intersubject variability in peak SAR 10g of 37.2% (i.e., absolute range divided by the mean value) and practical uncertainty levels associated with RF exposure assessments. 6,15,33 The head-averaged SAR values obtained in the network-generated models were within 1.8% of those obtained in the ground-truth models.
Results of the PTx RF exposure assessment are shown in Figure 7, showing sagittal cross-sections of the maximum SAR 10g value obtained in the 1000 random RF shims. Both maximum and minimum intensity projections of the voxel-wise underestimation error are shown in the two bottom rows, where the underestimation error is computed with respect to the maximum SAR 10g maps.
An overview of the peak SAR 10g underestimation error in the PTx configuration is shown in Figure 8A, obtained by comparing the peak SAR 10g produced in each of the network-generated models with that produced in the corresponding ground truth model, for each of the 1000 random RF shims. Figure 8B shows the peak SAR 10g overestimation error in the generic "one-size-fits-all" approach, obtained by comparing the peak SAR 10g produced in each of the ground truth body models with the maximum peak SAR 10g that is produced in the other nine body models of the dataset, for each of the 1000 random RF shims. The underestimation error had a mean value of −1.5%, which corresponds to a slight overestimation of the peak SAR 10g , and in 95% of the RF shims the underestimation error was found to be less than 4.8%. By incorporating these into a safety factor, the subject-specific approach would incur an effective peak SAR 10g overestimation of up to 6.3% with a 5% probability of underestimation, whereas the generic approach would result in an average overestimation of 34%, exceeding 95% overestimation in 5% of the cases. For comparison, increasing the confidence interval of the safety factor to 99% would lead to an effective peak SAR 10g overestimation of up to 9% with a 1% probability of underestimation.
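The percentile-based margin described here can be derived directly from the distribution of underestimation errors over the evaluated RF shims. The sketch below illustrates the idea with synthetic numbers; the assumed Gaussian spread is for illustration only and is not the study's actual error distribution, so the printed margins will not match the reported 6.3% and 9% values exactly.

```python
import numpy as np

def required_safety_margin(underest_errors_pct, confidence=0.95):
    """Safety margin (in %) such that scaling the predicted peak SAR10g by
    (1 + margin/100) covers the underestimation error in `confidence`
    of the evaluated RF shims."""
    return np.percentile(underest_errors_pct, 100 * confidence)

# Synthetic example: underestimation errors (%) over 1000 random RF shims,
# roughly matching the reported mean of -1.5% (spread is assumed)
rng = np.random.default_rng(2)
errors = rng.normal(-1.5, 3.2, 1000)
for conf in (0.95, 0.99):
    print(f"{conf:.0%} margin: {required_safety_margin(errors, conf):.1f}%")
```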
DISCUSSION
In this work we have explored the potential of deep learning for generating a subject-specific numerical body model from a single T1-weighted 7T image dataset for personalized local SAR prediction. Local SAR compliance is one of the current bottlenecks hindering clinical use of PTx at 7T. Most vendors impose restrictive safety margins on the use of PTx of up to 300% to ensure compliance, which compromise image quality by limiting the allowed range of sequence parameters such as the refocusing tip angles in FSE sequences or the minimum repetition time that can be attained. Such compromises mean that 7T is currently not utilized to its full potential, limiting its clinical impact. Subject-specific information on local SAR would enable tailoring the RF safety margins to the individual subject, rather than applying generic models with overconservative safety margins, thereby removing unnecessary limitations and enabling PTx to be exploited to its full potential.
F I G U R E 5
Experimental validation of the ground truth segmentations in the quadrature birdcage RF coil. Shown are the simulated (top) and measured (bottom) B 1 + data. All data were normalized to 1 W of input power
F I G U R E 6
Quadrature birdcage local SAR assessment. Shown are simulated SAR 10g distributions in ground truth (top) and network-generated body models (middle), and corresponding underestimation error maps (bottom). Figure footers denote peak SAR 10g values (top, middle) and the corresponding relative underestimation (bottom). Positive errors indicate a peak SAR 10g underestimation in the network-generated model
F I G U R E 7
PTx local SAR assessment. Shown are maps of the maximum SAR 10g value obtained in the evaluation of 1000 random RF shims in the ground truth (top) as well as network-generated models (middle), and projections of the SAR 10g underestimation and overestimation (bottom). Positive errors indicate SAR 10g underestimation in the network-generated model
The segmentation performance of the proposed deep learning approach was found to be of high quality, as reflected in the local SAR results. By training the network on 7T MR data with severe intensity bias and contrast non-uniformities throughout the field of view, the method was found to correctly account for these intrinsic image characteristics, even though only nine subjects were used for training the network in each of the cross-validation cycles. This means that the method relieves the operator from performing elaborate bias-correction procedures or other image processing steps, and can instead be applied directly to the 7T data without any pre-processing. Of all tissues, fat reached the lowest overall segmentation accuracy with a median Dice coefficient of 80%. This can be explained by the different MR contrast mechanisms that were used, with ground truth segmentations being based on chemical shift, encoded in the Dixon data, as opposed to the T1-weighted contrast of the input data. From Figure 3, it can be observed that fat is often undersegmented in the lower portion of the body models. This also corresponds to the region where the adiabatic RF inversion pulse fails to reach a proper inversion, explaining the inconsistent T1-weighting of the input data in this region.
F I G U R E 8
Statistical analysis of the peak SAR 10g accuracy obtained with the deep learning segmentation method in the PTx configuration in 1000 random PTx excitation settings. Shown are histograms of the peak SAR 10g underestimation error obtained in the network-generated models (A) and the peak SAR 10g overestimation obtained in the generic "one-size-fits-all" approach (B)
Other groups have proposed acquiring multiple MR contrasts or even MR fingerprinting as input data to improve the segmentation quality 17,38 ; however, such approaches would substantially increase the acquisition time and interfere with the MR workflow. Finally, the ForkNET network design was chosen here, having previously been shown by Rashed et al. to outperform a conventional U-NET in semantic segmentation of MRI data; however, other network designs may also be conceivable. This may also involve different loss functions, such as cross entropy, or include attention mechanisms to promote SAR-sensitive regions of the model to be represented with improved quality. 39 In the current study, the RF exposure assessment took approximately 2 min in the quadrature birdcage model and 25 min in the PTx configuration, both relatively time-consuming compared to the deep learning segmentation step, which took only 14 s. Together with the acquisition of the T1w input data, which took almost 3 min, this constitutes a total workflow of around 6 min for the single-channel RF exposure assessment and close to 30 min for the PTx exposure analysis. Future work should therefore aim to reduce both the MR data acquisition and RF simulation time, to improve the integration of the subject-specific approach into the MR workflow. Options to speed up the RF simulations include using a larger simulation grid size, leveraging parallel computing, and using specialized EM solvers such as MARIE. 40 For example, increasing the simulation grid from 2 mm to 4 mm reduces the computation time for the PTx exposure analysis from 25 min to around 7 min. In a PTx setting, we should note that the B 1 + predictions obtained from the RF simulations would also allow subsequent PTx pulse calibrations, potentially saving time by avoiding volumetric B 1 + mapping procedures, which can take several minutes to acquire. [41][42][43] Recently, other groups explored methodologies to infer local SAR directly from B 1 + maps using deep learning, 44 exploiting the coupled structure of the magnetic and electric RF fields, or even directly from anatomical MR images. 45 Although such approaches show potential to resolve local SAR in a single-channel configuration or for a specific RF shim setting, they have not yet been demonstrated in a comprehensive PTx workflow, which would require channel-wise local SAR information as well as information accounting for the interference between the different channels. Our approach has the advantage that the subject-specific anatomical model can be used to perform a full RF exposure analysis, including for example channel-wise analyses or dedicated PTx excitation settings. Additionally, our approach can potentially handle MRI data from a wider variety of RF coils, as most PTx arrays optimized for neuroimaging are capable of generating a circularly polarized (CP 1 + ) mode that will produce an excitation B 1 + field very similar in distribution to that obtained in the quadrature birdcage, which was used here.
This would then also produce contrast variations and intensity bias effects comparable to those present in the data used for training the network. Additionally, different receive channel combination strategies have been addressed by including both sum-of-squares as well as volume-coil normalized data in the training dataset. Remaining intersystem variations in image intensity are anticipated to fall well within the range of intersubject variations, which the network was well capable of addressing as shown by the current study.
Limitations of the current study include the limited size of the dataset (N = 10). In a previous segmentation study at 3T, stable training was obtained with a similar number of subjects. 46 To determine whether this was also adequate in the current study, we evaluated the convergence of the leave-one-out cross-validation study when using fewer subjects, for example, N = 5 up to N = 10 (cf. Supporting Information Figure S1, which is available online). The peak SAR 10g error was found to be no greater than 3.1% and converged smoothly to the values obtained when all subjects were included. Although this suggests generalizability of the network, segmentations in subjects with a significantly different anatomy, for example, pediatric subjects or subjects with specific pathologies, may potentially reveal inconsistencies and may require further extensions of the training dataset. A challenge with including pathologies in the training data is that it is not yet clear whether the dielectric properties could still be represented using the current set of tissue clusters. Another limitation of our study, and of RF exposure assessments in general, is that it is not possible to validate the RF simulation results with in vivo measurements of the SAR distribution. We have experimentally validated our head models by comparing the measured and simulated B 1 + fields in the birdcage model, which, despite showing a strong agreement, leave some room for further model refinements. An underlying shortcoming of this validation approach is that errors in local SAR may not always be directly reflected in the B 1 + distribution. 47 Additionally, the PTx exposure analysis was only performed in a single PTx coil model, and other PTx coil designs may show a different sensitivity to segmentation errors. Finally, in the PTx analysis, we considered only static RF shimming with random excitation settings, which also includes settings that do not produce practically useful B 1 + distributions. Although this enables generalization of the results, a more realistic analysis could specifically target tailored PTx pulses such as kT-points, SPINS pulses or local SAR-optimized RF pulse designs. 19,20,48,49
CONCLUSIONS
In this work, we demonstrate a method based on deep learning to automatically generate a subject-specific numerical body model from a single T1-weighted 7T image dataset for personalized RF exposure prediction. The network-generated body models reproduced the ground truth RF exposure results with a high level of agreement, with peak local SAR errors below 3.0% in the quadrature birdcage model. In the PTx configuration, a safety margin of 6.3% was sufficient to ensure a conservative local SAR prediction in 95% of the random RF shims, compared to an average overestimation of 34% in the "one-size-fits-all" approach. As a T1-weighted image is typically acquired at the start of a neuroimaging protocol as a basic anatomical reference, the procedure has the potential to be seamlessly integrated into the MR workflow.
SUPPORTING INFORMATION
Additional supporting information may be found in the online version of the article at the publisher's website. Figure S1. Convergence of the leave-one-out cross-validation study evaluated in the quadrature birdcage configuration. When using fewer subjects (N = 5), the peak local SAR 10g is within 3.1% compared to the cross-validation result based on using all subjects (N = 10). Values shown are peak SAR 10g (top) and relative peak SAR 10g error (bottom) compared to the value obtained when using all subjects (N = 10). | 7,599.2 | 2022-03-28T00:00:00.000 | [
"Physics"
] |
MiR-222 in Cardiovascular Diseases: Physiology and Pathology
MicroRNAs (miRNAs and miRs) are endogenous 19–22 nucleotide, small noncoding RNAs with highly conserved and tissue-specific expression. They can negatively modulate target gene expression by decreasing transcription or by inducing mRNA decay post-transcriptionally. Increasing evidence suggests that deregulated miRNAs play an important role in the genesis of cardiovascular diseases. Additionally, circulating miRNAs can serve as biomarkers for cardiovascular diseases. MiR-222 has been reported to play important roles in a variety of physiological and pathological processes in the heart. Here we review the recent studies on the roles of miR-222 in cardiovascular diseases. MiR-222 may be a potential cardiovascular biomarker and a new therapeutic target in cardiovascular diseases.
Introduction
Cardiovascular disease is a predominant cause of morbidity and mortality in the world [1]. The number of patients suffering from cardiovascular disease continues to grow. The major categories of cardiovascular disease include diseases of the blood vessels and of the myocardium. The contemporary view is that most cardiovascular diseases result from a complex dysregulation of genetic and environmental factors. Many molecular components, including noncoding RNAs, participate in this process.
MicroRNAs (miRNAs and miRs) are endogenous 19-22 nucleotide, small noncoding RNAs with highly conserved and tissue-specific expression. miRNAs can modulate mRNA levels through decreasing transcription or post-transcriptionally induced mRNA decay [2]. Since the first discovery of miRNAs in 1993, they have been found in many species and participate in various physiological and pathological processes [3][4][5][6]. So far, over 1000 miRNAs have been identified, among which at least 200 miRNAs are consistently expressed in the cardiovascular system [7]. miRNAs can regulate cardiomyocyte hypertrophy, senescence, apoptosis, autophagy, and metabolism. Changes in miRNAs have been found to participate in the genesis of many diseases, including cardiovascular diseases [8].
miR-222, first discovered in human umbilical vein endothelial cells (HUVECs), has been reported to play important roles in epithelial tumors, as evidenced by its frequently increased expression in these tumors [9]. Reduction of miR-222 could inhibit cell proliferation and induce mitochondrial-mediated apoptosis through directly targeting the p53 upregulated modulator of apoptosis (PUMA) in breast cancer [10]. Its function in proliferation has also been confirmed in glioblastomas, thyroid papillary cancer, breast cancer, pancreatic cancer, hepatocellular carcinoma, and lung cancer [11][12][13][14][15]. On the other hand, miR-222 can play tumor-suppressive roles through the downregulation of c-kit in erythroleukemia cells. Apart from its role in cancer progression, miR-222 has been found to participate in many physiological and pathological processes in the cardiovascular system (Table 1). Here we reviewed the recent studies about the roles of miR-222 in cardiovascular diseases. MiR-222 may be a potential cardiovascular biomarker and a new therapeutic target in cardiovascular diseases.
In contrast to pathological hypertrophy, which is related to myocardial structural disorder and cardiac dysfunction, physiological hypertrophy is characterized by normal cardiac structure and normal or improved cardiac function [28]. MiR-222 expression levels were found to be commonly increased in two distinct models of exercise, namely, voluntary wheel running and a ramp swimming exercise model, as well as during exercise rehabilitation after heart failure in humans. MiR-222 was able to promote cardiomyocyte hypertrophy, proliferation, and survival through directly targeting p27, HIPK-1, HIPK-2, and HMBOX1 [17].
MiR-222 Regulates Physiological Function in Cardiac Stem Cells.
The heart has limited regenerative capacity, which might be based on cardiomyocyte division and cardiac stem and progenitor cell activation [29]. Cardiac stem cells (CSCs) are self-renewing, clonogenic, and multipotent, and they can differentiate into mature cardiomyocytes and improve the function and regeneration of the cardiovascular system [30]. CSCs can be activated by physical exercise training [18]. Interestingly, it has been found that the upregulation of miR-222 induced by coculturing human embryonic stem cell-derived cardiomyocytes (m/hESC-CMs) with endothelial cells could promote CSC transformation into cardiomyocytes [18].
MiR-222 Regulates Physiological Function in Human Umbilical Vein Endothelial Cells.
Human umbilical vein endothelial cells (HUVECs) have a unique ability to form capillary-like structures in response to certain stimuli. MiR-222 has been reported to exert an angiogenesis-related function through modulating the angiogenic activity of HUVECs by targeting c-Kit [31,32].
Sex-Specific Expression of miR-222.
There are differences between men and women in the incidence of cardiovascular diseases, and studies show that males are more likely to suffer from heart attacks than females [33,34]. MiR-222 is encoded on the X chromosome in mouse, rat, and human and shows sex-specific expression. Studies have indicated that miR-222 is specifically decreased in mature female mouse hearts compared with male mouse hearts [31,35].
MiR-222 Regulates Pathological Function
Unraveling the role of miR-222 in regulating cardiac pathological function may foster new therapeutic targets for cardiovascular diseases (Figure 2).
Cardiac Ischemia Reperfusion Injury.
Myocardial ischemia reperfusion is a complex process involving numerous mechanisms, including reactive oxygen species (ROS) overload, inflammation, calcium overload, energy metabolism dysfunction, and mitochondrial permeability transition pore (mPTP) opening [36][37][38]. MiR-222 has been reported to be able to protect against cardiac dysfunction after ischemic injury. MiR-222 can promote cardiomyocyte proliferation and reduce cardiomyocyte apoptosis through p27. In addition, miR-222-overexpressing mice show well-preserved cardiac function and reduced cardiac fibrosis when subjected to cardiac ischemia reperfusion [17].
Heart Failure.
Heart failure is the terminal outcome of the majority of cardiovascular diseases, and it seriously reduces the quality of life. A significant inhibition of autophagy was observed in Tg-miR-222 mice after heart failure, which was mediated through mTOR, a negative regulator of autophagy [19]. Inhibition of autophagy induced by miR-222 may cause protein accumulation and organelle injury, and even impairment of cardiac function. Angiogenesis has been proposed as a promising therapy for ischemic heart disease and heart failure. The miR-221/222 family seems to inhibit angiogenesis [21]. MiR-222 was significantly decreased in endothelial cells (ECs) cultured for 24 h with HDL from chronic heart failure (CHF) patients compared to healthy controls. The downregulation of miR-222 may be a compensatory mechanism of ECs to counteract adverse cardiovascular events [39].
Viral Myocarditis.
Cardiac inflammation is an important cause of dilated cardiomyopathy and heart failure. In young healthy adults, it can cause sudden death. Viral myocarditis is one such cardiac inflammatory disease. MiR-222 has been reported to be able to orchestrate the antiviral and anti-inflammatory response through downregulation of IRF-2 [23]. Inhibition of miR-222 would increase the risk of cardiac injury. HIV-associated cardiomyopathy is another kind of inflammatory disease [22,40]. MiR-222 can regulate the translation of the cell adhesion molecule ICAM-1 directly or indirectly (through IFN-) to inhibit inflammation [22,41].
Congenital Heart Disease.
Tetralogy of Fallot (TOF) is one of the most common congenital heart malformations in children [42]. miR-222 was found to display a high expression level in right ventricular outflow tract (RVOT) tissues compared with controls. Cardiac myocyte proliferation and differentiation are key events in heart development. Further functional analysis showed that overexpression of miR-222 promoted cell proliferation and regulated cell differentiation by inhibiting the expression of the cardiomyocyte marker genes during cardiomyogenic differentiation [25]. In another congenital heart disease, ventricular septal defect, the decreased expression of miR-222 also indicated its important role in heart development [20].
MiR-222 Regulates Pathological Function in Blood Vessels
3.2.1. Atherosclerosis. During the genesis of atherosclerosis, various molecules and cellular components can make the atherosclerotic plaque vulnerable and even cause it to rupture [43]. Many studies show that miRNAs also participate in this process [44]. MiR-222 derived from ECs may play its protective role by blocking intraplaque neovascularization and suppressing the inflammatory activation of ECs, without enhancing the proliferation of ECs [45,46].
Peripheral Arterial Disease.
Smooth muscle cells (SMCs) constitute the medial layer of arteries and regulate the vascular tone via their contractile apparatus [27]. MiR-222 was reported to take part in the development of neointima and to promote neointima formation after vascular injury by enhancing the proliferation of SMCs. Furthermore, in peripheral artery disease (PAD), caused by atherosclerosis or inflammation of the peripheral arteries, studies have shown that miR-222 also inhibited the proliferation of vascular smooth muscle cells by targeting p27 [45] to stabilize the plaque [24] and promoted skeletal muscle regeneration after ischemia. Besides that, under the administration of superoxide dismutase-2 (SOD-2), miR-222 plays its protective role against peripheral artery disease by regulating p57 expression [26] rather than p27.
Conclusions
In conclusion, miR-222 controls many cardiac physiological functions, and its deregulation has been implicated in many cardiovascular diseases. Targeting miR-222 might be a promising therapeutic strategy for cardiovascular diseases. | 1,922.2 | 2017-01-03T00:00:00.000 | [
"Biology"
] |
Stable N-doped & FeNi-decorated graphene non-precious electrocatalyst for Oxygen Reduction Reaction in Acid Medium
NiFe nanoparticle-decorated, N-doped graphene is introduced as an effective and stable non-precious electrocatalyst for the ORR in acid medium. Compared to conventional Pt/C electrodes under the same conditions, the proposed nanocatalyst shows a comparable onset potential and current density. Typically, the observed onset potentials and current densities for the synthesized and Pt/C electrodes are 825 and 910 mV (vs. NHE) and −3.65 and −4.31 mA.cm−2 (at 5 mV.s−1), respectively. However, the most important advantage of the introduced metallic alloy-decorated graphene is its distinct stability in acid medium; the retention of the electrocatalytic performance after 1,000 successive cycles is approximately 98%. This finding is attributed to the high corrosion resistance of the NiFe alloy. The kinetic study indicates that the number of transferred electrons is 3.46 and 3.89 for the introduced and Pt/C (20 wt%) electrodes, respectively, which indicates high activity for the proposed nanocomposite. The suggested decorated graphene can be synthesized using a multi-step thermal method. Typically, nickel acetate, iron acetate, graphene oxide and urea are subjected to microwave heating. Then, sintering with melamine in an argon atmosphere at 750 °C is required to produce the final electrocatalyst. Overall, the introduced NiFe@ N-doped Gr nanocomposite shows remarkable electrochemical activity in the acid medium with long-term stability.
Considering that the most effective non-precious ORR catalysts are primarily nitrogen-doped nanocarbons (e.g. N-doped carbon nanotubes (CNT) 11 and CNT/graphene mixtures 12 ), supporting effective bimetallic nanoparticles on a suitable N-doped carbonaceous material can distinctly enhance the ORR electrocatalytic activity 13 . Besides enhancing the electrocatalytic activity, N-doped carbon nanostructural supports have shown more stability than nitrogen-free ones due to the high number of surface nucleation sites, which allows anchorage and high dispersion of the catalyst nanoparticles on the surface of the support material [14][15][16] . Moreover, due to the strong electron-donor behavior of nitrogen, the doping process improves the durability of the produced carbon-supported catalysts because of the enhancement of π bonding 17,18 and their basic properties 19 .
Graphene is an attractive carbon material with excellent characteristics including a large theoretical surface area (2675 m 2 g −1 ), strong mechanical strength, and excellent electrical conductivity. Consequently, it has been exploited to enhance the performance of several promising electrode materials in electrochemical devices [20][21][22] . Moreover, compared to other carbon nanostructures, the chemical route for graphene synthesis provides a good opportunity for functionalization by active groups, which aids in decorating the surface with metallic nanoparticles 9 .
The main target of this study is to synthesize effective and highly stable non-precious bimetallic nanoparticles supported on nitrogen-doped graphene sheets to be exploited as an electrocatalyst for the ORR in acid media. It is known that the alloy structure of the transition metals can not only improve the catalytic activity of the final product but may also distinctly enhance the stability in basic and acid media. Among the transition metals widely utilized in electrochemical applications, Fe and Ni have made very good contributions 23,24 .
In this study, we introduce NiFe alloy nanoparticle-decorated and N-doped graphene as a novel, stable, and efficient electrocatalyst for ORR in an acid medium. The proposed electrocatalyst was designed based on the following criteria: (1) Exploiting the alloy structure of the transition metals to enhance the stability in the acid media. (2) Maximizing the nitrogen content in the carbonaceous support to strengthen the catalyst activity toward the ORR. (3) Synthesizing the graphene sheet supports by the chemical route to develop chemical anchors for the transition metal nanoparticles, which strongly improves the attachment and consequently ameliorates the activity of the final catalyst. Overall, the obtained results indicated that the proposed nanocomposite has very good electroactivity toward ORR with distinct stability in the acid media.
Results and Discussion
Catalyst characterization. The XRD analysis can be used to confirm the graphene preparation. In other words, XRD analysis can differentiate between graphite, graphene oxide and graphene. First, graphite is usually identified by a sharp peak at the 2θ value of 26.5°, which is indexed to the [002] crystal plane 25 . However, after the violent oxidation, the graphite peak disappears and a new diffraction peak appears at 2θ of 10.5° 25 . On the other hand, due to the reduction process, the reduced graphene oxide shows a broad peak that can be fitted using a Lorentzian function into three peaks, which are centered at 2θ = 20.17°, 23.78° and 25.88°, corresponding to interlayer distances of 4.47, 3.82 and 3.53 Å, respectively. These XRD results are related to the exfoliation and reduction processes of GO and the removal of the intercalated water molecules and the oxide groups 26,27 . The observed broadened peak indicates the small crystallite size of graphene in the single-layer or few-layer structure. Accordingly, from the left inset ( Fig. 1(A)), which displays the XRD patterns of the pristine and decorated graphene sheets, one can claim that the pattern corresponding to the pristine graphene confirms the successful preparation of multiwall graphene sheets.
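For reference, the interlayer distances quoted above follow from Bragg's law, n·λ = 2·d·sin θ; the short sketch below converts the fitted 2θ positions to d-spacings assuming Cu Kα radiation (λ = 1.5406 Å, as listed in the Characterization section). Small differences from the quoted values may reflect rounding or the exact wavelength used in the original fit.

```python
import numpy as np

WAVELENGTH_CU_KA = 1.5406  # Å, Cu Kα wavelength used for the XRD scans

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_CU_KA, order=1):
    """Interplanar spacing from Bragg's law: n*lambda = 2*d*sin(theta)."""
    theta = np.radians(np.asarray(two_theta_deg, dtype=float) / 2.0)
    return order * wavelength / (2.0 * np.sin(theta))

# Fitted peak positions of the broad reduced-graphene-oxide reflection
for two_theta in (20.17, 23.78, 25.88):
    d = float(d_spacing(two_theta))
    print(f"2theta = {two_theta:5.2f} deg  ->  d = {d:.2f} Å")
```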
Accordingly, the prepared GO was used to prepare the proposed FeNi-decorated & N-doped graphene as explained in the experimental section. The XRD pattern of the synthesized composite is displayed in the left inset ( Fig. 1(A)). The observed broad diffraction peak at 22.2~26.8° reflects the disordered stacking of the graphene sheets. On the other hand, the three characteristic diffraction peaks corresponding to the (111), (200) and (220) crystal planes at 2θ values of 43.8°, 51.1°, and 75.6°, respectively, confirm the formation of the FeNi alloy (#47-1417) 28 . Based on the XRD database (#06-0696; Fe and #04-0850; Ni), iron and nickel are identified by the standard peaks at 2θ of 44.67°, 65.02° and 82.33°, and 44.05°, 54.85° and 76.37°, corresponding to (110), (200) and (211), and (111), (200) and (220), respectively, which indicates that some nanoparticles consist of a physical Fe/Ni mixture. Additionally, no peaks attributed to oxides or carbides were detected. According to the utilized characterizations, which confirmed the formation of pure Fe and Ni, and considering the difficulty of evaporation of these metals due to their high melting points (Fe: 1538 °C, and Ni: 1455 °C), the weight of metals in the final product can be estimated. Moreover, it was reported that calcination of graphene oxide prepared by a similar chemical route at 750 °C leads to a loss of around 60 wt% 29 . Consequently, the Ni:Fe:C ratio in the produced composite can be determined to be 41:40:19 wt%, respectively.
Although XRD is a highly reliable analytical technique, its utilization is limited to crystalline materials. Therefore, to investigate nitrogen doping, X-ray photoelectron spectroscopy (XPS) was exploited. The obtained XPS spectra ( Fig. 1(A)) indicate successful nitrogen doping with a corresponding content of 10.1%; this percentage has been further confirmed by FE-SEM EDX and elemental analysis (data are not shown). Moreover, the right inset ( Fig. 1(A)), which displays the high-resolution N1s spectrum, indicates the presence of N atoms with three different binding energies. These results indicate that there are at least three typical nitrogen states in the introduced decorated graphene: amino (ca. 399.05 eV), pyridinic (ca. 398 eV) and pyrrolic (ca. 399.63 eV) 30 . Figure 1(B) shows the TEM image of the introduced FeNi@ N-doped graphene. As shown in Fig. 1(C), the average diameter of the metallic NPs distributed on the graphene sheets was 15.9 nm. Notably, elemental mapping was carried out to detect the Fe and Ni distribution in a randomly selected graphene sheet. As shown in Fig. 1(D and E), nickel and iron have similar distributions, which affirms the mentioned hypothesis about the formation of metallic alloy (FeNi) nanoparticles.
Electrochemical measurements. The electrocatalytic activity of the synthesized FeNi@ N-doped graphene sheets toward ORR was investigated in terms of the corresponding current density and onset potential. Figure 2 shows the measurements carried out in oxygen-saturated 0.5 M sulfuric acid solutions at a 5 mV.s −1 scan rate and room temperature. The synthesized electrode reveals comparable performance to the precious metal electrode in terms of current density and onset potential. Based on previous reports, the onset potential is defined as the potential at which the background-subtracted current density is equal to 0.1 mA.cm −2 31,32 . The onset potential for the introduced decorated graphene was close to that of Pt/C (Fig. 2B). Typically, the detected onset potentials were 725 and 810 mV (vs. RHE) for the modified graphene and Pt/C, respectively. Alternatively, the observed current densities were −3.65 and −4.31 mA/cm 2 for the introduced and precious metal electrodes, respectively. Moreover, as shown in the figure, the oxygen adsorption process was a controlling step for the precious electrode, while the relatively good stability of the current density in the case of the introduced electrode indicates good oxygen adsorption affinity for the introduced catalyst. Specifically, the first step in the ORR process is adsorption of molecular oxygen on the surface of the electrocatalyst 33,34 . As reported in the literature, the oxygen reduction process in acid media can occur by several pathways: direct reduction or via formation of a hydrogen peroxide intermediate 33,34 .
Regardless of the reaction pathway, oxygen adsorption is the first step. Based on the results obtained in Fig. 2A, the indirect pathway is more likely to happen. It is worth mentioning that the oxygen adsorption step does not always distinctly affect the reaction rate. Accordingly, to investigate the influence of the graphene support as well as to check the influence of the oxygen adsorption step, FeNi nanoparticles were prepared in the absence of GO, urea and melamine. The electrocatalytic activity of the synthesized nanoparticles toward ORR is displayed in Fig. 2C. As shown, the unsupported nanoparticles have good activity toward the hydrogen evolution reaction (HER); however, they possess relatively low activity toward ORR compared to the supported ones. This finding emphasizes the role of the graphene support, which can be attributed to the adsorption of both hydrogen ions and oxygen molecules, based on the known good adsorption capacity of carbonaceous materials. At around 0.6 V, a pair of redox peaks can be observed in the case of the introduced composite ( Fig. 2A), which cannot be seen in the case of the unsupported bimetallic nanoparticles (Fig. 2C). Considering that the XRD analysis indicates the presence of unalloyed Fe and Ni in the proposed composite, the observed redox peaks can be assigned to these free metals. On the other hand, in the case of the unsupported nanoparticles, a complete alloying process was achieved, so no such peaks could be observed. The oxidation and reduction peaks appeared at almost the same potential, which indicates very good reversibility.
Doping carbon nanostructures with heteroatoms such as N can distinctly change the properties of carbon. For instance, doping of carbon by nitrogen strongly enhances the oxidation resistance capability and the ORR catalytic activity. For example, doping of carbon nanofibers by nitrogen led to an increase in the onset potential of the oxygen reduction reaction by 70 mV with a corresponding electron transfer number of approximately 4 18 . It was concluded that, in N-doped carbon nanostructures, the active sites are located on the carbon atoms adjacent to the nitrogen atom 35 . Similarly, in the introduced electrocatalyst, the detected pyridinic and pyrrolic nitrogen (right inset, Fig. 1A) have a distinct role in the adsorption of hydrogen ions because of the strong tendency of nitrogen for electron donation 19 , while the metallic nanoparticles have more adsorption capacity for molecular oxygen. Delivery of electrons from the anode to the cathode completes the oxygen-hydrogen combination to form water molecules, i.e., the ORR. From the chemistry point of view, pyridinic nitrogen has a more basic character than pyrrolic nitrogen because the nitrogen lone-pair electrons do not take part in the resonance of the pyridine ring. However, in the case of pyrrole, the lone-pair electrons contribute to the ring resonance, which negatively affects the basic character. Accordingly, it is expected that pyridinic nitrogen has a higher attraction capacity for hydrogen ions, which reflects a greater contribution to the ORR. It is worth mentioning that the expected adsorption influence of the utilized N-doped graphene support can partially enhance the electrocatalytic activity of the proposed composite. However, the main impact can be assigned to avoiding the agglomeration of the unsupported bimetallic nanoparticles, which strongly improves a very important parameter in heterogeneous catalytic reactions: the contact area between the reactants and the catalyst surface. Furthermore, the excellent electrical conductivity of the utilized support also has a notable effect. The proposed mechanism of O 2 reduction on the surface of the introduced catalyst is visualized in Fig. 3.
Stability in the acid medium is the most important advantage of the precious metals and is also the main constraint facing the use of pristine transition metals. Fast dissolution in acidic solutions is the first concern for transition metals. However, the alloy structure provides novel physicochemical characteristics. For instance, especially at a relatively high nickel content, iron-nickel alloys exhibit high corrosion resistance in acidic media 36,37 . The stability of the introduced and the precious (Pt/C) electrodes was first investigated by cyclic voltammetry analysis for 1,000 successive cycles. Figure 4 displays a comparison between the 4 th and 850 th cycles of the introduced FeNi-N-Gr (Fig. 4A) and Pt/C (Fig. 4B). Moreover, screenshots of all the data for the two electrodes can be found in Fig. 5. The obtained results indicate better stability for the introduced electrode compared to Pt/C. The thermodynamic potential of the oxygen reduction reaction (1.23 V vs. NHE at S.T.P.) is so high that the Pt electrode cannot remain pure. Therefore, platinum undergoes oxidation, which changes the surface properties, through a reaction of the form Pt + H 2 O → PtO + 2H + + 2e − . Thus, in the presence of oxygen, the surface of platinum is a mixture of PtO and Pt. Consequently, due to the formation of PtO, a steady-state open circuit potential (OCP) of 1.23 V is difficult to obtain. Instead, the steady-state rest potential of the platinum electrode in oxygen-saturated solutions is around 1.06 V, a mixed value of the thermodynamic potentials of Pt/PtO and O 2 /H 2 O, because the two reactions take place simultaneously 38 . Accordingly, with successive cycles, the performance decreases due to the formation of PtO. It is worth mentioning that this finding is supported by other reports 39,40 . Alternatively, the introduced catalyst reveals a distinct stability due to the good corrosion resistance and electronic structure of the FeNi alloy. Besides the multiple cyclic voltammetry analysis, a chronoamperometry test was invoked to investigate the stability of the proposed composite; Fig. 6(A). As shown, the results reflect good stability and consequently support the aforementioned conclusion about the distinct corrosion resistance in acid media of the FeNi nanoparticles decorating the N-doped graphene sheets.
In addition to the current density and onset potential, the number of transferred electrons (n) in the ORR is another important factor. Rotating ring-disk electrode (RRDE) analysis is typically invoked to measure the number of utilized electrons. This technique can give an indication of the relative importance of the H 2 O 2 routes in the overall oxygen reduction reaction process. However, in the literature, several n values can be found for the same material. These different values of the number of transferred electrons can be attributed to the history of the electrocatalytic material, which can distinctly affect the ORR rates. For instance, at the ring, in addition to the oxidation of hydrogen peroxide that generates an anodic current, decomposition of H 2 O 2 without current flow may take place. Moreover, the Pt ring potential can influence the number of electrons, e.g., by varying the relative amount of the formed PtO, and the synthesis conditions of the investigated material can also play a strong role. Therefore, other techniques have been introduced to estimate the number of electrons, such as scanning electrochemical microscopy 41 and cyclic voltammetry 42 . In the cyclic voltammetry-based procedure, the analysis is performed at different scan rates and the peak currents (I p ) increase linearly with the scan rate, which is a typical characteristic of a reaction occurring on the surface of the electrode. The number of electrons can be estimated from the slope according to the standard expression for a surface-confined process, I p = n 2 F 2 AvS/(4RT), where n is the number of transferred electrons, A is the electrode active area (0.073 × 10 −4 m 2 ), v is the scan rate (mV/s), and S is the concentration of the adsorbed oxygen on the electrode surface (here, the maximum value was used as the oxygen solubility in the utilized solution; 3.1 × 10 −4 M at 20 °C). From the slope of the linear I p vs. v relation, the number of electrons n can be calculated. The voltammograms for the FeNi-N-Gr and Pt/C electrodes are introduced in Fig. 7. As shown in Fig. 7(B), the linear regression model reveals good fitting (R 2 = 0.998) for the FeNi-N-Gr electrode data. Accordingly, the number of electrons was determined to be 3.89 and 3.465 for Pt/C (linear regression is not shown) and the introduced electrode, respectively. For further investigation of the kinetics of the ORR over the introduced catalyst, polarization curves were recorded using a rotating disk electrode (RDE); Fig. 8A. As shown, the electrocatalytic current increases along with the rotation rate, reflecting the improved mass transport of oxygen to the electrode surface. The data can be analyzed with the Koutecky-Levich relation, 1/I lim = 1/I k + 1/(Bω 1/2 ) (eq. 2), where I lim is the experimentally observed limiting current at the selected potential, I k is the kinetic current, ω is the rotation rate in rad s −1 , and B can be estimated from B = 0.62nFC O2 D O2 2/3 ν −1/6 , with F the Faraday constant, C O2 and D O2 the bulk concentration and diffusion coefficient of oxygen, and ν the kinematic viscosity of the electrolyte. The steady-state Tafel plot is the most widely used technique for studying the kinetics of multistep electrochemical reactions. The Tafel expression neglects mass transport limitations and assumes that the reaction is under kinetic control. Figure 8C shows the Tafel plot for the proposed composite at 1600 rpm. The estimated Tafel slope was 29 mV/decade.
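As a complement to the CV-slope estimate, the rotating-disk data lend themselves to the standard Koutecky-Levich analysis described above. The sketch below extracts an apparent electron-transfer number from illustrative limiting currents; the O2 solubility, diffusion coefficient and kinematic viscosity are typical literature values for dilute sulfuric acid, and the rotation-rate/current pairs are invented for the example, so none of these numbers come from this work.

```python
import numpy as np

# Koutecky-Levich: 1/I = 1/I_k + 1/(B * omega**0.5),
# with B = 0.62 * n * F * C_O2 * D_O2**(2/3) * nu**(-1/6)
F = 96485.0      # C/mol
C_O2 = 1.1e-6    # mol/cm^3, O2 solubility in dilute H2SO4 (assumed literature value)
D_O2 = 1.4e-5    # cm^2/s, O2 diffusion coefficient (assumed literature value)
NU = 1.0e-2      # cm^2/s, kinematic viscosity of the electrolyte (assumed)

def electrons_from_kl_slope(rotation_rpm, limiting_current_mA_cm2):
    """Fit 1/I versus omega**-0.5 and convert the slope (1/B) to n."""
    omega = 2.0 * np.pi * np.asarray(rotation_rpm, dtype=float) / 60.0     # rad/s
    inv_i = 1.0 / (np.asarray(limiting_current_mA_cm2, dtype=float) * 1e-3)
    slope, _ = np.polyfit(omega ** -0.5, inv_i, 1)
    B = 1.0 / slope
    return B / (0.62 * F * C_O2 * D_O2 ** (2.0 / 3.0) * NU ** (-1.0 / 6.0))

# Illustrative data: rotation rate (rpm) and limiting current density (mA/cm^2)
rpm = [400, 900, 1600, 2500]
i_lim = [2.1, 3.1, 4.1, 5.1]
print(f"apparent n = {electrons_from_kl_slope(rpm, i_lim):.2f}")
```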
Conclusion
In summary, graphene sheets were decorated with FeNi nanoparticles by hydrothermal treatment of graphene oxide in the presence of nickel acetate and iron acetate. Moreover, the presence of urea in the reaction medium followed by calcination in an argon atmosphere leads to the incorporation of nitrogen atoms in the graphene skeleton. The produced FeNi-decorated and N-doped graphene can be exploited as a stable and effective electrocatalyst for the oxygen reduction reaction process in acid medium. The high performance was attributed to the good affinity of the metallic nanoparticles and nitrogen atoms to oxygen molecules and hydrogen ions, respectively. Moreover, the good stability in acidic media can be assigned to the alloy structure of the metallic nanoparticles.
Experimental
Materials. For the introduced decorated graphene, the precursors used for the metal nanoparticles were iron (II) acetate (FeAc, 99% assay, Sigma Aldrich) and nickel acetate tetrahydrate (NiAc, 99.0% assay, Sigma Aldrich). The reduced graphene oxide was prepared by a chemical route using graphite powder (particle size <20 μm), hydrazine monohydrate, hydrogen peroxide, and H 2 SO 4 (assay 95-97%); these chemicals were purchased from Sigma-Aldrich. All chemicals were used as received without further treatment. DI water was used as the solvent.
Procedure. The graphene was prepared chemically by reduction of exfoliated graphene oxide (GO). Typically, GO was synthesized from natural graphite powder by a modified Hummer's method 26,44 . Briefly, five grams of graphite (pre-treated twice with 5% HCl) was placed in ice-cooled concentrated H 2 SO 4 (130 mL). Later on, 15 g of KMnO 4 was added gradually to the mixture at around 0 °C with stirring for 2 h. Then, distilled water was added to the mixture, which raised the temperature to 98 °C. After the mixture had cooled to room temperature, H 2 O 2 (50 mL, 30 wt.%) was added, and the mixture was kept under stirring for 24 h. Later on, the synthesized GO was separated by filtration under vacuum, washed with 10% aqueous HCl several times and then dried at 50 °C. NiFe alloy nanoparticle-decorated and N-doped graphene was synthesized by mixing 0.5 mM iron (II) acetate and 0.5 mM nickel (II) acetate tetrahydrate aqueous solutions with 250 mg of urea (as a source of nitrogen) and stirring for 2 h, followed by ultrasonication for 30 min. As maximizing the nitrogen content in the final product was an important target during the synthesis process, urea was used during preparation of the FeNi-decorated graphene (in the reflux step) to incorporate nitrogen atoms within the graphene rings, which effectively increases the nitrogen content during the subsequent thermal treatment with melamine 45 . It is worth mentioning that, based on our studies and others, the acetate salts were chosen due to their complete reduction during calcination at relatively high temperature under an inert atmosphere to produce the corresponding metals rather than the expected metal oxides 46,47 . In another beaker, two hundred mg of the prepared graphene oxide were treated in a microwave oven for two minutes at around 600 W to achieve the thermal exfoliation. The two solutions were mixed and the obtained slurry was then refluxed for twelve hours at 150 °C. The produced slurry was filtered, and the solid material was dried for one day at 80 °C under vacuum. The dried powder was ground with twice as much melamine and calcined under an argon atmosphere at 1 atm for four hours at 750 °C. A high calcination temperature was chosen to ensure complete reduction of graphene oxide to graphene and to avoid the formation of metal oxides 46 . Both urea and melamine were used as nitrogen precursors to enhance the nitrogen content in the final product.
Characterization. Information about the phase and crystallinity was obtained using a Rigaku X-ray diffractometer (XRD, Rigaku, Japan) with Cu Kα (λ = 1.5406 Å) radiation over a Bragg angle ranging from 10 to 80°. Normal and high-resolution images were obtained with a transmission electron microscope (TEM, JEOL JEM-2010, Japan) operated at 200 kV and equipped with EDX analysis. The electrochemical measurements were performed on a VersaSTAT 4 (USA) electrochemical analyzer and a conventional three-electrode electrochemical cell. A Pt wire and an Ag/AgCl electrode were used as the auxiliary and reference electrodes, respectively. All potentials were quoted with respect to the Ag/AgCl electrode. A glassy carbon electrode was used as the working electrode. Preparation of the working electrode was carried out by mixing 2 mg of the functional material, 20 µL of Nafion solution (5 wt%) and 400 µL of isopropanol. The slurry was sonicated for 30 min at room temperature. 15 µL of the prepared slurry was poured on the active area (0.073 cm 2 ) of the glassy carbon electrode, which was then dried at 80 °C for 20 min. Cyclic voltammetry measurements were carried out in 0.5 M H 2 SO 4 solution and the sweep potential range was adjusted from −0.2 to 1.0 V [vs. Ag/AgCl]. | 5,299 | 2018-02-28T00:00:00.000 | [
"Materials Science"
] |
Analysis of unidirectional and bidirectional magnetic-thermal coupling of permanent magnet synchronous motor
To analyze accurately the temperature variation of the permanent magnet synchronous motor, a bidirectional magnetic-thermal coupling method is proposed. Firstly, a two-dimensional magnetic field model of the permanent magnet synchronous motor was built in Ansoft Maxwell, and the magnetic flux density, magnetic field line distribution and radial air gap magnetic flux of the motor were simulated. Secondly, the calculated winding copper loss, core loss and permanent magnet eddy current loss were coupled into the temperature field in ANSYS Workbench as heat sources, and the transient temperature field of each part of the motor was studied. Finally, the electromagnetic and temperature fields of the motor were analyzed and calculated simultaneously and were updated from each other iteratively. The process was repeated until stable magnetic and temperature fields were obtained. The results showed that the bidirectional coupling method takes into account the influence of the motor temperature rise on the electromagnetic field, and its temperature rise prediction is more accurate than that of the unidirectional coupling method.
Introduction
The motor is a key component of new energy vehicles. Compared with traditional drive motors, the permanent magnet synchronous motor (PMSM) is small in size and light in weight, and has high efficiency and power density. At present, many electric vehicles use a PMSM as the drive motor. However, a PMSM produces various losses during operation, including winding copper loss, permanent magnet eddy current loss and stator and rotor iron loss [1]. The loss generated during motor operation is released inside the motor in the form of heat, causing a gradual rise of the motor temperature. However, the permanent magnet of the PMSM has a critical temperature point. When the temperature reaches this critical point, the magnetism of the permanent magnet will decrease or even be lost completely, which seriously affects the motor performance and may even lead to motor failure, posing a potential safety hazard [2]. Therefore, accurate analysis of the loss and temperature field produced during motor operation is the guarantee for the safe and effective operation of the motor.
At present, there are three main calculation methods for the motor temperature field: the simplified formula method, the equivalent thermal network method and the finite element method. The simplified formula method can be used to roughly calculate the temperature rise and loss of the motor, and it is the simplest method to study the temperature field of the motor [3]. This method is only valid when the internal temperature difference of the motor is small; otherwise, the calculated temperature rise has poor accuracy. The equivalent thermal network method adopts the principles of graph theory, takes the thermal circuit as the basis and uses the network topology to calculate the motor temperature field [4]. The equivalent thermal network method only yields the average temperature of a certain part of the motor and cannot accurately capture the position of the hot spots of the motor, which is an important factor affecting smooth motor operation. Zhu Z. Y. et al. [5] calculated the temperature rise of the permanent magnet synchronous motor by the equivalent thermal network method and obtained the average temperature rise of each subdivision inside the motor; however, the position of the motor hot spot could not be accurately located. Tan D. et al. [6] studied the temperature rise of a submersible motor, calculated the equivalent thermal resistance in the model by using an improved empirical formula and established the equivalent thermal network model of the motor, but it also failed to accurately locate the hot spot inside the motor. Ding S. Y. et al. [7] studied the temperature rise of a stress motor and proposed a thermal equivalent network method for calculating the temperature rise of the stator winding, but only indicated the location of the hot spots in the stator winding and did not specify the location of other hot spots in the motor.
The finite element method is currently the most commonly used method to calculate the temperature rise of the motor. It can complete the unidirectional and bidirectional solution of the electromagnetic and temperature fields of the motor, as well as realize the coupled calculation of the stress field, noise and temperature field of the motor [8]. Using the finite element method to study the temperature field of the motor, it is possible to analyze the distribution of the whole temperature field and the location of hot spots with high accuracy. Fan X. G. et al. [9] used the field-circuit coupling method to design and analyze the control strategy of an electric vehicle hub motor and demonstrated the feasibility of applying the field-circuit coupling method in the motor control circuit, but that study considered only the magnetic field, not the temperature field. Chen Q. P. et al. [10] used the unidirectional magnetic-thermal coupling method to analyze the temperature field of an electric vehicle hub motor under different operating conditions, and the analysis results showed reasonable accuracy. Wang X. Y. et al. [11] established a field-circuit coupling model combining the finite element motor body model with the control circuit model by using the time-stepping finite element method, and simulated and calculated the motor losses under different operating conditions, but the losses were not coupled into the temperature field of the motor. Li L. Y. [12] used the time-stepping finite element method to optimize the operating efficiency of the permanent magnet synchronous motor under various operating conditions, showing the application of the finite element method in motor efficiency calculation.
To promote the application of the permanent magnet synchronous motor in new energy vehicles and improve the performance of the PMSM for electric vehicles, and focusing on the influence of motor losses on the temperature rise, this paper puts forward a bidirectional coupling research method that builds on the unidirectional magnetic-thermal coupling analysis. It introduces the various losses into the three-dimensional magnetic-thermal coupling model of the motor, solves and simulates the temperature field of the motor, and captures the hot spots of the motor. Compared with the unidirectional magnetic-thermal coupling method, it can improve the accuracy of calculating the motor loss and temperature rise. This method considers not only the influence of the motor magnetic field on the temperature field, but also the influence of the temperature rise on the electromagnetic field, which gives it strong application and popularization value.
Motor parameters and finite element model
A vehicle hub motor (permanent magnet synchronous motor, PMSM) was taken as the research object, and its parameters are detailed in Table 1. Ansoft Maxwell software is used for modeling and the subsequent electromagnetic simulation. The motor model is a two-dimensional model, as shown in Fig. 1. The grid division results are shown in Fig. 2. According to Fig. 2, the grid division is relatively uniform on the whole. However, in places where the magnetic field varies greatly and is relatively strong, such as the air gap between the rotor and the stator, the grid density shall be increased to improve the accuracy.
Mathematical model of motor magnetic field
Maxwell's equations are the basis of electromagnetic field theory and of the numerical analysis of engineering electromagnetic fields. They consist of Ampere's loop law, Faraday's law of electromagnetic induction, Gauss's law for the electric field and Gauss's law for magnetic flux, and their integral form can be expressed as follows:
∮_Γ H·dl = ∫_Ω (J + ∂D/∂t)·dS,  ∮_Γ E·dl = −∫_Ω (∂B/∂t)·dS,  ∮_S D·dS = ∫_V ρ dV,  ∮_S B·dS = 0,
where H is the magnetic field intensity; Γ is the boundary of the curved surface Ω; J is the conduction current density vector; D is the electric flux density; E is the electric field intensity; B is the magnetic induction intensity; ρ is the charge bulk density, and V is the volumetric region surrounded by the closed curved surface S.
In addition to the integral form, Maxwell's equations can also be written in differential form:
∇ × H = J + ∂D/∂t,  ∇ × E = −∂B/∂t,  ∇·D = ρ,  ∇·B = 0.
The relation among the field quantities D, E, B, H and J is determined by the characteristics of the medium. Generally, for a linear medium, the relations are:
D = εE,  B = μH,  J = σE,
where ε is the dielectric constant of the medium, F/m; μ is the permeability of the medium, H/m; σ is the conductivity of the medium, S/m. For isotropic media, ε, μ and σ are scalars; for anisotropic media, they are tensors.
Magnetic field simulation calculation
When the hub motor operates at the rated speed and rated load in a transient state, the two-dimensional magnetic flux density cloud map and the magnetic field line distribution map at 0.5 ms are selected for analysis; the results are shown in Fig. 3 and Fig. 4. According to the cloud chart of magnetic flux density shown in Fig. 3, the magnetic flux density of the motor is highest in the yoke part and lowest in the core part. According to the distribution diagram of magnetic field lines shown in Fig. 4, the distribution of the field lines is uniform, which indicates that the motor itself is designed reasonably.
Air gap magnetic density analysis
The energy conversion between the motor stator and rotor is accomplished in the air gap. Therefore, the study of the air gap is vitally important for the motor performance analysis. Usually, magnetic leakage takes place, and it has a great influence on the direct- and quadrature-axis reactances of the motor, which in turn affects the heating condition of the motor [13]. In this paper, the field calculator in ANSYS Maxwell was used to solve for the radial flux density of the permanent magnet synchronous motor, and the calculation results shown in Fig. 5 were obtained. According to Fig. 5, there are 48 flux density peaks over the 360° circumference, which is related to the number of stator slots of the motor. Local dips and peaks are caused by the slotting effect. By Fourier decomposition of the radial flux density, the fundamental amplitude of the radial air-gap flux density is obtained as shown in Fig. 6, and its maximum value is 0.9 T. For a permanent magnet synchronous motor, this value is generally 0.7 T to 1.05 T. Thus, it can be seen that the design of this motor meets the requirements.
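The fundamental amplitude quoted above comes from a Fourier decomposition of the sampled radial air-gap flux density; a minimal sketch of that step is given below. The pole-pair number and the synthetic waveform (a 0.9 T fundamental plus a 48th-order slot harmonic) are assumptions for illustration, not the actual simulation output exported from Maxwell.

```python
import numpy as np

def fundamental_amplitude(b_radial, pole_pairs=1):
    """Amplitude of the fundamental space harmonic of the radial air-gap
    flux density sampled uniformly over one full mechanical revolution."""
    coeffs = np.fft.rfft(np.asarray(b_radial, dtype=float))
    n = len(b_radial)
    # one-sided amplitude spectrum; index `pole_pairs` is the fundamental
    return 2.0 * np.abs(coeffs[pole_pairs]) / n

# Synthetic air-gap waveform: 0.9 T fundamental plus slot-harmonic ripple
theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
p = 4                                    # assumed pole-pair number, illustrative
b = 0.9 * np.sin(p * theta) + 0.05 * np.sin(48 * theta)
print(f"fundamental amplitude = {fundamental_amplitude(b, pole_pairs=p):.3f} T")
```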
The above magnetic field simulation and analysis show that the design of the target motor in this paper is reasonable and that the established model is credible, which lays a solid foundation for the subsequent temperature field analysis.
Mathematical model of thermal field
According to the law of energy conservation and the basic law of heat transfer, for an isotropic medium the thermal conductivity is constant. In a rectangular coordinate system, the transient temperature field in the motor is governed by the differential equation of heat conduction [14]:

$$\rho c \frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(\lambda_x \frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\left(\lambda_y \frac{\partial T}{\partial y}\right) + q, \qquad (4)$$

where $T$ is the temperature function changing with time $t$; $\lambda_x$ and $\lambda_y$ are the thermal conductivities of the material along the $x$ and $y$ directions, respectively, with $\lambda_x = \lambda_y$; $q$ is the heat generated per unit area; $\rho$ is the motor material density; and $c$ is the specific heat of the material. For the steady-state temperature field, the temperature does not change with time $t$, that is, $\partial T/\partial t = 0$, so Eq. (4) reduces to its steady-state form. The thermal analysis of the motor considered the heat conduction and convection inside the motor and the heat convection on its external surface. According to the heat transfer principle and motor knowledge, the corresponding boundary conditions are established as follows:

$$-\lambda \frac{\partial T}{\partial n}\bigg|_{\Gamma} = q_0, \qquad -\lambda \frac{\partial T}{\partial n}\bigg|_{\Gamma} = \alpha\,(T - T_f),$$

where $\Gamma$ is the thermal boundary interface; $q_0$ is the heat flux density (when $q_0 = 0$, the motor system does not exchange heat with the outside world, which is known as the adiabatic boundary condition); $n$ is the outward normal direction of the boundary, that is, the direction of the heat flux $q_0$; $\lambda$ is the thermal conductivity of the object; $\alpha$ is the heat release (heat exchange) coefficient between the medium and the object; and $T_f$ is the temperature of the surrounding medium.
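As an illustration of how Eq. (4) can be advanced in time, here is a minimal explicit finite-difference sketch for the 2D transient conduction equation on a uniform grid. All material constants, grid dimensions and the heat-source field are placeholder values for illustration, not motor data from the paper.

```python
import numpy as np

def step_temperature(T, q, dx, dt, lam, rho, c):
    """One explicit finite-difference step of
    rho*c*dT/dt = lam*(d2T/dx2 + d2T/dy2) + q  (isotropic medium)."""
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] +
                       T[1:-1, 2:] + T[1:-1, :-2] -
                       4.0 * T[1:-1, 1:-1]) / dx**2
    T_new = T + dt * (lam * lap + q) / (rho * c)
    # Fixed-temperature (Dirichlet) boundary at 25 deg C, for simplicity.
    T_new[0, :] = T_new[-1, :] = T_new[:, 0] = T_new[:, -1] = 25.0
    return T_new

# Placeholder parameters: steel-like solid, 1 mm grid.
lam, rho, c = 40.0, 7800.0, 460.0            # W/(m K), kg/m^3, J/(kg K)
dx = 1e-3
dt = 0.2 * rho * c * dx**2 / (4.0 * lam)     # well inside explicit stability limit
T = np.full((50, 50), 25.0)
q = np.zeros_like(T)
q[20:30, 20:30] = 5e6                        # localized heat source, W/m^3
for _ in range(1000):
    T = step_temperature(T, q, dx, dt, lam, rho, c)
print(f"peak temperature after 1000 steps: {T.max():.1f} deg C")
```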
Motor loss analysis
The motor loss during operation is the main cause of its temperature rise, so it is vitally important to analyze the motor losses accurately. The losses during operation mainly include the winding copper loss, the stator and rotor iron loss, the permanent magnet eddy current loss and the mechanical loss [15]. Essentially, these losses are converted into heat, which is transferred among the components inside the motor and thus affects the distribution of the internal temperature field. The total loss of the motor is:

$$P = P_{Cu} + P_{Fe} + P_{PM} + P_{mech},$$

where $P_{Cu}$ is the winding copper loss; $P_{Fe}$ is the core loss; $P_{PM}$ is the eddy current loss of the permanent magnets; and $P_{mech}$ is the mechanical loss.
Copper loss
The winding copper loss of the motor is caused by the motor current and is mainly related to the number of winding phases, the effective winding current and the winding resistance. According to the Joule-Lenz law, the winding copper loss can be expressed as:

$$P_{Cu} = m I^2 R,$$

where $m$ is the number of winding phases; $I$ is the motor phase current; and $R$ is the winding resistance, whose effective value accounts for AC effects through the circulation coefficient between parallel strands, the number of wires across the coil width, the coil width, the slot width and the rated frequency of the motor.
Because of the high power density of the permanent magnet synchronous motor, the temperature rises rapidly during operation, which increases the winding resistance. The relation between the resistance and the winding temperature is:

$$R_t = R_0\left[1 + \alpha_0\,(t - t_0)\right],$$

where $t_0$ is the initial ambient temperature; $R_0$ is the winding resistance at temperature $t_0$, which can be directly calculated from the material properties and the winding design; $R_t$ is the winding resistance at temperature $t$; and $\alpha_0$ is the temperature coefficient of the winding resistance at temperature $t_0$.
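The two relations above combine into a simple calculation; the sketch below evaluates the copper loss with a temperature-corrected resistance. The numerical values (phase count, current, cold resistance, copper temperature coefficient) are illustrative assumptions, not parameters of the studied motor.

```python
def winding_resistance(r0, alpha0, t, t0=25.0):
    """Winding resistance at temperature t: R_t = R_0 * (1 + alpha0 * (t - t0))."""
    return r0 * (1.0 + alpha0 * (t - t0))

def copper_loss(m, i_phase, r):
    """Joule (copper) loss P_Cu = m * I^2 * R for an m-phase winding."""
    return m * i_phase**2 * r

# Illustrative values: 3-phase winding, 30 A phase current, 0.05 ohm cold
# resistance, copper temperature coefficient ~0.00393 1/K.
for temp in (25.0, 75.0, 125.0):
    r = winding_resistance(0.05, 0.00393, temp)
    print(f"{temp:5.1f} degC: R = {r:.4f} ohm, P_Cu = {copper_loss(3, 30.0, r):.1f} W")
```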
Iron loss
The motor stator and rotor produce iron loss under the excitation of the sinusoidal alternating magnetic field; this loss includes hysteresis loss, eddy current loss and residual loss [16]. Iron loss is closely related to the magnetic field inside the motor, and its calculation is more complicated than that of the other losses; it is influenced by the processing technology of the stator and rotor and by the ferromagnetic materials [17]. The iron loss in the motor is mainly produced when the main magnetic field changes in the iron core, and it comprises hysteresis loss and eddy current loss: hysteresis loss is caused by the alternating magnetization and the rotating magnetic field, while eddy current loss is caused by currents induced by the changing magnetic field.
The core loss per unit weight can be expressed as the sum of a hysteresis term and an eddy current term:

$$p_{Fe} = K_h\,\omega\,B_m^{n} + K_e\,\omega^2 B_m^2,$$

where $K_h$ and $K_e$ are the hysteresis constant and the eddy current constant, respectively (generally $K_h$ = 40 to 55 and $K_e$ = 0.04 to 0.07); $n$ is the Steinmetz coefficient related to the laminated material, generally taken as $n$ = 1.8-2.0 for motors; $\omega$ is the synchronous angular velocity; and $B_m$ is the magnetic flux density.
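A minimal sketch of this two-term core-loss model follows, assuming the Steinmetz-type form reconstructed above; the absolute units of the result depend on the convention in which the constants are specified, so only the relative split between the two terms is printed. All input values are mid-range placeholders.

```python
def core_loss_per_unit_weight(k_h, k_e, n, omega, b_m):
    """Two-term core loss: hysteresis term K_h*omega*B^n plus
    eddy current term K_e*omega^2*B^2. Units follow the convention
    in which K_h and K_e are given."""
    p_hyst = k_h * omega * b_m**n
    p_eddy = k_e * omega**2 * b_m**2
    return p_hyst, p_eddy

# Mid-range constants from the ranges quoted above; omega and B_m illustrative.
p_h, p_e = core_loss_per_unit_weight(k_h=47.0, k_e=0.055, n=1.9,
                                     omega=157.0, b_m=0.9)
print(f"hysteresis share of core loss: {p_h / (p_h + p_e):.1%}")
```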
Permanent magnet eddy current loss
Permanent magnets placed inside the rotor have poor heat dissipation and are prone to demagnetization at high temperature. Therefore, it is necessary to calculate the eddy current loss of the permanent magnets accurately to prevent it from affecting the motor operation. According to the law of magnetic induction, when the external magnetic field varies, an induced electromotive force and a vortex-shaped current around the magnetic flux are generated in the permanent magnet, and the resulting eddy current loss can be expressed as:

$$P_{PM} = \frac{(k\,f\,B_m)^2}{\rho_{PM}}\cdot\frac{l^2 b^2}{l^2 + b^2}\,V,$$

where $l$ is the axial length of the permanent magnet; $b$ is the radial width of the permanent magnet; $V$ is the volume of the permanent magnet; $k$ is the proportional constant of the electromotive force; $f$ is the alternating frequency of the magnetic field; $B_m$ is the flux density amplitude in the permanent magnet; and $\rho_{PM}$ is the resistivity of the permanent magnet.
Mechanical loss
The mechanical loss of the motor mainly consists of the windage (air friction) loss and the bearing loss. The windage loss is calculated as:

$$P_{wind} = C_f\,\pi\,\rho_{air}\,\omega^3 r^4 l, \qquad C_f = \frac{0.0152}{Re_a^{0.24}}\left[1 + \left(\frac{8}{7}\right)^2\left(\frac{Re_r}{2Re_a}\right)^2\right]^{0.38},$$

where $C_f$ is the wind resistance friction coefficient; $Re_a$ is the axial Reynolds number; $Re_r$ is the radial Reynolds number; $r$ is the rotor radius; $l$ is the rotor length; $\rho_{air}$ is the gas density; and $\omega$ is the rotational angular velocity of the motor. The bearing loss is estimated from the bearing diameter $d$ and an empirical bearing coefficient $k_b$.
Simulation analysis of magnetic-thermal coupling temperature field
In this section, a three-dimensional model of the motor is used to build the magnetic-thermal coupling model, which allows analyzing the temperature field of the motor more comprehensively than the traditional two-dimensional model. Fig. 7 shows the three-dimensional magnetic-thermal coupling simulation model of the PMSM, which consists of the stator, rotor, permanent magnets, winding and rotating shaft. Using the formulas above, all the motor losses are calculated and introduced into the three-dimensional model of the motor, thus forming the magnetic-thermal coupling model of the motor. This coupling model can then be used to analyze the motor temperature field. The simulated operating condition is: motor speed 1500 rpm, load torque 50 N·m.
Thermal conductivity coefficient
For the calculation of the motor temperature field, setting the motor materials correctly is very important. The properties of the motor materials change as the motor temperature increases, which further affects the motor operation. At room temperature (25 °C), the thermal conductivities of common motor materials are listed in Table 2. (1) Equivalent thermal conductivity coefficient of insulating materials. The calculation of the thermal conductivity of the insulating materials is complicated, and the following assumptions are made: the insulation in the winding region is evenly distributed, the wires are evenly distributed within the winding, and the temperature difference between the individual wires during motor operation is neglected [18]. Then, treating the insulation layers as thermal resistances in series, the equivalent thermal conductivity of the insulation inside the motor is:

$$\lambda_{eq} = \frac{\sum_i \delta_i}{\sum_i \delta_i/\lambda_i},$$

where $\lambda_{eq}$ is the equivalent thermal conductivity coefficient of the insulating materials; $\lambda_i$ is the thermal conductivity coefficient of each material; and $\delta_i$ is the equivalent thickness of each insulating material (a numerical sketch covering both equivalent coefficients is given after item (2)).
(2) Equivalent thermal conductivity coefficient of the air gap. When calculating the thermal conductivity of the air gap, the following is assumed: the inner surface of the motor stator and the outer surface of the rotor are ideal cylindrical surfaces, and the influence of machining is not considered. The Reynolds number in the air gap is then calculated as:

$$Re = \frac{u\,\delta}{\nu},$$

where $u$ is the circumferential speed of the outer circumference of the rotor (m/s); $\delta$ is the air gap length (m); and $\nu$ is the kinematic viscosity of air (m²/s). The critical Reynolds number is:

$$Re_{cr} = 41.2\sqrt{\frac{r_i}{\delta}},$$

where $r_i$ is the inner radius of the motor stator (m). When determining the equivalent thermal conductivity of the air gap, it is also necessary to determine whether the air in the gap is in a laminar or a turbulent state. When $Re < Re_{cr}$, the air in the gap is laminar, and the equivalent thermal conductivity of the air gap equals that of still air. Otherwise, the air in the gap is turbulent, and the effective thermal conductivity of the air gap is calculated by the following formula [19]:

$$\lambda_{eff} = 0.0019\,\eta^{-2.9084}\,Re^{\,0.4614\ln(3.3361\,\eta)},$$

where $\eta$ is the ratio of the outer diameter of the rotor to the inner diameter of the stator.
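As a quick numerical check of items (1) and (2), the sketch below evaluates the series-insulation equivalent conductivity and the air-gap effective conductivity as reconstructed above. All dimensions and material values are illustrative placeholders rather than data of the studied motor.

```python
import math

def lambda_insulation(thicknesses, conductivities):
    """Equivalent conductivity of insulation layers in series:
    lambda_eq = sum(delta_i) / sum(delta_i / lambda_i)."""
    total = sum(thicknesses)
    resistance = sum(d / k for d, k in zip(thicknesses, conductivities))
    return total / resistance

def lambda_air_gap(u, delta, nu, r_stator_inner, eta, lambda_air=0.026):
    """Effective air-gap conductivity: still-air value in the laminar
    regime, empirical turbulent correlation otherwise."""
    re = u * delta / nu
    re_cr = 41.2 * math.sqrt(r_stator_inner / delta)
    if re < re_cr:
        return lambda_air
    return 0.0019 * eta**-2.9084 * re**(0.4614 * math.log(3.3361 * eta))

# Illustrative insulation stack: wire enamel, impregnation varnish, slot liner.
print(lambda_insulation([0.05e-3, 0.1e-3, 0.25e-3], [0.2, 0.25, 0.17]))

# Illustrative air gap: 1 mm gap, 8 m/s rotor surface speed -> turbulent.
print(lambda_air_gap(u=8.0, delta=1e-3, nu=1.6e-5,
                     r_stator_inner=0.05, eta=0.98))
```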
Convection coefficient of heat transfer boundary
The air gap has a great influence on the heat transfer among the components inside the motor, because heat convection takes place between the air gap and the outer surface of the rotor, the inner surface of the stator and the slot wedges. Heat transfer also exists between the outer surface of the stator and the casing. Therefore, the convection coefficients at these contact surfaces have a definite influence on the temperature rise of the motor.
The convection coefficient between the inner surface of the air gap and the outer surface of the rotor is obtained from the empirical correlation in [20], in which it depends on the rotational linear velocity $v_r$ of the rotor surface. The convection coefficient between the outer surface of the air gap, the inner surface of the stator and the slot wedge is likewise taken from [20]. Assuming that the temperature of the casing and the external temperature both equal the initial temperature, the convection coefficient between the stator outer surface and the casing is computed with the correlation in [20], which involves the convection coefficient $\alpha_0$ of the heating element at the initial temperature, the velocity $v$ of the outside air flow, the air flow efficiency at the initial temperature, and the initial temperature of the casing and the air.
If the casing is naturally cooled, the convection coefficient between the outer surface of the stator and the casing is taken from the natural-convection correlation in [20].
Unidirectional coupling analysis
In this section, the unidirectional coupling method is used to carry out the magnetic-thermal coupling analysis of the permanent magnet synchronous motor. Unidirectional coupling is a form of sequential coupling.
Firstly, the magnetic field distribution in the two-dimensional model of the permanent magnet synchronous motor is simulated in Ansoft Maxwell, and the winding copper loss, core loss and permanent magnet eddy current loss of the hub motor are calculated. Then, these losses are indirectly coupled into the ANSYS Workbench temperature field as heat sources for the analysis. Finally, the temperature rise of the motor is solved. Fig. 8 shows the result of the unidirectional coupling analysis.
Bidirectional coupling analysis
Bidirectional coupling analysis is developed on the basis of the unidirectional coupling method: the electromagnetic and temperature fields of the motor are analyzed and calculated in parallel and updated iteratively. Firstly, the material properties of each part of the motor are set in Ansoft Maxwell. Then the loss of each part of the motor is introduced into the temperature field as a heat source to calculate the motor temperature rise. Finally, a feedback iterator is added in ANSYS Workbench, and the calculated temperature of each motor component is imported back into the electromagnetic calculation unit. The feedback iteration is repeated until the calculated motor temperature and losses differ between successive iterations by less than 1 %. Fig. 9 contains the result of the bidirectional coupling analysis. The simulation results of unidirectional and bidirectional coupling are compared in Table 3. According to Table 3, on the whole, the temperatures calculated by the unidirectional coupling method are higher than those calculated by the bidirectional coupling method, while the temperature variation ranges obtained by the two analysis methods do not differ significantly. According to Fig. 8 and Fig. 9, the highest motor temperature usually appears in the middle part, specifically in the winding, because of its poor heat dissipation. There are two main paths for the winding heat: one is dissipation from the end faces of the winding, which have a small heat dissipation area; the other is transfer through the stator and then through the shell. This heat dissipation path is long, and the heat transfer coefficient of the winding insulation layer is relatively small, so it is difficult to transfer the heat out; hence the winding temperature is higher than that of the other parts. At the same time, the temperature variation range of the stator is larger than that of the other parts of the motor, because the stator contacts both the winding and, more closely, the external environment, so it dissipates heat faster than the rest of the motor.
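To make the feedback-iteration idea concrete, here is a minimal sketch of a bidirectional magnetic-thermal coupling loop: losses are computed at the current temperature, a thermal solve updates the temperature, and the loop repeats until both quantities change by less than 1 %. The loss and thermal models are deliberately simplified stand-ins (lumped copper-loss and thermal-resistance relations with invented values), not the finite-element models used in the paper.

```python
def losses_at_temperature(t_winding, m=3, i_phase=30.0, r0=0.05,
                          alpha0=0.00393, p_other=60.0):
    """Temperature-dependent copper loss plus a temperature-independent
    lump for iron, magnet and mechanical losses (stand-in model)."""
    r = r0 * (1.0 + alpha0 * (t_winding - 25.0))
    return m * i_phase**2 * r + p_other

def temperature_from_losses(p_total, t_ambient=25.0, r_thermal=0.35):
    """Lumped thermal model: steady temperature = ambient + R_th * P."""
    return t_ambient + r_thermal * p_total

t, p = 25.0, losses_at_temperature(25.0)
for it in range(100):
    t_new = temperature_from_losses(p)
    p_new = losses_at_temperature(t_new)
    converged = abs(t_new - t) / t < 0.01 and abs(p_new - p) / p < 0.01
    t, p = t_new, p_new
    if converged:
        break
print(f"converged after {it + 1} iterations: T = {t:.1f} degC, P = {p:.1f} W")
```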
Experiment
To verify the accuracy and error of the simulation results of the unidirectional and bidirectional coupling methods, the motor test bench shown in Fig. 10 was built. The test bench includes an upper computer, a motor drive controller, the permanent magnet synchronous motor, a magnetic powder brake, a torque/speed/power measuring instrument, a power battery and other devices. CCS3.3 software runs on the upper computer; the motor drive controller is connected to the upper computer through an emulator, and the motor control model is opened and compiled in the MATLAB/Simulink environment. The cSPACE experimental device automatically generates the C language code, and the operation of the permanent magnet synchronous motor is controlled through parameter adjustment. Finally, a thermal imager is used to observe the temperature rise of the motor over 1800 s; the statistical results are shown in Fig. 11. Experimental conditions: the motor speed is set at 1500 rpm and the motor operates with a torque of 50 N·m. According to the temperature rise curve of the motor winding in Fig. 11, the simulation data of both unidirectional and bidirectional coupling are larger than the experimental data. However, compared with the unidirectional coupling simulation, the bidirectional coupling simulation is closer to the experimental results, with an error of less than 5 %, which shows that the bidirectional coupling method is more accurate. The temperature rise trends of the stator, rotor and permanent magnets are similar to that of the winding, so no additional curves are given here.
The simulated temperature rise curves are higher than the measured one during operation. A possible reason is that the external ambient temperature was low and the motor dissipated heat quickly in the natural environment. Overall, the results of the bidirectional coupling simulation are very close to the experimental results, which indicates that the simulation method proposed in this paper approaches the actual temperature rise of the motor more closely.
Fig. 11. Temperature rise of the motor winding
Conclusions
Ansoft Maxwell and ANSYS Workbench were used to create a magnetic-thermal coupling co-simulation model of the permanent magnet synchronous motor and to perform the unidirectional and bidirectional coupling analyses. Based on the experiments and simulations, the following conclusions were obtained: 1) Compared with the unidirectional coupling method, the bidirectional magnetic-thermal coupling method considers the mutual influence between the various losses and the temperature rise of the motor. After repeated iterative calculation, the predicted temperature rise of the motor is closer to the actual experimental data and more reliable, which provides a basis for accurately solving the temperature field of the motor.
2) The setup of the unidirectional coupling analysis is relatively simple, and its solving speed is relatively fast. However, for applications with strict temperature rise requirements, conducting only a unidirectional magnetic-thermal coupling analysis of the motor during its design is not rigorous.
"Physics"
] |
On the energy resolution of a GaAs-based electron source for spin-resolved inverse photoemission
The spin resolution in inverse photoemission spectroscopy is achieved by injecting spin-polarized electrons, usually produced by GaAs-based cold cathodes that replace the hot-filament electron guns of spin-integrated setups. The overall energy resolution of the system can be enhanced by adjusting either the optical bandpass of the photon detector or the energy distribution of the electron beam. Here we discuss the influence of the photocurrent and the photocathode temperature on the energy broadening of the electron beam through the inverse photoemission spectra of the spin-split Shockley surface state of Au(111). First, we find that cooling the GaAs photocathode down to 77 K increases the band gap and reduces the number of allowed vertical transitions, monochromatizing the electron beam and improving the energy resolution by about 30 meV. Second, we observe a correlation between the photocurrent generated at the electron source and the space-charge effects at the sample, seen as a reduction of the lifetime and spin asymmetry of a polarized bulk state. These observations allow defining a threshold current density for optimum acquisition in spin-resolved inverse photoemission measurements on Au.
Introduction
The exploration of unoccupied states with total spin control is now possible in inverse photoemission (IPES) by fully decoupling the spin polarization vector of the electron beam from its wavevector [1]. This decoupling allows exploring the unoccupied bands with the spin orientation most suitable for each state. Another critical factor when studying the bands is the energy resolution, needed to distinguish states close in energy. It has already been demonstrated that spin-resolved measurements of spin-polarized inverse photoemission (SPIPES) allow discriminating unoccupied spin-dependent states whose energy splitting is below the energy resolution [2]. Any enhancement of the total energy resolution ∆E in IPES setups will therefore allow to better discriminate states close in energy, as required in particular for the study of spin textures in systems with, e.g., spin-orbit coupling (SOC), magnetic exchange or the Rashba effect. The total energy resolution depends on the thermal distribution of the electron beam and on the bandpass energy of the photon detector, with detection usually in the vacuum ultraviolet (VUV) regime. In isochromat IPES, where only the kinetic energy of the electron beam is varied, the bandpass detection of the emitted photons consists of a combination of the photoionization threshold of the detection gas (high-pass) and the transmission cutoff of an optical window (low-pass) in the photon counters. A modification of the bandpass characteristic through a decrease of the low-pass energy transmission allows enhancing the optical resolution of the photon detector system. The surface quality and purity [3], and the temperature of the window [4,5], usually made of alkaline earth fluoride crystals, are the most explored parameters. The temperature dependence of the cutoff wavelength of VUV transmission windows, explained by a photoexcitation model of excitons at thermally distorted lattice regions [5,6], allows reaching up to 165 meV overall energy resolution [4]. In more elaborate devices the high-pass is modified by exploiting the oxygen and krypton absorption lines through additional windows [7]. However, the increase of energy resolution by the aforementioned methods is usually accompanied by a decrease in the quantum yield [8], which is quite disadvantageous given the intrinsically small cross-section of IPES. Thus, an alternative for enhancing the energy resolution is to adjust the beam distribution of the GaAs-based electron source. Only a few studies deal with the photoemission broadening of the polarized electrons, see for instance [9,10]. Varying the temperature of the electron source rather than the temperature of the window of the photon detector is an alternative that does not reduce the counting rate in IPES, since it does not filter the energy of the beam electrons. For these reasons, we report here the effects of the temperature of a GaAs-based electron source on the energy resolution in isochromat IPES by analyzing the spin-polarized Shockley surface state of Au(111). From the phase-space considerations of the IPE process, it would be desirable to have a high-intensity source with a small beam size and high flux. Yet, even when samples could withstand relatively high currents, the electron-electron repulsion becomes important in low-energy, high-brightness beams, affecting not only the energy but also the momentum resolution [11,12].
The transfer energy selected in the electron optics has been chosen to reduce these effects [1]. Therefore, the space-charge effects are observed as a decrease in the lifetime of the surface state as the electron current at the target increases.
Methodology
The Au(111) surface was prepared by Ar ion sputtering (1 keV) and annealing at 800 K under UHV conditions. The cycles were repeated until a clear LEED pattern with sixfold symmetry was obtained (not shown). The SPIPES measurements were subsequently performed.
The spin resolution is incorporated in the room-temperature measurements on freshly prepared samples via the near-infrared photoemission (~830 nm) from a negative-electron-affinity GaAs wafer, as already detailed for isochromat IPES [1]. The net polarization of the photocathode is reported to be 0.30 ± 0.03. The photocathode was cooled down to the LN2 boiling point (77 K) through an in-house cold trap at the 3-axis photocathode manipulator and stabilized in about 1 h. The temperature is monitored by a contact thermocouple at the photocathode surface. Because thermal fluctuations of the GaAs lattice modify the sweet spot of the NIR optical excitation, impacting the transmission of the electron beam to the sample during the cooling process, the experiment was performed only after the electron transmission (~55%) and the photocathode temperature had stabilized.
Results
The degree of polarization of electronic states due to SOC is more significant in heavy-atom compounds, and it can be enhanced by surface adsorbates, as occurs in noble-metal surfaces with Rashba splitting [13]. However, if the Rashba parameter is very small, the spin effects may not be observed in IPES due to the constraints on the energy resolution, usually of hundreds of meV. A prototypical system for studying the Rashba splitting is the Shockley surface state of Au, studied below [14][15][16] and above the Fermi level [1,17], with a spin that is tangent to the concentric Fermi surface. We present in Fig. 1 the SPIPES raw spectra of Au(111) along the ΓM direction at room temperature. The spectra are normalized to the impinging current of ~0.6 µA. The spin-polarized Shockley surface state of the L-gap and the surface-projected bulk bands are observed; details on the surface resonance and the spin-polarized bulk states have been discussed elsewhere [1,17]. The lifetime of the Shockley surface state is apparently smaller when approaching normal incidence due to the particular experimental constraints. It is thus desirable to increase the energy resolution to better estimate the spin-split final-state binding energies. Therefore, we focus first on the thermal distribution of the electron beam while studying the surface state at almost normal incidence (θ = 3°), as shown in Fig. 2.
Two spectra are contrasted as a function of the photocathode temperature: the 77 K low-temperature (LT) and the 300 K high-temperature (HT) spectra. Both measurements are normalized to a target current kept at about 0.6 µA. The system presents the surface Rashba effect, so the binding energies of the polarized final states are shifted in energy with respect to each other for a given wavevector. It is evident that the IPES spectral linewidth of the Shockley surface state of Au(111) decreases when the GaAs temperature goes from HT to LT, facilitating the determination of the binding energy of the state. The difference of the full width at half maximum (FWHM) is 0.9 (0.6) eV for the spin-up (spin-down) component. By increasing the band gap energy of GaAs, the optical transitions become more concentrated around the Γ-point, as indicated by the rise of the spectral intensity in the LT surface emission while maintaining the same inelastic background as the HT spectrum at ~1.5 eV. In this scenario, the effective polarization of the GaAs photocathode should also increase due to the monochromatized electron beam. A difference in the binding energy is possibly due to a variation of the photocathode work function between these two temperatures. Experimentally, this variation can be compensated by modifying the acceleration potential of the electron source [1]. Here, however, we are concerned with the broadening of the SPIPES features. The SPIPES spectral broadening is mainly affected by two parameters: the thermal energy spread of the beam and the space-charge effects at the cathode. Concentrating first on thermal effects, the broadening of a Maxwellian momentum distribution projected along one transverse direction is given by [11,18]:

$$\Delta k_\parallel = \frac{\sqrt{8\ln 2\; m k_B T}}{\hbar}, \qquad (1)$$

where $T$ is the photocathode temperature. Since the electrons arriving at the sample are free electrons with $E = \hbar^2 k^2/2m$, their energy broadening is:

$$\Delta E = \frac{\hbar^2 k_\parallel \Delta k_\parallel}{m}. \qquad (2)$$

Combining Eq. (1) and Eq. (2) we get:

$$\Delta E = \hbar k_\parallel \sqrt{\frac{8\ln 2\; k_B T}{m}}. \qquad (3)$$

If only the photocathode temperature is varied, the change between energy distributions can be quantified at each wavevector $k_\parallel$. Assuming that the binding energy of the spin-down state is $E_\downarrow \approx 0.2$ eV, we get $k_\parallel \approx 0.06$ Å$^{-1}$ and therefore $\Delta E(300\,\mathrm{K}) - \Delta E(77\,\mathrm{K}) \approx 62 - 32\ \mathrm{meV} = 30$ meV. In other words, there is a difference of 30 meV in the thermal distribution of the electron beam when the photocathode is at 77 K.
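A quick numerical check of Eq. (3) as reconstructed above (the 8 ln 2 factor converts the Gaussian width of the Maxwellian projection to a FWHM) reproduces the ~62 meV and ~32 meV values quoted in the text for k∥ = 0.06 Å⁻¹:

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def thermal_broadening_mev(k_par_inv_angstrom, temperature_k):
    """FWHM energy broadening Delta E = hbar * k_par * sqrt(8 ln2 kB T / m)."""
    k_par = k_par_inv_angstrom * 1e10  # 1/angstrom -> 1/m
    de = HBAR * k_par * math.sqrt(8.0 * math.log(2.0) * KB * temperature_k / M_E)
    return de / EV * 1e3

for t in (300.0, 77.0):
    print(f"T = {t:5.1f} K: Delta E = {thermal_broadening_mev(0.06, t):.1f} meV")
```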
The above energy resolution of the inverse photoemission setup has been determined while avoiding the space-charge effect. The space charge at the photocathode surface may broaden the thermal energy distribution of the electron source if the emitted current is too high. This eventuality is usually prevented by decreasing the voltage on the extractor lenses of the electron source. Experimentally, we studied the space-charge effects by varying the current impinging on the sp-bulk state (θ = 46°) of the Au(111) surface. This state presents spin asymmetry and pseudo-Rashba splitting, as experimentally observed and ascribed to initial-state effects [15]. The SPIPES spectra of the state as a function of the current density are presented in Fig. 3, and the corresponding FWHM values are presented in Fig. 4. The space charge affects the electron beam at relatively high current densities over the sample. A gradual increase of the FWHM of the state with the current density can be seen, up to the point that it is difficult to determine the final-state binding energy when the beam current approaches 1.0 µA·mm⁻². Thus, it is critical to characterize the current density over the specific sample, since this may otherwise lead to misinterpretations of the lifetime of the state. We have therefore determined the most favorable current densities for exploring Au(111) with SPIPES to lie in the range between 0.2 and 0.6 µA·mm⁻².
Fig. 4. FWHM of the SPIPES spectra of the Au(111) sp-bulk state (θ = 46°) as a function of the current density over the sample.
In summary, cooling the photocathode down to 77 K improves the energy resolution, through the reduced thermal broadening of the electron beam, by 30 meV without reducing the electron beam intensity. This study was performed after first determining the experimental conditions under which the space-charge effects are negligible for the bulk state of Au(111).
Fig. 2. SPIPES spectra of Au(111) along ΓM at room temperature. Surface and bulk states are present. Lines are added as a guide to the eye.
Fig. 1. SPIPES spectra of the Shockley surface state of Au(111) at θ = 3° as a function of the temperature of the GaAs photocathode. At LT, the binding energy of the state is better defined, as follows from the FWHM.
Fig. 3. SPIPES spectra of the Au(111) sp-bulk state (θ = 46°) as a function of the current density over the sample. The data (triangles) are normalized to maximum intensity and fitting lines are shown. The effects of the current density are observed in: (i) the spin asymmetry and (ii) the spectral broadening of the state.
"Physics"
] |
Acute Injection of Omega-3 Triglyceride Emulsion Provides Very Similar Protection as Hypothermia in a Neonatal Mouse Model of Hypoxic-Ischemic Brain Injury
Therapeutic hypothermia (HT) is a currently accepted treatment for neonatal asphyxia and is a promising strategy in adult stroke therapy. We previously reported that acute administration of docosahexaenoic acid (DHA) triglyceride emulsion (tri-DHA) protects against hypoxic-ischemic (HI) injury in neonatal mice. We questioned if co-treatment with HT and tri-DHA would achieve synergistic effects in protecting the brain from HI injury. Neonatal mice (10-day old) subjected to HI injury were placed in temperature-controlled chambers for 4 h of either HT (rectal temperature 31-32°C) or normothermia (NT, rectal temperature 37°C). Mice were treated with tri-DHA (0.375 g tri-DHA/kg bw, two injections) before and 1 h after initiation of HT. We observed that HT, beginning immediately after HI injury, reduced brain infarct volume similarly to tri-DHA treatment (~50%). Further, HT delayed 2 h post-HI injury provided neuroprotection (% infarct volume: 31.4 ± 4.1 NT vs. 18.8 ± 4.6 HT), while 4 h delayed HT did not protect against HI insult (% infarct volume: 30.7 ± 5.0 NT vs. 31.3 ± 5.6 HT). HT plus tri-DHA combination treatment beginning at 0 or 2 h after HI injury did not further reduce infarct volumes compared to HT alone. Our results indicate that HT offers similar degrees of neuroprotection against HI injury compared to tri-DHA treatment. HT can only be provided in tertiary care centers, requires intense monitoring and can have adverse effects. In contrast, tri-DHA treatment may be advantageous in providing a feasible and effective strategy in patients after HI injury.
INTRODUCTION
Hypoxic-ischemic (HI) brain injury is a serious occurrence that frequently results in death or significant long-term neurologic disability in both neonates and adults (1)(2)(3). Currently, therapeutic hypothermia (HT) is the only established treatment for neonates with HI encephalopathy (4). Selective head cooling with cooling caps or whole body cooling with passive cooling (turning radiant warmers/incubators off), cool packs and/or commercially available cooling blankets are used for treatment in neonatal HI encephalopathy (5,6). With regard to acute ischemic stroke in adults, tissue-type plasminogen activator (tPA) is the only drug approved by the U.S. Food and Drug Administration (FDA) (7). However, the narrow therapeutic window and the risk of hemorrhage are major limitations of tPA treatment, resulting in only 8-10% of adult stroke patients being eligible for this drug (8). Preclinical studies and small-scale clinical trials in adults after stroke have shown that HT substantially diminishes the degree of neural damage, reduces the rate of mortality and improves neurofunctional recovery (9)(10)(11).
The major molecular mechanisms affected by HT include decreased free-radical production, reduction of blood-brain barrier disruption, decreased excitatory amino acid release and attenuation of cell mediated inflammatory responses to cerebral ischemia (12,13). Additionally, HT induces the inhibition of neuronal apoptosis through both mitochondrial based intrinsic pathways and receptor mediated extrinsic pathways (14). However, HT remains a complex medical approach, as it requires intense monitoring and is available only in tertiary care centers (15). Pilot studies on HT in stroke have shown that adult patients have less tolerance to cooling than neonates and HT may also induce unfavorable systemic effects, such as shivering, immune suppression, and pneumonia (16,17). Combining HT with other treatment methods may help in reducing the adverse effects from HT as well as reaching multiple molecular targets in the setting of HI insult to obtain an increase in therapeutic time windows and an enhanced repair in long-term recovery (18).
As one of the major omega-3 polyunsaturated fatty acids (PUFA) in the brain, docosahexaenoic acid (DHA) is essential for development and function of the brain (19). DHA has been shown to reduce inflammation, excitotoxicity and to prevent brain volume loss in different animal models of HI injury (20)(21)(22). Studies from our laboratory showed that acute administration of triglyceride (TG) emulsions containing only DHA (tri-DHA) reduces brain injury and preserves short- and long-term neurological outcomes in neonatal mice (23,24).
Based on these findings, we questioned if co-treatment with HT and tri-DHA would achieve synergistic effects in protecting the brain from HI injury. We validated the neuroprotective efficacy of HT against HI injury in the neonatal model previously described by our laboratory (23,25). Our results showed that tri-DHA provides similar degrees of neuroprotection as HT and that combining HT with tri-DHA emulsion does not offer additional therapeutic benefit in HI injury.
Ethics Statement
All research studies were carried out according to protocols approved by the Columbia University Institutional Animal Care and Use Committee (IACUC) in accordance with the Association for Assessment and Accreditation of Laboratory Animal Care guidelines (AAALAC).
Lipid Emulsions
Tri-DHA emulsions (10 g by TG weight/100 mL emulsion) were made in our laboratory with DHA TG oil and egg yolk phospholipids (PL) by sonication as previously detailed (23). The emulsions were analyzed for the amount of TG and PL using commercial kits (Wako Chemicals USA, Inc., Richmond, VA). The TG:PL mass ratio was 5.0 ± 1.0, similar to VLDL-sized particles. To prepare radiolabeled emulsions, [3H]CEt was added to the TG-PL mixture before sonication (25).
Unilateral Cerebral Hypoxia-Ischemia Injury
Three-day-old C57BL/6J neonatal mice were purchased from Jackson Laboratories (Bar Harbor) with their birth mother. We used the Rice-Vannucci method of mild HI brain injury modified for 10-day-old (p10) mice, as previously described (23). An initial pilot study on gender differences showed no significant changes in infarct volumes after HI injury between male and female mice. Hence, both male and female mice were used for these experiments and we did not separate our data by gender in the present study. Briefly, HI brain injury was induced by permanent ligation of the right common carotid artery. After 1.5 h of recovery, mice were exposed to hypoxic insult (humidified 8% O2/92% N2, Tech Air Inc., NY) for 15 min. Since HI brain injury in neonatal mice is associated with an endogenous drop in body core temperature (26), mice were kept at 37 ± 0.3 °C during hypoxia to avoid hypothermia during the hypoxia period.
HT and Tri-DHA Treatments
Immediately after HI injury, pups were kept for 4 h in temperature controlled chambers with either HT or normothermia (NT), reaching rectal temperatures of 31-32 °C or 37 °C, respectively (23). We observed that pups placed in circulating air chambers set at 27 °C maintained the target rectal temperature of 31-32 °C. For the NT group, pups were placed in chambers set at 32 °C, based on the protocol from our previous studies (23,24). As the core temperature in neonatal rodents could be affected by distance from the dam (27), the pups were kept separately from the dam during the 4 h HT or NT treatment period. Sequential temperature measurements were obtained immediately after hypoxia (0 h) followed by 1, 2, 3, and 4 h during HT (probe type: RET-4; Physitemp Instruments, Clifton, NJ). Tri-DHA treatment [0.375 g tri-DHA/kg bw, intraperitoneal (i.p.), two injections, 1 h apart] was based on the protocol from our previous studies on tri-DHA neuroprotection against HI injury in neonatal mice (23,24).
To investigate whether combined treatment of HT with tri-DHA emulsion enhances neuroprotection in HI damage, animals subjected to HT were administered tri-DHA emulsion (0.375 g tri-DHA/kg bw, 2 injections, i.p.) at the beginning of HT and at 1 h after initiation of HT. NT or HT control animals received saline injections. Following 4 h NT, pups in the control group were returned to the dam. Pups in the HT group underwent slow rewarming by increasing the chamber temperature at a rate of 0.1-0.2 °C per minute until the pups reached a rectal temperature of 37 °C, and were then returned to the dam.
Uptake and Distribution of Radiolabeled Tri-DHA Emulsion Particles in HT Mice
Using radiolabeled tri-DHA emulsion, we determined whether HT affects the absorption and distribution of emulsion particles after i.p. injection. Naïve neonatal mice injected with radiolabeled tri-DHA emulsion (0.375 g tri-DHA/kg bw, i.p., single injection) were immediately subjected to 4 h of either HT (n = 3) or NT (n = 7). The use of a single bolus injection to study emulsion distribution was based on previously established protocols from our laboratory (25,28). Animals were sacrificed after 4 h of HT or NT and radioactivity in peritoneal fluid, blood, organs and tissues was assessed by measuring the levels of [3H]CEt.
Tissues and organs were homogenized using a Polytron Tissue Disruptor (Omni TH, Kennesaw, GA) and the radioactivity measured by liquid scintillation spectrometry (29). The samples were suspended in scintillation fluid (Ultima Gold scintillation fluid, PerkinElmer, Boston, MA), mixed, and 3H dpm assayed in a PerkinElmer Tri-Carb liquid scintillation spectrometer 5110 TR. Tissue uptake was expressed as percent of total recovered dose/organ for all the organs analyzed.
HT and Tri-DHA Therapeutic Time Windows
We determined the therapeutic window of HT after HI injury in mice: (1) 2 h delayed HT -pups placed with dam for 2 h after HI and then subjected to HT; (2) 4 h delayed HT -pups placed with dam for 4 h after HI and then subjected to HT. To investigate whether combined treatment of HT with tri-DHA emulsion prolongs the therapeutic window in HI injury, animals subjected to HT (2 or 4 h delayed after HI) were administered with tri-DHA emulsion (0.375 g tri-DHA/kg bw, 2 injections, i.p.) at the beginning of HT and at 1 h after initiation of HT. NT or HT control animals received saline injections. After the treatment period, pups in NT or HT groups were returned to the dam as described above.
Neuropathological Outcomes
At 24 h after HI insult, the animals were sacrificed and brains were harvested. Coronal slices of 1 mm were cut by using a brain slicer matrix. Slices were immersed in a PBS solution containing 2% triphenyltetrazolium chloride (TTC) at 37 °C for 25 min. TTC is taken up into living mitochondria, which convert it to a red color. Unstained areas that appeared white were defined as infarct regions, whereas viable regions appeared red. Using Adobe Photoshop and NIH Image J imaging applications, planar areas of infarction on serial sections were summed to obtain the volume (mm³) of infarcted tissue. Infarct areas were expressed as % of the total area of the ipsilateral hemisphere (24). In a separate cohort of mice treated with HT or HT plus tri-DHA immediately after HI, brain atrophy at 7 days after HI injury was detected by Nissl staining, as previously described. The entire brain was sectioned every 200 µm and the thickness of each coronal slice was 50 µm. Sections were then incubated in a solution of 0.1% cresyl violet (Sigma-Aldrich, St. Louis, MO, USA) for 7 min. After a quick rinse in H2O, slides were differentiated in 70% (v/v) ethanol with a few drops of acetic acid, followed by dehydration in graded ethanol and two changes of xylene. The sections were then mounted with Fisher Chemical™ Permount™ Mounting Media (30).
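The planimetric quantification described above reduces to simple arithmetic; below is a minimal sketch of it. Slice areas would come from Image J measurements of the TTC sections; the numbers here are invented placeholders.

```python
def infarct_metrics(infarct_areas_mm2, ipsi_areas_mm2, slice_thickness_mm=1.0):
    """Sum per-slice planimetric areas into volumes and express the
    infarct as a percentage of the ipsilateral hemisphere."""
    infarct_volume = sum(infarct_areas_mm2) * slice_thickness_mm   # mm^3
    ipsi_volume = sum(ipsi_areas_mm2) * slice_thickness_mm         # mm^3
    return infarct_volume, 100.0 * infarct_volume / ipsi_volume

# Placeholder measurements for five 1-mm coronal slices.
infarct = [0.0, 2.1, 4.8, 3.5, 0.9]
ipsilateral = [18.0, 22.5, 24.1, 21.7, 17.2]
vol, pct = infarct_metrics(infarct, ipsilateral)
print(f"infarct volume: {vol:.1f} mm^3 ({pct:.1f}% of ipsilateral hemisphere)")
```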
Statistical Analyses
Values are mean ± SEM. One-way ANOVA followed by a post hoc Newman-Keuls multiple comparison test was applied to evaluate differences among the groups.
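For readers reproducing the statistics in Python, a minimal sketch follows. The Newman-Keuls test is not available in the common scientific Python packages, so Tukey's HSD is shown here as a closely related stand-in; the group values are invented placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder infarct-volume percentages for three groups.
nt_saline = np.array([28.5, 33.1, 35.0, 29.8, 30.6])
ht_saline = np.array([17.2, 21.4, 15.9, 20.3, 19.0])
ht_tridha = np.array([11.8, 14.6, 10.2, 13.9, 12.5])

f_stat, p_value = stats.f_oneway(nt_saline, ht_saline, ht_tridha)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise post hoc comparison (Tukey HSD as a Newman-Keuls stand-in).
print(stats.tukey_hsd(nt_saline, ht_saline, ht_tridha))
```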
HT Does Not Affect Absorption or Organ Distribution of Tri-DHA Emulsion Particles
There was no mortality in animals subjected to NT or HT protocols. Table 1 summarizes results of sequential temperature measurements in HT animals. Radiolabeled experiments showed that at 4 h after i.p. injection, ∼96% of the injected emulsion exited the peritoneal cavity in both NT and HT mice. Further, no significant differences were observed in the organ distribution of tri-DHA emulsion particles in NT vs. HT mice. The highest uptake of emulsion particles was in the liver (44-47% of recovered dose of radiolabeled emulsion), followed by muscle (20-23%) and heart (8-9%) in both NT and HT mice. The lowest uptake of emulsion particles was in the brain (<0.3% of recovered dose) in both NT and HT animals (data not shown).
HT or Tri-DHA Treatment After HI Injury Provides Similar Degrees of Neuroprotection
We evaluated the neuroprotective effects of HT plus tri-DHA treatment beginning immediately after HI injury. HT or tri-DHA showed a significant reduction (~50%) in brain infarct volumes compared to saline-treated NT animals (Figures 1A,B). Combination of treatments with HT and tri-DHA immediately after HI injury did not provide any additional benefits compared to HT treatment alone (Figures 1A,B).
Temperatures (°C) are expressed as mean ± SEM. n = 7-9.
Neuroprotection by HT plus tri-DHA administration beginning immediately after HI injury was maintained at 7 days after the ischemic insult. Nissl staining demonstrated greater preservation of the ipsilateral hemisphere in HT or HT plus tri-DHA treated mice compared to the control group. However, the combination did not offer any therapeutic advantage compared to HT treatment alone. Representative Nissl-stained sections are shown in Figure 1C.
HT Plus Tri-DHA Treatment After HI Injury Does Not Extend the Therapeutic Time Window
In the present study, we performed delayed HT treatment protocols to determine the therapeutic window for neuroprotection after ischemic injury. HT delayed 2 h post-HI showed reduced brain infarct volumes compared to NT animals. Further, HT plus tri-DHA treatment did not offer significant additional protection over that provided by HT alone beginning at 2 h after HI injury, although there was a tendency toward a slightly greater reduction in infarct size (% infarct volume: 31.4 ± 4.1 NT + saline vs. 18.8 ± 4.6 HT + saline vs. 12.7 ± 4.0 HT + tri-DHA) (Figures 2A,B). HT treatment delayed to 4 h after HI insult did not offer protection against ischemic injury. Combining HT and tri-DHA treatment with a delay of 4 h after HI injury did not extend the therapeutic window of HT. Although we observed an increase in infarct volume in animals treated with the 4 h delayed HT + tri-DHA combination, the difference was not significant compared to the NT or HT alone groups (Figures 2C,D). Thus, our results indicate that combined treatment with tri-DHA emulsion and HT does not provide additional significant neuroprotective benefit in ischemic injury.
DISCUSSION
In this study, our results show that HT administration exerts similar degrees of neuroprotection as that of tri-DHA. Further, combined treatment of HT with tri-DHA emulsion does not confer additional neuroprotection.
Therapeutic HT is a means of neuroprotection well established in the management of acute ischemic brain injuries such as anoxic encephalopathy after cardiac arrest and perinatal asphyxia (31). Randomized trials have shown that HT is also effective in improving neurological outcomes in traumatic brain injury patients (32). Neuroprotective benefits of systemic HT following ischemic stroke have been reported in clinical trials (9,11). However, the use of HT for acute stroke treatment is still controversial and is limited by logistical challenges (9,33).
HT initiated immediately after HI insult is neuroprotective, and the degree of neuroprotection decreases linearly with the delay of initiation of cooling (34,35). In neonatal mouse models of HI injury, HT beginning at 0 or 2 h after HI provides neuroprotection (26), while no studies have assessed the effect of HT when delayed by more than 2 h in mice. Our results showed that HT is neuroprotective up to 2 h after HI injury and the protection is lost with a prolonged 4 h delay in treatment. In contrast, in a neonatal rat model, Sabir et al. (35) showed that HT delayed up to 6 h after HI insult provides neuroprotection. This may be related to differences in pathways of ischemic injury progression and neuroprotection in mice vs. rats (36). The basal metabolic rate per kg of body weight is seven times greater in mice than in humans (37) and this may play a major role in providing longer treatment windows for HT in humans in response to HI injury. Therefore, neuroprotection with 2 h delayed treatment in our protocol in mice may translate into longer time windows with HT in humans. Of relevant interest, after we reported a 2 h treatment window in neonatal mice (23), in pilot studies we documented a 6 h therapeutic window for omega-3 emulsion treatment in an adult stroke model (unpublished data). Since myelination is still occurring in the neonatal brain and the water content of the neonatal brain is greater than that of the mature brain, injury has a different appearance and time-course in the neonatal brain than in the adult brain. Cell death mechanisms have been shown to be different in the developing brain compared to the adult (38). The mechanisms of mitochondrial permeabilization are age-dependent: while Cyclophilin D is critical in the adult brain, B-cell lymphoma 2 (BCL-2) associated X (BAX)-related mechanisms dominate in the immature brain (39). Stroke triggers a robust inflammatory response in both the adult and neonatal brain. Compared to the adult, microglial activation in neonates is much more rapid following ischemic injury. In the adult brain there is also a considerable contribution of infiltrating peripheral immune cells to the brain after stroke injury (40). In contrast, little infiltration of peripheral cells is seen acutely after neonatal stroke (41). Thus, these findings suggest differences in neonatal and adult central nervous system immune responses to injury (42,43). We assume that these differences in ischemic injury pathophysiology and in the efficacy of omega-3 fatty acids to act through these molecular pathways account for the differences in therapeutic windows observed between neonates and adults. Our present results also suggest that HT offers a very similar therapeutic window to tri-DHA treatment. A therapeutic window shorter than 6 h is recommended in neonates with HI encephalopathy (44,45). However, a few studies have demonstrated that HT initiated at 6-24 h after birth may also have benefits (46). The effective therapeutic window for HT in adult stroke patients is still not known (11,14).
We tested whether DHA might add better neuroprotection as an adjuvant therapy to enhance the efficacy of HT after HI injury. Our results suggest that combining HT and tri-DHA does not enhance neuroprotection or extend the therapeutic window of treatment after HI injury. This is similar to recent findings from studies in newborn piglet models of HI injury, which showed that combined treatment of HT plus DHA had no additional benefit over HT alone or DHA alone in reducing brain injury, oxidative stress, and inflammatory markers following HI insult (47,48). However, another study in a neonatal rat model of HI injury reported that HT plus DHA synergistically reduced brain infarct volume and improved behavioral performance (49). Of interest, the inability to markedly enhance neuroprotection by HT plus tri-DHA treatment is not attributable to a reduction of absorption and distribution of tri-DHA emulsion particles, as demonstrated by our radiolabeled experiments. Additionally, low uptake of emulsion particles in the brain does not affect tri-DHA-mediated neuroprotection in HI injury (25). Recent data from our laboratory have shown that injected tri-DHA emulsion is initially mainly taken up by the liver, where it is metabolized and secreted into plasma pools of lysophosphatidylcholine and nonesterified fatty acids, facilitating DHA brain transport (25). Further, we reported that tri-DHA administration increased DHA content in brain mitochondria and also induced a significant increase in DHA levels in blood and DHA-derived specialized pro-resolving mediators (SPMs) in brain. Tri-DHA administration also increased blood levels of EPA and EPA-derived SPMs in brain (24,25). These rises in DHA, EPA, and SPMs derived from DHA and EPA might also contribute to and explain the neuroprotective actions observed for DHA.
Both DHA and HT share common pathways of neuroprotection against HI injury. DHA or HT downregulate pro-apoptotic BAX and upregulate anti-apoptotic BCL-2, resulting in reduced cytochrome c release and decreased caspase activation (20,50). DHA or HT promote activation of AKT, which stimulates cell proliferation (51,52). Further, it has been reported that in experimental stroke, DHA or HT treatment induces a decrease in microglial activation and pro-inflammatory cytokines such as interleukin 1β (IL-1β), IL-6 and tumor necrosis factor alpha (TNF-α) (53,54). Additionally, both treatments inhibit nuclear factor kappa B (NF-κB), a transcription factor that activates many inflammatory signaling pathways (55,56). DHA or HT have also been shown to prevent accumulation or release of excitotoxic amino acids such as glutamate (57,58). Both DHA and HT limit the reperfusion-driven acceleration in mitochondrial ROS release and protect against mitochondrial membrane permeabilization (24,59). Thus, we speculate that the overlapping neuroprotective mechanisms of DHA and HT render the combined treatment ineffective in providing enhanced neuroprotection in HI brain injury.
Previously, we reported a significant impairment in the behavioral outcomes of neonatal mice subjected to HI injury, while animals treated either with tri-DHA or neuroprotectin D1 (NPD1) had reduced infarct size with preservation of neurofunctional outcomes (24,30). While in this study we did not measure neurofunctional outcomes following HI injury in the different groups, given the similar histological findings of HI injury and subsequent neuroprotection by HT or tri-DHA, we would predict similar levels of preservation of neurofunctional outcomes by both treatments. Furthermore, we did not delineate the potential molecular mechanisms of DHA compared to HT (10,20). Still, these limitations do not negate the significance of our work, which demonstrates that post-HI tri-DHA administration provides a similar degree of neuroprotection to HT treatment.
Currently, HT is the only established treatment for moderate to severe encephalopathy in infants (60) and is a promising strategy still under investigation for stroke therapy in adults (61). Successful clinical translation of HT for stroke requires the control of different key parameters of HT therapy including onset time, duration, depth of HT and rewarming speed (14). Although cooling a patient is simple in concept, it is a complex medical procedure that involves coordination of efforts from specially trained health care staff along with preparedness for the management issues that may arise with HT (15,62). Using HT as a treatment for stroke usually requires a tertiary care hospital setting and is associated with high financial costs (63). Our findings show that HT or injection of tri-DHA emulsion reduces infarct volume and that the degree of neuroprotection is similar for both treatments. Omega-3 fatty acids are safe and well tolerated in humans without major adverse effects (64)(65)(66). Intravenous injections are a common, feasible procedure, which can be easily performed in primary care settings. Thus, if our results using omega-3-rich lipid emulsions prove effective in treating stroke in humans, acute omega-3 therapy could be considered as an alternative, cost-effective therapy to HT after ischemic organ injuries such as stroke.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by Columbia University Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
DM performed all the experiments and wrote the first draft of the manuscript. HZ provided experimental assistance. RD, VT, and HZ advised on study design, on data analyses, and in revisions of the manuscript. RD, VT, HZ, and DM conceived the study, coordinated the experiments, and wrote the final version of the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This work was supported by the National Institutes of Health grant R01 NS088197 (RD and VT). | 5,388.6 | 2021-01-15T00:00:00.000 | [
"Biology",
"Medicine"
] |
Control of Two Satellites Relative Motion over the Packet Erasure Communication Channel with Limited Transmission Rate Based on Adaptive Coder
The paper deals with the navigation data exchange between two satellites moving in a swarm. It is focused on the reduction of the demanded inter-satellite communication channel capacity, taking into account the dynamics of the satellites' relative motion and possible erasures of the navigation data in the channel. A feedback control law is designed that ensures the regulation of the relative satellite motion. An adaptive binary coding/decoding procedure for transmitting the satellite navigation data over the limited-capacity communication channel is proposed and studied for the cases of an ideal and an erasure channel. Results of the numerical study of the closed-loop system performance and of the dependence of the data transmission accuracy on the communication channel bitrate and erasure probability are obtained by extensive simulations. It is shown that both the data transmission error and the regulation time depend approximately inversely proportionally on the communication rate. In addition, the erasure of data in the channel with probability up to 0.3 does not influence the regulation time for sufficiently high data transmission rates.
Introduction
In recent years, there has been a growing interest in using the differential force (i.e., the difference between the aerodynamic drag forces applied to the satellites) to eliminate the relative drift between satellites in a swarm moving as a group (without the mandatory requirement to maintain relative position, cf. [1][2][3]). Various control algorithms using differential aerodynamic drag have been proposed in numerous publications, see [4][5][6][7][8][9][10][11][12][13][14][15][16]. One of the fundamental publications is the work by Leonard [17] where, based on the assumption of the possibility of changing the effective cross-section of satellites, a method of switching control has been developed. The differential force is created by changing the angles of attack of the plates located on the satellites, through the rotation of the satellites with respect to the incident airflow. Kim et al. [18] deal with a satellite constellation consisting of a leader satellite and surrounding slave ones. The orbit of the leader is considered as a reference, whilst the relative orbits of the followers are taken as Projected Circular Orbits (PCO), i.e., relative orbits between the master and slave satellites. Reducing the demanded data transmission rate for a given power consumption can make it possible to increase the power of the transmitted signals, expanding the area of inter-satellite interaction.
The present paper is focused on the reduction of the demanded inter-satellite communication channel capacity, taking into account the dynamics of the satellites' relative motion and the possibility of erasure of the navigation data in the channel.
Control under constraints imposed by a limited communication channel capacity has been deeply studied in the control-theoretic literature, see [31][32][33][34][35] and the references therein. A fundamental result establishing the smallest data rate for which the stabilization (estimation) problem for linear time-invariant (LTI) systems is solvable was obtained by Nair and Evans [31] and presented in the form of the seminal Data Rate Theorem. However, the study of applied control problems in aerospace under communication constraints is still very limited.
In the present paper, a system of two coupled satellites is considered. As in [21], the satellites are assumed to be launched at the starting time in accordance with the specified separation conditions. It is assumed that the satellites move in a low circular near-Earth orbit and are controlled using the aerodynamic drag force, which is achieved by rotating each satellite relative to the incoming flow using a flywheel attitude control system. The main focus of the paper is on the navigation data exchange between the satellites, which is used to keep the satellites moving in a swarm. To this end, an adaptive coding procedure is proposed and studied for the cases of an ideal and an erasure communication channel. The regulation time is taken as the performance criterion, and its dependence on the data transmission rate is numerically studied.
The remainder of the paper is organized as follows. The existing results on control and estimation under information constraints are briefly recalled in Section 2, where the minimum necessary data rate for the estimation and control of linear time-invariant (LTI) systems, in the form of the data rate theorem, is specified and various coding/decoding schemes are described. Section 3 is devoted to the dynamics of the relative motion of two satellites in a near-circular orbit. The main result is concentrated in Section 4. This section starts with the design of the control law, which ensures the asymptotic regulation of the satellites' relative motion (Section 4.1). The design is based on the linearized dynamics model without taking into account the control signal saturation. The classical modal control approach based on the pole-placement technique is employed [36]. The behavior of the system with saturation in the control is studied in the subsequent sections by simulations. The next stages of the present study are dedicated to the evaluation of the proposed scheme of inter-satellite data transmission over a digital communication channel, aimed at reducing the necessary channel capacity. To this end, the adaptive coding procedure for transmitting position between the satellites in the formation, employing the kinematic process description, is introduced in Section 4.2. It is worth mentioning that the application of this procedure makes it possible to avoid measuring the time derivatives of the satellites' relative position (these derivatives are needed for control), thanks to the state observer embedded into the adaptive coder/decoder pair. Then, in Section 4.3, the model of the erasure communication channel adopted in the present research is described. Results of the numerical study of the dependence of the closed-loop system performance and of the data transmission accuracy on the communication channel bitrate and erasure probability, obtained by extensive simulations, are presented in Section 4.4. Concluding remarks and intentions for future work, given in Section 5, finalize the paper.
Problem Description
Let us consider control and observation (estimation) systems containing a digital communication channel. For these systems, the plant output measured by the sensor at discrete instants t_k = kT_0, where T_0 denotes the sampling interval and k = 0, 1, . . ., is converted by the coder into characters of the coding alphabet S. The sequence of characters is transmitted over a digital communication channel to the decoder. The decoder transforms the messages from the transmitted form into a form suitable for subsequent calculations and transformations by the controller. In [37][38][39] the communication channel was considered to be of limited capacity, but otherwise ideal. The cases of a packet erasure channel and a 'blinking' channel widely appear in various real-world systems, see, e.g., [40][41][42][43][44][45][46][47][48][49]. Therefore, for a more realistic analysis, the properties of the communication channel, such as distortion, erasure, and data loss, should be taken into account. In the present study, the effect of data erasure is considered.
Signal quantization introduces essentially non-linear properties into the system, characterized by the presence of the dead zone, discontinuities, and saturation (associated with bit grid overflow). Additionally, the signal sampling on time involves the hybrid (continuous-discrete) system description. A rigorous examination of the influence of time sampling and the level quantization is a complex nonlinear analysis problem, that usually does not have an exact analytical solution. In the early studies, the level quantization in digital control systems was usually considered a source of independent additive random noise affecting the system. This assumption makes it possible to significantly simplify the study of level quantized systems, especially for LTI plants. However, if the quantization level is relatively high (for example, in the case of binary quantization), this can lead to the emergence of self-oscillations and even the system divergence, see [50][51][52][53]. Therefore, to analyze the system, its nonlinear model is required. Besides, the possibility of the bit grid overflow can also affect the quantizer, as a result of which the saturation is introduced to the control loop [54,55].
Minimum Necessary Data Rate for Estimation and Control
The limitation of the data transmission rate over the communication channel can be expressed in information-theoretic terms. Assume that the coding alphabet S consists of µ elements. Then at each step k = 0, 1, . . . an amount of R̄ = log2 µ bits can be transmitted over the channel. Let the data transmission be carried out at discrete instants t_k = kT_0, where T_0 is the sampling interval. Then the data transmission rate in bits per second is R = T_0^(-1) log2 µ bit/s. In this regard, one speaks of "information constraints" in control and estimation problems.
The problem of determining the minimum bandwidth of the communication channel at which the required estimation accuracy can be provided was posed and partially solved by Nair and Evans [56]. The condition obtained in [56] was developed in subsequent works into the Data Rate Theorem, a fundamental result establishing the smallest rate for which the stabilization (estimation) problem for linear systems is solvable in principle. Nair and Evans [31] studied the exponential stabilizability of LTI plants in the sense of achieving exponential moment stability. For the case of a deterministic initial state, the result of [31] can be roughly presented in the following form [57].
Let the LTI discrete-time plant be described by the difference equation x[k+1] = A x[k] + B u[k], y[k] = C x[k], where x[k] ∈ R^n, y[k] ∈ R^l, and u[k] ∈ R^m are the state, output, and control vectors, respectively; A, B, and C are matrices of the corresponding dimensions; and k ∈ Z+ denotes the step number (the discrete time). It is assumed that the pair (A, B) is reachable and (C, A) is observable. Let the sensor be connected to the controller over a digital communication channel, and let no more than R̄ bits of data be transmitted at each step k. Then the necessary and sufficient condition for ρ-exponential stabilization (with the prespecified stability bound ρ > 0) is given by the inequality [31]: R̄ > Σ_j log2 max{1, |η_j|/ρ}, (2) where η_j are the eigenvalues of matrix A, j = 1, . . . , n. The right-hand side of (2), denoted R̄_NE, gives a tight admissible bound for when ρ-exponential stabilization can be achieved. For real-time systems with a constant sampling interval T_0, the NE-number R_NE in bits per second has the form R_NE = R̄_NE / T_0. (3) Extensions of this result to stabilizing nonlinear systems in the vicinity of the origin and to observing nonlinear systems through finite-capacity communication channels, including large networks, were obtained in a series of subsequent papers, see [58][59][60][61][62] to mention a few.
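As a rough illustration of how the bound (2)-(3) is used in practice, the sketch below computes the standard data-rate-theorem bound from the eigenvalues of A. It is a generic Python illustration under the form of the inequality stated above, not the exact expressions of [31], and the plant matrix is a made-up example.

```python
import numpy as np

def ne_number(A: np.ndarray, rho: float = 1.0, T0: float = 1.0) -> float:
    """Channel rate (bit/s) required by the data rate theorem:
    sum of log2(|eig|/rho) over eigenvalues with |eig| > rho, divided by T0."""
    eigs = np.linalg.eigvals(A)
    bits_per_step = sum(max(0.0, float(np.log2(abs(e) / rho))) for e in eigs)
    return bits_per_step / T0

# Illustrative plant: a sampled double integrator (both eigenvalues equal to 1)
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
print(ne_number(A, rho=0.99, T0=0.5))   # minimum rate for 0.99-exponential stability
```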
It is worth mentioning that in practice the data transmission rate cannot be taken as small as the NE-number (3) suggests: for several reasons, the actual data bitrate is usually much greater than R_NE. Nevertheless, the NE-number can serve as a measure showing the maximum available possibilities of estimation and control over the existing communication channel. Furthermore, a promising approach is to employ event-triggered control instead of control with a constant sampling time, see e.g., [63][64][65].
Coding/Decoding Schemes
Under the assumption that the sampling time T_0 can be chosen arbitrarily, the optimality of binary coding in the sense of the required transmission rate (in bits per second) has been proven in [39], see also [66]. Therefore, in the present study the binary quantizer is used as the core element of the coding procedure.
Static Binary Quantizer
Let σ[k] be a scalar information signal to be transmitted over the digital communication channel at discrete instants t_k = kT_0, where k = 0, 1, · · · ∈ Z+ and T_0 is the sampling interval. Let us introduce the following static quantizer: q(σ, M) = M sign(σ), where sign(·) is the signum function, equal to 1 for σ ≥ 0 and to -1 otherwise. Parameter M is referred to as the quantizer range. The output signal of the quantizer is represented as a one-bit information symbol from the coding alphabet S = {-1, 1}. Note that for the binary coder the transmission rate is R = 1/T_0 bits per second. It is assumed that the equi-memory condition is fulfilled, i.e., the coder and decoder make decisions based on the same information [67,68]. The binary output codeword s ∈ S is transmitted to the decoder.
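A minimal sketch of such a one-bit quantizer and its decoder-side reconstruction is given below; the reconstruction rule (multiplication by the range M) is the usual convention for this coder and is assumed here rather than quoted from the paper.

```python
def binary_quantize(sigma: float) -> int:
    """Static binary quantizer: map the signal onto the alphabet S = {-1, +1}."""
    return 1 if sigma >= 0 else -1

def binary_dequantize(s: int, M: float) -> float:
    """Decoder-side reconstruction of the quantized value from the codeword."""
    return M * s
```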
Zooming Strategies
In time-varying quantizers [33,39,[69][70][71][72] the range M is updated with time. Using such a zooming strategy improves the steady-state accuracy of the transmission procedure and at the same time prevents encoder saturation at the beginning of the process. The values of M[k] can be precomputed (time-based zooming) [39,73,74], or the current quantized measurements can be used at each step to update M[k] (event-based zooming). For an audio channel, Moreno-Alvarado et al. [75] developed coding schemes with the capacity to simultaneously encrypt and compress audio signals, which makes it possible to transmit sensitive audio information over insecure communication channels.
The event-based zooming can be realized in the form of the adaptive zooming [76][77][78][79], where the quantizer's range is adjusted automatically depending on the current variations of the transmitted signal.
For the binary quantizer, the following adaptive zooming algorithm, in which the quantizer range M[k] is adjusted at each step depending on the current quantized outputs, was proposed and experimentally studied in [78].
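Since the exact zooming rule (5) is not reproduced above, the following fragment shows one typical event-based rule of this kind: the range is inflated when successive codewords suggest saturation and contracted otherwise. The factors eta and lam and the lower bound are illustrative assumptions, not the values used in [78].

```python
def adaptive_zoom(M: float, s: int, s_prev: int,
                  eta: float = 1.6, lam: float = 0.9, M_min: float = 1e-3) -> float:
    """Update the quantizer range M from the last two codewords."""
    if s == s_prev:                     # same sign twice: signal likely out of range
        return eta * M                  # zoom out
    return max(lam * M, M_min)          # signal tracked: zoom in for finer resolution
```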
Coders with Memory
The coding/decoding procedure can include an embedded observer, which adds memory to the coder. The following model of the drive process is used: ẋ(t) = A x(t) + B ϕ(t), where x(t) ∈ R^n is the process state-space vector; y(t) is the scalar measured signal; A ∈ R^(n×n) and B ∈ R^(n×1) are given real matrices; and ϕ(t) is the external input signal, which is assumed to be the same on both the transmitter and the receiver sides (so that the equi-memory condition is fulfilled).
The quantized observation error σ̄[k] is defined as the deviation between the measured signal y(t) and its estimate ŷ(t), quantized with the given range M[k]: σ̄[k] = q(y(t_k) - ŷ(t_k), M[k]), where the estimate ŷ(t) is generated by an observer of the form x̂̇(t) = A x̂(t) + B ϕ(t) + L σ̄(t), in which x̂(t) ∈ R^n is the state estimation vector; ŷ(t) is the estimate of the drive process output; L is the (n × 1)-matrix (column vector) of the observer parameters; and the continuous-time observation error σ̄(t) is found as an extension of σ̄[k] over the sampling interval. In the case of zero-order extrapolation, σ̄(t) = σ̄[k] for t_k ≤ t < t_(k+1).
Relative Motion Dynamics of Two Satellites in a Near-Circular Orbit
To describe the dynamics of the relative motion of two satellites moving in near-circular orbits in the Earth's central gravitational field, equations in the Local Vertical Local Horizontal (LVLH) reference frame in relative coordinates according to the Hill-Clohessy-Wiltshire (HCW) model are used, see [21,[80][81][82][83][84].
In the present paper, the LVLH reference frame is employed, where the OZ axis is directed from the center of the Earth, the OY axis is directed along the normal to the orbital plane, and the OX axis completes the right-handed coordinate frame, see Figure 1. Taking into account that the motion along the normal to the orbital plane (along the OY axis) is decoupled, and proceeding from the universal law of gravitation and Kepler's third law under the assumption that the distance ρ of the satellite to the center of the Earth satisfies ρ ≫ x, z, one obtains the HCW equations (9), (10) of satellite motion in the LVLH coordinate frame, where F_x, F_z are the components of the vector of non-gravitational forces applied to the satellite, expressed in acceleration units. These forces include the aerodynamic drag force f_x along the OX axis, which is controlled by turning the satellite through the attack angle α. Based on physical considerations, it is natural to consider the attack angle within 0 ≤ α ≤ π/2. In (10), ω denotes the averaged angular velocity of the spacecraft in orbit and satisfies ω = √(µ/a³), where µ = GM, G is the gravitational constant, M is the mass of the central body (for the Earth, µ = 398,603 × 10⁹ m³ s⁻²), and a is the semi-major axis of the satellite orbit. The HCW Equations (9) and (10) are derived under the following assumptions: √(x² + z²) is small compared to ρ; the aerodynamic force is small (the acceleration caused by it does not exceed 1.7 × 10⁻⁶ m/s²); the eccentricity of the orbit is small, so the orbit is very close to a circular one; and the orbital rate ω is approximately constant. For the resulting system (9), (10), a non-degenerate coordinate transformation to the real Jordan form [36] is performed in [17], as a result of which the model (9), (10) is represented as a double integrator and a harmonic oscillator connected in parallel. For the double integrator, a time-optimal control is constructed in the form of relay feedback with a quadratic switching function. A similar, but technically more involved, approach is used for the harmonic oscillator. Since the control is scalar, it is not possible to apply both control laws simultaneously and independently to the double integrator and the oscillator. To resolve this contradiction, Leonard [17] developed, and studied for different flight scenarios, an algorithm for switching from one control law to the other, based on the specifics of the rendezvous problem requirements. The linearized equations of the satellites' relative motion in the OXZ plane, written in the state-space form (11), (12) (cf. [5,21]), use the state vector χ = [x_12, ẋ_12, z_12, ż_12]^T ∈ R⁴, where x_12 denotes the difference in coordinates of the second and first satellites along the OX axis, x_12 = x_2 - x_1, z_12 = z_2 - z_1, and u denotes the control action (the difference of the aerodynamic forces acting on the satellites, in units of acceleration). It is limited in modulus by the value u_max. Equation (12) closes the feedback loop; in it, y is the output of the linear feedback controller and ϕ(·) is a nonlinear function describing the limitation of the control action, which is assumed to be the saturation function, ϕ(y) = sat_u_max(y).
Since for each satellite the aerodynamic drag force is negative (that is, it acts against the direction of motion along the longitudinal axis and lies in the interval [-u_max, 0]), in order to provide the desired differential control -u_max ≤ u_12 ≤ u_max, the actual steering (braking) action should be applied only to the first (along the X-axis) satellite of the pair, see [21]. The matrices A and B of dynamics model (11) and the controller gain matrix K = [k_1, k_2, k_3, k_4] have the form given in (13); the coefficients k_i, i = 1, . . . , 4, are chosen at the stage of the control law synthesis.
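For readers who wish to reproduce the open-loop behavior, the fragment below integrates in-plane HCW relative dynamics of the form discussed above. The sign convention (OZ radial, OX along-track) and the numerical value of ω are assumptions of this sketch and may differ from the exact matrices (13).

```python
import numpy as np
from scipy.integrate import solve_ivp

def hcw_rhs(t, chi, omega, u_of_t):
    """In-plane HCW relative dynamics, chi = [x12, dx12, z12, dz12]."""
    x12, dx12, z12, dz12 = chi
    u = u_of_t(t)                            # differential drag along OX, m/s^2
    ddx12 = -2.0 * omega * dz12 + u
    ddz12 = 2.0 * omega * dx12 + 3.0 * omega**2 * z12
    return [dx12, ddx12, dz12, ddz12]

omega = 1.13e-3                              # rad/s, roughly a 400 km circular orbit
sol = solve_ivp(hcw_rhs, (0.0, 5400.0), [200.0, 0.0, -50.0, 0.0],
                args=(omega, lambda t: 0.0), max_step=10.0)
```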
Control Law Design
Suppose that each satellite has onboard navigation equipment, e.g., GLONASS/GPS receivers [85], for determining its position, and is able to share this information via the digital inter-satellite communication channel. Therefore, it is assumed that the onboard control system of each satellite is supplied with the values of all the relative coordinates x_12 = x_2 - x_1, z_12 = z_2 - z_1 and their time derivatives. Control signals for each satellite are generated so as to provide the difference u = u_2 - u_1 in accordance with the chosen control law u = U(x_12, z_12, ẋ_12, ż_12). In the present study, the pole-placement technique is employed for the control law design. The design procedure is carried out for the LTI system model under the assumption that the restriction on the control signal is not "active", i.e., u does not go beyond the boundaries of the linear region, |u| ≤ u_max. The state-space equations of the closed-loop system (disregarding disturbances) have the form χ̇ = (A - BK)χ, (14) where matrices A, B, K are of form (13). The design problem consists of choosing the controller coefficients k_i so as to provide the required spectrum of the matrix (A - BK) of the closed-loop system (14). The fourth-order Butterworth polynomial λ⁴ + 2.613 Ω λ³ + 3.414 Ω² λ² + 2.613 Ω³ λ + Ω⁴ is used, where parameter Ω is the geometric mean root of the characteristic polynomial. This parameter determines the desired transient time of the closed-loop system. Note that the controller design is of secondary importance for this study, which is focused on the data exchange between the satellites, and other control schemes can be used to improve the formation dynamics. For example, in the case of essential parametric uncertainty, the Implicit Reference Model adaptive control [86,87] or the sliding mode control of [88][89][90] can also be employed.
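A compact way to obtain the gain row K with the desired Butterworth pole pattern is sketched below. The matrices A and B are a hypothetical in-plane HCW model written for the state ordering [x12, dx12, z12, dz12]; they, and the choice Ω = 5ω, are assumptions for illustration and may not coincide with (13).

```python
import numpy as np
from scipy.signal import place_poles

def butterworth_gain(A, B, Omega):
    """State-feedback gain placing the poles of (A - B K) at the
    fourth-order Butterworth pattern of radius Omega."""
    angles = np.array([5, 7, 9, 11]) * np.pi / 8.0   # Butterworth pole angles
    poles = Omega * np.exp(1j * angles)
    return place_poles(A, B, poles).gain_matrix

omega = 1.13e-3                                      # orbital rate, rad/s
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, -2.0 * omega],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 2.0 * omega, 3.0 * omega**2, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [0.0]])
K = butterworth_gain(A, B, Omega=5.0 * omega)
```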
Adaptive Coding for Transmission of Position Between Satellites in Formation
The application of adaptive coding for navigation data transmission between satellites in a formation is often based on a kinematic representation of the vehicle motion by a second-order model for each channel, under the assumption that the satellite speed is constant, which leads to the following representation of the data source generator (cf. [91][92][93]): y[k+1] = y[k] + T_0 V[k], V[k+1] = V[k] + ξ[k], where y[k], V[k] are the satellite position and speed with respect to the given direction, and ξ[k] denotes the unmodeled variations of the vehicle speed, considered as an unknown disturbance.
To estimate the change of the coded signal, the embedded observer (17) is introduced into both the encoder and the decoder ([78,93]).
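The sketch below combines the binary quantizer, the adaptive zooming, and a second-order kinematic observer into an equi-memory coder/decoder pair. The observer gains l1, l2, the zooming factors, and the update order are illustrative assumptions; they are not the gains of observer (17) referred to in the text.

```python
class KinematicCoder:
    """Equi-memory binary coder/decoder with an embedded kinematic observer."""

    def __init__(self, T0, M0=1.0, l1=0.5, l2=0.1, eta=1.6, lam=0.9):
        self.T0, self.M = T0, M0
        self.l1, self.l2, self.eta, self.lam = l1, l2, eta, lam
        self.y_hat, self.v_hat = 0.0, 0.0      # position / speed estimates
        self.s_prev = 1

    def _step(self, s):
        corr = self.M * s                      # reconstructed quantized error
        self.y_hat += self.T0 * self.v_hat + self.l1 * corr
        self.v_hat += self.l2 * corr
        # adaptive zooming of the quantizer range
        self.M = self.eta * self.M if s == self.s_prev else max(self.lam * self.M, 1e-3)
        self.s_prev = s

    def encode(self, y):                       # transmitter side
        s = 1 if (y - self.y_hat) >= 0 else -1
        self._step(s)
        return s

    def decode(self, s):                       # receiver side, same state evolution
        self._step(s)
        return self.y_hat
```

Because both sides update their state only from the transmitted codeword, instantiating one object on each end keeps the encoder and decoder synchronized as long as every codeword is delivered, which is exactly the equi-memory condition mentioned above.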
Erasure Channel Description
By analogy with [47,93,94], let us assume that the output measurement is encoded by an encoder and transmitted to a decoder through a packet erasure channel with erasure probability p. In addition, suppose that there exists a feedback link from the decoder to the encoder for acknowledging whether the packet was erased or not. Therefore, the encoder knows what information has been delivered to the decoder (i.e., the aforementioned equi-memory condition is fulfilled). Let the acknowledgment signal at time k, which is sent by the decoder and received by the encoder, be represented by ζ[k] ∈ {0, 1}, where ζ[k] = 1 if the packet was erased and ζ[k] = 0 otherwise. The random variables ζ[k], k = 0, 1, . . ., are assumed to be independent and identically distributed with common distribution Pr(ζ[k] = 0) = 1 - p and Pr(ζ[k] = 1) = p.
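A direct way to emulate this channel model in simulation is shown below; ζ[k] = 1 marks an erased packet, matching the probabilities stated above.

```python
import random

def erasure_channel(codewords, p, seed=0):
    """Pass codewords through a packet erasure channel with erasure probability p.

    Returns the symbols seen by the decoder (None when erased) and the
    acknowledgment sequence zeta (1 = erased) available to both sides."""
    rng = random.Random(seed)
    received, zeta = [], []
    for s in codewords:
        erased = rng.random() < p
        zeta.append(1 if erased else 0)
        received.append(None if erased else s)
    return received, zeta
```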
Satellite Model Parameters
For the numerical investigations, the two satellites are represented by 3U CubeSats in the form of rectangular parallelepipeds with dimensions 10 cm × 10 cm × 30 cm. The aerodynamic drag force acting on the i-th satellite, in acceleration units, is given by a relation in which V denotes the satellite speed with respect to the Earth's atmosphere; ρ = ρ(h) is the air density at the height h of the satellite orbit; α_i ∈ [0, π/2] is the angle of attack; C_α is the drag derivative coefficient with respect to the attack angle; ∆S denotes the difference between the satellite cross-sectional areas for the cases α = 0 and α = π/2 rad; S_0 is the cross-sectional area at α = 0; and m is the mass of the satellite. Therefore, the differential drag can be found accordingly, which yields the following maximal control action: u_max = (ρV²/2) C_α ∆S m⁻¹ m s⁻². The satellite speed V is related to the orbital angular velocity ω as V = ω r_0, where r_0 = R_earth + h, R_earth = 6.371 × 10⁶ m, and µ = GM = 3.986 × 10¹⁴ m³ s⁻² (with the Earth's mass M ≈ 5.97 × 10²⁴ kg).
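The following fragment evaluates u_max, ω, and V from the formulas above; the air density, drag derivative, cross-section difference, and mass are plausible 3U-CubeSat values assumed for illustration only.

```python
import math

MU = 3.986e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6    # Earth's radius, m

def drag_control_authority(h, rho_air, C_alpha, dS, m):
    """u_max = (rho*V^2/2) * C_alpha * dS / m for a circular orbit of height h."""
    r0 = R_EARTH + h
    omega = math.sqrt(MU / r0**3)        # orbital angular rate, rad/s
    V = omega * r0                       # orbital speed, m/s
    u_max = 0.5 * rho_air * V**2 * C_alpha * dS / m
    return u_max, omega, V

u_max, omega, V = drag_control_authority(h=400e3, rho_air=3e-12,
                                         C_alpha=2.2, dS=0.02, m=4.0)
```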
Simulations for Ideal Communication Channel
The simulations were performed in the MATLAB/Simulink software environment. For the numerical solution of the differential equations, the MATLAB variable-step routine ode45 with a relative tolerance of 10⁻³ was used. For the hybrid (continuous-discrete) parts of the system, including the model of the coding/decoding procedure, blocks from the standard Simulink/Discrete blockset were used. The simulation time was confined to t_fin = 54,000 s = 15 h.
Firstly, consider the "ideal" case where the position information is transferred over the channel without sampling and level quantization. For the performance criterion, let as pick up instant T * when the relative trajectory on the (X, Z) plane reaches the circle with given radius Q and does not leave it in the future, i.e., T * = max t x 2 12 + z 2 12 > Q (T * is further called the regulation time). It is natural to assume that within this circle the regulation rule switches to the rule ensuring collision avoidance, which is beyond the scope of this paper.
The simulation results for various initial conditions are depicted in Figures 2-5. They show that, despite the control signal saturation at the beginning of the process, the target manifold is attained for both sets of initial conditions, with regulation times T* = 3.27 h (Figures 2 and 3) and T* = 4.61 h (Figures 4 and 5), respectively. Moreover, based on harmonic balance arguments [52,53], the conclusion can be made that the closed-loop system is globally asymptotically stable. Secondly, consider the case of transmission over the digital channel with the adaptive binary quantizer (5) and embedded state estimation (17). For this numerical study, the coding/decoding procedure parameters are picked as follows: M_0 = 1, m_c = 0, ρ = e^(-0.01 T_0); the gains l_1, l_2 of observer (17) are found by the pole-placement technique for the corresponding discrete-time system.
Case of Erasure Communication Channel
Thirdly, let us study how the erasure of data in the communication channel affects the data transmission accuracy and the regulation time for the relative position of the satellites.
For evaluating the data transmission accuracy, the transients of the transmission procedure were excluded by considering only the last 0.3 t_fin interval of the simulation time. For this interval, the standard deviations σ_ex1 and σ_eẋ1 of the data transmission errors e_x1[k] = x_1(t_k) - x̂_1(t_k) and e_ẋ1[k] = ẋ_1(t_k) - x̂̇_1(t_k), respectively, were calculated. Logarithmically scaled plots of σ_ex1 and σ_eẋ1 versus the transmission rate R for various values of the erasure probability p are given in Figures 8 and 9. The plots show that the data transmission errors decrease monotonically, approximately inversely proportionally to the communication rate R, and are practically negligible for R > 1 bit/s for all considered values of p.
A summary graph of the dependence of the regulation time T* on the data transmission rate R (for each channel) at various erasure probabilities p ∈ {0, 0.1, 0.2, 0.3} is shown in Figure 10. The curves in Figure 10 give an impression of the required load of the communication channel and of the quality of the stabilization process obtained with it. It is seen from the plot that for a sufficiently high data transmission rate (exceeding 2 bit/s in our example) erasure of data in the channel with probability up to 0.3 does not affect the regulation time. This time is defined by the system dynamics regardless of the communication channel capacity. To illustrate the system performance, the particular processes for T_0 = 0.667 s, x_12(0) = 200 m, ẋ_12(0) = 0.025 m/s, z_12(0) = -50 m, ż_12(0) = -0.025 m/s, and probability p = 0.2 of erasing data in the communication channel are plotted in Figures 11 and 12. The regulation time T* = 7.07 h is found by the simulation. The time histories and trajectories for the limited-capacity erasure communication channel differ significantly from those for the ideal channel, and from an application viewpoint the process quality under these conditions is a borderline one.
Illustrations of the adaptive coding procedure are given in Figures 13 and 14. The adaptive tuning of the quantizer range M[k] in accordance with algorithm (5) is depicted in Figure 13. The plot shows how the range is automatically increased at the initial stage of the process and then decays, which leads to a reduction of the data transmission error. Figure 14 illustrates the influence of data erasure on the codewords transmitted over the channel. Signal s[k] from the coder is received in the form s_ζ[k] on the decoder side; the difference between these signals causes additional data transmission errors.
Conclusions
In the paper, a feedback control law was designed that ensures the regulation of the relative motion of satellites in a swarm. Unlike in previous papers, the limited data transmission rate over the communication channel was taken into account. The adaptive coding/decoding procedure for the transmission of position between the satellites in the formation, employing the kinematic process description, was studied for the cases of the ideal and erasure channels. Note that the adaptive coder used in the paper is not new, as it was employed earlier for other problems in [77,78]. However, such a coder had not previously been applied to the control of satellite swarms, which constitutes an additional novelty of the paper.
The dependence of the closed-loop system performance and of the accuracy of the data transmission algorithm on the data transmission rate was numerically evaluated. It was shown that both the data transmission error and the regulation time are approximately inversely proportional to the communication rate.
In future research, disturbances and noise in the communication channel will be taken into account.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript: | 6,861.4 | 2020-12-01T00:00:00.000 | [
"Computer Science"
] |
Contrasting Traditional In-Class Exams with Frequent Online Testing
Although there are clear practical benefits to using online exams compared to in-class exams (e.g., reduced cost, increased scalability, flexible scheduling), the results of previous studies provide mixed evidence for the effectiveness of online testing. This uncertainty may discourage instructors from using online testing. To further investigate the effectiveness of online exams in a naturalistic situation, we compared student learning outcomes associated with traditional in-class exams compared to frequent online exams. Online exams were administered more frequently in an attempt to mitigate potential negative effects associated with open-book testing. All students completed in-class and online exams with order of testing condition counterbalanced (in-class first, or online first) between students. We found no difference in long-term retention for material that had originally been tested using frequent online or traditional in-class exams and no difference in self-reported study time. Overall, our results suggest that frequent online assessments do not harm student learning in comparison to traditional in-class exams and may impart positive subjective outcomes for students.
A challenging reality in higher education is that fewer funds are available to accomplish the same, or greater, educational goals.Student enrollments at public institutions have increased steadily over the last few years, while the average appropriation per student has decreased (State Higher Education Executive Officers, 2012).As a result, many institutions have increased course enrollments (often accompanied by fewer sections of each course) and added large online sections of courses in an attempt to offset education costs.Instructors are, thereby, tasked with providing educational opportunities for more students without sacrificing quality.Technology can be used to offload some of the cost associated with larger classes.For instance, a relatively simple course change, such as administering exams online instead of in-class, can free class time for additional instruction and can foster the use of educational testing techniques (e.g., repeated testing).Nevertheless, instructors might be hesitant to adopt online exams in place of in-class exams because they are uncertain about the relative effectiveness of online exams.We investigate how exam performance and content retention are affected by changes to the testing format from traditional in-class exams to online exams.Specifically, we examined both immediate (unit exams) and long-term outcomes (comprehensive exam) associated with in-class and online test delivery methods.Our results provide unique insight into the impact of online testing on student retention of course content.
Whenever a major pedagogical change is made, instructors ought to consider potential advantages and disadvantages associated with their design decision.Transitioning from in-class exams to online exams is no exception.There are several logistical benefits to online administration of exams.Web-based content delivery systems (e.g., Blackboard, Moodle, Canvas) can be used to administer and automatically grade student exams.This saves time and money as the exams do not have to be printed or manually graded.Online testing also saves class time.The time that would have been devoted to test taking can be used for other activities.Although setting up online exams can also be time consuming (Brewster, 1996), the time cost is recouped in large classes and in cases where exams or exam questions can be reused (e.g., for those teaching multiple sections or teaching the same course in the future).In addition to cost savings, online testing can provide students with more scheduling flexibility, allowing them to have more control over when and where they test.Therefore, online testing appears to provide economic benefits and convenience.Clearly, these practical advantages for faculty and administrators have facilitated the transition from paper to digital exams.
Another benefit of using online examinations is that they can promote the use of testing as a learning mechanism.Although traditionally used to assess student learning, exams also serve as learning opportunities.The testing effect is the finding that simply being tested over information can increase the likelihood of later recalling that information (McDaniel, Anderson, Derbish & Morrisette 2007).The benefits of retesting are greater than those obtained when individuals restudy information and can occur when no feedback is provided (e.g., Roediger & Karpicke, 2006).Similarly, testing can lead to increased retention of information that is not even directly retested (retrieval-induced facilitation; Chan, McDermott & Roediger, 2006).For example, Chan (2010) had participants read a passage then he immediately tested them over the information.After a delay, participants were given a final comprehension test in which the same questions were tested again (retest condition) and new questions on the same passage were tested (related condition).Although the largest memory benefits came from direct retesting, performance on the related questions was significantly higher than the control condition (passage with no previous testing or related testing).Even though the benefits are clear, repeated testing is costly.In fact, when instructors are polled on the topic, they report that the primary disadvantage to using frequent exams is the time required to administer the exams (Bacdayan, 2004).Online examinations provide a mechanism to increase exam frequency without losing instructional time.
Even with the logistical and educational advantages offered by the use of online examinations, it remains important to remember the student and, ultimately, his or her learning outcomes.Would students have different outcomes if they were tested online instead of in the classroom?Alexander, Bartlett, Truell, and Ouwenga (2001) compared the exam scores of students who were tested in a traditional classroom setting -paper and pencil -to students tested on computers in a proctored lab.No significant difference in test scores was found between the two groups.Results like this are promising, but they do not fully address our question as students are rarely proctored during online exams.
There are many uncontrolled factors when exam periods are unsupervised. One prominent issue is that students might use notes or textbooks during an online exam when they are not permitted to use those materials during an in-class exam. Apart from the issue of academic honesty, it is possible that these behaviors affect learning. For example, Brothen and Wambach (2001) found that quizzing as a study technique is ineffective compared to traditional study methods if students look up the answers while taking the quiz. However, Agarwal and Roediger (2011) obtained contradictory results. They assigned participants to take open- or closed-book tests and then tested their comprehension again after two days. At initial test, participants in the open-book condition scored higher on the test. After two days, though, there was no significant difference between the groups in comprehension. In a follow-up study (Experiment 2), participants were told to expect an open- or closed-book test in the future, but were given a surprise test in the interim to see how the groups' preparations for the test differed. They found that participants expecting an open-book test had studied less and they scored lower on the test than those who were expecting a closed-book test in the future.
When examining these findings, it is difficult to predict how student comprehension might be affected by the use of online testing in place of traditional in-class exams.Online and in-class exams can produce equivalent comprehension effects under proctored situations (Alexander et al., 2001), but our online exams would not be proctored.Because online exams would be unsupervised, we assumed that student behaviors would change; students might use their noteswhich may or may not have an effect (Agarwal & Roediger, 2011;Brothen & Wombach, 2001)and students might study less for the online test because they plan to use their notes (Agarwal & Roediger, 2011).Although we could not know how the unsupervised nature of the online test would affect student behavior and comprehension, we did want to reduce the likelihood of students adopting clearly maladaptive study habits.We were concerned that when students knew an upcoming exam was to be taken online (unsupervised, open-book), they would not study as much as when they had an upcoming exam to be taken in class (closed-book).In an attempt to counteract this behavior, we administered online exams twice as often as in-class exams (the online exams were half the length of in-class exams).Not only do students often prefer more frequent exams (Bangert-Drowns, Kulik, & Kulik, 1991), there is evidence that frequent testing can enhance learning (Landrum, 2007;McDaniel, Agarwal, Huelser, McDermott, & Roediger, 2011).Landrum (2007) found greater comprehension for students who took weekly in-class quizzes compared to those who took traditional unit exams; further, the benefits were greatest for the bottom third of students.It is possible that the comprehension benefits associated with frequent examination are related to changes in study habits.Although it is well known that students should space their study episodes over time (e.g., study a little every day) to maximize learning outcomes (e.g., Rohrer & Pashler, 2007), many students believe cramming is an effective means for achieving high exam scores (Taraban, Maki, & Rynearson, 1999).Therefore, we hoped that by having more frequent online examinations, students would feel compelled to adopt a study strategy that was more similar to one they would use if they were only taking traditional in-class examinations -that is, we hoped they would study more often.
In addition to taking online and in-class exams, students completed a comprehensive exam and two reflective surveys.From an educational standpoint, the inclusion of a comprehensive exam at the end of the semester was important.If one of the goals of instruction is to facilitate long-term retention, it seems comprehension should be tested after a substantial delay (e.g., more than a week).Using the data from the exams and surveys, we were able to 1) compare long-term retention of material tested online to material tested in class, 2) compare performance on in-class and online exams, 3) compare the amount of time students study in preparation for frequent online exams and for less frequent in-class exams and 4) consider additional benefits that may be associated with online exams (e.g., student subjective experience).
Participants
The university institutional review board approved all experimental procedures. Students (N = 139) from two sections of Introductory Psychology taught by the same instructor participated in the study. Participants received course credit in exchange for agreeing to participate.
Materials and Procedure
David Myers' (2009) introductory psychology textbook, Psychology in Everyday Life, was the content foundation for the course. We wrote the exam questions (multiple choice with four alternatives) to reflect content that appeared in the textbook and was addressed in class. We used the same exam questions and time limits (i.e., an average of one minute per question) for both online and in-class exams to better equate the testing conditions.
The course was originally designed with four multiple-choice exams ("unit exams" with approximately 50 questions each) being the primary method of assessment. In this traditional format, the exams would be administered in class every 3-4 weeks. For this study we modified the traditional format by replacing unit exams with shorter, and more frequent, online exams (approximately 25 questions each). Thus, in preparing the course for this study we constructed a total of eight online exams from the existing four traditional unit exams. Because each online exam assessed half of the content of a traditional unit exam, individual online exams were worth half the points (11% of final grade) of a traditional in-class unit exam (22% of final grade).
Testing condition was manipulated within subjects, with each student taking two traditional in-class exams and four, more frequent, online exams. Students were assigned to one of two testing orders according to their class section: in one section students took two in-class exams during the first half of the semester and four online exams during the second half of the semester; in the other section the order was reversed. This counterbalancing was intended to decrease the influence of confounding factors like fatigue (e.g., lower motivation at the end of the semester) and practice effects (e.g., familiarity with the instructor's questioning style, more effective study or organizational strategies).
Online exams were administered through Blackboard (http://www.blackboard.com/). Students were given a three-day window to take the exam. They were encouraged to take online exams in a campus computer lab to reduce the chances of technical problems, and they were encouraged to organize their notes to facilitate quick searches; however, neither suggestion was enforced. By contrast, notes and other materials were not allowed during in-class exams because we wanted to maintain a typical in-class testing environment. To minimize other potential confounds, we controlled several aspects of the testing condition: students could not retake exams, no immediate feedback was provided, and the same time limits were enforced in both testing conditions.
In addition to taking exams for normal course assessment, students took an in-class, 26-question comprehensive final (questions came from each of the previous exams) at the end of the semester. When analyzing the results of the final, we only used questions that provided some discrimination between students (i.e., those with point-biserial values above +0.3). The purpose of including a comprehensive final exam was to assess retention of information that had been tested via traditional in-class exams and information that had been tested via more frequent online exams. This assessment provides a way to see if student learning was adversely affected by the use of online exams. Because the comprehensive exam score was not included in students' final grades, they were given an incentive; they received extra credit if they scored 80% or higher.
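For reference, the point-biserial discrimination index used to screen the final-exam questions is simply the Pearson correlation between a dichotomous item score and the total score. The sketch below, with made-up data, illustrates the computation and the +0.3 screening rule.

```python
import numpy as np

def point_biserial(item_correct, total_scores):
    """Point-biserial correlation between one item (0/1) and the total exam score."""
    return float(np.corrcoef(np.asarray(item_correct, dtype=float),
                             np.asarray(total_scores, dtype=float))[0, 1])

# Hypothetical data: 8 students, one item, and their total scores
item = [1, 0, 1, 1, 0, 1, 0, 1]
totals = [22, 14, 20, 25, 12, 19, 16, 24]
keep_item = point_biserial(item, totals) > 0.3   # retain sufficiently discriminating items
```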
Finally, students were asked to complete two short reflective surveys in class. At mid-semester students were asked to estimate the number of minutes spent studying for each exam. At the end of the semester students were asked to indicate their testing preferences and practices (e.g., preferences for online or in-class testing, subjective test difficulty, self-reported test anxiety, study habits).
Results
All statistical tests used an alpha level of .05. Several dependent measures were used to determine the effectiveness of frequent online testing. First, performance on the cumulative final for questions initially tested online compared to questions initially tested in class was used to gauge retention differences related to testing conditions. Those data were then categorized by the students' final course grade (which only includes exam scores, no assignments or extra credit) to examine differential effects on subsets of students. Letter grades in the course were assigned such that students earning 90-100% of the points earned an A, 80-89% earned a B, 70-79% earned a C, 60-69% earned a D, and anything below 60% earned an F. We investigated this possibility because low-performing students have been shown to benefit more from frequent quizzing than other students (Landrum, 2007). Second, a comparison of in-class and online exam scores was used to determine whether or not subsets of students (groups based on final grade) were immediately impacted by the testing manipulation. Finally, student responses on the reflective survey were used to examine testing condition effects on study habits and testing preferences.
Impact of the Testing Manipulation on Comprehension
At the end of the course, students completed a comprehensive exam; half of the questions had appeared before on online exams, half on in-class exams. A paired samples t-test examining student performance on comprehensive exam questions failed to reveal a significant difference between performance on content assessed through online exams (M = .54, SD = .18) and in-class exams (M = .57, SD = .17), t(101) = -1.61, p = .11.
Despite failing to reject the null, it is possible that subgroups of students did benefit from frequent online testing. To investigate this possibility, a second set of analyses was conducted after dividing students into groups according to their final grade (see Figure 1). A repeated measures ANOVA examining online and in-class comprehension as a function of final course grade (between-subjects factor) revealed no significant main effect of testing condition, F(1, 96) = 2.67, p = .11, ηp2 = .03, no main effect of final grade, F(4, 96) = .64, p = .64, ηp2 = .03, and no interaction, F(4, 96) = 1.16, p = .34, ηp2 = .05. The most noteworthy, although not statistically significant, result was the performance of students earning an A for a final grade, t(13) = -1.68, p = .12. They scored numerically lower on comprehension questions over material that had been tested online (M = .51, SD = .20) than on material tested in class (M = .63, SD = .22). When these results are taken together, frequent online testing did not provide a significant benefit to long-term comprehension of course content.
Impact of the Testing Manipulation on Exam Performance
Overall, students earned higher scores for online exams (M = .75, SD = .20) than for in-class exams (M = .70, SD = .14), t(124) = 3.35, p = .001. But this general finding does not provide an adequate description of the results. As before, we followed up with an ANOVA in order to examine possible differences between subgroups of students. There was a main effect of testing condition, F(1, 120) = 9.81, p = .002, ηp2 = .08, a main effect of final grade, F(4, 120) = 338.38, p < .001, ηp2 = .92, and both were qualified by an interaction, F(4, 120) = 9.99, p < .001, ηp2 = .25. Paired samples t-tests were used to examine the interaction (see Table 1 for descriptive and inferential statistics). Students earning an A, B, or C scored significantly higher on the online exams than the in-class exams. No effect was found for students earning a D in the course. Students earning an F, by contrast, scored lower on online than in-class exams. We believe this occurred because some students simply forgot to take the online exams, thereby earning zero points. Based on these data, it appears online testing may inflate the grades of some students. However, there are many potential reasons for this inflation, some of which could be controlled. For example, students who score higher on online exams might have better organizational strategies than other students - something we cannot control - or they may have collaborated with classmates - something that can be minimized through the use of randomly selected questions from a large database (Daniel & Broida, 2004).
Impact of the Testing Manipulation on Study Time
At mid-semester, students were asked to estimate the amount of time they had spent studying for each exam. Half of the students had only taken online exams and half had only taken in-class exams. Because online exams assessed half as much content as in-class exams, we multiplied students' average reported study times by two in order to have a fair comparison with study times for in-class exams. An independent samples t-test failed to reveal a significant difference between reported study time for online exams (M = 129 minutes, SD = 98) and in-class exams (M = 108 minutes, SD = 93), t(91) = 1.078, p = .284.
Student Preferences
Descriptive data regarding students' subjective experience of online and in-class exams revealed that 74% of the students preferred online exams. In addition, 83% of students with self-reported test anxiety preferred online exams. Finally, even though the same questions and time constraints were used for online and in-class exams, 75% of students reported that in-class exams were more difficult.
Discussion
The purpose of this study was to examine the ramifications of implementing frequent online exams compared to traditional in-class unit exams. Based on previous research, it was not clear what those ramifications would be. While the actual format of the exam (online vs. paper and pencil) has little or no impact on performance (Alexander et al., 2001), student strategies may differ based on the format, causing a difference in performance. For example, students may adopt different study strategies when they know they have an open-book exam compared to a closed-book exam (Agarwal & Roediger, 2011). Similarly, they may adopt different testing strategies for unsupervised online exams compared to traditional in-class exams; specifically, they might use their notes or textbooks for online exams. The evidence is mixed as to whether or not there is a detrimental effect on comprehension when students look up the answers to questions (cf. Agarwal & Roediger, 2011; Brothen & Wambach, 2001). Our intent was to, at minimum, maintain the academic outcomes associated with in-class testing in our online testing condition. Therefore, we attempted to counteract the effects of suboptimal strategies by using frequent online exams; frequent testing has been shown to enhance learning (Landrum, 2007; McDaniel, et al., 2011). In addition to providing a practical comparison of frequent online testing to traditional testing, we used a within-subjects design to increase internal validity and examined both short-term (individual exam performance) and long-term comprehension (comprehensive exam performance) effects.
Although one must exercise caution when interpreting null results, it appears possible to obtain similar long-term retention outcomes using frequent online exams compared to in-class exams. In addition, if we assume that students used their textbooks and notes for online exams, these results parallel Agarwal and Roediger's (2011) finding that open-book exams do not necessarily harm comprehension. In further interpreting these results, there are two issues to consider. First, although the predominant finding was no effect of testing condition on comprehension, students who earned an A in the course demonstrated a non-statistically significant trend toward lower comprehension in the online testing condition. This potential limitation to online testing merits further examination. Second, in our study the testing manipulation (online vs. in-class) was conflated with exam frequency. This design was deliberate; it reflects the applied nature of the study. We wanted to investigate the effect a practical, but informed, change in exam administration would have on student comprehension. While we could have simply contrasted online and in-class unit exams, we were concerned that the unconstrained nature of online exams would have a negative impact on student study habits. Students do not prepare as much for an unsupervised online exam as they would for an in-class exam (cf. Agarwal & Roediger, Experiment 2); we hoped that more frequent, but otherwise equivalent, testing would counteract this tendency. Student self-reported study times indicate that we were successful in this regard, as there was no significant difference in self-reported study times for online and in-class exams.
Another concern with online testing is that the assessment might be less valid than that provided by in-class testing; for instance, online exam scores might not be an accurate reflection of student knowledge. One result of this could be grade inflation. When we compare online to in-class exam scores, on average, "A students" scored 7% higher, "B students" scored 11% higher, and "C students" scored 6% higher on online exams. Although these increases would not affect the letter grade for A students, they could impact the letter grades for B and C students. We interpret this as grade inflation because the higher online scores were not associated with any increase in long-term comprehension of the same material. This type of grade inflation could be controlled by reducing the point values of online exams and including an in-class comprehensive final exam that counts toward the final course grade like a typical exam.
Although there are many reasons why online and in-class exam scores might differ, one of the most troubling explanations would be that these students were more likely to cheat (e.g., collaborate with other students to gain an unfair advantage).According to Daniel and Broida (2004), typical cheating practices reported by students include sharing quizzes and looking up answers in the textbook.Fortunately, there are ways to minimize cheating beyond the method we employed in this experiment (i.e., time limitations).Cheating behaviors can be reduced by drawing questions from large test banks, limiting the time students could spend on each question (Daniel & Broida, 2004), and by blocking access to other internet resources during the exam.These practices are easily employed within most web-based content management systems.Upon implementation of these measures, Daniel and Broida found no difference between online and inclass quiz performance.In addition, it has been found that student online scores on mastery quizzes correlate with in-class exam scores (Maki & Maki, 2001); this provides additional evidence that online quizzes and exams can provide valid assessments of student learning.
There are several potential benefits to online administration of exams.From a financial standpoint, they reduce costs associated with printing and administration of exams, a savings that only increases with the size of the course.In addition, online testing changes the normal time constraints associated with the classroom, providing the opportunity for repeated testing without sacrificing other instructional activities.From a convenience standpoint, they can allow students more flexibility in scheduling their own exam times; they also allow instructors the ability to conveniently administer make-up exams.Finally, from a subjective standpoint, students simply prefer frequent online exams.Some students in our study even claimed that taking the exam online reduced their test anxiety (c.f., Stowell & Bennett, 2010).Although we do not know why students had this preference, it may come from the increased sense of control, or agency, they have over the testing conditions (as hypothesized by Stowell & Bennett, 2010), or from a less stressful testing situation, or from their perception of the in-class exams being more difficult.Of course, it is always possible that they simply prefer online exams because many of them performed better on those exams, perhaps with the aid of notes or textbooks.
Our goal was to provide a practical means for achieving equivalent, or better, educational outcomes under the pressures of increasing course enrollments.While we did not see better educational outcomes with frequent online testing, we did not see a detriment to educational outcomes.Even so, we acknowledge that technological aids are not without cost.Online course management systems can be time consuming to use (Brewster, 1996).Not only do instructors have to configure the system, they often have to manage technological issues encountered by students.For example, in running this study, students would occasionally ask to schedule a make-up exam because of computer-related failures.These issues are compounded if there is a system-wide failure (e.g., downed server) during an examination period.In our study the number of students requesting an online testing accommodation was minimal (approximately four requests were received for each exam, a low number considering the 139 student-enrollment).Thus in this situation, the time saved via the content management system overcame the time cost associated with responding to students' technical issues.
In conclusion, frequent online exams can serve as a viable alternative to traditional in-class exams. Not only is this testing technique practical, frequent online testing in this study was shown to impose little, if any, cost to long-term comprehension.
Figure 1 .
Figure 1. Comprehension of material originally tested online or in class. Gray bars represent average student performance on the comprehensive final for questions that had originally been tested online, while the black bars represent performance on questions that had originally been tested in class. Error bars represent the standard error of the mean. | 6,059.2 | 2015-12-30T00:00:00.000 | [
"Education",
"Computer Science"
] |
Extension of Fermat’s last theorem in Minkowski natural spaces
Minkowski natural (N + 1)-dimensional spaces constitute the framework where the extension of Fermat's last theorem is discussed. Based on empirical experience obtained via computational results, some hints are given about the extension of Fermat's theorem from (2 + 1)-dimensional Minkowski spaces to (N + 1)-dimensional ones. Previous experience permits conjecturing that the theorem can be extended to (3 + 1) spaces; new results allow doing the same in (4 + 1) spaces, with an anomaly present there but difficult to find in higher dimensions. In (N + 1) dimensions with N > 4 there appears an increased difficulty in finding Fermat vectors; a possible source of such an obstacle is discussed, separately from the combinatorial explosion associated with the generation of natural vectors of high dimension.
Introduction
The Boolean hypercube structure, natural vector spaces, Minkowski spaces, and the definition of generalized scalar products have permitted the description of a large set of applications, which can be applied to various chemical problems in general, but mainly those associated with QSPR; see, for example, references [1][2][3] for recently published papers on this subject. The present paper constitutes the theoretical part of a project encompassing a time-consuming computational effort and can be broadly located within the set of mathematical applications to chemistry and physics.
Three previous papers have been devoted to the problem of extending Fermat's last theorem. The initial one was almost purely theoretical [4] and set up the problem; the second paper was backed by computational information [5] and constituted a new step in the extension of Fermat's theorem; and the third, recently published, presented an extended supercomputational framework to cope with the problem as far as possible [6].
Results of this last paper permitted the conjecture of a behavior in three-dimensional natural vector spaces similar to the property of vectors in two-dimensional natural spaces that leads to Fermat's last theorem.
The behavior of Fermat's theorem in higher-dimensional natural spaces has not yet been discussed with the aid of the information gathered from various computational sources. The present study, using a sufficiently large set of varied dimensions, will discuss the number of Fermat vectors found and the conjectures one can formulate about the extension of Fermat's last theorem to higher dimensions.
The existence of Fermat vectors in any dimension of natural spaces is connected with the p-th order norms of the natural vectors; see references [4,5]. The original Fermat theorem might be seen as a property of natural two-dimensional vectors and the numerical behavior of their Euclidean and higher-order norms. In this case, if one wants to extend Fermat's theorem to any power and dimension, such an endeavor can be explicitly described using generalized natural vector norms.
However, the norm property making some vector a Fermat one, namely that the p-th order natural vector norm equals the p-th power of a natural number, which can be easily associated with N-dimensional Euclidean natural spaces, might also be connected to (N + 1)-dimensional Minkowski natural spaces. Such spaces have been recently introduced in several papers, see for example references [4,12,15], and will be used systematically here. Therefore, the theoretical body of this paper will be constructed on Minkowski spaces rather than Euclidean spaces. In this form, Fermat's last theorem can be conjectured for Minkowski spaces of arbitrary dimension. Thus, one can define Fermat vectors, compliant with the extended Fermat theorem, as natural Minkowski vectors with zero p-th order norms.
The present work starts with a description of natural spaces and the operations useful for constructing a sound structure in which to study the extension of Fermat's last theorem. Focus is placed on the way to compute natural vector norms of any order, because such mathematical operations constitute the background of the extended Fermat theorem. This preliminary description allows the Minkowski spaces of dimension (N + 1) to be described, which is essential to define Fermat vectors via norms with zero value. After this follows an analysis of the extended Fermat theorem conjectures one can construct from empirical computational experience.
A final discussion about the increasing scarcity of Fermat's vectors when Minkowski space dimensions augment finishes this paper.
Natural Spaces
A natural N-dimensional space is a vector space defined over the natural number set ℕ. It might be axiomatized that natural spaces possess an addition semigroup, lacking subtraction and negative numbers, as the natural number set does. In this sense, natural spaces might also be called semispaces, as vector spaces attached to such an addition structure have, in general, been named in previous literature [7].
Also, natural vector spaces can be easily associated with two characteristics: (1) the inward product [7][8][9][10] operation and (2) the complete sum operator [11]. Such an operation and operator permit the easy definition of powers of a vector, generalized scalar products, and p-th order norms [12]. A resumé of both follows so that the readers can avoid perusing the literature on this subject, and thus the present study becomes self-contained.
Inward product of two natural vectors
The inward product of two vectors constructs another vector of the same space to which the factor vectors belong; the elements of the resultant inward product vector are the products of the corresponding elements of the factor vectors. That is, using Dirac's bra notation for row vectors ⟨a| and ⟨b|, the inward product ⟨p| = ⟨a| * ⟨b| between two vectors is defined elementwise. The row vector has been chosen for ease of writing, but everything could be described in a column vector space with Dirac's ket notation, |a⟩ = ⟨a|^T, where the superscript T means transposition of the row vector into a column one. Such an inward product, which allows natural vector spaces to behave like the natural number set, has previously been named the diagonal, Hadamard, or Schur product.
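As a minimal sketch (the original display equation is lost in extraction, and the vector names here are chosen for illustration), Eq. (2) presumably reads:

```latex
% Inward (Hadamard/Schur) product of two natural row vectors -- illustrative reconstruction
\[
\langle a| = (a_1,\dots,a_N), \qquad \langle b| = (b_1,\dots,b_N)
\quad\Longrightarrow\quad
\langle p| = \langle a| * \langle b| = (a_1 b_1,\; a_2 b_2,\; \dots,\; a_N b_N).
\]
```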
The inward product of two vectors behaves like the product of scalars: it is associative, commutative, and distributive with respect to the vector sum. It can be extended without effort to matrix-vector spaces. Obviously enough, inward products can involve as many vector factors as needed.
No other natural vector properties are needed for the definition of Fermat vectors.
Complete sum of a vector
The complete sum (of the elements) of a vector, ⟨⟨a|⟩, can be defined as a linear operator acting on a vector and yielding a scalar: the sum of all the vector elements. It is trivial to show that the complete sum of a sum of vectors equals the sum of their complete sums, and that scalar factors can be taken out of the operator, showing that the complete sum operator is linear.
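A sketch of the complete sum operator and its linearity, reconstructed here from the prose (the original Eqs. (3)-(5) are not preserved), might read:

```latex
% Complete sum of a vector and its linearity -- illustrative reconstruction
\[
\langle\langle a|\rangle = \sum_{I=1}^{N} a_I, \qquad
\langle\langle a + b|\rangle = \langle\langle a|\rangle + \langle\langle b|\rangle, \qquad
\langle\langle \lambda a|\rangle = \lambda\,\langle\langle a|\rangle .
\]
```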
Second-order scalar product
The inward product of two vectors and the complete sum operator can be used together to redefine the scalar product of two vectors which, owing to the possibility of describing higher-order products of this kind using the same operations, will be named a second-order scalar product. Using definitions (2) and (3), a second-order scalar product is immediately defined as the complete sum of the inward product of the two vectors, as sketched below.
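In the notation used above (chosen for illustration), the second-order scalar product of Eq. (6) presumably takes the form:

```latex
% Second-order scalar product: complete sum of the inward product of two vectors
\[
\langle a \mid b \rangle = \langle\langle\, a * b \,|\rangle = \sum_{I=1}^{N} a_I\, b_I .
\]
```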
p-th order scalar product
One can now consider the inward product of p vectors as another vector belonging to the same vector space; applying the complete sum operator to it, Eq. (7) defines a p-th order scalar product involving p vectors.
p-th order power of a vector
In the same manner as defining higher-order inward products, one can use the repeated inward product of a vector with itself, constructing in this way the p-th natural power of a vector.
p-th order norm of a natural vector
The previous Eqs. (3) and (8) permit the definition of the p-th order norm of a vector, N_p(⟨a|), as the complete sum of its p-th inward power; see the sketch below. From the point of view of the existence of Eq. (9), natural spaces can also be associated with Banach spaces, where a set of p-th order norms is well-defined.
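Combining the repeated inward product with the complete sum operator, the p-th power of a vector and its p-th order norm (Eqs. (8)-(9), reconstructed from the prose with illustrative notation) can be sketched as:

```latex
% p-th inward power of a vector and the resulting p-th order norm
\[
\langle a|^{[p]} = \underbrace{\langle a| * \langle a| * \cdots * \langle a|}_{p\ \text{factors}}
               = (a_1^{\,p},\, a_2^{\,p},\, \dots,\, a_N^{\,p}),
\qquad
N_p(\langle a|) = \langle\langle\, a^{[p]} \,|\rangle = \sum_{I=1}^{N} a_I^{\,p} .
\]
```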
Minkowski natural spaces and p-th order vector norms.
From the Euclidean structure of natural vector spaces, as described in the previous paragraphs, an (N + 1)-dimensional Minkowski natural space over ℕ can be easily constructed. For more information, readers are referred to reference [12]. A Minkowski space is an (N + 1)-dimensional natural space where a metric vector ⟨g|, associated with the space norms, can be constructed from the N-dimensional unity vector ⟨1| = (1, 1, … , 1) supplemented with an additional element of opposite sign.
In this manner, having defined the metric vector (10), the p-th order norm M_p(⟨f|) of a vector in a Minkowski natural space can be easily redefined. This is performed using the complete sum of an inward product, as in Eq. (9), but including the metric vector (10) in the definition; a sketch is given below.
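Assuming the sign convention that makes the zero-norm definition of Fermat vectors below consistent (an assumption, since the original Eqs. (10)-(11) are not preserved), the metric vector and the Minkowski p-th order norm can be sketched as:

```latex
% Minkowski metric vector and Minkowski p-th order norm of an (N+1)-dimensional natural vector
\[
\langle g| = (1, 1, \dots, 1; -1), \qquad
\langle f| = (f_1, \dots, f_N; f_{N+1})
\quad\Longrightarrow\quad
M_p(\langle f|) = \langle\langle\, g * f^{[p]} \,|\rangle
              = \sum_{I=1}^{N} f_I^{\,p} \;-\; f_{N+1}^{\,p} .
\]
```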
Fermat vectors and Fermat's last theorem.
A vector ⟨f| belonging to an (N + 1)-dimensional Minkowski natural space can be called a Fermat vector of order p (or of p-th order) whenever its p-th order Minkowski norm vanishes, that is, whenever Eq. (12) holds.
Using the definition of Fermat vectors via Eqs. (11) and (12), the so-called Fermat's last theorem implies that Eq. (13) is accomplished: no Fermat vector of order p > 2 exists in a (2 + 1)-dimensional Minkowski natural space.
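A sketch of the Fermat-vector condition and of the classical theorem restated in this notation (Eqs. (12)-(13), as the prose suggests) is:

```latex
% Fermat vector of order p (zero Minkowski norm) and the classical theorem in this notation
\[
M_p(\langle f|) = 0 \;\Longleftrightarrow\; \sum_{I=1}^{N} f_I^{\,p} = f_{N+1}^{\,p},
\qquad
\text{Fermat's last theorem: no such } \langle f| \text{ exists in } (2+1) \text{ dimensions when } p > 2 .
\]
```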
Details of the computational search of Fermat vectors.
Calculations in the search for Fermat vectors have been performed within a discrete set of 2^M natural numbers, associated with the set S_M = {0, 1, 2, … , 2^M − 1} ⊂ ℕ, related to the decimal representation of the bit strings of an M-dimensional Boolean hypercube; see, for example, reference [16].
One can refer to the number of Fermat vectors found in a natural number batch S_M, using a Minkowski vector space dimension (N + 1) and an order p, with the symbol #(M, N, p), which is obtained computationally. Obviously enough, the computational time strongly increases as the hypercube dimension increases. In the computations presented in reference [6] and here, the maximal value of the hypercube dimension has been set to M = 15.
To ease the calculations involving large powers of the elements of the natural set S_M, a vector containing the p-th powers of this natural set is computed beforehand; by S_M^[p] one can denote the set of all these natural numbers raised to the p-th power. The search for Fermat vectors then takes place in a subset of the natural vector space whose vectors are built from these powers of natural numbers, as defined by Eq. (14), that is, with the vectors of Eq. (15). Still, when large Minkowski space dimensions are tested, the combinatorial explosion of all the possible vectors might raise the computing time to unreachable values, at least with the authors' limited number of available computers; the evolution of computing hardware will perhaps permit extended information on this subject soon. The vectors described through Eqs. (14) and (15) and tested as candidate Fermat vectors are constructed without containing either the zero or the unit value as elements.
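The construction of the candidate set (Eqs. (14)-(15), reconstructed here in notation chosen for illustration) can be sketched as:

```latex
% p-th powers of the natural batch and the candidate vectors built from them
\[
S_M^{[p]} = \{\, n^{p} \mid n \in S_M,\; n \ge 2 \,\},
\qquad
\langle f^{[p]}| = \big(f_1^{\,p},\, \dots,\, f_N^{\,p};\, f_{N+1}^{\,p}\big),
\quad f_I \in S_M,\; f_I \ge 2 .
\]
```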
Fermat vectors of order 2.
The classical Fermat theorem, proved by Wiles [13], amounts to the statement of Eq. (16): second-order Fermat vectors exist in (2 + 1) dimensions, while no Fermat vector of order p > 2 exists there. This constitutes another way to express Eq. (13).
To test the second-order first part of the Eq. (16), a large number of computations with Minkowski natural spaces of diverse dimensions have been performed.
Computational results have shown that the expression of Eq. (17), stating that second-order Fermat vectors are found for every tested dimension, stands for a wide range of dimensions. The largest dimension tested in Eq. (17) has been N = 601. Computational results coherent with Eq. (17) suggest that one can conjecture that the equation holds for arbitrary natural space dimensions; this is the same as conjecturing that second-order Fermat vectors exist for any natural Minkowski space dimension.
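In the counting notation #(M, N, p), the computationally supported statement and the resulting conjecture (Eq. (17), as described in the prose) can be sketched as:

```latex
% Eq. (17) as described in the prose: second-order Fermat vectors found at every tested dimension
\[
\forall N \ge 2: \quad \#(M, N, 2) > 0
\qquad (\text{verified computationally up to } N = 601) .
\]
```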
One must, in this context of Fermat vectors of order 2, refer to some Leech and Lorentzian lattices, which correspond to 24-dimensional Fermat vectors of order 2 in our notation [14].
Nevertheless, second-order Fermat N-dimensional vectors represent points, possessing natural coordinates, on the surface of an N-dimensional sphere whose radius is the Minkowski coordinate, the (N + 1)-th element.
Fermat vectors of order 3.
Recent exhaustive supercomputations [6] in Minkowski natural spaces of dimension (3 + 1) have shown a behavior similar to that of the vectors of the lesser dimension (2 + 1). That is, empirical computational evidence allows the conjecture that Eq. (18) holds, indicating that an extended Fermat theorem can be postulated in dimension (3 + 1) in the same manner as in dimension (2 + 1). That might be stated as the fact that Eq. (18) is the higher-dimensional extension of Eq. (16).
However, it is one thing to obtain consistent computational results, and quite another to demonstrate such an extended Fermat theorem conjecture.
Also, in the same way as found for the second-order norms shown in Eq. (17), there are Fermat vectors with null third-order norms in Minkowski natural space dimensions higher than N = 3. That is, one can write another statement, Eq. (19), as an extended conjecture equivalent to the one described by Eq. (17); the largest dimension tested for Eq. (19) has been, in this case, N = 151. In light of the computational experience, one can conjecture that Minkowski natural spaces of dimensions (2 + 1) and (3 + 1) behave similarly concerning the existence and absence of Fermat vectors of orders 2 and 3.
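By analogy with the second-order case, the conjecture of Eq. (19) described in the prose can be sketched as:

```latex
% Eq. (19) as described in the prose: third-order Fermat vectors beyond N = 3
\[
\forall N \ge 3: \quad \#(M, N, 3) > 0
\qquad (\text{verified computationally up to } N = 151) .
\]
```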
Fermat vectors of order p > 3.
In some cases, the search for Fermat vectors of higher orders has been as exhaustive as in the lower-order cases commented on in the previous paragraphs. In other cases, the search has not been performed so extensively, because of the combinatorial difficulty the calculations present. Some aspects of the computation of Fermat vectors, which also apply to the previously discussed dimensions and orders, are given next.
Computational details
Besides the supercomputing search performed according to reference [6], some tests have been carried out on i7 and i9 desktop computers through Python code.
Such code does not use the whole possible set of natural vectors as defined in Eqs. (14) and (15), but a randomly chosen subset amounting to a selected percentage of the total number of candidate vectors defined in Eq. (15). Such a procedure permits obtaining Fermat vectors within a reasonable computational time. Results from these random calculations have coincided with the exhaustive tests performed in a supercomputer environment; such coincidences can be seen as a way to empirically validate the proposed conjectures. A minimal illustrative sketch of a search of this kind is given below.
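The authors' Python code is not reproduced in the text; the following is a minimal illustrative sketch of an exhaustive/random search of this kind. The function name, parameters, and sampling strategy are chosen here for illustration only and are not taken from the original implementation.

```python
import random
from itertools import combinations_with_replacement

def fermat_vectors(M, N, p, sample_frac=1.0, seed=0):
    """Search for order-p Fermat vectors in an (N + 1)-dimensional
    Minkowski natural space, with components drawn from 2 .. 2**M - 1.

    A Fermat vector (f_1, ..., f_N; f_{N+1}) satisfies
        f_1**p + ... + f_N**p == f_{N+1}**p   (zero Minkowski p-th order norm).
    When sample_frac < 1, only a random fraction of the candidate
    combinations is tested, mimicking the random-subset strategy
    described in the text.
    """
    rng = random.Random(seed)
    values = range(2, 2 ** M)                     # zero and one excluded, as in the paper
    powers = {v: v ** p for v in values}          # precomputed p-th powers of the batch
    power_to_root = {v ** p: v for v in values}   # inverse lookup for the (N+1)-th component

    found = []
    for combo in combinations_with_replacement(values, N):
        if sample_frac < 1.0 and rng.random() > sample_frac:
            continue                              # random subset of candidate vectors
        total = sum(powers[v] for v in combo)
        root = power_to_root.get(total)
        if root is not None:
            found.append(combo + (root,))         # Fermat vector: natural coordinates, zero norm
    return found

# Order-2 Fermat vectors in (2 + 1) dimensions are Pythagorean triples, e.g. (3, 4, 5):
print(fermat_vectors(M=4, N=2, p=2))
```

The exhaustive variant (sample_frac = 1) quickly becomes infeasible as M, N, and p grow, which illustrates the combinatorial explosion discussed in the text.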
On the infinite cardinality of Fermat's vectors
Note that, when a Fermat vector ⟨f| of p-th order is found, this means that there exists an infinite number of Fermat vectors, as all the homothetic vectors ⟨h| = k⟨f|, obtained by scaling with any natural number k, also possess a null p-th order norm.
The case of dimension (4 + 1) : meta-Fermat vectors
It would be interesting to find a comparable behavior for higher-dimensional Minkowski natural spaces, similar to the results obtained in the previously described paragraphs concerning lesser dimensions.
To gain some hint about the possibility of extending the Fermat theorem to higher dimensions, several computational tests have been performed within Minkowski spaces of dimension (4 + 1). The obtained computational results permit the general statement of Eq. (21) to be conjectured. Eq. (21) perhaps shows that in Minkowski natural spaces of greater dimension there could appear anomalous meta-extensions of Fermat's theorem, like the one found in the (4 + 1)-dimensional Minkowski natural space, where Fermat vectors of dimension (4 + 1) and order 5 have been found.
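The precise form of Eq. (21) is not preserved in the extracted text; consistently with the surrounding discussion, it presumably records the existence of the anomalous order-5 Fermat vectors in (4 + 1) dimensions, something like:

```latex
% Presumed content of Eq. (21): anomalous order-5 Fermat vectors in (4 + 1) dimensions
\[
\#(M, 4, p) > 0 \quad \text{for } p = 2, 3, 4, 5 .
\]
```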
The higher dimensions case.
Computations with a large number of vectors and diverse Minkowski natural space dimensions show that the meta-Fermat vector extension of order 5 found in dimension (4 + 1) does not easily appear in the tested higher dimensions ([N > 4] + 1). On the contrary, as the Minkowski natural space dimension grows larger, it becomes more difficult to find higher-order Fermat vectors.
In light of the large set of numerical tests performed, one can state the following, though: computations with larger-order Minkowski norms have yielded no Fermat vectors bearing a zero norm. However, such a result does not signify that they do not exist; simply, under the computational constraints used, no vector of this kind was found.
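Eq. (22) itself is lost in the extraction; judging from this paragraph and the later discussion, it presumably records the purely negative computational finding for higher orders, roughly:

```latex
% Presumed content of Eq. (22): a computational observation, not a conjecture
\[
\#(M, N, p) = 0 \quad \text{for all tested } N \text{ and } p > 5,\; M \le 15 .
\]
```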
It looks as if the zero norms of order 5 constitute some limit which the numerical computational analysis performed has been unable to surpass.
As a consequence, it is not advisable to transform this last finding, contained in Eq. (22), into a conjecture, as nothing rules out obtaining, further ahead in time, zero norms for larger orders and bigger-dimension spaces.
It is a matter of computer power and calculation costs to find out. Alternatively, one can rely on the plausible development of a theoretical structure able to explain the detailed nuances of the existence of Fermat vectors in complicated vector landscapes, similar to the one constructed by Wiles [13] for the (2 + 1)-dimensional case.
Resumé
Fermat's last theorem, originally set up in spaces of (2 + 1) dimensions, seems extensible to Minkowski natural spaces of dimension (3 + 1). Also, in dimension (4 + 1), it appears that Fermat vectors with zero norms up to order 5 can be found. Therefore, a conjecture extending a Fermat-type theorem up to this anomalous order number seems conceivable.
Higher dimensions provide Fermat vectors of lower order in abundance but, for instance, dimension (5 + 1) provides a scarce amount of Fermat vectors of order 5. Such scarcity makes it difficult to extend the Fermat theorem up to this dimension, although it seems plausible that it can be so.
Higher dimensions ([N > 5] + 1) produce Fermat vectors, but only of orders ≤ 5, a fact which thwarts the possibility of extending a Fermat theorem conjecture upwards from Minkowski natural spaces of dimension (5 + 1), unless high-speed computations better than the ones used here can be performed in the future.
Fermat hypersurfaces and the scarcity of Fermat vectors
The reason for the difficulty of finding Fermat vectors of higher orders and dimensions is not at all easy to explain, though. The culprit of this complexity can be associated with many factors, which will perhaps appear more clearly when extensive computations can be developed, dedicated to shedding light on the search for Fermat vectors of higher dimensions and orders, whenever one can overcome the combinatorial explosion of the generated natural vectors within larger dimensions and the corresponding bigger orders.
Perhaps, to understand a little bit better the problem we are facing, one might use the fact that vector powers in Minkowski natural spaces represent points, bearing coordinates made solely of natural numbers, but contained within a high dimensional surface, mostly defined over the rational (or real) field.
A Fermat hypersurface might be described within a Minkowski semispace by Eq. (23), with the parameter r taken as a constant; a sketch is given below. It must be noted that, when the order is p = 2, the hypersurface of Eq. (23) represents an N-dimensional sphere of radius r.
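Reconstructed from the prose (the original display of Eq. (23) is not preserved), the Fermat hypersurface of order p and radius r presumably reads:

```latex
% Fermat hypersurface of order p and radius r; for p = 2 it is an N-dimensional sphere
\[
\sum_{I=1}^{N} x_I^{\,p} = r^{\,p}, \qquad r \ \text{constant} .
\]
```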
The probability that a point lying on a higher-dimensional, higher-order rational (or real) hypersurface coincides with coordinates made of powers of natural numbers seems to decrease significantly as the dimension and the order of the hypersurface described in Eq. (23) increase.
That is, associating the right-hand side of Eq. (23) with a function of the vector argument for a constant parameter r, one can say there is an infinite number of vectors ⟨x|, with positive rational elements, fulfilling Eq. (24). Now, a Fermat vector, like the one defined in Eq. (12), will fit into such a function as in Eq. (24), taking into account the use of a constant natural parameter r, in the same way as the vector defined in Eq. (25). This fact plausibly produces a dramatic scarcity of Fermat vectors when the space dimension and the order of the associated norms increase, an empirical fact that has been computationally observed.
When, in computations searching for Fermat vectors, a parameter r, a power p, and a vector ⟨f| are found fulfilling Eq. (25), one can say that such an occurrence corresponds to finding a unique natural point fulfilling the function (24).
In other words, as far as we know, each of the large number of obtained Fermat vectors corresponds to a unique natural position on some Fermat hypersurface of order p and radius r, defined as shown in Eqs. (23) or (24).
This can perhaps explain the observed computational fact that Fermat vectors, in case they exist, become more and more scarce when the Minkowski space dimension and the norm order become larger than 5.
Some test computations describing Fermat vectors associated with the same hypersurface.
Some extra computational searches for Fermat vectors have been performed using the random algorithm described before in Sect. 8.1, to illustrate the nature of Fermat vectors as natural points on a Fermat hypersurface.
Interesting results corresponding to Fermat vectors possessing the same parameter r can be found, as explained below, in various computation batches attached to the same space and power, indicating the nature of Fermat vectors as distinct unique natural points belonging to the same Fermat hypersurface attached to the constant parameter r. For batches associated with #(7, 4, 3), several vector triples having the same parameter r have been found. For batches like #(7, 3, 2), even some quadruples of Fermat vectors possessing the same parameter r have resulted from the computation.
It is interesting to note that #(7, 10, p) for p = 6, 7, 8 yields, after several days of computation, no Fermat vectors. However, using p = 3, 4, 5, apart from the existence of Fermat vectors at every value of p, many pairs of Fermat vectors with the same parameter r did appear.
A test in large dimensions has also been performed in the form #(7, 25, p) with p = 4, 5, resulting in a scarce number of Fermat vector pairs with the same parameter r for the lower power p = 4, but no Fermat vectors at all for the higher power p = 5. When both dimension and power increase, the number of possible candidate vectors increases to an exceptionally large amount, even if the generating Mersenne power is not too big.
This behavior of Fermat vectors can be used to explain empirically the scarcity of such vectors as the parameters M, N, and p in #(M, N, p) become larger.
Conclusions
As a result of exhaustive computations to find p-th order Fermat vectors in (N + 1)-dimensional natural Minkowski spaces, one can empirically extend Fermat's last theorem beyond (2 + 1)-dimensional spaces.
Certainly, such findings are empirical; therefore, the obtained results must be formulated in the form of a conjecture.
One can describe a plausible (N + 1)-dimensional Fermat conjecture in the form provided by the equations discussed above. That a unique expression cannot be used is due to the anomalous behavior encountered at dimension (4 + 1).
Something similar could be found in higher dimensions. However, for the moment, the computationally explosive nature of the combinatorial problem, associated with the search for Fermat vectors of higher dimensions, has not provided another comparable result.
Acknowledgements One of us (R. C.-D.) wishes to express his gratitude to Prof. Carlos Perelman for his helpful comments about this manuscript.
Funding Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Conflict of interest
The authors state that there is no conflict of interest related to this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Mathematics"
] |
Immune-Monitoring Disease Activity in Primary Membranous Nephropathy
Primary membranous nephropathy (MN) is a glomerular disease mediated by autoreactive antibodies and is the main cause of nephrotic syndrome among adult patients. While the pathogenesis of MN is still controversial, the detection of autoantibodies against two specific glomerular antigens, phospholipase A2 receptor (PLA2R) and thrombospondin type 1 domain containing 7A (THSD7A), together with the beneficial effect of therapies targeting B cells, has highlighted the main role of autoreactive B cells in driving this renal disease. In fact, the detection of PLA2R-specific IgG4 antibodies has resulted in a paradigm shift regarding the diagnosis as well as a better prediction of the progression and recurrence of primary MN. Nevertheless, some patients do not show remission of the nephrotic syndrome or rapidly recur after immunosuppression withdrawal, regardless of the absence of detectable anti-PLA2R antibodies, thus highlighting the need for other immune biomarkers for MN risk stratification. Notably, the exclusive evaluation of circulating antibodies may significantly underestimate the magnitude of the global humoral memory immune response, since it may exclude the role of antigen-specific memory B cells. Therefore, assessing PLA2R-specific B-cell immune responses in a functional manner using novel technologies may provide novel insight into the pathogenic mechanisms of B cells triggering MN as well as refine the current immune-risk stratification based solely on circulating autoantibodies.
INTRODUCTION
Primary membranous nephropathy (MN) is an autoantibody-mediated glomerular disease that represents one of the leading causes of nephrotic syndrome in adults (1). MN is characterized by the deposition of anti-podocyte-targeted IgG antibodies on the subepithelial layer of the glomerular capillary wall. Autoantibody deposition leads to thickening of the glomerular basement membrane, complement activation, and glomerular capillary injury with consequent proteinuria. In ∼25% of patients, MN is classified as "secondary," due to the contemporary detection of a causative disease, such as malignancies, infections, drug reactions, or autoimmune diseases including systemic lupus erythematosus (2, 3). The natural history of the untreated disease is variable: spontaneous complete remission of primary MN is observed in approximately 30-40% of patients (4, 5), whereas 30% of cases develop end-stage kidney disease (ESKD), generally over 10-15 years (6, 7). In kidney transplant recipients, MN relapses appear in 10-45% of cases (8-12) and occur as a de novo disease in about 2% of recipients (13, 14).
Current understanding of MN pathophysiology comes from studies in rodent models. In 1959, Heymann et al. (15) described a model of MN, now defined as active Heymann nephritis, which was induced by immunizing Lewis rats with intraperitoneal injections of crude kidney extracts, together with complete Freund's adjuvant. This resulted in a disease characterized by subepithelial immune complexes similar to human MN. Subsequent in vivo and in vitro studies have led to a better understanding of how subepithelial immune deposits lead to podocyte injury and proteinuria. Complement-mediated cytotoxicity plays a major role in the disease pathogenesis, especially the terminal complement complex C5b-9 (membrane attack complex-MAC), which is detectable in the urine of patients with MN and considered a marker of podocytes injury (16)(17)(18)(19)(20). Data suggest that in primary MN, complement cascade is firstly activated by the mannose binding lectin pathway, leading to the formation of C3 deposits in the subepithelial space along with MAC on podocyte membranes (21)(22)(23).
The identification of the cell surface protease neutral endopeptidase (NEP) as a target podocyte autoantigen in a newborn with MN represented a cornerstone in our understanding of MN pathophysiology. Pierre Ronco and Hanna Debiec described the case of a mother genetically deficient in NEP who had given birth to an infant who developed antenatal nephrotic syndrome (24). During a previous pregnancy, the mother had generated circulating anti-NEP antibodies, which crossed the placenta and targeted NEP on the fetal kidney during her subsequent pregnancy, leading to in situ immune deposits. Therefore, NEP represents the first podocyte protein demonstrated to be a target antigen in human MN (25).
A 2019 study (33) showed that, in MN patients without detectable anti-PLA 2 R or anti-THSD7A autoantibodies, exostosin1/exostosin2 could represent target antigens. The authors performed mass spectrometry on laser microdissected glomeruli and immunohistochemistry on kidney biopsy of 22 MN patients, including 7 with anti-PLA 2 R antibodies and 15 without, detecting exostosin1/exostosin2 expression uniquely in five cases without detectable circulating anti-PLA 2 R antibodies. In a larger cohort of 209 MN patients negative for circulating anti-PLA 2 R antibodies, immunohistochemistry revealed bright granular glomerular basement membrane staining for exostosin 1/exostosin 2 in 16 cases (33). Eleven of the 16 cases showed signs of lupus nephritis or autoimmunity, suggesting that exostosin 1/exostosin 2 may represent a potential marker of a specific subtype of MN, most commonly associated with autoimmune diseases (33).
Altogether, these mechanistic findings have highlighted the key role of B cells in the pathogenesis of MN, both as autoantibody producing cells (34) and as antigen presenting cells (35), thus providing the basis for B-cell target therapies (36)(37)(38)(39). However, response to such therapies remains unpredictable and the identification of subjects who would develop spontaneous remission (in whom immunosuppression could be avoided) is still very challenging. The discovery of MN-specific antigens has allowed the development of many diagnostic and prognostic serologic tests and optimal non-invasive biomarkers for monitoring disease activity. Nevertheless, while the assessment of autoantibodies provides useful information about the humoral memory immune response, other assays are needed to better immune-risk stratify patients and to tailor treatment in a personalized fashion.
CURRENT CLINICAL MN BIOMARKERS: SERUM CREATININE, URINARY PROTEIN AND KIDNEY BIOPSY
According to the most recent Controversies Conference on the KDIGO guidelines (39), proteinuria and serum creatinine are still considered the gold-standard biomarkers to risk-stratify MN patients. For instance, individuals with subnephrotic proteinuria have excellent long-term renal survival; therefore, immunosuppression is not recommended (39). Conversely, in patients with proteinuria above 4-5 g/24 h, MN prognosis may range from spontaneous remission to development of ESKD.
Urinary markers of renal tubular damage, such as beta-2 microglobulin, N-acetyl-β-D-glucosaminidase (NAG), retinol-binding protein (RBP), kidney injury molecule 1 (KIM-1), and neutrophil gelatinase-associated lipocalin (NGAL), have also been proposed to risk-stratify patients with MN. Yet, the levels of these biomarkers do not seem to correlate with the severity of the disease (40).
Despite its invasive nature, kidney biopsy is still important for the diagnosis of MN, in particular among patients with altered kidney function and evidence of possible secondary causes (41), but the capacity of histological lesions to predict outcomes or response to therapy is limited at best. Hence, new approaches to better risk-stratify MN patients are highly needed in the clinical setting.
TARGET ANTIGENS IN MN
Over the last decade, discovery of target podocyte antigens and the development of commercial assays for the detection of serum anti-PLA 2 R and anti-THSD7A autoantibodies has revolutionized the traditional algorithms for diagnosis and management of MN, particularly due to their high specificity for disease diagnosis (26,27). Such autoreactive antibodies recognize the target conformational epitopes on the membrane protein expressed on glomerular podocytes under non-reducing conditions and are predominantly of the IgG4 subclass. Importantly, both autoantibodies are emerging as clinical biomarkers to predict outcome in MN patients.
Thrombospondin Type 1 Domain Containing 7A (THSD7A)
THSD7A is a large transmembrane glycoprotein expressed by podocytes. In Europe and the United States only 3% of MN subjects express anti-THSD7A autoantibodies (predominantly IgG4), while this figure increases to 9% in Japan (27, 31, 42, 43). Importantly, anti-THSD7A antibodies induce an MN-like pattern of disease when injected into mice (29). In a recent retrospective study, Zaghrini et al. (44) developed a new ELISA assay to detect THSD7A-specific antibodies: levels of anti-THSD7A autoantibodies correlated with disease activity and with response to treatment. Also, patients with high titers at baseline had a poorer clinical outcome. An association between anti-THSD7A autoantibodies and malignancies has also been reported (42, 43, 45), but this needs to be better clarified in larger, multicenter studies.
Phospholipase A2 Receptor Type 1 (PLA 2 R)
The M-type phospholipase A2 receptor (PLA2R) is one of four members of the mannose receptor family in mammals (46). PLA2R is a multifunctional receptor for soluble phospholipase A2 (sPLA2), which is described as a pro-inflammatory enzyme; PLA2R acts as a scavenger receptor to remove the secreted PLA2 enzyme (47). Despite this receptor being highly expressed by human podocytes as well as by neutrophils and alveolar type II epithelial cells (26, 48, 49), autoantibodies against PLA2R exclusively induce nephrotic syndrome without apparent impairment of other organs.
The complexity of the PLA 2 R structure is illustrated by the identification of distinct immunogenic PLA 2 R epitopes, including a cysteine-rich domain (CysR), a fibronectin type II domain and eight distinct C-type lectin domains (CTLD1-8) (50), which are dependent on the protein conformation (26). Main antigenic epitopes recognized by anti-PLA 2 R antibodies have been recently identified and reported to be sensitive to reducing agents, thus confirming that conformational structure is of great importance in PLA 2 R epitopes (51,52). A further dominant epitope of PLA 2 R (P28mer) was recently identified being also a dominant epitope of THSD7A in the N-terminal domain, suggesting that this shared motif could be involved in the initial B-cell activation in MN (53).
GENETIC SUSCEPTIBILITY AND HUMORAL AUTOIMMUNE RESPONSE IN MN
A genetic predisposition for MN was initially suggested by associative evidence linking variants in the HLA locus with the risk of developing MN (54). Years later, familial case reports of MN were also described (55).
Several genome-wide association studies (GWAS) have recently associated risk alleles in HLA genes with an increased risk of MN. Stanescu et al. (56) defined the association between an HLA-DQA1 allele and MN in Caucasian individuals, suggesting that the interaction between sequence variations in immune proteins and glomerular components may explain a trigger-target model of disease development. Such an interaction between PLA2R and HLA-DQA1 variants was also studied in an Asian cohort with similar results (57). More studies confirmed this association in different cohorts of MN patients (58-61), but the related mechanisms remain unknown.
The possible role of specific HLA alleles in MN was further investigated in two recent studies. Cui et al. (62) genotyped the HLA-DRB1, DQA1, DQB1, and DPB1 genes in 261 primary MN patients and 599 healthy controls. These investigators confirmed that risk alleles of HLA-DQA1 and PLA2R are significantly associated with susceptibility to MN. In particular, the authors showed that these risk alleles are associated with the presence of circulating anti-PLA2R antibodies as well as with increased expression of PLA2R in the glomeruli. The authors also detected the classical DRB1*1501 and DRB1*0301 alleles, which showed significant independent effects on the risk of MN among Han Chinese. Le et al. (63) sequenced the HLA locus in 99 anti-PLA2R-positive MN subjects and 100 healthy controls. Again, the association between DRB1*1501 and anti-PLA2R-positive MN was demonstrated, and DRB3*0202 was suggested as a new risk allele for MN. These two alleles were subsequently confirmed in an independent cohort of 285 controls and 293 cases. Although DRB1*1502 was not revealed as a risk allele for MN, it was associated with significantly higher levels of anti-PLA2R autoantibodies and a significantly increased risk of progression to ESKD (64).
Altogether, GWAS has provided robust data about the genetic susceptibility to MN, suggesting that genetic tests could become a non-invasive tool to risk-stratify MN patients (65), although more data testing these associations in different ethnic groups are needed (66).
Detection of PLA 2 R Antigen in the Kidney
Anti-PLA 2 R IgG4 autoantibodies are detected in the subepithelial immune deposits using immunofluorescence or immunohistochemistry in patients with primary MN (67). In normal kidneys or other glomerular diseases, the PLA 2 R antigen is weakly expressed on the podocyte surface (67). Generally, a strong association between glomerular PLA 2 R staining and circulating anti-PLA 2 R antibodies is found (28,60,68), particularly when autoantibody levels are measured at the time of the biopsy assessment (69). However, glomerular PLA 2 R staining is not considered a diagnostic test for active disease, since the positivity of glomerular PLA 2 R staining with undetectable circulating anti-PLA 2 R autoantibodies is unlikely (28,69,70) and may reflect an immunologically inactive disease as a positive PLA 2 R antigen can persist for weeks or months after remission (67).
Detection of Serum Anti-PLA 2 R Autoantibodies as a Diagnostic Tool
Western blotting was initially performed to detect anti-PLA2R (26) and anti-THSD7A (27) autoantibodies, but this test is inadequate for routine clinical use. The first commercially available assay for serum anti-PLA2R autoantibody detection was an indirect immunofluorescence assay (CBA-IFA; Euroimmun, Luebeck, Germany), based on a semi-quantitative determination and therefore not ideal for monitoring therapeutic response and disease progression. Most clinical laboratories routinely use an ELISA-based assay (Euroimmun), because it is able to quantify anti-PLA2R autoantibodies, although this assay is not as sensitive as the CBA-IFA assay. Conversely, CBA-IFA anti-PLA2R immunoassay detection may be considered only when the diagnosis of PLA2R-associated MN is strongly suspected but the ELISA test is negative. The most recent diagnostic assay is a laser bead immunoassay (ALBIA; Mitogen Advanced Diagnostics Laboratory, Calgary, Canada), which allows a sensitive and quantitative detection of these autoantibodies and permits the detection of different molecules such as antibodies, complement, or cytokines. A comparison between the CBA-IFA, ELISA, and ALBIA platforms showed similar capacity across the different tests to detect anti-PLA2R autoantibodies (71).
Serum Anti-PLA2R Autoantibodies as a Risk-Prognostic Biomarker of MN
Different groups have suggested the use of anti-PLA2R autoantibodies to predict spontaneous remission of MN. Hofstra et al. (72) reported that spontaneous remission is inversely related to high antibody titers measured up to 6 months after biopsy assessment. Similarly, Timmermans et al. (73) showed that, among 109 MN patients, subjects with detectable serum anti-PLA2R autoantibodies at the time of biopsy had a lower probability of spontaneous remission than seronegative patients. In a retrospective study including 68 patients with biopsy-proven MN, Jullien et al. (74) reported that spontaneous remission was correlated with low titers of anti-PLA2R autoantibody at the time of biopsy. These data were recently confirmed by a prospective study involving 62 MN patients: complete spontaneous remission was more common in subjects with lower anti-PLA2R autoantibody levels at the time of diagnosis (<40 UI/mL) (75).
Beck et al. (76) evaluated the relationship between changes in serum PLA2R-specific autoantibody levels and the response to therapy with the B cell-depleting antibody rituximab in 35 adult patients with MN. Circulating autoantibodies were detected in 71% of patients at baseline, and levels decreased after rituximab therapy in the majority of them. The reduction of anti-PLA2R autoantibody levels anticipated the decline of proteinuria, and in one particular patient with a relapse of proteinuria, the reappearance of the autoantibody in serum preceded the recurrence of MN. However, proteinuria may persist regardless of the presence of autoreactive anti-PLA2R antibodies, owing to irreversible capillary wall injury that perpetuates albuminuria in the absence of active autoimmunity.
More recently, Ruggenenti et al. (77) investigated the association between treatment effect, circulating anti-PLA2R autoantibodies, and genetic polymorphisms predisposing to antibody production in 132 MN patients with nephrotic-range proteinuria treated with rituximab. Outcomes of patients with or without detectable anti-PLA2R autoantibodies at baseline were similar. However, among the 81 patients with autoantibodies, a lower anti-PLA2R autoantibody titer at baseline and full depletion at 6 months post-treatment strongly predicted remission over a median follow-up period of 30.8 months. In all 25 patients displaying complete remission, remission was preceded by the disappearance of circulating anti-PLA2R autoantibodies, while re-emergence of circulating antibodies predicted clinical disease relapse. Accordingly, a further study involving 30 patients with MN and elevated anti-PLA2R autoantibodies (78) showed that clinical remission was heralded by a reduction in circulating autoantibodies.
Collectively, the above studies and further published data (79-83) suggest that serial measurements of anti-PLA2R autoantibody titers in the serum may help risk-stratify patients, allowing treatment to be personalized and reducing the side effects related to over-immunosuppression.
However, antigen-specific memory B cells may exist and be ready to develop a rapid and effective secondary immune response even in absence of detectable circulating autoantibodies. This suggests that the assessment of the humoral auto-immune response using other cell-based assays may significantly improve the understanding of the effector mechanisms of the disease in patients with primary MN.
PLA 2 R Epitope Spreading and Disease Progression
Epitope spreading is a common immunopathogenic response to self-antigens: the immune response primarily involves the so-defined immunodominant epitope recognized by most autoantibodies, then expands to intramolecular epitopes on the same protein (intramolecular epitope spreading) or to dominant epitopes on neighboring molecules (intermolecular epitope spreading) (84, 85). The result is an increased diversity in the antibody repertoire, leading to a broader overall immune response. Epitope spreading for the CysR epitope of PLA2R has been recognized as an independent risk factor for reduced renal survival (86). In the GEMRITUX (Evaluate Rituximab Treatment for Idiopathic Membranous Nephropathy) randomized controlled trial (87), including a cohort of 58 patients positive for anti-PLA2R-specific autoantibodies randomly treated with rituximab or conservative therapy, epitope spreading strongly correlated with the serum titer of anti-PLA2R autoantibodies. The absence of epitope spreading at onset was an independent predictor of remission at 6 months and at last follow-up (median of 23 months) (88). Of interest, 10 of the 17 patients who had epitope spreading at baseline and were treated with rituximab showed reversal of epitope spreading at 6 months (88). The anti-PLA2R autoantibody titer has been shown to correlate with the degree of epitope spreading (88). Therefore, due to the lack of epitope-specific assays for anti-PLA2R autoantibodies in clinical practice, the total titer of anti-PLA2R autoantibodies could be considered a surrogate of epitope spreading (88).
Table 1 (selected findings on lymphocyte subsets in MN):
• The helper/cytotoxic T-cell ratio was significantly higher at baseline in MN patients than in controls, due to a reduction of the LEU2 cell subset.
• The baseline helper/cytotoxic T-cell ratio was significantly higher in patients achieving remission than in non-responder patients.
• Percentages of switched (IgD−CD27+) and non-switched (IgD+CD27+) memory B cells were higher in MN patients, due to a higher percentage of naïve B cells at baseline.
• Treg percentages were lower in MN patients at baseline.
• After rituximab treatment, responders showed a significantly increased percentage of Treg cells compared with non-responders.
Abbreviations: FSGS, focal segmental glomerulosclerosis; IFN, interferon; HC, healthy controls; IL, interleukin; IF, immunofluorescence; IS, immunosuppression; MCD, minimal change disease; MN, membranous nephropathy; NIAT, non-immunosuppressive antiproteinuric treatment; NK, natural killer; Treg, regulatory T cells.
Non-antigen-specific Cell Subset Measurements
A few studies have investigated the immune phenotype of MN patients and its changes in relation to treatment (Table 1). Some investigators reported an increase of the CD4+/CD8+ T cell ratio in MN patients with or without nephrotic proteinuria (89, 90). Some evidence has shown a reduction of CD8+ T cells in patients with MN and nephrotic syndrome when compared to healthy subjects (91). This broad phenotype seems to be associated with a more favorable prognostic response to classical immunosuppressive therapy (92), but not to anti-CD20 depletion (93). MN is characterized by a predominance of IgG4 subclass autoantibodies, thus suggesting the involvement of a Th2 immune response, which has been described in some series (94-96). Interestingly, despite the well-reported role of regulatory T cells (Treg) in autoimmune diseases (100, 101), limited studies have investigated the role and impact of Tregs in primary MN, with controversial results (97, 98). Recently, Rosenzwajg et al. (99) measured 33 lymphocyte subpopulations and 27 serum cytokines/chemokines in 25 MN patients and 27 healthy subjects at the time of biopsy. After rituximab treatment, responders showed a significantly higher percentage of Tregs than non-responders, leading the authors to conclude that monitoring T-cell subsets could provide a potential biomarker of MN activity.
Cellular Assays Measuring Antigen-Specific Immune Responses
The discovery of anti-PLA2R and anti-THSD7A autoantibodies represented a paradigm shift for the diagnosis and management of MN patients. Taking into account the putative pathogenic role of anti-PLA2R autoantibodies and the efficacy of B cell-depleting therapies (77, 102-104), it is reasonable to speculate that autoreactive memory B cells play a fundamental pathogenic role in MN by fueling a persistent IgG4-specific humoral immune response. However, levels of anti-PLA2R autoantibodies fluctuate over time despite persistent renal injury, suggesting that the evaluation of anti-PLA2R autoantibodies alone may not capture the global humoral immune response taking place in patients with primary MN (69, 79-81). Once B cells recognize the target antigen with the help of autoreactive T follicular helper (TFH) cells, they can differentiate into short-lived plasmablasts (secreting mainly low-affinity IgM antibodies) or into memory B cells (mBC) and long-lived plasma cells after undergoing somatic hypermutation and immunoglobulin isotype class switching in the germinal center. In case of persistence of the priming antigen and T-cell help, autoreactive mBC can rapidly differentiate into antibody-secreting cells, produce effector antibodies against the specific target antigen, and may finally occupy empty bone marrow niches after secondary activation, replenishing the plasma cell pool (105, 106). Notably, autoreactive memory B cells can be detected in the absence of serum autoantibodies, and their rapid differentiation and antibody production can be of great importance for a subsequent humoral response (Figure 1) (107, 108). Recent works in kidney transplantation have shown the value of measuring circulating allospecific mBC in a functional manner, especially in the absence of detectable alloantibodies in the serum (109-111).
Starting from this background, our group has recently developed a new approach to functionally evaluate the PLA2R-specific mBC response in MN patients. Using a PLA2R-specific B-cell ELISPOT-based immune assay, we have been able to accurately detect circulating mBC capable of producing anti-PLA2R-specific antibodies at the time of a flare of disease activity, thus confirming the presence of an active humoral immune response (personal communication). While evaluating PLA2R-specific antibody-secreting cell frequencies using an ELISPOT-based assay allows accurate detection of mBC responses at the single-cell level after polyclonal mBC culture stimulation, anti-PLA2R-specific antibodies may also be detected in these cell culture supernatants using a single-antigen bead immunoassay. Figure 2 shows two representative patients with similar proteinuria and anti-PLA2R autoantibody levels. While the first patient, with detectable autoreactive mBC, is having a disease flare, the second one has no detectable autoreactive mBC and is therefore predicted to undergo remission. If properly validated, this assay may be used to differentiate patients for whom therapy is needed vs. those who will undergo spontaneous remission.
CONCLUSIONS
Primary MN is the main cause of nephrotic syndrome in adults and is caused by the formation of autoimmune complexes in the glomeruli. Since the identification of different podocyte antigenic targets, the diagnostic strategies and treatment options for MN have significantly improved. The efficacy of rituximab treatment in MN patients has highlighted the importance of B cells in the pathogenesis of the disease (113); therefore a more accurate investigation of autoreactive mBC using new technology may refine current immune-monitoring largely based on the measurement of circulating anti-PLA 2 R or anti-THSD7A autoantibodies.
AUTHOR CONTRIBUTIONS
PC, MJ, AA, ÀF, CC, and OB conceived the article contents, prepared the manuscript, and endorsed the final draft submitted.
FUNDING
This work was supported by two Spanish competitive grants from the Instituto de Salud Carlos III [ICI14/00242; PI16/01321], co-funded by FEDER funds ("A way to build Europe"). This work was also partly supported by the SLT002/16/00183 grant from the Department of Health of the Generalitat de Catalunya, under the call Acció instrumental de programes de recerca orientats en l'àmbit de la recerca i la innovació en salut. We thank the CERCA Programme/Generalitat de Catalunya for institutional support. OB was awarded an intensification grant from the Instituto
"Biology",
"Medicine"
] |
Transcriptional activation by a matrix associating region-binding protein: contextual requirements for the function of Bright.
Bright (B cell regulator of IgH transcription) is a B cell-specific, matrix associating region-binding protein that transactivates gene expression from the IgH intronic enhancer (E mu). We show here that Bright has multiple contextual requirements to function as a transcriptional activator. Bright cannot transactivate via out of context, concatenated binding sites. Transactivation is maximal on integrated substrates. Two of the three previously identified binding sites in E mu are required for full Bright transactivation. The Bright DNA binding domain defined a new family, which includes SWI1, a component of the SWI.SNF complex shown to have high mobility group-like DNA binding characteristics. Similar to one group of high mobility group box proteins, Bright distorts E mu binding site-containing DNA on binding, supporting the concept that it mediates E mu remodeling. Transfection studies further implicate Bright in facilitating spatially separated promoter-enhancer interactions in both transient and stable assays. Finally, we show that overexpression of Bright leads to enhanced DNase I sensitivity of the endogenous E mu matrix associating regions. These data further suggest that Bright may contribute to increased gene expression by remodeling the immunoglobulin locus during B cell development.
Transcriptional regulation of genes during development and differentiation is tightly controlled through several mechanisms. The tissue specificity conferred by the immunoglobulin heavy chain enhancer (Eμ) has been studied extensively, both for understanding Ig regulation and as a model for enhancer function (reviewed in Ref. 1). Eμ is a complex unit containing binding sites for multiple transcription factors and can functionally be broken down into two segments, the core and the flanking matrix associating regions (MARs) (2-5). Most of the previously identified factors bind to the enhancer core, and several have been shown to have some B cell specificity in terms of expression or ability to transactivate. However, no binding site in isolation can confer all of the tissue-specific regulation seen in vivo. Ultimately, this is the result of cumulative interactions of various nuclear factors with both DNA and each other. The Eμ core segment alone can increase transcription in transient systems (6, 7). In vivo, however, the core alone is insufficient to drive transcription or maintain tissue specificity. Transgenic studies have demonstrated that high-level tissue-specific expression is only seen when the core is present in the context of the MARs (8). This effect requires the core, because MARs alone could not produce high-level expression. Although the MARs had previously been implicated in negative regulation of the Ig locus in non-B cells (4, 9-12), this was the first demonstration that the MARs were required for proper expression in B cells.
Bright (B cell regulator of IgH transcription) is the only B cell-specific transcription factor shown to bind to, and transactivate via, the Eμ MARs (13). Bright was first identified as a factor responsible for increased expression of the immunoglobulin heavy chain gene following antigen + interleukin 5 stimulation of B cells in culture (14, 15). The Bright binding complex has also recently been shown to contain Btk, which is critical for the DNA binding complex (16). Bright binds within the MARs of the IgH enhancer to distinct ATC motifs (P sites) previously identified as binding sites for the Eμ negative regulator, nuclear factor-μ negative regulator (NF-μNR), and the MAR-binding protein SATB1 (11, 17). We have identified NF-μNR as a previously characterized, lineage-nonrestricted homeoprotein, Cux/CAAT displacement protein (18), that antagonizes Bright binding and transactivation by direct competition for P sites. Developmentally, Bright expression is maximal in late-stage B cells (13, 19), a pattern opposite that of Cux/CAAT displacement protein (18). Bright is found in the nuclear matrix and within matrix-associated PML nuclear bodies (13, 20), locations consistent with a putative role in chromosomal organization. Although a number of MAR-binding factors have been cloned (e.g., see Refs. 13, 17, and 21-29, and reviewed in Ref. 30), Bright was the first shown to directly affect gene transcription.
MARs and attachment to the nuclear matrix can mediate specific alterations in chromatin structure (31-34). Such a mechanism seemed reasonable for Bright, based on features of its DNA binding. Highly specific binding within the minor groove is achieved by virtue of two domains (reviewed in Ref. 35), a self-association/tetramerization domain, termed REKLES for a heptapeptide conserved within this region among Bright orthologues, and a DNA binding region, termed ARID for AT-rich interaction domain. The Bright ARID defined a new family of DNA-binding proteins, including SWI1, a component of the SWI·SNF complex that has been shown to remodel chromatin (36), and p270, its apparent mammalian orthologue (37). Components of human SWI·SNF appear to be tightly associated with the nuclear matrix (38), suggesting that at least a fraction of this complex could be involved in chromatin organizational properties associated with MARs (reviewed in Ref. 39). Like SWI·SNF, all ARID proteins bind AT-rich DNA, but only members that contain both ARID and REKLES bind specifically to AT-rich MAR motifs (35).
In this report we further characterize the mechanisms through which Bright functions and the contextual requirements for Bright transactivation. We also show that Bright bends its DNA target on binding. This, along with the observation that Bright overexpression induces increased DNase I hypersensitivity of the enhancer, provides a rationale for how this protein may facilitate enhanced expression of the immunoglobulin gene.
EXPERIMENTAL PROCEDURES
Constructs-The derivation of ΔE and ΔP E mutants was described previously (13). E and ΔE were cloned in the XbaI site of the pBL-CAT2 vector. All ΔP mutants were cloned in the SalI-BamHI sites of the CAT vector. The hybrid SV40-MAR construct (40) was previously constructed. The S107 promoter was isolated as a BamHI-HaeIII fragment (covering nucleotides −550 to −40), blunt-ended, and cloned into the pBL-CAT2 vector. Vectors containing elements distal of the cat gene were constructed by first subcloning E of the appropriate mutation into pBluescript (Stratagene) and cloning a KpnI-SacI fragment into the distal site of either pBL-CAT2 or pBL-CAT2 containing the S107 promoter fragment.
Electrophoretic Mobility Shift Assay-Specifics of binding reactions were described previously (13). To assess binding to the four P sites, the following constructs were used: the ΔP2 E 5′ MAR isolates the P1 site, the ΔP1 E 5′ MAR isolates the P2 site, the ΔP4 E 3′ MAR isolates the P3 site, and the ΔP3 E 3′ MAR isolates the P4 site. Briefly, these fragments were end-labeled (20,000 cpm/fmol), bound to in vitro-translated Bright protein in the presence of increasing concentrations of poly d(I·C), and run on a 4% nondenaturing polyacrylamide gel. Gels were dried and exposed to x-ray film.
Transfections and Stable Lines-Transfections of M12.4 and J558L cells and analysis of CAT protein were done as described previously (13). Stable transfectant lines were made by co-transfecting the indicated CAT vector in a 3-fold excess to pBK-cytomegalovirus (Stratagene). 48 h after transfection, cells were selected in G418. Transient transfections of these cells with the Bright sense or antisense constructs were done 3-5 weeks after the selection began.
Circular Permutation Distortion Assays-The high affinity Bright binding site (P2 × 3) and the circular permutation plasmid have been described previously (22,41). The P2 site concatamer was cloned into the plasmid polylinker and confirmed by sequencing. A second series utilized an ~500-bp fragment spanning the core octamer and 3′ MAR P3 site of E. Circular permutated fragments were generated by appropriate restriction digests. Mobility shift assays with these fragments and in vitro-translated full-length or truncated (amino acids 216-601) Bright protein were performed as described above. The binding and functional activity of the truncated Bright polypeptide were described previously (22). The distortion angle was estimated by the method of Thompson and Landy (42). Briefly, the relative mobilities of the fastest complex (E) and the slowest complex (M) are determined. The ratio M/E is then plotted on a graph of M/E (abscissa) versus distortion angle (ordinate) derived from A-tract standards.
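As an illustration of the arithmetic behind this estimate, the sketch below interpolates a measured M/E mobility ratio against a calibration curve of A-tract standards. The calibration values here are invented placeholders (the actual standards depend on gel conditions and are not given in the text), so the code only demonstrates the procedure, not the published calibration.

```python
import numpy as np

# Hypothetical A-tract calibration: mobility ratio of the middle-bound to the
# end-bound complex (M/E) versus known bend angle in a 4% polyacrylamide gel.
# These numbers are placeholders for illustration, not the original standards.
atract_ratio = np.array([1.00, 0.95, 0.90, 0.85, 0.80])   # M/E mobility ratios
atract_angle = np.array([0.0, 30.0, 55.0, 85.0, 110.0])   # bend angle (degrees)

def distortion_angle(mobility_middle, mobility_end):
    """Estimate the DNA distortion angle from complex mobilities.

    mobility_middle: relative mobility of the slowest complex (site near the center)
    mobility_end:    relative mobility of the fastest complex (site near the end)
    """
    ratio = mobility_middle / mobility_end
    # np.interp needs increasing x values, so sort the calibration points first.
    order = np.argsort(atract_ratio)
    return float(np.interp(ratio, atract_ratio[order], atract_angle[order]))

# Values reported for Bright: M = 0.41, E = 0.48 -> M/E ~ 0.85
print(distortion_angle(0.41, 0.48))   # ~80-85 degrees with these placeholder standards
```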
DNase I Digestion of Isolated Nuclei and Hypersensitive Site Analysis-Nuclei were isolated and treated with DNase I as detailed previously (43). Nuclease digests were restricted with BglII and analyzed on a 1.4% agarose gel in 1× TAE (40 mM Tris acetate, 1 mM EDTA). The DNA was blotted onto a Bio-Rad Zeta probe nylon membrane by a modified alkaline blotting protocol (43). The gel was blotted overnight in 0.4 M NaOH, 0.2 M NaCl. The membrane was then neutralized in 50 mM Tris at pH 7.5 for 5 to 10 min, air dried, and baked at 80°C under a vacuum for 1 h. Prehybridization was carried out from 2 h to overnight in 0.27 M NaCl, 15 mM sodium phosphate (pH 7.0), 1.5 mM EDTA, 0.5% BLOTTO dried milk powder, 1% SDS, and 500 μg of sonicated herring testis DNA per ml. Hybridization was carried out overnight in the same buffer in the presence of at least 2.5 × 10^7 cpm of a radiolabeled DNA probe (specific activity, at least 10^9 cpm/μg) generated by random primer synthesis with a Decaprime DNA labeling kit (Ambion, Austin, TX). The DNA probe used was a 300-bp XbaI-EcoRI restriction fragment found just downstream of the E 3′ MAR (44). Autoradiograms were calibrated with DNA standards 2.3, 2.0, 1.3, 1.1, and 0.87 kb long by constructing a plot of log DNA size versus mobility. The sizes of the resulting hypersensitive fragments were interpolated from the resulting linear fit.
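The size calibration described here (a linear fit of log DNA size against mobility, followed by interpolation) can be sketched as follows. Only the marker sizes come from the text; the marker mobilities and the example mobility are hypothetical placeholders.

```python
import numpy as np

# Marker sizes (kb) from the text; the mobilities (arbitrary gel units) are
# hypothetical placeholders -- in practice they are read off the autoradiogram.
marker_kb = np.array([2.3, 2.0, 1.3, 1.1, 0.87])
marker_mobility = np.array([3.1, 3.4, 4.6, 5.1, 5.9])

# Linear fit of log10(size) against mobility, as described in the procedure.
slope, intercept = np.polyfit(marker_mobility, np.log10(marker_kb), 1)

def fragment_size_kb(mobility):
    """Interpolate a hypersensitive-fragment size (kb) from its measured mobility."""
    return 10 ** (slope * mobility + intercept)

print(round(fragment_size_kb(4.8), 2))   # ~1.3 kb with these placeholder mobilities
# The hypersensitive position is then obtained by subtracting this fragment size
# from the parental BglII fragment (1.6 kb for this locus).
```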
RESULTS
Bright Does Not Transactivate from a Concatamerized Binding Site-In our first description of the Bright transcription factor (13), we demonstrated that Bright could transactivate gene expression from a plasmid containing an IgH enhancer element (E) upstream of a reporter gene. To assess the ability of Bright to transactivate gene expression from a binding site not in context of the E enhancer, we used reporter constructs containing concatamers of a binding site in the S107 promoter (Bf150) or the E P2 site in transient transfections. Expression constructs containing Bright in either the sense or antisense orientation were co-transfected with reporter constructs driven by a thymidine kinase promoter and the additional elements as described in Fig. 1. Concatamers of the P2 site, which gel shift analysis demonstrated is a strong Bright binding site, could not increase CAT levels in either a B cell or plasma cell line (Fig. 1). Similarly, a reporter construct with the S107 MAR site concatamerized to eight repeats (Bf150 × 8) did not show any significant increase in transcription when Bright was co-transfected in the sense orientation.
Bright Requires Specific MAR Sequences for Transactivation Function-Despite the lack of Bright activity on a concatamerized substrate, Bright clearly activated transcription from an E element over the levels seen from E alone (see Ref. 13 and Fig. 2). Bright binding sites were required for this activity, because an E that lacked the P sites (ΔE) did not mediate Bright transactivation (see Ref. 13 and Fig. 2). To further examine the specificity for transactivation that E ascribes to Bright, we tested the effects of P site deletions. Because P2 is a well characterized Bright binding site (13), we reasoned that it might be capable of acting alone. Indeed, a construct that lacks P1, P3, and P4 (ΔP1, P3, P4) was competent in mediating Bright transactivation (Fig. 2). However, a construct that lacked the P2 but had all other sites intact (ΔP2) was still functional. The additional deletion of the P4 site (ΔP2, P4) abrogated Bright-mediated function. That P4 could mediate Bright transactivation alone was verified using a P4-only construct (ΔP1, P2, P3). Interestingly, there was a trend that the P2-only and P4-only constructs were activated to a slightly lower degree than the intact E, though the difference was not statistically significant. It seems possible that Bright can act through both sites but that the activity seen in the intact E may be the combined effects of Bright binding to both sites. It was unanticipated that Bright could not function through the P3 site, because Bright also binds P3 very well (Fig. 3). This lack of function suggested that competent Bright binding sites must be within a contextual arrangement to allow them to mediate transactivation.
Bright Mediates Promoter-Enhancer Interactions-Knowing that Bright could mediate transactivation from both the enhancer and the S107 promoter (and possibly other Ig promoters, as well), we became interested in determining whether these functions were independent or whether these elements could function in concert. We constructed CAT vectors that partially or completely recapitulated the immunoglobulin locus promoter/enhancer arrangement. The S107 promoter fragment contains two Bright binding sites, one of which functions as a MAR (15,45). In a construct containing the promoter alone, Bright could not transactivate in a transient assay (Fig. 4). This is in contrast to assays where E is placed 5′ of the CAT gene and Bright effectively increased gene expression. The ability of Bright to function through the IgH enhancer is also seen when the enhancer is in the distal position. Strikingly, when both the promoter and enhancer are present in the same construct, the effects of Bright are synergistic, increasing transcription levels more than 3-fold over that seen with E alone in the distal position. This Bright-mediated transactivation requires Bright binding, because a construct with ΔE in the distal position could not mediate the Bright effect (Fig. 4).
Bright Transactivates Integrated Targets by MAR Interaction-Because Bright binding sites have the potential to act as MARs, we also studied these vectors in stably transfected cells to determine whether Bright can mediate MAR effects that would only be detected from integrated targets. CAT constructs were stably transfected into J558L cells and selected with neomycin for 21 days before transient transfection with Bright sense or antisense constructs. In contrast to results from the transient transfection assay, Bright is able to transactivate from the promoter alone in the stable system (Fig. 4). This supports a role for Bright as a MAR-binding protein, because this phenomenon is only seen when the promoter construct is integrated into the chromosome. A further increase in S107 promoter-driven transcription is seen when E is present in the distal position. As in the transient studies, this interaction is specific for Bright binding, because a construct with ΔE in the distal position does not transactivate beyond what is seen with promoter alone (Fig. 4).
Bright Mediates DNA Distortion-The distance between promoter-associated and enhancer-associated Bright sites that appear to synergize in the constructs of Fig. 4 is about 2 kilobase pairs. We assumed that Bright may affect DNA topology to facilitate these interactions. We have previously shown that Bright binds DNA in the minor groove (13). The class of high mobility group box proteins typified by lymphoid enhancer-binding factor-1 and SRY bind DNA in the minor groove and bend the double helix (41). To determine whether Bright can also distort its DNA target on binding, we used the circular permutation assay described by Giese et al. (41), which measures DNA bending, as well as DNA flexibility caused by changes in DNA structure such as melting of AT-rich regions. For this assay, a series of equally sized fragments, differing only in the position of a Bright binding site, were generated. If the DNA is distorted during binding, then fragments bound near the center will migrate through a gel at a slower rate than those bound near the ends. In Fig. 5, a truncated Bright protein (amino acids 216-601) with full binding activity distorts the circular permutated fragments as assessed by differences in complex mobility. The full-length Bright protein had an identical effect in this assay (data not shown). The angle of induced distortion can be determined by comparing the calculated ratio M/E to a plot of known A-tract standards, where M and E are the relative mobilities of the middle-bound (slowest migrating) and end-bound (fastest migrating) fragments, respectively (42). For Bright, M is calculated to be 0.41 and E to be 0.48, giving a ratio of 0.85. Based on A-tract standards in 4% polyacrylamide gels, this ratio corresponds to a distortion angle of 80-90°.
E Becomes DNase I Hypersensitive following Bright Overexpression-The ability of Bright to mediate specific activation of integrated binding sites and to distort DNA suggested that it may be involved with altering chromosomal architecture and nucleosome-free regions of DNA. DNase I hypersensitive sites coincide with nucleosome-free regions in chromatin. To test the ability of Bright to alter the chromosomal organization of the endogenous IgH locus, we stably transfected Bright into a murine mature B cell line (WEHI 231) that produces low levels of endogenous Bright protein (13). Following a 20-day culture in G418, we selected a clone that expressed Bright at levels ~8-fold above that in the WEHI 231 parental line and about twice that seen in two IgM-secreting plasmacytomas (MOPC 104E and HNK-1; data not shown). Nuclease sensitivity in mock-transfected WEHI 231 nuclei was limited to a 220-bp region coinciding with the E core (Fig. 6). In cells ectopically expressing Bright, hypersensitivity was greater in magnitude and encompassed a significantly larger (~500 bp) area that extended through the 5′ MAR, which contains the high affinity P2 binding site of Bright. A modest (2-3-fold) increase in transcription accompanied this effect (data not shown) but is similar to the level of induction caused by antigen + interleukin 5 stimulation (14,15). A stronger and more extended DNase I digestion pattern is observed (Fig. 6) in nuclei of the plasmacytomas that transcribe the locus about 50-fold higher than WEHI 231 (see Ref. 6 and data not shown). These results indicate that the endogenous enhancer assumes a more extended chromatin configuration as a direct or indirect consequence of ectopic Bright overexpression.
DISCUSSION
Herrscher et al. (13) described Bright as a B cell-specific transcription factor capable of transactivating expression from the IgH enhancer (E). In this report we have characterized the contextual requirements of Bright transactivation to further understand how it, and potentially other MAR binding factors, can affect transcription levels. The data presented in this report support several mechanisms for Bright-mediated transcriptional regulation.
Using transient transfection analysis we have demonstrated that context is important for Bright transactivation. Bright was unable to transactivate gene expression from a concatamerized binding site, suggesting that it required interaction with specific factors to function. Furthermore, Bright only acts through the P2 and P4 sites of the E MARs. This was initially surprising, because Bright binds the P3 site as strongly as P2, and suggested spatial constraints for the interactions of Bright with other factors. This suggested that Bright might function to form tertiary structures of the enhancer DNA and interact with additional DNA-binding proteins or adaptor molecules. In support of this, we demonstrated that Bright distorts DNA. Studies with the T cell receptor α-chain enhancer have shown the requirement for DNA bending and distortion to remodel DNA so that transcription factors whose binding sites are spatially distant can interact (46). It is possible that Bright plays a similar role in the induced immunoglobulin expression of late stage B cells.
Synergy between promoter and enhancer transactivation in both the transient and stable transfections suggests an additional level of function for Bright. Because Bright exists in a tetrameric form, and only two functional chains are required for Bright binding in a gel shift assay (13), it is likely that one Bright molecule could bind two sites. Indeed, these studies suggest that Bright could bring an enhancer in apposition to the promoter and directly affect transcriptional activation. This effect would be consistent with studies that have implicated the IgH enhancer MARs in long range (Ig heavy chain variable gene segment promoter-mediated) transcriptional activation (8,44,47). In comparing transgenic expression in lines generated from wild-type and MAR-deleted E constructs, no VH-initiated transcripts were detected from the MAR-deleted locus (47). Using a different approach, Artandi et al. (48) demonstrated that TFE3 proteins binding in the Ig promoter and enhancer could cooperate when binding sites were placed proximal and distal of a CAT gene, presumably through interaction of two dimers. Bright already exists as a tetramer and so would not require any additional protein-protein interactions to carry out this function.
This study also provides functional evidence for the MAR binding function of Bright. Transient transfections with the S107 promoter fragment, which contains a MAR (45), demonstrated that Bright was unable to transactivate from this site. In contrast, when this construct was stably transfected, Bright was able to effect a 4-fold increase in transcription, consistent with the concept that MARs only have effects when they are integrated into the chromosome. We previously demonstrated that Bright protein can be matrix-associated (13). The fact that Bright is capable of transactivating from the S107 promoter only when it is integrated suggests that Bright can function by modifying or mediating matrix attachment. One difference between the S107 plasmid and the construct with E in the proximal position, which can mediate transactivation in a transient assay, may be the availability of other interacting co-factors. This highlights the context-dependent activity of Bright. Bright may interact with some factors during a transient assay and allow activation from E, whereas matrix attachment is required for transactivation from a substrate that may have limited DNA binding factors associated with it for Bright interaction. In support of interactions such as this, we have recently shown that Bright associates with members of the Sp100 family, which co-localize with Bright in nuclear domains and act as co-factors in transactivation (20). Thus, Bright has multiple requirements for transactivation activity, but the context-dependent activity may also provide multiple mechanisms for Bright to activate gene transcription.
Ectopic overexpression of Bright revealed an altered pattern of chromatin organization within the IgH enhancer in WEHI 231 B cell nuclei. Consistent with previous studies (44,47,49), the pattern of untransfected WEHI 231 nuclei is restricted to the E core. The assembly of this complex, as judged by in vivo dimethyl sulfate methylation patterns, has been shown to be independent of the flanking MARs (47). Under conditions where Bright is expressed at high levels, DNase I hypersensitivity appears to extend upstream, to include the high affinity P2 site-containing MAR, but not downstream of the core. A third, highly extended configuration, extending across the 3′ MAR, is observed in the two IgM-secreting plasmacytomas that transcribe at ~50-fold higher levels. Similarly, the Ig 3′ enhancer assumes three states of DNase I detectable accessibility, which correlate strictly with stage of B cell development (50). That the Bright overexpressing cells may have begun to transition from mature to activated is consistent with the increased E accessibility and the slightly increased levels of transcription observed here and with the appearance of active Bright·MAR binding complexes both in normal B cell populations and in B cell lines observed previously (13,19). Based on its SWI1 similarities, nuclear matrix residence, and MAR bending properties, it is tempting to consider a direct role for Bright in this remodeling. However, both known classes of chromatin remodeling enzymes, SWI·SNF and the histone acetyltransferases, exist as large multicomponent, ATP-hydrolyzing complexes (reviewed in Ref. 36). We have no evidence for or against participation of Bright as a B cell-restricted member or recruiter of either. However, MARs do confer local regions of histone acetylation (51). In a different target gene system, Cux has been shown to form a complex with histone deacetylase that leads to gene inactivation (52). Bright could mediate derepressive chromatin remodeling indirectly through its successful competition with Cux/histone deacetylase. By similar logic, Bright, along with related chromatin remodeling proteins, would then be in a position to clear out regions carrying the cis-acting regulatory elements of the core, contributing to the accessibility of conventional DNA binding transactivators to promoter and enhancer elements.
Figure 6 legend. The strategy for indirect end labeling was described under "Experimental Procedures." Hypersensitive sites, mapped by reference to a 1.6-kb BglII restriction fragment spanning E, were detected by using an upstream XbaI-EcoRI 220-bp subfragment as a hybridization probe. Nuclei, isolated from the indicated cell lines, were digested with (from left to right in each panel) 0, 0.5, 1.0, or 2.0 μg/ml DNase I. DNA was purified, cut with BglII, and analyzed by Southern blotting. Molecular size markers are indicated to the left. Hypersensitive positions, mapped by subtracting the fragment size from the parental BglII fragment, are indicated on the blots and superimposed onto a vertical schematic of E.
Studies presented in this report suggest some novel mechanisms for the regulation of immunoglobulin gene expression. They confirm that Bright acts in a restricted manner by binding specific sites in the IgH promoter and enhancer and by potentially interacting with other factors within the enhancer core. They further provide some insight into the mechanism of enhancer function and, more specifically, how Bright may play an important role in Ig gene expression. Further analysis of these Bright new alternatives should yield a greater understanding of long-standing questions regarding gene regulation.
"Biology"
] |
Required sampling density of ground-based soil moisture and brightness temperature observations for calibration/validation of L-band satellite observations based on a virtual reality
Abstract. Microwave remote sensing is the most promising tool for monitoring near-surface soil moisture distributions globally. With the Soil Moisture and Ocean Salinity (SMOS) and Soil Moisture Active Passive (SMAP) missions in orbit, considerable efforts are being made to evaluate derived soil moisture products via ground observations, microwave transfer simulation, and independent remote sensing retrievals. Due to the large footprint of the satellite radiometers of about 40 km in diameter and the spatial heterogeneity of soil moisture, minimum sampling densities for soil moisture are required to challenge the targeted precision. Here we use 400 m resolution simulations with the regional Terrestrial System Modeling Platform (TerrSysMP) and its coupling with the Community Microwave Emission Modelling platform (CMEM) to quantify the maximum sampling distance allowed for soil moisture and brightness temperature validation. Our analysis suggests that an overall sampling distance finer than 6 km is required to validate the targeted accuracy of 0.04 cm³ cm⁻³ with a 70 % confidence level in SMOS and SMAP estimates over typical mid-latitude European regions. The maximum allowed sampling distance depends on the land-surface heterogeneity and the meteorological situation, which influences the soil moisture patterns, and ranges from about 6 to 17 km for a 70 % confidence level for a typical year. At the maximum allowed sampling distance on a 70 % confidence level, the accuracy of footprint-averaged soil moisture is equal to or better than brightness temperature estimates over the same area. Estimates strongly deteriorate with larger sampling distances. For the evaluation of the smaller footprints of the active and active-passive products of SMAP the required sampling densities increase; e.g., when a grid cell of 3 km diameter is sampled by three sites or one of 9 km by five sites, as required, only 50 %-60 % of the pixels have a sampling error below the nominal values. The required minimum sampling densities for ground-based radiometer networks to estimate footprint-averaged brightness temperature are higher than for soil moisture due to the non-linearities of radiative transfer, and are only weakly correlated with those for soil moisture in space and time. This study provides a basis for a better understanding of the sometimes strong mismatches between derived satellite soil moisture products and ground-based measurements.
Introduction
Information on the global soil moisture distribution is required, for example, for weather forecasting, climate research, and agricultural applications. Due to the high spatial variability of soil moisture, its in situ observation is practically impossible on continental scales. Passive microwave satellite remote sensing at L-band frequencies may achieve this goal because of the strong dependency of the soil dielectric constant on soil moisture, the -compared to higher frequencies -reduced sensitivity of the brightness temperatures to surface roughness and vegetation (Njoku and Kong, 1977;Ulaby et al., 1986), and the high transparency of the atmosphere at these wavelengths. The first operational L-band soil moisture detection satellite, SMOS (Soil Moisture and Ocean Salinity), was launched in 2008 (Kerr et al., 2010)
and was
followed in 2015 by SMAP (Soil Moisture Active Passive), which initially also carried an active radar instrument to achieve higher spatial resolution (Entekhabi et al., 2010); the radar, however, failed shortly after the satellite became fully operational. Both satellites are currently continuously and globally observing passive microwave brightness temperatures, from which soil moisture products are derived at spatial resolutions of 36 and 9 km.
Before and after the launch of SMOS and SMAP, several soil moisture monitoring networks for evaluation and retrieval algorithm development were established, such as ESA's efforts at the Valencia Anchor Station (VAS) in eastern Spain, SMOSREX (Surface Monitoring Of Soil Reservoir Experiment) in France, the upper Danube watershed located in southern Germany (Delwart et al., 2008; de Rosnay et al., 2006; dall'Amico et al., 2012; Kerr et al., 2016), and the SMAP calibration-validation (Cal/Val) project (Colliander et al., 2017a; Burgin et al., 2017; Chen et al., 2017, 2018). All these networks were established because ground truth should be the standard against which such products are evaluated. According to the Level 1 baseline and the minimum SMAP science requirements (SMAP Science Data Cal/Val Plan, O'Neill et al., 2015), the spatial resolution of Level 2 (passive soil moisture product L2_SM_P) and Level 3 (daily composite L3_SM_P) soil moisture products is 36 km, and they have to reach an accuracy for soil moisture of 0.04 cm³ cm⁻³ with a probability of 70 %. A wide range of measurement techniques and protocols exist for setting up and performing ground-based observations for such evaluations. SMAP Cal/Val suggests that volumetric soil moisture should be observed in situ at 5 and 100 cm depth; optimal sensing and mounting depths are, however, still debated (Lv et al., 2016a, 2018, 2019). For core validation sites a minimum of six stations should cover one SMAP grid cell or footprint (O'Neill et al., 2015; Famiglietti et al., 2008), but this value has not yet been shown to guarantee the nominal accuracy by a thorough analysis. More recent results show that the spatial representativeness of the soil moisture tends to increase with the timescale of data series, but so does their spread (Molero et al., 2018). For Cal/Val, it is required to have instantaneous soil moisture values rather than averages over different timescales. Relevant studies typically use ground-based soil moisture networks with fixed average sampling distance over rather homogeneous land surfaces, which are, however, not necessarily representative for all land surface types. For SMAP core calibration and validation sites, the data product grid cell should be sampled with at least eight stations to reach with 70 % confidence an estimated soil moisture uncertainty of 0.03 cm³ cm⁻³, given a spatial soil moisture standard deviation of 0.07 cm³ cm⁻³ as assessed from field measurements (Colliander et al., 2017b). According to the same source, grid cells with a dimension of 9 km (as for downscaled SMAP products) should be sampled with at least five stations and pixels with 3 km diameter with at least three stations to reach with 70 % confidence an accuracy of 0.03 and 0.05 cm³ cm⁻³, respectively, while assuming a spatial soil moisture standard deviation of 0.05 cm³ cm⁻³ within the grid cell. Ochsner et al. (2013) point out that too few resources are currently devoted to in situ soil moisture monitoring networks, and that despite their increasing number, a standard for network density and sampling procedures is missing. The International Soil Moisture Network (ISMN, https://ismn.geo.tuwien.ac.at/en/, last access: 11 April 2020) is an effort to unify global soil moisture observation networks (Dorigo et al., 2011). Coopersmith et al. (2016) suggested temporary network extensions around permanent installations to quantify the representativeness of the latter. Qin et al.
(2013) suggested the use of MODIS-derived apparent thermal inertia to interpolate between in situ soil moisture measurements. So far, the required sampling density has been discussed only with respect to in situ measurements, which heavily depend on sensor quality and network location (Vereecken et al., 2008; Brocca et al., 2010; Bhuiyan et al., 2018). Higher station numbers are necessary, as well as the establishment of general rules for their selection. Chen et al. (2017, 2018, 2019) suggest the utilization of TC (triple collocation), a statistical method to characterize systematic biases and random errors, or ETC (extended triple collocation) to analyze the noise component in soil moisture observations, and to use correlation to evaluate the representativeness of soil moisture networks. They also suggest that the core validation sites should allow validation of the retrieved soil moisture to an accuracy of 0.04 cm³ cm⁻³ with a probability of 70 % in terms of unbiased RMSE, because the bias itself is hard to eliminate.
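Triple collocation itself is not developed further in this study, but as a rough illustration of the idea referenced above, a covariance-based TC estimate of the error variances of three collocated soil moisture series can be sketched as follows. The data are synthetic and the function name is ours; the sketch assumes mutually independent, additive errors.

```python
import numpy as np

def triple_collocation_errors(x, y, z):
    """Covariance-notation triple collocation: return the error variances of three
    collocated estimates of the same target (e.g., satellite, model, and in situ
    soil moisture), assuming linearly related signals and independent errors."""
    q = np.cov(np.vstack([x, y, z]))
    var_x = q[0, 0] - q[0, 1] * q[0, 2] / q[1, 2]
    var_y = q[1, 1] - q[0, 1] * q[1, 2] / q[0, 2]
    var_z = q[2, 2] - q[0, 2] * q[1, 2] / q[0, 1]
    return var_x, var_y, var_z

# Synthetic demonstration: one "truth" series observed by three noisy systems.
rng = np.random.default_rng(0)
truth = 0.25 + 0.08 * rng.standard_normal(5000)
sat = truth + 0.04 * rng.standard_normal(5000)
model = truth + 0.03 * rng.standard_normal(5000)
insitu = truth + 0.02 * rng.standard_normal(5000)
print([round(float(np.sqrt(v)), 3) for v in triple_collocation_errors(sat, model, insitu)])
# -> roughly [0.04, 0.03, 0.02], i.e., the prescribed noise levels are recovered
```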
Establishing ground monitoring networks for calibration and validation of soil moisture products from satellite L-band observations is challenging partly due to the different spatial scales between observations from soil moisture sensors and satellites. Moreover, from a direct comparison between satellite soil moisture products and ground-based measurements from existing soil moisture networks, it is impossible to isolate the sampling error, and only very few studies systematically investigate the station density required to allow for a given accuracy, taking the land heterogeneity into account. In our study, we use a 400 m resolution virtual reality generated with a regional terrestrial modeling system coupled with an observation operator to estimate such minimum station densities. The virtual reality contains realistic soil, land cover, and topography variability and allows us to arbitrarily vary the sampling density and, thus, average sampling distance in steps of 400 m. Section 2 introduces the virtual reality and the observation operator used to transfer the terrestrial system states into virtual observations. In Sect. 3, we derive the error growth with increasing average sampling distance for soil moisture and brightness temperatures. Conclusions and discussion are provided in Sect. 4.
Methodology and data
Virtual reality
The modeling system used to create the virtual reality from which we draw the virtual soil moisture observations and compute brightness temperatures is the Terrestrial Systems Modeling Platform (TerrSysMP; Shrestha et al., 2014; Gasper et al., 2014; Sulis et al., 2015) developed within the framework of the Transregional Collaborative Research Center 32 (TR32, Simmer et al., 2015). TerrSysMP consists of the atmospheric model COSMO (Consortium for Small-Scale Modelling; Baldauf et al., 2011), the land surface model CLM (Community Land Model version 3.5; Oleson et al., 2008), and the distributed hydrological model ParFlow v693 (Ashby and Falgout, 1996; Kollet et al., 2010). The platform, specially designed for high-performance computing environments (Gasper et al., 2014), has been extensively evaluated against observations (Shrestha et al., 2018a) as well as against similar regional terrestrial system models. The effect of spatial resolution on simulated soil moisture and the resulting exchange fluxes between land and atmosphere has been studied with TerrSysMP by Shrestha et al. (2014, 2018b). We use for this study available simulation results generated by the research unit FOR2131 (Schalge et al., 2016, 2019) over an area containing the Neckar catchment in southwestern Germany in its center (Fig. 1). CLM and ParFlow were run on a horizontal computational grid with 400 m resolution. ParFlow has 50 vertical soil layers, of which the upper 10 coincide with the 10 soil layers of CLM. The vertical resolution is variable, with smaller steps near the land surface. The atmospheric model COSMO runs at a 1.1 km horizontal resolution and is forced at the lateral boundaries with a COSMO-DE analysis from the operational weather forecast run by the German national weather service (Deutscher Wetterdienst, DWD), available at hourly time steps. The main topographic features of the modeling area are the upper Rhine valley in the west, the Black Forest in the southwest, and the foothills of the Alps in the south. The heights range from 80 to 1900 m.
The area was selected by the research unit because of its heterogeneity in topography and land use, typical for midlatitude European river catchments; thus, it is also well suited for our study. The objective of the research unit is the setup and test of a strongly coupled data assimilation system with a fully coupled regional terrestrial model. Their virtual reality run (VR01), the results of which we are exploiting in this study, is the so-called nature run from which the research unit draws the virtual observations to be assimilated in a lower-resolved model version using ensemble methods. The model area can be covered by about 15 × 20 SMOS pixels, which suffices for the statistical analyses performed to determine required sampling densities. There exist two soil moisture monitoring networks close to the domain, which are used for soil moisture validation studies with satellite-based L-band observations (Montzka et al., 2013).
The topographic data for VR01 are obtained from the European Environment Agency (EEA; http://www.eea.europa.eu/data-and-maps/data/eu-dem, last access: 11 April 2020), which is also the source for the CORINE land-use data (http://www.eea.europa.eu/ data-and-maps/data/corine-land-cover-2006-raster-3, last access: 11 April 2020) used to characterize vegetation in the model domain. Since CORINE uses many more land-use classes than CLM, the CORINE classes are aggregated to the five classes discriminated in the CLM in the modeling area: broadleaf forests which can be found mostly in hilly areas throughout the domain in smaller patches, needle-leaf forests which dominate at a higher elevation such as the Black Forest, grassland which is relatively rare and only appears in small patches, and crops which are the most dominant land-use type throughout the domain and appear almost anywhere. All other classes, such as urban areas, are treated as bare soil in VR01.
The leaf area index (LAI) for the specific plant classes is taken from MODIS estimates corrected for known biases (Tian et al., 2004). Instead of the tiling approach implemented in CLM, the dominant land-use type for each grid cell is used, because the resolution of 400 m is high enough to warrant this approach. The SAI (stem area index) is estimated from the LAI by formulations slightly modified from those implemented in the CLM. For crops, SAI is just 10 % of the LAI; thus, SAI is larger in summer than in winter. For all other types, SAI is 10 % of LAI plus a "dead leaf" component. The "dead leaf" component is estimated empirically from the change of the LAI from the previous and current month. The "dead leaf" component is only a major contributor during fall, but even there the needle-leaf trees, for instance, show only a small increase in SAI. The VR01 region is mostly covered by deciduous trees that have 1-2 months of high SAI because the dead-leaf component decays rather quickly. Details about SAI calculation in VR01 are described in Schalge et al. (2016), Lawrence and Chase (2007), and Zeng et al. (2002).
The soil map ( Fig. 1a-b) is derived from a product of the German Federal Institute for Geosciences and Natural Resources (BGR; https://www.bgr.bund.de/DE/Themen/ Boden/Informationsgrundlagen/Bodenkundliche_Karten_ Datenbanken/BUEK1000/Nutz_BUEK/nutz_buek_node. html, last access: 11 April 2020). Soil values for regions near the edge of the modeling domain in France and Switzerland are extrapolated. Variability was added to the relatively large polygons of constant soil parameters to better represent what would be found in reality at higher resolutions, following Baroni et al. (2017). The soil color is derived from the carbon content of the soil, with carbon-rich soils being darker, except for the bare soil areas, which all use the same relatively light color class. There is deep soil geology included in ParFlow as well as alluvial channels below rivers to account for deeper subsurface flow, but these features will not directly impact the results shown here as they only appear below the soil layers.
Generation of L-band passive microwave observations
The radiative transfer model CMEM (de Rosnay et al., 2009) computes the land emissivity based on a dielectric mixture model for soil moisture, soil sand and clay fractions, soil surface roughness, vegetation optical thickness, single scattering albedo, and land surface orientation relative to the satellite viewing perspective. Depending on the sand and clay fractions, brightness temperatures may vary by tens of Kelvins, given the same near-surface soil moisture. Vegetation optical thickness depends on LAI, which varies in the VR01 with time depending on plant functional type (PFT). Depending on the particular PFT, CMEM uses different parameters to calculate the vegetation optical thickness from the respective LAI. Soil effective temperature is computed with a new scheme introduced by Lv et al. (2014). The new scheme is a discretization of the integral formulation and takes advantage of multi-layer soil temperature and moisture profile information with a broader range of soil properties. This allows better adaptation of CMEM to the available land surface model data. Also, soil temperature and snow depth impact the simulated brightness temperatures. More details can be found in the SMOS global surface emission model handbook (de Rosnay et al., 2009). From the 400 m resolution brightness temperatures, virtual satellite observations are generated with CMEM, taking the satellite antenna function into account. Figure 2 shows the centers of the ~320 footprints corresponding to the SMOS L1 TB data product at a 41° incidence angle for a potential satellite overpass and, on the same scale, the satellite antenna function for one footprint, which changes shape depending on the elevation of the individual 400 m model grid areas, orbit altitude, declination, and satellite scanning and incidence angle.
Not each SMOS overflight will cover the whole area in reality. But in our study, we assume for simplicity that all footprints indicated in Fig. 2 are observed once a day at 06:00 local time, which corresponds to the approximate ascending and descending overpass time of SMOS and SMAP, respectively. The satellite footprint is much larger than the nominal satellite spatial resolution of 40 km that is defined by a 3 dB contour of the main lobe; thus areas much larger in diameter contribute to one satellite-observed brightness temperature (i.e., 50 % of one satellite-observed brightness temperature originates from an area roughly 10 times larger than the nominal satellite footprint). The virtual reality employed in this study is a physically consistent state of the terrestrial system in space and time because it has been produced by a numerical model based on the conservations equations for mass, energy, and momentum. When applying the satellite observation operator to this model state, we assume that the model state is correct, as well as the simulated brightness temperature. Thus, our study only quantifies the impact of the sampling density of a surface network on the comparison between area-averaged values and their estimates from the surface network, i.e., we ignore errors of the dynamic model (TerrSysMP) and the forward operator (CMEM). Based on the modeling results, we analyze a range of ground-based network configurations with sampling points at least 400 m apart, and we assume that all quantities (state of the terrestrial system and brightness temperature) do not vary within 400 m. While this is an approximation, we believe that our results and their outcome can be generalized. We will come back to this point in the discussion section.
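As a rough illustration of how such an antenna-weighted footprint value can be obtained from the 400 m fields, the sketch below assumes a simple circular Gaussian antenna pattern over a 106 × 106 block of model columns. The actual pattern in the study changes with terrain elevation, orbit altitude, declination, and the scanning and incidence angles, so the weights and numbers here are purely illustrative.

```python
import numpy as np

def gaussian_antenna_weights(n=106, fwhm_km=40.0, cell_km=0.4):
    """Hypothetical circular Gaussian antenna pattern over an n x n block of
    400 m model columns, normalized so the weights sum to one."""
    half = (n - 1) / 2.0
    yy, xx = np.mgrid[0:n, 0:n]
    r_km = np.hypot(xx - half, yy - half) * cell_km
    sigma = fwhm_km / 2.355                      # convert FWHM to standard deviation
    w = np.exp(-0.5 * (r_km / sigma) ** 2)
    return w / w.sum()

def footprint_average(field, weights=None):
    """Antenna-weighted (or plain arithmetic, if weights is None) footprint average."""
    if weights is None:
        return float(field.mean())
    return float((field * weights).sum())

# Example: weighted vs. arithmetic average of a synthetic brightness temperature field.
rng = np.random.default_rng(1)
tb = 250.0 + 15.0 * rng.random((106, 106))
w = gaussian_antenna_weights()
print(footprint_average(tb, w), footprint_average(tb))
```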
Since one SMOS and SMAP footprint covers approximately 106 × 106 model grid columns in the VR01, the respective area can be sampled with up to a maximum of 106 × 106 (virtual) sites. If the footprint area is sampled with n sites, there are C(106 × 106, n) sampling combinations (SCs, hereafter) possible, where a combination is an unordered, non-overlapping collection of distinct elements of a prescribed size taken from a given set. For example, with a 10 km distance between sampling sites, about 6 × 6 sampling sites are possible within one footprint, which can be spatially distributed in C(106 × 106, 6 × 6) ≈ 1.69 × 10^104 ways. It is computationally not feasible to consider all those combinations. When, however, we first divide each footprint into equally sized subareas each containing exactly one sampling site (this assumes a certain degree of homogeneity within the network, which would in reality also be strived for), the number of potential sampling networks is drastically reduced. If we set the sampling distance within a 43 × 43 km² area to i km, we divide the footprint into (43/i)² subareas each containing 106 × 106 / (43/i)² ≈ 6.08 i² model columns of 400 m resolution. When we further select within each of the equally sized subareas of a satellite footprint the same model column (i.e., the one with row number k and column number l, both starting at 1 in the upper left corner of each subarea), a regular equidistant observation network within the SMOS-SMAP footprints is enforced, similar to the one used in the study by Famiglietti et al. (2008). For each footprint (subscript f) at a particular time (subscript t) and a certain sampling distance (i km, subscript d), the number of network configurations is
SC_ftd = (i / 0.4)².    (2)
For a certain sampling distance (i km), all 320 footprints, and all 365 days of a year, this results in a sample size of
(i / 0.4)² × 320 × 365,    (3)
from which we will compute the PDF of the resulting sampling errors. For each day, given one observation per day for all 320 footprints and summed over all sampling distances, we get samples of size
Σ_d (i_d / 0.4)² × 320,    (4)
from which we will compute PDFs of the maximum allowed sampling distances. For each grid cell, with one observation per day taken over 1 year and summed over all sampling distances, we get
Σ_d (i_d / 0.4)² × 365,    (5)
from which we determine the spatial distribution of the maximum allowed sampling distances. For example, for 800 m sampling distance, we determine the maximum from (0.8/0.4)² × 365 × 320 = 467 200 samples, the number of which increases with the square of the sampling distance.
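A minimal sketch of this subarea-based enumeration, under simplifying assumptions (a square block of 106 × 106 columns, a synthetic soil moisture field, and a plain arithmetic footprint mean as the reference), could look as follows; the function and variable names are ours, not from the study.

```python
import numpy as np

def sampling_errors(field, distance_km, cell_km=0.4):
    """Return the sampling errors of all regular equidistant networks with the given
    sampling distance: one site per subarea, with the same (row, col) offset used in
    every subarea, compared against the arithmetic footprint mean."""
    step = int(round(distance_km / cell_km))      # subarea size in grid cells
    truth = field.mean()
    errors = []
    for k in range(step):                         # row offset within each subarea
        for l in range(step):                     # column offset within each subarea
            sites = field[k::step, l::step]
            errors.append(abs(sites.mean() - truth))
    return np.array(errors)                       # (distance / 0.4)**2 configurations

# Example with a synthetic 106 x 106 soil moisture field (cm3/cm3):
rng = np.random.default_rng(2)
sm = np.clip(0.25 + 0.07 * rng.standard_normal((106, 106)), 0.02, 0.45)
err = sampling_errors(sm, distance_km=6.0)
print(len(err), round(float(np.percentile(err, 70)), 4))   # 225 configurations, 70th-percentile error
```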
The sampling described above is applied to soil moisture without, and to brightness temperature with, consideration of the satellite antenna weighting function (Fig. 2b). Since SMAP Cal/Val requires that the nominal accuracy of 0.04 cm³ cm⁻³ for retrievals should be met with a probability of 70 %, we take the error at the 70th percentile, if not specified otherwise. In the following, we mostly use the more intuitive sampling distance (km), but also the sampling density (sites per square kilometer) when we are qualifying tendencies. The relationship between the two is simply sampling density = 1 / (sampling distance)².
For example, the 15, 5, and 3 sites for grid cells with diameters of 36, 9, and 3 km recommended by SMAP Cal/Val correspond to about 0.0116, 0.0617, and 0.3333 sites per square kilometer and to sampling distances of 9.295, 4.025, and 1.732 km, respectively. We note here that the grid size of the SMAP passive soil moisture product is 36 km × 36 km per pixel, whereas SMOS uses the ISEA-4H9 discrete global grid (43 km × 43 km). The 43 km must be replaced by 36 km when computing the number of sampling networks by Eqs. (2)-(5).
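The conversion between the recommended station numbers, sampling densities, and equivalent sampling distances quoted above can be reproduced with a few lines; the helper below is ours and simply applies the relation sampling density = 1 / (sampling distance)².

```python
import math

def density_and_distance(n_sites, pixel_km):
    """Sampling density (sites per km^2) and equivalent sampling distance (km)
    for n_sites stations spread over a square pixel of side pixel_km."""
    density = n_sites / pixel_km ** 2
    distance = math.sqrt(1.0 / density)       # distance = 1 / sqrt(density)
    return density, distance

for n, pixel in [(15, 36), (5, 9), (3, 3)]:
    dens, dist = density_and_distance(n, pixel)
    print(f"{n} sites over {pixel} km: {dens:.4f} sites/km^2, {dist:.3f} km spacing")
# -> 0.0116 / 9.295 km, 0.0617 / 4.025 km, 0.3333 / 1.732 km
```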
Results
We first discuss in detail the results for soil moisture sampling. Then we extend the same methodology to brightness temperature and compare both results. We also evaluate the potential sampling error for "footprints" with grid sizes of 3 and 9 km, because the SMAP products also include combined active-passive soil moisture retrievals at higher spatial resolutions (e.g., EASE-grid 9 km) and a product based only on the active sensor (EASE-grid 3 km). Two kinds of percentages are used in this study. One is the confidence level, which is related to the number of potential network configurations for one footprint as given by Eq. (2). The other percentage is related to the PDF of the maximum allowed sampling distance with a confidence level of 70 % (we also use 100 % for comparison), which is based on Eqs. (4) and (5).
Soil moisture
We compare the true (but virtual) spatial arithmetic average of soil moisture at the SMOS-SMAP resolution with the arithmetic average of soil moisture at 0.05 m depth computed from the sampling points taken at distances ranging from 400 m (i.e., each VR01 grid column, no sampling error) to 18 km (about half the width of a SMOS or SMAP pixel). First, we analyze the probability density function of the sampling error as it varies with the sampling distance, taking the SC ft samples for one whole year of all footprints in the entire model area into account (Eq. 3, Figs. 3 and 6).
Then we analyze the evolution over the year of the daily PDF of the maximum allowed sampling distance (for keeping the sampling error below the nominal value of 0.04 cm³ cm⁻³ with 70 % confidence) from SC td samples (Eq. 4, Figs. 4 and 7). Finally, we look at the spatial variability of the maximum allowed sampling distance (for keeping the sampling error below the nominal value of 0.04 cm³ cm⁻³ with 70 % confidence) based on all samples of one SMOS-SMAP pixel over the year, SC fd (Eq. 5, Figs. 5 and 8). When we analyze the sampling errors for brightness temperatures, we use footprint averages weighted by the antenna function; using the weighting function according to the dB pattern for soil moisture leads to differences below 0.01 cm³ cm⁻³; thus, the averaging procedure does not impact our conclusions for soil moisture. We compute the maximum sampling error for each sampling distance and each footprint from the daily observations over 1 year of all network configurations. The distributions of the corresponding 320 values are displayed in the box-whisker plots in Fig. 3a. Thus each value entering the distribution at a given sampling distance (individual box-whisker plot in Fig. 3) stems from the sampling network for one of the 320 SMOS footprints that leads to the largest sampling error, taking all daily observations over a year into account (Eq. 3). With a sampling distance of 400 m, we accurately reproduce the true (but virtual) arithmetic soil moisture average, i.e., the maximum error is zero. Maximum errors naturally increase with sampling distance, as demonstrated by the widening of the maximum error distribution. The median of the maximum sampling error increases almost linearly, by about 0.022 cm³ cm⁻³ per kilometer increase in sampling distance. The spread of the maximum error increases from less than 0.01 cm³ cm⁻³ at 0.8 km to approximately 0.4 cm³ cm⁻³ at 18 km, with quite some variability between the sampling steps. To guarantee a sampling error below 0.04 cm³ cm⁻³ (the assumed accuracy of SMOS-SMAP retrievals) with 100 % confidence everywhere in the region at any time of the year (Fig. 3a), the maximum sampling distance should not exceed 2.8 km. With a 4.8 km sampling distance, for 50 % of the area and/or days of the year, we get sampling errors above 0.04 cm³ cm⁻³. At a sampling distance of 4.4 km (about 18 sites within a 43 km × 43 km pixel), the same would hold for only 25 % of the satellite pixels. Figure 3c displays the PDF of the maximum sampling error corresponding to the 70th percentile of the sampling error PDF computed for each satellite pixel over the year. Thus, to guarantee a sampling error below 0.04 cm³ cm⁻³ for all network configurations for up to 70 % of all pixels and all days of the year, the sampling distance must not exceed 6 km. At a sampling distance of 12 km, only 50 % of the pixels fulfill this requirement.
Figure 3. Box-whisker plots, with the median in red, 25th and 75th percentiles as bounds of the box, and whiskers encompassing all values of the maximum sampling errors for the 320 satellite footprints of the arithmetic mean soil moisture estimated for all network configurations observing twice a day over 1 year at the given sampling distances (abscissa). Panel (a) shows the absolute maximum error, while (b) displays the results for the 70th percentile of the sampling error distribution at each satellite footprint. The horizontal dashed line is the 0.04 cm³ cm⁻³ retrieval error anticipated for SMOS and SMAP.
Overall, only about one-quarter of the stations needed for 100 % confidence are required when the requirement to stay within the 0.04 cm³ cm⁻³ error margin is relaxed to 70 %.
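Building on the sampling-error sketch above, the maximum allowed sampling distance for a given target accuracy and confidence level reduces to a percentile threshold over the pooled error distributions; the sketch below uses synthetic error distributions and a hypothetical helper name only to illustrate this step.

```python
import numpy as np

def max_allowed_distance(error_by_distance, target=0.04, confidence=70):
    """error_by_distance: dict mapping sampling distance (km) to an array of
    sampling errors (all configurations, footprints, and days pooled).
    Returns the largest distance whose error percentile stays within the target."""
    allowed = [d for d, e in error_by_distance.items()
               if np.percentile(e, confidence) <= target]
    return max(allowed) if allowed else None

# Illustrative, synthetic error distributions that grow with sampling distance:
rng = np.random.default_rng(3)
errors = {d: np.abs(rng.normal(0.0, 0.004 * d, 10000)) for d in (2, 4, 6, 8, 10, 12)}
print(max_allowed_distance(errors))   # largest distance meeting 0.04 cm3/cm3 at the 70th percentile
```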
As outlined above, we can also quantify from the simulations the allowed maximum sampling distance on a daily basis from the samples with the size given by Eq. (4). According to Fig. 4b, for 80 % of the SMOS-SMAP pixels, the maximum allowed sampling distance is between 8.4 and 16 km, which corresponds to 7-26 stations for SMOS (43 km) and 5-18 stations for SMAP passive (36 km) to keep the sampling error below 0.04 cm³ cm⁻³ with 70 % confidence. A seasonal variation is not apparent, but rainfall events (Fig. 4a) affect the distributions by increasing the maximum allowed sampling distances, because the surface soil moisture becomes more homogeneously distributed in space due to the typically quite widespread precipitation in that region. The opposite occurs during dry periods, because evaporation, drainage, and runoff over various soil and land cover types tend to create spatially heterogeneous soil moisture distributions; this heterogeneity typically reaches its maximum at intermediate soil moisture levels (Brocca et al., 2010).
The spatial distribution of the annual maximum sampling distance allowed to guarantee a sampling error below 0.04 cm³ cm⁻³ with 70 % confidence, computed from the samples given by Eq. (5), and its RMS for the year 2015 (Fig. 5) indicate that the southeastern region allows sampling distances of up to 16 km; thus only nine sites are needed within a SMOS-SMAP pixel to estimate the footprint-averaged soil moisture with a sampling error below 0.04 cm³ cm⁻³. Also, the annual variation is particularly small (blue). For the rest of the region, maximum allowed sampling distances range from 7 to 10 km (radius); thus, more than nine sites are required within one footprint. The annual variation of the maximum sampling distances for those footprints is larger than in the southeast. The mean allowed sampling distances and their day-to-day changes are only weakly correlated (correlation coefficient 0.40), but show larger-scale common patterns.
Figure 4. Precipitation in VR01 (a) and time series of the distribution of the maximum allowed soil moisture sampling distance for each SMOS or SMAP pixel to assure a sampling error below 0.04 cm³ cm⁻³ (70 % confidence) for the year 2015 (b). The colored intensity is proportional to the probability of occurrence. The 10th and 90th percentiles are indicated as blue and red lines, respectively. Every precipitation event makes the soil moisture field more homogeneous, shifting the PDF toward larger maximum spatial sampling distances, which means fewer stations are required.
Brightness temperature
We now determine the maximum sampling distances allowed for networks of ground-based microwave radiometers to estimate SMOS-SMAP footprint brightness temperatures. To this end, we transform the target accuracy of SMOS-SMAP soil moisture retrievals of 0.04 cm³ cm⁻³ to the accuracy of the corresponding brightness temperature, which is approximately 10 K for H polarization and 5 K for V polarization (10 K/5 K) according to CMEM forward simulations (Sabater et al., 2011; Monerris Belda, 2009). We note that this brightness temperature accuracy is not the instrument observing error of the (virtual) microwave radiometer, but the sensitivity of the microwave forward transfer model to soil moisture. We are aware that the radiometric accuracies of ground-based and satellite-borne sensors are much better, and that the accuracy of the soil moisture-brightness temperature relation is mainly responsible for the retrieval accuracy; thus, we use the 10 K/5 K uncertainty only as a proxy for the overall error.
Comparing the high-resolution TB sampled at certain distances with the antenna-pattern-weighted footprint TB, Fig. 6 shows patterns that differ from those for soil moisture. Even at a sampling distance of 800 m, the sampling error might exceed the 10 K limit for H polarization (5 K for V polarization) in certain regions and at certain times. If we want to keep the limit with a probability of 75 % (the upper boundary of the boxes in Fig. 6, 100 % confidence panels), the maximum sampling distance must stay below 4.4 km. For a sampling distance of 5.2 km, the error may exceed the nominal 10 K (5 K) with a probability of 50 %. For a 9.2 km sampling distance, the maximum sampling error is always above the nominal values for some region and/or some day of the year. Even if we require that the nominal error is undercut only with a probability of 70 % for all pixels and days, a sampling distance of 800 m is not sufficient. If only 50 % of all networks are required to fulfill the 10 K/5 K bound, a sampling distance of 10 km is sufficient.
Figure 5. Spatial distribution of the mean of the maximum allowed soil moisture sampling distance in the model area required to keep the maximum sampling error below 0.04 cm³ cm⁻³ over the whole year. The circle radius indicates the maximum allowed sampling distance in the scale shown in the map, while its color (see color bar) gives the RMS of the maximum allowed sampling distance over time for the year 2015.
The time series of the distribution of the maximum sampling distances for brightness temperature (Fig. 7) is quite similar to the one for soil moisture. Figure 7 only covers the periods without freeze-thaw state transitions, in which liquid water in the soil dominates the brightness temperature signal. Values range from 6.8 to 16.4 km in most cases. The spread of the sampling error has, however, a distinct seasonal variation; e.g., the maximum sampling distance for 90 % of the footprints is 11.6 km from DOY 100 to 275 and 8.8 km for the rest of the year.
The spatial distribution of the annual maximum sampling distance allowed to guarantee a sampling error of less than 10 K/5 K for H/V polarized brightness temperatures, and its RMS for the year 2015 (Fig. 8), are similar for H and V polarization but show a substantial spatial contrast compared to the results for soil moisture (Fig. 5). Again, the southeast corner of the model region allows for larger maximum sampling distances, but there are now also other distinct regions with larger allowed maximum sampling distances. The additional input parameters required by CMEM (especially LAI) and its internal parameters affect the representativeness of sites for brightness temperature. LAI dominates the variation of the representativeness of ground-based observations and also its temporal variation, as can be inferred from the correlation between large maximum sampling distances and their variability over the year (correlation coefficient 0.84/0.83 for H/V polarization), which is not observed for soil moisture. LAI is the only input to CMEM that can lead to such a temporal variation, because other parameters such as air temperature, soil moisture, and soil properties are either fixed or do not affect the brightness temperature as strongly.
3.3 Maximum sampling distance differences between soil moisture and brightness temperature
The differences in the variability of the maximum allowed sampling distance for soil moisture and brightness temperature can be explained with the microwave transfer model CMEM. The relationship between soil moisture and brightness temperature is complex and non-unique (Fig. 9a, b). For example, a soil moisture value of 0.4 cm³ cm⁻³ relates to brightness temperatures from 180 to 250 K for H polarization and from 225 to 265 K for V polarization due to the variation of vegetation cover, soil properties, and terrain. As already mentioned in the introduction, the spatial resolution of the SMAP active product is 3 km and that of the passive-active merged soil moisture product is 9 km. SMAP Cal/Val requires three stations for the evaluation of the former and five stations for the latter product (Colliander et al., 2017b). We computed the station distance required to keep the sampling error below the nominal 0.04 cm³ cm⁻³ for both products using the same methodology as above. Due to limited computation capacity, only the higher-resolution pixels in the center of the 43 km SMOS footprints are evaluated. According to the results (Fig. 10), the probability that 3 and 9 km pixels sampled with three and five stations, respectively, have sampling errors below the nominal value of 0.04 cm³ cm⁻³ is below 40 % and thus much lower than the required 70 %. The temporal variation of the confidence level is larger for the 3 km than for the 9 km grid size.
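The confidence level used here can be illustrated with the following sketch, in which a pixel is repeatedly sampled with a fixed number of randomly placed stations and the fraction of networks meeting the 0.04 cm³ cm⁻³ requirement is counted; the synthetic pixels and station counts are assumptions for illustration only.

```python
# Illustrative confidence estimate for a fixed number of stations per pixel.
import numpy as np

def confidence_level(field, n_stations, n_networks=1000, threshold=0.04, seed=0):
    """Fraction of random n-station networks whose mean deviates from the
    pixel mean by less than the accuracy threshold."""
    rng = np.random.default_rng(seed)
    flat = field.ravel()
    truth = flat.mean()
    hits = 0
    for _ in range(n_networks):
        idx = rng.choice(flat.size, size=n_stations, replace=False)
        if abs(flat[idx].mean() - truth) < threshold:
            hits += 1
    return hits / n_networks

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic 3 km and 9 km pixels at 400 m resolution (made-up fields).
    pixel_3km = np.clip(0.25 + 0.08 * rng.standard_normal((8, 8)), 0.02, 0.45)
    pixel_9km = np.clip(0.25 + 0.08 * rng.standard_normal((23, 23)), 0.02, 0.45)
    print("3 km pixel, 3 stations:", confidence_level(pixel_3km, 3))
    print("9 km pixel, 5 stations:", confidence_level(pixel_9km, 5))
```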
The impact of land surface inhomogeneity
Areas with vegetation water content above 5 kg m −2 (mostly forests) are flagged in SMAP retrievals. The networks used in the studies by Colliander et al. (2017b) and Famiglietti et al. (2008) were selected because of their relative homogeneity; thus, forested patches, open water, permanent ice and snow, urban areas, and wetlands are excluded. Soil moisture maps from SMAP/SMOS are, however, global. Thus estimates are provided everywhere, and signals from open water surfaces on subgrid scales may influence the products. We used our simulated observations to study the impact of subpixel contributions of forested areas on the sampling errors.
In total, only 16 of the 320 footprints covering the model area have forest fractions below 15 % and negligible surface water contributions; such footprints are usually considered ideal for soil moisture Cal/Val. For both soil moisture and brightness temperature, their maximum sampling errors are considerably lower than those of all sites for all sampling distances (Fig. 11). Thus, excluding sites with larger forest fractions leads to lower sampling errors.
Figure 6. Same as Fig. 3 but for the sampling error of the brightness temperature. The respective brightness temperature errors (equivalent to a soil moisture accuracy of 0.04 cm³ cm⁻³) are 10 K for H polarization and 5 K for V polarization and are indicated as dashed horizontal lines.
The results shown in Fig. 11 do not mean that forest sites always have higher soil moisture errors than non-forest sites, but picking Cal/Val sites with favorable conditions reduces the required sampling density, which may, however, affect their representativeness. Moreover, the required sampling density inferred from non-forest sites cannot be extrapolated to forest sites.
Conclusion and discussion
We used a virtual reality generated with a fully coupled subsurface-vegetation-atmosphere model platform over southwestern Germany, with a spatial resolution of 400 m for the land components, to quantify the sampling error of the arithmetically averaged soil moisture and the weighted-average brightness temperatures estimated from in situ ground-based observation networks covering SMOS-SMAP-like footprints of 43 km diameter, for a wide range of potential sampling distances. By using a virtual reality at such high resolution, we have a physically consistent three-dimensional evolution of the terrestrial system at our disposal, from which we can take virtual soil moisture observations and, via the radiative transfer model CMEM and a satellite antenna function, microwave brightness temperature observations, from the highest resolution of 400 m up to any larger resolution.
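The antenna-weighting step mentioned above can be sketched as follows; a Gaussian antenna pattern is assumed for simplicity, whereas the actual SMOS and SMAP antenna functions differ in detail.

```python
# Minimal sketch of aggregating high-resolution TB to a footprint value with an
# assumed Gaussian antenna pattern centered on the footprint.
import numpy as np

def footprint_tb(tb_field, res_km=0.4, fwhm_km=43.0):
    """Antenna-pattern-weighted mean of a high-resolution TB field."""
    ny, nx = tb_field.shape
    y = (np.arange(ny) - (ny - 1) / 2.0) * res_km
    x = (np.arange(nx) - (nx - 1) / 2.0) * res_km
    yy, xx = np.meshgrid(y, x, indexing="ij")
    sigma = fwhm_km / 2.355                         # FWHM -> standard deviation
    weights = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return np.sum(weights * tb_field) / np.sum(weights)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    tb_h = 250.0 + 10.0 * rng.standard_normal((108, 108))  # ~43 km at 400 m, made up
    print("footprint TB_H [K]: %.2f" % footprint_tb(tb_h))
```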
As an upper threshold for the sampling error of ground-based sensor networks when estimating averages over SMOS-SMAP pixels, we adopted the target SMOS-SMAP soil moisture retrieval accuracy of 0.04 cm³ cm⁻³. We quantified the maximum sampling distance which still keeps the sampling error below that accuracy, either for all or for 70 % of all SMOS-SMAP pixels in the modeling region over 1 year, for all possible network configurations. A primary assumption in our study is that the estimation of soil moisture for an area with a diameter of about 400 m is possible, or in other words that a single station within a 400 m area is representative of its spatial average, an assumption also discussed in Famiglietti et al. (2008). Compared to the region analyzed in Famiglietti et al. (2008), our study uses a much more realistic terrain and excludes subjective factors in selecting suitable Cal/Val sites. Because of this, the soil moisture error in our study grows much faster with increasing sampling distance. We also find that the estimation of area-averaged brightness temperatures from a network of ground-based stations has a different error growth with increasing sampling distance than soil moisture, despite an initial linear growth for both of them (compare Figs. 3 and 6). Thus, a representative soil moisture network does not guarantee a representative radiometer network for the estimation of area-averaged brightness temperature, nor that brightness temperatures computed for the soil moisture stations can be used for that estimate. But Figs. 3 and 6 also show that sampling distances below 6 km still fulfill the 70 % confidence requirement for keeping the sampling error below the nominal error.
Figure 9. Scatter plots of the joint PDF between brightness temperature at H (a) and V (b) polarization and soil moisture, computed from the 400 m resolution virtual reality for 1 year. Both the temporal and the spatial variation are included.
Figure 10. Spatial distribution of the soil moisture sampling confidence to achieve the 0.04 cm³ cm⁻³ accuracy requirement by sampling 3 km (a) and 9 km footprints (b) with three and five sites, respectively (see the scale below the color bar). The colors show the minimum confidence level throughout the year 2015 for every footprint; the scale below the color bar gives the soil moisture accuracy that can be achieved.
Apart from plant types, there is no apparent similarity between the patterns of clay, sand, and elevation (Fig. 1) and the pattern of the spatial sampling distance (Fig. 5). Soil properties may be related to the regional climate (annual precipitation, radiation flux balance, etc.). For instance, arid regions usually contain higher sand fractions, but such areas are seldom the focus of soil moisture studies because of their low soil moisture variation. Transition zones like our model area usually encompass various soil properties, which are often correlated with land use and vegetation and thus with the plant functional type used in CLM. Topography also affects the soil moisture and TB distributions, but it is difficult to isolate the impact of land use and vegetation because soil properties determine both the water-holding capacity and the plant cover. In practice, soil moisture monitoring networks avoid complex terrain. Homogeneous terrain and landscape lead to an overestimation of satellite soil moisture product accuracies.
The statistical results in our study differ from those in Famiglietti et al. (2008) because our focus is on the satellite footprint scale and not on the representativeness of one station within a network. For example, a particular sensor may not represent the actual 400 m average, but one such sensor every 400 m may statistically represent a much larger footprint sufficiently well. A similar concept is adopted in ensemble forecasts using members, e.g., with different physics packages, none of which is expected to be the truth (Lewis, 2005; Leutbecher and Palmer, 2008). The volume sensed by a soil moisture sensor, which measures the dielectric constant of the soil or other media using capacitance/frequency-domain technology, is about a 10 cm sphere. Thus, the study by Famiglietti et al. (2008) assumes soil moisture homogeneity on the scale of meters. We believe that the 400 m soil moisture homogeneity assumption does not interfere with our conclusions and that our study can be considered a complement to the study by Famiglietti et al. (2008).
Figure 11. Maximum sampling errors of the arithmetic mean of soil moisture (a) and brightness temperature (b) estimated from all sites and from sites with forest cover below 15 %, plotted against the average sampling distance.
The calibration and validation of passive satellite-based L-band soil moisture estimates are difficult due to the large subpixel variability (Lv et al., 2016b, 2019). Even with a perfect microwave transfer model and precise sensors, we can hardly find an appropriate in situ observation to compare with. While soil moisture also varies in the vertical, sensors are usually mounted at a fixed depth; thus, comparisons with satellite observations require knowledge of the microwave penetration depth, which is, however, generally unknown. Lv et al. (2018) developed a model based on the soil effective temperature which sheds light on this fundamental problem. This study isolates the sampling density issue from other factors and is a test of the current Cal/Val network standard without previous knowledge of the site. The SMAP team suggests 15 sites for a 36 km by 36 km grid size (Colliander et al., 2017b), and from the sampling error perspective this study agrees with that configuration for typical mid-latitude European regions. For a 36 km by 36 km grid size, the required number of sampling sites would range from about 36 (6 km spacing) to 4 (17 km spacing). However, five sites for 9 km by 9 km and three sites for 3 km by 3 km will miss the 70 % confidence level requirement over this area. Since SMAP's 9 and 3 km soil moisture products are derived from a combination of passive and active microwave signals, which have a lower accuracy than the passive ones (Entekhabi et al., 2010), their Cal/Val campaigns will have to determine sampling distances at a lower confidence level.
Our virtual reality contains extensive land cover variability (Fig. 1); thus, it would be helpful to adapt our approach to less complicated regions with variabilities closer to those of typical Cal/Val station networks. Overall, we find that a soil moisture sampling distance of about 3 km is necessary to always keep the sampling errors below the nominal value. Allowing for a failure probability of 30 % extends this distance to 10 km. For brightness temperatures, the sampling requirements are much stricter; already at an 800 m sampling distance it cannot be guaranteed that the sampling error remains below the equivalent threshold of 10 K/5 K for H and V polarization, respectively, even when allowing for a 30 % probability of failure. The error sources in retrieving soil moisture from TB data are also large in reality but are not of concern in this study, because VR01 and the TB produced by CMEM exclude all uncertainties except that due to the sampling distance.
Our results are not only useful for the planning of ground-based soil moisture networks; they also contribute to a better understanding of the relation between brightness temperatures observed on the ground, or simulated at high resolution, and those observed from satellites, apart from the non-linearity effects of radiative transfer (e.g., Drusch et al., 1999). The study allows one, for example, to quantify to what extent a bias between satellite brightness temperatures and forward simulations can be explained by the spatial sampling (e.g., Figs. 5, 8, and 11), and to understand the similarities and dissimilarities between observed soil moisture and brightness temperature time series (Figs. 4 and 7). Since ground-based soil moisture networks will always cover only certain parts of a satellite pixel, a bias between the two must be expected. The differing representativeness of such networks for soil moisture and for brightness temperature can also cause biases between satellite and ground-based estimates of both quantities.
While the allowed maximum sampling distances do not change much over the year for soil moisture, except after large-scale precipitation events, which allow larger sampling distances, their equivalent for brightness temperature has a strong seasonal variation because of the blurring effect of vegetation during the growing season, when brightness temperatures become more homogeneous. The spatial distributions of the maximum sampling distances and their local variances behave quite differently for soil moisture and brightness temperature. The spatial patterns differ, and while the maximum allowed sampling distance and its temporal variation are firmly related for brightness temperature, they are barely related for soil moisture; this behavior is caused by the complexity of the other factors influencing microwave radiative transfer.
Our study strongly suggests that the sampling density of current SMOS-SMAP ground-based Cal/Val networks, and the resulting potential sampling error of the estimated pixel-mean soil moisture and brightness temperatures considered in such studies, should be reviewed carefully. We expect this study to help improve the understanding of the errors of satellite-derived soil moisture. | 11,523.4 | 2019-01-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
MMP-9 and CXCL8/IL-8 Are Potential Therapeutic Targets in Epidermolysis Bullosa Simplex
Epidermolysis bullosa refers to a group of genodermatoses that affect the integrity of epithelial layers, phenotypically resulting in severe skin blistering. Dowling-Meara, the major subtype of epidermolysis bullosa simplex, is inherited in an autosomal dominant manner and can be caused by mutations in either the keratin-5 (K5) or the keratin-14 (K14) gene. Currently, no therapeutic approach is known, and the main objective of this study was to identify novel therapeutic targets. We used microarray analysis, semi-quantitative real-time PCR, western blot and ELISA to identify differentially regulated genes in two K14 mutant cell lines carrying the mutations K14 R125P and K14 R125H, respectively. We found kallikrein-related peptidases and matrix metalloproteinases to be upregulated. We also found elevated expression of chemokines, and we observed deregulation of the Cdc42 pathway as well as aberrant expression of cytokeratins and junction proteins. We further demonstrated that expression of these genes is dependent on interleukin-1β signaling. To evaluate these data in vivo, we analysed the blister fluids of epidermolysis bullosa simplex patients vs. healthy controls and identified matrix metalloproteinase-9 and the chemokine CXCL8/IL-8 as potential therapeutic targets.
Introduction
Epidermolysis bullosa (EB) is a genetically heterogeneous disease affecting the skin and mucous membranes. EB is characterized by the formation of blisters and erosions after minor traumatization, thereby significantly compromising life quality. EB is divided into four major groups: the simplex type (EBS), the dystrophic type (DEB), the junctional type (JEB) and Kindler syndrome. The genes underlying the different subtypes of EB have major functions in mechanical stabilization of the basement membrane zone. Depending on the gene which is affected, EB can be either a relatively mild disease or a life-threatening disease due to secondary complications like squamous cell carcinomas in dystrophic EB, in which the collagen VII (COL7A1) gene is mutated [1].
In the EBS subtype, mutations in the keratin-5 (K5), keratin-14 (K14) and plectin (PLEC) genes are causative, with many being inherited as autosomal dominants and therefore presenting a challenge to gene therapy. EBS type Dowling-Meara (EBS-DM) is caused by such dominant mutations in the K5 and K14 genes and belongs to the more severe subtypes within the EBS group [2].
The type-II keratin K5 and the type-I keratin K14 are the major components of the intermediate filament (IF) network in basal cells of epithelia, forming heterodimers that are bundled as tonofilaments. These IFs are attached to desmosomes and hemidesmosomes and provide mechanical stability not only within a single cell but also between neighboring cells and to the basement membrane [3]. Due to the dominant nature of K5 and K14 mutations in EBS-DM, misfolded proteins can be integrated into the IFs, rendering them sensitive to mechanical stress. Upon trauma, these filaments disrupt and the keratinocytes lyse, leading to intra-epidermal blistering [4].
Yet, the function of IFs is considered to be more than just to provide mechanical stability to basal keratinocytes. It was shown that, upon mechanical stress, major MAPK pathways like ERK are activated in K14 mutant cell lines and change the apoptotic machinery within these cells [5]. Another form of stress response was shown in K14 mutant cell lines and in a K5-/- mouse model for EBS. In the latter, the inflammatory cytokines IL-6 and IL-1β were found to be upregulated in K5-/- mouse skin, and it was hypothesized that keratin mutations contribute to EBS by inducing an inflammatory phenotype that mediates a stress response [6].
An important role of IL-1β in the skin is to activate keratinocytes in many pathological conditions and upon wounding. In basal keratinocytes, IL-1β is present in the cytoplasm in a precursor form. After injury, IL-1β is processed and released and activates signal transduction pathways in surrounding cells in both autocrine and paracrine fashion. In keratinocytes, IL-1β alters gene expression and causes cells to become proliferative and migratory [7].
Based on the fact that many stress pathways are activated in K14 mutant cells, we hypothesized that these pathways contribute to the blistering phenotype of EBS-DM patients to a greater extent than is usually supposed. In the present study, we investigated the gene expression profiles of two EBS-DM cell lines and compared them to that of a wild-type cell line. In a hypothesis-driven as well as hypothesis-generating approach, we identified a plethora of regulated genes in these cell lines. We investigated the relevance of these genes in vivo, and our data illuminate potential therapeutic targets that may provide a basis for future medical treatments.
Ethics Statement
In the course of this study we used a punch biopsy of a five-year-old patient to generate the immortalized cell line EBDM-1 (see below). The biopsy was taken for diagnostic reasons, and the parents gave written informed consent to use the remaining material for scientific research. The Salzburg ethics committee confirmed that in this case no ethics approval is necessary and that the procedure is in accordance with the Krankenanstalten- und Kuranstaltengesetz, § 8c (Austrian Federal Hospital Act, section 8c), and with the Salzburger Krankenanstaltengesetz 2000, § 30 (Salzburg Hospital Act 2000, section 30). No institutions or hospitals outside of Austria were associated with research on primary patient material. Samples of total RNA of established cell lines were processed outside of Austria for microarray analysis by a commercial service (see below). For the microarray analysis, the cell lines were anonymized and could not be linked to certain individuals. Therefore, all ethical issues fall under Austrian legislation.
Cell Lines and Cell Culture
All cell lines used in this study were immortalized in the same way with HPV16 E6/E7. NEB-1 and KEB-7 cell lines were generously provided by the laboratory of E.B. Lane, College of Life Sciences, University of Dundee. KEB-7 cells are keratinocytes derived from a Dowling-Meara patient carrying the K14 R125P mutation (coding sequence G375C) and NEB-1 cells are wild-type keratinocytes derived from a healthy relative of this patient [8].
EBDM-1 cells were obtained from a skin biopsy of a five-year-old Dowling-Meara patient heterozygous for a K14 R125H mutation (coding sequence G375A). The skin biopsy for EBDM-1 cells was performed at the Dermatology Department of Paracelsus Medical University Salzburg (see ethics statement). The primary keratinocytes were isolated by incubating the biopsy in trypsin-EDTA for 30 minutes and transferring the epidermis onto a feeder layer in EpiLife medium (Invitrogen). Immortalized keratinocyte cell lines were cultured in RM medium (DMEM plus 25% Ham's F12 medium, 10% heat-inactivated FCS, 1% Pen/Strep and additional growth factors: adenine 1.8×10⁻⁴ M, hydrocortisone 0.4 µg/ml, transferrin 5 µg/ml, liothyronine 2×10⁻¹¹ M, insulin 5 µg/ml and EGF 10 ng/ml). All cell lines were incubated at 37 °C and 5% CO₂ in a humidified atmosphere. All experiments were performed within comparable passages and at 70% confluence. For interleukin-1β experiments, a human IL-1β/IL-1F2 polyclonal goat IgG antibody (R&D Systems, # AB-201-NA) was added to the culture medium at a final concentration of 2 µg/ml of medium.
Microarray Analysis
According to MIAME guidelines [9], the microarray was performed as follows: NEB-1 and KEB-7 cells of passage 20 and EBDM-1 cells of passage 13 were harvested at 70% confluence, and total RNA was extracted from cell lysates using an RNeasy Mini Kit (Qiagen, # 74104) according to the manufacturer's protocol. Sample processing and data analysis were performed by an Affymetrix Service Provider and Core Facility, "KFB - Center of Excellence for Fluorescent Bioanalytics", Josef-Engert-Straße 9, D-93053 Regensburg, Germany. At KFB, an Ambion WT Expression Kit was used to generate sense-strand cDNA from total RNA of the NEB-1, KEB-7 and EBDM-1 samples according to the manufacturer's protocol. The sense-strand cDNA was then fragmented, labelled and hybridized using the Affymetrix GeneChip WT Terminal Labeling and Hybridization Kit according to the manufacturer's protocol. For data analysis, the Affymetrix Expression Console Software was used, and the Robust Multi-chip Analysis (RMA) algorithm was applied with default settings. The microarray dataset was submitted to ArrayExpress (http://www.ebi.ac.uk/arrayexpress/) as two separate experiments (accession number KEB-7 vs. NEB-1: E-MTAB-1640; accession number EBDM-1 vs. NEB-1: E-MTAB-1641). The bioinformatic online tool DAVID (http://david.abcc.ncifcrf.gov/) was used for further analysis of the microarray dataset [10]. In order to obtain statistically significant expression data, the expression of regulated target genes identified in the microarray was confirmed by SQRT-PCR. Targets were chosen for further investigation with SQRT-PCR even when they were found to be regulated in only one of the two EBS-DM cell lines. The microarray data relevant to the present study are given in detail in the results section, including the Sig log ratio, the fold expression and the mRNA accession number (RefSeq) for each identified gene. The Sig log ratio is the difference in the log2 signal of a probe between two arrays (e.g. gene X in KEB-7 vs. NEB-1). A Sig log ratio of +1 is equivalent to two-fold upregulation. A Sig log ratio of -1 is equivalent to two-fold downregulation. A Sig log ratio of 0 indicates no change.
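For clarity, the relation between the Sig log ratio and the fold change can be written as a small helper; this is merely a restatement of the definition above, not part of the Affymetrix software.

```python
# Convert a signal log ratio (base-2 log of the expression ratio) into a signed
# fold change: +1 -> two-fold up, -1 -> two-fold down, 0 -> no change.
def fold_change(sig_log_ratio: float) -> float:
    if sig_log_ratio >= 0:
        return 2.0 ** sig_log_ratio
    return -(2.0 ** (-sig_log_ratio))

if __name__ == "__main__":
    for slr in (1.0, 0.0, -1.0, 2.5):
        print(slr, "->", fold_change(slr))
```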
Semi-quantitative Real-time PCR (SQRT-PCR)
Cells were harvested at 70% confluence and total RNA was extracted from cell lysates using an RNeasy Mini Kit (Qiagen, # 74104) according to the manufacturer's protocol. DNase I digestion (DNase I, Amplification Grade, Sigma-Aldrich, # AMPD1-1KT) and cDNA synthesis (iScript cDNA Synthesis Kit, BIO-RAD, # 170-8891) were also performed following the manufacturers' protocols. Primers were designed to bind over exon-exon junctions to exclude binding to intronic sequences and to amplify an equal product length of 150 bp for all target genes. GAPDH was used as a reference gene. SQRT-PCR was performed using iQ SYBR Green Supermix (BIO-RAD, # 170-8882) in a BIO-RAD CFX96 Real-Time System, C1000 Thermal Cycler. A three-step protocol was used, and the 2^-ΔΔCt method was applied for quantification of gene expression [11].
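As an illustration of this quantification step, the following sketch applies the 2^-ΔΔCt calculation [11] to hypothetical Ct values, with GAPDH as the reference gene as described above; the numbers are made up and not measured data.

```python
# Relative expression by the 2^-ddCt method: normalize target Ct to the
# reference gene (GAPDH) in sample and control, then compare the two.
def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample      # delta-Ct in the sample
    dct_control = ct_target_control - ct_ref_control   # delta-Ct in the control
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

if __name__ == "__main__":
    # Hypothetical Ct values: a target gene in KEB-7 vs. NEB-1, both vs. GAPDH.
    fold = ddct_fold_change(ct_target_sample=24.1, ct_ref_sample=17.8,
                            ct_target_control=26.9, ct_ref_control=18.0)
    print("fold change (sample vs. control): %.2f" % fold)
```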
Isolation of Total Protein from Cell Cultures
Cells were grown to 70% confluence and then washed twice with PBS. The cells were covered with lysis buffer (0.5 M Tris-HCl pH 6.8, 20% glycerol, 10% SDS, 5% β-mercaptoethanol and Roche complete protease inhibitor), scraped off with a rubber policeman and transferred into microcentrifuge tubes. The cells were lysed by pipetting three times through a 22G syringe and then incubated at 95 °C for five minutes. The cell lysates were stored at -20 °C.
Isolation of Proteins from Cell Culture Supernatant
For western blot analysis of kallikrein-related peptidases, cells were incubated for 48 h in EpiLife medium without FCS. The supernatant was collected and filtered through a cell strainer (BD Falcon) to remove dead cells and cell debris. Complete mini protease inhibitor (Roche) was added to the supernatant, and the solution was concentrated by centrifugation through a centrifugal filter (Amicon Ultra, Millipore) for 20 minutes at 4 °C and 6800 g. The concentrate was mixed with 4× sample buffer and subjected to SDS-PAGE and western blotting as described below.
Collection of Blister Fluid
Patients with various EBS subtypes visited the Dermatology Department of Paracelsus Medical University and the EB House Austria for routine check-ups and wound care. In the course of wound care, blister fluid was collected with a syringe. Roche complete protease inhibitor was added to the samples, and aliquots of 1/10 and 1/20 dilutions were stored at -80 °C. Control samples of three otherwise healthy volunteers were treated in the same way. Table 1 summarizes the data of patients and healthy controls.
Western Blot
Protein levels were determined by SDS-PAGE and western blot analysis on NuPAGE 10% Bis-Tris Gels (Invitrogen) using 20 µl of cell lysate or concentrated cell culture supernatant of each cell line examined. Annexin-I was used as a loading control for cell lysates. As a negative control for the western blot, one randomly chosen extra sample was applied; all gels included lanes with 4 µl of size marker (BIO-RAD Precision Plus Protein WesternC Standards). The electrophoresis run was performed at 100 V for about 2 h in 1× NuPAGE buffer (Invitrogen). The size-separated proteins on the polyacrylamide gel were then electrophoretically transferred to a nitrocellulose membrane (Amersham Hybond-ECL, RPN78D). Blotting was done at 4 °C at 250 mA for 1 h in transfer buffer (Tris base 25 mM, glycine 192 mM, methanol 20%). The membrane was then incubated in blocking solution (5% low-fat dry milk powder diluted in 1× TBS containing 0.2% Tween 20) for 1 h at room temperature to saturate unspecific antibody-binding sites. The blocking reagent was then discarded and the primary antibody was applied (Cytokeratin-14, mouse monoclonal IgG, Santa Cruz Biotechnology, diluted 1/1000 in blocking reagent; Cytokeratin-15, rabbit monoclonal IgG, abcam, diluted 1/10,000 in blocking reagent; Cytokeratin-16, goat polyclonal IgG, Santa Cruz Biotechnology, diluted 1/500 in blocking reagent; phospho-ERM, rabbit polyclonal, Cell Signaling, diluted 1/500 in blocking reagent; KLK5 antibody, goat polyclonal IgG, abcam, diluted 1/100 in blocking reagent; KLK7 antibody, rabbit polyclonal IgG, abcam, diluted 1/100 in blocking reagent; Annexin-I, mouse monoclonal IgG, Santa Cruz Biotechnology, diluted 1/1000 in blocking reagent). Primary antibodies were incubated at 4 °C overnight. No primary antibody was applied to the negative control. The next day, the primary antibody solution was washed away with blocking solution three times for ten minutes at room temperature. The secondary antibodies (goat anti-mouse IgG:HRP, Serotec, 1/1000 in blocking solution; goat anti-rabbit IgG:HRP, abcam, 1/1000 in blocking solution; rabbit anti-goat IgG:HRP, Dako, 1/1000 in blocking solution) were then applied to the membrane containing the samples as well as to the negative control. BIO-RAD Precision Protein StrepTactin-HRP conjugate (1/5000 in blocking solution) was applied to the size marker. The secondary antibodies were incubated for 1 h at room temperature, and the membrane was then washed three times for ten minutes with TBS containing 0.2% Tween-20. HRP staining solution was prepared 1:1 (BIO-RAD Immun-Star WesternC Kit) and applied to the membrane as well as to the negative control and to the size marker. The membrane was placed between two layers of transparent foil and analyzed on a BIO-RAD Molecular Imager ChemiDoc XRS system using Quantity One 4.6.5 software.
Immunoprecipitation of Active (GTP-bound) Cdc42
The evening before the experiment, 9×10⁵ cells/well of every cell line were seeded into 6-well plates and incubated in RM medium overnight at 37 °C and 5% CO₂ in a humidified atmosphere. All of the following steps were performed on ice or at 4 °C. The next day, the medium was discarded and the cells were washed once with ice-cold PBS. 550 µl of lysis buffer (25 mM Tris base, 140 mM NaCl, 1 mM EDTA, 0.5% NP-40 and Roche complete protease inhibitor) were applied per well, and the cells were scraped off using a rubber policeman. The 6-well plates were kept on ice during the procedure. The lysates were transferred to 1.5-ml reaction tubes and incubated for 45 minutes at 4 °C with constant rotation. The tubes were centrifuged at 15,000 g for 10 minutes at 4 °C. The supernatant, containing the cleared lysate, was transferred into a new reaction tube and the pellet discarded. 50 µl of the cleared lysate were saved for determination of the loading control by SDS-PAGE and western blot. 1 µg of the antibody (anti-active-Cdc42 antibody, mouse monoclonal IgG, NEWEAST Biosciences) was added to the lysate. The antibody/lysate solution was then incubated for 2 h at 4 °C with constant rotation. After 2 h, 30 µl of Protein G Sepharose 4 Fast Flow (GE Healthcare) were added to the solution and the incubation was continued for 2 h at 4 °C with constant rotation. The lysates were then centrifuged at 4 °C for 5 minutes at 250 g. The supernatant was discarded and the pellet was washed twice with lysis buffer (without protease inhibitor), then twice with wash buffer (100 mM Tris base, 0.5 M LiCl) and once with PBS (all solutions ice-cold). Between the washing steps, the lysates were centrifuged at 4 °C for 5 minutes at 250 g. The pellet was resuspended by flicking the tube gently and 50 µl of 2× sample buffer were added; the pellet was again resuspended by flicking and then boiled for 5 minutes at 95 °C. After boiling, the samples were centrifuged for 3 minutes at 11,000 g and 20 µl of each sample supernatant were loaded onto a NuPAGE 10% Bis-Tris Gel (Invitrogen). SDS-PAGE and western blot were performed as described above with the primary antibody (anti-Cdc42 antibody, rabbit polyclonal, Cell Signaling, diluted 1:500 in blocking reagent) and Annexin-I as the loading control.
Determination of Relative Protein Amounts after Western Blotting
In most cases the differences in protein expression analyzed by western blotting were visible to the naked eye. Nevertheless, BIO-RAD Image Lab 3.0.1 software was used to determine the relative protein amounts of the different cell lines by computationally normalizing the target bands to the loading control bands in all samples and then calculating the relative differences between the target bands of the cell lines examined.
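A minimal sketch of this normalization, with made-up band intensities, is given below; target bands are divided by the corresponding loading-control bands and expressed relative to the wild-type lane.

```python
# Loading-control-normalized band intensities, relative to a reference lane.
def relative_protein_amounts(target, loading, reference="NEB-1"):
    normalized = {lane: target[lane] / loading[lane] for lane in target}
    ref = normalized[reference]
    return {lane: value / ref for lane, value in normalized.items()}

if __name__ == "__main__":
    # Hypothetical densitometry values in arbitrary units.
    target_bands = {"NEB-1": 1200.0, "KEB-7": 1750.0, "EBDM-1": 1900.0}
    loading_bands = {"NEB-1": 1000.0, "KEB-7": 1020.0, "EBDM-1": 980.0}
    print(relative_protein_amounts(target_bands, loading_bands))
```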
Statistical Analysis
For SQRT-PCR and Quantikine ELISA experiments, a Student's t-test was performed to determine statistical significance with the parameters "two-tailed" and "unpaired". The number of repeats (n) and p-values (p) are given in detail in every figure legend.
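For illustration, an equivalent two-tailed, unpaired Student's t-test can be computed as follows; the replicate values are hypothetical and only demonstrate the test setup.

```python
# Two-tailed, unpaired Student's t-test for two independent groups.
from scipy import stats

def two_tailed_unpaired_ttest(group_a, group_b):
    # ttest_ind is two-sided by default and assumes equal variances (Student's t).
    return stats.ttest_ind(group_a, group_b)

if __name__ == "__main__":
    neb1 = [1.00, 1.08, 0.95, 1.02]     # hypothetical normalized expression values
    ebdm1 = [2.30, 2.05, 2.52, 2.41]
    t_stat, p_value = two_tailed_unpaired_ttest(neb1, ebdm1)
    print("t = %.2f, p = %.4f" % (t_stat, p_value))
```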
Microarray Analysis Revealed Differentially Expressed Genes in the EBS-DM Cell Lines KEB-7 and EBDM-1
Our group recently demonstrated that IL-1β is a critical determinant of the phenotype of the EBS-DM patient cell lines KEB-7 and EBDM-1 by way of activating the JNK stress pathway [12]. We also showed in the same study that these cells exhibit an invasive phenotype in Matrigel invasion chambers [12].
Based on that information, we expected to observe altered gene expression profiles for KEB-7 and EBDM-1 cells compared to NEB-1 wild-type keratinocytes. To identify differentially regulated genes, we performed a whole-genome microarray analysis and compared the transcript levels of KEB-7 and EBDM-1 cells to those of NEB-1 cells. The DAVID bioinformatics tool (http://david.abcc.ncifcrf.gov/) was used to identify regulated genes that are relevant to the disease pathomechanism, especially regarding blister formation and invasiveness, and the genes were grouped into functional classes. Six major functional classes were defined: 1) kallikrein-related peptidases, 2) matrix metalloproteinases, 3) factors related to actin cytoskeleton dynamics, 4) cytokeratins, 5) junction proteins, and 6) chemokines. Genes were chosen as targets for further analysis even if they were regulated in only one of the two EBS-DM cell lines. We used SQRT-PCR to confirm the microarray data at the gene expression level, as well as western blot and ELISA to confirm expression at the protein level. In the following sections, we present the microarray data and describe and discuss the possible roles of the identified genes in inducing a blistering phenotype.
Kallikrein-related Peptidases were Upregulated in EBS-DM Cell Lines but not in Patients' Blister Fluids
Among the 15 known human kallikrein-related peptidases (KLKs), KLKs 5, 6, 7, 8, 10, 11, 13 and 14 were found by microarray analysis to be significantly upregulated. KLK5 and KLK7 showed the highest increase in gene expression in both of the investigated EBS-DM cell lines (Table 2). Both KLK5 and KLK7 are known to contribute to skin desquamation in the stratum corneum by degrading desmosomal proteins [13]. The increased expression of KLK5 and KLK7 was confirmed by SQRT-PCR for both KEB-7 and EBDM-1 cells (Fig. 1A). Western blot analysis showed elevated amounts of KLK5 and KLK7 protein in the cell culture supernatant when compared to NEB-1 cells (Fig. 1B). To evaluate the in vivo situation, we analysed the blister fluids of EBS patients vs. healthy controls. KLK5 and KLK7 protein levels showed no significant differences between patients and controls, as analysed by ELISA and western blot, respectively (data not shown).
Expression of Matrix Metalloproteinase-9 was Increased in EBS-DM Cell Lines and Patients' Blister Fluids
There are 23 human matrix metalloproteinases (MMPs) that are either secreted from the cell or membrane-bound. They contribute to a variety of functions, including extracellular matrix (ECM) degradation, tumor invasion, tissue remodeling and embryogenesis [14].
Microarray analysis of KEB-7 and EBDM-1 cells showed increased expression of MMPs 1, 7, 9, 13 and 19 compared to NEB-1 wild-type keratinocytes (Table 3). To verify these data, we performed SQRT-PCR and found each of the above MMPs to be significantly increased at the mRNA level, except MMP-1 in the KEB-7 cell line and MMP-13 in the EBDM-1 cell line. Moreover, MMP-7 (matrilysin) and MMP-13 (collagenase-3) were expressed at the highest levels in KEB-7 cells, and MMP-9 (gelatinase B) and MMP-19 were expressed at the highest levels in EBDM-1 cells (Fig. 2A). We analyzed MMP-7 and MMP-9 protein expression in 48-h-conditioned cell culture supernatant by ELISA. We only found MMP-9 levels to be increased in both patient cell lines, by more than 2-fold in KEB-7 and by 46-fold in EBDM-1 cells compared to NEB-1 (Fig. 2B). MMP-7 was not increased significantly at the protein level in KEB-7 or EBDM-1 (data not shown). We further analysed MMP-9 levels in vivo and found significantly increased levels in EBS patients' blister fluids vs. healthy controls (Fig. 2C).
Expression of the Cdc42 Guanine Nucleotide Exchange Factor ARHGEF9 was Upregulated in EBS-DM Cell Lines and it Activated Cdc42
As outlined above, Wally et al. observed increased invasiveness of KEB-7 and EBDM-1 cells compared to NEB-1 in Matrigel invasion chambers [12]. When considering cell migration and invasiveness, the polymerization of actin filaments has to be taken into account. Microarray analysis showed no increased expression of the actin gene itself or of most factors that contribute to actin polymerization like RhoA, Rac1 or Cdc42. However, in KEB-7 cells, we found clear upregulation of the WIPF1 gene (Table 4; Fig. 3A), which binds to the Wiskott-Aldrich Syndrome protein (WASP) and plays an important role in actin cytoskeleton dynamics.
Our attention was drawn to the upregulation of two Rho guanine nucleotide exchange factors (GEFs) (ARHGEF4, ARHGEF37) and one Cdc42 GEF (ARHGEF9) (Table 4). We chose ARHGEF9 for further analysis with SQRT-PCR because it was upregulated in both cell lines and its mRNA levels were increased more than 4-fold in KEB-7 and more than 14-fold in EBDM-1 (Fig. 3B). ARHGEF9 (collybistin) is the guanine nucleotide exchange factor responsible for GDP-GTP exchange of the small GTPase Cdc42 and is mainly found in the brain, heart and skeletal muscle [15]. The observed upregulation of the ARHGEF9 gene adds to our understanding of the increased invasiveness of the two EBS-DM cell lines, presumably due to downstream activation of the Cdc42 signaling cascade.
Like other small GTPases, Cdc42 can be present in an active (GTP-bound) or inactive (GDP-bound) state [16]. To investigate the levels of active Cdc42, we performed immunoprecipitations of lysates of NEB-1, KEB-7 and EBDM-1 cells using an antibody that binds only the activated GTP-bound form of Cdc42. The precipitates were then subjected to SDS-PAGE and western blot analysis using an antibody that detects both active and inactive forms of Cdc42. We observed higher amounts of activated Cdc42 protein in KEB-7 and EBDM-1 cells compared to wild-type NEB-1 cells (Fig. 3C). Therefore, it has to be considered that elevated ARHGEF9 expression contributes to the increased activation of Cdc42 in these cells.
The genes coding for MRCKs (CDC42BPA, CDC42BPB, CDC42BPG) and the genes coding for ERM proteins were not significantly upregulated above a two-fold threshold in the microarray analysis (Table 4). Only CDC42BPG showed a considerable upregulation of 1.98-fold in KEB-7 compared to NEB-1. However, downstream activation of the Cdc42 pathway can be analyzed by determining the phosphorylation status of ERM proteins. Using an antibody against phosphorylated ERM proteins (75-80 kDa), we were able to show that phospho-ERM proteins are indeed present in significantly higher amounts in KEB-7 and EBDM-1 cells compared to NEB-1 cells (Fig. 3D). Determination of the protein amounts using BIO-RAD Image Lab 3.0.1 software revealed a 30-40% increase in phospho-ERM protein levels in the EBS-DM cell lines.
KEB-7 and EBDM-1 Showed Deregulation of Cytokeratin Expression
The microarray data indicated type I and type II cytokeratins to be differentially regulated in the two investigated K14 mutant cell lines (Table 5), which confirms previous investigations [12,19]. Using SQRT-PCR, we measured the expression of cytokeratins that are related to EBS (K5, K14) as well as cytokeratins that are expressed in activated keratinocytes (K6, K16, K17) or under specific cellular conditions (K15). The mRNA levels of K14, K15, K16 and K17 were significantly increased in KEB-7 and EBDM-1 cells (Fig. 4A). Although increased mRNA levels of K5 and K6B were seen in the microarray analysis, only K5 was verified to be significantly upregulated in KEB-7 (Fig. 4B). We investigated the protein levels of K14, K15 and K16, and all three cytokeratins were increased significantly in western blot analysis in both Dowling-Meara cell lines (Fig. 4C).
Expression of Junction Proteins was Increased in EBS-DM Cell Lines
Bioinformatic analysis of the microarray data revealed junction proteins as one major group of regulated genes (Table 6). We verified these data with SQRT-PCR for the desmocollins DSC1, DSC2 and DSC3 (Fig. 5A), the desmogleins DSG1, DSG3 and DSG4 (Fig. 5B), and the gap junction proteins GJA1, GJB2 and GJB6 (Fig. 5C) because they all showed significant upregulation in the array. The highest transcript levels were found for DSC1, DSG1 and GJB6 (connexin 30), with an increased expression of 15- to almost 30-fold compared to NEB-1 wild-type keratinocytes.
CXCL8/IL-8 Expression was Increased in KEB-7, and High Levels were Found in EBS Patients' Blister Fluids
Blister formation is often accompanied by infiltration of cells of the immune system into the skin. These immune cells are recruited by chemokines. Highly increased expression of the chemokines CXCL1, CXCL8/IL-8 and CXCL14 was evident in KEB-7 cells and of CXCL11 in EBDM-1 cells (Table 7). We used SQRT-PCR to confirm the upregulation of CXCL1, CXCL8/IL-8 and CXCL14 in the KEB-7 cell line. CXCL11 was not upregulated above the 2-fold threshold in KEB-7. CXCL11 and CXCL14 were increased significantly in EBDM-1, as determined by SQRT-PCR (Fig. 6A), although CXCL14 was not increased in the microarray analysis. We determined the protein expression of all four chemokines in 48-h-conditioned cell culture supernatant with ELISA and found CXCL8/IL-8 levels to be increased by more than 2-fold in KEB-7 (Fig. 6B), thereby almost exactly correlating with the mRNA expression found in SQRT-PCR. We found no increased protein levels of CXCL1, CXCL11 or CXCL14 (data not shown). We further analysed blister fluids of EBS patients and found significantly increased levels of CXCL8/IL-8 compared to healthy controls (Fig. 6C).
Incubation of EBDM-1 Cells with IL-1β Neutralizing Antibody Reduced the Expression of Target Genes
To study the role of IL-1β and its ability to alter the gene expression profile, we incubated EBDM-1 cells with IL-1β neutralizing antibody (2 µg/ml, R&D Systems) for 24 h and analyzed the mRNA expression of distinct target genes using SQRT-PCR. We chose EBDM-1 because it showed the most severe phenotype in most of our experiments. As targets, we chose at least one representative of each of the six groups of regulated genes identified in our microarray analysis. In three independent experiments and with at least four SQRT-PCR runs per experiment, we observed a significant reduction of gene expression after IL-1β antibody incubation for all of the investigated target genes (Fig. 7).
Discussion
The Potential Role of MMP-9 in Blister Formation
Reduction of blister formation was reported in a small group of EBS patients after oral administration of tetracycline over a period of several weeks [20,21]. The patients comprised a heterogeneous group that included Dowling-Meara and EBS-localized patients (formerly known as Weber-Cockayne). The molecular mechanism by which tetracycline may reduce blister formation is still speculative, but the drug and especially its derivative doxycycline are known inhibitors of MMPs [22]. This indicates a potential causal link between blister formation due to increased expression of MMPs and its amelioration by tetracycline treatment.
Although a potential role of MMP-9 in blister formation is currently being discussed for pemphigus vulgaris [23], a correlation with blister formation in EBS has not been considered so far. The in vitro and in vivo data on increased MMP-9 expression that we present here may contribute to a better understanding of the EBS pathophysiology, which will hopefully lead to new therapeutic options.
CXCL8/IL-8 Expression and Blister Formation
A potential role of CXCL8/IL-8 in blister formation has already been established. In the autoimmune blistering disease bullous pemphigoid (BP), infiltration of neutrophils into the skin is required for blister formation. The recruitment of neutrophil granulocytes is stimulated by IL-8 as a consequence of the binding of autoantibodies to the hemidesmosomal 180-kD BP autoantigen (BP180). In a mouse model, inhibition of neutrophil infiltration to the skin as well as inhibition of IL-8 circumvented blister formation [24].
In the present study, we observed significantly increased levels of CXCL8/IL-8 in EBS patients' blister fluids. Based on these new data, CXCL8/IL-8 should be considered a new potential therapeutic target in blistering diseases. In vitro studies have shown that IL-8 release from cultured human keratinocytes is inhibited by the antibiotic molecule dapsone [25]. Inhibition of IL-8 and blocking of immune cell infiltration using small molecules, either already available like dapsone or yet to be developed, should open a therapeutic window for the treatment of blistering diseases like EBS-DM.
Kallikrein Expression in EBS-DM Cell Lines
The increased expression of KLK5 and KLK7 in the two EBS-DM cell lines KEB-7 and EBDM-1 indicated a potential role in the pathophysiology of Dowling-Meara-related blister formation. Hypothetically, kallikreins could play a role in acantholysis, the separation of cell-cell contacts, but there are only a few reports of Dowling-Meara patients showing signs of acantholysis [26,27]. Nevertheless, since we did not observe any differences in KLK5 and KLK7 expression in patients' blister fluids vs. controls, we cannot suggest them as potential therapeutic targets.
As the expression of kallikreins as well as of junction proteins could be a sign of keratinocyte terminal differentiation, we analysed the microarray data for differentiation-related genes such as involucrin (IVL), filaggrin (FLG), loricrin (LOR), small proline-rich proteins (SPRR) and SPINK5/LEKTI (data not shown). Out of 14 investigated genes, 6 were upregulated in KEB-7 but none were upregulated in EBDM-1. We analysed these genes with SQRT-PCR and found IVL and FLG increased in KEB-7 and SPINK5/LEKTI increased in EBDM-1. Western blot confirmed the increase of IVL in KEB-7. FLG could not be detected at the protein level, and SPINK5/LEKTI showed no differences between patient cell lines and NEB-1, or between EBS patients' blister fluids and controls. The fact that differentiation-related proteins were expressed in only one cell line, whereas most of the other targets were expressed in both cell lines, does not support the idea of a pure cell culture artefact due to keratinocyte differentiation.
The Role of Cytokeratins and Junction Proteins in Blistering Diseases
In basal cells of epithelia, K5 and K14 are the characteristic keratins expressed throughout the lifetime of the cell in the normal healthy state. The observed upregulation of K15 in EBS-DM cell lines occurs due to insufficient K14 function; however, K15 is unable to fully compensate for the lack of K14 both in the recessive forms of EBS and in dominant negative forms such as Dowling-Meara [28]. A change in keratin expression also appears after injury of the skin. In this case, the keratinocyte activation cycle is launched, K6, K16 and K17 are expressed, and the cells exhibit the increased migratory potential necessary for wound closure [7]. The upregulation of K16 and K17 in EBS-DM cell lines, as shown in the present study as well as in another recent study [19], indicates that these cell lines are in an activated state, as if undergoing constant wounding. It was also shown recently that the increased migratory potential of KEB-7 cells can be reduced significantly by genetic correction of the K14 R125P mutation at the mRNA level using SMaRT technology [29]. This indicates a potential role of K14 mutations in the mechanisms leading to the migration and invasion phenotypes of keratinocytes in vitro.
It was shown in several studies that IF stability is dependent on desmosome integrity, and that perturbations of desmosomes lead to a retraction of the IF network toward the nucleus [30,31]. Such accumulations of keratin around the nucleus were also found in Dowling-Meara cell lines [32]. Furthermore, downregulation of certain junction proteins such as desmoplakin and plakoglobin was observed in cell lines carrying severe K5 and K14 mutations [33]. Based on these data, it can be hypothesised that in keratin-associated blistering diseases a dysregulation of keratin expression correlates with a dysregulation of junction protein expression.
The Validity of in vivo Data
In the present study, we compared the blister fluids of EBS patients with the blister fluids of healthy controls. In healthy controls, the aetiology and the mechanism of blister formation can differ from those in EBS patients, which has to be taken into account when interpreting the results. For example, the skin layer in which blister formation occurs can differ from that in EBS patients, and blisters can arise with or without signs of inflammation. For control number 1, blister fluid was obtained from a bedridden elderly male who developed a mechanical (tension) blister in the inguinal region in the course of mobilization procedures during patient care. The person is considered healthy in the sense of non-EB and without any bullous or other skin diseases. For control number 2, mechanical (friction) blisters were induced deliberately on both heels by a healthy volunteer (non-EB, no skin diseases) from our lab by wearing ill-fitting shoes. For control number 3, a blister developed on the foot of a healthy person (non-EB, no skin diseases) after a burn accident. No traces of blood and no signs of inflammation, pruritus or unusual pain occurred with the blisters of the three healthy controls. Taken together, by comparing EBS blister fluids with the control samples described above, we consider the obtained in vivo data to be valid.
Molecular Pathomechanisms in Dowling-Meara: IL-1β, MMP-9 and CXCL8/IL-8
In the study of Wally et al., IL-1β was shown to be the initiating mediator of the Dowling-Meara phenotype observed in the EBS-DM cell lines KEB-7 and EBDM-1. The same study showed activation of the JNK stress pathway through IL-1β, and amelioration of the phenotype in vitro by depleting IL-1β with a neutralizing antibody [12]. Based on that knowledge, the present study investigated the gene and protein expression profiles of KEB-7 and EBDM-1 and showed that expression of these genes is dependent on IL-1β signaling.
Of course, the question remains how IL-1β mediates these effects and how the molecular mechanisms are interconnected. Fig. 8 shows an overview of the pathways described below. In the case of the Cdc42 pathway, inflammatory cytokines such as TNF-α and IL-1 were shown to influence actin cytoskeleton dynamics via activation of Cdc42, and interconnections between IL-1 and Cdc42 signaling are known [34,35] (Fig. 8A). Cdc42 activates downstream targets like the ERM proteins (Fig. 8B), and phospho-ezrin recruits the GEF Dbl to lipid rafts and induces the activation of Cdc42 in a positive feedback loop [36] (Fig. 8C). Cdc42 also activates the Rac and Rho pathways as well as PAK (p21-activated kinase), and triggers downstream effects that lead to changes in gene expression [35]. Besides their interconnections, the IL-1 and Rho pathways can also act independently of each other [37]. Rho-family GTPases like Rac1, RhoA and Cdc42 are also activated by IL-8 [38] (Fig. 8C), which we showed to be upregulated in EBS-DM cell lines and in patients' blister fluids.
Expression of gelatinases and activation of MMP-9 and MMP-3 were shown to be induced by IL-1α in a dose-dependent manner in chondrocytes [39]. IL-1β activates the JNK pathway, and it was shown in an in vitro model that JNK components and Rho GTPases interact through crosstalk and are necessary for the induction of MMP-9 expression in wounded keratinocytes [40].
The evident deregulation of different pathways like IL-1β and Cdc42 in K14 mutant cell lines and our findings in vivo demand a new hypothesis that gives greater weight to the role of matrix metalloproteinases and chemokines in blister formation. The idea of interactions between IL-1β, MMP-9 and CXCL8/IL-8 is supported by the literature, and the findings presented in our study provide a better understanding of the disease mechanisms, which is a necessity for the development of new therapies. | 8,339.8 | 2013-07-19T00:00:00.000 | [
"Biology"
] |
IBTIDA: Fully open-source ASIC implementation of Chisel-generated System on a Chip
Building a System on Chip (SoC) using a fully open-source toolchain requires the availability of open-source tools for RTL simulation, generation, and GDS-II conversion, manufacturable foundry process design kits (PDKs), IP libraries, and I/O blocks. The proposed work shows a methodology for using completely open-source tools and a hardware construction language (HCL) to tape out a RISC-V based SoC, Ibtida. The methodology utilizes Chisel (Constructing Hardware in Scala Embedded Language) as the RTL generator, Verilator as the RTL simulator, OpenLANE as the RTL-to-GDS-II converter, and the open-source SKY130 (130 nm) PDK to manufacture the SoC. Ibtida consists of a 5-stage pipelined 32-bit RISC-V (RV32IM) core with 32 GPIOs and separate instruction and data memories. The Ibtida design is embedded in a harness on the physical chip. The harness is equipped with a management SoC used as a controller for Ibtida. Prior to converting the RTL into GDS-II, cycle-accurate simulation using Verilator and FPGA emulation on a Xilinx Arty A7 were performed for verification and regression testing. The FPGA implementation utilizes 8650 LUTs, 3356 slice registers, 714 flip-flops, and 2.5 36 Kb block RAMs. The ASIC implementation occupies a 2.5 mm² area with a density of 37.44 kGates/mm². The manufacturing of this SoC is provided by the Google shuttle program called Open MPW (Multi-Project Wafer) in association with Efabless and SkyWater Technologies. To the best of our knowledge, this is the first RISC-V based SoC generated using Chisel and taped out using fully open-source technologies.
I. INTRODUCTION
TODAY, Moore's law is diminishing. The trend of increasing computing capability by doubling the number of transistors is coming to a halt [1]. Due to this, we are entering the golden age of computer architecture [2], where the key driving force in the pursuit of increased performance is something other than miniaturization alone. However, due to the proprietary nature of chip design, innovation has been somewhat limited by the fact that only big companies can design their own processors. This was democratized by the advent of the RISC-V Instruction Set Architecture (ISA) [3], which enabled startups and communities to work together on chip design. Still, there was another barrier for academic researchers, startups, and small companies to actually tape out their processors: the closed nature of Process Design Kits (PDKs). For years, there have been open-source Electronic Design Automation (EDA) tools (SPICE, Magic, etc.) available to physical design engineers, but the lack of a completely open-source PDK kept custom hardware design in the hands of a handful of large, established companies and well-funded research universities. However, this problem was also resolved recently, in mid-2020, when the SkyWater foundry together with Google introduced the first fully open-source PDK, the SKY130 process node [4], which is based on a 130nm Complementary Metal Oxide Semiconductor (CMOS) technology.
A. The open-source hardware momentum
Since the arrival of the RISC-V ISA, there has been a boom in the open-source chip design domain. It proved that, like open-source software, open-source hardware can be greatly improved by a collaborative effort in which small and big companies complement each other, improving not only the ISA but also the wider tool ecosystem required for hardware design [5].
The ChipsAlliance [6], established in 2019, takes the aim of open-source hardware design even further. It provides a common place for designers to create innovative solutions using open-source tools. Its members include renowned companies working together to develop reusable open-source IPs, and it is also focused on providing tools for open-source physical design. The very ambitious open-source OpenROAD project [7], also part of the ChipsAlliance, aims to provide 24-hour, no-human-in-the-loop layout design for SoC, package, and PCB with no Power-Performance-Area (PPA) loss, enabling software engineers and people with little physical design knowledge to tape out their own processors.
Even with everything from RTL to EDA tools available as open source, the complete flow of open chip design was still hindered by the nonexistence of a completely open-source PDK. For over twenty years, PDKs have been kept closed source and have required non-disclosure agreements (NDAs), license servers, and password-protected download sites, leaving the privilege of taping out designs in the hands of only big, established companies [8]. But the SkyWater foundry opening up its design for a 130nm process together with Google, and the Efabless/Google collaboration providing free tape-out shuttles, present a huge opportunity for startups, small academic institutes, and even high school students to come up with their own unique designs and actually get them fabricated.
B. Why hardware should learn from software
Because performance no longer scales simply by doubling the number of transistors, the era of domain-specific architecture is booming. The advent of the RISC-V ISA has enabled small teams and startups to develop custom hardware to improve performance and power efficiency. However, the process of designing chips has been painfully long and has followed the rigid development model that early software development used, known as the waterfall model. Software created in those early days suffered from cost overruns, missed deadlines, and abandonment, and making changes to a monolithic software project as the customer's needs changed was very difficult. The same goes for hardware projects. In a hardware project, first the microarchitecture is specified, followed by the RTL design, after which verification happens, and then the complete physical design of the netlist is done. Usually the physical design is even outsourced to other companies, which further stretches project timelines, typically to 1-3 years, and if the customer's needs change the whole process has to be repeated. The agile software methodology [9] emphasizes working software over detailed documentation, customer collaboration, and flexibility over rigid specifications. It promotes small teams iteratively improving working-but-incomplete prototypes and enhancing them until the end result is acceptable. Inspired by this agile software approach, researchers at the University of California, Berkeley proposed their own "Agile Hardware Manifesto" [10], through which they taped out eleven processors in a span of five years.
To facilitate this agile hardware development idea by increasing designer productivity, Chisel [11] was created. It is a domain-specific language built on top of Scala which gives the designer high-level programming features such as Object-Oriented Programming (OOP) and Functional Programming (FP) for creating reusable libraries that generate efficient hardware circuits. The idea is to create reusable packages, just like in software, which provide abstraction and easy integration of various verified IPs. Furthermore, the Chisel compiler can automatically create a fast, cycle-accurate C++ software simulator, or low-level synthesizable Verilog that maps to FPGA or ASIC flows.
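To make the generator idea concrete, the short sketch below is our own illustration (not code from the Ibtida sources; the module name and widths are assumptions): a single parameterized Scala class describes a whole family of registered adders, and the same description can later be elaborated for any width.

// Minimal, hypothetical Chisel generator sketch (not taken from Ibtida).
import chisel3._

class RegisteredAdder(width: Int) extends Module {
  val io = IO(new Bundle {
    val a   = Input(UInt(width.W))
    val b   = Input(UInt(width.W))
    val sum = Output(UInt(width.W))
  })
  // RegNext places a register on the sum, so the generator emits both the
  // combinational adder and its pipeline register in the produced Verilog.
  io.sum := RegNext(io.a + io.b)
}

Instantiating RegisteredAdder(32) or RegisteredAdder(64) reuses the same source, which is the kind of reuse that low-level Verilog makes much more laborious.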
C. Previous works
There have been eleven tape-outs based on Chisel utilizing the Rocket-chip generator [12] by the University of California, Berkeley, but these relied on commercial EDA tools and closed PDKs. Conversely, a family of striVe SoCs was taped out using OpenLANE and the SkyWater 130nm PDK to prove the viability of all open-source EDA tools and the PDK [13]; however, these were written in a traditional low-level hardware description language, Verilog. In short, the Rocket-chip-generated tape-outs were missing the open-source backend flow to generate the GDS, and the striVe family SoCs, although mapped onto the open PDK, lacked a frontend design written in a higher-level programming language.
In this paper, we present our contribution: using the abstraction and software-programming feel of Chisel, and with no prior experience in chip design, we describe a 5-stage pipelined RISC-V RV32IM core and a minimal SoC around it. The generated Verilog RTL is passed to OpenLANE [14] to provide a completely open-source RTL-to-GDS flow, and the result is mapped onto the fully open-source SkyWater 130nm process design kit through the Google/Efabless MPW Shuttle program [15]. We used Chisel for its ease of programming hardware circuits, which gave us a quick start with RTL design compared to low-level Verilog, and we show that the generated Verilog can be mapped to the fully open suite of Electronic Design Automation (EDA) tools and fabricated on the SkyWater 130nm open PDK.
II. DESIGN METHODOLOGY AND SPECIFICATION
To demonstrate the work proposed in this paper, we followed a methodology applied to a design specification and analyzed its implementation and results. In the following sub-sections, we discuss the methodology and the specification of the design, before delving into later sections for details of the implementation and its analysis.
A. Methodology
Chisel was used as the frontend of the proposed design; it is a domain-specific language embedded inside Scala that provides the facilities of a higher-level programming language for designing circuits, instead of traditional HDLs like Verilog/VHDL [16]. The Chisel frontend generates an Intermediate Representation (IR) called the Flexible Intermediate Representation for RTL (FIRRTL), which provides transforms and passes implemented in Scala [17] running on top of the Java Virtual Machine (JVM). This allows the same Chisel code to be used with three different backends: 1) simulation, 2) FPGA emulation, and 3) ASIC implementation. For simulation, to check the functionality of the design, the Chisel compiler was used to generate a C++ simulator from the emitted Verilog of the SoC through Verilator [18], together with an emitted C++ wrapper that provides stimuli to the compiled simulator; running the simulator finally generates a Value Change Dump (VCD) file that can be viewed in the open-source waveform viewer GTKWave [19].
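As a hedged illustration of how one description feeds the different backends (this is not the Ibtida build setup; the module is invented and the entry points follow the Chisel 3.4/3.5 API, which varies between versions), a small driver can emit both the FIRRTL used by the transforms and the synthesizable Verilog handed to Verilator, Vivado, or OpenLANE:

// Hypothetical elaboration driver; API names assume Chisel 3.4/3.5.
import chisel3._
import chisel3.stage.ChiselStage

class Blinker extends Module {
  val io  = IO(new Bundle { val led = Output(Bool()) })
  val cnt = RegInit(0.U(24.W))
  cnt := cnt + 1.U
  io.led := cnt(23)
}

object Elaborate extends App {
  val stage = new ChiselStage
  // FIRRTL intermediate representation, consumed by transforms and passes.
  println(stage.emitFirrtl(new Blinker))
  // Synthesizable Verilog for the simulation, FPGA, and ASIC backends.
  println(stage.emitVerilog(new Blinker))
}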
For emulation on the FPGA, the Chisel-generated Verilog was mapped onto the Arty A7 FPGA board using Xilinx's Vivado for synthesis, place and route, and bitstream generation. This is the only closed-source path, and it was used only for emulation; open-source alternatives for FPGA implementation exist as well, such as the SymbiFlow project [20] or OpenFPGA [21], but they are beyond the scope of this paper.
For the ASIC, the generated Verilog and the SkyWater 130nm PDK were used along with the OpenLANE flow, which comprises various open-source tools for synthesis, floorplanning, Power Distribution Network (PDN) generation, place and route, Design Rule Check (DRC), Layout Versus Schematic (LVS) checks, and GDSII generation.
B. Specification
Ibtida is a minimal System on a Chip designed completely with Chisel, using its high-level programming-language features. It consists of the four basic elements that every computer has: 1) compute, 2) communication, 3) peripherals, and 4) storage.
The instruction interface has a point-to-point interconnect for fetching instructions, and the data interface has a 1xN interconnect that allows the core to perform loads/stores either to the memory or to the GPIO peripheral. Since there is no non-volatile memory present for code storage, a UART controller is designed to accept the program from the host computer and write it into the ICCM memory every time the board is powered on or a new program needs to be uploaded. The details of each element highlighted in figure 2 are described below.
1) Compute: It is a 32-bit, 5-stage pipelined core compliant with the RISC-V base integer (I) ISA plus the additional M extension that supports multiply/divide instructions, together making it an RV32IM core. It has five pipeline stages: 1) Fetch (F), 2) Decode (D), 3) Execute (E), 4) Memory (M), and 5) Write Back (WB).
a) Fetch: The fetch stage has a Program Counter (PC) that points to the next instruction to be fetched and an interface for fetching instructions from memory. The PC value is updated through a multiplexer that selects the next PC value, which can be a simple PC + 4 computed by an adder or a jump address, depending upon the instruction in the Decode stage; a hedged sketch of this next-PC selection is given below.
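The following sketch is our own simplification of the behaviour just described (signal names are assumptions, not the Ibtida identifiers):

// Hypothetical fetch-stage PC update: next PC is PC + 4 or a branch target.
import chisel3._

class FetchPC extends Module {
  val io = IO(new Bundle {
    val takeBranch   = Input(Bool())      // branch taken, decided in Decode
    val branchTarget = Input(UInt(32.W))  // target PC from the Branch Unit
    val pc           = Output(UInt(32.W)) // address used to fetch the instruction
  })
  val pcReg = RegInit(0.U(32.W))
  // Multiplexer selecting the next PC value.
  pcReg := Mux(io.takeBranch, io.branchTarget, pcReg + 4.U)
  io.pc := pcReg
}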
b) Decode: The decode stage contains a register file with 32 registers, x0 to x31, each 32 bits wide, as described in the RISC-V ISA. It also has an Immediate Generation unit that extracts the encoded immediate values from the instructions, concatenating and padding them to 32 bits. A Control Unit decodes the current instruction using the opcode and enables certain control signals depending upon the type of instruction. A Branch Unit identifies whether the current instruction is a branch and calculates the next PC address if the branch is taken. The Branch Unit was kept in the Decode stage to reduce the branch miss penalty to 1 cycle when the branch is taken, since the fetch stage only needs to be flushed and the new instruction fetched from the updated PC value. The decode stage also has Hazard Detection logic that handles the case where the register being read by the current instruction is simultaneously being written by another instruction in the Write Back stage; a hedged sketch of such a write-back bypass follows.
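The sketch below is our simplified illustration of that read-during-write case (it is not the Ibtida implementation): when Write Back writes the register that Decode is reading in the same cycle, the freshly written value is forwarded instead of the stale register-file contents.

// Hypothetical register file with a write-back bypass.
import chisel3._

class RegFileWithBypass extends Module {
  val io = IO(new Bundle {
    val rs1Addr = Input(UInt(5.W))
    val rs1Data = Output(UInt(32.W))
    val wbEn    = Input(Bool())
    val wbAddr  = Input(UInt(5.W))
    val wbData  = Input(UInt(32.W))
  })
  val regs = Mem(32, UInt(32.W))
  when(io.wbEn && io.wbAddr =/= 0.U) { regs(io.wbAddr) := io.wbData }
  val bypass = io.wbEn && (io.wbAddr === io.rs1Addr) && (io.rs1Addr =/= 0.U)
  // x0 is hard-wired to zero as required by the RISC-V ISA.
  io.rs1Data := Mux(io.rs1Addr === 0.U, 0.U,
                    Mux(bypass, io.wbData, regs(io.rs1Addr)))
}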
c) Execute: The execute stage has an Arithmetic Logic Unit (ALU) for computation-related tasks and an ALU Control unit that tells the ALU which operation to perform. It also has a forwarding unit that supplies the ALU with the proper operands if there are any data hazards in the pipeline.
d) Memory: The memory stage consists of a store/load unit that performs either stores or loads to the memory or the GPIO peripheral.
e) Write Back: The write back stage consists of a mux that selects the data to be written into the register file, which can come either from the ALU output or from the data memory.
2) Communication: The communication mechanism used between the core, peripherals, and memories is the TileLink Uncached Lightweight (TL-UL) bus protocol [22]. TL-UL, the lightweight variant of TileLink, was chosen since we did not require cache coherence or other complex communication features.
The fetch stage sends a valid request to the TL-UL master, which communicates with the TL-UL slave connected to the instruction memory; this forms the point-to-point interconnection between the core's fetch stage and the instruction memory shown in figure 2. For loads/stores during the memory stage, a 1xN switch connects a single TL-UL master with multiple TL-UL slaves, two in our case: one for the data memory and the other for the GPIO peripheral. The 1xN switch automatically decodes which slave to route the master's request to, depending upon the issued address, as sketched below. There is no support for burst accesses: the master can only send one request at a time and must wait for the acknowledgment before sending another request.
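The address decoding behind such a 1xN switch can be pictured with the hedged sketch below; the address map is invented purely for illustration and the real Ibtida map may differ.

// Hypothetical slave-select decode for the 1xN switch.
import chisel3._

class SlaveSelect extends Module {
  val io = IO(new Bundle {
    val addr    = Input(UInt(32.W))
    val selDccm = Output(Bool()) // route the request to the data memory
    val selGpio = Output(Bool()) // route the request to the GPIO CSRs
  })
  // Assumed map: 0x0000_xxxx -> data memory, 0x4000_xxxx -> GPIO.
  io.selDccm := io.addr(31, 16) === 0x0000.U
  io.selGpio := io.addr(31, 16) === 0x4000.U
}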
3) Peripherals: The SoC contains only one peripheral, the GPIO, connected to the bus. The GPIO has 30 I/O pads going off-chip to interact with the outside world. Its control and status registers (CSRs) are accessible via the TL-UL bus and can be manipulated by the software program running on the core.
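A memory-mapped GPIO block of this kind can be sketched as below; the register offsets, widths, and names are our assumptions for illustration, not Ibtida's actual CSR map.

// Hypothetical GPIO control/status registers behind a simple write interface.
import chisel3._

class GpioCsr(n: Int = 30) extends Module {
  val io = IO(new Bundle {
    val wrEn   = Input(Bool())
    val addr   = Input(UInt(4.W))   // CSR offset decoded from the bus address
    val wdata  = Input(UInt(n.W))
    val rdata  = Output(UInt(n.W))
    val padOut = Output(UInt(n.W))  // values driven onto the I/O pads
    val padDir = Output(UInt(n.W))  // pad direction, 1 = output
    val padIn  = Input(UInt(n.W))   // values sampled from the pads
  })
  val outReg = RegInit(0.U(n.W))
  val dirReg = RegInit(0.U(n.W))
  when(io.wrEn && io.addr === 0.U) { outReg := io.wdata } // data register
  when(io.wrEn && io.addr === 1.U) { dirReg := io.wdata } // direction register
  io.rdata  := Mux(io.addr === 0.U, outReg,
               Mux(io.addr === 1.U, dirReg, io.padIn))    // offset 2: pad inputs
  io.padOut := outReg
  io.padDir := dirReg
}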
A. Verilator Simulation
For testing the functionality of the design, Verilator was used to simulate the SoC and each of its individual components. Listing 1 shows how a 2-way mux can be designed in Chisel, and a driver class as shown in listing 3 is used to configure the Scala backend to use Verilator for testing, with an additional flag to generate the VCD trace for waveform viewing.
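Since the original listings are not reproduced here, the following is a hedged reconstruction of what a 2-way mux and its Verilator-backed test driver might look like; the names and the exact test API (the chiseltest library with its Verilator and VCD annotations) are our assumptions.

// Hypothetical mux and chiseltest driver using the Verilator backend with VCD.
import chisel3._
import chiseltest._
import org.scalatest.flatspec.AnyFlatSpec

class Mux2 extends Module {
  val io = IO(new Bundle {
    val sel = Input(Bool())
    val in0 = Input(UInt(8.W))
    val in1 = Input(UInt(8.W))
    val out = Output(UInt(8.W))
  })
  io.out := Mux(io.sel, io.in1, io.in0)
}

class Mux2Test extends AnyFlatSpec with ChiselScalatestTester {
  "Mux2" should "select between its inputs" in {
    // VerilatorBackendAnnotation selects Verilator; WriteVcdAnnotation dumps a trace.
    test(new Mux2).withAnnotations(Seq(VerilatorBackendAnnotation, WriteVcdAnnotation)) { dut =>
      dut.io.in0.poke(3.U); dut.io.in1.poke(7.U)
      dut.io.sel.poke(false.B); dut.io.out.expect(3.U)
      dut.io.sel.poke(true.B);  dut.io.out.expect(7.U)
    }
  }
}

Such a test class would typically be run through sbt (for example, an invocation along the lines of sbt testOnly on the test class), which is presumably what listing 4 showed.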
The Scala build tool (sbt) is used to compile the Scala classes and execute them as shown in listing 4, which in turn builds all the Verilator files using the testbench and generates a VCD trace to view.
The generated VCD trace can be viewed on GTKWave. In figure 4 the resulting waveform for the mux is depicted.
Similarly, each module within the Ibtida SoC was tested for correct functionality using Chisel-based testbenches and Verilator-based simulation. Table I shows a RISC-V assembly program used for testing that is run on the SoC, and figure 6 shows how the instructions pass through the pipeline, with only the important signals extracted for clarity. The whole test suite run on the Ibtida SoC is available on GitHub [23]. Initially, as shown in figure 6, the UART programmer loads the program into the instruction memory and asserts the uart_done signal high, signaling that the memory is loaded. The fetch stage then sends a valid request with the PC's current value and gets the instruction in the next cycle; until then, a NOP (no operation) instruction is sent to the datapath, which does nothing in the pipeline. After this, on each clock cycle a new instruction is fetched and previous instructions progress through the pipeline. Finally, the registers get loaded with the values coming from the write back stage.
B. FPGA Emulation
The generated Verilog of the Ibtida SoC was mapped onto the Arty A7 FPGA board. It runs at an 8 MHz clock frequency with zero total negative slack (TNS) and no failing endpoints; Table II shows the timing report of the implemented design. The MMCM primitive was used as the clock generator to provide the clock to the design, the ICCM and DCCM memories were mapped into FPGA Block RAMs (BRAMs), and the DSP units on the board were used for efficient multiplication. The resource utilization of the design is given in Table III, and the power consumption of the implementation is given in Table IV.
C. ASIC Implementation
For the ASIC implementation, the Chisel-generated Verilog was integrated inside a testing harness and then hardened through the OpenLANE flow for generating the GDSII layout.
1) Testing Harness: Caravel [24] is a testing harness that acts as a manager of the Ibtida SoC. It has three parts: 1) Management Area, 2) User Project Area, and 3) Storage Area, as shown in figure 5.
a) Management Area: The management area consists of an SoC built around a RISC-V based microprocessor, PicoRV32 [25], with peripherals including timers, a UART, and GPIO. The firmware on the management area can be used to configure and control the user project.
b) User Project Area: This area hosts the user's design, in our case the Ibtida SoC.
c) Storage Area: It consists of two dual-port SRAMs of 1 Kbyte each, generated by OpenRAM [26]. The storage area is only accessible to the management area.
Figure 7 shows the architecture of the Caravel harness. The management area contains peripherals on a Wishbone bus [27], written and read by the PicoRV32. There is also a Chip LA, a memory-mapped, 128-bit-wide logic analyzer on the Wishbone bus; it can be configured to read data from the User Project Area or provide data to it. There is also a Wishbone slave interface inside the User Project Area, but we used it only to pass the clock and reset to the Ibtida SoC coming via the Wishbone master interface in the management area. The User Project Area has access to the 38 GPIOs after they are configured to be usable by the firmware running on the management core.
2) Integrating Ibtida inside Caravel: Figure 8 shows the Ibtida SoC configured for integration inside the Caravel User Project Area. The signals prefixed with la come from the logic analyzer. The SRAMs were mapped onto technology-specific, flip-flop-based DFFRAMs; a hedged sketch of how such a memory can be described in Chisel follows.
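The sketch below is our own illustration (not the Ibtida memory code): describing the instruction and data memories with a synchronous-read memory gives a single description that FPGA tools can infer as Block RAM and that, on the ASIC side, can be fulfilled by a flip-flop-based DFFRAM macro with the same one-cycle read latency.

// Hypothetical single-port synchronous-read RAM description.
import chisel3._
import chisel3.util.log2Ceil

class Ram1rw(depthWords: Int = 1024) extends Module {
  val io = IO(new Bundle {
    val addr  = Input(UInt(log2Ceil(depthWords).W))
    val wrEn  = Input(Bool())
    val wdata = Input(UInt(32.W))
    val rdata = Output(UInt(32.W))
  })
  val mem = SyncReadMem(depthWords, UInt(32.W))
  when(io.wrEn) { mem.write(io.addr, io.wdata) }
  // Read data appears one cycle after the address, matching BRAM/DFFRAM timing.
  io.rdata := mem.read(io.addr, !io.wrEn)
}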
3) OpenLANE: RTL to GDS: OpenLANE is an open-source, automated RTL-to-GDSII ASIC design flow built from several components, a PDK (Process Design Kit), and IP (Intellectual Property) libraries, including standard-cell libraries, that perform the steps from RTL synthesis all the way to GDS streaming. It is an aggregation of open-source EDA (Electronic Design Automation) tools, namely OpenROAD, Yosys [28], Magic [29], Netgen [30], OpenPhySyn [31], and SPEF-Extractor [32]. Furthermore, a custom script is used for design exploration and optimization. The completely open-source flow was designed around the open PDK released by Google and SkyWater (the Sky130 PDK) on a 130nm CMOS technology, but it is also generalized to support other technologies. The flow performs the full ASIC implementation from RTL to GDSII, which includes the following steps, as shown in figure 9: 1) Logic Synthesis, 2) Floor-Planning, 3) Placement, 4) Clock Tree Synthesis (CTS), 5) Routing, 6) SPEF Extraction, 7) GDSII Generation, and 8) Physical Verification.
The output of synthesis is a gate-level netlist which, after floor-planning, results in a DEF (Design Exchange Format) file comprising information related to the physical layout, i.e., pin placement, die area, and the core area of the design. During later stages of the flow, the DEF file gets updated multiple times: as standard cells are placed during placement, the coordinates of their placement are added, and the final DEF is generated after routing, when the track information connecting the standard cells is added.
The GDS is then generated, followed by the Design Rule Check (DRC) and Layout vs. Schematic (LVS) checks required for physical verification.
a) Logic Synthesis: The first step towards obtaining a hardened Ibtida IP is logic synthesis, which takes the RTL along with the standard cell library files. Before synthesizing the Ibtida RTL, the OpenLANE environment needs to be set up as shown in listing 5, followed by the commands shown in listing 6. The flow can be executed in interactive mode via the prep -design <design name> command, which sources the design configuration file, config.tcl, reads the specified environment variables (VERILOG_INCLUDE_DIRS, SYNTH_STRATEGY, etc.) required for synthesis, and merges the relevant library exchange format (LEF) and technology LEF files, i.e., the technology-specific information. Together with the generated Verilog RTL, these are passed as input to Yosys and ABC, which synthesize the logic and map it onto the technology-specific standard cells, respectively, as seen in figure 10.
The chip area calculated after synthesis is 1.25 mm². Furthermore, Table V shows the statistics of the generated netlist.
OpenLANE provides a set of design exploration strategies that enable designers to achieve their design specifications in terms of performance and area. Four design exploration strategies offered by OpenLANE have been tested for the Ibtida SoC; they provide a trade-off between area and timing. Strategies 0 and 1 (delay) explicitly focus on achieving better performance in terms of timing, whereas strategies 2 and 3 (area) focus on getting a better (more compact) area. The effect of the different design exploration strategies on the Ibtida SoC is shown in figure 11.
b) Floor-planning: Floor-planning in the OpenLANE flow assigns the die area and core area read from config.tcl, generates the number of standard-cell rows accordingly, and also places hard macros, if any, in the design space. For the Ibtida SoC, floor-planning required the gate-level netlist generated through Yosys along with a pin_order.cfg file, which lists the names of the pins to be placed around the die. Floor-planning for the Ibtida SoC was performed using listing 7, where the init_floorplan command floor-plans the netlist on a core area of 1620 µm x 1590 µm with a core utilization of 50%, i.e., half of the core area is occupied by standard cells. Table VI shows the coordinates of the core and the die for the Ibtida SoC. This is followed by place_io, which places the I/Os around the die as shown in figure 12. The power distribution network (PDN) is then generated using the gen_pdn command, which creates horizontal metal1 rails and vertical metal4 straps, as shown in figure 13.
c) Placement: The global placement command inserts all the standard cells into the core area without regard to order; some standard cells may even overlap each other. The Ibtida SoC after global placement is shown in figure 15. The detailed placement command then ensures that every cell is placed properly inside the rows; legalization issues with respect to overlapping cells are resolved in this step, aligning the cells. The Ibtida SoC after detailed placement is shown in figure 16.
d) Clock Tree Synthesis: The clock tree for the Ibtida SoC was generated using the run_cts command, as shown in listing 9 (Clock Tree Synthesis script).
e) Routing: Routing is the step that follows CTS. In the OpenLANE flow, routing is executed automatically through scripts. The task of the router is to precisely define the paths on the layout surface along which conductors carry electrical signals. The conductors interconnect the pins and the standard cells on the layout, forming a routing grid. Since the routing grid is quite large, routing is performed using a divide-and-conquer approach: global routing followed by detailed routing, as shown in listing 10. The global routing command abstractly plans the routing guides that outline the implementation of the actual routes, whereas the detailed routing command makes the wires follow those routing guides and establishes the interconnects, as shown in figure 17.
f) Physical Verification: The physical verification step, also termed the sign-off step in the OpenLANE flow, validates the final layout. Throughout the flow, a series of reports and logs is generated, which usually involves checking the generated DEF file at each stage of physical design for any design rule violations. This is ensured by the EDA tools: FastRoute identifies antenna violations, and TritonRoute checks for any routing violations. The verification step ascertains that the placer and the router have correctly placed the cells and routed the grid.
The design is checked for any overlapping cells or short circuits, and inspected for any Layout vs. Schematic (LVS) errors, which include unmatched pins or short/open circuits between nets that should have been connected. Some common Design Rule Check (DRC) errors, concerning wire spacing, width, and pitch, need to be handled as defined in the PDK technology LEF (.tlef) file. Some basic errors are shown in figures 19 and 20.
The final DRC and LVS checks on the generated Ibtida SoC layout are ensured using listing 13. For the Ibtida SoC design to be considered DRC and LVS clean, it needs to be validated through Magic, where the run_magic_drc command checks the layout for any design rule check errors and reports them, if any. Furthermore, a hierarchical SPICE netlist is extracted using run_magic_spice_export, and the extracted netlist is then validated through the open-source tool Netgen.
In this paper, we presented a Chisel-generated SoC taped out using a completely open-source toolchain and discussed the different chip design flows involved: RTL simulation, FPGA emulation, and ASIC implementation. Furthermore, we also discussed how the OpenLANE suite allows the automatic place and route of a chip without needing a physical design expert.
Chisel allows software programmers and novice hardware engineers to describe circuits with a higher-level programming feel compared to traditional hardware description languages: the user can abstractly design RTL logic and write reusable hardware generators. OpenLANE, in turn, allows the design to be taken through the ASIC process to generate a GDSII layout for fabrication.
We believe that with the introduction of a completely open-source PDK (SkyWater 130nm) and a completely open-source ISA (RISC-V), combined with HDLs hosted in higher-level programming languages (Chisel), there exists a great opportunity for undergraduate students, academia, and researchers to quickly design, implement, and fabricate chips using an agile methodology analogous to the software domain, removing what has long been a huge bottleneck to innovation in the hardware design industry. | 6,302.8 | 2021-09-30T00:00:00.000 | [
"Computer Science"
] |