Roles of specialized pro-resolving mediators and omega-3 polyunsaturated fatty acids in periodontal inflammation and impact on oral microbiota
Periodontitis is a chronic inflammatory disease induced by dysbiotic dental biofilms. Management of periodontitis is primarily anti-bacterial via mechanical removal of bacterial biofilm. The successful resolution requires wound healing and tissue regeneration, which are not always achieved with these traditional methods. The discovery of specialized pro-resolving mediators (SPMs), a class of lipid mediators that induce the resolution of inflammation and promote local tissue homeostasis, creates another option for the treatment of periodontitis and other diseases of chronic inflammation. In this mini-review, we discuss the host-modulatory effects of SPMs on periodontal tissues and changes in the taxonomic composition of the gut and oral microbiome in the presence of SPMs and SPM precursor lipids. Further research into the relationship between host SPM production and microbiome-SPM modification has the potential to unveil new diagnostic markers of inflammation and wound healing. Expanding this field may drive the discovery of microbial-derived bioactive therapeutics to modulate immune responses.
having severe periodontitis (2). The global cost of lost productivity from severe periodontitis alone was $39 billion yearly, based on 2015 data (1). A recent report estimated that periodontal disease resulted in combined indirect and direct costs of $154.06 billion in the United States and €158.64 billion in Europe in 2018 (3).
Pathogenesis of periodontitis
The disproportionate inflammatory host response and microbiota dysbiosis are the two major etiologic factors in the pathogenesis of periodontitis (4, 5). These two factors do not always exist simultaneously, but they influence each other, and their coexistence results in the development of periodontitis. A pathogenic biofilm is a prerequisite for periodontitis initiation. Several bacterial species, such as the core red complex species Porphyromonas gingivalis (P. gingivalis), Tannerella forsythia (T. forsythia), and Treponema denticola (T. denticola), are significantly associated with the severity and progression of periodontitis (6). Their virulence factors that cause periodontal tissue destruction have also been identified. However, these species are usually present in small amounts, which led to the concept of keystone pathogens (7). Keystone pathogens, such as P. gingivalis, make up a relatively small proportion of the overall microbial biomass but can dysregulate immune responses and induce microbiota dysbiosis, accelerating periodontal inflammation. This concept explains how the presence of specific species can be critical for the progression of periodontitis. As technologies advance, more and more species are being identified as associated with periodontitis and promoting disease progression. The antagonistic and synergistic interactions between multiple species as a network will be worth studying to elucidate the complexity of the oral microbiota (8).
Although the biofilm is required to initiate periodontitis, the destruction of periodontal tissues in periodontitis is mainly caused by the inflammation induced by bacteria (9). Persistent inflammation and tissue destruction create an environment leading to dysbiotic microbiota (5, 10). Local inflammation induced by periodontitis can further influence systemic immune responses, and periodontal health and systemic health mutually affect each other (11). Studies have demonstrated that periodontitis is associated with increased systemic levels of cytokines in serum or plasma, such as interleukin-1α (12), interleukin-1β (12-14), interleukin-6 (IL-6) (12, 15, 16), C-reactive protein (CRP) (16, 17), interferon-γ, and tumor necrosis factor-α (12, 13). Periodontal therapy can reduce systemic levels of inflammatory mediators, such as CRP (18, 19) and IL-6 (16, 20). These systemic changes in cytokine levels in periodontitis patients indicate that periodontitis may play a role in the etiologic mechanisms of systemic inflammatory diseases, and that inflammation drives the pathogenesis of periodontitis.
Treating periodontitis through host-modulation
For decades, the standard treatment has been biofilm removal via mechanical debridement, such as scaling and root planing (SRP). However, this approach has limited effects in patients with aggressive forms of periodontitis associated with a dysregulated immune response. To address this issue, host-modulation has been considered. Host-modulation therapy aims to modify the host response by reducing the damaging aspects of the inflammatory response that lead to tissue destruction (21). Generally, there are two categories of host-modulation therapy: (1) modulating the host's inflammatory response by inhibition or resolution; and (2) modulating the host's pathologic collagenolytic response in periodontal tissues (22). The first category includes the use of anti-inflammatory agents, such as non-steroidal anti-inflammatory drugs (NSAIDs), in addition to conventional periodontal treatment (e.g., SRP). The second category includes the use of subantimicrobial-dose doxycycline, which reduces collagenase activity to inhibit disease progression.
Restoration of a proper host immune response is crucial in treating periodontitis, but no effective host-modulatory approach currently exists. NSAIDs have been used to treat periodontitis, with positive clinical outcomes (23, 24). However, there are significant concerns regarding the long-term use of NSAIDs due to their adverse effects on the renal, cardiovascular, gastrointestinal, and hepatic systems (25). The clinical effects of anti-cytokine therapies used to treat rheumatoid arthritis and other immune diseases have also been investigated. Although these therapies can control inflammation, their clinical effects on periodontitis patients without immune disorders are not clear, and the adverse effects of systemic use, such as an increased risk of infection and malignancy, are concerning (21, 26). Although subantimicrobial-dose doxycycline has statistically beneficial clinical effects, the absolute changes in pocket depth and clinical attachment level are limited, and compliance with long-term use can be challenging for patients (27). Host-modulation therapy for periodontitis is a promising approach, but more studies are required to make it practical and effective.
Specialized pro-resolving mediators promote the resolution of inflammation and tissue regeneration
Anti-inflammation has been the central concept of treating periodontitis for years. Another option was presented with the discovery of specialized pro-resolving mediators (SPMs), a class of lipid mediators derived from omega-3 or omega-6 polyunsaturated fatty acids (PUFAs) that induce the resolution of inflammation and promote local tissue homeostasis (28). The resolution of inflammation is a proactive process induced by SPMs, including lipoxins, resolvins, protectins, and maresins. These SPMs are produced by enzymatic activation of membrane phospholipids and bind to specific G protein-coupled receptors on a variety of cells to regulate the immune response. In the resolution phase of inflammation, there are decreased infiltration of neutrophils, reduced levels of pro-inflammatory cytokines and lipid mediators, and increased recruitment of resolving macrophages, such as M2 macrophages, that clear the lesion by efferocytosis without immune suppression (29, 30). SPMs also stimulate the phagocytosis and killing of microbes (31). SPMs possess dual anti-inflammatory and pro-resolution properties.
Initial inflammation is required to defend against bacterial challenge. Neutrophils and macrophages play important roles in innate immunity. However, if acute inflammation is not properly resolved (e.g., excessive neutrophil infiltration and proinflammatory cytokine production), it leads to fibrosis, decreased apoptosis, impaired phagocytosis, and cellular senescence, resulting in chronic inflammation and tissue damage (32). Resolution of inflammation not only mitigates inflammation but also promotes tissue healing, regeneration, and reduction of pain. Due to the aforementioned characteristics, it is feasible to treat inflammatory diseases with SPMs. SPMs can control inflammation in many preclinical inflammatory-disease models, such as peritonitis (33), inflammatory bowel disease (34), diabetes (35), and periodontitis (36).
Recently, conjugates in tissue regeneration (CTRs), a newly identified class of SPM derivatives, were described; these conjugates can promote tissue regeneration (30, 37). The novel cysteinyl-resolvins significantly accelerate tissue regeneration in planaria and inhibit human granuloma formation (37). A recent study demonstrated that porcine periodontal ligament stem cells (pPDLSCs) can synthesize cysteinyl-containing SPMs (cys-SPMs), specifically maresin conjugates in tissue regeneration 3 (MCTR3), and that pretreatment of pPDLSCs with MCTR3 reduced the production of acute and chronic proinflammatory cytokines and chemokines in an inflammatory environment (38).
Specialized pro-resolving mediators in periodontitis
In periodontitis, preclinical studies have demonstrated that SPMs prevent and treat experimental periodontitis (36, 39). Topical application of RvE1 can prevent bone loss, regenerate lost bone, change gene expression patterns in the gingiva, and produce shifts in the oral microbiota (39, 40) and immune cellular components (41) in animals with experimental periodontitis. SPMs, including resolvins, lipoxins, and maresins, are now being studied to understand their impact on periodontal inflammation and tissue healing. Under in vitro inflammatory conditions, resolvin D1 (RvD1) can promote periodontal ligament fibroblast (PDLF) proliferation (42, 43) and reduce proinflammatory cytokine production in gingival fibroblasts (44), while maresin-1 (MaR1) and resolvin E1 (RvE1) restore the regenerative properties of human PDLSCs (45, 46).
The impact of SPMs on periodontal pathogens has also been investigated. An in vitro study showed that MaR1 enhanced intracellular antimicrobial reactive oxygen species production and restored impaired phagocytosis of P. gingivalis and Aggregatibacter actinomycetemcomitans (A. actinomycetemcomitans) in macrophages of localized aggressive periodontitis patients (47). This finding may represent one of the mechanisms underlying the oral microbiota shifts induced by SPMs.
For clinical applications, an effective vehicle to deliver SPMs is important to maintain high concentrations and prevent lipid peroxidation. Membrane-shed vesicles, termed microparticles, have been used to deliver SPMs to treat experimental periodontitis (48, 49). Compared to SPMs alone, SPMs delivered in microparticles can increase treatment efficacy by targeting tissues without dilution or inactivation of the mediator. In a clinical trial, a formulated mouthwash containing methyl ester-benzo-lipoxin A4, one type of SPM, was shown to be safe and could reduce local inflammation and increase the abundance of pro-resolution molecules in the serum of human participants (46). Using SPMs to treat periodontitis in the clinic has potential, but more clinical studies are required.
Lipid profiles and inflammatory diseases
Humans cannot efficiently produce the precursors for SPMs de novo. Instead, SPMs are derived from the ingestion of dietary omega-3 PUFA: alpha-linolenic acid (ALA), eicosapentaenoic acid (EPA), and docosahexaenoic acid (DHA) (Figure 1) (50). ALA, EPA, and DHA are incorporated as phospholipids into cellular membranes throughout the body, and SPMs are enzymatically released from these lipids to resolve inflammation (50). SPMs have been identified in many human samples, including milk (51), serum, lymphoid tissue (52), saliva, and gingival crevicular fluid (53, 54). These SPM levels have been shown to be involved in regulating the resolution of inflammation throughout the body, including the inflammatory status of mammary glands (51), the stability of atherosclerotic plaques (55), the severity of tuberculous meningitis (56), and the disease status of periodontitis (53,54,57). The action of omega-3 PUFA is in direct competition with dietary omega-6 PUFA, linoleic acid, which is the precursor to arachidonic acid (ARA), and the proinflammatory lipid mediators, prostaglandins and leukotrienes (58,59). The relative ratios of dietary omega-3 and omega-6 PUFA are believed to contribute to homeostasis in the initiation and resolution of inflammation throughout the body (60,61).
It is important to note that high doses of dietary PUFAs do not guarantee high production of SPMs. The action of lipoxygenases is required to produce SPMs from PUFAs. Also, the corresponding SPM receptors must be present on cells to bind SPMs and induce the resolution of inflammation.
Effects of dietary lipids on microbiota
As dietary compounds, omega-3 PUFAs are directly exposed to the microbiome of the oral cavity and digestive tract, and multiple studies in animals and humans describe significant changes in gut inflammation and the composition of the gut microbiome based on dietary PUFA quality and quantity (58, 62, 63). For example, in a comparison of mice fed lard vs. fish oil (high in omega-3 PUFAs) as the major dietary fat, significant increases in Lactobacilli, Bifidobacterium, and Akkermansia muciniphila were detected with the fish oil diet, and transfer of A. muciniphila to the lard-fed mice could partially reduce the diet-induced intestinal inflammation and improve mucosal barrier function (64). The increase in Bifidobacterium in response to dietary EPA and DHA was confirmed in another mouse study, which also demonstrated that a diet high in omega-3 PUFAs from flaxseed and fish oil increases the bacterial diversity of the mouse gut (65). The same study determined that the flaxseed/fish oil diet increased the levels of ALA, EPA, and DHA in multiple tissues with a parallel decrease in ARA, indicating that the lipid membranes of host tissues are the ultimate destination for dietary omega-3 PUFAs. Lipids play important roles in microbial physiology: as structural components of the cell membrane, as energy storage modules, and in cell signaling and the regulation of cellular activities (66). There is considerable overlap in lipid metabolic activities between eukaryotes and prokaryotes, providing opportunities for interkingdom cross-talk via lipid modification (58). In the gut microbiome, dietary lipids can be biotransformed by bacterial enzymes, resulting in downstream effects on host lipid physiology (58). Bifidobacterium and Lactobacilli are associated with reduced intestinal inflammation and diets high in omega-3 PUFAs. These bacteria have been shown to produce enzymes that modify omega-6 linoleic acid to a conjugated linoleic acid (CLA) that has anti-inflammatory effects and blocks the production of ARA-derived lipids (67). These same microorganisms produce a conjugated omega-3 alpha-linolenic acid (CLNA) with significant antioxidant effects (68).
These commensal bacteria utilize host lipids as an energy source and metabolize some lipids to new isoforms to enhance mucosal barrier function. Conversely, gut bacteria have also been associated with detrimental changes to dietary lipids, as demonstrated by studies of the microbiome associated with irritable bowel syndrome (69, 70). These studies begin to shed light on the potential circular relationship between dietary lipids, changes in the local host environment, and selection for bacteria that metabolize those same nutrients and benefit from changes to the environment (71). It is important to note that gut bacteria primarily influence the production of pro-inflammatory or pro-resolution lipids at the level of precursor molecules by manipulation of linoleic and alpha-linolenic acids. Identification of novel bioactive lipid molecules produced by gut bacteria has the potential to create new therapeutics for the regulation of the host immune system (72).
There are a few studies investigating the impact of omega-3 PUFAs on the oral microbiota. An in vitro study showed that omega-3 PUFAs, including EPA, DHA, and ALA, and their ester derivatives inhibited the growth of various oral bacteria, including Streptococcus mutans, Candida albicans, A. actinomycetemcomitans, Fusobacterium nucleatum (F. nucleatum), and P. gingivalis (73). Another in vitro study showed that DHA and EPA possessed antibacterial activities against planktonic and biofilm forms of the periodontal pathogens P. gingivalis and F. nucleatum (74). In a randomized clinical trial, high-dose omega-3 PUFA intake during non-surgical treatment in stage III or IV periodontitis patients was associated with reduced counts of periodontal pathogens, including P. gingivalis, T. forsythia, T. denticola, and A. actinomycetemcomitans (75). The antimicrobial property of omega-3 PUFAs indicates their potential effects on the composition of the oral microbiota (76, 77). More research is needed to investigate the impact of dietary lipids on the oral microbiota and the interactions between the oral and gut microbiota.

[Figure 1. The hypothesis of oral microbiota and specialized pro-resolving mediator (SPM) interactions. ALA, alpha-linolenic acid; ARA, arachidonic acid; DHA, docosahexaenoic acid; EPA, eicosapentaenoic acid; LA, linoleic acid; LTB4, leukotriene B4; MaR1, maresin-1; n-3 PUFA, omega-3 polyunsaturated fatty acid; n-6 PUFA, omega-6 polyunsaturated fatty acid; PGE2, prostaglandin E2; RvD1, resolvin D1; RvE1, resolvin E1. This figure was created with BioRender.com.]
Profiles of SPMs and Microbiota in periodontitis
The role of SPMs and relevant lipids in the oral microbiota in periodontitis has rarely been investigated. Recently, SPMs, SPM pathway markers, and SPM receptor genes were identified in human gingival tissues (57). A follow-up study aimed to analyze and integrate data on lipid mediator levels (SPMs and SPM pathway markers), SPM receptor gene expression, and the subgingival microbiome in subjects with periodontitis and healthy controls (78). The study included 13 periodontally healthy and 15 periodontitis subjects. Gingival tissue and subgingival plaque samples were collected prior to and 8 weeks after non-surgical treatment in the periodontitis group, but were collected only once in the healthy group, before any prophylaxis. Correlations between lipid mediator levels, receptor gene expression, and bacterial abundance were analyzed using the Data Integration Analysis for Biomarker discovery using Latent components (DIABLO) and Sparse Partial Least Squares (SPLS) methods. The study demonstrated that specific bacterial species were significantly associated with lipid mediators in different inflammatory conditions. When comparing the correlated species in periodontitis before and after treatment, one bacterial species, Anaeroglobus geminatus, was identified in both conditions and positively correlated with different lipid mediators. Both states (before and after treatment) had four lipid mediators, 5(S),12(S)-dihydroxy-6E,8Z,11E,14Z-eicosatetraenoic acid (5(S),12(S)-DiHETE), RvD1, MaR1, and leukotriene B4 (LTB4), correlated with different bacterial species. Among the nine bacterial species identified in the periodontitis-after-SRP group, four Selenomonas species (Selenomonas sp._oral_taxon_136, Selenomonas sp._oral_taxon_137, Selenomonas sp._oral_taxon_138, Selenomonas sp._oral_taxon_479) were highly correlated with multiple lipid mediators. These identified bacteria are not considered periodontal pathogens in the literature. Similar to the gut microbes described above, both A. geminatus and Selenomonas spp. encode enzymes capable of transforming linoleic acid- and ALA-derived lipids, implying that they may play a role similar to that of gut bacteria in modifying oral lipids (79). It is also possible that changes in the local environment, including the inflammatory condition and lipid profiles, result in the presence of these bacteria, as discussed in the other sections. These findings indicate potential interactions between lipids, microbiota, and inflammation in periodontitis that have not been deeply investigated (Figure 1).
Conclusion
These findings demonstrate that, similar to the influence of diet on the gut microbiome, the resolution of inflammation induced by SPMs is associated with shifts in the taxonomic composition of the oral microbiota. Potentially, SPMs and subgingival bacterial species may have interactions that open new possibilities for the identification of diagnostic biomarkers and the development of therapeutics for periodontitis.
Author contributions
CTL conceived the topic, and CTL and GDT wrote the manuscript. All authors contributed to the article and approved the submitted version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2023,
"sha1": "76f984e9f12cc3d4f80873ab7bda4a1566b2eae2",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/froh.2023.1217088/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "74360b791132c53f04a79643363bcb2bbf343f97",
"s2fieldsofstudy": [
"Medicine",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Pleural Metastatic Melanoma With Recurrent Malignant Pleural Effusions
Pleural metastatic melanoma is rare, and associated malignant pleural effusions are even rarer. We present a case of pleural metastatic melanoma with recurrent malignant pleural effusions. The initial diagnosis showed no metastatic disease, and the patient underwent resection and received a year of immunotherapy for localized disease. However, two years later, the patient presented with pleural metastatic melanoma with non-resolving malignant pleural effusions requiring an indwelling pleural catheter and, eventually, thoracotomy with decortication. Clinicians should have a high index of suspicion for pleural metastatic melanoma in the setting of recurrent pleural effusions, even though it is a rare occurrence.
Introduction
Melanoma is the third most common skin cancer and the fifth most common cancer in males and females. The 2018 World Health Organization (WHO) Classification of Melanoma separates melanomas into three distinct categories: melanoma that is associated with solar damage, melanoma that is not associated with solar damage, and nodular melanomas. Environmental, genetic, and immunological factors play an important role in melanoma. If it is diagnosed early, survival rates are as high as 94%. If metastasis is present, the prognosis is poor. Melanoma can spread to the liver, bones, brain, or lungs. The lung is a common site of metastasis, and when lung metastasis is present, the most common cause of death is respiratory failure. The incidence rate of metastatic melanoma is about 0.9 per 100,000 [1]. Pleural metastatic melanoma is rarer and not commonly reported. Additionally, only 2% of the patients with thoracic metastatic melanoma present with malignant pleural effusions [2]. Differentiating between a primary pleural neoplasm and pleural metastatic melanoma can be difficult given their similar presentations. Prompt diagnosis by pleural biopsy is essential for initiating treatment early. We report a case of a patient with a history of right first toe melanoma status post resection and immunotherapy who presented with shortness of breath. She was later found to have pleural metastatic melanoma with massive recurrent pleural effusions.
Case Presentation
A 77-year-old female with a medical history of hypertension, chronic kidney disease stage III, osteoarthritis, degenerative disc disease, and right first toe melanoma status post resection and immunotherapy presented with shortness of breath. She reported gradually worsening shortness of breath over the last two weeks. Over the past few days, she had been short of breath while at rest, which prompted her to come to the emergency department. She denied chest pain, palpitations, sore throat, body aches, fever, chills, nausea, vomiting, or pedal edema. She denied using oxygen at baseline. Relevant past medical history included that, about two years ago, the patient had a biopsy of her right great toe that revealed a melanotic lesion showing atypical dermal melanocytic proliferation suspicious for melanoma. Tumor cells were strongly positive for S100, SOX10, MITF, and HMB45. A positron emission tomography-computed tomography scan revealed focal, moderately increased fluorodeoxyglucose (FDG) localization correlating with melanoma only in the right great toe (Figure 1), with no involvement of regional sites, local lymph nodes, or distant metastatic disease. She underwent a right great toe amputation. Final pathology showed invasive melanoma with a distal thickness of 3 mm without ulceration. Metastatic melanoma was noted in one of the two sentinel lymph nodes removed. The final pathological classification was T3a, pN1a, M0. The patient was started on adjuvant immunotherapy with nivolumab 480 mg every four weeks. She received at least nine months of treatment before developing inflammatory side effects from the medication. Her regimen was adjusted to nivolumab 240 mg every two weeks. She received one year of immunotherapy.
FDG: Fluorodeoxyglucose
On presentation, vital signs were significant for a respiratory rate of 26 breaths/minute, and the patient required three liters of oxygen via nasal cannula. Physical examination was notable for diminished breath sounds over the left lung. Chest radiography revealed near complete opacification of the left hemithorax and a large left pleural effusion (Figure 2). Computed tomography angiography (CTA) of the chest showed multiple pleural-based masses involving the left lung measuring up to 3.7 x 1.9 cm, a large left pleural effusion occupying nearly the entirety of the left lung, and multiple nodules throughout the right lung measuring up to 1.2 cm (Figure 3). The patient received two thoracenteses with a total of three liters of dark red colored fluid removed. Per Light's criteria, pleural fluid analysis was positive for an exudative effusion. Despite the thoracenteses, the pleural effusion continued to reaccumulate. Therefore, the patient received an indwelling pleural catheter for continuous drainage. She also had a port placed and was discharged home with plans to start a combination dual therapy of nivolumab and relatlimab for metastatic melanoma. One month later, the patient re-presented to our hospital with shortness of breath. On presentation, she required five liters of oxygen via nasal cannula. CTA of the chest showed multiple enlarged mediastinal and hilar lymph nodes, an interval-worsened metastatic large complex loculated left pleural effusion, large pleural metastases, and numerous pulmonary nodules (Figure 4 and Figure 5). Per history, the indwelling pleural catheter had not been draining for the two weeks prior to presentation. There was a concern that the indwelling pleural catheter was not draining, but the patient was not a candidate for tissue plasminogen activator/dornase alfa treatment given her history of hemorrhagic pleural effusion. Interventional radiology was consulted but was unable to aspirate any fluid due to the severe loculations. Cardiothoracic surgery was consulted for alternative treatment options. The patient underwent left thoracotomy with decortication and left chest tube placement. Left pleural tissue was sent for pathology, which showed a population of malignant cells that were pleomorphic and anaplastic. Controlled immunohistochemistry demonstrated these cells to be S100 positive and HMB45 positive, supporting the clinical impression of metastatic melanoma. Eventually, the chest tube was removed, and the patient was weaned off to room air. She was discharged with a plan for salvage therapy for advanced melanoma with treatment consisting of dabrafenib and trametinib.
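As a side note on the exudate classification mentioned above, Light's criteria can be expressed as a short decision rule. The sketch below is a minimal illustration only: the three thresholds are the standard published criteria, while the input values and the 222 U/L upper normal serum LDH are hypothetical assumptions, not values from this case.

```python
# A minimal sketch of Light's criteria for classifying a pleural effusion as
# exudative. The thresholds are the standard published criteria; the example
# inputs and the 222 U/L upper normal serum LDH are hypothetical (lab
# reference ranges vary).

def is_exudate(pleural_protein, serum_protein, pleural_ldh, serum_ldh,
               serum_ldh_upper_normal=222.0):
    """Return True if any one of Light's three criteria is met."""
    return (
        pleural_protein / serum_protein > 0.5              # protein ratio > 0.5
        or pleural_ldh / serum_ldh > 0.6                   # LDH ratio > 0.6
        or pleural_ldh > (2 / 3) * serum_ldh_upper_normal  # LDH > 2/3 upper normal
    )

# Hypothetical values (protein in g/dL, LDH in U/L).
print(is_exudate(pleural_protein=4.1, serum_protein=6.8,
                 pleural_ldh=310.0, serum_ldh=180.0))  # -> True
```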
Discussion
Melanoma is largely responsible for deaths related to skin cancer. Lesions suspicious for melanoma can be identified by the common features listed in the ABCDE mnemonic: asymmetry of shape, border irregularity, color variation, diameter greater than 6 mm, and evolution of the lesion [3]. Melanoma can spread locally, regionally, and distantly [1]. To make the diagnosis of melanoma, a full-thickness excision biopsy is generally necessary. Melanoma staging is done based on the Tumor, Nodes, and Metastases classification. This considers the tumor thickness and whether ulceration is present, whether there is involvement of regional lymph nodes and non-nodal regional sites, and whether there are distant metastases [4]. The National Comprehensive Cancer Network (NCCN) has guidelines for the treatment of malignant melanoma based on this classification.
Metastatic melanoma of the lung is common, but it comprises only about 5% of all secondary pulmonary malignancies. Additionally, metastasis to the pleura is rare, and malignant pleural effusions are uncommon: only 2% of patients with thoracic metastases have pleural effusions [2,5]. The study by Chen et al. reported three patients (2%) with malignant pleural effusions in the setting of metastatic melanoma to the thorax [6]. Pleural metastatic melanoma can present as pleural thickening or pleural effusion. When present, the pleural effusion can be unilateral or bilateral, small or large, and can be black in color if melanocytes are present. Malignant pleural effusions are expected to be exudative and can be diagnosed on pleural fluid cytology about 60% of the time [7].
Recurrent malignant pleural effusions usually signify advanced disease. Therefore, the management of malignant pleural effusions depends on the specific clinical scenario, with factors such as patient age, life expectancy, response to cancer therapy, and symptomatic relief. Therapeutic options include repeated thoracentesis, indwelling pleural catheter, pleurodesis, decortication, and chemotherapy [8].
Our patient presented with recurrent large pleural effusions, dark red in color, indicating a hemorrhagic effusion. The pleural fluid was exudative, placing malignant pleural effusion at the top of our differential list, and a left pleural peel biopsy subsequently confirmed the diagnosis of metastatic melanoma. Despite the placement of the left-sided indwelling pleural catheter, our patient presented again with a left-sided loculated pleural effusion. Ultimately, the patient required thoracotomy with decortication.
When our patient was initially diagnosed with cutaneous malignant melanoma, the stage was T3a, pN1a, M0, which was treated with resection and nivolumab per the NCCN guidelines. Despite adequate treatment, the disease progressed, and she was found to have distant metastases. Malignant melanoma is an aggressive skin cancer, and pleural metastasis is considered a poor prognostic factor. In our patient, pleural involvement occurred two years after the diagnosis of malignant melanoma of the right great toe. Per the NCCN guidelines, the patient was started on systemic therapy with dabrafenib and trametinib. Our case is unique in that pleural metastatic melanoma is very rare, and there are only a few reports of it presenting as massive recurrent pleural effusions.
Conclusions
In conclusion, distant metastatic melanoma carries an overall poor prognosis. The rarity of metastatic pleural melanoma presenting as malignant pleural effusion may disguise metastatic disease as a primary lung or pleural tumor. Early identification is important because the most common cause of death in metastatic melanoma is respiratory failure secondary to lung or pleural involvement. A high index of suspicion for uncommon findings, such as recurrent malignant pleural effusions in a patient with a history of melanoma, can lead to prompt treatment. This can lead to improved overall outcomes. Depending on patient preference and survival expectancy, management can be aimed toward symptomatic relief as palliative treatment versus aggressive treatment for cure.

In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
FIGURE 2: There is near complete opacification of the left hemithorax with large left pleural effusion (yellow arrow).
FIGURE 3: Pleural-based mass is seen involving the left lung measuring up to 3.7 x 1.9 cm (yellow arrow) with large left pleural effusion occupying nearly the entirety of the left lung.
"year": 2024,
"sha1": "241330bf7129f169e519657752aec7cc1aac4710",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "3062f23c679767b3e411a94c31f8f58673f5655e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Does Smartphone Use, Ruangguru Application, and Learning Motivation Affect Learning Achievement in Economic Subjects?
The objectives of this research were to find out: (1) the influence of Smartphone use on students' Learning Achievement in Economic Subjects; (2) the influence of the Ruangguru Application on students' Learning Achievement in Economic Subjects; (3) the influence of Learning Motivation on students' Learning Achievement in Economic Subjects; and (4) the influence of using a Smartphone, the Ruangguru Application, and Learning Motivation together on students' Learning Achievement in Economic Subjects. This is explanatory research, conducted in December 2019 at Public Senior High School 1 Bukittinggi. To take the sample, the researchers used Proportional Random Sampling; the sample consisted of 112 respondents. To collect the data, the researchers used documentation and questionnaires: documentation was used for Learning Achievement in Economic Subjects, while questionnaires were used for Smartphone use, the Ruangguru Application, and Learning Motivation. The researchers analyzed the data using the Multiple Regression Analysis technique. The results showed that: (1) Smartphone use has a positive influence on Economic Learning Achievement; (2) the Ruangguru Application has a positive influence on Economic Learning Achievement; (3) Learning Motivation has a positive influence on Economic Learning Achievement; and (4) Smartphone use, the Ruangguru Application, and Learning Motivation together have a positive influence on Economic Learning Achievement.
INTRODUCTION
Smartphone technology is constantly progressing, and smartphones from various brands now compete in the market. Research conducted by Yen in 2009, cited in (Muflih, 2017), found that of 10,191 adolescents studied, 30% of participants could tolerate smartphone use, 36% had withdrawals, 27% showed heavier use, 18% failed to reduce smartphone use, and 10% experienced impaired social interactions. Many government authorities recognize that there is a definite risk of addiction due to excessive use or misuse of smartphones; however, findings remain limited and there are no validated standards for smartphone addiction. Smartphones offer various functions, such as sending messages, pictures, and data; holding group conversations with several people at once; sending voice messages; and telling friends or relatives where we are by sending our location. Smartphones have other applications as well, such as e-mail (sending messages via the internet), browsing (which requires an internet connection), entertainment such as music and videos, and various downloadable games that attract smartphone owners to play, along with cameras backed by considerable storage capacity. Learning patterns are one of the important factors that greatly affect the achievement or learning outcomes obtained by students. In education, it is well known that students have learning patterns that differ from one another.
The difference can be seen from two aspects: before getting to know smartphone technology and after. Before smartphone technology, students' learning time was limited: they could only receive subject matter in class during the teaching and learning process, and they had to read books in the library to gain knowledge beyond what the teacher provided. After getting to know smartphone technology, students can easily search for subject matter and expand their own knowledge using smartphone applications, anywhere and anytime. With technological developments such as smartphone sophistication, students only need a smartphone to access online tutoring applications, so they can decide when to study with full concentration, without interference from other things.
This phenomenon is called online tutoring. Lessons that students learn at school can also be studied through online tutoring, and the curriculum prepared by the government, which is usually applied in schools, is also available in online tutoring. Online tutoring services are now increasingly prevalent, not only in Indonesia but throughout the world. Therefore, many people have taken the initiative to create online tutoring services with highly competent, experienced tutors or teachers. In Indonesia, the most popular online tutoring service right now is the Ruangguru application. Hamalik, as quoted by Wenno et al. (2016) in (Gideon, 2017), states that if students experience failure or setbacks in learning outcomes, it means that difficulties were encountered during learning. One indicator of students' learning difficulties is learning outcomes that are low or not as expected. Teachers sometimes do not understand every difficulty experienced by students. In fact, by knowing the difficulties students face in learning, teachers can find alternative solutions or the right remedies to overcome those difficulties.
The Department of Educational Psychology Team (Mulyadi, 2010) in (Gideon, 2017) defines tutoring as the process of providing assistance to students in solving difficulties associated with learning problems. This is why online tutoring programs are seen as a business opportunity with a high chance of success. The adopted online system allows anyone to join online tutoring without having to worry about being crammed into a classroom. As long as students have adequate gadgets and an internet network, they can access online tutoring wherever and whenever they want.
METHODS
This research was conducted at SMA Negeri 1 Bukittinggi. This location was chosen for its relevance to the object under study. According to Sugiyono (2010), a population consists of the objects or subjects studied by the researchers, from which the research conclusions are drawn. The population consisted of 155 people. To take the sample, the researchers used the Proportional Random Sampling technique. According to Sugiyono (2014), samples are taken when the objects or data sources are very large. Therefore, the researchers used Slovin's formula, as used in (Umar, 2011):

n = N / (1 + N·e²) (1)

where n is the sample size, N is the population size, and e is the margin of error. The total sample was 112 students, distributed across the classes using the proportional allocation formula:

nᵢ = (Nᵢ / N) × n (2)

where Nᵢ is the number of students in class i.
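As a minimal sketch of this sampling arithmetic in Python: the 5% margin of error is our assumption (it reproduces the reported total of 112 from a population of 155), and the per-class sizes are hypothetical, since the paper does not report them.

```python
# Slovin's formula and proportional allocation, as described above.
# e = 0.05 is an assumption; the class sizes are hypothetical.

def slovin(N, e):
    """Slovin's formula: n = N / (1 + N * e^2)."""
    return round(N / (1 + N * e ** 2))

def proportional_allocation(class_sizes, n):
    """Allocate the total sample n across classes in proportion to class size.
    Rounding can leave the total off by one; adjust the largest class if so."""
    N = sum(class_sizes.values())
    return {c: round(Ni / N * n) for c, Ni in class_sizes.items()}

n = slovin(N=155, e=0.05)
print(n)  # -> 112

classes = {"XII IPA 1": 40, "XII IPA 2": 39, "XII IPA 3": 38, "XII IPA 4": 38}
print(proportional_allocation(classes, n))
```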
Variable Operational Definition
Learning Achievement. Learning achievement is the real result obtained after learning activities or learning experiences, reflected in the report cards students have obtained. Here it refers to the learning achievement of the twelfth-grade natural science students of State Senior High School 1 Bukittinggi. The indicator for this variable is the student report card index.

The use of Smartphone. A smartphone is a device supported by the features desired in the user's daily life, with computer-like capabilities, that can be carried anywhere.

Ruangguru Application. The Ruangguru application is accessed via an internet network available anywhere, and the use of this tutoring application can help students in learning.

Learning Motivation. Learning motivation is a drive or desire that gives rise to an action to achieve a goal. High learning motivation, especially motivational encouragement from parents and peers, will increase the learning achievement of the twelfth-grade natural science students of State Senior High School 1 Bukittinggi.
Test validity
Valid means that the instrument can be used to measure what is intended to be measured (Sugiyono, 2009). In the validity test, the product moment correlation formula can be used (Umar, 2008):

r = (nΣXY − ΣX·ΣY) / √([nΣX² − (ΣX)²][nΣY² − (ΣY)²])

The value of r_table at a significance level (α) of 5% with n = 30 is 0.361, meaning that an instrument item is valid if r_count > 0.361.
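A minimal sketch of this item-validity check, using randomly generated pilot responses as stand-in data (the 0.361 cutoff is the r_table value quoted above; a stricter variant would exclude each item from its own total score):

```python
import numpy as np
from scipy.stats import pearsonr

# Correlate each item with the total score; keep the item when
# r_count > r_table = 0.361 (alpha = 5%, n = 30). Data are hypothetical.

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(30, 10))  # 30 pilot respondents x 10 Likert items
total = responses.sum(axis=1)

R_TABLE = 0.361
for item in range(responses.shape[1]):
    r_count, _ = pearsonr(responses[:, item], total)
    print(f"item {item + 1}: r_count = {r_count:.3f}, valid = {r_count > R_TABLE}")
```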
Reliability test
In deciding reliability, an instrument can be considered reliable if the Cronbach's Alpha value is greater than 0.6 (Arikunto, 2006). The formula for calculating the instrument reliability coefficient using Cronbach's Alpha is as follows:

α = (k / (k − 1)) × (1 − Σσᵢ² / σₜ²)

where k is the number of items, σᵢ² is the variance of item i, and σₜ² is the variance of the total score.
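The same coefficient is easy to compute directly; a minimal sketch with hypothetical pilot data:

```python
import numpy as np

# Cronbach's alpha as written above:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score).

def cronbach_alpha(responses):
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                         # number of items
    item_vars = responses.var(axis=0, ddof=1)      # per-item variances
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(30, 10))  # hypothetical pilot data
alpha = cronbach_alpha(responses)
print(f"alpha = {alpha:.3f}, reliable = {alpha > 0.6}")  # reliable if alpha > 0.6
```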
The Technique of Data Analysis
Descriptive analysis is a statistic used to analyze data by describing the data that have been collected as they are, without intending to make conclusions or generalizations (Sugiyono, 2010).
Classic assumption test
The first step of regression analysis is an examination of the assumptions: tests of residual normality, the absence of heteroskedasticity in the residuals, multicollinearity, and the absence of autocorrelation in the residuals (Yamin, 2009).
Multiple Linear Regression
In using multiple linear regression, the requirement that must be met is the classic assumption test (the test requirements for multiple regression analysis), so that the regression equation obtained can really be used to predict the dependent variable or criterion. The coefficient of determination (R²) measures how far the model can explain the dependent variable (Ghazali, 2013). In testing the hypothesis, the coefficient of determination is seen from the value of R Square (R²), to find out how far the independent variables explain student learning achievement. The regression equation is as follows:

Y = a + b₁X₁ + b₂X₂ + b₃X₃ + e

where Y is learning achievement, X₁ is smartphone use, X₂ is the Ruangguru application, X₃ is learning motivation, a is the constant, b₁-b₃ are the regression coefficients, and e is the error term.
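A minimal sketch of this model in Python with statsmodels, using synthetic stand-in data since the questionnaire scores themselves are not published; `summary()` reports the coefficients, R², the simultaneous F statistic, and the partial t statistics used in the tests below.

```python
import numpy as np
import statsmodels.api as sm

# Fit Y = a + b1*X1 + b2*X2 + b3*X3 + e on synthetic stand-in data.

rng = np.random.default_rng(1)
n = 112
X = rng.normal(size=(n, 3))  # X1 smartphone use, X2 Ruangguru, X3 motivation
y = 70 + X @ np.array([2.0, 1.5, 1.8]) + rng.normal(scale=3.0, size=n)

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())           # coefficients, F statistic, partial t statistics
print("R-squared:", model.rsquared)
```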
Partial Determination Coefficient (R²)
To find out the contribution made by each variable, the partial determination coefficient is needed. The coefficient of partial determination takes values ranging from zero to one. If R² approaches 1 (one), the model is stronger in explaining the variation of the dependent variable from the independent variables partially; conversely, as R² approaches 0 (zero), the independent variables are weaker in explaining the variation of the dependent variable.
Simultaneous Test (F Test)
Simultaneous coefficient testing determines the influence of the independent variables together (simultaneously) on the dependent variable. The testing process is done by comparing the value of F_count with the value of F_table at a significance level (α) and degrees of freedom (df). If the probability value is smaller than 0.05 (for a significance level of 5%), then the independent variables together affect the dependent variable. Meanwhile, if the probability value is greater than 0.05, then the independent variables simultaneously do not affect the dependent variable (Syahruddin et al., 2015).
Partial Test (T Test)
Partial coefficient testing determines the effect of each independent variable partially on the dependent variable. The testing process is conducted by comparing the value of t_count with t_table at a significance level (α) and degrees of freedom (df).

Based on the table above, it can be seen that F_count is 29.370 with a probability of 0.000. The result of F_count is compared with F_table using a significance level of 0.05, which gives 1.98. Therefore, F_count is greater than F_table (29.370 > 1.98), so Ho is rejected and Ha is accepted. Thus, it can be concluded that the use of Smartphone (X1), Ruangguru Application (X2), and Learning Motivation (X3) together have a significant influence on students' achievement (Y).
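As a quick sanity check on the critical values quoted in this and the following subsections, scipy can compute them directly; the degrees of freedom df = n − k − 1 = 112 − 3 − 1 = 108 are our assumption about the model specification:

```python
from scipy.stats import f, t

# Two-tailed critical t at alpha = 0.05 with df = 108 (our assumption).
print(t.ppf(1 - 0.05 / 2, df=108))  # ~1.9822, matching the 1.982173 quoted below

# Critical F for a simultaneous test is f.ppf(1 - alpha, dfn=k, dfd=n - k - 1);
# its value depends on the degrees of freedom assumed for the model.
print(f.ppf(1 - 0.05, dfn=3, dfd=108))
```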
The use of Smartphone variable toward Learning Achievement
The results of the data analysis show that the value of t_count is 3.391, while the value of t_table for n = 112 is 1.982173. Thus, the value of t_count is greater than t_table (3.391 > 1.982173); therefore, there is an influence of smartphone use on students' learning achievement. This finding is in line with Barker (2005): the smartphone gives students an opportunity to take learning experiences outside the boundaries of the classroom. Because the device is portable, it can be operated wherever and whenever it is needed. The use of ICT (Information and Communication Technology) media is indeed very necessary nowadays to support the subject matter being taught.
Ruangguru Application variable toward Learning Achievement
The results of the data analysis show that the value of t_count is 2.791, while the value of t_table for n = 112 is 1.982173. Therefore, the value of t_count is greater than t_table (2.791 > 1.982173), meaning that there is an influence of the Ruangguru application on students' learning achievement. According to Herman (2005), with advances in technology, online tutoring is also growing: students can learn at the time they choose, needing only a smartphone with an internet connection to learn online. This phenomenon is called online tutoring. Lessons that students get at school can be learned through this online tutoring, and the curriculum prepared by the government, which is usually applied in schools, is also available in online tutoring. Online tutoring services are now increasingly prevalent, not only in Indonesia but also throughout the world. Therefore, many people have taken the initiative to create online tutoring services with highly competent and experienced tutors or teachers.
Learning Motivation toward Learning Achievement
The results of the data analysis show that the value of t_count is 2.899, while the value of t_table for n = 112 is 1.982173. Thus, the value of t_count is greater than t_table (2.899 > 1.982173), meaning that there is an influence of learning motivation on student achievement. According to (Sardiman, 2009), "motive is interpreted as an effort to encourage someone to do something". Starting from the word motive, motivation is defined as an active driving force at certain times, especially when the need to achieve a goal is urgent. Based on this explanation, it can be said that learning motivation is a drive that arises in a person for a purpose, realized through changes in learning activities and subsequently in the student's behavior. Learning motivation becomes an encouragement that moves students to be more active in learning so as to achieve the expected economic learning achievement.
The influence of Smartphone, Ruangguru application, and Learning Motovation toward Learning Achievement.
The results of the data analysis show that F_count is 29.370 and the value of F_table for n = 112 is 1.98. This hypothesis is significant because F_count is greater than F_table (29.370 > 1.98). Hence, it can be concluded that the use of Smartphone (X1), Ruangguru Application (X2), and Learning Motivation (X3) have a significant influence on students' learning achievement (Y). The analysis of the coefficient of determination (R Square) shows that the three variables together have an influence of 0.449, which means the independent variables of smartphone use, Ruangguru application, and learning motivation have a 49.4% effect on student achievement, while the remaining 50.6% is influenced by other factors not examined in this study.
CONCLUSIONS
From the results of this research on the influence of smartphone use, the Ruangguru application, and learning motivation on the learning achievement of twelfth-grade natural science students of Senior High School 1 Bukittinggi, it can be concluded that: (1) The use of Smartphone has a significant positive influence on the students' learning achievement; this shows that using smartphones for learning will have a good influence on learning achievement. (2) The Ruangguru Application has a significant positive influence on the students' learning achievement; with the Ruangguru application, students no longer need to go outside the house to follow tutoring, and advances in online tutoring technology such as Ruangguru provide fresh air for students to learn independently. (3) Learning Motivation has a significant positive influence on the students' learning achievement; this influence can result in increased student motivation to learn, so achievement will be high. (4) The use of a Smartphone, the Ruangguru application, and learning motivation together have a significant positive influence on the students' learning achievement. The greater the use of smartphones for learning and of the Ruangguru online tutoring application, the higher the learning motivation, and thus the greater the influence on student achievement.
"year": 2020,
"sha1": "98cd6fad557645a9c05c35f217e2473b659b656b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/aebmr.k.201126.037",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d8689bcd75d0066a6b22bfb5d6d34d8c84edcf74",
"s2fieldsofstudy": [
"Economics",
"Computer Science"
],
"extfieldsofstudy": [
"Psychology"
]
} |
Thinking Through and Writing About Research Ethics Beyond "Broader Impact"
In March 2021, we held the first instalment of the tutorial on thinking through and writing about research ethics beyond 'Broader Impact' in conjunction with the ACM Conference on Fairness, Accountability, and Transparency (FAccT '21). The goal of this tutorial was to offer a conceptual and practical starting point for engineers and social scientists interested in thinking more expansively, holistically, and critically about research ethics. This report provides an outline of the tutorial, and contains our 'lifecourse checklist'. This was presented as part of the tutorial, and provides a practical starting point for researchers when thinking about research ethics before a project's start. We provide this to the research community, with the hope that researchers use it when considering the ethics of their research.
INTRODUCTION
The past year witnessed the computational field grapple with questions of responsibility and ethics of their research. Calls for papers in engineering conferences [1] include "ethics" and "societal impact" as topic areas. NeurIPS, for example, implemented the requirement that submissions include a "broader impact statement" [2]. Conference organizers have also invited interdisciplinary reviewers to assess submissions' discussion of social impact [3]. These efforts mark a long overdue, but exciting moment for the field as it asks difficult and important questions about its impact in the academic and social world. Reactions to these changes are instructive in evaluating whether and how they have been constructive in challenging researchers to critically examine their positionality, epistemology, and work's impact during and throughout the research publication pipeline.
Barely a year into NeurIPS' impact statement requirement, it appears its impact has been, at best, modest. Abuhamad and Rheault [4] conducted a survey on this year's NeurIPS authors' experience with the broader impact requirement. Participants' attitudes ranged from nonchalance and mild annoyance to outright hostility. Elsewhere, when papers do engage with "broader impact," their engagement coheres around issues of bias and harm, according to Blodgett and colleagues [6]. Surveying papers analyzing bias in natural language processing (NLP) submitted to the Association for Computer Linguistics, the authors found papers' framing of bias to be "often vague, inconsistent, and lacking in normative reasoning, " have little to no engagement with literature outside of NLP, and preoccupied with computational fixes [6].
These strike us as a missed opportunity. While "bias" and "harms" warrant sustained attention, simply identifying potential harmful impacts or quantifying bias constitutes a small facet of what critical voices in the field mean when they call for accountability in AI [5,9,15].
This tutorial aims to offer a conceptual and practical primer for engineers interested in thinking more expansively, holistically, and critically about research ethics. We include issues of "harm" and "bias" as components of a wider discussion about research ethics as it has been conceptualized and operationalized in the interpretivist traditions of the humanities and the social sciences. In the interpretivist paradigm, "doing ethical research is not as simple as following a set of rules" [18]. Assessing the ethical impacts of research necessarily implicates the researcher's standpoint [10, 11] in a social world structured by power relations [16], which, in turn, informs how they conduct their work [7]. In other words, what we see and how we see are inextricably shaped by who we are and what we do as [12, 17] knowledge workers [14].
Meaningful engagement with research ethics must thus extend beyond diagnosing harms or anticipating carbon emissions and delve into this tripartite relationship. To do this, our goals are as follows: • Place interpretivist social scientific concepts of knowledge, power, and reflexivity in dialogue with engineering heuristics and practices for assessing research rigor, process, and impact • Provide concrete steps for thinking through research ethics during the publication pipeline • Discuss why research ethics training is lacking in the field, and develop specific recommendations to program committees, senior researchers, etc. to incorporate into doctoral training This is not to suggest that social scientific approaches to research ethics have the answers. In fact, much has been critiqued about the bureaucratization of research ethics, such as the Institutional Review Board (IRB) in the US [13]. Rather, we see these approaches as an instructive model for identifying core values, setting norms, and operationalizing them into the research pipeline, from training to publication.
Our goal is that the participants of our tutorial would walk away with the following: (1) actionable steps for discussing positionality and limitations in their writing; (2) a practical primer for mapping out ethical dimensions during the research process; and (3) specific interpretivist concepts like "reflexivity" to challenge their thinking about research purpose and process.
The remainder of this report is organised as follows: in Section 2 we provide an overview of the tutorial, and in Section 3 we include the lifecourse checklist, a set of research ethics checkpoints that we provided with the tutorial. Slides from the tutorial can be found here, and a recording of the tutorial can be found here.
TUTORIAL OVERVIEW
Due to COVID-19, and in line with FAccT 2021, the tutorial was held entirely virtually on Zoom. It ran for 90 minutes and drew over 50 conference attendees. The tutorial consisted of two parts. The first part included a talk from the organisers on common misconceptions around research ethics and an imaginative exercise centred on attendees' best- and worst-case scenarios for their own research projects. In the second part, the organisers outlined the lifecourse approach to research ethics, followed by the lifecourse group activity and a concluding group discussion.
Inspired by the approach of Giele and Elder [8], the lifecourse approach to research ethics is centred on imagining each project as having a life course. It is then up to the researcher to think about the conditions, assumptions, and aspirations that shape their research design in the three stages of this lifecourse: before, during, and after the project.
In the lifecourse group activity, participants were split into breakout rooms, and each group was given a machine learning case study on which to practice applying the lifecourse approach. Participants were given our lifecourse worksheet for guidance (provided in Section 3). In the following group discussion, participants relayed their answers to the wider group and discussed any difficulties or challenges they had, as well as any interesting findings that arose from the approach.
LIFECOURSE CHECKLIST
Here, we provide the lifecourse checklist. This checklist was developed alongside the tutorial and provides guidelines for applying the lifecourse approach when discussing research ethics before the start of a project. The checklist breaks down the lifecourse of a research project into three parts: before, during, and after. Some parts are not appropriate for certain research projects, and occasionally not enough will be known about a project prior to its start to answer some questions. In these cases, we advise users to consider when in the project lifecourse they might be able to answer those questions.
Before.
• Informed Consent
- 1a) If your dataset contains humans, what efforts could you make to inform people that you want to use their data for your research, and seek their consent, even if using a large dataset?
- 1b) What expectations of privacy did people have when they provided the data you will be using for your study? How might your use of their data violate these expectations?
- 1c) How could your research be using their data in ways they did not anticipate?
- 1d) What process will you use to discuss and document this, decide if your study violates their privacy, and come up with ways to mitigate this?
• Opting Out
- 2a) If you can't gain consent from each participant, how might you still be able to inform people that their data has been used, so they can opt out?
- 2b) Can people opt out of your research? If so, when and how will you tell them how to do that?
- 2c) Who can they talk to if they have concerns, and how and when are you giving them this information?
- 2d) Can people slow down or stop the research project entirely if they have enough concerns about its ethics? How?
• Identifiability (one quick check is sketched after this list)
- 3a) Can the identities of people in the dataset be deduced?
- 3b) Can you remove identifying information from the dataset? If so, how?
- 3c) Can the data be re-identified?
- 3d) How will de-identification alter the dataset's representativeness? (e.g., removing age from a dataset about TikTok users might reinforce the idea that only young people use the app)
- 3e) How could someone represented in the dataset be potentially embarrassed or upset by it, even if information about them was removed?
• Representativeness
- 4a) Who (what groups/populations) does your dataset represent? How might your project's framing overstate its representativeness? Who might it leave out or not represent? What harmful implications might this have?
- 4b) If you need to make changes to the dataset or strip out identifying information (like race, gender, age, etc.), how will the changes alter the original dataset's representativeness?
- 4c) What assumptions does your dataset give you about social groups? (e.g., you may have gender parity in your dataset about professions, but "nurses" are female and "doctors" are male)
• Positionality
- 5a) Who is conducting the study? What are the demographics of that team? How might those demographics affect how you frame your study and your blind spots? What biases and preconceived notions might they bring that will affect the way that the study is designed, carried out, and interpreted?
- 5b) What skills and expertise do you need to do this project ethically? What expertise do you have that allows you to evaluate these ethical questions fully? Who can you consult or bring on to help you fill any gaps you have?
• Data Quality
- 6a) Are you collecting more data than you need to answer your research question?
- 6b) Why might collecting more data not always be better?
- 6c) Who could be harmed by over-collecting data?
- 6d) How might overcollection be harmful?
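Checklist item 3 above asks whether identities can be deduced even after direct identifiers are stripped. One quick heuristic is to count how many records remain unique on their combination of quasi-identifiers (age, gender, region, and the like), since a record that matches no one else is the easiest re-identification target. A minimal sketch, assuming a pandas DataFrame and hypothetical column names:

```python
import pandas as pd

def unique_fraction(df, quasi_identifiers):
    """Fraction of rows whose quasi-identifier combination appears exactly once
    (i.e., rows that fail k-anonymity for any k > 1)."""
    counts = df.groupby(quasi_identifiers).size()  # combination -> group size
    singletons = counts[counts == 1].sum()         # rows in size-1 groups
    return singletons / len(df)

# Invented toy data, echoing the TikTok example in item 3d.
df = pd.DataFrame({
    "age":    [17, 17, 45, 45, 23],
    "gender": ["f", "f", "m", "m", "f"],
    "region": ["NY", "NY", "TX", "CA", "TX"],
})
print(unique_fraction(df, ["age", "gender", "region"]))  # 0.6: 3 of 5 rows unique
```

A high fraction suggests de-identification alone is not enough; coarsening the quasi-identifiers, and then re-asking item 3d about representativeness, is the usual next step.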
During.
• Access to Study Information
- 7a) Where will you put information about the study so that it is clearly available to other academics and the public, so they can also reflect on its implications?
• Data Storage & Transfer
- 8a) Where are you putting the data you're using? How is it kept safe? How is it protected? Who has access? For how long?
- 8b) Will you transfer it anywhere, and if so, how? What breaches could occur, and how will you mitigate these?
• Exposure to Risk
- 9a) For the people participating in this research (e.g., people data is collected from, communities, etc.): How could this research subject them to legal risk or disciplinary action? How could it make them or their data/activities visible to organisations (e.g., states or corporations) in a way that would open them up to negative consequences?
- 9a-1) How are people participating in the project safeguarded from these risks?
- 9b) For the people working on this research (e.g., researchers, interns): What risks (mental, professional, or physical) could this research put those working on the project in?
- 9b-1) How are the people working on this project safeguarded from these risks?
• Exploitation
- 10a) How will you ensure that everyone working on the project is working fair hours in practice (according to the labour laws of the university and the government) and is not overworked? How will they be fairly compensated?
- 10b) How will you ensure that they are not being harassed or bullied?
- 10c) How can workers seek help if they are being exploited? How will you make everyone participating aware of these avenues?
- 10d) How are junior researchers working on the project (but who do not have a say in its direction or big-picture decisions) being protected from the potential negative effects of the study, should the study draw negative critique?
- 10e) What processes do you have in place to ensure that junior researchers can express concerns about the research without fearing retribution?
• Ongoing Review
- 11a) What processes do you have to review the project during its duration to identify and fix anything that is not working as anticipated?
- 11b) How will you evaluate and address your methods' limitations or ethical issues that arise during the project as you apply them?
- 11c) Who will be invited to participate in these reviews? Who will not be invited to participate?
After.
• Accessibility of Results
- 12a) Where will you publish your results?
- 12b) Who, or what groups, will have the most access to them? Which groups will have less access to them?
- 12c) How else will you disseminate your research?
- 12d) What steps will you take to make sure that your research's limitations are clear to your different audiences?
• Usability
- 13a) Who is the intended audience of your results?
- 13b) How do you imagine them using the results or outcomes of your study?
• Immediate Impacts
- 14a) Who could the immediate results or findings of the project benefit?
- 14b) Who might be left out or not benefit? Who might be negatively affected?
- 14c) Who will make money from this research? How will you ensure this is as equitable as possible?
• Future Applications
- 15a) What are the potential positive future applications of your findings, and who might they positively affect?
- 15b) Which groups could be negatively affected by future applications of your research? How?
• Possible Misuses
- 16a) How could private organisations (like tech companies) misuse your research? How could public organisations (like governments or universities, in your country or elsewhere) misuse your research?
- 16b) How could the methods/datasets used or the knowledge produced serve purposes other than the ones intended?
- 16c) How will you safeguard against misuse?
- 16d) What would your "words of caution" (i.e., datasheets) be to future users of the results? How will you inform/warn them against potential misuse?
• Limitations
- 17a) Reflecting on your research, what are the actual possible benefits of the research? What are its limitations? Be honest!
"year": 2021,
"sha1": "0dfd002524e68d8a96dc574da67944a872f1f557",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "0dfd002524e68d8a96dc574da67944a872f1f557",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Sociology",
"Computer Science"
]
} |
Transcriptome Analysis of Oleoresin-Producing Tree Sindora Glabra and Characterization of Sesquiterpene Synthases
Terpenes serve important physiological and ecological functions in plants. Sindora glabra trees accumulate copious amounts of sesquiterpene-rich oleoresin in the stem. A transcriptome approach was used to determine the unique terpene biosynthesis pathway and to explore the different regulatory mechanisms responsible for the variation of terpene content among individuals. Analysis of de novo-assembled contigs revealed a complete set of genes for terpene biosynthesis. A total of 23,261 differentially expressed unigenes (DEGs) were discovered between high and low oil-yielding plants. DEG enrichment analysis suggested that the terpene biosynthesis process and the plant hormone signal transduction pathway may exert a major role in determining terpene variation in S. glabra. The expression patterns of candidate genes were further verified by quantitative RT-PCR experiments. Key genes involved in the terpene biosynthesis pathway were predominantly expressed in phloem and root tissues. Phylogenetic analysis and subcellular localization implied that S. glabra terpene synthases may have evolved from a common ancestor. Furthermore, two sesquiterpene synthase genes, SgSTPS1 and SgSTPS2, were functionally characterized. SgSTPS1 mainly generated β-caryophyllene from farnesyl pyrophosphate. SgSTPS2 is a versatile enzyme that catalyzes the formation of 12 sesquiterpenes from farnesyl pyrophosphate and the synthesis of three monoterpenes from geranyl pyrophosphate. Together, these results provide a large reservoir for elucidating the molecular mechanism of terpene biosynthesis and for exploring the ecological function of sesquiterpenes in S. glabra.
INTRODUCTION
The angiosperm Caesalpinioideae subfamily includes the important genera Copaifera Linn., Hymenaea Linn., and Sindora Miq., which are traditionally referred to as "diesel trees" by local people (Langenheim, 2003). These diesel trees produce a sesquiterpene-rich oleoresin that is routinely collected when the plant trunk is drilled into and tapped. Oleoresin has been widely utilized in pharmaceuticals, fuel, essential oils, and food (Peltier et al., 2006; Gershenzon and Dudareva, 2007; Harvey et al., 2010). However, research on oleoresin production in plants has been predominantly limited to the genus Copaifera, which is distributed in tropical America and Africa (Souza Barbosa et al., 2013; Amorim et al., 2017). The genus Sindora, naturally distributed in the tropical forests of Asia and Africa, contains the species Sindora glabra, which is indigenous to Hainan Island, China. The S. glabra tree exudes yellowish oleoresin or an amber liquid oil when wounded, which has been conventionally used as kerosene. Due to its endangered state, this species has been included in the second-class nationally protected plants of China. Studies have shown that the annual oleoresin yield of S. glabra varies from 0.01 to 3.00 L among individuals, and the major components of the resin oil are sesquiterpenes (about 85%) and abietic acid (about 13%) (Yang et al., 2016), which is a unique feature of S. glabra oleoresin. Variation in the oil composition is present across the natural distribution range of S. glabra.
Terpenes are one of the largest and most diverse classes of plant metabolites and serve essential functions in plant growth, development, and defense. Moreover, many specialized terpene metabolites, such as artemisinin from Artemisia annua and paclitaxel from Taxus baccata, have high medical value (Klayman, 1985; Sandler et al., 2006). However, the amount and composition of terpenes differ greatly among plant species and tissues. The gymnosperm Pinus stores oleoresin in specialized resin canals and produces higher yields in spring (0.65 kg) and summer (0.55 kg) compared to autumn and winter (Lombardero et al., 2000; Rodrigues and Fett-Neto, 2009). Moreover, gymnosperm oleoresin is almost universally composed of mono- and di-terpenes. Paclitaxel is mainly extracted from tree bark, while tanshinone from Salvia miltiorrhiza is extracted from roots (Ge and Wu, 2005). The underlying molecular mechanisms of terpene biosynthesis have been thoroughly studied (Tholl, 2006; Degenhardt et al., 2009). Terpenes are polymers of isoprene and are derived from the five-carbon units isopentenyl diphosphate (IPP) and dimethylallyl diphosphate (DMAPP), which are generated from the plastidic methylerythritol phosphate (MEP) pathway or the cytoplasmic mevalonate (MVA) pathway. Condensation of IPP and DMAPP by prenyltransferases yields the three linear intermediates geranyl diphosphate (GPP), farnesyl diphosphate (FPP), and geranylgeranyl diphosphate (GGPP), which are then catalyzed by terpene synthases (TPSs) to form monoterpenes (C10), sesquiterpenes (C15), and diterpenes (C20) (Lichtenthaler, 1999). Generally, the MEP pathway generates monoterpenes and diterpenes, whereas the MVA pathway produces sesquiterpenes and triterpenes. There is also some cross-talk between these two pathways; for example, the non-MVA pathway synthesizes both monoterpenes and sesquiterpenes in roots and leaves of Daucus carota (Hampel et al., 2005). Products of various TPSs are further subject to structural modification through oxidation, reduction, isomerization, hydration, and conjugation to give rise to the chemical diversity of terpenes (McGarvey and Croteau, 1995). Many TPSs can utilize the same substrate to produce multiple products, and even a single amino acid mutation in conserved domains of TPSs can alter product profiles (Li J. X. et al., 2013). Plant genomes typically contain families of many TPSs with similar sequences but diverse functions, with gene numbers ranging from ∼20 to 150 (Chen et al., 2011). Variation in the genome and in the expression levels of TPSs may explain some of the variation in terpenes present in natural S. glabra populations. Substantial progress has been achieved in the discovery and elucidation of TPSs involved in terpene biosynthesis in tree species, including the gymnosperms Abies grandis, Picea abies, and Ginkgo biloba, and the angiosperms Melaleuca alternifolia, Santalum spicatum, and Populus trichocarpa (Warren et al., 2015; Bustos-Segura et al., 2017).
Factors affecting terpene biosynthesis are quite complex and may include the plant developmental stage, biotic factors such as insects and pathogens, and abiotic factors such as light, temperature, and humidity. In Picea spp., stem resin accumulates constitutively in the cortex, while it appears within the developing xylem after mechanical wounding, insect feeding, or fungal elicitation (Martin et al., 2002). Some transcription factors, including members of the MYB, ERF, YABBY, and NAC families, have been found to regulate the biosynthesis of terpene secondary metabolites (Nieuwenhuizen et al., 2015; Wang et al., 2016; Li et al., 2017; Matías-Hernández et al., 2017). Plant signaling molecules, especially jasmonic acid (JA), have great potential to elicit the production of terpenes in gymnosperms, herbs, and crops (Martin et al., 2003; Kim et al., 2006; Ghasemzadeh et al., 2016). Herbivore-induced diterpene resin in conifer trees was discovered to serve important defense functions (Keeling and Bohlmann, 2006; Hall et al., 2013). In Sitka spruce, traumatic resin and terpene synthase transcripts were induced following attack by white pine weevils, and the defense response was more complex than that associated with methyl jasmonate (Miller et al., 2005).
Elucidation of terpene biosynthesis in S. glabra provides a basis for understanding oil composition and variability. A deeper knowledge of the genetic mechanisms of metabolic pathways in this species would improve our understanding of the developmental and physiological conditions responsible for the production of the resin oil. This in turn may support breeding efforts toward tree improvement for oil yield and quality in sustainable S. glabra plantations and can afford opportunities for biotechnological S. glabra oil production (Bustos-Segura et al., 2017). Important genes can also be identified for enhanced production of particular terpenes in either microorganisms or other plant species. In this study, comprehensive de novo transcriptome analysis of high and low oil-yielding S. glabra was carried out. Key genes related to terpene biosynthesis were mined, and their expression patterns were validated experimentally. The function of S. glabra sesquiterpene synthases was further characterized.
Plant Materials
Plants were grown from seeds collected from eight natural populations of S. glabra distributed across Hainan Island, in a glasshouse with a 16-h light photoperiod. After 1 year, seedlings were transferred to plantations located at the Experimental Center of Tropical Forestry, Chinese Academy of Forestry, in Pingxiang City, Guangxi, China (22°02′–22°19′N, 106°43′–106°52′E). Plants were maintained for 10 years before sampling. We selected two plants, L4 and L6 (annual yields of 25 and 50 g), as low oil-yielding plants and two plants, H10 and H12 (annual yields of 475 and 770 g), as high oil-yielding plants. Each tissue was collected from three individuals in the same family, representing biological replicates. Fresh stem tissues were sampled after peeling, immediately frozen in liquid nitrogen, and stored at −80°C for RNA extraction.
Structure Analysis and Determination of Terpene Profile
Samples were prepared for cryosectioning as described by Martin et al. (2002). Sections were stained with 1% carmine and astra blue, placed on glass slides, and photographed with an Olympus BH-2 light microscope. Extraction of terpene constituents was done as described by Lewinsohn et al. (1993). The extract was used for GC-MS analysis (Agilent 6890). Chemical compounds were identified by comparing their mass spectra with the National Institute of Standards and Technology (NIST) standard library. Relative percentages of the identified compounds were computed based on peak area.
RNA Library Construction and Sequencing
Total RNA was extracted from each sample using the cetyltrimethylammonium bromide (CTAB) method described by Asif et al. (2000), plus a plant RNA isolation kit. Briefly, fine powder was extracted with a buffer containing 2% CTAB, and chloroform was added to remove polyphenols and polysaccharides. The aqueous phase was then subjected to RNA isolation according to the manufacturer's protocol (Qiagen RNeasy kit). RNA concentration was measured using the Qubit RNA assay kit on a Qubit 2.0 Fluorometer (Life Technologies, USA). RNA integrity was assessed using the Agilent Bioanalyzer 2100 system (Agilent Technologies, USA). A total of 1.5 µg RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext Ultra RNA Library Prep Kit for Illumina (NEB, USA) following the manufacturer's recommendations. The 12 libraries were sequenced on an Illumina HiSeq 2000 platform, and paired-end reads were generated.
De novo Transcriptome Assembly
Clean reads were obtained by removing reads containing adapters, reads containing poly-N, and low-quality reads. The high-quality reads were then de novo assembled using the Trinity platform (https://github.com/trinityrnaseq/trinityrnaseq/wiki) with the parameters "K-mer=25, min_kmer_cov=2." The reads obtained for all samples were assembled together. Short reads were first assembled into draft transcript contigs, then pooled into components, and finally assembled into transcripts. The longest transcript was taken as one unigene. All transcriptome sequence data were deposited in the NCBI Sequence Read Archive (SRA) database under the accession number SRP133897.
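For readers wanting to reproduce this step, the following is a hypothetical reconstruction of the Trinity invocation implied by the stated parameters; file names and resource limits are placeholders, and a k-mer size of 25 is Trinity's default:

```python
import subprocess

# Placeholder FASTQ paths; substitute the actual paired-end clean reads.
cmd = [
    "Trinity",
    "--seqType", "fq",
    "--left",  "sample_R1.clean.fq.gz",
    "--right", "sample_R2.clean.fq.gz",
    "--min_kmer_cov", "2",        # matches the paper's min_kmer_cov=2
    "--max_memory", "50G",        # assumed resource limit
    "--CPU", "12",
    "--output", "trinity_sglabra",
]
subprocess.run(cmd, check=True)   # raises if the assembly fails
```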
Gene Functional Annotation
Putative gene function was annotated using diamond v0.8.22 against the NCBI non-redundant protein sequences (Nr) database (e-value = 1e-5), NCBI BLAST against the NCBI non-redundant nucleotide sequences (Nt) database (e-value = 1e-5), HMMER 3.0 against the protein family (Pfam) database (e-value = 0.01), diamond v0.8.22 against the Cluster of Orthologous Groups of proteins (KOG/COG) database (e-value = 1e-3), diamond v0.8.22 against the manually annotated and reviewed protein sequence database Swiss-Prot (e-value = 1e-5), the KEGG Automatic Annotation Server against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (e-value = 1e-10), and Blast2GO against the Gene Ontology (GO) database (e-value = 1e-6). Transcription factors (TFs) were identified by BLASTx against the plant transcription factor database (http://planttfdb.cbi.pku.edu.cn/) at e-value 1e-5 and query coverage 50%. CDS prediction was performed by BLAST against the Nr and Swiss-Prot protein databases, and the ORF protein-coding sequences were extracted and translated into protein sequences according to the standard codon table. Sequences that were unmapped, or mapped but with no predicted coding sequence, were subjected to prediction by the ESTScan (3.0.3) software.
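Each search above retains hits under a database-specific e-value cutoff. Assuming standard BLAST/diamond tabular output (format 6, where the e-value is the 11th column and hits per query are sorted best-first), a filter along these lines reproduces that step; the cutoffs mirror the ones quoted above:

```python
import csv

# Cutoffs quoted in the text, keyed by target database.
EVALUE_CUTOFFS = {"nr": 1e-5, "nt": 1e-5, "kog": 1e-3,
                  "swissprot": 1e-5, "kegg": 1e-10, "go": 1e-6}

def best_hits(tabular_path, cutoff):
    """Yield the first (best) hit per query whose e-value passes the cutoff."""
    seen = set()
    with open(tabular_path) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            query, evalue = row[0], float(row[10])  # columns 1 and 11
            if query not in seen and evalue <= cutoff:
                seen.add(query)
                yield row

# Usage with a hypothetical diamond output file:
# for hit in best_hits("unigenes_vs_nr.tsv", EVALUE_CUTOFFS["nr"]):
#     print(hit[0], hit[1], hit[10])
```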
Gene Expression Analysis
Gene expression levels were estimated with RSEM for each sample. The clean data were mapped back onto the assembled transcriptome. The read count for each gene was obtained from the mapping results and normalized into an FPKM value (expected number of Fragments Per Kilobase of transcript sequence per Million base pairs sequenced; Trapnell et al., 2010). Differential expression analysis of two conditions was performed using the DESeq R package. The resulting P-values were adjusted using Benjamini and Hochberg's approach for controlling the false discovery rate. Unigenes with an adjusted P-value (padj) < 0.05 found by DESeq were assigned as differentially expressed. Gene Ontology (GO) enrichment analysis of the differentially expressed unigenes (DEGs) was implemented with the GOseq R package, based on the Wallenius non-central hypergeometric distribution, which can adjust for gene length bias in DEGs. The KOBAS software was used to test the statistical enrichment of DEGs in KEGG pathways. The gene expression correlation network between candidate STPS genes and WRKY genes was constructed with the Weighted Gene Co-expression Network Analysis (WGCNA) method (Langfelder and Horvath, 2008). The heat map was produced with the software Heatmap 2.0 (Toddenroth et al., 2014).
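Two quantitative steps in this pipeline are easy to make concrete: the count-to-FPKM conversion and the Benjamini-Hochberg adjustment behind the padj < 0.05 threshold. A self-contained sketch with made-up counts and gene lengths:

```python
import numpy as np

def fpkm(counts, lengths_bp):
    """FPKM = fragments x 1e9 / (gene length in bp x total mapped fragments)."""
    counts = np.asarray(counts, dtype=float)
    lengths_bp = np.asarray(lengths_bp, dtype=float)
    return counts * 1e9 / (lengths_bp * counts.sum())

def benjamini_hochberg(pvals):
    """BH-adjusted p-values, as reported in DESeq's padj column."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # Enforce monotonicity from the largest p-value downward.
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    padj = np.empty(n)
    padj[order] = np.clip(adjusted, 0, 1)
    return padj

print(fpkm([100, 400], [1000, 2000]))                # toy two-gene library
print(benjamini_hochberg([0.001, 0.01, 0.03, 0.4]))  # [0.004 0.02 0.04 0.4]
```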
Quantitative RT-PCR Analysis
For qRT-PCR validation, all RNA samples were reverse transcribed into first-strand cDNA using SuperScript III first-strand RT-PCR reactions according to the manufacturer's protocol (Invitrogen, USA). Amplifications were carried out in triplicate in a total volume of 20 µL using the SYBR Premix Ex Taq II kit (Takara). The specificity of the PCR amplicon was checked using a heat dissociation protocol (from 60 to 95°C) after the final PCR cycle. Three independent biological replicates and three technical replicates were performed. The primers used in qPCR are shown in File S12, and the ACT2 gene was used as an internal control to normalize expression.
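The paper does not spell out the normalization formula, but with a single reference gene (ACT2) and paired samples the standard choice is the Livak 2^-ddCt method; the sketch below assumes that method and uses invented Ct values:

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^-ddCt fold change of a target gene in a sample, relative to a
    calibrator sample, each normalized to the reference gene (here ACT2)."""
    dd_ct = (ct_target - ct_ref) - (ct_target_cal - ct_ref_cal)
    return 2 ** (-dd_ct)

# Hypothetical Ct values: a terpene-pathway gene in a high-yield plant (sample)
# versus a low-yield plant (calibrator).
fold = relative_expression(ct_target=24.1, ct_ref=18.0,
                           ct_target_cal=26.6, ct_ref_cal=18.2)
print(round(fold, 2))  # ~4.92-fold higher in the high-yield sample
```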
Sequence Analysis of SgTPSs
The presence of a putative signal peptide was predicted using the SignalP 4.1 program (http://www.cbs.dtu.dk/services/SignalP/). The homology search was performed using the BLAST server. Multiple alignment of amino acid sequences was accomplished using the ClustalW2 program (http://www.ebi.ac.uk/Tools/msa/clustalo/). For phylogenetic analysis, the Arabidopsis thaliana and Populus trichocarpa terpene synthase sequences were retrieved from the TAIR database (https://www.arabidopsis.org/) and the Populus trichocarpa genome database (http://www.plantgdb.org/PtGDB/). The bootstrapped neighbor-joining tree was constructed using the MEGA 7 program (Tamura et al., 2011).
Subcellular Localization of SgTPSs
Leaves of 4-week-old Nicotiana benthamiana plants were used for Agrobacterium-mediated infiltration as described by Wydro et al. (2006). The ORF sequence of each gene was cloned into the pCambia1301 vector, and the GFP fusion constructs were used for transient expression. Infiltrated leaves were mounted on slides and imaged using a confocal laser-scanning microscope (Nikon C2-ER) with a standard filter set. The empty GFP vector was used as a control.
Preparation of Recombinant Proteins
The identified terpene synthase gene was cloned from the S. glabra cDNA library and confirmed by sequencing. The gene was then subcloned into the pET30a vector for prokaryotic expression. The resulting expression constructs were confirmed by restriction digestion and sequencing. For functional expression, the constructs were transformed into BL21 (DE3) cells. Cultures containing the recombinant constructs were grown overnight at 37°C in LB medium with antibiotics, inoculated into fresh LB medium until the OD600 reached 0.8, and then induced with 0.1 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) for 16 h at 15°C. The induced cell pellets were collected for cell lysis, and the supernatants containing soluble target proteins were used for protein purification by affinity chromatography on nickel-iminodiacetic acid (Ni-IDA) resin (Qiagen, Germany). Protein concentration was determined using the Bradford method.
In vitro Assay and GC-MS Analysis
For the in vitro assay, reactions were carried out in 500 µL assay buffer (25 mM Tris-HCl pH 7.0, 5 mM DTT, 5 mM MgCl2) containing 5 µg purified protein and 50 µM GPP or FPP substrate. The contents were mixed in a 2 mL glass vial and incubated at 30°C for 5 h. The reaction products were then extracted with a solid-phase microextraction (SPME) system for 30 min and analyzed on a gas chromatography-mass spectrometry (GC-MS) system 7890B-5977A (Agilent Technologies) fitted with an HP-5MS column (30 m × 0.25 mm). The injection temperature was 250°C, with ionization energy 70 eV and a mass scan range of 30-300 amu. The GC was programmed with an initial temperature of 50°C for 1 min, increasing at a rate of 5°C/min to 80°C (1 min hold), and then at a rate of 10°C/min to 220°C (10 min hold). Compounds were identified using the NIST14 mass spectral library database and by comparison of retention times and mass spectra with authentic standards where available.
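As a quick sanity check, the total oven run time implied by the program above follows from the holds and ramp rates (a simple recomputation, not a value from the paper):

```python
# Oven program from the text: 50°C (1 min hold), ramp 5°C/min to 80°C
# (1 min hold), then ramp 10°C/min to 220°C (10 min hold).
segments = [
    ("hold at 50C",               1.0),
    ("ramp 50->80C @ 5C/min",     (80 - 50) / 5),     # 6 min
    ("hold at 80C",               1.0),
    ("ramp 80->220C @ 10C/min",   (220 - 80) / 10),   # 14 min
    ("hold at 220C",              10.0),
]
total_minutes = sum(minutes for _, minutes in segments)
print(f"total GC run time: {total_minutes:.0f} min")  # 32 min
```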
GC-MS Profile of Oleoresin in S. glabra
The trunk of S. glabra exudes a large amount of oleoresin when drilled into or tapped. This oleoresin is stored in specialized structures called secretory canals. Anatomical analysis of S. glabra stem tissue showed that secretory canals exist in the secondary xylem of the trunk (Figure 1A). To gain more insight into the molecular mechanism of oleoresin biosynthesis and excretion in S. glabra, two high oil-yielding and two low oil-yielding plants were selected for oleoresin analysis (Figure 1B). The annual oleoresin yield in H12 plants was ∼30 times higher than that of L4 plants. The GC-MS profile showed that the chemical compounds of the oleoresin in the four plants were very similar (Figure 1C). About 18 sesquiterpenes were detected at levels >0.1% of total compounds. The major sesquiterpenes were α-copaene and β-caryophyllene, followed by β-cubebene, δ-cadinene, and germacrene. The other 13 compounds each accounted for <3%. The relative amounts of five compounds, namely α-copaene, β-caryophyllene, α-humulene, aromadendrene, and amorpha-4,11-diene, showed a statistical difference among the four samples, while the other compounds did not.
RNA-Seq and de novo Transcriptome Assembly
The stem tissue of S. glabra was used for RNA-seq library construction. Transcriptome sequencing of 12 cDNA libraries generated an average of 55.51 million raw reads per library (Table S1). After filtering adapter, low-quality, and short reads, an average of 9.2 Gb of clean bases was obtained. The Q30 percentages (percentage of bases with a phred value >30) were above 93%, and the GC content was about 43.63%. The valid reads were de novo assembled to generate 409,106 transcripts, which were further clustered into 283,998 unigenes (Table S2). The mean unigene length was 1,701 bp, and the N50 length was 2,541 bp. The largest number of unigenes was >2,000 bp in length, followed by unigenes ranging from 1,000 to 2,000 bp, and then unigenes between 500 and 1,000 bp. These data indicated high accuracy and good assembly of this transcriptome sequencing.
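The N50 quoted above (2,541 bp) is the length of the contig at which the running total of length-sorted contigs first covers half of the assembled bases; a minimal sketch of the computation:

```python
def n50(lengths):
    """Smallest length L such that contigs of length >= L cover >= 50% of bases."""
    lengths = sorted(lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

print(n50([5000, 3000, 2000, 1000, 500]))  # 3000, since 5000+3000 >= 11500/2
```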
Functional Annotation
All of the unigenes were searched against seven public databases for functional annotation; 231,334 unigenes were annotated in at least one database, and 38,014 unigenes were annotated in all of the databases (Figure S1A, Table S3). The annotation percentage was highest in the Nr database and lowest in the KOG database. The Venn diagram revealed that 48,447 unigenes were common among five databases.
Based on Nr annotation and the E-value distribution, 75.59% of unigenes showed homology to annotated genes from 818 plant species (Figure S1B, File S1). Among these, the top five species were Glycine max, Glycine soja, Cicer arietinum, Phaseolus vulgaris, and Medicago truncatula, all of which belong to the Fabaceae family. GO annotation revealed that, in total, 154,112 unigenes were classified into three GO categories (Figure S1C, File S2). In the biological process category, large numbers of unigenes were classified into the cellular process and metabolic process groups, including 102 and 5,744 unigenes related to the fatty acid derivative metabolic process and the lipid metabolic process, respectively, which implied the identification of genes involved in specific metabolite biosynthesis pathways. In the molecular function category, genes related to binding and catalytic activity were highly abundant, indicating a large number of enzymes.
All of the unigenes were also searched against the KOG database, resulting in the assignment of 63,438 unigenes (Figure S1D, File S3). These unigenes were classified into 26 different functional groups, with the largest clusters being groups O and R, followed by group J. Notably, there were 2,986 and 957 unigenes annotated in the lipid transport and metabolism group (group I) and the secondary metabolite biosynthesis group (group Q), respectively.
According to mapping against the canonical pathways in KEGG, a total of 90,343 unigenes were assigned to 130 pathways (Figure S1E, File S4). Among these, carbon metabolism was the most enriched, followed by the biosynthesis of amino acids. In addition, 1,838 unigenes were involved in plant hormone signal transduction. These genes are good candidates for investigating plant growth regulation and stress responses. Moreover, 2,131 unigenes were involved in the metabolism of terpenoids and polyketides pathway. This information is particularly important for the identification of genes involved in terpenoid biosynthesis in S. glabra.
Analysis of Differentially Expressed Unigenes
The transcripts assembled by Trinity were used as the reference transcriptome, and the clean reads from each sample were mapped to the reference sequences. The average mapping rate was 80.85%. Differentially expressed genes (DEGs) were identified by comparing high oil-yielding plants with low oil-yielding plants: H10 vs. L4, H10 vs. L6, H12 vs. L4, and H12 vs. L6. There were 9,797 unigenes up-regulated in H10 and H12, and 9,393 unigenes predominantly expressed in L4 and L6 (Figure 2A). The greatest gene expression differences existed between H10 and L4 and between H12 and L4, followed by the pairs H12/L6 and H10/L6. It should be noted that there were 10, 9, 15, and 11 down-regulated and 40, 21, 3, and 14 up-regulated DEGs annotated in the terpenoid biosynthesis pathway between the pairs H10/L4, H12/L4, H12/L6, and H10/L6, respectively (File S5). These DEGs in the terpenoid pathway could serve as good candidates for functional validation. Moreover, when comparing the expression profiles of H10 and H12 to those of L4 and L6, a total of 273 unigenes were co-repressed and 248 unigenes were co-induced, respectively (File S6). These DEGs were involved in signaling, protein metabolism and processing, defense, transcription factors, and metabolism, suggesting that signaling pathways, transcription factors, and metabolism may be responsible for S. glabra terpene polymorphism.
Based on the GO functional enrichment, the up-regulated DEGs in H10 and H12 were significantly enriched in 14 GO terms (Padj < 0.05, Figure 2B), among which the most enriched were related to ADP binding in the molecular function category. In particular, 34 DEGs were significantly enriched in terpene synthase activity, implying positive roles of these DEGs in the regulation of terpene synthesis. When analyzing the down-regulated DEGs in H10 and H12, 803 DEGs were enriched in signal transduction in the biological process category, suggesting possible negative regulation of oil yield by these DEGs.
Furthermore, KEGG pathway enrichment of these DEGs resulted in the assignment of 121 metabolic pathways. The flavonoid biosynthesis pathway was significantly enriched among the DEGs predominantly expressed in H10 and H12 (Figure 2C). In addition, there were 43 and 13 up-regulated DEGs enriched in terpenoid backbone biosynthesis and in the sesquiterpenoid and triterpenoid biosynthesis pathway, respectively, consistent with the GO enrichment result that terpene biosynthesis-related DEGs could increase the terpene content. Among the DEGs predominantly expressed in L4 and L6, five KEGG pathways were significantly enriched, with the majority of DEGs involved in the plant hormone signal transduction pathway, further implying the potential influence of plant signal transduction-related DEGs in the down-regulation of terpene yield. Together, these data suggested that the terpene biosynthesis process and the plant hormone signal transduction pathway may exert the most significant roles in determining the terpene variation in S. glabra.
Terpene Biosynthesis in S. glabra
S. glabra produces oleoresin in particular tissues of stems known as secretory canals, where terpenes constitute a major component of the oleoresin oil. Transcriptome analysis revealed sequences for a complete set of 60 genes in the terpene biosynthesis pathway in S. glabra (Figure 3). In the MVA pathway, transcriptome mining identified three, six, and three putative genes for the initial three enzymes acetyl-CoA acetyltransferase (AACT), hydroxymethylglutaryl-CoA synthase (HMGS), and hydroxymethylglutaryl-CoA reductase (HMGR), respectively (Figure 3, File S7). Next, for the formation of IPP, the transcriptome analysis revealed two putative unigenes for mevalonate kinase (MVK), two for phosphomevalonate kinase (PMK), and two for mevalonate diphosphate decarboxylase (MVD). Among these unigenes, Cluster-32860.96052, encoding HMGS, exhibited higher expression in both H10 and H12 plants compared to L4 and L6 plants, suggesting a possible role of this unigene in determining the terpene amount in S. glabra.
The intermediate IPP can be isomerized into DMAPP by isopentenyl diphosphate isomerase (IDI). The transcriptome analysis identified three representative unigenes for IDI (Figure 3, File S7). Geranyl diphosphate synthase (GPPS) catalyzes the condensation of IPP and DMAPP to generate GPP, which is further utilized by farnesyl pyrophosphate synthase (FPPS) for the synthesis of FPP, which in turn is catalyzed by geranylgeranyl diphosphate synthase (GGPPS) for the production of GGPP. The transcriptional mining identified two unigenes for GPPS, six for FPPS, and three for GGPPS. The relatively higher expression of FPPS unigenes compared to those of GPPS and GGPPS is in agreement with sesquiterpenes being the predominant component of the terpene oil in S. glabra. Furthermore, Cluster-32860.104547, encoding FPPS, exhibited an FPKM value in H10 and H12 eight- and five-fold higher than that of L4, respectively, indicating the potential influence of this gene on sesquiterpene yield.
For monoterpene biosynthesis, GPP is catalyzed by monoterpene synthases (MTPSs), such as geraniol synthase and linalool synthase, to produce different monoterpenes. However, no unigenes containing a complete MTPS ORF were discovered in these transcriptomes. For sesquiterpene biosynthesis, two putative genes were found to encode germacrene synthase (GS) and valencene synthase (VS), respectively (Figure 3, File S7). For diterpene biosynthesis, three and one genes were found to encode ent-copalyl diphosphate synthase (CPS) and ent-kaurene synthase (KS), respectively. In particular, Cluster-32860.192900, encoding an STPS, showed an FPKM value in H10 four-fold higher than that of L4, while Cluster-32860.116868, also encoding an STPS, had an FPKM value in H10 and H12 six-fold higher than that of L4. These data indicated that the two identified STPSs in the terpene biosynthesis pathway may account for the variation in the terpene yield of S. glabra.
Cytochrome P450 family enzymes participate in the downstream modification of the terpene skeleton by regiospecific oxygenation and thus contribute another level of terpene diversity. Data analysis revealed that 31 candidate CYP genes were up-regulated in H10 or H12 compared to L4 and L6, with a maximum seven-fold FPKM change (Figure 4A, File S8). Based on the phylogenetic analysis of the deduced protein sequences of these CYPs, six groups were identified, including nine unigenes in CYP83, two in CYP76, six in CYP82, five in CYP71, three in CYP704, and six in the CYP90 family (Figure 4B). Several CYPs, such as members of the CYP71 and CYP76 families, have been reported to catalyze the oxidation of sesquiterpenes in various plants (Diaz-Chavez et al., 2013; Takase et al., 2016). Therefore, the identified differentially expressed CYP genes could be of particular interest for further elucidating terpene diversity in S. glabra.
Phytohormones, including JA, salicylic acid (SA), and abscisic acid (ABA), have been identified as potential regulators of specialized metabolite biosynthesis (Zhou and Memelink, 2016). In the transcriptome data, a total of 422 unigenes were annotated as related to different hormones. Among them, nine ethylene-, eight auxin-, seven ABA-, six brassinosteroid-, four cytokinin-, three SA-, and two JA-pathway-associated genes were detected as DEGs among samples (Figure S2, File S10). Cluster-32860.120273, encoding an auxin response factor (SgARF), was down-regulated in both H10 and H12 compared with L4 and L6, indicating possible negative regulation of terpene biosynthesis in S. glabra by SgARF. These TFs and hormone-related genes provide a foundation for investigating specialized metabolism as well as for the engineering of terpene biosynthesis in S. glabra.
Experimental qRT-PCR Validation
To validate the expression of putative genes in the RNA-seq data, 13 genes involved in the terpene biosynthesis pathway, including AACT, HMGS, HMGR, MVK, PMK, MVD, MCT, CMK, MDS, HDS, HDR, IDI, and GGPS, were selected for qRT-PCR analysis. The comparative analysis of these genes by qRT-PCR revealed expression patterns similar to those obtained by transcriptome analysis (Figure S3A). Statistical analysis showed a strong correlation between the qRT-PCR results and the dataset obtained by RNA-seq; the correlation coefficient was 0.942 (Figure S3B), suggesting that the data obtained by transcriptome sequencing are reliable for exploring the target genes and regulatory genes involved in terpene biosynthesis.
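The reported coefficient of 0.942 is a plain Pearson correlation between paired expression values; whether it was computed on raw or log-scaled values is not stated, so the sketch below (with invented log2 fold changes) is only illustrative:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient; equivalent to np.corrcoef(x, y)[0, 1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# Hypothetical paired measurements for a handful of genes.
rnaseq_log2fc = [2.1, -0.5, 1.3, 3.0, 0.2]
qpcr_log2fc   = [1.8, -0.7, 1.5, 2.6, 0.1]
print(round(pearson_r(rnaseq_log2fc, qpcr_log2fc), 3))
```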
Expression Patterns of Key Genes
To gain insight into the spatial expression patterns of genes involved in terpene biosynthesis in S. glabra, the expression levels of 16 key identified genes were analyzed by qRT-PCR in different tissues, including the leaf, young stem, phloem and xylem from the trunk, and root. The putative rate-limiting genes HMGR1 and HMGR2 from the MVA pathway, and DXR1 and DXR2 from the MEP pathway, showed higher expression in the phloem and root, with relatively low expression in the leaf, young stem, and xylem (Figure 6). The diterpene synthesis genes, including GGPPS1 and the diterpene synthases DTPS1, DTPS2, DTPS3, and DTPS4, all exhibited preferential expression in the phloem and root. The putative transcription factors WRKY and ARF were also mainly expressed in the phloem and root. The two sesquiterpene synthase genes STPS1 and STPS2 were differentially regulated in various tissues: STPS1 showed the highest expression in the root, while STPS2 was mainly expressed in the leaf, phloem, and root. These expression patterns were consistent with the terpene profile, in which terpene compounds mainly accumulate in the trunk of S. glabra plants.
Phylogenetic Analysis of SgTPSs
The TPS family can be classified into clades TPS-a through TPS-g (Chen et al., 2011). To clarify the function of TPSs in S. glabra, we selected putative TPS genes that contained a complete ORF sequence and encoded proteins larger than 500 amino acids. Two STPS transcripts, Cluster-32860.192900 (SgSTPS1, 549 aa) and Cluster-32860.116868 (SgSTPS2, 557 aa), and four DTPS transcripts, Cluster-32860.70817 (SgDTPS1, 597 aa), Cluster-32860.241723 (SgDTPS2, 789 aa), Cluster-32860.433 (SgDTPS3, 762 aa), and Cluster-24327.0 (SgDTPS4, 686 aa), were identified. BLAST searches showed that SgSTPS1 and SgSTPS2 had closest homology to CoTPS1 (AGW18154) (93%) and CoTPS4 (AGW18157) (89%) from Copaifera officinalis, respectively; both of these have been functionally characterized as sesquiterpene synthases (Joyce, 2013). SgDTPS1 was closely related to CoTPS5 (AGW18158) (88%) and to TcCPS (EOX94746) (57%) from Theobroma cacao, while SgDTPS2, SgDTPS3, and SgDTPS4 were most closely related to GsKS (KHN21375) (about 71%) from G. soja. Multiple sequence alignment revealed that SgSTPS1 and SgSTPS2 contained the RRX8W, EDXXD, DDXXD, and NSE/DTE motifs (File S11). SgDTPS1 aligned with other CPSs and was found to include the QXXDGGWG and DXDDTAM motifs, suggesting that it is a class II terpene synthase. SgDTPS2, SgDTPS3, and SgDTPS4 aligned with other KSs and contained the QXXDGGWG, DDXXD, and NSE/DTE motifs. The six genes were further used to build phylogenetic relationships with other characterized TPSs. SgSTPS1 and SgSTPS2 both clustered into the TPS-a subfamily, suggesting that they are sesquiterpene synthases (Figure 7). SgDTPS1 grouped together with CPSs in the TPS-c subfamily, suggesting that it is a class II diterpene cyclase. SgDTPS2, SgDTPS3, and SgDTPS4 grouped with KSs in the TPS-e subfamily, indicating that they are diterpene synthases. These genes can serve as key candidates for investigating the biological function of terpene synthases in S. glabra.
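In motif notation, X stands for any residue, so the motifs named above translate directly into regular expressions (the NSE/DTE triad has a looser consensus and is omitted here). A sketch scanning a protein sequence; the sequence shown is an invented fragment, not the real SgSTPS1:

```python
import re

# X = any amino acid, so RRX8W becomes RR followed by any 8 residues and a W.
MOTIFS = {
    "RRX8W": re.compile(r"RR.{8}W"),
    "DDXXD": re.compile(r"DD..D"),
}

def scan_motifs(protein_seq):
    """Return the 0-based start positions of each motif in the sequence."""
    return {name: [m.start() for m in pattern.finditer(protein_seq)]
            for name, pattern in MOTIFS.items()}

toy = "MSLRRAETQVPSLWKDEMGHDDIYDTAVK"  # invented fragment for illustration
print(scan_motifs(toy))  # {'RRX8W': [3], 'DDXXD': [20]}
```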
Subcellular Localization of SgSTPSs
Since the majority (85%) of compounds in S. glabra oleoresin are sesquiterpenes, the two identified sesquiterpene synthases, SgSTPS1 and SgSTPS2, were subjected to further functional analysis. In silico signal peptide prediction with SignalP indicated that the two SgSTPSs are cytosolic, with no signal peptide detected, which correlated well with the prediction of SgSTPS1 and SgSTPS2 as sesqui-TPSs. To understand their biological function, the full-length sequences of the SgSTPS1 and SgSTPS2 genes were cloned using the cDNA library of S. glabra stem as a template and were confirmed by sequencing to match exactly the sequences identified by transcriptome sequencing. To test subcellular localization, SgSTPS1 and SgSTPS2 were fused in frame with the GFP reporter gene and transformed into N. benthamiana. Confocal analysis revealed that the SgSTPS1-GFP and SgSTPS2-GFP proteins were localized in the cytoplasm (Figure 8), suggesting that SgSTPS1 and SgSTPS2 are responsible for sesquiterpene production in the cytosol.
Biochemical Function of SgSTPSs
To characterize the function of the sesquiterpene synthases SgSTPS1 and SgSTPS2, the recombinant proteins were expressed in E. coli, and the purified proteins were used for activity assays. GC-MS analysis of the SgSTPS1 enzymatic products revealed the formation of predominantly β-caryophyllene (Figure 9A, Figure S4A), along with minor amounts of isocaryophyllene and humulene, when utilizing FPP as a substrate. When using GPP as a substrate, trace amounts of linalool and geraniol were produced at retention times of 10.42 and 13.37 min, respectively (Figure S5). This indicated that SgSTPS1 is a sesquiterpene synthase with caryophyllene synthase activity that could be responsible for β-caryophyllene production in S. glabra.
However, the recombinant SgSTPS2 protein was found to be a versatile enzyme that used FPP as a substrate to catalyze the synthesis of 12 sesquiterpene compounds, including an elemene isomer, α-copaene, β-elemene, ylangene, β-copaene, isogermacrene D, γ-cadinene, γ-muurolene, germacrene D, bicyclogermacrene, γ-amorphene, and cadina-1(10),4-diene (Figure 9B, Figure S4B), all of which were also present in S. glabra oleoresin except elemene (Figure 1). Among the sesquiterpene products, elemene, ylangene, β-copaene, and germacrene D were the major products. When using GPP as a substrate, SgSTPS2 synthesized three acyclic monoterpenes, linalool, geranyl methyl ether, and geraniol, demonstrating MTPS activity of SgSTPS2.
[Figure 6 | Spatial expression patterns of key genes in the terpene synthesis pathway. Expression of genes from the leaf, young stem, phloem, xylem, and root was examined by qRT-PCR. Abbreviations for genes are the same as in Figure 4. SgActin was used as an internal control. Data are presented as the mean ± SE of triplicate samples.]
Transcriptome Analysis and Terpene Variation in S. glabra
In an attempt to investigate the molecular basis of terpene biosynthesis in S. glabra, an RNA-seq approach was employed to sequence the stem transcriptome from high and low oil-yielding trees. After transcriptome assembly, the total number, N50 length, and mean length of the de novo assembled transcripts and unigenes of S. glabra were much higher than those recently reported for other non-model plants, including Carya illinoinensis (Mo et al., 2018), Kandelia obovata (Hong et al., 2018), Kalopanax septemlobus (Han et al., 2018), Salvia officinalis (Ali et al., 2017), and Melaleuca alternifolia (Bustos-Segura et al., 2017), suggesting high accuracy and reliability of the sequencing data. Numerous annotated unigenes (61.9%) were distributed in the Fabaceae family, which is reasonable since S. glabra belongs to the Caesalpinioideae subfamily within the Fabaceae. Both GO and KEGG enrichment analyses of DEGs in H10 and H12 compared to L4 and L6 revealed that the terpene biosynthesis process and the plant hormone signal transduction pathway may play roles in determining the terpene variation in S. glabra. However, even plants belonging to the same type (low- or high-producing) exhibited enormous differences in gene expression profiles (Figure 2A). The DEGs between H10 and H12 or between L4 and L6 may be accountable for the differences in the ratios of different terpene compounds (Figure 1A).
Evolutionary Origin of SgTPSs
Based on the reaction mechanism and products formed, plant TPSs can be classified as class I, class II, or class I/II enzymes. Class I TPSs contain the DDXXD and NSE/DTE motifs that coordinate Mg2+ at their C-terminus. Class II TPSs include the DXDD motif for protonation-initiated cyclization of the substrate. Bifunctional class I/II diTPSs harbor both functional active sites (Chen et al., 2011). In S. glabra, the sesquiterpene synthases SgSTPS1 and SgSTPS2 and the diterpene synthases SgDTPS2, SgDTPS3, and SgDTPS4 contain the DDXXD and NSE/DTE motifs, suggesting that they are class I TPSs that require Mg2+ as a cofactor. The sequence annotation, homology, and phylogenetic analyses indicated that both SgSTPS1 and SgSTPS2 are sesquiterpene synthases. However, both SgSTPS1 and SgSTPS2 contain an additional EDXXD motif at the N-terminus, which in its active form contributes to class II diterpene synthase activity (Cao et al., 2010). Bifunctional class I/II diterpene synthases are only known in non-vascular plants and gymnosperms. In angiosperms, all of the diterpene synthases characterized to date are monofunctional, with loss of activity in one domain or the other, and all sesquiterpene synthases are monofunctional, having retained only one active site. TPSs containing the EDXXD domain are members of the TPS-d, -c, -e, and -f clades (Martin et al., 2004). Furthermore, an additional RRX8W motif was found at the N-terminus of the two SgSTPS genes, but it was not present in the four SgDTPS genes. The RRX8W motif is essential for cleavage of the transit peptide in mono- and di-terpene synthases (Chen et al., 2011). Subcellular localization confirmed that the two SgSTPSs were localized in the cytosol (Figure 8). The motif structure of the S. glabra terpene synthase genes may point to an evolutionary origin of these terpene synthases through domain loss or subfunctionalization from a common ancestor, as has been reported for Magnolia sesquiterpene synthase (Lee and Chappell, 2008) and Copaifera sesquiterpene synthases (Joyce, 2013). This hypothesis was further supported by the enzymatic activity of the two SgSTPSs. Although SgSTPS1 mainly catalyzed the formation of the sesquiterpene caryophyllene when using FPP as substrate, trace amounts of linalool and geraniol were produced at retention times of 10.42 and 13.37 min, respectively, when using GPP as the substrate (Figure S5). SgSTPS2 was confirmed to have bi-substrate capability, catalyzing both GPP and FPP to produce monoterpenes and sesquiterpenes, respectively (Figure 9B). These results indicated that SgSTPS1 and SgSTPS2 still retain partial MTPS activity. Multi-substrate capability has been found to be common in MTPSs and STPSs (Pazouki and Niinemets, 2016). However, no monoterpenes were discovered in S. glabra oleoresin, and no chloroplastic signal peptide could be detected in SgSTPS1 and SgSTPS2. As monoterpenes are usually produced in chloroplasts through the MEP pathway, these results suggest that S. glabra sesquiterpene synthases may have evolved through loss of the chloroplast signal peptide. Further functional and structural characterization is needed to decipher the evolutionary shift from mono- and di-terpene-rich oleoresin in gymnosperms to sesquiterpene-abundant oleoresin in angiosperms.
The higher expression of SgSTPS1 and SgSTPS2 in high oil-yielding plants suggests positive roles of the two genes in determining the terpene amount in S. glabra. In S. glabra oleoresin, the major sesquiterpene components are α-copaene (32.26%) and β-caryophyllene (16.33%) (Figure 1; Yang et al., 2016). Here, by a combination of transcriptome and experimental analyses, we identified that SgSTPS1 is mainly responsible for β-caryophyllene production, while SgSTPS2 accounts for α-copaene production. Nevertheless, SgSTPS2 is a versatile enzyme that can also produce other sesquiterpenes, including ylangene, germacrene, elemene, cadinene, muurolene, and amorphene, all of which were present in S. glabra oleoresin except elemene, which may be a byproduct arising from rearrangements of different sesquiterpenes (Agger et al., 2008). Therefore, the products of the two SgSTPS enzymes matched well with the chemical composition of the oleoresin in the S. glabra stem. Furthermore, these results indicated that different sesquiterpene synthases have diversified functionally to produce specific kinds of products. Among the characterized plant β-caryophyllene synthases, SgSTPS1 exhibited the highest amino acid identity (46%) to GhTPS from Gossypium hirsutum (AFQ23183) (Huang et al., 2013), 40% identity to AtTPS from A. thaliana (AAO85539) (Huang et al., 2012), and the lowest identity (35%) to ZmTPS from Zea mays (ABY79207) (Köllner et al., 2008), all three of which catalyze the formation of β-caryophyllene as the major product from FPP. In particular, amorphadiene, which SgSTPS2 synthesizes from FPP, is a precursor of the antimalarial drug artemisinin and can be used for the production of artemisinic acid. SgSTPS2 shared 42% similarity with amorpha-4,11-diene synthase from A. annua (ABM88787). Furthermore, β-caryophyllene and α-copaene have been found to serve in defense against pathogens or herbivores in plant species, and the identified sesquiterpene synthases were involved in the defense response (Huang et al., 2012). Therefore, we hypothesize that SgSTPSs may be involved in the defense response in S. glabra. Further research is needed to investigate the ecological functions of sesquiterpenes and the corresponding SgSTPSs.
Potential Regulatory Network of Terpene Synthesis
TFs play a predominant role in regulating the expression of genes in various metabolic pathways, and identification of these TFs could be important for understanding the regulatory mechanism of terpene biosynthesis in S. glabra. Most of the TFs identified as regulating terpene metabolites belong to the WRKY family, including TcWRKY1 from Taxus chinensis, AaWRKY1 from A. annua (Ma et al., 2009), GaWRKY1 from Gossypium arboreum (Xu et al., 2004), and HbWRKY1 from Hevea brasiliensis, which activate the expression of genes encoding key enzymes in the respective metabolite pathways. Here, we suggest that SgWRKY may be a positive regulator of S. glabra terpene biosynthesis (Figure 5A), which is consistent with the role of WRKYs in other plants based on previous research. However, the regulatory role and downstream targets of SgWRKY in terpene biosynthesis need to be further explored and confirmed by functional analysis. More transcriptome and experimental data are needed to elucidate the metabolic regulatory network. How transcription factors are connected to the signaling pathway and how their action is integrated with other regulatory circuits await further investigation in S. glabra.
AUTHOR CONTRIBUTIONS
NY designed the study and wrote the manuscript. NY, J-CY, and R-SL performed the experiments. G-TY and W-TZ helped in data analysis and manuscript preparation.
Figure S4 | Mass spectra for the peaks in GC-MS for the products formed by SgSTPS1 (A) and SgSTPS2 (B). The peaks marked with numbers were identified by comparing with the mass spectra library. The mass spectra for the peaks are shown with the references.
Figure S5 | GC-MS chromatogram for the products formed by the SgSTPS1 enzyme using GPP as substrate. The peaks marked with numbers were identified by comparing with the mass spectra library. The mass spectra for the peaks are shown on the right and lower sides with the references.
Table S1 | Summary of RNA-Seq data from all samples.
File S1 | Details of species classification of all annotated unigenes.
File S7 | Details of DEGs involved in the terpene biosynthesis pathway.
File S8 | Details of differentially expressed CYPs genes.
File S9 | Details of differentially expressed transcription factors involved in regulating terpene biosynthesis in S. glabra.
File S10 | Details of differentially expressed hormone-related genes involved in regulating terpene biosynthesis in S. glabra.
File S11 | Multiple sequence alignment of S. glabra TPS genes with homologous genes from other plants.
File S12 | Primers used in qRT-PCR validation of expression levels of genes.
"year": 2018,
"sha1": "48e91f1b2c59449e8878961ebc7cce757d07caad",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2018.01619/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "48e91f1b2c59449e8878961ebc7cce757d07caad",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
A Non-degenerate Scattering Theory for the Wave Equation on Extremal Reissner–Nordström
It is known that sub-extremal black hole backgrounds do not admit a (bijective) non-degenerate scattering theory in the exterior region due to the fact that the redshift effect at the event horizon acts as an unstable blueshift mechanism in the backwards direction in time. In the extremal case, however, the redshift effect degenerates and hence yields a much milder blueshift effect when viewed in the backwards direction. In this paper, we construct a definitive (bijective) non-degenerate scattering theory for the wave equation on extremal Reissner–Nordström backgrounds. We make use of physical-space energy norms which are non-degenerate both at the event horizon and at null infinity. As an application of our theory we present a construction of a large class of smooth, exponentially decaying modes. We also derive scattering results in the black hole interior region.
Scattering theories for the wave equation (1.1) on black hole backgrounds provide useful insights into the evolution of perturbations "at infinity". In this article we construct a new scattering theory for scalar perturbations on extremal Reissner-Nordström. Our theory makes crucial use of the vanishing of the surface gravity on the event horizon, and our methods extend those used to establish the horizon instability of extremal black holes in the forwards-in-time evolution. In the remainder of this section we will briefly recall scattering theories for sub-extremal backgrounds, and in the next section we will provide a rough version of the main theorems. We first review the scattering theories of the wave equation (1.1) on Schwarzschild spacetime backgrounds. Let T denote the standard stationary Killing vector field on a Schwarzschild spacetime. Since T is globally causal in the domain of outer communications, the energy flux associated to T is non-negative definite. This property played a crucial role in the work of Dimock and Kay [26,27], where a T-scattering theory on Schwarzschild, in the sense of Lax-Phillips [43], was developed (Fig. 1a). Subsequently, the T-scattering theory was understood by Nicolas [51], following the notion of scattering states by Friedlander [30] (Fig. 1b).
The T-energy scattering theory on Schwarzschild applies also when the standard Schwarzschild time function t is replaced by a time function corresponding to a foliation by hypersurfaces intersecting the future event horizon and terminating at future null infinity (Fig. 2a). This is convenient since it allows one to bound energies as measured by local observers. Recall that T is timelike in the black hole exterior and null on the event horizon. For this reason, the T-energy flux across an achronal hypersurface intersecting the event horizon is positive-definite away from the horizon and degenerate at the horizon. Hence, the associated norm for the T-energy scattering theory is degenerate at the event horizon. On the other hand, it has been shown [23,24] that Schwarzschild does not admit a non-degenerate scattering theory where the norm on the achronal hypersurface is defined in terms of the energy flux associated to a globally timelike vector field N (Fig. 2b) and the norms on the event horizon and null infinity are also defined in terms of the energy flux associated with N, but with additional, arbitrarily fast polynomially decaying weights in time. This is due to the celebrated redshift effect, which turns into a blueshift instability mechanism when seen from the backwards scattering point of view.
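Schematically, the flux norms in question are the standard vector field multiplier energies (a sketch of the standard definitions rather than the precise norms of [23,24]):
\[
J^X_\mu[\psi] := \mathbf{T}_{\mu\nu}[\psi]\, X^\nu, \qquad E^X[\psi](\Sigma) := \int_{\Sigma} J^X_\mu[\psi]\, n^\mu_{\Sigma}\, \mathrm{d}\mu_{\Sigma}.
\]
For X = T the integrand degenerates at the event horizon, where T becomes null, whereas for a timelike N the flux is equivalent to the full non-degenerate first-order energy.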
It is important to note that one can counter the blueshift mechanism and define a backwards scattering map for non-degenerate high-regularity norms on an achronal hypersurface if the data on H^+ and I^+ are sufficiently regular and decay exponentially fast with a sufficiently large rate (Fig. 3). A fully nonlinear version of this statement, in the context of the vacuum Einstein equations, was presented in [19].
As far as the Kerr family is concerned, Dafermos, Rodnianski and Shlapentokh-Rothman [23] derived a degenerate scattering theory in terms of the energy flux associated to a globally causal vector field V which is null on the event horizon and timelike in the exterior region. Similarly to the Schwarzschild case, the sub-extremal Kerr backgrounds do not admit a non-degenerate scattering theory in the exterior region. Let us also note that a T-energy scattering theory on Oppenheimer-Snyder spacetimes, describing Schwarzschild-like black holes arising from gravitational collapse, was developed in [1]. [Figure captions: "The T-scattering map."; "The N-scattering map fails to be surjective."]
Finally, we present some results regarding the black hole interior region. Luk-Oh [46] showed that the forward evolution of smooth, compactly supported initial data on sub-extremal Reissner-Nordström (RN) is W^{1,2}-singular at the Cauchy horizon (Fig. 4).
Similar instability results for the wave equation on Kerr interiors were presented by Luk-Sbierski [47] and independently by Dafermos-Shlapentokh-Rothman [24] (see also [29,39,40]). Specifically, in [24] the authors assumed trivial data on the past event horizon and arbitrary, non-trivial polynomially decaying data on past null infinity and showed that local (non-degenerate) energies blow up in a neighborhood of any point at the Cauchy horizon (Fig. 5). The interior of Schwarzschild was considered by Fournodavlos and Sbierski [28], who derived asymptotics for the wave equation at the singular boundary {r = 0}.
Overview of the main theorems.
In this section we present a rough version of our main theorems. Theorems A and B are straightforward extensions of known results, so we will only sketch their proofs, whereas Theorems 1-6 are entirely novel results that require new techniques; the precise statements of the theorems can be found in Sect. 4.
First of all, note that the standard stationary Killing vector field T is causal everywhere in the domain of outer communications of ERN. From this, it follows that the T-energy scattering theory in Schwarzschild can easily be extended to ERN (see Fig. 6):

Theorem A. The T-scattering theory in Schwarzschild extends to extremal Reissner-Nordström.
Proof. Follows by applying the methods in Section 9.6 of [23] together with the decay estimates derived in [8].
In the following theorem, we show that in ERN we can in fact go beyond T-energy scattering by providing a bijective scattering theory for weighted and non-degenerate norms on ERN; see Fig. 7 for an illustration. Here, Σ_0 will denote a spacelike-null hypersurface intersecting H^+ and terminating at I^+. Note that the E_{Σ_0}-norm is non-degenerate both at the event horizon and at null infinity (the latter understood in an appropriate conformal sense; see Sect. 2.4). The omitted terms involve either smaller weights or extra degenerate factors and additional angular or time derivatives. Here J^T and J^N denote the energy fluxes associated to the vector fields T and N, and ∂_ρ is a tangential-to-Σ_0 derivative such that ∂_ρ r = 1. Let E_{H^+ ∩ J^+(Σ_0)}, E_{I^+ ∩ J^+(Σ_0)}, E_{Σ_0} denote the closures of smooth, compactly supported data under the corresponding norms schematically defined above.

Theorem 1. (Rough version of Theorem 4.1) The forwards evolution map from E_{Σ_0} to E_{H^+ ∩ J^+(Σ_0)} ⊕ E_{I^+ ∩ J^+(Σ_0)} is a bounded and bijective linear operator with bounded inverse.

The above theorem is in stark contrast to the sub-extremal case, where the backwards evolution is singular at the event horizon (contrast Fig. 7 with Fig. 2).
By the bijective properties of Theorem 1, we can moreover immediately conclude that all scattering data along H^+ and I^+ with finite T-energy but with infinite weighted norm (as in (1.2)) will have an infinite weighted non-degenerate energy on Σ_0. The above theorem, however, does not specify which of the horizon-localized N-energy or the weighted energy for {r > R_0}, for some large R_0 > 0, is infinite. The following theorem shows that there are characteristic data for which the solutions specifically have infinite horizon-localized N-energy. This immediately implies that the unweighted non-degenerate N-energy forward scattering map fails to be invertible; in other words, we can find data with finite characteristic N-energies but with infinite standard (unweighted) N-energy at Σ_0.

Theorem B. There exist solutions ψ to (1.1) on ERN that are smooth away from the event horizon H^+, with finite T-energy flux along H^+ and future null infinity I^+, such that either: (i) ψ|_{H^+} vanishes, but rψ|_{I^+} satisfies
\[
\int (1+u)^p \big(\partial_u(r\psi)\big)^2 \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi\, \mathrm{d}u = \infty \quad \text{if and only if } p \ge 2,
\]
and ψ has infinite unweighted N-energy flux along Σ_0 ∩ {r ≤ r_0}, with r_0 > r_+ arbitrarily close to the horizon radius r_+; or (ii) rψ|_{H^+} satisfies
\[
\int (1+v)^p \big(\partial_v(r\psi)\big)^2 \sin\theta\, \mathrm{d}\theta\, \mathrm{d}\varphi\, \mathrm{d}v = \infty \quad \text{if and only if } p \ge 2,
\]
and ψ has infinite weighted N-energy flux along Σ_0 ∩ {r ≥ R_0} with R_0 > 0 arbitrarily large.
The following theorem concerns the scattering of initial data with higher regularity; see Fig. 8 for an illustration.

Theorem 2. (Rough version of Theorem 4.2) For each n ∈ N_0, the restrictions of the maps of Theorem 1 between a weighted higher-order energy space on (H^+ ∩ J^+(Σ_0), I^+ ∩ J^+(Σ_0)) and a degenerate higher-order energy space on Σ_0 are bounded and bijective.
The above theorem is of particular importance in constructing special solutions with high regularity. We next present a scattering result for the black hole interior of ERN (Fig. 9) that extends the results derived in [31].
Theorem 3. (Rough version of Theorem 4.3) The scattering map in the black hole interior of ERN defined between weighted energy spaces is bounded and bijective.
We will now provide a few applications of the above theorems. The first application concerns the relation between decay along H^+ and I^+ and the regularity of the data on the hypersurface Σ_0 (see Fig. 10). Contrast Fig. 11 with Fig. 4 in the sub-extremal case. See also Remark 4.5.
Related works.
A closely related topic to the scattering theories on black holes is the black hole stability problem for the forwards-in-time evolution. Intense research has been carried out in this direction for both sub-extremal and extremal black holes. Decay results for the wave equation on the full sub-extremal Kerr family were derived in [22]. Definitive stability results for the linearized gravity system on Schwarzschild and Reissner-Nordström were presented in [20] and [35,36], respectively. The non-linear stability of Schwarzschild in a symmetry-restricted context was presented in [42]. The rigorous study of linear waves on extremal black holes was initiated by the second author in [8-12], where it was shown that scalar perturbations are unstable along the event horizon in the sense that higher-order transversal derivatives asymptotically blow up towards the future. The stronger regularity properties of scalar perturbations in the interior of extremal black hole spacetimes, as compared to sub-extremal black holes, were derived by the third author in [31,32]. Precise late-time asymptotics were derived in [5]. These asymptotics led to a novel observational signature of ERN [4], where it was shown that the horizon instability of ERN is in fact "observable" by observers at null infinity. For a detailed study of this signature we refer to the recent [15]. For works on extremal Kerr spacetimes we refer to [16,38,45]. Extensions of the horizon instability have been presented in various settings [3,14,18,37,50,52,54]. For a detailed review of scalar perturbations on extremal backgrounds we refer to [13].
1.4. Discussion on nonlinear problems. The methods developed in this article have applications beyond extremal black holes. Indeed, they may also be applied to the construction of non-degenerate scattering theories with weighted energy norms in more general asymptotically flat spacetimes without a local redshift effect at the horizon (which acts as a blueshift effect in backwards evolution). One such example is Minkowski spacetime; see Sect. 5. Since our methods involve weighted and non-degenerate energies, we expect them to be particularly useful for developing a scattering theory for nonlinear wave equations satisfying the classical null condition, as weighted energies need to be controlled in order to obtain global well-posedness for the (forwards) initial value problem [41]. It would moreover be interesting to explore the generalization of our methods to the setting of perturbations of Minkowski in the context of a scattering problem for the Einstein equations. See also [44] for work in this direction. Another interesting direction to explore is the construction of dynamically extremal black holes settling down to extremal Reissner-Nordström with inverse polynomial rates from initial data along the future event horizon and future null infinity, which would involve a generalization of the backwards evolution estimates in this article to the setting of the Einstein equations. Note that the construction of dynamically extremal black holes settling down exponentially follows from an application of the methods of [19]. However, whereas it is conjectured in [19] that a scattering construction of dynamically sub-extremal black holes settling down inverse polynomially will generically result in spacetimes with a weak null singularity at the event horizon, our methods suggest that the event horizon of dynamically extremal black holes may generically be more regular (with the regularity depending on the assumed polynomial decay rate).
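For orientation, a minimal model of the nonlinearities meant here (our illustration, not an equation treated in this paper): the classical null condition is satisfied by quadratic nonlinearities built from null forms, such as
\[
\Box_m \psi = Q_0(\partial \psi, \partial \psi), \qquad Q_0(\partial\phi, \partial\psi) := m^{\alpha\beta}\, \partial_\alpha \phi\, \partial_\beta \psi = -\,\partial_t \phi\, \partial_t \psi + \nabla \phi \cdot \nabla \psi,
\]
for which control of r-weighted energies of the kind appearing in our norms is precisely what the classical small-data global existence proofs require [41].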
1.5. Overview of paper. We provide in this section an overview of the remainder of the paper.
• In Sect. 2, we introduce the extremal Reissner-Nordström geometry and spacetime foliations. We also introduce the main notation used throughout the rest of the paper.
• We introduce in Sect. 3 the main Hilbert spaces which appear as domains for our scattering maps.
• Having introduced the main notation and Hilbert spaces, we subsequently give precise statements of the main theorems of the paper in Sect. 4.
• In Sect. 5, we outline the main new ideas introduced in the present paper and we provide a sketch of the key proofs.
• We construct in Sect. 6 the forwards scattering map F, mapping initial data on a mixed spacelike-null hypersurface to the traces of the radiation field at the future event horizon and future null infinity. We moreover construct restrictions of this map which additionally involve higher-order, degenerate norms.
• In Sect. 7, we construct the backwards evolution map B, which sends initial data for the radiation field at the future event horizon and future null infinity to the trace of the solution at a mixed spacelike-null hypersurface and is the inverse of F. Similarly, we construct restrictions of B involving higher-order, degenerate norms.
• We prove in Sect. 8 additional energy estimates (in the forwards and backwards time directions) that allow us to construct invertible maps F_± that send initial data along the asymptotically flat hypersurface {t = 0} to the future event horizon/null infinity and past event horizon/null infinity, respectively. The composition S = F_+ ∘ F_-^{-1} defines the scattering map, which may be thought of as the key object in our non-degenerate scattering theory.
• In Sect. 9 we construct a scattering map S_int in a subset of the black hole interior of extremal Reissner-Nordström.
• In the rest of the paper, we provide several applications of the scattering theory developed in the aforementioned sections. In Sect. 10, we apply the backwards estimates of Sect. 7 to construct arbitrarily regular solutions to (1.1) from data along future null infinity and the future event horizon. As a corollary, we construct in Sect. 11 smooth mode solutions from data at infinity and the event horizon.
Geometry and Notation
The exterior region M_ext of extremal Reissner-Nordström (ERN) can be covered by ingoing Eddington-Finkelstein coordinates (v, r, θ, ϕ), in which the metric takes the form
\[
g = -D(r)\, \mathrm{d}v^2 + 2\, \mathrm{d}v\, \mathrm{d}r + r^2 (\mathrm{d}\theta^2 + \sin^2\theta\, \mathrm{d}\varphi^2), \tag{2.1}
\]
where D(r) = (1 − Mr^{-1})^2, with M > 0 the mass parameter, and (θ, ϕ) are spherical coordinates on S^2. We denote the boundary as follows: H^+ := ∂M_ext = {r = M}. We refer to H^+ as the future event horizon. The coordinate vector field T := ∂_v is a Killing vector field that generates the time-translation symmetry of the spacetime.
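For orientation, the degeneracy of the horizon that drives the analysis can be read off directly from D (a standard computation, not quoted from this paper): the surface gravity of H^+ is κ = ½D'(M), and
\[
D'(r) = \frac{2M}{r^2}\Big(1 - \frac{M}{r}\Big) \quad \Longrightarrow \quad \kappa = \tfrac{1}{2}D'(M) = 0,
\]
in contrast with sub-extremal Reissner-Nordström, where D(r) = 1 − 2Mr^{-1} + e^2 r^{-2} has a simple root at r_+ and D'(r_+) > 0. The vanishing of κ is what removes the redshift, and hence the backwards blueshift, at the event horizon.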
We can change to the coordinate chart (u, r, θ, ϕ) on the manifold M̃_ext = M_ext \ H^+, in which g can be expressed as follows:
\[
g = -D(r)\, \mathrm{d}u^2 - 2\, \mathrm{d}u\, \mathrm{d}r + r^2 (\mathrm{d}\theta^2 + \sin^2\theta\, \mathrm{d}\varphi^2). \tag{2.2}
\]
Finally, it will also be convenient to employ the Eddington-Finkelstein double-null coordinate chart (u, v, θ, ϕ) in M̃_ext, in which g takes the following form:
\[
g = -D(r)\, \mathrm{d}u\, \mathrm{d}v + r^2 (\mathrm{d}\theta^2 + \sin^2\theta\, \mathrm{d}\varphi^2).
\]
In these coordinates T = ∂_u + ∂_v. We moreover introduce the vector fields L := ∂_v and L̲ := ∂_u in (u, v, θ, ϕ) coordinates. We have that L(r) = ½D and L̲(r) = -½D. Note that in (v, r) coordinates we can express L̲ = -½D ∂_r. Let /∇ denote the induced covariant derivative on the spheres of constant (u, v). Then we denote the rescaled covariant derivative /∇_{S^2} := r /∇; the rescaled covariant derivative /∇_{S^2} is the standard covariant derivative on the unit round sphere.
Consider the following rescaled radial coordinate on M̃_ext: x := 1/r. The metric g_M takes the following form in (u, x, θ, ϕ) coordinates:
\[
g = -D(x^{-1})\, \mathrm{d}u^2 + 2x^{-2}\, \mathrm{d}u\, \mathrm{d}x + x^{-2} (\mathrm{d}\theta^2 + \sin^2\theta\, \mathrm{d}\varphi^2).
\]
We can then express M̃_ext = R_u × (0, 1/M]_x × S^2. We can embed M̃_ext into the manifold-with-boundary M̂_ext := R_u × [0, 1/M]_x × S^2. We denote I^+ := R_u × {0}_x × S^2 and refer to this hypersurface as future null infinity. By considering a conformally rescaled metric ĝ_M, we can also introduce (u, r, θ, ϕ) coordinates on M̃_int, in which the metric takes the expression (2.2). In these coordinates, it immediately follows that we can embed M̃_int into a larger manifold M = R_u × (0, ∞)_r × S^2. Let us denote the manifold-with-boundary M_int = R_u × (0, M]_r × S^2 and the boundary CH^+ := R_u × {M}_r × S^2, which we refer to as the inner horizon or the Cauchy horizon (the latter terminology follows from the globally hyperbolic spacetime regions considered in Sect. 2.3). Finally, it is also useful to work in Eddington-Finkelstein double-null coordinates (u, v, θ, ϕ) in M̃_int, in which the metric g takes the same form as in M̃_ext. Consider the corresponding hypersurface Σ. Then N_{v_0} := Σ|_{r∈(M,r_H)} is an ingoing null hypersurface intersecting H^+, tangential to L̲, and N_{u_0} := Σ|_{r∈[r_I,∞)} is an outgoing null hypersurface, tangential to L. Furthermore, Σ|_{r∈(r_H,r_I)} is spacelike. We denote u(r) := v(r) − 2r_*(r). Without loss of generality, we can assume that u_0 > 0 (by taking v_0 appropriately large for fixed r_H and r_I). We will consider the coordinate chart (ρ := r|_Σ, θ, ϕ) on Σ.
We denote with D^±(S) the future and past domains of dependence, respectively, of a spacelike or mixed spacelike-null hypersurface S. Let R := D^+(Σ). We can foliate R by hypersurfaces Σ_τ induced by flowing along T, with Σ_0 = Σ. We can extend R (with respect to the (u, x, θ, ϕ) coordinate chart) into the extended manifold-with-boundary M̂_ext by attaching the boundary I^+_{≥u_0} := I^+ ∩ {u ≥ u_0}; we denote the resulting extension by R̂. Note that we can similarly consider D^-(Σ̄), where Σ̄ is the time-reversed analogue of Σ (the roles of u and v reversed) that intersects H^-, and define, with respect to the (v, x, θ, ϕ) coordinates, the analogue v_0 ∈ R of u_0, and also define I^-_{≤v_0}. The hypersurface Σ naturally extends to a hypersurface in R̂, with endpoints on H^+ and I^+, and can be equipped with the coordinate chart (χ = x|_Σ, θ, ϕ).
We moreover define H^+_{≥v_0} := H^+ ∩ {v ≥ v_0}. We foliate the regions D_{-u_0}, with u_0 > 0, by outgoing null hypersurfaces that we also denote N_u; in this setting N_u = {u = u' | v ≥ |u'|}. It is also useful to consider a foliation by ingoing null hypersurfaces N̲_v, and we moreover consider the analogous null hypersurfaces in D_{-v_0}. We refer to Fig. 12 for an illustration of the above foliations and hypersurfaces. We use the following notation for the standard volume form on the unit round sphere: dω = sin θ dθ dϕ. Let n_τ and n_Σ be the normal vector fields to Σ_τ and Σ, respectively. We denote with dμ_τ, dμ_Σ the induced volume forms on Σ_τ and Σ, respectively. On the null segments N_τ and N̲_τ, n_τ and dμ_τ are not uniquely defined, so we fix conventions for them. We moreover use the notation dμ_{g_M} for the natural volume form on M_ext or M_int; in (u, v, θ, ϕ) coordinates on either M̃_ext or M̃_int, we can express dμ_{g_M} = ½ D r^2 du dv dω. We use the notation dμ_{ĝ_M} for the natural volume form on M̂_ext (corresponding to the metric ĝ_M). 2.4. Additional notation. Let n ∈ N_0. Suppose K ⊂ R̂ is compact. Then the Sobolev spaces W^{n,2}(K) are defined in a coordinate-independent way with respect to the corresponding norm. Recall that we can write in (v, r, θ, ϕ) coordinates -2D^{-1}L̲ = ∂_r, which is a regular vector field in R. Furthermore, we can express in (u, x, θ, ϕ) coordinates that r^2 L is also regular in R̂. Hence, W^{n,2}(K) is a natural choice of Sobolev space with respect to the conformal metric ĝ_M. If K_int ⊂ M_int is compact, we instead define W^{n,2}(K_int) in a coordinate-independent way with respect to the corresponding norm: in (u, r, θ, ϕ) coordinates, we can express 2D^{-1}L = ∂_r, which is a regular vector field in M_int, so W^{n,2}(K_int) is a natural choice of Sobolev space with respect to g_M. We define the Sobolev spaces W^{1,2}(N^int_{v_0}) analogously. Let f, g be positive real-valued functions. We will make use of the notation f ≲ g when there exists a constant C > 0 such that f ≤ C·g. We will denote f ∼ g when f ≲ g and g ≲ f. We will also employ the alternative notation f ∼_{c,C} g, for 0 < c ≤ C positive constants, to indicate c·g ≤ f ≤ C·g. We use the "big O" notation O((r − M)^p) and O(r^{-p}), p ∈ R, to group functions f of r satisfying |f| ≤ C(r − M)^p and |f| ≤ C r^{-p}, respectively.
Main energy spaces.
In this section, we will introduce the Hilbert spaces on which we will define scattering maps. Before we can do so, we will need existence and uniqueness (in the smooth category) for the Cauchy problem for (1.1) on extremal Reissner-Nordström.
Consider characteristic initial data
We denote with C^∞(Σ_0) the space of smooth functions on the hypersurface Σ_0, with respect to the coordinate chart (χ, θ, ϕ) introduced in Sect. 2.3. We denote with C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}) the space of smooth functions on the restriction Σ_0 ∩ {r_H ≤ r ≤ r_I}, with respect to the coordinate chart (ρ, θ, ϕ).
Let us introduce the stress-energy tensor T[ψ] of (1.1), defined as follows with respect to a coordinate basis:
\[
\mathbf{T}_{\mu\nu}[\psi] := \partial_\mu \psi\, \partial_\nu \psi - \tfrac{1}{2} g_{\mu\nu}\, \partial^\alpha \psi\, \partial_\alpha \psi.
\]
Given a vector field X on M, we define the corresponding X-energy current J^X as follows: J^X_μ[ψ] := T_{μν}[ψ] X^ν. We will denote the radiation field of ψ as follows: φ := rψ. We define the energy space E_{Σ_0} as the completion of smooth data under the norm || · ||_{E_{Σ_0}} below, where ψ denotes the (unique) smooth local extension in R of the data that satisfies the prescribed restrictions ψ|_{Σ_0} and n_{Σ_0}ψ|_{Σ_0 ∩ {r_H ≤ r ≤ r_I}} and solves (1.1) (see Theorem 3.1), so that all derivatives of ψ appearing in the norm can be expressed solely in terms of derivatives of the data.
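The divergence identity underlying all multiplier estimates below is standard: for solutions ψ of (1.1),
\[
\nabla^\mu J^X_\mu[\psi] \;=\; \mathbf{T}_{\mu\nu}[\psi]\, {}^{(X)}\pi^{\mu\nu}, \qquad {}^{(X)}\pi^{\mu\nu} := \tfrac{1}{2}\big( \nabla^\mu X^\nu + \nabla^\nu X^\mu \big),
\]
so that, after an application of the divergence theorem, fluxes through the boundary of a spacetime region are related by a bulk term which vanishes when X is Killing (e.g. X = T) and which carries a sign to be exploited when X is not (e.g. X = N, or the r-weighted multipliers of Sects. 6-8).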
We also define the norm || · ||_{E_{Σ_0}} on C^∞(Σ_0) × C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}). We denote with E^T_{Σ_0} and E_{Σ_0} the completions of C^∞(Σ_0) × C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}) with respect to the norms || · ||_{E^T_{Σ_0}} and || · ||_{E_{Σ_0}}, respectively. The analogous definitions apply on Σ, where ψ now denotes the (unique) smooth local extension of the data to D^+(Σ) that satisfies the prescribed restrictions ψ|_Σ and n_Σψ|_Σ and solves (1.1) (see Theorem 3.1), so that all derivatives of ψ can be expressed solely in terms of derivatives of the data. We also define the norm || · ||_{E_Σ} on C^∞_c(Σ) × C^∞_c(Σ), and we denote with E^T_Σ and E_Σ the completions of C^∞_c(Σ) × C^∞_c(Σ) with respect to the norms || · ||_{E^T_Σ} and || · ||_{E_Σ}, respectively. We denote with C^∞_c(H^+_{≥v_0}) and C^∞_c(I^+_{≥u_0}) the spaces of smooth, compactly supported functions on H^+_{≥v_0} and I^+_{≥u_0}, respectively.
3.2. Degenerate higher-order energy spaces. In this section, we will introduce analogues of the Hilbert spaces introduced in Sect. 3.1, but with norms depending on degenerate higher-order derivatives.

Definition 3.5. Define the norm || · ||_{E^n_{Σ_0}} on C^∞(Σ_0) × C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}). We denote with E^n_{Σ_0} the completion of C^∞(Σ_0) × C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}) with respect to the norm || · ||_{E^n_{Σ_0}}.

Definition 3.6. Define the norm || · ||_{E^n_Σ} analogously, where ψ denotes the smooth extension of the data to R that satisfies the prescribed restrictions ψ|_Σ and n_Σψ|_Σ and solves (1.1) (see Theorem 3.1), so that all derivatives of ψ can be expressed solely in terms of derivatives of the data. We denote with E^n_Σ the corresponding completion.

Definition 3.7. Let n ∈ N_0 and u_0, v_0 > 0. Define the higher-order norms || · ||_{E^n_{H^+_{≥v_0}}} and || · ||_{E^n_{I^+_{≥u_0}}}, and denote with E^n_{H^+_{≥v_0}} and E^n_{I^+_{≥u_0}} the corresponding completions.

Definition 3.8. Let n ∈ N_0. Define the higher-order norms || · ||_{E^n_{H^±}} and || · ||_{E^n_{I^±}} with respect to the coordinate charts (u_±, v_±, θ, ϕ). We then denote with E^n_{H^±} and E^n_{I^±} the completions with respect to the norms || · ||_{E^n_{H^±}} and || · ||_{E^n_{I^±}}.
Black hole interior energy spaces.
In this section, we introduce additional energy spaces that play a role in a non-degenerate scattering theory for the extremal Reissner-Nordström black hole interior.
Main Theorems
In this section, we give precise statements of the results proved in this paper. We refer to Sects. 2 and 3 for an introduction to the notation and definitions of the objects appearing in the statements of the theorems.
Non-degenerate scattering theory results.
We first state the main theorems that establish a non-degenerate scattering theory in extremal Reissner-Nordström.
Theorem 4.1. The following linear maps
Here, ψ denotes the unique solution to (1.1) with initial data ( , ) in accordance with statements 2. and 3. of Theorem 3.1.
Furthermore, their unique extensions
are bijective and bounded linear operators, and the scattering map S is also a bijective and bounded linear operator.
We refer to the maps F and F_± as forwards evolution maps, to F^{-1} and F^{-1}_± as backwards evolution maps, and to S as the scattering map.
Remark 4.1. An analogous result holds with respect to the degenerate energy spaces
This follows easily from an analogue of Proposition 9.6.1 in [23] applied to the setting of extremal Reissner-Nordström; see also Sects. 6.5, 7.4 and 8.3. The advantage of Theorem 4.1 is the use of non-degenerate and weighted energy norms that also appear when proving global uniform boundedness and decay estimates for solutions to (1.1).
The following theorem extends Theorem 4.1 by considering degenerate and weighted higher-order energy spaces.
Theorem 4.2.
Let n ∈ N_0. We can restrict the codomains of the linear maps F and F_± defined in Theorem 4.1 to arrive at maps F_n and F_{n,±}, which are well-defined. Furthermore, their unique extensions are bijective and bounded linear operators, and the corresponding restricted scattering map S_n is also a bijective and bounded linear operator.
Both Theorems 4.1 and 4.2 follow by combining Propositions 6.16 and 7.11, Corollary 7.12 and Propositions 8.11 and 8.14.
We additionally construct a scattering map restricted to the black hole interior.
Theorem 4.3.
Let u_int < 0 with |u_int| suitably large. The scattering map in the black hole interior, defined on smooth and compactly supported data, is well-defined as a linear map and extends uniquely as a bijective, bounded linear operator S_int between the weighted energy spaces of Sect. 3.3. Theorem 4.3 is a reformulation of Proposition 9.2.
Applications.
In this section, we state some applications of the non-degenerate scattering theory of Sect. 4.1.
In Theorem 4.4 below, we show that we can obtain unique solutions to (1.1) with arbitrary high Sobolev regularity (with respect to the differentiable structure on R) from suitably regular and polynomially decaying scattering data on H + and I + in an L 2 -integrated sense.
This is in contrast with the sub-extremal setting, where generic polynomially decaying data along the future event horizon and future null infinity (with an arbitrarily fast decay rate) lead to blow-up of the non-degenerate energy along Σ_0; see [23,24].
As a corollary of Theorem 4.4, we can moreover construct smooth solutions and in particular smooth solutions with an exact exponential time dependence.
and assume that ( , ) and all derivatives up to any order decay superpolynomially in v and u, respectively.
(i) Then there exists a corresponding smooth solution ψ to (1.1) on R such that rψ can moreover be smoothly extended to R̂ with respect to the differentiable structure on R̂.
with ω ∈ C such that Im ω < 0. Then we can express the corresponding solution as ψ = e^{-iωτ} ψ̂, for a smooth profile ψ̂ independent of τ. We refer to such ψ as mode solutions.
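Explicitly (our sketch of the time dependence, with ψ̂ the spatial profile): a mode solution with frequency ω satisfies
\[
\psi(\tau, \cdot) = e^{-i\omega \tau}\, \hat{\psi}(\cdot), \qquad |\psi(\tau, \cdot)| = e^{\operatorname{Im}(\omega)\, \tau}\, |\hat{\psi}(\cdot)|,
\]
so the condition Im ω < 0 corresponds precisely to exponential decay in time, at rate |Im ω|.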
Remark 4.3.
Note that in order for an analogous result to Theorem 4.5 (i) to hold in sub-extremal Reissner-Nordström, one needs to consider scattering data ( , ) that are superexponentially decaying, and hence it cannot be used to prove the analogue of Theorem 4.5 (ii). Nevertheless, the existence of a more restricted class of smooth solutions that behave exponentially in time with arbitrary ω such that Im ω < 0 in sub-extremal Reissner-Nordström can be established by restricting to fixed spherical harmonics and applying standard asymptotic ODE analysis.
Remark 4.4. One can apply the results of [5] to show that the time translations S(τ), acting on L^2-based Sobolev spaces, with ψ the solution to (1.1) associated to the given data, form a continuous semigroup, such that S(τ) = e^{τA}, with A the corresponding densely defined infinitesimal generator, which formally agrees with T. The results of [55] imply that, in the setting of asymptotically de Sitter or anti-de Sitter spacetimes, quasi-normal modes or resonances are smooth mode solutions that can be interpreted as eigenfunctions of A, and the corresponding frequencies ω form a discrete set in the complex plane (cf. the normal modes and frequencies of an idealised vibrating string or membrane).
The smooth mode solutions of Theorem 4.5 (ii) (and those obtained in the sub-extremal setting by ODE arguments as sketched in Remark 4.3) form an obstruction to extending this interpretation to the asymptotically flat setting. Indeed, all the mode solutions of Theorem 4.5 (ii) are eigenfunctions of A, but the corresponding set of frequencies ω, which is the entire open lower-half complex plane, is certainly not discrete. In order to maintain the viewpoint of [55], one has to consider smaller function spaces that exclude the smooth mode solutions of Theorem 4.5 (ii); see [34]. Theorem 4.6. Let u_0 be suitably large. Then there exists a constant C = C(M, u_0, v_0) > 0 such that we can estimate the solution in the black hole interior. Theorem 4.6 follows from Corollary 9.3.
Remark 4.5. Theorem 4.6 addresses the question of whether ψ ∈ W^{1,2}_loc in the black hole interior of extremal Reissner-Nordström for localized, low-regularity initial data, which was raised as an open problem in [25]. For smooth and localized data, this statement follows from [5,31]. Indeed, Theorem 4.6 demonstrates that boundedness of a non-degenerate energy with weights that grow in r (together with boundedness of energies involving additional derivatives that are tangential to the event horizon) is sufficient to establish ψ ∈ W^{1,2}_loc. Theorem 4.6 can straightforwardly be extended to the Λ > 0 setting of extremal Reissner-Nordström-de Sitter black holes, where there is no need to include r-weights in the non-degenerate energy norm that is sufficient to establish ψ ∈ W^{1,2}_loc. See also [2] for results in the interior of extremal Reissner-Nordström-de Sitter.
Overview of Techniques and Key Ideas
In this section, we provide an overview of the main techniques that are used in the proofs of the theorems stated in Sect. 4. We will highlight the key new ideas and estimates that are introduced in this paper.
The proof of the main theorems, Theorem 4.1 and Theorem 4.2, can roughly be split into four parts: 1.) Showing that the linear maps F, F^{-1} and F_n, F^{-1}_n that appear in Theorem 4.1 and Theorem 4.2 are well-defined when taking as domains spaces of either smooth or smooth and compactly supported functions.
2.) Proving uniform boundedness properties of these linear maps with respect to weighted Sobolev norms. This allows one to immediately extend the linear maps to the completions of the spaces of smooth (and compactly supported) functions with respect to appropriately weighted Sobolev norms. 3.) Constructing the linear maps S and S_n. 4.) Constructing S_int (independently from the above).
The heart of this paper consists of establishing 2.) and 3.) by proving uniform estimates for smooth (and compactly supported) data along 0 , and H ± ∪ I ± . An overview of the corresponding estimates and techniques leading to 2.) is given in Sects. 5.1-5.3. Part 3.) follows by complementing these estimates with additional estimates in D ± ( ) near the past limit points of I + and H + , which is briefly discussed in Sect. 5.4. We briefly discuss the black hole interior estimates involved in 4.) in Sect. 5.5. Part 1.) follows from local estimates combined with soft global statements that have already been established in the literature. We give an overview of the logic of the arguments in this section.
The forwards map F is well-defined by global existence and uniqueness for (1.1), combined with the finiteness (and decay) of the radiation field rψ; see for example the results in [5,8,9].
In order to show that the backwards map is well-defined, we first need to make sense of the notion of prescribing initial data "at infinity"; that is to say, we need to show as a preliminary step that we can associate to each pair of scattering data a unique solution ψ. This may be viewed as a semi-global problem. We construct ψ as the limit of a sequence of solutions ψ_i arising from a sequence of local initial value problems with fixed initial data imposed on the null hypersurfaces {v = V_i}, with V_i → ∞. A very similar procedure was carried out in the physical-space construction of scattering maps on Schwarzschild in Proposition 9.6.1 in [23]. One could alternatively interpret I^+ as a genuine null hypersurface with respect to the conformally rescaled metric ĝ_M, which turns the semi-global problem into a local problem.
Backwards r -weighted estimates.
We introduce time-reversed analogues of the r^p-weighted estimates of Dafermos-Rodnianski [21] and the (r − M)^{-p}-weighted estimates of [5]. We first illustrate key aspects of these estimates in the setting of the standard wave equation on Minkowski. We can foliate the causal future of a null cone C_0 in Minkowski by outgoing spherical null cones C_u = {t − r = u}, with t, r the standard spherical Minkowski coordinates and u ≥ 0. We consider smooth, compactly supported initial data on C_0. The r^p-weighted estimates applied backwards in time with p = 1 and p = 2 control r-weighted energies along C_{u_1} in terms of those along C_{u_2}, u_1 ≤ u_2, together with spacetime integrals. In contrast with the usual forwards r^p-weighted estimates, the spacetime integrals on the right-hand sides have a bad sign. Hence, in order to obtain control of r-weighted energies along C_{u_1}, we need to start by controlling these spacetime integrals. Note that standard ∂_t-energy conservation implies an identity (5.1) for any 0 < u < u_2. Hence, using that ψ vanishes along C_{u_2}, we can integrate this identity in u, and we can integrate by parts to convert one u-integration into an additional u-weight (5.2). By applying both the p = 1 and p = 2 estimates above, and integrating by parts once more along I^+ as in (5.2), we obtain (5.3). Comparing (5.3) with (5.1) with u = 0, we see that we can obtain stronger, weighted uniform control along C_0, provided we control an appropriately weighted energy along I^+. One may compare this to the (modified) energy estimate obtained by using the Morawetz conformal vector field K = u^2 ∂_u + v^2 ∂_v, which is the generator of the inverted time translation conformal symmetries, as a vector field multiplier instead of ∂_t [48]; see also Sect. 5.4. The main difference in the setting of extremal Reissner-Nordström is that the r^p-estimates above only apply in the spacetime region where r ≥ r_I, with r_I suitably large, and they have to be complemented by an analogous hierarchy of (r − M)^{-p}-weighted estimates in a region {r ≤ r_H} near H^+, i.e. with r_H − M sufficiently small. Roughly speaking, the analogue of the p = 2 weighted energy near H^+ corresponds to the restriction of a non-degenerate energy (in (v, r) coordinates). It is in controlling the non-degenerate energy in the backwards direction that we make essential use of the extremality of extremal Reissner-Nordström, i.e. the degeneracy of the event horizon. Indeed, if we were to consider instead sub-extremal Reissner-Nordström, we would fail to obtain control of a non-degenerate energy near H^+ with polynomially decaying data along H^+ ∪ I^+, due to the blueshift effect (the time-reversed redshift effect); see [23,24]. In order to control the boundary terms arising from restricting the r-weighted estimates near I^+ and H^+, we apply the Morawetz estimate derived in [8] in the backwards direction. Note that the presence of trapped null geodesics along the photon sphere at r = 2M does not lead to a loss of derivatives in the analogue of (5.3). This is because the backwards estimates, in contrast with the forwards estimates (see Sect. 5.2), do not require an application of a Morawetz estimate with non-degenerate control at the photon sphere.
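Schematically, the forwards hierarchy of [21] for φ = rψ on Minkowski reads, for 0 < p ≤ 2 (good angular bulk terms suppressed),
\[
\int_{C_{u_2}} r^p (\partial_v \varphi)^2 \, \mathrm{d}\omega\, \mathrm{d}v \;+\; \int_{u_1}^{u_2}\!\!\int_{C_u} p\, r^{p-1} (\partial_v \varphi)^2 \, \mathrm{d}\omega\, \mathrm{d}v\, \mathrm{d}u \;\lesssim\; \int_{C_{u_1}} r^p (\partial_v \varphi)^2\, \mathrm{d}\omega\, \mathrm{d}v.
\]
Read from C_{u_2} towards C_{u_1}, i.e. backwards in time, the spacetime term moves to the wrong side of the inequality, which is the "bad sign" referred to above.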
Forwards r -weighted estimates revisited.
We consider again the setting of Minkowski to illustrate the main ideas. In order to construct a bijection from an r-weighted energy space on C_0 to a u-weighted energy space on I^+, we need to complement the backwards estimate (5.3) with a forwards estimate (5.4). Note that a standard application of the r^p-weighted estimates (combined with energy conservation (5.1) and a Morawetz estimate), see [21], is a statement of energy decay. One can apply this estimate along a suitable dyadic sequence and combine it with energy conservation (5.1) to arrive at an estimate with a loss of ε > 0 in the weights. In order to take ε = 0, we instead revisit the r^p-estimates and, rather than deriving energy decay along C_u, we observe that the r^p-estimates (together with (5.1) and a Morawetz estimate) directly provide control over the relevant spacetime integrals. After integrating by parts twice in u as in (5.2), we arrive at (5.4).
We arrive at an analogous estimate to (5.4) in the extremal Reissner-Nordström setting by following the same ideas, both near I + and near H + . The main difference is that whenever we apply a Morawetz estimate, we lose a derivative because of the trapping of null geodesics, which we have to take into account when defining the appropriate energy spaces.
Higher-order energies and time integrals.
Given suitably regular and suitably decaying scattering data on H^+ and I^+, we can apply Theorem 4.1 to construct a corresponding solution ψ ∈ C^0 ∩ W^{1,2}_loc (with respect to the differentiable structure on R) to (1.1) such that rψ approaches the scattering data as r → M or r → ∞.
In the setting of (1.1) on Minkowski with coordinates (u, x = 1/r), we first consider Tψ. By rearranging and rescaling (1.1) in Minkowski, we obtain an expression for T(rψ) in (u, x) coordinates. Since /Δ_{S^2} commutes with the operator □_g, both in Minkowski and in extremal Reissner-Nordström, we can immediately obtain /Δ_{S^2}(rψ) ∈ W^{1,2} from Theorem 4.1 (or its Minkowski analogue). Moreover, L(rψ) ∈ W^{2,2} follows from bounding a suitable integral uniformly in u. Hence, we have to establish control over improved r-weighted energies where rψ is replaced by L(rψ) and L^2(rψ). Analogous improved r-weighted energies have appeared previously in the setting of forwards estimates in [5,7,53]; see also the related energies in [49]. The backwards analogues of the corresponding improved r-weighted estimates form the core of the proof of Theorem 4.2.
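Concretely, in Minkowski the radiation field φ = rψ of a solution to the wave equation satisfies, in null coordinates u = t − r, v = t + r,
\[
\partial_u \partial_v \varphi = \frac{1}{4 r^2}\, \slashed{\Delta}_{S^2} \varphi,
\]
which is the equation behind the commutation properties used here; on extremal Reissner-Nordström the analogous equation for φ acquires an additional potential term proportional to D D' r^{-1} φ (cf. Sect. 6.2), which degenerates suitably at H^+.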
To pass from T(rψ) ∈ W^{2,2} to rψ ∈ W^{2,2}, we apply the above estimates to solutions ψ^{(1)} to (1.1) such that Tψ^{(1)} = ψ. Such solutions ψ^{(1)} can easily be constructed by considering initial scattering data that are time integrals of the scattering data on H^+ in v and on I^+ in u, assuming moreover that rψ^{(1)}|_{H^+} and rψ^{(1)}|_{I^+} vanish as v → ∞ and u → ∞, respectively.
In fact, we can show by an extension of the arguments above that T^n(rψ) ∈ W^{1+n,2}_loc for all n ≥ 2, assuming suitably regular and decaying data along H^+ and I^+, so we can conclude that ψ ∈ W^{n+1,2}_loc, provided the scattering data decay suitably fast in time. In order to obtain more regularity, we need faster polynomial decay along H^+ ∪ I^+. This is the content of Theorem 4.4. By considering smooth and superpolynomially decaying data along H^+ ∪ I^+ and applying standard Sobolev inequalities, we can in fact take n arbitrarily high and show that ψ ∈ C^∞(R̂); see Theorem 4.5.
Note that time integrals ψ^{(1)} also play an important role in [5,6] for spherically symmetric solutions. In that setting, one needs to solve an elliptic PDE (which reduces to an ODE in spherical symmetry) to construct ψ^{(1)}, in contrast with the backwards problem, where the construction is much simpler because we can integrate the scattering data in time to obtain data leading to ψ^{(1)}. While r-weighted estimates are still suitable in the forwards direction in D_{-u_0} and D_{-v_0}, they are not suitable in the backwards direction. We therefore consider energy estimates for the radiation field rψ with the vector field multiplier K = u^2 ∂_u + v^2 ∂_v, both in D_{-u_0} and D_{-v_0}, in order to arrive at the analogue of the p = 2 estimate. In Minkowski space, K corresponds to the generator of a conformal symmetry, the inverted time translations. It is a Killing vector field of the rescaled metric r^{-2}m, where m is the Minkowski metric. Hence, K may be thought of as the analogue of ∂_t when considering rψ instead of ψ and r^{-2}m instead of m. In particular, when considering K as a vector field multiplier in a spacetime region of Minkowski, one can obtain a weighted energy conservation law for rψ. Since r is large in D_{-u_0} in extremal Reissner-Nordström, K may be thought of as an "approximate Killing vector field" of the rescaled metric r^{-2}g.
Estimates
Another useful property of K is that it is invariant under the Couch-Torrence conformal symmetry [17], which maps D_{-u_0} to D_{-v_0}. It therefore plays the same role when used as a vector field multiplier for the radiation field in D_{-v_0} as it does in D_{-u_0}.
In order to obtain the analogue of the r^p-weighted estimate with p = 1 for Tψ, we apply instead a vector field multiplier with linear weights in u and v. To construct F_-, we first observe that the spacetime is invariant under the map t → −t, so the above discussion on F^{-1} can be applied to associate to each pair of data in C^∞_c(H^-) ⊕ C^∞_c(I^-) a solution ψ in D^-(Σ) such that (ψ|_Σ, n_Σψ|_{Σ∩{r_H≤r≤r_I}}) lies in a suitable energy space. We show that in fact (ψ|_{Σ_0}, n_{Σ_0}ψ|_{Σ_0∩{r_H≤r≤r_I}}) ∈ E_{Σ_0}, so we can apply (the extension of) F to obtain a pair of radiation fields in E_{H^+} ⊕ E_{I^+}.
Scattering and regularity in black hole interiors.
We derive estimates for the radiation field in M_int using once again the vector field K = u^2 ∂_u + v^2 ∂_v. Recall from Sect. 5.4 that the favourable properties of K as a vector field multiplier are related to its role as an approximate conformal symmetry generator near infinity and its invariance under the Couch-Torrence conformal symmetry. The equation for the radiation field takes the same form in M_int and M_ext near H^+ if one considers the standard Eddington-Finkelstein double-null coordinates in M̃_int and in M̃_ext. Therefore, K (now defined with respect to (u, v) coordinates in M̃_int) remains useful in the black hole interior. The usefulness of K in the interior of extremal black holes was already observed in [31,32,33].
The Forwards Evolution Map
In this section, we present the energy estimates in the forwards time direction that are relevant for defining the forwards evolution map F (see Sect. 6.5).
Preliminary estimates.
We make use of the following Hardy inequalities: Proof. See the proof of Lemma 2.2 in [7].
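The inequalities in question are one-dimensional Hardy inequalities of the following representative type (a sketch; the precise weighted versions are those of Lemma 2.2 in [7]): for p < 1 and smooth f compactly supported in [r_0, ∞),
\[
\int_{r_0}^{\infty} r^{p-2} f^2 \, \mathrm{d}r \;\le\; \Big(\frac{2}{1-p}\Big)^2 \int_{r_0}^{\infty} r^{p} (\partial_r f)^2\, \mathrm{d}r,
\]
which allows zeroth-order terms with weaker weights to be absorbed into first-order r-weighted energies.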
We denote, for a multi-index α = (α_1, α_2, α_3), the products Ω^α of the angular momentum operators Ω_i. We now state the following standard inequalities on S^2: Lemma 6.2 (Angular momentum operator inequalities). Let f : S^2 → R be a C^2 function. Then we can estimate the angular derivatives of f in L^2(S^2) by the corresponding norms of Ω^α f. Proof. See for example [8,9].
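The standard identities presumably intended here are, for the rotation generators Ω_i on the unit sphere,
\[
\sum_{i=1}^{3} (\Omega_i f)^2 = |\slashed{\nabla}_{S^2} f|^2, \qquad \int_{S^2} |\slashed{\nabla}^2_{S^2} f|^2 \, \mathrm{d}\omega \;\lesssim\; \sum_{|\alpha| \le 2} \int_{S^2} (\Omega^\alpha f)^2 \, \mathrm{d}\omega,
\]
which allow angular covariant derivatives to be exchanged for the commuting vector fields Ω^α in all L^2-based estimates.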
Radiation field at null infinity.
We now recall some regularity properties of the radiation field at null infinity, which do not immediately follow from Theorem 3.1 and are derived in [5]. Proof. By (1.1) we obtain the following equation for φ:
\[
\partial_u \partial_v \varphi = \frac{D}{4}\Big( r^{-2}\, \slashed{\Delta}_{S^2} \varphi - r^{-1} D' \varphi \Big),
\]
which implies (6.5) with n = 0. We obtain the case n ≥ 0 by induction.
Then for all k, l ∈ N_0 and α ∈ N_0^3, the corresponding limit at null infinity exists for all u ≥ 0 and defines a smooth function on I^+_{≥u_0}. Proof. The case k ≤ 1 follows from Section 3 of [7] by using (6.6). We obtain the case k ≥ 2 via an induction argument, where in the induction step we simply repeat the argument for k = 1 using instead the commuted equation (6.5). See also Proposition 6.2 of [5].
By combining Theorems 6.6 and 6.7 with Lemma 6.3 and applying the mean-value theorem along a dyadic sequence of times ("the pigeonhole principle"), one can obtain energy decay in time along the foliation Σ_τ; see for example [8,9] and [5] for an application of this procedure in extremal Reissner-Nordström.
In the present article, however, we will not apply the mean-value theorem, but rather derive uniform boundedness estimates for various time-integrated energies on the left-hand side (see Proposition 6.8). We will then use these time-integrated energy estimates to obtain estimates for energy fluxes along H^+ and I^+ with growing time weights inside the integrals (Corollary 6.10).
Proof. Note first of all that for all τ ≥ 0 τ2 τ1 Nτ where in the final inequality we applied Lemma 6.1 and (6.7), using that φ attains a finite limit at I + , by Proposition 6.5. Similarly, we have that We combine (6.12) and (6.13) together with (6.8) to obtain the estimate: We now apply (6.9) with k = 0 and p = 1 to obtain: (6.14) By Lemma 6.3 and (6.14), we immediately obtain also We integrate once more in τ and apply (6.9) with k = 0 and p = 2 to obtain (6.10). Equation (6.11) follows from (6.10) by applying Lemma 6.3 applied in the region D + ( τ ), together with (6.9) with p = 2 and k = 0.
The following simple lemma is crucial in order to bound energy norms along H + and I + with time-weights inside the integrals.
Proof. We integrate the left-hand side of (6.16) by parts, noting the structure of the resulting terms for n ≥ 1, and then keep integrating by parts to arrive at (6.16). Proof (of Corollary 6.10). First of all, by Theorem 5.1 from [5] it follows that for 0 ≤ j ≤ 2 the relevant qualitative statements hold. We can therefore apply Proposition 6.8 together with Lemma 6.9 with n = 2 to obtain the desired estimate for the j = 0 term. The j = 1 estimate follows by replacing φ with Tφ and applying (6.15) and Lemma 6.9 with n = 1. Finally, we obtain the j = 2 estimate by replacing ψ with T^2ψ and applying Lemma 6.3.
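The mechanism of Lemma 6.9 is the elementary Fubini/integration-by-parts identity (a sketch of the first step, with f ≥ 0 integrable):
\[
\int_{\tau_0}^{\infty} \int_{\tau}^{\infty} f(s)\, \mathrm{d}s\, \mathrm{d}\tau \;=\; \int_{\tau_0}^{\infty} (s - \tau_0)\, f(s)\, \mathrm{d}s,
\]
which converts uniform bounds on time-integrated energies (Proposition 6.8) into bounds on energy fluxes with a weight growing linearly in time inside the integral; iterating n times produces the (1 + τ)^n weights.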
We will complement (6.17) in Corollary 6.10 with an estimate involving additional angular derivatives. The motivation for this comes from the energy estimates in Sect. 8.1.
Higher-order estimates.
In this section we will derive the analogue of Corollary 6.10 for T n φ with n ≥ 1, but with stronger growing weights in u and v on the left-hand side (depending on n).
Proof. We will derive (6.18) by induction. Observe that the n = 0 case follows immediately from (6.8). Now, suppose (6.18) holds for all n = N . Then, by replacing T N ψ with T N +1 ψ (using that T commutes with the wave operator g ) and setting τ = τ 2N +2 , we have that Now, we apply the following identities and we integrate once more in τ to obtain: where we moreover applied Lemma 6.1 (together with a standard averaging argument near the boundaries) and Theorem 6.6 to control the lowest order derivative terms on the right-hand sides of (6.19) and (6.20). Now, apply (6.9) with k ≤ N + 1 and p = 2k + 1 when j = 0 and k ≤ N and p = 2k when j = 1, together with Lemma 6.2, to obtain Subsequently, apply (6.9) again, with k ≤ N + 1 and p = 2k + 2 when j = 0 and k ≤ N and p = 2k + 1 when j = 1.
Finally, since we are integrating two more times in τ compared to the n = N estimate, we can also include the corresponding additional terms on the left-hand side of the above estimate, to obtain (6.18) with n = N + 1.
We will complement (6.21) in Corollary 6.13 with an estimate involving additional angular derivatives. The motivation for this comes from the energy estimates in Sect. 8.2. Corollary 6.14. Let n ∈ N_0. Then there exists a constant C = C(M, r_H, r_I, n) > 0 such that (6.22) holds.
Construction of the forwards evolution map.
In this section, we will use the uniform estimates derived in Sects. 6.3 and 6.4 in order to construct the forward evolution map between suitable weighted energy spaces.
Definition 6.1. Define the forwards evolution map F as the following linear operator: the data on Σ_0 are mapped to (Mψ|_{H^+_{≥v_0}}, rψ|_{I^+_{≥u_0}}), where ψ is the unique solution to (1.1) with the given data (ψ|_{Σ_0}, n_{Σ_0}ψ|_{Σ_0∩{r_H≤r≤r_I}}). Then F extends uniquely to a bounded linear operator between the corresponding energy spaces. We moreover have that F_n = F|_{E^n_{Σ_0}}.
The Backwards Evolution Map
In this section we will construct a map from suitably weighted energy spaces on H + and I + to suitably weighted energy spaces on 0 . The construction will proceed in two steps. As a first step, we construct in Sect. 7.1 a map with the domain C ∞ c (H + ≥v 0 ) ⊕ C ∞ c (I + ≥u 0 ). In other words, we establish semi-global existence and uniqueness for the backwards scattering initial value problem.
In the second step, this will be promoted to global existence and uniqueness in Sect. 7.4 by using the global, uniform weighted energy estimates of Sect. 7.2 that are valid on the completion of C ∞ c (H + ≥v 0 ) ⊕ C ∞ c (I + ≥u 0 ) with respect to the associated energy norms.
7.1. Initial value problem with compactly supported scattering data. In this section we will associate to a pair ( , ) ∈ C ∞ c (H + ≥v 0 ) ⊕ C ∞ c (I + ≥u 0 ) a unique solution to (1.1) in D + ( 0 ) such that r · ψ| H + = and r · ψ| I + = . This association is central to the definition of the backwards evolution map (see Definition 7.1).
An analogous construction appears in [23] in the setting of sub-extremal Kerr. Note however that Proposition 7.1 establishes in addition qualitative bounds on the radiation field rψ and weighted higher-order derivatives thereof, in the form of the inequality (7.1), which will be necessary in the backwards-in-time estimates of Sect. 7.2.
2.) (Uniqueness) If
Proof of Proposition 7.1. Observe first of all that ψ_i is well-defined by local existence and uniqueness with smooth initial data on Σ_{τ_∞} ∪ {v = V_i}.
Apply the divergence theorem with J T in the region {r ≥ r I } bounded to the past by I v = {v = v} ∩ {u 0 ≤ u ≤ u ∞ } and 0 and to the future by I V i := {v = V i } ∩ {u 0 ≤ u ≤ u ∞ } and τ ∞ to obtain: which is equivalent to By applying the fundamental theorem of calculus in u, integrating from u = τ ∞ to u = u, together with Cauchy-Schwarz, we therefore obtain where we used that ψ i | τ∞ = 0, from which it follows that Now, we can use (7.2) and (6.5) with n = 0 together with the fundamental theorem of calculus in the u-direction to obtain Similarly, we can use (6.5) and Lemma 6.2 in a simple induction argument to conclude that for all n ∈ N we have in {r ≥ r I }: We can immediately apply the above argument to α φ and T k for any α ∈ N 3 0 , k ∈ N 0 , together with a standard Sobolev inequality on S 2 to obtain the following i-independent estimate: for all k ∈ N 0 and α ∈ N 3 0 , there exists a constant C(τ ∞ , u 0 ) > 0, such that |(r 2 L) n T k α φ i | 2 (u, v, θ, ϕ) We obtain a similar estimate in the region {r ≤ r H } by reversing the roles of u and v (integrating in the v-direction) and replacing r by (r − M) −1 : Given V > 0 arbitrarily large and n ≥ N , we have by (7.3) and (7.4) that for I ≥ 1 such that V I > V , φ i is uniformly bounded in i for all i ≥ I with respect to the C k norm on 6 We can extend the domain of φ to J + ( 0 )∩ J − ( τ ∞ ) as follows: we replace V above with V > V , applying Arzelà-Ascoli to the subsequence φ i k (starting from k suitably large) in the corresponding larger spacetime region and passing to a further subsequence. By uniqueness of limits, the resulting limit, which we note by φ has to agree with φ when v ≤ V .
We also have by (7.3) that for any > 0, there exist a V > 0 and K > 0, such that for all v ≥ V and k > K in the region {r ≥ r I }: We can analogously use (7.3) to obtain for all j, k, l ∈ N 0 : Furthermore, by replacing ψ by T l α ψ we can conclude that with respect to the differentiable structure inR, the restriction r ψ| I + is a smooth function on I + , satisfying r ψ| I + = . We can therefore conclude 1.) of the proposition. Now suppose ψ is another smooth solution to g ψ = 0, such that By a global T -energy estimate, we have that so ψ = ψ, which concludes 2.) of the proposition.
Backwards energy estimates.
In this section, we will derive estimates for the solutions ψ to (1.1) constructed in Proposition 7.1 that are uniform in τ_∞. This is crucial for constructing solutions with scattering data that are not compactly supported. The main tool we will develop in this section is a hierarchy of r-weighted estimates in the backwards time direction. However, we will first state a backwards Morawetz estimate that follows immediately from the results in [8], i.e. an analogue of Theorem 6.6 in the backwards time direction.
In the propositions below, we derive the "backwards analogues" of the hierarchies from Proposition 6.7. Proposition 7.3. Let 0 ≤ p ≤ 2. Then there exists a constant C(M, r_I, r_H) > 0 such that the corresponding backwards r^p-weighted estimate holds for all 0 ≤ τ_1 ≤ τ_2 ≤ τ_∞. Proof. Recall that φ satisfies equation (7.7). By reordering the terms, we obtain (7.8). Let χ denote a cut-off function and consider χφ.
We integrate both sides of (7.8) in spacetime to obtain: where we applied Lemma 6.1 and (7.5) to arrive at the inequality above. See also the derivations in the proof of Lemma 6.3 in [5] in the special case n = 0. We can repeat the above steps in the region where r ≤ r H by reversing the roles of L and L and replacing r p with (r − M) − p ; see the proof of Lemma 6.3 in [5] for more details.
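In the Minkowski model of Sect. 5.1, the pointwise identity behind (7.8) is (recalling ∂_u r = −½ and ∂_u∂_vφ = (4r^2)^{-1} /Δ_{S^2}φ, with φ = rψ):
\[
\partial_u \big( r^p (\partial_v \varphi)^2 \big) \;=\; -\frac{p}{2}\, r^{p-1} (\partial_v \varphi)^2 \;+\; \frac{1}{2}\, r^{p-2}\, \partial_v \varphi\, \slashed{\Delta}_{S^2} \varphi;
\]
integrating in spacetime and integrating the angular term by parts on S^2 produces the weighted fluxes and the signed bulk term, whose sign relative to the time direction is what distinguishes the forwards from the backwards hierarchy.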
We subsequently apply Proposition 7.3 to arrive at uniform weighted energy estimates along 0 . (7.10) We moreover have that Proof. By applying Lemmas 6.1 and 6.3, it follows that We now apply (7.6) with p = 1, together with (7.12) to conclude that Next, apply (7.6) with p = 2 to obtain We apply Lemma 6.9 to rewrite the right-hand side above to arrive at: (7.13) which leads to (7.10) when we take τ = 0. By applying the above estimates to T ψ and T 2 ψ we moreover obtain: We conclude the proof by combining the above proposition with Lemma 6.3 to obtain Remark 7.2. Note that in contrast with the estimates in Proposition 6.8, there is no loss of derivatives (caused by the application of (6.8)) on the right-hand side of (7.10).
We will complement (7.14) in Proposition 7.4 with an estimate involving additional angular derivatives. The motivation for this comes from the energy estimates in Sect. 8.1.
Higher-order estimates.
By commuting (7.7) with L k , we arrive at Similarly, we can commute (7.7) with L k to obtain: Proof. The proof is a straightforward generalisation of the proof of Proposition 7.3: we repeat the steps in the proof of Proposition 7.3, but we replace φ with either L k φ (when {r ≥ r I }) or L k φ (when {r ≤ r H }), and we use (7.15) and (7.16).
Proposition 7.7. Let n ∈ N_0 and let ψ be a solution to (1.1) such that ψ|_{Σ_{τ_∞}} = 0 and n_{Σ_{τ_∞}}ψ|_{Σ_{τ_∞}} = 0 for some τ_∞ < ∞. Then there exists a constant C(M, r_I, r_H, n) > 0 such that (7.21) holds. Proof. We first consider the n = 1 case. Note that by (7.6) with k = 1 and p = 3, combined with (6.19), (6.20), (7.13) and Lemma 6.9, we obtain (7.20). Now, we apply (7.6) with k = 1 and p = 4. By replacing φ on the left-hand side of (7.20) with T^jφ and applying Proposition 7.4 to T^mΩ^αφ, we therefore obtain the desired estimate, where we applied Proposition 7.4 and Lemma 6.9 to arrive at the final inequality. The general n case now follows easily via an inductive argument, where we apply (7.6) with k = n and p = 2n + 1 and p = 2n + 2. Proposition 7.7 combined with Lemma 6.3 immediately implies the following corollary. We will complement (7.21) in Corollary 7.8 with an estimate involving additional angular derivatives. The motivation for this comes from the energy estimates in Sect. 8.2. Proof. From Proposition 7.1 it follows that ψ|_{Σ_0} ∈ C^∞(Σ_0) and n_{Σ_0}ψ|_{Σ_0∩{r_H≤r≤r_I}} ∈ C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}). The remaining statement follows from Lemma 6.3.
Using Proposition 7.10, together with the standard general construction of the unique extensions of bounded linear operators to the completion of their domains, we can define the backwards evolution map as follows. Definition 7.1. The backwards evolution map is the map B sending scattering data to (ψ|_{Σ_0}, n_{Σ_0}ψ|_{Σ_0∩{r_H≤r≤r_I}}), where ψ is the unique solution to □_gψ = 0 with (Mψ|_{H^+_{≥v_0}}, rψ|_{I^+_{≥u_0}}) equal to the given data. The map B uniquely extends to a unitary linear operator, which we will also denote by B. In the proposition below, we show that we can consider restrictions of B to suitably weighted energy spaces. Proposition 7.11. Let n ∈ N_0. The backwards evolution map B is a bounded linear operator from C^∞_c(H^+_{≥v_0}) ⊕ C^∞_c(I^+_{≥u_0}) to E^n_{Σ_0}, which can uniquely be extended as the bounded linear operator B_n : E^n_{H^+_{≥v_0}} ⊕ E^n_{I^+_{≥u_0}} → E^n_{Σ_0}. We moreover have that B_n = B|_{E^n_{H^+_{≥v_0}} ⊕ E^n_{I^+_{≥u_0}}}. Proof. By Proposition 7.1, the solution ψ corresponding to smooth, compactly supported scattering data has the required regularity. By Corollary 7.8 it follows moreover that ||B|| ≤ C. We can infer that, in particular, (ψ|_{Σ_0}, n_{Σ_0}ψ|_{Σ_0}) ∈ E^n_{Σ_0}. The map B extends uniquely to the completion. If the scattering data lie in E_{H^+} ⊕ E_{I^+}, then the corresponding solution ψ to (1.1) satisfies ψ|_{Σ_0} ∈ C^∞(Σ_0) and n_{Σ_0}ψ|_{Σ_0} ∈ C^∞(Σ_0 ∩ {r_H ≤ r ≤ r_I}), and hence F(ψ|_{Σ_0}, n_{Σ_0}ψ|_{Σ_0}) = (φ|_{H^+}, φ|_{I^+}) is well-defined and equals the given scattering data. We conclude that F ∘ B = id on a dense subset. By boundedness of F ∘ B, we can conclude that F ∘ B = id on the full domain. Hence, F must be surjective and in fact bijective (we have already established injectivity). It immediately follows then that B ∘ F = id. The above argument can also be applied to F_n and B_n.
The Scattering Map
The aim of this section is to extend the estimates of Sects. 6 and 7 from the hypersurface 0 to the hypersurface . This will allow us to construct the scattering map S, a bijective map between (time-weighted) energy spaces on H − ∪ I − and H + ∪ I + . The estimates in this section will therefore concern the "triangular" regions bounded to the future by the null hypersurfaces N 0 and N 0 and to the past by = {t = 0}.
Weighted energy estimates near spacelike infinity.
In the proposition below we derive energy estimates with respect to the vector field multiplier K = v 2 L + u 2 L, which is commonly referred to as the Morawetz conformal vector field. 7 The main purpose of K is to derive backwards energy estimates along with r -weighted initial data along N −u 0 and N −v 0 which are analogous to the r -weighted boundary terms in the estimates in Proposition 7.3 with p = 2.
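For orientation, the following display is a sketch of the flat-space analogue of this multiplier, under the assumed double null notation u = t − r_*, v = t + r_*, L = ∂_v, L̄ = ∂_u; it is not one of the paper's estimates, but it records schematically why K produces the growing weights mentioned above:

```latex
% Morawetz conformal multiplier in the flat-space model (assumed notation).
K \;=\; v^{2} L + u^{2} \underline{L},
\qquad
J^{K}[\psi]\cdot n_{\Sigma} \;\sim\; v^{2}(L\psi)^{2}
  + u^{2}(\underline{L}\psi)^{2} + (u^{2}+v^{2})\,\big|\slashed{\nabla}\psi\big|^{2}.
```

The v²- and u²-weighted flux terms are, schematically, what match the r-weighted boundary terms of Proposition 7.3 with p = 2.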
Proof. By (6.6) it follows that After integrating by parts on S 2 , we therefore obtain: We first consider estimates in the backwards time direction. We integrate (8.3) in spacetime and we use the following identity: Using that r ∼ v + |u| v in the integration region, we can further estimate: for > 0 arbitrarily small given r I > 0 suitably large (and v −1 r −1 in the integration region). Note that we can absorb the very right-hand side above into the left-hand side of (8.5) when > 0 is suitably small. We apply Young's inequality to estimate We absorb the spacetime integral of (Lφ) 2 and (Lφ) 2 to the left-hand side of (8.5), using that r is suitably large and (v + |u|) r in the integration region. In order to absorb the φ 2 term, we first observe that by assumption, we are considering φ such that φ| I + is well-defined and is compactly supported in u > u −∞ , so Therefore, by Cauchy-Schwarz, we can estimate Furthermore, similarly we have that Hence, so we can estimate: with > 0 suitably small given r I suitably large. As a result, we obtain We integrate (8.3) and apply (8.6) to obtain: Analogously, we have that and so that we can estimate Using that (r − M) −1 ∼ u + |v| u, we estimate further: for > 0 arbitrarily small given r H − M > 0 suitably small. Note that we can absorb the very right-hand side above into the left-hand side of (8.9) when > 0 is suitably small. We apply Young's inequality to estimate and absorb the corresponding spacetime integral to the left-hand side of (8.9), using that which follows from Cauchy-Schwarz combined with the assumption that φ| H and hence, We now consider the forwards time direction. First of all, we are assuming compact support on 0 ∩ {v r I ≤ v ≤ −u −∞ }, so for |u −∞ |, |v −∞ | suitably large, we have that φ vanishes along N −u −∞ , N −v −∞ , I + ∩ {u ≤ u −∞ } and H + ∩ {v ≤ v −∞ }, by the domain of dependence property of the wave equation.
We then apply the estimates (8.6) and (8.10) to obtain: for a suitably small positive constant c > 0.
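The absorption arguments above repeatedly invoke the weighted Young's inequality; for the reader's convenience, the standard form being used (with ε the smallness parameter from the proof) is:

```latex
ab \;\le\; \epsilon\, a^{2} \;+\; \frac{1}{4\epsilon}\, b^{2},
\qquad a, b \ge 0,\ \epsilon > 0,
```

so each cross term splits into a piece that is absorbed into the left-hand side (for ε suitably small) and a remainder controlled by the data.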
We complement Proposition 8.1 with estimates involving lower weights in r , u and v, applied to T φ rather than φ. The r -weighted energies along N −u 0 and N −v 0 appearing in the proposition below appear as energy flux terms in Proposition 7.3 with p = 1.
We can moreover replace φ with α φ in the above estimates, with |α| ≤ 1, due to the commutation properties of i and g . By (6.6) it follows that After integrating by parts on S 2 , we therefore obtain: Hence, after integrating (8.15) in spacetime, the | / ∇ S 2 T φ| 2 term on the right-hand side will have a good sign if we consider forwards-in-time estimates and a bad sign if we consider backwards-in-time estimates.
In the backwards-in-time case, we use that T = ∂ u + ∂ v and t = 1 2 (v − |u|) and |u| + v r in the integration region, together with Lemma 6.2 to estimate: where we arrived at the last inequality by applying Lemma 6.3. Note that in this step we needed to use that our solution to (1.1) is a time derivative, i.e. it is of the form T ψ!
We moreover apply Young's inequality to estimate We can absorb the spacetime integrals of the terms on the very right-hand side into the following flux terms: Integrating the identity (8.15) in u and v and applying the above estimates therefore gives the following inequality: (8.16) and hence, using (8.15) and the above estimate once more, now in combination with (8.16), we arrive at We repeat the above arguments near H + by considering and reversing the roles of u and v and L and L, in order to obtain the near-horizon estimate in the backwards time direction. We omit further details of this step. Now, we consider the forwards time direction. By repeating the arguments above in the forwards time direction, using that ψ and n ψ are initially compactly supported and taking |u −∞ | and |v −∞ | appropriately large, we obtain moreover that Note that, in contrast with the backwards-in-time estimates, there is no need for an additional angular derivative in the T -energy term on the right-hand side. The analogous estimate near H + proceeds by repeating the above arguments, interchanging the roles of u and v and replacing r by (r − M) −1 .
8.2.
Higher-order estimates. The aim of this section is to derive analogues of the estimates in Proposition 8.1 for higher-order derivatives of ψ (with additional growing weights). The key vector field that plays a role in this step is S = uL + vL. This vector field is also called the scaling vector field because it generates the scaling conformal symmetry in Minkowski. Even though the exact symmetry property is lost in extremal Reissner-Nordström, we will see below that the vector field still has favourable commutation properties with the operator L L.
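As a sanity check on these commutation properties, consider the flat-space model in which L = ∂_v, L̄ = ∂_u and the relevant operator reduces to ∂_u∂_v (an illustrative computation, not a statement about extremal Reissner-Nordström itself):

```latex
S = u\,\partial_u + v\,\partial_v, \qquad
[\partial_u, S] = \partial_u, \quad [\partial_v, S] = \partial_v
\;\;\Longrightarrow\;\;
[\partial_u \partial_v,\, S] = 2\,\partial_u \partial_v .
```

Hence ∂_u∂_v(Sφ) = (S + 2) ∂_u∂_v φ: commuting with S reproduces the operator up to a constant multiple of itself, which is the favourable structure that Lemma 8.4 generalises (with additional D(r)-dependent error terms) to the curved setting.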
Lemma 8.4. Let n ∈ N 0 and S = uL + vL. Then

Proof. We will derive (8.19) and (8.20) inductively. Note that (8.19) and (8.20) hold for n = 0 by (6.6). Now assume (8.19) and (8.20) hold for n = N with N ≥ 0. Note first of all that for an arbitrary C 2 function f : For any p ≥ 0 we have that: Furthermore, we can expand S(Dr and we obtain, using the above observations and applying (8.19) with n = N : Hence, we can conclude that (8.19) must hold for all n ∈ N 0 . It follows analogously that (8.20) must hold for all n ∈ N 0 .
Since the vector field S does not commute with g , we do not immediately obtain Lemma 6.3 for S n ψ replacing ψ, with n ∈ N. However, we show in Proposition 8.5 that, when considering φ instead of ψ, an equivalent energy boundedness statement holds.
Proof. We establish the estimate (8.21) inductively. We prove the n = 0 case first and then assume that (8.21) holds for 0 ≤ k ≤ n − 1 in order to prove the k = n case. We will in fact do both of these steps at the same time in the argument below. By Lemma 8.4, we have that We subsequently integrate both sides of (8.23) in u, v and S 2 and we apply Young's inequality to absorb all the spacetime integrals either into the corresponding boundary integrals as in the proof of Proposition 8.1, or (if n ≥ 1) also into the left-hand sides of the estimates contained in (8.21) with 0 ≤ k ≤ n − 1.
Proposition 8.6. Let n ∈ N 0 . There exist constants c,

Proof. We can apply the same arguments as in Proposition 8.1, replacing φ by S k φ, with 0 ≤ k ≤ n and applying the more general equations (8.19) and (8.20) instead of (6.6) to obtain: We conclude the proof by rewriting S k φ in terms of u and v derivatives and we moreover apply Lemma 8.4 to rewrite all mixed u and v derivatives. Furthermore, we apply Lemma 6.2 to replace the angular derivatives by derivatives of the form α .
Proof. We repeat the arguments in the proof of Proposition 8.2, applying the equations in Lemma 8.4 that introduce additional terms, which can be absorbed straightforwardly. Furthermore, rather than using Lemma 6.3, we apply Proposition 8.5 where necessary. We then obtain: We conclude the proof by replacing the S k derivatives by u and v derivatives with weights in |u| and |v|, and moreover applying Lemma 8.4 to rewrite all mixed u and v derivatives in terms of pure u or v derivatives, angular derivatives and lower-order derivatives.
Proof. Follows immediately after combining the results of Propositions 8.6 and 8.7.
By commuting g additionally with T and applying Lemma 6.3, we arrive at energy estimates along N u 0 and N v 0 (rather than N −u 0 and N −v 0 ) with the same weights and number of derivatives as the energy fluxes that appear in Corollaries 6.13 and 7.8. It follows immediately that ψ is a uniquely determined smooth solution to (1.1), such that lim v→∞ r ψ(u, v, θ, ϕ) = (u, θ, ϕ) and Mψ| H + = .
Proposition 8.10. Let ( , ) ∈ (C ∞ c ( )) 2 . Then the corresponding solution ψ to (1.1) satisfies and furthermore, the following identity holds

Proof. Follows from Lemma 6.3 and Proposition 6.15 (combined with an analogue of Proposition 6.15 in the past-direction, making use of the time-symmetry of the spacetime).
Definition 8.2.
Define the evolution maps F ± : (C ∞ c ( )) 2 → E T H ± ⊕ E T I ± as the following linear operator: where ψ is the unique solution to (1.1) with (ψ| , n ψ| ) = ( , ). Then F ± extends uniquely to a linear bounded operator, also denoted F ± :

Proposition 8.11. Let n ∈ N 0 . Then for all n ∈ N 0 F ± ((C ∞ c ( )) 2 ) ⊆ E n;H ± ⊕ E n;I ± , (8.32) and F ± can uniquely be extended as the following bounded linear operator F n;± : E n; → E n;H ± ⊕ E n;I ± .
We moreover have that F n;± = F ± | E n; .
Proof. Without loss of generality, we restrict our considerations to F + . We choose 0 so that 0 ∩ {r H ≤ r ≤ r I } = ∩ {r H ≤ r ≤ r I }.
We then apply the bounded operator F n from Corollary 6.16 to arrive at (8.32). The extension property follows immediately from the uniform boundedness of F + with respect to the desired norms.

Proof. By applying the fundamental theorem of calculus, we have that for suitably large r * > 0 so ψ| (r, θ, ϕ) → 0 as r → ∞. By considering r * < 0 with |r * | suitably large, we can conclude analogously that ψ| (r, θ, ϕ) → 0 as r ↓ M. The energy conservation statement simply follows from applying Lemma 6.3. where ψ is the corresponding unique solution to (1.1) as defined in Definition 8.1. Then B ± extends uniquely to a linear bounded operator, also denoted B ± :

Proposition 8.13. The linear operator F ± : E T → E T H ± ⊕ E T I ± is bijective with B ± = F −1 ± .

Proof. Follows by the same arguments as in the proof of Proposition 7.12.

Proposition 8.14. Let n ∈ N 0 . Then for all n ∈ N 0 B ± (C ∞ c (H ± ) ⊕ C ∞ c (I ± )) ⊆ E n; , (8.33) and B ± can uniquely be extended as the following bounded linear operator B n;± : E n;H ± ⊕ E n;I ± → E n; .
We moreover have that B n;± = B ± | E n; and B n;± = F −1 n;± .
Using that (r − M) −1 ∼ v + |u| in M int ∩ D + ( 0 ∪ N int and it follows analogously that is well-defined. The estimate (9.1) then follows by combining the above estimates.
Proposition 9.2. Let u int < 0 with |u int | suitably large. Let S int : be defined as follows: ).
Then S int extends uniquely as a bijective, bounded linear operator:

Proof. The construction of S int and its inverse, on a domain of smooth, compactly supported functions, follow immediately from the estimates in the proof of Proposition 9.1, where r ψ| CH + and we apply the estimates of Proposition 9.1, replacing ψ with T j ψ, j = 0, 1, to arrive at (9.3). We obtain (9.4) by appealing additionally to Corollary 6.10.
Remark 9.1. One can easily extend the estimate in Corollary 9.3 to smaller values of |u int | (provided r > r min > 0 in the spacetime region under consideration), by applying a standard Grönwall inequality.
Application 1: Regularity at the Event Horizon and Null Infinity
As an application of the maps B n constructed in Proposition 7.11, we show that arbitrarily regular solutions can be associated to suitably polynomially decaying scattering data along H + and I + . First of all, we will show that by considering T k ψ, rather than ψ, we obtain higher regularity near H + and I + .
Before we address these regularity properties, we will relate the differential operators

Proof. The identities can be obtained inductively by applying (7.7) and commuting L L with r 2 L and r 2 L. See Lemma 6.1 in [5] for more details.

Then we have that the corresponding solution ψ to (1.1) satisfies T n (r ψ) ∈ W n+1,2 loc ( R). | 2019-10-17T15:37:14.000Z | 2019-10-17T00:00:00.000 | {
"year": 2020,
"sha1": "9dbd812b07548398ed680d0ed921d3d7f5548516",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00220-020-03857-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4bef7c59cdf5df463e3b6d94aa56e9c4e1237af2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics",
"Medicine"
]
} |
265336945 | pes2o/s2orc | v3-fos-license | Geographical Variation in Body Size in the Asian Common Toad (Duttaphrynus melanostictus)
Geographic variation in the life-history traits of organisms, and the mechanisms underlying such adaptation, are central questions in evolutionary biology. This study investigated age and body size of the Asian common toad (Duttaphrynus melanostictus) among five populations along a geographical gradient. We found that geographical variation in age was non-significant among populations, but there was a significant positive correlation between mean age and body size. Although the body size values at 1043 m differed markedly from those at the other sites, after controlling for age effects there was a significant positive correlation between altitude and body size. Our findings follow the predictions of Bergmann's rule, suggesting that the body size of D. melanostictus is potentially influenced by the low air temperatures at higher altitudes.
Body size is an important life-history trait that affects the success of ecological interactions and, ultimately, reproductive fitness [27,28]. Body size variation results from changing growth rates, shifting age structure, or a combination of the two, which affects population-, community- and ecosystem-level dynamics [29,30]. There is evidence that a small body disadvantages males attempting to obtain a mate during competitive interactions within a species [31,32]. Moreover, a substantial reduction in body size renders populations more vulnerable to collapse, as diminished body size markedly reduces both fecundity and survival in the natural environment [33].
Because there are significant changes in body size across temporal and spatial scales, extensive studies have been conducted to identify factors contributing to body size variation, with a particular focus on environmental temperature [18][19][20][27]. Indeed, temperature affects metabolism, which in turn influences body size through energy allocation to growth, activity and reproduction [34]. In terms of geographic variation in body size, Bergmann's rule states that larger sizes are often attained in colder than in warmer environments, within or among species. Bergmann's rule has been largely supported in homeothermic species. These patterns hold true in endotherms based on a positive correlation between basal metabolic rate and body mass [35]. In particular, the explanation for Bergmann's rule is that larger-bodied animals possess a smaller surface-to-volume ratio, making heat conservation more effective [36]. However, larger individuals also tend to lose more total energy to the environment than smaller individuals, which complicates the generality of Bergmann's rule [37]. Numerous researchers have used different animal groups to test this hypothesis; some results show that endothermic animals follow Bergmann's rule [38][39][40][41][42][43], while others follow its inverse [44][45][46].
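To make the surface-to-volume argument concrete (a textbook computation added for illustration, not taken from the cited sources), consider a sphere of radius r and isometric scaling of body mass M:

```latex
\frac{S}{V} \;=\; \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} \;=\; \frac{3}{r},
\qquad
S \propto M^{2/3},\; V \propto M
\;\;\Longrightarrow\;\;
\frac{S}{V} \propto M^{-1/3}.
```

The relative heat-dissipating surface therefore shrinks as body size grows, favouring heat conservation in larger-bodied animals.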
Anurans are an ideal model for studying geographic variation in body size because their body temperature depends tightly on environmental temperature [29]. Intraspecific studies provide evidence for an effect of temperature on body size in frogs [27,46]. However, there is much controversy over how environmental temperature influences body size variation in frogs. For instance, some species living at lower temperatures in high-altitude and/or high-latitude areas have larger body sizes than conspecifics living at higher temperatures in low-altitude and/or low-latitude areas, following Bergmann's rule [27,47,48], while other species show the opposite pattern [13,49]. Moreover, age, as one of the basic life-history traits, affects body size variation in an organism. Organisms in harsh environments typically allocate more energy and time to growth, so that individuals become older and attain larger bodies [39,50,51]. Indeed, most species display positive correlations between age and body size [13,42].
The Asian common toad (D. melanostictus) is one of the most prevalent anurans in Asia [52]. This species grows and breeds in various habitats, including marshes, puddles, man-made ponds and temporary ponds, making it an ideal model for studying body size variation. Its most distinctive features are the black bony spines protruding from its snout. With strong adaptability, this species can thrive across a range of altitudes from 10 to 2000 m, and it breeds between March and August [53]. Here, we investigated patterns of variation in age and body size in D. melanostictus among five populations along a geographical gradient. Altitude and latitude have different implications: altitudinal shifts involve not only temperature but also exposure to sun and oxygen availability, whereas latitudinal shifts relate more to day length and temperature [54][55][56][57]. We first examined differences in age across altitudes and latitudes. We then explored the relationships between body size and altitude or latitude among populations, controlling for the age effect, to verify whether Bergmann's rule applies to this species; we predicted an increase in body size with increasing altitude or latitude.
Study Species
The Asian common toad is a medium-sized anuran, with females having a larger body size than males. The toad is widely distributed in southern China, where it shelters in holes in the ground during the day and forages for insects at night [58]. It is a lekking species: during the breeding season, males wait for females in pools at night, and upon the females' arrival at the breeding pools, males promptly approach them and engage in clasping behavior [52]. Females usually lay a large number of eggs, ranging from 2500 to 4000, at the spawning site. Poison glands on the dorsal skin can be used to deter predators such as snakes [53].
Sample Site
Fieldwork was conducted from April to August between 2018 and 2020. We studied five D. melanostictus populations: Midu, Mouding and Pingbian in Yunnan province, Yuanling in Hunan province and Pingjiang in Guizhou province. These study sites differ markedly in altitude, mean temperature and mean precipitation [52]. The three sites in Yunnan comprised Midu at an altitude of 1673 m, Mouding at an altitude of 1771 m and Pingbian at an altitude of 1043 m [52]. The vegetation at these three sites was characterized by Taxus chinensis, Anoectochilus pingbianensis, Rhododendron platyphyllum, Cheirostylis pingbianensis, Rungia pinpienensis, and Ophiorrhiza pingbienensis. The Pingjiang population was found along paddy fields at an altitude of 275 m. The sampling area was about 100 m long and 60 m wide. The vegetation at this site was characterized by Begonia rongjiangensis and Alsophila spinulosa. For the Yuanling population, the paddy fields were located at an altitude of 275 m, and we found Cephalotaxus oliveri, T. chinensis and Manglietia patungensis in the area.
Sample Collection
We randomly captured 116 males and 37 females along sampling lines (length 500 m; width 10 m) at night, using flashlight illumination, during the breeding season in the five populations (Table 1). The museum numbers of toads from the different collecting sites are shown in Supplementary Materials Table S1. We identified the toads as D. melanostictus on the basis of their morphological characteristics and body color [53]. We determined sex from secondary sexual characteristics (e.g., eggs in females and nuptial pads in males [52]). We euthanized all specimens by single-pithing and then preserved them in 4% phosphate-buffered formalin for tissue fixation. Toes were clipped after the specimens were euthanized. All individuals, each with a specific museum number, were stored in a public natural history collection at China West Normal University. The specimens used in this study were collected with permission from the China West Normal University Ethical Committee for Animal Experiments (CWNU-20001).
Body Size Measurement
The body mass of each individual was measured using an electronic balance with an accuracy of 0.01 g. We then used a vernier caliper to measure the snout-vent length (SVL) of each individual with an accuracy of 0.01 mm [58,59]. Each toad was measured twice, by two different people, to minimize error. We clipped the longest phalanx from the right hindlimb of each individual and stored it in 4% neutral buffered formalin in order to determine age structure using skeletochronology [36].
Age Determination
Skeletochronology is the most commonly used method for determining the age of amphibians [36,[59][60][61][62][63][64]. We used the paraffin section method and Harris's haematoxylin stain to obtain histological sections of the phalanges [65]. We first removed the skin and muscle from each digit and washed the remaining bone in water for two hours. We then decalcified the bones in 5% nitric acid for 48 h, washed the phalanges in running tap water for 24 h, and stained them in Harris's haematoxylin for 150 min. After that, we dehydrated the stained phalanges through successive ethanol stages of 70%, 80%, 95% and 100%, for approximately one hour each. We embedded the phalanges in small paraffin blocks and sectioned them. We then selected cross-sections of about 13 µm thickness with the smallest medullar cavity and mounted them on glass slides. Lines of arrested growth (LAGs) form in the bones through a cycle of growth and a period of dormancy during hibernation [36]; the number of rings thus relates directly to age, like the rings in trees. We counted LAGs under a light microscope and treated the count as the age of the toad, because the toads are exposed to distinct temperature cycles through the year. We assessed endosteal resorption of the first LAG on the basis of the Kastschenko line (KL), the interface between the periosteal and endosteal zones [36].
All individuals from the five populations had distinct LAGs. Very closely spaced haematoxylin lines (double LAGs) were not found in any individual. A false LAG was observed in one individual but did not affect the age estimate. We did not find endosteal resorption in the sections from males or females. Age ranged from 1 to 4 years in males and from 1 to 7 years in females.
Statistical Analysis
We used the R package 'lme4' in R version 4.3.0 [66] to analyze the data. Since there was only one male individual in the Pingbian population, it was not considered in the statistical analysis. Before analysis, we applied a log10 transformation to the SVL and body mass data to conform to the normality assumption. We first used GLMMs treating age as the dependent variable, altitude or latitude as a fixed factor, sex as a covariate and population as a random factor to analyze geographical variation in age [67]. We then used GLMMs with SVL and body mass as dependent variables, altitude or latitude as a fixed factor, age and sex as covariates, and population as a random factor to test the effect of altitude or latitude on body size variation.
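A minimal sketch of the models described above is given below, assuming a data frame toads with columns svl, mass, age, sex, altitude and population (the column names are hypothetical, as the authors' scripts are not provided). The paper refers to GLMMs; with a Gaussian response after log10 transformation, lme4's lmer() fits the equivalent model:

```r
# Minimal sketch of the mixed models described above (hypothetical column
# names; not the authors' original script).
library(lme4)

toads$log_svl  <- log10(toads$svl)    # log10 transform for normality
toads$log_mass <- log10(toads$mass)

# Geographical variation in age: altitude (or latitude) as fixed factor,
# sex as covariate, population as random intercept.
m_age <- lmer(age ~ altitude + sex + (1 | population), data = toads)

# Body size vs. altitude, controlling for age and sex.
m_svl  <- lmer(log_svl  ~ altitude + age + sex + (1 | population), data = toads)
m_mass <- lmer(log_mass ~ altitude + age + sex + (1 | population), data = toads)

summary(m_svl)  # fixed-effect estimates; p-values require e.g. lmerTest
```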
Results
The numbers of female and male samples collected in the five populations are shown in Table 1, together with the mean body size of males and females in each population. In populations with sufficient numbers of both sexes, such as Midu and Pingjiang, the body size of males and females was almost the same. Notably, the mean body size of females in the Pingbian population was 72.82 mm, which substantially exceeded the values at the other sites.
Geographical Variation in Age
The GLMMs showed a non-significant difference in age among populations and between males and females (Table 2). The effects of altitude and latitude on age were likewise non-significant (Table 2).
Geographical Variation in Body Size
The GLMMs revealed a positive effect of age on body size in D. melanostictus across populations (Table 3; Figure 1A,B). We therefore controlled for age effects when analyzing the geographical variation in SVL and body mass across populations. The GLMMs indicated that body size was significantly positively correlated with altitude, with the Pingbian population having the largest body size (Table 3; Figure 2A,B). However, there was a non-significant correlation between body size and latitude (Table 3). The interaction between altitude and latitude on body size was non-significant (both p > 0.05).
Discussion
Our findings indicate non-significant variation in age across altitudes or latitudes and a positive effect of age on body size. In exploring the effects of altitude on the body size of D. melanostictus, we found that in populations with sufficient numbers of both sexes, the body size of males and females was almost the same. However, essentially only females were sampled at 1043 m, and body size at this site was significantly greater than at the sites at higher altitudes. The mean age of D. melanostictus at 1043 m was also higher than at the higher-altitude sites, which may explain the larger body size there. Although this site was a clear outlier, after controlling for the age effect there was a positive effect of altitude on body size, following Bergmann's rule. This pattern suggests that low environmental temperature at high altitude results in larger body size across populations. In the following, we discuss our findings in relation to previous studies on age and body size variation across populations.
Skeletochronology is widely used to determine the individual age of amphibians [65,[68][69][70][71][72][73][74][75]. We confirmed that skeletochronology can determine the age structure of D. melanostictus, although a false line was found in one individual. In general, the mean age of individuals increases with altitude or latitude among anuran populations [76,77]. The life-history hypothesis suggests that high-altitude populations invest more energy in growth by delaying the age at sexual maturity [41]. Indeed, a previous study found that the mean age of the Andrew's toad (Bufo andrewsi) at high altitudes and/or latitudes was significantly higher than at low altitudes, because the lower environmental temperature at high altitudes leads to a later age at sexual maturity and a longer lifespan [36]. In this study, the non-significant difference in the mean age of D. melanostictus across altitudes and latitudes was associated with the fact that the length of the breeding season did not differ markedly among populations. This is similar to results showing a non-significant difference in the mean age of the Andrew's toad across altitudinal gradients [36].
Geographical variation in body size has long attracted evolutionary ecologists because it is an important problem in life-history strategy [27,77]. Life-history theory states that age can be an important contributor to adult body size [38]. Variation in body size along environmental gradients in ectotherms has been explored with a focus on Bergmann's rule [78,79]. Hence, investigating body size variation in association with life-history traits can help clarify the proximate causes of geographic clines in body size. Age and growth rate are two basic life-history traits in animals under natural selection. Cold temperatures and limited food availability at high latitudes and/or altitudes are expected to select for slower growth rates, delayed sexual maturity and greater longevity [36]. Consequently, individuals in harsh environments should allocate more time and energy to growth, so that they attain larger body sizes [49]. Herein, although age did not vary consistently with altitude or latitude in the toad, high-altitude populations had larger body sizes than low-altitude populations. When controlling for the age effect, altitudinal variation in body size followed Bergmann's rule, driven by population-level adaptation to different thermal environments. This pattern suggests that larger individuals can store more energy to adapt to cold and highly variable environments, so that they live longer and improve their survival rate in adversity. Our findings are consistent with previous studies on geographic variation in body size in frog species conforming to Bergmann's rule, which show that larger body size is observed at lower ambient temperatures, in addition to longer hibernation periods and shorter activity time at high altitude [36,49]. By contrast, some studies have shown that the body size of anuran species gradually decreases with increasing altitude or latitude [13,50,80].
A previous study showed that adult body size is associated not only with age, but also with growth rate and size at the onset of growth in a toad [36]. Indeed, the body size of anurans is known to be positively correlated with age, growth rate and lifespan [80]. These three parameters (i.e., age, growth rate and lifespan) mainly determine how body size varies along a geographic gradient, yet some anurans support Bergmann's rule while others follow its converse. Usually, when selection favors a negative growth rate-longevity correlation for anurans living in contrasting conditions, the relative influences of the two elements in local environments determine the cline rule of a species [20,36]. Under Bergmann's rule, later maturity and longer lifespan play a more important role in increasing body size than slower growth does in decreasing it. Conversely, when prolonged growth time fails to compensate for the influence of slow growth on body size, the converse of Bergmann's rule results. Indeed, the predominant influence of fast growth rate and long longevity on increased body size along altitudinal gradients yields Bergmann's rule [20,36], whereas the influence of slower growth rate on reduced body size along geographical gradients follows the converse of Bergmann's rule in Nanorana parkeri [53]. Our findings suggest that body size variation in D. melanostictus following Bergmann's rule is likely the result of fast growth rates and long longevity in high-altitude populations.
Conclusions
Consistent with our prediction, our findings offer substantial evidence for a relationship between body size and altitude among populations that follows Bergmann's rule. They imply that individuals in high-altitude populations experiencing low temperatures have a later age at sexual maturity and a longer lifespan, which promote the development of larger body size to meet reproductive demands. This pattern suggests that body size variation in D. melanostictus can be explained by Bergmann's rule.
Figure 1.
Figure 1. The relationship between age and SVL (A) and/or body mass (B) among Duttaphrynus melanostictus populations.
Figure 2.
Figure 2. Mean of adult SVL (A) and body mass (B) changes with increasing altitude across Duttaphrynus melanostictus populations. The Pingbian population displays the largest body size.
Table 1 .
Sampling size (n), altitude, latitude and sex across populations in the Asian common toad (Duttaphrynus melanostictus).
Table 2 .
The influences of altitude, latitude and sex on age across populations in the Asian common toad (Duttaphrynus melanostictus).
Table 3 .
The influences of altitude, latitude and sex on body size across populations in the Asian common toad (Duttaphrynus melanostictus) after controlling age effect.
| 2023-11-22T16:48:26.622Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "8421660b2a0b8075e9b123d78617f8393e7de096",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/13/11/2219/pdf?version=1700219952",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62abce0fd37ca2bd0bb3ea12d869b2b50b5100fe",
"s2fieldsofstudy": [
"Geography",
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
825549 | pes2o/s2orc | v3-fos-license | Integrated Pathway-Based Approach Identifies Association between Genomic Regions at CTCF and CACNB2 and Schizophrenia
In the present study, an integrated hierarchical approach was applied to: (1) identify pathways associated with susceptibility to schizophrenia; (2) detect genes that may be potentially affected in these pathways since they contain an associated polymorphism; and (3) annotate the functional consequences of such single-nucleotide polymorphisms (SNPs) in the affected genes or their regulatory regions. The Global Test was applied to detect schizophrenia-associated pathways using discovery and replication datasets comprising 5,040 and 5,082 individuals of European ancestry, respectively. Information concerning functional gene-sets was retrieved from the Kyoto Encyclopedia of Genes and Genomes, Gene Ontology, and the Molecular Signatures Database. Fourteen of the gene-sets or pathways identified in the discovery dataset were confirmed in the replication dataset. These include functional processes involved in transcriptional regulation and gene expression, synapse organization, cell adhesion, and apoptosis. For two genes, i.e. CTCF and CACNB2, evidence for association with schizophrenia was available (at the gene-level) in both the discovery study and published data from the Psychiatric Genomics Consortium schizophrenia study. Furthermore, these genes mapped to four of the 14 presently identified pathways. Several of the SNPs assigned to CTCF and CACNB2 have potential functional consequences, and a gene in close proximity to CACNB2, i.e. ARL5B, was identified as a potential gene of interest. Application of the present hierarchical approach thus allowed: (1) identification of novel biological gene-sets or pathways with potential involvement in the etiology of schizophrenia, as well as replication of these findings in an independent cohort; (2) detection of genes of interest for future follow-up studies; and (3) the highlighting of novel genes in previously reported candidate regions for schizophrenia.
Introduction
Genome-wide association studies (GWAS) have identified common susceptibility variants for numerous disorders [1], [2]. For complex diseases, however, many of the discovered variants have only a moderate or weak effect on disease risk. Due to correction for multiple testing and limited sample sizes, GWAS are likely to miss a fraction of loci with small genetic effect sizes, and researchers assume that a major fraction of heritability remains hidden for statistical reasons [3]. One way of overcoming this problem is to investigate the joint effects of multiple functionally related genes (e.g. gene-sets or pathways). Pathway-based analysis of GWAS data increases the power to detect disease related genes and, potentially, single nucleotide polymorphisms (SNPs) with small genetic effects. This approach provides valuable biological insights into the etiology of complex diseases [4].
Various methodological approaches to pathway association analysis are available. Maciejewski [13] has described a classification for gene-set analysis that is based upon both the statistical model used and the nature of the underlying hypothesis. This classification comprises four groups: self-contained, competitive with sample randomization, competitive with gene randomization, and parametric. The main advantages of the self-contained and the competitive with sample randomization tests are twofold.
While selection of the pathway association method is an important consideration, the power of a given pathway association study is also dependent upon other factors. These include the biological information (i.e. from gene-set and pathway databases) that is integrated into the model, the use of independent replication datasets, and the different levels of interpretation, which extend from the pathway level to the level of SNPs.
As a logical consequence, researchers are now modifying analytical frameworks in order to increase their power and potential impact. To achieve this, the present study applied a hierarchical approach (see Figure 1). This approach uses three levels of evidence to unravel novel biological mechanisms with potential involvement in complex disorders. An advantage of this approach is that it builds upon previously developed and proven tools, gaining synergistic effects from intersecting three different levels of evidence, i.e. evidence from the pathway, gene, and SNP levels. To test disease-associated gene-sets and pathways, the Global Test was applied [15], [16]. To date, this well-established, self-contained pathway test has mainly been used for gene expression analyses. Subsequent identification of important risk genes within the significant pathways was achieved using FORGE [17], while detection of the functional consequences of associated SNPs, i.e. the SNP function annotation, in the significantly associated genes was performed using RegulomeDB [18]. As part of our approach, a well-curated list of pathways and gene-set collections was integrated, and a reduction in false-positive findings was sought through the use of large-scale exploratory and independent replication samples. We applied our approach to data sets for schizophrenia (SCZ), and provide evidence for new SCZ risk genes that would otherwise have remained undetected in the investigated study samples.
Pathway analyses
Application of the Global Test to the BOMA-UTR dataset (comprising data from the MooDS SCZ consortium (BOMA) and independent data from a Dutch study (UTR); Table 1) yielded 27 pathways that were significantly associated with SCZ after correction for multiple testing (False Discovery Rate (FDR) <0.05) (Table S1A). Of these, 14 pathways remained significant in the replication dataset. The replicated pathways are listed in Table 2, together with their FDRs, nominal p-values, and SNP set sizes. The replicated pathways include the following: (i) six gene-sets from the Transcription Factor Targets database (dbTFT); (ii) four Gene Ontology (GO) terms (zinc ion binding, transition metal ion binding, positive regulation of gene expression, and synapse organization); (iii) two Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways (cell adhesion molecules, and apoptosis); (iv) one gene-set from the Chemical and Genomic Perturbation database (dbCGP, Kyng DNA damage by UV); and (v) one gene-set from the microRNA targets database (mir-484 targets).
Author Summary
Large-scale genetic studies of complex diseases such as schizophrenia have identified a variety of susceptibility loci. Since many of the respective variants have only a weak influence on disease risk, pathophysiological interpretation of the results is problematic. Investigation of the joint effects of multiple functionally related genes or pathways increases the power to detect disease-related genes, and provides insights into the etiology of the disease in question. In the present study, an integrated hierarchical approach was applied to: (i) identify pathways associated with the complex neuropsychiatric disease schizophrenia; (ii) detect potentially affected genes in these pathways; and (iii) annotate the functional consequences of genetic markers in the affected genes or their regulatory regions. Two samples comprising >10,000 individuals of European ancestry, as well as data from the Psychiatric Genomics Consortium schizophrenia study, were examined. Pathways representing transcriptional regulation and gene expression, cell adhesion, apoptosis, and synapse organization showed significant association with schizophrenia. In particular, CTCF, CACNB2, and ARL5B, i.e. genes involved in chromatin modulation, calcium channel signaling and membrane transport, respectively, were highlighted as candidate genes for schizophrenia risk.
The gene overlap for each pathway pair is shown in Figure S1. Table S2 summarizes the redundancy estimates for pathways retrieved from the same source. A description and a visual depiction of pathways with similar SNP content in the BOMA-UTR dataset are provided in Text S1 (section ''Pathway overlap'') and Figure S2, respectively. The overall gene and SNP overlap between all pairs of replicated pathways are provided in Table S3A and Table S3C, respectively. For the GAIN-MGS dataset, the gene and SNP overlap information is provided in Table S3B and Table S3D, respectively. The section ''Subject vs SNP label permutations'' in Text S1 and Figure S3 provides a detailed description of the results of the SNP-label permutation test coupled with the subject-sampling test.
To visualize the integration of the Global Test application at the SNP, gene and pathway levels, Circos plots were generated for the entire genome (Figure 2). These plots illustrate the impact of those individual SNPs that were annotated to the replicated pathways (whether overlapping or unique to a specific pathway) and the associated genes.
Gene-based analysis
A total of 100 genes fulfilled the criteria described in the Methods section ''Gene-based analysis with Global Test and FORGE'', i.e. these genes map to SNPs with a component Global Test p-value of <0.001 in the BOMA-UTR dataset. Of these, the following eight genes were annotated to at least four (up to eight) of the 14 replicated pathways, thus indicating their potential importance in terms of SCZ risk: FOXP2 (eight pathways); BCL11A (six pathways); PCDH7 and RPL36P13 (five pathways each); and CACNB2, CTCF, MECOM, and RIMS1 (four pathways each).
Of the genes that were annotated to the 14 replicated pathways, the top 100 were then tested in the Psychiatric Genome-Wide Association Study Consortium (PGC) data. Of these, significant results were obtained for 18 genes (see Table S4). The vast majority of the 18 genes reside on different chromosomes, while most of the remainder reside on different chromosome arms. It therefore seems reasonable to assume that they represent independent signals, which results in a p-value of 0.004 for an enrichment of SCZ-associated genes among the 100 top genes. Included in the list of 18 replicated genes are known SCZ susceptibility genes, such as NRXN1, GRM3, and MMP16. Two of the eight most frequent genes in the top 14 pathways were also among the nominally significant genes in the gene-based FORGE analysis, i.e. CACNB2 (p = 8.576 × 10⁻⁴) and CTCF (p = 0.015). Given the overlap (approx. 1,200 cases) between the PGC sample (FORGE analyses) and the present discovery sample (component Global Test), we opted to analyze the PGC dataset without including our discovery dataset. These analyses generated results of the same order of magnitude for both genes (CACNB2: p = 0.0090; CTCF: p = 0.0320). While CACNB2 showed a trend towards association in an independent dataset from Denmark (p = 0.0970), thus supporting the strong signal from the PGC data, CTCF was found to be strongly associated in the same independent Danish sample (p = 0.0075).
Potential functional consequences of SNPs in CTCF
PolyPhen-2 predicted that the coding SNPs of interest in CTCF were ''benign'', whereas SIFT predicted that they were ''tolerated'' (Table S5). Figure 3 illustrates the potential consequences predicted for SNPs in CTCF and its regulatory regions. These include SNPs genotyped in the present discovery study and SNPs identified as their proxies using SNAP. For the latter, only those that were annotated by RegulomeDB as being (1) likely to affect DNA binding of the protein and linked to expression of a gene target, or (2) likely to affect DNA binding, are listed. The complete functional annotation data for the SNPs of CTCF are provided in Table S5. All genotyped SNPs annotated to CTCF showed a significant (component Global Test p-value of ≤0.05) contribution to pathway associations. Of these, rs6499137 and rs7191281 were located at the 3′-UTR and the intron of CTCF, respectively. Given the 20 kb flanking region allowed for assigning the SNPs to a gene, the other two SNPs were considered to be shared with the neighboring gene RLTPR. Based on the functional annotation with the RegulomeDB database, the 3′-UTR SNP of CTCF (rs6499137) and its proxies were considered to be associated with the altered expression of the neighboring gene RLTPR (Figure 3, Table S5). One of the proxies (rs17686899) overlaps with a number of functional elements, such as an open chromatin region, the binding sites for different transcription factors, and regions with certain histone modifications across many cell types. This suggests that the SNP was likely to affect the binding of a number of transcription factors to the genomic region of this gene. The respective expression quantitative trait loci (eQTL) information suggested that the SNP was likely to affect the expression of two genes, i.e. DUS2L and RLTPR. Among the CTCF-annotated SNPs, the intronic SNP of CTCF, rs7191281, was one of the top SNPs (component Global Test p-value of <0.001) contributing to the association of CTCF (and the association of the four replicated pathways containing CTCF). In addition, this SNP had the lowest p-value in the analyses of the PGC SCZ sample. While no information concerning functionality was available in the RegulomeDB database for this intronic SNP of CTCF, its proxy, rs13334205, was annotated with strong functional consequences. This proxy SNP was located in the regulatory region of CTCF and overlapped with the binding site of DNA-binding proteins, such as EBF1, TCF12, POLR2A, in an open chromatin region (Figure 3, Table S5).
Potential functional consequences of SNPs in CACNB2
The complete functional annotation data for the SNPs of CACNB2 are provided in Table S6. The positions of the majority of the genotyped and the proxy SNPs of CACNB2 overlapped a motif match to the FOX (FOXP1, FOXJ1, FOXJ2) and GATA (GATA1, GATA3) family motifs in open chromatin regions. Among the SNPs mapped to CACNB2, rs12257556 and its proxy rs4748474 were annotated with the strongest functional consequences. These intronic SNPs were eQTLs for ARL5B, and overlapped an open chromatin region. The proxy SNPs rs35803482 and rs7897710 both overlap with the binding sites of RAD21, SMC3, CTCF, and have a motif match for FOXP1. The intronic SNP rs2799573 (which was also the most highly associated SNP of CACNB2 in the PGC data) lies in the binding region of a number of proteins, such as CDX2, CTCF, JUN, JUND, MEF2A, RAD21, and SMC3 (Table S6), as identified in the ENCODE ChIP-seq data across a diverse set of cell types.
SCZ GWAS data analyses
In the present study, a genome-wide pathway association analysis was performed by means of the Global Test. The analyses involved well-curated descriptions of 7,350 pathways, and were carried out on large-scale discovery and replication datasets. A gene-based analysis of genes with a high contribution to the significance of the top pathways was then performed using the SCZ GWAS results of the PGC. Finally, a functional SNP-based analysis of the top hit genomic regions was conducted. Through this hierarchical approach, we were able to replicate pathway findings from previous studies of SCZ and detect novel pathways and genomic regions with an association to SCZ in the investigated samples. In the discovery set, we detected evidence for a significant contribution of 27 pathways. Of these, 14 remained significant in the replication dataset. The 14 replicated pathways are involved in transcriptional regulation and gene expression, synapse organization, cell adhesion, and apoptosis.
Previous pathway analyses of SCZ GWAS data have identified associations with pathways that are mainly involved in processes critical to synaptic function, neurodevelopment, cell adhesion, the immune system, the estrogen biosynthetic process, and apoptosis [10], [19], [20]. One of the 14 significant pathways in the present study, i.e. cell adhesion, was also the most significant pathway in the study by O'Dushlaine et al. [10]. Jia et al. [19] reported nominal significance for the following four pathways: CARM_ER (CARM1 and Regulation of the Estrogen Receptor); glutamate metabolism; TNFR1; and TGF beta signaling. Glutamate is implicated in synaptic neurotransmission, and TGF-beta and TNFR1 signaling are involved in several cellular processes, including apoptosis and excitotoxicity. The top hit pathways ''synaptic organization'' and ''apoptosis'' from the present study are thus consistent with the results of Jia et al [19].
However, the majority of pathways with significant association to SCZ in the present study are novel, and they are mainly involved in transcriptional regulation and gene expression. One reason for the failure of previous pathway-based studies of SCZ to generate similar findings may have been that they focused mainly on gene sets from the KEGG and BioCarta databases, whereas we accessed several pathway databases. These included the GO database, as well as special gene-set collections on chemical and genomic perturbations (dbCGP), and transcriptional regulation such as dbTFT and dbMIR. It should be noted that only a few of our 14 replicated pathways achieved significance in the analysis of our discovery sample using GRASS [21], gseaSNP [22], and ALIGATOR [23] (see Text S1 and Table S1C). The difference in results can be explained by the different assumptions these alternative pathway approaches rest on.
As part of our hierarchical approach, we aimed to identify which genes in a particular pathway could be responsible for the association with SCZ risk. Integration of gene-based analysis facilitated both the prioritization of potential candidate genes and more precise formulation of hypotheses concerning the functional consequences of the potential pathway perturbations (i.e. at the gene-and SNP-level). In particular, we explored how variants that emerged as being of importance for our pathway-and gene-based signals might affect the function and regulation of other genes.
In the gene-based analysis, CACNB2 and CTCF showed the strongest evidence for association with SCZ in both the present samples and in those of the PGC. The gene CACNB2 encodes an auxiliary voltage-dependent L-type calcium-channel subunit that is mainly expressed in heart and brain tissue [24]. This subunit is essential for normal surface expression, adequate trafficking, and functioning of voltage-gated calcium channels [24]. Recently, CACNB2 was among four loci with genome-wide significance in a cross-disorder analysis of GWAS data for autism spectrum disorder, attention deficit-hyperactivity disorder, bipolar disorder, major depressive disorder, and SCZ [25]. Previously, CACNB2 had been one of the top hit regions in a GWAS of bipolar disorder I in a Han Chinese population [26]. Functionally, the calcium channel beta-2 subunit encoded by CACNB2, together with the calcium channel alpha(2)/delta subunit, affects the kinetics and expression of Ca(V)1.2 (encoded by CACNA1C) [27]. CACNA1C is a well-established susceptibility gene for bipolar disorder, SCZ, and major depressive disorder [25], [28][29][30][31]. The RegulomeDB search of genotyped SNPs and their proxies in CACNB2 resulted in the detection of the intronic SNPs rs12257556 and rs10764566, and these were eQTLs for ARL5B. The gene ARL5B encodes a trans-Golgi network localized small G protein that has been described as a key regulator of retrograde membrane transport [32]. Altered ARL5B expression may be involved in the dysregulation of axonal transport. Interestingly, a previous study found that the transcript of one of the most widely studied susceptibility genes for SCZ, DISC1, was an interacting molecule for a motor protein of axonal transport [33]. It is of note that SNPs (both genotyped and proxies) at the CACNB2 locus suggested an interplay with our second gene of interest, i.e. CTCF. Such a connection is also suggested with RAD21. A substantial body of literature describes an interaction between RAD21 and CTCF, particularly in neurons [34], [35]. Although few data are available on a potential interaction between CACNB2 and RAD21/CTCF, moderate evidence is available from several protein-protein interaction databases (data not shown) for an interplay between CTCF, RAD21, and ARL5B.
CTCF encodes a transcriptional regulator protein with 11 conserved zinc finger domains, and is an important modulator of conformational changes in chromatin [36]. A recent study of conditional knockout of the ctcf gene in mice demonstrated that CTCF was a key regulator of neuronal differentiation, and was essential for neuronal diversity and functional neural networks [37]. The authors showed that CTCF was required for appropriate dendritic arborization and synapse formation, since it controlled clustered protocadherin expression. Previous studies have shown an association between genetic variation in the protocadherin gene cluster and SCZ [38], [39]. Our result adds to this body of research the finding that transcriptional regulation of genes essential for neuronal diversity, such as the regulation of protocadherins by CTCF, may alter synaptic connectivity and thus contribute to the etiology of SCZ. Intriguingly, evidence from the majority of CTCF SNPs (both genotyped and proxies) suggested that the variants influence RLTPR expression (Figure 3). The RLTPR gene is expressed in several brain regions (EMBL-EBI Expression Atlas; http://www.ebi.ac.uk/gxa/gene/ENSG00000159753). The resulting protein has a RGD (Arginine-Glycine-Aspartic acid) motif [40]. This is a universal cell recognition site of extracellular proteins and interacts with a family of cell-surface receptors, such as integrins for cell-adhesion molecules [41]. Together with the replicated KEGG pathway cell adhesion molecules, this finding strongly supports the hypothesis that modulation of adhesion, and interactions between cells as well as cell and the extracellular matrix, are implicated in the etiology of SCZ.
Another top hit gene in the present study was FOXP2, which was among the top genes in eight of the 14 most implicated pathways. FOXP2 (forkhead-box P2) is a transcription factor with an essential role in the development of speech and language regions in the brain. The fact that SCZ patients often show language impairments such as reading difficulties [42] renders FOXP2 a plausible SCZ candidate gene. Interestingly, a previous study reported an association between genetic variation in FOXP2 and SCZ in a Han Chinese population [43]. Furthermore, Walker et al. [44] identified FOXP2 as an inhibitor of the promoter activity and protein expression of DISC1. The present study supports the hypothesis that FOXP2 plays an important role in SCZ on the level of the transcriptional regulation of target genes.
The association with the apoptosis pathway was driven predominantly by a SNP which mapped to AKT3. Besides being detected via the Global Test, this gene was the most significantly associated gene in the FORGE analysis of the PGC data. AKT3 is a serine/threonine protein kinase and a member of the AKT family. It is involved in many biological processes, including apoptosis and cellular proliferation [45]. In a recent study by Diez et al. [46], AKT3 was identified as a fine-tuning modulator of apoptotic processes and axon growth. Disruption of AKT3 significantly reduced axon length and viability of neurons in cell culture [46]. Moreover, AKT3 is the most abundant AKT member in the brain during neurogenesis. AKT3 controls brain size, and research has shown that genetic variation (duplication and point mutation) in AKT3 contributes to hemimegalencephaly [47].
In conclusion, the present study demonstrated that the use of information from databases focusing on cell-regulatory networks, together with information from traditional pathway database resources, can facilitate the identification of susceptibility factors for the complex neuropsychiatric disease SCZ. Through the application of a well-designed hierarchical framework, our study highlighted the importance of calcium channel signaling, cell adhesion, and the modulation of transcriptional regulation implicated in neuronal diversity, neurite growth, and synapse formation in the etiology of SCZ. In particular, CTCF and CACNB2 (and possibly ARL5B) were identified as SCZ candidate genes.
Data sets
Participants from four datasets were included (Table 1). The discovery set was the BOMA-UTR sample. This consisted of data from the MooDS SCZ consortium (BOMA) [48], [49], and independent data from a Dutch study (UTR) [48], and comprised 2,230 SCZ cases and 2,810 controls. The replication set consisted of the GAIN [dbGaP accession number: phs000021.v2.p1] and MGS [dbGaP accession number: phs000167.v1.p1] datasets, and comprised 2,436 SCZ cases and 2,646 controls [50]. The BOMA and MGS samples were also used in the PGC SCZ study. An overlap of 80% existed between the PGC study and the sample used in the present pathway-based analysis.
Linkage disequilibrium (LD)-based SNP pruning
To accommodate the Global Test's assumption of independence between variables, the SNP set was reduced according to a variance inflation factor (VIF) and using a sliding-window approach, as implemented in PLINK [51] (http://pngu.mgh.harvard.edu/purcell/plink/, version 1.07). A VIF threshold of 100 was used. The window size was set at 50 SNPs, and the window was shifted by 5 SNPs at each step. The LD-pruned set of SNPs (Table 1) was then considered for mapping to pathways. A detailed description of this procedure is provided in Text S1 (section "SNP independence and LD-based SNP pruning") and in Table S7.
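To make the procedure concrete, the following is a minimal R sketch of sliding-window VIF pruning with the parameters given above (window 50, step 5, VIF 100). The matrix geno and the helper name are hypothetical, and this illustrates the idea only; the actual analysis used PLINK 1.07, where the equivalent option is --indep <window> <step> <VIF>.

```r
# Sliding-window VIF pruning sketch: 'geno' is a hypothetical numeric matrix
# of allele dosages (0/1/2) with SNPs in columns and no missing values.
prune_vif <- function(geno, window = 50, step = 5, vif_max = 100) {
  keep <- rep(TRUE, ncol(geno))
  for (start in seq(1, ncol(geno), by = step)) {
    idx <- start:min(start + window - 1, ncol(geno))
    for (j in idx[keep[idx]]) {
      others <- setdiff(idx[keep[idx]], j)
      if (length(others) == 0) next
      # VIF_j = 1 / (1 - R^2), with R^2 from regressing SNP j on the other
      # SNPs still kept in the current window
      r2 <- summary(lm(geno[, j] ~ geno[, others, drop = FALSE]))$r.squared
      if (1 / (1 - r2) > vif_max) keep[j] <- FALSE
    }
  }
  keep  # logical vector: TRUE = SNP retained after pruning
}
```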
Annotation of SNPs to genes
SNPs were annotated with information from dbSNP Build 127. The "seq-gene" file containing the information needed to annotate SNP rs numbers to ENTREZ gene IDs was downloaded from the NCBI FTP website (BUILD 36.3). SNPs were assigned to a gene if the SNP was located within the genomic sequence or within 20 kb of the 5′ and 3′ ends of the first and last exons, in order to account for important regulatory regions [52]. If a SNP was within a region shared by more than one gene, it was assigned to all of those genes (for details see Text S1).
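A minimal sketch of this assignment rule in R, under the assumption of two hypothetical data frames: snps (columns rs, chr, pos) and genes (columns entrez, chr, start, end, where start/end span the first to last exon); these stand in for the dbSNP and NCBI seq-gene files.

```r
# Assign each SNP to every gene whose (first-to-last-exon) span, extended by
# 20 kb on both sides, contains the SNP position.
assign_snps_to_genes <- function(snps, genes, flank = 20000) {
  hits <- lapply(seq_len(nrow(snps)), function(i) {
    in_gene <- genes$chr == snps$chr[i] &
      snps$pos[i] >= genes$start - flank &
      snps$pos[i] <= genes$end + flank
    # a SNP in a region shared by several genes is assigned to all of them
    if (any(in_gene)) data.frame(rs = snps$rs[i], entrez = genes$entrez[in_gene])
  })
  do.call(rbind, hits)
}
```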
Pathway and gene-set databases
Selected gene-set collections were accessed from the Molecular Signatures Database (MSigDB, version 3.0) [53] website (http://www.broadinstitute.org/gsea/msigdb). This included the pathways from BioCarta (217 pathways), Chemical and Genomic Perturbations (1,825 gene-sets), Reactome (775 pathways), MicroRNA Targets (176 gene-sets), and Transcription Factor Targets (456 gene-sets). Information concerning GO terms [54] and KEGG pathways [55], [56] was obtained from the respective R packages (3,686 GO terms; GO.db, version 2.5.0; 215 KEGG pathways; R package KEGG.db, version 2.5.0). At the time of data retrieval (June 2011), these repositories were more up-to-date than the MSigDB database. A total of 7,350 pathways were included. These were represented by 237,788 (53.7%) of the SNPs in the BOMA-UTR dataset; hence, 53.7% of the SNPs genotyped in the exploration samples were mapped to pathways. For the SNP data, the SNP effect was coded as an allele dose (0, 1, 2). Detailed information on pathway overlap and redundancy is provided in Text S1 (section "Choice of pathways and gene-sets"), Table S2, and Figure S1.
Pathway analysis with the Global Test
For the pathway-based analysis, the Global Test [15] was used (R package globaltest, version 5.12.0; Figure 1). The Global Test takes the individual level GWAS data as an input, and tests whether the global polymorphism pattern of a group of genes is significantly associated with the phenotype of interest. To account for both a potential underlying correlation structure and pathway and/or gene size, the Global Test with subject sampling was applied on the basis of 10,000 permutations of case-control status [15]. To study the impact of pathway and/or gene size in more detail, a SNP label permutation test was performed (for detailed information see Text S1, section ''Subject vs SNP label permutations'').
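As an illustration of the permutation logic (not of the globaltest package internals), a generic subject-sampling permutation test can be sketched in R as follows; stat_fn stands for any function returning a single association statistic for a pathway's genotype matrix and the phenotype vector, and all names are hypothetical.

```r
# Empirical p-value of a pathway statistic under permutations of
# case-control status (subject sampling).
permutation_p <- function(stat_fn, geno_pathway, pheno, n_perm = 10000) {
  observed <- stat_fn(geno_pathway, pheno)
  permuted <- replicate(n_perm, stat_fn(geno_pathway, sample(pheno)))
  # proportion of permuted statistics at least as extreme as the observed one
  mean(permuted >= observed)
}
```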
At the discovery stage of the analysis, a less conservative correction for multiple testing was applied in order to prioritize the identification of associated pathways. This was a legitimate approach, since any false positives would be controlled for in the replication analysis. Multiplicity correction was applied for each individual collection of pathways/gene-sets. For pathways/gene-sets retrieved from the KEGG, Reactome, and MSigDB gene-set collections, the pathway scores were corrected for multiple testing using the Benjamini-Hochberg method [57]. A pathway was considered to be significantly associated with the phenotype of interest (i.e., SCZ) if the false discovery rates from all three of the following were < 0.05: (i) the un-permuted test; (ii) the subject-sampling test; and (iii) the SNP-label permutation test. The resulting list of significant pathways was ranked according to the false discovery rate obtained from the SNP-label permutation tests. For the GO terms, correction for multiple testing was performed using the Focus Level method [58]. A GO term was considered to be significant if both of the following were < 0.05: (i) the focus level p-value obtained from the un-permuted test; and (ii) the false discovery rate obtained from the subject-sampling test. To account for gender-specific variance in the perturbed pathways, gender was included as a covariate [15].
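In R, this discovery-stage decision rule amounts to a few lines; the three p-value vectors below are hypothetical placeholders for the un-permuted, subject-sampling, and SNP-label permutation results for one pathway collection.

```r
# Benjamini-Hochberg FDR for each of the three tests, then the joint rule.
fdr_unperm  <- p.adjust(p_unpermuted, method = "BH")
fdr_subject <- p.adjust(p_subject,    method = "BH")
fdr_snplab  <- p.adjust(p_snp_label,  method = "BH")
significant <- fdr_unperm < 0.05 & fdr_subject < 0.05 & fdr_snplab < 0.05
# rank the surviving pathways by their SNP-label permutation FDR
ranked <- which(significant)[order(fdr_snplab[significant])]
```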
Component Global Test
To estimate the contributions of individual SNPs to a pathway or gene association, the component global test was performed using the covariates function implemented in the R package globaltest [15]. Throughout the text, the single-SNP p-values obtained using the Global Test refer to the results obtained using the component global test.
The Global Test with the replication dataset
Only pathways that were significantly associated with SCZ in the discovery set were followed up (Figure 1, step 1). All tests in the follow-up step were performed as described above, with the exception that all tested pathways were subjected to Benjamini-Hochberg correction for multiple testing. Possible stratification in the data was investigated using a multi-dimensional scaling (MDS) approach. MDS covariates were obtained from PLINK using a previously described protocol [48]. To correct for the potential effect of stratification on the association test, the Global Test was run with the four leading MDS dimensions as covariates.
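The adjustment can be sketched in base R; D is a hypothetical pairwise genetic distance matrix (e.g., one minus identity-by-state similarity, of the kind PLINK computes), and the four leading coordinates would then be passed to the Global Test as covariates.

```r
# Classical MDS on a pairwise distance matrix; keep the 4 leading dimensions.
mds <- cmdscale(as.dist(D), k = 4)      # n x 4 matrix of MDS coordinates
stratification_covariates <- as.data.frame(mds)
```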
Gene-based analysis with Global Test and FORGE
The aim of the second step (Figure 1, gene-based analysis) was to identify genes of particular importance to the replicated pathways. Genes that mapped to one or more of the identified pathways were analyzed (Figure 1, step 2). First, the component global test was performed for every individual SNP that was annotated to the replicated pathways. SNPs with a component global test p-value of < 0.001 in the BOMA-UTR dataset were then annotated to genes. These genes are referred to as "top genes" in the subsequent text. Gene-based analysis of the PGC data for the top genes was then conducted using FORGE [17]. As with the Global Test, the analyses focused on genomic sequences that included both the genes themselves and a 20 kb window on either side of the respective gene, to account for important regulatory regions. Along with the summary statistics of the PGC, genotype data from the European HapMap 3 samples (CEU and TSI) were used. Details of the program and the test statistic used to calculate the gene-based p-values (fixed-effects Z-score method) are provided elsewhere [17]. Genes that remained nominally significant (p < 0.05) in both the component global test and the FORGE analyses were considered for the third step of the analyses (SNP function annotation). No correction for multiple testing was performed. However, replication of our most interesting findings was sought in an independent dataset from Denmark. Detailed information on these Danish samples is provided elsewhere [59].
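A hedged sketch of a fixed-effects Z-score gene statistic of the kind FORGE implements is shown below; the exact FORGE weighting is described in [17], so this is an illustration only. SNP-level Z-scores z within a gene are combined, and the denominator is inflated by the SNP-SNP LD correlation matrix R (estimated from reference genotypes such as the HapMap 3 CEU/TSI samples) so that correlated SNPs are not counted as independent evidence; all object names are hypothetical.

```r
# Fixed-effects combination of SNP Z-scores with an LD-aware denominator.
fixed_effects_z <- function(z, R) {
  w <- rep(1, length(z))                  # equal weights, for illustration
  sum(w * z) / sqrt(as.numeric(t(w) %*% R %*% w))
}
p_gene <- pnorm(fixed_effects_z(z, R), lower.tail = FALSE)  # one-sided p-value
```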
SNP function annotation
The third step (Figure 1, SNP function annotation) focused on the genes identified in step 2. Evidence that SNPs annotated to these genes are implicated in SCZ was sought by investigating the potential consequences of the SNPs in terms of gene regulation or function. For each gene of interest, we first selected all SNPs that were annotated to this gene and which had shown evidence for association with SCZ in the discovery dataset (Global Test, p ≤ 0.05). To account for the relevant information from other correlated SNPs, we then identified all SNPs from the 1000 Genomes Project (pilot project) [60] that showed strong LD with the associated SNPs (r² > 0.8, maximum distance between both SNPs = 500 kb). The webtool SNAP [61] (version 2.2) was used. Each query SNP was included as its own proxy. RegulomeDB [18] and PolyPhen-2/SIFT [62], [63] were used for the functional classification of non-coding and coding SNPs, respectively. Figure S1 shows a heatmap of the level of gene overlap between the 27 schizophrenia-associated pathways; the values in the cells indicate the maximum fractional overlap of the genes in a pathway (listed on the y-axis), and the corresponding pathway name on the x-axis is the pathway with the highest overlap (self-overlap is excluded).
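Returning to the proxy criterion described above, the r² filter can be sketched as follows; g1 and g2 are hypothetical 0/1/2-coded dosage vectors for a query SNP and a candidate proxy on the same chromosome, and the squared genotype correlation is used here as a stand-in for haplotype-based r².

```r
# Proxy test: within 500 kb and composite-genotype r^2 above 0.8.
is_proxy <- function(g1, g2, pos1, pos2, r2_min = 0.8, max_dist = 5e5) {
  abs(pos1 - pos2) <= max_dist &&
    cor(g1, g2, use = "complete.obs")^2 > r2_min
}
```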
Table S4
List of schizophrenia (SCZ) associated genes, their p-values (FORGE analysis), and membership in the SCZ-associated pathways discovered and replicated in the present study. Pathways in bold also showed an overall association using one of the other three methods (ALIGATOR, GRASS, gseaSNP) applied in the present study. (DOC)
"year": 2014,
"sha1": "c3141cd318cd08db394b59aa1f7557e069aa7458",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1004345&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ff0fd729396bc1c4b896d8b54021e0b501f387c",
"s2fieldsofstudy": [
"Psychology",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Two new oleanane triterpenoid saponins from Elsholtzia bodinieri
Abstract Two new oleanane triterpenoid saponins, bodiniosides Q (1) and R (2), along with five known saponins, niga-ichigoside F1 (3), 3-O-[β-D-glucopyranosyl]-28-O-[α-L-rhamnopyranosyl-(1→2)-β-D-glucopyranosyl] arjunolic acid (4), asiaticoside E (5), sericoside (6), and bodinioside E (7), were isolated from the aerial parts of Elsholtzia bodinieri. The structures of 1 and 2 were characterized by spectroscopic techniques and chemical evidence as 3-O-β-D-xylopyranosyl-2α,23-dihydroxy-olean-12-en-28-oic acid 28-O-α-L-rhamnopyranosyl-(1→2)-β-D-glucopyranoside (1) and 3-O-β-D-xylopyranosyl-2α,23-dihydroxy-olean-12-en-28-oic acid 28-O-[β-D-glucopyranosyl-(1→4)-α-L-rhamnopyranosyl-(1→2)]-β-D-glucopyranoside (2). Compounds 1, 3, and 5 exhibited weak anti-influenza activity against strain A/WSN/33/2009 (H1N1), with inhibition rates of 11.63%, 17.01%, and 16.98%, respectively. Graphical Abstract
Introduction
The genus Elsholtzia, which belongs to the family Lamiaceae and grows throughout East Asia, Africa, North America, and Europe, comprises approximately 40 species, 33 of which are found in China. Among these, some are used as medicines, some are taken as food, and some serve as a source for honey production (Guo et al. 2012). Elsholtzia bodinieri Van't is an annual herbaceous plant found in the northwestern and southwestern mountainous areas of China (Chinese names "Dongzisu" and "Yashuacao"), distributed mainly in Yunnan and Guizhou Provinces. As a traditional Chinese medicine, E. bodinieri has been used as an herbal tea or traditional folk medicine for the prophylaxis and treatment of cough, headache, pharyngitis, fever, and hepatitis (Jiangsu New Medical College 1985). We have previously isolated triterpenoid saponins (Zhu et al. 2002; Zhao et al. 2015; Xiang et al. 2019), flavonoid glycosides (Li et al. 2008; Zhong et al. 2016), sesquiterpene glycosides, clerodane diterpenoid glycosides (Hu et al. 2008), and phenolic constituents from the aerial parts of this plant. As a continuation of this work, we further systematically investigated the chemical components of the aerial parts of this plant. In our search for secondary metabolites with structural diversity and potential anti-influenza virus activity, two new oleanane triterpenoid saponins, bodiniosides Q (1) and R (2), along with five known ones, were obtained from E. bodinieri. Among them, compounds 1, 3, and 5 exhibited weak inhibition of influenza virus, with inhibition rates of 11.63%, 17.01%, and 16.98%, respectively. Herein, we report the isolation, structural elucidation, and anti-influenza virus activities of the isolated compounds.
Results and discussion
Compound 1 was obtained as a white amorphous powder. Its positive HR-ESI-MS spectrum showed a quasimolecular ion peak [M + Na]+ at m/z 951.4916, which, together with the NMR spectroscopic data, indicated the molecular formula C47H76O18 and ten degrees of unsaturation. The IR spectrum exhibited the presence of hydroxyl (3441 cm−1), carbonyl (1722 cm−1), and olefinic (1635 cm−1) groups. The 1H NMR spectrum (Table 1) of 1 revealed six methyl signals at δH 0.96 (3H, s), 1.03 (3H, s), 1.10 (3H, s), 1.15 (3H, s), 0.83 (3H, s), and 0.78 (3H, s), correlated with the carbons at δC 14.6 (C-24), 17.4 (C-25), 17.4 (C-26), 30.6 (C-27), 33.0 (C-29), and 25.7 (C-30) in the HSQC spectrum, respectively. The signal at δH 5.42 (1H, br. s), corresponding to the carbon at δC 122.4 (C-12), which paired with δC 144.1 (C-13) in the 13C NMR spectrum, indicated the existence of a double bond. On the basis of the above spectroscopic data, compound 1 was suggested to possess an olean-12-ene skeleton. Comparison of its NMR spectroscopic data with those of compound 4 (Acebey-Castellon et al. 2011) suggested that they share the same aglycone, 2,3,23-trihydroxy-olean-12-en-28-oic acid. Acid hydrolysis of 1 with 1 M HCl yielded L-rhamnose (Rha), D-glucose (Glc), and D-xylose (Xyl) as sugar residues, identified by GC analysis of the corresponding trimethylsilylated L-cysteine derivatives. The negative-mode FAB-MS of 1 showed fragment ions at m/z 619 [M−Glc−Rha−H]− and 487 [M−Glc−Rha−Xyl−H]−. Since the NMR signals of the three sugar units overlapped considerably, an HMQC-TOCSY experiment was used to distinguish and assign the 1H and 13C NMR signals of each sugar moiety. The correlations from the anomeric proton signal at δH 5.02 to three carbon signals at δC 75.6, 78.5, and 67.2, as well as from three proton signals at δH 4.04, 4.14, and 4.33 to the anomeric carbon, suggested the presence of D-xylopyranose. In a similar way, the 1H and 13C NMR signals of the D-glucopyranosyl and L-rhamnopyranosyl units were assigned. In addition, the J(H1,H2) coupling constants of the two anomeric proton signals at δH 5.02 (d, J = 7.3 Hz) and 6.19 (d, J = 8.1 Hz) indicated the β anomeric configuration of the xylopyranosyl and glucopyranosyl units, while the broad singlet of the anomeric proton at δH 6.11 indicated the α anomeric configuration of the L-rhamnopyranosyl unit (Zhao et al. 2015). The NMR data of 1 were highly analogous to those of 4, except that the signals of the Glc in 4 were replaced by those of a Xyl in 1. This observation suggested that 1 was glycosylated at C-3 by the Xyl residue instead of the Glc in 4, which was confirmed by the HMBC correlation from H-1 of Xyl (δH 5.02, d, J = 7.3 Hz) to C-3 (Figure 1). The other key HMBC correlations indicated that the remaining moiety of 1 was identical to that of 4. Based on the above evidence, the structure of 1 was elucidated as 3-O-β-D-xylopyranosyl-2α,23-dihydroxy-olean-12-en-28-oic acid 28-O-α-L-rhamnopyranosyl-(1→2)-β-D-glucopyranoside, and it was named bodinioside Q.
The anti-influenza A virus activities of compounds 1-7 against strain A/WSN/33/2009 (H1N1) were evaluated in MDCK cells. Compounds 1, 3, and 5 exhibited weak inhibition of influenza virus, with inhibition rates of 11.63%, 17.01%, and 16.98%, respectively, while the inhibition rate of the positive control (oseltamivir) was 71.20%. A previous study revealed that pentacyclic triterpenoids of the ursane, oleanane, and lupane types have anti-influenza virus activity (Wang et al. 2016). Our results further confirmed that pentacyclic triterpenoids are active against influenza virus. Although a structure-activity relationship could not be established owing to the limited number of samples in the present study, the intact pentacyclic system may be essential for activity, since the E-ring cleavage in the aglycone exemplified by 7 markedly reduced the activity relative to that of 3 and 5.
Materials
The aerial parts of E. bodinieri were collected in Yuxi city, Yunnan Province, P. R. China, in May 2016, and identified by Dr Xuanqin Chen. A voucher specimen (KMUST 20160005) was deposited at the Laboratory of Phytochemistry, Faculty of Life Science and Technology, Kunming University of Science and Technology.
Extraction and isolation
The dried and finely powdered aerial parts of E. bodinieri (15 kg) were macerated in 75% aqueous Me2CO (3 × 35 L, 24 h each) at room temperature. The combined extracts were concentrated in vacuo to yield a crude extract, which was suspended in H2O and successively partitioned with CHCl3, AcOEt, and n-BuOH.
Acid hydrolysis for sugar analysis
Each of 1 and 2 (1.0 mg per compound) in 1 M HCl (0.4 mL) was heated at 90-100 °C in a screw-capped vial for 5 h. The mixture was neutralized by the addition of Amberlite IRA400 (OH− form) and then filtered. The filtrate was dried in vacuo, dissolved in 0.2 mL of pyridine containing L-cysteine methyl ester (10 mg/mL), and reacted at 60 °C for 1 h. A solution (0.2 mL) of trimethylsilylimidazole in pyridine (10 mg/mL) was added to this mixture, which was then heated at 60 °C for 1 h. The final mixture was directly analyzed by GC [30QC2/AC-5 quartz capillary column (30 m × 0.32 mm)] under the following conditions: column temperature, 180-280 °C; temperature program, 3 °C/min; carrier gas, N2 (1 mL/min); injection and detector temperature, 250 °C; injection volume, 4 μL; split ratio, 1/50. The authentic samples D- and L-glucose, D- and L-xylose, and L-rhamnose were treated in the same manner. Under these conditions, the retention times of authentic D- and L-glucose, D- and L-xylose, and L-rhamnose were 18.29, 18.87, 13.35, 14.01, and 14.97 min, respectively. In our studies, identical retention times were observed for the hydrolysates and the corresponding authentic standards.
Anti-influenza virus activity
Influenza strain A/WSN/33/2009 (H1N1) was used in this study. Oseltamivir, purchased from Tszchem and LKT Laboratories, was used as a positive control. MDCK cells were seeded into 96-well plates, incubated overnight, and infected with influenza virus (MOI = 0.1). Cells were suspended in Dulbecco's Modified Eagle Medium (DMEM) supplemented with 1% fetal bovine serum (FBS), containing the test compound and 2 mg/mL TPCK-treated trypsin, with a final DMSO concentration of 1% in each well. After 40 h of incubation, CellTiter-Glo reagent was added, and the plates were read using a plate reader (Wang et al. 2016). The inhibition rate was calculated by the following formula: inhibition rate (%) = [1 − (luminescence with compound − luminescence with compound and virus)/(luminescence with DMSO − luminescence with DMSO and virus)] × 100%. Assessment of anti-influenza virus activity was performed as described previously (Song et al. 2014).
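As a quick illustration of that formula, the following R sketch computes the inhibition rate from raw luminescence readings; all variable names and the example numbers are hypothetical.

```r
# Inhibition rate (%) from the four luminescence readings described above.
inhibition_rate <- function(lum_cpd, lum_cpd_virus, lum_dmso, lum_dmso_virus) {
  (1 - (lum_cpd - lum_cpd_virus) / (lum_dmso - lum_dmso_virus)) * 100
}
inhibition_rate(100000, 55000, 98000, 12000)  # example readings -> ~47.7%
```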
Conclusion
Our present work on E. bodinieri yielded two new oleanane triterpenoid saponins, along with five known saponins. Compounds 1, 3, and 5 exhibited weak inhibition of influenza virus, with inhibition rates of 11.63%, 17.01%, and 16.98%, respectively. This investigation provides valuable information for the further understanding of E. bodinieri.
Disclosure statement
No potential conflict of interest was reported by the authors.
"year": 2020,
"sha1": "49990cfda0978b70ed7d3988a26b6d3f6a32a0c1",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Two_new_oleanane_triterpenoid_saponins_from_i_Elsholtzia_bodinieri_i_/11821491/1/files/21616056.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "81165d52028bf0fd9aaf591179f6e57dfe29f98e",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
New methods for the management of gastric varices
Bleeding from gastric varices has been successfully treated by endoscopic modalities. Once bleeding from the gastric varices has been stabilized, endoscopic treatment and/or interventional radiology should be performed to eradicate the varices completely. Partial splenic artery embolization is a supplemental treatment to prolong the obliteration of the veins feeding and/or draining the varices. The overall incidence of bleeding from gastric varices is lower than that from esophageal varices. No studies to date have definitively characterized the causal factors behind bleeding from gastric varices. The initial episodes of bleeding from esophageal or gastric varices without prior treatment may be at least partly triggered by a violation of the mucosal barrier overlying the varices. This is especially likely in the case of varices of the fundus. In view of the high rate of hemostasis achieved among bleeding gastric varices, treatment should be administered in selected cases. Among untreated cases, steps to prevent gastric mucosal injury confer very important protection against gastric variceal bleeding.
INTRODUCTION
Bleeding from esophageal varices (EVs) or gastric varices (GVs) is a catastrophic complication of chronic liver disease. Bleeding from GVs is generally thought to be more severe than bleeding from EVs [1], but it occurs less frequently [2-4]. Though many recent developments have improved the outcome of treatments for GVs, no consensus has been reached on the optimum treatment. In this paper we review the pathomorphology, hemodynamics, risk factors for bleeding, and treatments for GVs. In the esophagogastric varices grading system of the Japan Society for Portal Hypertension [5], varices are evaluated based on color (white [Cw] and blue [Cb]), form (small and straight [F1], nodular [F2], and large or coiled [F3]), and the red color sign (RC0-3). GVs are divided into cardiac varices (Lg-c), fundal varices (Lg-f), and varices involving both the cardia and fundus (Lg-cf). In this review, GVs are divided into two categories and described accordingly: Lg-c (cardiac varices: CVs) and Lg-cf or Lg-f (fundal varices: FVs).
PATHOMORPHOLOGY OF GVs
Arakawa et al [6] reported that CVs are supplied by the left gastric vein (cardiac branch), a vessel which enters the stomach wall in the cardia at a point 2 to 3 cm from the esophagogastric junction and diverges into a profusion of branches running throughout the cardia. Some of these veins become markedly dilated, acquiring the features of varices. Most veins in the cardia diverge into parallel veins from the esophagogastric junction as the flow becomes hepatofugal. Others, however, dilate, wind through the submucosa, and directly join the EVs. Histologically, nearly the entire cross-section of the wall is the varix itself. The varices are covered by thinning layers of serosa and mucosa through which they can ultimately be seen.
The angio-architecture of a FV is quite different from that of a CV. Most FVs are supplied by the short gastric vein, though in some cases the blood is fed from the posterior or left gastric vein. Thus, the vascular anatomy of a FV is something like a splenorenal shunt running through the stomach wall. Bleeding from an EV most commonly occurs in the "critical area" 3 cm proximal to the esophagocardiac junction. Fine longitudinal veins in the lamina propria originate at the esophagocardiac junction and run in the lamina propria toward this critical area. EVs consist of multiple dilated veins. Those that rupture are usually located in the lamina propria [7] .
In the stomach, unlike its counterpart the esophagus, a large winding vein runs through the submucosa without the piling up of multiple varicose veins. Ruptures of CVs occur in the submucosa, where they disrupt the muscularis mucosae and lamina propria mucosae. The mucosal layer covering a FV is somewhat thicker than that covering an EV. The difference between a CV and a FV lies in the caliber of the varicose vein and the degree of vascular anastomosis. Most FVs are supplied via the short gastric vein, though some are fed by the posterior or left gastric vein. Anastomosis of FVs is generally uncommon. The varices within the wall penetrate the muscle layer and wind through the submucosal layer, where they displace and attenuate the muscularis mucosae and propria mucosae. The varicose veins protrude into the gastric lumen.
The lamina muscularis mucosa in the esophageal mucosa is loose, and the venous pressure in the submucosa is transmitted through communicating branches to the veins in the lamina propria. The lamina muscularis mucosa in the gastric mucosa, on the other hand, is tightly integrated with the lamina propria [8] .
The red color sign is an elevated red area which has proven to be important in portending variceal bleeding; its histological manifestation is a thinning of the epithelial layer. The North Italian Endoscopic Club for the Study and Treatment of Esophageal Varices [9] published a report establishing that the red color sign on EVs is predictive of bleeding. It remains unclear whether the endoscopic red color sign in the stomach has the same significance as the red color sign in the esophagus, where it denotes a thinning of the epithelial layer. The varix in the submucosa of the stomach is covered by the muscularis mucosae and propria mucosae, which generally confers an appearance different from that of the thinning epithelial layer typical of the esophagus [6].
HEMODYNAMICS OF GVs
The portal hemodynamics of GVs should be evaluated in all patients with these varices to determine the most appropriate treatment. CVs are supplied by the left gastric vein (cardiac branch), a vessel which enters the stomach wall in the cardia at a point 2 to 3 cm from the esophagogastric junction and diverges into a profusion of branches running throughout the cardia. The main veins feeding the FVs are the left gastric vein (51%), posterior gastric vein (30%), and short gastric vein (69%). The principal drainage veins for the FVs are the gastrorenal shunt (87%) and the gastric-inferior phrenic vein shunt (16%), though about 1% of FVs are reported to communicate with neither [10]. FVs are more frequently supplied by the short and posterior gastric veins than CVs. Concomitant collaterals such as EVs, para-esophageal veins, and paraumbilical veins are additionally observed in nearly 60% of FVs.
RISK FACTOR FOR BLEEDING FROM GVs
The incidence of variceal bleeding in patients who have never received treatment for EVs has been reported to range from 16% to 75.6% [11,12]. The incidence of bleeding from GVs stands at 25% [2], while cumulative bleeding rates from FVs at 1, 3, and 5 years have been estimated at 16%, 36%, and 44%, respectively [13]. Thus, the overall incidence of bleeding from GVs is lower than that from EVs [2]. In an earlier study on the natural course of GVs in 52 patients, our group treated bleeding from GVs in 4 patients over a mean follow-up period of 41 mo. Hemorrhage was successfully halted in all 4 of these patients. The cumulative bleeding rates at 1, 3, and 5 years were 3.8%, 9.4%, and 9.4%, respectively. Three of the 4 patients were free of erosive gastritis and gastric ulcer at the time of entry into the study, though ulcers or erosions were found at the bleeding points of the GVs in all 4 when the varices ruptured. There were no significant differences in baseline characteristics between patients with ruptured and non-ruptured GVs at study entry [4].
The endoscopic risk factors for bleeding from EVs include the presence of raised red markings, cherry-red spots, blue color, and large size [14]. The risk factors for bleeding from GVs have yet to be characterized. In another study, our group examined 70 cirrhotic patients with first-time bleeding from EVs or GVs without prior treatment [15]. The red color sign was more common in EVs than in CVs or FVs (P < 0.0001). Mucosal erosion over the varices at the site of bleeding was more common in CVs (P < 0.0005) and FVs (P < 0.0001) than in EVs. An ulcer at the bleeding point was more common in CVs (P < 0.01) and FVs (P < 0.0001) than in EVs. Gastric ulcer was more common in CVs (P < 0.05) and FVs (P < 0.001) than in EVs. Erosive gastritis was more common in FVs (P < 0.02) than in EVs. The red color sign, a strong risk factor for the ruptures frequently encountered in EVs, was completely absent in the FVs. All of the CVs manifesting the red color sign communicated with EVs that themselves manifested the red color sign. This might be due to the pronounced thickness of the mucosal layer overlying the FVs. FVs are usually two or three times larger than EVs and drain directly into an extremely dilated left gastric or posterior gastric vein [16]. The volume of blood flow through a FV therefore usually exceeds that through an EV. Gastric ulcers that develop over GVs represent a violation of the protective layer of gastric mucosa. Violation of the mucosal barrier overlying GVs places patients at risk of massive bleeding, especially when FVs are involved. Violations of this type could be an important precondition leading to variceal hemorrhage.
TREATMENT OF GVs
Treatment modalities for GVs include balloon tamponade, endoscopic treatment, embolization, and surgery.
Balloon tamponade
Balloon tamponade with the Sengstaken-Blakemore or the Linton-Nachlas tube is an emergent procedure for active hemorrhaging from GVs. The procedure is effective in the short term, but permanent hemostasis is obtained in fewer than 50% of cases [17,18] .
Endoscopic treatment
Two endoscopic modalities are used for the treatment of esophagogastric varices: endoscopic injection sclerotherapy (EIS) and endoscopic variceal ligation (EVL) [19-26]. EIS can be accomplished by either intravariceal or extravariceal injection [21-23,26]. In the treatment of EVs, intravariceal EIS obliterates both the interconnecting perforating veins and the veins feeding the EVs. Most veins in the cardia become parallel veins from the esophagogastric junction at the point at which the flow becomes hepatofugal. This makes it possible to treat most CVs concomitantly with EVs when correcting the latter by intravariceal EIS.
EIS and EVL are both effective for the treatment of bleeding from EVs and CVs. EIS has been less successful in the treatment of bleeding from FVs, however. When used with 1% polidocanol, 5% ethanolamine oleate iopamidol (EOI), or thrombin for this purpose, EIS has a high rate of operative mortality [27-29]. Fortunately, the rate of initial hemostasis has improved significantly since the introduction of N-butyl-2-cyanoacrylate (Histoacryl) as the sclerosant in EIS [30,31]. We should note, however, that bleeding from the GV injection site and rebleeding from the rupture point have been reported in patients receiving EIS [2,29].
While EVL is generally safe and effective for the treatment of CVs and FVs [32], it sometimes causes deep or extensive ulcers and increases the risk of ensuing ulcer hemorrhage or secondary bleeding [33]. FVs are usually two or three times larger than EVs and are directly connected to an extremely dilated left gastric or posterior gastric vein [16]. The volume of blood flow through an FV therefore usually exceeds that through an EV [34]. A mucosal injury remains on the varices after endoscopic treatment.
If the blood flow in the varices cannot be stopped completely, bleeding may recur at the site of this mucosal injury. This underlines the importance of ensuring the complete obliteration of the blood flow when treating FVs endoscopically. It may be dangerous to treat FVs by EVL alone.
GVs have also been treated by a combined endoscopic method using a detachable snare and simultaneous EIS and O-ring ligation [35]. This technique is not yet in widespread use, however. Our group published a report on the treatment of ruptured GVs by EIS with Histoacryl followed by O-ring ligation (endoscopic scleroligation: EISL) [24]. EISL was developed as a treatment modality for EVs to prevent bleeding from the injection site during needle removal [21,36]. When treating GVs by EIS with Histoacryl, the almost immediate hardening of the Histoacryl around the needle hinders the removal of the needle after the injection. In some cases, bleeding from the GV injection site or rebleeding from the rupture point also occurs [2,29]. Our group used the EISL procedure to treat ruptured GVs, puncturing near the rupture points and simultaneously ligating the injection sites and rupture points. EISL effectively stopped the bleeding from the GVs, enabled swift and easy needle removal, and successfully eliminated both bleeding from the injection site and rebleeding from the rupture point. An O-ring was placed at the point of the EISL injection with Histoacryl and left in position for a long time. As of this writing, EISL with Histoacryl is considered the most promising treatment for hemorrhaging GVs.
Interventional radiology (IVR)
The portal hemodynamics of GVs, the main feeding veins from the portal system, and the main drainage veins into the vena cava should be determined in all patients with GVs. Angiography can determine the hemodynamics of the GV during treatment by embolization. Transportal obliteration: Two methods have been used to obliterate the feeding veins of GVs: percutaneous transhepatic obliteration and trans-ileocolic vein obliteration. The catheter is inserted directly into the portal vein, the portal circulation is visualized by portography, a balloon catheter is inserted selectively into the inflow site of the veins feeding the varices, the balloon is inflated, and a test dose of a contrast medium is injected to determine the optimal volume of sclerosant fluid. Five percent EOI and/or 500 g/L glucose is injected to obliterate the feeding vein, and steel coils are then used to complete the obliteration [37]. The procedure is quite effective, though transportal obliteration alone is sometimes incomplete, especially in FVs. Balloon-occluded retrograde transvenous obliteration (BRTO): BRTO is a notable IVR procedure developed specially for the treatment of FVs. The technique is performed by inserting a balloon catheter into the outflow shunt (gastric-renal shunt or gastric-inferior phrenic vein shunt) via the femoral or internal jugular vein. Any existing collateral veins are treated with coils, absolute ethanol, or a small amount of 5% EOI. The balloon is inflated and a test dose of contrast medium is injected to determine the optimal volume of the sclerosant solution. Five percent EOI is slowly injected through the catheter until the shunt is filled with the sclerosant fluid. The catheter is removed after 24 h of balloon occlusion [38-40]. A remarkably high rate of FV eradication or reduction in FV size can be expected if the BRTO procedure is technically successful. Indeed, long-term eradication of treated FVs without recurrence is achieved in most patients [38,41]. Kanagawa et al [38] confirmed eradication of FVs in 97% of 32 patients treated by this procedure, and no FVs recurred in any of those patients within an average follow-up period of 14 mo. In earlier reports, the eradication rate of FVs exceeded 89% and the recurrence rate was less than 7%. In light of the minimal invasiveness and high safety of the procedure, BRTO is applicable not only in elective cases but also in emergency cases with FVs.
FV treatment by BRTO has two significant effects, namely, eradication of the FVs themselves and obliteration of the unified portal-systemic shunt. Thus, most of the benefits and adverse effects of BRTO are related to obliteration of the unified portal-systemic shunt. Benefits such as decreased blood ammonia levels and improved porto-systemic encephalopathy are sometimes observed. Possible adverse effects include transient ascites, increases of ascites, pleural effusion, and the appearance of EVs manifesting the red color sign. These adverse effects may be due to elevation of the portal pressure in reaction to the occlusion of the portal-systemic shunt. Partial splenic artery embolization (PSE): The femoral artery approach is used for super-selective catheterization of the splenic artery. The catheter tip is placed as distally as possible in either the hilus of the spleen or in an intrasplenic artery. Embolization is achieved by injecting 2-mm cubes of gelatin sponge suspended in a saline solution containing antibiotics [42,43]. PSE has been performed to treat hypersplenism, EVs, GVs, portal hypertensive gastropathy, pancreatic carcinoma, and portosystemic encephalopathy [37,43-53]. Our group evaluated PSE in a long-term study of 26 patients with hepatic cirrhosis alongside 26 patients who did not undergo the PSE procedure [42]. The red blood cell counts of the PSE (+) group increased significantly by 6 mo after the procedure and remained increased for up to 7.5 years. The platelet counts peaked only 2 wk after PSE and gradually fell thereafter. Even so, the platelet counts remained significantly higher than the pre-PSE level for up to 8 years. No significant changes were observed in the aspartate aminotransferase and alanine aminotransferase activities in serum during the follow-up. Cholinesterase activity was increased significantly by 6 mo after PSE and remained increased for more than 7 years. The serum albumin concentration increased significantly from 6 mo after PSE and the level remained significantly increased for 6 years. Survival did not differ between the PSE (+) and PSE (-) groups. PSE, a non-surgical treatment, can benefit patients with cirrhosis by improving the capacity for hepatic protein synthesis and conferring protection against hemorrhage due to thrombocytopenia. Combination modalities with IVR: Our group also reported the long-term results of PSE as a supplemental treatment for portal-systemic encephalopathy. We divided 25 patients with portal-systemic encephalopathy due to portal-systemic shunts into two groups, one treated by transportal obliteration and/or BRTO of the portal-systemic shunt followed by PSE (PSE (+) group; n = 14), the other treated by transportal obliteration and/or BRTO of the portal-systemic shunt without PSE (PSE (-) group; n = 11). The serum ammonia levels and grades of encephalopathy were lower in the PSE (+) group than in the PSE (-) group at 6, 9, 12, and 24 mo after treatment. Obliteration of the portal-systemic shunt raised the portal venous pressure in every case. As all of the patients had cirrhosis, the portal-systemic shunt drainage had reduced portal hypertension, and obliteration of the shunt led to portal congestion and increased portal venous pressure. Our study thus confirmed the benefits of adding PSE to obliteration of the portal-systemic shunt in patients with portal-systemic encephalopathy [43].
PSE is performed incrementally during the monitoring of the portal pressure in order to reduce the portal venous pressure to the level measured before obliteration of the veins feeding and/or draining the GVs [22,42,43,49,54] . PSE is a supplemental modality to prolong the effect of obliteration of the veins feeding and/or draining the GVs.
Combination of endoscopic treatment and IVR
Treatment of GVs solely by endoscopic modalities or by IVR is occasionally incomplete. Our group previously reported that combined treatments with IVR and endoscopic modalities had significant impacts on long-term rebleeding and retreatment rates in patients with EVs or GVs [37,48,50,51]. In elective cases, complete GV treatment should be administered in order to prevent rebleeding with greater assurance.
Surgery
A number of surgical procedures have been developed to manage esophagogastric varices. These can be classified as shunting and nonshunting procedures. The goal of shunting is to reduce the incidence of variceal bleeding by lowering the pressure in the portal system using a portal-systemic shunt. While the standard portocaval shunt effectively reduces the incidence of variceal bleeding, impairment of hepatic protein metabolism in patients undergoing the procedure frequently leads to the development of hepatic encephalopathy due to hyperammonemia [55-57]. The distal splenorenal shunt (DSRS) was developed by Warren et al [58] in 1967 as a way to preserve portal blood flow through the liver while lowering variceal pressure. The hope, in developing this approach, was to prevent both bleeding and hyperammonemia. While DSRS effectively prevents rebleeding, patients who undergo DSRS can still develop hyperammonemia. Our group responded by designing a DSRS with splenopancreatic disconnection and gastric transection, modifications intended to prevent the loss of shunt selectivity. This modified DSRS has been proved to reduce the incidence of postoperative hyperammonemia [59].
As an alternative to shunting, Hassab [60] and Sugiura et al [61] developed methods of gastro-esophageal decongestion and splenectomy for the treatment of varices. The Hassab operation devascularizes the distal esophagus and proximal stomach. Splenectomy, selective vagotomy, and pyloroplasty can be performed concomitantly with the procedure. Sugiura et al [61] developed a method of esophageal transection for patients with GVs and EVs. Sugiura's method is performed concomitantly with the Hassab operation to divide and reanastomose the distal esophagus in order to disrupt the blood supply to the EVs. While both procedures may solve the problem of hepatic encephalopathy, varices are likely to recur earlier than in patients undergoing DSRS [62].
"year": 2006,
"sha1": "35a3b5eca2150e6dbd997e279d520aea4d806d25",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v12.i37.5926",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "f5130d6994c7468cff5760cc3e3f12ee0b3d898d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Assessing the Predictive Accuracy of EORTC, CUETO and EAU Risk Stratification Models for High-Grade Recurrence and Progression after Bacillus Calmette–Guérin Therapy in Non-Muscle-Invasive Bladder Cancer
Simple Summary We aimed to identify risk factors and evaluate the accuracy of existing risk stratifications developed for non-muscle-invasive bladder cancer (NMIBC) regarding their ability to predict high-grade recurrence and progression. We included 171 NMIBC patients treated with TURBT and adjuvant BCG, of whom 73 experienced recurrence (42.7%) and 29 developed progression (17%). The available risk models (EORTC/CUETO/EAU) demonstrated limited accuracy in predicting high-grade recurrence-free survival (RFS) and progression-free survival (PFS). Multivariable analysis identified independent predictors of high-grade RFS, including T1HG tumor at repeat TURBT, tumor multiplicity, previous history of high-grade NMIBC, and the EORTC2006 progression risk score. In conclusion, the available risk models lack accuracy in predicting high-grade RFS and PFS in BCG-treated NMIBC, suggesting potential for improvement with the inclusion of additional risk factors. Abstract The currently available EORTC, CUETO and EAU2021 risk stratifications were originally developed to predict recurrence and progression in non-muscle-invasive bladder cancer (NMIBC). However, they have not been validated to differentiate between high-grade (HG) and low-grade (LG) recurrence-free survival (RFS), which are distinct events with specific implications. We aimed to evaluate the accuracy of available risk models and identify additional risk factors for HG RFS and PFS among NMIBC patients treated with Bacillus Calmette–Guérin (BCG). We retrospectively included 171 patients who underwent transurethral resection of the bladder tumor (TURBT), of whom 73 patients (42.7%) experienced recurrence and 29 (17%) developed progression. Initially, there were 21 low-grade and 52 high-grade recurrences. The EORTC2006, EORTC2016 and CUETO recurrence scoring systems lacked accuracy in the prediction of HG RFS (C-index 0.63/0.55/0.59, respectively). The EAU2021 risk stratification and the EORTC2006, EORTC2016, and CUETO progression scoring systems demonstrated low to moderate accuracy (C-index 0.59/0.68/0.65/0.65) in the prediction of PFS. In the multivariable analysis, T1HG at repeat TURBT (HR = 3.17, p < 0.01), tumor multiplicity (HR = 2.07, p < 0.05), previous history of HG NMIBC (HR = 2.37, p = 0.06) and the EORTC2006 progression risk score (HR = 1.1, p < 0.01) were independent predictors of HG RFS. To conclude, the available risk models lack accuracy in predicting HG RFS and PFS in NMIBC patients treated with BCG.
Introduction
Bladder cancer is a heterogeneous disease, and three-quarters of patients are diagnosed at an early stage [1,2]. Non-muscle-invasive bladder cancer (NMIBC) denotes tumors confined to the mucosa and submucosa, which can be effectively treated with transurethral resection of the bladder tumor (TURBT), although recurrences are common [1,2]. Intravesical instillations with Bacillus Calmette-Guérin (BCG) are recommended to reduce the recurrence risk in intermediate- and high-risk NMIBC [1]. Recurrences can lead to progression and worsen prognosis, and, even if benign, can substantially affect quality of life. Our recent population-based study showed that cancer-specific deaths are not uncommon in long-term follow-up of high-grade NMIBC, occurring in up to 19% of high-grade T1 tumors [3]. Multiple prognostic stratification tools have been established to facilitate the choice of optimal risk-adapted adjuvant therapy and to tailor follow-up [4-8]. Nomograms and risk-scoring models were developed in different cohorts of patients, and external validations reveal their limitations in discriminative power and calibration [9-12].
The European Organization for Research and Treatment of Cancer (EORTC) 2006 risk tables were established in a general cohort of NMIBC patients without adjuvant BCG. The Spanish Urological Club for Oncological Treatment (CUETO) risk tables were developed in a cohort of patients who received short-term BCG therapy. The EORTC 2016 nomogram was created for intermediate- and high-risk NMIBC patients treated with maintenance BCG [6-8]. Finally, the European Association of Urology (EAU) 2021 risk stratification was founded to determine progression risk in a group of patients with primary NMIBC who did not receive BCG [5]. Importantly, none of the abovementioned risk models incorporated information from repeat transurethral resection (reTUR), and reTUR was not routinely performed in the studies from which the EORTC 2006, CUETO and EORTC 2016 models were developed [6-8]. A recent meta-analysis reported the contemporary prognostic role of reTUR and its association with recurrence-free survival [13]. External validation of EORTC 2006 and CUETO revealed unsatisfactory accuracy and discrimination in a retrospective analysis of 4689 patients with NMIBC [9]. Another external validation of available nomograms by Krajewski et al. demonstrated an overestimation of progression risk and low discrimination for recurrence when using the CUETO and EAU 2021 risk models in NMIBC treated with routine reTUR and adequate BCG [10]. Validation in a cohort of high-risk BCG-exposed patients revealed an overestimation of progression risk for the updated EAU 2021 model [11].
Notably, none of the aforementioned nomograms was designed or validated to assess the risk of high-grade (HG) recurrences, which have distinct prognoses and implications compared to low-grade (LG) recurrences [14,15]. Approximately 30% of recurrences occurring in patients treated with BCG are of low grade, which is not considered therapy failure and should not prompt cessation of the treatment [14,15]. Only high-grade recurrence during BCG (or progression) meets the BCG-unresponsive criteria and warrants discontinuation of further BCG instillations [1,15,16].
In this study, we aimed to identify the risk factors for high-grade recurrence and progression among BCG-treated patients with intermediate-, high- and very-high-risk NMIBC, and to validate the accuracy of currently used risk stratifications.
Study Design and Selection Criteria
This is a retrospective, single-tertiary-center study. The inclusion criteria were the presence of intermediate-, high-, or very-high-risk NMIBC in adult patients treated with TURBT between 2010 and 2019 who received subsequent intravesical BCG instillations. The exclusion criteria encompassed lack of an adequate induction course of BCG, defined as at least 5 of 6 instillations (n = 11); delayed BCG therapy > 4 months after the last TURBT (n = 7); and the presence of isolated carcinoma in situ (CIS/Tis) without a concomitant papillary tumor (n = 27).
Treatment and Follow-up
All patients underwent TURBT at our department. TURBT was performed with the patient in the lithotomy position under spinal or general anesthesia, in accordance with the EAU clinical guidelines [1]. A resection loop with monopolar current was utilized. All patients who received incomplete initial TURBT underwent repeat TURBT (also called second or restaging transurethral resection). ReTUR was performed whenever indicated by the EAU clinical guidelines or upon the treating physician's decision [1].
Surgical specimens were reviewed by a genitourinary pathologist, graded according to the 1973 and 2004 WHO grading systems and staged according to the 2009 TNM classification.
Patients with high-grade or T1-stage tumors, CIS, or multiple and recurrent low-grade Ta tumors were qualified for BCG in the standard schedule. All patients received an induction BCG course of at least 5 of 6 weekly intravesical instillations [1]. The maintenance schedule included 3 weekly instillations at 3, 6, 12, 18, 24, 30 and 36 months. Adequate BCG was defined as at least 5 of 6 instillations of induction and 2 of 3 instillations of the first maintenance course [1,17]. A full dose of BCG was administered, containing at least 2 × 10⁸ and no more than 3 × 10⁹ viable units. The RIVM strain was used in 98.2% of patients.
Follow-up involved regular cystoscopy and urine cytology performed every 3 months in the first two years and every six months from the second to the fifth year [1,17]. Any suspicion of recurrence or progression was always verified with TURBT.
Outcomes
High-grade recurrence-free survival was the primary outcome. Low-grade recurrence-free survival, recurrence-free survival, and progression-free survival were secondary outcomes. Progression was defined as the occurrence of muscle-invasive bladder cancer or the development of distant metastasis. High-grade recurrence was defined as any grade 3 or high-grade recurrence following BCG. Low-grade recurrence was defined as an initial grade 1/2 or low-grade papillary tumor recurrence following BCG. Salvage radical cystectomy was performed in eligible patients upon progression or high-grade recurrence.
Survival time was calculated from the date of the index TURBT to the event of interest. Patients were censored at the date of last follow-up, death due to any cause, or the date of salvage radical cystectomy. To estimate PFS following LG and HG recurrences, the date of recurrence was used as the index point in the survival analysis.
Ethics Statement
Due to the character of this study, the Institutional Review Board waived the need for study approval. This study was performed in accordance with the Declaration of Helsinki and its later amendments.
Statistical Analysis
Descriptive variables included clinical, histopathological and survival data. Histopathological data included primary staging, grading, the presence of concomitant CIS, and the pathology at reTUR. Clinical data included previous patterns of recurrence (HG/LG/none and the frequency per year), tumor size, multifocality, age, gender, and comorbidities summarized with the Charlson comorbidity index, as described previously [18].
Patients were risk stratified using the EAU 2021, EAU 2019, AUA, EORTC 2006, EORTC 2016 and CUETO risk models and scores [5-8,19]. Dedicated recurrence and progression risk scores were calculated where applicable. Risk stratification was performed based on the clinical and histopathological data available at the index TURBT.
Validation of the risk models was performed with Cox proportional hazards regression. Discrimination was assessed using the concordance index (C-index) and the area under the ROC curve (AUC). For calibration, we compared the actual estimates from Kaplan-Meier curves with the expected survival rates predicted by the respective risk models.
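A minimal sketch of this validation step with the R survival package is shown below; the data frame d, its column names, and the score variable are hypothetical placeholders for the study data and a pre-computed risk-model score.

```r
# Discrimination of an existing risk score for HG recurrence-free survival:
# fit a Cox model on the score and read off the concordance index.
library(survival)
fit <- coxph(Surv(time_months, hg_recurrence) ~ eortc_score, data = d)
summary(fit)$concordance   # C-index and its standard error
```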
The Kaplan-Meier method was used to compute survival estimates at the 1- and 5-year time points. The reverse Kaplan-Meier method was used to estimate the median follow-up with interquartile ranges (IQR). Cox proportional hazards (CPH) regression was used for the survival analyses. Univariable and multivariable Cox proportional hazards analyses were performed. The multivariable analyses included only variables selected on the basis of the univariable analyses, with stepwise selection of variables applied. Hazard ratios along with 95% confidence intervals (95% CI) were derived from the CPH regressions.
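The corresponding estimates can be sketched as follows (again with hypothetical variable names); the reverse Kaplan-Meier estimate of follow-up is obtained by flipping the event indicator so that censoring is treated as the event.

```r
# 1- and 5-year Kaplan-Meier estimates, and reverse-KM median follow-up.
library(survival)
km <- survfit(Surv(time_months, hg_recurrence) ~ 1, data = d)
summary(km, times = c(12, 60))               # survival at 12 and 60 months
rkm <- survfit(Surv(time_months, 1 - hg_recurrence) ~ 1, data = d)
quantile(rkm, probs = c(0.25, 0.5, 0.75))    # median follow-up with IQR
```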
In order to internally validate the performance of our risk model and mitigate the potential effects of overfitting, we employed a bootstrap resampling technique. Specifically, we generated 300 bootstrap samples, each drawn with replacement from the original dataset while maintaining the same sample size. The optimism of the C-index was calculated.
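A sketch of this Harrell-style optimism correction is given below, assuming the hypothetical data frame d and predictor names used earlier (the four predictors mirror the multivariable model reported in this study): in each resample the model is refit, and the average drop in concordance when the resample model is applied back to the original data estimates the optimism.

```r
# Optimism-corrected C-index with 300 bootstrap resamples.
library(survival)
f <- Surv(time_months, hg_recurrence) ~ t1hg_retur + multiplicity +
  prior_hg + eortc_prog_score
apparent <- summary(coxph(f, data = d))$concordance[1]
optimism <- mean(replicate(300, {
  b <- d[sample(nrow(d), replace = TRUE), ]         # bootstrap resample
  fit_b <- coxph(f, data = b)
  c_boot <- summary(fit_b)$concordance[1]           # C on the resample
  lp <- predict(fit_b, newdata = d, type = "lp")    # resample model on original data
  c_orig <- summary(coxph(Surv(time_months, hg_recurrence) ~ lp,
                          data = d))$concordance[1]
  c_boot - c_orig
}))
corrected_c <- apparent - optimism
```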
Continuous variables were presented as median values with interquartile ranges. Differences between groups were evaluated with the Mann-Whitney U test for continuous variables and with Fisher's exact test or the Chi-square test for categorical variables. For all statistical analyses, a two-sided p-value < 0.05 was considered statistically significant. SAS software version 9.4 (SAS Institute, Cary, NC, USA) was used to conduct the statistical analyses.
Oncological Outcomes
Overall, 73 patients (42.7%) experienced recurrence and 29 (17%) developed progression to MIBC following BCG therapy. A comparison of baseline characteristics between patients with initial LG and HG recurrence is presented in Appendix A, Table A1. In our cohort, the 5-year estimates of PFS, HG RFS and RFS were 81.4%, 65.2% and 53.7%, respectively. Overall, 21 recurrences were initially low grade and 52 were initially high grade. Kaplan-Meier curves illustrating PFS, RFS, HG RFS and LG RFS are presented in Figure 1.
Significance of Low-Grade and High-Grade Recurrences
After initial low-grade recurrence, four patients (19.1%) developed high-grade recurrence and three (14.3%) progressed to MIBC. Among patients with initial high-grade recurrence, 26 (50%) developed MIBC, and 9 (17.3%) underwent salvage cystectomy for recurrent NMIBC. The median time from initial high-grade recurrence to subsequent MIBC progression was 9.6 months. The estimated 5-year PFS was 33.9% after HG recurrence and 88.3% after LG recurrence.
Novel EAU 2021 Risk Stratification
The EAU 2021 risk stratification successfully grouped patients according to progression risk, which was 3.5%, 20%, and 25.8% in the IR, HR, and VHR groups at 5-year follow-up. Comparison of these estimates with the reference risks reported in the EAU 2021 risk tables indicates poor calibration of the model, with underestimation of risk in the HR group (predicted risk of 9.6-11% vs. observed 20%) and overestimation in the VHR group (predicted risk of 40-44% vs. observed 25.8%).
The EAU 2021 risk stratification also successfully stratified patients according to high-grade recurrence risk, which was 13.8%, 35.2%, and 48.5% in the IR, HR, and VHR groups at 5 years. As EAU 2021 did not report HG recurrence risks, we were unable to validate its calibration in that setting.
Discussion
In this study, we validated the currently available risk classifications and proposed our own risk model for the prediction of high-grade recurrence in BCG-treated patients with NMIBC.
We found that existing risk stratifications and models for NMIBC recurrence and progression lack accuracy in a population of patients with predominantly high- and very-high-risk NMIBC who receive BCG. Additionally, initial high-grade recurrences were more frequent and conferred a very high progression risk (5-year PFS of 34%), whereas initial low-grade recurrence, although not benign, preceded progression in only a minority of cases (5-year PFS of 88%). There is a need for dedicated risk models specifically tailored to assess the risk of high-grade recurrence, as high- and low-grade recurrences differ in terms of further prognosis and implications for treatment.
Moreover, none of the available stratifications was constructed or validated to predict high-grade recurrence; all were designed to predict any recurrence. Our study showed the poor accuracy of EORTC 2006, EORTC 2016, and CUETO for the prediction of high-grade recurrence (C-indices 0.55-0.63). We found that the novel EAU 2021 risk model could be used not only for the stratification of progression risk but also for high-grade recurrence risk, albeit with poor accuracy for both events (C-indices of 0.57 and 0.59, respectively). We identified several risk factors that have not been used in any of the available stratification tools but appear to be significant predictors of high-grade recurrence. Our risk model included reTUR pathology and the previous grade pattern as adjuncts to the EORTC progression risk score and tumor multiplicity.
Furthermore, none of the available risk-scoring models provided higher accuracy for progression risk assessment than the set of four risk factors: the presence of T1G3, tumor multiplicity, presence of residual T1HG at reTUR, and a previous history of HG tumor. However, due to the relatively small sample size, validation in larger datasets is required. In our analysis, PFS was overestimated in the EAU 2021 HR group and underestimated in the VHR cohort.
We believe that dedicated risk models for the prediction of high-grade recurrence in BCG-treated populations can be clinically useful when counseling patients on further management after TURBT for high-grade NMIBC. To date, such models are not available, and the widely recognized EORTC 2016 and CUETO recurrence risk tables are recommended by the EAU guidelines for estimating the risk of any recurrence in BCG-treated NMIBC [1]. However, as we have demonstrated, the discrimination and accuracy of such models are relatively low. Interestingly, the CUETO and EORTC progression risk scores exhibited higher accuracy than the dedicated CUETO and EORTC recurrence risk scores in predicting high-grade recurrence. This can be explained by the different weights of cardinal risk factors such as tumor category, grade, and presence of CIS, which contribute strongly to the CUETO/EORTC progression risk scores but have little effect on the CUETO/EORTC recurrence risk scores [6,8]. In our multivariable analysis, the EORTC progression risk score was selected as the most significant among the risk scores and entered the model for HG recurrence risk. Other risk factors included pathology at reTUR, with the presence of residual HG T1 as a strong adverse feature. High-grade T1 at reTUR was previously reported to be associated with very unfavorable 5-year RFS (18%) and PFS (52%) among BCG-treated patients [20].
It is concerning that reTUR findings are not included for prognostic purposes in any of the available risk models. In the study that developed the novel EAU 2021 risk model (22% of patients with T1), reTUR was performed in 16% of patients, whereas in the EORTC 2006, EORTC 2016, and CUETO populations, reTUR was not routinely performed [6-8]. Importantly, reTUR can result in a change in the risk score (e.g., through detection of CIS) and ensures completeness of the prior resection. In our study, reTUR was performed in 73.8% of patients; residual papillary tumors at reTUR were found in 27.7% of patients in whom reTUR was performed, and primary detection of CIS not biopsied during the index TURBT occurred in 6.3% of patients in whom reTUR was performed.
A previous history of high-grade tumors, compared with a history of low-grade tumors, was independently associated with an increased risk of further high-grade recurrence. A study by Thomas previously showed that among high-risk patients, a history of high-grade tumors was associated with a higher risk of progression to MIBC compared with a history of progressive low-grade and primary tumors [21]. Despite its inclusion in the calculation of the EORTC score, multiplicity remained an independent risk factor in the multivariable analysis.
Our risk model was characterized by acceptable accuracy, with a C-index of 0.74, and did not reveal a significant risk of overfitting. Such a model can serve as guidance when counseling patients before BCG therapy. Our model mostly identifies patients who will be considered BCG-unresponsive. Therefore, patients with T1HG at reTUR, a previous high-grade tumor, multiple lesions, and higher EORTC progression risk scores should perhaps be offered enrollment in clinical trials aiming at improving the response to BCG [2]. The majority of high-grade recurrences result in BCG unresponsiveness, except for late relapses after BCG interruption (>6 months) and papillary Ta or CIS before maintenance BCG administration [4,16]. In our previous paper, we showed that inflammatory markers could be used as predictors of BCG-unresponsive disease [22].
The primary challenge and ultimate goal of the update to the EAU 2021 risk stratification was to identify patients who will progress [5]. Such a group is at the highest risk of cancer-specific death, which could be prevented by immediate or early cystectomy. The novel EAU 2021 risk model successfully identified a group of patients with a 40% risk of progression at 5-year follow-up [5]. However, external validations of the EAU 2021 risk model underscored the overestimation of risk in BCG-treated high- and very-high-risk patients [10,11]. In our cohort, progression risk overestimation with the EAU 2021 model was also observed, but only for the VHR group; in the HR group, the risk was actually underestimated. We identified the presence of T1G3, tumor multiplicity, presence of residual T1HG at reTUR, and a previous history of HG tumor as independent risk factors for progression. It is already clear that high-grade T1 tumors are the most likely among NMIBC to progress and are associated with substantial long-term cancer-specific mortality [18]. This has been raised as an argument for early radical cystectomy to prevent progression and its fatal consequences [23].
Low-grade recurrences during BCG therapy have been reported in a few other papers. A study by Li et al. demonstrated that the grade of tumor recurrence following intravesical BCG treatment serves as a crucial indicator for predicting the progression of bladder cancer to muscle-invasive or metastatic urothelial carcinoma [14]. Although individuals experiencing low-grade recurrences have fewer progression events than those with high-grade recurrences, their estimated 5-year progression rate was still 14.4% in that study [14]. Our study confirmed that low-grade recurrence can precede high-grade recurrence and progression, which in our population occurred relatively late in the follow-up. Nevertheless, progression risks are significantly lower for LG than for HG recurrence, and the presence of LG recurrence does not meet the BCG-unresponsive criteria and is not an indication for BCG interruption [4,15].
We anticipate an imminent update of the current risk models as our understanding of the role of the urinary microbiome expands and as urine- and blood-based biomarkers are developed [2]. Recent studies have underscored the potential significance of the urinary microbiome in the detection and course of NMIBC, sparking further interest in this area [24,25]. However, novel blood-based and easily accessible biomarkers, such as systemic immune-inflammatory markers and the well-recognized neutrophil-to-lymphocyte ratio, were not validated in this study because they are absent from the risk models currently used in clinical practice [22,26]. The suboptimal accuracy of existing models and their lack of newly developed and potentially significant prognostic factors highlight their limitations and emphasize the need to develop new, more comprehensive risk assessment tools.
The limitations of our study stem from its retrospective, single-center design and small sample size. Our cohort did not include patients with isolated CIS without concomitant papillary tumors; we excluded these patients because they were also excluded from the studies developing the EORTC 2016 and EAU 2021 risk models [5,7]. Information regarding smoking was not available for all patients and was therefore not included in the regression analysis, despite recent evidence for the impact of smoking on RFS and PFS [27]. Another limitation that must be acknowledged is the suboptimal treatment regimen duration, which nonetheless reflects real-world clinical practice. Eleven patients received only 5 of 6 instillations of induction BCG due to adverse events; in 6 of these 11 patients, BCG was continued with further maintenance instillations once the adverse events resolved. Another important issue, considering the recommended BCG regimen duration, is the low percentage (13.5%) of patients who received the 3-year maintenance schedule. Notably, even in RCTs such as the SWOG study, which showed the superiority of maintenance BCG over induction alone, only 16% of patients completed the 3-year maintenance schedule [17]. On the other hand, in the EORTC-GU Cancers Group randomized study, 35% of patients completed the 3-year full BCG maintenance regimen to which they were allocated [28]. Our results reflect real-world treatment patterns, which provides valuable validation of the available risk models; such validation is necessary to ensure their applicability beyond a clinical trial setting.
Furthermore, future studies could incorporate emerging urine biomarkers and extended pathological assessment of TUR specimens, including T1 sub-staging and immune-related gene expression, to refine predictive models and enhance their clinical utility [2,29]. Additionally, multicenter validation studies will be imperative to confirm the generalizability and reliability of our findings across diverse clinical settings and within larger cohorts.
Conclusions
To conclude, the available risk models lack accuracy in predicting high-grade RFS and PFS in NMIBC patients treated with BCG. High- and low-grade recurrences have distinct prognoses and treatment implications. We found that, among the different risk models, the EORTC progression score had the highest accuracy for the prediction of high-grade recurrence. Pathology at reTUR, a previous history of high-grade NMIBC, and tumor multiplicity provided additional prognostic information. Further studies are required to improve the existing risk models for high-risk NMIBC treated with BCG.
Table 1. Baseline characteristics of included patients with NMIBC treated with BCG.
Risk Models

The EAU 2021 risk stratification and the EORTC 2006, EORTC 2016, and CUETO recurrence scoring systems lacked accuracy in the prediction of high-grade recurrence (C-indices 0.57, 0.63, 0.55, and 0.59, respectively). The EAU 2021 risk stratification and the EORTC 2006, EORTC 2016, and CUETO progression scoring systems demonstrated low to moderate accuracy (C-indices 0.59, 0.68, 0.65, and 0.65) in predicting progression to MIBC. Detailed analyses of the accuracy of the available risk models for the prediction of RFS, LG RFS, HG RFS, and PFS are presented in Table 2.

Table 2. Discrimination and accuracy of available risk models for recurrence and progression in NMIBC.
AUC 1 year: area under the curve for 1-year event-free survival; AUC 5 year: area under the curve for 5-year event-free survival. The p-value was calculated from a univariable Cox proportional hazards model for the selected risk model.
Table 3. Univariable Cox proportional hazards analyses for predicting high-grade recurrence-free survival and progression-free survival.
Table 4. Multivariable Cox proportional hazards analyses for predicting high-grade recurrence-free survival (A), recurrence-free survival (B), and progression-free survival (C).
Table A1. Comparison of baseline characteristics between patients who developed initial low-grade (n = 21) and high-grade recurrence (n = 52) following BCG therapy. | 2024-04-28T15:04:31.994Z | 2024-04-26T00:00:00.000 | {
"year": 2024,
"sha1": "e017b1db5da5ac687328b548f900cf1840fece1a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c94771e6c91f13a6c609a6e9aa8891a681320a4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251978707 | pes2o/s2orc | v3-fos-license | Endothelial versus pronephron fate decision is modulated by the transcription factors Cloche/Npas4l, Tal1, and Lmo2
Endothelial specification is a key event during embryogenesis; however, when, and how, endothelial cells separate from other lineages is poorly understood. In zebrafish, Npas4l is indispensable for endothelial specification by inducing the expression of the transcription factor genes etsrp, tal1, and lmo2. We generated a knock-in reporter in zebrafish npas4l to visualize endothelial progenitors and their derivatives in wild-type and mutant embryos. Unexpectedly, we find that in npas4l mutants, npas4l reporter–expressing cells contribute to the pronephron tubules. Single-cell transcriptomics and live imaging of the early lateral plate mesoderm in wild-type embryos indeed reveals coexpression of endothelial and pronephron markers, a finding confirmed by creERT2-based lineage tracing. Increased contribution of npas4l reporter–expressing cells to pronephron tubules is also observed in tal1 and lmo2 mutants and is reversed in npas4l mutants injected with tal1 mRNA. Together, these data reveal that Npas4l/Tal1/Lmo2 regulate the fate decision between the endothelial and pronephron lineages.
INTRODUCTION
Vascular development includes complex morphogenetic and cell differentiation processes, starting with the specification of the endothelial progenitors during late gastrulation, and their subsequent assembly into the major axial vessels (1,2). Several model systems have been used to study vascular development including the zebrafish, which offers unique advantages for live imaging (3,4) and is resilient against cardiovascular defects at embryonic and early larval stages (5).
Endothelial origins have been mapped to the lateral plate mesoderm (LPM), a tissue that arises on the ventrolateral sides of the vertebrate embryo and gives rise to a multitude of cell types including pronephron, endothelial, blood, and cardiac progenitors (6-9). Fate mapping experiments in zebrafish have shown that single cells labeled at 40 to 50% epiboly stages [~5 hours post-fertilization (hpf)] can give rise to both blood and endothelial cells, providing evidence for the so-called hemangioblast (10,11). Pronephron, endothelial, and hematopoietic progenitor populations overlap at the three-somite stage (11 hpf) but start to form distinct bilateral stripes of cells along the anterior-posterior axis at early- to mid-somitogenesis stages (~10 to 14 hpf) (12). Pronephron progenitors, which express pax2a and pax8, are located between the more lateral hand2-expressing cells, which give rise to smooth muscle and mesothelial cells (4,13), and the more medial endothelial and hematopoietic progenitors, which express etsrp (a.k.a. etv2; https://zfin.org/ZDB-GENE-050622-14), tal1 (a.k.a. scl; https://zfin.org/ZDB-GENE-980526-501), and lmo2 (2). Starting at around 14 hpf, endothelial and hematopoietic progenitors move along the ventral side of the forming somites toward the midline, where they coalesce into a vascular cord that remodels into two major axial vessels, the dorsal aorta (DA) and posterior cardinal vein (PCV) (14,15). Despite our growing understanding of cell fate determination within the LPM, the underlying cellular and molecular mechanisms remain incompletely understood.
In zebrafish, the earliest known event in endothelial specification is the expression of the basic helix-loop-helix-Per-Arnt-Sim (bHLH-PAS) transcription factor gene cloche/npas4l (16). cloche/npas4l mutants lack most endothelial and hematopoietic cells (17-19). Mechanistically, Npas4l positively regulates the expression of several target genes including the transcription factor genes etsrp, tal1, and lmo2 (20,21), thereby promoting the formation of endothelial and hematopoietic cells. Etsrp/Etv2 is an ETS (erythroblast transformation-specific) transcription factor required for endothelial and myeloid specification (22,23). In zebrafish etsrp morphants and mutants, etsrp-expressing cells, which differentiate into endothelial cells in wild-type animals, acquire alternative fates including cardiac and skeletal muscle (24,25). In mice lacking the bHLH transcription factor TAL1 (T-cell acute lymphocytic leukemia protein 1), vascular plexus remodeling and hematopoietic development are both impaired (26,27); and in explant cultures of Tal1 mutant mice, some endothelial progenitors contribute to spontaneously beating cardiac foci (28). Similar to what is observed in Tal1 mutant mice, zebrafish tal1 mutants lack the expression of the erythroid marker gata1a and lack erythrocytes (29), and they exhibit vascular defects (30). Endothelial and blood cell differentiation in npas4l mutants can be partially rescued by global overexpression of zebrafish tal1, as assessed by in situ hybridization (29). LIM domain only 2 (LMO2) is a LIM domain transcription factor required for hematopoietic development, and both zebrafish and mouse LMO2 loss-of-function models exhibit defective erythroid development (31,32). Zebrafish lmo2 mutants also display vascular defects that have been linked to reduced endothelial cell migration (33,34). Mechanistically, LMO2 is part of transcription factor complexes involving TAL1 (31,35), where it has been reported to function as a scaffold (36). While the importance of these four transcription factors in endothelial and hematopoietic cell specification has been established, their role in separating these lineages from other LPM-derived cell populations remains unclear.
Here, we trace npas4l-expressing cells in wild-type zebrafish embryos and in npas4l, etsrp, tal1, and lmo2 mutants. We find that npas4l reporter-expressing cells in npas4l mutants exhibit two distinct cell fate changes: (i) a contribution to pronephron tubules, which we also observed in tal1 and lmo2 mutants, and (ii) a contribution to skeletal muscle, which we, and others (25), also observed in etsrp mutants. Single-cell RNA sequencing (scRNA-seq) analysis complemented by live imaging reveals the existence of a population of LPM cells that coexpresses endothelial and pronephron markers, potentially representing multilineage progenitors. Lineage tracing shows that pax2a-expressing cells contribute to endothelial cells as well as the pronephron tubules, further supporting their multilineage potential. Together, these data indicate that a transcriptional network downstream of Npas4l promotes endothelial development at the expense of pronephron and somite fates, with Tal1 and Lmo2 contributing to the block toward pronephron fates and Etsrp to the block toward somite fates.
RESULTS
A knock-in reporter for npas4l expression visualizes early endothelial development

Zebrafish npas4l mutants lack most endothelial and blood cells (17), and npas4l function is required cell-autonomously in the endothelial lineage for its specification (19). However, what happens to npas4l-expressing cells in npas4l mutants is unclear; these cells could undergo apoptosis, differentiate into other lineages, or arrest in their differentiation process. To distinguish between these possibilities, we generated a reporter line to visualize and track npas4l-expressing cells. We generated an npas4l-p2a-Gal4-VP16 reporter line (npas4l bns313; Fig. 1A and fig. S1A) by inserting a p2a-Gal4-VP16 cassette (37) into the 3′ end of the endogenous npas4l coding sequence. Homozygous npas4l bns313 animals develop normally and are viable (Fig. 1B), indicating that npas4l function is not affected by this insertion.
When crossing the npas4l bns313 line to a fluorescent UAS reporter line, we observed reporter expression in presumptive endothelial progenitors as early as 10 hpf (tailbud stage; fig. S1B). At 24 hpf, endothelial cells and circulating cells express the npas4l reporter (Fig. 1B). At this stage, we also observed npas4l reporter expression in yolk syncytial layer cells and in a few skeletal muscle, pronephron tubule, and myocardial cells (Fig. 1B and fig. S2). npas4l reporter expression in circulating cells diminished over time and was not detectable beyond 48 hpf; however, npas4l expression in endothelial cells could still be observed at 120 hpf. Although npas4l expression peaks before the end of gastrulation and is barely observed after 24 hpf (16), the stability of Gal4-VP16 and green fluorescent protein (GFP) sustains the fluorescent labeling of npas4l-expressing cells for at least the first 48 hours of development.
While exhibiting a strongly decreased number of endothelial and blood cells, homozygous npas4l bns423 mutants also display ectopic npas4l reporter expression in skeletal muscle cells (Fig. 1, B and C) and in a ventrolateral population of round cells (Fig. 2A). As the morphology and anatomical position of these round cells suggested that they might be pronephron tubule cells, we immunostained transverse sections at 20 hpf and observed that they indeed express the pronephron transcription factor Pax2a (Fig. 2B). Analyses at later stages show that these cells contribute to the glomeruli (fig. S4A) and the mature pronephron tubules (fig. S4B). While the pronephron tubules in npas4l −/− embryos are clearly enlarged, the glomeruli appear to form as in wild type, as previously described (38).
To quantify these changes, we immunostained 24 hpf npas4l +/+, npas4l +/−, and npas4l −/− embryos for npas4l reporter expression and Pax2a protein and counted the number of npas4l/Pax2a double-positive pronephron tubule cells (Fig. 2C), Pax2a single-positive pronephron tubule cells (Fig. 2D), and npas4l reporter-expressing skeletal muscle cells (Fig. 2E) in all three genotypes. Although we consistently found a low number of npas4l reporter-expressing pronephron tubule and skeletal muscle cells even in npas4l +/+ embryos, these numbers increased significantly in npas4l mutants, from a median of 14 to 66.5 Pax2a+ pronephron tubule cells and from a median of 36 to 76.5 skeletal muscle cells (Fig. 2, C to E). We did not observe a clear Pax2a expression phenotype at 12 (fig. S5) or 14 (fig. S6) hpf, suggesting that the 24 hpf pronephron tubule phenotype is not due to an enlarged Pax2a expression domain at early somite stages. To test whether this pronephron tubule phenotype was an artifact of this newly generated npas4l mutant allele or reporter, we analyzed two established npas4l alleles, the m39 large deletion allele (fig. S7A) and the s5 point mutation allele (fig. S7B), at 24 hpf and observed enlarged Pax2a expression domains in these mutants as well. From these observations, we speculate that the npas4l/Pax2a double-positive cells are derived from endothelial progenitors that, in the absence of Npas4l function, contribute to the pronephron lineage. These experiments also revealed a significant increase in npas4l expression in pronephron tubule (median = 21) and skeletal muscle (median = 62) cells in npas4l +/− embryos (Fig. 2, C to E), suggesting a previously undescribed phenotype in these heterozygous embryos.
Together, these data indicate that npas4l mutants develop distinct cell specification phenotypes. The contribution of npas4l reporter-expressing cells to skeletal muscle in npas4l mutants is in agreement with previous findings regarding etsrp reporter expression and the abnormal differentiation of endothelial progenitors to skeletal muscle in etsrp mutants (25). In contrast, the contribution of npas4l reporter-expressing cells to the pronephron tubules has not been reported before.
Transcriptional profile of npas4l reporter-expressing cells in npas4l +/− and npas4l −/− embryos

To investigate whether npas4l reporter-expressing cells contributing to pronephron tubules and skeletal muscle in npas4l −/− embryos exhibit a global change in transcription indicative of a change in cell fate, we compared the transcriptomic profiles of npas4l reporter-expressing cells (5207 npas4l +/− cells and 5262 npas4l −/− cells) sorted from the trunks of 20 hpf embryos by scRNA-seq (Fig. 3A). By testing dimensional reduction parameters, we determined that clustering into eight subgroups best represented the data (Fig. 3B). We annotated clusters by cell-specific marker gene expression (Fig. 3, B and E, and tables S1 and S2), aided by published in situ hybridization data (ZFIN) and previous scRNA-seq data at similar embryonic stages (39). To investigate the relationships within and between clusters, we performed velocity analysis, and the resulting data (Fig. 3C) indicated changes in the contribution of npas4l reporter-expressing cells to the pronephron mesoderm (cluster 6) and skeletal muscle (clusters 2 and 3). The size of these three clusters was greatly expanded in npas4l −/− embryos compared with npas4l +/− siblings (Fig. 3D).
These data suggest that at 20 hpf, posterior npas4l reporter-expressing cells in npas4l mutants express LPM markers, as well as markers of pax2a-expressing LPM and pax3a-expressing paraxial mesoderm. Notably, the fli1a-expressing cells present in the posterior region of npas4l mutants (fig. S8) express genes associated with both pronephron and paraxial mesoderm, but not genes associated with hematopoietic development (differential gene expression analysis shown in table S3), indicating that, unlike in npas4l +/− embryos, these fli1a-expressing cells lack tal1 expression and thus hematopoietic potential. In addition, npas4l transcripts were only detected in the least differentiated cells of each of these lineages, indicating that npas4l is transiently activated in early progenitors and then quickly turned off as they differentiate (fig. S8), consistent with earlier observations (16,21).
Pax2a is coexpressed with early endothelial and blood markers in a subset of the LPM

On the basis of the identity of npas4l-expressing cells in npas4l wild-type and mutant zebrafish, we hypothesized that some cells in the LPM coexpress markers associated with endothelial and pronephron progenitors. To test this hypothesis, we first assessed the expression of the pronephron gene pax2a and of hematoendothelial reporters. pax2a mRNA and Pax2a protein are first detectable starting at 10 hpf,
and their expression partially overlapped with that of a fli1a:GFP reporter in bilateral mesodermal stripes when the latter first became detectable at 12 hpf (Fig. 4A). We confirmed these observations by three-dimensional (3D) imaging of the expression of the drl:mCherry pan-LPM reporter and the tal1:EGFP hematoendothelial reporter together with Pax2a immunostaining (movie S1), as well as by live imaging of pax2a:EGFP; lmo2:dsRed embryos (movie S3). This analysis revealed the coexpression of the tal1 reporter and Pax2a in a subset of cells within the drl reporter-expressing LPM stripes. Together, these data indicate that pax2a is expressed in a subset of cells that, by marker expression, are endothelial and hematopoietic progenitors.
To further test the hypothesis that some cells in the LPM coexpress endothelial and pronephron markers, we carried out an analysis that relies on endogenous gene expression rather than on transgenic reporters. Fate mapping experiments have shown that some cells in gastrula stage embryos can give rise to endothelium, blood, and pronephron cells (11). Moreover, the coexpression of pronephron markers together with endothelial and hematopoietic markers in tailbud stage LPM has been documented in several zebrafish scRNA-seq datasets (4,39). To investigate the time point at which the pronephron and endothelial lineages separate, we used the large zebrafish scRNA-seq dataset published by Wagner et al. (39), which provides a single-cell time course from pregastrulation stages to 24 hpf combined with a barcoding approach, thereby allowing lineage reconstruction. We extracted the 10, 14, and 18 hpf mesodermal cells and searched for coexpression of endothelial and pronephron markers. The expression of several early endothelial markers and of pax2a is displayed in a dot plot (Fig. 4D) and a uniform manifold approximation and projection (UMAP) plot (Fig. 4E). At 10 hpf, no distinct endothelial or pronephron clusters are observed but only one that coexpresses endothelial and pronephron markers. At the individual cell level, 44% (32 of 73) of all cells in this cluster coexpress fli1a and pax2a (table S4). It is important to note, though, that lowly expressed genes are often missed in droplet-based single-cell methods like 10x Genomics, potentially leading to an underestimate of the number of single- and/or double-positive cells.

[Fig. 3 caption, continued: ...and intermediate mesoderm (cluster 6). (E) Dot plot with the top five markers separating the clusters. Pseudo-bulk expression data and marker genes used to separate the clusters can be found in tables S1 and S2, respectively. Differential gene expression in the pseudo-bulk analysis can be found in table S3. The full and annotated dataset can be explored at https://bioinformatics.mpi-bn.mpg.de/20hpf_npas4l_expressing_cell or downloaded as an annotated data file at https://figshare.com/s/e6bdd5be14c7085d606c.]

This fli1a/pax2a
transcriptional overlap decreases over time as the endothelial and pronephron cells differentiate (table S4). To quantify these observations, we correlated fli1a expression with that of several pronephron markers in all mesodermal cells at 10, 14, and 18 hpf. This analysis indicates that fli1a expression indeed significantly correlates with that of several pronephron markers, including pax2a (40), osr1 (41), foxj1a (42), and hnf1bb (43) (Table 1 and table S5). This correlation decreases significantly over time; osr1 and foxj1a stop showing a correlation by 14 hpf, and no significant correlation of fli1a expression with that of any pronephron marker is observed at 18 hpf.
pax2a-expressing cells contribute to endothelium and blood
We next performed lineage labeling of pax2a-expressing cells to determine whether endothelial and hematopoietic cells can arise from this population. In zebrafish embryos, pax2a is prominently expressed in the otic placode, hindbrain, and spinal cord neurons, as well as in the pronephron progenitors (44). These kidney progenitors derive from a medial subset of the LPM, at times referred to as the intermediate mesoderm (4,44,45). To determine the fate of pax2a-expressing cells, we generated a pax2a:creERT2 tud53 knock-in line, in which the CreERT2-encoding sequence was inserted at the translational start codon of pax2a (Fig. 5A). Expression of the resulting CreERT2 transgene faithfully recapitulates endogenous pax2a expression (fig. S9) and enables 4-OH-tamoxifen (4-OHT)-dependent Cre activation in pax2a-expressing cells to recombine loxP-based reporter transgenes, resulting in the permanent labeling of pax2a-expressing cells and their descendants (46,47). Crossing pax2a:creERT2 to hsp70l:loxP-STOP-loxP-EGFP (hsp70l:Switch; Fig. 5A) (48) and treating the embryos with 4-OHT between 10 hpf (one- to two-somite stage) and 24 hpf, we found enhanced GFP (EGFP) expression (i.e., lineage labeling) at 72 hpf in the descendant structures of the pax2a-expressing progenitors (49), including the diencephalon, rhombomeres 3 and 5, the otic-epibranchial progenitor domain, and podocytes, and along the entire pronephron tubule (fig. S10A). In 19 of 20 embryos exhibiting recombination, we also observed EGFP expression in endothelial cells of the intersomitic vessels (ISVs), DA, cardinal vein, and caudal hematopoietic territory (Fig. 5, B and C, Table 2). In 4 of 20 larvae, we also observed EGFP-expressing circulating cells that are likely erythrocytes (movie S2). Quantification of the data revealed that 4-OHT-induced recombination during early somitogenesis (starting at ~10 hpf) resulted in a median of 7.5 endothelial cells per larva exhibiting EGFP, whereas recombination during late somitogenesis (starting at ~16 hpf) resulted in a median of 4.5 endothelial cells per larva (Fig. 5, C and D). We also observed endothelial EGFP expression when combining pax2a:creERT2 with the endothelium-specific lmo2:loxP-dsRED-loxP-EGFP switch line (Fig. 5, E and F) (50), further showing that pax2a-expressing cells can contribute to the endothelium.
Etsrp, Tal1, and Lmo2 regulate distinct aspects of endothelial development
Npas4l promotes the expression of the three transcription factor genes etsrp, tal1, and lmo2 (21). We hypothesized that the reduced expression of these transcription factors in npas4l mutants plays a role in the contribution of npas4l reporter-expressing cells to the pronephron tubules and skeletal muscle. To test this hypothesis, we generated mutant lines for these three genes (Fig. 6A and fig. S11) and analyzed their phenotypes in endothelial reporter lines and in the npas4l reporter background. These newly generated mutant alleles exhibit all previously published phenotypes (23,25,30,33,51), including the vascular defects (fig. S12). We counted the number of npas4l/Pax2a double-positive cells in the presumptive pronephron tubules of npas4l, etsrp, tal1, and lmo2 mutants at 24 hpf and observed an increase in npas4l, tal1, and lmo2 mutants but not in etsrp mutants (Fig. 6, B and E, and fig. S13). We observed the same upward trend in the overall number of Pax2a-expressing cells in the presumptive pronephron tubules of npas4l, tal1, and lmo2 mutants, indicating a physiological change rather than marker misexpression (Fig. 6C). In addition, we also observed that some npas4l reporter-expressing cells in the PCV of lmo2 mutants were Pax2a positive (fig. S14), indicating a potential transition from an endothelial toward a pronephron fate. Conversely, the number of npas4l reporter-expressing skeletal muscle cells was increased in npas4l and etsrp mutants but not in tal1 or lmo2 mutants (Fig. 6D). As an interesting side observation, double mutants for npas4l/etsrp and etsrp/tal1 do not develop any endothelial cells, as indicated by the lack of fli1a:EGFP expression (fig. S15). To our knowledge, this is the first time that a vertebrate model completely devoid of endothelial cells has been generated. Together, these data indicate that the contribution of npas4l-expressing cells to pronephron tubules is blocked by the npas4l targets/effectors tal1 and lmo2, whereas their contribution to skeletal muscle is blocked by etsrp.
tal1 mRNA injections rescue endothelial development in npas4l −/− embryos in an Etsrp-dependent manner
To better understand the relationship between Npas4l and its effectors, we tested whether injecting etsrp, tal1, or lmo2 mRNA would rescue different aspects of the npas4l mutant phenotype. While injections of etsrp or lmo2 mRNA did not have a noticeable effect on endothelial development in npas4l mutants, injections of tal1 mRNA were sufficient to significantly restore ISV formation (Fig. 7, A, B, and F). As Npas4l has several distinct transcriptional effectors, such a strong rescue of ISV formation by just one of them was unexpected. In these rescued mutants, we did not detect the bilateral population of npas4l reporter-expressing pronephron tubule cells, but the number of ectopic npas4l reporter-expressing muscle cells was not reduced compared with uninjected npas4l mutants (Fig. 7F). In addition, the finding that etsrp or lmo2 mRNA injections did not rescue npas4l mutants is consistent with the fact that etsrp and lmo2, but not tal1, are expressed in endothelial progenitors in npas4l −/− embryos (fig. S15B). Also, tal1 mRNA injections into etsrp mutants did not rescue ISV formation, indicating that the tal1-mediated rescue of the npas4l mutant phenotype depends on Etsrp (Fig. 7C). These data also suggest that tal1 promotes etsrp expression. To test this hypothesis, we used reverse transcription quantitative polymerase chain reaction (RT-qPCR) to measure tal1 and etsrp mRNA levels at 10 hpf after npas4l, etsrp, tal1, and lmo2 mRNA injection into wild-type embryos (Fig. 7, D and E). We found that tal1 could only be induced by Npas4l at this stage, while etsrp could be induced by both Npas4l and Tal1. Together, these data show that the pronephron tubule contribution of npas4l reporter-expressing cells in npas4l mutants can be blocked by injection of tal1 mRNA, which also leads to an increase in endothelial cells. These experiments provide further evidence for a close relationship between the endothelial and pronephron lineages and indicate that Tal1 and Lmo2 modulate the endothelial versus pronephron fate decision downstream of Npas4l (Fig. 8).

Table 1. Endothelial and pronephron gene expression overlaps at 10 hpf; this overlap decreases by 14 hpf and is no longer observed at 18 hpf. Spearman correlation coefficient (ρ) between the expression of fli1a and that of four pronephron genes (pax2a, osr1, foxj1a, and hnf1bb); a significant correlation is observed at 10 but not 18 hpf. A full list of gene expression correlations with fli1a expression and their changes between time points can be found in table S5. P values were adjusted to counteract the multiple testing problem.
DISCUSSION
The specification of endothelial cells is an early step in cardiovascular development, and the transcriptional effectors required for endothelial specification in zebrafish are induced by the bHLH-PAS transcription factor Npas4l (21). npas4l −/− embryos lack most endothelium and blood (16,17,21); however, the fates of npas4l-expressing cells in the absence of Npas4l function remain unclear. Here, we generated npas4l knock-in reporter alleles to track npas4l reporter-expressing cells in wild-type and mutant embryos. We report that in npas4l −/−, tal1 −/−, and lmo2 −/− embryos, npas4l reporter-expressing cells contribute to the pronephron tubules, and that in npas4l −/− and etsrp −/− embryos, npas4l reporter-expressing cells contribute to skeletal muscle. These data indicate that Tal1/Lmo2 and Etsrp modulate endothelial development downstream of Npas4l in different ways. Building upon these initial observations, we provide evidence for a population of early LPM cells that coexpresses endothelial and pronephron markers. While most of these cells contribute to hematoendothelial or pronephron lineages in wild-type embryos, more of them commit to the pronephron lineage in the absence of Npas4l, Tal1, or Lmo2.
Tal1/Lmo2 and Etsrp drive distinct cell fate decisions
In tal1 and lmo2 mutants, npas4l reporter-expressing cells remain in a ventrolateral position and contribute to the pronephron tubules. In contrast, in etsrp mutants, npas4l reporter-expressing cells migrate to the midline, but most of them fail to acquire endothelial characteristics. npas4l mutants, in which etsrp, tal1, and lmo2 expression is strongly reduced, display an almost complete absence of endothelial cells. Notably, embryos lacking both Etsrp and Tal1 function do not develop any endothelial cells, further demonstrating that these transcription factors are indispensable for endothelial development.
The etsrp loss-of-function phenotypes that we observed are in line with those reported by Chestnut et al. (25), who applied a reporter-based strategy to determine the fate of etsrp-expressing cells in etsrp mutants; in the absence of Npas4l or Etsrp function, endothelial progenitors become skeletal muscle cells. In mouse, chicken, and zebrafish, the paraxial mesoderm has been shown to be an additional source of endothelial cells (52-55). In zebrafish, the paraxial mesoderm has also been reported to be a bipotent cell population that can give rise to hematopoietic stem and progenitor cells as well as muscle progenitors (53,56). Together, these data point to a close relationship between endothelial cells and the paraxial mesoderm.
As an increased number of pronephron tubule cells has not been reported in tal1 or lmo2 mutants before, we focused most of our attention on this phenotype. Defects in endothelial lineage specification leading to an increase in the number of pronephron tubule cells have previously been observed in several other contexts, including in hand2 mutants (57) and in tbx16 mutants (58). Both of these mutants, in fact, display a reduction in tal1 expression (57,58). Conversely, the loss of the Osr1 transcription factor (59,60), or overexpression of tal1 (61) or hand2 (57), causes a reduction in the size of the pax2a-expressing LPM territory and a concomitant increase in the number of endothelial cells. It is thus possible that the alteration in tal1 expression contributes to the observed pronephron tubule phenotypes in these models. Together, these data indicate that Tal1, Hand2, Osr1, and Tbx16 play a role in the fate decision between endothelial/blood cells and pronephron tubule cells. Looking more closely at the acquisition of a pronephron fate by npas4l reporter-expressing cells in npas4l mutants, we did not observe an obvious increase in Pax2a expression at 14 hpf. However, in wild-type embryos, npas4l and fli1a reporter-expressing cells at 14 hpf usually express lower levels of Pax2a than pronephron cells do (Fig. 4, A to A′). The loss of Npas4l, Tal1, or Lmo2 function and the subsequent loss of endothelial marker expression may not necessarily elevate Pax2a levels at early stages. Instead, it could allow cells to contribute to the pronephron lineage despite low initial pronephron marker expression. As Pax2a expression is maintained in differentiated cells, the high marker expression that we observed at 24 hpf could be associated with maturation rather than differentiation.
The direct comparison of tal1 and etsrp mutants revealed distinct phenotypes, and the injection of etsrp mRNA failed to induce tal1 expression, indicating that in zebrafish, tal1 expression is not induced by Etsrp. These observations are different from data in mouse that indicate that Tal1 is a direct target of ETV2 (62). Also, mouse Etv2 mutants, unlike zebrafish etsrp mutants, fail to form Gata1a-expressing erythroid progenitors (63). The mouse Etv2 mutant phenotype is more similar to the zebrafish npas4l mutant phenotype than to the zebrafish etsrp mutant phenotype. These results support the hypothesis that in mammals, Etv2 acquired the functions of Npas4l, potentially contributing to its loss (64). Thus, the assumption of functional equivalence between zebrafish Etsrp and mouse Etv2 requires reconsideration.
Endothelial and pronephron cells can come from the same progenitor population
The phenotypes observed in npas4l, tal1, and lmo2 mutants indicate that endothelial progenitors can acquire a pronephron tubule fate, leading to the hypothesis that the endothelial and pronephron lineages are closely related. To test this hypothesis, we investigated the lineage relationship between endothelial and pronephron cells in wild-type embryos.
Although classical LPM models divide this structure into distinct fate territories, an overlap of pax2a, pax8, tal1, and lmo2 expression in the LPM has been previously reported in Xenopus laevis (65) and hinted at in zebrafish (4,12). Warga et al. (11) documented the emergence of pronephron, endothelial, and hematopoietic cells in single-cell clones by fate mapping experiments in zebrafish, and Spanjaard et al. (66) expanded these observations through scRNA-seq combined with barcoding-mediated lineage tracing, also in zebrafish. The results by Warga et al. (11) suggest that the pronephron and hematoendothelial lineages start to segregate at midgastrulation stages (5 hpf). However, according to our analysis of a published scRNA-seq dataset of 10 hpf zebrafish embryos (39), no distinct endothelial or pronephron clusters, but only one cluster coexpressing endothelial and pronephron markers, can be observed at this stage.
In addition to relying on the anatomical location of the pronephron tubules and their progenitors, we used pax2a/Pax2a expression as a proxy for the pronephron lineage (40,67,68) and further validated our findings by costaining with mature pronephron tubule and glomerulus markers. Our imaging and expression analyses revealed that the expression of pax2a, an early expressed transcription factor gene involved in several steps of midbrain-hindbrain boundary and kidney formation (40,67,68), is also observed in hematoendothelial progenitor cells in the LPM during early somite stages. Consistent with these observations, we found pax2a:creERT2-labeled endothelial cells throughout somitogenesis, with the highest efficiency when the 4-OHT treatment started at 10 hpf. These lineage tracing, coexpression, and published fate mapping data, as well as the npas4l/tal1/lmo2 mutant phenotypes, suggest the existence of a progenitor population in the tailbud/early-somitogenesis LPM that can contribute to endothelial, hematopoietic, and pronephron cells. It is not clear, though, how many of the pronephron cells once expressed npas4l. Furthermore, because both the LPM (2,9) and paraxial mesoderm (52-55) have been found to display high vasculogenic potential, it is not clear whether the mesoderm in general has high vasculogenic potential or whether there are specific progenitor cells for different types of mesoderm and endothelium. Specifically, it will be interesting to further investigate cells coexpressing endothelial and pronephron markers, or "renangioblasts," in various developmental contexts. Recently, a distinct population of cells has been described to lie very close to the pronephron tubules, turn on endothelial markers, and integrate into the existing vasculature (69), and it will be worth further investigating the origin of these cells. Note also that endothelial cells have been reported to arise during the in vitro differentiation of embryonic stem and induced pluripotent stem cells (also known as iPS cells or iPSCs) into kidney structures (70) and that human metanephric mesenchymal cells can develop into hematopoietic stem cells when transplanted into sheep (71). Furthermore, Tal1 is expressed in the mouse embryonic kidney, especially between embryonic day 13 (E13) and E17 (72), and fate mapping experiments in wild-type mice have revealed a contribution of Osr1-expressing cells to the endothelium (73).
Together, our data, in combination with previous observations (4, 11, 12, 65, 66, 70-74), indicate that some endothelial and pronephron cells share a common progenitor pool. Detailed knowledge of lineage decisions and alternative fates during endothelial development is instrumental when generating endothelial cells in a therapeutic context or when generating organoids consisting of heterogeneous cell populations. Furthermore, kidney cells that dedifferentiate during tumorigenesis could become a source of endothelial cells. To extend our work in zebrafish, future investigation of Tal1/Lmo2 as potential modulators of the fate decision between the endothelial and pronephron tubule lineages in mammals is warranted.

MATERIALS AND METHODS

Genome editing for knock-ins and knockouts using CRISPR-Cas9 technology

N20-NGG sequences in the region of interest were selected manually and checked for off-targets using the CRISPR design tool CHOPCHOP v2 (https://chopchop.cbu.uib.no/) on the zebrafish GRCz10 genomic assembly. For knockout generation, sgRNA (single guide RNA) templates were in vitro transcribed using the MEGAshortscript-T7 Kit (Ambion, Austin, Texas), followed by purification on an RNA-cleanup column (Biozym, Hessisch Oldendorf, Germany). The guide RNA (gRNA) used to generate the Gal4 knock-in reporter was ordered as a crRNA/tracrRNA duplex (Integrated DNA Technologies, Coralville, Iowa) and assembled according to the user method provided by the Essner laboratory (https://sfvideo.blob.core.windows.net/sitefinity/docs/default-source/user-submitted-method/crispr-cas9-rnp-deliveryzebrafish-embryos-j-essnerc46b5a1532796e2eaa53ff00001c1b3c.pdf?sfvrsn=52123407_10). The resulting sgRNAs or gRNAs were coinjected with Cas9 mRNA (300 pg) in a 2-nl injection volume with 20% phenol red into zebrafish zygotes.
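To illustrate what the manual N20-NGG selection described above amounts to, here is a minimal sketch that scans both strands of a region for 20-nt protospacers followed by an SpCas9 NGG PAM; the example sequence is invented, and off-target checking (done with CHOPCHOP in the study) is not shown.

```python
import re

def revcomp(seq: str) -> str:
    """Reverse complement of an uppercase DNA string."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_n20_ngg(region: str):
    """Yield (strand, start, protospacer, PAM) for every N20-NGG site."""
    for strand, seq in (("+", region), ("-", revcomp(region))):
        # zero-width lookahead so overlapping sites are not skipped
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
            yield strand, m.start(), m.group(1), m.group(2)

region = "ATGCCGCGCCTTAGATGCTCCTTTGGAGCTCCACACTCTTCCTGATGTAGG"  # made-up sequence
for hit in find_n20_ngg(region):
    print(hit)
```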
Founders were screened by outcrossing the injected individuals and screening for GFP expression in at least 300 embryos. Precise 5′ insertion of the cassette in F1 zebrafish was confirmed by PCR on gDNA and complementary DNA (cDNA) (forward: 5′-CTTGGTCCCTGCTGTGTTCT-3′, reverse: 5′-CAGTCTTTCTAGCCTTGATTCCAC-3′). PCR analysis for the vector backbone of the pGTag donor plasmid in the bns313 allele indicated vector concatemer insertions; the 3′ insertion sequence at the genomic DNA level has not yet been fully determined. However, the cDNA sequence from exon 1 to the end of the Gal4 reporter was verified by Sanger sequencing, and the distal gene tmem264 is still present in the bns313 allele (fig. S1).
The mutant version of the npas4l reporter, henceforth named the npas4l bns423 reporter, was generated by inducing, in the npas4l bns313 allele, an in-frame indel in the region encoding the DNA-binding bHLH domain, so as to retain the reading frame and reporter expression while abolishing Npas4l transcriptional activity, using the sgRNAs 5′-CCGCGCCTTAGATGCTCCTT-3′ and 5′-CTCCACACTCTTCCTGATGT-3′. Injected embryos and potential founders were screened by PCR (forward: 5′-CTCTGTTCTGCTGGTGATCTGC-3′, reverse: 5′-ATGCGTTGTGGATGCTCTCC-3′). We recovered a +36-bp insertion at the end of exon 2 that caused a strong phenotype yet correct splicing, as determined by PCR spanning the coding sequence from exon 1 to the Gal4 insertion.
For the generation of the pax2a:CreERT2 tud53 knock-in transgenic line, genomic DNA from wild-type AB fish was used to generate the pax2a bait by PCR (Phusion Polymerase, Thermo Fisher Scientific) using the primers pax2a-bait-for (5′-GACAACGTTGTAGGCTACTACTAATTAACGACAC-3′) and pax2a-bait-rev (5′-GATAATCGACTGAGGTCGCCGTCTCGCCT-3′). The amplified 904-bp bait fragment was cloned into a pCS2+ vector containing the zebrafish codon-usage-optimized CreERT2 sequence. The CMV promoter was later removed from the pCS2+ vector, and the construct was verified by sequencing. The sgRNA designed for the pax2a locus had the sequence 5′-GGGGGGATCTGGGAAGGAGG(−GGG)-3′. Preparation of the sgRNA and Cas9 mRNA and injection into one-cell stage embryos were performed according to standard protocols. The injected embryos were monitored for the next 5 days, and approximately 100 embryos were raised to adulthood. To identify founders, 4- to 6-month-old F0 fish were outcrossed with wild-type strains, and 50 embryos from each clutch were used to isolate genomic DNA. Subsequently, PCR was carried out using the primer pair pax2a-int-for (5′-GGGAAATCAACATAAAAACATCCGACATCAATACC-3′) and Cre-5′-rev (5′-TGACTTCATCGCTGGTAGCGTCC-3′). A 1155-bp amplicon indicated a correctly oriented knock-in at the targeted locus, and PCR products from individual embryos were verified by sequencing. The knock-in strain was maintained as an outcross to reduce the general effects of inbreeding.
For the generation of tal1 mutant alleles, we targeted the region encoding the DNA-binding bHLH domain in the fourth exon using the sgRNA 5′-CAAGAACGAGATCCTGCGTC-3′. Injected embryos and potential founders were screened by HRMA (forward: 5′-CAAGAAACTCAGCAAGAACGAGATC-3′, reverse: 5′-GTCCTGGTCGTTGAGGAGCT-3′). We recovered the −6-bp (in-frame) deletion allele bns498, leading to the loss of R228 and L229 in the second helix of the HLH motif, and the −7-bp (out-of-frame) deletion allele bns497, leading to a premature termination codon after R228 that is predicted to truncate the protein sequence by 97 amino acids.
For the generation of etsrp mutant alleles, we targeted the region encoding the DNA-binding ETS domain in the sixth exon using the sgRNA 5′-AAGTTGGACTGGTGATGGCT-3′. Injected embryos and potential founders were screened by HRMA (forward: 5′-AGCTCTGGCAGTTTCTGCTAG-3′, reverse: 5′-CTCAGCGGGATCTGACATTTTAAAC-3′). We recovered the −9-bp (in-frame) deletion allele bns426, leading to the loss of D265, G266, and W267, and the −4-bp (out-of-frame) deletion allele bns422 after G264, leading to an altered peptide sequence AGSLKCQIPLRWRSGGASVKTSLK* followed by a premature termination codon at position 289 that is predicted to truncate the protein sequence by 78 amino acids.
For the generation of lmo2 mutant alleles, we targeted the region encoding the beginning of the second LIM domain in the third exon using the sgRNA 5′-TTCCTGTGAAAAGAGGATCC-3′, with the aim of splitting the two conserved LIM domains of this scaffolding protein. Injected embryos and potential founders were screened by HRMA (forward: 5′-TCCTTTCAGACTGTTTGGTC-3′, reverse: 5′-GCACACGCATGGTCATTTCAAAG-3′). We recovered the −6-bp (in-frame) deletion allele bns500, leading to the replacement of I101, R102, and A103 with T101, and the +2-bp (out-of-frame) indel allele bns499 after R100, leading to an altered peptide sequence TTGPLK* followed by a premature termination codon at position 107 that is predicted to truncate the protein sequence by 52 amino acids.
As predicted from the domain annotations, the lmo2 out-of-frame allele bns499 exhibits a stronger phenotype than the lmo2 bns500 in-frame allele and was therefore used exclusively. The in-frame and out-of-frame alleles generated for tal1 and etsrp, however, were phenotypically indistinguishable. Therefore, we worked exclusively with the tal1 bns498 and etsrp bns426 in-frame alleles to minimize the possibility of transcriptional adaptation (82).
npas4l alleles and genotype of the embryos analyzed
We crossed the bns423 allele to the bns297 allele to generate the npas4l mutant embryos shown in several figures (Figs. 1C, 2, 3, and 7 and figs. S3, S5, S6, S13, and S14), as they exhibit the null phenotype and can be genotyped by HRMA (21). Figures 1B (npas4l −/−) and 6, as well as figs. S4 and S15, display embryos from intercrosses of bns423 heterozygotes. To generate data for the etsrp, tal1, and lmo2 mutants, we used a single bns313 reporter allele. The npas4l +/+ embryos in Fig. 1B and figs. S1B and S2 are from intercrosses of bns313 heterozygotes.
scRNA-seq sample preparation and data analysis
Trunks from ~150 npas4l heterozygous (npas4l bns423/+ ) and transheterozygous (npas4l bns423/bns297 ) embryos were cut at the anterior end of the yolk extension at 20 hpf in DMEM/F10 + 5% fetal bovine serum + 0.01% tricaine on agarose-coated plates. The cells were dissociated using the Pierce Cardiomyocyte Dissociation Kit (Thermo Fisher Scientific) according to the manufacturer's instructions and sorted for GFP fluorescence using an FACSAria III sorter (BD Biosciences, San José, California) and DAPI (4′,6-diamidino-2-phenylindole) as an indicator for dead cells. The cell suspensions were counted with a Moxi cell counter (ORFLO Technologies, Ketchum, Idaho) and diluted according to the manufacturer's instructions to obtain 5000 single-cell data points per sample. Each sample was run separately on one lane in a Chromium controller with Chromium Next GEM Single Cell 3′ Reagent Kits v3.1 (10x Genomics, Pleasanton, California).
scRNA-seq library preparation was done following the standard protocol. Sequencing was done on a NextSeq 500 (Illumina), and raw reads were aligned against the zebrafish genome (DanRer11) and counted with STARsolo (https://github.com/alexdobin/STAR), followed by secondary analysis in the Annotated Data format. Preprocessed counts were further analyzed using the Scanpy software (https://github.com/theislab/scanpy). Basic cell quality control was conducted by taking the number of detected genes and the mitochondrial content into consideration. We removed only 16 cells in total that did not express between 1000 and 7000 genes or failed the mitochondrial content cutoff of 6%. Furthermore, we filtered out genes if they were detected in fewer than 30 cells (<0.3%). Raw counts per cell were normalized to the median count over all cells and transformed into log space to stabilize variance. We initially reduced the dimensionality of the dataset using principal component analysis (PCA), retaining 50 principal components. Subsequent steps, such as low-dimensional UMAP embedding and cell clustering via community detection, were based on the initial PCA. Final data visualization was done using the scVelo (https://github.com/theislab/scvelo) and cellxgene (https://github.com/chanzuckerberg/cellxgene) packages. A trajectory inference was calculated using partition-based graph abstraction and displayed as a force-directed graph calculated using ForceAtlas2 as implemented in Scanpy.
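A minimal Scanpy sketch of the described pipeline; the thresholds come from the text, while the input file name and the clustering resolution are assumptions (the mitochondrial-content filter mentioned above is omitted here for brevity).

```python
import scanpy as sc

adata = sc.read_h5ad("npas4l_trunk_20hpf.h5ad")  # hypothetical input file

# QC as described: keep cells expressing 1000-7000 genes,
# and genes detected in at least 30 cells
sc.pp.filter_cells(adata, min_genes=1000)
sc.pp.filter_cells(adata, max_genes=7000)
sc.pp.filter_genes(adata, min_cells=30)

# Normalize each cell to the median count over all cells (the default when
# target_sum is not set), then log-transform to stabilize variance
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)

# Dimensionality reduction, neighborhood graph, embedding, and clustering
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=0.5)  # resolution tuned so eight clusters emerge
sc.pl.umap(adata, color="leiden")
```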
For the reanalysis of the dataset originally published by Wagner et al. (39), we relied on an annotated data file mapped to the latest zebrafish genome assembly (GRCz11) that was provided by the authors and analyzed using Scanpy. After creating subsets of 10 to 18 hpf mesodermal cells using the original cell type and time point annotations provided in the dataset, we normalized the raw counts per cell to the median count over all cells and log-transformed the data. We reduced the dimensions to 50 principal components as described in the paragraph above and visualized the data in UMAP and dot plots. For the correlation analyses, we calculated a nonparametric Spearman correlation between a bait gene and all other genes over all single cells. For comparisons between different correlations, we performed a Fisher transformation.
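The correlation step can be sketched as follows. The helper names are hypothetical, and the z-test for comparing two correlations is one standard application of the Fisher transformation; the text itself specifies only the Spearman correlations and the transformation.

```python
# Sketch of the bait-gene correlation analysis; helper names are illustrative.
import numpy as np
from scipy import stats

def bait_correlations(X, gene_names, bait):
    """Spearman correlation of a 'bait' gene against all genes over all cells.
    X is a dense cells x genes expression matrix; gene_names is a list."""
    bait_expr = X[:, gene_names.index(bait)]
    return np.array([stats.spearmanr(bait_expr, X[:, j]).correlation
                     for j in range(X.shape[1])])

def fisher_z(rho):
    """Fisher transformation z = arctanh(rho), which makes correlation
    coefficients approximately normal and hence comparable."""
    return np.arctanh(np.clip(rho, -0.999999, 0.999999))

def compare_correlations(r1, n1, r2, n2):
    """Two-sided P value for the difference between two independent
    correlations computed over n1 and n2 cells, respectively."""
    z = (fisher_z(r1) - fisher_z(r2)) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2.0 * stats.norm.sf(abs(z))
```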
mRNA synthesis and microinjections
We used the published pCS2-npas4l plasmid (Addgene plasmid no. 164654) to synthesize npas4l mRNA. Coding sequences of tal1, etsrp, and lmo2 were cloned into pCS2z vectors (Addgene plasmid no. 62214), deposited as Addgene plasmids no. 164655 to 164657, and in vitro transcribed using the SP6 mMessage mMachine Kit (Ambion). The mRNAs were injected in a 2-nl volume with 20% phenol red into zebrafish zygotes. Different amounts of mRNA were injected to determine effective doses with minimal embryonic lethality or deformation. The optimal doses for rescue experiments were 5 pg for npas4l, tal1, and etsrp and 25 pg for lmo2.
CreERT2-based lineage tracing
Lineage tracing experiments were performed by crossing male pax2a:creERT2 with female hsp70l:Switch or lmo2:dsRED animals. Embryos were induced using preheated (65°C for 10 min) 4-OHT (Sigma-Aldrich, H7904) at a final concentration of 10 µM in E3 embryo medium at the indicated time points. Embryos were washed in fresh E3 medium at 24 hpf, and 1-phenyl-2-thiourea (Sigma-Aldrich) was added to a final concentration of 0.003% to inhibit pigment formation. To induce EGFP transcription in hsp70l:Switch embryos, the embryos were incubated at 37°C for 1 hour at 68 hpf and 4 hours before imaging. Larvae were imaged using a ZEISS LSM 880 confocal microscope or a ZEISS Z.1 light-sheet microscope (ZEISS W Plan-Apochromat 20×/0.5 numerical aperture objective). For imaging, 72 hpf larvae were treated with 0.016% ethyl 3-aminobenzoate methanesulfonate salt (tricaine; Sigma-Aldrich) in E3 and mounted in 0.5% low melting point agarose (LMA; A1801-LM, Benchmark Scientific) for confocal microscopy and in 1.0% LMA for light-sheet microscopy. Z-stack maximum projections were made using Fiji.
Histology and microsections
Embryos were fixed in 4% paraformaldehyde (PFA) solution overnight, washed three times with phosphate-buffered saline (PBS), and embedded in gelatin. Briefly, the embryos were incubated at 4°C in 30% (w/v) sucrose in PBS overnight. The embryos were then incubated for 1 hour at 37°C in 7.5% (w/v) gelatin and mounted afterward in the same solution. The tissue blocks were frozen in liquid nitrogen-cooled isopentane and stored at −80°C. The blocks were sectioned using a CM3050S cryostat (Leica, Wetzlar, Germany), and the tissue sections were stored at −20°C.
Immunostaining
Immunostaining was performed according to standard protocols, with the following parameters: For whole-mount staining, embryos were fixed overnight in 4% PFA at 4°C. Early embryos (≤20 hpf) were dechorionated after fixation; older embryos were dechorionated before fixation. Embryos were then permeabilized for 3 min with Proteinase K (10 µg/ml) and blocked in PBS + 5% goat serum + 0.1% Triton X-100.
For the staining of gelatin sections, the gelatin was removed from the slides by incubation in PBS for 10 min at 37°C. The sections were permeabilized for 10 min at room temperature in PBS + 0.5% Triton X-100 and blocked in PBS + 5% goat serum + 0.1% Triton X-100.
We used the following primary antibodies: chicken anti-GFP (1:500; Aves Labs, Tigard, Oregon), rabbit anti-dsRed (1:500; Takara Bio, Kusatsu, Japan), mouse anti-mCherry (1:500; Takara Bio), and rabbit anti-Pax2a (1:200; GeneTex, Irvine, California). The a6F monoclonal antibody (anti-Na+- and K+-dependent adenosine triphosphatase; 1:100), developed by D.M. Fambrough, was obtained from the Developmental Studies Hybridoma Bank, created by the National Institute of Child Health and Human Development of the NIH and maintained at The University of Iowa, Department of Biology, Iowa City, IA 52242. Alexa fluorophore-conjugated secondary antibodies (Thermo Fisher Scientific) were used at a 1:500 dilution. DAPI was added to the secondary antibody solution (final concentration of 1 µg/ml).
In situ hybridization
Probes corresponding to the full coding sequence of pax2a and CreERT2 were used. Probe synthesis and in situ hybridization were performed according to standard protocols.
Confocal microscopy imaging
Embryos were embedded in 1% low melting point agarose on their side. Living embryos were anesthetized with 0.01% tricaine before embedding and stayed under anesthesia during the procedure. All experiments on living embryos and larvae were nonrecovery experiments. For genotyping, the anesthetized embryos were taken out of the agarose, exposed to heat briefly, and lysed using 50 mM NaOH for 10 min at 95°C.
Confocal images were acquired using an LSM 800, LSM 880, or LSM 710 confocal microscope (ZEISS, Oberkochen, Germany). The images were acquired and processed using the ZenBlue software package. Only linear adjustments were used, and acquisition parameters were kept constant throughout the imaging whenever possible. The confocal microscopy data presented in this manuscript were not used for the quantification of fluorescence intensity.
For overview images of the whole embryo, a tile scan with a Plan-Apochromat 10×/0.45 DIC II objective (ZEISS) was performed and stitched with ESID (electronically switchable illumination and detection module) as the reference channel. Bright field-like images for this magnification were generated using an ESID channel and enhanced depth of focus. The channel was then added to the orthogonal projection of the fluorescence channels.
Images of the trunk region were acquired using an LD LCI Pln Apo 25× 0.8 W (ZEISS) or a C Apo 40×/1.1 W DICIII (ZEISS) lens on an LSM 800 observer or Pln Apo 40×/1.3 oil DIC M27 or 40×/1.2 immersion-corrected DIC M27 lens on an LSM 710. As an anatomical landmark, we kept the yolk extension in the field of view. For the presentation of the bright field-like ESID channel, a single plane was exported from the middle of the stack and added to the orthogonal projection of the fluorescence channel.
Sections were imaged using a Pln Apo 40×/1.4 Oil DIC II (ZEISS) lens and the Airyscan detector, followed by 2D Airyscan processing. Similar-looking transverse sections over the yolk extension were used, but the location along the anterior-posterior axis varies slightly between sections because of the lack of precise anatomical landmarks.
Reverse transcription quantitative polymerase chain reaction
Total RNA was isolated from pools of 10 embryos at the tailbud stage using TRIzol (Thermo Fisher Scientific) and purified by isopropanol precipitation. First-strand synthesis was performed using the Maxima First Strand cDNA Synthesis Kit (Thermo Fisher Scientific) with 1 µg of RNA template. A total of 0.5 µl of the resulting first-strand cDNA was used in a 10-µl RT-qPCR. qPCR was performed in technical duplicate using a CFX Connect Real-Time PCR system (Bio-Rad, Hercules, California), and fold changes were calculated using the 2^−ΔΔCt method.
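For illustration, the fold-change calculation can be written out as below; the Ct values in the example are invented.

```python
# Minimal sketch of the 2^-ΔΔCt relative quantification method.
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """dCt = Ct(target) - Ct(reference gene) per sample;
    ddCt = dCt(treated) - dCt(control); fold change = 2^-ddCt."""
    dd_ct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-dd_ct)

# Invented Ct values (technical duplicates averaged beforehand):
# ddCt = (24.1 - 18.0) - (26.3 - 18.1) = -2.1, so 2^2.1 ~ 4.3-fold up
print(fold_change(24.1, 18.0, 26.3, 18.1))
```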
Quantification of ISV numbers
fli1a:GFP-positive ISVs were counted under a stereomicroscope at 48 hpf along the entire body axis. Controls and treated embryos were derived from the same clutch. All experiments were repeated at least twice.
Statistics
Statistical analyses were performed in R (count data) and Python (qPCR and scRNA-seq data). Count data were fitted to a Poisson model using the "glm" library. For log-transformed qPCR data, normality was assumed, and P values were calculated by unpaired two-sample t test using the "scipy" package. Bonferroni correction was applied to adjust P values where appropriate.
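A minimal Python sketch of these tests follows. All data values are invented, and the Poisson fit is shown via statsmodels as an analogue of the R "glm" call named in the text.

```python
# Sketch of the statistical tests described above; all values are invented.
import numpy as np
from scipy import stats
import statsmodels.api as sm

# Unpaired two-sample t test on log-transformed qPCR fold changes
log_wt = np.log2([1.0, 1.2, 0.9])    # hypothetical control samples
log_mut = np.log2([3.8, 4.5, 4.1])   # hypothetical mutant samples
t, p = stats.ttest_ind(log_wt, log_mut)
p_adj = min(p * 4, 1.0)              # Bonferroni correction for, e.g., 4 genes

# Poisson model for count data (e.g., ISV counts per embryo by genotype)
counts = np.array([28, 27, 30, 12, 10, 14])   # hypothetical ISV counts
genotype = np.array([0, 0, 0, 1, 1, 1])       # 0 = control, 1 = mutant
fit = sm.GLM(counts, sm.add_constant(genotype),
             family=sm.families.Poisson()).fit()
print(f"t = {t:.2f}, adjusted P = {p_adj:.4f}")
print(fit.summary())
```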
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at https://science.org/doi/10.1126/sciadv.abn2082. View/request a protocol for this paper from Bio-protocol.
Longitudinal Changes in Suicide Bereavement Experiences: A Qualitative Study of Family Members over 18 Months after Loss
Family members bereaved by their loved ones’ suicidal death normally undergo a complicated and lengthy bereavement process. In this qualitative case study, we explored longitudinal changes in the suicide bereavement process by applying assimilation analysis, based on the Assimilation Model (AM) and the Assimilation of Problematic Experiences Scale (APES), to longitudinal interview data collected from two Chinese suicide-bereaved individuals within the first 18 months after their loss. The results showed that over time the participants both progressed in adapting to their traumatic losses. Assimilation analysis both effectively elaborated the difference in the inner world of the bereaved and clearly demonstrated development in their adaptation to the loss. This study contributes new knowledge on the longitudinal changes in suicide bereavement experiences and demonstrates the applicability of assimilation analysis to suicide bereavement research. Professional help and resources need to be tailored and adapted to meet the changing needs of suicide-bereaved family members.
Introduction
Grief following the suicidal death of loved ones can be devastating. Cerel et al. [1] found that close family members form the majority of the bereaved who are most impacted by suicide, and that they suffer more than non-family persons from suicide-related loss. Creuzé et al. [2] also demonstrated the great impact of suicide on family members as both individuals and as a unit.
While sharing several common features (e.g., sadness) with bereavement after other types of loss, especially bereavement following unexpected and violent deaths [3], some of these features are more evident in suicide than in non-suicide bereavement. First, suicide bereavement exhibits a higher level of complicated emotional reactions and thoughts, including numbness and disbelief [4], rejection [5,6], guilt and self-blame [4], perceived responsibility [5,6], feelings of being rejected and abandoned by the deceased leading to feelings of anger and unworthiness [7], pondering on unanswered questions [8,9], and dramatic changes in one's core belief system [10,11]. Second, individuals bereaved by suicide are at higher risk for mental health difficulties such as depression, anxiety, post-traumatic stress disorder (PTSD), and complicated grief, as well as for suicide ideation, attempts, and completions [11][12][13], and for unpleasant situations in their social network, such as stigma, shame [5,6], embarrassment, and isolation [10,14,15]. Besides these individual-level grief reactions, various post-loss family-level changes occur, such as in the regulation of the family's life, and in communication and interaction, mutual emotional accessibility, and cohesiveness within the family unit [11]. Creuzé et al. [2] found that family conflicts, taboos or cohesion also arise following suicidal loss.
Despite these research findings, studies on longitudinal changes in the suicide bereavement process remain scarce. Some of the few existing studies have focused on specific groups (e.g., children, parents, and older spouses) [16][17][18][19], while others have followed a quantitative approach [6,18]. Methodologically rigorous qualitative research is needed to elaborate the diverse grief trajectories of bereaved individual family members in their diverse relationships with the deceased, and to gain a clear picture of their mental status at different time points. Hence, the present study applied a qualitative case study approach, tracking the bereavement journey of two Chinese suicide-bereaved individuals over 18 months and focusing on their lived bereavement experiences at different time points. We used assimilation analysis [20][21][22] to study the data, as it offers an intensive, qualitative procedure for a case study [23] and has been demonstrated to offer particular advantages for monitoring psychological changes in the processing of psychologically problematic or painful experiences [24][25][26].
Assimilation analysis (AA) is based on the Assimilation Model (AM) and the Assimilation of Problematic Experiences Scale (APES). The Assimilation Model (AM) is an integrative theory and framework used in accounting for psychological change processes. AA was originally applied to track changes in psychotherapeutic processes [27,28] and was later developed for studying interviews in non-therapeutic contexts [25,29,30]. This study is a further application of AA to non-therapeutic interviews conducted to assess two participants' natural grieving status during which they received no professional intervention, although one participant participated in a suicide-bereavement support group. According to the schema formulation of AM [31], positive change occurs as the problematic experiences are gradually assimilated into one's schemas, "schema" referring here to the frame of reference that organizes one's perception and experience. A problematic experience can be a wish, intention, or behavior that is psychologically painful, arising from a particular life event or set of associated life events [27,32]. Coincidentally, Jordan [11] noted that suicide may disturb the presumed world or cognitive schema of bereaved individuals.
The assimilation of individuals' problematic experiences into their schemas is rated from zero to seven in an eight-stage process, as presented in the Assimilation of Problematic Experiences Scale (APES) [26], which contains a description of the cognitive and affective features of each stage. In stage zero (warded off/dissociated), the problematic experience is actively avoided and accompanied by minimal affect [26]. In stage one (unwanted thoughts/active avoidance), thoughts associated with the experience arise when triggered by external circumstances, and affect becomes stronger and more salient. In stage two (vague awareness/emergence), the experience is acknowledged, the problem cannot yet be clearly formulated, and affect is acutely painful or panicky. In stage three (problem statement/clarification), the problem is clearly stated, with negative but manageable affect. In stage four (understanding/insight), the problematic experience achieves a clear connection to a schema, accompanied by both unpleasant and pleasant recognition and affect. In stage five (application/working through), understanding is used to tackle the problem, and affect is positive and optimistic. In stage six (resourcefulness/problem solution), a successful solution to the problem is worked out, with positive and satisfied affect. In stage seven (integration/mastery), solutions are successfully applied in new situations, with positive or neutral affect.
Wilson [33,34] applied AM to analyze bereavement counseling. We used assimilation analysis to analyze a single bereavement category, i.e., suicide bereavement, in specifically non-therapeutic research interviews. The research questions were:
1. What changes occur in suicide bereavement experiences over the first 18-month period after loss?
2. What are the strengths and challenges of using assimilation analysis to analyze changes in the suicide bereavement process?
Moreover, as both participants are Chinese, we explored the potential impact of Chinese culture on their grief.
Participants
This study constitutes part of a larger research project focusing on lived suicide bereavement experiences in China. To track the bereavement journey of suicide-bereaved individuals, focusing on their lived bereavement experiences at different time points, two participants, W and Song (both pseudonyms), were included in this study because, at the time of their first interviews, the interval since their suicidal loss was the shortest among all 14 participants included in the research project. They were also the only two participants in the longitudinal interviews. Specifically, W was interviewed four times, at around 3, 7, 10, and 18 months after his wife (L) had died by suicide. Song was interviewed twice, at around 6 and 18 months after her younger brother (X) had died by suicide.
Both participants had received a higher education. W was over thirty. His marriage to his late wife was the first for each of them, and they had no children. The marriage had lasted 4-5 years. Song was approaching her thirties. She was the second daughter in the family, three years younger than her older sister.
Research Ethics
The Research Ethics Committee of University of Eastern Finland approved the study. Before the interviews, the participants were informed about the research, including the voluntary nature and anonymity of participation, the purpose and procedures of the interviews, the potential benefit and risks of participating in the interviews, their right to quit at any time, and the resources available to them if they encountered distress during and/or after the interviews. Both participants gave their written informed consent before the interviews. After the interviews, the interviewer inquired about the participants' mental status so that timely support could be provided if needed.
Participant Recruitment and Data Collection
W was recruited through a suicide bereavement support group and Song through social media. The first author conducted semi-structured in-depth interviews with W and Song in quiet and private venues in China. All the interviews were conducted face to face, except for W's third interview, which was conducted online through an audio call. The interviews focused on the participants' bereavement experiences and process, specifically on their reactions, perceptions, and changes in these after the event, their coping and adjustment at different times, changes in their families, and support sought or received. The interview guide was derived from the literature on experiences and changes in suicide bereavement processes. The interviews were audio-recorded with the participants' consent. Throughout the interviews, the interview process mostly followed the participants' narratives. Probes and follow-up questions were posed when appropriate. This approach enabled the interviewees to manage their narrative pace and emotions with a greater sense of control.
Assimilation Analysis
The first author conducted all the interviews in Chinese, transcribed them verbatim in Chinese, and translated the Chinese transcripts into English for analysis. A four-step assimilation analysis [26,35], previously used to analyze psychotherapy sessions, was adapted for this study. Each step alternated between the two authors' independent data analyses and collaborative data sessions until consensus was achieved.
Step 1: Familiarization and indexing. Through repeated listening to the audio recordings and reading the transcripts, the researchers discerned the participants' thoughts and feelings about their loss and made a list of problematic topics. In AM, a "topic" is an attitude expressed toward an object (which can be a person, thing, event, or situation) [36].
Step 2: Identifying and Choosing Themes. In AM, a "theme" is an attitude revealed recurrently, possibly regarding several objects [36]. From the list of topics extracted in Step 1, themes, i.e., topics that were mentioned frequently and narrated at great length, were identified. We named every theme based on its core content. Based on their length of narration, we assigned the themes identified in each interview to three categories: focal themes, secondary themes, and tertiary themes. The focal themes were narrated at the greatest length in each interview, the secondary themes at medium length, and the tertiary themes at the shortest length. Themes that closely resembled each other were combined to form a single focal or secondary theme; these combined themes were named sub-themes in this study.
Step 3: Selecting Passages. Passages representing the three categories of themes were located and extracted.
Step 4: Describing the Process of Assimilation Represented in the Passages. Each interview was assigned an overall APES rating based on the content of the themes and passages gleaned from Steps 2 and 3, respectively. We used words together with the APES ratings to elaborate our understandings of the participants' process of assimilating their loss.
The Case of W
The themes identified in each of W's four interviews are presented in Table 1. Note: bold, focal themes; bold and italics, secondary themes; normal, tertiary themes; normal in parentheses, sub-themes. Numbers mark the sequence of the focal themes. L is the pseudonym used to refer to W's late wife.
Themes and APES Ratings of Each Interview
W's first interview has been analyzed in detail in another research article [37]. The focal themes in the first interview included intellectualization and bereavement experiences. Intellectualization was manifested in the fact that W spent most of his time talking about various scientific and philosophical topics. This theme alone accounted for 84 min of the 144-min interview.
After being asked about the impact of his wife's suicide on him, W vividly described his intense and overwhelming sadness and other negative emotions. He felt her death was unbelievable and sudden, and he was experiencing feelings of guilt, although these had moderated after he learned some of the reasons for her death. Moreover, it had altered his view of life, and he briefly recalled what L had been like. Fortunately, his parents' company and his participation in the bereavement support group had helped him. W's perceptions of his wife's death were somewhat contradictory. On the one hand, W attributed L's death to an accident while on the other, his behavior indicated that he did not reject the high possibility that L might have died by suicide.
Owing to the frequent shifts in topics and themes, and to the great discrepancies in the APES ratings across different themes, it was not possible to give an overall APES rating of W's first interview. However, it was agreed that W's mental status was characterized by turbulence, contradictions, and avoidance, and hence that W's overall assimilation of L's suicide was still at an early stage, i.e., below 2.5 points, which is the cutoff between emergence and clarification on the APES.
The focal themes in the second interview included bereavement experiences and changes, and exploration and reconstruction of L's suicide. W's narratives were less intellectualized; scientific and philosophical topics were both more briefly mentioned and more relevant to his narration of his thoughts and experiences. In the narratives under the theme bereavement experiences and changes, W expressed his expectation of gaining more control over his life and described what he wished to achieve in the future. However, he mentioned his pain only indirectly and briefly in the second interview, and his bereavement-related emotions remained almost indiscernible.
Excerpt: W: Seeking truth . . . actually is the most important thing, sometimes, including after this thing happened, sometimes the truth of things brings a lot of pain, you need something to numb yourself, right? But you can't numb yourself forever, right?

W acknowledged L's death as suicide, explored its possible causes, and constructed a multi-dimensional interpretation of it that involved L's personalities, the influence on L of her original family, L's diaries, and L's depression. However, W's reconstruction mainly focused on external causal explanations for L's death and lacked self-reflection, and therefore was not comprehensive.
W's self-exploration started through reading books, pondering about his family, and observing himself. He felt he had gotten past the most severe phase of guilt and self-blame and was less distressed. However, he frequently emphasized the topic of guilt and self-blame, which seemed a crucial and unavoidable part of his bereavement process.
Excerpt: W: She wasn't a person like me, she didn't get the disease because of me, after I knew that, I walked away from feelings of guilt, my conscience could quieten down . . .

Based on the above observations, we can see that with his more manageable emotional status, W was more accepting of his suicidal loss: he was able to confront the complexities of his bereavement and begin his (although not yet comprehensive) exploration and reconstruction of L's suicidal death. Hence, we rated W's second interview as 2.8 (approaching clarification).
In the third interview, the focal themes included conflicts with ex in-laws, exploration and reconstruction of L's suicide and bereavement experiences and changes. The largest proportion of W's talk in the interview concerned his conflicts with his ex in-laws. He recounted in detail how L's original family blamed him for L's death, and the strong emotions triggered by these conflicts.
Excerpt: W: . . . So you say, rather disgusting, right? So mean, so heartless . . . ridiculous and bastard, really disgusting . . . What they did made me feel I couldn't remain drowning in sorrow any more, I had to pull myself together to tackle those things.
On the theme of bereavement experiences and changes, W directly described his current emotional status and compared it with the intense emotions he had experienced previously. Some negative aspects emerged in W's narration of his memories of L. He recognized that he was still suffering considerably from feelings of guilt and frankly elaborated the different aspects of these feelings caused by L's suicide.
Excerpt: W: After she'd gone, I found her diaries, after seeing them, I blamed myself heavily at that time . . . back then, I didn't empathize with her, she hid those things from me, I didn't recognize them either, this made me somehow feel guilty . . .
Thus, it is evident that W was able to directly and openly express and manage his different emotions, including negative and aggressive ones. His reflection and reconstruction had become more comprehensive and in-depth, and he had a clear picture of what he wanted to achieve in the future. Hence, we rated W's third interview as 3.3 (slightly above clarification).
The focal themes in the fourth interview included bereavement experiences and changes, and exploration and reconstruction of the suicide. W relocated to live alone at some point between the third and fourth interview. He said that emotionally he could "come to terms with reality", i.e., his wife's suicide. He had also taken action on his future. W summarized some of the things that had helped his recovery, including his hobbies, his zest for reading and thinking, his friends, the bereavement support group, etc. He remembered what L was like and thought about her personalities dialectically.
Excerpt: W: Truth is the thing that you must come to terms with, it is just sometimes too cruel to be accepted . . . You are just a minute star in the universe, you have such a short life, we are so minimal, this thing is not a big deal . . .
On the theme exploration/reconstruction of the suicide, W newly added that L had probably felt great pressure in their marriage due to her physical illness. Conflicts with parents accounted for a greater proportion of his talk in the fourth than in the previous interviews. W recounted the differences in ideas and living habits between himself and his parents. He assessed his current feelings of guilt as "appropriate".
Excerpt: W: I read her diaries, and then I acquired some knowledge about psychology, I got to know what was going on, I didn't blame myself too much, actually a certain amount of guilt is unavoidable, I think it is appropriate guilt, not too much . . .

Thus, it can be seen that by the time of his fourth interview, W had taken actions to achieve what he wanted and had become more flexible and proficient at expressing and regulating his emotions, and more aware of what was helpful and unhelpful for recovering from his bereavement. We rated W's fourth interview as 3.7 (approaching insight).
Development of Bereavement across the Four Interviews
W's changes in bereavement are shown in the development of themes across the four interviews. W's talk in the later three interviews was much more stable than in the first interview, which was characterized by frequent switches between themes and fluctuation in their APES ratings. Moreover, the themes identified in the four interviews underwent various changes, either existing in only one or two interviews, evolving into other themes, or increasing or decreasing in the length and depth of their narrative content (see Table 1).
The theme bereavement experiences and changes was the only focal theme common to all four interviews. However, its position and sub-themes varied across the interviews. Regarding the sub-themes it contained: in the first interview, no "changes" were included; in the second interview, seeking truth and future expectation emerged, but the expression of emotions was almost hidden. In the third interview, emotions were expressed, and he elaborated his future expectations in more detail. In the fourth interview, W's narratives on this theme were more down to earth.
The theme exploration and reconstruction of L's suicide was the second focal theme in the remaining three interviews. While it contained the same sub-themes in the second and third interviews, the narrative content underlying each sub-theme was much more detailed and profound in the third interview. The fourth interview contained a new sub-theme, L's worry and feelings of pressure in the marriage.
Feelings of guilt were also present across W's four interviews. In the first interview, it was a sub-theme. He was experiencing guilt about L's death, although this had moderated once he had learned some of its causes. In the later interviews, it was a secondary theme. In the second interview, W talked at length on this theme, despite saying that he had already left behind his most severe feelings of guilt. In the third interview, W was still burdened by guilt. In the fourth interview, however, W concluded that his feelings of guilt had moderated to an appropriate level.
The Case of Song

Themes and APES Rating of Each Interview
The themes identified in each of Song's two interviews are presented in Table 2 below. The focal themes in the first interview were exploration and reconstruction of the suicide, emotions caused by the suicide, and impact of the suicide. Song talked about her brother's suicide candidly. She sought to understand his inner world, the reasons or even THE reason for it, and possible advance warnings, and to figure out what family-related issues might have impacted his choice of suicide. The suicide had also caused her to feel a range of intense emotions (see Table 2).

Table 2. Themes included in each of Song's two interviews.

First interview (6 months after her loss; 94 min):
1. Exploration and reconstruction of the suicide (family environment, family history and family relationship related to the suicide; ignored advance warnings of the suicide; extended family's experiences of depression and attempted suicide; parental education; personalities and personal experiences of X; reflection on Song's personal experiences)
2. Emotions caused by X's suicide (incredibility of X's suicide; unable to understand or accept it; unable to have done something to prevent it; partial understanding; unchangeable and overwhelming; inestimable and unresolvable pain; pity and tragedy; loss of interest; blame/hate; compassion; feelings of guilt; dazed/resigned)
3. Impact of the suicide (impact on socializing; changes in living arrangement; different impact of X's suicide on different family members; impact on family relationship; impact on career plan and romantic relationship; impact on the extended family; description of suicide method and scene)
- View of life
- Religious belief
- Few people to talk to about X's suicide
- Suicide prevention

Second interview (18 months after her loss; 283 min):
- Tangled romantic relationship (conflicts in relationship; consideration of relationship and marriage; status of the relationship; boyfriend's family background; boyfriend's relationship history and marriage; disliked marital status)
- Bereavement experiences and changes in family members (changes in living arrangements; impact of X's suicide on socializing; different impact of X's suicide on different family members; changes in view of life; carrying out career plan; emotional status; difficulty of emotionally accepting X's suicide; ambivalence between understanding and misunderstanding of X's suicide; description of the suicide method and scene and bereavement experiences immediately after X's suicide; similar suicide bereavement experiences to someone else's)
- Impact of family environment, family history and family relationship on Song (family relationship after X's suicide; conflicts with sister and mother; sister's marriage situation and brother-in-law's family background)
- Exploration and reconstruction of X's suicide (family environment, personalities, personal experiences; triggers and advance warnings of his suicide; suicide note)
- Disliked …

While Song was clear about the impact of the suicide on her and her family, she was "puzzled" about what areas she could work on and how to cope with her intense emotions in practice. Her current life arrangements and future plans had changed. She spent a lot of energy caring for her parents and had also had to postpone making decisions on her career development and romantic relationship due to her "too tired/exhausted" state.
We rated Song's assimilation of her brother's suicide on the APES as 2.7. Song's second interview lasted 283 min. The largest proportion of her talk was spent giving a detailed description of her tangled romantic relationship with her boyfriend. Her quarrels with him had become more serious since X's suicide.
Song: If he's so close to his cousin, why can't he understand my affection for my brother, I'm very angry about this, thinking why can't you understand the trauma in my heart . . . Well, it may be, I think this kind of trauma may be caused by my brother, it may be . . .
Int: How about before? Were you like this before the event?
Song: Before, it wasn't so serious before.
In her secondary theme, bereavement experiences and changes in family members, Song's life seemed to have moved on, since her focus had shifted away from caring for her parents to implementing her career plan. Nevertheless, Song manifested ambivalence at several points. Cognitively, she could understand why her brother had died by suicide but was emotionally unable to accept it. The deliberate nature of suicide challenged her belief about the controllability of the world and life, making her feel both angry and resigned. She tried to prevent her thoughts and emotions about X's suicide affecting her daily life. Paradoxically, in the interview, she recalled the method and scene of his suicide and her family's bereavement experiences immediately afterwards in vivid detail, as if the event had happened only a few days ago.
Song: Now as soon as I think about the details, I'll definitely be overwhelmed immediately, I force myself not to think about it, I need to move forward . . . Now my dad basically doesn't mention it anymore, and my mom doesn't either, she won't mention my brother constantly like before, she also just wants to forget it.
The impact of the family environment, family history and family relationships on Song was a secondary theme. Here, Song's attention focused on how her family influenced her, including her relationship with other family members and her choices in her romantic relationship.
In this interview, several narratives were intertwined, and Song switched freely between them. The same story line was scattered across the interview. Song seemed to give a comprehensive and accurate introduction to her life, covering every aspect from past to present. Of the many story lines, the main one concerned Song's tangled romantic relationship with her boyfriend, with her brother's suicide and the family's bereavement forming an implicit, underlying motif throughout the interview. This may coincide with Song's stated choice of not thinking about X's suicide.
Song's assimilation of her brother's suicide was rated 3.2 on the APES.
Development of Bereavement across the Two Interviews
Song's first interview focused on her exploration and reconstruction of the suicide. She also spent much time in the interview reflecting on the various emotions X's suicide aroused in her and on the impact of his suicide on the entire family. In comparison, the focus in the second interview had shifted onto her tangled romantic relationship and her family's influence on her life. Moreover, in the first interview, Song narrated her various emotions at great length, while in the second interview, her emotions were less evident.
Commonalities
Both W and Song had a comparatively high level of psychological mindedness, meaning that they were aware of their own psychological processes and could elaborate these in clear and rich language. They were also eager to integrate the psychological knowledge they had acquired into their interpretation of their close ones' suicides. Moreover, their religious interest or belief eased their grief to some extent.
The APES ratings of both participants' assimilation of their loss increased over time. W and Song both displayed emotions more in the earlier interviews. These emotions were characterized by ambivalence, turbulence, fluctuation, and detachment. Later, their emotions were less apparent. Moreover, their interview themes were interconnected. For example, the focal themes bereavement experiences and changes and exploration/reconstruction of the suicide impacted each other. Lastly, Song's first and W's second interviews occurred at similar intervals after their loss, at 6 and 7 months, respectively. Coincidentally, their APES ratings of 2.7 and 2.8, respectively, were also similar.
W and Song also shared some family-level commonalities. The families of both participants played an important role in their bereavement. W's parents supported him with their presence, while Song greatly supported her parents. Her relationships with her parents and sister also impacted her bereavement process. Moreover, both families had more intra-familial communication and interaction post- than pre-suicide, although conflicts between family members escalated.
Differences
Compared to W's, Song's emotions were more explicit, more diversified, and more frequently observed, especially in her first interview. Song attributed her brother's death to suicide from the very beginning, whereas W had doubts about the nature of his wife's death. Moreover, probably owing to their different relationships to the deceased, Song's account included more about other family members' bereavement experiences.
Both W's and Song's last interviews took place 18 months after their loss. At the time of the last interview, W had drawn on his inner and outer resources to create a channel for his grief and recovery. He had arrived at a balanced and peaceful phase of grieving after having undergone turmoil, distress, and conflict with his ex in-laws. In comparison, Song continued to experience difficulty in dealing with the overwhelming emotions aroused by thinking about X's suicide. Dramatic and conflictual voices filled her inner world, causing her to be more avoidant in coping with her bereavement. The difference in W's and Song's status at their last interviews is reflected in their APES ratings: 3.7 (W) and 3.2 (Song).
Discussion
Bereavement occasioned by suicide is normally a complicated and long process. The mental status of bereaved individuals varies at different time points after their loss. W and Song both experienced changes and progress in their bereavement during the first 18 months after loss. W journeyed from suffering overwhelming, detached and turbulent emotions, and experiencing a considerable void in his heart and life, to constructing causes for his wife's suicide from different perspectives, dealing with the conflicts triggered by his loss, confronting negative emotions, and finally arriving at a balanced and peaceful phase of grieving. While Song also started from being overwhelmed by intense emotions, she ended up experiencing dramatic mental conflicts and intentionally avoiding mention of her loss.
The grief trajectories of W and Song support the findings of Gaffney and Hannigan [38] on the initial, medium-term, and long-term stages of coping with grief. Dealing with complicated emotions is an essential part of suicide bereavement experiences, as the present two cases show. The intense emotions revealed by the participants in their initial interviews are a previously reported feature of bereavement reactions in the months immediately following a suicide [39,40]. W's emotions were detached at 3 months and hidden at 7 months post loss. Song, in turn, displayed obvious avoidance at 18 months post loss. Ross et al. [19] considered avoidance a maladaptive strategy at 6 and 12 months after suicidal loss. However, views vary. For example, Gaffney and Hannigan [38] found avoidance to be a regulatory strategy, Wilson [34] suggests that detachment and avoidance may facilitate temporary respite from intense grief, and Updegraff and Taylor [41] suggest that avoidant coping can be helpful temporarily.
Along with the expression and regulation of emotions, exploration/reconstruction of the suicide, i.e., sense-making and meaning-making of the suicide, have been demonstrated to be a crucial stage in suicide bereavement. Sands and Tennant [42] posited that reconstruction can help bereaved persons progress in their bereavement trajectory. The significance of exploration/reconstruction for suicide bereavement has also been empirically supported [8,19,38,39,43,44].
In line with Shields, Kavanagh, and Russo [44], who found that the three main themes underlying the process of bereavement, i.e., the feelings of bereavement, the meaning of bereavement, and the context of bereavement, may have a large impact on one another, the themes in the present participants' interviews were interconnected. Studies have suggested that reconstruction of the suicide story can help the bereaved bond with the lost family member in a more positive way, lessening their sense of guilt [42,45]. In our study, the two most prominent themes-bereavement experiences/emotions caused by the suicide and exploration/reconstruction of the suicide-were interrelated and affected each other's development.
Assimilation analysis effectively elaborated the differences in the participants' inner worlds and clearly demonstrated their adaptation to their loss over time. The extraction of themes and related passages from the transcripts showed the prominence and valence of each theme, indicating their sequence in the process of suicide bereavement and giving a clear picture of the participants' real-time grieving status. Comparison of the APES ratings and thematic content across the different interviews clearly revealed the changes in the participants' suicide bereavement process. Thus, the application of assimilation analysis in this study rendered visible not only the micro details in the different phases of bereavement experiences, but also the underlying macro changes over time. This could hardly have been achieved with the research methods used in previous studies on suicide-bereaved individuals' grief trajectories [6,[17][18][19]46].
We conducted in-depth individual interviews with the two participants. Research has shown that such interviews can have an interventive impact on participants, even if unintended [47]. Bonanno, Boerner, and Wortman [48] found that talking about a deceased spouse was beneficial for resilient individuals. Similarly, Baddeley and Singer [49] suggest that the bereaved can make meaning of their bereavement by disclosing their grieving experiences to other people. Shields, Kavanagh, and Russo [44] propose that creating an understanding and non-judgemental environment that allows the bereaved to communicate their experiences candidly and honestly can help them through their grieving process. Here, W was interviewed at a higher frequency and shorter intervals than Song. The potential interventive impact of W's four interviews and/or his participation in a bereavement support group may partly explain his better final status. Research has confirmed the positive function of bereavement support groups [19,38,50]. Participation in research interviews and in support groups provides opportunities for the bereaved to talk about their grieving experiences with others and potentially find meaning in their bereavement.
Ali [51] suggests that consideration of the indigenous cultural context is crucial for generating knowledge on adaptive reactions to grief. The two present cases shed light on the impact of Chinese culture on individuals' bereavement. The intensity of the participants' feelings of guilt and self-blame stemmed partially from their sense of failing in their responsibilities as a husband and as an older sibling. This reveals the uneven distribution of responsibility and the hierarchy in family relationships in Chinese culture. For Song, caring for her parents became her most important bereavement-related task during the first year after the event, as she had to be strong for her family. Chinese families widely value traditional filial piety. This factor may have informed Song's strong sense of responsibility towards her parents and her blaming of her deceased brother, as suicide is deemed an extremely unfilial act in Chinese culture [52]. Hence, Song's family also experienced awkwardness in their social network after their loss, an added burden, especially at the onset of their bereavement. Research has also shown that, particularly in Asian cultures, stigma associated with mental illness casts a shadow not only over the affected individuals but often also over their families [53,54].
Strengths and Limitations
A strength of this study is that it is one of the few to monitor suicide bereavement trajectories over a longer period. Utilizing in-depth interviews, the study tracked the two participants over 18 months, thereby amassing rich and detailed longitudinal data on their experiences. These factors, together with the application of assimilation analysis, enabled the main features of the bereaved individuals' inner worlds to be charted at different times, revealing how they adapted to their loss. Hence, our study extended the (thus far) limited knowledge on changes in suicide bereavement experiences over time, while also demonstrating the applicability of assimilation analysis to this research domain.
We also applied various methods to guarantee the trustworthiness of this study. Many of these methods enabled us to meet the qualitative research criteria suggested by Lincoln and Guba [55], Creswell and Miller [56], and Korstjens and Moser [57]. The methods included prolonged engagement (thorough preparation of the data collection phase; allowing sufficient time to gain familiarity and create a relationship of trust with the suicide-bereaved participants; adequate interview length; a long time span between successive interviews), methodological triangulation (complementing the in-depth interviews with field notes to provide reference points for the data analysis), investigator triangulation (close collaboration between two researchers; alternation between the authors' independent data analyses and their collaborative data sessions), persistent observation (going back and forth between the dataset and data analysis; reading relevant theoretical and empirical literature throughout the research process; allowing observations that emerged from the data to prompt ideas about the data analysis while also allowing the data analysis to impact the subsequent data collection; the data analysis started immediately after each interview and continued until the article was finalized), transferability (giving a rich account of the research process and context, including the participants as well as the research data), dependability and confirmability (detailed descriptions of the analyses and interpretations made and derived from the data), and reflexivity (the first author kept a research journal to keep track of her ideas and thoughts in all phases of the research so that she could reflect on her own role in each phase and, if necessary, make self-corrections).
Since, according to Levitt [58], "qualitative generalization" refers to the phenomenon rather than the population, the findings of this research can to some extent, depending on the context of the bereavement and characteristics of the bereaved, be generalized to the suicide bereavement process and the longitudinal changes that occur during it. However, it should be noted that for several reasons, generalizing from this research to wider populations is limited. First, the number of cases and interviews was small. Second, the difficulty of finding participants who had recently lost loved ones to suicide and were willing to participate in longitudinal interviews raises the possibility of selection bias. Third, the level of psychological mindedness and understanding of psychological knowledge of the present participants is not commonly encountered in the field. Finally, owing to resource constraints, we could not extend the longitudinal interviews beyond 18 months after loss. Thus, it is possible that a longer time span might better facilitate comparative research on this topic.
Clinical Implications
The trajectories found in this study may be of value to those who help people bereaved by suicide, including health professionals, social workers, volunteers, family members, and friends. Forms of assimilation analysis can be applied in in-depth assessment interviews with bereaved individuals to understand their adaptive processes. Our results indicate that professionals should bear in mind that the mental status of persons bereaved by suicide differs both between and within individuals over time. Hence, professional interventions and other social resources targeted to bereaved family members must consider their specific situations and tailor support to meet their changing needs. For bereaved persons suffering from long-term emotional dysregulation and severe or chronic stress symptoms such as anxiety or depression, professionals should evaluate and monitor their risk for developing complicated grief, PTSD, or suicidal tendencies, etc. Finally, coordinated culturally appropriate assistance and services can help promote the recovery of family members.
Conclusions
This study tracked the bereavement journey of two suicide-bereaved individuals and their lived experiences of bereavement at different time points during the first 18 months after loss. Although the mental status of these individuals varied both intra- and inter-individually over time, both underwent a complicated and lengthy process and showed a positive trend towards recovery from their traumatic loss. This study also demonstrated the applicability of assimilation analysis to research on changes in suicide bereavement experiences over time. We further found that participation in a bereavement support group and in individual in-depth research interviews seemed to have a positive effect on these suicide-bereaved individuals. We also speculated on the possible impact of Chinese culture on suicide bereavement in these two cases. The findings of this study can inform the design of more appropriate measures for helping bereaved individuals whose bereavement processes vary in character.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data are not publicly available due to their confidential nature.
B and T cells collaborate in antiviral responses via IL-6, IL-21, and transcriptional activator and coactivator, Oct2 and OBF-1
Transcriptional activator Oct2 and cofactor OBF-1 regulate B cell IL-6 to induce T cell production of IL-21, to support Tfh cell development in antiviral immunity.
Protective long-term humoral immunity against pathogens depends on the generation of antibodies of high affinity that are capable of appropriate effector functions, a process which relies on the formation of germinal centers (GCs) in LNs or in the spleen during infection. GCs are essential but transient structures in which high-affinity antibody-secreting cells and memory B cells are generated during a T cell-dependent (TD) antibody response. Although B cells constitute the majority of cells within a GC, macrophages, follicular DCs, and CD4+ T cells contribute to the defined architecture and the functionality of a GC during an immune response. These cells cooperate via antigen presentation, adhesion molecules, cell surface co-stimulatory molecules, and secreted factors to enable a robust GC reaction and an effective antibody response.
The formation and maintenance of GCs require a specialized subset of CD4+ T cells, T follicular helper cells (TFH cells; Yu and Vinuesa, 2010; Crotty, 2011; Nutt and Tarlinton, 2011). TFH cells that are induced during TD responses are characterized by the expression of several critical surface markers that interact with ligands on APCs such as DCs and B cells. These molecules include co-stimulatory molecules and their ligands (PD-1, ICOS, CD200, OX40, and CD40-ligand), adhesion mediators of the Slam/SAP family, and receptors for IL-6 and IL-21 (Nurieva et al., 2008; Ma et al., 2009; Yusuf et al., 2010).
The coordinated induction of the chemokine receptor CXCR5, and repression of CCR7, allows TFH to home to B cell follicles (Ansel et al., 1999; Haynes et al., 2007). CXCR5 induction depends on an OX40-mediated signal in TFH (Brocker et al., 1999). Antigen-presenting B cells meet their cognate TFH cells at the T-B border and engage in prolonged interactions, mediated by antigen and Slam/SAP proteins, to deliver signals that are essential for TFH maintenance and subsequent productive GC formation (Qi et al., 2008; Deenick et al., 2010). Once in the follicle, TFH cells provide help to activated B cells through the expression of molecules such as CD40-ligand and ICOS and through the secretion of cytokines, predominantly IL-4 and IL-21 (Chtanova et al., 2004; Reinhardt et al., 2009). IL-21, a pleiotropic cytokine, is a hallmark of TFH cells. It has been shown to induce proliferation and expression of Blimp1 and Bcl6 in B cells, thereby influencing their decision to differentiate into antibody-secreting cells or to continue to participate in the GC reaction (Ozaki et al., 2004; Arguni et al., 2006). Furthermore, IL-21 promotes switching to IgG1, IgG2a, and IgG3 and inhibits IgE responses (Ozaki et al., 2002).
Recent studies have suggested that both IL-6 and IL-21 have pivotal roles in vivo in the generation of IL-21-secreting TFH cells and the formation of GCs (Nurieva et al., 2008; Suto et al., 2008). Differentiation of an activated CD4+ T cell into an IL-21-secreting TFH cell is dependent on the transcription factor Bcl6, which acts as a master regulator of CD4+ TFH cell differentiation (Johnston et al., 2009; Nurieva et al., 2009; Yu et al., 2009). In vitro, IL-6 and IL-21 are able to stimulate Bcl6 and enhance Il21 expression in CD4+ T cells, consistent with these cytokines serving an inductive role for TFH (Suto et al., 2008). Nurieva et al. (2008) reported that mice deficient in IL-6 formed fewer GC B cells and had reduced TFH cell numbers after an immune challenge with sheep red blood cells. Similarly, other groups demonstrated a reduced frequency and size of GCs in IL-6-deficient mice (Kopf et al., 1998; Wu et al., 2009). In some of the aforementioned studies, the impaired formation of GCs in the IL-6-deficient mice was linked to a reduction of IL-21-producing TFH cells (Nurieva et al., 2008; Suto et al., 2008). IL-21 has also been implicated in the generation and maintenance of TFH cells and the formation of GCs in vivo (Nurieva et al., 2008; Vogelzang et al., 2008). Thus, it was proposed that IL-6 initially induces Bcl6 and Il21 expression in activated CD4+ T cells, and subsequently, IL-21 acts as a positive feedback loop to maintain Il21 and Bcl6 expression in the TFH (Nurieva et al., 2009; Linterman et al., 2010).
However, other studies have yielded conflicting results on the roles of IL-6 and IL-21 in T FH cell generation and GC formation. These studies indicate that IL-21 is not essential for the generation of T FH cells (Linterman et al., 2010; Zotos et al., 2010; Rankin et al., 2011) and that loss of IL-21R had little effect on initial GC development but was critical for GC maintenance during an immune response (Linterman et al., 2010; Zotos et al., 2010). Another study suggested that IL-6 was not required for the formation of T FH cells or the development of GCs (Poholek et al., 2010).

Figure 1. Analysis of GC B cells in C57BL/6 (WT) mice and in IL-6-, IL-21-, and IL-6/IL-21 double-deficient (DKO) mice. Mice were analyzed on day 10 after influenza infection. Results shown are from three to six independent experiments, totaling 4 naive WT and 21 WT, 8 Il6−/−, 8 Il21−/−, and 16 DKO-infected mice, respectively. (A) Cells from the draining mLNs and from the spleen were stained for GC B cells with α-B220, α-Fas, and PNA, and the percentage of B220+ cells that were also PNA+/Fas+ is shown. (B and C) Frequency distribution of GC B cells in spleens and mLNs from WT and mutant mice analyzed on day 10 of infection. (D) Ratio of GC area to B cell follicle area in spleens of WT and DKO animals on day 10, as measured from histological sections. Each symbol represents an individual animal. (E and F) Frequency distribution of GC B cells from WT and mutant mice analyzed on day 21 of infection. Each symbol represents an individual animal. Statistical analyses used Tukey's multiple comparison tests. ***, P < 0.001; **, P = 0.001-0.01; *, P = 0.01-0.05. Bars and numbers show mean percentage ± SEM. Results are from three to six independent experiments. (G) Representative histological staining to detect GCs in spleens from control or mutant mice 10 d after influenza infection. Paraffin sections were stained with α-GL7, α-B220, and α-CD3. Bars, 50 µm.

Figure 2. Combined loss of IL-6 and IL-21 does not affect the virus-specific CD8 response but limits T FH formation and the antibody response in influenza infection. Analysis of anti-influenza CD8 responses in WT and DKO animals. Mice were analyzed on day 10 after infection. (A) Splenocytes and cells from the bronchoalveolar lavage (BAL) were stained with α-CD8, α-CD44, and either NP-tetramer or PA-tetramer. Frequency distribution of splenic, virus-specific CD8+ T cells (tetramer stains: NP, black symbols; PA, gray symbols) is shown in a representative of two independent experiments using three to five animals of each genotype. (B) Frequency distribution of KLRG1/CD44 double-positive CD8+ T cells in spleen and bronchoalveolar lavage. Each symbol represents an individual animal. Bars and numbers show mean percentage ± SEM. (C) WT, IL-6- or IL-21-deficient, and DKO mice were infected with HKx31 influenza virus and analyzed on day 10 after infection. Cells from the mLNs were stained for T FH cells with α-CD4, α-CXCR5, and α-PD-1, and the percentage of PD-1/CXCR5 double-positive CD4+ T cells was measured. (D) Frequency of T FH from WT and mutant mice analyzed on day 10 after infection. Representative example shown of two to six independent experiments, totaling 6 naive WT, 23 WT, 8 Il6−/−, 8 Il21−/−, and 19 DKO-infected mice, respectively. (E) CD4+PD-1+CXCR5+ T FH cells and CD4+PD-1−CXCR5− T cells were sorted from spleen on day 14 of HKx31 influenza infection. Bcl6 and Il21 expression was measured by RT-qPCR. (As expected, Il21 is not expressed in the DKO mice. This is a control only.)
Bars and numbers show relative gene expression normalized to the housekeeping gene, Hmbs, ± SEM (n = 3). (F) IL-21-GFP reporter mice, on a WT or Il6−/− background, were infected, and mLNs were harvested on days 6, 8, and 10 and stained for T FH cells (CD4, TCR-β, CXCR5, and PD-1). The dot plot ...

Because in an earlier study (Zotos et al., 2010) we showed that loss of IL-21 or its receptor did not change the kinetics of T FH appearance after immunization, we focused on the influence of IL-6. To explore the rate of induction of T FH in vivo, we made use of our recently described IL-21-GFP knockin reporter mice (Lüthje et al., 2012). In these mice (which are heterozygous for a functional Il21 allele), IL-21-GFP+ CD4 cells can be clearly visualized as a subset of T FH cells that localize to the GCs during infection or immunization and express cytokine genes as well as the T FH hallmarks PD-1, CXCR5, and Bcl6. We enumerated CD4+/GFP+ cells in WT and Il6−/− mice bearing the IL-21-GFP allele (Fig. 2 F) during the early stages of the antiviral response (days 6-10; IL-21-GFP+ cells only appear after day 5 in this model; unpublished data) and found that T FH cells in Il6−/− mice were significantly delayed in their generation, but nearly matched WT numbers by day 10 in the mLNs (Fig. 2 G). As expected, GC B cells followed similar kinetics, with Il6−/− mice trailing their WT counterparts (not depicted). These data show that IL-6 strongly influences T FH induction or expansion early in the antiviral response.
Collectively, these data concur with previous studies showing that neither IL-6 nor IL-21 alone is required for the generation of GCs or T FH cells (Poholek et al., 2010; Linterman et al., 2010; Zotos et al., 2010; Rankin et al., 2011). However, we show here that the simultaneous loss of both cytokines strongly blunts both GC and T FH development, and that IL-6 deficiency significantly delays T FH induction in vivo. Thus, the highly interdependent T FH and GC B cell response to infection relies on the combined actions of IL-6 and IL-21.

IL-6 and IL-21 are critical for an effective antibody response to acute viral infection

To assess the consequences of the impaired T FH and GC development in IL-6/IL-21 DKO mice on the humoral antiviral response, we measured antiviral IgM and IgG levels in the serum of WT, single mutant, and DKO mice on day 14 of the infection. Although the single loss of IL-6 or IL-21R alone (which phenocopies loss of IL-21 in the antibody response; Zotos et al., 2010) had no impact on the IgM response and only a modest impact on IgG titers, combined loss of IL-6 and IL-21 resulted in a significant (approximately threefold) reduction in virus-specific IgM (Fig. 2 H). IL-21R-deficient mice had an impaired IgG response (3-4-fold), but the combined absence of IL-21 and IL-6 magnified this effect, reducing IgG titers to 14-fold lower than in WT mice (Fig. 2 H), confirming that the combined actions of IL-6 and IL-21 are essential for a strong humoral response to acute viral infection.
GC B cell frequencies were strongly reduced in the spleens of DKO mice compared with controls. These results indicate that IL-6 and IL-21 in combination play an essential role in the development of GC B cells in response to acute viral infection.
As IL-21 has been shown to contribute to CD8 + T cell responses (Casey and Mescher, 2007;Novy et al., 2011), we wanted to ensure that the defective GC development we observed in the double mutants was not influenced by a crippled CD8 + response to the influenza infection. We therefore measured virus-specific CD8 + T cell responses in WT and DKO mice during the peak of the immune response. There was no significant difference in the frequency of virus-specific CD8 + T cells between WT and mutant mice (Fig. 2 A). Furthermore, there was no difference in the percentage of mature effector KLRG1 + CD44 + CD8 + T cells in control and DKO mice ( Fig. 2 B), demonstrating that, unlike the GC response, the antiviral CD8 + T cell response was unaffected by the loss of these two cytokines.
The combined actions of IL-6 and IL-21 control the early generation of T FH cells
The defective GC reaction in DKO mice raised the question of whether the relevant T helper cell response in these mice was impaired. We examined the draining LNs 10 d after influenza infection from WT, IL-6 and IL-21 singly deficient mice, and DKO mice for CD4 + T cells expressing the T FH markers CXCR5 and PD-1 (Vinuesa et al., 2005). Infected, but not naive, WT mice showed a distinct T FH cell population in the draining LNs. Loss of IL-6 or IL-21 alone did not cause a significant change in frequency of T FH cells (Fig. 2, C and D). However, there was a significant reduction of T FH cells at day 10 of the infection in DKO mice. Interestingly, by day 21, T FH frequencies were similar in all mice (not depicted), implying that IL-6 and IL-21 affect most strongly the early stages of T FH development.
Both IL-6 and IL-21 have been implicated in the induction of Bcl6 and Il21 expression (Nurieva et al., 2009; Linterman et al., 2010), hallmarks of T FH cells. We therefore assessed whether T FH cells that develop in the absence of IL-6 and IL-21 expressed Bcl6. To that end, we isolated CD4+PD-1+CXCR5+ T FH and CD4+PD-1−CXCR5− non-T FH cells from influenza-infected WT and DKO mice and measured Bcl6 and Il21 messenger RNA (mRNA) expression ex vivo by real-time quantitative PCR (qPCR). The combined loss of IL-6 and IL-21 did not alter Bcl6 expression in the T FH cells (Fig. 2 E). These results indicate that IL-6 and IL-21 play critical but redundant or complementary roles early during T FH cell generation or expansion, but T FH cells formed in their absence are normal.
IL-6 is induced early in B cells during an immune reaction
Although the combined loss of IL-6 and IL-21 impaired the formation of T FH cells and strongly reduced the GC response, the maintenance of T FH cells was independent of IL-6 and IL-21. We therefore reasoned that these cytokines play important roles early in a viral infection, during the initiation of T FH cell differentiation. To test whether the kinetics of expression of both cytokines was compatible with this prediction, we measured Il6 and Il21 mRNA levels in cells from the draining mLNs at different time points after influenza infection in WT mice. The expression of Il21 in CD4+ cells in the draining LNs stayed low early in the infection, reaching appreciable levels on days 3-5, which were sustained until at least day 10 (Fig. 3 A). In contrast, Il6 mRNA was transiently expressed in CD19+ B cells, rising sharply between days 1 and 2 of infection, peaking at days 2-3 and falling by day 5 (Fig. 3 A). Although APCs such as DCs and macrophages are thought to be a source of IL-6 in the early process of T FH priming (Kopf et al., 1998; Cucak et al., 2009), our data show that B cells also secrete IL-6 early after an acute viral infection (Fig. 3 A). To test whether IL-6 expression was restricted to newly activated cells, we isolated activated (CD86+CD69+) and resting (CD86−CD69−) B cells (CD19+CD11c−CD11b−) from the draining LNs of mice 3 d after infection and from naive controls. Among B cells, Il6 mRNA was restricted to the activated cell compartment in the infected mice (Fig. 3 B), indicating that activated B cells represent a rapid and abundant cellular source of IL-6 after infection. Furthermore, activated B cells increased rapidly and dramatically in number to become the most abundant APC in the draining mLNs between days 2 and 10 of infection (Fig. 3 C).
Molecular regulation of the IL-6 locus by octamer-binding factors
To investigate whether octamer-binding factors directly interact with the murine Il6 locus control regions, we first analyzed the Il6 gene for consensus octamer-binding sites. The Il6 locus in mice spans a region of 7 kb on mouse chromosome 5. A previous study identified binding sites for several transcription factors (AP-1, NF-κB, or C/EBP family members) at a core promoter region 230 bp upstream of the transcription start site (Fig. 5 A; Baccam et al., 2003). Bioinformatics analysis using PROMO (Messeguer et al., 2002) revealed four consensus octamer-binding sites within the Il6 locus: octamer 1, ATTTGCAT, −3309 to −3302; octamer 2, TTTTGCAT, −1459 to −1452; octamer 3, ATTTGCAT, +3793 to +3800; and octamer 4, ATTTGCAT, +10615 to +10622 (Fig. 5 A).
To determine whether the putative octamer-binding sites in the Il6 gene are functional, nuclear extracts from CpG-stimulated B220+ B cells were used for electrophoretic mobility shift assays (EMSAs). Endogenous Oct2 bound to all four predicted octamer sites (Fig. 5 B). In accordance with earlier evidence and our unpublished data indicating that OBF-1 does not directly contact DNA but associates with DNA-bound Oct2 or Oct1 (Strubin et al., 1995), it was not possible to detect direct binding of OBF-1 to these octamer sites. Each of the sites, however, has the consensus sequence known to be necessary to recruit OBF-1 to the Oct-DNA complex (Gstaiger et al., 1996). Consistent with the EMSA results, chromatin immunoprecipitation (ChIP) revealed that Oct2 associates with all four sites in vivo (Fig. 5 D), as it does at Cd36, a known Oct2 target gene (Fig. 5 C). These data show that several sites in the Il6 locus can be bound directly by Oct2 and suggest that OBF-1, in complex with Oct2, directly regulates Il6 expression in B cells. Together these experiments show that Il6 expression in B cells is strongly dependent on Oct2 and OBF-1.
Next, we wished to determine whether IL-6 produced by activated B cells was sufficient to induce the generation of IL-21-producing CD4+ T cells in vitro. B cells were stimulated for 24 h with CpG1668, a known inducer of IL-6 production by B cells (Yi et al., 1996). They were then added, at different cell ratios, to cultures of α-CD3/α-CD28-stimulated naive CD4+ T cells. After 4 d, the CD4+ T cells were recovered and assayed for Il21 expression. In control cultures, and consistent with published results (Dienz et al., 2009), CD4+ T cell activation in the presence of soluble recombinant IL-6 induced marked Il21 expression (Fig. 3 D). Il21 was also strongly induced in activated T cells that were co-cultured with equal numbers of CpG-activated WT B cells, and the amount of Il21 mRNA expressed in the CD4+ T cells was proportional to the number of B cells added to the cultures. IL-6-deficient B cells failed to induce Il21 mRNA expression in co-cultured CD4+ T cells (Fig. 3 D), indicating that B cell-derived IL-6 is necessary and sufficient to induce IL-21 production by CD4+ T cells in this co-culture system.
Finally, we wished to determine whether B cell-derived IL-6 supported T FH formation in vivo. To this end, 1-2 × 10^7 IL-6-sufficient B cells from Ly5.1 congenic mice were injected on two consecutive days into either WT or DKO mice. The mice were then infected with influenza virus, and T FH generation and GC B cell formation in the draining LNs were analyzed. B cell transfer was relatively inefficient, but by 10 d after infection, a small proportion of donor-derived B cells (≤6% of total B cells) were evident in each experiment (not depicted). As shown (Figs. 2 and 3 E), loss of IL-6 and IL-21 leads to a clear reduction of T FH. However, transfer of IL-6-sufficient B cells led to a significant rescue of the T FH population in DKO mice (Fig. 3 E). In parallel, we observed a partial but significant rescue of GC formation in DKO mice that had received IL-6-sufficient B cells (Fig. 3 F). The rescue was notable considering the low ratio of IL-6-sufficient to -deficient B cells in the recipients. Collectively, these data show that IL-6 is expressed by activated follicular B cells in the draining LNs early after viral infection (days 2-3) and that IL-6 supplied by activated B cells is sufficient to drive IL-21 expression in CD4+ T cells in vitro and T FH cell development in vivo.
Oct2-and OBF-1-deficient B cells are impaired in IL-6 production
The factors that influence IL-6 production during T FH expansion and GC formation are largely unknown. We focused our attention on Oct2 (a DNA-binding POU/homeodomain transcriptional activator) and OBF-1 (OCA-B/Bob.1), a coactivator for Oct1 and Oct2 (König et al., 1995; Shore et al., 2002), because our own studies on TLR signaling responses in B cell lines indicated that both Oct2- and OBF-1-deficient cells expressed less Il6 than controls (unpublished data), and a recent publication suggested that octamer-binding transcription factors are involved in the transcriptional regulation of the human IL6 gene (Smith et al., 2008). We therefore examined whether loss of OBF-1 or Oct2 has an effect on Il6 expression in primary B cells. OBF-1 expression is restricted to the lymphocyte compartment of the immune system. Consistent with the lack of OBF-1 expression in myeloid cells, OBF-1 loss had no impact on Il6 expression in macrophages (CD11b+/GR1+) or BM-derived DCs stimulated with LPS or CpG (Fig. 4, A and B). In contrast, Il6 expression in B cells was strongly influenced by the loss of OBF-1 or Oct2. B cells up-regulate Oct2 and OBF-1 upon activation (Fig. 4 C). Splenic B cells purified from WT and Oct2- and OBF-1-deficient mice were cultured with various mitogens, and Il6 expression was measured by qPCR. Although Il6 expression was induced under all conditions (and most strongly by TLR ligands) in WT B cells, its induction was very weak in both Oct2- and OBF-1-deficient B cells, especially in LPS or CpG cultures (Fig. 4, D and E). Consistent with these results, CpG-activated Oct2- or OBF-1-deficient B cells were strongly impaired in their capacity to induce Il21 transcription in co-cultured CD4+ T cells, most clearly seen when their numbers were limiting (Fig. 4 F). Addition of exogenous IL-6 to these co-cultures fully complemented the deficiencies of Oct2, OBF-1, or IL-6 mutant B cells, inducing robust Il21 mRNA expression in the responding CD4+ T cells (Fig. 4 G), confirming that IL-6 is the dominant inductive cytokine in these cultures.
OBF-1 has been previously implicated in the differentiation of helper T cells (Brunner et al., 2007). Thus we asked whether T FH cells of OBF-1-deficient mice show a normal functional phenotype in vivo and in vitro. First, we measured mRNA expression of the T FH cytokine Il21 and the T FH cell regulator Bcl6 in phenotypic T FH cells sorted from WT and OBF-1-deficient mice. Bcl6 and Il21 mRNA expression was normal in T FH cells from OBF-1 KO mice (Fig. 6 E). We then tested whether CD4+ T cells from OBF-1-deficient mice were able to differentiate into IL-21-producing T cells in vitro and found that WT and mutant T cells were equally capable of doing so (not depicted). Thus, once they are formed, T FH cells in the Obf1−/− mice have a normal phenotype. This strongly suggests that the reduced numbers of T FH cells in the Obf1−/− mice are caused by a T cell-extrinsic defect. To test this, we generated mixed BM chimeras using BM from T cell-deficient or B cell-defective mice (TCR−/− or Cd19−/−, respectively) and BM from OBF-1-deficient or control mice. 8 wk after reconstitution, the recipient mice were infected with influenza virus and analyzed 10 d later. In mice reconstituted with OBF-1-deficient T cells together with WT B cells, GC B and T FH cells formed normally (Fig. 6, F and G), indicating that OBF-1-deficient T cells are not impaired in their ability to differentiate to T FH and to provide sufficient help for GC B cell development in vivo. In contrast, mice with OBF-1-deficient B cells and WT T cells (Obf1−/−:Cd19−/−) showed no GC and reduced T FH cell generation, demonstrating that defective GC formation and reduced T FH cell development are B cell intrinsic in Obf-1−/− mice.
Impaired T FH cell development in OBF-1-deficient mice
To determine whether OBF-1 or Oct2 is involved in GC and T FH development in response to viral infection, WT and OBF-1- or Oct2-deficient mice were infected with influenza virus. The formation of T FH cells and GC B cells was assessed on day 10 after infection. Although Oct2-deficient mice showed normal development of GC B cells in infected mice (in contrast to a study using hapten-protein immunization; Schubart et al., 2001), GC B cells were severely reduced or absent in the lung-draining LNs of OBF-1-deficient mice compared with WT (Fig. 6, A and B). OBF-1 KO mice also showed a significantly reduced T FH cell compartment when compared with WT or Oct2−/− mice (Fig. 6, C and D). Consistent with these cellular deficiencies, virus-specific Ig was severely reduced in OBF-1-deficient mice (not depicted). Thus, optimal genesis of both GC B cells and T FH cells is dependent on OBF-1 but not Oct2.

Figure 5. (A) AK039125 is an adjacent gene. (B) EMSA analysis on nuclear extracts from 24-h CpG-stimulated splenic B cells, performed using short fragments (160-207 bp) containing the consensus octamer site (ATGCAAT) from an Ig heavy chain promoter or octamer sequences identified in the Il6 locus (site 1, ATTTGCAT, −3302; site 2, TTTTGCAT, −1438; site 3, ATTTGCAT, +3793; site 4, ATTTGCAT, +10615). Specific complex formation was detected through supershifts using α-Oct2 or α-OBF-1 monoclonal antibodies, as indicated. Oct2-DNA complexes are indicated with asterisks. The results are representative of three independent experiments. (C and D) Immunoprecipitation (ChIP) of chromatin from purified splenic B cells from mice of the indicated genotypes, using preimmune and hyperimmune rabbit serum specific for Oct2. (C) Cd36 is a known Oct2 target gene (König et al., 1995). (D) ChIP on the same chromatin as in C, but examining the octamer-containing Il6 gene sequences identified in A and positive by EMSA (B). Values in all graphs are means ± SEM (n = 3).
Initiation of a TD antibody response
Very recently, the cellular, anatomical, and molecular events that occur during the earliest stages of a TD B cell response have been scrutinized by several groups (e.g., Crotty, 2011; Deenick et al., 2011; Vinuesa and Cyster, 2011). Among their findings are the critical importance of antigen and DCs as early inducers of T FH polarization in the T cell zone, the colocalization of T FH with antigen-specific B cells, and the need for prolonged contact between T FH precursors and cognate B cells (Garside et al., 1998; Haynes et al., 2007; Deenick et al., 2010; Poholek et al., 2010; Baumjohann et al., 2011; Choi et al., 2011; Kerfoot et al., 2011; Kitano et al., 2011). Prolonged B cell-T cell interaction is critical and is mediated by Slam/SAP family receptors, allowing the sustained antigen signal that apparently drives T FH cell differentiation and maintenance (Qi et al., 2008; Cannons et al., 2010). Once cognate T cells and B cells have been activated by antigen during an immune response to a pathogen, IL-6 is thought to drive T FH cell differentiation and IL-21 secretion (Fazilleau et al., 2009; King, 2009). Subsequently, during a B cell-T cell interaction in a GC, IL-21 can act on both the B cell, driving isotype switching and differentiation to an antibody-secreting plasma cell (Kwon et al., 2009; Linterman et al., 2010; Zotos et al., 2010), and in an autocrine fashion on the T FH cell, reinforcing signals that maintain the T FH phenotype (Nurieva et al., 2008; Vogelzang et al., 2008). Here we show that, in addition to antigen and other co-stimulatory surface molecule interactions, B cells release IL-6 to promote T FH in response to infection.
One explanation for the conflicting data regarding the roles of IL-6 and IL-21 in GC and T FH responses is likely to be the variety of experimental systems used, including immunization with synthetic or nonreplicating antigens (hapten-coupled proteins or sheep red blood cells) or infectious agents, and the differing levels of inflammation (and so IL-6 production) that might result in each situation. Another may be that IL-6 and IL-21 act cooperatively, and loss of either factor alone can be compensated in vivo. The latter is consistent with the results presented here analyzing IL-6/IL-21 double-deficient mice, which clearly show that IL-6 and IL-21 act together on the formation, persistence, and function of GCs and T FH cells. Our findings disagree with aspects of a recent study (Eto et al., 2011), which showed that IL-6 neutralizing antibody had no additional impact on GC formation over IL-21 loss alone. It is possible that IL-6 neutralizing antibody cannot fully neutralize all IL-6 in vivo (particularly if the IL-6 is delivered within a tight junction from cognate B cell to T FH cell), leading to an underestimation of its contribution to the response. Nevertheless, we concur with the general conclusion of this paper, that IL-6 and IL-21 serve different functions in humoral immunity. However, IL-6 and IL-21 are not functionally redundant in the conventional sense; they may share signaling pathways, but they act at different times and on different cells during the response. During acute infection, IL-6 is produced by APCs, including B cells, as we show here. IL-6 acts on CD4+ T cells to initiate or reinforce their polarization toward T FH cells. IL-21 acts later on T FH-polarized cells in an autocrine manner, through a positive feedback loop to reinforce T FH commitment (Suto et al., 2008), and on GC B cells to drive their differentiation. Finally, IL-6 and IL-21 are not the only cytokines initiating TD B cell immunity, as IL-4 and IL-27, another Stat3 signaling cytokine, have recently been implicated in T FH cell differentiation and GC responses (Batten et al., 2010; Vijayanand et al., 2012), and IL-12 has also been shown to be important in T FH development in humans (Ma et al., 2009).
A role for B cell-derived IL-6 in GC and T FH responses
We found that activated WT B cells can stimulate Il21 expression in CD4+ T cells in vitro and that IL-6 was necessary and sufficient for this effect. We also performed a detailed kinetic analysis of IL-6 and IL-21 expression in vivo and found that IL-6 was produced from B cells and myeloid cells in a transient fashion, peaking on day 2 to 3 after infection, then dropping over subsequent days. Although Il6 was expressed in myeloid cells isolated from influenza-infected mice, we found that viral antigens induced high Il6 expression in activated follicular B cells within the draining LN, by far the most abundant APC in the tissue at that time. Conversely, Il21 expression was restricted to CD4+ T cells, increasing from days 3 to 10. We therefore reasoned that IL-6 could play a critical early role in the GC response. Indeed, we were able to improve the weak T FH cell response of IL-6/IL-21 doubly deficient mice and the GC response through the provision of naive WT B cells to the animals just before infection. These data collectively support a role for paracrine secretion of IL-6 by B cells to CD4+ T cells as an important early step in T FH development or expansion. More recently, IL-6 was shown to play an essential late role in the clearance of a chronic viral infection, with follicular DCs supplying IL-6 to T FH, to drive GC formation and neutralizing antibody production (Harker et al., 2011). Collectively, these studies point to a need for provision of IL-6 both early and late for optimal TD antibody responses but suggest that the preferred cellular source of IL-6 changes as the response progresses and T FH cells interact with different cellular partners.
The role of IL-21 in GC formation and T FH development
Some uncertainty surrounds the role of IL-21 during TD B cell responses. Some studies described an impaired initial GC B cell response to various antigens in the absence of an IL-21 signal (Nurieva et al., 2008;Vogelzang et al., 2008;Bessa et al., 2010;Poholek et al., 2010;Eto et al., 2011;Rankin et al., 2011). In contrast, Ozaki et al. (2004) and Zotos et al. (2010) did not detect early GC abnormalities upon loss of IL-21 or IL-21R but saw impaired persistence of GCs. Here we also found normal GC formation in IL-21-or IL-21R-deficient mice on day 10 of an acute viral infection.
The influence of IL-21 on the formation and maintenance of T FH in vivo is also unclear. Some studies describe a reduction of T FH cells in the absence of IL-21 or IL-21R (Nurieva et al., 2008;Vogelzang et al., 2008), whereas others suggest that IL-21 is specifically required for T FH cell persistence but not formation (Linterman et al., 2010). Still others report no impact of IL-21 or IL-21R on the formation of T FH cells (Bessa et al., 2010;Poholek et al., 2010;Zotos et al., 2010;Eto et al., 2011;Rankin et al., 2011). We find here that loss of either IL-21 or IL-21R had little impact on T FH formation during influenza infection at any time point examined.
The role of IL-6 in GC formation and T FH development
Studies to define a role for the pleiotropic cytokine IL-6 in TD immune responses have also yielded conflicting results. Kopf et al. (1998) demonstrated that IL-6-deficient mice had reduced serum IgG2a and formed smaller GCs than controls upon DNP-OVA immunization, whereas others have demonstrated a more severe impact of IL-6 loss on the appearance of GC B cells (Nurieva et al., 2008; Wu et al., 2009). However, Poholek et al. (2010) and Eto et al. (2011) saw no significant reduction of GC B cells after LCMV infection in mice deficient for IL-6 or after IL-6 neutralization. We also find here that loss of IL-6 caused only minimal reduction of GC B cells at the peak of an influenza infection and had a mild effect on GC maintenance.
Although the activity of IL-6 as an inducer of IL-21 expression in CD4+ T cell cultures is well established (Suto et al., 2008; Dienz et al., 2009), its in vivo role in the generation of T FH is not. Nurieva et al. (2008) showed that IL-6 loss caused a strong reduction of T FH cells in response to sheep red blood cell immunization, conclusions which conflict with other studies (Poholek et al., 2010; Choi et al., 2011). Our kinetic data suggest that IL-6 acts primarily in the induction and/or expansion of T FH early in the immune response, as seen most clearly using the IL-21-GFP reporter system on an Il6−/− background, but that T FH cell numbers normalize as the response proceeds (Fig. 2).
A temporal model of T FH and GC generation
Bcl6 is the signature transcriptional regulator of both T FH and GC B cells, and Bcl6 reporter mice have recently revealed the in vivo dynamics of development of these cells during immune responses (Baumjohann et al., 2011;Kitano et al., 2011). Upon infection or immunization, antigen-presenting DCs prime CD4 T cells to rapidly but modestly induce Bcl6 expression and to initiate T FH cell differentiation. However, this signal provokes only incomplete T FH cell differentiation, as the nascent T FH cells do not express PD-1 or CXCR5 and cannot sustain Bcl6 expression or GC development. A second, stronger wave of Bcl6 expression in CD4 + cells, induced through contact with B cells, correlates with increased cell division and CXCR5 and PD-1 expression. These studies confirm that DCs are important for early T FH priming but that from approximately day 3.5 onwards, well before GCs are formed, B cells are required to sustain and reinforce Bcl6 expression and T FH expansion and to enable follicular entry (Haynes et al., 2007;Zaretsky et al., 2009;Deenick et al., 2010;Baumjohann et al., 2011;Goenka et al., 2011). In this paper, we demonstrate an important interplay between IL-6 and IL-21 in the formation of T FH and GCs. The kinetics of production and cellular sources of IL-6 and IL-21 documented here during acute viral infection are consistent with these factors being part of the critical communication between B cells and T FH that is required for GC formation.
Because IL-6 and IL-21 both signal through Stat3 (Zeng et al., 2007; Nurieva et al., 2008; Eddahri et al., 2009), it is possible that precursors of T FH need to exceed a certain Stat3 signaling threshold or signal duration to efficiently up-regulate and maintain the high Bcl6 levels required to commit fully to differentiation. Loss of one cytokine might be tolerated, but loss of both could drop the signal below this limit. Thus, IL-6 and IL-21 may cooperate to ensure that T helper cells receive a sufficiently strong and durable signal to mature, enter the follicle, and support GC formation for a potent antibody response.
MATERIALS AND METHODS
Mice, immunization, and tissue recovery. All mutant mice were >10 generations backcrossed onto the C57BL/6 background. IL-6-deficient (Kopf et al., 1998), IL-21-deficient (Parrish-Novak et al., 2000), IL-21R-deficient (Ozaki et al., 2002), IL-21-GFP reporter (Lüthje et al., 2012), Ly5.1 C57BL/6, recombination activating gene 1-deficient (Rag-1−/−; Spanopoulou et al., 1994), CD19-deficient (Engel et al., 1995), Oct2-deficient (Corcoran et al., 1993), and OBF-1-deficient (Schubart et al., 1996) mice were bred and maintained in the specific pathogen-free facilities of the Walter and Eliza Hall Institute of Medical Research. The Oct2+/+ and Oct2−/− mice used here were Rag1−/− mice reconstituted with fetal liver cells, as the Oct2 mutation is lethal when homozygous (Corcoran et al., 1993). TCR-deficient mice (Philpott et al., 1992) were maintained at the University of Melbourne.
B cells require Oct2 and OBF-1 to produce IL-6
There is accumulating evidence for the direct role of these two transcription factors in the cytokine-mediated regulation of antibody responses (Emslie et al., 2008). Here we found that Il6 expression in B cells is dependent on Oct2 and OBF-1. We identified four consensus sites in the Il6 gene to which Oct2 bound in vitro and in vivo. It is known that OBF-1 is recruited, in an Oct factor-dependent way, to such consensus sequences (Cepek et al., 1996; Gstaiger et al., 1996; Shore et al., 2002). Therefore, we propose that in activated B cells, OBF-1 is able to bind with Oct2 to the consensus sites in the murine Il6 locus, activating the gene. This was specific to B cells, as IL-6 production was not affected in macrophages or DCs isolated from Obf-1−/− or Oct2−/− mice (Fig. 4; and not depicted). Consequently, both OBF-1- and Oct2-deficient B cells were weak in vitro inducers of Il21 expression in CD4+ T cells.
OBF-1 is essential for the formation of GC B cells (Schubart et al., 1996). However, the specific requirements for OBF-1 in the GCs are not fully understood. Impaired BCR signaling in OBF-1−/− B cells (Qin et al., 1998; Samardzic et al., 2002), possibly mediated through loss of SpiB expression (Bartholdy et al., 2006), is likely to be the dominant disability blocking GC development. Clearly, poor IL-6 production by B cells is not the sole defect underlying the total lack of GCs in Obf-1−/− mice, as Il6−/− and Oct2−/− mice can make GCs normally (Figs. 1 and 6). Here we observed a significant reduction of T FH cells in the draining LNs of influenza-infected OBF-1-deficient mice. As OBF-1 is expressed in activated T cells (Sauter and Matthias, 1997; Zwilling et al., 1997) and regulates essential T helper cytokines (Brunner et al., 2007), it was possible that T FH cells required OBF-1 intrinsically for their differentiation. However, analysis of mixed BM chimeras excluded a T cell-intrinsic role for OBF-1 in T FH formation. Conversely, WT T cells were impaired in their differentiation to T FH when Obf-1−/− B cells were present, indicating that the observed T FH and GC phenotypes in OBF-1-deficient mice are both B cell intrinsic.
For DC culture, BM cells were extracted and erythrocytes were removed by brief exposure to 0.168 M NH4Cl. Cells were cultured for 5 d at a density of 1.5 × 10^6 cells/ml in RPMI 1640 medium containing 100 ng/ml mouse Flt3L (PeproTech) at 37°C in 10% CO2 (Naik et al., 2005).
For B/T cell co-cultures, splenic B cells were isolated using α-CD19 or α-B220 beads (Miltenyi Biotech) and stimulated for 24 h with CpG as described above. Activated B cells were washed three times with PBS and co-cultured at different ratios (Fig. 3: 3 × 10^5 or 3 × 10^3 B cells to 3 × 10^5 T cells; Fig. 4: 2 × 10^5 or 6 × 10^4 B cells to 6 × 10^5 T cells) with naive C57BL/6 T cells in the conditions described above. After 5 d, the CD4+ cells were recovered by flow cytometric cell sorting.
B cell rescue. 1-2 × 10^7 splenic B cells were isolated from Ly5.1 congenic mice and injected i.v. on two subsequent days into C57BL/6 and IL-6/IL-21 DKO mice. On the third day, the host mice were infected with HKx31 influenza virus. Mice were sacrificed 10 d after infection, and GCs and T FH cells in mLNs were analyzed by flow cytometry. To adjust for technical variation between experiments that was not related to genotype, the percentages of T FH cells (of total CD4+ T cells) or GC B cells (of B220+ B cells) were normalized to the mean frequency of the WT control from each experimental cohort. We were thus able to determine the fold change of T FH or GC B cell percentage compared with each control group. We also compared T FH and GC B cell data from the transplanted mice with data from infected C57BL/6 and DKO mice that had not received B cells.
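For illustration, the normalization just described amounts to a few lines of analysis code. The sketch below is our addition, not part of the original methods; the data frame contents, column names, and values are hypothetical placeholders.

import pandas as pd

# Hypothetical example data: percentage of T FH cells (of total CD4+ T cells)
# per animal, recorded with its experimental cohort and genotype.
df = pd.DataFrame({
    "cohort":   [1, 1, 1, 1, 2, 2, 2, 2],
    "genotype": ["WT", "WT", "DKO+Bcells", "DKO", "WT", "WT", "DKO+Bcells", "DKO"],
    "pct_tfh":  [4.1, 3.7, 2.6, 1.2, 5.0, 4.6, 3.1, 1.5],
})

# Normalize each measurement to the mean WT frequency of its own cohort,
# removing technical variation between experiments that is unrelated to genotype.
wt_means = (
    df[df["genotype"] == "WT"]
    .groupby("cohort")["pct_tfh"]
    .mean()
    .rename("wt_mean")
)
df = df.join(wt_means, on="cohort")
df["fold_change_vs_wt"] = df["pct_tfh"] / df["wt_mean"]

print(df)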
Western blotting. OBF-1 and Oct2 expression was detected using our monoclonal rat anti-mouse antibodies (clone 9A2, Corcoran et al. [2004]; clone 6E4, Corcoran et al. [2005]). Protein extracts corresponding to equal cell numbers were loaded onto the gel, with equal protein loading confirmed with Ponceau red staining of the membrane after protein transfer. A goat anti-actin antiserum (Santa Cruz Biotechnology, Inc.) was used as a loading control.
Quantitative RT-PCR. First-strand cDNA was transcribed (SuperScript III First-Strand Synthesis System; Invitrogen) from total RNA (RNeasy Micro kit; QIAGEN) using the manufacturers' protocols. Real-time qPCR analysis was performed using a SYBR green system (SuperArray), according to the manufacturer's instructions. The expression data were analyzed on a sequence detection system (ABI Prism 7900HT; Applied Biosystems) and a CFX384 real-time system (Bio-Rad Laboratories) using relative quantification of gene expression. Expression was normalized using hydroxymethylbilane synthase (Hmbs) as a housekeeping gene. Normalization of expression data was computed by the qGENE tool (Simon, 2003).
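As a minimal sketch of housekeeping-gene normalization (the qGENE tool additionally applies amplification-efficiency correction, which is omitted here), relative expression can be computed from Ct values as follows; all numbers and sample names below are invented for illustration.

# Hypothetical raw Ct values for a target gene (e.g., Il21) and the
# housekeeping gene Hmbs, measured in the same sorted-cell sample.
samples = {
    "WT_TFH":  {"ct_target": 24.8, "ct_hmbs": 21.3},
    "DKO_TFH": {"ct_target": 31.5, "ct_hmbs": 21.6},
}

def relative_expression(ct_target: float, ct_reference: float,
                        efficiency: float = 2.0) -> float:
    """Target expression normalized to the reference gene.

    Assumes the same amplification efficiency for both genes
    (efficiency = 2.0 corresponds to perfect doubling per cycle).
    """
    delta_ct = ct_target - ct_reference
    return efficiency ** (-delta_ct)

for name, ct in samples.items():
    rel = relative_expression(ct["ct_target"], ct["ct_hmbs"])
    print(f"{name}: relative expression = {rel:.3e}")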
Reconstitution experiments used donor BM from TCR−/−, Cd19−/−, Obf-1−/−, and C57BL/6 strains, mixed in equal ratios and injected into Rag-1−/− mice. At the indicated times after infections, mice were sacrificed, spleens and mLNs were removed, and single cell suspensions were prepared for analysis as previously described (Blink et al., 2005). All procedures were approved by the Animal Ethics Committee of the Walter and Eliza Hall Institute of Medical Research.
Viral infections.
Mice were inoculated intranasally with 10^4 pfu of the HKx31 (H3N2) influenza virus (Flynn et al., 1998; Belz et al., 2000). Virus stocks were grown in the allantoic cavity of 10-d embryonated hen's eggs and stored in aliquots at −80°C.
Antibodies and flow cytometry. Single cell suspensions of BM cells, splenocytes, or LNs were stained with fluorochrome- or biotin-labeled antibodies. Cells were analyzed on an LSRII, FACSCalibur, or FACSCanto cytometer (BD) or were sorted using a MoFlo (Beckman Coulter) or FACSAria (BD) using a live lymphocyte gate (defined as negative for propidium iodide uptake). Data were analyzed with FlowJo (Tree Star) and Prism (GraphPad Software) software. Antibodies used were the following.

Immunofluorescence histology. Splenic tissue samples were fixed in 4% paraformaldehyde. 7-µm sections were cut and stained with α-B220 (RA3-6B2; biotin; BD), α-CD3 (rabbit polyclonal; Thermo Fisher Scientific), and α-GL7 (supernatant; in house). Secondary antibodies used were streptavidin-Cy5 (BD), α-rabbit Ig Alexa Fluor 488 (goat polyclonal; Invitrogen), and α-rat Ig Alexa Fluor 555 (goat; Invitrogen). Multiple images from spleen sections were taken with an LSM 5 Live microscope (Carl Zeiss), using the Mosaic module (Carl Zeiss) to stitch and align all images taken from one section. For analysis, images covering 1/4 to 1/2 of a spleen section were taken and processed from each sample. The images were analyzed and quantified with the AxioVision software (Carl Zeiss).
For the preparation of BM-derived macrophages, BM was harvested from the femurs of 8-12-wk-old mice and cultured in bacterial-grade dishes for 6 d in RPMI medium supplemented with 10 ng/ml recombinant murine GM-CSF (PeproTech). On days 3 and 5 of the culture period, 70% of the culture supernatant containing nonadherent cells was removed and replaced with fresh media containing 10 ng/ml GM-CSF. On day 6, the loosely adherent and nonadherent cells were removed by vigorous washing. The remaining adherent macrophage population was harvested by incubating the cells for 5 min in PBS + 10 mM EDTA followed by gentle scraping with a rubber policeman (SARSTEDT).

EMSA. Nuclear extracts were prepared (Schreiber et al., 1989), and EMSA was performed as previously described (Corcoran et al., 2004). Restriction fragment probes were labeled using [32P]dATP and Klenow DNA polymerase. Probes for the Il6 locus were generated through PCR amplification using genomic C57BL/6 DNA as a template. 130-bp- to 350-bp-long PCR products were subsequently labeled using [32P]dATP and Klenow DNA polymerase.
ChIP. ChIP was performed essentially as described previously (Emslie et al., 2008) except that Protein G Dynabeads (Invitrogen) were used to capture the protein-DNA immune complexes. The primers used to amplify the four putative Oct2-binding sites in the Il6 gene are listed in the previous section, and the Cd36 primers have been described previously (Emslie et al., 2008). | 2016-05-04T20:20:58.661Z | 2012-10-22T00:00:00.000 | {
"year": 2012,
"sha1": "687c8ddd80db7d9cc25a281d1c3e84992a698c72",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/jem/209/11/2049.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "687c8ddd80db7d9cc25a281d1c3e84992a698c72",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
119233658 | pes2o/s2orc | v3-fos-license | Large Mixing Angle Sterile Neutrinos and Pulsar Velocities
We investigate the momentum given to a protoneutron star, the pulsar kick, during the first 10 seconds after temperature equilibrium is reached. Using a model with two sterile neutrinos obtained by fits to the MiniBoone and LSND experiments there is a large mixing angle, and the effective volume for emission is calculated. Using formulations with neutrinos created by URCA processes in a strong magnetic field, so the lowest Landau level has a sizable probability, we find that with known paramenters the asymmetric sterile neutrino emissivity might account for large pulsar kicks.
I. INTRODUCTION
The existence of sterile neutrinos, in addition to three active neutrinos of the standard model, is of great interest for both particle physics and astrophysics. The Los Alamos LSND experiment found evidence for antineutrino oscillation [1]. An analysis of LSND, along with other short-baseline experiments, found [2] that the standard model is not consistent with the data and the best model has two sterile neutrinos in addition to three active neutrinos.
The recent experiment by the MiniBooNE Collaboration [3] found that the data for electron neutrino appearance showed an excess at low energies, in comparison to what was expected in the standard model. This data, along with the LSND data, has been analyzed in a model with two light sterile neutrinos [4], and compared to MiniBooNE data [5]. The mixing angles of two light sterile neutrinos were extracted. (See, however, Ref. [6], which questions the accuracy of the results of Ref [4]). An analysis of MiniBooNE, LSND, and other experiments [7], based in part on earlier work with the seesaw mechanism [8], also found that the preferred model to fit the data consists of two light sterile neutrinos, in addition to three active neutrinos.
Very recently a global analysis of updated MiniBooNE data, along with data from LSND, KARMEN, NOMAD, Bugey, CHOOZ, CCFR84, and CDHS, has been carried out [9]. There are two possible scenarios, both compatible with the results of Refs. [4,5]. Fits to appearance and disappearance with a 3+1 hypothesis have very good χ² probabilities, and also find CPT violation. The CPT violation, with different mixing angles and mass differences for neutrinos vs antineutrinos, might be understood as matter effects, similar to the MSW effect [10,11], but this has not yet been proved. Also, a 3+2 model was shown to give a good χ² probability and is compatible within the range of the parameters for the 3+2 model used in Refs. [4,5], but without the uncertainty pointed out in Ref. [6].
In the present work we apply this range of fits [4,5] to MiniBooNE/LSND to the study of pulsar velocities. The gravitational collapse of a massive star often leads to the formation of a neutron star, a pulsar. It has been observed that many pulsars move with linear velocities of 1000 km/s or greater, called pulsar kicks. See Ref. [12] for a review. Due to the high density, the active neutrinos have a small mean free path, and only escape at the surface of the neutrinosphere. During the first ten seconds after temperature equilibrium is attained during the gravitational collapse of a massive star, the main cooling is via emission of neutrinos produced by the URCA process, and during the next ten seconds via neutrinos produced by the modified URCA process. We have investigated the pulsar kicks which arise from the modified URCA processes in the time interval 10-20 sec after the supernova collapses, when the neutrinosphere is just inside the protoneutron star. With a strong magnetic field and temperature such that the population of the lowest Landau level is approximately 0.4 of the total occupation probability, we found large pulsar kicks [13].
The largest neutrino emission after the supernova collapse takes place during the first 10 seconds, with URCA processes dominant. The possibility of pulsar kicks from anisotropic neutrino emission due to strong magnetic fields during this time was discussed more than two decades ago [14]. It has been shown [15] that, with the strength of the magnetic field expected during this period, the lowest Landau level has a sizable occupation probability, which produces the neutrino emission asymmetry that is needed for pulsar kicks. However, due to the high opacity for standard model neutrinos in the dense region within the neutrinosphere, few neutrinos are emitted, and the large pulsar kick is not obtained [16].
Sterile neutrinos with a small mixing angle have small opacities. It has been shown [17] that, using the model of Ref. [15] and assuming the existence of a heavy sterile neutrino (mass > 1 keV) with a very small mixing angle constrained to fit dark matter, the pulsar kicks could be explained. More recently the effects of such sterile neutrinos with large masses and small mixing angles have been studied for other processes [18]. See Ref. [19] for a review of processes that might be associated with dark matter sterile neutrinos.
In the present paper we use the fits of Refs. [4,5] with two sterile neutrinos to investigate the possibility of obtaining the large pulsar velocities which have been observed. The values of the masses and mixing angles are within the range found in calculations based on the seesaw mechanism [7]. Note that our model differs from that of Ref. [17] in that, with a much larger mixing angle, there is a higher probability of sterile neutrinos but a much smaller effective volume, due to a larger opacity. However, as we shall show, since the mean free path is much larger than those of standard neutrinos, under the conditions in which standard neutrinos produce a pulsar velocity of 200-300 km/s, the MiniBooNE/LSND sterile neutrinos can give a kick of more than 1000 km/s.
II. ASYMMETRIC STERILE NEUTRINO EMISSIVITY AND PULSAR KICKS IN LIGHT TWO-STERILE NEUTRINO MODEL
Within about 1 second after the gravitational collapse of a large star, the neutrinosphere is formed with a radius of about 40 km, with temperature equilibrium. Within about 10 seconds about 98% of neutrino emission occurs, with neutrinos produced mainly by URCA processes. Due to the strong magnetic field, neutrino momentum asymmetry is produced within the neutrinosphere, but with a small mean free path the neutrinos are emitted only from a small surface layer of the neutrinosphere, and the pulsar kick cannot be accounted for. If a standard active neutrino, say the electron neutrino, oscillates into a sterile neutrino, it will escape from the protoneutron star and neutrinosphere, unless it oscillates back into the active neutrino. The mixing angle plays a key role. In the work of Fuller et al. [17] the mixing angle is so small that the sterile neutrinos are emitted. In the present work the starting point is the analysis of MiniBooNE and LSND data, with two or more sterile neutrinos with small masses and large mixing angles. Before we can proceed, however, it is essential to determine possible effects of the high density and temperature of the medium on the mixing angles.
A. Mixing Angle in Neutrinosphere Matter
It has long been known that dense matter can affect neutrino states. The famous MSW effect [10,11] for understanding solar neutrinos, and the study of oscillations of high-energy neutrinos [20], are studies of mixing of active neutrinos in matter. There have been many other studies. In the present work we are dealing with sterile/active neutrino mixing given by the mixing angle θ_m in neutrinosphere matter. In the work of Ref. [17] it was shown that the mixing angle for sterile neutrinos that can account for dark matter, as well as those produced in the neutron star core, is almost the same as the vacuum value. Starting from the much larger mixing angles for the sterile neutrinos that seem to account for the MiniBooNE and LSND data, we need the value of the mixing angles in the neutrinosphere, as we discuss below. The effective mixing angle in matter, θ_m, can be related to the vacuum mixing angle, θ, by [21]

sin²(2θ_m) = sin²(2θ) / [ sin²(2θ) + (cos(2θ) − 2E V_T/δm²)² ] .   (2)

In Eq. (2), V_T is the finite temperature potential, while the finite density potential due to asymmetries in weakly interacting particles has been dropped, as it vanishes when temperature equilibrium is reached [21]. A convenient form for V_T, with the background of both neutrinos and electrons included, is given in Ref. [22] in terms of G_F and θ_W, the standard weak interaction parameters, and α = 1/137. Assuming T = 20a MeV, with a ≤ 1.0, p = b MeV, and (δm)² = 1.0 eV² [4,6], we find that the matter term 2E V_T/δm² in Eq. (2) is negligible, so that sin²(2θ_m) ≃ sin²(2θ). Therefore, the mixing angle in the neutrinosphere medium is approximately the same as the vacuum mixing angle. This agrees with Ref. [17].
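To make the smallness of the matter correction explicit, write x = 2E V_T/δm² for the matter term in the reconstructed Eq. (2); a one-line expansion (our addition) then gives

\[
\sin^2 2\theta_m
  = \frac{\sin^2 2\theta}{\sin^2 2\theta + (\cos 2\theta - x)^2}
  = \frac{\sin^2 2\theta}{1 - 2x\cos 2\theta + x^2}
  \simeq \sin^2 2\theta \,\bigl(1 + 2x\cos 2\theta\bigr),
\qquad x \equiv \frac{2E\,V_T}{\delta m^2} \ll 1 ,
\]

so the matter effect enters only at first order in x, and θ_m → θ as x → 0, which is the statement in the text.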
B. Emissivity With a Light Sterile Neutrino
We now use the fits to MiniBooNE and LSND with light sterile neutrinos to estimate pulsar kicks. The MiniBooNE results are consistent with the LSND results and CPT only if there are at least two sterile neutrinos. Models with three sterile neutrinos have also been considered [6,8]. Fits by Ref. [4] to the MiniBooNE experiment and the LSND results with two sterile neutrinos are shown in Fig. 1, as presented in Ref. [5].
From the Sterbenz/Maltoni-Schwetz fits one finds the mixing angles of the two sterile neutrinos; the masses are negligibly small. Note that this is in contrast to the parameters of Ref. [17], with the constraint of dark matter giving a mixing angle of (sin 2θ_dm)² ≃ 10^−8 and a mass greater than 1 keV. In our present work we will use values for (sin 2θ)² in the range 0.2 to 0.004 to estimate the pulsar kick, which is compatible with the recent global analysis [9]. The probability of asymmetric emission, giving a pulsar kick, does not depend directly on the sterile neutrino mass in our model, but is proportional to (sin 2θ_s)². It is the large mixing angles found in fits to MiniBooNE and LSND that lead us to carry out the investigation in the present paper.
C. General Formulation of Neutrino Emissivity With a Strong Magnetic Field
The neutrino emissivity is given in general form in many papers (see, e.g., Refs. [23,24]); in that expression M is the matrix element for the URCA process and F is the product of the initial and final Fermi-Dirac functions corresponding to the temperature and density of the medium. The main source of the asymmetric emissivity that produces the pulsar velocity is the fact that the electron has a large probability to be in the lowest (n = 0) Landau level.
See Refs [25,26] for a discussion of Landau levels.
The asymmetric emissivity can be seen by considering the weak axial interaction, W_A, with G = 10^−5 m_n^−2 and g_A = 1.26, where the χ are the nucleon spinors and the lepton wave functions are Ψ(q_e) and Ψ(q_ν), with q_e and q_ν the electron and antineutrino momenta, respectively. The key to the asymmetric emission is given by the trace over the leptonic currents, Tr[l_i† l_j], with the magnetic field B in the z direction. We only consider the weak axial force, which is dominant.
Using the relationship given in Eq. (8), one can show that the result of the traces and integrals over the axial product matrix element takes the form given in Eq. (9), with the magnetic field B along ẑ. Details are given in Ref. [13], where it is shown that the asymmetric neutrino emissivity, using the general formulation of Ref. [23], depends on T_9 = T/(10^9 K); p_ns, the neutron star momentum; P(0), the probability of the electron produced with the antineutrino being in the lowest Landau state; f = 0.52, the probability of the neutrino being at the +z neutrinosphere surface [13]; V_eff, the volume at the surface of the neutrinosphere from which neutrinos are emitted; and Δt ≃ 10 s, the time interval for the emission. We derive P(0) and V_eff, the effective volume for the emissivity, in the next two subsections. Just as in our previous work in which Landau levels play a crucial role [13], only the lowest Landau level, for which the helicity is −1/2 (rather than ±1/2 as with the usual Dirac spinors), gives asymmetric emission. The probability that an electron in a strong magnetic field is in the lowest (n = 0) Landau level, P(0), can be calculated from the temperature T and the energy spectrum of Landau levels [25,26]. A particle with momentum p and effective mass m*_e in a magnetic field B in the nth Landau level has the energy E_n(p) = [p² + m*_e²(1 + 2nB/B_c)]^(1/2), with B_c = 4×10^13 G, where m*_e is the effective mass of the electron at the high density of the protoneutron star and neutrinosphere.
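Written out, the Landau spectrum just described and the occupation sum used in the next paragraph take the following forms. This is our reconstruction of the missing Eqs. (5)-(7): the Fermi-Dirac momentum integral is standard, and the degeneracy convention g_0 = 1, g_{n≥1} = 2 is our assumption rather than a statement of the original.

\[
E_n(p) = \left[\,p^{2} + m_e^{*2}\left(1 + 2n\,\tfrac{B}{B_c}\right)\right]^{1/2},
\qquad B_c = 4\times 10^{13}\,\mathrm{G},
\]
\[
P(0) = \frac{F(0)}{\sum_{n=0}^{\infty} g_n\,F(n)},
\qquad
F(n) = \int_{p_{\min}}^{\infty}\frac{dp}{\exp\!\left[\bigl(E_n(p)-\mu\bigr)/T\right]+1}.
\]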
From standard thermodynamics, the probability of occupation of the n = 0 Landau level, P(0), is given by the ratio of F(0) to the occupation sum over all Landau levels [17], where F(n) is the Fermi-Dirac momentum integral for the nth level at magnetic field B, temperature T, and chemical potential µ. The electron energy is restricted to magnitudes greater than µ, but the integrals in Eq. (7) are insensitive to p_min, so we take p_min = 0 as in Ref. [17]. We agree with the estimate of Ref. [17] for P(0). Note that if we had used the free electron mass, m_e, in the Landau energies (Eq. (5)), we would have obtained a much smaller value for P(0). For B = 10^16 G, µ = 40 MeV, m*_e = 4 MeV, and T_ν-sphere = 20 MeV, P(0) ≃ 0.3. This is similar to our estimate of P(n=0) ≃ 0.4 at the surface of the protoneutron star at about 10 seconds [13]. Therefore our result for asymmetric emissivity differs from that of Fuller et al. [17] mainly in that we have a much larger mixing angle and a much smaller effective volume, since the sterile neutrinos oscillate back to active neutrinos within the neutrinosphere; our emission therefore only takes place near the surface of the neutrinosphere. However, in contrast with purely active neutrino emission, in which the opacity results in very small pulsar kicks [16], the sterile neutrinos have a much larger effective volume and can therefore produce much larger pulsar velocities.

E. Estimate of V_eff = Effective Volume for Emission

To estimate V_eff we make use of the early studies of opacity in about the first 20 s of the creation of a neutron star via a supernova collapse [27,29,30] and a recent detailed study of neutrino mean free paths [28]. Since the mean free path of the sterile neutrino is determined by that of the standard neutrino to which it oscillates, λ, we make use of studies of active neutrino mean free paths. First note that the inverse neutrino mean free path 1/λ is given by an integral in which M_fi is the weak matrix element and n(q) is the Fermi distribution. For the calculation for the sterile neutrino, for which M_fi = 0, one can use the value of 1/λ with a factor of sin²(2θ) from the matrix element and another such factor from the occupation probability. From the results of previous authors, for T in the 10 to 20 MeV range and µ in the 20 to 40 MeV range, we estimate that λ ≃ 1.0 cm. This gives a range for the effective sterile neutrino mean free path of λ_s ≃ 5.0 to 250 cm.
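Numerically, the quoted endpoints correspond to dividing λ ≃ 1.0 cm by the extreme values of (sin 2θ)² considered in this paper, 0.2 and 0.004:

\[
\lambda_s \simeq \frac{\lambda}{\sin^2 2\theta}:
\qquad
\frac{1.0\ \mathrm{cm}}{0.2} = 5.0\ \mathrm{cm},
\qquad
\frac{1.0\ \mathrm{cm}}{0.004} = 250\ \mathrm{cm}.
\]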
For a neutrinosphere radius of R_ν = 40 km, with λ_s << R_ν, this gives an effective emission volume V_eff ≃ 4πR_ν²λ_s. From Eq. (10), with R_ν = 40 km and λ = 1.0 cm, with T_9 = T/(10^9 K), and taking the mass of the neutron star to equal the mass of our sun, M_ns = 2 × 10^33 g, we obtain for the velocity of the neutron star v_ns ≃ 3.35 × 10^−7 (T/10^10 K)^7 km/s, which means that sterile neutrino emission could account for the large pulsar kick with the parameters extracted from Refs. [4,5]. If we use the physical parameters that give Eq. (18) for electron neutrinos, we obtain a pulsar velocity of v_ns = 95 km/s, which is consistent with previous predictions by several authors.
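A quick numerical check of the reconstructed T^7 scaling follows; the km/s units are our reading of the garbled original, and the MeV-to-Kelvin conversion is standard.

# Evaluate the pulsar-kick scaling v_ns ≈ 3.35e-7 (T / 10^10 K)^7 km/s
# at neutrinosphere temperatures in the range discussed in this paper.
MEV_TO_KELVIN = 1.1605e10  # 1 MeV expressed in Kelvin

def v_ns_km_per_s(temperature_mev: float) -> float:
    """Neutron star kick velocity from the T^7 scaling quoted above."""
    t_kelvin = temperature_mev * MEV_TO_KELVIN
    return 3.35e-7 * (t_kelvin / 1e10) ** 7

for t in (10.0, 15.0, 20.0):
    print(f"T = {t:4.1f} MeV -> v_ns ≈ {v_ns_km_per_s(t):8.1f} km/s")

At T ≃ 20 MeV this gives v_ns ≈ 1.2 × 10^3 km/s, consistent with the statement that kicks of more than 1000 km/s can be reached.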
It should be noted that studies of the MiniBooNE and LSND results are in progress, and the mixing angles that result could be different from those obtained in Refs. [4,5]. For this reason we have used a range of parameters. Preliminary data from the MiniBooNE/MINOS experiment [31], however, are consistent with the MiniBooNE results [3]. We also once more point out that although Ref. [6] questions the accuracy of the parameters extracted by Ref. [4], our model is compatible with the recent global analysis [9], with very good χ^2 probabilities.
III. CONCLUSIONS
Because of the strong magnetic fields in protoneutron stars and the associated neutrinosphere, the electrons produced in the URCA processes that dominate neutrino production in the first 10 seconds have a sizable probability, P(0), of being in the lowest (n = 0) Landau level. This leads to asymmetric neutrino momentum. With the mixing angles found in Refs. [4,5], we find that the sterile neutrinos produced during this period for high luminosity pulsars can give the pulsars velocities of greater than 1000 km/s, as observed. We emphasize that our results are consistent with sterile neutrino parameters based on a global analysis of data from MiniBooNE, LSND, KARMEN, NOMAD, Bugey, CHOOZ, CCFR84, and CDHS [9], rather than on a model. There is a high probability that light sterile neutrinos with a large mixing angle exist, which is the basis of our work.
There is a strong correlation of the pulsar velocity with temperature, T. Since it is difficult to determine T accurately, it is difficult for us to predict the velocity of a pulsar whose kick arises from sterile neutrino emission. On the other hand, if the pulsar kick arises from the asymmetric emission of active neutrinos produced by the modified URCA processes after 10 seconds, also proportional to P(0) [13], then T can be determined by an accurate measurement of the neutrinos from the supernova. Therefore, in future years, with much more accurate neutrino detectors, one could predict the velocity of the resulting pulsar. Unfortunately, the energy of emitted sterile neutrinos cannot be measured. From our results in the present paper and those in Ref. [13], high luminosity pulsars receive a large kick both from sterile neutrinos in the first ten seconds and from standard neutrinos in the second ten seconds. | 2009-08-12T17:14:41.000Z | 2009-06-15T00:00:00.000 | {
"year": 2009,
"sha1": "c6979e8bd9e7df534c661744c64e1b3b18062000",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2d506195181988df736dedeb8ed5f5f455e864bc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233314049 | pes2o/s2orc | v3-fos-license | Respiratory parameters on diagnostic sleep studies predict survival in patients with amyotrophic lateral sclerosis
Objective In amyotrophic lateral sclerosis (ALS), respiratory muscle involvement and sleep-disordered breathing relate to worse prognosis. The present study investigated whether respiratory outcomes on first-ever sleep studies predict survival in patients with ALS, specifically taking into account subsequent initiation of non-invasive ventilation (NIV). Methods From patients with ALS, baseline sleep study records, transcutaneous capnometry, early morning blood gas analysis, survival data and clinical disease characteristics were retrospectively analyzed. Patients were stratified according to whether enduring NIV was consecutively established (“NIV(+)”) or not (“NIV(–)”). Results Among the study cohort (n = 158, 72 female, 51 with bulbar onset ALS, 105 deceased) sleep-disordered breathing was present at baseline evaluation in 97 patients. Early morning base excess (EMBE) > 2 mmol/l predicted nocturnal hypercapnia. Ninety-five patients were NIV(+) and 63 were NIV(–). Survival from baseline sleep studies was significantly reduced in NIV(–) but not in NIV(+) patients with nocturnal CO2 tension ≥ 50 mmHg, apnea hypopnea index ≥ 5/h, and EMBE > 2 mmol/l. Hazard ratio for EMBE > 2 mmol/l was increased in NIV(–) patients only, and EMBE independently predicted survival in both NIV(–) and NIV(+) patients. Furthermore, EMBE on baseline sleep studies was the only predictor for survival from symptom onset, and hazard ratio for shorter survival was markedly higher in the NIV(–) than the NIV(+) group (2.85, p = 0.005, vs. 1.71, p = 0.042). Interpretation: In patients with ALS, EMBE > 2 mmol/l predicts nocturnal hypercapnia and shorter survival. Negative effects of sleep-disordered breathing on survival are statistically abolished by sustained NIV. Supplementary Information The online version contains supplementary material available at 10.1007/s00415-021-10563-0.
Abbreviations
ODI: Oxygen desaturation index
OSA: Obstructive sleep apnea
p_tcCO2: Transcutaneous carbon dioxide tension
SDB: Sleep-disordered breathing
SpO2: Peripheral oxygen saturation
T0: Date of self-reported symptom onset
T1: Date of baseline sleep evaluation
T2: Date of death for deceased patients or date of last clinical status report
t_CO2≥50: Cumulative duration of p_tcCO2 ≥ 50 mmHg
t_SpO2<90: Duration of SpO2 falling below 90%
ΔFS: Linear progression rate; (48 − ALS-FRS-R sum score)/duration from disease onset in months
Introduction
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease mainly involving the motor pathways [1]. Median survival time is 2.5-3.5 years after symptom onset and 1.5-2.5 years following diagnosis [2,3]. Prognosis is mainly determined by phrenic nerve involvement that puts patients at risk of chronic respiratory failure and pulmonary infections. These comprise the most frequent causes of premature death in affected patients [4,5]. Mechanical ventilation has been shown to significantly improve health-related quality of life and survival [6,7]. Moreover, early initiation of non-invasive ventilation (NIV) has proven benefits in patients with both non-bulbar and bulbar onset of disease [8]. Since respiratory failure first manifests as nocturnal hypoventilation [9], sleep studies and sensitive detection of sleep-related hypercapnia are essential for early implementation of NIV [10,11]. In ALS, sleep-disordered breathing (SDB) may also encompass obstructive sleep apnea (OSA) and, rarely, central sleep apnea [10]. Measures of respiratory muscle dysfunction and SDB have been introduced as predictors of disease progression and overall prognosis [12][13][14][15][16][17]. However, the predictive value of nocturnal capnometry has not yet been evaluated, and only one study focused on the apnea hypopnea index (AHI) [14].
A recent study showed that daytime arterial base excess relates to respiratory muscle weakness and risk of death or tracheostomy [18]. The present study investigated whether nocturnal transcutaneous carbon dioxide tension, AHI, and early morning base excess (EMBE) on baseline sleep studies predict survival, taking into account whether NIV was subsequently initiated, and whether it was effective in terms of treatment adherence and sustained correction of SDB. The latter aspect has been proven meaningful in ventilated patients with neuromuscular disorders (including ALS) as persisting obstructive events, desaturations or nocturnal hypercapnia all impact survival [19][20][21].
Patients and study design
We retrospectively analyzed clinical records and sleep studies derived from patients with ALS admitted for first-ever evaluation of sleep-related breathing between January 2010 and March 2018. All patients met the revised El Escorial criteria for possible, probable or definite ALS [22]. Patients underwent diagnostic sleep studies for the following reasons: FVC < 70% of the predicted value or symptoms possibly indicating sleep-disordered breathing, such as non-restorative sleep, sleep disturbances, morning headache, or daytime sleepiness. Subjects with any kind of mask-based therapy or invasive ventilation at initial presentation were excluded. The initial cohort comprised 285 patients. In 25 subjects sleep records were incomplete, 16 patients subsequently underwent tracheostomy, and for 86 patients follow-up data on later start of ventilatory support, health status or date of death could not be retrieved. Finally, 158 individuals entered survival analysis, which focused on survival time after self-reported symptom onset (referred to as T0) and following baseline evaluation of sleep-related breathing (referred to as T1). As an endpoint for both timespans, T2 was defined as the date of death for deceased patients or the date of the last clinical status report for patients who were still alive when the database was closed. Reasons for exclusion of patients who had been tracheotomized at any time following T1 will be outlined in the discussion section. The study was approved by the local ethics authority (Ethikkommission der Westfälischen Wilhelms-Universität Münster und der Ärztekammer Westfalen-Lippe, 2016-178-f-S).
Sleep studies
At baseline, either cardiorespiratory polygraphy (Weinmann, Hamburg, Germany; n = 83) or full polysomnography (Nihon Kohden, Rosbach, or Somnomedics, Randersacker, Germany; n = 75) was performed. Scoring of sleep and respiratory events followed standard recommendations [23,24]. Respiratory sleep outcomes comprised oxygen desaturation index (ODI), AHI, peripheral oxygen saturation (SpO2), and duration of SpO2 falling below 90% (t_SpO2<90). Sleep apnea was defined as AHI ≥ 5/h. Baseline, maximum and mean carbon dioxide tension (p_tcCO2) were extracted from transcutaneous capnometry recordings that were available in all patients (Sentec, Therwil, Switzerland). Nocturnal hypoventilation was defined as peak p_tcCO2 ≥ 50 mmHg (6.7 kPa) or an increase of 10 mmHg or more from the awake baseline value [25]. Cumulative duration of the p_tcCO2 increase ≥ 50 mmHg (t_CO2≥50) was specifically recognized, as it has been proposed for defining nocturnal hypoventilation more recently [26]. We defined SDB as the presence of sleep apnea or nocturnal hypoventilation, or both. In 146 patients capillary blood gas analysis was available. Blood samples were taken from the arterialized earlobe within 1 h after awakening.
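The diagnostic definitions above translate directly into a small decision rule. A minimal sketch (the function and field names are hypothetical; the thresholds are those stated above and in Ref. [25]):

```python
# Direct transcription of the diagnostic definitions used in this study.
def classify_sdb(ahi, ptcco2_baseline, ptcco2_peak):
    """Return (sleep_apnea, nocturnal_hypoventilation, sdb) flags."""
    sleep_apnea = ahi >= 5.0                          # events per hour
    hypoventilation = (ptcco2_peak >= 50.0            # mmHg, peak criterion
                       or ptcco2_peak - ptcco2_baseline >= 10.0)  # rise criterion
    return sleep_apnea, hypoventilation, (sleep_apnea or hypoventilation)

# hypothetical patient: mild OSA plus borderline nocturnal hypercapnia
print(classify_sdb(ahi=7.2, ptcco2_baseline=42.0, ptcco2_peak=51.5))
# -> (True, True, True)
```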
Ventilator settings
Treatment settings comprised air humidification, nasal or oronasal interface, and pressure-controlled bi-level ventilation using a spontaneous-timed mode with average volume assured pressure support (AVAPS ® , Philips Respironics).
Clinical measures and formation of subgroups
From clinical records, we collected demographic data, body mass index, and date and type of symptom onset (bulbar or non-bulbar). Baseline spirometry was available for 103 patients. Clinical status was documented using the revised ALS Functional Rating Scale (ALS-FRS-R) in 151 patients [27]. Presuming that the ALS-FRS-R score had been 48 prior to disease onset, we calculated the individual progression rate defined as the average monthly decline of the ALS-FRS-R score before admission to the sleep laboratory (ΔFS = (48 − ALS-FRS-R)/duration from disease onset in months) [28,29]. Regarding functional deterioration, patients were stratified as "slow" (ΔFS < 0.47/month), "moderate" (ΔFS 0.47-1.1/month) and "fast" progressors (ΔFS > 1.1/month) [29]. Patients were also categorized according to bulbar function at the time of baseline sleep studies using the bulbar subscore of the ALS-FRS-R. A subscore of > 6 was classified as 'no, mild or moderate' bulbar dysfunction and ≤ 6 was defined as 'severe' bulbar dysfunction [12]. Cognitive and behavioral impairment or presence of fronto-temporal dementia (ALS-FTD) was documented according to current diagnostic criteria [30]. Data on NIV initiation, adherence to treatment, tracheostomy, survival status or date of death were collected from clinical records or obtained by contacting the patients' caregivers and the deceased patients' dependents. In patients using NIV, information on treatment adherence was specifically available from device memory data. Patients were subdivided into a NIV(+) group that comprised all patients with regular use of NIV until T2, and a NIV(-) group encompassing subjects in whom enduring NIV was never established. The latter group included patients who either declined NIV, who were started on continuous positive airway pressure (CPAP) therapy only, or who transiently attempted but then abandoned NIV. Mean survival times and hazard ratios were calculated for different strata which were formed according to cut-off values for AHI (≥ 5/h), maximum p_tcCO2 (≥ 50 mmHg), t_CO2≥50 (≥ 30 min), EMBE (> 2 mmol/l) [22], and FVC (< 70% predicted) [26].
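A minimal sketch of the ΔFS computation and the progression strata defined above (the example patient values are hypothetical):

```python
# Progression rate and strata as defined in the text (cut-offs from Ref. [29]).
def delta_fs(alsfrs_r_sum, months_since_onset):
    """Average monthly ALS-FRS-R decline, assuming a pre-onset score of 48."""
    return (48 - alsfrs_r_sum) / months_since_onset

def progression_stratum(dfs):
    if dfs < 0.47:
        return "slow"
    elif dfs <= 1.1:
        return "moderate"
    return "fast"

dfs = delta_fs(alsfrs_r_sum=36, months_since_onset=15)   # hypothetical patient
print(f"dFS = {dfs:.2f}/month -> {progression_stratum(dfs)}")   # 0.80 -> moderate
```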
Statistical analysis
Statistical data analysis was performed using IBM SPSS® 26.0 (IBM, Armonk, NY, USA). Results are presented as mean and standard deviation or standard error, respectively. For comparison of categorical variables the Chi-square test was applied. Comparison between means was performed using the two-tailed t test and ANOVA in case of normal distribution or the Mann-Whitney U and Kruskal-Wallis test for non-parametric data. Correlations between continuous variables were analyzed using Pearson's or Spearman's correlation coefficient as appropriate [31]. Cumulative 5-year survival was visualized using Kaplan-Meier plots and analyzed using the log rank test. Hazard ratios were calculated using Cox regression analysis with upright FVC, AHI, p_tcCO2, t_CO2≥50, EMBE and ALS-FTD included. Effects of different variables on survival were analyzed using a linear regression model (including respiratory measures and the presence of ALS-FTD). p values < 0.05 were considered statistically significant. For multiple testing, Bonferroni's correction was applied.
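The survival workflow described above (the original analysis was done in SPSS) can be sketched in open-source form with the lifelines package; a minimal sketch, with hypothetical file and column names and only the EMBE stratification shown:

```python
# Minimal sketch of the Kaplan-Meier / log-rank / Cox workflow in lifelines.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("als_sleep_cohort.csv")   # hypothetical cohort file

# Kaplan-Meier curves stratified by the EMBE cut-off
high = df["embe"] > 2.0
kmf = KaplanMeierFitter()
for label, mask in (("EMBE > 2 mmol/l", high), ("EMBE <= 2 mmol/l", ~high)):
    kmf.fit(df.loc[mask, "months_T1_to_T2"], df.loc[mask, "deceased"], label=label)
    kmf.plot_survival_function()

# log-rank comparison of the two strata
res = logrank_test(df.loc[high, "months_T1_to_T2"], df.loc[~high, "months_T1_to_T2"],
                   df.loc[high, "deceased"], df.loc[~high, "deceased"])
print(res.p_value)

# Cox regression with the respiratory covariates named in the text
cph = CoxPHFitter()
cph.fit(df[["months_T1_to_T2", "deceased", "fvc", "ahi", "ptcco2_max", "embe"]],
        duration_col="months_T1_to_T2", event_col="deceased")
cph.print_summary()
```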
Clinical characteristics of patients
Demographic data and disease characteristics at baseline are depicted in Table 1. One individual was diagnosed with familial ALS (SOD1 gene mutation). Ten subjects fulfilled diagnostic criteria of ALS-FTD [30]. Comorbidities included arterial hypertension (n = 47), chronic obstructive pulmonary disease (n = 3), and congestive heart failure (n = 1). Medication with riluzole was specified by the majority of patients at T1, with no significant difference between the NIV(+) and NIV(-) groups ( Table 1). The number of patients with percutaneous gastrostomy increased from 10 at T1 to 54 at T2.
Male patients were affected more severely and more often by SDB (Table 2). Nocturnal hypoventilation was present in 36/86 (41.9%) of men and in 19/72 (26.4%) of women (p = 0.042). For OSA, prevalence numbers were 52/86 (60.5%) and 28/72 (38.9%), respectively (p = 0.007). Whereas pCO2 on morning blood gas analysis was higher in men than in women (albeit normal), EMBE showed no statistical difference between genders (Table 2). When patients were grouped according to severity of bulbar symptoms or disease progression as reflected by the ΔFS, FVC was significantly lower in patients with severe bulbar dysfunction, and EMBE was higher in fast progressors than in slow and moderate progressors (Table 2). Other respiratory sleep outcomes did not show significant differences between these subgroups (Table 2).
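As a quick consistency check, the reported gender difference for nocturnal hypoventilation (36/86 men vs. 19/72 women) can be reproduced with an uncorrected chi-square test; whether the original analysis used a continuity correction is not stated, so this is only a sketch:

```python
# 2x2 comparison of nocturnal hypoventilation prevalence by gender.
from scipy.stats import chi2_contingency

table = [[36, 86 - 36],   # men: hypoventilation yes / no
         [19, 72 - 19]]   # women: hypoventilation yes / no
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # p close to the reported 0.042
```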
Ventilatory support
Between T1 and T2, NIV was eventually initiated in 108/158 (68.4%) patients. In 77 subjects, NIV was started shortly after T1, and 31 patients went on NIV later during the disease course. Following baseline sleep studies, initiation of NIV was based on the diagnosis of either SDB (n = 63) or on FVC reduction and symptoms of respiratory muscle weakness only (n = 14). Regular use of NIV until T2 was documented for 95/158 patients, referred to as the NIV(+) group. The NIV(-) group (63/158 patients) comprised 47 subjects in whom ventilatory support was never established, 13 patients who aborted NIV without later recommencing it, and 3 patients with isolated OSA on initial sleep studies who used nocturnal CPAP therapy until T2. In patients who were deceased by the time of database closure (n = 105) NIV had been regularly used until death by 71 individuals (67.6%), while this was the case among 24/53 patients (45.3%) who were still alive.
Survival from diagnostic sleep studies (T1)
Survival analyses focused on the impact of respiratory sleep outcomes in either NIV(+) or NIV(-) patients. We investigated mean survival, cumulative survival and hazard ratios with regard to specific thresholds for parameters reflecting SDB. In addition, bivariate correlation analysis and linear regression were used to test for associations between continuous data and survival. Mean survival time after T1 was 22.1 (18.9) months among all patients, 23.8 (20.4) months in the NIV(+) group and 19.6 (16.1) months in the NIV(-) group, with no significant group difference (Table 1).
In the NIV(-) group, patients in whom maximum p_tcCO2, AHI, or EMBE at T1 surpassed the specific cut-off value (≥ 50 mmHg, ≥ 5/h, or > 2 mmol/l) showed significant reduction of mean survival time compared to patients with sub-threshold values (Table 3). In contrast, when patients in the NIV(+) group were stratified accordingly, mean survival time did not differ between subgroups. Thus, once NIV was established and steadily used, the negative impact of baseline respiratory measures on mean life span was abolished. Notably, NIV(+) patients in whom the AHI had been ≥ 5/h on baseline sleep studies showed longer median survival than subjects with an initial AHI < 5/h, although this difference missed statistical significance (25.1 ± 20.3 months vs. 22.1 ± 20.7 months).
When the NIV(+) and NIV(-) groups were directly compared, patients with EMBE > 2 mmol/l or AHI ≥ 5/h at T1 showed longer survival if enduringly ventilated (EMBE: p = 0.022; AHI: p = 0.012). For subjects with maximum p_tcCO2 ≥ 50 mmHg this finding could not be reproduced, most likely because only 4 of those patients had not started NIV.
Linear regression identified EMBE as the only independent significant predictor of survival after T1 in NIV(-) and NIV(+) patients (Supplemental Table 1).
Survival from symptom onset (T0)
Regarding survival from T0, the same statistical approach and threshold values were used as described above. Mean survival time after T0 was 52.4 (40.8) months among all patients, 50.8 (31.1) months in the NIV(+) group and 54.8 (52.4) months in the NIV(-) group, with no significant group differences (Table 1). Mean survival times are depicted in Table 4. In NIV(-) patients, reduction of mean survival was only found in association with baseline EMBE > 2.0 mmol/l but not with any of the other respiratory measures. The same effect of EMBE > 2 mmol/l was not observed in NIV(+) patients. Cumulative survival for different respiratory strata was again visualized using Kaplan-Meier curves (Fig. 2). Increased hazard ratios were only found for EMBE > 2 mmol/l in both ventilated and non-ventilated patients, with the relative risk increase being markedly higher in the NIV(-) subgroup (NIV(-): 2.85, p = 0.005; NIV(+): 1.71, p = 0.042).
Linear regression with AHI, EMBE, maximum p_tcCO2, t_CO2≥50 and ALS-FTD integrated into the model showed that only EMBE at T1 independently predicted survival after T0 in both NIV(-) and NIV(+) patients (Supplemental Table S2). The presence of ALS-FTD significantly predicted survival in NIV(-) patients only (data not shown).
Discussion
The present study investigated whether respiratory parameters on baseline sleep studies impact survival in patients with ALS. Only a few previous studies had a similar purpose, either focusing on sleep apnea or on daytime blood gas analysis, but without taking into account whether enduring NIV was subsequently established [14,18]. In one study, overall prognosis appeared to be worse in ALS patients with concomitant OSA, but indication for NIV was based on the AHI alone, and adherence to NIV was not considered [14]. Recently, it was reported that ALS patients with normal pCO2 and bicarbonate values on daytime blood gas analysis show longer survival than patients with normal pCO2 and increased bicarbonate, or patients with elevation of both parameters [18].

[Fig. 1: Survival from baseline sleep studies (time point T1) in patients with ALS. a-c refer to patients who did not undergo non-invasive ventilation, d-f depict Kaplan-Meier plots for patients who started enduring NIV following T1. Survival analyses were performed using critical cut-off values for respiratory sleep outcomes at T1. AHI: apnea hypopnea index; EMBE: early morning base excess; p_tcCO2: maximum nocturnal transcutaneous CO2 tension; X axis: months; Y axis: cumulative survival. p values < 0.05 were considered significant.]

To both underline and complement this finding, the following conclusions can be drawn from the present study:

1. EMBE > 2 mmol/l reliably predicts nocturnal hypercapnia. This finding is in line with a previous study showing that daytime bicarbonate levels indicate respiratory muscle weakness and SDB even if daytime pCO2 is lower than the values that prescription of NIV is usually based on [18]. Of note, > 2 mmol/l as a cut-off value for daytime base excess is markedly lower than the thresholds previously proposed regarding patients with ALS or Duchenne's muscular dystrophy [10,32].

2. Nocturnal hypercapnia on baseline sleep studies predicts shorter survival. Evolving hypoventilation as reflected by maximum nocturnal p_tcCO2 ≥ 50 mmHg or EMBE > 2 mmol/l is likely to be indicative of more advanced and aggressive disease. Accordingly, EMBE was correlated with ΔFS at T1 (r = 0.313; p < 0.001) and possibly parallels disease progression. The negative effect of EMBE > 2 mmol/l on survival was markedly higher in the NIV(-) than the NIV(+) group. As base excess is not yet part of standard criteria for NIV indication, almost one third of patients with EMBE > 2 mmol/l were not started on ventilatory support. Since this subgroup showed the highest hazard ratio for shorter survival, we postulate that EMBE should be considered in NIV indication, at least for patients with ALS. Further studies are necessary to evaluate EMBE in other neuromuscular conditions with less rapid disease progression.

3. Reduction of FVC is closely associated with shorter survival in patients with ALS [33]. The present study shows that life span is also reduced in patients who surpass critical values for maximum p_tcCO2, AHI and EMBE on diagnostic sleep studies and do not subsequently start NIV. In contrast, the negative impact of SDB on remaining life span is abolished by regular usage of NIV. This finding underlines that survival analyses in ALS should take ventilatory support into account. Interestingly, patients with OSA even lived longer than patients without OSA once NIV was established.
The presence of OSA at baseline might indicate better diaphragm strength (still sufficient to collapse the upper airway during inspiration), which possibly explains why NIV(+) patients with OSA showed longer survival than subjects without OSA. In contrast, OSA was related to reduced survival in NIV(-) patients, which may reflect a negative impact that is independent of diaphragm function.

4. This study suggests that early initiation of NIV relates to longer survival, although it was not specifically designed to test this hypothesis, as was a previous study by Vitacca et al. [8]. Quantitative measures indicating SDB were inversely related with life span in non-ventilated patients, and this association was abrogated following NIV initiation. As respiratory failure is known to evolve in a continuum ranging from CO2 retention during REM sleep to daytime hypercapnia [9], it can be assumed that survival benefits from NIV depend on the point of treatment start.

5. As shown by previous studies, adequate follow-up of ventilated patients requires titration of ventilator settings to sustainably achieve normocapnia, normoxia, and normalization of the AHI [19][20][21]. In the present study, it was attempted to meet this goal in routine practice. Since linear regression analysis showed that EMBE > 2 mmol/l independently predicts survival after baseline sleep studies also in NIV(+) patients, it is desirable to further assess whether adjustment of NIV settings should also aim to correct EMBE to below 2 mmol/l.

6. Lastly, the present study suggests that prior attempts to specifically define sleep-related hypoventilation by the duration of nocturnal hypercapnia were arbitrary. Based on transcutaneous capnography, two national guidelines proposed thresholds of either ≥ 10 min (p_tcCO2 ≥ 55 mmHg) or ≥ 30 min (p_tcCO2 ≥ 50 mmHg) [24,26]. For both numbers no published evidence is available, and they were presumably meant to take into account that transcutaneous capnometry is somewhat inaccurate. We specifically investigated the 30-min cut-off in patients with ALS and did not find that surpassing it was specifically related to shorter survival, even in non-ventilated patients. We conclude that the 30-min threshold is not helpful for guiding treatment decisions and may even get in the way when it comes to early indication for NIV in patients with ALS.
Study limitations
It might be considered a limitation of this study that death and tracheostomy were not combined to form a common endpoint. Furthermore, patients who underwent tracheostomy were not even assigned to the NIV(+) group. This was avoided for several reasons: the number of tracheostomized patients was too small (16/158) for valid statistical analysis, and in most cases, it was impossible to retrospectively identify whether tracheostomy was performed due to respiratory or bulbar deterioration. In addition, tracheostomy may substantially prolong survival to a point that patients would probably not have reached with ongoing NIV [34]. Thus, inclusion of patients with invasive ventilation would possibly have confounded survival time in the NIV(+) group. Lastly, patients may gain meaningful prolongation of life span and also quality of life from invasive ventilation, rendering it inappropriate to generally equate tracheostomy with death. It has to be acknowledged that in a subset of patients, overall survival and the use of NIV may be negatively affected by cognitive and behavioral impairment and overt ALS-FTD, in particular. However, the present study was not designed to specifically investigate this aspect, and the number of patients fulfilling diagnostic criteria for ALS-FTD was rather small.

[Fig. 2 caption, continued: ... depict Kaplan-Meier plots for patients who started enduring NIV following T1. AHI: apnea hypopnea index; EMBE: early morning base excess; p_tcCO2: maximum nocturnal transcutaneous CO2 tension; X axis: months; Y axis: cumulative survival. p values < 0.05 were considered significant.]
Further limitations comprise the retrospective study design and the fact that some clinical information was only available from deceased patients' dependents. Moreover, the number of patients with sleep-related hypoventilation who did not start NIV was extremely small, hampering further statistical analysis regarding this subgroup. Notably, all patients with t_CO2≥50 > 30 min subsequently underwent NIV.
To conclude, the present study evaluated the impact of SDB and NIV on survival in patients with ALS. It underlines the importance of transcutaneous capnography for diagnostic and prognostic purposes, and strongly suggests that serum bicarbonate (or EMBE, respectively) predicts not only respiratory muscle weakness and SDB, as previously shown [18], but also survival. Most importantly, this study suggests that the specific impact of SDB on overall prognosis can be neutralized by implementation of enduring NIV. Once indicated, NIV leads to a meaningful prolongation of life span. In this sense, the findings presented here add to an increasing body of evidence showing that for patients with ALS, NIV is actual treatment rather than mere palliation. | 2021-04-21T14:13:54.261Z | 2021-04-20T00:00:00.000 | {
"year": 2021,
"sha1": "edef6e68f1625572f010e4e211f2ed8b4e2aa4b7",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00415-021-10563-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "edef6e68f1625572f010e4e211f2ed8b4e2aa4b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19953681 | pes2o/s2orc | v3-fos-license | Management of an intruded primary central incisor with a natural crown under general anesthesia
Tooth intrusion is the most common trauma during early infancy. Primary maxillary central incisors are the most affected teeth. There are a few treatment approaches, which depend upon the severity of the trauma, and the treatment must be managed professionally. In this case report, a 3-year-old girl with a history of trauma 40 days before referring to our pediatric clinic is presented. The deciduous maxillary right central incisor was intruded through the labial alveolar socket and completely covered with soft tissue. The intruded deciduous incisor tooth was surgically extracted and an impression was taken under general anesthesia. The removable partial prosthesis was completed by using the patient's own extracted tooth. Using a natural crown on a removable prosthesis gives psychological satisfaction to the patient and his/her family, and can be better tolerated since its shape, size, and color are exactly in harmony.
INTRODUCTION
Most of the injuries at early ages happen due to falls when kids learn to walk or run, as motor abilities are still lacking at that age. [1] One of the most common traumatic injuries happening in preschool children is intrusive luxation, which causes displacement of the tooth in the alveolus. [2] Intrusive traumas are mostly experienced in the deciduous dentition, with damage to anterior teeth; this kind of trauma is more common at age 1-3 years due to the high resilience and flexibility of the tissues surrounding the deciduous teeth. [3] Preschool children have wide medullar spaced bones; this situation leads to luxation and intrusion injuries instead of structural fractures. [4] Dentists should also take care to avoid disturbance of the permanent teeth in anterior zone traumas, since there is a probability of injuring them. [5][6][7] Deciduous teeth traumas may present with several visual signs: color change of the crown, pulp obliteration, pulp necrosis, resorption of the root, and inflammatory resorption.

CASE REPORT

The history of the patient included a trauma caused by falling while running 40 days before referring to our clinic. After the trauma, the family had immediately visited a local dentist in their neighborhood. The dentist told them to wait due to the possibility of a spontaneous eruption process. When the parents realized that there was no visible improvement and no positive feedback from their daughter, they decided to visit our clinic for examination and treatment. In the clinical examination, there were no general health problems or neurological symptoms. In her history, there were no extraoral injuries like nose or head trauma. During intraoral examination, we observed a swelling at the maxillary anterior region, exactly at the deciduous maxillary right incisor ridge zone and labial sulcus [Figure 1]. We found that the deciduous maxillary right central incisor was dislocated and intruded through the labial sulcus soft tissue by observing the periapical radiograph [Figure 2a]. Due to the age of the patient, we decided to remove it under general anesthesia (GA) with the permission of the family, as there could be lack of cooperation from the patient.
An impression (3M ESPE AG, Dental Products, Seefeld, Germany) was taken during GA to build up a removable partial prosthesis for the patient. Then, the intruded deciduous incisor tooth was surgically extracted [ Figure 2b], and soft tissue ruptures were reformed and sutures were placed with 3-0 black braided silk (Ethicon; Johnson and Johnson Ltd., Somerville, NJ, USA). Extra care was taken to avoid permanent teeth disturbance during surgery of the hard and soft tissues. Also, we gave post-extraction instructions to the patient's parents. Antibiotic was prescribed to the patient to avoid infection.
The crown of the extracted deciduous upper right central incisor was separated from the root. The pulp chamber of the tooth was cleaned and then stored in sterile saline solution until use. Before implementing the tooth to the removable appliance, flowable resin composite material (3M ESPE, St Paul, MN, USA) was placed into the crown in increments and cured. The removable partial prosthesis was completed by using the patient's own extracted tooth [Figure 3]. The patient was followed up at 3 months. Prophylaxis was done due to poor oral hygiene. Clinical and radiographic examinations [Figure 4] were undertaken and there was no problem found.
DISCUSSION
Growing kids, in other words, preschool children, are much more vulnerable to fall; as a result, they face injuries and traumas due to lack of their neuromuscular coordination. [12] Most of the injuries in deciduous dentition are an intrusive luxation caused by face impact. [1,4] Gondim et al. [13] studied and followed up 16 patients with intrusion of the primary teeth, and 56.25% of the patients who suffered from tooth intrusion were males and in 91%, upper central incisors were the most intruded teeth. According to the histories taken from the patients who had intrusive luxation, the injuries were found to be due to fall during ordinary walking or running (62.5%) or during riding a bicycle or tricycle (12.5%). Generally, the anterior teeth, and mostly, the maxillary central incisors are affected because of their anatomical location, where they are directly exposed to any kind of physical trauma. This paper reports a 3-year-old female child who suffered from trauma to the primary central incisor and was misdirected about the treatment.
For the success of the treatment, in the diagnosis and treatment planning, expert advice is very important. In situations of injuries and/or traumas including intrusive luxations, there is a recommended order of management. If the apex is displaced toward or through the labial bone plate, the tooth should be considered to be left in place for spontaneous repositioning and re-eruption. When the crown is completely intruded, the tooth rarely re-erupts and may become necrotic, indicating the need for extraction. [14][15][16] Also, if the apex is displaced through the tooth germ, tooth extraction is suggested to avoid possible damage to the permanent successor. Ankylosis should be suspected if visual signs of re-eruption are not present after 1-2 months, so extraction should be considered. [17] Also, a child with a thumb habit or swallowing disorder may apply force, preventing the intruded tooth from re-erupting. [15,16] In the case presented here, the tooth was intruded through the labial of the maxillary ridge and the root was displaced distally at an angle. With the impact force of the trauma, the intruded tooth was completely embedded into the bone and the labial alveolar bone was broken. For this reason, it should have been considered for extraction immediately after the patient was first examined.
In the case presented here, an impression was taken during GA to build up a removable partial prosthesis, and the extraction was carried out immediately after the impression was taken. Under normal conditions, for a better prosthesis and impression, the wound of the extraction area should heal and the soft tissue should be in its final shape. Due to the young age of the patient and lack of cooperation from her, we did not take the risk of a second GA only for the impression.
In tooth losses, both esthetics and function should be considered. Depending on the patient's age, the treatment approaches may vary. [11] That is why the condition of the preschool patient was considered as an esthetic and functional problem in her developing dentition. As a consequence, to avoid bone loss of the alveolar process and as a temporary solution until final permanent treatment could be done, a space management procedure with an esthetic concept was applied.
Anterior tooth loss results in difficulty in speech development, especially in a young child. It is also a setback for a child to have lost a tooth at an early age, and it may lead to the development of tongue habits. There are reports of many cases treated with fixed appliances in the literature. [10,16,18,19] While some of these cases were treated with esthetic fixed maintainers, others were treated with glass fiber reinforced cement fixing the patient's own tooth (the permanent central incisor) as a pontic. Despite the advantages of these fixed restorations in comparison to removable partial prostheses, fixed appliances have limiting effects on maxillary growth in preschool growing kids. [20] The other reason for using a removable esthetic partial prosthesis in this case was the crown lengths of the deciduous molars, which were occluso-cervically short; therefore, we were not able to prefabricate an anterior esthetic fixed space maintainer with molar bands.
Acrylic teeth can be used for building a removable esthetic partial prosthesis after traumatic tooth loss in the anterior region, but they do not provide the contour or size that a natural tooth does. [16] Tannure et al. reported using the patient's own tooth to make the patient feel comfortable about size, shape, and color. [21] For the reason mentioned earlier, we had to build a removable partial prosthesis with the patient's own tooth. To conclude, we can say that using the patient's own tooth instead of prefabricated teeth on a fixed appliance renders psychological satisfaction for the patient and his/her family, and can be better tolerated because of its shape, size, and color. These kinds of cases should be referred to an expert. | 2016-05-04T20:20:58.661Z | 2014-04-01T00:00:00.000 | {
"year": 2014,
"sha1": "6f01180f37e77f6c6c51aa8dc3c6f8d4ab7b5736",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/1305-7456.130632",
"oa_status": "HYBRID",
"pdf_src": "WoltersKluwer",
"pdf_hash": "6f01180f37e77f6c6c51aa8dc3c6f8d4ab7b5736",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204960823 | pes2o/s2orc | v3-fos-license | Deriving Born's rule from an Inference to the Best Explanation
In previous articles we presented a simple set of axioms named Contexts, Systems and Modalities (CSM), where the structure of quantum mechanics appears as a result of the interplay between the quantized number of modalities accessible to a quantum system, and the continuum of contexts that are required to define these modalities. In the present article we discuss further how to obtain (or rather infer) Born's rule within this framework. Our approach is compared with other former and recent derivations, and its strong links with Gleason's theorem are particularly emphasized.
The word "context" includes the actual settings of the device, e.g. measurement of Sz rather than Sx: the context must be factual, not contrafactual. On the other hand all devices able to measure Sz are equivalent as a context, in a (Bohrian) sense that they all define the same conditions for predicting the future behaviour of the system. 2 We omit the free evolution of the system; if it is present, the result of a new measurement can still be predicted with certainty, but in another context that can be deduced from the free evolution. Mutatis mutandis, this is equivalent to full repeatability.
From the above definition, justified by empirical evidence, one measurement provides only one modality. Therefore in any given context the various possible modalities are mutually exclusive, meaning that if one result is true, or verified, all other ones are not true, or not verified. We then have the Basic postulate (contextual quantization): The number N of mutually exclusive modalities for a given quantum system is the same in any relevant context.
In the above example one has N = 2^K.
Definition 2 (incompatible modalities): Modalities observed in different contexts are generally not mutually exclusive; they are said to be incompatible.
Incompatible means that if a result is true, or verified, one cannot tell whether the other one is true or not.
Definition 3 (extravalence): When S interacts in succession with different contexts, certainty and repeatability can be transferred between their modalities. This is called extracontextuality, and defines an equivalence class between modalities, called extravalence.
The equivalence relation is obvious, for more details and examples of extravalence classes see [11].
The intuitive idea behind these definitions and postulate is that making more measurements in quantum mechanics (by changing the context) cannot provide "more details" about the system, because this would increase the number of mutually exclusive modalities, contradicting the basic postulate. One might conclude that changing context randomizes all results, but this is not true: some modalities may be related with certainty between different contexts, this is why extravalence is an essential feature of the construction.
III. THEOREMS.
Theorem 1: Given an initial modality and context, obtaining another modality in another context must (in general) follow a probabilistic law.
First, let us emphasize that modalities in different contexts are always considered different, even if they are extravalent, so some care is required when counting modalities. Let us start from an initial modality for a system in context C_u, and perform a measurement in another context C_v. Several situations can be considered: (i) From the basic postulate there are N mutually exclusive modalities in each context, and one of them is realized when doing a measurement. Therefore the situation where all modalities in context C_v would have a probability p = 0 to occur is excluded by construction.
(ii) If one modality in the new context C_v is obtained with certainty, this means that C_v contains a modality extravalent with the initial one; then p = 1 for this modality, and p = 0 for all other (mutually exclusive) ones. If the situation is the same for all modalities in C_u, then they are all extravalent with a modality in C_v, and the modalities in the new context can be seen as a rearrangement (permutation) of the initial ones. So let's try again with another context C_w; if the situation is the same again in all other contexts, it means that there are only N classes of extravalent modalities, going through all contexts. This means that the context is unique up to a rearrangement (permutation) of the modalities; therefore there are no incompatible modalities, and the situation is essentially classical.
(iii) Since case (i) is excluded, and case (ii) is classical (there are no incompatible modalities), the general case (where incompatible modalities do exist) is that obtaining a modality in the new context is probabilistic (0 < p < 1), hence the theorem is demonstrated.
The core of this proof is that measuring in a new context cannot be a "refinement" of the previous measurement, because this would extend the number N of mutually exclusive modalities. To see that more explicitly, let us consider an initial modality u_0 in C_u, connected to at least two modalities v_1 or v_2, according to (iii) above. Now let us measure again in C_u: if u_0 is found again with certainty, then there would be two mutually exclusive situations, u_0 → v_1 → u_0 and u_0 → v_2 → u_0. This would give at least (N + 1) mutually exclusive modalities, in contradiction with the quantization postulate.
Therefore the randomness is not only from C_u to C_v, but also back from C_v to C_u [12]. This makes clear that probabilities do follow from the fixed value of N, i.e. from the maximum number of mutually exclusive modalities for a given system, imposed by the basic postulate.
Theorem 2: Given an initial modality and context, the probability to get another modality in another context keeps the same value as long as the initial and final modalities belong to the same respective extravalence class, independently of the embedding contexts.
Let us start again from an initial modality u_i and context C_u, and follow the same steps as in Theorem 1 when performing a measurement in another context C_v.
(i) The situation where no modality can be obtained in the new context (p = 0) is excluded as said above.
(ii) The situation where obtaining one modality in the new context is certain (p = 1) means that the new context contains a modality extravalent with the initial one. Then p = 1 corresponds to modalities in the same extravalence class; this is the definition of extravalence.
(iii) In the general case one gets another modality v_j with a probability 0 < p < 1. Given this new modality v_j, changing again the context to another one C_w containing a modality w_k extravalent to v_j will yield w_k with certainty. In that case the probability for going from u_i to v_j will be the same as the one for going from u_i to w_k (Fig. 1). Moreover, if one starts from a modality x_l extravalent to u_i, and one goes to u_i then to v_j, the probability for going from x_l to v_j will be the same as the one for going from u_i to v_j.
Therefore the probability to get another modality in another context only depends on the extravalence classes of the initial and final modalities, and the theorem is demonstrated. This theorem shows that the probability to get a new modality starting from an initial one is linked neither to the context, nor to the modalities themselves, but to their extravalence class. In some approaches this property is called "non-contextual assignment of probabilities", and this is a very fundamental feature of quantum mechanics, which appears here as a theorem. It also suggests the major next step, i.e. that the probability law should be obtained by attributing a mathematical object to an extravalence class, in such a way that all the above requirements are fulfilled. As a general feature of such an inductive or inference reasoning [4], it cannot be shown that the proposed solution is unique (i.e., necessary), but it can be shown that it fulfills all the requirements (i.e., that it is sufficient).
Theorem 3: Let us associate an N × N rank-1 projector P_i to each extravalence class of modalities, and a set of N mutually orthogonal projectors to each context. Then the probability law f(P_i) built from these projectors obeys Born's rule, and different sets of mutually orthogonal projectors are related by (complex) unitary matrices.
Since the N × N projectors are associated to extravalence classes of modalities, the probabilities are a function f(P_i) of these projectors, in agreement with Theorem 2. Since a context (set of mutually exclusive modalities) is associated to a set of N mutually orthogonal projectors, the probabilities for this set of projectors sum to 1. This condition only requires adding probabilities for commuting (orthogonal) projectors, avoiding known objections to other derivations [3]. Then all the hypotheses for Gleason's theorem [13] are fulfilled (see § IV), and thus Born's rule applies [11]. By construction orthogonal sets of projectors are connected by complex unitary matrices. Complex numbers are required to connect continuously the identity matrix to all permutations of modalities: this cannot be done by (real) orthogonal matrices, which split into two subsets with determinants ±1; see [9,11].
Here we have considered initial and final modalities, i.e. rank 1 projectors [11], but more generally Gleason's theorem provides the probability law for density operators (convex sums of projectors), interpreted as statistical mixtures. This clarifies the link between Born's rule and the mathematical structure of density operators [18].
IV. AN OVERVIEW OF GLEASON'S THEOREM.
Gleason's theorem has the reputation of being impenetrable to physicists, who usually keep away from this frightening monument (see also Discussion below). Therefore we want to present here a "physicist's demonstration", where most mathematical difficulties are deliberately omitted, in order to reveal the big picture. All the (nice) mathematical details can be found in "An elementary proof of Gleason's theorem" [14], which is more recent and reader-friendly than the original work by Gleason [13].
Let us consider a separable Hilbert space H over R or C, and if dim(H) = N we denote it C^N (over C) or R^N (over R). Then we define a real-valued non-negative function f acting on the unit sphere of H, such that for any orthonormal basis {x_i} one has Σ_i f(x_i) = 1. The function f(x_i) can be seen as the probability to get the result x_i, in a "state" defined by f. Note that if f(x_j) = 1 for the vector x_j, then f(x_{k≠j}) = 0 for all other vectors in the orthonormal basis {x_i}: the results x_i are mutually exclusive, as we required. The non-obvious hypothesis is why f(x_i) depends only on x_i and f, and not on the other vectors {x_{k≠j}} in the orthonormal basis: this is where the discussion above plays a crucial role, by associating x_i to an extravalence class of modalities.
Here our goal is to sketch a demonstration of Gleason's theorem: if N ≥ 3, there exists a density operator^6 ρ defined on H such that f(x_i) = ⟨x_i|ρ|x_i⟩ for all unit vectors x_i. Then f is said to be "regular". [Footnote 6: This means a positive semidefinite Hermitian operator with unit trace. It describes a pure state if it is a rank one projector.]
For simplicity we will assume that the extreme value f(x_i) = 1 is reached, and then present the (easier) result that in that case ρ is a projector |x⟩⟨x|, so f(x_i) = ⟨x_i|x⟩⟨x|x_i⟩ = |⟨x_i|x⟩|^2: this is the usual Born's rule for pure states (or for extravalent modalities).
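The hypotheses and the conclusion can be illustrated numerically: for f(x_i) = ⟨x_i|ρ|x_i⟩, any orthonormal basis (any "context") gives non-negative values summing to 1, and f(x_i) depends only on x_i and ρ, not on the other basis vectors. A minimal numpy sketch (illustration only, not part of the proof):

```python
# Frame-function check: f(x_i) = <x_i|rho|x_i> over a random orthonormal basis.
import numpy as np
rng = np.random.default_rng(0)
N = 4

A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T                  # positive semidefinite...
rho /= np.trace(rho).real             # ...with unit trace: a density operator

# random orthonormal basis ("context") from a QR decomposition
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
f = np.array([(Q[:, i].conj() @ rho @ Q[:, i]).real for i in range(N)])
print(f, f.sum())   # entries in [0, 1]; the sum is tr(rho) = 1
```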
Step 1: prove the following "reduction lemmas".
L1 - In R^N, f is regular iff it is the restriction to the unit sphere of a quadratic form (this is clear by writing explicitly ρ as a self-adjoint operator).
L2 - If f is regular in R^3, then it is also regular in any 2-dimensional subspace R^2 of R^3 (this is clear by restricting the quadratic form from R^3 to R^2).
L3 - If f is regular in any subspace R^2 of C^2, then it is regular in C^2 (not obvious, see [14]).
L4 - If f is regular in any subspace C^2 of C^N, then it is regular in C^N (not obvious, see [14]).
Crucial lemma: If f is regular in R^3, then it is regular in C^N (use L2, then L3, then L4).
Therefore it is enough to show that f is regular in R^3. This explains why the theorem requires N ≥ 3: in fact, f is regular in any C^2 considered as a subspace of C^N, but not in C^2 considered alone. Said otherwise, it is well known, e.g. from Clauser [15], that one can build a "classical model of a (unique) qubit". However this classical model fails if this qubit is one among several qubits, which is fine as far as QM is concerned.
Step 2: prove that f is regular in R^3.
Now one looks for a probability function f(u), where u is a normalized vector in R^3, so that 0 ≤ f(u) ≤ 1 and f(u) + f(v) + f(w) = 1 for any orthonormal basis {u, v, w} of R^3. One does not assume that f is continuous, but here we assume that the extreme values 0 and 1 are reached (this is only for simplification, and the general case is treated in the full theorem [13,14]).
Given a normalized vector p and an orthonormal basis {u, v, w}, the quantities cos^2(u, p), cos^2(v, p), cos^2(w, p) are the squares of the components of p in the basis, so they sum to 1. Therefore cos^2(u, p) for a fixed p is an acceptable function f(u), and actually it is the good one. But why is it the only such function? We will split the answer in two parts.

[Fig. 2: To be simple (see text) we assume that the extreme values 0 and 1 are reached, we define a normalized vector p such that f(p) = 1, and put it at the pole of the 3D sphere. As a consequence, f(q) = 0 for any q on the equator. Given an orthonormal basis {u, v, w}, the quantities cos^2(u, p), cos^2(v, p), cos^2(w, p) are the squares of the components of p in this basis, so they sum to 1. Therefore h(u) = cos^2(u, p) is an acceptable function f(u), and Gleason's theorem shows that it is the only one.]
Why is there no φ? In R^3 a normalized vector u is defined by two polar angles θ and φ, and in cos^2(u, p) there is only one angle; why? To be specific let us choose p as the vector such that f(p) = 1 (since this value is reached), and position it at the pole of the unit sphere in 3 dimensions (see Fig. 2). As a consequence, f(q) = 0 for all vectors on the equator, and for any vector one can define h(u) = cos^2(u, p), which depends on the "latitude" of u (the polar angle θ), but not on its "longitude" (the azimuthal angle φ).
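A quick numerical check (illustration only) that h(u) = cos^2(u, p) = (u · p)^2 is an acceptable frame function in R^3: it sums to 1 over any orthonormal basis, and depends on the basis vectors only through their angle to the fixed pole p.

```python
# Check the frame condition for h(u) = (u . p)^2 in R^3.
import numpy as np
rng = np.random.default_rng(1)

p = np.array([0.0, 0.0, 1.0])                  # the pole, where f(p) = 1
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal basis {u, v, w}

h = (Q.T @ p) ** 2     # cos^2 of the angle between each basis vector and p
print(h, h.sum())      # three values in [0, 1], summing to 1 up to rounding
```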
One can then use two lemmas to show that for any two vectors u, v in the northern hemisphere such that h(u) > h(v), one has f(u) ≥ f(v); this is done in Annex 1. Then one defines the smallest and the largest of the possible values of f(u) for a given latitude.

Why only cos^2(u, p)?
Given that cos^2(u, p) is an acceptable f(u), one may think that any other function f(u) = g(cos^2(u, p)) should be acceptable also. To show this is not the case, one uses a
Magical lemma: Consider a function g over [0, 1], verifying the hypotheses (i) g(0) = 0, (ii) a < b ⇒ g(a) < g(b), (iii) a + b + c = 1 ⇒ g(a) + g(b) + g(c) = 1. Then g(a) = a for any a within [0, 1].
The proof (subtle but not difficult) is given in Annex 2. It is easily seen that g(cos^2(u, p)) fulfills the hypotheses of the lemma for any orthonormal basis {u, v, w}, with a = cos^2(u, p), etc. So from the magical lemma one gets g(cos^2(u, p)) = cos^2(u, p), and the additional function g is useless.
Step 3: conclude that f is regular in C^N.
Therefore f is regular in R^3, and also in C^N from the reduction lemmas. The demonstration can be reconsidered in the more general case where the value f(p) = 1 is not reached, and one finds^7 that ρ is no longer a projector, but a density matrix associated with a statistical mixture.
V. DISCUSSION.
An essential feature of the contextual quantization postulate, i.e. the fixed value N of the maximum number of mutually exclusive modalities, turns out to be the dimension of the Hilbert space. In the spirit of [4] and as shown in [9,11], this provides one more heuristic reason for using projectors. Then the projective structure of the probability law warrants that, despite the availability of an infinite number of incompatible modalities, N cannot be "bypassed" by getting more details on any of them.

[Footnote 7: In the general case in R^3, the maximum (resp. minimum) value of f is 0 ≤ M ≤ 1 (resp. 0 ≤ m ≤ 1), and one shows [14] that there exists a basis {p, q, r} such that f(u) = M cos^2(u, p) + (1 − M − m) cos^2(u, q) + m cos^2(u, r).]
We note that some recent derivations of Born's law [16][17][18] dismiss Gleason's theorem, on the basis that its hypotheses are either too strong (extracontextuality) or unjustified (projective probabilities). More precisely, refs. [16][17][18] argue for the non-relevance of Gleason's theorem to QM, in opposition to the CSM view. Quoting [18]: "As mentioned in the introduction, Gleason's theorem and many other derivations of the Born rule assume the structure of quantum measurements. That is, the correspondence between measurements and orthonormal bases {ϕ i }, or more generally, positive-operator valued measures. But in addition to this, they assume that the probability of an outcome ϕ i does not depend on the measurement (basis) it belongs to." In [18] this additional assumption (which is physically true) is called "non-contextuality", that is clearly misleading, clashing with the terminology used in the Kochen-Specker theorem. As written above, a better name is "non-contextual assignment of probabilities", and the best name is just extracontextuality, that has deep physical roots. This is made clear by associating projectors to extravalence classes, clearly distinguishing the physical result (the modality) and the mathematical construction (the projector). To answer the remark about "assuming the structure of quantum measurements", we do posit the projective structure of quantum probabilities [11], not as a deduction but as a duly justified inference [4]. In the CSM approach the mathematical formalism works because physics tells the rules, and not the opposite. Therefore in our approach Gleason's hypotheses have a deep physical content, linking contextual quantization and extracontextuality of modalities. Since these features are required from empirical evidence, the QM formalism provides a good answer to a well-posed question.
VI. ALGEBRAIC SCHEME FOR QUANTUM MEASUREMENTS.
A consequence of our approach is that usual textbook quantum mechanics, which is limited to the type-I operator algebra introduced initially by Murray and Von Neumann [19], is not universal because it does not include the context. This issue was already discussed by Von Neumann [20], and again later in the framework of algebraic quantum theory [21]. Nevertheless, as discussed in these articles, it is possible to get a full picture by including the context in the formalism, taking into account that its number of degrees of freedom is unbounded, which makes its algebra of operators non-type-I [20,21]. Fig. 3 displays such a generic scheme, including the system (plus ancillas) and the context, separated by a (movable) cut. (Figure 3 caption: Generic scheme including the system, a possible ancilla, and the context. The number of degrees of freedom in the context is unbounded, which makes its algebra non-type-I. The cut separates a type-I system algebra, where usual QM applies, from the type-II or III context algebra, where there is no longer unitary equivalence of representations.) The full scheme is then universal, but the mathematical description including type-II or III algebras does not allow arbitrary quantum superpositions at the context level, in agreement with empirical evidence. Then a quantum measurement proceeds as follows: • Before the measurement the modality is associated with the following (density) operator in context C1: |ψ_i⟩⟨ψ_i| ⊗ ρ_i^(C1). Specifying the modality requires giving both |ψ_i⟩⟨ψ_i| and ρ_i^(C1), because the projector |ψ_i⟩⟨ψ_i| specifies only an extravalence class of modalities.
• After the measurement carried out in context C2, but before reading out the result, the sectorized state (statistical mixture) is Σ_k |⟨φ_k|ψ_i⟩|² |φ_k⟩⟨φ_k| ⊗ ρ_k^(C2). This form is completely generic from a mathematical point of view because the context is unbounded, and it can be justified in several possible ways: sectorization in the non-type-I algebra, loss of off-diagonal elements of the reduced density matrix, flow of information to the environment, loss of interference, loss of the ability to create entanglement in a projective measurement... They all lead to the same results, as discussed e.g. in [22]; a numerical illustration is sketched after this list.
• After reading out the measurement result k in context C2, the modality can be updated, and it is associated with the operator |φ_k⟩⟨φ_k| ⊗ ρ_k^(C2). This defines a new pre-measurement modality, and |φ_k⟩⟨φ_k| may evolve unitarily until the next measurement is performed.
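The sectorization step can be illustrated numerically. The Python sketch below assumes a two-dimensional system (N = 2), keeps only the system part of the state (the context factors ρ^(C) are omitted), and uses rotated bases as the two contexts; all names and values are illustrative, not taken from the paper.

```python
import numpy as np

# Initial modality in context C1, taken here as |psi> = |0> for a qubit
psi = np.array([1.0, 0.0])

# New context C2: measurement basis rotated by an angle theta
theta = np.pi / 5
phi = [np.array([np.cos(theta), np.sin(theta)]),
       np.array([-np.sin(theta), np.cos(theta)])]

# Born weights p_k = |<phi_k|psi>|^2
p = [abs(np.vdot(b, psi)) ** 2 for b in phi]

# Sectorized (pre-readout) state: the statistical mixture
# sum_k p_k |phi_k><phi_k|, i.e. the off-diagonal terms in the
# C2 basis have been removed
rho_sectorized = sum(pk * np.outer(b, b) for pk, b in zip(p, phi))

print("Born weights:", np.round(p, 4))            # they sum to 1
print("Sectorized state:\n", np.round(rho_sectorized, 4))

# After reading out result k = 0, the new modality is |phi_0><phi_0|
rho_post = np.outer(phi[0], phi[0])
print("Post-readout projector:\n", np.round(rho_post, 4))
```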
Summarizing, the non-unitary step in the measurement is due to the fact that the whole unbounded context is involved in a transient way; this is not an additional ingredient, but a required part of the full (non-type-I) formalism. Looking at |ψ⟩ as the "state of the system", as done usually, is misleading, because the vector (or projector) is associated with an extravalence class of modalities. The basic CSM tenet, that the modality belongs to both the system and the context, appears explicitly here under a mathematical form.
As a conclusion, usual type-I QM provides a description of (idealized) isolated quantum systems. A state vector or projector is "incomplete" because it is not associated with an actual modality, but with an extravalence class of modalities, belonging to different contexts. From a physical point of view, the modality belongs jointly to a quantum system, and to a specified context. From a mathematical point of view, the behavior of modalities can be studied using type-I QM, where Born's rule applies as a consequence of Gleason's theorem. On the other hand, the description of (unbounded) contexts requires a non-type-I formalism. Overall, these combined tools provide a consistent picture of quantum measurements within a unified quantum framework.
Annex 1: Proof of the geometrical lemmas.
Here we show that for two vectors u, v in the northern hemisphere with h(u) > h(v), one has f(u) ≥ f(v).
For this purpose we define D_u, the great circle going through u and cutting the equator at two points corresponding to vectors orthogonal to u. By convention D_u is called the "descent through u", and u is obviously the "northern vector" in D_u. Then one proves the two lemmas:
Basic lemma: One has f(u) ≥ f(u') for any u' in D_u.
Proof: Consider a vector u, and another vector u' within D_u. Let v (resp. v') be a vector in D_u orthogonal to u (resp. u'). Adding a vector w perpendicular to the D_u plane, {u, v, w} and {u', v', w} are two orthonormal bases. By definition of f one has f(u) + f(v) + f(w) = 1 = f(u') + f(v') + f(w), hence f(u) + f(v) = f(u') + f(v'). Since v is orthogonal to u within D_u, it lies on the equator, where f vanishes; therefore f(u) = f(u') + f(v') ≥ f(u').
Piron's lemma: Consider u, v such that h(u) > h(v). Then there is a series of N vectors w_n such that w_0 = u, w_N = v, and each w_n is within D_(w_n−1), i.e. in the descent through the previous vector of the series.
Proof: It relies on a smart geometrical construction due to Piron [23]. It is convenient to project the northern hemisphere on a plane tangent at the pole p, using a projection from the center of the sphere. The different latitudes are then concentric circles centered on p, and the equator is projected at infinity. The descent through u is a straight line, tangent at u to the circle corresponding to the latitude of u. Then there are two cases:
- If u and v have the same longitude, one takes u = w_0, v = w_2, and there exists w_1 with a latitude between those of w_0 and w_2, located on D_u = D_(w_0), and such that w_2 is on D_(w_1) (this is clear by looking at the previous projection: u and v are on the same line coming from p).
- If u and v have different longitudes, one can take u = w_0, v = w_N, and build the other vectors w_n by progressively rotating between the two circles associated with the two latitudes. When these latitudes get closer, N becomes larger, and it tends to infinity for two different longitudes with almost the same latitude (again, this is clear from a drawing). This proves the lemma.
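As a numerical complement to the two lemmas, the Python sketch below checks the basic lemma for the frame function f(u) = (u·p)², which satisfies f(p) = 1 and vanishes on the equator; the construction of the descent D_u follows the text, and the random sampling is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.0, 0.0, 1.0])                 # north pole
f = lambda u: np.dot(u, p) ** 2               # regular frame function, f(p) = 1

# Frame-function property: f sums to 1 over any orthonormal basis
u0 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
v0 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
w0 = np.cross(u0, v0)
print("f(u)+f(v)+f(w) =", f(u0) + f(v0) + f(w0))   # -> 1.0

# Basic lemma: f(u) >= f(u') for any u' in the descent D_u
for _ in range(1000):
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    u[2] = abs(u[2])                          # restrict to the northern hemisphere
    q = np.cross(p, u)                        # equator point orthogonal to u
    if np.linalg.norm(q) < 1e-12:             # u = p: descent is degenerate
        continue
    q /= np.linalg.norm(q)
    t = rng.uniform(0.0, 2.0 * np.pi)
    u_prime = np.cos(t) * u + np.sin(t) * q   # arbitrary point of D_u
    assert f(u) >= f(u_prime) - 1e-12
print("basic lemma verified on 1000 random descents")
```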
Therefore the basic lemma relates u and u' within the descent through u, and Piron's lemma relates u and v through a succession of descents along the vectors of the series w_n. As a conclusion, one deduces from the two lemmas that for u, v in the northern hemisphere with h(u) > h(v), one has f(u) ≥ f(v). | 2019-10-30T09:43:01.000Z | 2019-10-30T00:00:00.000 | {
"year": 2019,
"sha1": "23c1b6000436168cdd83a48535cf55bb8c5ce494",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1910.13738",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fb3a1cba6df1e221ccb79cc34c9e0fc25b744856",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics",
"Mathematics"
]
} |
251101622 | pes2o/s2orc | v3-fos-license | Development of a Multiplex Bead Assay to Detect Serological Responses to Brucella Species in Domestic Pigs and Wild Boar with the Potential to Overcome Cross-Reactivity with Yersinia enterocolitica O:9
The aim of this study was to develop a multiplex bead assay using a Brucella rLPS antigen, a Brucella suis smooth antigen, and a Yersinia enterocolitica O:9 antigen that not only discriminates Brucella-infected from Brucella-uninfected pigs and wild boar, but also overcomes the cross-reactivity with Y. enterocolitica O:9. Sera from 126 domestic pigs were tested: 29 pigs were Brucella infected, 80 were non-infected and 17 were confirmed to be false positive serological reactors (FPSR). Sera from 49 wild boar were tested: 18 were positive and 31 were negative. Using the rLPS antigen, 26/29 Brucella-infected domestic pigs and 15/18 seropositive wild boar were positive, while 75/80 non-Brucella-infected domestic pigs, all FPSR, and all seronegative wild boar were negative. Using the smooth B. suis 1330 antigen, all Brucella-infected domestic pigs, 9/17 FPSR and all seropositive wild boar were positive, while all non-infected pigs and 30/31 seronegative wild boar were negative. The ratio of the readouts from the smooth B. suis antigen and the Y. enterocolitica O:9 antigen enabled discrimination of all Brucella-infected individuals from the FPSR domestic pigs. These results demonstrate the potential of this assay for use in the surveillance of brucellosis, overcoming the cross-reactivity with Y. enterocolitica.
Introduction
Porcine brucellosis is a major concern and is widespread throughout the world, especially in the Mediterranean, Balkans, South America and South East Asia. The primary etiologic agent is the bacterium Brucella suis, and the disease is a cause of severe economic losses in livestock production and may threaten public health [1]. The Eurasian wild boar (Sus scrofa) is widely distributed in the Palearctic and is a wildlife reservoir host for B. suis in many regions. The apparent prevalence of B. suis (based on serology) has been estimated to range from 25-46% in areas of high wild boar density in Spain where epidemiological links with Brucella infection in domestic pigs are suspected [2,3].
Serological methods used for diagnosis of porcine brucellosis include indirect, blocking and competitive enzyme-linked immunosorbent assays (ELISA) based on smooth lipopolysaccharide antigens (sLPS), the Rose Bengal Test (RBT), the complement fixation test (CFT) and the fluorescence polarization assay [1,4]. The B. abortus antigens seem to be suitable for testing swine sera, at least in the RBT and CFT, as they can identify antibodies against the three biovars (1, 2, 3) of B. suis which infect pigs [1]. A drawback of these serological tests is their lack of reliability for individual diagnosis because, although they may have acceptable sensitivity, they frequently lack specificity [1]. A major reason for this is infection by Yersinia enterocolitica O:9, which has antigenic determinants (sLPS O-chains) closely related to those of Brucella spp. [5][6][7][8][9]. The structure and the biological properties of the rough Brucella LPS make it a suitable antigen for the serodiagnosis of porcine brucellosis [10]. Specifically, it lacks the O-chain, and only the lipid A and the core antigens remain. This rLPS structure differs between Brucella and Y. enterocolitica O:9 [11][12][13]. The omission of the cross-reactive O-chain means that rLPS has the potential to be a more specific antigen when applied to samples that are false-positive in assays employing the O-chain [14]. In wild boar, the sensitivity of a multi-species Brucella sLPS iELISA was estimated to be 100% and its specificity was adequate. However, its cross-reactivity with Y. enterocolitica O:9 was not assessed [2].
The above reasons prompted us to use an rLPS-rich antigen, extracted from a rough strain of Brucella, in order to try to enhance the specificity (Sp) of serology. Additionally, a whole-cell smooth B. suis biovar 1 (strain 1330) antigen was used to maximize sensitivity (Se), because it is a homologous antigen for infected domestic pigs and wild boar. Being a whole-cell preparation, it contains the greatest possible number of immunogenic epitopes and has a high O-chain content, which includes a low frequency of the OPS M epitope that is not possessed by Y. enterocolitica O:9 [1,15]. The combination of these two antigens in the same multiplex bead assay could enhance the diagnostic accuracy. Finally, a whole-cell Y. enterocolitica O:9 antigen was used to detect cross-reacting antibodies and/or antibodies produced after natural exposure to Y. enterocolitica that may be present in Brucella-seropositive domestic pigs and wild boar.
The purpose of the present study was to develop a multiplex bead assay using Brucella rLPS, a whole-cell smooth B. suis 1330 antigen, and a whole-cell Y. enterocolitica O:9 antigen, which not only discriminates Brucella-seropositive from Brucella-seronegative domestic pigs and wild boar but also overcomes the cross-reactivity between B. suis and Y. enterocolitica O:9.
Domestic Pig Sera
One-hundred and twenty-six domestic pig sera were used for test development. Group A contained 29 sera from animals that were culture-positive for B. suis biovar 2 (25/29, obtained in Spain), or biovar 1 (4/29, obtained in South America). Group B contained 80 randomly selected sera collected from herds within Great Britain, which is officially brucellosis-free. Group C contained 17 sera from herds within Great Britain that were FPSR (false-positive serological reactors) during routine testing by either RBT (n = 10), cELISA (n = 10), SAT (n = 8) or their combination.
Wild Boar Sera
Sera from 49 Eurasian wild boar from Spain were also tested: group A-Brucella seropositive, (n = 18)-and group B-Brucella seronegative (n = 31). The discrimination between the seropositive and seronegative wild boars was determined by an indirect ELISA using the sLPS antigen [2].
Antigen Preparation and Coupling
The antigens used for assay development were: (a) an rLPS-rich phenol/chloroform/petroleum ether extract from B. abortus RB51 (hereafter referred to as rLPS) [14]; (b) whole cells of the smooth B. suis strain 1330 grown on serum dextrose agar at 37 °C and heat-killed [16]; (c) whole cells of the smooth Y. enterocolitica O:9 (strain 234/02) grown on nutrient agar at 27 °C and heat-killed. Ten micrograms of each antigen were bound to 2.5 × 10^6 Pro Magnetic carboxylated beads according to the manufacturer's instructions (Bio-Plex Pro Magnetic COOH Beads Amine Coupling Kit, Bio-Rad, Hercules, CA, USA).
Multiplex Bead Assay Protocol
The Bio-Rad Bio-Plex multi-analyte bead suspension array system, which is based on Luminex's xMAP technology, was used for the assay. The bead reporter fluorescence, expressed as MFI (median fluorescence intensity), was determined with a Bio-Plex 200 (Bio-Rad) instrument that was initially calibrated and set to count 100 beads from each of the three bead sets, with the Double Discriminator (DD) gate values set at 7500-25,000. A one-step protocol and normalization method for MFI values were used, as described previously [17]. Fifty microliters of master mix, containing approximately 3500 coupled beads of each type (the rLPS antigen, the smooth B. suis 1330 antigen and the smooth Y. enterocolitica O:9 antigen), biotinylated protein AG (secondary antibody) at a 1:500 dilution and streptavidin-phycoerythrin (2 µg/mL) in dilution buffer, were added to each well of a flat-bottom 96-well plate. Diluted serum (50 µL) was mixed with the master mix and the plate was incubated for 2 h at room temperature, with shaking at 600 rpm. The beads were washed twice with 100 µL wash buffer (0.1 M PBS and 0.05% Tween 20) using the Bio-Plex Pro Wash Station (Bio-Rad) and finally resuspended in 100 µL of dilution buffer. Serum from a known seropositive Brucella-infected domestic pig was included as a positive control on each plate for normalization of test sera MFI values.
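The normalization method itself is given in reference [17]; as a minimal sketch, and assuming the common convention of dividing each test serum's MFI by the positive-control MFI for the same antigen on the same plate, the computation would look as follows (all numbers invented):

```python
# Per-plate MFI normalization sketch; antigen names mirror the assay,
# and the ratio-to-positive-control convention is an assumption.
raw_mfi = {                                   # MFI of one test serum
    "rLPS": 5200.0,
    "B_suis_1330": 14800.0,
    "Y_enterocolitica_O9": 900.0,
}
positive_control_mfi = {                      # plate positive control
    "rLPS": 9100.0,
    "B_suis_1330": 16500.0,
    "Y_enterocolitica_O9": 2400.0,
}

normalized = {antigen: raw_mfi[antigen] / positive_control_mfi[antigen]
              for antigen in raw_mfi}
print(normalized)
```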
Receiver Operating Characteristic (ROC) Curve Analysis
ROC analysis was used for (i) evaluating overall test performance and (ii) determining cut-points that optimize the diagnostic accuracy of the test. For (i), the trapezoidal rule [18] was used to calculate the area under the ROC curve (AUC) and the corresponding confidence intervals [19]. For (ii), two different criteria were used for cut-point selection: (a) the simultaneous optimization of Se and Sp and the overall minimization of false results, which corresponds to the maximization of Youden's index J = Se + Sp − 1 [20]; and (b) the minimization of the quantity (1 − Se)² + (1 − Sp)², which corresponds to the cut-point closest to the upper left corner of the ROC plot [21].
ROC analysis was performed to compare the following combinations: (I) group A and B for the rLPS antigen in domestic pigs; (II) group A and C for the rLPS antigen in domestic pigs; (III) group A and B for the B. suis 1330 smooth antigen in domestic pigs; (IV) group A and C for the B. suis 1330 smooth antigen in domestic pigs; (V) group A and C for the ratio of the readout from the smooth B. suis 1330/the readout from the smooth Y. enterocolitica O:9 in domestic pigs; (VI) group A and B for the rLPS antigen in wild boars; (VII) group A and B for the B. suis 1330 smooth antigen in wild boars.
All analyses were carried out in R [22], using the pROC package [23].
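The analyses were run in R with the pROC package; as an illustration of the two cut-point criteria, the Python sketch below computes both on synthetic scores and labels (the group sizes mirror domestic pig groups A and B, but all values are invented, and this is not a reimplementation of pROC):

```python
import numpy as np

def roc_points(scores, labels):
    """Return (cut-off, Se, Sp) for each candidate threshold."""
    pts = []
    for c in np.unique(scores):
        pred = scores >= c
        se = np.mean(pred[labels == 1])          # sensitivity
        sp = np.mean(~pred[labels == 0])         # specificity
        pts.append((c, se, sp))
    return pts

rng = np.random.default_rng(1)
labels = np.r_[np.ones(29, int), np.zeros(80, int)]          # group A / group B
scores = np.r_[rng.normal(2.0, 0.5, 29), rng.normal(0.5, 0.5, 80)]

pts = roc_points(scores, labels)
# (a) maximize Youden's J = Se + Sp - 1
c_youden = max(pts, key=lambda t: t[1] + t[2] - 1)
# (b) minimize (1 - Se)^2 + (1 - Sp)^2, the cut-point closest
#     to the upper-left corner of the ROC plot
c_corner = min(pts, key=lambda t: (1 - t[1]) ** 2 + (1 - t[2]) ** 2)

print("Youden cut-off: %.3f (Se=%.2f, Sp=%.2f)" % c_youden)
print("Closest-to-corner cut-off: %.3f (Se=%.2f, Sp=%.2f)" % c_corner)
```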
Results
The distribution of the normalized values for each species is shown in Figure 1. ROC curves are shown in Figure 2. The overall discriminatory power for all combinations was high, as indicated by the AUCs, which were in all instances higher than 0.95, except for the smooth antigen between groups A and C in domestic pigs, which was 0.722 (Table 1). Se and Sp combinations at the selected cut-offs were also high for all combinations, with the exception of the B. suis 1330 smooth antigen between groups A and C in domestic pigs. At the selected cut-offs that maximize Youden's J statistic, 26/29 group A domestic pigs (23/25 Brucella infected by biovar 2 and 3/4 infected by biovar 1) were positive using the rough B. abortus RB51 antigen, while 75/80 group B domestic pigs and all (17/17) group C (FPSR) domestic pigs were negative. The same antigen detected 15/18 of the seropositive wild boar and was negative for all (31/31) seronegative wild boar. The smooth B. suis 1330 antigen was positive in all (29/29) group A domestic pigs and negative in all (80/80) group B animals, but was positive in just over half (9/17) of the group C (FPSR) samples. In wild boar, the same antigen was positive in all (18/18) group A seropositive samples and negative in 30/31 group B seronegative animals. Finally, the ratio of the smooth B. suis 1330 and the smooth Y. enterocolitica O:9 normalized MFI values discriminated with 100% sensitivity and 100% specificity between group A and group C (FPSR) domestic pigs.
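The ratio rule reported here can be expressed as a one-line classifier. In the sketch below, the cut-off value is hypothetical (the study selected its cut-off by ROC analysis), and the input MFI values are invented:

```python
RATIO_CUTOFF = 1.0   # hypothetical value, not the study's cut-off

def classify_ratio(b_suis_norm_mfi, yersinia_norm_mfi, cutoff=RATIO_CUTOFF):
    """Classify a serum by the B. suis 1330 / Y. enterocolitica O:9 MFI ratio."""
    ratio = b_suis_norm_mfi / yersinia_norm_mfi
    return "Brucella-infected" if ratio > cutoff else "FPSR / cross-reactor"

print(classify_ratio(0.90, 0.30))   # high ratio  -> Brucella-infected
print(classify_ratio(0.55, 0.80))   # low ratio   -> FPSR / cross-reactor
```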
Discussion
This study shows that a multiplex bead assay could be a useful serodiagnostic tool for porcine brucellosis due to the high Se and Sp (whole cell B. suis 1330 smooth antigen) and the ability to identify cross-reactions due to Y. enterocolitica O:9 (rLPS antigen; smooth B. suis 1330/smooth Y. enterocolitica O:9 normalized MFI values ratio).
The most commonly used serological tests are generally designed to measure antibodies against a single antigen preparation, whereas the one-step multiplex bead assay is capable of detecting antibody responses to a range of different antigens at the same time. This offers significant benefits over other tests, including reduced reagent costs [24]. The potential advantages of multiplex bead assays over conventional serologic tests provide a strong impetus for their routine use in both research and clinical laboratories. In our study the beads were conjugated on three separate occasions, one conjugation per antigen and bead type, and were easily mixed and efficiently combined.
The frequency of false-positive reactions in group B domestic pigs and wild boar was low (<10%), despite the fact that multiplexed assays are usually characterized by a lower Sp when compared to conventional serological tests due to the simultaneous presence of multiple ligands [25]. One possible explanation could be the use of protein AG instead of a species/isotype-specific secondary antibody. The results of this study show that the multiplex bead assay using rLPS and smooth B. suis 1330 antigens effectively distinguishes between sera from group A and group B in both domestic pigs and wild boar. Based on the calculated AUC values, the Se and Sp of the latter antigen seem to be better than those of rLPS for this purpose.
In a recent study, the Se and Sp of a conventional iELISA, in which the sLPS antigen was used, were 0.66 and 0.97, respectively, leading to the conclusion that this assay is not sensitive enough for the diagnosis of brucellosis in domestic pigs [26]. Furthermore, in another study the use of sLPS in an iELISA showed a DSn of 95.07% and a DSp of 99.75%, but a specificity of only 24.77% with FPSR samples, results similar to ours [27]. In another study, the use of the sLPS antigen in the iELISA resulted in 0.94 Se and 1.00 Sp [14], confirming the results of several previous publications that also support the diagnostic accuracy of sLPS in discriminating Brucella-infected from non-infected domestic pigs [28,29]. Our results indicate that a multiplex bead assay using the whole smooth B. suis 1330 antigen may be better than conventional serologic tests at discriminating between sera from Brucella-infected and non-infected non-FPSR domestic pigs (Se: 1.00, Sp: 1.00).
The use of rLPS had a satisfactory diagnostic performance in the multiplex bead assay. The good distinction between group A and group B samples from domestic pigs was also found in a recent study where the same antigen was used in an iELISA, resulting in a Se of 0.91 and a Sp of 0.99 [14]. The results of the multiplex bead assay clearly show that in domestic pigs, the rLPS antigen can discriminate group A (Brucella infected animals) from group C (non-Brucella infected FPSR) sera. This attribute of the rLPS rich antigen was anticipated, as the structure of the core sugars within the rLPS is very different between Brucella spp. [30] and Y. enterocolitica O:9 [13] and the absence of the O-chain in this antigen [31] avoids cross-reactivity with antibodies against Y. enterocolitica O:9, as has been previously shown using an iELISA method [14]. Other studies have shown Brucella rLPS antigen to be less effective at serodiagnosis [27]. However, in this case, the antigen was pure and so differs from the less pure preparation used in this study, within which the co-extractants may behave as excipients and enhance the efficacy of the rLPS antigen.
The application of serological tests in wildlife is usually carried out for screening purposes or surveillance. Wild boar are indigenous in many countries and may contribute to the transmission of B. suis to livestock and hamper the success of eradication programs [32]. Based on our results, the best antigen for screening wild boar populations with the multiplex bead assay is the smooth B. suis 1330 (Se: 1.00, Sp: 0.97). Furthermore, given the high seroprevalence (up to 63%) against Brucella spp. in European wild boar populations [33], the concomitant use of rLPS may improve the combined specificity, considering that this antigen gave a negative result in only one group B serum sample, with a positive normalized MFI value for the smooth B. suis 1330 antigen. Further studies with larger sample sizes are obviously needed to confirm this hypothesis.
According to a previous study [34], the cut-off should be selected by taking into consideration the epidemiologic situation in each area. For example, for countries which are brucellosis-free, by taking into account the low prevalence of the disease and the serious consequences of a false-positive diagnosis, it may be advisable to choose a cut-off at the lower part of the ROC curve in order to maximize the Sp. On the other hand, a maximum Se would be appropriate for countries where the disease occurs at high prevalence. Therefore, the Se and Sp of the multiplex bead assay may change depending on the criteria used for cut-off selection.
The poor ability of the smooth B. suis 1330 antigen to differentiate between group A and group C domestic pigs was also expected based on previous studies in cattle [35,36] and pigs [8,9]: the antigen did differentiate between most of the samples, but not nearly as well as the rLPS antigen. However, the concomitant use of the smooth Y. enterocolitica O:9 antigen in the multiplex bead assay and the calculation of the ratio between the smooth B. suis 1330 and the smooth Y. enterocolitica O:9 normalized MFI values fully overcame this drawback, permitting the clear differentiation between groups A and C. This is most likely due to the binding of antibodies to each antigen that are not shared but are distinct to each antigen type. Furthermore, the use of the rLPS antigen in a multiplex bead assay may be helpful in cases of dual infection with Brucella spp. and Y. enterocolitica O:9, given that the OPS is so similar between B. suis biovar 2 and Y. enterocolitica O:9 [15].
Conclusions
Based on the results of this study, the multiplex bead assay can be considered to be an accurate diagnostic test for brucellosis in domestic pigs and wild boar, if at least two antigens are included. For domestic pigs, the use of the smooth B. suis 1330 antigen along with the Y. enterocolitica O:9 antigen (thus enabling calculation of the ratio between MFI values for the two antigens) seems to be the best combination to discriminate between sera from Brucella-infected and non-Brucella-infected (FPSR and non-FPSR) animals, although the addition of the rLPS would help in the case of dual infection. In wild boar, the smooth B. suis 1330 antigen seems to be more accurate in terms of Se and Sp, but the addition of the rLPS may further increase Sp.
Institutional Review Board Statement: All samples used in this study represent material collected by partners and other organizations for purposes other than this project, as specified in deliverable D4.5/5.5 entitled 'Guidelines for ethical sample collection' submitted to the European Commission (26 February 2010, Dissemination Level: PP, Restricted to other programme participants, including Commission Services). The wild boar serum samples were collected opportunistically (no active capture, killing and sampling of wild animals specifically for this study was performed) from animals hunter-harvested by members of Hunting Federations. Thus, special approval was not necessary, and steps to ameliorate suffering were not applicable to this study. Research on animals as defined in the EU Ethics for Researchers document (European Commission, 2007, Ethics for Researchers-Facilitating Research Excellence in FP7, Luxembourg: Office for Official Publications of the European Communities, ISBN 978-92-79-05474-7) is not applicable to this study.
Informed Consent Statement: Not applicable.
Data Availability Statement: All data are presented in the manuscript. | 2022-07-28T05:25:38.582Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "642e73a1f712fc3323fab837b7eac40ea00967c4",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "642e73a1f712fc3323fab837b7eac40ea00967c4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247055628 | pes2o/s2orc | v3-fos-license | Mathematical Description of Dynamic Processes Occurring in Rolling Bearings Used in Oil-And-Gas Sector Rotor Machines as the Basis for Their Vibration-Based Diagnostics
The rolling bearings used in modern rotor machines in the oil-and-gas sector are seen as crucial components. The accuracy of their production complies with the highest operational requirements. However, bearings are also the most probable cause of rotor machine failures. This can be attributed to the friction in support assemblies. This article considers the mathematical dependencies reflecting the dynamic processes occurring in rolling bearing during operation. It will help construct a modern test-range model for vibration-based diagnostics of bearing systems. The experiments with the practical use of this model showed a fault detection accuracy of 82-84%, which is a very good result.
Introduction
Modern equipment requires unique, reliable methods for evaluating its current operation, as well as its past and future operation. The developed methods must be applicable in production contexts, sufficiently accurate and reliable, and accessible to production workers [12,14].
Relevance
Currently, non-destructive inspection methods are widely used to assess the technical condition of machinery in the oil-and-gas sector. The improvement of diagnostic techniques is a very promising area of activity. For instance, it is very important to develop efficient control methods for the engineering parameters of the support elements of rotor machines operated in the oil-and-gas sector. Establishing the mathematical basis is fundamental to the development of vibration-based diagnostic techniques for the bearing systems of such rotor machines.
Statement of problem
For a correct understanding of the dynamic processes occurring in rolling bearings that impact their operability and cause vibrations, we need to analyze the geometrical and mathematical correlations describing the kinematics and dynamics of the support structures in question. We also need to analyze the vibration ranges of rolling bearings with defects and damage [8,9,10].
Theory. The graphic representation of the dynamic impacts and geometric parameters of a rolling bearing is shown in Figure 1. In that figure, the following circumferential velocities can be seen: A – the circumferential velocity of a point, В – that of the running surface of the inner race, and С – that of the center of the rolling element, which is half the circumferential velocity of the inner race.
Assume that d_Т is the rolling element diameter; D_с is the diameter of the circumference that passes through all of the rolling element centers; D_в is the diameter of the circumference that is the geometric location of the contact points of the rolling elements and the running surface of the inner race; n_с is the number of revolutions per minute for the rolling element centers (of the retainer ring) around the bearing axis О; and n_в is the number of revolutions per minute for the inner race of the bearing. Figure 1. The graphic representation of dynamic impacts and geometric parameters of a rolling bearing: a) rotating inner race; b) rotating outer race.
In this case, the circumferential velocity v_с of the rolling element centers (the retaining ring) can be determined from the relation n_с = (n_в/2)·(1 − (d_Т/D_с)·cos β); i.e., for one revolution of the inner race, the retaining ring (Figure 1, a), together with the set of rolling elements, rotates a little less than half a revolution (for single-row radial bearings cos β = 1). We must note that if the bearing's outer race rotates while the inner race is unmoved (fixed axis and rotating housing), then we get (Figure 1, b) n_с = (n_н/2)·(1 + (d_Т/D_с)·cos β) (4), where n_н is the number of revolutions per minute for the rotating outer race; υ_H is the circumferential velocity of the running surface of the outer race; and D_Н is the diameter of the circumference that is the geometric location of the contact points of the rolling elements and the running surface of the outer race [1,2,3].
In this case, when the outer race rotates once, the retaining ring rotates a little more than half a revolution. Thus, the retaining ring rotation rate, and hence the frequency at which amplitude modulations signaling retaining ring faults appear in the vibration diagram, should lie within the range f_ret.r = 0.4f_r – 0.6f_r, where f_r is the machine rotor speed.
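To make these kinematic relations concrete, the Python sketch below evaluates the cage (retaining ring) frequency for both rotation cases, using the standard fundamental-train-frequency form of the relations above, together with the rolling element collision rate f_r.b. = f_ret.r × z introduced later in Equation (9). All geometry values are invented for illustration.

```python
import math

d_T = 12.0      # rolling element diameter, mm (illustrative)
D_c = 65.0      # pitch diameter through rolling element centers, mm (illustrative)
beta = 0.0      # contact angle, rad; cos(beta) = 1 for single-row radial bearings
f_r = 50.0      # rotor (inner race) rotation frequency, Hz (illustrative)

gamma = (d_T / D_c) * math.cos(beta)
f_cage_inner = 0.5 * f_r * (1 - gamma)   # rotating inner race, fixed outer race
f_cage_outer = 0.5 * f_r * (1 + gamma)   # rotating outer race, fixed inner race

print(f"cage frequency (rotating inner race): {f_cage_inner:.2f} Hz "
      f"= {f_cage_inner / f_r:.2f} f_r")  # falls inside the 0.4-0.6 f_r band
print(f"cage frequency (rotating outer race): {f_cage_outer:.2f} Hz")

z = 9                                    # number of rolling elements (illustrative)
f_rb = f_cage_inner * z                  # rolling element collision rate, Eq. (9)
print(f"rolling element collision rate: {f_rb:.2f} Hz")
```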
To determine the mathematical dependencies and the locations of the typical frequencies that are informative for assessing the technical condition of the object in question, and to describe the vibration processes caused by impacts on the bearing races, it is necessary to review how the main loads are applied during rolling bearing operation [18,19].
As we can see from the rolling bearing load diagram in Figure 2, the loads affecting the operating bearing (ball or roller-based) are distributed according to specific laws reviewed in detail below [4].
I – outer race, II – inner race, III – rolling element, Q – load application direction in the bearing, R_n max – maximum rolling element response (located against the load application point), R_n – other responses, r – the inner radius of the inner race, α – the angle between the straight lines passing through the centers of adjacent rolling bodies and the bearing center. The bearing consists of the outer race I, which is rigidly connected to the machine housing, the inner race II, connected to the shaft, and the rolling elements (balls or rollers) III. The bearing load Q will have an uneven distribution across the individual balls or rollers. Those of them that are on the application line of the force Q (according to the antipodal point theory) will experience the highest loads. If we assume that only the balls or rollers located below the аа line take the load, we can claim that the response distribution law R_n will look like the one shown in Figure 2 as the dashed line, i.e., the load on a ball decreases the farther it is from the line of force Q. It equals zero for the balls whose centers are located on the аа line.
According to the contact deformation theory for elastic bodies, this problem can be solved assuming that the contacting bodies are smooth and homogeneous (although this is not realistic, such simplifications have to be made to keep the multiparameter problem solvable), that only elastic deformations take place in the contact area, and that the pressure forces against the contact surfaces are normal (see Figure 2). At the same time, we can assume that the contact areas of the rolling bodies and surfaces are small compared to the sizes of the contacting bodies. Theoretically, the contact area of balls with the rolling surfaces is a point, and the contact area of rollers is a line. Therefore, when solving this problem we can neglect the friction that occurs in the contact area when loads are applied [5].
There is a dependency [5] stating that the actual contact area of these surfaces is an ellipse that can transform into a circle or a strip limited by parallel straight lines in extreme cases. These extreme cases take place for two bodies limited by spherical or cylindrical surfaces, which is the case with the rolling bearings in question (ball-based or roller-based with short rollers).
Thus, if some load Q is applied to a bearing (see Figure 2) and the rolling elements take forces Р0, Р1, …, Рn, the balance equation looks as follows: Q = Р0 + 2Р1·cos α + 2Р2·cos 2α + … + 2Рn·cos nα (5), where α is the angle between the rolling element axes. We accept the hypothesis that the pressure distribution law [15] is sinusoidal, and that this sinusoidal pressure distribution law has the same form as the harmonic oscillations caused by the contact of rolling elements with the rolling surface (the superposition principle for the specific element within the vibration range of a specific component); i.e., we assume that the normal response Rn for the support structure satisfies Rn = Rn max·cos α (6), where Rn max is the response to the maximum ball load. If we use elasticity theory to solve equations (5) and (6) [4], the arithmetic sum of all responses will be approximately ΣRn ≈ (1.3…1.35)Q for a ball bearing and ΣRn ≈ (1.4…1.46)Q for a roller bearing.
Thus, depending on the hypotheses used, the sum of the responses for ball and roller bearings may vary between 1.3Q and 1.46Q.
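The ≈1.3Q figure for a ball bearing can be checked numerically. The sketch below assumes the simplified sinusoidal response law of Equation (6) applied over the loaded half of the bearing, with one element sitting exactly on the load line; the number of rolling elements is invented for illustration.

```python
import math

z = 10                                            # number of rolling elements (illustrative)
angles = [2 * math.pi * i / z for i in range(z)]  # element positions around the bearing
loaded = [a for a in angles if math.cos(a) > 0]   # loaded zone: |alpha| < 90 degrees

R_max = 1.0                                       # response on the load line (arbitrary units)
# Equilibrium along the load line (Eqs. 5-6): Q = sum of R_max * cos^2(alpha_n)
Q = sum(R_max * math.cos(a) ** 2 for a in loaded)
# Arithmetic sum of the responses: sum of R_n = sum of R_max * cos(alpha_n)
R_sum = sum(R_max * math.cos(a) for a in loaded)

print(f"sum(R_n) / Q = {R_sum / Q:.3f}")          # ~1.30, the ball bearing value
```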
Thus, when discussing the amplitude-frequency parameters, we should assume the following:
1. When the rolling surfaces are not damaged, all of the vibrations in the operating bearing are reflected in the "noise" characteristics, i.e., insignificant and negligible components of the vibration range [13,16].
2. When damage occurs, there must be amplitude and frequency modulations at the retaining ring rotation rate f_ret.r (the centers of the rolling elements rotate around the bearing center or the shaft center at this rate). It is logical to assume that the modulations caused by a damaged retaining ring will always show typical amplitude surges, which can also characterize rolling element damage, because the retaining ring contacts the rolling elements directly. That is, damaged rolling elements may cause vibrations when they contact the intact retaining ring, or intact rolling elements may do so when they contact a damaged retaining ring, at the rolling element collision rate f_r.b.: f_r.b. = f_ret.r × z, (9) where z is the number of rolling bodies.
3. If the rolling surfaces (bearing races) are damaged, subharmonic modulations appear whose rate depends on k and n, where k = 1.3…1.35 for ball bearings, k = 1.4…1.46 for roller bearings, and n = 1, 2, 3 is the order of harmonics. These subharmonic frequencies are located above f_r.b. for the inner rolling surfaces and below f_r.b. for the outer races, due to the different circumferential velocities on the surfaces of the rolling elements when they contact the inner or the outer race [6,7].
Research findings
Based on the mathematics provided above, we constructed a mathematical graph model (testing range) reflecting the correlations between the dynamic processes, as well as the amplitude and frequency parameters, and the faults occurring in the support structures of rotor machines [11,15,17].
Figure 3 legend: А – modulation amplitude; f_rev – rotational modulation frequency of the machine rotor; f_ret.r – modulation frequency of the bearing retaining ring; f_r.b. – modulation frequency of the rolling elements; k – modulation harmonics of the rolling surfaces.
Based on the above, as well as the field tests conducted on the compressor plants of gas transmission pipelines, we can conclude the following:
1. When there is no damage on the rolling surfaces, all of the vibration processes in the operating bearing stay within the vibration limits (the fault-free vibration level according to the ISO 2373-74 standard is 0.2–0.3 mm/s). There is only the modulation at the rotational frequency f_rev, which may or may not exceed the standard value depending on the technical condition of the machine in question.
2. According to the statistics, one of the most widespread defects is retaining ring wear or damage, responsible for up to 28.3% of the total bearing failure count. According to the mathematics provided above, these problems result in a frequency modulation of 0.45f_rev – 0.55f_rev (Figure 3).
3. The percentage of bearing failures caused by the rolling elements is comparable to that caused by the retaining ring (27.9% of the total failure count). Field experiments showed that this amplitude surge occurs at a rate of f_r.b. = f_ret.r × z, where z is the number of rolling elements (Figure 3).
4. Outer and inner race defects commonly occur after the rolling elements and the retaining ring are damaged. They also help obtain informative frequencies for the diagnostics of the outer and inner rolling surfaces. Following the dependencies shown above, race defects cause subharmonic modulations whose frequencies depend on k and n, where k = 1.3…1.35 for ball bearings, k = 1.4…1.46 for roller bearings, and n = 1, 2, 3 is the order of harmonics. The frequency modulations caused by damage to the outer race are located below f_r.b. (Figure 3), and those caused by inner race defects are above f_r.b. (Figure 3). This can be attributed to the higher circumferential velocity on the surfaces of the rolling elements when they contact the inner race compared to when they contact the outer race.
5. To calculate the amplitude values for the order 1, 2, and 3 modulation harmonics of the damaged rolling surfaces in bearing supports, as well as the amplitude values for the rotational frequency modulation of a damaged retaining ring and rolling elements, we had to perform long-term field studies and measurements and develop a mathematical model reflecting their distribution pattern [thesis]. We observed the highest amplitude with the rotational frequency modulation А(f_rev); the modulation amplitude for a damaged retaining ring is А(f_ret.r) ≈ 0.6–0.8 А(f_rev). The modulation amplitude of the first harmonic for damaged rolling bodies is, in absolute values, at least 0.72 А_r.b., while the second harmonic amplitude is 0.216 А_r.b. and the third is 0.072 А_r.b. The development of harmonics depends on the degree of rolling surface damage. One or two harmonics can manifest themselves in this way, and they do not have to appear in both races at the same time. We noted that damage may occur on the inner race first and then on the outer race, or vice versa [6,7].
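The amplitude relations in item 5 can be turned into a simple screening check. In the Python sketch below, the measured values and the acceptance logic are illustrative assumptions; only the stated amplitude ratios come from the text.

```python
A_rev = 1.00          # measured amplitude at f_rev (arbitrary units, illustrative)

# Expected band for a damaged retaining ring: A(f_ret.r) ~ 0.6-0.8 A(f_rev)
A_cage_measured = 0.71
if 0.6 * A_rev <= A_cage_measured <= 0.8 * A_rev:
    print("cage-frequency amplitude is consistent with retaining ring damage")

# Expected harmonic amplitudes for damaged rolling bodies, relative to A_r.b.
A_rb = 0.50           # reference amplitude A_r.b. (illustrative)
for n, ratio in enumerate([0.72, 0.216, 0.072], start=1):
    print(f"expected amplitude of harmonic {n}: {ratio * A_rb:.3f}")
```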
Conclusions
When the developed testing range was used at industrial facilities, the accuracy of bearing support fault detection was 82-84%. Thus, we can claim that the suggested mathematical techniques and the developed diagnostic procedures for the bearing supports of rotor machines used in the oil-and-gas sector based on vibration parameters are highly efficient. | 2022-02-23T20:08:03.339Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "abc929e79288714a7adc4d7b1cfc3d00a85adebb",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/988/4/042050",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "abc929e79288714a7adc4d7b1cfc3d00a85adebb",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
221101296 | pes2o/s2orc | v3-fos-license | Nursing homes: the titanic of cruise ships – will residential aged care facilities survive the COVID‐19 pandemic?
Australians living in residential aged care facilities (RACF) are extremely vulnerable to coronavirus disease 2019 (COVID-19). Residents are both more at risk of contracting the virus and more at risk of dying because of it. Internationally, RACF have been the epicentre of the pandemic. Some estimates suggest more than half of all COVID-19 deaths have been residents of aged care facilities. RACF outbreaks overseas have contributed significantly to community transmission. There is much we can learn from overseas experiences about how to prevent and manage COVID-19 outbreaks in Australian RACF. International approaches have prioritised protecting acute health services, while prevention of and preparation for outbreaks within RACF have received less attention. We suggest this is now not the right approach, as without significant support, an outbreak in an RACF is likely to lead to widespread transmission and death both in RACF and the community.
The hallmark of a civilised society is how it treats its most vulnerable people, and our elderly are often amongst our most physically, emotionally and financially vulnerable. Frail and elderly members of our community deserve to, and should, be looked after in the best possible way. (The Hon Richard Tracey AM RFD QC, Chair of the Royal Commission into Aged Care Quality and Safety.) Internationally, residential aged care facilities (RACF) have emerged as 'ground zero' for coronavirus disease 2019 (COVID-19). 1 Residents often depend on prolonged close physical contact with caregivers, and many are cared for in close proximity to other residents. Facilities around the world are reporting overburdened, inadequately trained staff who may work at multiple facilities, increasing both their own risk of exposure and their capacity to transmit the highly contagious virus. The physical infrastructure of RACF seldom allows effective isolation, and there are reports that appropriate isolation has occurred only after a significant number of deaths. 2 Finally, older people can present with atypical symptoms, and those with cognitive impairment may be less able to communicate their symptoms, leading to a delayed diagnosis.
More than 200 000 Australians currently live in RACF. 3 There are more than 2500 RACF in Australia, operated by private for-profit, government or community-based/charitable providers, funded by consumer contributions and $12.2 billion from the Australian Government annually. 4 Half of all facilities house over 60 residents. 3 In Australia there is no mandatory minimum qualification to work as a personal care worker in an RACF and there is no guarantee that a nurse will be on duty at all times. 4 Community concern about the care provided by the sector prompted the recent Royal Commission into Aged Care Quality and Safety (the Commission). In October 2019 the Commission released its Interim Report in which aged care in Australia was described as a 'Shocking Tale of Neglect'. The Commissioners found that a fundamental overhaul of design, objectives, regulation and funding of aged care in Australia is required. 4 This system now has to meet the challenge of COVID-19 with a funding increase of less than 3% to protect the most vulnerable Australians. 5 RACF residents who are at risk of becoming infected are more likely to develop severe illness or die due to age, medical comorbidities and frailty. Data from overseas demonstrate between 24-84% of all deaths from COVID-19 have been residents of RACF, the large variation due to inconsistencies in testing and reporting. 6 Locally there have already been several outbreaks, the most significant of which at the time of writing was at Newmarch House in Sydney, with 37 residents and 34 employees infected, including 17 residents' deaths. 7 The cause of this outbreak is under investigation.
There are lessons from overseas experiences to prevent, prepare for and manage COVID-19 outbreaks in Australian RACF. Acknowledging that the physical environment and staffing of aged care facilities overseas varies, we have chosen to refer to all overseas facilities as RACF. One of the first recorded outbreaks in an RACF occurred in the US State of Washington. 8 Due to increasing numbers of cases almost all of the 82 residents at the facility were tested 14 days after the first case was identified. Of the 23 residents who tested positive, only 10 were symptomatic at the time of testing. A further 10 developed symptoms a week after testing. Widespread testing in Belgian RACF revealed that 73% of employees and 69% of residents who tested positive to COVID-19 were asymptomatic at the time of testing. 9 These studies suggest that strict infection prevention measures are needed even before a case of COVID-19 is identified clinically. Another US study demonstrated benefit from serial asymptomatic testing. Two asymptomatic cases were identified 2 weeks after transferring all residents with positive tests, excluding all staff who tested positive and implementing strict infection prevention measures. 10 In Italy, authors from the Observatory of Long Term Care reported that public and government attention was directed towards acute hospitals with little attention given to RACF. 11 This group identified three main issues leading to failure to contain the outbreak: first, inadequate communication and management guidelines for RACF; second, delay in the provision of personal protective equipment (PPE) to the sector; and third, failure to control the spread of the virus within facilities. As of 30 April 2020, 95% of people who died due to COVID-19 in Italy were aged over 60. 12 However, there are no accurate data on the proportion of these that were RACF residents.
The UK government strategy to support RACF during the COVID-19 pandemic prioritised easing pressure on acute hospitals. Facilities were instructed to accept both new and returning residents despite their COVID-19 status and to institute appropriate infection prevention measures. The strategy included instruction on how to ensure adequate supply of PPE. 9 Despite this, RACF workers reported inadequate PPE supply in facilities that had accepted COVID-19 positive patients. 13 COVID-19 cases have occurred in over 2000 RACF in the UK with almost 15 000 deaths. 14 Singapore appears to be a success story with very low transmission rates within facilities and only four deaths at the time of writing. 15 Measures employed to prevent spread include the restriction and pre-screening of visitors and reduction in unnecessary transfer of patients between health facilities. All employees of RACF have been tested for COVID-19 and testing of all residents is underway. Facilities have been instructed to refer all patients with fever and respiratory illness to acute hospitals where they are isolated while awaiting testing. Documentation is required to confirm that returning residents do not have COVID-19. Over 2500 staff have been accommodated in hotels to reduce their interaction with the community and therefore their risk of exposure.
At present there have been no infections or deaths in RACF in Hong Kong. Officials have postponed all nonurgent medical services, supplied all facilities with PPE at no cost and restricted residents' movements within facilities. 16 A special allowance has been paid for workforce support, recognising more staff are required due to decreased family care visits and to account for increased sick leave during the pandemic. Each facility has a trained 'infection controller' who oversees infection prevention. 14 Residents who have attended hospitals are unable to return until they have undergone a strict quarantine.
As social and physical distancing changes, there is an urgent question of how to protect and care for Australian aged care residents. The Communicable Diseases Network Australia has released National Guidelines for the Prevention, Control and Public Health Management of COVID-19 Outbreaks in Residential Care Facilities. 17 The guidelines state that it is the primary responsibility of the RACF to manage a COVID-19 outbreak. The vast majority of RACF in Australia are private entities but holding individual facilities primarily responsible must be interpreted in the context of the Commission's recent finding of a system that 'lacks transparency in communication, reporting and accountability', 4 three essential features of disease outbreak management. The guidelines clarify that 'state health authorities will act in an advisory role to assist RACF to detect, characterise and manage COVID-19 outbreaks'. 17 This is profoundly inadequate due to the high rate of asymptomatic transmission and the limited capacity of RACF to manage outbreaks. There is currently no enforcement of the sector's compliance with published guidelines and there are concerns that visitor restrictions have further reduced oversight of the sector.
There are concerns about the sector's ability to care for sick and dying residents, as highlighted by disturbing international reports of RACF residents being left abandoned or dead in facilities. 16 The Commission's Interim Report comments on inadequate staffing leading to 'basic standards often not being met', a concerning finding when health facilities are expecting absenteeism of up to 30% at the peak of an outbreak. 4 There are also concerns about how RACF will care for confused, wandering or aggressive COVID-19 patients given the high rates of restrictive practices described in the Commission's Interim Report.
Suggestions to prevent, prepare for and manage a COVID-19 outbreak
Our discussion focusses on suggested changes to the workforce, testing and location of care.
Workforce
Current workforce practices in the aged care sector present challenges for prevention and management of COVID-19 in RACF. Overseas experience suggests that staff should only work at one facility and should not be involved in community care. If this is not possible, staff should be required to complete a register of all facilities at which they work. A centralised government-funded pool of appropriately trained staff skilled in both infection prevention and care of the elderly would have been useful to deploy at Newmarch House during the outbreak as many of their staff were required to quarantine. 18 All RACF staff must receive education about the importance of not attending work while unwell and should have access to paid sick leave if required to quarantine including those employed on a casual or temporary basis.
Testing
There needs to be widespread testing with a low threshold to test and accessible on-site testing. If a single case of COVID-19 is confirmed, all staff and residents should be tested regardless of symptom status. Consideration should be given to testing of asymptomatic staff even in the absence of a case. Testing of asymptomatic residents could also be considered. Use of contact and droplet precautions for all residents should be implemented when a resident has been tested until results are available. Because of this, there must be centralised coordination of PPE acquisition and delivery to ensure each facility has a stockpile of appropriate PPE available on site. All staff require frequent training in the use of PPE including simulating care in isolation rooms.
Location of care
Current Australian guidelines recommend transfer of an RACF resident to hospital only if the resident's condition requires it. This recommendation is now worth reconsidering for two reasons. First, it is clear from international experience that inadequate outbreak management in an RACF is likely to lead to high mortality and broader community transmission. This may lead to a higher burden on hospitals than accepting the care of RACF residents with suspected or confirmed COVID-19. Second, our acute care setting now has additional capacity and far greater expertise in infection prevention and management than RACF.
There are three options for management of COVID-19 infections in RACF residents. The first is to transfer all suspected or confirmed COVID-19 cases to an acute hospital setting. This overcomes the limitations of the physical infrastructure of RACF and places the burden of prevention of spread of infection on expert services. The potential harm to the individual (e.g., falls and delirium in an unfamiliar hospital environment) must be balanced against benefit to the community. When reviewing advanced care plans with residents, health care workers should make residents and their families aware of their facility's capacity to isolate them effectively in the event of acquiring COVID-19, so that any transfer to hospital is anticipated. The South Australian Government has recently announced that all COVID-19 positive residents will be transferred immediately to hospital by ambulance. It has considered Advanced Care Directives as not binding in the event of a pandemic emergency. 19 Residents confirmed to have COVID-19 could remain in hospital until they have completed their isolation period, unless the health service and RACF are confident they can be effectively isolated at the RACF.
Two other options could be considered if the health care system were to become overwhelmed and rationing required limiting access to hospitals: cohorting to specific COVID-19 facilities and cohorting within the resident's own RACF. Both would require a highly trained, mobile workforce available to be deployed at the beginning of an outbreak. Cohorting to a specific COVID-19 facility would expose residents to new, unfamiliar environments, as they would be if transferred to hospital. Cohorting infected residents within their own facility would not completely remove the risk they pose to other residents. There are now multiple examples showing that local cohorting places a vulnerable population at great risk of significant morbidity and mortality. Should our health services be overwhelmed by a pandemic wave, specialist facilities established in collaboration with hospital services could care for COVID-19 positive residents.
Conclusion
RACF are required to provide skilled care for a unique, highly dependent population, making physical distancing impossible. Facilities have not been designed with infection prevention strategies in mind and staffing ratios are highly variable. The catastrophic outcomes of this infection in RACF around the world parallel the outcomes seen from cruise ships and urgent action is required to protect RACF residents, workers and the community at large. | 2020-08-12T13:03:28.798Z | 2020-08-10T00:00:00.000 | {
"year": 2020,
"sha1": "eb9ccc585de2f9233d32735c7166adf3928b8207",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/imj.14966",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "dee1536c42ae3e520d6a643cdf62f06f8cbb7032",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
248478984 | pes2o/s2orc | v3-fos-license | Development and Validation of a Prognostic Model to Predict the Risk of In-hospital Death in Patients With Acute Kidney Injury Undergoing Continuous Renal Replacement Therapy After Acute Type a Aortic Dissection
Background This study aimed to construct a model to predict the risk of in-hospital death in patients with acute kidney injury (AKI) receiving continuous renal replacement therapy (CRRT) after acute type A aortic dissection (ATAAD) surgery. Methods We reviewed the data of patients with AKI undergoing CRRT after ATAAD surgery. The patients were divided into survival and nonsurvival groups based on their vital status at hospital discharge. The data were analyzed using univariate and multivariate logistic regression analyses. A risk prediction model was established using a nomogram, and its discriminative ability was validated using the C statistic and the receiver operating characteristic (ROC) curve. Its calibration ability was tested using a calibration curve, 10-fold cross-validation and the Hosmer–Lemeshow test. Results Among 175 patients, in-hospital death occurred in 61 (34.9%) patients. The following variables were incorporated in predicting in-hospital death: age > 65 years, lactic acid 12 h after CRRT, liver dysfunction, and permanent neurological dysfunction. The risk model revealed good discrimination (C statistic = 0.868, 95% CI: 0.806–0.930; a bootstrap-corrected C statistic of 0.859; area under the ROC = 0.868). The calibration curve showed good consistency between predicted and actual probabilities (via 1,000 bootstrap samples, mean absolute error = 2.2%; Hosmer–Lemeshow test, P = 0.846). The 10-fold cross-validation of the nomogram showed that the average misdiagnosis rate was 16.64%. Conclusion The proposed model could be used to predict the probability of in-hospital death in patients undergoing CRRT for AKI after ATAAD surgery. It has the potential to assist doctors in identifying the gravity of the situation and adopting targeted therapeutic measures.
BACKGROUND
Acute type A aortic dissection (ATAAD) carries a high incidence of postoperative acute kidney injury (AKI) because of its distinctive pathophysiology and the nature of the surgical procedure, and AKI seriously affects the patient's prognosis. The reported incidence of AKI ranges from 20 to 67%, depending on the definition of AKI used (1,2). Studies have shown that mortality in patients with postoperative AKI was 10-20 times higher than in patients without AKI after ATAAD surgery (3,4). Mortality was higher still in those who needed renal replacement therapy (RRT). The high risk of short-term mortality in patients undergoing RRT makes it necessary to identify prognostic factors and perform targeted interventions. Therefore, an effective model was needed to predict the risk of in-hospital death in patients with AKI undergoing continuous renal replacement therapy (CRRT) after ATAAD surgery (in this study, all patients treated with RRT received CRRT).
The nomogram is considered an effective way to create a straightforward visual representation of a numerical predictive model that quantifies the risk of a clinical outcome. This study aimed to identify the clinical risk factors for in-hospital death in patients with AKI undergoing CRRT after ATAAD surgery, and to establish and validate a predictive model.
METHODS
AKI was diagnosed based on changes in the urine output, serum creatinine, or both, according to the Kidney Disease: Improving Global Outcomes (KDIGO) classification. Every patient had a urinary catheter to measure urine output every hour, and serum creatinine was measured at least once daily.
Data Collection
Relevant data related to the surgery were recorded. (1) The preoperative general data included age, gender, weight, height, time from the occurrence of dissection to the surgery, history of hypertension, type of dissection, maximum diameter of the aorta, cardiac function grade, myocardial ischemia, aortic regurgitation, renal insufficiency, pleural effusion, pericardial tamponade, aortic rupture, shock, smoking history, diabetes, and oral administration of β-receptor blockers and calcium antagonists. (2) The intraoperative data included operative time, cardiopulmonary bypass (CPB) time, circulatory arrest time, minimum temperature, crystalloid and colloid input, and blood transfusion. (3) The postoperative data included blood pressure, central venous pressure, mechanical ventilation time, ICU stay, blood transfusion volume, and so forth. The complications included lung infection, respiratory failure, and other organ dysfunction [liver dysfunction was defined as a serum alanine aminotransferase or aspartate aminotransferase level at least 10 times the upper limit of normal; permanent neurological dysfunction (PND) was defined as a stroke due to embolism or hemorrhage, confirmed by consultation with a neurologist and by imaging (CT/MRI)], as well as hemodynamic instability, arrhythmia, CRRT catheterization- or anticoagulation-related bleeding, and electrolyte and acid-base imbalance. The hourly urine volume, daily intake and output, blood creatinine level, urea nitrogen, electrolytes, pH, and acid-base status were recorded postoperatively. The patient treatment measures were recorded, including mechanical ventilation, vasoactive drug use, and fluid therapy.
CRRT was initiated in most patients within 8 h of reaching AKI stage 3 according to the KDIGO classification, or earlier if any of the following absolute indications for RRT was present: serum urea level > 40 mmol/L; serum potassium concentration > 6 mmol/L despite medical treatment (bicarbonate and/or glucose-insulin infusion); pH < 7.15 in the context of pure metabolic acidosis (PaCO2 below 35 mmHg) or of mixed acidosis with PaCO2 ≥ 50 mmHg without the possibility of increasing alveolar ventilation.
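To make the initiation logic concrete, the criteria above can be encoded as a simple screening function. The sketch below, in R, is illustrative only: the function name and inputs are our own, while the thresholds come directly from the criteria just listed.

```r
# Illustrative encoding of the absolute RRT indications described above.
# Inputs: urea (mmol/L), potassium (mmol/L, despite medical treatment),
# pH, and PaCO2 (mmHg). Returns TRUE if any absolute indication is met.
has_absolute_rrt_indication <- function(urea, potassium, ph, paco2) {
  severe_uremia           <- urea > 40
  refractory_hyperkalemia <- potassium > 6
  # pH < 7.15 with pure metabolic acidosis (PaCO2 < 35 mmHg) or mixed
  # acidosis (PaCO2 >= 50 mmHg, alveolar ventilation cannot be increased)
  severe_acidosis <- ph < 7.15 & (paco2 < 35 | paco2 >= 50)
  severe_uremia | refractory_hyperkalemia | severe_acidosis
}

has_absolute_rrt_indication(urea = 28, potassium = 6.4, ph = 7.30, paco2 = 38)
#> TRUE (refractory hyperkalemia)
```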
Grouping: The patients were divided into survival and nonsurvival groups based on their vital status at hospital discharge.
Statistical Analysis
Patients' baseline characteristics were expressed as frequency and percentage for categorical variables, and as mean ± standard deviation or median and interquartile range (IQR) for continuous variables, as appropriate. Missing values were imputed by multiple imputation using the MICE package (5). We assumed that the data were missing at random (6); therefore, we performed predictive mean matching (7) to generate five complete imputed data sets used to fit the logistic models. Binary data were compared using the χ² test or Fisher exact test. Normally distributed data were compared using t-tests, and the Mann-Whitney U-test was applied to data with a nonnormal distribution. The significance of each variable was assessed by univariate logistic regression analysis. Variables with a P-value < 0.1 were entered into the multivariate logistic regression analysis to identify independent risk factors. Based on the results of the final regression analysis, a nomogram to predict the risk of in-hospital death in patients with postoperative AKI undergoing CRRT after ATAAD surgery was constructed using R software (version 4.1.2). The regression coefficients in the multivariate logistic regression were proportionally transformed into a point scale, and the total points were converted into predicted probabilities (8).
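A minimal sketch of this workflow in R, assuming a hypothetical data frame `crrt_data` with a binary outcome column `death` and candidate predictor columns (the variable names below are ours, not the authors'):

```r
library(mice)  # multiple imputation
library(rms)   # logistic regression modeling tools

# Impute missing values by predictive mean matching, five completed data sets
imp <- mice(crrt_data, m = 5, method = "pmm", seed = 2022)
dat <- complete(imp, 1)  # for brevity, analyze one completed set here

# Univariate screening: retain predictors with P < 0.1
candidates <- c("age_over65", "lactate_12h", "liver_dysfunction", "pnd")
p_uni <- sapply(candidates, function(v) {
  f <- glm(reformulate(v, "death"), data = dat, family = binomial)
  coef(summary(f))[2, "Pr(>|z|)"]  # P-value of the predictor
})
keep <- names(p_uni)[p_uni < 0.1]

# Multivariable logistic regression on the retained variables
dd <- datadist(dat); options(datadist = "dd")
fit <- lrm(reformulate(keep, "death"), data = dat, x = TRUE, y = TRUE)
fit  # prints coefficients and the C statistic
```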
The sample size calculation showed that a sample of 32 from the positive group and 60 from the negative group achieves 80% power to detect a difference of 0.15 between the area under the receiver operating characteristic (ROC) curve (AUC) under the null hypothesis of 0.85 and an AUC under the alternative hypothesis of 0.70, using a two-sided z-test at a significance level of 0.05.
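The stated power calculation can be reproduced approximately with a normal-approximation z-test for a single AUC using the Hanley-McNeil variance formula. This is our own sketch; the exact figure from dedicated sample-size software may differ slightly.

```r
# Hanley & McNeil (1982) variance of an estimated AUC
auc_var <- function(A, n_pos, n_neg) {
  q1 <- A / (2 - A); q2 <- 2 * A^2 / (1 + A)
  (A * (1 - A) + (n_pos - 1) * (q1 - A^2) + (n_neg - 1) * (q2 - A^2)) /
    (n_pos * n_neg)
}

# Approximate power of a two-sided z-test of H0: AUC = 0.85 vs H1: AUC = 0.70
auc0 <- 0.85; auc1 <- 0.70; n_pos <- 32; n_neg <- 60; alpha <- 0.05
se0 <- sqrt(auc_var(auc0, n_pos, n_neg))
se1 <- sqrt(auc_var(auc1, n_pos, n_neg))
z_crit <- qnorm(1 - alpha / 2)
power <- pnorm((abs(auc1 - auc0) - z_crit * se0) / se1)
power  # ~0.84 under this approximation, consistent with the reported 80%
```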
The performance of the nomogram was evaluated by discrimination and calibration. Discrimination was quantified by the area under the ROC curve (equivalent to the C statistic). Calibration was assessed with a visual calibration plot comparing the predicted and actual probabilities of in-hospital death, generated via 1,000 bootstrap resamples for internal validation of predictive accuracy (9). The Hosmer-Lemeshow test was also used to assess calibration. Furthermore, we used 10-fold cross-validation to calculate the misdiagnosis rate. Statistical analyses and graphics were implemented in R 4.1.2. All tests were two-tailed, and a P-value < 0.05 indicated a statistically significant difference.
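Continuing the modeling sketch above (same hypothetical `fit` and `dat` objects), the internal validation steps could look as follows.

```r
library(pROC)               # ROC curve / AUC
library(ResourceSelection)  # Hosmer-Lemeshow test

# Discrimination: area under the ROC curve
prob <- predict(fit, type = "fitted")
auc(roc(dat$death, prob))

# Bootstrap-corrected C statistic and calibration curve (1,000 resamples)
validate(fit, B = 1000)          # optimism-corrected Dxy; C = (Dxy + 1) / 2
plot(calibrate(fit, B = 1000))   # predicted vs. actual probability

# Calibration: Hosmer-Lemeshow goodness-of-fit test
hoslem.test(dat$death, prob, g = 10)

# 10-fold cross-validated misclassification (misdiagnosis) rate
folds <- sample(rep(1:10, length.out = nrow(dat)))
err <- sapply(1:10, function(k) {
  f <- glm(formula(fit), data = dat[folds != k, ], family = binomial)
  p <- predict(f, newdata = dat[folds == k, ], type = "response")
  mean((p > 0.5) != dat$death[folds == k])
})
mean(err)
```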
RESULTS
A total of 175 patients with postoperative AKI undergoing CRRT after ATAAD surgery were included in this study. In-hospital death occurred in 61 (34.9%) patients. Comparison of the baseline data showed that the proportion of patients aged > 65 years was significantly higher in the nonsurvival group than in the survival group [18 cases (29.5%) vs. 7 cases (6.1%), P < 0.001]; therefore, age > 65 years was included in the multivariate logistic regression analysis. No significant differences were found in the other baseline data between the two groups (Table 1).
Comparison of the intraoperative data between the two groups revealed that the CPB time in the nonsurvival group was longer than that in the survival group (235.6 ± 64.8 min vs. 219.6 ± 48.9 min, P = 0.04); therefore, CPB time was included in the multivariate logistic regression analysis. Other intraoperative data, including the type of surgery, operative time, aortic cross-clamp time, MHCA time, and intraoperative blood transfusion volume, showed no statistically significant differences (Table 2).
Comparison of the clinical and laboratory data during CRRT between the two groups revealed that lactic acid levels 6 h, 12 h, and 24 h after CRRT were higher in the nonsurvival group than in the survival group (lactic acid 6 h after CRRT: 6.5 ± 4.9 mmol/L vs. 3.7 ± 2.7 mmol/L) (Table 3); therefore, lactic acid 12 h after CRRT was included in the multivariate logistic regression analysis. Comparison of postoperative complications and transfusion data during the ICU stay showed that the proportions of liver dysfunction and PND were significantly higher in the nonsurvival group than in the survival group [liver dysfunction: 24 cases (39.3%) vs. 7 cases (6.1%), P < 0.001; PND: 27 cases (44.3%) vs. 12 cases (10.5%), P < 0.001]. No significant differences were observed in the other complications or in the volume of blood transfusion during the ICU stay (Table 4). Therefore, liver dysfunction and PND were included in the multivariate logistic regression analysis.
Nomograms and Model Performance
A nomogram was constructed to predict in-hospital death, incorporating four significant independent risk factors: age > 65 years, lactic acid 12 h after CRRT, liver dysfunction, and PND (Figure 3). The total score, obtained by summing the single scores, was used to estimate the probability of in-hospital mortality. The discrimination of the predictive model was estimated with a C statistic of 0.868 (95% CI, 0.806-0.930) and a bootstrap-corrected C statistic of 0.859; the area under the ROC curve was 0.868 (Figure 4). The calibration curve showed that the predicted probabilities of in-hospital death fitted the actual rates well (via 1,000 bootstrap samples, mean absolute error = 0.022) (Figure 5). The Hosmer-Lemeshow test (P = 0.846) also demonstrated good calibration. The 10-fold cross-validation of the nomogram showed an average misdiagnosis rate of 16.64%.
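With the `rms` fit sketched in the Methods, a nomogram of this kind can be drawn in a few lines; the risk-axis label and tick positions below are stand-ins, not the authors' exact figure.

```r
library(rms)

# Map each predictor to a point scale and total points to predicted risk
nom <- nomogram(fit,
                fun = plogis,                       # logit -> probability
                fun.at = c(0.1, 0.3, 0.5, 0.7, 0.9),
                funlabel = "Risk of in-hospital death")
plot(nom)
```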
DISCUSSION
Several factors have been reported to be associated with high mortality during CRRT (10)(11)(12). Researchers have been striving to develop prediction models for patients with AKI; however, these models have limited applicability to patients undergoing CRRT (13)(14)(15). For example, one previously reported model, the HELENICC score, was proposed for patients undergoing CRRT (16), but that study included only patients with septic AKI. Patients with postoperative AKI undergoing CRRT after ATAAD surgery have higher in-hospital mortality and a worse prognosis, making it necessary to screen for prognostic factors and perform targeted interventions. In this study, a nomogram was developed and validated for predicting the risk of in-hospital death in these patients. In addition, this nomogram had excellent discriminative performance and calibration, providing individual predictions for each patient. The present study created an uncomplicated, intuitive graph of a statistical predictive model that quantified the risk of in-hospital death in patients with AKI undergoing CRRT after ATAAD surgery. In the proposed nomogram, age > 65 years was the greatest contributor to the risk of in-hospital death, followed by liver dysfunction and PND; lactic acid 12 h after CRRT had the smallest effect on the probability of in-hospital death.
This study showed that in-hospital mortality was higher in patients older than 65 years undergoing CRRT. In elderly patients, immune function is decreased, the physiological function of the organs degenerates, and renal blood flow and glomerular filtration rate decline with age, often accompanied by hypertension, hyperlipidemia, diabetes, and other comorbidities. They are therefore more likely to have a poor prognosis after CRRT for postoperative AKI.
Commereuc and colleagues (17) showed that the mortality of patients with AKI who were older than 65 years and required CRRT in the ICU exceeded 70%, reaching 76% in patients older than 80 years, with a significantly higher risk of death than in patients younger than 50 years. The prognosis of patients requiring CRRT worsens with increasing age. These results support the conclusions of this study.
This study showed that a high lactic acid level 12 h after CRRT was an independent prognostic factor for in-hospital death in patients undergoing CRRT for AKI after ATAAD. Blood lactic acid is an important indicator of systemic perfusion and oxygen metabolism; it reflects increased anaerobic metabolism in the presence of hypoperfusion (18). Elevated blood lactate has been shown to be a sensitive, early biochemical indicator of tissue hypoperfusion and oxygen insufficiency and can be used to assess disease severity and prognosis (19,20). If blood lactate is not cleared effectively within a short time, tissue hypoperfusion and oxygenation disorders do not improve, the disease progresses, shock and respiratory failure occur, and the case fatality rate increases. If rescue treatment is appropriate, tissue perfusion and oxygenation improve, the lactate concentration in the tissues decreases quickly, and the condition improves until recovery (21). Lactic acid that remains high 12 h after CRRT suggests that tissue ischemia and hypoxia are still severe after CRRT, and the prognosis of such patients is poor.
In this study, liver dysfunction, defined as ischemic liver injury (ILI), was a predictive factor for in-hospital death. A practical clinical definition of this form of liver dysfunction is a syndrome of rapid, short-term increases in AST or ALT to more than 10 times the upper limit of normal, occurring most often in critically ill patients. It is characterized by a predominantly hepatocellular pattern of damage caused by insufficient blood and oxygen delivery to the liver cells. The underlying etiologies that most often result in ILI are circulatory, cardiac or respiratory failure (22)(23)(24). Most specialists agree that pronounced falls in systemic blood pressure are a typical predisposing feature of ILI. The incidence of ILI in the ICU was 1-12% (22,(25)(26)(27)(28) and may be even higher in patients with cardiogenic shock (23,25,29). The surgical procedure for ATAAD is difficult and challenging, and malperfusion syndromes, including liver dysfunction, can be present. If patients with AKI also have liver dysfunction, their in-hospital mortality increases significantly.
PND manifested mainly as stroke due to embolism or hemorrhage, diagnosed by a neurologist and confirmed by imaging (CT/MRI). Brain injury is one of the most important factors, other than cardiac insufficiency, leading to poor prognosis after cardiac surgery. Studies showed (30,31) that the incidence of perioperative stroke was significantly higher in cardiac surgery than in noncardiac, nonneurological surgery. The incidence of perioperative stroke after cardiac surgery was higher in patients with ATAAD than after other types of cardiac surgery (32). Deep hypothermic circulatory arrest (DHCA) has been shown to be one of the most important risk factors for neurological complications after CPB; the incidence of stroke increased by 1.8-13.6%, and early mortality by 6.1-15%, in adults after DHCA (33,34). With the application of multiple brain protection strategies in patients with ATAAD, the incidence of neurological injury after ATAAD surgery is lower than before, but it remains an important factor affecting patient prognosis. The present study showed that patients requiring CRRT who had PND had significantly increased in-hospital mortality. Multimodal brain function monitoring and the active use of multiple brain protection strategies during the perioperative period may improve patients' outcomes.
FIGURE 3 | Nomogram predicting the risk of in-hospital death in patients with postoperative AKI undergoing CRRT after ATAAD surgery. The nomogram was established based on four independent prognostic factors. The total score is calculated by summing the single scores; the probability of in-hospital death is estimated by projecting the total score onto the lower total-point scale.
This study had a few limitations. First, although internal validation of the model produced excellent discrimination and calibration, the generalizability of this nomogram still requires external validation, especially in other countries, given differences in clinical practice and epidemiology. Second, the prediction model was constructed retrospectively, and retrospective research has inherent limitations; a prospective study is needed to test the model. Third, the model still has a misdiagnosis rate, and clinicians using it should be aware of this.
CONCLUSIONS
In summary, a nomogram was developed and validated for predicting the risk of in-hospital death in patients with postoperative AKI undergoing CRRT after ATAAD surgery. The nomogram could help clinicians recognize the gravity of the situation and guide treatment decisions for these patients.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
This study was approved by the Institutional Ethics Committee of the Beijing Anzhen Hospital (No. KS2019034-3). All patients gave their written informed consent.
AUTHOR CONTRIBUTIONS
RJ carried out the studies, participated in collecting data, and drafted the manuscript. XL and ML participated in acquisition, analysis, or interpretation of data. NL, LS, and JZ reviewed and edited it. All authors contributed to the interpretation of the data and the completion of figures and tables and have read and approved the final manuscript.
FUNDING
The study was supported by the Beijing Municipal Science and Technology Commission (No. Z191100006619095).
"year": 2022,
"sha1": "49aef8ed9fea5716ed7fa6f323c94023ea319665",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "49aef8ed9fea5716ed7fa6f323c94023ea319665",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Worse survival in patients with right ventricular dysfunction and COVID-19–associated acute respiratory distress requiring extracorporeal membrane oxygenation: A multicenter study from the ORACLE Group
Objective We sought to determine the impact of right ventricular dysfunction on the outcomes of mechanically ventilated patients with COVID-19 requiring veno-venous extracorporeal membrane oxygenation. Methods Six academic centers conducted a retrospective analysis of mechanically ventilated patients with COVID-19 stratified by support with veno-venous extracorporeal membrane oxygenation during the first wave of the pandemic (March to August 2020). Echocardiograms performed for clinical indications were reviewed for right and left ventricular function. Baseline characteristics, hospitalization characteristics, and survival were compared. Results The cohort included 424 mechanically ventilated patients with COVID-19, 126 of whom were cannulated for veno-venous extracorporeal membrane oxygenation. Right ventricular dysfunction was observed in 38.1% of patients who received extracorporeal membrane oxygenation and 27.4% of patients who did not receive extracorporeal membrane oxygenation with an echocardiogram. Biventricular dysfunction was observed in 5.5% of patients who received extracorporeal membrane oxygenation. Baseline patient characteristics were similar in both the extracorporeal membrane oxygenation and non–extracorporeal membrane oxygenation cohorts stratified by the presence of right ventricular dysfunction. In the extracorporeal membrane oxygenation cohort, right ventricular dysfunction was associated with increased inotrope use (66.7% vs 24.4%, P < .001), bleeding complications (77.1% vs 53.8%, P = .015), and worse survival independent of left ventricular dysfunction (39.6% vs 64.1%, P = .012). There was no significant difference in days ventilated before extracorporeal membrane oxygenation, length of hospital stay, hours on extracorporeal membrane oxygenation, duration of mechanical ventilation, vasopressor use, inhaled pulmonary vasodilator use, infectious complications, clotting complications, or stroke. The cohort without extracorporeal membrane oxygenation demonstrated no statistically significant differences in in-hospital outcomes. Conclusions The presence of right ventricular dysfunction in patients with COVID-19–related acute respiratory distress syndrome supported with veno-venous extracorporeal membrane oxygenation was associated with increased in-hospital mortality. Additional studies are required to determine if mitigating right ventricular dysfunction in patients requiring veno-venous extracorporeal membrane oxygenation improves mortality.
Despite the changing virulence of COVID-19, acute respiratory distress syndrome (ARDS) remains a persistent disease phenotype. ARDS develops in approximately 31% to 67% of patients hospitalized with COVID-19 [1][2][3] and is associated with significant mortality of more than 52%.1,2,4 Management of COVID-19-associated ARDS in the initial waves of the pandemic focused on early intubation,5 lung protective ventilation,5,6 and prone positioning.[7][8][9][10] Despite these strategies, a subset of these patients progressed to refractory hypoxemia or hypercarbia, necessitating advanced therapies such as veno-venous extracorporeal membrane oxygenation (VV-ECMO).[1][2][3]5,[11][12][13][14][15][16][17] The role of ECMO in the management of COVID-19-associated ARDS has largely consisted of using VV-ECMO to address severe refractory hypoxemia and hypercarbia. Interestingly, right ventricular dysfunction (RVD) has been demonstrated to be relatively common in this cohort, occurring in approximately 25% to 40% of patients with COVID-19-associated ARDS.[18][19][20][21] Some centers have advocated for early, aggressive right ventricle (RV) support with a right ventricular assist device (RVAD) in conjunction with ECMO, in response to early studies that suggested increased mortality for patients with RVD in the setting of COVID-19.22,23 Despite this trend, RVD in patients who require ECMO support for COVID-19-associated ARDS has not yet been shown to impact survival. Given the relative infrequency of ECMO for COVID-19 at any one center and the complexity of appropriate management of these patients, multicenter collaborative analysis has become essential to better understand the role advanced therapies play in treating this novel disease.24,25 The Outcomes and Recovery After COVID-19/Critical illness Leading to ECMO (ORACLE) group is an interdisciplinary collaboration across 6 academic medical centers that aims to define the recovery and ongoing needs of survivors of COVID-19-associated ARDS. Established in 2020, the overarching goal of the ORACLE research collaborative is to better understand how ECMO impacts the long-term outcomes of survivors. We present an analysis of the ORACLE registry with a specific focus on evaluating the impact of RVD on clinical outcomes. We hypothesized that the presence of RVD in patients with COVID-19 supported with VV-ECMO is associated with worse clinical outcomes and higher mortality.
MATERIALS AND METHODS
We conducted a retrospective analysis using data collected at 6 academic medical centers across the United States (University of Colorado, University of Kentucky, University of Virginia, Johns Hopkins University, Vanderbilt University, University of Pittsburgh Medical Center) representing the ORACLE interdisciplinary collaborative.26 Participating sites were experienced ECMO centers and strictly adhered to Extracorporeal Life Support Organization guidelines when considering ECMO candidacy. Each center used specialized teams to manage ECMO-supported patients per Extracorporeal Life Support Organization guidelines both before and after ECMO cannulation.27 Guidelines for cannulation included the presence of single organ failure, intubation less than 10 days, age less than 70 years, P:F less than 80 mm Hg for greater than 6 hours or P:F less than 50 mm Hg for greater than 3 hours, and pH less than 7.25 with PaCO2 greater than 60 mm Hg for more than 6 hours. Patients with known cardiac dysfunction were not cannulated for VV-ECMO, and venoarterial ECMO (VA-ECMO) was not offered to this cohort. Each institution independently matched resources to patient needs based on local dynamics, and efforts to provide all necessary resources were maintained at each participating center during the pandemic. All patients considered for ECMO had been intubated before evaluation. The study was approved by the Institutional Review Board at each site, and a waiver of informed consent was granted (University of Colorado and all other sites: COMIRB#20-0731, approved April 4, 2020).
Study investigators at each site performed a retrospective chart review of all adult patients with COVID-19 admitted to the intensive care unit (ICU) during the first wave of the pandemic from March to August 2020.28 All transesophageal and transthoracic echocardiograms obtained during the index hospitalization for COVID-19 were reviewed. The presence of any RVD was determined and categorized dichotomously at the discretion of providers certified in adult echocardiography at each participating institution; RVD was broadly defined as a composite of size ratio and elevated RV pressure or the presence of septal dyskinesia on transthoracic or transesophageal imaging.29 Left ventricular (LV) dysfunction was defined as an LV ejection fraction less than 50% as documented in the echocardiography report. Patients who did not receive a clinically indicated echocardiogram were not included in the full analysis (Figure 1). Data from all sites were combined for analysis. Patient demographics and in-hospital characteristics, including survival at discharge, were compared based on ECMO status using chi-square tests for categorical variables and t tests or Kruskal-Wallis tests for continuous variables. We used Kaplan-Meier survival curves and log-rank P values to test the association between survival to discharge and RVD, separately for ECMO-supported patients and patients supported only with mechanical ventilation. Analyses were performed using R software (R Foundation for Statistical Computing).
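A minimal sketch of this survival comparison in R, assuming a hypothetical data frame `pts` with columns `days` (follow-up time), `died` (event indicator), `rvd` (right ventricular dysfunction), and `survived_to_discharge` (none of these names come from the authors):

```r
library(survival)

# Kaplan-Meier survival curves stratified by RV dysfunction
km <- survfit(Surv(days, died) ~ rvd, data = pts)
plot(km, col = c("black", "red"), xlab = "Days", ylab = "Survival probability")

# Log-rank test for a difference between the RVD and non-RVD strata
survdiff(Surv(days, died) ~ rvd, data = pts)

# Association of RVD with survival to discharge (chi-square test)
chisq.test(table(pts$rvd, pts$survived_to_discharge))
```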
RESULTS
The study included 424 mechanically ventilated patients with COVID-19 across 6 institutions. Of these patients, 159 were cannulated for VV-ECMO and 242 received a clinically indicated echocardiogram during their index hospitalization for COVID-19. A total of 79.2% (126/159) of the ECMO cohort had echocardiograms. RVD was observed in 38.4% (48/126) of ECMO-supported patients. A total of 44.2% (117/265) of the non-ECMO cohort received echocardiograms. Comparison of the demographics and outcomes of the ECMO and non-ECMO cohorts demonstrated that ECMO-supported patients were younger, traveled further to receive care, and had less chronic renal disease. ECMO-supported patients had greater vasopressor, steroid, and inhaled pulmonary vasodilator use (Table E1). ECMO-supported patients had increased use of tracheostomy, longer duration of ventilation, and a longer hospitalization without a significant reduction in mortality (Table E2). Further analysis focused on patients who received a clinically indicated echocardiogram and was stratified by both ECMO use and RVD. RVD was observed in 27.4% (32/117) of the non-ECMO group (Figure 1). The majority of RVD was isolated, with biventricular dysfunction observed in 5.6% (7/126) of ECMO-supported patients, whereas isolated LV ejection fraction less than 50% was observed in 2.4% (3/126) of ECMO-supported patients. Non-ECMO-supported patients had an observed rate of biventricular dysfunction of 9.0% (10/117) and of isolated LV dysfunction of 4.0% (5/117). Given the potential for LV dysfunction to confound the relationship between RV dysfunction and mortality, we used logistic regression to estimate the adjusted odds of death for RV dysfunction, LV dysfunction, and an interaction between them. The interaction term was not significant; therefore, we fit a model with two binary factors for each of these variables. RV dysfunction was significantly associated with increased odds of death in the VV-ECMO cohort (OR, 2.33; 95% CI, 1.05-5.28; P = .04). ICU admission measures of systemic illness and predicted mortality were similar between the RVD and non-RVD ECMO cohorts, as measured by SOFA and APACHE II scores (Table 1).
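The confounding check described here could be coded along the following lines, again with the hypothetical `pts` columns plus an `lvd` indicator: the interaction model is examined first, then the additive two-factor model from which the odds ratio is reported.

```r
# Test whether RV and LV dysfunction interact in predicting death
fit_int <- glm(died ~ rvd * lvd, data = pts, family = binomial)
summary(fit_int)$coefficients["rvd:lvd", ]  # interaction term estimate and P

# Interaction not significant -> fit the additive two-factor model
fit_add <- glm(died ~ rvd + lvd, data = pts, family = binomial)
exp(cbind(OR = coef(fit_add), confint(fit_add)))  # ORs with 95% CIs
```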
When both the ECMO and non-ECMO cohorts were stratified by the presence of RVD, baseline patient characteristics were similar (Table 1). ECMO-supported patients with RVD were more likely to require inotropes than ECMO-supported patients without RVD (66.7% vs 24.4%, P < .001); however, there were no significant differences in vasopressor or inhaled pulmonary vasodilator use, or in duration of mechanical ventilation before ECMO cannulation (Table 2). There were no significant differences in receipt of blood transfusion (P = .901) or clotting complications (including deep venous thromboses and pulmonary emboli) (P = .255), although bleeding complications were significantly increased in ECMO-supported patients with RVD compared with those without RVD (77.1% vs 53.8%, P = .015). Use of investigational COVID-19 therapy, tracheostomy, and steroids was also similar between groups. There was no significant difference in rates of intracranial hemorrhage, stroke, or delirium. There was a trend toward increased rates of acute kidney injury in the RVD cohort, but this did not reach statistical significance (89.4% vs 73.1%, P = .052). There was no significant difference in days ventilated before ECMO, hours on ECMO, or duration of mechanical ventilation required during the hospital stay. Length of hospital admission (39.0 vs 37.0 days, P = .603) was not significantly different between ECMO-supported patients with and without RVD (Table 2). ECMO-supported patients with RVD demonstrated significantly reduced survival to discharge compared with ECMO-supported patients without RVD (39.6% vs 64.1%, P = .012) (Table 2, Figure 2). Kaplan-Meier survival curves demonstrated varying survival over time, but these rates were not significantly different in the ECMO cohort (P = .08) or the non-ECMO cohort (P = .91) in relation to RVD, given censoring (Figure 3). In contrast to patients who required ECMO, the impact of RVD in the cohort who did not require ECMO support was less pronounced. In the 117 mechanically ventilated patients who were not cannulated for ECMO, there was neither a significant difference in survival related to RVD (68.2% vs 65.6%, P = .962) (Figure 2) nor significant differences in rates of in-hospital complications, including duration of mechanical ventilation, length of stay, or bleeding or clotting complications (Table 2).
An additional subgroup analysis investigated the impact of single-site cannulation (right internal jugular dual-lumen cannula) versus dual-site cannulation (right internal jugular return with common femoral venous drainage) for VV-ECMO (Figure 4). Single-site cannulation was used in 34.1% (43/126) of ECMO-supported patients, and dual-site cannulation in 65.9% (83/126). There were no significant differences in observed RVD between the single-site and dual-site cannulation groups (39.5% vs 37.3%, P = .963). In-hospital survival was not significantly different between the single-site and dual-site cannulation groups (58.1% vs 53.0%, P = .719).
DISCUSSION
We present data from the multicenter interdisciplinary ORACLE collaborative with a specific focus on right ventricular dysfunction. Among the forms of ventricular dysfunction observed, only RVD was significantly associated with increased odds of death before discharge. When we used a subset of only patients who did not have LV dysfunction, RVD was still significantly associated with mortality. Additional covariates could not be included because of the small sample size. Cannulation approach, single-site versus dual-site, was not associated with significant differences in in-hospital survival in this study. As such, our findings are less suggestive of the global myocarditis phenotype that has been previously described13,14 and favor right-sided dysfunction as the dominant ventricular dysfunction pattern in this disease process. Second, this multicenter study demonstrated increased in-hospital mortality associated with RVD on echocardiography in ECMO-supported patients (Figure 2); this association has also been suggested by multicenter studies.30 Substantial clinical morbidity exists for patients with COVID-19 who require VV-ECMO support, regardless of RV function. ECMO-supported patients with RVD in this study demonstrated increased dependence on inotropes and increased bleeding complications. Patients with RVD did not have increased rates of clotting complications, stroke, vasopressor needs, progression to dialysis, duration of mechanical ventilation, or length of time on ECMO. Third, the present study suggests that the association of RVD with increased mortality in COVID-19 ARDS may be limited to those supported with VV-ECMO, because there was no significant difference in clinical outcomes related to RVD in patients supported with mechanical ventilation alone (survival to discharge 68.2% vs 65.6%, P = .962).
These findings provoke discussion on 2 central questions: (1) Is RVD a phenotype of another determinant of survival, such as degree of hypoxemia, or is it the cause of the survival difference? (2) Would protecting the RV with an RVAD with an oxygenator improve survival? The question of RVAD placement is particularly challenging because RVD is not always present at the time that mechanical circulatory support is initiated.
RV failure is often underrecognized in critical illness, particularly in the setting of ARDS, due in part to the difficulty of diagnosis by noninvasive means.21 During the early wave of the COVID-19 pandemic, concern regarding possible transmission to healthcare workers during diagnostic procedures such as echocardiography or placement of Swan-Ganz catheters likely resulted in underrecognition of RVD, which has since been described in 27% to 40% of hospitalized patients with COVID-19, with some studies showing a 3-fold increase in mortality.11,31-36 RVD had been shown to impact outcomes for ECMO-supported patients with ARDS before the COVID-19 pandemic; however, the implications of RV dysfunction in COVID-19-associated ARDS have not been clearly delineated. Mechanistically, the development of RVD in the setting of ARDS can be considered a secondary result of pulmonary vasoconstriction in response to the combined hypoxemia, hypercapnia, and acidosis seen in these individuals. Furthermore, the hypercoagulability and increased incidence of pulmonary embolism associated with COVID-19 may have contributed to an increased incidence of RVD. This, coupled with the increased airway driving pressures often required by many of these patients to offset pulmonary parenchymal fibrosis and reduced elasticity, leads to rapid onset of pulmonary arterial hypertension. This acute pulmonary hypertension results in compensatory dilation of the RV as it shifts on the Frank-Starling curve to provide adequate contractility against the increased pulmonary resistance. These mechanistic details become increasingly important in the setting of COVID-19-associated ARDS, which has been associated with significant systemic effects, including pulmonary interstitial inflammation and eventually fibrosis, which dramatically limit lung compliance and increase hypercoagulability. The results of the present study suggest that more liberal use of echocardiography in ECMO-supported patients with ARDS may aid in prognostication and could better guide therapeutic efforts.
Correction of these respiratory and metabolic derangements with the institution of VV-ECMO should offer RV protection from dysfunction. However, the presence of persistent RVD and inferior outcomes for patients with ARDS-associated RVD in prior studies suggests persistent RV-PA uncoupling due to inadequate gas exchange and metabolic correction, pulmonary vascular dysregulation, or macrovascular/microvascular thrombosis resulting in persistent pulmonary hypertension on ECMO. As a result, persistent RV dysfunction on ECMO is concerning for a more fixed uncoupling phenomenon or progression to chronic pulmonary hypertension in some individuals, which may be related to the association with increased inotropic support seen in our study.
The observed prevalence of RVD in COVID-19-associated ARDS, the risk for persistent RV-PA uncoupling, and the description of direct myocardial inflammatory manifestations of COVID-19 have prompted some centers to adopt more liberal use of right atrial to pulmonary artery ECMO (venopulmonary ECMO) configurations. These modifications to the ECMO circuit facilitate RV unloading and enhanced pulmonary arterial flow in an attempt to counteract these effects. Data from this approach are limited but promising, with centers demonstrating a 3-fold survival benefit for venopulmonary ECMO over maximal mechanical ventilation alone. 22,37 These single-center studies should be interpreted with caution because comparative studies of venopulmonary ECMO versus conventional VV-ECMO are lacking; however, they do promote a shift in the approach to ARDS from an isolated pulmonary parenchymal derangement to a mixed cardiopulmonary condition that may require a more tailored approach for patients with RVD.
The present study supports prior small, single-institution investigations that suggested reduced survival in ECMO-supported patients with RVD, and expands on those analyses.30 These findings highlight the importance of multi-institutional collaboratives, such as the ORACLE collaborative, in the assessment of complex therapies for ARDS. Although RVD in patients on ECMO support was numerically infrequent at any one institution within the collaborative, collectively this analysis allows a more robust assessment of patient outcomes related to the condition across multiple medical institutions. These retrospective studies provide a foundation for future prospective studies to better delineate the role of echocardiographic screening for RVD in VV-ECMO-supported patients with ARDS and the role of medical optimization in mitigating the impact of RVD on morbidity and mortality in this cohort of patients.
Study Limitations
Our analysis has several limitations. Although the cohort included patients from 6 institutions across the United States, this was a retrospective observational study with the associated inherent weaknesses. In-hospital care of patients with COVID-19 ARDS and posthospitalization assessments were performed at a time when the healthcare system in this country was experiencing unique stressors and a rapidly evolving understanding of this novel disease. This includes the potential for variability in vasopressor use, anticoagulation, and other therapies within and between institutions. Furthermore, COVID-19-specific medical therapeutics, such as monoclonal antibodies, were not available during this era. In this analysis, we had an insufficient sample size to explore site-level variation and its impact on outcomes. Additionally, patients with RVD were categorized dichotomously and based only on assessment by echocardiography rather than on the severity of RVD. It must be acknowledged that because echocardiograms were performed on the basis of clinical need, and thus were not performed prospectively, their interpretation is subject to bias. A structured approach to repeat echocardiography was not used to assess for recovery of RVD, and although board-certified echocardiographers interpreted these examinations, there is likely some heterogeneity in the strict criteria used. Finally, this study was a retrospective analysis of mechanically ventilated patients with COVID-19 supported with VV-ECMO during the first wave of the pandemic from March to August 2020; COVID-19 and its treatment continue to evolve.
CONCLUSIONS
This multicenter study demonstrates significant mortality associated with the presence of RVD in patients with COVID-19-associated ARDS supported with VV-ECMO. Of note, ventricular derangement in this cohort was predominantly characterized by isolated RVD, and the increased mortality appears limited to patients requiring ECMO support. These findings offer important insight into the management of COVID-19-associated ARDS.
"year": 2022,
"sha1": "9aaa421471b2b1dc0cc992c3a130609bae3abdee",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/j.jtcvs.2022.12.013",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "e84d1cca5e8ce84cac6a88dbb1e75f14ab079553",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Amelioration of Blood Compatibility and Endothelialization of Polycaprolactone Substrates by Surface-Initiated Atom Transfer Radical Polymerization
Introduction
Attempts to develop synthetic vascular grafts for the replacement of diseased vascular sections have been an area of active research over the past decades [1]. However, thrombus formation as a result of platelet adhesion to the luminal surface of synthetic grafts and restenosis caused by the host inflammatory response remain a challenge, especially for small-diameter (<6 mm) graft replacement [2,3]. Therefore, the haemocompatibility of the biomaterial used in the graft is a prerequisite for clinical success. As a result, various strategies have been developed to improve the blood compatibility of biomaterial surfaces, including the surface immobilization of anticoagulants such as heparin [4] and sulfated silk fibroin [5], the incorporation of polyethylene oxide or negatively charged side chains [6,7], and surface passivation with protein layers such as albumin [8]. Despite the efficacy of these approaches in preventing acute thrombogenesis, concerns remain about the drug elution lifespan, with the possible consequence of late thrombosis [9]. To avoid undesirable blood-material interactions, seeding autologous endothelial cells (ECs) onto the luminal surface of the graft is considered an ideal approach to increase the patency of synthetic grafts [10]. Many studies have indicated that endothelial cells release factors that regulate thrombogenesis and platelet activation [11], while delayed or absent stent endothelialization has been implicated in late thrombosis and adverse clinical outcomes [13]. Thus, rapid endothelialization of vascular grafts is of great importance for the long-term patency of blood-contacting vessels.
Due to its slow degradation rate in vivo (2-4 years) [14], good mechanical strength, and biocompatibility with vascular cell types [15,16], polycaprolactone (PCL) is currently being extensively investigated as a scaffold material for vascular tissue engineering applications [17][18][19][20][21]. However, the intrinsic hydrophobicity and poor cytocompatibility of PCL substrates lead to poor affinity for cell adhesion, thereby restricting their application in blood-contacting devices. Consequently, surface modification of PCL is necessary to improve cell adhesion and proliferation. Functional polymer brushes containing reactive hydroxyl (-OH), carboxyl (-COOH) or amine (-NH2) groups have been successfully grafted onto PCL surfaces using γ-ray-, ozone- or photo-induced polymerization grafting to introduce hydrophilicity [9,16,[22][23][24]. These flexible reactive groups on the polymer brushes are well suited to conjugating bioactive macromolecules for improved cytocompatibility. However, γ-ray-, ozone- or photo-induced polymerization grafting of polymer brushes has several limitations, including low grafting density due to steric hindrance, uncontrollable graft yield of polymer brushes, and undesired formation of covalent bonds between the reactive groups on the polymer brushes and the surface [25]. Hence, an alternative grafting approach that allows control over brush density, polydispersity and composition is desired.
One such alternative is surface-initiated atom transfer radical polymerization (ATRP), which covalently grafts polymer brushes in a tunable and controllable manner [26]. This approach allows the preparation of well-defined, dense polymer brushes containing reactive pendant groups (e.g., -OH, -COOH, or epoxide groups), providing highly reactive binding sites for functional biomolecules [27]. As a result, surface-initiated ATRP provides a promising approach to fabricate PCL substrates with well-defined polymer brushes of controlled length and density, as well as tunable grafting density of biomacromolecules. However, to the best of our knowledge, only a few studies have been devoted to modifying biodegradable polyester polymers using surface-initiated ATRP to improve their cytocompatibility or blood compatibility [27,28]. Moreover, the functionality of the attached cells was not thoroughly investigated in those studies.
As such, the aim of the current study was to use the surface-initiated ATRP method to tailor PCL substrates with dense functional P(GMA) brushes and high-density immobilized gelatin to improve their properties for cell attachment and proliferation. Each functionalization step was verified by XPS, AFM and water contact angle measurements. The cytocompatibility of the functionalized PCL substrates was evaluated using human umbilical vein endothelial cells (HUVECs), and the effect of different surface properties on the regulation of the thrombogenicity of the attached cells was also investigated.
Aminolysis of PCL film substrates and immobilization of ATRP initiator
Polycaprolactone (PCL) films were prepared by a solution casting method following previously established procedures [29]. Briefly, 5 g of PCL pellets was dissolved in 40 ml of dichloromethane to form the PCL solution. The polymer solution was then cast onto a glass substrate at a predetermined thickness using an automatic film applicator (PA-2105, BYK). The solvent was removed at room temperature by slow evaporation over 24 h, and the films were further dried in a vacuum oven for another 24 h at 35 °C to obtain translucent PCL films with a thickness of about 150 μm. The resultant PCL films were cut into round specimens with a diameter of 2 cm. The activation of the PCL substrates was performed by aminolysis using a previously described procedure [30,31]. Briefly, the PCL films were immersed in a 10% (w/w) 1,6-hexanediamine/isopropanol mixture at 40 °C for a predetermined time. After the aminolysis treatment, the resultant PCL-NH2 surfaces were rinsed thoroughly with copious amounts of deionized water and isopropanol, respectively, to remove free 1,6-hexanediamine, and dried in a vacuum oven at 30 °C for 24 h.
The alkyl halide ATRP initiator was introduced on the PCL-NH2 surface through the reaction of the amino groups with 2-bromoisobutyryl bromide (BIBB) [32]. The PCL-NH2 films were immersed in 30 ml of anhydrous hexane containing 1.0 ml (7.2 mmol) of triethylamine (TEA). After 30 min of degassing with nitrogen, the reaction mixture was cooled in an ice bath, and 0.89 ml (1.65 g, 7.2 mmol) of BIBB was added dropwise via a syringe. The reaction was allowed to proceed with gentle stirring at 0 °C for 2 h and then at room temperature for 12 h. The resulting surface (referred to as the PCL-Br surface) was washed thoroughly with copious amounts of hexane, ethanol, and finally deionized water, in that order, and subsequently dried in a vacuum oven under reduced pressure at ambient temperature overnight.
Surface-initiated ATRP of GMA and immobilization of gelatin
For the grafting of P(GMA) brushes from the PCL-Br surfaces, surface-initiated ATRP of GMA was performed using a [GMA (3 ml)]:[CuBr]:[CuBr2]:[Bpy] molar feed ratio of 100:1.0:0.2:2.0 in 5 ml of a methanol/water mixture (5/1, v/v) at room temperature in a Pyrex® tube. The reaction was allowed to proceed for 0.5 to 3 h to produce the PCL-g-P(GMA) surfaces. After the prescribed reaction time, the films were removed and washed sequentially with copious amounts of methanol and deionized water, followed by immersion in methanol for about 48 h to ensure complete removal of physically adsorbed reactants or polymers. For the immobilization of gelatin onto the pendant epoxide groups of the P(GMA) brushes, the PCL-g-P(GMA) films were incubated in 10 ml of phosphate-buffered saline (PBS, pH 7.4) containing 3 mg/ml gelatin. The reaction was allowed to proceed at room temperature for 24 h under continuous stirring to produce the corresponding PCL-g-P(GMA)-c-gelatin surfaces. After the reaction, the gelatin-immobilized PCL films were washed thoroughly with PBS and deionized water to remove physically adsorbed (reversibly bound) gelatin, and then dried in a vacuum oven under reduced pressure overnight.
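As a worked example of translating the 100:1.0:0.2:2.0 feed ratio into weighable amounts, the short R snippet below estimates the required reagent masses. The GMA density and the molar masses are nominal literature values rather than figures from this study, so the results are approximate.

```r
# Moles of GMA from 3 ml (density ~1.042 g/ml, M = 142.15 g/mol)
n_gma <- 3 * 1.042 / 142.15            # ~0.022 mol
ratio <- c(GMA = 100, CuBr = 1.0, CuBr2 = 0.2, Bpy = 2.0)
M     <- c(GMA = 142.15, CuBr = 143.45, CuBr2 = 223.35, Bpy = 156.18)  # g/mol
n     <- n_gma * ratio / ratio["GMA"]  # moles of each component
round(n * M * 1000, 1)                 # masses in mg
#> GMA ~3126, CuBr ~31.5, CuBr2 ~9.8, Bpy ~68.7 (approximate)
```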
Grafting density of the P(GMA) brushes and immobilized gelatin
The grafting density of the P(GMA) brushes and the amount of immobilized gelatin on the PCL substrates were determined from the grafting yield (GY) using the following equation [27,33]:

GY = (Wa − Wb) / A

where Wa and Wb are the weights of the dry film after and before graft polymerization (or immobilization of gelatin), respectively, and A is the film area (about 3.2 cm²). For each GY measurement, a minimum of three pieces of PCL film was used, and the resulting values were averaged.
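For illustration, this calculation, including averaging over replicate films, can be written as a small helper; the function name and the example weights are ours.

```r
# Grafting yield: weight gain per unit film area, averaged over replicates
grafting_yield <- function(w_after, w_before, area = 3.2) {
  mean((w_after - w_before) / area)   # same mass units as the inputs, per cm^2
}

# Three replicate films, weights in micrograms (made-up values)
grafting_yield(w_after  = c(52310, 51880, 52050),
               w_before = c(52100, 51690, 51830))
#> ~64.6 ug/cm^2 for this invented example
```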
Surface characterization
The composition of the functionalized PCL films was determined by X-ray photoelectron spectroscopy (XPS). All XPS spectra were recorded on a Kratos AXIS HSi spectrometer with a monochromatic Al Kα X-ray source (1486.6 eV photons), using procedures similar to those described previously [32]. The N 1s core-level signal can be used as an indicator of immobilized gelatin. The [N]/[C] ratio, determined from the sensitivity-factor-corrected N 1s and C 1s core-level XPS spectral areas, indicated the relative abundance of immobilized gelatin on the PCL substrates. Static water contact angles of the functionalized PCL film surfaces were measured at 25 °C and 60% relative humidity using the sessile drop method with 3 μl water droplets on an FTA200 contact angle goniometer (First Ten Angstroms Inc., Portsmouth, VA, USA). The contact angles reported are mean values from four substrates, with the value for each substrate obtained by averaging the contact angles at no fewer than three surface locations. The surface topography of the functionalized PCL substrates was investigated by atomic force microscopy (AFM). A multimode scanning probe microscope equipped with a Nanoscope IIIa controller (Digital Instruments, Santa Barbara, USA) was used to capture the AFM images in air. Scans of 10 μm were recorded in tapping mode with a silicon cantilever. The drive amplitude was about 300 mV, and the scan rate was between 0.5 and 1.0 Hz. The arithmetic mean surface roughness (Ra) was determined with the Nanoscope software.
Cytocompatibility of the functionalized PCL substrates
Human umbilical vein endothelial cells (HUVECs, ATCC CRL-1730™) were cultured in gelatin-coated T25 flasks containing MCDB131 cell culture medium supplemented with fetal bovine serum, 0.2% bovine brain extract, 0.25 μg/ml amphotericin, 0.1 mg/ml heparin, 100 U/ml penicillin, and 100 μg/ml streptomycin, in a CO2 environment at 37 °C. The MCDB131 medium was changed every other day. Upon reaching 90% confluency, cells were harvested by trypsinization using 0.25% trypsin-EDTA. ECs between passages 4 and 6 were used for subsequent experiments.
Cell proliferation
The pristine and functionalized PCL films were sterilized by immersion in 75% (v/v) ethanol for 60 min, rinsed three times with sterile PBS, and then incubated in MCDB131 medium overnight. Gelatin-coated coverslips (0.1%) were used as positive controls.
Cell viability and proliferation were determined using the AlamarBlue™ (AB) assay. 0.5 ml of EC suspension (2×10⁴ cells/ml) was seeded into each well of a 24-well plate containing the pristine and functionalized PCL films and incubated in a 5% CO2 environment at 37 °C for 1, 3, 5, and 7 days. The cell culture medium was changed every other day. After the predetermined incubation period, the culture medium was removed from the wells, and 0.5 ml of AB solution (10% AB solution in culture medium without FBS) was added to the wells. The plates were incubated in a 5% CO2 atmosphere at 37 °C for 4 h, and the fluorescence intensity was measured using a microplate reader (Model 680, Bio-Rad Laboratories, Inc., Hercules, CA, USA) at an excitation wavelength of 570 nm and an emission wavelength of 580 nm. Cell numbers were calculated using standards derived from seeding known quantities of cells and correlating them with fluorescence emission.
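The conversion from fluorescence to cell number via such a standard curve can be done with a simple linear fit. A sketch follows; the standards are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical standards: fluorescence readings for known seeded cell numbers
cells_std = np.array([2e3, 5e3, 1e4, 2e4, 4e4])
fluor_std = np.array([120.0, 300.0, 585.0, 1170.0, 2300.0])

slope, intercept = np.polyfit(cells_std, fluor_std, 1)  # linear standard curve

def cells_from_fluorescence(f):
    """Invert the standard curve to estimate cell number from fluorescence."""
    return (f - intercept) / slope

print(cells_from_fluorescence(900.0))
```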
Cell imaging
In vitro qualitative analysis of cell coverage and viability was performed using a LIVE/DEAD® viability/cytotoxicity assay to assess the extent of endothelialization on the functionalized PCL surfaces. For this procedure, calcein AM (4 mM in anhydrous dimethyl sulfoxide, DMSO) and EthD-1 (2 mM in DMSO/H2O, 1:4 v:v) were added to PBS (1:1000 ratio) to produce the LIVE/DEAD® staining solution. The cell-seeded PCL samples, obtained after 7 days of cell culture, were first washed thrice with PBS to eliminate non-adherent cells and then stained with 0.1 ml of the LIVE/DEAD staining solution. After incubation in a 5% CO2 atmosphere at 37 °C for 30 min, the samples were visualized with a Nikon Ti fluorescence microscope (emission at 515 nm and 635 nm; Nikon Instruments, Tokyo, Japan), and fluorescent images were acquired using NIS-Elements Br software.
Blood compatibility of the bare and endothelialized PCL substrates
The hemolysis rate, coagulant activity, nitric oxide (NO) production, and platelet activation of the bare and endothelialized PCL films with various surface functionalizations were investigated to evaluate their blood compatibility. The endothelialized PCL substrates were obtained by culturing ECs (2×10⁴ cells/ml) for 7 days on the surface-functionalized PCL films using the procedures described above.
Hemolysis rate test
The pristine and functionalized PCL samples were immersed in a diluted blood solution containing 2% fresh anticoagulated (ACD) human blood and 98% physiological salt solution and incubated at 37 °C for 1 h. After centrifugation at 3000 rpm for 5 min, the absorbance of the solution was recorded as Dt. Under the same conditions, a solution containing 2% ACD blood and 98% physiological salt solution was used as the negative reference, and a solution containing 2% ACD blood and 98% distilled water was used as the positive reference; their absorbances were recorded as Dnc and Dpc, respectively. The hemolysis rate α of the samples was calculated using the following equation:

α = (Dt − Dnc) / (Dpc − Dnc) × 100%
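In code, this calculation is a one-liner; the sketch below uses the standard formula implied by the definitions of Dt, Dnc, and Dpc, with hypothetical absorbance readings:

```python
def hemolysis_rate(d_t, d_nc, d_pc):
    """alpha = (Dt - Dnc) / (Dpc - Dnc) x 100%."""
    return (d_t - d_nc) / (d_pc - d_nc) * 100.0

# Hypothetical absorbances: sample, negative control, positive control
alpha = hemolysis_rate(d_t=0.062, d_nc=0.035, d_pc=0.950)
print(f"hemolysis rate: {alpha:.1f}%")  # values below 5% are generally accepted
```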
Coagulation assays
Whole blood from healthy human volunteers was mixed with 3.8% sodium citrate at a volume ratio of 1:9. The blood was centrifuged at 3000 rpm for 15 min at room temperature to obtain platelet-poor plasma (PPP). Aliquots of 500 μl of PPP were placed in contact with the surface of each bare or endothelialized PCL substrate for 10 min at 37 °C. The PPP was then collected and mixed with tissue thromboplastin (human brain extract) for prothrombin time (PT) tests, or with a partial thromboplastin reagent (ellagic acid) for activated partial thromboplastin time (APTT) tests. Subsequently, the fibrin clot formation time was determined with an automatic coagulation analyzer (Sysmex CA-7000). PPP that was not exposed to the PCL substrates was used as a blank sample.
Nitric Oxide (NO) secretion by HUVEC
ECs cultured for 7 days on the functionalized PCL substrates were washed twice with PBS and incubated at 37 °C with trypsin-EDTA (0.25%) solution for cell detachment. The resultant ECs were serum-starved overnight in serum-free medium. After incubation with fresh serum-free medium for 6 h, the medium was removed and DAF-FM diacetate (Molecular Probes, D-23842) was added to the medium to a final concentration of 10 μM. The medium was then incubated at 37 °C for 1 h, followed by detection of fluorescence using a GloMax 20/20 luminometer equipped with a blue fluorescence module. The end product of DAF-FM diacetate and NO is a benzotriazole derivative with fluorescence excitation and emission maxima of 495 and 515 nm, respectively. Fluorescence units were normalized to cell numbers.
Platelet activation determination by P-selectin assay
Platelet activation by the bare and endothelialized PCL substrates was investigated using the P-selectin (CD62P) assay. Briefly, 100 μl of fresh human platelet-rich plasma (PRP) was incubated with the bare or 7-day endothelialized PCL substrates at 37 °C for 2 h. At the end of the incubation, the films were washed thoroughly with copious amounts of PBS solution thrice, 40 μl of anti-CD62P (1:100, v:v) was added to each film, and the films were incubated at 37 °C for 1 h. After being washed thrice with PBS solution, the films were each incubated with 40 μl of horseradish peroxidase-conjugated sheep anti-mouse polyclonal antibody (1:100, v:v) at 37 °C for 1 h. Subsequently, the PCL films were reacted with 150 μl of 3,3',5,5'-tetramethylbenzidine (TMB) chromogenic solution for 10 min. The color reaction was terminated by adding 100 μl of 1 M H2SO4, and the optical densities (OD) were measured at 450 nm using a Varioskan Flash microplate reader (Thermo Fisher Scientific, Waltham, MA, USA).
Gene and protein expression of vWF and activity of MMP-2 in ECs cultured on functionalized PCL surfaces
For the real-time qPCR of von Willebrand factor (vWF) and matrix metalloproteinase-2 (MMP-2), total RNA was extracted from the ECs after 7 days in culture, reverse-transcribed into cDNA, and analyzed as described above. The expression of vWF and MMP-2 was normalized to the housekeeping gene ribosomal protein L27 (rpl27). Endothelial cells treated with 10 ng/ml TNF-α were used as positive controls.
For the immunoblot detection of vWF protein, cells were lysed in protein lysis buffer (0.1% sodium dodecyl sulfate, 0.5% Triton X-100, and 0.5% sodium deoxycholate dissolved in PBS, pH 7.4) and resolved by denaturing 10% SDS-PAGE. The proteins were then blotted onto a nitrocellulose membrane; after blocking with 5% non-fat milk in Tris-buffered saline with 0.1% Tween (TBS-T), the membrane was stained with a rabbit anti-human vWF antibody at 1:5000 and subsequently with an anti-rabbit HRP-conjugated antibody at 1:10,000 in TBS-T. The vWF was then visualized by chemiluminescence on X-ray film. For the determination of MMP-2 activity, proteins were extracted from trypsinized cells using the protein lysis buffer before being resolved by electrophoresis through 10% SDS-PAGE copolymerized with 0.1% gelatin as the substrate for enzymatic digestion. The molecular sizes of the gelatinolytic activities were determined using protein standards (Fermentas, Prestained PAGE rulers). Upon completion of the gel run, the gel was incubated with 100 ml of renaturation buffer containing 2.5% Triton X-100 for 1 h at room temperature with agitation.
The gel was subsequently incubated in 100 ml of development buffer containing 50 mM Tris base, 200 mM NaCl, 5 mM CaCl2, and 0.02% Brij-35 overnight at 37 °C. The developed gel was then stained with Coomassie Blue, and the gelatinolytic activities of MMP-2 were identified from the transparent bands that appeared at molecular weights of approximately 68 and 98 kDa.
Statistical analysis
Each experiment was carried out with four replicates (n = 4), and the data are presented as mean ± standard deviation (SD) unless otherwise stated. Statistical analysis was carried out by means of one-way analysis of variance (ANOVA) with Tukey's post hoc test. Confidence levels of 95% (p<0.05) and 99% (p<0.01) were used, and no adjustments were made for multiple comparisons.
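For reference, the described analysis (one-way ANOVA followed by Tukey's post hoc test) can be reproduced in Python with SciPy and statsmodels; the group values below are hypothetical stand-ins for n = 4 replicate measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical n = 4 replicate measurements for three surface groups
groups = {
    "PCL": [93, 95, 91, 92],
    "PCL-NH2": [66, 63, 69, 67],
    "PCL-g-P(GMA)1-c-gelatin": [37, 35, 39, 36],
}

f_stat, p_value = stats.f_oneway(*groups.values())  # one-way ANOVA

values = np.concatenate([np.asarray(v, float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # Tukey's post hoc test
```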
Results and discussion
Polycaprolactone (PCL) films with gelatin-coupled poly(glycidyl methacrylate) (P(GMA)) brushes were prepared via the following reaction sequence (Fig. 1): (a) active amine groups were introduced onto the PCL film surfaces by the aminolysis reaction; (b) an alkyl bromide ATRP initiator was immobilized via TEA-catalyzed condensation between the amine groups on the aminolyzed PCL substrates and 2-bromoisobutyryl bromide (BIBB); (c) well-defined P(GMA) functional brushes were covalently grafted from the ATRP initiator-immobilized PCL surface via surface-initiated ATRP of GMA; and (d) cell-adhesive gelatin was directly coupled to the pendant active epoxide groups of the grafted P(GMA). Details of each functionalization step are discussed below.
Aminolysis of PCL substrates and immobilization of ATRP initiator
Aminolysis represents an easy-to-perform chemical technique for engrafting amino groups along polyester chains, and hence has been widely used in the surface modification of scaffolds for tissue engineering applications [31,34]. In this study, PCL substrates were activated by the aminolysis reaction to introduce active amine groups. The relative amount of amine groups on the aminolyzed PCL (defined as PCL-NH2) surface was quantitatively determined by XPS measurements. The [N]/[C] ratio, as determined from the sensitivity-factor-corrected N 1s and C 1s core-level XPS spectral areas, increases with aminolysis time and reaches a maximal value of about 0.043 after 1 h (Fig. 2). This result is consistent with previously reported data [35,36]. As a degradation reaction, aminolysis proceeds preferentially at the amorphous regions of the polymer in diamine solution during the initial period [37]. At longer aminolysis times, the decrease in bound amine groups may be caused by chain scission and the formation of oligomers and other low-mass fragments that are removed from the surface during the reaction and the rinsing process [38]. Thus, the optimal aminolysis time for the PCL film was found to be 1 h, and this reaction time was chosen for the subsequent surface modification and cell studies.

The chemical composition of the PCL film surfaces at the various stages of surface modification was ascertained by XPS (Fig. 3). The C 1s core-level spectrum of the pristine PCL can be curve-fitted into three peak components attributable to C-H, C-O, and O=C-O species, with an area ratio of about 5:1:1 (Table 1), which is in good agreement with the theoretical value of 5:1:1 for the polycaprolactone structure. The appearance of the N 1s signal in the wide scan spectrum (Fig. 3c) and an additional peak component at 285.5 eV, attributable to C-N species, in the curve-fitted C 1s core-level spectrum (Fig. 3d) indicate the successful introduction of amine groups onto the PCL substrates after 1 h of aminolysis. The only peak component, found at a BE of 399.6 eV in the N 1s core-level spectrum, is associated with the free amine groups on the PCL-NH2 film surface (Fig. 3d') [38]. The decrease in the static water contact angle of the PCL substrates from 93 ± 2° to 66 ± 3° is consistent with the presence of amine groups on the PCL-NH2 surface (Table 1). The amine groups on the aminolyzed PCL surface not only improve the surface hydrophilicity, but also offer active sites for further functionalization.

The immobilization of a uniform monolayer of initiators on the solid surface is indispensable in the surface-initiated ATRP process [40]. An alkyl bromide ATRP initiator was introduced onto the PCL-NH2 surface via a TEA-catalyzed condensation reaction to produce the PCL-Br surface. Successful introduction of the alkyl bromide-containing ATRP initiator onto the PCL substrates can be deduced from the appearance of three additional signals with BEs at about 70, 189, and 256 eV, attributable to Br 3d, Br 3p, and Br 3s, respectively, in the wide scan spectrum of the PCL-Br surface (Fig. 3e) [41]. The [Br]/[C] ratio, as determined from the Br 3d and C 1s core-level spectral area ratio, was about 3.17×10⁻² (Table 1). The corresponding Br 3d core-level spectrum of the PCL-Br surface, with a Br 3d5/2 BE of 70.4 eV, is consistent with the presence of alkyl bromide species (Fig. 3f') [41]. The alkyl bromide-immobilized PCL surface became more hydrophobic, as the static water contact angle increased noticeably to 85 ± 3° (Table 1).
Table 1. Grafting yield, surface composition, and water contact angles of the pristine PCL and surface-functionalized PCL surfaces.
e Reaction conditions: [GMA]:[CuBr]:[CuBr2]:[bpy] = 100:1:0.2:2 in methanol-water solution (1:1, v:v) at room temperature for 1 and 3 h to produce the PCL-g-P(GMA)1 and PCL-g-P(GMA)2 surfaces, respectively. f Reaction conditions: the PCL-g-P(GMA)1 and PCL-g-P(GMA)2 surfaces were incubated in PBS (pH 7.4) solution containing gelatin at a concentration of 3 mg/ml at room temperature for 24 h. g GY denotes the grafting yield, defined as GY = (Wa − Wb)/A, where Wa and Wb correspond to the weights of the dry films after and before grafting of the polymer brushes, respectively, and A is the film area (about 3.2 cm²). h SD denotes standard deviation. i Determined from the corresponding sensitivity-factor-corrected element core-level spectral area ratios. j Determined from the curve-fitted C 1s core-level spectra; theoretical values are shown in parentheses. k WCA denotes static water contact angles.
Surface-initiated ATRP of GMA and immobilization of gelatin
P(GMA) is an effective surface linker for immobilizing biomolecules, such as proteins, antibodies, or enzymes, for tissue engineering applications [42]. Fig. 4 shows the respective wide scan, C 1s, and Br 3d core-level spectra of the PCL-g-P(GMA) surfaces from 1 and 3 h of ATRP. The C 1s core-level spectra of the PCL-g-P(GMA) surfaces can be curve-fitted into three peak components with BEs at 284.6, 286.2, and 288.7 eV, attributable to C-H, C-O, and O=C-O species, respectively (Figs. 4b and 4d). For the PCL-g-P(GMA)1 surface from 1 h of ATRP, the [C-H]:[C-O]:[O=C-O] area ratio is about 3.8:2.8:1.0 (Table 1), which deviates slightly from the theoretical value of 3:3:1 for the GMA unit structure. This deviation suggests that the thickness of the P(GMA) brushes is less than the probing depth of the XPS technique (about 8 nm in an organic matrix) [41]. Increasing the reaction time to 3 h leads to a [C-H]:[C-O]:[O=C-O] ratio of about 3.1:3.0:1.0, close to the theoretical value for the GMA repeat unit (Table 1), indicating that the P(GMA) brushes have become thicker than the XPS probing depth. It has been reported that the thickness of P(GMA) brushes grafted on a silicon surface is around 30 nm after 3 h of ATRP of GMA under similar reaction conditions [41]. The presence of the P(GMA) brushes leads to a decrease in the static water contact angle to 62 ± 4° and 61 ± 5° for the PCL-g-P(GMA)1 and PCL-g-P(GMA)2 surfaces, respectively, owing to the hydrophilic epoxide groups [43].

The grafting yield (GY) was measured to evaluate the kinetics of polymer chain growth. As shown in Fig. 5, an approximately linear increase in the GY of the grafted P(GMA) chains with polymerization time is observed for the PCL-Br surface, suggesting that chain growth from the PCL-Br surface proceeds in a controlled and well-defined manner. The GY values of the PCL-g-P(GMA)1 and PCL-g-P(GMA)2 surfaces are about 6.31 ± 1.32 and 14.76 ± 2.63 μg/cm², respectively (Table 1). The persistence of the Br 3d core-level signal (Figs. 4b' and 4d') is consistent with the fact that the living chain end from the ATRP process involves a dormant alkyl halide group, which can be readily reactivated to initiate block copolymerization [25]. However, the molecular weight and molecular weight distribution of the surface-grafted polymers cannot be determined with sufficient accuracy without precise cleavage of the grafted P(GMA) from the film surfaces [27].

Nucleophilic reactions between the -NH2 moieties of biomolecules and pendant epoxide groups have been widely reported [44]. In this work, cell-adhesive gelatin was directly coupled to the pendant epoxide groups of the PCL-g-P(GMA)1 and PCL-g-P(GMA)2 surfaces to produce the corresponding PCL-g-P(GMA)1-c-gelatin and PCL-g-P(GMA)2-c-gelatin surfaces. Fig. 6 shows the respective wide scan, C 1s, and N 1s core-level spectra of the gelatin-immobilized PCL surfaces. The corresponding curve-fitted C 1s core-level spectra are composed of five peak components with BEs at about 284.6, 285.5, 286.2, 288.2, and 289.1 eV, attributable to C-H, C-N, C-O, O=C-NH, and O=C-O species, respectively [41] (Figs. 6b and 6d). The C-N peak component is associated with linkages in the gelatin itself, as well as the linkage between P(GMA) and gelatin, while the O=C-NH peak component is ascribed to the peptide bonds of gelatin. These results, together with the appearance of a strong N 1s signal with a BE at 399.6 eV (Figs. 6b' and 6d'), characteristic of amine species, are consistent with the covalent immobilization of gelatin on the P(GMA) brushes. The surface wettability of the PCL substrates is significantly improved after the immobilization of gelatin, as the water contact angles decrease to 37 ± 2° (PCL-g-P(GMA)1-c-gelatin) and 35 ± 3° (PCL-g-P(GMA)2-c-gelatin) (Table 1). Gelatin is reported to contain large amounts of glycine (Gly) and proline (Pro), which are hydrophilic amino acids [45]. The hydroxyl groups (-OH) generated in the ring-opening reaction of the epoxide groups upon coupling of gelatin could also have contributed to the lower water contact angles of the PCL-g-P(GMA)-c-gelatin surfaces.
Surface topography
The changes in the topography of the PCL film surfaces after each functionalization step were investigated by AFM. Fig. 7 shows representative AFM height images of the pristine and functionalized PCL surfaces over scanned areas of 10 μm × 10 μm. The pristine PCL film surface is relatively uniform and smooth, with an arithmetic mean surface roughness (Ra) of about 19 nm (Fig. 7a). After the aminolysis treatment, the Ra value increases to 31 nm (Fig. 7b). The observation that aminolysis causes a noticeable increase in surface roughness agrees with the findings of other groups [30,35]. The existence of shallow pits is probably the result of the penetration of hexanediamine molecules into the PCL films, since it has previously been reported that the aminolysis reaction can take place to a depth of around 50 μm [23,30]. After graft polymerization of GMA, obvious increases in Ra are observed for the PCL-g-P(GMA)1 (56.9 nm, Fig. 7c) and PCL-g-P(GMA)2 (67.8 nm, Fig. 7e) surfaces, and characteristic fiber-like features of polymer brushes are visible on the P(GMA)-grafted film surfaces (Figs. 7c and 7e). The subsequent coupling of gelatin to the P(GMA) brushes results in a further slight increase in surface roughness, with Ra increasing to 59 nm and 71.5 nm for the PCL-g-P(GMA)1-c-gelatin and PCL-g-P(GMA)2-c-gelatin surfaces, respectively.
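The arithmetic mean roughness reported by the Nanoscope software corresponds to Ra = mean(|z − z̄|) over the scanned height map. A sketch with a synthetic height map standing in for real AFM data:

```python
import numpy as np

def roughness_ra(height_map_nm):
    """Arithmetic mean surface roughness Ra = mean(|z - mean(z)|)."""
    z = np.asarray(height_map_nm, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Synthetic 512 x 512 height map standing in for a 10 um x 10 um AFM scan
z = np.random.default_rng(0).normal(loc=0.0, scale=24.0, size=(512, 512))
print(f"Ra = {roughness_ra(z):.1f} nm")
```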
Endothelial cells proliferation and surface endothelialization
The adhesion and proliferation of endothelial cells (ECs) on the functionalized PCL surfaces were quantitatively determined by the AlamarBlue™ (AB) assay, and the results are shown in Fig. 8. The pristine PCL film surface is the least conducive to supporting cellular growth, since only a marginal increase in cell number over 7 days of culture was observed. The cells attached to the PCL-NH2 film surfaces showed a slight improvement in proliferation compared with the pristine PCL surface. This result is consistent with previous findings that the presence of amine groups on PCL surfaces has a positive, albeit limited, effect on cell proliferation [23,30]. Despite the improvement in surface hydrophilicity and roughness, the grafting of P(GMA) brushes onto the PCL film surfaces did not enhance EC proliferation, which is probably associated with the cytotoxicity and mutagenicity of epoxide groups to ECs [46]. Besides the fact that polymer surfaces with moderate hydrophilicity (water contact angles in the range of 30-70°) and rougher nano-topography are favorable for cell attachment and proliferation [47,48], other factors (e.g., biological cues) may also be required for positive cell interaction with material substrates. This hypothesis was confirmed by the observation that the gelatinized P(GMA)-grafted PCL substrates exhibited higher EC affinity and proliferation than the surfaces lacking the bioactive gelatin motifs.
In fact, the proliferation rates on the PCL-g-P(GMA)1-c-gelatin and PCL-g-P(GMA)2-c-gelatin surfaces were comparable to that on the gelatin-coated coverslips (positive controls). Cell proliferation on the gelatin-immobilized PCL surfaces was not only significantly enhanced, but also positively correlated with the amount of immobilized gelatin. The PCL-g-P(GMA)2-c-gelatin surface exhibited a more pronounced enhancement in cell adhesion and proliferation than the PCL-g-P(GMA)1-c-gelatin surface, as the longer ATRP reaction time allowed more gelatin to be attached. This result suggests that an increase in the surface density of immobilized gelatin can lead to an increase in EC proliferation over time. This phenomenon is probably associated with the fact that the immobilized gelatin provides many epitopes or ligands for cell adhesion molecules, such as integrins, thus mimicking the natural extracellular environment favorable for EC adhesion, spreading, and proliferation. The visualization of EC coverage on the functionalized PCL surfaces enabled a good assessment of the efficacy of endothelialization over the entire surface. Fig. 9 shows representative fluorescence images of LIVE/DEAD-stained ECs on the pristine PCL and functionalized PCL film surfaces after 7 days of culture. The sparse coverage of ECs on the pristine PCL substrate further confirmed the unfavorable surface properties of pristine PCL for cell adhesion and proliferation (Fig. 9b). Poor endothelialization was also observed on the aminolyzed PCL (Fig. 9c) and P(GMA)-grafted PCL surfaces (Fig. 9d). The dense growth of ECs on the PCL-g-P(GMA)1-c-gelatin (Fig. 9e) and PCL-g-P(GMA)2-c-gelatin (Fig. 9f) surfaces indicated a significant improvement in EC coverage on the gelatin-immobilized PCL surfaces. The denser coverage of ECs on the PCL-g-P(GMA)2-c-gelatin surfaces compared with the PCL-g-P(GMA)1-c-gelatin surfaces indicates that the efficacy of endothelialization is positively correlated with the amount of immobilized gelatin. Taken together, the results suggest that the higher the surface concentration of immobilized gelatin, the better the endothelialization efficacy of the material within a given period of time.
Hemolysis rate test
The hemolysis rate is an important parameter in the characterization of blood compatibility: the lower the hemolysis rate, the better the blood compatibility. Figure 10 shows the hemolysis rates of the pristine PCL and surface-functionalized PCL samples. It can be seen that the hemolysis rate of the functionalized samples shows no substantial improvement. The gelatin-immobilized PCL substrates even show relatively higher hemolysis rates than the pristine PCL and PCL-NH2 surfaces. This result is consistent with previous findings that gelatin exhibits a somewhat hemostatic effect by nature. However, the hemolysis rate of the gelatin-immobilized PCL samples is approximately 3%, far below the accepted threshold of 5% for biomaterial applications. Thus, the gelatin-immobilized samples can be regarded as materials with good hemocompatibility.
Coagulation activity on the bare and endothelialized PCL substrates
Blood coagulation, particularly under conditions of relatively low flow, has been recognized as one of the main causes of vascular occlusion [49]. Activated coagulation factors influence the clotting time through the extrinsic, intrinsic, and common coagulation pathways. Prothrombin time (PT) is used to evaluate deficiencies in the extrinsic factors and represents the time for blood plasma to clot after the addition of thromboplastin (an activator of the extrinsic pathway) [50]. Activated partial thromboplastin time (APTT) is used to evaluate the intrinsic factors, such as VIII, IX, XI, and XII, and the common coagulation pathway factors V, X, and II [50]. Both PT and APTT are commonly used to screen for adverse activation of the coagulation pathways on vascular grafts and to evaluate their haemocompatibility in vitro. Thus, the PT and APTT values of the bare and endothelialized PCL substrates with various surface modifications were measured. The normal ranges of PT and APTT for healthy blood plasma are 12.0-14.5 s and 27.0-35.6 s, respectively [51]. Fig. 11 shows the PT and APTT results for the bare and endothelialized PCL substrates with various surface modifications. For the bare PCL and functionalized PCL surfaces, both PT and APTT are within the normal ranges of coagulation time. None of the PCL substrates was found to affect the coagulation pathways significantly, and no discernible effect was observed in the presence of gelatin, indicating that the surface functionalization activated neither the intrinsic nor the extrinsic coagulation pathway. This result is consistent with previous findings by other groups that cell-adhesive proteins and peptides do not affect or convert the blood coagulation pathways [51,52]. Even after the PCL substrates were coated with a layer of ECs, the PT and APTT readings were still within the normal clinical reference range, and no significant differences in coagulant activity were observed on the endothelialized PCL substrates compared with the bare PCL substrates. The results also revealed that the ECs cultured on the pristine and functionalized PCL surfaces remained unactivated and did not exhibit procoagulant phenotypes. Hence, it can be concluded that the presence of the monolayer of ECs had no effect on the intrinsic or extrinsic coagulation pathways.
Nitric Oxide (NO) production
Nitric oxide (NO) is an important regulator of vascular tone and platelet adhesion, and the continuous release of NO by ECs prevents thrombogenesis [53]. In this study, the NO secretion of the ECs on the pristine PCL and functionalized PCL surfaces was measured. As shown in Fig. 12, the amount of NO secreted by ECs on the gelatin-immobilized PCL surfaces was significantly higher than on the pristine PCL, aminolyzed, and P(GMA)-grafted PCL surfaces. The NO production of the ECs seeded on the PCL-g-P(GMA)2-c-gelatin surface was around 2-fold higher than that on the PCL-g-P(GMA)1-c-gelatin surface, indicating that the improved NO production observed for ECs grown on the gelatin-immobilized PCL surfaces may be positively correlated with the amount of covalently immobilized gelatin. These results suggest that a high density of immobilized gelatin leads to enhanced NO secretion.
Platelet activation on the bare and endothelialized PCL substrates
Apart from the coagulation pathways, platelet activation is considered another important criterion in assessing the blood compatibility of a biomaterial surface. The activation of attached platelets results in platelet aggregation and the formation of a thrombus [54]. The subendothelial collagens (types I-IV) have been found to interact directly with platelets to trigger their activation and initiate thrombogenesis [55]. As gelatin is a derivative of collagen, one downside of using gelatin-immobilized surfaces could be the detrimental adhesion and activation of platelets, which could initiate clotting. Therefore, the extent of platelet activation by the different biomaterial surfaces was studied. In this study, platelet activation on the bare and endothelialized PCL substrates was determined using the P-selectin assay, since P-selectin is one of the intracellular granular molecules released upon platelet activation and stimulation [5]. For the bare PCL substrates, the amount of activated platelets on the gelatin-immobilized PCL surfaces was comparable to that on the pristine PCL surfaces (Fig. 13a). In contrast, platelet activation was found to be significantly higher on the P(GMA)-grafted PCL surfaces, indicative of an increased risk of thrombogenesis. The results suggest that the immobilization of gelatin on the P(GMA)-grafted PCL surfaces not only rendered them adhesion-promoting, but also decreased the risk of inducing platelet activation. It has been reported previously that biological motifs immobilized on polymers cannot bind effectively to the platelet integrin unless their separation distance is between 1.48 and 2.2 nm [56], and that polymers with a small spacer, such as lauric acid-conjugated GRGDS, exhibited no increase in activation [57]. Here, the gelatin was directly conjugated onto the grafted P(GMA) brushes (a small spacer), and thus the gelatin motif could not gain access to the binding sites of the platelet integrin (such as α2β1). In addition, it is probable that other surface properties (e.g., enhanced hydrophilicity) were also responsible for the reduced platelet activation.
In the case of the endothelialized PCL substrates, the amount of activated platelets on the gelatin-immobilized PCL surfaces was significantly reduced with respect to the other endothelialized PCL substrates (Fig. 13b), indicative of the good anti-thrombogenic behavior of the confluent EC layer. This result is consistent with the high level of NO production by ECs on the gelatinized PCL surfaces. The P-selectin expression on the endothelialized pristine PCL and PCL-NH2 surfaces was higher than on the corresponding bare substrates (Fig. 13b), which is in line with the well-established fact that subconfluent EC layers are more thrombogenic in nature [13]. Platelet activation was significantly enhanced on the endothelialized P(GMA)-grafted PCL surfaces, as observed from the high levels of P-selectin expression, indicating an increased risk of thrombogenesis on those surfaces. Overall, the above results show that the thrombogenicity of a biomaterial is influenced by both EC confluency and the surface properties of the biomaterial. Consequently, it can be concluded that the immobilization of gelatin on the P(GMA) brushes prevented platelet activation, whereas the presence of the P(GMA) brushes alone led to a pro-thrombogenic surface.
Expression and activity of vWF and MMP-2 of EC on the surface-functionalized PCL samples
In order to further investigate the thrombogenicity of the endothelialized surfaces, we performed real-time PCR and protein immunoblotting on factors that mediate platelet adhesion to ECs. The expression of von Willebrand factor (vWF) on endothelial cells promotes platelet adhesion, and high or abnormal expression of vWF has been implicated in pathological conditions such as thrombosis [58]. Matrix metalloproteinase-2 (MMP-2) is another factor known to be involved in platelet aggregation, as well as having important roles in the degradation and remodeling of the endothelial extracellular matrix [59,60]. Seven days after the seeding of ECs, the relative expression of both vWF and MMP-2 mRNA in the ECs on the PCL-g-P(GMA)1-c-gelatin and PCL-g-P(GMA)2-c-gelatin surfaces was downregulated compared with the ECs on the PCL-g-P(GMA) surfaces and gelatin-coated coverslips (Figs. 14a and 14c). Immunoblotting also suggests that the vWF protein level is lower in ECs on the PCL-g-P(GMA)-c-gelatin surfaces than on the PCL-g-P(GMA) surfaces, although vWF expression in ECs on the pristine PCL surface and the gelatin-coated coverslips could not be detected (Fig. 14b). Nevertheless, these results suggest that pro-thrombogenic factors could be increased on the bare PCL-g-P(GMA) surfaces and that the conjugation of gelatin could help to reduce thrombogenicity.
Conclusion
This study described the successful biofunctionalization of PCL substrates with tunable surface densities of covalently immobilized gelatin by surface-initiated ATRP of glycidyl methacrylate (GMA). Kinetics studies revealed that the grafting yield of the functional P(GMA) brushes increased linearly with polymerization time, and the amount of immobilized gelatin increased with the concentration of epoxide groups on the P(GMA) brushes. The significant improvement in the adhesion and proliferation of ECs on the gelatin-immobilized PCL substrates was found to be positively correlated with the amount of covalently immobilized gelatin. Blood compatibility tests demonstrated that the ECs cultured on the gelatin-immobilized P(GMA) surfaces exhibited low platelet activation and significantly increased nitric oxide (NO) production, while the coagulation pathways were not affected either before or after EC coverage. Overall, the high surface density of immobilized gelatin obtained by surface-initiated ATRP on the PCL surfaces is favorable for EC attachment and proliferation. The attached ECs maintained an unactivated, non-thrombogenic phenotype that mimics the EC lining of a healthy blood vessel. Hence, such surfaces may have great potential for vascular graft applications.
Figure 1.
Figure 1. Schematic illustration of the process of (a) aminolysis of PCL substrates to introduce free amino groups (the PCL-NH2 surface), (b) immobilization of an alkyl bromide-containing initiator via condensation reaction to give the PCL-Br surface, (c) surface-initiated atom transfer radical polymerization (ATRP) of GMA from the PCL-Br surface to produce the PCL-g-P(GMA) surface, and (d) subsequent covalent conjugation of gelatin to obtain the PCL-g-P(GMA)-c-gelatin surface.
Figure 2.
Figure 2. The [N]/[C] ratio of the aminolyzed PCL surface as a function of aminolysis time, determined by XPS measurements. The aminolysis reaction of the PCL films proceeded at 40 °C in 10 wt% 1,6-hexanediamine/2-propanol solution. Error bars represent the standard deviation over separate measurements on three PCL films. The optimal aminolysis time was 1 h, with an [N]/[C] ratio of 0.043.
Figure 3.
Figure 3. Wide scan and C 1s core-level curve-fitted XPS spectra of the (a,b) pristine PCL, (c,d) PCL-NH2 from 1 h of aminolysis, and (e,f) PCL-Br surfaces. Insets (d') and (f') show the N 1s and Br 3d core-level XPS spectra of the PCL-NH2 and PCL-Br surfaces, respectively.
Figure 4.
Figure 4. Wide scan, C 1s, and Br 3d core-level curve-fitted XPS spectra of (a,b,b') PCL-g-P(GMA)1 from 1 h of ATRP and (c,d,d') PCL-g-P(GMA)2 from 3 h of ATRP. Successful grafting of P(GMA) polymer brushes can be deduced from [C-H]:[C-O]:[O=C-O] peak component area ratios comparable to the theoretical value of 3:3:1 for the GMA molecular structure.
Figure 5.
Figure 5. Linear relationship between the grafting yield (GY) of the P(GMA) brushes and the surface-initiated ATRP time. The polymer chain growth was tunable by varying the reaction time.
Figure 6.
Figure 6. Wide scan, C 1s, and N 1s core-level curve-fitted XPS spectra of the (a,b,b') PCL-g-P(GMA)1-c-gelatin and (c,d,d') PCL-g-P(GMA)2-c-gelatin surfaces. Two additional peak components, C-N and O=C-NH, appear as a result of the immobilized gelatin.
Figure 8.
Figure 8. EC proliferation profile on the gelatin-coated coverslips, pristine PCL, and functionalized PCL surfaces after 1, 3, 5, and 7 days of incubation at 37 °C in a 5% CO2 atmosphere, as determined by the AlamarBlue™ (AB) assay. Data presented as means ± SD. *p<0.05 and **p<0.01 refer to statistically significant differences compared with the pristine PCL surface. The cell proliferation rate of ECs seeded on the gelatin-immobilized surfaces was significantly improved compared with that on the pristine PCL film.
Figure 10.
Figure 10. Hemolysis rates of the pristine PCL and surface-functionalized PCL samples. Data presented as means ± SD, n = 3.
Figure 11.
Figure 11. (a) PT and (b) APTT results (means ± SD, n = 3) for the bare and endothelialized surfaces of the pristine PCL and surface-functionalized PCL. The coagulation activity of all surfaces was found to be in the normal range for healthy blood plasma.
Figure 12.
Figure 12. NO production of the ECs seeded on the pristine and functionalized PCL substrates. Data presented as means ± SD, n = 3. *p<0.05 and **p<0.01 correspond to statistically significant differences compared with the pristine PCL.
Figure 13.
Figure 13. P-selectin expression for the (a) bare and (b) endothelialized surfaces of the pristine and functionalized PCL substrates. Platelet-rich plasma (PRP) was used as a positive control. *p<0.05 and **p<0.01 correspond to statistically significant differences compared with the pristine PCL surface.
Figure 14.
Figure 14. Real-time qPCR revealed that the expression of (a) vWF and (c) MMP-2 in ECs on PCL-g-P(GMA)-c-gelatin surfaces is lower than on PCL-g-P(GMA) surfaces and gelatin-coated coverslips. Expression levels are normalized using the housekeeping gene rpl27 and taken relative to ECs on pristine PCL (dotted line) (mean ± SD, n = 3). Endothelial cells treated with TNF-α for 6 h were used as a positive control for the expression of both genes. (b) Immunoblotting of vWF protein expressed in ECs on PCL-g-P(GMA)-c-gelatin is also reduced compared with PCL-g-P(GMA) surfaces, corroborating the real-time PCR results.
"year": 2013,
"sha1": "961df55a773f036cb66e3eb40e7a7a78030d3aa6",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/43734",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "3d90bfda780642703cf993a344429216c58bae83",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Real-Time Prediction of the Trend of Ground Motion Intensity Based on Deep Learning
In order to predict the intensity of earthquake damage in advance and improve the effectiveness of earthquake emergency measures, this paper proposes a deep learning model for real-time prediction of the trend of ground motion intensity. The input sample is the real-time monitoring recording of the currently received ground motion acceleration. According to the different sampling frequencies, the neural network is constructed from several subnetworks, and the outputs of the subnetworks are combined into one. After training and verification of the model, the results show that the model has an accuracy rate of 75% on the testing set, which makes it effective for real-time prediction of ground motion intensity. Moreover, the correlation between Arias intensity and structural damage is stronger than the correlation between peak acceleration and structural damage, so the model is useful for determining real-time response measures for earthquake disaster prevention and mitigation, compared with the more common current antiseismic measures based on predicted PGA.
Introduction
Earthquakes are among the natural disasters worldwide that can cause great harm to humans. At the current level of scientific research, earthquakes cannot be predicted very accurately. The main reason is that an earthquake is ground shaking caused by the release of a large amount of crustal energy, which is a very complicated process. Although there are certain rules to follow, a great many variables are involved in predicting earthquakes, many of which cannot be measured [1,2]. Therefore, current earthquake prediction is mainly based on earthquake risk estimation, that is, the probability of the largest damage caused by an earthquake in a certain city or building area within a certain number of years [3][4][5].
Although methods such as the ETAS model have proven effective in predicting triggered earthquakes, they often fail because they tend to underestimate the number of future earthquakes, and the degree of underestimation is related to the time of the main shock. Joffe et al. pointed out that when the accuracy of earthquake prediction is insufficient to meet demand, it is important to find a new method that can draw on a wider and larger source of information [2]. This coincides almost exactly with the deep learning approach.
Deep learning has attracted much attention due to its extensive success in various fields [6]. In the field of earthquake research, the core task is to obtain information and knowledge by analyzing data. Deep learning technology has strong modeling, feature extraction, and data analysis capabilities and has been successfully applied to many challenging problems, such as earthquake identification and classification [7,8], seismic phase picking [9,10], and earthquake early warning [11,12]. In terms of earthquake prediction, deep learning also has many applications [13], but almost all of them are long-term predictions [14,15] or short-term predictions [16]; real-time predictions are rare.
In current earthquake prediction research, the output ground motion parameters mainly include peak acceleration and peak velocity, which can only reflect a single ground motion feature and cannot fully and effectively infer the damage caused by ground motion. Many scholars have found that Arias intensity is a ground motion energy parameter that incorporates ground motion amplitude, frequency spectrum, and duration characteristics. It has a strong correlation with disaster phenomena caused by earthquakes, better reflects certain characteristics of ground motions, and is more reliable and effective for the prediction of earthquake damage [17][18][19][20][21].
At present, studies have shown that Arias intensity performs very well in predicting the degree of damage to short-period structures caused by ground motions [22], the probability of earthquake-induced landslides [23], the possibility of foundation failure due to earthquake-induced sand liquefaction [24], and other aspects. Therefore, this article uses deep learning technology to build a model that takes 30 s of real-time monitored horizontal acceleration time history recordings as input and predicts, in real time, the Arias intensity trend of the acceleration time history over the following 5 s.
The main contributions of our research are the following: (1) When an earthquake occurs, the severity or development trend of the earthquake disaster can be determined in advance. The proposed model may cooperate with an earthquake early warning system so that real-time monitoring and the emergency measures of a building's seismic system can more effectively avoid casualties and property losses.
(2) Our model suggests that Arias intensity can be predicted in many situations. (3) This article builds a purely data-driven model to explore the possibility of using current seismic acceleration recordings to predict the future.
Fully Connected Neural Network.
Commonly used neural network models include fully connected neural networks, convolutional neural networks, and recurrent neural networks. The fully connected neural network is shown in Figure 1. The neurons are laid out in layers: the leftmost layer is the input layer, the rightmost layer is the output layer, and the middle layers are hidden layers, so called because they are not directly visible. All neurons in two adjacent layers are connected to each other, and each connection has a weight. Compared with the one-dimensional arrangement of neurons in each layer of a fully connected neural network, a convolutional neural network (shown in Figure 2) is very different in structure: each layer of neurons is arranged in a cube structure, and there is generally a pooling layer between convolutional layers to reduce the number of samples in each layer and thereby the number of parameters. The recurrent neural network (shown in Figure 3) is somewhat similar to the fully connected network, but the difference is that the output of each neuron of a recurrent neural network depends not only on its input but also on the output of the previous neuron.
Because recurrent neural networks suffer from problems such as vanishing gradients and training instability, while the fully connected neural network, in which the neurons of adjacent layers are all connected, is conducive to predicting the overall trend and has a shorter calculation time than other network structures (shorter calculation time is more meaningful for real-time prediction), the model created in this article is a fully connected neural network (FCNN). The model is divided into two stages. The first stage is segmented input. This model is for real-time monitoring of the change trend of the Arias intensity, that is, the change trend of the acceleration. If all 3000 horizontal acceleration values of the 30 s window were used as input, the amount of calculation would be relatively large, and the calculation time could not meet the requirements of rapid prediction. In addition, the input of the last 5 s is more important and plays a decisive role in the result, so we assign different sampling frequencies according to the time position of each data point to reduce the calculation time without greatly affecting the results. Since the sampling frequencies of the input data differ, the input is separated according to sampling frequency, and, according to the size of each input, using different numbers of neural network layers and neurons gives a better training effect. The second stage combines the outputs of the first stage to obtain the prediction result. The first stage is divided into three parts. The first part takes the data sampled at 1 Hz in the time range 0∼15 s through one fully connected layer, with 15 inputs and 5 outputs; the second part takes the data sampled at 10 Hz in the time range 15∼25 s through two fully connected layers, with 100 inputs and 20 outputs; and the third part takes the data sampled at 100 Hz in the time range 25∼30 s through three fully connected layers, with 500 inputs and 40 outputs. The second stage combines the 65 outputs of the three parts of the first stage and forms a two-class model through three fully connected layers. Because the training speed of the ReLU activation function is much faster than that of the Sigmoid and Tanh functions, the models constructed in this paper use the ReLU activation function [25]. When calculating the loss function, a softmax layer is added after the last layer of the model. The model structure is shown in Figure 4.
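The two-stage architecture described above can be sketched with the Keras functional API. The hidden-layer widths inside the second and third subnetworks are not reported in the text and are arbitrary assumptions here; the input sizes (15, 100, 500), the subnetwork output sizes (5, 20, 40), the merged width of 65, the ReLU activations, and the softmax cross-entropy loss with the Adam optimizer follow the description:

```python
import tensorflow as tf
from tensorflow.keras import layers

in1 = layers.Input(shape=(15,))    # 0~15 s sampled at 1 Hz
x1 = layers.Dense(5, activation="relu")(in1)            # one FC layer

in2 = layers.Input(shape=(100,))   # 15~25 s sampled at 10 Hz
x2 = layers.Dense(64, activation="relu")(in2)           # hidden width assumed
x2 = layers.Dense(20, activation="relu")(x2)            # two FC layers

in3 = layers.Input(shape=(500,))   # 25~30 s sampled at 100 Hz
x3 = layers.Dense(256, activation="relu")(in3)          # hidden widths assumed
x3 = layers.Dense(128, activation="relu")(x3)
x3 = layers.Dense(40, activation="relu")(x3)            # three FC layers

merged = layers.Concatenate()([x1, x2, x3])             # 5 + 20 + 40 = 65
h = layers.Dense(64, activation="relu")(merged)         # second stage:
h = layers.Dense(32, activation="relu")(h)              # three FC layers
logits = layers.Dense(2)(h)                             # softmax applied in the loss

model = tf.keras.Model([in1, in2, in3], logits)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```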
Data Processing Method.
Data are critical for deep learning models, affecting both the convergence speed of the model and the training effect. Therefore, it is important to preprocess the data before inputting them to the model, to make them accurate, consistent, and applicable. The data preprocessing steps are as follows (Figure 5): (1) Acceleration Scaling. The data used in this article are all downloaded from K-net and KiK-net. The acceleration in each seismic recording is not a true value but an amplified value. Since the Arias intensity requires the square of the acceleration, it would be distorted by the square of the magnification if calculated from the amplified values, and the magnification factor of each station may not be the same, so it is necessary to scale the ground motion acceleration recording according to the scale factor of each station. The unit of acceleration is gal.
(2) Baseline Correction. Ground motion recordings are susceptible to low-frequency noise, such as instrument noise and background noise, resulting in baseline errors that lead to serious drift and distortion in the acceleration and in the velocity and displacement waveforms obtained by integration. A simple baseline correction, subtracting the average value from the scaled ground motion, can greatly reduce the influence of low-frequency noise on the acceleration recording. In practical applications, the average value of the steady noise lasting 20 seconds before the earthquake can be subtracted to reduce the influence of low-frequency noise. (3) Normalization. A total of 30 seconds of data is selected, and the horizontal ground motion acceleration is sampled at 1 Hz, 10 Hz, and 100 Hz in the first 15 seconds, the middle 10 seconds, and the last 5 seconds, respectively. When downsampling from high frequency to low frequency, we take the absolute average value. Taking 1 Hz as an example, if the ground motion recording is at 100 Hz, that is, there are 100 acceleration values in one second, then the absolute average of the 100 values in that second is taken as one data point of the 1 Hz sample. After a complete sample is obtained, normalization is performed; that is, the entire sample is divided by its maximum value.
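As a sketch of this pipeline (not code from the paper), the following routine combines the three steps; it assumes a raw 100 Hz recording that includes at least 20 s of pre-event noise ahead of the 30 s input window:

```python
import numpy as np

def make_input_sample(acc_counts, scale, fs=100, pre_event_s=20):
    """Scale to gal, baseline-correct, and build the 615-value model input
    (15 values @ 1 Hz, 100 @ 10 Hz, 500 @ 100 Hz) from the last 30 s."""
    a = np.asarray(acc_counts, dtype=float) * scale       # (1) apply station scale factor
    a = a - a[: pre_event_s * fs].mean()                  # (2) subtract pre-event noise mean
    win = a[-30 * fs:]                                    # last 30 s of the recording

    def abs_mean_pool(x, factor):
        return np.abs(x).reshape(-1, factor).mean(axis=1)

    p1 = abs_mean_pool(win[: 15 * fs], fs // 1)           # 0~15 s  -> 15 values at 1 Hz
    p2 = abs_mean_pool(win[15 * fs : 25 * fs], fs // 10)  # 15~25 s -> 100 values at 10 Hz
    p3 = win[25 * fs :]                                   # 25~30 s -> 500 values at 100 Hz

    sample = np.concatenate([p1, p2, p3])                 # 615 values in total
    return sample / np.abs(sample).max()                  # (3) normalize by the maximum
```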
Training and Testing Methods.
The data are divided into a training set and a testing set, used for model training and testing, respectively. The ratio of sample numbers is 4:1, and no samples are shared between the two sets. Each sample consists of an acceleration recording and a label. The acceleration recording comprises the 615 acceleration values with a duration of 30 seconds described above, which serve as the input to the model. The label is the trend of the Arias intensity over the following 5 seconds: if the trend increases, the label is 1; if it decreases, the label is 0. The label serves as the target value against which the model output is compared.
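The label is derived from the Arias intensity, Ia = π/(2g) ∫ a(t)² dt. A sketch follows; since the paper does not define the ratio α used for labeling explicitly, the assumption here is that α compares the Arias intensity of the future 5 s window with that of the last 5 s of the input window:

```python
import numpy as np

G_GAL = 981.0  # gravitational acceleration in gal (cm/s^2)

def arias_intensity(acc_gal, fs=100):
    """Ia = pi / (2 g) * integral of a(t)^2 dt, for acceleration in gal."""
    a = np.asarray(acc_gal, dtype=float)
    return np.pi / (2.0 * G_GAL) * np.trapz(a ** 2, dx=1.0 / fs)

def trend_label(last_5s, future_5s, fs=100):
    """Label 1 if the trend increases (alpha > 1), else 0.
    Assumption: alpha = Ia(future 5 s) / Ia(last 5 s of the input window)."""
    alpha = arias_intensity(future_5s, fs) / arias_intensity(last_5s, fs)
    return int(alpha > 1.0)
```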
After the output of the model passes through the softmax layer, two probability values are obtained, corresponding to the probabilities the model assigns to the Arias intensity decreasing or increasing in the following 5 seconds. The purpose of model training is to make the difference between the output and the label value approach 0: if the intensity actually increases, the predicted probability of an increase should be close to 1 and the probability of a decrease close to 0, and vice versa. This difference is called the loss function. The loss function used in this paper is the sparse softmax cross entropy, and the optimization function used to reduce the cross entropy is the adaptive moment estimation optimizer (Adam), which is derived from the AdaGrad and RMSProp optimization functions. It has the following advantages: (1) it is simple to implement and computationally efficient; (2) its hyperparameters hardly need to be adjusted, reducing the influence of such factors and improving training efficiency; (3) the learning rate can be automatically adjusted to a certain extent; and (4) it is very suitable for large-scale data and parameter scenarios. First of all, whether for earthquake prediction or earthquake risk estimation, prediction is more meaningful for larger earthquakes, so the earthquake recordings we downloaded from K-net and KiK-net are selected from the period 2003 to 2019 with magnitudes of 5.0 or higher, involving a total of 1667 stations, and are divided into magnitude intervals of 5 to 5.9, 6 to 6.9, 7 to 7.9, and 8 and above; in both the training set and the testing set, the number of samples in each interval is kept equal in order to obtain a better training effect and allow the results to be compared and analyzed. Then, when we sample a recording, the sampling end point of each sample can be taken at 5 s, 10 s, 15 s, and so on, up to the end of the recording; for each end point, the past 30 s of data are taken according to the different sampling frequencies.
That is, the samples obtained from each recording cover different stages: before, during, and after the earthquake. If the end point at 5 s or 10 s does not allow a full 30 s to be taken, zero padding is applied in front so that the sample has the same acceleration data structure. Finally, we collect the occurrence times of the earthquakes in all recordings and divide the training set and the testing set by earthquake time. The training set comprises 80% of the samples and the testing set 20%, so we sort the earthquake times from earliest to latest and take the four-fifths point in time as the node dividing the two sets: the training set contains the earthquakes that occurred before the time node, and the testing set those that occurred after it.
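A minimal sketch of this chronological split (a hypothetical helper, not from the paper):

```python
import numpy as np

def split_by_event_time(event_times, train_frac=0.8):
    """Chronological train/test split: events up to the train_frac quantile
    of occurrence time go to training, later events to testing, so that no
    earthquake contributes samples to both sets."""
    t = np.sort(np.asarray(event_times))
    cutoff = t[int(np.ceil(len(t) * train_frac)) - 1]
    train = [e for e in event_times if e <= cutoff]
    test = [e for e in event_times if e > cutoff]
    return train, test
```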
Hyperparameter Adjustment.
During model training, the adjustment of hyperparameters affects training efficiency and training results. This article involves two hyperparameters: the learning rate and the batch size.
(1) Learning Rate. The learning rate is the degree to which the model reduces the value of the loss function at each step. If the learning rate is high, the model may converge faster in the first few iterations, but it is likely that the model cannot reach the global optimum. If the learning rate is low, training efficiency suffers greatly. We use the controlled variable method to measure the loss of the model while ensuring that the other variables remain the same. As shown in Figure 6, the learning rate is set to 0.001 in this article and is multiplied by 0.99 at each iteration, so that it gradually decreases as training proceeds and the loss of the model approaches the minimum more closely.
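This decay rule, lr_k = 0.001 × 0.99^k, maps onto a standard Keras exponential-decay schedule. Whether an "iteration" here is a batch step or an epoch is not stated in the paper; the sketch below decays once per optimizer step:

```python
import tensorflow as tf

# lr(step) = 0.001 * 0.99**step, applied at every optimizer step
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3,
    decay_steps=1,
    decay_rate=0.99,
    staircase=True,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```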
(2) Batch Size. The batch size refers to the number of samples input to the model at each step. If it is too small, the convergence direction of the model is easily biased; if it is too large, the model easily becomes stuck in a local optimum and fails to reach the global optimum. According to the changes in the accuracy of the training set and the testing set shown in Figure 7, the model achieves relatively high generalization ability with the least over-fitting at a batch size of 50, so a batch size of 50 is appropriate.
Results
In this paper, a method for predicting the variation trend of the ground motion Arias intensity using an FCNN is proposed. After scaling, baseline correction, and normalization of the original seismic recording data, segmented multi-rate sampling is carried out, and the FCNN is used for identification and classification. After 100 rounds of training, we selected the most accurate model parameters for analysis based on the testing data set. Training was carried out using a data set of 160,000 samples, classified according to the variation trend of the Arias intensity, and the training effect of the model was tested on a testing set of 40,000 samples. With extended training the model exhibits over-fitting; that is, the accuracy on the training set increases while the accuracy on the testing set decreases, as shown in Figure 8(a). Therefore, the model reported in this paper is taken at the point of highest accuracy before the continuous decline on the testing data set. The average accuracy of the model is 76.5%. For earthquakes of magnitude 5.0∼5.9, 6.0∼6.9, 7.0∼7.9, and above 8.0, the average prediction accuracies are 83.2%, 76.9%, 75.5%, and 70.6%, respectively, as shown in Figure 8(b).
Regarding the computational complexity, it represents the growth of the program's runtime with the size of the data, which we express using big O notation. For a trained fully connected neural network used in practice, the number of network layers and the number of neurons in each layer are fixed and there are no loop calculations, so its computational complexity depends entirely on the number of input samples: in big O notation, the complexity is O(n). Thus, the trained model is highly effective for real-time use; the running time is less than 0.1 second for each evaluation of a single time series on a common computer.

As examples, the model predicts a 99.6% probability that the intensity of a magnitude 5.4 earthquake will increase, a 65.6% probability that the intensity of a magnitude 6.1 earthquake will decrease, a 58.2% probability that the intensity of a magnitude 7.3 earthquake will decrease, and a 74.0% probability that the intensity of a magnitude 9.0 earthquake will increase. The real-time monitoring recordings are input to the model to determine whether the Arias intensity will increase or decrease in the future. The red line represents whether the intensity actually increased, as determined from the known recordings, i.e., the label: samples with alpha > 1 are labeled 1 and samples with alpha < 1 are labeled 0.
Conclusion and Discussion
Through the above research and results, we can draw the following preliminary conclusions: (1) The model predicts the future Arias intensity trend with an overall accuracy of 76.5%. When an earthquake occurs, it can provide a reference for judging the development trend of the earthquake or the scale of the damage it causes. This model is feasible for predicting the Arias intensity trend of seismic acceleration recordings; the sample data input to the model are only the acceleration recordings, without other variables, and a ground motion intensity trend prediction of a certain accuracy can then be obtained quickly, which is meaningful. In practical applications, real-time monitoring recordings can be used as input to predict the trend of future monitoring values (Figure 9), and based on the predicted values, the damage intensity of the earthquake could be estimated more accurately in advance in combination with other parameters such as propagation path and site conditions. (2) It can be seen from the histogram in Figure 8(b) that the accuracy of this model in predicting intensity trends basically decreases as the magnitude increases from 5 to 8. (3) In theory, any time series, including seismic acceleration recordings, can be represented as a function, although that function may be too complicated to implement directly. In deep learning, as long as the numbers of network layers and neurons are sufficient and the training procedure meets the requirements, after a long period of training and debugging the network can approximate any complex function with fairly high accuracy. Therefore, with deeper research on the application of deep learning, it may become a powerful tool for the analysis of ground motions.
Earthquake prediction is a recognized worldwide problem in seismology, and it is currently in the development and exploration stage. Due to its complexity, current earthquake prediction is mainly based on long-term earthquake risk estimation and short-term prediction from observed earthquake precursor phenomena, almost all of which are empirical. Experts and scholars in the field of earthquake prediction at home and abroad now have a certain understanding of medium- and long-term prediction, so the risk estimation has reference value. However, the understanding of earthquake precursor phenomena is far from reaching the level of regularity, so the success rate is not very high. Owing to the rapid development of deep learning technology and its ability to identify and judge data features that scientists cannot currently find, we have proposed a deep learning model for predicting earthquake trends and achieved a high success rate. Nevertheless, this model still has some problems to be solved urgently: (1) The data used in this article come from K-net and KiK-net. The earthquake recordings used are large, medium, and small earthquakes in Japan. Therefore, it is still necessary to verify whether the model is applicable to other regions. In the future, we will also add data from other countries and regions to the training data to improve the generalization ability of the model. (2) From Figure 8(b), it can be seen that the accuracy of the model for predicting the future 5 s Arias intensity trend of large earthquakes is not high, yet accuracy matters most for the large earthquakes that can cause the most damage. The anticipation of large earthquakes is crucial. The acceleration recordings of large earthquakes are more complex and have more influencing factors, so we will optimize the model for this point and strive to improve the accuracy of prediction for large earthquakes in the future.
(3) The model predicts the trend of ground motion intensity over the next 5 seconds. With the optimization of the model and the increase in data in the future, it may be able to predict the trend over 10 seconds or more.
Data Availability
The earthquake recordings data used to support the findings of this study are processed from K-net and KiK-net.
Disclosure
All statements, results, and conclusions are those of the researchers and do not necessarily reflect the view of funders.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2021-10-21T15:11:21.062Z | 2021-10-18T00:00:00.000 | {
"year": 2021,
"sha1": "07cc910d87561f94cbdb362872cef7d7ab09b150",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1155/2021/5518204",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f9f4d651c65f948afbf68e4e9a6febf904e4a2c7",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": []
} |
56058758 | pes2o/s2orc | v3-fos-license | Air Quality and Bioclimatic Conditions within the Greater Athens Area, Greece - Development and Applications of Artificial Neural Networks
Panagiotis Nastos1, Konstantinos Moustris2, Ioanna Larissi3 and Athanasios Paliatsos4 1Laboratory of Climatology and Atmospheric Environment, Faculty of Geology and Geoenvironment, University of Athens, 2Department of Mechanical Engineering, Technological Educational Institute of Piraeus, 3Department of Electronic-Computer Systems Engineering, Technological Educational Institute of Piraeus, 4General Department of Mathematics, Technological Educational Institute of Piraeus, Greece
Introduction
Over the past few decades the phenomenon of urbanization has resulted in severe problems, and the quality of human life has deteriorated in megacities around the world. This chapter deals with the forecasting ability of Artificial Neural Networks (ANNs) in predicting the air quality as well as the bioclimatic conditions in an urban environment. For this purpose, different ANNs are demonstrated in this chapter. These ANNs have been developed in order to predict the air quality as well as the bioclimatic conditions within the Greater Athens Area (GAA), Greece. The prognosis for both air quality and bioclimatic conditions within GAA concerns the next three days (24- to 72-hour prediction). For the proper training of the ANNs for both air quality and bioclimatic conditions, hourly values of specific meteorological parameters such as air temperature, relative humidity, wind speed, wind direction, air pressure, sunshine and solar radiation, as well as hourly values of air pollutant concentrations, have been used. These hourly data have been recorded at many different sites within GAA by the network of the Greek Ministry of Environment, Energy and Climatic Change (GMEECC) during the period 2001-2005. Hourly values of barometric pressure and total solar irradiance for the same time period were acquired from the National Observatory of Athens (NOA). This chapter is divided into nine sections. The first section is a brief introduction concerning ANNs. The second section presents the air quality indices that have been used in this work in order to describe the air quality within GAA. The third section presents bioclimatic indices, which describe human thermal comfort-discomfort due to meteorological conditions. The fourth section presents the statistical performance indices that have been used in order to investigate the predictive ability and reliability of the developed ANN models. The fifth section demonstrates the examined sites within the GAA and the data/methodology used in this study. The sixth section presents the ANNs that were developed in order to predict the maximum daily value of the air pollution indices as well as the persistence of the phenomenon, namely the number of consecutive hours within the day with high/strong air pollution. The seventh section presents the ANNs that were developed in order to predict the daily values of the bioclimatic indices as well as the number of consecutive hours within the day with dangerous bioclimatic conditions for human health. The eighth section includes the spatial variation of both air quality levels and human comfort/discomfort levels within GAA. The ninth and last section briefly summarizes the results extracted by the performed analysis and how these results can contribute positively to the economy, energy, the environment and the quality of human life in general. Finally, the results of this work have shown that the ANNs can give an adequate forecast for both air quality and bioclimatic conditions within the urban environment of the GAA for the next three days at a statistically significant level of p < 0.01.
Artificial Neural Networks
Artificial Neural Networks (ANNs) are a branch of artificial intelligence developed in the 1950s aiming at imitating the biological brain architecture. They are an approach to describing the functioning of the human nervous system through mathematical functions. Typical ANNs use very simple models of neurons; these artificial neuron models retain only very rough characteristics of the biological neurons of the human brain (McCulloch & Pitts, 1943). ANNs are parallel-distributed systems made of many interconnected non-linear processing elements (PEs), called neurons (Hecht-Nielsen, 1990). Scientific interest has grown exponentially since the last decade, mainly due to the availability of appropriate hardware that has made ANNs convenient for fast data analysis and information processing (Viotti et al., 2002). Figure 2.1 presents the structure of a biological neuron (upper graph) as well as the structure of an artificial neuron (lower graph). ANNs have been applied in time series prediction (Lapedes & Farber, 1987; Werbos, 1988). Although their behaviour has been related to non-linear statistical regression (Bishop, 1995), the big difference is that ANNs seem naturally suited for problems that show a large dimensionality of data, such as the task of identification for systems with a great number of state variables. Over the last years, black-box approaches have been recognized to constitute a viable alternative to conceptual models for input-output simulation and forecasting, and also to allow shortening the time required for model development. In particular, ANNs have gathered a general consensus in predicting different pollutant time series, as shown by the reviews of Gardner & Dorling (1998a, 1998b). Many ANNs have been developed for very different environmental purposes. Heymans & Baird (2000) used network analysis to evaluate the carbon flow model built for the northern Benguela upwelling ecosystem in Namibia. Antonic et al. (2001) estimated forest survival after the building of the hydroelectric power plant on the Drava River, Croatia, by means of a GIS-constructed database and a neural network. Karul et al. (2000) used a three-layer Levenberg-Marquardt feedforward neural network to model the eutrophication process in three water bodies in Turkey. Besides, Moustris et al. (2011) used ANNs for long-term precipitation forecasting, using long-term monthly precipitation time series of four meteorological stations in Greece.

Fig. 2.1. Biological (upper graph) and artificial (lower graph) neuron structure.

Viotti et al. (2002) used ANNs to forecast short- and middle-long-term concentration levels for some of the well-known pollutants in the urban area of Perugia, Italy. The ANN approach proved to be viable also for O3, PM10, NO2, and NOx forecasting, outperforming alternative techniques in different case studies (Nunnari et al., 1998; Prybutok et al., 2000; Kolehmainen et al., 2001; Balaguer Ballester et al., 2002; Schlink et al., 2003; Corani, 2005; Slini et al., 2006; Dutot et al., 2007; Papanastasiou et al., 2007).
Multi-Layer Perceptron and feed-forward ANNs
The Multi-Layer Perceptron (MLP) is the most commonly used type of ANN. Its structure consists of Processing Elements (PEs) and connections (Hecht-Nielsen, 1991). The PEs, which are called neurons, are arranged in layers. The first layer is the input layer, one or more hidden layers follow, and the final layer is the output layer. The input layer serves as a buffer that distributes input signals to the next layer, which is a hidden layer. Each neuron of the hidden layer communicates with all the neurons of the next hidden layer, if any, with each connection having a typical weight factor. Each unit-artificial neuron in the hidden layer sums its input, processes it with a transfer function and distributes the result to the output layer. It is also possible that there are several hidden layers connected in the same fashion. The units-artificial neurons in the output layer compute their output in a similar manner. Finally, the signal reaches the output layer, where the output value from the ANN is compared to the target value and an error is estimated. Thus, the values of the weight factors are amended appropriately and the training cycle repeats until the error is acceptable, depending on the application. Since data flow within the artificial neural network from one layer to the next without any return path, such ANNs are defined as feed-forward ANNs. The structure of a feed-forward Multi-Layer Perceptron artificial neural network can be represented as in Figure 2.1.1 (Caudill & Butler, 1992).
Feed-forward ANNs training and the Back-propagation training algorithm
The training-learning process of ANNs can be far from the ensemble optimum in some cases, and the problem can be solved only with a very good database, a best choice of the input configuration for training, or using most powerful learning algorithms (Viotti et al., 2002). The back-propagation learning algorithm consists of two steps of computation: a forward pass and a backward pass. In the forward pass, an input pattern vector is applied to the sensory nodes of the network, i.e. to the units in the input layer. The signals from the input layer are propagated to the units in the first layer and each unit produces an output. The outputs of these units are propagated to the units in the subsequent layers and this process continues until, finally, the signals reach the output layer, where the actual response of the network to the input vector is obtained (Figure 2.1.1). During the forward pass, the synaptic weights of the network are fixed. During the backward pass, on the other hand, the synaptic weights are all adjusted in accordance with an error signal, which is propagated backward through the network against the direction of synaptic connections.
The mathematical analysis of the algorithm is as follows (Viotti et al., 2002). In the forward pass, given an input pattern vector y^(p), each hidden node-neuron j receives a net input

v_j^(p) = Σ_k w_jk · y_k^(p),

where w_jk represents the weight between the hidden neuron j and the input neuron k. Thus, the hidden neuron j produces an output

y_j^(p) = φ(v_j^(p)),

where φ(·) is the activation function of the hidden layer. Different kinds of activation functions are referenced in the literature, such as linear, sigmoid, hyperbolic tangent, logistic, etc. (Norgaard et al., 2000). In the following, we consider a hyperbolic tangent activation function for the neurons in the hidden layer; hence, the value returned by the activation function of neuron j of the hidden layer is

y_j = tanh(v_j) = (e^(v_j) − e^(−v_j)) / (e^(v_j) + e^(−v_j)).

Each output neuron receives its input from the preceding hidden layer, so that the entry to the output neuron can be written as

v = Σ_j w_j · y_j,

where w_j represents the weight between the output neuron and the hidden neuron j. It therefore produces the final output y = φ(v). The presentation of all the patterns is usually called an epoch. Many epochs are generally needed before the error becomes acceptably small. In the batch mode the error signal is calculated for each input pattern, but the weights are modified only when all the input patterns have been presented. The error function is calculated referring to the Mean Square Error (MSE) and the weights are modified accordingly:

E = (1/2) · Σ_p (d^(p) − y^(p))²,

where d is the desired or real output (monitored variable value) and y is the ANN output or the forecasted value. In the batch mode, E is equal to the sum of all MSEs on all the patterns of the training set. E is obviously a differentiable function of all weights (and thresholds) and therefore we can apply the gradient descent method. For the hidden-to-output connections the gradient descent rule gives

Δw_j = −η · ∂E/∂w_j,

where η is a number called the learning rate. The learning rate is a parameter that determines the size of the weight adjustment each time the weights are changed during the training process. Small values for the learning rate cause small weight changes and large values cause large changes (Attoh-Okine, 1999). The best learning rate is not obvious. If the learning rate is 0.0, the network will never learn. Refenes et al. (1994) reported that one- and two-layered networks with a learning rate of η = 0.2 and a momentum rate of 0.3 < α < 0.5 yield the best convergence. The momentum term is a factor used to speed network training; it adds a proportion of the previous weight changes to the current weight changes. Using the chain rule, the gradient can be written as

∂E/∂w_j = −(d − y) · φ′(v) · y_j.

Thus, the hidden-to-output connections are updated according to the following equation:

Δw_j(t) = η · (d − y) · φ′(v) · y_j + α · Δw_j(t − 1).

For the input-to-hidden layer connections the gradient descent rule is

Δw_jk = η · δ_j · y_k^(p), with δ_j = (1 − y_j²) · w_j · (d − y) · φ′(v).

It is worthwhile noting that a network architecture having just one hidden layer, with activation functions arranged as described above, constitutes a universal predictor, and it can theoretically approximate any continuous function to any degree of accuracy. In practice, such a degree of flexibility is not achievable because parameters must be estimated from sample data, which are both finite and noisy (Barazzetta & Corani, 2004). The ANNs work on a matrix containing many patterns. Particularly, the patterns represent the rows while the variables are the columns. This data set is a sample. To be more precise, giving the ANN three different subsets of the available sample we can get the forecasting model; the three subsets concern the training, the validation and the test subsets.
These subsets are briefly described:
• Training subset: the group of data with which we train-educate the network according to the gradient descent algorithm for the error function, in order to reach the best fit of the non-linear function representing the phenomenon.
• Validation subset: the group of data, given to the network still in the learning phase, by which the error evaluation is verified, in order to update the best thresholds and weights effectively.
• Test subset: one or more sets of new and unknown data for the ANN, which are used to evaluate ANN generalization, i.e., to evaluate whether the model has effectively approximated the general function representative of the phenomenon, instead of merely learning the parameters.
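To make the update rules of the preceding subsection concrete, the following is a minimal NumPy sketch of one training step for a single-hidden-layer network with a tanh hidden layer. The linear output unit, layer sizes, and learning/momentum rates (η = 0.2, α = 0.4, within the ranges quoted above) are illustrative assumptions rather than the chapter's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 8, 5
eta, alpha = 0.2, 0.4            # learning rate and momentum rate

W_hidden = rng.normal(scale=0.1, size=(n_hidden, n_in))   # w_jk
w_out = rng.normal(scale=0.1, size=n_hidden)              # w_j

dW_hidden_prev = np.zeros_like(W_hidden)
dw_out_prev = np.zeros_like(w_out)

def train_step(y_in, d):
    """One forward and backward pass for a single pattern y_in with target d."""
    global dW_hidden_prev, dw_out_prev
    # Forward pass: v_j = sum_k w_jk * y_k ; y_j = tanh(v_j) ; y = sum_j w_j * y_j
    v_hidden = W_hidden @ y_in
    y_hidden = np.tanh(v_hidden)
    y = w_out @ y_hidden             # linear output unit (an assumption here)
    err = d - y                      # (d - y)

    # Backward pass: gradient descent on E = 0.5 * (d - y)^2.
    grad_out = err * y_hidden                          # -dE/dw_j
    delta_hidden = err * w_out * (1.0 - y_hidden**2)   # tanh'(v) = 1 - tanh^2(v)
    grad_hidden = np.outer(delta_hidden, y_in)         # -dE/dw_jk

    # Weight updates with the momentum term.
    dw_out = eta * grad_out + alpha * dw_out_prev
    dW_hidden = eta * grad_hidden + alpha * dW_hidden_prev
    w_out += dw_out
    W_hidden += dW_hidden
    dw_out_prev, dW_hidden_prev = dw_out, dW_hidden
    return 0.5 * err**2

# One toy epoch: present every pattern once and report the mean squared error.
patterns = rng.normal(size=(20, n_in))
targets = patterns.sum(axis=1)
mse = np.mean([train_step(p, t) for p, t in zip(patterns, targets)])
print(mse)
```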
Air quality indices
Urban air pollution is a growing problem in big cities with large urbanization, where adverse health effects have been established. Bad city design, combined with specific topographical and meteorological conditions allowing poor circulation, is associated with frequent episodes of critically high atmospheric pollution, enforcing in some cases extreme actions by the authorities, such as restriction of motor vehicle circulation within large areas of the city. For better and more effective monitoring and analysis of air quality in big cities, air pollution indices are often used. Most of them have resulted from a series of epidemiological studies, which investigated the impact of air pollution on public health. In this work, two air pollution indices are presented and applied in order to forecast the air quality within GAA using ANNs.
Description of the European Regional Pollution Index (ERPI)
The European Regional Pollution Index (ERPI) has been proposed and developed by Moustris (2009). This air quality index is based on the air pollution index known as the Regional Pollution Index (RPI). The New South Wales government in Sydney, Australia, has used RPI since the mid-1990s (NSW-EPA, 1998). The calculation of ERPI was performed using the thresholds prescribed by the European Community (EC) based on the framework directive 1996/62/EC and the three affiliated directives 1999/30/EC, 2000/69/EC, and 2002/3/EC (Table 3.1.1). Because the calculation of ERPI is based on EC air pollution thresholds, the Australian RPI was renamed the European Regional Pollution Index (ERPI). In this work, ERPI was calculated for five main air pollutants, namely nitrogen dioxide (NO2), sulfur dioxide (SO2), carbon monoxide (CO), ozone (O3) and particulate matter with aerodynamic diameter less than or equal to 10 μm (PM10). For any observed concentration C_i, the value of the sub-index I_i is given by

I_i = 50 · (C_i / C_lim,i),

where C_lim,i is the corresponding EC limit value (Table 3.1.1) and I_1, I_2, I_3, I_4, and I_5 are the sub-indices whose values are defined by the NO2, SO2, CO, O3 and PM10 concentrations, respectively; ERPI is then taken as the highest of the five sub-indices. If ERPI ≥ 50, this means that at least one of the pollutants is over its limit value (Table 3.1.1).
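As an illustration of the sub-index arithmetic, the sketch below computes ERPI for one day. The limit values in `EC_LIMITS` are placeholders standing in for the Table 3.1.1 thresholds, which are not reproduced here; substitute the directive values before any real use.

```python
# Hypothetical sketch of the ERPI calculation described above.
EC_LIMITS = {         # pollutant: assumed limit value (same units as the input)
    "NO2": 200.0,     # placeholder
    "SO2": 350.0,     # placeholder
    "CO": 10.0,       # placeholder
    "O3": 180.0,      # placeholder
    "PM10": 50.0,     # placeholder
}

def erpi(concentrations: dict) -> float:
    """Sub-index I_i = 50 * C_i / C_lim,i; ERPI is the largest sub-index."""
    sub_indices = [50.0 * concentrations[p] / EC_LIMITS[p] for p in EC_LIMITS]
    return max(sub_indices)

# ERPI >= 50 means at least one pollutant exceeds its EC limit.
print(erpi({"NO2": 90.0, "SO2": 40.0, "CO": 2.0, "O3": 210.0, "PM10": 35.0}))
```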
Description of Daily Air Quality Index (DAQx)
A new impact-related air quality index obtained on a daily basis and abbreviated as DAQx (Daily Air Quality Index) has been developed and tested by the Meteorological Institute of Freiburg, Germany, and the Research and Advisory Institute for Hazardous Substances, Freiburg, Germany (Mayer et al., 2002a, 2002b; Makra et al., 2003). DAQx considers the air pollutants SO2, CO, NO2, O3 and PM10. To enable a linear interpolation between index classes, DAQx is calculated for each pollutant by

DAQx = [(DAQx_up − DAQx_low) / (C_up − C_low)] · (C_inst − C_low) + DAQx_low,

with C_inst: highest daily 1-hour concentration of SO2, NO2, and O3; highest daily running 8-hour concentration of CO; and mean daily concentration of PM10. C_up is the upper threshold of the specific air pollutant concentration range; C_low is the lower threshold of the specific air pollutant concentration range; DAQx_up is the value of DAQx according to C_up; DAQx_low is the value of DAQx according to C_low (Mayer et al., 2002a, 2002b).
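A sketch of the interpolation follows, using invented class boundaries in place of the published DAQx thresholds, which are not reproduced in this chapter.

```python
# Hedged sketch of the DAQx linear interpolation described above. The class
# boundaries below are illustrative placeholders, not the Mayer et al.
# (2002a, 2002b) values.
O3_CLASSES = [
    # (C_low, C_up, DAQx_low, DAQx_up) -- placeholder concentration ranges
    (0.0, 60.0, 1.0, 2.0),
    (60.0, 120.0, 2.0, 3.0),
    (120.0, 180.0, 3.0, 4.0),
    (180.0, 240.0, 4.0, 5.0),
]

def daqx_for_pollutant(c_inst: float, classes) -> float:
    """Linearly interpolate DAQx within the class containing c_inst."""
    for c_low, c_up, q_low, q_up in classes:
        if c_low <= c_inst <= c_up:
            return (q_up - q_low) / (c_up - c_low) * (c_inst - c_low) + q_low
    return classes[-1][3]  # above the top class: return the maximum index value

print(daqx_for_pollutant(150.0, O3_CLASSES))  # -> 3.5
```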
Bioclimatic indices
The growth of the city of Athens during the last decades and the phenomenon of urbanization (Philandras et al., 1999) have established the well-known Urban Heat Island (UHI) over a great areal extent of the city, resulting in explicit effects on human thermal comfort-discomfort. Thermal comfort is defined as the condition of mind which expresses satisfaction with the thermal environment, absence of thermal discomfort, or conditions in which 80% or 90% of humans do not express dissatisfaction (Givoni, 1998). Several indices, which describe human thermal comfort-discomfort, have been developed worldwide. In this chapter three bioclimatic indices are presented: the Discomfort Index (DI), the Cooling Power index (CP) and the Physiologically Equivalent Temperature (PET). In what follows, these indices are briefly described.
Discomfort Index (DI)
The Discomfort Index (DI) was originally developed by Thom (Thom, 1959) and was supported by later works (Clarke & Bach, 1971; Giles et al., 1990). This index describes the degree of thermal load under various meteorological conditions and is suitable for both outdoor and indoor environments. It is useful to evaluate how the current temperature and relative humidity can affect the sultriness or discomfort sensation and cause health danger in the population.
Several formulas of the index have been proposed for use along with tables of boundary values that indicate degrees of comfort-discomfort. In the present work we used the following formula of DI, calculated as a combination of air temperature T (°C) and relative humidity RH (%) (Giles et al., 1990):

DI = T − 0.55 · (1 − 0.01 · RH) · (T − 14.5).

The classification of the DI values with the equivalent feeling of thermal comfort-discomfort is given in Table 4.1.1 (Giles et al., 1990).

Table 4.1.1. Classification of human comfort-discomfort sensation by DI (°C):
DI < 21: No discomfort feeling
21 ≤ DI < 24: Less than 50% of the total population feels discomfort
24 ≤ DI < 27: More than 50% of the total population feels discomfort
27 ≤ DI < 29: Most of the population feels discomfort
29 ≤ DI < 32: The discomfort is very strong and dangerous
DI ≥ 32: State of medical emergency
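A small sketch of the DI computation and the Table 4.1.1 classification follows; the formula is the standard Thom/Giles form assumed in the reconstruction above.

```python
def discomfort_index(t_celsius: float, rh_percent: float) -> float:
    """DI = T - 0.55 * (1 - 0.01 * RH) * (T - 14.5)."""
    return t_celsius - 0.55 * (1.0 - 0.01 * rh_percent) * (t_celsius - 14.5)

def classify_di(di: float) -> str:
    """Map a DI value (deg C) to the Table 4.1.1 comfort class."""
    bounds = [
        (21.0, "No discomfort feeling"),
        (24.0, "Less than 50% of the population feels discomfort"),
        (27.0, "More than 50% of the population feels discomfort"),
        (29.0, "Most of the population feels discomfort"),
        (32.0, "Discomfort is very strong and dangerous"),
    ]
    for upper, label in bounds:
        if di < upper:
            return label
    return "State of medical emergency"

di = discomfort_index(32.0, 60.0)   # a hot, moderately humid afternoon
print(round(di, 1), classify_di(di))  # -> 28.2, most of the population
```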
Cooling Power index (CP)
The Cooling Power Index (CP) was developed by Siple & Passel (1945). The classification of the CP index values, as modified by Besancenot et al. (1978), with the equivalent feeling of thermal comfort-discomfort, is given in Table 4.2.1.
Physiologically Equivalent Temperature (PET)
The thermal index Physiologically Equivalent Temperature (PET) is based on the total energy balance of the human body. PET values were evaluated (Mayer & Höppe, 1987; Höppe, 1999) in order to interpret the grade of thermophysiological stress (Table 4.3.1). It describes the effect of the thermal environment as a temperature value (°C) and can therefore be grasped more easily by non-specialists in this topic. For night-time situations, air temperature corresponds very closely to the PET value. It has been applied in heat wave and climatic variability studies (Nastos & Matzarakis, 2008; Matzarakis & Nastos, 2010) and in studies of weather impacts on health (Nastos & Matzarakis, 2006). The PET analysis was performed with the radiation and bioclimate model RayMan, which is well suited to calculate radiation fluxes and human-biometeorological indices (Matzarakis et al., 1999) and was chosen for all our calculations of mean radiant temperature and PET.

Table 4.3.1. Physiologically Equivalent Temperature (PET) for different grades of thermal sensation and physiological stress on human beings (during standard conditions: heat transfer resistance of clothing: 0.9 clo, internal heat production: 80 W) (Matzarakis & Mayer, 1996).
Statistical performance indices
The quality and reliability of the developed ANNs, concerning their ability to forecast both air quality and bioclimatic conditions within GAA, were tested using several statistical indices that have already been applied in similar studies (Moustris et al., 2010). The statistical performance indices used in this work are presented and described briefly below.

Mean Bias Error:

MBE = (1/N) · Σ_(i=1..N) (P_i − O_i),

where N is the number of data points, O_i is the observed data and P_i is the predicted data. The MBE represents the degree of correspondence between the mean forecast (P_i) and the mean observation (O_i). MBE is used to describe how much the model underestimates or overestimates the observed data. Positive/negative values indicate over-estimated/under-estimated prediction.
Root Mean Square Error:

RMSE = sqrt[ (1/N) · Σ_(i=1..N) (P_i − O_i)² ].

RMSE provides a measure of how well future outcomes are likely to be predicted by the model. The coefficient of determination (R²) indicates how much of the observed variability is accounted for by the estimated model (Kolehmainen et al., 2001). The coefficient of determination is a number between 0 and +1 and measures the degree of association between two variables. The coefficient of determination is calculated according to the equation (Comrie, 1997)

R² = [ Σ_(i=1..N) (O_i − O_ave)(P_i − P_ave) ]² / [ Σ_(i=1..N) (O_i − O_ave)² · Σ_(i=1..N) (P_i − P_ave)² ],

where O_ave and P_ave are the averages of the observed and predicted data, respectively. A relative measure of error, called the index of agreement (IA), is also discussed in Willmott et al. (1985). The index of agreement is calculated according to the formula

IA = 1 − [ Σ_(i=1..N) (P_i − O_i)² ] / [ Σ_(i=1..N) (|P_i − O_ave| + |O_i − O_ave|)² ],

where O_ave is the average of the observed data. This is a dimensionless measure that is limited to the range 0-1. If IA = 0, there is no agreement between prediction and observation, and if IA = 1, there is perfect agreement between prediction and observation.
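The four indices can be computed in a few lines. The sketch below assumes equal-length arrays of observations O and predictions P and uses the squared Pearson correlation for R², matching the reconstruction above.

```python
import numpy as np

def performance_indices(O: np.ndarray, P: np.ndarray) -> dict:
    mbe = np.mean(P - O)                       # >0 over-, <0 under-estimation
    rmse = np.sqrt(np.mean((P - O) ** 2))
    r2 = np.corrcoef(O, P)[0, 1] ** 2          # degree of association, 0..1
    o_ave = O.mean()
    ia = 1.0 - np.sum((P - O) ** 2) / np.sum(
        (np.abs(P - o_ave) + np.abs(O - o_ave)) ** 2
    )                                          # 0 = no agreement, 1 = perfect
    return {"MBE": mbe, "RMSE": rmse, "R2": r2, "IA": ia}

O = np.array([21.0, 23.5, 26.0, 28.2, 24.9])   # illustrative observed DI values
P = np.array([21.8, 23.1, 25.4, 27.6, 25.5])   # illustrative predictions
print(performance_indices(O, P))
```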
Data and methodology
For the calculation of the bioclimatic indices as well as the air quality indices, appropriate meteorological data on an hourly basis were used. More specifically, hourly values of air temperature (°C), relative humidity (%) and wind speed (m/s) were used for the DI and CP calculations. In addition to the aforementioned meteorological parameters, total cloud cover (octas) was taken into consideration for the PET calculation, using the RayMan model (Matzarakis et al., 1999). The appropriate meteorological parameters used as inputs in the RayMan model were acquired from the National Observatory of Athens for the period 2001-2004. Besides, hourly values of air pollutant concentrations (NO2, SO2, CO, O3 and PM10) were used in order to estimate the two air quality indices ERPI and DAQx. All the above datasets have been recorded by the network of the GMEECC, covering the period 2001-2005, and concern nine (9) different regions within the GAA, namely: Agia Paraskevi, Thrakomakedones, Lykovrissi, Maroussi, Liossia, Galatsi, Patission, Aristotelous, and Geoponiki (Fig. 6.1). Thus, for each station-region two daily values for each of the two examined air quality indices were calculated. The first daily value concerns the air pollutants NO2, SO2, CO, and O3 and the second concerns the particulate matter PM10. This was done because the daily concentrations of particulate matter PM10, as well as the daily concentrations of ozone, are both high enough that, if only one daily value for each of the two air quality indices were calculated, we would not be able to know whether that value was due to ozone or to PM10. Thereafter, an appropriate number of ANN models were developed and trained in order to predict, for the next three days, the daily value of each of the two air quality indices as well as the number of consecutive hours during the day when the value of the index is greater than a threshold value. In addition, the daily value of each bioclimatic index in each region-station was calculated. The calculation was carried out only during the warm period of the year (May-September) in order to describe the human discomfort due to heat-stress weather conditions. Then, an appropriate number of ANNs were developed and trained in order to predict, for the next three days, the daily value of each of the two bioclimatic indices as well as the number of consecutive hours during the day when the value of the index is greater than a threshold value (DI ≥ 24 °C) or less than a threshold value (CP ≤ 174 W/m²). Furthermore, the mean daily values of the PET index were estimated only for the National Observatory of Athens, because of the availability of the total parameters needed as inputs in the RayMan model. Thereafter, the developed ANN was evaluated in forecasting PET for the next three days.
ANNs description
Six different ANNs were developed in order to forecast the air quality levels within the GAA. The first (ANN#1) was trained to forecast the daily value of ERPI (for the pollutants CO, NO2, SO2 and O3) for the next day at seven different areas of GAA (APA, THR, LYK, MAR, LIO, GAL and PAT). The second (ANN#2) was trained to forecast the daily value of DAQx (for the pollutants CO, NO2, SO2 and O3) for the next day at the above seven areas within the GAA. The third (ANN#3) was trained to forecast the number of consecutive hours during the next day with at least one of the pollutant concentrations (CO, NO2, SO2 and O3) above a threshold according to the directives of the European Community, for each of the seven examined stations within the GAA. The fourth (ANN#4) was trained to forecast the daily value of ERPI (with respect to PM10) for the next day at five different areas of GAA (APA, THR, LYK, MAR, and ARI). The fifth (ANN#5) was trained to forecast the daily value of DAQx (with respect to PM10) for the next day at the mentioned five areas within the GAA. Finally, the sixth (ANN#6) was trained to forecast the number of consecutive hours during the next day with PM10 concentrations above a threshold according to EC directives, for each of the five examined stations within the GAA.
In each case, the group of data defined as "the training set", used for ANN training, concerns the time period 2001-2004. The group of data defined as "the validation set", given to the network still in the learning phase, accounts for 20% of "the training set" for each of the developed ANN models. Finally, "the test set" refers to the year 2005. The year 2005 is absolutely unknown to the models, in order to reveal the models' forecasting ability. Table 6.1 presents the input and output data for the six developed ANN models. The combination of selected data for the appropriate ANN model training was chosen after a series of tests (trial-and-error method). In the end, the combination that gave the best forecasting result in each case was selected (Table 7.1.1).
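A minimal sketch of this chronological split follows, assuming a hypothetical `records` array of daily feature rows tagged by year; the feature count and the random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder daily samples for 2001-2005 (365 rows per year, 10 features).
years = np.array([2001, 2002, 2003, 2004, 2005]).repeat(365)
records = rng.normal(size=(years.size, 10))

train_mask = years <= 2004
test_set = records[~train_mask]                 # year 2005, unseen by the model

train_pool = records[train_mask]
n_val = int(0.2 * len(train_pool))              # 20% of the training set
val_idx = rng.choice(len(train_pool), size=n_val, replace=False)
val_mask = np.zeros(len(train_pool), dtype=bool)
val_mask[val_idx] = True

validation_set = train_pool[val_mask]
training_set = train_pool[~val_mask]
print(len(training_set), len(validation_set), len(test_set))
```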
At this point, we have to mention that for all the constructed ANN models we used as input data, in addition to other parameters, the maximum and minimum air temperature and the maximum and minimum wind speed for the next day, as well as the mean daily barometric pressure and the mode of the daily wind direction for the next day. This may introduce a limitation in the forecasting attempt, but it is easy to gain access to these forecasted values through the network of the Hellenic National Meteorological Service (HNMS). In all cases, it seems that the prediction for the pollutants CO, NO2, SO2 and O3 is much more successful using the ERPI, which follows the European Community directives, than using the DAQx. However, using both predictions we can have a better and safer "picture" of air quality one day ahead within the GAA. As far as the air pollution persistence (for the pollutants CO, NO2, SO2 and O3) is concerned, it seems that ANN#3 gives an adequate prediction. The R² values range between 0.017 and 0.605, while IA ranges between 0.299 and 0.877. Finally, the worst prediction with respect to the air quality index ERPI appears for the region-station PAT (city centre), against the region-station LIO (urban area) concerning the air quality index DAQx. Generally, it seems that the prediction for the stations that are closer to the GAA's downtown is not as good as the prediction for the peripheral regions-stations. This is likely due to the traffic load and the poor air circulation within the city's centre, meaning that more relevant data, associated with the above-mentioned factors, are needed for better ANN training.

8. Bioclimatic conditions forecasting using ANNs
ANNs description for DI and CP forecasting
Four different ANN models were developed in order to forecast the bioclimatic conditions within the GAA during the warm period of the year (May-September). The first (ANN#7) was trained to forecast the daily value of Thom's DI index for the next day at eight different areas of GAA (APA, THR, LYK, MAR, LIO, GAL, GEO and PAT). The second (ANN#8) was trained to forecast the daily value of the CP index for the next day at the above-mentioned eight areas within the GAA. The third (ANN#9) was trained to forecast the daily number of consecutive hours with DI ≥ 24 °C for the next day at each of the eight examined stations within the GAA. Finally, the fourth (ANN#10) was trained to forecast the daily number of consecutive hours with CP ≤ 174 W/m² for the next day at each of the eight examined stations within the GAA.
In each case the group of data named as "the training set", used for ANN training, concerns the time period 2001-2004. The group of data named as "the validation set", given to the network still in the learning phase, accounts for 20% of the training set for each of the above ANNs. Finally, "the test set" refers to the year 2005, which is absolutely unknown to the models, in order to reveal the models' forecasting ability. Table 8.1.1 presents the input and output data for the four developed ANNs. The combination of selected data for the appropriate ANN model training was chosen after a series of tests (trial-and-error method). In the end, the combination that gave the best forecasting result in each case was selected (Table 8.1.1).

[Table 8.1.1, rebuilt from extraction residue; the check marks assigning individual input variables to ANN#7-ANN#10 could not be recovered.]

INPUT DATA (input layer):
- Stations' number (1,2,3,4,5,6,7) and (5,6,7,8,9)
- Month
- The maximum (Tmax) daily temperature for the six previous days
- The maximum (RHmax) daily relative humidity for the six previous days
- The maximum (DImax) daily value of DI for the six previous days
- The daily number of consecutive hours with DI ≥ 24 °C for the six previous days
- The maximum (Vmax) daily wind speed for the six previous days
- The minimum (CPmin) daily value of CP for the six previous days
- The daily number of consecutive hours with CP ≤ 174 W/m² for the six previous days
- The maximum (Tmax) and minimum (Tmin) daily temperature for the six previous days
- The maximum (Vmax) and minimum (Vmin) daily wind speed for the six previous days
- The maximum (CPmax) and minimum (CPmin) daily value of CP for the six previous days

OUTPUT DATA (output layer):
- The maximum (DImax) daily value of DI for the next day (ANN#7)
- The minimum (CPmin) daily value of CP for the next day (ANN#8)
- The daily number of consecutive hours with DI ≥ 24 °C for the next day (ANN#9)
- The daily number of consecutive hours with CP ≤ 174 W/m² for the next day (ANN#10)

Table 8.1.1. Input and output data for the appropriate training of the four developed ANNs.
DI and CP daily value forecasting for the next day
The global fit agreement statistical indices as well as the excess statistical indices for the observed and predicted values were calculated and demonstrated for the eight examined stations, respectively. More specifically, O_ave, P_ave, MBE, RMSE, IA and R² values for DI are presented in Table 8. Besides, the IA values show a very satisfactory prediction regarding ANN#9 (0.368 ≤ IA ≤ 0.951) and ANN#10 (0.750 ≤ IA ≤ 0.946). The worst prediction for the daily number of consecutive hours with high discomfort conditions, due to strong heat stress, refers to the region-station of THR (suburban region-station). This may be attributed to the fact that in this suburban region (Thrakomakedones) the bioclimatic conditions are better than in all the other examined regions within the GAA, due to lower temperature values. Both discomfort indices, DI and CP, present daily values over their thresholds for only a short period of time during the examined period. Thus, there is no "memory-experience" of the persistence at THR, so the developed ANN models cannot receive the appropriate training to forecast the number of consecutive hours with strong discomfort. Figure 8.2.1 reveals that within the city's centre (PAT), the strong discomfort conditions (DI ≥ 24 °C) appear from the end of June to the first half of September. At the suburban station (THR) there is no significant discomfort according to DI values; just a few days during the warm period of the year appear to be over the threshold of DI ≥ 24 °C, at which at least 50% of the population feels discomfort due to heat stress. Figure 8.2.2 illustrates that close to the city's centre (urban area of Galatsi), the hot sub-comfort conditions according to CP values (CP ≤ 174 W/m²) appear from the middle of June until the first half of September. At the suburban station (THR), the discomfort due to heat-stress conditions starts at the beginning of July and lasts until the middle of August. In all the above cases it seems that the prediction of bioclimatic conditions one day ahead with the use of ANN models is very satisfactory and realizable.
ANNs description for PET forecasting
Three developed ANNs were trained using the back-propagation algorithm to forecast the mean daily PET value for the next day (ANN#11), the next two days (ANN#12) and the next three days (ANN#13). The training dataset concerns the period 2001-2003, while the validation dataset concerns the year 2004, which was absolutely unknown to the constructed model, in order to test its predictive ability. Superposed epoch analysis on the training datasets indicated that the three days before the incidence of strong heat/cold stress are adequate to forecast the PET value for the next days. Thus, the input data (Table 8.3.1) taken for ANN training concern the mean daily air temperature, relative humidity, wind speed and sunshine for the previous three days from the National Observatory of Athens.

9. Spatial distribution of air quality and bioclimatic conditions in the GAA

9.1 Spatial variation of air quality within GAA

The mean annual value for both air quality indices ERPI and DAQx was calculated at all the examined regions within GAA during the time period 2001-2005. Figure 9.1.1 shows the spatial variation of air quality levels within GAA. As far as the air quality index ERPI is concerned, only the station THR shows a satisfactory air quality level on an annual basis (ERPI < 40). The stations MAR, APA and GAL show a tolerable air quality level (ERPI < 50). Moreover, the air quality levels at the LYK, LIO and PAT stations are very close to the limit value of ERPI ≥ 50. Finally, the air quality level appears to be poor at the city-centre station ARI. This may be attributed to the high PM10 concentration levels during almost the whole year. At this point, we have to mention that the station PAT is also in the centre of the city and very close to the ARI station, but unfortunately for this station we do not have any PM10 observations. Similar conclusions are extracted with respect to the air quality index DAQx. The only exception is the LIO station, at which the air quality levels seem to be much closer to those of the stations GAL, MAR and APA.
Spatial variation of bioclimatic conditions within GAA
During the period 2001-2005, the mean annual value for both bioclimatic indices DI and CP was calculated at all the examined regions within the GAA. Figure 9.2.1 depicts the spatial variation of the bioclimatic conditions within the GAA during the warm period of the year (May-September), where three different bioclimatic zones appear. The first zone is the northern suburban zone (THR), which can be characterized as a comfortable zone. The second zone extends peripherally around the city's centre (LIO, LYK, MAR and APA) and can be marginally characterized as a comfortable zone or a warm zone. Finally, the third zone concerns the city's centre (GAL, PAT and GEO), which can be characterized as an uncomfortable zone or a strong heat-stress zone. As far as the persistence of discomfort during the examined period 2001-2005 is concerned, the greatest mean seasonal number of consecutive hours during the day with high levels of human discomfort appears at the station PAT: 11.3 and 13.6 consecutive hours with respect to DI and CP, respectively, against 1.0 and 2.7 consecutive hours at the station THR. All the other examined regions-stations within the GAA present a bioclimatic behavior between PAT and THR. This means that a given building within the city-centre region (PAT) needs 5 to 11 times more energy for cooling during the warm period of the year than it would in the northern suburban area (THR).
Conclusions
In this study an application is presented that concerns the development and use of ANN models on environmental issues and, more generally, in environmental management. A number of ANN models have been developed and trained in order to forecast the air quality levels, as well as the bioclimatic conditions, in different regions within the GAA. The findings of this work demonstrate the ANN models' forecasting capacity.
The results showed that the use of ANN models as a forecasting tool is realizable and satisfactory at a statistically significant level of p < 0.01. In particular, for the air quality forecasting for the next day, the R² values ranged between 0.381 and 0.826 (ERPI) and between 0.378 and 0.686 (DAQx). Besides, the IA index between the predicted and observed values ranged between 0.717 and 0.937 for ERPI forecasting, while it ranged between 0.746 and 0.889 for DAQx forecasting. It seems that in all cases the air quality forecasting is more successful using the ERPI air quality index than the DAQx. At this point we have to mention that the ERPI follows the European Community directives for air quality levels.
The same results are extracted regarding the forecasting of the persistence of air pollution episodes and especially the number of consecutive hours during the day with poor air quality.
Concerning the forecasting of bioclimatic conditions for the next day, the R² values ranged between 0.676 and 0.841 for DI and between 0.591 and 0.814 for CP. The IA values ranged between 0.849 and 0.956 for DI and between 0.813 and 0.948 for CP. Taking into account the persistence of the phenomenon (the number of consecutive hours during the day with high discomfort conditions due to strong heat stress), it seems that ANN#9 (consecutive discomfort hours according to DI values) and ANN#10 (consecutive discomfort hours according to CP values) give an adequate prediction. A remarkable finding of this research is that the high values of IA (0.956-0.982) and R² (0.839-0.933) with respect to PET forecasting for the next three days indicate that the constructed ANNs have an excellent forecasting ability for PET, a more complex bioclimatic index based on the human energy balance. This gives evidence that the developed ANNs, taking into account simple meteorological parameters recorded during the previous three days, are capable of predicting a bioclimatic index which is not easily calculated (PET was estimated using the RayMan model). | 2018-12-05T17:53:28.514Z | 2011-08-17T00:00:00.000 | {
"year": 2011,
"sha1": "ef3e6217440523e39b2446e3ccbb2f72d36a4442",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.intechopen.com/citation-pdf-url/17399",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "b78e81b08b317737c94b0810c2afb1a22d3fa49e",
"s2fieldsofstudy": [
"Environmental Science",
"Computer Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
251592332 | pes2o/s2orc | v3-fos-license | Intratracheal Administration of Acyl Coenzyme A Acyltransferase-1 Inhibitor K-604 Reduces Pulmonary Inflammation Following Bleomycin-Induced Lung Injury
Acute lung injury (ALI) is characterized by epithelial damage, barrier dysfunction, and pulmonary edema. Macrophage activation and failure to resolve play a role in ALI; thus, macrophage phenotype modulation is a rational target for therapeutic intervention. Large, lipid-laden macrophages have been observed in various injury models, including intratracheal bleomycin (ITB), suggesting that lipid storage may play a role in ALI severity. The endoplasmic reticulum-associated enzyme acyl coenzyme A acyltransferase-1 (Acat-1/Soat1) is highly expressed in macrophages, where it catalyzes the esterification of cholesterol, leading to intracellular lipid accumulation. We hypothesize that inhibition of Acat-1 will reduce macrophage activation and improve outcomes of lung injury in ITB. K-604, a selective inhibitor of Acat-1, was used to reduce cholesterol esterification and hence lipid accumulation in response to ITB. Male and female C57BL6/J mice (n = 16-21/group) were administered control, control + K-604, ITB, or ITB + K-604 on d0, control or K-604 on d3, and were sacrificed on day 7. ITB caused significant body weight loss and an increase in cholesterol accumulation in bronchoalveolar lavage cells. These changes were mitigated by Acat-1 inhibition. K-604 also significantly reduced ITB-induced alveolar thickening. Surfactant composition was normalized, as indicated by a significant decrease in the phospholipid:SP-B ratio in ITB + K-604 compared with ITB. K-604 administration preserved mature alveolar macrophages, decreased their activation in response to ITB, and decreased the percentage of mature and pro-fibrotic interstitial macrophages. These results show that inhibition of Acat-1 in the lung is associated with a reduced inflammatory response to ITB-mediated lung injury. SIGNIFICANCE STATEMENT Acyl coenzyme A acyltransferase-1 (Acat-1) is critical to lipid droplet formation, and thus inhibition of Acat-1 presents as a pharmacological target. Intratracheal administration of K-604, an Acat-1 inhibitor, reduces intracellular cholesterol ester accumulation in lung macrophages, attenuates inflammation and macrophage activation, and normalizes mediators of surface-active function after intratracheal bleomycin administration in a rodent model. The data presented within suggest that inhibition of Acat-1 in the lung improves acute lung injury outcomes.
Introduction
Acute lung injury (ALI) affects approximately 200,000 people in the United States annually and has a high mortality rate due to respiratory failure (Johnson and Matthay, 2010;Dushianthan et al., 2011). ALI is characterized by inflammation, resulting in changes to innate immune cell, endothelial, and epithelial cell function (Matthay and Zimmerman, 2005;Johnson and Matthay, 2010). These alterations present as diffuse alveolar damage, epithelial and endothelial barrier dysfunction (Tam et al., 2011;Müller-Redetzky et al., 2014), pulmonary edema (Butt et al., 2016), and surfactant dysfunction (Cross and Matthay, 2011;Mokra and Kosutova, 2015;Butt et al., 2016). The primary management strategy for patients is mechanical ventilation (Johnson and Matthay, 2010;Dushianthan et al., 2011;Patel et al., 2018), which can lead to worse patient outcomes due to ventilation-perfusion mismatch, further tissue damage due to barotrauma, and altered inflammatory responses (Ioannidis et al., 2015). Therefore, identifying an effective pharmacological treatment is advantageous for this patient population.
No author has an actual or perceived conflict of interest with the contents of this article.

ABBREVIATIONS: Acat-1, acyl coenzyme A acyltransferase-1; ALI, acute lung injury; A.U., arbitrary unit; Arg1, inducible arginase; BAL, bronchoalveolar lavage; ITB, intratracheal bleomycin; LXR, liver X receptor; Nos2, inducible nitric oxide synthase; Soat1, sterol O-acyltransferase-1; SP-B, surfactant protein B; SP-D, surfactant protein D.

Macrophage-mediated inflammation is a critical component in the pathogenesis of ALI (Huang et al., 2018;Chen et al., 2020). As the first line of defense for innate immune responses (Hussell and Bell, 2014;Hartl et al., 2018), macrophages play a crucial role in detecting foreign toxicants, particulates, and pathogens, initiating a cascade of host defenses (Martin and Frevert, 2005;Hartl et al., 2018). Macrophages are highly plastic (Hussell and Bell, 2014) and can be stimulated to express a spectrum of phenotypes that contribute to both inflammation and injury resolution in the lung (Porcheray et al., 2005;Benoit et al., 2008;Laskin, 2009;Johnston et al., 2012;Laskin et al., 2019). Early in the pathogenesis of ALI, alveolar macrophages release pro-inflammatory cytokines (Patel et al., 2018;Chen et al., 2020), attracting neutrophils to the site of injury (Cortés et al., 2012;Patel et al., 2017). Macrophages responsible for the generation of reactive oxygen and nitrogen species may contribute to the majority of the cellular injury observed in ALI (Pittet et al., 1997;Ware, 2006). These chemical species damage the alveolar epithelium and endothelium, promoting increased permeability in the lung, leading to edema and proteinaceous debris accumulation in the alveoli (Cortés et al., 2012;Butt et al., 2016). When this signaling is aberrant or overly persistent, the macrophage response may promote further injury rather than transition to resolution. Therefore, regulating macrophage activation presents as a logical target when considering intervention in ALI.
In addition to ALI, macrophages have been implicated in driving the atherosclerotic process (Yu et al., 2013). Excess accumulation of lipids in macrophages drives their activation and progression toward atherosclerotic plaque formation, contributing to vascular injury (Yu et al., 2013;Chistiakov et al., 2016). Research into the mechanisms of foam cell formation led to the discovery of acyl coenzyme A:cholesterol acyltransferase-1 (Acat-1/Soat1) (Chang et al., 1993;Chang et al., 2001;Chang et al., 2009). Acat-1 catalyzes the conversion of cholesterol to cholesterol esters, a process essential for lipid droplet formation (Chang et al., 2001;Chang et al., 2009). As such, the persistence of lipid-laden cells is critically dependent upon cholesterol esterification (Sekiya et al., 2011;Yu et al., 2013;Chistiakov et al., 2017). These lipid-laden cells exhibit a 'foamy' appearance and have been implicated as a driving force in atherosclerosis (Ross, 1999;Stöger et al., 2012;Yang et al., 2020). Therefore, altering macrophage phenotype in the context of atherosclerotic plaques has been a topic of investigation for some time (Chinetti-Gbaguidi et al., 2015;Bouhlel et al., 2007;Yang et al., 2020); prior research in this area may also have utility in the management of pulmonary pathologies.
As lipid-laden macrophages have been observed in animal models of ALI (Venosa et al., 2019), we propose that inhibition of Acat-1 in the lung will lead to a reduction in injury. Studies varying the route of administration of Acat-1 inhibitors led to the hypothesis that the intratracheal administration of K-604 could effectively target pulmonary cells, mitigate lipid accumulation, modulate cell phenotype, and reduce disease severity in a model of ALI. To test this hypothesis, we have used the intratracheal bleomycin (ITB)-induced model of ALI (Chen et al., 2001;Genovese et al., 2005;Wilkinson et al., 2020) and the novel intratracheal administration of K-604. We found that K-604 administration reduced ALI severity, altered pulmonary cell cholesterol ester formation, and altered alveolar and interstitial macrophage phenotype. These data suggest that Acat-1 inhibition in the lung reduces the inflammatory response to ITB-mediated injury.
Materials and Methods
2.1 Animal Use. Six- to 8-week-old male and female wild-type C57BL6/J mice obtained from Jackson Laboratories (Bar Harbor, ME, USA) were used for all experiments. Mice were housed under standard conditions in groups of four per cage with food and water provided ad libitum. All experiments were conducted in accordance with Rutgers University Institutional Animal Care and Use Committee-approved protocols adhering to the U.S. National Institutes of Health Guide for the Care and Use of Laboratory Animals.
2.2 Bronchoalveolar Lavage (BAL) and Phospholipid Analysis. BAL fluid was collected by instilling 5 × 1 ml of ice-cold PBS through a 20-gauge cannula inserted into the trachea. BAL fluid was centrifuged at 300 × g for 8 minutes, and the supernatant was assessed for protein concentration using a Pierce BCA Protein Assay (Thermo Scientific, Rockford, IL). Large and small aggregate fractions of BAL were separated, and lipids were extracted from the large aggregate fraction according to the methods of Bligh and Dyer (Bligh and Dyer, 1959). Phospholipid analysis was adapted from the method of Bartlett (Bartlett, 1959). Samples of equal phospholipid content were loaded onto Bis(2-hydroxyethyl)imino-tris(hydroxymethyl)methane gels (4-12%, ThermoFisher Scientific, Rockford, IL), transferred to polyvinylidene fluoride membranes, and blocked in non-fat dried milk (10% with 5% Tris-Tween buffered saline) to prevent non-specific binding. Membranes were incubated overnight with either surfactant protein D (SP-D, Duke University, Durham, NC) or SP-B (University of Pennsylvania, Philadelphia, PA) primary antibody and were then incubated with goat anti-rabbit horseradish peroxidase (Bio-Rad, Cat No. 170-6515) before visualization with ECL Prime Western Blotting Detection Reagent (Amersham Biosciences, Amersham, UK, Cat No. RPN2232). BAL cell pellets were resuspended in 1 ml of staining buffer (5% FBS in 1x PBS, 0.2% sodium azide) and assessed for viability using Trypan Blue Solution (0.4%, ThermoFisher Scientific, Rockford, IL).
2.3 BAL Cell Cholesterol Measurement. 10,000 to 20,000 cells were reserved from the BAL fluid to determine the concentration of total cholesterol, free cholesterol, and cholesterol esters in BAL cell samples. A bioluminescent Cholesterol/Cholesterol Ester-Glo Assay (Promega, Madison, WI) was used, and luminescence was recorded on a SpectraMaxM2 multi-mode microplate reader (Molecular Devices, San Jose, CA) utilizing SoftMax Pro software v5.3.
2.4 Histology Preparation and Analysis. After BAL fluid collection, the large left lung lobe was excised, inflation-fixed in paraformaldehyde (3%), and embedded in paraffin. Four-micrometer sections were stained with hematoxylin and eosin to observe histologic changes. Slides were scanned at 40x using a VS120 Virtual Slide Microscope (Olympus, Waltham, MA) and viewed with OlyVIA viewing software for virtual slide images (Olympus, Waltham, MA) at 400x. Randomly selected areas (n = 10) from each histologic slide were captured and analyzed by a blinded observer to determine average alveolar wall thickness, cell infiltration (# of nuclei), and tissue consolidation (% open tissue space) using ImageJ (NIH).
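The paper's morphometry was performed in ImageJ. Purely as an illustration, two of the same measurements can be sketched with scikit-image; the synthetic field, thresholds, and minimum nucleus area below are assumptions, not the authors' pipeline.

```python
import numpy as np
from skimage import color, filters, measure

rng = np.random.default_rng(1)
rgb_field = rng.uniform(0.3, 1.0, size=(256, 256, 3))   # placeholder 400x field

gray = color.rgb2gray(rgb_field)

# Tissue vs. open space: Otsu threshold, brighter pixels treated as airspace.
t = filters.threshold_otsu(gray)
percent_open = 100.0 * np.mean(gray > t)

# Nuclei: darkest pixels, labeled and filtered by a minimum area (in pixels).
nuclei_mask = gray < np.percentile(gray, 5)
labels = measure.label(nuclei_mask)
nuclei = [r for r in measure.regionprops(labels) if r.area >= 10]

print(f"open space: {percent_open:.1f}%, nuclei counted: {len(nuclei)}")
```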
2.5 Lung Tissue Digestion and CD45+ Cell Separation. After BAL collection, lung tissue was cut into small pieces and digested in 5 ml of 2 mg/ml collagenase type IV (Sigma Aldrich, St. Louis, MO) in RPMI 1640 1x (ThermoFisher Scientific, Rockford, IL) and 5% FBS (ThermoFisher Scientific, Rockford, IL). Tissue was filtered through a 70-µm strainer and washed with RPMI/5% FBS until the strainer was absent of tissue. The filtrate was centrifuged for 6 minutes at 400 × g, and supernatant was aspirated. The remaining cell pellet was resuspended in 1 ml of Red Blood Cell Lysis Buffer (Sigma Aldrich, St. Louis, MO) for 5 minutes at room temperature. Five milliliters of RPMI/5% FBS was added, and cells were centrifuged again for 6 minutes at 400 × g. Supernatant was aspirated, and the cell pellet was counted and resuspended at a concentration of 1 × 10⁸ in 2% FBS in 1x PBS/1 mM EDTA. Immunomagnetic selection of CD45+ leukocytes was performed using the EasySep Mouse CD45 Positive Selection Kit (Stemcell Technologies, Cambridge, MA). CD45+ selected cells and flowthrough (assumed CD45− cells) were recovered. Cells were then stained and processed for flow cytometry.
2.6 Flow Cytometry. Cells from the BAL or lung tissue digest were brought up to a volume of 100 µl of staining buffer. To prevent non-specific binding during staining and analysis, cells were incubated with TruStain FcX anti-mouse CD16/32 (Fc Block, 1:100) (Biolegend, San Diego, CA, Cat No. 101320) for 10 minutes at 4°C. Samples from the BAL and CD45+ cells from the lung digest were incubated for 30 minutes at 4°C in a cocktail of the following antibodies [antibody list garbled in extraction; it included a BD Biosciences antibody (Franklin Lakes, NJ, Cat No. 565528) and MHC II (Invitrogen, Waltham, MA)]. Cells were centrifuged for 6 minutes at 400 × g and washed with staining buffer. Cells were stained with eFluor 780-conjugated fixable viability dye (Invitrogen, Waltham, MA) for 30 minutes at 4°C, washed with staining buffer, and fixed in 3% paraformaldehyde. Cells were analyzed using a Gallios 10-color flow cytometer (Beckman Coulter, Brea, CA). Using Kaluza software (Beckman Coulter, Brea, CA), cells were initially sorted based upon forward and side scatter, doublet discrimination, and viability (Supplemental Fig. 1, Supplemental Fig. 2) and further analyzed to determine discrete cell phenotypes.
2.7 Reverse-Transcription Polymerase Chain Reaction. BAL macrophages recovered from each group were preserved in 1 ml of TRIzol Reagent (ThermoFisher Scientific, Rockford, IL). In brief, RNA was extracted using phenol-chloroform. After a 15-minute centrifugation at 14,000 × g, the aqueous phase was removed, and RNA was precipitated with isopropanol and spun at 14,000 × g. The pellet was isolated, washed with 70% ethanol, dried, and resuspended in 15 µl of RNase-free ultrapure water. After heating at 70°C for 10 minutes, RNA was quantitated using a NanoDrop 1000 spectrophotometer (ThermoFisher Scientific, Rockford, IL) and stored. cDNA was prepared using a High-Capacity cDNA Reverse Transcription Kit (ThermoFisher Scientific, Rockford, IL) with a final target concentration of 200 ng/µl. TaqMan assays were used to analyze relative gene expression using TaqMan Fast master mix (ThermoFisher Scientific, Rockford, IL) and the following TaqMan primers: Gapdh (Mm99999915_g1), Nos2 (Mm00440502_m1), Arg1 (Mm00475988_m1), Abca1 (Mm00442646_m1), and Abcg1 (Mm00437390_m1). A threshold of 40 cycles was used as the limit of detection for gene expression; thermal cycle numbers greater than 40 were set to 40 cycles for the purpose of the analysis. ΔΔCt values were calculated using Gapdh as the control gene and PBS as the control condition. Fold change was calculated as 2^(−ΔΔCt).
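For concreteness, the clipping-at-40-cycles and 2^(−ΔΔCt) calculation described above can be written out as follows; the Ct values in the example are placeholders, not data from the study.

```python
# ΔΔCt fold-change calculation as described in the methods: Ct values above
# the 40-cycle limit of detection are clipped to 40, ΔCt uses Gapdh, ΔΔCt uses
# the PBS (control) condition, and fold change is 2^(-ΔΔCt).
import numpy as np

def fold_change(ct_target, ct_gapdh, ct_target_pbs, ct_gapdh_pbs, lod=40.0):
    ct_target = np.minimum(np.asarray(ct_target, dtype=float), lod)
    ct_gapdh = np.minimum(np.asarray(ct_gapdh, dtype=float), lod)
    d_ct = ct_target - ct_gapdh                            # ΔCt per sample
    d_ct_control = (np.mean(np.minimum(np.asarray(ct_target_pbs, dtype=float), lod))
                    - np.mean(np.minimum(np.asarray(ct_gapdh_pbs, dtype=float), lod)))
    dd_ct = d_ct - d_ct_control                            # ΔΔCt
    return 2.0 ** (-dd_ct)

# Placeholder Ct values for two treated samples versus one PBS control:
print(fold_change([28.1, 27.5], [18.0, 18.2], [30.0], [18.1]))
```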
2.8 Statistical Analysis. Statistical analysis was performed using GraphPad Prism software version 9.0.2 for Windows (GraphPad Software, San Diego, CA). Quantitative data were analyzed by two-way ANOVA. Multiple comparisons were made with the Holm-Šídák test at a significance level of P < 0.05 compared with control (*) and ITB (#). Data are shown as mean ± standard error of the mean.
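A rough reconstruction of this workflow outside Prism might look like the following Python sketch (two-way ANOVA via statsmodels, then Holm-Šídák-corrected comparisons); the data-frame layout, group sizes, and simulated values are assumptions.

```python
# Two-way ANOVA with factors ITB and K-604, followed by Holm-Šídák-adjusted
# pairwise comparisons against control (*) and against ITB alone (#).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "itb": np.repeat(["no", "yes"], 16),
    "k604": np.tile(np.repeat(["no", "yes"], 8), 2),
    "wall_thickness": rng.normal(3, 0.4, 32),     # placeholder measurements
})

model = smf.ols("wall_thickness ~ C(itb) * C(k604)", data=df).fit()
print(anova_lm(model, typ=2))

groups = {name: g["wall_thickness"].values for name, g in df.groupby(["itb", "k604"])}
pvals = [
    stats.ttest_ind(groups[("yes", "no")], groups[("no", "no")]).pvalue,    # ITB vs control
    stats.ttest_ind(groups[("yes", "yes")], groups[("yes", "no")]).pvalue,  # ITB+K-604 vs ITB
]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm-sidak")
print(reject, p_adj)
```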
Results
3.1 K-604 Mitigates ALI and Improves ITB-Induced Histologic Changes. ITB administration resulted in decreased body weight compared with controls, in alignment with previously conducted studies (Barbayianni et al., 2018). On average, ITB mice lost 14 ± 2.1% body weight over 7 days, a loss that was significantly blunted to 4 ± 1.0%# when K-604 was administered on d0 and d3. Control mice were not significantly different from those administered K-604 alone (3 ± 0.8% versus 0.5 ± 0.6%).
For each whole-lung scan obtained from animals in each group (Fig. 2A), 40x images (n = 10) were captured at random and analyzed using ImageJ software to assess cell infiltration (Fig. 2B), lung tissue consolidation (Fig. 2C), and epithelial thickening (Fig. 2D). ITB resulted in an increased number of nuclei per high-powered field (Fig. 2B) compared with control (230 ± 3.3* versus 183 ± 5.0), consistent with a macrophage-dominant response to ITB-induced lung injury (Venosa et al., 2021). In the presence of K-604, ITB did not result in a significant increase in the number of nuclei (195 ± 7.9*). ITB resulted in consolidation of the lung tissue (Fig. 2C), as indicated by a significant decrease in white space (%) compared with control (57 ± 0.5%* versus 70 ± 0.6%). K-604 did not significantly reduce the loss of white space when compared with control (62 ± 1.5%). ITB significantly increased the measured alveolar wall thickness (Fig. 2D) compared with control (4 ± 0.0* µm versus 2 ± 0.1 µm), which was significantly reduced by K-604 (3 ± 0.1 µm#).
3.2 K-604 Does Not Reduce ITB-Induced Epithelial Injury but Normalizes the Lung Lining. ITB mediates ALI by inducing redox cycling and damage within epithelial cells (Allawzi et al., 2019); as such, loss of barrier function and the accumulation of edema and protein within the BAL are hallmarks of injury. ITB increased BAL protein concentration compared with control (Fig. 3A) (2.9 ± 0.23* mg/ml versus 0.1 ± 0.19 mg/ml), which was not mitigated by K-604 administration (3.1 ± 0.21 mg/ml), indicating that K-604 did not alter the initiating injury event (ITB administration). Phospholipids within the BAL were also quantitated (Fig. 3B). ITB resulted in an increase in BAL phospholipids compared with control (117 ± 14.7 µg/30 µl* versus 23 ± 5.7 µg/30 µl), which was only partially reduced by K-604 administration (87 ± 9.6 µg/30 µl). SP-D and SP-B concentrations were determined through western blot of the small and large phospholipid aggregate fractions, respectively (Fig. 3C). The SP-D to SP-B ratio was increased with ITB compared with control (5 ± 0.6 arbitrary units [A.U.]* versus 0.6 ± 0.2 A.U.), consistent with inflammatory activation, while an apparent decrease was observed with K-604 administration (3.0 ± 0.5*). Phospholipid levels were compared with SP-B protein levels, as this ratio is critical to surfactant function (Fig. 3D). The phospholipid to SP-B ratio was increased with ITB compared with control (13 ± 2.2 A.U.* versus 1 ± 0.4 A.U.). This ratio was significantly decreased in ITB + K-604 compared with ITB alone (8 ± 1.2 A.U.#), indicating a normalization of surface-active function.
To test whether transporters were involved in the observed changes in cholesterol levels, expression of the cholesterol efflux transporters Abca1 and Abcg1 was measured in BAL cells. Abca1 expression was not changed with ITB (1.5 ± 0.1 versus 1.1 ± 0.1), but expression was reduced in ITB + K-604 (0.44 ± 0.05#). Abcg1 expression was decreased with ITB regardless of K-604 treatment compared with control (0.65 ± 0.04*, 0.61 ± 0.05* versus 1.08 ± 0.1). These minimal changes in efflux transporter expression support that the changes in intracellular cholesterol accumulation and cell size are due to effective Acat-1 inhibition in the lung.
[Fig. 2 caption: representative whole-lung scans (n = 16-21 per group) imaged with a VS120 microscope (inset: 40x; magnified image: 400x); randomly distributed images (n = 10) from each tissue preparation were analyzed for the number of nuclei present (B), percentage of white space (C), and average alveolar wall thickness (D); values are mean ± SEM; (*) significantly different from control (P < 0.05); (#) significantly different from ITB (P < 0.05).]
3.4 K-604 Treatment Reduces Alveolar and Interstitial Macrophage Activation in Response to ITB. To analyze alveolar macrophage activation, we examined BAL cells by flow cytometry using cell surface markers (Table 1). Alveolar macrophages were identified by expression of CD45, F4/80, and Siglec F (Supplemental Fig. 1). Alveolar macrophages were further categorized as mature (CD11c+/CD11b−), migratory (CD11c+/CD11b+), or recruited (CD11c−/CD11b+) (Fig. 5), where the mature population may be considered tissue-resident, and the migratory and recruited populations may be derived from the circulation or from resident cells taking on a more migratory-like phenotype. A significant loss of the mature alveolar macrophages was observed as a result of ITB administration (18 ± 4.1%* versus 95 ± 0.67%). This ITB-mediated decrease was reduced by K-604 (48 ± 6.0%#). ITB-induced increases in the migratory (24 ± 4.9%* versus 3 ± 0.9%) and recruited (10 ± 2.7%* versus 0.2 ± 0.1%) alveolar macrophage populations were also observed compared with control. K-604 rescued ITB-mediated increases in migratory macrophages (12 ± 2%*) but did not significantly alter the percentage of recruited macrophages (8 ± 2.0%). Alveolar macrophages were also analyzed for expression of the activation markers Ly6c and CD206. CD206 was only observed within mature alveolar macrophages and was not seen in cells that expressed Ly6c. Ly6c expression was observed in both migratory and recruited macrophages irrespective of treatment. A significant increase in pro-inflammatory (Ly6c+/CD206−) alveolar macrophages was observed following ITB treatment compared with control (21 ± 5.7%* versus 0.5 ± 0.2%). This increase was mitigated by K-604 administration (13 ± 4.0%#), indicating reduced acute macrophage activation post-ITB.
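Once per-cell marker positivity has been gated, the subset percentages reported above reduce to boolean logic. The sketch below illustrates this for the alveolar macrophage subsets; the column names and the random demo data are assumptions, not the actual Kaluza gating tree.

```python
# Boolean-gating sketch: each column holds a per-cell True/False marker call
# (already derived from compensated fluorescence data with some cutoff).
import numpy as np
import pandas as pd

def am_subsets(cells: pd.DataFrame) -> dict:
    am = cells["CD45"] & cells["F4_80"] & cells["SiglecF"]   # alveolar macrophages
    n_am = max(int(am.sum()), 1)
    mature = am & cells["CD11c"] & ~cells["CD11b"]           # CD11c+/CD11b-
    migratory = am & cells["CD11c"] & cells["CD11b"]         # CD11c+/CD11b+
    recruited = am & ~cells["CD11c"] & cells["CD11b"]        # CD11c-/CD11b+
    activated = am & cells["Ly6c"] & ~cells["CD206"]         # Ly6c+/CD206-
    return {name: 100.0 * mask.sum() / n_am
            for name, mask in [("mature", mature), ("migratory", migratory),
                               ("recruited", recruited), ("Ly6c+/CD206-", activated)]}

rng = np.random.default_rng(1)
demo = pd.DataFrame(rng.random((1000, 7)) > 0.5,
                    columns=["CD45", "F4_80", "SiglecF", "CD11c", "CD11b", "Ly6c", "CD206"])
print(am_subsets(demo))
```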
Within the lung digest, interstitial macrophages were identified by expression of CD45, F4/80, and CD11b and the absence of Siglec F (Supplemental Fig. 2). The yield of interstitial macrophages was consistent across all groups. However, the percentage of cells expressing markers of maturation (CD11c and CD206) was significantly increased by ITB compared with control (Fig. 6) (20 ± 1.9%* versus 10 ± 1.5%). Administration of K-604 with ITB significantly reduced the percentage of CD11c+/CD206+ interstitial macrophages compared with ITB alone (6 ± 2.2%).
Discussion
Herein, we show an increase in cholesterol within macrophages in response to bleomycin, which is the first demonstration of such an accumulation in this model of acute lung injury. Further, we have shown that use of an Acat-1 inhibitor reduces ITB-mediated lung inflammation and macrophage activation. Using intratracheal delivery of K-604, we observed preservation of body weight, improved lung structure, and normalization of lung surfactant (Figs. 1-3). In addition, while ITB increased total, free, and esterified cholesterol content in alveolar macrophages, K-604 reduced this accumulation (Fig. 4). K-604 also reduced the loss of mature alveolar macrophages (Fig. 5) and macrophage activation in both the lung lining and the interstitium (Figs. 5-7). That K-604 reduces total cholesterol and lessens injury is evidence that lipid accumulation is a significant factor in ALI.
K-604 is a known inhibitor of the enzyme Acat-1 and is highly selective for this particular acyltransferase (Shibuya et al., 2018). Acat-1 is responsible for the esterification of cholesterol to cholesterol ester and is thus critical to lipid droplet formation (Chang et al., 2001; Chang et al., 2009). Therefore, we have used cholesterol ester as our primary endpoint to determine a sufficient level of pharmacological inhibition of Acat-1 (Fig. 4). Changes to free cholesterol, not simply cholesterol ester accumulation alone, may play a role in lipid-mediated signaling in these cells. The effects of K-604 may not be limited to cholesterol ester accumulation, as lipid-signaling pathways are complex and highly integrated. Additional work is needed to parse out downstream signaling consequences of inhibiting cholesterol esterification in these cells.
As shown previously (Chen et al., 2001; Genovese et al., 2005; Wilkinson et al., 2020), ITB resulted in a significant loss of body weight (Fig. 1). ITB-induced ALI results in increases in infiltrating macrophages, epithelial thickening, and proteinaceous debris deposition in the lung (Lindenschmidt et al., 1986; Izbicki et al., 2002; Ji et al., 2014). We observed significant increases in these criteria (Figs. 2 and 3) in response to ITB. Acat-1 inhibition with K-604 reduced body weight loss, epithelial thickening, and cellular invasion; however, it did not significantly alter BAL protein. This is consistent with K-604 reducing the inflammatory response to ITB, but not the direct epithelial injury.
[Fig. 4 caption: K-604 reduces ITB-mediated increases in the cholesterol content of BAL cells. Cells from the BAL were analyzed for total cholesterol, free cholesterol, and cholesterol esters (A), reported as cholesterol (µM) per 1 × 10^4 cells. BAL cell expression of Abca1 (B) and Abcg1 (C) was determined by RT-qPCR and is shown as fold change over control. Values are mean ± SEM (n = 5-16 per group); (*) significantly different from control (P < 0.05); (#) significantly different from ITB (P < 0.05).]
Table 1. Cell surface markers used for macrophage phenotyping:
- CD45: myeloid-derived cells, including alveolar and interstitial macrophages (Trowbridge and Thomas, 1994; Roach et al., 1997; Misharin et al., 2013)
- Siglec F: highly expressed on alveolar macrophages (Misharin et al., 2013; Hussell and Bell, 2014)
- F4/80: pulmonary macrophages, including alveolar and interstitial macrophages (Gordon et al., 2011; Misharin et al., 2013; Hussell and Bell, 2014)
- CD11b: indicative of a migratory phenotype; expressed on interstitial macrophages, with low expression in resident alveolar macrophages (Zaynagetdinov et al., 2013; Hussell and Bell, 2014)
- CD11c: resident alveolar macrophages and mature interstitial macrophages (Hussell and Bell, 2014)
- CD206: pro-fibrotic marker on activated interstitial macrophages; expression on alveolar macrophages indicates an anti-inflammatory bias (Zaynagetdinov et al., 2013)
The lung lining is a lipid-rich environment, and the production and recycling of lipid are critical to surfactant function and homeostasis. Both alveolar type II cells and macrophages are critical in maintaining surfactant (Crouch and Wright, 2001; Weaver and Conkright, 2001; Guo et al., 2019). While type II cells produce and recycle surfactant, macrophages are responsible for its degradation (Poelma et al., 2002). SP-B and SP-C, produced by type II cells, are critical to the active regulation of surface tension (Clark et al., 1995; Stahlman et al., 2000; Weaver and Conkright, 2001; Agassandian and Mallampalli, 2013). The ratio of phospholipid to SP-B is thus an indicator of normal surfactant function. ITB increased BAL phospholipid while reducing relative SP-B expression, indicating a loss of surface-active function. K-604 restored SP-B expression, which may normalize function.
SP-D is important in innate immune regulation (Crouch and Wright, 2001; Agassandian and Mallampalli, 2013), producing both pro- and anti-inflammatory effects (Agassandian and Mallampalli, 2013). SP-D is exported by type II cells via a vesicular pathway separate from that of SP-B, and its production favors immune regulation over surface activity. ITB resulted in a favoring of SP-D production relative to SP-B (Fig. 3C). This SP-D:SP-B ratio was reduced by the addition of K-604 to ITB, which is consistent with a reduced inflammatory response. Also, the reduction in oxidative cross-linking of SP-D in the ITB + K-604 BAL (Fig. 3E) may be indicative of a reduction in oxidative stress. Furthermore, K-604 increased SP-B levels in the BAL irrespective of ITB (Fig. 3E), indicating a bias toward surfactant production. A limitation of this study is that intratracheal administration of K-604 targets not only macrophages but can affect all pulmonary cell types expressing Acat-1. Although macrophages highly express Acat-1, type II cells also express it (Sakashita et al., 2000). Therefore, these changes in lung lining composition may involve type II cells.
Here, we see that 7 days after ITB administration, there is both cholesterol and cholesterol ester accumulation in BAL cells (Fig. 4), which is consistent with foam cell formation and persistent activation (Venosa et al., 2019). Foam cells have been well characterized in models of atherosclerosis and models of lung injury (Venosa et al., 2019), but we cannot confirm their presence in this model simply by measures of cholesterol ester accumulation. The pulmonary microenvironment, even during injury, differs greatly from that of the atherosclerotic plaque, which is known to be mediated by foam cells (Javadifar et al., 2021). In this model, while cholesterol ester accumulation is increased with ITB and reduced by K-604, one cannot conclude that these changes are associated with foam cell formation.
[Fig. 5 caption: flow cytometric analysis of BAL cells (Supplemental Fig. 1). Cells positively stained for both Siglec F and F4/80 were determined to be alveolar macrophages (AMs). Mature (CD11c+/CD11b−), migratory (CD11c+/CD11b+), and recruited (CD11c−/CD11b+) macrophages were identified from alveolar macrophages in the BAL (A). The percentage of mature (B), migratory (C), and recruited (D) macrophages was calculated from the total number of alveolar macrophages and assessed for Ly6c and CD206 expression. The percentage of CD11b+ alveolar macrophages expressing the acutely activated phenotype (Ly6c+/CD206−) was determined (E). Values are mean ± SEM (n = 8-13 per group); (*) significantly different from control (P < 0.05); (#) significantly different from ITB (P < 0.05).]
K-604 administration reduces cholesterol ester accumulation, indicating successful inhibition of Acat-1 in the lung. Cholesterol esterification in macrophages has previously been identified as a pharmacological target (Chinetti-Gbaguidi et al., 2015; Bouhlel et al., 2007; Yang et al., 2020), and the inhibition of Acat-1 reduces lipid-laden cell formation (Ikenoya et al., 2007). Systemic Acat-1 inhibition for the treatment of atherosclerosis, however, resulted in adverse cardiac events, halting the advancement of Acat-1 inhibitors in the clinic (Meuwese et al., 2009). To bypass potential adverse effects linked to systemic administration, we have used intratracheal instillation of K-604, which resulted in inhibition of cholesterol esterification in BAL cells without a significant increase in cell death.
Previously, we have shown that nitrogen mustard exposure downregulates the ligand-activated liver X receptor (LXR) (Venosa et al., 2019) and its targets, the efflux transporters Abca1 and Abcg1 (Tontonoz and Mangelsdorf, 2003; Beyea et al., 2007), as well as reducing Acat-1 expression. In ITB alone and ITB + K-604, Abcg1 expression was significantly decreased compared with control (Fig. 4C), highlighting a potential disruption in LXR signaling with ITB that is not rescued by K-604 administration. Although these changes are small, this may be a result of lipid-laden macrophages being only a fraction of the total BAL cell population. Regardless, additional investigation into LXR signaling and its role in lipid-laden cell formation in the context of ALI is needed. However, the changes in efflux transporter expression are insufficient to explain the lipid accumulation seen in BAL cells with ITB.
Altering lipid accumulation within macrophages can significantly alter the inflammatory phenotype, and indeed both alveolar and interstitial macrophage populations were shifted in response to ITB (Figs. 5 and 6). There was a significant loss of mature alveolar macrophages in response to ITB (Fig. 5B), cells that are important in the maintenance of lung homeostasis (Hussell and Bell, 2014). This population was preserved by K-604, which is consistent with reduced post-injury inflammation. Alveolar macrophages play an essential role in surfactant recycling (Weaver and Conkright, 2001; Laskin et al., 2019), and preservation of this population may mediate the K-604-dependent improvement in surfactant homeostasis.
Macrophages are recruited to the lung lining in response to injury (Golden et al., 2022). Here, this response was blunted with K-604, as evidenced by reduced migratory macrophages in the BAL (Fig. 5C) and lowered activation (Fig. 5E). Phenotypic changes were also observed in interstitial macrophages, where mature activated macrophages were increased in response to ITB (Fig. 6B), possibly as a mechanism to replace the diminished mature alveolar macrophage population in the lung (Fig. 5). CD206+ interstitial macrophages are immunoregulatory in nature (Bedoret et al., 2009), producing cytokines that promote cell growth and differentiation as well as favoring repair and wound healing (Schyns et al., 2019), and therefore may be pro-fibrotic. Chronic immune cell activation alters tissue composition by collagen deposition and can lead to fibrosis (Klingberg et al., 2013) and negatively impact lung function (Klingberg et al., 2013; Martinez et al., 2017). In this regard, it is significant that this CD11c+/CD206+ population of cells is reduced by K-604, suggesting Acat-1 inhibition can improve resolution and reduce fibrosis. It will be important to study these changes following ITB and K-604 treatment at time points longer than 7 days.
[Fig. 6 caption: flow cytometric analysis of macrophages isolated from lung tissue. Cells from digested lung tissue were immunomagnetically separated based upon CD45 expression; CD45+ cells were isolated, immunostained, and analyzed (Supplemental Fig. 2). Cells expressing F4/80 and CD11b in the absence of Siglec F were categorized as interstitial macrophages and analyzed for CD11c/CD206 expression (A). The percentage of mature, chronically activated cells (CD11c+/CD206+, green box) was calculated from the total number of interstitial macrophages (B). Values are mean ± SEM (n = 8-13 per group); (*) significantly different from control (P < 0.05); (#) significantly different from ITB (P < 0.05).]
[Fig. 7 caption: K-604 inhibits ITB-mediated induction of the inflammatory enzymes Nos2 and Arg1. BAL cells were analyzed for Nos2 and Arg1 expression by RT-qPCR; 2^(−ΔΔCt) values were calculated using Gapdh as the control gene and PBS as the control condition to obtain fold change in expression. Values are mean ± SEM (n = 8 per group); (*) significantly different from control (P < 0.05); (#) significantly different from ITB (P < 0.05).]
Arginine metabolism is critical to macrophage activation, with Nos2 and Arg1 expression characterizing acute and chronic activation (Rath et al., 2014;Orecchioni et al., 2019). Both Nos2 and Arg1 expression were significantly increased in BAL macrophages from ITB animals (Fig. 7, A and B), a response that was inhibited by K-604. K-604 therefore reduces activation without bias toward either Nos2 or Arg1, implying that Acat-1 inhibition reduces both acute and chronic macrophage activation.
In conclusion, in the ITB-mediated ALI model, macrophages experience a lipid-rich extracellular environment due to direct epithelial damage, creating a biologic scenario in which there may be increased cholesterol and lipid accumulation. Macrophages in our model become large and display characteristics of the acutely-activated phenotype, which was mitigated by the intratracheal administration of an Acat-1 inhibitor. In addition to reducing cholesterol esterification, K-604 reduces the inflammatory effects of ITB-mediated lung inflammation, resulting in preservation of mature alveolar macrophages, reduced recruited and interstitial cell activation, normalization of surfactant alterations, and reduced injury. Acat-1 thus presents a potential pharmacological target in ALI; however, further investigations into the mechanisms underlying the effects of Acat-1 inhibition are necessary. | 2022-08-17T06:16:18.450Z | 2022-08-15T00:00:00.000 | {
"year": 2022,
"sha1": "3adf2c2e08497cdab95657ce26b8f01c50b8f2de",
"oa_license": "CCBYNC",
"oa_url": "https://jpet.aspetjournals.org/content/jpet/382/3/356.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a934ed9b65779addeb39ac6f68bbed0450a791ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125658154 | pes2o/s2orc | v3-fos-license | Entire solutions in nonlocal monostable equations: Asymmetric case
This paper is concerned with entire solutions of the monostable equation with nonlocal dispersal, i.e., $u_{t}=J*u-u+f(u)$. Here the kernel $J$ is asymmetric. Unlike symmetric cases, this equation lacks symmetry between the nonincreasing and nondecreasing traveling wave solutions. We first give a relationship between the critical speeds $c^{*}$ and $\hat{c}^{*}$, where $c^{*}$ and $\hat{c}^{*}$ are the minimal speeds of the nonincreasing and nondecreasing traveling wave solutions, respectively. Then we establish the existence and qualitative properties of entire solutions by combining two traveling wave solutions coming from both ends of the real axis and some spatially independent solutions. Furthermore, when the kernel $J$ is symmetric, we prove that the entire solutions are 5-dimensional, 4-dimensional, and 3-dimensional manifolds, respectively.
Although the traveling wave solution is a key object characterizing the dynamics of nonlocal dispersal equations such as (1), it is not enough to understand the whole dynamics. In fact, traveling wave solutions are special examples of the so-called entire solutions, which are defined in the whole space and for all time t ∈ R. From the viewpoint of biology, entire solutions can model new spreading and invasion behaviors of epidemics and species, respectively; see [28,30,45]. Moreover, entire solutions can help with the mathematical understanding of transient dynamics and the structures of global attractors. However, global attractors are rather complicated. Some new types of entire solutions other than traveling wave solutions have been established for various evolution equations with spatially homogeneous environments; see e.g. [8,18,19,27,29,37,41] for reaction-diffusion equations with and without delays, [38] for delayed lattice differential equations with global interaction, [25] for reaction-advection-diffusion equations, and [30,36,40,45] etc. for reaction-diffusion or discrete model systems.
Recently, Li et al. [26] and Sun et al. [33] constructed new types of entire solutions for symmetric nonlocal equations with monostable and bistable nonlinearity by combining two traveling wave solutions coming from both ends of real axis and some spatially independent solutions. And Dong et al. [16], Li et al. [28] and Zhang et al. [45] further considered the entire solutions for symmetric nonlocal systems. However, the issue of the existence of entire solutions for nonlocal equation (1) is still open when J is asymmetric. As for entire solutions, it is natural to ask what is the difference between symmetric equations and asymmetric equations.
In fact, there is a close relationship between the nonlocal equation (1) and a local version. Let $J(x)=\frac{1}{\varepsilon}P(\frac{x}{\varepsilon})$ with $\varepsilon>0$, where $P(x)$ is a general mollification function with support $x\in[-1,1]$. If $u(x)$ is smooth, then Taylor's formula implies that
$$J*u-u=\varepsilon\beta u'+\varepsilon^{2}\alpha u''+o(\varepsilon^{2}), \qquad (2)$$
where $\alpha=\frac{1}{2}\int_{\mathbb{R}}P(-z)z^{2}dz$ and $\beta=\int_{\mathbb{R}}P(-z)z\,dz$. Thus there is a formal analogy between $J*u-u$ and $\varepsilon^{2}\alpha u''+\varepsilon\beta u'$ (see [12]). When $J$ is symmetric, it is clear that $\beta=0$, and then (1) can be viewed as an approximation of the classical Laplace diffusion equation
$$u_{t}=\varepsilon^{2}\alpha u_{xx}+f(u). \qquad (3)$$
Therefore, equation (1) indeed shares many properties of equation (3). For instance, both of them have a maximum principle, and stationary solutions are constants [17].
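This expansion is easy to verify symbolically. The following sketch checks, for one concrete asymmetric mollifier P supported on [−1, 1] (the specific polynomial is only an example), that J*u − u agrees with εβu′ + ε²αu″ to second order in ε.

```python
# Symbolic check of J*u - u ≈ εβu' + ε²αu'' for J(x) = P(x/ε)/ε.
import sympy as sp

x, z = sp.symbols("x z")
eps = sp.symbols("epsilon", positive=True)
u = sp.Function("u")

# An asymmetric nonnegative bump on [-1, 1] with unit mass (so β ≠ 0 here).
P = sp.Rational(3, 4) * (1 - z**2) * (1 + z / 2)
assert sp.integrate(P, (z, -1, 1)) == 1

alpha = sp.Rational(1, 2) * sp.integrate(P.subs(z, -z) * z**2, (z, -1, 1))
beta = sp.integrate(P.subs(z, -z) * z, (z, -1, 1))

# (J*u)(x) = ∫ P(z) u(x - εz) dz after the substitution z = y/ε
series = u(x - eps * z).series(eps, 0, 3).removeO().doit()
expansion = sp.integrate(sp.expand(P * series), (z, -1, 1))
target = u(x) + eps * beta * u(x).diff(x) + eps**2 * alpha * u(x).diff(x, 2)
print(sp.simplify(expansion - target))   # -> 0
```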
Especially, the results on the existence and related properties of traveling wave solutions of equation (1) are similar to those of the reaction-diffusion equation (3); see [3,7] for bistable nonlinearity, and [4,13,31,32] and the references therein for monostable nonlinearity. However, for a general asymmetric kernel J, we see from (2) that a better analogy than (3) for (1) is the following elliptic equation:
$$u_{t}=\varepsilon^{2}\alpha u_{xx}+\varepsilon\beta u_{x}+f(u). \qquad (4)$$
Thus, there is an essential difference between symmetric and asymmetric equations, which gives the two types of equations many different dynamical properties. For instance, Coville et al. [12] and Sun et al. [34] have shown that the minimal speed c* of the asymmetric equation (1) may be nonpositive (for the symmetric equation (1), the minimal speed c* > 0). Additionally, asymmetric equations lack symmetry between decreasing and increasing traveling wave solutions. Therefore, resolving the issue of entire solutions of the asymmetric equation (1) represents the main contribution of our current study.
In order to establish the entire solutions of (1), it is necessary to study the properties of the minimal speeds c* and ĉ*.
Theorem 1.1. Assume that (J1) and (F1) hold. Then c* + ĉ* ≥ 0.
The assertion (iii) of Theorem 1.4 implies that, when c > 0 and ĉ < 0, the entire solution behaves as two traveling wave solutions φ_c(x − ct + h_1) and φ̂_ĉ(x + ĉt + h_2) moving in the same direction as t → −∞, and the decreasing wave solution φ_c(x − ct + h_1) moves faster than the increasing one φ̂_ĉ(x + ĉt + h_2) since c > −ĉ. Finally, the entire solution u⁻(x, t) tends to 1 as t → +∞.
Remark 1.5. For any c > c*, ĉ > ĉ* and (c, ĉ) ∈ C₋₊, the equation (1) also has an entire solution u.
So far, we have only constructed entire solutions of (1) for c > c* and ĉ > ĉ*. Indeed, Coville [12] also guaranteed the existence of monotone traveling wave solutions with the critical speed c = c* (c* ≠ 0) under conditions (J1) and (F1). If we further assume that f satisfies the additional condition (F2), then the existence of entire solutions of (1) combining the traveling wave solutions with critical speeds c = c* and/or ĉ = ĉ* can also be established.
Theorem 1.6. Assume that (J1) and (F1)-(F2) hold. When c*ĉ* ≠ 0, for any c ≥ c*, ĉ ≥ ĉ* with cĉ ≠ 0, h_1, h_2 ∈ R, k > 0 and χ_1,
Recall that in [26], we obtained the existence and related properties of entire solutions for the symmetric equation (1) with monostable nonlinearity; that is, J satisfies ∫_R J(y)dy = 1 and J is compactly supported, and f satisfies (F1). Obviously, Theorem 1.2 and Corollary 1.3 in this paper completely cover the previous results in [26]. However, property (iii) in Theorem 1.4 cannot occur when J is symmetric. In fact, when J is symmetric, φ_c(0) = 1/2, φ̂_ĉ(0) = 1/2, and the traveling wave solutions are unique. However, in [26] we did not consider the uniqueness and continuous dependence of the entire solutions on the parameters c, ĉ, h_i (i = 1, 2) and k. We devote part of this paper to this topic. Unfortunately, the method we use here depends on the symmetry of the kernel J. Therefore, we only consider the uniqueness and continuous dependence of the entire solutions of (1) with symmetric J. The result can be stated as follows.
The greatest difficulty in proving the continuous dependence of the entire solution w(x, t) is that the mathematical expression of the solution of the Cauchy problem of (1) is too abstract, since the kernel J is abstract. In this paper, we overcome this difficulty by means of the Fourier transform.
The rest of the paper is organized as follows. In Section 2, we give the existence of the solutions for Cauchy problem of (1) and a comparison theorem which is essential in getting the entire solutions we desired. Sections 3, 4 and 5 are devoted to the proofs of Theorems 1.1, 1.2, 1.4 and 1.6, respectively. In the last section, we prove the continuous dependence on parameters and the uniqueness of entire solutions obtained in [26], and end this paper with an important remark.
2. Preliminaries. In this section, we make some preparations for obtaining our main results. Since the main theorems are proved with the aid of a sequence of solutions of Cauchy problems starting at times −n with suitable initial values, we first consider the following Cauchy problems of (1). Furthermore, if for any τ < T, ū is a supersolution of (1) on (x, t) ∈ R × [τ, T), then ū is called a supersolution of (1) on (x, t) ∈ R × (−∞, T). Similarly, a subsolution u(x, t) can be defined by reversing the inequality (15).
Next, we show some a priori estimates for u_n(x, t), uniform in n, which allow us to take the limit as n → +∞. Moreover, some properties fulfilled by the functions u_n(x, t) will carry over to the limit function u(x, t).
Lemma 4.1. There exists a positive constant C_1, independent of x, t, n and (χ_1c, χ_2ĉ, χ_1h_1, χ_2h_2, χ_3k), such that for all n ∈ N, t ≥ −n + 1 and x ∈ R the stated estimate holds. In addition, if there exists C_2, independent of n, x and (χ_1c, χ_2ĉ, χ_1h_1, χ_2h_2, χ_3k), satisfying the corresponding bound, then there exist positive constants M and M′, independent of x, t, n and (χ_1c, χ_2ĉ, χ_1h_1, χ_2h_2, χ_3k), such that the solutions u_n(x, t) of (22) satisfy the stated inequalities for any x ∈ R, t > −n and η > 0.
Proof. Since the functions u_n are uniformly bounded and f is of class C², it is easy to show that (24) holds. Moreover, in view of J ∈ L¹(R), there exists L > 0 such that the required bound holds. Thus, the rest of the proof is similar to that of [26, Proposition 2.5].
The following lemma gives an upper bound for the functions u_n(x, t) which is independent of n.
Proof. We will only prove u_n(x, t) ≤ Π_1(x, t) for all (x, t) ∈ R × [−n, +∞), since the proofs of the other inequalities are similar. Without loss of generality, we assume χ_1 = 1 and set v_n(x, t) = u_n(x, t) − φ_c(x − ct + h_1). We then compare v_n(x, t) with the solution of a linear equation. Obviously, 0 ≤ v_n(x, t) ≤ 1 for (x, t) ∈ R × [−n, +∞). Since φ_c(x − ct + h_1) is a solution of (1) and f′(s) ≤ f′(0) for any s ∈ [0, 1], a direct computation shows that w_n(x, t) is a solution of the following Cauchy problem. Note that φ̂_ĉ(ξ) ≤ B_ĉ e^{μ(ĉ)ξ} for all ξ ∈ R. Thus we can bound v_n(x, −n).
It then follows from Lemma 2.4 that
The proof is complete.
Proof of Theorem 1.2. (i) Note that φ_c(x − ct) and φ̂_ĉ(x + ĉt) are monotone traveling wave solutions of (1) which satisfy (5) and (6), respectively. Then for any c > c*, ĉ > ĉ* with cĉ ≠ 0, we have |φ′_c| ≤ (2 + M_3)/|c| and |φ̂′_ĉ| ≤ (2 + M_3)/|ĉ|, where M_3 = max_{s∈[0,1]} f(s), which makes (25) of Lemma 4.1 hold. Thus the solutions u_n(x, t) of (22) are globally Lipschitz in x. Following (24) and Lemma 4.1, the Arzelà-Ascoli theorem and a diagonal extraction imply that there exists a subsequence {u_{n_i}}_{i∈N} of {u_n}_{n∈N} such that u_{n_i}(x, t) converges uniformly to a function u(x, t) in T. From the equation satisfied by u_n(x, t), we know that the limit function u(x, t) is an entire solution of (1). Furthermore, since f is of class C², the same estimate as (24) also holds for u(x, t). That is to say, there exists a constant C_3, independent of x, t and (χ_1c, χ_2ĉ, χ_1h_1, χ_2h_2, χ_3k), such that |u_t|, |u_tt| ≤ C_3 for any (x, t) ∈ R².
Therefore, we have the first inequality. The proof of the second inequality is similar.
Next, we prove Corollary 1.3.
It then follows from Lemma 2.4 that
since ĉ > 0. And since f is of class C², the conclusion is obvious. (ii) can be proved by the same argument as that in [26, Theorem 1.1]; thus we omit the details. This completes the proof.
5. Proof of Theorem 1.6. Let v_n(x, t) be the unique solution of (22) with u_n(x, t) replaced by v_n(x, t); we first give an upper bound for v_n(x, t).
According to Lemma 5.1, the remaining proof of Theorem 1.6 is similar to that of Theorem 1.2, and is omitted.
6. Proof of Theorem 1.7. In this section, we prove Theorem 1.7 under the conditions (J2) and (F1). Since this is a continuation of [26], for convenience we use φ̂_ĉ(−x − ĉt) instead of φ̂_ĉ(x + ĉt) as the nondecreasing traveling wave solution of equation (1).
Lemma 6.1. The functions φ_c(z) are continuous with respect to c ∈ (c*, +∞) in C¹_loc(R).
Proof. Note that M_3 = max_{s∈[0,1]} f(s), f ∈ C²(R) and J is compactly supported. By differentiating the equation satisfied by φ_c, it is easy to see that there exists a constant M_5 > 0, independent of x and c, such that the stated bound holds. If c_l → c ∈ (c*, +∞), then by the uniform boundedness of |φ_{c_l}(z)| and |φ′_{c_l}(z)| in z ∈ R and l ∈ N, and by a diagonal extraction process, there exists a subsequence c_{l_i} such that φ_{c_{l_i}} → φ in C¹_loc(R). By passing to the limit c_{l_i} → c, the function φ is nonincreasing in R; since the φ_{c_l} are normalized at 0, it follows that φ(0) = 1/2. Note that f is positive on (0, 1); this yields φ(−∞) = 1 and φ(+∞) = 0. Therefore, φ is a traveling wave front of (1) with speed c. Following Carr and Chmaj [4], we have φ ≡ φ_c. Therefore, the whole sequence φ_{c_l} → φ_c in C¹_loc(R) as l → +∞.
We claim that φ_{c_l} → φ_{c_0} in C¹_loc(R) as l → +∞. Indeed, since c_l → c_0 as l → +∞, there exist a subsequence c_{l_i} and a function φ such that φ_{c_{l_i}} → φ in C¹_loc(R), where φ is nonincreasing and satisfies the limit equation. On the other hand, by [34], there exist two constants q > 1 and γ > 1, independent of c_l, such that, as l → +∞,
e^{−λ(c_0)z} − q e^{−γλ(c_0)z} ≤ φ(z) ≤ e^{−λ(c_0)z} + q e^{−γλ(c_0)z} for any z ∈ R,
which implies that φ is not a constant and satisfies lim_{z→+∞} φ(z)e^{λ(c_0)z} = 1. Then it follows from [34] (or Carr and Chmaj [4]) that φ ≡ φ_{c_0}. Consequently, the whole sequence φ_{c_l} → φ_{c_0} in C¹_loc(R) as l → +∞. Furthermore, note that the function φ̃_c := φ_c(· + ln α_c/λ(c)) is also a solution, with lim_{z→+∞} φ̃_c(z)e^{λ(c)z} = 1. Thus, in view of the uniqueness of traveling wave solutions in [4] and [34], φ̃_c is uniquely determined by this normalization. In order to prove that α_c is continuous in c at c_0, it is enough to show that φ_c(0) is continuous in c at c_0. We argue by contradiction. Assume that φ_{c_l}(0) → φ_{c_0}(0) but α_{c_l} ↛ α_{c_0} for a sequence c_l → c_0. Then, without loss of generality, there exist a real ε > 0 and a subsequence c_l → c_0 such that α_{c_l} ≤ α_{c_0} − ε. Consequently, passing to the limit and using that the function φ_c(·) is continuous in C¹_loc(R) with respect to c, one obtains an inequality that is impossible because φ_{c_0} is decreasing. Since φ_c(z) is continuous in c at c_0 for any z ∈ R, it is obvious that φ_c(0) is continuous in c at c_0. This completes the proof.
Proof. Fix c_0 ∈ (c*, +∞) and let c_l → c_0 as l → +∞ with c_l > c* for each l ∈ N.
We argue by contradiction. Assume that A_{c_l} → A_0 ∈ R ∪ {∞} as l → +∞ (up to extraction of some subsequence) with A_0 > A_{c_0}, and choose b ∈ (A_{c_0}, A_0). Then there exists L ∈ N such that for any l > L, A_{c_l} > b. On the other hand, since α_{c_l} → α_{c_0} ≤ A_{c_0} and λ(c_l) → λ(c_0), there exists a constant z_0 > 0, independent of c_l, such that e^{λ(c_l)z} φ_{c_l}(z) ≤ b for any |z| > z_0.
By the convergence of φ_{c_l} in C¹_loc(R) and the equicontinuity of e^{λ(c_l)z} in l, there exists L′ > L such that e^{λ(c_l)z} φ_{c_l}(z) ≤ b for any l > L′ and z ∈ [−z_0, z_0]. Consequently, e^{λ(c_l)z} φ_{c_l}(z) ≤ b for any l > L′ and z ∈ R, which contradicts A_{c_l} > b for any l > L. The proof is complete.
In view of the a priori estimate (24) and Lemma 4.1, there exists a function w(x, t) such that w_l(x, t) → w(x, t) as l → +∞ (up to extraction of some subsequence) in the sense of T. In particular, the function w(x, t) is an entire solution of (1) and also satisfies the estimate (24) and Lemma 4.1. Since the functions ξ_l(t) are uniformly bounded in C²(R), we can assume that they converge in C¹_loc(R) to a function ξ(t), which is a solution of ξ′ = f(ξ) in R with ξ(t) → ke^{f′(0)t} as t → −∞.
From [26], we then obtain that the function w(x, t) fulfills the corresponding estimate. Now we prove that w̃(x, t) ≡ w(x, t) = w_{c,ĉ,h_1,h_2,k}(x, t) for any (x, t) ∈ R². Recall that the functions w_n(x, t) converge to the function w(x, t) in the sense of T, where the w_n(x, t) are solutions of the Cauchy problems (w_n)_t = J * w_n − w_n + f(w_n), x ∈ R, t > −n, with the initial conditions w_n(x, −n) = w_{n,0}(x) := max{φ_c(x + cn + h_1), ke^{−f′(0)n}, φ_ĉ(−x + ĉn + h_2)}.
It is easy to see that y_n and z_n satisfy the stated asymptotics as n → +∞. In fact, the formula for y_n comes directly from the equality φ_c(y_n + cn + h_1) = ke^{−f′(0)n} and the asymptotic behavior of φ_c given by (7); so does that for z_n. Notice that A_c e^{−λ(c)z} ≥ φ_c(z) for any z ∈ R. By (36) and the definition of (y_n, z_n), we get the corresponding bound.
On the other hand, denote by Ĵ(ξ) the Fourier transform of J. Under the condition (J2), it is obvious that Ĵ(ξ) is differentiable, Ĵ(ξ) ∈ L¹(R) and Ĵ′(ξ) ∈ L²(R). In addition, because J is compactly supported, it follows from [5] that Ĵ(ξ) ∼ 1 − Aξ² + o(|ξ|²) and Ĵ′(ξ) ∼ −2Aξ as ξ → 0, where A = −(1/2)Ĵ″(0) > 0. According to [22, Lemmas 2.1 and 2.2] (see also [1]), the fundamental solution S(x, t) of the linear Cauchy problem u_t = J * u − u is the solution with initial value u_0 = δ_0, and it can be decomposed as S(x, t) = e^{−t}δ_0(x) + K_t(x), where K_t(x) = ∫_R (e^{t(Ĵ(ξ)−1)} − e^{−t}) e^{ixξ} dξ satisfies ‖K_t‖_{L¹(R)} ≤ 2 for any t > 0. It is then easy to see that ‖S(·, t)‖_{L¹(R)} ≤ 3 for any t > 0. Furthermore, by [5, Lemma 2.2 and Remark 2.1], the corresponding decay estimates hold for some positive constants c, δ and any t > 0.
Now fix a couple (x_0, t_0) ∈ R². For |t_0| < n, we can compare w − w_n with a solution of the linear equation with an initial condition at time −n given by the right-hand side of inequality (37). Thus we obtain a representation whose three terms on the right-hand side we call I, II and III, respectively. Consider the first integral I and write it as I = I_1 + I_2. With the change of variable z = x_0 − y, we obtain the required bound, using cλ(c) > f′(0). We have y_n → −∞ as n → +∞. Then
∫ S(x_0 − y, t_0 + n) e^{λ(ĉ)y − λ(ĉ)h_2} e^{(f′(0) − ĉλ(ĉ))n} dy → 0,
since y_n → −∞ and e^{(f′(0) − ĉλ(ĉ))n} → 0 as n → +∞. Thus, I → 0 as n → +∞. Similarly, III → 0 as n → +∞. Lastly, the integral II can be divided into three terms II_1, II_2 and II_3 with the obvious notation. First of all,
∫_{y_n}^{z_n} S(x_0 − y, t_0 + n) |ξ(−n) − ke^{−f′(0)n}| dy ≤ 3 e^{f′(0)(t_0+n)} |ξ(−n) − ke^{−f′(0)n}| → 0
as n → +∞, since ‖S(·, t)‖_{L¹(R)} ≤ 3 for any t > 0 and ξ(t) ∼ ke^{f′(0)t} as t → −∞. Now we deal with the term II_2; we call II_{2,1} and II_{2,2} the two terms on the right-hand side of the last inequality. Since
‖K_{t_0+n}‖_{L^∞(R)} ≤ c(t_0 + n) e^{−δ(t_0+n)} ‖J‖_{L¹(R)} + c(1 + t_0 + n)^{−1/2},
we have ‖K_{t_0+n}‖_{L^∞(R)} → 0 as n → +∞.
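The decomposition S(x, t) = e^{−t}δ_0 + K_t can also be explored numerically. The sketch below computes K_t by direct quadrature for one concrete compactly supported kernel; the bump profile, grid sizes, and the 1/(2π) Fourier normalization are assumptions made for illustration.

```python
# Numerical illustration of K_t(x) = (1/2π) ∫ (e^{t(Ĵ(ξ)-1)} - e^{-t}) e^{ixξ} dξ
# for a smooth, even kernel J supported on [-1, 1] with ∫J = 1 (so Ĵ(0) = 1).
import numpy as np

def bump(x):
    return np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1 - x**2, 1e-12)), 0.0)

xs = np.linspace(-1.2, 1.2, 1201)
Jx = bump(xs)
Jx /= np.trapz(Jx, xs)                       # normalize: ∫J = 1

xis = np.linspace(-40, 40, 2001)
# Ĵ(ξ) = ∫ J(x) e^{-iξx} dx, computed by quadrature
Jhat = np.trapz(Jx[None, :] * np.exp(-1j * np.outer(xis, xs)), xs, axis=1)

def K_t(t, x_eval):
    integrand = (np.exp(t * (Jhat - 1)) - np.exp(-t))[None, :] \
        * np.exp(1j * np.outer(x_eval, xis))
    return np.real(np.trapz(integrand, xis, axis=1)) / (2 * np.pi)

x_eval = np.linspace(-10, 10, 201)
for t in (1.0, 5.0):
    Kt = K_t(t, x_eval)
    print(t, np.trapz(np.abs(Kt), x_eval))   # stays bounded, consistent with ||K_t||_{L1} ≤ 2
```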
By the same estimates as above, we can show that the entire solution of (1) is unique.
Finally, we end this paper by giving a meaningful remark to demonstrate the differences caused by the decay rates of the traveling wave solutions and the spatially independent solution when J is symmetric and asymmetric. Remark 6.4. When J is symmetric, for any c, ĉ ≥ c* = ĉ*, we have cλ(c), ĉλ(ĉ) > f′(0). Let y(t) = φ_c(x(t) − ct + h_1) = φ_ĉ(−x(t) − ĉt + h_2). Then (38) holds, which implies that y(t) decays faster than ξ(t) at the points x(t) as t → −∞. However, when J is asymmetric, (38) may not hold, which means the function ξ(t) may not play a part in the construction of entire solutions in Theorems 1.2, 1.4 and 1.6 even if χ_3 = 1. | 2019-04-22T13:12:54.586Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "8dce70ce76984e4173f221f84c961cebaaafe4ff",
"oa_license": "CCBY",
"oa_url": "https://www.aimsciences.org/article/exportPdf?id=cfd73c03-babd-46d7-98e0-dbac33a8a136",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8e640cfb15df491ff7d5fb91d8965d3594b3489d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
5855326 | pes2o/s2orc | v3-fos-license | Quantum Phase Transition between (Luttinger) Liquid and Gas of Cold Molecules
We consider cold polar molecules confined in a helical optical lattice similar to those used in holographic microfabrication. An external electric field polarizes molecules along the axis of the helix. The large-distance inter-molecular dipolar interaction is attractive but the short-scale interaction is repulsive due to geometric constraints and thus prevents collapse. The interaction strength depends on the electric field. We show that a zero-temperature second-order liquid-gas transition occurs at a critical field. It can be observed under experimentally accessible conditions.
At zero temperature most substances exist in the form of solids. The only exception is helium, which undergoes quantum melting at a critical pressure. On the other hand, zero-temperature liquid-gas transitions have never been observed. Indeed, at absolute zero any system must be in its ground state, and condensed phases have lower energy than gases. Driving a quantum liquid into a zero-temperature gas state would be possible if one could control inter-atomic forces. At sufficiently weak inter-atomic interactions, a many-particle bound state (i.e., liquid or solid) would cease to exist and a gas would form instead. While this cannot be accomplished with conventional materials, recent progress in the field of cold dilute gases opens a possibility to tailor a wide range of Hamiltonians with tunable parameters. In this Letter we show that a quantum liquid-gas transition can be observed in a cold gas of polar molecules confined in an optical lattice.
Experiments with cold gases have already allowed the observation of Bose-Einstein condensation, BCS superfluidity and Mott localization 1 . It was proposed that cold gases can serve as realizations of other analogies of electronic matter such as superconductors with p-wave pairing 2 and quantum Hall states 3 . Besides, several new states of matter with different broken symmetries and/or soft modes were predicted in cold atom systems. We address a rather different situation. Liquids and gases have the same symmetry and their only difference is the density: A gas fills all available volume while the density of a liquid is determined by inter-molecular interactions.
While cold gases do not represent true ground states of alkali metals, they are highly stable due to their low density, which guarantees a low probability of multi-particle recombination processes 4. A well-established way to control interaction in cold gases utilizes Feshbach resonances and allows changing both the strength and sign of the short-range inter-atomic potential, which can be modeled as a delta-function in space. If it is repulsive, the system minimizes its energy by occupying all available volume, i.e., it is a gas. Attractive interactions have different effects on fermions and bosons. A Fermi gas undergoes Cooper pair formation, while a Bose gas collapses into a regime with strong many-body recombination. In our case a different inter-atomic potential is needed: a liquid state can be formed if the interaction is attractive at large scales, but the short-range force must be repulsive to prevent collapse.
We demonstrate that such potential can be built from the dipole interaction of polar molecules 5,6 . The sign of the dipole interaction is certainly independent of the distance and depends only on the direction of the dipole moments. It was shown recently 6 that the interaction sign can be made distance-dependent by driving molecules with microwave fields. However, such an approach generates many-body forces 6 . It would be interesting to investigate if a liquid phase is possible in such a system but in this paper we focus on a simpler situation with only two-body interactions. It can be achieved by confining polar molecules in a helical optical lattice, i.e., a potential well of a helical shape. Hexagonal arrays of such helices 7 are among numerous periodic and aperiodic structures used in holographic microfabrication experiments. We will see below that such structures open new possibilities for tailoring cold-atom Hamiltonians which cannot be obtained with usual optical lattices in the form of sinusoidal waves.
We focus on the simplest problem of this type, with the lattice in the form of a single helix (Fig. 1a). It can be produced with an approach similar to Refs. 7, as discussed in the Appendix. An external electric field polarizes molecules along the axis of the helix. In zero field, molecules have zero angular momentum and hence zero average dipole moment, while in strong fields the average dipole moment p can reach values of the order of Debyes. At large inter-molecular distances the dipole interaction V = p²(1 − 3cos²θ)/r³ is attractive (θ = 0). At short distances it becomes repulsive if the angle γ between the tangent to the helix and its axis exceeds the magic angle cos⁻¹(1/√3). The distance dependence of the interaction may include multiple maxima and minima for γ ≈ π/2. In this paper we focus on smaller γ, so that the interaction dependence on the distance s along the helix has the simpler shape shown in Fig. 1b. The potential resembles the Lennard-Jones interaction used in models of thermal liquid-gas transitions, and we show that a transition between a Luttinger liquid and a Fermi or Tonks-Girardeau gas occurs at a critical value of the dipole moment p.
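To make the shape of Fig. 1b concrete, the effective one-dimensional potential V(s) can be computed directly from the helix geometry. In the sketch below (units p = R = 1; the pitch value is illustrative), two molecules polarized along the helix axis repel at short arc-length separations and attract at large ones.

```python
# Effective 1D interaction of two axis-polarized dipoles constrained to a helix:
# V = p² (1 - 3 cos²θ) / r³, with θ the angle between the separation vector
# and the helix (polarization) axis.
import numpy as np

R, d = 1.0, 1.0                               # helix radius and pitch (d ~ R)
pitch_rate = d / (2 * np.pi)                  # dz/dφ along the helix
arc_per_phi = np.hypot(R, pitch_rate)         # |dr/dφ|, constant for a helix

def helix_point(phi):
    return np.array([R * np.cos(phi), R * np.sin(phi), pitch_rate * phi])

def V_of_arclength(s):
    """Dipole-dipole energy at arc-length separation s (in units of p²)."""
    phi = s / arc_per_phi                     # arc length -> winding angle
    r_vec = helix_point(phi) - helix_point(0.0)
    r = np.linalg.norm(r_vec)
    cos_theta = r_vec[2] / r                  # angle with the z (polarization) axis
    return (1 - 3 * cos_theta**2) / r**3

s_grid = np.linspace(0.5, 15, 300)
V = np.array([V_of_arclength(s) for s in s_grid])
# Repulsive (V > 0) at short range, attractive (V < 0) at large s, as in Fig. 1b:
print(V[0] > 0, V[-1] < 0)
```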
The paper is organized as follows: we first formulate the model. Next, we find its phase diagram with a variational method. We support the variational calculation by a proof that for a class of inter-molecular potentials there is a phase transition between a monoatomic gas and a liquid. Then we address the conditions for the experimental observation of the transition.
We consider N ≫ 1 particles of mass m confined in a helical potential well of radius R and pitch d ∼ R. We assume that the wave function is confined in a region of width ∼ w ≪ R around the helix (Fig. 1a). The adiabatic approximation applies, and at low temperatures T ≪ ℏ²/(mw²) the problem reduces to a one-dimensional model with the Hamiltonian
H = −(ℏ²/2m) Σ_i ∂²/∂s_i² + Σ_{i<j} V(s_i − s_j),    (1)
where s_i is the length of the helix between particle number i and a reference point on the helix, and V(s) is the dipole interaction. The repulsive short-range interaction keeps particles apart. Hence, in 1D the particle statistics is unimportant.
In what follows we focus on non-identical particles with s_{n+1} > s_n for all n. The ground state energy is independent of statistics, and the wave functions of identical bosons and fermions can be obtained from the case of non-identical particles by symmetrization or antisymmetrization.
As the first step, we use the self-consistent harmonic approximation (SCHA), well-known in the theory of disordered classical systems 8,9. A better approximation will be considered at the end of the paper. Note that a method similar to SCHA has been used in a related problem 10. We interpret the Euclidean action S as a classical Hamiltonian and ℏ as the temperature. We consider a trial distribution function of the form P ∼ exp(−S_0/ℏ), where the trial action (2) contains the variational parameters h and K; h is related to the average length of the N-particle chain and K describes the fluctuations of the particle positions s_k. We need to minimize the variational free energy E(K, h) = [⟨S − S_0⟩_0 − ℏ ln Z_0]/τ, where ⟨...⟩_0 denotes the average with respect to the trial distribution function P, the trial partition function Z_0 = ∫ Π ds_k exp(−S_0/ℏ), and the time τ → ∞ is the size of the integration domain. The self-consistent harmonic approximation provides analytic results for potentials V(s) in the form of sums of exponentials. Thus, we approximate the potential represented in Fig. 1b by the Morse potential 11, which has a similar shape. Since the interaction rapidly decreases at large distances, we keep only the interaction between neighboring particles (k and k + 1).
The variational calculation includes three steps: 1) we change the variables s_k → s_k + kh so that h drops out of Eq. (2); 2) we calculate E(K, h); and 3) we find the minimum of E(K, h).
Step 2) requires the calculation of the correlation function 12 ⟨s_n(ω)s*_m(ω)⟩ = ℏĜ⁻¹_{nm}(ω)/2. The matrix G_{nm}(ω) differs from a tridiagonal Toeplitz matrix 13 only by the values of two matrix elements. Its inverse can be found analytically using the same method as for tridiagonal Toeplitz matrices 13. In the large-N limit, step 2) yields the variational energy; after the minimization with respect to x, one obtains the energy per particle ε(K). The energy ε(K) has a local minimum at K = 0, where ε(0) = 0. At large A another minimum at K = K_min is possible. The phase transition into the state corresponding to that minimum occurs when ε(K_min) = 0. Solving the latter equation together with dε(K_min)/dK = 0, we find a phase transition at A_c = 2eα²ℏ²/(mπ²).
Thus, at small A we get K = 0 and h = ∞, i.e. the system is a gas. K and h are finite at large A. This means a finite volume at zero external pressure, i.e. a condensed state. In 1D this cannot be a solid, and the calculation of the correlation function ⟨(s_{n+k}(t) − s_n(t) − kh)²⟩ ≈ (ℏ/π√(Km)) ln k at k ≫ 1 shows that the particles form a Luttinger liquid 14.
Our problem is connected with the physics of atoms confined in carbon nanotubes. If one ignores the periodic potential created by the carbon atoms, then a model similar to ours emerges (certainly, one cannot tune the interaction between the atoms and obtain a phase transition in a nanotube). A variational study 10 predicted that an increasing interaction drives a monoatomic gas into a diatomic gas phase before a liquid state can be reached. This contradicts our findings. Below we sketch a proof that a liquid state has lower energy than a di- or multi-atomic gas and hence a multi-atomic gas cannot be the ground state. Qualitatively this reflects the fact that in a liquid every particle has bonds with two neighbors, while there is only one bond per particle in a diatomic molecule. Hence, one expects a lower energy per particle in a liquid. The prediction of a diatomic gas is thus an artifact of the variational method (and Ref. 10 admits such a possibility).
Our proof is based on a variational upper bound for the ground state energy. We will also use that bound to improve SCHA. We focus on models in which only neighboring particles (k and k + 1) interact. A diatomic gas was predicted in Ref. 10 for such a model, and such a model is relevant for us since dipole forces rapidly decrease at large scales. We will discuss elsewhere a derivation of the statements proven below for systems in which all pairs of particles interact via attractive potentials with hard-core repulsion. We first demonstrate that the energy of a triatomic gas is always lower than the energy of a diatomic gas, provided that the interaction potential and the ground state wave functions are well-behaved. Then we use a similar argument to show that the energy of the liquid phase is lower than the energies of all possible multi-atomic gases. In all cases we assume that the potential energy is zero at infinite interparticle separation s_{k+1} − s_k = +∞ and impose the hard-core condition V(0) = ∞.
Consider a system of N particles on the interval −∞ < s_k < ∞. The center of mass is at rest in the ground state. Hence, the wave function can be represented as ψ_N(Δ_1, ..., Δ_{N−1}), where Δ_k = s_{k+1} − s_k. The ground state configuration can be viewed as a set of molecules. A molecule is a bound cluster of n particles k, (k+1), ..., (k+n−1) with finite 15 interparticle distances Δ_k, ..., Δ_{k+n−2}. The molecules are separated by infinite 16 intervals Δ_p. Different molecules do not interact. Hence, the ground state energy is the sum of the energies of the separate molecules. One can easily see that if the ground state includes k molecules with N_1, ..., N_k particles, then the energy equals the sum of the ground state energies of the Hamiltonians (1) with N = N_1, ..., N_k.
We now show that the energy of a diatomic gas can always be decreased if particles rearrange into trimers. A diatomic gas may form if two particles have a bound state with energy ε_2 < 0. The bound state wave function ψ_2(Δ_1) is a normalized eigenfunction of the Hamiltonian H_12, given by Eq. (1) with N = 2. Since the Hamiltonian is real, we can assume that ψ_2 is real 17. In the diatomic gas, the energy per particle is ε_2/2. We now demonstrate that there is a three-particle state with energy per particle ε′ < ε_2/2. Consider the three-particle Hamiltonian H_3 and the trial wave function ψ_3(s_1, s_2, s_3) = ψ_2(Δ_1)ψ_2(Δ_2); let us find its average energy E. The structure of the trial wave function prompts the change of variables s_1, s_2, s_3 → Δ_1, Δ_2, s_3. The Jacobian of this transformation is one. Hence, the average energy (3) splits into several contributions. The integral of the last term in the square brackets reduces to the square of the integral of a full derivative, (∫ dΔ [dψ_2²/dΔ]/2)² = 0. The first four terms in the brackets can be represented as the sum of two two-particle Hamiltonians, H_12 + H_23. Thus, E = 2ε_2, and the energy per particle E/3 = 2ε_2/3 < ε_2/2 because ε_2 < 0. This shows that the energy is lower in the triatomic gas than in the diatomic gas, which thus cannot be the ground state.
A similar argument proves a general statement: consider a system of N particles on the interval −∞ < s_k < ∞, where only nearest neighbors interact, and assume that a bound state exists for n < N particles. Then in the ground state the system consists of no more than n molecules, and exactly one of those molecules contains more than one particle. Indeed, the ground state energy of the infinite system equals the sum of the energies of its molecules with zero center-of-mass velocities. Imagine that the ground state includes two multi-atomic molecules with m and (k+1) atoms. Let ψ_m(Δ_1, ..., Δ_{m−1}) and ψ_{k+1}(Δ_1, ..., Δ_k) be the ground states of the Hamiltonian (1) with N = m and N = k+1, and let the energies of these states be ε_m and ε_{k+1}. The sum of these two energies contributes to the ground state energy ε_g. We now use the argument of the previous paragraph to show that the energy decreases if we replace those two molecules with a monoatomic molecule (whose energy is 0) and an (m+k)-atomic molecule. Indeed, the same calculation as in Eq. (3) with the variational wave function ψ_{m+k} = ψ_m(Δ_1, ..., Δ_{m−1})ψ_{k+1}(Δ_m, ..., Δ_{m+k−1}) shows that the ground state energy ε_{m+k} of the Hamiltonian H_{m+k} (Eq. (1) with N = m+k) cannot exceed ε_m + ε_{k+1}. The wave function ψ_{m+k} is not an eigenfunction of H_{m+k}, and hence ε_{m+k} < ε_m + ε_{k+1}. Thus, there is a state whose energy ε_g − (ε_m + ε_{k+1}) + (0 + ε_{k+m}) is below the ground state energy ε_g. The contradiction means that no more than one multi-atomic molecule exists in the ground state. Hence, if there were more than n molecules in the ground state, then at least n of them would be monoatomic. In such a situation the energy decreases if we form an additional n-atomic molecule from n free particles. This proves that there are no more than n molecules and no more than one of them is multi-atomic. If N ≫ n, then exactly one molecule contains at least N − n + 1 particles, which can be described as a liquid.
The above discussion shows that only two possibilities exist for the ground state in the large-$N$ limit: a monoatomic gas, if no bound states exist at all, and a liquid. Both possibilities take place at different interaction strengths $A$. At $A = 0$ the system must be a gas [18]. At large $A$, SCHA yields a negative upper bound for the energy. Hence, the monoatomic gas with its zero energy cannot be the ground state at large $A$, and a liquid-gas transition must occur at an intermediate $A$. SCHA is exact at small and large $A$. Indeed, it correctly predicts zero energy in the gas phase at small $A$. At large $A$ the fluctuations of $\Delta_k$ are small. Hence, it is legitimate to expand the potential energy $V(\Delta)$ up to second order, which means that the action is quadratic and SCHA is quantitatively valid. However, SCHA is insufficient near the phase transition. This is clear from a comparison of the variational estimate for the transition point $A_c$ and the exact threshold $A_d$ for the formation of diatomic molecules in the Morse potential [11]. Contrary to the above proof, the SCHA result for $A_c$ exceeds $A_d = \hbar^2\alpha^2/(4m)$. Thus, a different method is needed near the transition. We try a variational ansatz of the form $\psi_N = \prod_{k=1}^{N-1}\psi(\Delta_k)$, where the whole function $\psi$ is a variational parameter. From a calculation completely analogous to Eq. (3), we find the variational energy $E = (N-1)E_2$, where $E_2$ is the average energy of a two-particle system in the state $\psi(s_1 - s_2)$ in the Morse potential. The lowest $E_2$ corresponds to $\psi(\Delta)$ which is the ground state in the Morse potential [11]. Hence, $E = (N-1)\epsilon_2$, where $\epsilon_2$ is the two-particle ground state energy. This improves the estimate for the transition point: $A_{c,\mathrm{new}} = A_d$. Obviously, $E$ is lower than the SCHA variational energy $\epsilon = 0$ in the interval $A_c > A > A_{c,\mathrm{new}}$. The improved variational method leads to an unexpected prediction concerning the order of the liquid-gas transition. The size of a diatomic molecule diverges [11] as $l_2 \sim \hbar/\sqrt{m|\epsilon_2|}$ when $A \to A_d$. Hence, the size $l$ of the $N$-atomic bound state $\psi_N$ diverges according to the same law. This means that the density of the liquid $\rho = N/l \sim \sqrt{|\epsilon_2|} \to 0$ at $A \to A_c$. In other words, the variational method predicts a second-order liquid-gas transition. Second-order transitions between Luttinger liquid states with different densities are known [19], but in contrast to Ref. [19] the symmetry does not change at our transition. According to the Landau theory such transitions must be first order. On the other hand, examples [20] are known of second-order transitions in low-dimensional systems for which the Landau theory predicts first order. It would be interesting to find a rigorous description of the 1D liquid-gas critical point.
We see that the phase transition occurs when the characteristic kinetic and potential energies are of the same order of magnitude, $(\hbar\alpha)^2/2m \sim V(1/\alpha)$. For polar molecules in a helical lattice, the characteristic spatial scale is $1/\alpha \sim \pi R$. Hence, near the transition the dipole energy $p^2/(\pi R)^3$ must be of the order of the recoil energy $\hbar^2/m(\pi R)^2$. For a realistic optical lattice with $\pi R \sim 1\,\mu$m and a molecular mass of the order of 100 atomic mass units, we find $p \sim 1$ D at the transition. Such dipole moments are within reach. A difference between a liquid and a gas can be detected in a variant of the Einstein's boxes experiment. A laser beam, orthogonal to the helix, creates a potential barrier in the center of the helical lattice. A gas occupies the whole lattice, and an approximately equal number of particles will remain on both sides of the barrier. The volume of a liquid is much smaller than the system size far from the transition. Hence, all atoms will be found on one side of the barrier.
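The order-of-magnitude estimate above is easy to reproduce numerically. The short script below equates the dipole energy to the recoil energy in SI units; it is a sketch of the estimate, with the factor $4\pi\varepsilon_0$ and the round values $\pi R \sim 1\,\mu$m and $m \sim 100$ amu as the only inputs.

```python
# Back-of-the-envelope check of the dipole moment at the transition,
# equating the dipole energy p^2 / (4*pi*eps0*(pi*R)^3) to the recoil
# energy hbar^2 / (m*(pi*R)^2); lattice scale and molecular mass are
# the order-of-magnitude values quoted in the text.
import math

hbar = 1.054571817e-34      # J*s
eps0 = 8.8541878128e-12     # F/m
amu = 1.66053906660e-27     # kg
debye = 3.33564e-30         # C*m

piR = 1e-6                  # characteristic scale pi*R ~ 1 micron
m = 100 * amu               # molecular mass ~ 100 amu

p = math.sqrt(4 * math.pi * eps0 * hbar**2 * piR / m)
print(f"p at the transition ~ {p / debye:.2f} D")   # comes out near 1 Debye
```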
In conclusion, we have shown that a gas of polar molecules in a helical optical lattice can be driven by an electric field into a Luttinger liquid state via a continuous phase transition. The gas fills all available volume while the volume of the liquid is determined by the interaction strength. We thank G. P. Crawford, J. Dalibard, E. Demler, A. Kitaev and C. Salomon for useful discussions. We acknowledge the support by NSF under Grant No. DMR-0544116 and the hospitality of LPTENS (D. E. F.).
Appendix.
A helical lattice can be obtained as shown in Fig. 2. A circularly polarized wave with $\mathbf{E}_c = E_c(1, i, 0)$ and the wave vector $k(0, 0, \pm 1)$ interferes with $n$ laser beams with wave vectors $\mathbf{k}_m = k(\cos\alpha\cos[2\pi m/n], \cos\alpha\sin[2\pi m/n], \sin\alpha)$ and electric fields $\mathbf{e}_m = E_l(-\sin[2\pi m/n], \cos[2\pi m/n], 0)$. At $n = 6$ this configuration produces a periodic array of identical helices [7]. We focus on the mathematically simplest case of large $n$. The optical potential perceived by an atom in the $L_z = 0$ state is proportional to the intensity of light [4],

$$|E|^2 = 2E_c^2 + [nE_l J_1(k\rho\cos\alpha)]^2 + 2nE_cE_l J_1(k\rho\cos\alpha)\cos[kz(\pm 1 - \sin\alpha) + \phi],$$

where $\phi, \rho, z$ are polar coordinates. Depending on the sign of the detuning, the atoms will be trapped near the intensity minimum or maximum. Both correspond | 2007-12-12T00:00:52.000Z | 2007-12-08T00:00:00.000 | {
"year": 2008,
"sha1": "04bc9ee7619406f41a23deb109e5a7cf6ac0cf22",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0712.1251",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "04bc9ee7619406f41a23deb109e5a7cf6ac0cf22",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
84934856 | pes2o/s2orc | v3-fos-license | Effect of using organic microminerals on performance and external quality of eggs of commercial laying hens at the end of laying
The objective of the present study was to evaluate the effect of using microminerals in organic form on the performance and egg quality of commercial laying hens at the end of laying. Four hundred and eighty Hisex strain hens, 72 to 80 weeks of age, were used. A completely randomized design was used, with six replications and 16 birds per experimental unit. Five diets were evaluated: basal feed supplemented with all microminerals in inorganic form (control); basal feed supplemented with 50% of the microminerals zinc (Zn) + manganese (Mn) + copper (Cu) in organic form and 50% in inorganic form; basal feed supplemented with 50% zinc in organic form and 50% in inorganic form; basal feed supplemented with 50% manganese in organic form and 50% in inorganic form; and basal feed supplemented with 50% copper in organic form and 50% in inorganic form. There was no effect of the diets on egg production, feed intake, feed conversion, or egg shell percentage and thickness. Birds fed basal feed supplemented only with zinc or manganese in organic form produced eggs with lower specific weight. The use of basal feed supplemented with copper in organic form minimized egg loss. However, the best results (lower egg loss, higher specific weight and higher egg weight) were obtained with the basal feed supplemented with the microminerals Zn + Mn + Cu in organic form, which is therefore recommended for feeding commercial laying hens at the end of laying.
Introduction
Organic minerals, or chelated minerals, have been studied by several researchers because they may present better bioavailability than inorganic minerals. According to AAFCO (1997), organic minerals are metal ions chemically linked to an organic molecule (amino acids), forming chemical structures with unique characteristics of stability and high mineral bioavailability. When chelated correctly, microminerals are absorbed and used by the animals in higher proportions than the minerals present in food or in mineral supplements. Absorption of almost 100% of chelated minerals reduces the animals' mineral requirements.
Results reported in the literature show that information about organic minerals is still controversial. In an experiment to evaluate organic sources of zinc (Zn), manganese (Mn) and copper (Cu), alone or in combination, Paik (2001) observed an enhancement in egg production from hens that received the source with organic copper only and from those that received the combination of the three minerals. Egg shell quality was higher when zinc was supplied in its chelated form. However, Mabe et al. (2003) did not observe differences between microminerals in organic or inorganic forms with respect to shell percentage or shell weight per unit of surface area. The addition of microminerals in organic form slightly decreased egg weight but increased egg shell resistance in birds older than 60 weeks. Sechinato (2003) did not observe an enhancement in egg production and quality with the use of organic Zn, Mn, selenium, Cu, iodine and iron compared with the inorganic forms. Nevertheless, isolated supplementation of each organic micromineral in the feed worsened the results compared with supplementation with all organic microminerals together or with inorganic microminerals alone.
The objective of the present study was to evaluate the effect of using organic microminerals on the performance and external egg quality of commercial laying hens at the end of laying.
Material and Methods
The experiment was conducted on the Somai Nordeste S/A farm, in Montes Claros, northern Minas Gerais, Brazil. The experimental period lasted 9 weeks, corresponding to 72 to 80 weeks of age of the birds.
Four hundred and eighty Hisex strain light laying hens were used, housed in a conventional laying shed with feed troughs and nipple-type drinkers, one drinker for every two cages, with a density of four birds per cage. A thermometer was installed in the center of the shed, from which maximum and minimum temperatures were recorded daily. Lighting was provided by fluorescent 40 W lamps, and a 16-hour light period was used.
Treatments consisted of five different diets: basal feed supplemented with all microminerals in inorganic form (control); basal feed supplemented with 50% of the zinc, manganese and copper in organic form and 50% in inorganic form; basal feed supplemented with 50% zinc in organic form and 50% in inorganic form; basal feed supplemented with 50% manganese in organic form and 50% in inorganic form; and basal feed supplemented with 50% copper in organic form and 50% in inorganic form. The diets used (Table 1) consisted mainly of corn and soybean meal, and were isoproteic (16.0% CP) and isocaloric (2,830 kcal ME/kg), formulated to meet the nutritional requirements established for the Hisex strain (Interaves, 2009) adapted to farm conditions. Minerals were supplemented per kg of feed (Table 2).
A completely randomized design was used, with five treatments and six replications, totaling 30 plots. Each plot consisted of four cages, each with four birds, totaling 16 birds per experimental unit.
The following characteristics were evaluated weekly: feed intake (g/bird/day), feed conversion (g feed/g egg), egg production (% eggs/bird/day), egg weight (g), egg loss (broken and cracked eggs, %), specific weight, egg shell thickness (mm) and egg shell percentage (g/g). For the specific weight evaluation, every intact egg produced was collected, and seven saline solutions were used, with densities from 1.066 to 1.090 g/cm³ and a gradient of 0.004 between them. Egg shell thickness was evaluated using three eggs per plot, with readings taken with a micrometer at three points in the equatorial region of the egg, and the egg shell percentage was obtained as the ratio of egg shell weight to egg weight.
Data were submitted to analysis of variance, and the means were compared by the Scott-Knott test (P<0.05) using the SISVAR statistical package described by Ferreira (2000).
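As an illustration of this workflow, the snippet below runs the one-way ANOVA step in Python. Scott-Knott clustering has no standard SciPy implementation, so only the global F test is shown, and all egg-weight values are hypothetical placeholders rather than the study's data.

```python
# Minimal sketch of the ANOVA step; the paper used SISVAR with the
# Scott-Knott post hoc test, so this is only the global comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Six replicates per diet (values in g are illustrative, not from the paper)
diets = {
    "control_inorganic": rng.normal(63.0, 1.2, 6),
    "organic_ZnMnCu":    rng.normal(65.0, 1.2, 6),
    "organic_Zn":        rng.normal(63.5, 1.2, 6),
    "organic_Mn":        rng.normal(63.2, 1.2, 6),
    "organic_Cu":        rng.normal(63.8, 1.2, 6),
}

f_stat, p_value = stats.f_oneway(*diets.values())
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("At least one diet mean differs; follow up with Scott-Knott clustering.")
```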
Results and Discussion
No significant differences (P>0.05) were observed in egg production, feed intake or feed conversion with the supplementation of the feed with microminerals in organic form. However, an effect (P<0.05) of the supplementation was observed on egg weight and egg loss (Table 3).
Most of the studies in the literature do not report effects of organic minerals on production, feed intake or feed conversion (Kienholz, 1992; Dale & Strong, 1998; Ludeen, 2001; Sechinato, 2006), with the exception of Branton et al. (1995), who observed an improvement in the laying percentage of birds that received chelated minerals. Paik (2001) observed an increase in the laying rate of birds 96 to 103 weeks of age that received organic copper and the association of organic Cu + Mn + Zn.
Birds that received feed supplemented with 50% organic Zn + Cu + Mn produced heavier eggs. This result may have been due to a combined action of the three microminerals used, since they are directly associated with egg formation (Underwood, 1999). Zinc is a constituent of carbonic anhydrase, an enzyme involved in egg shell formation (Leeson & Summers, 2001); manganese is the metal activator of enzymes involved in the synthesis of the mucopolysaccharides and glycoproteins that contribute to the formation of the organic matrix of the shell (Georgievski, 1982); and, according to Scott et al. (1982), copper acts as a cofactor of the lysyl oxidase enzyme, which is important in the formation of the collagen cross-links present in the egg shell membrane.
The results obtained for egg weight differed from those reported by Sechinato (2003), who did not observe improvement in this variable with feeds containing several organic microminerals in association or with those microminerals supplied separately. Paik (2001) observed an increase in egg weight with feed containing only chelated zinc compared with feed containing the other organic microminerals individually and with feed containing the association of organic zinc and manganese.
There was less egg loss from the birds that received feed supplemented with organic zinc + copper + manganese and feed supplemented with 50% copper in organic form. These results corroborate those of Mabe et al. (2003), who observed a slight increase in egg shell resistance in birds 60 weeks of age and older that received zinc, manganese and copper in organic form. Sechinato (2003) also verified better results with the combined use of these microminerals in organic form in feed.
The best result found with the use of copper in organic form may be related to its influence on the formation of the egg shell membrane, which may have contributed to higher resistance and generated fewer losses. Nevertheless, there are no data in the literature relating the resistance of this membrane to smaller egg loss by breaking or cracking.
Another factor that may have contributed to the smaller egg loss was the fact that the minerals, mainly copper, were chelated: copper is an antagonist of zinc, that is, it inhibits zinc digestion and absorption by favoring the proportion of soluble zinc associated with large complexes (Pang et al., 2007).
Feed supplementation with organic microminerals did not influence (P>0.05) egg shell percentage or thickness. However, supplementation affected (P<0.05) the specific weight of the eggs (Table 4).
Results similar to those of the present study were observed by Moreng (1992), Balnave & Zhang (1993), Dale & Strong (1998), Mabe et al. (2003) and Utterback et al. (2005), who also did not observe improvement in egg shell thickness and percentage in laying hens supplied with different organic sources of microminerals. However, Rutz et al. (2004) observed enhancement in shell thickness as an effect of organic zinc and manganese supplementation in diets for laying hens. That divergence of results may be explained by the fact that those authors used semi-heavy hens (Isa Brown strain), which may respond differently from the light hens used in this research.
The use of basal feed supplemented only with inorganic microminerals provided specific weight results similar to those obtained with basal feed with organic Zn + Mn + Cu and with basal feed with organic copper. This result was not expected, because there were no differences among feeds for thickness and shell percentage, which are highly correlated with specific weight (Abdallah et al., 1993). These results differed from those of Paik (2001), who verified enhancement in egg specific weight in birds supplied with copper, manganese and zinc in organic form. Mabe et al. (2003), however, when evaluating the inorganic and organic forms of zinc and manganese supplementation, did not find enhancement in egg quality parameters, corroborating Dale & Strong (1998), who also did not obtain enhancement in egg quality with the use of those organic microminerals.
The difference between the results obtained in the present research and the results reported in the literature may be explained, in part, by the great variety of chelated molecules on the market and their differences in bioavailability and stability, as well as their metabolism in the animal organism.

Table 2 - Supplemented minerals in the experimental diets (mg/kg)

Table 3 - Means of egg production, feed intake, egg weight, feed conversion and egg loss obtained with the experimental diets

Table 4 - Means of egg shell percentage, egg shell thickness and egg specific weight obtained with the experimental diets. Means followed by different lowercase letters in the same column differ statistically by the Scott-Knott test (P<0.05). | 2019-03-22T16:06:40.063Z | 2010-02-01T00:00:00.000 | {
"year": 2010,
"sha1": "3427f2b6c24a62d8f21f4bb98499ce9a2505e844",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbz/a/kzXBpmjMx4rD3NccQfbXrft/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3427f2b6c24a62d8f21f4bb98499ce9a2505e844",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Chemistry"
]
} |
209049622 | pes2o/s2orc | v3-fos-license | Speed control of DC motor using conventional and adaptive PID controllers
ABSTRACT
INTRODUCTION
Many applications use dc motors to benefit from their simple, wide, and precise control characteristics. Such applications include, for example, robotic manipulators, steel rolling mills, electric trains, cranes, and electric cars [1,2]. Even the brushless DC (BLDC) motor has been developed, with higher operating efficiency than the classic DC motor [3]. The most flexible control is obtained by means of the Separately Excited DC Motor (SEDM). The best quality of this motor is its ability to sustain high load torque, and it can be used with batteries and solar cells [4]. DC motors are a good field in which to study advanced control algorithms, since their theory can be extended to other types of motors [5].
The speed control of dc motors with power electronic systems is generally obtained by changing the terminal voltage. A PID controller is a good candidate for the speed control of dc motors; it is the most common controller used in industry due to its simplicity and ease of implementation [6]. In addition, the PID controller has been used for controlling the brushless dc motor by designing two controller types, fuzzy logic and PI controllers [7]. Unknown dc motor parameters can be estimated from experimental data on armature current and speed response, or by adapting an adaptive model to a reference model created from experimental data [8]. In some cases the system parameters change during operation, and the PID controller cannot adjust its own gains to cope with these changes, which creates the need to retune the PID gains online, i.e., adaptive PID [9]. Parameter tuning of the dc motor has been carried out with different methods [10]. An adaptive controller for a dc motor has also been designed by utilizing Lyapunov-like function methods on a Digital Signal Processing platform [11].
Proportional Integral Derivative (PID) controllers account for more than 95% of the controllers in industrial process control applications; this is attributed to their robust performance, ease of implementation, and functional simplicity. The major flaw of the PID controller is its high sensitivity to variation in the motor parameters and to load disturbance. Another disadvantage of such a controller is that it is difficult to tune the PID gains [4,6].
Adaptive control techniques can be employed to overcome these deficiencies of the conventional PID controller. Adaptive Proportional Integral Derivative (APID) control provides fast speed response and parameter insensitivity [12]. An adaptation mechanism is combined with the conventional PID control to auto-tune the controller gains during system operation. The adaptation mechanism adopted in this paper is the Recursive Least Squares (RLS) adaptation algorithm.
SEPARATELY EXCITED DC MOTOR AND DRIVE
Voltage-controlled speed control of dc motors was introduced for the first time by Ward Leonard in 1891 [13], and the field has witnessed great advancement since. Choppers are used to obtain a controlled dc voltage from a fixed dc source. The speed of the separately excited DC motor can be controlled through the armature voltage, a method known as voltage control.
Separately Excited DC Motor (SEDM)
The equivalent circuit of the motor is presented in Figure 1 [14]. The armature and field windings are supplied separately, which makes the field excitation independent of the armature supply. The interaction of the field flux and the armature current in the rotor produces torque [4]. When a SEDM is excited by a field current and an armature current, the motor induces a back emf and a torque to balance the load torque at a particular speed. Motor parameters and calculations are shown in Tables 1 and 2. The back electromotive force $e_b$ is proportional to the motor speed $\omega_m$, and the electromagnetic torque $T_e$ developed in the motor is proportional to the armature current $i_a$, as presented by (1) and (2):

$$e_b = K_b\,\omega_m \qquad (1)$$

$$T_e = K_t\,i_a \qquad (2)$$
Applying Kirchhoff's Voltage Law (KVL) to the armature circuit:

$$v_a = R_a i_a + L_a\,\frac{di_a}{dt} + e_b$$

Applying Newton's second law to the mechanical shaft:

$$T_e - T_L = J\,\frac{d\omega_m}{dt} + B\,\omega_m$$

Rearranging the previous equations and taking the Laplace transformation yields the SEDM block diagram shown in Figure 2.
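To make the model concrete, the following sketch integrates the two equations above with forward Euler. Every parameter value is a hypothetical placeholder, not a value from the paper's Tables 1 and 2.

```python
# Minimal sketch of the SEDM dynamic model; all parameters are assumed
# placeholder values for demonstration only.
import numpy as np

Ra, La = 0.5, 0.003      # armature resistance (ohm) and inductance (H)
Kb = Kt = 0.8            # back-emf and torque constants (V*s/rad, N*m/A)
J, B = 0.02, 0.002       # inertia (kg*m^2) and viscous friction (N*m*s/rad)

dt, T_end = 1e-4, 2.0
ia = wm = 0.0            # armature current (A), rotor speed (rad/s)
va, TL = 220.0, 5.0      # applied armature voltage (V) and load torque (N*m)

for _ in range(int(T_end / dt)):
    dia = (va - Ra * ia - Kb * wm) / La      # KVL on the armature circuit
    dwm = (Kt * ia - TL - B * wm) / J        # Newton's law on the shaft
    ia += dia * dt
    wm += dwm * dt

print(f"steady-state speed ~ {wm:.1f} rad/s, armature current ~ {ia:.1f} A")
```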
Chopper Drive of DC Motors
A chopper is a static power electronic device. It acts as a high-speed on/off switch that connects or disconnects the load from the dc source, creating a chopped dc voltage at the load terminal. Chopper drives are widely used in the speed control of SEDMs and can achieve speed ranges above or below the rated speed of the motor. In this paper, a buck chopper, as illustrated in the circuit of Figure 3 [15], is used to control the motor terminal voltage.
Figure 3. Schematic circuit of a Class A buck chopper
The chopper DC voltage transfer function, defined as the ratio of the output voltage to the input voltage, is

$$\frac{V_o}{V_s} = \frac{t_{on}}{T} = D$$

This ratio $D$ is called the duty cycle of the chopper.
CONTROLLER DESIGN
PID controllers have been the most commonly used controllers in industrial practice for more than 60 years [6]; they compose 90% of the controllers in process control fields [16]. A conventional closed-loop PID control system block diagram is illustrated in Figure 4.
For all their merits, conventional PID controllers have some defects, such as the difficulty of tuning the controller gains and poor self-adaptability, both of which justify the need for adaptive control [17]. The difference between a conventional controller and an adaptive controller is that the parameters of the latter, denoted θ, are time-variant.
Model reference adaptive control (MRAC) is a well-developed approach to adaptive control [17]. Its objective is to force the plant to track the response of some given reference model [18,6]. An MRAC controller block diagram is presented in Figure 5; it consists of a reference model $G_m$, a controller, and an adaptation mechanism. An error is generated whenever the actual output of the system fails to track the reference model output; this error is called the tracking error $e_t$, and it is the difference between the actual output of the system $y(t)$ and the reference model output $y_m(t)$:

$$e_t(t) = y(t) - y_m(t)$$

The adaptation mechanism provides the controller with parameters $\theta(t)$ at each sampling time, depending on the values of $e_t(t)$, $u(t)$, and the desired input $r(t)$. The purpose of the adaptive algorithm is to find the controller gains such that $e_t \approx 0$, so that, according to [5], the output $y(t)$ is almost equal to the reference model output $y_m(t)$. The main purpose of this algorithm is to estimate the new parameter vector $\theta(t_k)$ at time instant $t_k$ by adding a correction vector to the previous parameter estimate $\theta(t_{k-1})$ at time instant $t_{k-1}$ [19]. The estimation error is minimized using the RLS algorithm, which recursively and online estimates $\theta(t)$ by applying the standard RLS recursions [5]:

$$K(t_k) = \frac{P(t_{k-1})\,\varphi(t_k)}{\lambda + \varphi^{T}(t_k)\,P(t_{k-1})\,\varphi(t_k)}$$

$$\theta(t_k) = \theta(t_{k-1}) + K(t_k)\,e_t(t_k)$$

$$P(t_k) = \frac{1}{\lambda}\left[P(t_{k-1}) - K(t_k)\,\varphi^{T}(t_k)\,P(t_{k-1})\right]$$

where $\varphi(t_k)$ is the regressor vector, $K(t_k)$ is the adaptation gain, $P(t_k)$ is the covariance matrix (a 3-by-3 matrix, corresponding to the three PID gains), and $\lambda$ is the learning (forgetting) rate of the algorithm.
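As an illustration of how such an adaptation mechanism can be wired up, the sketch below updates three PID gains online with the RLS recursions on a toy first-order plant. The regressor choice $\varphi = [e, \int e\,dt, de/dt]$ and all numeric values are our assumptions for demonstration, not the paper's design.

```python
# Illustrative sketch of RLS-updated PID gains on a first-order plant.
# The regressor phi = [e, integral(e), derivative(e)] and all numbers
# are placeholder assumptions.
import numpy as np

dt, n = 0.01, 2000
a_m = 2.0                                  # reference-model pole (assumed)
theta = np.array([1.0, 0.5, 0.01])         # initial [Kp, Ki, Kd]
P = np.eye(3) * 100.0                      # covariance matrix (3x3)
lam = 0.99                                 # forgetting factor / learning rate

y = ym = 0.0
e_int, e_prev = 0.0, 0.0
r = 1.0                                    # step reference

for k in range(n):
    ym += dt * a_m * (r - ym)              # reference model output
    e = r - y                              # control error
    e_int += e * dt
    e_der = (e - e_prev) / dt
    phi = np.array([e, e_int, e_der])      # PID regressor
    u = theta @ phi                        # PID control law
    y += dt * (-1.0 * y + 0.8 * u)         # first-order plant (assumed)
    et = y - ym                            # tracking error to be minimized
    K = P @ phi / (lam + phi @ P @ phi)    # RLS adaptation gain
    theta = theta - K * et                 # gain update toward e_t ~ 0
    P = (P - np.outer(K, phi @ P)) / lam   # covariance update
    e_prev = e

print("final PID gains [Kp, Ki, Kd]:", np.round(theta, 3))
```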
SIMULATION OF THE CONTROL SYSTEM USING MATLAB/SIMULINK AND RESULTS
In this paper, an APID controller is designed and simulated to control the speed of a chopper-fed dc motor whose load varies as a fan load. To allow a better judgment of the system performance, the motor under the same loading condition is also controlled with a conventional PID controller. Both the PID and APID speed control systems are modeled in MATLAB/Simulink, as presented in Figures 6 and 7, respectively.
The MATLAB/Simulink models consist of the following blocks: a. chopper-fed separately excited dc motor; b. PID controller; c. reference model; d. RLS adaptation algorithm. The motor is subjected to a step input of amplitude 1750 rpm with a load that is proportional to the square of the motor speed. The speed responses of the motor are illustrated in Figure 8. The similarity between the APID control system speed and the reference model's is obvious at first sight. The APID controller achieves excellent tracking, with an $e_t$ of only 27.9362 rpm. The PID controller, on the other hand, has a higher $e_t$ of 40.7299 rpm; the superiority of the APID controller's tracking is illustrated in Figure 9. Table 3 presents the response criteria of both control systems as well as those of the reference model. The APID scheme outperforms the PID scheme in every aspect. It accomplished the fastest rise and settling times, the smallest tracking and steady-state errors, and a negligible percentage overshoot. The percentage overshoot of the APID controller is drastically reduced; it is even within the tolerated $e_{ss}$. This improvement in performance is due to the fact that the APID controller gains, unlike the PID controller gains, are not constant but change according to the RLS adaptation algorithm to achieve perfect tracking of the reference model. The change in the adaptive controller gains is captured in Figure 10. It is clear that the controller gains depart from their initial values, each reaching a value that best suits the application.
CONCLUSION
In this paper, a model reference APID controller was designed to control the speed of a chopper-fed SEDM; an RLS algorithm with rate limiters was implemented that separately adjusts each of the controller gains. The APID control performance was outstanding. It kept the percentage overshoot of the transient response below 0.2048%, accomplished fast settling of no more than 0.1577 s, and kept the final value of the speed within 4 rpm of the desired reference speed. Its ability to track the reference speed was excellent.
The adaptation algorithm played a key role in the performance of the controller. The RLS algorithm updated the values of the controller gains at each time instant, which allowed the controller to adopt PID gain values that adapted the system to changes in the load. This amounts to online tuning of the controller; with a rate limiter at the output of each controller gain, the rate of change of the gains was limited to prevent sudden changes, which ensured system stability.
Based on the results of this paper, the APID controller proved its superiority over the conventional PID-controlled system. The PID controller, while it may perform well enough under constant loading conditions, did not accomplish tracking as good as that of the APID controller in the case of a variable load.
"year": 2019,
"sha1": "9ff340b2d7cd38d03e2287af039d4228e7dccf79",
"oa_license": "CCBYNC",
"oa_url": "http://ijeecs.iaescore.com/index.php/IJEECS/article/download/17950/13157",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7ab0fb199cd08f43b6ef2d2bffa1c2dbace0782c",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
267584449 | pes2o/s2orc | v3-fos-license | Honokiol inhibits gallbladder cancer proliferation and epithelial mesenchymal transformation by suppressing TMPRSS4-induced PI3K/AKT activation
Purpose: To determine the role of TMPRSS4 in gallbladder cancer (GBC). Methods: Quantitative reverse-transcription polymerase chain reaction (qRT-PCR) and western blotting were used to evaluate gene expression, while a CCK-8 kit and a foci formation assay were used to assess cell growth in the GBC-SD and NOZ cell lines. Cell apoptosis was evaluated by flow cytometry, while cell migration and invasion were determined by Transwell assay in GBC-SD and NOZ cell lines. Results: TMPRSS4 was highly expressed in GBC cells compared with the normal cell line HIBEC. Overexpression of TMPRSS4 enhanced cell viability and 2D foci formation in GBC-SD and NOZ cells, whereas knockdown of TMPRSS4 reduced cell proliferation and colony formation in these cell lines (p < 0.05). TMPRSS4 deficiency increased cell apoptosis, while its reinforced expression decreased cell apoptosis in GBC cell lines (p < 0.05). Moreover, TMPRSS4 positively regulated cell invasion and migration in GBC cells by upregulating TWIST1, vimentin and N-cadherin. TMPRSS4 also regulated the activation of the PI3K/AKT signaling pathway. Furthermore, honokiol inhibited gallbladder cancer proliferation and migration and induced cell apoptosis by suppressing TMPRSS4-induced PI3K/AKT activation (p < 0.05). Conclusion: TMPRSS4 plays an important role in regulating the growth, apoptosis and metastasis of gallbladder cancer (GBC) cells. Thus, TMPRSS4 might be a new biomarker in gallbladder cancer.
INTRODUCTION
Gallbladder cancer (GBC) is a relatively infrequent cancer that develops from the biliary tract and is extremely aggressive [1]. Almost all cases of GBC are diagnosed at a late stage due to the lack of early symptoms. To date, adjuvant therapy, including chemotherapy and radiotherapy, is an indispensable option for the treatment of most GBC patients. However, the efficacy of these traditional therapies is short-lived, as most patients relapse rapidly with concomitant chemo- and radiotherapy resistance [2]. Hence, the overall prognosis of GBC is very poor, with an average survival duration of 13.2 to 19.0 months [3]. Therefore, there is an urgent need for progress in GBC pathogenesis research to pave the way for new therapeutic approaches as well as the discovery of new biomarkers for early diagnosis.
TMPRSS4 is currently known to be involved in two processes: embryonic development and cancer [4]. It localizes to the cell membrane, where it participates in regulating signal transduction between cells and their surroundings. It also increases the expression of Heparin-Binding EGF-like Growth Factor (HB-EGF) and the cleavage of its proteolytic form, which plays a significant role in inducing angiogenesis in hepatocellular carcinoma [5]. Studies have shown that TMPRSS4 is highly expressed in a variety of solid malignant tumors, including pancreatic cancer, colorectal cancer, lung cancer, cervical cancer, and gallbladder cancer [6].
Twist1 belongs to the basic helix-loop-helix transcription factor family and is one of the major transcription factors that induce EMT, cell migration and invasion in embryonic development and in cancer cells. Reinforced expression of Twist1 increases EMT and cancer stem-like cell properties in breast cancer cells [7]. TMPRSS4 upregulates the activation of STAT3 and the expression of TWIST1, thus exacerbating prostate cancer migration [8]. Whether TMPRSS4 affects the migration and invasiveness of gallbladder cancer by regulating TWIST1 is not yet known.
EXPERIMENTAL

Cell lines and cell transfection
GBC-SD, HIBEC, and NOZ cells were purchased from the Academy of China (Shanghai, China). All cell lines were cultured in DMEM medium supplemented with 10% FBS and 100 U penicillin and streptomycin. The cell lines were consistently maintained under standard conditions before the commencement of the studies [9]. All experimental manipulations were performed as previously described [10]. For cell transfection, the indicated plasmids or siTMPRSS4 were transfected with Lipofectamine 2000 reagent following the manufacturer's instructions.
RNA isolation and qRT-PCR
Total RNA was isolated using TRIzol reagent (Ambion, CA, USA) and then reverse-transcribed. The cDNA was used as a template to measure TMPRSS4 expression at the mRNA level. Gene expression levels were analyzed using the delta Ct method and normalized against β-actin. The following primers were used to evaluate the mRNA expression levels of the related genes.
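The relative-quantification step described above is commonly implemented as the Livak 2^(-ΔΔCt) method; the sketch below assumes that convention and uses hypothetical Ct values (β-actin as the normalizer, HIBEC as the reference line), not the study's measurements.

```python
# Minimal sketch of delta-delta Ct relative quantification (Livak method).
def fold_change(ct_target_sample, ct_actin_sample,
                ct_target_control, ct_actin_control):
    d_ct_sample = ct_target_sample - ct_actin_sample     # normalize to beta-actin
    d_ct_control = ct_target_control - ct_actin_control
    dd_ct = d_ct_sample - d_ct_control                   # relative to control line
    return 2.0 ** (-dd_ct)                               # 2^-ddCt

# Hypothetical Ct values (not from the paper)
print("GBC-SD TMPRSS4 fold change vs HIBEC:",
      round(fold_change(24.1, 17.0, 27.5, 17.2), 2))
print("NOZ    TMPRSS4 fold change vs HIBEC:",
      round(fold_change(23.6, 16.9, 27.5, 17.2), 2))
```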
Cell proliferation and 2D foci formation assays
Cell growth assessment was carried out as previously described [11], using 1 × 10³ cells seeded in a 96-well plate in triplicate. The absorbance of these cells was used to determine the cell growth rate with the aid of a CCK-8 assay kit (Keygenbio, Nanjing, China). For the colony formation assay, 5 × 10² cells were seeded in a 60 mm dish to form 2D colonies. After 2 weeks, the colonies were stained, photographed, scored and counted.
Evaluation of cell apoptosis
For this determination, 2.5 × 10⁵ cells were seeded into 6-well plates, digested with trypsin, collected, and then fixed with 70% alcohol. After staining with PI and FITC, they were analyzed by flow cytometry.
Determination of cell migration and invasion
Transwell assays were used to assess the migration and invasion of the GBC cell lines. A total of 2.5 × 10⁴ cells were seeded into the upper chamber of the Transwell set-up, and after 24 h, cells attached to the filter were fixed and stained. Cell migration and invasion levels were measured as previously reported [11].
Statistical analysis
Student's t test was performed using SPSS 22.0 for Windows. Data are presented as mean ± SEM of three independent experiments. P < 0.05 was considered statistically significant.
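As a point of reference, the same comparison can be run outside SPSS; the snippet below is a sketch using SciPy's independent-samples t test on hypothetical triplicate values, not the study's data.

```python
# Sketch of a two-group Student's t test (the paper used SPSS 22.0);
# the triplicate values are hypothetical relative-expression numbers.
from scipy import stats

hibec  = [1.00, 0.92, 1.08]   # normalized TMPRSS4 expression, control line
gbc_sd = [8.7, 9.6, 9.1]      # hypothetical values for GBC-SD

t_stat, p_value = stats.ttest_ind(gbc_sd, hibec)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}",
      "(significant)" if p_value < 0.05 else "(not significant)")
```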
TMPRSS4 was upregulated in gallbladder cancer cells
The results are shown in Figure 1. TMPRSS4 was highly expressed in GBC-SD and NOZ compared with HIBEC cells (Figure 1A). Furthermore, TMPRSS4 protein was also overexpressed in GBC-SD and NOZ compared with HIBEC cells (Figure 1B).
TMPRSS4 promotes the proliferation of gallbladder cancer cells
As shown in Figure 2A, TMPRSS4-overexpressing and TMPRSS4-deficient cell lines were successfully established in gallbladder cancer cells. TMPRSS4 overexpression increased cell proliferation compared with the control group (Figure 2B), while knockdown of TMPRSS4 decreased cell proliferation in GBC cell lines compared with negative control cells (Figure 2B). Furthermore, TMPRSS4 upregulation facilitated foci formation in GBC cell lines compared with the control group.
However, TMPRSS4 deficiency suppressed colony formation in GBC cell lines compared with the control group (Figure 3). Taken together, TMPRSS4 positively regulated cell growth in gallbladder cancer cells.
TMPRSS4 increased the migration, invasion and mesenchymalization of gallbladder cancer cells by upregulating TWIST1
TMPRSS4 overexpression increased cell migration and invasion in GBC-SD and NOZ cell lines compared with the control group, while TMPRSS4 deficiency exerted the opposite effect (Figure 6). The results show that Twist, Vimentin, and N-cadherin were upregulated by TMPRSS4 overexpression but suppressed by TMPRSS4 inhibition. E-cadherin and ZO-1 were inhibited in GBC-SD and NOZ cells by TMPRSS4 overexpression. Knockdown of TMPRSS4 decreased the expression of Twist, Vimentin, and N-cadherin but enhanced the expression of E-cadherin and ZO-1 in GBC-SD and NOZ cell lines (Figure 7). Thus, TMPRSS4 positively regulated cell migration and invasion via upregulation of TWIST.
TMPRSS4 activates PI3K/AKT signaling
The effect of TMPRSS4 on the activation of PI3K/AKT signaling was also investigated. As shown in Figure 8, both p-AKT and p-PI3K showed increased expression in GBC-SD and NOZ cells following TMPRSS4 upregulation. In contrast, knockdown of TMPRSS4 reduced the expression of p-AKT and p-PI3K in GBC cell lines.
Honokiol inhibits gallbladder cancer proliferation and migration, and induces cell apoptosis
Different doses of HNK were used to treat GBC-SD and NOZ cell lines. As shown in Figure 9A, cell growth was significantly inhibited by 15 and 30 μM HNK. Similarly, colony formation was also suppressed dose-dependently by HNK in GBC-SD and NOZ cell lines (Figure 9B). Moreover, cell apoptosis was increased by HNK treatment (Figure 9C). Furthermore, HNK treatment reduced cell migration and invasion in GBC-SD and NOZ cell lines compared with the control group (Figure 10). In addition, the expression of TMPRSS4 was decreased by HNK treatment in GBC-SD and NOZ cells, and both p-AKT and p-PI3K expression were inhibited in GBC-SD and NOZ cells (Figure 9C and Figure 11).
DISCUSSION
Gallbladder cancer (GBC) is a fatal disease with a low incidence but poor prognosis. Clinically, it occurs mainly in women, elderly patients, and Native Americans. Surgical resection is the only possible treatment, but success depends on staging, tumor biology, and the integrity of the resection. There are significant differences in survival rate among patients at different stages, with a 5-year survival rate of 50% for stage I cancer and 3% for stage IV cancer [3]. Incidental gallbladder cancer (IGBC) is usually diagnosed at an early stage and therefore has a higher survival rate than non-IGBC. Chemotherapy may be considered as a strategy to increase survival [4]. Therefore, it is essential to identify the key genes in the development and progression of GBC. Over the past two decades, an increasing number of studies have focused on the underlying mechanisms of GBC, including the MAPK/ERK, PI3K/AKT/mTOR, and Notch signaling pathways [12]. However, the functional regulatory mechanisms of GBC are not fully understood.
Since TMPRSS4 is upregulated in GBC cells, its overexpression may be crucial to the development and progression of GBC. TMPRSS4 positively regulated cell growth and 2D colony formation, and cell apoptosis was negatively regulated by TMPRSS4 through the downregulation of Bax. Moreover, TMPRSS4 was identified as a positive regulator of both cell migration and invasion, exerting its effects through the induction of TWIST and Vimentin. Furthermore, TMPRSS4 exhibits a positive regulatory influence on the activity of the PI3K/AKT signaling pathway. Notably, honokiol demonstrates inhibitory effects on gallbladder cancer by suppressing TMPRSS4-induced PI3K/AKT activation, leading to the inhibition of proliferation and migration and the induction of apoptosis in cancer cells.
A previous study reported that TMPRSS4 is highly expressed in gallbladder cancer (GBC) tissue compared with adjacent normal tissues, and that the expression of TMPRSS4 is negatively correlated with the survival rate of patients [13]. The data from this study showed similar expression of TMPRSS4 in GBC cells. However, it is noteworthy that the analysis of TMPRSS4 expression in GBC tissue specimens is limited due to current constraints on sample collection. TMPRSS4 has been demonstrated to play important roles in a diversity of cancers, including breast cancer, prostate cancer, colorectal cancer, hepatocellular carcinoma, gastric cancer, lung cancer, and pancreatic cancer. Min reported that TMPRSS4 upregulated uPA expression to facilitate cell invasion in lung and prostate cancers via activation of the JNK signaling pathway. TMPRSS4 has also been correlated with cell proliferation and aggressiveness in breast cancer [14]. Additionally, Guan et al demonstrated that TMPRSS4 increased thyroid cancer proliferation through the activation of CREB [15]. These studies all reported a positive function of TMPRSS4 in regulating cell growth. Similarly, in the present study, TMPRSS4 played the same role in the regulation of cell proliferation in GBC cell lines, indicating that TMPRSS4 may be a significant marker in tumor cell growth. However, the underlying functional regulatory mechanisms need further investigation.
In prostate cancer, TMPRSS4 positively mediated cell growth and cell invasion by upregulating Slug and cyclin D1 [16]. Another study also demonstrated that TMPRSS4 induced the expression of TWIST1 to promote cell migration through the activation of STAT3 in prostate cancer [8]. Moreover, Lee showed that TMPRSS4 positively regulates the stem-like properties of cancer cells by upregulating SLUG and TWIST1 in prostate cancer [17]. In the present study, it was found that TMPRSS4 is associated with the expression of TWIST, Vimentin, E-cadherin, and ZO-1 in regulating cell viability and EMT. Recently, Gu reported that TMPRSS4 induces cell proliferation and reduces cell apoptosis by activating the ERK1/2 signaling pathway in PDAC [18]. Similarly, in the present study, TMPRSS4 increased cell proliferation and foci formation through the upregulation of TWIST and suppressed cell apoptosis by reducing Bax.
A study has reported that TMPRSS4 induces cell proliferation, cell invasion and EMT by activation of MAPK and AKT in endometrial carcinoma cells. Additionally, previous studies have demonstrated that the PI3K/AKT pathway plays a crucial role in carcinogenesis. The PI3K/AKT pathway has been correlated with cell growth, metastasis and invasion through the suppression of GPER1 [19]. In this study, the role of TMPRSS4 in cell growth and EMT may be regulated by the activation of PI3K and AKT. In order to further confirm the critical role of TMPRSS4 in GBC, a functional rescue assay is required in a future study.
CONCLUSION
TMPRSS4 positively regulates cell growth, cell migration and cell invasion via the upregulation of TWIST1, and negatively regulates cell apoptosis through Bax inhibition. Honokiol inhibits gallbladder cancer proliferation and migration and induces cell apoptosis by suppressing TMPRSS4-induced PI3K/AKT activation. These findings offer insight into a key molecular mechanism in the progression of GBC and, hence, a possible approach to the treatment of GBC.
Figure 1: TMPRSS4 is upregulated in gallbladder cancer cells. (A) TMPRSS4 mRNA in GBC cells and the normal biliary epithelial cell line; (B & C) expression of TMPRSS4 in HIBEC, GBC-SD and NOZ cells. Error bars represent data from three independent experiments, mean ± SD. ***P < 0.001.

Figure 2: TMPRSS4 enhances the proliferation of gallbladder cancer cells. (A) GBC-SD and NOZ cells were transfected with pc-TMPRSS4, siTMPRSS4, and their respective control plasmids; the expression of TMPRSS4 was assessed by western blotting. (B) Cell viability and proliferation of TMPRSS4-overexpressing or knockdown GBC-SD and NOZ cell lines. Error bars denote data from three independent experiments, presented as mean ± SD. **P < 0.01, ##p < 0.01; *compared with the negative control group, #compared with the siNC group.

TMPRSS4 inhibits apoptosis of gallbladder cancer cells

As shown in Figure 4, Bax was suppressed while BCL-2 was increased upon TMPRSS4 overexpression in GBC-SD and NOZ cell lines. In contrast, knockdown of TMPRSS4 enhanced the expression of Bax and inhibited BCL-2 expression in GBC cell lines. In addition, TMPRSS4 overexpression reduced cell apoptosis, whereas TMPRSS4 deficiency increased cell apoptosis in GBC-SD and NOZ cell lines when compared with the control group (Figure 5).

Figure 4: TMPRSS4 impedes the apoptosis of gallbladder cancer cells. The expression levels of Bax and BCL-2 in TMPRSS4 knockdown/overexpression GBC-SD and NOZ cell lines are depicted. Error bars denote data from three independent experiments, presented as mean ± SD. **P < 0.01, ##p < 0.01; *compared with the negative control group, #compared with the siNC group.

Figure 5: TMPRSS4 impedes the apoptosis of gallbladder cancer cells. The impact on cell apoptosis in TMPRSS4 knockdown/overexpression GBC-SD and NOZ cell lines was measured by flow cytometry. Error bars represent data from three independent experiments, mean ± SD. **P < 0.01, ##p < 0.01; *compared with the negative control group, #compared with the siNC group.

Figure 6: TMPRSS4 promotes the migration, invasion and mesenchymalization of gallbladder cancer cells by upregulating TWIST1. Cell migration in TMPRSS4 knockdown/overexpression GBC-SD and NOZ cell lines. Error bars represent data from three independent experiments (mean ± SD). **P < 0.01, ##p < 0.01; *compared with the negative control group, #compared with the siNC group. | 2024-02-11T16:34:02.237Z | 2024-02-05T00:00:00.000 | {
"year": 2024,
"sha1": "215d825226a8e2077fc2e5039334f05ab5079c33",
"oa_license": "CCBY",
"oa_url": "https://www.ajol.info/index.php/tjpr/article/download/264154/249331",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1d9da739823132b795ad49979dc7125a0aa644d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
251623009 | pes2o/s2orc | v3-fos-license | Meaningful work, pleasure in working, and the moderating effects of deep acting and COVID‐19 on nurses' work
Abstract Aims This study aims to verify the association between nurses' perception of the meaningfulness of their work and their pleasure in working, and whether this relationship may change based on the level of deep acting performed to cope with emotional regulation demands and the influence of the COVID‐19 pandemic on the healthcare work. Methods Nurses from both private and public Italian institutions (N = 239) completed an online questionnaire between June 2021 and January 2022. A moderated moderation model was tested through SPSS Process macro. The design is cross‐sectional. Results The results show that the perception of meaningfulness of work is positively associated with pleasure in working, especially in conditions of high deep acting. This relationship is further moderated by the COVID‐19 influence so that the association between meaningful work and pleasure in working is stronger in conditions of high COVID‐19 influence and at higher levels of deep acting performed. Conclusion Perceiving one's work as meaningful can be a job resource that protects nurses from the negative effects of emotional regulation demands and even from the stress of dealing with COVID‐19. Impact The study addresses the problem of nurses' emotional regulation demands at work and evaluates the protective role of meaningful work. The findings could be useful for planning prevention interventions (through training in adaptive emotional regulation strategies) or protection interventions (through the promotion of effective coping strategies and the stimulation of one's work engagement).
Healthcare workers who worked most intensely in contact with COVID-19 had to face severe mental health challenges, reporting higher levels of burnout, emotional exhaustion [4], anxiety, and depression [3,5], and lower levels of self-efficacy, resilience, and frontline work willingness [2]. However, medium-high levels of job satisfaction and work engagement were also found [1,7,8]. These controversial findings could be explained by the fact that during the pandemic, and perhaps because of it, the most important characteristics of the nursing profession, such as a sense of duty and sacrifice, dedication to patient care, and a sense of belonging to the nursing profession, were accentuated [9]. In addition, work-related emotions also intensified (fear, vulnerability, compassion for patients) [2]. On the one hand, this made the risk of burnout and emotional exhaustion more likely [10,11], along with the consequent need to cope with emotional labor [12]. On the other hand, it may have acted in a positive sense, increasing the perception of the meaningfulness of one's job role and profession and acting on engagement and commitment. Therefore, to support the workers who have most suffered the consequences of the pandemic, it is necessary to investigate both the criticalities and the strengths of the work, in order to strengthen reaction and adaptation strategies toward the former and empowerment strategies toward the latter.
So far, the literature has focused on verifying pandemic-related negative effects on work. This study, however, starts from a positive assumption, i.e., that the sense of meaningfulness of one's work leads to pleasure in working, and that this relationship may be moderated by having to deal with emotional labor and by the influence of the pandemic on healthcare work.
Meaningful work and positive work-related outcomes
Work characteristics that could act as resources for workers have been studied for some time [13]. Among these, the perception of meaningful work has been shown to have strong links with workers' satisfaction, retention, engagement, commitment, and quality job performance [14]. Meaningfulness of work is the perception that work corresponds to one's values, and that it is relevant and influential, both for oneself and for others [15]; in other words, it is the perception that one's work has a strong intrinsic and extrinsic significance. This perception may arise from specific job characteristics (e.g., job duties or relationships) or from a sense of belonging to the profession. The latter case is very common in the health professions, as generally, individuals who choose to pursue this career feel a connection with the typical values of the care professions, and in many cases a real vocation. A qualitative study on meaningful work for nurses highlighted three key components, namely the sense of being acknowledged, the possibility of connecting to others, and the perception of making a difference [14]. These characteristics, when present, are more likely to result in higher productivity, engagement, pride, and enjoyment in working. Finding meaning in work may also help to cope with the stressful situations that characterize nursing work [15]. In fact, employees with a stronger perception of work meaningfulness continue to engage in their work even during a crisis, while the same situation constitutes a detrimental distraction for employees with lower work meaningfulness perceptions [16]. According to a recent study [16], individuals with lower levels of work meaningfulness were significantly less engaged than those who ascribed a higher meaning to their work, especially in conditions of higher perceived COVID-19 crisis strength. Lower work meaningfulness was also associated with lower levels of propensity to take charge at work [16]. In particularly complicated times (e.g., during a pandemic), maintaining a motivated workforce is essential. For healthcare workers during the pandemic, with increased workloads and consequently increased work-related stress, fatigue, and emotional exhaustion, it was crucial that they possessed resources that would lighten the burden. Furthermore, given that individual and job resources are associated with better performance, the motivation of healthcare professionals was also crucial for patients.
Emotional labor in the health sector
The work of nurses, by its very nature, is one of the most emotionally loaded jobs. Nurses are constantly required to control their emotions to align them with the emotions that the organization requires them to show. The suppression of negative emotions, such as fear and irritation, and the expression of empathy, understanding, kindness, and positivity [17] are associated with high-quality care services. These emotions, however, may not match the ones that are actually felt. Specifically, maintaining neutral or positive expressions in the face of pain is perhaps the most stressful factor in the health professions, and is closely linked to burnout, stress, job dissatisfaction, fatigue, and health problems [10,11]. The discrepancy between the emotions actually felt and the emotions that must be displayed because they are required by the job role or work organization is called emotional dissonance [12]. There are two strategies for coping with emotional dissonance [12]. Surface acting involves the suppression of real emotions in favor of those required; the subject, therefore, acts on himself and wears a mask, trying to fake emotions that are not actually felt. Deep acting involves an actual attempt to feel the emotions that should be felt; to perform this strategy, a greater cognitive effort is required, but it seems to be more functional. In fact, while surface acting has been widely linked to higher levels of stress, burnout, and negative consequences on psychophysical health [11,18], the findings that associate deep acting with these outcomes are controversial [19]. In some studies, no differences emerged between the two strategies; both were found to be detrimental [10]. Other studies have found a positive relationship between deep acting and positive outcomes such as personal accomplishment and job satisfaction [17,20,21], or negative relationships with negative outcomes, such as stress and burnout [17]. The different paths of surface and deep acting could go back to the processes underlying the two strategies. In fact, deep acting involves the reappraisal of emotions, while surface acting involves the suppression of emotions, which requires more cognitive resources and thus effort [22]. Reappraisal, however, has proven to be effective particularly toward negative emotions, reducing their intensity and thus avoiding physiological responding [23]. It may even generate an increased sense of meaning [17]. What makes surface acting particularly stressful, and does not characterize deep acting, may be the sense of inauthenticity [12]. Workers who are more used to frequent interactions with the public, owing to the nature of the work itself, positively evaluate being authentic in their relationships with others. Therefore, being forced to feign emotions, that is, to be inauthentic, is a source of distress. Research suggests that the importance of expressing one's real emotions varies from person to person [24]. For some, feigning emotions while remaining emotionally detached could be beneficial. For others, such forcing is comparable to forcing one's true self. This could explain the individual choice of which strategy to use to overcome the emotional demand.
Therefore, emotional dissonance could be stressful based on how important it is for individuals to truly express themselves in interactions with others [24]. In other words, the existence or absence of a conflict between emotions felt and emotions shown could moderate the effects of other variables on work-related well-being. In a study [24], subjects who considered the expression of authentic emotions more important felt higher levels of emotional exhaustion and lower levels of job satisfaction when forced to fake emotions they did not really feel.
Emotional regulation behaviors could occur independently of explicit requests from the organization [25]. Another theory that could explain why deep acting is sometimes associated with positive outcomes is the one that highlights the difference between challenge and hindrance stressors [26,27]. According to this theory, hindrance stressors are perceived as burdens that make work even more difficult and are therefore associated with high levels of stress and fatigue. Challenge stressors, however, while retaining all the characteristics and dangers of any other stressor, could be perceived by workers as worthy of their efforts, and therefore could be linked to job satisfaction and work engagement [27]. For example, challenge stressors could be related to the possibility of obtaining promotions, recognition, and career advancement, or they could be perceived as legitimate parts of a job. In the case of nurses, emotional stressors could be included in this second category. The request to regulate one's emotions may be perceived as a challenge stressor because it satisfies one's affiliation needs, is linked to appreciation by colleagues and superiors, is culturally associated with the demonstration of competence, and is also closely related to the objectives of the profession [21]. For example, COVID-19's grip on hospitals may have made the need to regulate emotions even more important in interactions with patients frightened by the new disease and unable to find comfort in their families. Furthermore, it may have amplified the awareness of both the importance of the health professions and the need to increase the quality of care work to respond effectively to the needs of patients. In other words, emotional stressors may be perceived as challenges, as they are identified as an essential part of the job, especially in conditions of high influence from the COVID-19 pandemic.
| Aims
The study aims to (1) test whether meaningful work perception is positively associated with pleasure in working, and (2) test whether this relationship is moderated by deep acting and, in turn, by the perceived influence of COVID-19 (a moderated moderation model).
| Participants
The research population consists of professional nurses employed in both public and private health facilities throughout Italy. The managers and supervisors of the facilities were contacted using contact details publicly available online and invited to forward the e-mail containing the instructions for participating in the research to all their employees, who in turn forwarded it to other colleagues through a snowball sampling procedure. The e-mail explained the research objectives, attached the data-processing and informed-consent documents, and provided the link to the online questionnaire. The inclusion criterion was holding the role of nurse; subjects covering other roles were excluded. For each scale, higher scores indicate a higher presence of the construct. An example item is "I still find my work stimulating, each and every day." Cronbach's α in this study was 0.70.
COVID-19 influence was assessed through the item "How much do you think COVID-19 has affected the quality of your work?" Nurses were asked to rate their perception of the influence on a scale ranging from 1 to 10, where high scores correspond to a very high perceived influence of the pandemic on their work.
| Ethical considerations
The project was approved by the Bioethics Committee of the University of Palermo (protocol no. 72/2022). Each participant gave informed consent. The data were analyzed in aggregate form, and complete anonymity was guaranteed.
| Hypotheses testing
Means, standard deviations, and correlations between the study variables are presented in Table 1. From a first analysis of the correlation matrix, the relationships between the perceived influence of COVID-19 and the other variables have a positive direction.
The correlation between pleasure in working and meaningful work is strong and positive, and a significant positive correlation emerged between COVID-19 influence and deep acting. On the other hand, no statistically significant correlation was found between COVID-19 influence and pleasure in working, which is not associated with deep acting either.
Finally, a weak correlation was found between meaningful work and deep acting. Table 2 shows the hierarchical regression analysis using pleasure in working as the outcome. In the first step, we introduced gender, age, and seniority (years spent in the role) as control variables. In the second step, we added the study variables. Finally, the interaction terms were added in the third step. The final model explains 26% of the variance. Before calculating the interactions, the variables were standardized (mean = 0 and SD = 1). Two interactions were statistically significant: the one between meaningful work perception and deep acting, and the triple interaction between these two variables and the perceived COVID-19 influence. COVID-19 influence has no significant main effect on pleasure in working. However, as hypothesized, meaningful work perceptions and pleasure in working are positively and strongly associated.
To test the hypothesized moderated moderation model, we used model 3 of the PROCESS macro for SPSS. We entered pleasure in working as the outcome, meaningful work perception as the independent variable, deep acting as the first moderator, and COVID-19 influence as the second moderator.
Gender, age, and seniority were entered as covariates. The results, shown in Table 3, confirmed the significant effects of the two interactions that emerged from the hierarchical regression in Table 2. Furthermore, still in line with the JD-R model, 13 when the resource is low, the positive outcome, that is, pleasure in working, decreases significantly as the incidence of stressful factors increases.
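As a hedged illustration of this analysis, the sketch below fits an equivalent three-way-interaction regression in Python with statsmodels; the data file and column names (pleasure_in_working, meaningful_work, deep_acting, covid_influence, gender, age, seniority) are hypothetical placeholders, and the PROCESS macro's bootstrapped probing of conditional effects is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nurses.csv")  # hypothetical data set, one row per respondent

# Standardize the continuous predictors (mean = 0, SD = 1), as was done
# before computing the interaction terms.
for col in ["meaningful_work", "deep_acting", "covid_influence"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# A * B * C in the formula expands to all main effects plus all two-way
# and three-way interaction terms, mirroring PROCESS model 3.
model = smf.ols(
    "pleasure_in_working ~ meaningful_work * deep_acting * covid_influence"
    " + gender + age + seniority",
    data=df,
).fit()
print(model.summary())
```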
In line with the theorized distinction between hindrance and challenge stressors, 26 according to which the former generate fatigue while the latter may generate motivation and engagement, in our study deep acting does not seem to be experienced in the same way by the whole sample. For those who perceive their work as meaningful, deep acting is linked to higher levels of pleasure in working than for those who perceive little meaningfulness, for whom conditions of high deep acting are connected to a marked decrease in pleasure in working.
In this case, deep acting is therefore identified as a hindrance stressor. For those with a high perception of the significance of their work, however, the same stressor previously acting as a hindrance seems to be perceived as a challenge stressor, 26 since, as it increases, pleasure in working also increases.
The results suggest that the perception of the meaningfulness of work makes the biggest difference: perceiving one's work as meaningful generates a positive effect on pleasure in working.
Having to interface with suffering is taken into account, as is the attempt to show kindness and reassuring emotions; this confirms what was found in the literature. 14 This represents one of the core values of the profession and is among the factors that make it so valuable.
From the literature, we know that worker motivation affects the use of different emotional regulation strategies. 17 Similarly, nurses with higher perceptions of meaningful work may choose emotional regulation strategies that are more adaptive and functional for them. In this case, deep acting, that is, the attempt to align one's emotions with those required by the work situation by modifying real emotions, could allow workers to express authentic emotions, which is linked to positive outcomes. 17,20,21 Instead, nurses with a lower perception of meaningful work experience the need to regulate emotions as a source of stress.
These workers may be at greater risk of emotional exhaustion and burnout, since the need to regulate emotions may even amplify the negative emotions that they experience at work. 10,19 In fact, it is also possible that the fraction of the sample that perceives little meaning in their work may have been influenced by negative experiences, not uncommon during the pandemic, especially among frontline staff. This may have generated feelings of frustration and helplessness, which are more difficult to regulate.
| Limitations
The study's main limitations stem from the cross-sectional design, which does not allow causal interpretations of the relationships between the study variables, and from the exclusive use of self-report measures. Furthermore, the convenience sample limits the generalizability of the results, since it is inextricably linked to the pandemic situation experienced in Italy. More specific measures should also be used in future research to assess the influence of COVID-19. Finally, the absence of any indication of the hospital ward to which the nurses belong (due to the need to guarantee anonymity in some institutions) prevented a more specific interpretation of the results based on that information.
| CONCLUSION
During the pandemic, and perhaps because of it, work-related emotions such as fear, vulnerability, and compassion for patients intensified. However, some important characteristics of the nursing profession, such as a sense of duty and sacrifice, and a sense of belonging and dedication to patient care, 9 may also have been accentuated. This has made nurses more exposed to the risk of emotional stress on the one hand, but on the other, it has made them more aware of the meaning of their work. The more this meaning is consistent with the style through which work is actually conducted within the organization, the more the positive feelings associated with work are expected to increase and, consequently, the more the probability that deep acting is chosen as the preferred strategy also increases. 19 Given that not all stressful events can be predicted or avoided, as the experience of COVID-19 has shown, it is necessary to train broader emotion regulation skills. 21 Some practical activities that can be carried out include, for example, regular supervision sessions with a facilitator who builds on workers' previous experiences of emotional regulation and suggests other adaptive ones, combined with coping strategies, up to specific actions aimed at emotional well-being (e.g., mindfulness techniques).
The literature cited and this study show how extremely important it is that people who intend to pursue emotionally stressful jobs are aware of the physical and emotional pressure to which they will be subjected. This suggests enhancing career guidance strategies before career entry and assessing individuals' motivation in the pre-
"year": 2022,
"sha1": "c93433a320979782af0a461aefa38e72cd232f8d",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "6449e576fd64eab4d2f2bd409b84643d6fd49beb",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Response of the building with a stiffening core during an earthquake of February 02, 2018 in the territory of a metropolis
Earthquakes give rise to a significant number of problems that affect environmental, seismic, and economic risks for the local population and construction sites. For the first time in the last 40 years, another local zero-depth earthquake was registered in the territory of the city of Almaty. In terms of intensity, this was a 3-4-point earthquake. At 100 meters from the tectonic fault, there is a 16-storey building with a stiffening core and an engineering seismometric service station. Using AT 1105 sensors and a PCM-8 recorder, instrumental records of accelerations in the basement and on the roof of the building were obtained, and spectral β curves were constructed. The effective duration of the seismic impact in the basement of the building was quite significant, 47-56 seconds. There is undoubtedly an increase in the intensity of local earthquakes compared to 2007-2014. It was found that at the basement level the value of the vertical component is significantly less than the horizontal one. It is assumed that the abnormally high values of acceleration in the horizontal plane are a consequence of the presence of a tectonic fault near the building. Instrumental records of acceleration (accelerograms) can be used both in calculations of the seismic resistance of a structure and in the assessment of environmental, social, economic, and non-economic risks.
Introduction
Earthquakes drastically affect the environment. The entire territory of the Almaty region is prone to earthquakes, which can pose a danger to buildings and structures in the city. The Almaty region has a rich seismic history. The megalopolis is located in one of the most highly seismic zones in Central Asia. Over a little more than a century, there have been two earthquakes with an intensity of 9-10 points, one with an intensity of 8-9, one with an intensity of 7-8, and over 100 with an intensity of 6 or less. Almaty today is a megalopolis with a population of 2 million people, projected to grow to 5 million by 2050. It is also the city where, on January 4, 1911, the Kemin earthquake occurred, a seismic disaster with a magnitude of 8.2.
Therefore, instrumental observations are carried out for a significant number of buildings.
In [1] it is noted that, based on the materials of seismic observations, the possibility of earthquakes in the southern and southeastern parts of Almaty has been reliably established. Faults were identified in the city, which are associated with earthquake foci. From January 1, 2005 to December 4, 2014, 1293 earthquakes with energy classes K = 2.7-9.7 were registered in the city and its immediate vicinity. According to earlier data [2], from April 1, 1972 to December 31, 1982, 983 seismic events with an energy class of K = 5.0-13.0 were registered in Almaty. However, the stations of the engineering and seismometric service on buildings did not register noticeable seismic events in the city.
The Engineering and Seismometric Service (hereinafter ESS) in the Republic of Kazakhstan is currently represented by 12 stations, including one in the city of Taraz and one in the city of Kapshagai, located at buildings of various designs. Four stations have both digital and analogue equipment.
Most of the stations have old analog devices: VBP sensors that measure velocities and displacements, OSP sensors that record accelerations and velocities, and SM-4 sensors that record displacements. Various modern measuring systems and instruments have been developed [3][4][5][6][7]; therefore, modernization of the stations is potentially feasible.
A complex of instrumental studies of the behavior of a 16-storey building with a stiffening core, located 100 meters from the tectonic fault, is ongoing. The building houses seismic station No. 17 "New Square". Earlier, this station obtained instrumental records of the earthquake of September 8, 2017, whose source was located in the Xinjiang province of the Uygur region of China [8]. Instrumental records of a weak earthquake of June 3, 2017, with a source likewise within the city, are also available [9].
Thus, experimental research methods (instrumental records of accelerations) are combined with theoretical methods based on computer mathematics systems such as MATLAB, SCILAB, and MAPLE.
The following objectives were set:
- To investigate the reaction of a high-rise building near a tectonic fault.
- To assess the influence of the tectonic fault on the instrumental characteristics of the building's basement.
- To identify the possibility of resonance phenomena in such buildings during earthquakes in the city.
- To use the instrumental records of a real earthquake obtained at engineering and seismometric service station No. 17 "Novaya Ploschad" to accomplish these objectives.
Method
According to the operational data of the Data Center of the Institute of Geophysical Research, an earthquake occurred in Almaty on February 02, 2018 at 15:20 Astana time (09:20 GMT). The coordinates of the epicenter: 43.15 degrees north latitude, 76.88 degrees east longitude. Magnitude mb = 3.6. Energy class K = 7.5. The earthquake was felt in Almaty with an intensity of 3-4 points. The earthquake source, according to seismologists, was located within the city.
For comparison, it should be noted that a very weak earthquake was previously recorded in the territory of Almaty on June 3, 2017 at 23:57 Astana time (17:57 GMT), 14 km to the north. The coordinates of the epicenter are 43.30 degrees north latitude, 76.98 degrees east longitude. Magnitude mb = 2.4. Energy class K = 5.7. That earthquake was felt in Almaty with an intensity of 2 points. The focal coordinates of the two earthquakes are very close, and the focal depths are insignificant. Thus, the later local earthquake of 02.02.2018 is the more intense one; it was felt by residents of almost all areas of the city. Earthquakes with such parameters were predicted earlier and can be dangerous for the population and housing stock of the city [1][2].
Station No. 17 "New Square" is located on a 16-storey residential building with a stiffening core (Fig. 1). It should be noted that this station began operating in 1987. In 2008, the station was modernized -a digital instrumental measuring system RSM-8 with ADXL sensors was installed.
The structural basis of the building is a braced frame, inside of which there is a lattice stiffening core in the form of a space framework. The size of the core in plan is 6800 × 6790 mm. The stiffening core is developed at two levels by traverses. The spatial stability of the building is ensured by the joint work of single-span frames and the lattice stiffening core. The braced frame rests on columns with a section of 740 × 740 mm. The columns are made of M400 heavy concrete.
The foundation of the building is made of precast and monolithic reinforced concrete. The soil consists of coarse gravel.
Results
Instrumental records of accelerations (accelerograms) were obtained (Figures 2-4). Table 1 shows the maximum acceleration values for each of the instrumental recording components. Lines 1-3 correspond to accelerations on the roof of the building, and lines 4-6 to the foundation (more precisely, the basement). Accelerations at the basement level in the horizontal plane approximately coincide, 25.44-26.41 cm/s². In terms of intensity, this corresponds to a 5-6-point earthquake. The vertical acceleration at the basement level is less than that recorded in the horizontal plane. The acceleration at the level of the last tier is less than at the basement level. It can be noted that the first mode shape is not realized here in its pure form. At the roof level, the value of the horizontal acceleration is 3 times less than the vertical value.
The spectral coefficients on all three axes on the foundation and the roof differ by about 1.4-2.7 times. At the same time, the greatest differences in the values of the spectral coefficients occur along the vertical OZ axis.
At the basement, accelerations are of a pronounced impulsive type (Figure 4). The values of the accelerations in the azimuthal plane at the basement level approximately coincide, and the values of the spectral coefficient differ by 7-8%.
At the basement level, the frequency content is stable: the prevailing period is 0.10-0.14 sec.
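For illustration, a minimal Python sketch of how such a prevailing period can be estimated from a digitized accelerogram via its Fourier amplitude spectrum is given below; the record array and sampling interval are assumed inputs, not values taken from the station.

```python
import numpy as np

def prevailing_period(acc, dt):
    """Estimate the prevailing period (s) of an accelerogram from the
    peak of its one-sided Fourier amplitude spectrum."""
    acc = np.asarray(acc, dtype=float)
    acc -= acc.mean()                            # remove any zero-line offset
    spectrum = np.abs(np.fft.rfft(acc))
    freqs = np.fft.rfftfreq(len(acc), d=dt)
    f_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return 1.0 / f_peak
```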
In general, even visually, the nature of the seismic impact is impulsive. Therefore, although the acceleration values are significant at the basement level, the macroseismic effect does not exceed the intensity of 3-4 points.
It should be noted that during the local earthquake of June 3, 2017, the absolute values of acceleration did not exceed 2.84 cm/s² on the roof and 1.59 cm/s² on the foundation. In the case of the earthquake of 02.02.2018, the accelerations are about an order of magnitude higher.
It is worth noting that the digital instrumental records of the local earthquake of June 3, 2017 showed an unexplained shift of the zero lines on the horizontal components on the roof of the building. In the records of the earthquake of 02.02.2018, this displacement did not occur. Figures 6 and 7 show the spectral curves of the seismic event of February 2, 2018. Acceleration peaks at the basement level correspond to an oscillation period of 0.10-0.14 sec. On the roof of the building, the period of the spectrum maximum along the horizontal and vertical axes logically increases.
It should be noted that, despite the low intensity of the earthquake, the spectrum along the OZ axis on the roof of the building has two maxima. At the basement, all spectral curves have at least two local maxima.
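As a sketch of how the spectral β curves discussed above can be computed from an accelerogram, the following Python routine integrates a damped single-degree-of-freedom oscillator with the Newmark average-acceleration method and normalizes the spectral acceleration by the peak ground acceleration; a 5% damping ratio is assumed as a conventional default, not taken from the paper.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum Sa(T) of an accelerogram,
    from Newmark average-acceleration integration of a damped SDOF
    oscillator: u'' + 2*zeta*wn*u' + wn**2 * u = -acc(t)."""
    acc = np.asarray(acc, dtype=float)
    sa = np.zeros(len(periods))
    for i, T in enumerate(periods):
        wn = 2.0 * np.pi / T
        k_eff = wn**2 + 4.0 * zeta * wn / dt + 4.0 / dt**2
        u = v = 0.0
        a = -acc[0]                    # at-rest initial conditions
        peak = 0.0
        for ag in acc[1:]:
            p_eff = (-ag + (4.0 / dt**2) * u + (4.0 / dt) * v + a
                     + 2.0 * zeta * wn * ((2.0 / dt) * u + v))
            u_new = p_eff / k_eff
            v = (2.0 / dt) * (u_new - u) - v
            u = u_new
            a = -ag - 2.0 * zeta * wn * v - wn**2 * u
            peak = max(peak, wn**2 * abs(u))   # pseudo-acceleration
        sa[i] = peak
    return sa

def beta_curve(acc, dt, periods, zeta=0.05):
    """Spectral (dynamic amplification) coefficient: Sa normalized by
    the record's peak ground acceleration."""
    return response_spectrum(acc, dt, periods, zeta) / np.max(np.abs(acc))
```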
Discussion
Attention is drawn to the magnitude of the acceleration in the basement of the building in the horizontal plane (25-26 cm/s²). Such accelerations correspond to an earthquake with an intensity of 5-6 points on the MSK-64 scale, although the general macroseismic effect of the 02.02.2018 earthquake did not exceed 4 points. It can be assumed that the significant acceleration values are a consequence of the presence of a tectonic fault located 100 meters from the building, which is a crack in the earth's crust covered with a thick layer of sedimentary rocks.
According to [1], the maximum acceleration values of strong motions in 2007-2014 in the territory of Almaty, except for the 2007 earthquake, are 0.2-1.6 cm/s², which is less than the acceleration values in Table 1. During the earthquake of December 29, 2007, at a distance of 26 km from the KNDC (Institute of Geophysical Research) station, accelerations of 32.1-33.9 cm/s² were recorded.
Once again, it has been confirmed that earthquake sources can be located both within the territory of the city of Almaty and outside it. Therefore, the use of seismic-isolating foundations of various types remains relevant; these reduce seismic loads by allowing the building to move as a rigid body without deformation of the above-foundation part [10][11][12][13][14].
It is necessary to continue monitoring the behavior of the 16-storey building with a stiffening core equipped with engineering and seismometric service station No. 17 "New Square", studying its response both to local earthquakes in the city and to remote ones [9].
It should be noted that in Almaty, on a tectonic fault, there is a 25-storey hotel building, also equipped with an engineering and seismometric service station [15]. Thus, comprehensive studies of the influence of fault zones on the behavior of high-rise buildings are being carried out.
The seismic event of February 2, 2018 is, in terms of its intensity (no more than 4 points) and frequency content, a local earthquake. It can be assumed that the effect of the nearby tectonic fault is to increase the peak values of acceleration in the horizontal plane in the basement of the building; in terms of the MSK-64 seismic scale, the accelerations increase by 1-2 points. The impact at the basement level is impulsive. The acceleration along the vertical axis at the basement level is 4-5 times less than the acceleration in the horizontal plane.

In [1][2] the possibility of earthquakes with foci in the territory of Almaty is indicated. Additionally, the possibility of local earthquakes with foci in the western (north-western) part of the city was established. This will make it possible to clarify the current seismological situation in Almaty, for example, for the purpose of seismic microzoning of the city territory.

The 16-storey building with a stiffening core deforms in two modes of vibration, and the accelerations at the level of the last tier are less than those at the basement level. Shallow earthquakes of this type are dangerous for low-strength houses with load-bearing brick walls, but not for flexible buildings with a fundamental period of more than 1 second. Undoubtedly, there is an increase in the intensity of local earthquakes in the city compared to 2007-2014.
"year": 2020,
"sha1": "acb8a4908f953cd869aba68b9e3019f8b51a5913",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2020/77/e3sconf_ersme2020_01008.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "3f57cc765850100d1fc202910424bd3a195976d9",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
} |
Artificial intelligence calculated global longitudinal strain and left ventricular ejection fraction predicts cardiac events and all-cause mortality in patients with chest pain
Assessment of left ventricular ejection fraction (LVEF) and myocardial deformation with global longitudinal strain (GLS) has shown promise in predicting adverse cardiovascular events. The aim of this study was to evaluate whether artificial intelligence (AI)-calculated LVEF and GLS are associated with major adverse cardiac events (MACE) and all-cause mortality in patients presenting with chest pain.
INTRODUCTION
The ability to detect and provide early intervention for patients with suspected coronary artery disease (CAD) is critical for the prevention of adverse cardiovascular events. Non-invasive identification of CAD remains a clinical challenge despite the widespread utilisation of imaging and provocative testing. Stress echocardiography is widely used for the assessment of CAD and for selecting patients for coronary angiography. Despite this, improved strategies are required to increase the diagnostic yield in routine clinical practice, since fewer than 40% of patients undergoing elective cardiac catheterisation have significant CAD. 1 Severe CAD leads to left ventricular (LV) dysfunction; however, in the early stages, LV function as measured by LVEF is preserved, while GLS can already detect subtle impairment. 4,5 This could be because GLS is largely determined by the contraction of longitudinal fibres that reside in the subendocardium, 6 which is the myocardial layer most sensitive to myocardial ischaemia. 7,10,11 In current practice, quantitative LV assessment using LVEF and GLS is operator dependent and requires off-line analysis with manual identification and correction of image contours, which can introduce variability. Artificial intelligence (AI) algorithms can reduce observer variability and can be deployed at scale for accelerated analysis of variables that normally require specialist training. 12 Recently, AI image processing of echocardiograms for the automated calculation of LVEF and GLS has been developed. 13 This approach has been shown to improve precision for identification of LV changes in COVID-19 infection. 14 However, investigating the clinical usefulness of AI-calculated LVEF and GLS in patients presenting with chest pain is warranted, especially considering the increased risk of cardiovascular events in this population. 15 We hypothesise that automated AI-calculated LVEF and GLS extracted from a large number of resting echocardiograms are independently associated with major adverse cardiac events and all-cause mortality in patients presenting with chest pain.
Study design
The study population consisted of 296 patients; 24 studies of a total of 320 were excluded due to poor image quality and the inability of the AI software to delineate the LV endocardial border.
Transthoracic image analysis
All patients underwent a resting transthoracic echocardiogram according to local guidelines. Image analysis of the LV was performed locally using commercially available AI algorithms created by machine learning (Ultromics, EchoGo Core 2.0), which automatically calculated LVEF and GLS with no manual correction. In brief, the AI algorithm performed automated contouring of the endocardial border in every frame of the apical 4-, 3- and 2-chamber (A4C, A3C and A2C, respectively) views, automated identification of the end-diastolic and end-systolic frames based upon the size of the enclosed area, and automated selection of the cardiac cycle (Figure 1). The automated LV contours and frame selections were verified and approved by at least one accredited echocardiographer who was blinded to all clinical and study information. The criteria for operator verification of the AI output included correct end-diastolic and end-systolic frame selection, accurate delineation and tracking of the LV endocardial border, and tracking of all apical images. Between three blinded operators, the agreement for AI image verification on 214 studies was 96.4%. All LV volumes and LVEF were obtained by performing endocardial tracings and using the biplane method of disks (modified Simpson's rule).
Only cases with acceptable-quality LV views were included, defined as lack of apical foreshortening, adequate visualization of all segments, and delineation of the entire LV endocardial border by the AI in all views, as previously described. 16,17 Longitudinal strain was calculated as the average Lagrangian strain from the A4C, A3C, and A2C views. Cut-offs for mildly, moderately, and severely reduced LVEF were taken from the 2015 American Society of Echocardiography (ASE)/European Association of Cardiovascular Imaging (EACVI) guidelines for cardiac chamber quantification. 18 Abnormal GLS was defined as ≥ −15%. 19
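As an illustration of these standard definitions (not of the proprietary EchoGo pipeline itself, which performs the contouring with neural networks), a minimal Python sketch of Lagrangian strain averaging and the Simpson biplane volume is given below; the contour lengths and disk diameters are assumed to be supplied by an upstream segmentation step.

```python
import numpy as np

def lagrangian_strain(length_ed, length_es):
    """Percent change in endocardial contour length from end-diastole (ED)
    to end-systole (ES); negative values indicate longitudinal shortening."""
    return 100.0 * (length_es - length_ed) / length_ed

def gls(lengths_ed, lengths_es):
    """Global longitudinal strain: average Lagrangian strain over the
    A4C, A3C and A2C contours."""
    return float(np.mean([lagrangian_strain(ed, es)
                          for ed, es in zip(lengths_ed, lengths_es)]))

def simpson_biplane_volume(diam_a4c, diam_a2c, long_axis, n_disks=20):
    """Modified Simpson's rule: the LV is modelled as a stack of n_disks
    elliptical disks whose orthogonal diameters (cm) come from the A4C
    and A2C views; long_axis is the LV long-axis length (cm)."""
    h = long_axis / n_disks
    return np.pi / 4.0 * float(np.sum(np.asarray(diam_a4c)
                                      * np.asarray(diam_a2c))) * h

def lvef(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv
```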
Dipyridamole stress echo
All patients underwent a dipyridamole stress echocardiogram (DSE), either with or without ultrasound contrast. Dipyridamole was infused at a total dose of 0.84 mg/kg over 6 min.

EchoGo Core is an automated, cloud-based software medical device for processing of echocardiographic images. Echocardiographic images are uploaded to the cloud-based environment in DICOM format, whereby an automated pipeline identifies the apical 2-, 3-, and 4-chamber images available for analysis. Trained and accredited cardiac physiologists (operators) review the identified images and ensure image selection is appropriate. Apical images are then passed to convolutional neural networks which delineate the left ventricular (LV) endocardium, predict the position of endocardial segments using the 18-segment model, and then contour the endocardial border. Operators conduct a quality-control check of all contours produced by the software by either accepting or rejecting them. Operators are unable to manually edit or adjust contours, but they are able to select available alternative images for auto-contouring. LV global longitudinal strain is calculated as the average Lagrangian strain from contours of the apical 2-, 3-, and 4-chamber images. LV volumes and ejection fraction are calculated from the apical 2- and 4-chamber contours using Simpson's biplane method.

Two-dimensional
echocardiography, 12-lead electrocardiograms and blood pressure monitoring were performed in accordance with standard protocols.
Aminophylline was routinely used to reverse the effect of dipyridamole. All images were acquired in the A4C, A3C, and A2C views using an S5 broadband transducer (Philips, Eindhoven, Netherlands).
Wall motion was evaluated at baseline and at peak stress, and a semi-quantitative wall motion score (normal, hypokinesia, akinesia) was calculated using a 17-segment model of the left ventricle.
Myocardial ischaemia was defined as the occurrence of stress-induced new dyssynergy or worsening of resting hypokinesia in ≥1 myocardial segment.
Participant follow-up and outcomes
Follow-up was obtained by review of the patient's hospital chart, electronic records, and the national health status database. The principal end-point of interest for this analysis was major adverse cardiac events (MACE) and, secondarily, all-cause mortality, with patients censored at the time of event or at the last follow-up. A MACE was defined as cardiac death (due to myocardial infarction, cardiac arrhythmias, or congestive heart failure) or non-fatal myocardial infarction (NFMI).
A NFMI was defined by the standard criteria of ischaemic chest pain associated with an elevation of cardiac enzymes, with or without electrocardiographic changes (only the first of cardiac death or NFMI was counted). The diagnosis of cardiac death required documented life-threatening arrhythmias, cardiac arrest, or death attributable to congestive heart failure or myocardial infarction in the absence of any other precipitating factor. Sudden unexpected death was classified as a cardiac death when an obvious non-cardiac explanation was excluded. In patients who underwent coronary angiography within 90 days of their TTE, significant CAD was defined as ≥50% left main disease and/or ≥70% luminal narrowing in the left anterior descending, circumflex, or right coronary artery by visual assessment.
Statistical analysis
Unless otherwise specified, data are presented as mean ± SD or n (%).
Group comparisons were performed using Student's t-test or the Mann-Whitney U test for continuous data, and categorical data were compared with the chi-squared (χ²) test.
To analyse the time to event, with the event taken separately as MACE or all-cause mortality according to time since the patient's echocardiogram, Cox proportional hazards regression was performed, and Kaplan-Meier cumulative event curves were constructed and compared using the log-rank test, with a p value < .05 considered statistically significant.
Dichotomization of the continuous variables was performed using their clinical cut-offs, and data were stratified according to (A) LVEF ≥50% and <50% and (B) GLS ≤ −15% and > −15%.
Univariate logistic regression was conducted to establish associations, independent of time, between the outcomes (MACE or all-cause mortality) and clinical information (including baseline clinical characteristics, echocardiography results, and clinical outcomes).
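A hedged sketch of this survival workflow using the open-source lifelines package is shown below; the file name and column names (time, mace, lvef, gls, age) are hypothetical placeholders rather than the study's actual data set.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("followup.csv")  # hypothetical: time, mace, lvef, gls, age

# Kaplan-Meier curves stratified by the clinical LVEF cut-off.
low, high = df[df["lvef"] < 50], df[df["lvef"] >= 50]
km_low, km_high = KaplanMeierFitter(), KaplanMeierFitter()
km_low.fit(low["time"], event_observed=low["mace"], label="LVEF < 50%")
km_high.fit(high["time"], event_observed=high["mace"], label="LVEF >= 50%")

# Log-rank comparison of the two strata.
result = logrank_test(low["time"], high["time"],
                      event_observed_A=low["mace"],
                      event_observed_B=high["mace"])
print(f"log-rank p = {result.p_value:.4f}")

# Cox proportional-hazards regression for time to MACE.
cph = CoxPHFitter()
cph.fit(df[["time", "mace", "lvef", "gls", "age"]],
        duration_col="time", event_col="mace")
cph.print_summary()
```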
AI-quantification of LVEF and GLS: MACE
Patients who experienced a MACE during the follow-up period were significantly older, more likely to have had previous CAD events and hypercholesterolaemia, and more likely to have been prescribed cardiovascular protective medication.
In addition, a larger proportion of MACE patients had inducible ischaemia and underwent coronary revascularisation. The Kaplan-Meier curves for cumulative freedom from MACE are presented in Figure 2, dichotomised according to (A) LVEF ≥50% and <50% and (B) GLS ≤ −15% and > −15%. An LVEF <50% and a GLS > −15% were associated with a 3.7- and 2.5-fold increased risk of MACE, respectively, during the follow-up period.
AI-quantification of LVEF and GLS: All-cause mortality
Patients who died during the follow-up period were significantly older, more likely to have had a previous myocardial infarction, to have undergone previous revascularisation, and to have been prescribed cardiovascular protective medication. In addition, a greater proportion of patients who died had myocardial ischaemia, follow-up revascularisation, and a MACE. Importantly, AI-calculated LVEF (59.6 ± 13.5% vs. 64.6 ± 8.5%; p < .001) and GLS (−17.1 ± 5.3% vs. −19.3 ± 4.2%; p = .001) were significantly lower in patients who died during the follow-up period (Table 3). In unadjusted analysis, older age, previous MI, previous revascularisation, cardiovascular medication use, lower AI LVEF and GLS, myocardial ischaemia, follow-up revascularisation, and MACE were significantly associated with all-cause mortality (Table S2). Following multivariate analysis using clinical information and backwards stepwise model selection (Model 1), older age and previous MI were independently associated with all-cause mortality. AI-calculated GLS (OR 1.08; 95% CI 1.00-1.16) was independently associated with all-cause mortality (Model 2), and the addition of GLS significantly improved discrimination for all-cause mortality (Table 4). AI-calculated LVEF (OR 0.96; 95% CI 0.93-0.99) was independently associated with all-cause mortality (Model 3), and the addition of LVEF significantly improved discrimination for all-cause mortality (Table 4). The Kaplan-Meier curves for cumulative survival and freedom from all-cause mortality are presented in Figure 3, dichotomised according to (A) LVEF ≥50% and <50% and (B) GLS ≤ −15% and > −15%. An LVEF <50% and a GLS > −15% were associated with a 2.8- and 2.3-fold increased risk of all-cause mortality, respectively, during the follow-up period.
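For illustration, the likelihood ratio tests comparing the nested models can be reproduced in Python as below, reusing the hypothetical data frame df from the previous sketch; the column names (died, age, previous_mi, gls) are placeholders, and the numbers will of course differ from the study's.

```python
from scipy.stats import chi2
import statsmodels.formula.api as smf

# Nested logistic models: Model 1 (clinical variables only) vs. a model
# that additionally includes AI-calculated GLS.
m1 = smf.logit("died ~ age + previous_mi", data=df).fit(disp=0)
m2 = smf.logit("died ~ age + previous_mi + gls", data=df).fit(disp=0)

# LR statistic: twice the log-likelihood gain, referred to a chi-squared
# distribution with df equal to the number of added parameters.
lr = 2.0 * (m2.llf - m1.llf)
p = chi2.sf(lr, df=m2.df_model - m1.df_model)
print(f"LR chi-squared = {lr:.2f}, p = {p:.3f}")
```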
DISCUSSION
This study has demonstrated that AI-calculated LVEF and GLS from resting TTE images provide independent prognostic value for MACE and all-cause mortality in patients with chest pain. In our study, using clinical cut-off values, an LVEF <50% was associated with a MACE hazard rate 3.7 times that of patients with an LVEF ≥50%.
Patients with a GLS > −15% had a MACE hazard rate 2.5 times that of those with a GLS ≤ −15%. These findings were similar for all-cause mortality. Importantly, these results were obtained despite significantly greater prescription of cardioprotective pharmacotherapy in those who had adverse events, and despite the utilisation of downstream practice, since many patients referred for coronary angiography show non-obstructive CAD. 1 Previous research has demonstrated the value of GLS for the prediction of CAD in patients reporting no chest pain at the time of investigation, 20 in patients hospitalised with chest pain, 21 and in those with severe CAD with normal LVEF and no regional wall motion abnormalities. 4 In addition, previous research 22,23 has reported greater accuracy for the identification of CAD from endocardial compared to epicardial strain in patients presenting with non-ST-segment elevation acute coronary syndrome. This is likely due to the endocardial layer being the most sensitive to myocardial ischaemia 7 as well as undergoing the greatest deformation change during systole. 24 However, prior research has demonstrated greater reproducibility for the acquisition of epicardial GLS compared to endocardial GLS, 25,26 as well as reporting that the epicardium is easier to manually trace. 27 Importantly, the AI image-processing pipeline utilised in this study automatically calculates GLS from the endocardium, removing operator dependence and manual identification and correction of image contours, which with greater utilisation may prove clinically important for the risk stratification of patients at high cardiovascular disease risk. If deployed at scale, this technology has the potential to provide accurate and accelerated quantitative analysis of variables that normally require specialist training, 12 which combined with prognostic capabilities may culminate in improved clinical workflow, better patient care, and reduced healthcare costs.
AI-calculated resting LVEF and GLS also independently predicted all-cause mortality. It is well known that a reduced LVEF is associated with increased mortality. 28 However, LVEF measures were within the normal range for both MACE and all-cause mortality patients at baseline. Importantly, recent research demonstrated that the potential value of GLS measures was greatest in patients considered to have preserved systolic function on conventional TTE measures. 29 In addition, these findings demonstrate the value of resting AI-calculated LVEF and GLS over myocardial ischaemia and ischaemic burden on stress for the long-term prediction of MACE and all-cause mortality.
Indeed, recent research demonstrated that resting GLS was an independent predictor of all-cause mortality in patients with and without CAD, in those with an LVEF >50%, and in patients who experienced a MACE, with ischaemia on stress providing no independent predictive value. 30 These findings demonstrate that AI-calculated GLS and LVEF are independently associated with long-term adverse outcomes, and these variables should be routinely reported in resting TTE examinations.
Furthermore, a reduced GLS in the presence of a preserved LVEF should prompt follow-up assessment to optimise medical therapy and reduce cardiovascular complications.
LIMITATIONS
Our study has limitations. This was a retrospective, single-centre study, which may be subject to case-selection bias. In addition, the sample size was small to moderate. However, despite the sample size, there was sufficient statistical power to predict adverse outcomes in a typical cardiology population. As such, future clinical trials implementing AI technology may require smaller sample sizes.
CONCLUSION
AI-calculated LVEF and GLS are independently associated with major adverse cardiac events and all-cause mortality in patients at high CVD risk. Wide deployment of AI technology has the potential to significantly improve clinical outcomes and workflow through better risk stratification of patients with chest pain, accelerated quantification of labour-intensive technical measures, and reduced healthcare costs.
Figure 2: Kaplan-Meier curves for cumulative survival and freedom from major adverse cardiac events, dichotomised according to (A) LVEF ≥50% and <50% and (B) GLS ≤ −15% and > −15%.
Figure 3: Kaplan-Meier curves for cumulative survival and freedom from all-cause mortality, dichotomised according to (A) LVEF ≥50% and <50% and (B) GLS ≤ −15% and > −15%.
Table 1: Characteristics of patients according to no MACE and MACE. PCI = percutaneous coronary intervention; CABG = coronary artery bypass graft; CAD = coronary artery disease; MI = myocardial infarction; LVEF = left ventricular ejection fraction; GLS = global longitudinal strain; WMSI = wall motion score index.
Table 2: Multivariate predictors of major adverse cardiac events. Likelihood ratio tests: Model 2 explains the data significantly better than Model 1 (LR chi-squared = 5.15, p = .023); Model 3 explains the data significantly better than Model 1 (LR chi-squared = 6.13, p = .013).
Table 3: Characteristics of patients according to alive or all-cause mortality.
Table 4: Multivariate predictors of all-cause mortality. Likelihood ratio tests: Model 2 explains the data significantly better than Model 1 (LR chi-squared = 4.27, p = .039); Model 3 explains the data significantly better than Model 1 (LR chi-squared = 5.77, p = .016).

AI-calculated GLS and LVEF could thus be incorporated into routine TTE assessment in patients with chest pain. A reduced GLS should prompt healthcare providers to consider prospective surveillance TTE imaging to reduce the risk of deteriorating cardiac function and resultant heart failure. The non-invasive risk stratification of patients reporting chest pain and suspected of having CAD remains challenging in clinical practice.
"year": 2023,
"sha1": "f8b1f8d91aad8419c9a31f32c11abd95296c92ec",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/echo.15714",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "2f7e5e36918b21c0d1abecd48283ca7acf9ef1bc",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Accretion-induced prompt black hole formation in asymmetric neutron star mergers, dynamical ejecta and kilonova signals
We present new numerical relativity results of neutron star mergers with chirp mass $1.188M_\odot$ and mass ratios $q=1.67$ and $q=1.8$ using finite-temperature equations of state (EOS), approximate neutrino transport and a subgrid model for magnetohydrodynamics-induced turbulent viscosity. The EOS are compatible with nuclear and astrophysical constraints and include a new microphysical model derived from ab-initio calculations based on the Brueckner-Hartree-Fock approach. We report for the first time evidence for accretion-induced prompt collapse in high-mass-ratio mergers, in which the tidal disruption of the companion and its accretion onto the primary star determine prompt black hole formation. As a result of the tidal disruption, an accretion disc of neutron-rich and cold matter forms with baryon masses ${\sim}0.15M_\odot$, significantly heavier than the remnant discs in equal-mass prompt-collapse mergers. Massive dynamical ejecta of order ${\sim}0.01M_\odot$ also originate from the tidal disruption. They are neutron rich and expand from the orbital plane with a crescent-like geometry. Consequently, bright, red and temporally extended kilonova emission is predicted from these mergers. Our results show that prompt black hole mergers can power bright electromagnetic counterparts for high-mass-ratio binaries, and that the binary mass ratio can in principle be constrained from multimessenger observations.
INTRODUCTION
Binary neutron star (BNS) mergers are key astrophysical laboratories to explore the fundamental interactions in dynamical and strong gravity. This was clearly demonstrated by the observation of GW170817 (Abbott et al. 2017a, 2019a) and its related counterparts (Abbott et al. 2017b). After GW170817, a second event (GW190425) compatible with a BNS source was reported, indicating a merger rate of 250-2810 Gpc⁻³ per year (The LIGO Scientific Collaboration & the Virgo Collaboration 2020). The interpretation of current and future observations relies on quantitative simulations of astrophysically relevant binaries in the framework of numerical relativity (NR). In particular, observational signatures are strongly dependent on the possible NS masses and the still uncertain equation of state (EOS). The latter determines the properties of the final compact object, of the eventual accretion remnant, and of the observed gravitational and electromagnetic spectra (see Radice et al. 2020 and references therein for a recent review by some of us). The possible NS mass range is ∼0.9−3 M⊙, where the lower bound is inferred from the formation scenario (gravitational collapse) and from current observations, e.g. (Rawls et al. 2011; Ozel et al. 2012). The upper bound is inferred from a stability argument (Buchdahl limit) and from precise measurements of ∼2 M⊙ NSs in compact binaries containing a millisecond pulsar and a white dwarf (Demorest et al. 2010; Antoniadis et al. 2013; Cromartie et al. 2019). Coalescing circularized BNS were long expected to contain nearly equal-mass NSs with individual masses around M_A ∼ 1.35−1.4 M⊙ and individual spin periods above the millisecond (Lattimer 2012; Kiziltan et al. 2013; Swiggum et al. 2015). For example, the source of GW170817 has a total mass of M ≃ 2.73−2.77 M⊙ and a mass ratio q ∼ 1 (see below), with the largest uncertainties coming from the spin prior utilized in the analysis (Abbott et al. 2019a). This expectation was, however, challenged by GW190425, which is associated with the heaviest BNS source known to date, with M ≃ 3.2−3.7 M⊙ (The LIGO Scientific Collaboration & the Virgo Collaboration 2020). Spin distributions in GW170817 and GW190425 are both compatible with zero (Abbott et al. 2019a; The LIGO Scientific Collaboration & the Virgo Collaboration 2020). The mass ratio distribution in BNS (here conventionally defined as the ratio between the more massive primary and the secondary NS, i.e. q ≡ M_A/M_B ≥ 1) is very uncertain. BNS populations from pulsar observations indicate mass ratios 1 ≤ q ≲ 1.4 (Lattimer 2012; Kiziltan et al. 2013; Swiggum et al. 2015). The mass ratio of GW170817 could be as high as q ∼ 1.37 (q ∼ 1.89) for low (high) spin priors. Similarly, the mass ratio of GW190425 can be as high as q ∼ 1.25 (q ∼ 2.5). Given the expected mass values and the recent observations, it is accepted that BNS mass ratios can reach "extreme" values q ∼ 2. While these values are not as extreme as those that can be reached in black hole binaries, significant differences in the remnant and radiated signals are expected between BNS with q ∼ 1 and q ∼ 2.
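Since this work fixes the chirp mass and varies q, it may help to note how the component masses follow from these two numbers. The short Python sketch below inverts the standard chirp-mass definition; the formula is standard, and the printed values are illustrative.

```python
def component_masses(m_chirp, q):
    """Component masses (M_A >= M_B, solar masses) from the chirp mass
    and the mass ratio q = M_A / M_B >= 1, using
        M_chirp = (M_A * M_B)**(3/5) / (M_A + M_B)**(1/5).
    """
    m_b = m_chirp * (1.0 + q) ** 0.2 / q ** 0.6
    return q * m_b, m_b

for q in (1.0, 1.67, 1.8):
    m_a, m_b = component_masses(1.188, q)
    print(f"q = {q}: M_A = {m_a:.3f} M_sun, M_B = {m_b:.3f} M_sun")
```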
Numerical relativity simulations with microphysical EOS performed so far have focused on comparable-mass cases with mass ratios q ≲ 1.4 (Sekiguchi et al. 2011a,b; Neilsen et al. 2014; Sekiguchi et al. 2015; Palenzuela et al. 2015; Bernuzzi et al. 2016; Sekiguchi et al. 2016; Radice et al. 2016; Lehner et al. 2016; Radice et al. 2017, 2018b). The highest mass ratios of q = 1.5 and q = 2 have been simulated with a very stiff piecewise-polytropic EOS (Dietrich et al. 2017), which is currently disfavored by the GW170817 observation. Mergers of BNS with total mass M ∼ 2.7−2.8 M⊙ and moderate mass ratios up to q ≲ 1.4, with an EOS supporting ∼2 M⊙, are likely to produce remnants that are at least temporarily stable against gravitational collapse to a black hole (BH), as opposed to remnants that collapse immediately to a BH (prompt BH formation). However, the conditions for prompt BH formation at high q have not been studied in detail to date. For a given total mass, moderate mass ratios can extend the remnant lifetime with respect to equal-mass BNS because of the less violent fusion of the NS cores and a partial tidal disruption that distributes angular momentum at larger radii in the remnant (Bauswein et al. 2013a). However, large mass asymmetries q ≳ 1.6 can favor BH formation due to the larger mass of the primary NS.
Tidal disruption in asymmetric BNS can significantly affect the properties of the dynamical ejecta, favouring a redder kilonova peaking at late times (see e.g. Rosswog et al. 2018; Wollaeger et al. 2018). Moderate mass ratios up to q ∼ 1.3−1.4 are found to produce more massive discs than q = 1 BNS (Shibata et al. 2003; Shibata & Taniguchi 2006; Kiuchi et al. 2009; Rezzolla et al. 2010; Dietrich et al. 2017). But black hole formation can significantly alter the remnant disc properties, both in terms of compactness and composition. In turn, this can impact the secular (viscous) ejecta component and the kilonova, with bright emission generally favoured by the presence of a long-lived NS remnant, e.g. (Radice et al. 2018d; Nedora et al. 2019). Moreover, assuming that the Blandford-Znajek mechanism launches the relativistic jet that produces a gamma-ray burst, a more massive disc is also expected to power a more energetic jet through a more intense accretion process (Shapiro 2017). Many of these aspects are currently not well quantified, and they require NR simulations of high-mass-ratio BNS with microphysics.
In this work, we perform 32 new NR simulations with microphysical EOS, fixed chirp mass Mc = 1.188 M⊙, and mass ratios up to q = 1.8, for four microphysical EOS, including a new microscopic EOS, BLh (Sec. 2). The simulations show that, for sufficiently high values of the mass ratio (and in an EOS-dependent way), the remnant promptly collapses to a BH as a consequence of the accretion of the companion onto the massive primary NS (Sec. 4). This prompt-collapse dynamics is not well described by current NR fitting formulas. By analysing the gravitational waveforms, we further verify current quasiuniversal NR relations for the merger and postmerger gravitational waveforms in the high-q limit (Sec. 5). We find an overall agreement with the merger relations and characteristic postmerger GW frequencies, but the accurate modeling of postmerger waveforms at high q will require more simulations and improved methods compared to those currently employed. We discuss in detail the differences in the dynamical ejecta between the q = 1 and high-q mergers in terms of overall ejecta masses, morphology, and composition (Sec. 6). High mass ratio and large chirp mass leading to prompt BH formation maximize the dynamical tidal ejecta mass, which is expelled with a peculiar geometry. The r-process nucleosynthesis in these neutron-rich ejecta results in bright (more luminous than the q = 1 case), redder, and temporally extended kilonovae (Sec. 7). We employ SI units in most of the paper, except for masses, reported in solar masses (M⊙), lengths, reported in km, and densities, reported in g cm⁻³. Nuclear saturation density is indicated as ρ₀ ≈ 2.3 × 10¹⁴ g cm⁻³. Where units are not reported, we use geometric units c = G = 1 in contexts where those are more appropriate (e.g. Sec. 5 and the appendices).
EQUATIONS OF STATE
In this work we consider four finite-temperature, composition-dependent EOS: the LS220 EOS (Lattimer & Swesty 1991), the SFHo EOS (Steiner et al. 2013), the SLy4-SRO EOS (Schneider et al. 2017), and the BLh EOS. All these EOS include neutrons (n), protons (p), nuclei, electrons, positrons, and photons as relevant thermodynamic degrees of freedom. Cold, neutrino-less β-equilibrated matter described by these microphysical EOS predicts NS maximum masses and radii within the range allowed by current astrophysical constraints, including the recent GW constraint on tidal deformability (Abbott et al. 2017a, 2019a; De et al. 2018; Abbott et al. 2018) (see below). All four models have symmetry energies at saturation density within experimental bounds. However, LS220 has a significantly steeper density dependence of its symmetry energy than the other models, see e.g. (Lattimer & Lim 2013; Danielewicz & Lee 2014), and it could possibly underestimate the symmetry energy below saturation density.
The LS220 EOS is based on a non-relativistic Skyrme interaction with the modulus of the nuclear bulk incompressibility set to 220 MeV. Non-homogeneous nuclear matter is modelled by a compressible liquid-drop model including surface effects, which considers an ideal, classical gas formed by α particles and heavy nuclei. The latter are treated within the single-nucleus approximation (SNA). The transition between homogeneous and non-homogeneous matter is performed through a Gibbs construction.
The SFHo EOS combines a relativistic mean-field approach for homogeneous nuclear matter with an ideal, classical gas treatment of a statistical ensemble of several thousand nuclei in Nuclear Statistical Equilibrium (NSE) for inhomogeneous nuclear matter. The transition between the two phases is achieved by an excluded-volume mechanism.
The SLy4 Skyrme parametrization was introduced in Douchin & Haensel (2001) for cold nuclear and NS matter. In this work we employ its extension to finite temperature presented in Schneider et al. (2017), which uses an improved version of the LS220 model that includes non-local isospin-asymmetric terms and a better treatment of nuclear surface properties, and treats the size of heavy nuclei more consistently. The transition between the uniform and non-uniform phases is achieved by a first-order transition, i.e. by choosing the phase with the lower free energy. A main novelty of this work is the use of the BLh EOS, a new finite-temperature EOS derived in the framework of the non-relativistic Brueckner-Hartree-Fock (BHF) approach (Logoteta et al., in preparation). The corresponding cold, β-equilibrated EOS was first presented and applied to BNS mergers in Endrizzi et al. (2018). For the homogeneous nuclear phase, this EOS employs a purely microphysical approach based on a specific nuclear interaction: the interaction between nucleons is described through a potential derived perturbatively in chiral effective field theory (Machleidt & Entem 2011). Specifically, the local potential reported in Piarulli et al. (2016), calculated up to next-to-next-to-next-to-leading order (N3LO), was used as the two-body interaction. This potential takes into account the possible excitation of a ∆ resonance in the intermediate states of the nucleon-nucleon interaction. The potential was then supplemented by a three-nucleon force calculated up to N2LO, again including contributions from the ∆ excitation. The parameters of the three-nucleon force were determined to reproduce the properties of symmetric nuclear matter at saturation density (Logoteta et al. 2016). For the non-homogeneous nuclear phase there is no straightforward extension of these microphysical methods to sub-saturation densities. Thus, the low-density part (n ≤ 0.05 fm⁻³) of the SFHo EOS has been smoothly connected to the high-density BLh EOS. This necessary extension has been tested with different finite-temperature, composition-dependent tabulated EOS (Hempel & Schaffner-Bielich 2010). They all use (1) relativistic mean-field approaches for the homogeneous phase; (2) an ideal, classical gas of a statistical ensemble of several thousands of nuclei in NSE for the non-homogeneous nuclear phase; and (3) an excluded-volume mechanism to model the transition. No appreciable differences were found in any relevant quantity at subnuclear densities between the different high-density treatments.
The LS220 and SLy4-SRO EOS are based on Skyrme effective nuclear interactions. In these models, thermal effects are introduced starting from a zero-temperature internal energy functional that contains an explicit nuclear density dependence. The interaction part of this functional is split into a term quadratic in the nuclear density (playing the role of a two-body nucleon-nucleon interaction) plus a term proportional to some power of the nuclear density; the latter term mimics the effect of many-body nuclear forces. The temperature dependence of the effective nuclear interaction is encoded in the effective-mass dependence of the kinetic energy as well as in the single-particle potentials, which are calculated by varying the internal energy with respect to the neutron and proton densities. Assuming a constant entropy, smaller effective masses translate into larger kinetic energies and thus higher matter temperatures. The LS220 EOS assumes that the effective nucleon mass equals the bare nucleon mass at all densities, while for SLy4-SRO m*_N/m_N = 0.695 at saturation density, m*_N and m_N being the effective and bare nucleon masses, respectively.
In the relativistic mean-field approach, the resulting Euler-Lagrange equations are solved in the mean-field approximation. Thermal effects are included by introducing finite-temperature Fermi-Dirac distributions for the various nuclear species. The meson and nucleon fields, and consequently all thermodynamic quantities, automatically acquire a temperature dependence through the self-consistent solution of the mean-field equations.
Thermal effects enter the BLh EOS in a quite different way from the other models considered in the present work. The calculation of the free energy in the BHF approach (Bombaci et al. 1993) first requires the determination of an effective in-medium nuclear interaction, starting from the bare nuclear potential. This effective interaction (G-matrix) is obtained by solving the Bethe-Goldstone integral equation, which describes nucleon-nucleon scattering in the nuclear medium and properly takes into account the Pauli principle. Finally, the nucleon single-particle potentials U_i(k, T) (i = n, p) are obtained through the integration of the on-shell G-matrix. U_i(k, T) is a sort of mean field felt by a nucleon of momentum k due to the presence of the surrounding nucleons. The determination of U_i(k, T) allows for the calculation of the free energy, from which all the other thermodynamic quantities can be derived. The procedure described above is complicated by the non-linear and non-local dependence on U_i(k, T) in the Bethe-Goldstone equation. We finally note that this scheme provides many-body correlations beyond the mean-field approximation; such correlations are not present in the other EOS models considered in the present paper.
EOS Constraints and NS equilibrium models
Fundamental differences among the EOS models translate into different NS structures. For cold, non-rotating NSs, the considered EOS support maximum masses in the range M_max^TOV ≈ 2.04−2.10 M_⊙, while the predicted radii of a 1.4 M_⊙ NS lie in the range R_1.4 ≈ 11.9−12.8 km. More specifically, the LS220, SFHo, SLy4-SRO, and BLh EOS have M_max^TOV of 2.04, 2.06, 2.06, and 2.10 M_⊙, and R_1.4 of 12.8, 12.0, 11.9, and 12.5 km, respectively. The predicted maximum NS masses and 1.4 M_⊙ NS radii are all compatible at the one-sigma level with the recent detection of an extremely massive millisecond pulsar (Cromartie et al. 2019) and with results obtained by the NICER collaboration (Miller et al. 2019; Riley et al. 2019), although they lie systematically on the lower side. Note that EOS allowing NS radii R_1.4 ≳ 13 km are currently disfavoured by both GW BNS and X-ray pulsar observations (Abbott et al. 2019a; Miller et al. 2019; Riley et al. 2019).
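Such equilibrium sequences follow from integrating the TOV equations for a given cold EOS. The sketch below does this for a simple Γ = 2 polytrope in geometric units; the polytrope and its parameters are illustrative stand-ins for the tabulated, β-equilibrated EOS actually used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Polytropic EOS p = K rho^Gamma as a stand-in for a tabulated cold EOS.
K, Gamma = 100.0, 2.0          # geometric units G = c = M_sun = 1

def eos_pressure(rho):
    return K * rho**Gamma

def eos_energy(p):
    rho = (p / K)**(1.0 / Gamma)     # rest-mass density
    return rho + p / (Gamma - 1.0)   # total energy density

def tov_rhs(r, y):
    """TOV equations dm/dr, dp/dr for a static, spherical star."""
    m, p = y
    e = eos_energy(max(p, 0.0))
    dm = 4.0 * np.pi * r**2 * e
    dp = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
    return [dm, dp]

def solve_tov(rho_c):
    p_c = eos_pressure(rho_c)
    surface = lambda r, y: y[1] - 1e-12 * p_c    # stop at (nearly) zero pressure
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, [1e-6, 100.0], [0.0, p_c],
                    events=surface, rtol=1e-8, atol=1e-12)
    return sol.y[0, -1], sol.t[-1] * 1.4767      # (M [M_sun], R [km])

for rho_c in (1.0e-3, 2.0e-3, 4.0e-3):
    M, R = solve_tov(rho_c)
    print(f"rho_c = {rho_c:.1e}: M = {M:.3f} Msun, R = {R:.2f} km")
```

Scanning over central densities and locating the maximum of M(ρ_c) gives M_max^TOV for the chosen EOS; an isentropic slice of a finite-temperature table can be fed through the same integrator to quantify the thermal radius increase discussed next.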
Finite-temperature effects introduce additional pressure support. On the one hand, for the typical central entropies expected for nuclear matter during a BNS merger (s ≲ 2 k_B/baryon), this additional support is not sufficient to significantly alter the maximum TOV mass (Kaplan et al. 2014) or the central baryon density, due to the large degree of degeneracy of matter above saturation density. On the other hand, thermal effects can have a more significant impact on matter at lower densities, increasing the NS radius. In Fig. 1 we report equilibrium sequences in the mass-radius and mass-central density planes obtained for the EOS used in this work, considering both a cold (continuous lines) and an isentropic (dashed lines) EOS with s = 2 k_B baryon^−1. Due to thermal effects, R_1.4 increases by 15.6% for the LS220 EOS and by 36.4% for the SLy4 EOS, while for the BLh and SFHo EOS the variation is ∼21−22%. The different relative impacts on the NS radius clearly correlate with the different values of the nucleon effective mass. Rotational support also increases the maximum NS mass. For example, in the limiting case of rigid rotation at the Keplerian limit, the maximum NS mass is increased by ∼20% for all EOS models, as visible in Fig. 1 (dotted lines). Since this affects the whole star, the NS radius is typically increased by ∼40%, while the central density is decreased by a similar amount, if one compares non-rotating and Keplerian NSs of identical masses. These properties emphasize the importance of using the full EOS (i.e. including thermal effects) in merger simulations. Thermal (and composition, see Kaplan et al. 2014) effects are indeed key to quantifying the prompt collapse dynamics, mass shedding in the remnant, and disc properties.
Methods
We construct initial data for irrotational binaries in quasi-circular orbit by solving the constraint equations of 3+1 general relativity in the presence of a helical Killing vector and under the assumption of a conformally flat metric (Gourgoulhon et al. 2001). The equations are solved with the pseudo-spectral multidomain approach implemented in the Lorene library (http://www.lorene.obspm.fr/). The EOS used for the initial data are constructed from the minimum-temperature slice of the EOS table employed for the evolution, assuming neutrinoless beta-equilibrium.
The initial data are then evolved with the 3+1 Z4c free-evolution scheme for Einstein's equations (Bernuzzi & Hilditch 2010; Hilditch et al. 2013) coupled to general-relativistic hydrodynamics. For the latter, we use the WhiskyTHC code (Radice & Rezzolla 2012; Radice et al. 2014b,a), which implements the approximate neutrino transport scheme developed in Radice et al. (2016, 2018d) and the general-relativistic large eddy simulations (GRLES) method for turbulent viscosity (Radice 2017). The interactions between the fluid and neutrinos are treated with a leakage scheme in the optically thick regions (Ruffert et al. 1996; Rosswog & Liebendoerfer 2003; Neilsen et al. 2014), while free-streaming neutrinos are evolved according to the M0 scheme (Radice et al. 2018d). The latter is a computationally efficient scheme that incorporates an approximate treatment of gravitational and Doppler effects, is well adapted to the geometry of BNS mergers, and is free of the radiation shock artifact that plagues the M1 scheme (Foucart et al. 2018). The turbulent viscosity in the GRLES is parametrized as σ_T = ℓ_mix c_s, where c_s is the sound speed and ℓ_mix is a free parameter that sets the intensity of the turbulence. For the simulations of this work σ_T is prescribed as a function of the rest-mass (baryon) density using the results of the high-resolution general-relativistic magnetohydrodynamics simulations of a NS merger of Kiuchi et al. (2018). Simulations with this model were already presented in Perego et al. (2019). WhiskyTHC is implemented within the Cactus framework (Goodale et al. 2003; Schnetter et al. 2007) and coupled to an adaptive mesh refinement driver and a metric solver. The spacetime solver is implemented in the CTGamma code (Pollney et al. 2011; Reisswig et al. 2013a), which is part of the Einstein Toolkit (Loffler et al. 2012). We use fourth-order finite differencing for the metric's spatial derivatives and the method of lines for the time evolution of both metric and fluid. We adopt the optimal strong-stability-preserving third-order Runge-Kutta scheme (Gottlieb & Ketcheson 2009) as time integrator. The timestep is set according to the speed-of-light Courant-Friedrichs-Lewy (CFL) condition with CFL factor 0.15. While numerical stability requires the CFL factor to be less than 0.25, the smaller value of 0.15 is necessary to guarantee the positivity of the density when using the positivity-preserving limiter implemented in WhiskyTHC.
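A minimal sketch of the viscosity prescription described above is given below; the tabulated ℓ_mix(ρ) values are illustrative placeholders, not the actual calibration extracted from Kiuchi et al. (2018).

```python
import numpy as np

# Placeholder mixing-length table l_mix(rho); the values used in the paper
# are calibrated on the GRMHD simulations of Kiuchi et al. (2018).
RHO_TAB  = np.array([1e10, 1e12, 1e13, 1e14, 1e15])   # g cm^-3
LMIX_TAB = np.array([0.0,  5e3,  3e4,  2e4,  0.0])    # cm (illustrative)

def turbulent_viscosity(rho, cs):
    """GRLES effective viscosity sigma_T = l_mix(rho) * c_s, with l_mix
    interpolated in log-density from a tabulated profile."""
    lmix = np.interp(np.log10(rho), np.log10(RHO_TAB), LMIX_TAB)
    return lmix * cs   # cm^2 s^-1

print(turbulent_viscosity(1e13, 0.1 * 2.998e10))  # e.g. c_s = 0.1 c
```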
The computational domain is a cube of 3,024 km in diameter centered at the center of mass of the binary. Our code uses Berger-Oliger conservative AMR (Berger & Oliger 1984) with sub-cycling in time and refluxing (Berger & Colella 1989; Reisswig et al. 2013b), as provided by the Carpet module of the Einstein Toolkit (Schnetter et al. 2004). We set up an AMR grid structure with 7 refinement levels. The finest refinement level covers both NSs during the inspiral and the remnant after the merger, and has a typical resolution of h ≈ 246 m (grid setup named LR), h ≈ 185 m (SR), or h ≈ 123 m (HR).
Black-hole formation is indicated by the appearance of an apparent horizon (AH), which is computed with the module AHFinderDirect (Thornburg 2004). With the gauge conditions employed in the simulations, the BH is formed and simulated as a puncture (Thierfelder et al. 2011a). The AH finder is sensitive to the initial guess and to the relatively low resolution of the puncture compared to vacuum simulations. We obtained horizon data for all the LR and most of the SR simulations. Simulations were re-run with additional initial guesses when they previously failed, but we could not re-run the HR simulations for which the AH was not found initially. Moreover, in the presence of matter, the large spurious gradients in the matter fields close to the puncture might introduce difficulties. In our simulations we find it necessary to switch off the GRLES scheme in regions with α < 0.1 in order to compute the AH robustly. Finally, the employed grid structure is not optimal to follow the dynamics of the BH+disc remnant; thus simulations are stopped ∼5−10 ms after BH formation.
BNS Models
We consider 10 binaries with fixed chirp mass M_c ≈ 1.188 M_⊙ and simulate them at different resolutions. The chirp mass is M_c = (M_A M_B)^{3/5}/(M_A + M_B)^{1/5}. The main properties of the BNS initial data are summarized in Tab. 1. We simulated the equal-mass case and mass ratio q = 1.67 for all the EOS with the GRLES scheme. The highest mass ratios simulated are q = 1.8 for the BLh and SLy EOS. A subset of models was simulated also without turbulent viscosity to directly assess its impact on the merger dynamics and on the ejecta properties. The initial separation between the NSs is set to 45 km, corresponding to ∼4−6 orbits to merger. Note that similar equal-mass LS220 and SFHo BNS were already presented in Perego et al. (2019), but the mass here is slightly larger. An equal-mass SLy4 binary without turbulent viscosity was instead presented in Endrizzi et al. (2020).
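Given the fixed chirp mass and a mass ratio, the component masses follow in closed form; a minimal sketch (values are close to, but may differ slightly from, the exact entries in Tab. 1):

```python
def component_masses(m_chirp, q):
    """Component masses (M_A >= M_B) of a binary with chirp mass
    M_c = (M_A M_B)^(3/5) / (M_A + M_B)^(1/5) and mass ratio q = M_A/M_B."""
    nu = q / (1.0 + q)**2            # symmetric mass ratio
    M = m_chirp * nu**(-3.0 / 5.0)   # total mass
    return M * q / (1.0 + q), M / (1.0 + q)

for q in (1.0, 1.67, 1.8):
    mA, mB = component_masses(1.188, q)
    print(f"q = {q}: M_A = {mA:.3f}, M_B = {mB:.3f} Msun")
```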
The table also reports the reduced tidal parameter (Favata 2014)

Λ̃ = (16/13) [ (M_A + 12 M_B) M_A^4 Λ_A + (M_B + 12 M_A) M_B^4 Λ_B ] / (M_A + M_B)^5 ,   (1)

where Λ_i ≡ (2/3) k_2^i (G M_i / (R_i c^2))^{−5}, with i = (A, B), are the dimensionless quadrupolar tidal polarizability parameters of the individual stars (Flanagan & Hinderer 2008; Damour & Nagar 2010), k_2^i the dimensionless quadrupolar Love numbers (Damour 1983; Hinderer 2008; Damour & Nagar 2009; Binnington & Poisson 2009), and (M_i, R_i) the NS masses and radii. The tidal parameter enters the post-Newtonian dynamics at leading order and is directly measurable from the GW (Damour & Nagar 2010; Damour et al. 2012b). Its range for fiducial BNS systems is Λ̃ ≈ 10−2000, where softer EOS, larger masses and higher mass ratios result in smaller values of Λ̃. It can be used as a measure of the binary compactness and correlates with prompt collapse as well as with remnant and disc masses (Radice et al. 2018b; Zappa et al. 2018).
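For quick estimates, Eq. (1) transcribes directly into code; the input values below are illustrative, not taken from Tab. 1:

```python
def lambda_tilde(mA, mB, LA, LB):
    """Reduced tidal parameter of Favata (2014), Eq. (1), from component
    masses and dimensionless tidal deformabilities Lambda_i."""
    M = mA + mB
    return (16.0 / 13.0) * ((mA + 12.0 * mB) * mA**4 * LA
                            + (mB + 12.0 * mA) * mB**4 * LB) / M**5

# Illustrative numbers only: a stiff-companion, soft-primary configuration
print(f"Lambda_tilde = {lambda_tilde(1.65, 1.08, 80.0, 1500.0):.0f}")
```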
MERGER DYNAMICS & REMNANT
Starting at a GW frequency of ∼570 Hz, the binaries revolve for ∼4−6 orbits before reaching the moment of merger. The latter is defined as the peak amplitude of the (2, 2) GW mode and marks the end of the chirp signal. A summary of the merger dynamics for all the runs is given in Fig. 2, which shows the maximum mass density (fluid frame) and the minimum of the lapse function, α. Black-hole formation is indicated by the lapse dropping below α ≈ 0.3. Note that the 1+log slicing of the spherical puncture has lapse function at the horizon α_AH ≈ 0.376 (Hannam et al. 2007), but punctures formed in our simulations have dimensionless spins ∼0.7, for which α_AH ≈ 0.3 (see Appendix A). At the same time as the lapse decreases below α_AH, the maximum density increases beyond 6 ρ_0 and is then unresolved on the grid due to the gauge conditions (Thierfelder et al. 2011a).

Table 1. BNS models considered in this work. M_max^TOV is the maximum gravitational mass for a TOV solution with the specified EOS, C_max^TOV = G M_max^TOV/(R c^2) is the compactness of the maximum-mass configuration. M_b is the total baryonic mass of the BNS, M_A and M_B are the gravitational masses of the individual NSs at infinite separation, q is the mass ratio M_A/M_B ≥ 1 and Λ̃ is the tidal parameter of Eq. (1). f_GW(0) is the initial GW frequency. Masses are expressed in M_⊙, frequencies in Hertz.

The remnants of BLh q = 1.8, LS220 q = 1.67, SFHo q = 1.67 and SLy4 q = 1, 1.67, 1.8 collapse to BH within ∼2−3 ms from merger. We call prompt-BH-collapse mergers those in which the collision of the NS cores shows no bounce but the remnant instead collapses immediately at formation, see Tab. 4. This usually happens within 1−2 ms from the moment of merger and can be identified by the maximum density monotonically increasing up to the collapse, Fig. 2. Note that this definition of prompt collapse implies negligible shocked dynamical ejecta, because the bulk of this mass emission comes precisely from the (first) core bounce (Radice et al. 2018d). The BH masses of the promptly collapsed remnants are M_BH ≈ 2.49, 2.44, 2.45, 2.47, 2.52 M_⊙ for BLh q = 1.8, LS220 q = 1.67, SFHo q = 1.67 and SLy4 q = 1.67, 1.8, respectively. The corresponding BH spins are a_BH ≈ 0.66, 0.7, 0.68, 0.69, 0.66. The remnants of LS220, SFHo and SLy q = 1 also form BHs within the simulated time, with M_BH = 2.41, 2.41, 2.38 M_⊙ and spins a_BH = 0.5, 0.75, 0.76, respectively. The SLy4 q = 1 merger was simulated in Endrizzi et al. (2020) without viscosity, and in that case the remnant survives for ∼10 ms. The earlier collapse here is a consequence of the angular momentum redistribution by the subgrid model for turbulent viscosity (Radice 2017). Overall these results for the BH spins consistently indicate an upper limit on the BH rotation of a_BH ≲ 0.8, also when including q ∼ 2 BNS (Kiuchi et al. 2010; Bernuzzi et al. 2014, 2016; Dietrich et al. 2017). In the following, we first discuss the details of BH formation, highlighting the effect of high mass ratio and the main differences with respect to the (well-studied) equal-mass cases. Then, we discuss the properties of the remnant discs.
It is well known that for comparable masses the two NS cores come into contact before reaching the moment of merger (Thierfelder et al. 2011b), and the last 2−3 GW cycles before the amplitude's peak are emitted by the core collision and remnant formation. At high mass ratios, a new effect is the tidal disruption of the companion and its accretion onto the primary NS. This has been reported also in previous simulations with a stiff polytropic EOS (Dietrich et al. 2017), and we confirm it here for softer and microphysical EOS. As a representative example we show in Fig. 3 snapshots of BLh q = 1 vs. q = 1.8. The accreting material initially has low temperatures, but as soon as the accretion becomes more massive and faster the temperature rises. At approximately the time of the snapshot the accreting material shocks against the primary NS core, where the temperature rises up to ∼100 MeV. As a consequence of this shock, some material becomes unbound, although the exact amount of ejecta cannot be confidently measured in the simulations (see Sec. 6).
The new aspect highlighted by our simulations is the dynamics of prompt collapse for high mass-ratio BNS. In a q ∼ 1.5−2 binary the tidal disruption and accretion of the companion NS onto the massive primary NS can drive the remnant unstable and cause a prompt collapse to BH. The process is shown in Fig. 4 (top panels) in a 3D volume rendering of the rest-mass density for the representative case of the BLh EOS. The BLh q = 1.8 binary has a rather massive primary NS with M_A = 1.856 M_⊙, to be compared to the maximum TOV mass for the BLh EOS (M_max^TOV = 2.103 M_⊙), and a companion NS of small compactness (M_B = 1.020 M_⊙ and C_B ≈ 0.12). The companion NS is almost completely destroyed by tidal effects, and its accretion results in the prompt formation of a BH surrounded by a massive accretion disc (see below). By contrast, the lower mass-ratio and equal-mass binaries with the same chirp mass produce a less compact remnant, and none of them collapses by the end of the simulated time (middle and bottom panels). Note that the equal-mass BLh binary was evolved beyond 80 ms postmerger. We checked with a sequence of simulations at intermediate mass ratios that the behavior is continuous in the mass-ratio parameter (see Appendix B).
Comparing our results to the numerical-relativity-based models of prompt collapse available in the literature, we find that the current models fail to predict the behavior at high mass ratio. This is not surprising, since all the models are calibrated almost exclusively on comparable-mass simulations. In particular, the prompt collapse model proposed in Bauswein et al. (2013a) predicts prompt collapse for BNS with masses exceeding a threshold mass

M_thr = k_thr M_max^TOV ,   (2)

where the quantity k_thr can be expressed in an approximately EOS-independent way in terms of the maximum-mass TOV compactness. The model was calibrated using also data from Hotokezaka et al. (2011).
The above model does not include any dependence on the mass ratio and predicts that all models simulated in our work would produce a NS remnant, except BLh q = 1.8. The prediction is shown as a solid line in the M vs. C_max^TOV diagram in Fig. 5; prompt collapse would be expected for BNS above the solid line. A possible way to improve the criterion in Eq. (2)
is to correct the threshold mass by a function of the mass ratio, f(ν). For example, one could look for a criterion based on the chirp mass. Letting the threshold depend on the mass ratio in this way lowers it and approximately reproduces our results (dashed and dotted lines in Fig. 5). The limited number of data points available does not allow more quantitative studies or fitting. Another approximate criterion for prompt collapse that is independent of the EOS is based on the value of the tidal parameter (Zappa et al. 2018; Agathos et al. 2020). Note that the Λ̃ parameter contains the mass-ratio dependence and, at fixed chirp mass, higher mass ratios result in smaller values of Λ̃ (cf. Eq. 1).

Figure 3. Snapshots of premerger dynamics for the BLh q = 1.8 (top) and q = 1.0 (bottom) simulations. Shown is the rest-mass density in the orbital plane at ∼9 ms, corresponding to the third orbit from the beginning of the simulations and 2 orbits to the moment of merger. The companion in the q = 1.8 BNS is tidally disrupted and a significant accretion onto the primary is taking place. Accretion starts approximately one orbit after the beginning of the simulations.
Comparing to our data, we find that it correctly predicts the prompt collapse of the highest simulated mass ratio for SFHo and SLy4, but fails for LS220 and BLh. This is also expected, since Λ̃ does not account for tidal disruption but only measures the binary compactness [cf. the discussion in Appendix A of Breschi et al. (2019)].

Let us now discuss disc formation, evolution and properties. Following a common convention, we define as disc the baryonic material either outside the BH's apparent horizon or, around a NS remnant, the material with densities ρ ≲ 10^13 g cm^−3. The baryonic masses of the discs are computed as volume integrals of the conserved rest-mass density D = √γ W ρ from 3D snapshots of the simulations in postprocessing (γ is the determinant of the 3-metric and W the Lorentz factor). Estimates for the disc masses are reported in Tab. 4. The disc mass is reported as measured at the time when it is maximum during the simulation. For remnants collapsing to a BH this can be interpreted as the mass at BH formation, since the disc mass can only decrease with time due to accretion. For NS remnants the disc (the remnant at lower densities) can also increase its mass over time as it acquires matter expelled from the higher-density shells. Examples of the disc mass evolution for different remnants are shown in Fig. 6. Note that we show the BLh q = 1.8 and LS220 q = 1 simulations at resolution SR but without turbulent viscosity, and the q = 1.67 ones with viscosity but at LR, because these are the longest datasets available to us (see below for a discussion about turbulence). In the case of comparable-mass BNS the accretion disc is formed during and after the merger. As time evolves, if the remnant does not collapse, it continuously sheds mass and angular momentum, increasing the mass of the disc and generating outflows (Radice et al. 2018a; Nedora et al. 2019). This is why in Fig. 6 the accretion disc mass increases with time for these binaries. These processes terminate with BH formation, which is accompanied by the rapid accretion of a substantial fraction of the disc. An important consequence is that, in the case of comparable mass-ratio binaries, prompt BH formation results in very small accretion disc masses (Radice et al. 2018b), because the mechanism primarily responsible for the formation of the disc is shut off immediately in these cases.
In high mass-ratio BNS mergers the companion star is tidally disrupted (Fig. 4). In these cases, the bulk of the accretion disc is constituted by the tidal tail, which is for the most part still gravitationally bound to the remnant. This tail is launched prior to merger, so massive accretion discs are possible even if prompt BH formation occurs (see also Kiuchi et al. 2019). In general, high-q binaries are found to generate more massive discs than binaries with the same chirp mass but lower q (Shibata et al. 2003; Shibata & Taniguchi 2006; Kiuchi et al. 2009; Rezzolla et al. 2010; Dietrich et al. 2017). The postmerger evolution of these discs is also very different. While in the massive-NS case the central object pushes material into the disc and drives outflows, in the case of high mass-ratio binaries forming BHs the fallback of the tidal tail perturbs the disc and drives rapid accretion onto the BH, as evinced by the rapid decrease of the disc masses with time shown in Fig. 6.
These different formation mechanisms are imprinted in the disc properties. Due to the absence of strong compression and shocks, the discs formed in high mass-ratio binaries are initially colder and more neutron rich (Fig. 7). Since high-q BNS mergers launch tidal tails to large radii, comparable mass-ratio binaries create discs that are initially more compact and have higher Y_e. Besides the mass ratio, the structure of the disc also depends strongly on the nature of the remnant. In the case of BH remnants, the discs are typically more compact and thinner than those around massive NS remnants. In the latter case, because of the additional pressure support, the discs reach higher densities, ∼10^13 g cm^−3, and become partly optically thick to neutrinos (Endrizzi et al. 2020).
GRAVITATIONAL WAVES
In this section we analyze the GW signals computed from the simulations. The latter are too short for a quantitative comparison with inspiral-merger models, e.g. Akcay et al. (2019). Hence, we focus on the binding energy e_b^mrg = E_b^mrg/(νM) and the angular momentum j^mrg = J^mrg/(νM^2) at the moment of merger, computed from the multipolar GW. These and other quantities at merger are EOS-independent functions of the tidal parameter (Bernuzzi et al. 2015a). To find these relations it is best to use, instead of Λ̃, the parameter κ_2^T determining both the tidal dynamics and the tidal waveform at leading post-Newtonian order (Damour & Nagar 2010; Damour et al. 2012b). High mass-ratio effects are included by further considering the parametrization

ξ = κ_2^T + c (1 − 4ν) ,   (8)

where c is a fitting parameter (Zappa 2018; Breschi et al. 2019). The binding energy, the angular momentum and the key waveform quantities at merger are reported in Tab. 3 for all simulations. From the table one notices that the binding energy and the angular momentum increase (the binding energy becomes less negative) as Λ̃ decreases and q increases (Tab. 1); consequently the merger GW frequency and amplitude decrease. The dimensionless BH spin of the remnants is a_BH ∼ 0.7 (Tab. 4), and it can be compared to the angular momentum available at merger by considering its reduced value j_BH(ν) = a_BH/ν. The angular momentum at merger is partly radiated in GWs and partly goes into the disc angular momentum and the BH spin. For the q = 1.8 prompt-collapse remnants (ν ≈ 0.22959, BLh and SLy4) we obtain j_BH(0.22959) = 0.66/0.22959 ≈ 2.87, to be compared to j^mrg ≈ 3.5. For the q = 1 (ν = 0.25) SLy4 and SFHo binaries with BH formation we obtain j_BH(0.25) = 0.76/0.25 ≈ 3.04, to be compared to j^mrg ≈ 3.4. These estimates, obtained using gauge-invariant quantities, indicate that discs around BHs generated by promptly collapsing q = 1.8 binaries have a reduced angular momentum that is larger by about 60% than that of discs around BHs resulting from the collapse of equal-mass NS binaries. This observation is strengthened by the fact that the postmerger GW is weaker if the BH is promptly formed (see below). Figure 8 compares the new NR data of this paper (Tab. 3) with the fits to simulations of the CoRe collaboration for q ≤ 1.5 proposed in Breschi et al. (2019). The fits are consistent with the new q > 1.5 data within the uncertainties, indicating the robustness of the model (and especially of the ansatz of Eq. 8). The fits for the binding energy and angular momentum at merger were not presented in Breschi et al. (2019) and are thus given here in Appendix C.
Regarding the postmerger waveform, Fig. 9 (top panel) shows a comparison between the waveforms from the BLh BNS for the three mass ratios considered here. The figure clearly indicates that, for similar (though not identical) initial frequencies, the moment of merger occurs earlier for unequal-mass simulations due to the tidal disruption of the high-q binaries, in which the companion has a larger radius than the primary NS (see also Fig. 3 and Fig. 4). As expected, the dependence of the waveform on q is smooth, as shown explicitly in Appendix B. Note that, in general, the postmerger amplitude is smaller for high q than for equal masses, due to a less violent shock between the two NS cores and either a less compact remnant or the formation of a BH that quickly rings down to a stationary state.
The only new unequal-mass simulation with a long postmerger GW signal is the BLh with q = 1.67. For this case, the value of the characteristic postmerger frequency f_2 is properly captured by the NR fits presented in Breschi et al. (2019): from the simulation we get f_2 ≈ 3.31 kHz, while for the same binary the NR fit predicts f_2^NRPM ≈ 3.01 kHz, which is within the uncertainty of the fits (∼12%). This result is in line with the interpretation of Bernuzzi et al. (2015b) and Radice et al. (2017): the postmerger f_2 frequency is mostly determined by κ_2^T and the merger physics. Figure 9 (bottom panel) shows the comparison between the spectrum of this NR simulation and the corresponding spectrum generated with the NRPM model of Breschi et al. (2019). While the NRPM model captures the characteristic frequencies well, it does not reproduce the morphology of these high mass-ratio waveforms, due to imperfect modeling of the characteristic amplitudes and damping times. This fact further stresses the need for new simulations to improve postmerger models and/or for more agnostic approaches to kiloHertz GW modeling [cf. the discussion in Breschi et al. (2019)].

In the context of high mass-ratio binary coalescences, higher-order modes could play an important role. The GW strain h(t, x⃗) is the sum of the contributions of the several modes h_ℓm(t, r) times the spin-weighted spherical harmonics ^(s)Y_ℓm(θ, φ) with s = −2, which contain the dependence on the source's sky position,

h(t, x⃗) = Σ_{ℓ≥2} Σ_{m=−ℓ}^{+ℓ} h_ℓm(t, r) ^(−2)Y_ℓm(θ, φ) .

The maximum amplitudes A_ℓm = |h_ℓm| of the different modes at merger and postmerger are shown in Fig. 10. At merger, the largest subdominant modes are the (3, 3) and (4, 4). The contribution of the odd modes and of the (2, 0) increases in the postmerger. The (2, 0) mode, in particular, is relevant at early postmerger times and its amplitude can reach 15% of the (2, 2) amplitude. This is interpreted as due to radial oscillations of the remnant, which contribute to the emitted signal and could generate a coupling with the dominant mode (Stergioulas et al. 2011), in analogy to what happens with nonlinear perturbations of equilibrium NSs (Dimmelmeier et al. 2006; Passamonti et al. 2007; Baiotti et al. 2009; Stergioulas et al. 2011). The waveform mode hierarchy for high-q binaries is similar to that of the q = 1 binaries. However, the odd modes give a larger relative contribution to the signal during the late inspiral at merger (cf. Dietrich et al. 2017). The amplitudes of these modes can be up to 20% of the (2, 2) amplitude before merger and in the late postmerger. Moreover, inspection of the waveforms shows that during the very dynamical early postmerger phase the amplitudes of the (2, 1) and (3, 3) modes can instantaneously reach the same order as the (2, 2). The contributions of the subdominant modes in the GW correlate with density modes in the NS remnant triggered in asymmetric mergers, see e.g. Stergioulas et al. (2011); Bernuzzi et al. (2014).
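As a concrete illustration of the mode decomposition, the sketch below reconstructs the strain from the dominant (2, ±2) modes using the closed-form s = −2 harmonics; the mode amplitude is a placeholder, not simulation data.

```python
import numpy as np

def sYlm_l2(theta, phi, m):
    """Closed-form s = -2 spin-weighted spherical harmonics for l = 2,
    m = +/-2 (the dominant modes)."""
    amp = np.sqrt(5.0 / (64.0 * np.pi))
    if m == 2:
        return amp * (1.0 + np.cos(theta))**2 * np.exp(2j * phi)
    if m == -2:
        return amp * (1.0 - np.cos(theta))**2 * np.exp(-2j * phi)
    raise ValueError("only m = +/-2 implemented here")

def strain(h22, theta, phi):
    """h = sum_lm h_lm (-2)Y_lm, restricted to (2,2) and (2,-2). For an
    equatorially symmetric, non-precessing system h_{2,-2} = conj(h_{2,2})."""
    return (h22 * sYlm_l2(theta, phi, 2)
            + np.conj(h22) * sYlm_l2(theta, phi, -2))

h22 = 0.3 * np.exp(1j * 0.7)              # placeholder complex mode amplitude
print(abs(strain(h22, np.pi / 6, 0.0)))   # observer 30 deg off-axis
```

Extending the sum to the (2, 1), (3, 3) and (4, 4) modes quantifies the subdominant contributions discussed above for asymmetric binaries.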
DYNAMICAL EJECTA
The mass ejecta are calculated on coordinate spheres at r ≈ 300 km, assuming a stationary spacetime and flow, and flagging the unbound mass according to the geodesic criterion. A particle on a geodesic is unbound if the 4-velocity component satisfies u_t ≤ −1, in which case it reaches infinity with velocity v_∞ = (1 − u_t^{−2})^{1/2}. This geodesic criterion neglects the fluid's pressure, thus underestimating the mass, but it is considered appropriate for the dynamical ejecta, which move on ballistic trajectories, e.g. Kastaun & Galeazzi (2015).
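In postprocessing, the criterion amounts to a simple mask on the extraction-sphere data; a minimal sketch with mock inputs:

```python
import numpy as np

def unbound_mass(ut, dM):
    """Geodesic criterion: a fluid element is unbound if u_t <= -1, and it
    reaches spatial infinity with speed v_inf = sqrt(1 - 1/u_t^2)."""
    mask = ut <= -1.0
    v_inf = np.sqrt(1.0 - 1.0 / ut[mask]**2)   # in units of c
    return dM[mask].sum(), v_inf

# Mock surface data: time component of the 4-velocity and mass elements
ut = np.array([-0.98, -1.02, -1.30, -0.99])
dM = np.array([1e-4, 2e-4, 5e-5, 3e-4])        # Msun
M_ej, v = unbound_mass(ut, dM)
print(f"M_ej = {M_ej:.1e} Msun, v_inf = {v}")
```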
We compute the mass histograms of the main ejecta properties and show them in Fig. 11. In the case of equal-mass BNS (top panels), the dynamical ejecta are distributed over the whole solid angle and are composed of both the tidal and the shocked component (e.g., Hotokezaka et al. 2013; Bauswein et al. 2013b; Sekiguchi et al. 2015; Radice et al. 2018d). The velocity of the material peaks at v ∼ 0.1−0.2 c and has high-speed tails extending up to v ∼ 0.6−0.8 c. The largest tail velocities are reached by the softest EOS and in the polar regions, where baryon pollution is minimal, as a consequence of the NS cores' bounce (Fig. 3 of Radice et al. 2018d). Note, however, that ejecta masses ≲ 10^−5 M_⊙ and velocities ≳ 0.9 c cannot be trusted and can suffer from large numerical errors due to the atmosphere treatment and imperfect mass conservation (Appendix B). The ejecta composition is characterized by a wide range of Y_e; for the LS220 and BLh EOS two peaks, at ∼0.1−0.15 and at ∼0.4, are clearly visible; they roughly correspond to the shocked and tidal components, although the former also contains a significant amount of low-Y_e material. Note that the SFHo model peaks instead at ∼0.25. Comparing the two equal-mass LS220 BNS, we find a small effect of turbulent viscosity: the viscous ejecta have a more prominent peak at lower Y_e and a slightly reduced tidal component, possibly due to the difference in the early-postmerger dynamics around the moment of core bounce.

Table 4. Dynamical ejecta average properties for each simulation and for different resolutions. M_ej is the total mass of the ejecta; ⟨θ_ej⟩ and ⟨φ_ej⟩ are the mass-weighted rms of the polar and azimuthal angle, respectively; ⟨v_ej⟩ and ⟨Y_e⟩ are the mass-averaged speed and electron fraction. The last column is the ratio X_s = M_ej^shocked/M_ej, where the shocked and tidal ejecta are defined as those with entropy respectively above and below the threshold of 10 k_B per baryon. Simulations without turbulent viscosity are indicated with *.

Figure 11. Distributions of the ejecta mass in the polar angle (left), velocity (middle) and electron fraction (right) of the dynamical ejecta. Each row refers to a different mass ratio, from top to bottom q = 1, 1.67, 1.8. Note that the angle θ = 0° identifies the orbital plane, while θ = 90° is the pole above the remnant. Data refer to resolution SR; data from simulations without turbulent viscosity are also shown.

The dynamical ejecta of asymmetric BNS with q = 1.67 (middle panels of Fig. 11) are quantitatively different from those of symmetric BNS. The ejected material is distributed more narrowly about the orbital plane and decreases almost monotonically towards the polar latitudes. The dependence on the azimuthal angle is also very different from the equal-mass cases. Because the matter is almost entirely expelled by tidal torques, the ejecta are distributed over a fraction of the azimuthal angle around the ejection angle and have a crescent shape, Fig. 12. This is similar to what is observed in black-hole–neutron-star binaries (cf. Kawaguchi et al. 2016). Hence, the ejecta of high-q BNS are not formed isotropically. Most of the unbound mass has low Y_e ≲ 0.1, although several q = 1.67 BNS have a second peak at Y_e ≈ 0.4. Thus, while the tidal component is dominant for asymmetric BNS, a small shocked component persists. The velocity distributions have comparable peak values, indicating that the tidal component has velocities comparable to those of the shocked component (cf. Fig. 6 of Dietrich et al. 2017).
Note that the fast tails are suppressed for increasing mass ratio, because of the less violent merger and bounce experienced by these binaries. These features are even more extreme for the q = 1.8 case (bottom panels of Fig. 11). The above results appear consistent with those reported in Sekiguchi et al. (2015) and Lehner et al. (2016), although different EOS and more moderate mass ratios were used there. The mass-averaged properties of the dynamical ejecta computed from the histograms are reported in Tab. 4. We show results for all the available resolutions in order to convey an idea of the uncertainties. The latter are difficult to quantify precisely, since strict convergence is not observed in the data. However, the results are robust under a large variation of the grid resolution, with mass variations at the ∼20% level between SR and HR and of less than a factor of two between LR and SR. Note that there is a factor 2 (1.5) between the spacing of the LR and SR (SR and HR) grids. The following discussion mostly refers to the highest resolutions available, as LR is not always sufficient to properly resolve the composition (see below).
The large mass asymmetry can boost the mass ejecta by up to a factor of four with respect to the equal-mass cases. The average electron fraction of the dynamical ejecta from asymmetric BNS is ∼0.11, a factor of two smaller than for the corresponding equal-mass BNS. The mass distribution is concentrated around the equatorial plane. The rms of the polar angle is ∼5−15° for asymmetric BNS with q = 1.8−1.67, while it is ∼35° for symmetric BNS. Overall these results show that, while the tidal component of the dynamical ejecta dominates over the shocked ejecta in high mass-ratio binaries, a delayed collapse can produce unbound mass with electron fractions extending to Y_e ∼ 0.4. The rms of the azimuthal angle is reduced from 106° for symmetric BNS to less than half, 50°, for asymmetric BNS. We recall that the rms of a uniform distribution with support of width 2α ∈ (0, 2π] is ⟨φ⟩ = (√3/3) α, thus giving ⟨φ⟩ ≈ 104° if the support is the full interval (360°) and ⟨φ⟩ ≈ 52° if the support is half of the interval (180°). A similar argument holds also for the polar-angle support around the equator, π/2 − α ≤ θ ≤ π/2 + α, for which ⟨θ⟩ = (√3/3) α. This is correct as long as the ejecta are emitted uniformly over a small portion around the equator (a good approximation in the case of high mass-ratio BNS).
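A quick Monte Carlo check of this rms statement (our own verification, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
for width_deg in (360.0, 180.0):
    alpha = np.radians(width_deg) / 2.0
    samples = rng.uniform(-alpha, alpha, 1_000_000)   # uniform, width 2*alpha
    print(f"support {width_deg:5.0f} deg: rms = {np.degrees(samples.std()):6.1f} deg,"
          f"  alpha/sqrt(3) = {np.degrees(alpha / np.sqrt(3)):6.1f} deg")
```

The sampled rms reproduces α/√3 (≈104° and ≈52°), confirming the interpretation of the tabulated angular spreads.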
The tidal and shocked contributions to the dynamical ejecta are calculated by conventionally distinguishing the unbound matter with specific entropy smaller or larger than 10 k_B per baryon, respectively (Radice et al. 2018d). The last column of Tab. 4 reports the ratio X_s = M_ej^shocked/M_ej, i.e. the mass fraction of the shocked ejecta relative to the total. For the BLh EOS, X_s increases from 0.01 to 0.3 and 0.9 for q = 1.8, 1.67 and 1, respectively. For the SLy EOS, X_s ≈ 0.01 for q = 1.8 and q = 1.67, which have a similar dynamics characterized by accretion-induced BH formation and prominent tidal ejecta, and X_s ≈ 0.8 for q = 1. The other two q = 1 mergers with short-lived NS remnants have X_s ≈ 0.7, which reduces to ≈0.1 for q = 1.67.
As an example, we discuss the mass histograms of the shocked and tidal components separately for BLh q = 1.67, Fig. 13. The tidal component is confined within an angle θ ≲ 10° from the orbital plane; most of the mass has Y_e ∼ 0.05, with the largest electron fractions, Y_e ∼ 0.15, reached at those latitudes. The velocities are uniformly distributed around v ∼ 0.1 c. The shocked component, instead, has mass mostly distributed at angles θ ∼ 25°, but it extends to polar latitudes. This ejecta has electron fraction Y_e ∼ 0.17−0.25 for θ ≲ 25° and Y_e ∼ 0.25 for θ > 60°. The velocity of the bulk ejecta at orbital latitudes is v ≈ 0.25 c, minimal at around θ ∼ 27°, with a peak of v ≈ 0.3 c at polar latitudes. In general, the shocked component is slightly delayed with respect to the tidal component, because it is generated when the NS cores bounce (Radice et al. 2018d).

Table 4 also highlights a dependency on resolution, especially for high mass-ratio BNS. This is expected, since resolving NSs with different sizes is more challenging for the box-in-box AMR than resolving equal sizes. In particular, the LR resolution does not seem sufficient to deliver quantitatively robust results in all cases, especially at high q and with viscosity. Note, for example, that the ejecta mass decreases with resolution, indicating that numerical dissipation plays a role in enhancing the ejecta. Moreover, Y_e rises very rapidly from the NS surface; if the latter is not well resolved, the tidal ejecta might be spuriously composed of material from the interior, as observed in the BLh q = 1.8 LR simulation.
We finally comment on the effect of viscosity on the dynamical ejecta. Radice et al. (2018c) pointed out that the dynamical ejecta in asymmetric BNS can be enhanced by the thermalization of mass accretion streams between the secondary and the primary neutron star. This viscous component of the dynamical ejecta is characterized by large asymptotic velocities and has a mass that depends on the efficiency of the viscous mechanism. Figure 14 shows the ejecta mass for the BLh q = 1.8 and LS220 q = 1.67 BNS. The viscous dynamical ejecta is not present, because the shocked ejecta component is negligible. In fact, the turbulent viscosity here can reduce the tidal dynamical ejecta as a consequence of the different angular momentum distribution due to turbulence. Note that the effect is significant and robust with respect to variations of the grid resolution. The effect of viscosity is much reduced in the LS220 q = 1.67 BNS and practically negligible within the numerical uncertainties (only the SR is shown for clarity). This might be related to the differences between the EOS at low density (Sec. 2). The simulations of Radice et al. (2018c) employed the GRLES scheme as those presented here, but using ℓ_mix = const and systematically varying the constant turbulence parameter. We cannot currently exclude that the specific subgrid model ℓ_mix(ρ) built from Kiuchi et al. (2018) produces a different effect with respect to the ℓ_mix = const model. A detailed investigation of the viscous dynamical ejecta with the subgrid model ℓ_mix(ρ) at intermediate values of q will be presented elsewhere.
SYNTHETIC KILONOVA LIGHT CURVES
We compute synthetic kilonova light curves for each of the BNS mergers presented in this work. We use a multicomponent, anisotropic kilonova model that takes into account the angular distribution of the ejecta properties as well as the presence of different kinds of ejecta (Perego et al. 2017; Radice et al. 2018a,d; Barbieri et al. 2020). The latter differ by the mechanisms that cause the ejection and by the timescales over which they operate. Within this framework, the homologously expanding ejecta is discretized in velocity space and the photon diffusion time is estimated by timescale arguments. Radiation is assumed to be in local thermodynamic equilibrium up to the relevant photosphere, and photon emission is modelled as a superposition of blackbody spectra. The different ejecta components comprise the dynamical ejecta discussed in Sec. 6 and possibly winds expelled by the remnant disc on longer timescales (0.1−1 s) by means of neutrino irradiation and turbulent viscosity of magnetic origin. The kilonova emission produced by each component depends mainly on three quantities that characterize the ejecta, namely the amount of mass, M_ej, its average expansion velocity, ⟨v_ej⟩, and an (effective) grey photon opacity, κ_ej. In all our kilonova models, we place the merging BNS at a distance of 40 Mpc and consider a reference viewing angle of π/6 with respect to the rotational axis of the binary. If not otherwise specified, the model parameters and input physics are assumed to be as in the best-fit model to AT2017gfo (named BF) of Perego et al. (2017).
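The role of the three quantities (M_ej, ⟨v_ej⟩, κ_ej) can be seen from an order-of-magnitude photon-diffusion estimate of the peak time; the prefactor below is a rough scaling and differs by O(1) factors between model implementations, so this sketch is for intuition only.

```python
import numpy as np

MSUN, C, DAY = 1.989e33, 2.998e10, 86400.0   # cgs units

def peak_time_days(M_ej_msun, v_ej_c, kappa):
    """Order-of-magnitude kilonova peak time from the photon diffusion
    argument, t_peak ~ sqrt(kappa * M_ej / (4 pi c v_ej)); O(1)
    prefactors vary between detailed light-curve models."""
    M, v = M_ej_msun * MSUN, v_ej_c * C
    return np.sqrt(kappa * M / (4.0 * np.pi * C * v)) / DAY

# Lanthanide-rich dynamical ejecta of a high-q merger (illustrative values)
print(f"red component:  t_peak ~ {peak_time_days(0.01, 0.1, 20.0):.1f} d")
# Low-opacity, neutrino-processed material
print(f"blue component: t_peak ~ {peak_time_days(0.01, 0.08, 1.0):.1f} d")
```

The high-opacity, slower material peaks on week-long timescales in the infrared, while low-opacity material peaks within days, in line with the band-dependent behaviour discussed below.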
We first examine the kilonova emission obtained by considering only the dynamical ejecta discussed in Sec. 6. In the case of q = 1 mergers, matter is expelled over the entire solid angle and we follow the model presented in Perego et al. (2017); Radice et al. (2018a,d). We assume the ejecta to be axisymmetric and the photon diffusion to proceed mostly radially. In these cases, we discretize the polar angle in 30 slices over the whole solid angle. We use azimuthal averages of the angular distributions of the ejected mass, electron fraction and mean expansion velocity, directly extracted from the latest stages of our NR simulations. While the ejecta mass and mean velocity are direct inputs of the kilonova model, the electron fraction is used to assign the ejecta opacity according to κ_dyn = 1 cm^2 g^−1 for Y_e > 0.25 and κ_dyn = 20 cm^2 g^−1 otherwise. Alternatively, for the q = 1.67 and q = 1.8 cases the dynamical ejecta is confined (to very good approximation) within a crescent across the equatorial plane (see Sec. 6), and we employ the model described in Barbieri et al. (2020) (see also Kawaguchi et al. 2016), in which the photon emission is the combination of radial and lateral emission from an optically thick disc. In this case, we use the total ejecta mass, M_ej, and mean velocity, ⟨v_ej⟩, obtained from our NR simulations to initialize a vertically homogeneous, radially expanding disc. For the grey opacity, we always assume κ_dyn = 20 cm^2 g^−1, since in these cases Y_e < 0.25 (often Y_e < 0.10). For the disc half-opening angle in the polar direction we use θ_disc = √3 ⟨θ_ej⟩, while for the azimuthal disc opening we set φ_disc = 2√3 ⟨φ_ej⟩ (see Sec. 6). The crescent shape breaks the axisymmetry of the emission. In our calculations, we always assume the dynamical ejecta to be emitted toward the observer. For small polar opening angles, this assumption is not very relevant, since the radial emission is subdominant. In the case of larger discs (as in the BLh q = 1.67 case) the radial emission can be relevant and our model assumptions can be more questionable.

Figure 15. Kilonova light curves from the dynamical ejecta, computed with the model of Barbieri et al. (2020) for q = 1.67, 1.8. Binaries are always assumed to be located at a distance of 40 Mpc and to be observed under a viewing angle of 30° with respect to the BNS rotational axis. The bump observed in the Ks band for the BLh q = 1.67 model results from the radial emission from the crescent pointing towards the observer.
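The mapping from NR-extracted ejecta averages to the kilonova-model inputs just described can be written compactly; the sketch below uses the opacity rule and opening-angle relations quoted in the text, with illustrative input numbers.

```python
import numpy as np

def dynamical_ejecta_inputs(M_ej, v_ej, Ye_mean, theta_rms, phi_rms):
    """Map NR-extracted ejecta averages to kilonova-model inputs: a grey
    opacity from the electron fraction and crescent opening angles from the
    rms angular spreads (angles in radians)."""
    kappa = 1.0 if Ye_mean > 0.25 else 20.0    # cm^2 g^-1
    theta_disc = np.sqrt(3.0) * theta_rms      # polar half-opening angle
    phi_disc = 2.0 * np.sqrt(3.0) * phi_rms    # azimuthal opening angle
    return dict(M_ej=M_ej, v_ej=v_ej, kappa=kappa,
                theta_disc=theta_disc, phi_disc=phi_disc)

# Illustrative high-q values: M_ej ~ 0.01 Msun, <v> ~ 0.1 c, Ye ~ 0.1,
# theta_rms ~ 10 deg, phi_rms ~ 50 deg
print(dynamical_ejecta_inputs(0.01, 0.1, 0.10,
                              np.radians(10.0), np.radians(50.0)))
```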
In Fig. 15, we present light curves in three different photometric bands (g, z, and Ks), spanning the relevant wavelength interval from visible to near-infrared radiation, for the three different models obtained with the BLh and SLy4 EOS. We first notice that, even in the case of prompt collapse, BNS mergers can power bright kilonovae. In particular, in the high-q models the light curves from the dynamical ejecta are possibly brighter, with wider peaks occurring at later times compared with the q = 1 mergers. This is due to the crescent-like configuration of the expanding ejecta. On the one hand, when matter is emitted over a large portion of the solid angle (as usually happens for q ∼ 1), the hotter ejecta is buried inside the optically thick region and high-energy photons have to diffuse and thermalize before being emitted in the kilonova. On the other hand, thanks to the disc-like geometry of the crescent, the innermost, hotter portion of the disc provides a significant contribution to the kilonova emission at all times, explaining the brighter and more sustained emission. These effects are visible in all bands, but the increase in magnitude moving from q = 1 to higher q is more pronounced in the infrared band, as a consequence of the lower electron fraction (and thus the higher opacity) of the dynamical ejecta in the crescent. This effect is further amplified by the larger amount of dynamical ejecta observed in the high-q models (with the only exception of the SFHo models).
The peak times of all kilonova models are shown as a function of the mass ratio in Fig. 16. In addition to the models presented in Tab. 1, we include here also a few more LR models computed with the BLh EOS (see Appendix B) to better explore the dependence on q. The kilonova peak times of mergers undergoing accretion-induced prompt collapse are significantly delayed with respect to the q = 1 cases. For the BLh merger, the emission in the g, z and Ks bands peaks between a few hours and a day if q = 1, and between a day and a week if q = 1.8. The near-infrared frequencies are those that vary most as a function of the mass ratio. The SLy4 light curves show a similar behaviour, although fewer data points are available. Less variation in the peak times is observed in the LS220 and SFHo mergers between q = 1 and q = 1.67, but note that in those cases the dynamical ejecta mass also varies less with the mass ratio.
We verified that the features described above do not depend on the specific velocity profile adopted for the homologously expanding ejecta, in which most of the mass resides in the innermost part of the disc. Indeed, a uniform matter distribution in velocity space, as suggested in Kawaguchi et al. (2016), provides very similar results. This is due to a compensation effect between the larger amount of decaying material and the denser (thus optically thicker) vertical profile of the disc in our models. These features are also robust with respect to the uncertainties of numerical origin in the ejecta properties. Considering the ejecta properties extracted from simulations at different resolutions gives some quantitative changes that mostly affect the light curves' luminosity. It is worth remarking here that a factor-of-two uncertainty in the ejecta mass can translate into up to an order of magnitude in luminosity. Moreover, current light curve models suffer from larger systematic uncertainties in the nuclear (e.g. mass models, fission fragments and β-decay rates) and atomic (e.g. detailed wavelength-dependent opacities for r-process elements) physics (Eichler et al. 2015; Rosswog et al. 2017; Gaigalas et al. 2019).
The models presented in Fig. 15 do not contain the potentially relevant contributions to the total ejecta coming from disc winds. Thus, the resulting light curves can be considered lower limits for the kilonova emission. To estimate the potential impact of the disc wind emission on our results, in Fig. 17 we also present light curves obtained by considering a three-component kilonova model for the same three photometric filters and models of Fig. 15. The dynamical ejecta profiles are NR-informed, as previously discussed. For the disc winds, we consider both a neutrino-driven and a viscosity-driven wind. Since wind ejection is expected over a wide portion of the solid angle, we model the related kilonova emission using again the framework described in Perego et al. (2017); Radice et al. (2018a,d). For the neutrino-driven wind component, the amount of ejecta is assumed to be 5% (1%) of the disc mass if the remnant is a long-lived (short-lived or promptly collapsing) massive NS. Due to the effects of neutrino irradiation, the effective grey photon opacity is set to κ_w = 1 cm^2 g^−1, while the wind expands within a π/4 angle around the polar axis with an average speed of v_w = 0.08 c. For the viscous wind component, the amount of ejecta is always assumed to be 20% of the disc mass, expanding with an average speed of v_v = 0.06 c, while the grey opacity is set to κ_v = 5 cm^2 g^−1. To compute the masses of the wind ejecta we consider the disc masses presented in Sec. 4.
Since the disc ejecta is usually more relevant than the dynamical one (see, e.g., Radice et al. 2018d), the large differences between the q = 1 and high-q kilonova light curves observed in the one-component models are reduced in the multicomponent case. Nevertheless, since BNS mergers with higher mass ratios also tend to produce more massive discs, these possibly more complete models confirm that BNS mergers undergoing prompt collapse can power bright kilonovae, and that high-q systems can produce kilonovae that are brighter and characterized by wider peaks in all relevant bands, compared to more symmetric mergers with the same chirp mass. More specifically, in the case of high-q binary models for which the dynamical ejecta has a relatively large mass (up to 10^−2 M_⊙) and is highly anisotropic (e.g. BLh and SLy4 q = 1.8), the emission from the crescent is significant at all times and possibly dominant for mergers forming discs of not too large mass (M_disc ≲ 0.1 M_⊙). The opposite scenario is realized in symmetric binaries: in all q = 1 models, irrespective of the EOS, the low-mass, widely distributed dynamical ejecta has a visible impact on the light curves only at very early times and in the blue portion of the kilonova spectrum. At later times, and especially at red and infrared frequencies, the emission is dominated by the disc winds.
The observations of AT2017gfo (Villar et al. 2017, and references therein) are also included in Fig. 17 and can be qualitatively compared to the light curves from the simulations (note that the simulated BNS have a chirp mass consistent with GW170817). The light curves from high-q mergers are generically flatter and more extended in time than those of AT2017gfo. Assuming these particular light-curve models, the observation of AT2017gfo would exclude high q and stiff EOS with Λ̃ ≳ 600 (long-lived NS remnants), consistently with the low-spin-prior GW analysis (Abbott et al. 2019a,b). The plots also highlight that the light curves in different bands favour different mass ratios, thus anticipating systematics (and degeneracies) between the multicomponent light curves and the binary parameters.
CONCLUSIONS
In this paper, we systematically explored the dynamics, the ejecta, and the expected kilonova light curves of highly asymmetric BNS mergers by means of detailed NR simulations. The latter employed different finite-temperature, composition-dependent EOS and numerical resolutions. The prompt collapse dynamics discussed here for high-q BNS has an underlying mechanism different from equal-mass prompt collapse: in the former case, the collapse is driven by the accretion of the companion onto the massive primary star. For binaries with increasing mass ratio at fixed chirp mass, the companion NS undergoes a progressively more significant tidal disruption. Thus, in these BNS sequences accretion-induced prompt collapse should always set in above a critical mass ratio connected to the maximum NS mass. For example, for the BLh EOS the critical mass ratio should fall in the interval 1.54 < q_thr < 1.67.
The remnant BH in these high mass-ratio mergers is surrounded by a massive accretion disc, in contrast to comparable-mass prompt-collapse mergers, which leave no significant disc outside the BH. The accretion discs of high mass-ratio mergers are primarily constituted of tidally ejected material; hence they are initially cold and neutron rich. The simulations show that fallback of the tidal tail perturbs the disc and affects its accretion. The long-term disc and fallback dynamics is relevant for understanding the complete kilonova emission and also the GRB afterglow (extended) emission (Rosswog 2007; Metzger et al. 2010; Desai et al. 2019). This study is left for the future.
Perhaps the most relevant astrophysical consequence of our work is the possibility of having massive dynamical ejecta from these accretion-induced promptly collapsing remnants. The ejecta mass can reach M_ej ∼ 0.007−0.01 M_⊙ and is mostly emitted within 10°−20° of the orbital plane and within a portion of 100°−180° in the azimuthal angle. The ejecta are neutron rich, with Y_e ≲ 0.1, and have velocities v ≈ 0.1 c. The related kilonova light curves are predicted to be usually significantly brighter than in the equal-mass case (at fixed chirp mass) in all bands, as a consequence of the crescent-like geometry of the expanding dynamical ejecta. The light curves peak at later times and are powered by the sustained emission of the innermost, hotter portion of the crescent, especially in the infrared bands.
We suggest that the confident detection (or confident non-detection) of an electromagnetic counterpart for a high-mass binary can directly inform us about the binary mass ratio. Because the latter is currently poorly constrained by GW analysis, the kilonova counterpart can deliver significant complementary information. Multimessenger analyses of high-mass events are thus particularly relevant. They will require a precise numerical-relativity characterization of the ejecta in terms of the binary parameters, which is not currently available, as well as improved nuclear and atomic physics input or suitably parametrized models for the light curves.
Our results can help interpret GW190425 in the scenario in which the GW was produced by an asymmetric binary with q ≳ 1.6 (note that the chirp mass of GW190425 is even larger than the one simulated here, while large mass ratios are excluded for GW170817 if spins are small). Using the methods developed in Agathos et al. (2020), The LIGO Scientific Collaboration & the Virgo Collaboration (2020) estimated the probability that the remnant promptly collapsed to a BH to be ∼97%. The NR fitting models used in Agathos et al. (2020) refer to equal masses and are thus to be considered conservative for q ≳ 1.6. Hence, if GW190425 is interpreted as such an asymmetric BNS merger, the BH-remnant scenario is further strengthened by our results. Moreover, a bright and temporally extended red kilonova could have been expected as a counterpart if GW190425 was produced by a high-q merger [cf. Foley et al. (2020)]. The kilonova signal in this case could be similar to the one produced in BH-NS binaries (Radice et al. 2018a; Kyutoku et al. 2020).

Figure 17. Kilonova light curves as in Fig. 15, but employing a three-component model for the BLh and SLy BNS. The dynamical ejecta component is taken as in Fig. 15. The other two components are a neutrino-driven and a viscosity-driven wind. The neutrino-wind mass is assumed to be 5% (1%) of the disc mass if the remnant is a long-lived (short-lived or promptly collapsing) massive NS; the effective grey photon opacity is set to κ_w = 1 cm^2 g^−1, while the wind expands within a π/4 angle around the polar axis with an average speed of v_w = 0.08 c. The viscous-wind mass is assumed to be 20% of the disc mass, expanding with an average speed of v_v = 0.06 c and with a grey opacity of κ_v = 5 cm^2 g^−1. The observational data of AT2017gfo are shown as black markers for comparison (see discussion in the text).
All of our GW waveforms and ejecta data will be publicly available as part of the CoRe database at http://www.computational-relativity.org/.

ACKNOWLEDGEMENTS

SB, MB, BD, NO and FZ acknowledge support by the EU H2020 under ERC Starting Grant, no. BinGraSp-714626. Numerical relativity simulations were performed on the supercomputer SuperMUC at the LRZ Munich (Gauss project pn56zo) and on the supercomputer Marconi at CINECA (ISCRA-B project number HP10BMHFQQ); on the supercomputers Bridges, Comet, and Stampede (NSF XSEDE allocation TG-PHY160025); on NSF/NCSA Blue Waters (NSF AWD-1811236); and on the ARA cluster at Jena FSU. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Data postprocessing was performed on the Virgo "Tullio" server at Torino, supported by INFN. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC at the Leibniz Supercomputing Centre (www.lrz.de).
APPENDIX A: EXPERIMENTAL ESTIMATE OF THE REMNANT BH
We perform single-puncture experiments with the gauge conditions used in the BNS simulations and study the behaviour of the lapse α and of the extrinsic curvature trace K close to the puncture. We show that the evaluation of the extrinsic curvature K at the puncture (origin) allows us to estimate the BH spin and the lapse at the AH. Further assuming an approximate value for the BH mass as given by the quasiuniversal relation M_BH ≈ M [1 − |e_b^mrg(κ_2^T)| ν] (an upper bound) leads to a simple estimate of both the mass and the spin of the BH. Hence, these results could be useful as a simplified criterion for estimating BH formation without an AH finder.

Figure A1. Dependence on the final black-hole dimensionless spin of the lapse calculated at the apparent horizon (top) and of the extrinsic curvature multiplied by M_ADM at the puncture (bottom), with the relative best fits.
The gauge conditions for the lapse α and shift vector β^i employed in the simulations are (Campanelli et al. 2006; van Meter et al. 2006; Brügmann et al. 2008)

∂_t α − β^i ∂_i α = −α^2 μ_L K ,   (A1)
∂_t β^i − β^j ∂_j β^i = μ_S α^2 Γ̃^i − η β^i ,   (A2)

where Γ̃^i are the conformal variables of Z4c (Bernuzzi & Hilditch 2010; Hilditch et al. 2013), η = 1 is a damping term, and μ_S = 3/4, μ_L = 2/α are the characteristic speeds. For simplicity, the initial data for a single puncture with different spins are prepared by solving for two punctures (Ansorg et al. 2004), setting one mass much smaller than the other (q ∼ 10^12) and at a distance smaller than the evolution grid spacing. These simulations are performed with the BAM code with 6 refinement levels and maximum resolutions of h ≈ 4.6875 × 10^−2 and 2.34375 × 10^−2. During the evolution the system quickly settles to a stationary solution with mass M_BH and dimensionless spin a_BH, both measured with the apparent-horizon finder. We then measure the lapse at the horizon, α_AH, and the curvature at the puncture, K_0. Figure A1 shows the puncture's lapse at the horizon (top) and K_0 ≡ M_ADM K(r = 0) at the puncture (bottom) calculated for various spin values. Both quantities can be fitted as simple functions of a_BH. The second fit was also proposed in earlier work, and the two results agree within the numerical precision of the data.
APPENDIX B: CONTINUOUS DEPENDENCY OF DYNAMICS ON MASS RATIO
We consider here simulations of a sequence of BNS with the BLh EOS, fixed chirp mass and increasing mass ratio. Note that all simulations discussed in this Appendix are performed at LR. Figure B1 shows (from top to bottom) the evolution of the maximum values of density and temperature, the gravitational-wave amplitude of the dominant l = m = 2 mode and the dynamical ejecta, split into shock- (solid) and tidal-driven (dashed) components. For increasing mass ratio the dynamics smoothly converge towards the prompt collapse of the q = 1.8 binary. This can be observed for both density and temperature maxima, as well as for the moment of merger. On the contrary, the ejecta masses do not show a smooth dependence on the mass ratio. The highest mass ratios (q = 1.8, 1.67) exhibit a large tidal-to-shocked ratio, with the q = 1.8 BNS showing almost no shocked ejecta. This is reversed in the equal-mass model, where the shocked component is an order of magnitude larger than its tidal counterpart. The outlier models are the ones with mass ratios 1.17 ≤ q ≤ 1.54. For these, the contributions from both components are comparable, with the q = 1.17 model having overall the largest amount of dynamical ejecta from both channels among the three BNS. In the extreme-q cases, disruption of the lighter NS companion leads to tidally dominated ejecta, while for equal-mass NSs, which reach merger only slightly tidally deformed, the shocked component dominates. As a complement to these results, we show the violation of the Hamiltonian constraint and the total baryonic mass conservation for these simulations. The Hamiltonian constraint violation is under control for all simulations at all times, and violations are of the same order of magnitude. The total rest mass is conserved up to a fractional level of ∼3×10⁻⁵ (approximately floating-point precision) before merger for all the simulations. We stress that we use the refluxing scheme (Berger & Colella 1989; Reisswig et al. 2013b) and that these simulations are low resolution; thus the results should be considered conservative upper limits for the errors in SR and HR, which are indeed smaller. The rest mass drops after merger mainly as a consequence of the dynamical ejecta, which are typically one to two orders of magnitude larger than the numerical errors.

Figure B1. Main scalar quantities for several different mass-ratio models with BLh. Each simulation presented here is run at grid setup LR. In the first panel we show the evolution of the maximum density (ρmax/ρ_0), in the second panel the evolution of the maximum temperature (Tmax), in the third the gravitational-wave amplitude. The last panel shows the evolution of the total mass of the dynamical ejecta: solid and dashed lines highlight the contributions of the shock- and tidal-driven components, respectively. The vertical dashed lines in all panels indicate the merger time for each simulation.
APPENDIX C: QUASIUNIVERSAL RELATIONS OF BINDING ENERGY AND ANGULAR MOMENTUM AT MERGER
In this appendix, we introduce NR fit formulae for the binding energy e_b^mrg = E_b^mrg/(νM) and the angular momentum j^mrg = J^mrg/(νM²) of a BNS at the moment of merger. The fits are calibrated on 172 NR simulations with q ≤ 1.5 extracted from the CoRe database (Radice et al. 2018d). The fitted relations are rational functions parametrized with ξ, introduced in Eq. (8),

F(ξ) = F_0 (1 + n_1 ξ + n_2 ξ²) / (1 + d_1 ξ + d_2 ξ²) .

For the binding energy e_b^mrg, the analyzed data span a range from −0.065 to −0.043, and the best-fit coefficients are F_0 = 0.20179, n_1 = −114.42, n_2 = −0.39976, d_1 = 286.19, d_2 = 2.2687 and c = 1285.2, where c is defined in Eq. (8). The calibration has χ² = 6.8×10⁻³ and the intrinsic uncertainty of the fit corresponds to ∼7% of the estimate, referring to the 90% credible regions. Regarding the angular momentum j^mrg, the data have values between 3.3 and 3.8 and the best-fit coefficients are F_0 = 0.028862, n_1 = 40.884, n_2 = 0.072754, d_1 = 0.352, d_2 = 0.0004703 and c = 1325.2. In this calibration, we obtain χ² = 1.9 × 10⁻² and the fit has an uncertainty of ∼3% within the 90% credible region.
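A minimal sketch evaluating these fits is given below. The rational-function form F(ξ) = F0 (1 + n1 ξ + n2 ξ²)/(1 + d1 ξ + d2 ξ²) and the definition ξ = κ₂ᵀ + c(1 − 4ν) are assumptions reconstructed from the coefficient names, since Eq. (8) is not reproduced in this excerpt; only the numerical coefficients are taken from the text.

```python
# Evaluating the assumed quasi-universal fits for e_b^mrg and j^mrg.
def rational_fit(xi, F0, n1, n2, d1, d2):
    """F(xi) = F0 * (1 + n1*xi + n2*xi**2) / (1 + d1*xi + d2*xi**2)."""
    return F0 * (1.0 + n1 * xi + n2 * xi**2) / (1.0 + d1 * xi + d2 * xi**2)

def xi_param(kappa2T, nu, c):
    """Assumed parametrization of Eq. (8): xi = kappa2T + c * (1 - 4*nu)."""
    return kappa2T + c * (1.0 - 4.0 * nu)

# Best-fit coefficients quoted in this appendix
EB = dict(F0=0.20179,  n1=-114.42, n2=-0.39976, d1=286.19, d2=2.2687)
JM = dict(F0=0.028862, n1=40.884,  n2=0.072754, d1=0.352,  d2=0.0004703)
C_EB, C_JM = 1285.2, 1325.2

# Example: equal-mass binary (nu = 0.25) with tidal coupling kappa2T = 100
kappa2T, nu = 100.0, 0.25
eb = rational_fit(xi_param(kappa2T, nu, C_EB), **EB)
jm = rational_fit(xi_param(kappa2T, nu, C_JM), **JM)
print(f"e_b^mrg ~ {eb:.4f}, j^mrg ~ {jm:.3f}")  # ~ -0.061 and ~ 3.40
```

With these assumptions the example values fall inside the quoted data ranges (−0.065 to −0.043 for e_b^mrg and 3.3 to 3.8 for j^mrg), which supports the reconstructed functional form.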
"year": 2020,
"sha1": "1d50dec2ed73a9e9ba5b3360d4ca924b525b65b3",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/497/2/1488/33558021/staa1860.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "cc463499fb8f992148efcd79adf1538215a0a111",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Clinicopathological characteristics and prognostic factors in young patients after hepatectomy for hepatocellular carcinoma
Background The aim of this study was to analyze the clinicopathological characteristics and the prognostic factors for survival and recurrence of young patients who had undergone hepatectomy for hepatocellular carcinoma. Methods Between 1990 and 2010, 31 patients aged 40 years or younger (younger patient group) among 811 consecutive patients with hepatocellular carcinoma who had undergone primary hepatectomy were analyzed with regard to patient factors, including liver function, tumor factors and operative factors. The clinicopathological characteristics of the younger patients were compared with those of patients over the age of 40 (older patient group). Then the prognostic factors of the younger patients were analyzed. Continuous variables were expressed as means ± standard deviation; categorical variables were compared using the χ² test. Overall survival and recurrence-free survival rates were determined by the Kaplan-Meier method and analyzed by the log-rank test. The Cox proportional hazards model was used for multivariate analysis. Results In the younger patients, the rates of HBs-antigen-positivity, high alpha-fetoprotein, portal invasion, intrahepatic metastasis, large tumors, low indocyanine green retention rate at 15 minutes, and anatomical resection were significantly higher than the same measures in the older patients. The five-year overall survival rate of the young patients was 49.6%. The prognostic factors for survival were HCV-antibody-positivity and low albumin status. Prognostic factors for recurrence were multiple tumors and the presence of portal invasion. Conclusions In younger patients, survival appeared to be primarily affected by liver function, while recurrence was affected by tumor factors. Young patients with hepatocellular carcinoma should be aggressively treated with hepatectomy due to their good pre-surgical liver function.
Background
Liver cancers are malignant tumors and are the third leading cause of cancer-related death; they are responsible for approximately 700,000 deaths per year [1]. Hepatocellular carcinoma (HCC) has a poor prognosis and accounts for 70 to 85% of primary liver cancers [2]. Generally, there are few opportunities for discovery of malignant tumors in younger patients, and thus they tend to present with a highly advanced malignancy at the time of diagnosis; nonetheless, younger patients can expect long-term survival. The definition of what constitutes a "young patient" differs between studies [3][4][5][6][7][8][9][10][11][12]. HCC is fairly rare in younger individuals, with an occurrence rate of only 0.6 to 2.7% in those under 40 years of age, according to Japanese reports [12][13][14]. In Asia and Africa, which are areas with prevalent hepatitis B virus (HBV), the frequency of HCC is higher than in Japan [4,8,9,11,15]; however, there are still few reports on independent prognostic factors in young patients with HCC.
In this study, we examined the prognostic clinicopathological features, as well as the prognostic factors for survival and recurrence, in young patients with HCC who had undergone hepatectomy.
Methods
Between January 1990 and May 2010, 811 consecutive patients with HCC underwent primary liver resection at the Gastroenterological Surgery I unit of Hokkaido University Hospital in Sapporo, Japan. Of these patients, 31 patients (3.8%) were 40 years old or younger, while 780 patients (96.2%) were over 40 years of age. For group stratification, the former patients were defined as the younger patient group, and the latter as the older patient group. This study was approved by the Hokkaido University Hospital Voluntary Clinical Study Committee and was performed according to the Helsinki Declaration guidelines. The clinicopathological characteristics and surgical data of the patients are shown in Table 1.
The indications for hepatic resection and the type of operative procedure were usually determined based on the patient's liver function reserve, that is, according to the results of the indocyanine green retention test at 15 minutes (ICGR15) [16]. Anatomical resection was performed on patients in whom the ICGR15 was lower than 25%. Anatomical resection was defined as a resection in which the lesions were completely removed anatomically on the basis of Couinaud's classification (segmentectomy, sectionectomy, and hemihepatectomy or more). Non-anatomical partial but complete resection was performed in the other cases. In all patients, surgery was performed as R0 or R1. When R0 and R1 resections were performed, the resection surfaces were found to be histologically or macroscopically free of HCC, respectively. Follow-up studies after liver resection were conducted at three-month intervals and included physical, serological (liver function tests, serum alpha-fetoprotein (AFP) level, and serum protein induced by vitamin K absence-II (PIVKA-II)), and radiological examinations (ultrasonography (US) and contrast-enhanced computed tomography (CT) or contrast-enhanced magnetic resonance imaging (MRI)). Recurrence was diagnosed on the basis of the results of contrast-enhanced CT and elevation of serum levels of AFP and/or PIVKA-II. Extrahepatic metastasis (lung, lymph node, adrenal gland, brain and bone) was diagnosed by contrast-enhanced chest and abdominal CT, contrast-enhanced head MRI and bone scintigraphy. The median follow-up period was 111 months (range, 5 to 249 months).
Statistical analysis
Continuous variables were expressed as means ± standard deviation; categorical variables were compared using the χ² test. Overall survival (OS) and recurrence-free survival (RFS) were determined by the Kaplan-Meier method and analyzed by the log-rank test. The Cox proportional hazards model was used for multivariate analysis.
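A minimal sketch of this workflow, using the Python lifelines package on an entirely hypothetical toy cohort (the paper does not state which software was used):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: follow-up (months), death flag and binary covariates
df = pd.DataFrame({
    "months":      [40, 5, 79, 23, 15, 33, 120, 111, 60, 90],
    "death":       [1,  1,  1,  1,  1,  1,   0,   0,  0,  0],
    "young":       [1,  1,  0,  1,  1,  1,   0,   0,  0,  0],
    "hcv_ab":      [0,  0,  1,  0,  0,  0,   1,   1,  0,  1],
    "albumin_low": [1,  0,  1,  1,  0,  1,   0,   0,  1,  0],
})

# Kaplan-Meier overall survival and median survival time per age group
km = KaplanMeierFitter()
for grp, sub in df.groupby("young"):
    km.fit(sub["months"], sub["death"], label=f"young={grp}")
    print(f"young={grp}: median OS =", km.median_survival_time_)

# Log-rank test comparing the two groups
y, o = df[df["young"] == 1], df[df["young"] == 0]
print("log-rank p =", logrank_test(y["months"], o["months"],
                                   y["death"], o["death"]).p_value)

# Multivariate Cox proportional hazards model for OS
cph = CoxPHFitter()
cph.fit(df[["months", "death", "hcv_ab", "albumin_low"]],
        duration_col="months", event_col="death")
cph.print_summary()
```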
Clinicopathological characteristics and operative variables

Patient factors
The ratio of males to females (24:7) in the younger patient group was not significantly different from that of the older patient group. Patients with HBV markers accounted for most of the virus-associated cases: HBs-antigen (HBs-Ag)-positive, 26/31 (total number in the younger group) vs. 321/780 (total number in the older group); 84% vs. 41%; P <0.0001. Patients who were hepatitis C virus (HCV) antibody (HCV-Ab)-positive were significantly fewer in number, that is, 1/31 vs. 310/780 (3% vs. 40%; P <0.0001) in the younger group. Although serum albumin and total bilirubin levels were not significantly different between the groups, the numbers of patients with ICGR15 ≥15% were 3/31 vs. 360/780 (10% vs. 46%; P = 0.0001).
Tumor factors
The younger group had significantly higher AFP levels compared to the older group (P = 0.0026). Although the number of tumors did not differ significantly between the younger and older patients, there were significantly more cases with a maximum tumor size of ≥5 cm in the younger group (P = 0.0072). The mean maximum tumor diameter in the younger group in this study was 8.6 ± 7.3 cm. Neither macroscopic type nor extrahepatic metastasis was significantly different between the groups.
Operative variables
The rate of anatomical resections in the younger patients was significantly higher than that in the older patients.
Pathological factors
There were significant differences between groups in terms of microscopic tumor thrombus in the portal vein (P = 0.0026) and microscopic intrahepatic metastasis (P = 0.0413) ( Table 1).
Causes of death and recurrence
Among the total 811 patients, 390 (48.1%) died. The mortality rates were 17/31 (54.8%) in the younger patient group and 373/780 (47.8%) in the older patient group. The causes of death, which did not differ significantly between groups, were as follows: HCC recurrence (n = 301; 77.2%; 16 in the younger patients vs. 285 in the older patients), liver failure (n = 36; 9.2%; 0 in the younger vs. 36 in the older patients), and other causes (n = 53; 13.6%; 1 in the younger vs. 52 in the older patients). In addition, two patients in the older group died of operative complications prior to 1995. No patients in the younger group died of operative complications.
Cumulative rates of patient survival and recurrence-free survival
The five-year OS rate of all 811 patients was 57.1%. The five-year OS rate and median survival time (MST) of the younger group were 49.6% and 40 months, respectively, whereas those of the older group were 57.7% and 79 months, respectively (Figure 1). The median RFS time of all 811 patients was 23 months, while that of the younger patients was 6 months, and that of the older patients was 25 months (Figure 2). Neither OS nor RFS was significantly different between the younger and older groups, although recurrence tended to occur earlier in the younger patients.
Factors related to long-term survival and disease-free survival after primary hepatectomy in the younger patient group

Table 2 shows those factors that were found by univariate analysis to influence OS and RFS in the younger group. The univariate analysis revealed that OS was significantly related to being HCV-Ab-positive, having a serum albumin level of <4.0 g/dl and a maximum tumor size of ≥5 cm, the presence of tumor thrombus in the second and first branches and trunk or opposite side branch of the portal vein (vp2, 3, 4), microscopic intrahepatic metastasis, and histological liver cirrhosis of non-cancerous liver. Univariate analysis showed that RFS was significantly related to multiple tumors, maximum tumor size of ≥5 cm, poor differentiation, the presence of tumor thrombus above vp2 and microscopic intrahepatic metastasis. Multivariate analysis showed HCV-Ab-positive status and serum albumin levels of <4.0 g/dl to be independent predictive factors for OS, while multiple tumors and vp2, 3, 4 were independent predictive factors for RFS in the younger group of patients (Tables 3 and 4).
Discussion
In this study, the younger patients with HCC who underwent hepatectomy were more likely than the older patients to be HBV-positive, to have large tumors with portal invasion and to have high AFP, although they also retained better liver function than the older patients. Despite the significant difference in tumor progression, neither OS nor RFS was significantly different between the two groups, although recurrence tended to occur earlier in the younger patients. Multivariate analysis showed HCV-Ab-positive status and serum albumin levels of <4.0 g/dl to be independent predictive factors for OS, while multiple tumors and vp2, 3, 4 were independent predictive factors for RFS in the younger patients. Therefore, young patients with hepatocellular carcinoma should be aggressively treated with hepatectomy due to their good pre-surgical liver function.
In the younger group of patients, HCV-Ab-positive status and low serum albumin levels were the liver-function-related factors that were found to be significantly unfavorable in terms of OS, while multiple tumors and vp2, 3, 4 were the tumor-related factors that were significantly unfavorable in terms of RFS; moreover, these findings were obtained by both univariate and multivariate analyses. Although most of the younger patients had advanced tumors, no differences were found between the younger and older patients in terms of OS.
These results indicate that aggressive and curative liver resection should be performed for young patients with HCC, because most young patients retain good pre-surgical liver function. The definition of who should be classified as a "young patient" with HCC remains controversial. In the literature, the definition of a young patient with HCC has tended to be a patient aged 40 years or younger [4,8,10-12,14]. Cases of HCC in such patients are comparatively rare; for example, HCC occurs in only 0.6 to 2.7% of this age group in Japanese reports [12-14]. In other countries, the reported rates of HCC in this age range are as follows: 8.6% (40 years and younger) in Singapore [11], 10.9% (under 40 years) in Taiwan [8] and 6.5% (40 years and younger) in Hong Kong [4]. Thus most of the existing reports have been from Asia, and they show a difference in frequency among regions. There appear to be many young patients in Asia with HCC who are HBV-positive; HBV is an underlying disease of HCC in young patients, and many carriers live in Asia [17].
Many young patients with HCC have HBs-Ag, at reported rates of 71.4 to 100% [3-5,7-11,14]. Meanwhile, cases of HCV-Ab-positivity plus HCC among younger patients are reported at rates of 0 to 10% [4,5,7-10,12,14], which is much lower than the range for older patients. Rates of Child-Pugh A are 69.1 to 92.3% among younger patients [4-6,8-12], which is higher than the range in older patients. It has been reported that histological hepatitis or cirrhosis of non-cancerous liver is significantly less common in younger hepatectomy patients than in older hepatectomy patients among cases with HCC [3,4,12]. Though HCC is generally found by medical examination or during follow-up of liver function, in most young patients HCC is found through symptoms such as pain and/or palpation of an abdominal mass [11,14,18,19]. Accordingly, members of the younger patient group in this study had larger tumors than the older patient group.
This study revealed that the rate of cases related to HBV was 93.5%, and the rate of HBs-Ag-positive cases was 87.0%. The MST of the younger group was 40 months, and the five-year OS rate was 49.6%. These results did not differ significantly from the previously reported MST and five-year OS rates of 27.8 to 52.5 months and 30.5 to 54.8%, respectively, among cases of liver resection for HCC across all ages [20,21]. Therefore, it appears likely that aggressive and curative liver resection contributes to improved prognosis.
In regard to tumor factors, several studies have reported that more young than old patients have high AFP levels; the rates of cases in which AFP is equal to or exceeds 400 ng/ml range from 52.6 to 82.0% [3,7,9-11,14], and rates for an AFP of ≥10,000 ng/ml range from 31.6 to 60.0% [3,10,11,14]. In addition, younger patients tend to have larger tumors than older patients, with the maximum diameter of tumors being 6.9 to 12.7 cm in younger patients [3,4,7,10,12,14]. Cases showing portal invasion account for 45.0 to 100% [10-12,14] of younger HCC patients. In the present study, the younger patient group had higher AFP levels and larger tumors, was more likely to have portal invasion and showed better liver function than the older group, as has been reported elsewhere [3,7,10-12,14]. It has also been reported that cases with high AFP levels have a poor prognosis due to a correlation between tumor size and AFP [22].
As regards prognostic factors, Chen et al. reported that hepatectomy was a significant favorable prognostic factor among HCC patients aged 40 years and younger [8]. As regards other prognostic factors, AFP [8,11], portal invasion [8,11] and preserved liver function [8,11,12] have been reported, although these remain controversial. In this study, prognostic factors related to OS were HCV-Ab-positive status and low serum albumin levels, and prognostic factors related to RFS were the number of tumors and vp2, 3, 4. It has been suggested that liver function preservation primarily influences survival, and tumor factors influence recurrence. Furthermore, while the time to recurrence in the younger patients was shorter than that in the older patients, the RFS of the younger group tended to overtake that of the older group in the long term. The recurrence rate was 71%, and the site of recurrence was almost always the liver. This rate was comparable to those of other reports, which ranged from 60.2 to 78.2% across all ages [20]. These results suggest that aggressive treatments, including re-hepatectomy for recurrence, contribute to an improvement in the long-term prognosis. Moreover, in order to improve prognosis, we should perform aggressive resections and should also pay attention to cases with underlying hepatitis B, which can affect the liver. Chuma et al. reported that the quantity of HBV-DNA and non-treatment of HBV were risk factors for recurrence of HCC [23]. Li et al. reported that one-year and two-year RFS rates were 23.3% vs. 8.3%, and 2.3% vs. 0%, respectively, in a treatment group receiving lamivudine for HCC with concurrent hepatitis B vs. a control group [24]. Therefore, viral treatments in combination with cancer treatments, including resection, are important to consider.
There have been few reports on liver transplantation for young patients with HCC. The reason for this lack of information is likely to be that younger patients have relatively larger tumors, and therefore they tend to have tumors exceeding the Milan criteria. Ismail et al. reported that the outcomes of liver transplantation were better than those of liver resection among patients with HCC who were aged 2 to 27 years; namely, the OS rates were 72% vs. 40%, and the RFS rates were 91% vs. 30% [25]. It was also reported that primary liver transplantation for children with HCC without extrahepatic lesions has a good outcome, even if the tumors exceed the Milan criteria [26]. Further accumulation of cases is awaited.
As noted above, many young HCC patients present with advanced tumors and unfavorable prognostic factors. In a study of 16 patients who received liver transplantation for HCC and who had low differentiation and vascular invasion beyond the Milan criteria, Saab et al. reported that those receiving sorafenib (n = 8) had one-year OS and RFS rates of 87.5% and 85.7%, versus 62.5% and 57.1% for the control group (n = 8) [27]. It is expected that supportive treatment with molecular-targeted medicine after liver resection or transplantation could contribute to a prolonged prognosis.
Conclusions
In our younger patients with HCC, survival appeared to be mainly affected by liver function while recurrence was mainly affected by tumor factors. Young patients with HCC should be offered aggressive hepatectomy due to their relatively preserved liver function.
"year": 2013,
"sha1": "56f70ec5e6370e395496fbc6cd220dfc522393a7",
"oa_license": "CCBY",
"oa_url": "https://wjso.biomedcentral.com/track/pdf/10.1186/1477-7819-11-52",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "43608c1e70977118acc70ce570722df5545f3829",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effects of Various Ameliorants on pH, Phosphorus Availability and Soybean Production in Alfisols
Alfisols have inherent potential to increase Indonesia's soybean production; however, alfisols are also known for low phosphorus availability. A field experiment using ameliorants consisting of quail manure, zeolite and rock phosphate was conducted to increase phosphorus (P) availability and soybean production. The aim of this study is to evaluate the effects of ameliorant combinations on phosphorus availability and its correlation to soybean production in alfisols. A randomized complete block design with a single factor was used, with 9 combinations of ameliorants under study (P0-P8). The results showed that phosphorus availability increased by up to 72.6% and soybean yield by up to 75.9%. The correlation of phosphorus availability and soybean production was significant (r = 0.854). Finally, the best treatment to increase phosphorus availability and soybean production was quail manure 2.5 t.ha⁻¹ + rock phosphate 5 t.ha⁻¹.
INTRODUCTION
Soybean production area in Indonesia decreased over the last decade by at least 0.97% per year, based on data from Statistics Indonesia (BPS) in 2015. Population growth further widens the gap between soybean production and consumption. The decrease in soybean production is due to continuous use of inorganic fertilizer, climate change and other environmental factors. To keep up with soybean consumption, soybean production should be increased, and a potential soil type for soybean cultivation is alfisols.
Alfisols are in general acidic with poor fertility status (Bhat et al. 2017). Harter (2007) found that soil pH has an important role in nutrient availability, including phosphorus. Harter also mentioned that soils with pH below 5.5 will have high amounts of Al³⁺ and Fe³⁺. Hence, phosphorus binds with aluminium (Al) or iron (Fe) in alfisols, or in some cases with soil clay (Fink et al. 2016), which makes phosphorus unavailable to plants. Meanwhile, soybean requires a high amount of phosphorus, according to Heard (2005) and Monsanto Technology (2015).
Three types of ameliorants were used in this study: quail manure, zeolite and rock phosphate. Rostami et al. (2013) found that the addition of cow manure helps bind Fe, Mn and Zn to soil colloids. Related to alfisols, Pinto et al. (2013) stated that organic matter is able to increase phosphorus availability indirectly by inhibiting aluminium oxide crystallization. Khasawneh and Doll (1978) found that the use of rock phosphate is most effective on acid soils and that pH has an important role in enhancing phosphorus availability in soil.
The combined use of manure and rock phosphate was studied by Akande et al. (2005), who showed that manure helps to increase the effectiveness of rock phosphate and soil phosphorus availability. Zeolites are known for their ability to support crop cultivation by enhancing nutrient retention in the soil, owing to their high cation exchange capacity.
MATERIALS AND METHODS
This study was carried out in Sukosari district, Central Java, Indonesia, from 5 May 2018 to 5 August 2018. Laboratory analyses were done in the Soil Chemistry and Soil Fertility Laboratory, Faculty of Agriculture, Sebelas Maret University. Soil characteristics were determined at three stages: before planting (initial soil), at the maximum vegetative phase, and at harvest.
Each treatment was replicated three times, so that 27 experimental units were obtained. The cultivar Dega 1 was used as the test crop. Soybean was planted with inter-row and intra-row spacing of 25 × 25 cm. The experiment comprised plant preparation, land preparation, initial soil sampling, application of treatments, planting, maintenance, observations at the maximum vegetative and generative phases, harvesting, and laboratory analysis.
The soil analyses in this study included: pH, determined at a 1:2 (w/v) soil:water ratio; and available phosphorus, determined by the Bray I method for the initial soil and by the Olsen method after harvest (FAO, 2008). The yield components of soybean were recorded: number of total soybean pods (pods/plant), number of filled soybean pods (pods/plant) and soybean yield (t.ha⁻¹). Analysis of variance, Duncan's Multiple Range Test (DMRT) and the Pearson correlation test were performed.
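For illustration, the sketch below runs the same kind of analysis with SciPy: a one-way ANOVA across the nine treatments and a Pearson correlation between available phosphorus and yield. All data values are placeholders, and DMRT is omitted since it is not available in SciPy.

```python
import numpy as np
from scipy import stats

# Placeholder available-P (ppm) readings: nine treatments x three replicates
rng = np.random.default_rng(42)
p_avail = {f"P{i}": rng.normal(10 + i, 1.0, 3) for i in range(9)}

# One-way analysis of variance across treatments
f_stat, p_val = stats.f_oneway(*p_avail.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Pearson correlation between plot-level available P and soybean yield
x = np.concatenate(list(p_avail.values()))
y = 1.5 + 0.12 * x + rng.normal(0.0, 0.15, x.size)   # placeholder yields, t/ha
r, p_corr = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f} (the study reports r = 0.854)")
```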
The quality of the ameliorants was measured (Table 2). The C/N ratio of quail manure is 13.12 and the P₂O₅ content of rock phosphate is 1.97%. Both values meet the criteria for application based on Regulation No. 28 of Indonesia's Ministry of Agriculture. The CEC of the zeolite is 128.60 me/100 g (Table 2), which meets the technical requirement of Regulation No. 70 of Indonesia's Ministry of Agriculture that zeolite CEC be at least 120 me/100 g.
Soil pH and P availability
The data in Table 3 show that the alfisols soil pH increased from its initial value, from acidic to slightly acid. The soil pH ranged from 6.33 to 6.86, which is suitable for soybean growth according to NCRRA (2016). Among the different treatments, rock phosphate application showed a significant effect on soil pH. Soils receiving rock phosphate (P1, P3, P4, P6 and P8) showed increased pH, with the highest increase obtained in the P6 treatment, up to 54.48% compared with the initial soil pH.
These findings are in agreement with the results of Maryanto and Abubakar (2010). Meanwhile, the results of this experiment showed that zeolite application (P2, P5 and P7) was not statistically significant compared with the control (P0), although pH rose compared with its initial value (Table 2). A recent study by Miller (2016) reported that pH had a significant effect on the availability of phosphorus. In agreement with Miller's findings, the results showed that the soil pH under P6 (rock phosphate 5 t.ha⁻¹ + quail manure 2.5 t.ha⁻¹) reached neutral, and phosphorus availability significantly increased, by 108%, over the control (P0).
The results indicated an interaction between quail manure and rock phosphate. Previous studies (Alloush, 2003; Abu El-Eyuoon and Abu Ed Zamin, 2018) reported that using rock phosphate and manure could help solve common problems in acid soils, such as adjusting soil pH to the optimum and increasing phosphorus availability. However, the dose of ameliorant (in this case quail manure or rock phosphate) plays an important role: an inadequate dose did not increase phosphorus availability significantly, as shown in the P4 treatment. From previous research (Bernardi et al. 2010; Allen et al. 1993; Ramesh et al. 2015), the addition of zeolite was expected to enhance the dissolution of the rock phosphate. However, in this study, it was found that the addition of zeolite together with rock phosphate, or zeolite with quail manure, had no significant effect. Addition of rock phosphate and zeolite together in alfisols raises the pH towards alkaline (Table 2). This creates an alkaline condition in the soil in which Ca²⁺ availability potentially increases and Ca²⁺ binds with P, creating a calcium phosphate bond (Helget, 2016). The stated condition in soil promotes low phosphorus availability.
Soybean Production
The yield components of soybean, namely the number of total pods, the number of filled pods and soybean yield, were observed. The ameliorant combination of quail manure and rock phosphate in P1 had a significant effect compared to the other treatments, as shown in Table 4. P1 application increased total soybean pods and filled pods by 109% compared to the control (P0). Similarly, the highest soybean yield, 3.59 t.ha⁻¹, was recorded under the P1 treatment, which significantly increased soybean yield (by 75.9%) relative to the control (P0). An experiment on chickpea by Shinde and Hunje (2019) found that application of organic manure gave the highest 100-seed weight, owing to the ability of organic manure to supply nutrients to plants. The highest soybean production was thus recorded under the P1 treatment (quail manure 5 t.ha⁻¹ + rock phosphate 2.5 t.ha⁻¹).
Regarding available soil phosphorus (Table 3), soybean requires 0.25-0.50 ppm of phosphorus (Reuter and Robinson, 1986) during the vegetative phase for plant growth, which is later stored in the seeds/pods during the generative phase (FAO, 2004). The experimental data (Table 3) showed that ameliorant treatments were able to increase phosphorus availability compared to the soil without ameliorant (P0). Moreover, soils with the ameliorant combination of rock phosphate and quail manure (P1 and P6) gave significant results.
Availability of nutrients, in this case phosphorus, affects soybean production. To test this, a correlation analysis between phosphorus availability and soybean yield was performed. It showed a positive correlation between phosphorus availability and soybean yield (r = 0.854). Devi et al. (2012) found a similar result, namely that the level of phosphorus in soil determines soybean yield, given the importance of phosphorus for growth and nitrogen fixation. Use of zeolite in this study had no significant effect on soybean yield. Theofanoudis et al. (2015) conducted a similar experiment using manure and mineral fertilizer together with zeolite in cauliflower and reported that zeolite addition had no significant effect on cauliflower yield.
CONCLUSION
The experiment, using soybean as the test crop in alfisols with 8 ameliorant treatments and a control, showed an increase in pH (54.48%) over the control under rock phosphate treatments. The interaction between manure and rock phosphate helped increase phosphorus availability in alfisols by 108%. Available soil phosphorus showed a positive correlation with soybean production (r = 0.854). Based on the results of this study, we recommend treatment P1 (quail manure 5 t.ha⁻¹ and rock phosphate 2.5 t.ha⁻¹), because this treatment gave the highest soybean production.
REFERENCES

Akande MO, Adediran JA and Oluwatoyinbo FI (2005). Effects of rock phosphate amended with poultry manure on soil available P and yield of maize and cowpea. African Journal of Biotechnology 4: 444-448.
"year": 2020,
"sha1": "d3e86dd4838075f7a116ff584f444c930e3a31bf",
"oa_license": null,
"oa_url": "https://arccarticles.s3.amazonaws.com/webArticle/Final-attachment-published-A-458.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "daa729b41a5a485347d9148b243d6c7ce04b7914",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
Strong coupling constant from moments of quarkonium correlators revisited
We revisit previous determination of the strong coupling constant from moments of quarkonium correlators in (2+1)-flavor QCD. We use previously calculated moments obtained with Highly Improved Staggered Quark (HISQ) action for five different quark masses and several lattice spacings. We perform careful continuum extrapolations of the moments and from the comparison of these to the perturbative result we determine the QCD Lambda parameter, $\Lambda_{\overline{MS}}^{n_f=3}=332 \pm 17 \pm 2(scale)$ MeV. This corresponds to $\alpha_s^{n_f=5}(\mu=M_Z)=0.1177(12)$.
The moments can be calculated in perturbation theory in the MS scheme:

G_n = g_n(α_s(µ), µ/m_h) / m̄_h^(n−4)(µ_m) ,

where m̄_h(µ_m) is the MS heavy-quark mass.
Here µ is the MS renormalization scale. The scale µ_m, at which the MS heavy-quark mass is defined, could in general be different from µ [12]. The coefficient g_n(α_s(µ), µ/m_h) is calculated up to 4 loops, i.e. including the term of order α_s³ [13,14,15].
In lattice calculations it is more practical to consider the reduced moments [3],

R_4 = G_4/G_4^(0) ,   R_n = (G_n/G_n^(0))^(1/(n−4)) for n ≥ 6 ,

where G_n^(0) is the moment calculated from the free correlation function. The lattice artifacts largely cancel out in these reduced moments.
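For concreteness, a minimal sketch of the moment and reduced-moment computation follows. The normalization G_n = Σ_t tⁿ G(t) and the one-sided time sum are simplifying assumptions (lattice analyses sum symmetrically over the periodic time direction), and the correlators are placeholders.

```python
import numpy as np

def moment(G, n):
    """Time moment G_n = sum_t t**n * G(t) of a Euclidean correlator."""
    t = np.arange(len(G))
    return np.sum(t**n * G)

def reduced_moment(G, G_free, n):
    """R_4 = G_4/G_4^(0); R_n = (G_n/G_n^(0))**(1/(n-4)) for n >= 6."""
    ratio = moment(G, n) / moment(G_free, n)
    return ratio if n == 4 else ratio ** (1.0 / (n - 4))

# Placeholder interacting and free correlators on 48 time slices
G      = np.exp(-0.8 * np.arange(48))
G_free = np.exp(-1.0 * np.arange(48))
print({n: round(reduced_moment(G, G_free, n), 4) for n in (4, 6, 8, 10)})
```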
It is straightforward to write down the perturbative expansion for R_n:

R_4 = r_4(α_s(µ), µ/m_h) ,   R_n = r_n(α_s(µ), µ/m_h) · m_h0/m̄_h(µ_m) for n ≥ 6 ,

with r_n = 1 + Σ_j r_nj(µ/m_h) (α_s(µ)/π)^j. From the above equations it is clear that R_4 is suitable for the extraction of the strong coupling constant α_s(µ) at a scale proportional to the heavy-quark mass, m_h, while the ratios R_n/m_h0 with n ≥ 6 are suitable for extracting the heavy-quark mass once α_s(µ) is determined. One can also use ratios of the reduced moments, namely R_6/R_8 and R_8/R_10, to determine α_s. We will discuss these ratios in the Appendix.
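In practice α_s is obtained by numerically inverting the truncated series r_4(α_s) = R_4 at fixed µ. The sketch below does this with a bracketing root finder; the coefficients r_41, r_42, r_43 are placeholders rather than the published µ/m_h-dependent values, so the printed number is illustrative only.

```python
import math
from scipy.optimize import brentq

# PLACEHOLDER expansion coefficients of r_4 at mu = m_h (not the real ones)
r41, r42, r43 = 2.0, 1.0, -1.0

def r4(alpha_s):
    a = alpha_s / math.pi
    return 1.0 + r41 * a + r42 * a**2 + r43 * a**3

R4_cont = 1.2778   # continuum value at m_h = m_c from Table 1
alpha_s = brentq(lambda x: r4(x) - R4_cont, 0.05, 1.0)
print(f"alpha_s(mu = m_h) ~ {alpha_s:.3f}  (illustrative only)")
```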
Continuum extrapolations of the reduced moments of quarkonium correlators
In our analysis we used previously published lattice QCD results for the reduced moments in (2+1)-flavor QCD obtained for heavy-quark masses m_h = m_c, 1.5m_c, 2m_c, 3m_c and 4m_c, with m_c being the charm-quark mass [8].

Table 1. The continuum results for the reduced moments of quarkonium correlators at different heavy-quark masses. The last column shows the α_s values extracted from R_4 with µ = m_h. The first, second, and third errors in α_s correspond to the lattice error, the perturbative error, and the error due to the gluon condensate, respectively, see text.

m_h    | R_4        | R_6/m_h0   | R_8/m_h0   | R_10/m_h0  | α_s(µ = m_h)
1.0m_c | 1.2778(20) | 1.0200(16) | 0.9166(17) | 0.8719(21) | 0.3798(28)(31)(22)
1.5m_c | 1.2303(30) | 1.0792(20) | 0.9860(20) | 0.9462(23) | 0.3151(43)(…)

The lattice calculations have been performed using the HISQ action at several values of the lattice spacing [8]. The lattice spacing has been fixed through the r_1 parameter from the static quark-antiquark potential [9,10,11], and the value

r_1 = 0.3106(18) fm    (7)

obtained from the pion decay constant was used [16]. Furthermore, the calculations have been performed at two values of the light quark masses, corresponding to pion masses of 161 MeV and 320 MeV in the continuum limit, and no dependence of the reduced moments on the light quark mass was found within errors [8]. The lattice results on R_n, n = 4, 6, 8, 10 are found in Tables VII-XI of Ref. [8] for different lattice spacings. The errors in the tables include statistical errors, errors related to mistuning of the charm-quark mass and finite volume errors. All the errors have been added in quadrature. The bare charm-quark masses are found in Table I of [8].
Because the tree-level lattice artifacts cancel out for the reduced moments, the lattice spacing dependence can be parameterized as a polynomial in (am_h0)² whose coefficients are themselves polynomials in the boosted gauge coupling α_s^b = g_0²/(4πu_0⁴), g_0² = 10/β, possibly with additional log(am_h0) terms (Eqs. (8) and (9)). We performed joint fits of the lattice results on R_4 and R_n/m_h0 obtained at different quark masses to Eq. (8) and Eq. (9), setting d_ijk = e_ijk = 0. The reason for setting the coefficients of the log terms to zero was to avoid having too many poorly constrained parameters, since the logarithmic dependence on am_h0 is much weaker than the power-law dependence. For the continuum extrapolations of R_4, where the lattice spacing dependence is the most prominent, we also performed fits allowing for a few terms proportional to log(am_h0). These fits are discussed in the Appendix. Furthermore, the maximal numbers of terms N and M_i in Eqs. (8) and (9) should be sufficiently large so that higher order terms have negligible impact on the continuum extrapolation. The continuum extrapolations of R_4 based on the joint fits of the m_h = m_c − 4m_c lattice data turned out to be much more stable with respect to fit-range variations than the extrapolations performed in Ref. [8] separately for each value of m_h. In particular, there was no problem incorporating the 4m_c data in the analysis, unlike in Ref. [8], where this was not possible. A sample fit of the data on R_4 is shown in Fig. 1. As a cross-check we also use the Akaike information criterion (AIC) [18,19] to obtain the continuum result for R_4 from the performed fits. First, we calculate the AIC weights for each fit and then calculate the weighted average of the fit results with the corresponding weights.
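The AIC-weighted average described here can be written compactly as below; the least-squares convention AIC = χ² + 2k is an assumption, since the text does not spell out the exact definition used.

```python
import numpy as np

def aic_weighted_average(values, chi2, n_params):
    """Average fit results with weights w_i ~ exp(-AIC_i / 2)."""
    aic = np.asarray(chi2) + 2.0 * np.asarray(n_params)
    w = np.exp(-0.5 * (aic - aic.min()))   # subtract the minimum for stability
    w /= w.sum()
    return np.average(values, weights=w), w

# Hypothetical continuum values of R_4 from fits with different fit ranges
values, chi2, k = [1.2775, 1.2781, 1.2769], [18.3, 17.9, 19.5], [6, 8, 10]
mean, weights = aic_weighted_average(values, chi2, k)
print(mean, weights)
```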
Interestingly, this resulted in central values of R 4 very similar to those shown in Table 1.
Furthermore, as mentioned above we also performed the continuum extrapolations, which allow for a few terms proportional to log(am h0 ). The corresponding continuum results for R 4 are not significantly different from the ones in Table 1, see Appendix.
To obtain continuum results for R_n/m_h0, n ≥ 6, it is sufficient to consider fits with N = 1 and M_1 = 2. This is because the errors on R_n/m_h0 are much larger than for R_4. These errors are dominated by the uncertainties in m_h0, which are essentially the uncertainties in m_c0 multiplied by the corresponding constant (3/2, 2, 3 and 4). The uncertainties in m_c0 come from the errors in tuning the charm-quark mass in the lattice calculations due to the errors of the ground-state charmonium mass and the error in the lattice spacing [8]. The errors on m_c0 are given in the sixth column of Table 1 in Ref. [8]. We used several values of (am_h0)_max in our fits. The differences in the central values of (R_n/m_h0)^cont corresponding to the fits with various (am_h0)_max turned out to be much smaller than the statistical errors. So one can choose any of these fit results. For the final continuum estimate we choose the fits with the smallest χ²/df, which turned out to be the fits with (am_h0)²_max = 1.0 for R_6/m_h0 and R_8/m_h0, and the fit with (am_h0)²_max = 0.8 for R_10/m_h0. The corresponding results are given in Table 1.
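A joint continuum extrapolation of this kind can be sketched as one continuum value per quark mass plus lattice-artifact coefficients shared across all masses. Since the explicit forms of Eqs. (8) and (9), including the powers of the boosted coupling, are not given above, the simple shared (am_h0)² polynomial below is an assumption, and all data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, datasets):
    # p = [one continuum value per mass] + [c1, c2 shared across masses]
    n = len(datasets)
    R_cont, c1, c2 = p[:n], p[n], p[n + 1]
    res = []
    for Rc, (am, R, dR) in zip(R_cont, datasets):
        model = Rc * (1.0 + c1 * am**2 + c2 * am**4)
        res.append((model - R) / dR)
    return np.concatenate(res)

# Synthetic data: (am_h0, R_n/m_h0, error) for three quark masses
rng = np.random.default_rng(1)
datasets = []
for Rc_true in (1.02, 1.08, 1.12):
    am = np.array([0.2, 0.4, 0.6, 0.8])
    R = Rc_true * (1.0 - 0.05 * am**2 + 0.01 * am**4) + rng.normal(0, 1e-3, 4)
    datasets.append((am, R, np.full(4, 1e-3)))

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 0.0, 0.0], args=(datasets,))
print("continuum values:", fit.x[:3], "artifact coefficients:", fit.x[3:])
```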
The new continuum results agree very well with the previous ones but have smaller errors.
For some cases the error reduction is significant.
Having determined the continuum limit of the reduced moments of quarkonium correlators, we are in a position to obtain the strong coupling constant. As discussed in section II, the value of α_s(µ) can be obtained by comparing R_4 calculated in perturbation theory to the continuum-extrapolated lattice result. The renormalization scale µ has to be of the order of the heavy-quark mass. We also need to fix the renormalization scale µ_m at which the heavy-quark mass is defined. The most natural choice is µ_m = µ, and it will be used throughout this paper. We consider several choices of the renormalization scale µ proportional to the heavy-quark mass; the resulting α_s values for µ = m_h are given in Table 1. There is a perturbative error due to missing higher order corrections. We estimated this error in the same way as in Ref. [8], namely we added a term proportional to r_43(α_s/π)⁴ with a coefficient that was varied between −5 and +5, to be conservative. Finally, there is an error due to the gluon condensate contribution, which was estimated in Ref. [8] by varying the poorly known gluon condensate by a factor of two.
For larger heavy-quark masses the perturbative error is smaller because the corresponding α_s(µ) is smaller. The lattice error, on the other hand, is larger for larger m_h, since R_4 is closer to one and the relative error on R_4 increases. The error due to the gluon condensate rapidly decreases with increasing m_h and is negligible for m_h > 2m_c. As an example we show the values of α_s for µ = m_h in Table 1. The heavy-quark mass, in turn, is determined from the continuum results for R_n/m_h0, n ≥ 6. There are several uncertainties in this determination. One is due to the error in the continuum-extrapolated value of R_n/m_h0. The second source of uncertainty is due to the missing higher order perturbative corrections in R_n. There is also an uncertainty due to the gluon condensate contribution. These have been estimated in the same manner as in the case of R_4. Finally, there is an uncertainty due to the error in α_s. The latter in turn is also affected by the error due to the gluon condensate contribution, which is correlated with the gluon condensate error in R_6. This correlation should be taken into account. The perturbative errors in R_4 and R_n, n ≥ 6, can be assumed to be uncorrelated. The statistical errors in R_4 and R_n are also propagated into the error budget. The charm-quark masses obtained from R_10/m_c0 are not shown since they appear to be very close to the ones obtained from R_8/m_c0. The values of m_c determined for different µ/m_h but the same value of µ in GeV also agree with each other, except for µ/m_h = 3, which are about two sigma lower than the ones for µ/m_h = 1 for µ < 5 GeV. This fact may indicate that the above procedure of estimating the perturbative error due to missing higher order terms was not sufficiently conservative for µ < 5 GeV. Since the dependence of α_s on the charm-quark mass is logarithmic, the above small inconsistency in the m_c determination is insignificant compared to other sources of errors. We also note that, because of the uncertainty in the absolute scale, we could not significantly improve the charm-quark mass determination compared to the result of Ref. [8], despite the reduced errors in the continuum values for R_n/m_c0, n ≥ 6.
Having determined the charm-quark mass, we are in a position to obtain the running coupling constant and the Λ-parameter for three-flavor QCD. Using the values of α_s shown in Fig. 2 we determine the Λ-parameter; the results are summarized in Table 2. Our error estimate for the Λ^(n_f=3)_MS parameter is quite conservative and is significantly larger than the one by the HPQCD collaboration [4]. This is due to the absence of Bayesian priors and to considering different choices of the renormalization scale µ, and not just µ = 3m_h. We compare our result for the Λ-parameter with other three-flavor lattice determinations, namely from the moments of quarkonium correlators [4], from the static quark-antiquark potential [22,23,24], from the step-scaling analysis by the ALPHA collaboration [25], from the ghost-gluon vertex in Landau gauge [26], and from the light-quark vector current correlator [27]. This comparison is shown in Fig. 5; the results of Ref. [22] are not shown there, as they are superseded by Ref. [23]. We see that our result is consistent with other lattice determinations.
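For orientation, the two-loop relation between α_s(µ) and Λ can be evaluated in closed form. The paper's analysis uses higher-order running (typically done with dedicated packages such as RunDec), so the sketch below is only an illustration.

```python
import math

def lambda_msbar_two_loop(alpha_s, mu, nf=3):
    """Two-loop closed-form estimate of Lambda_MSbar from alpha_s(mu)."""
    b0 = (33 - 2 * nf) / (12 * math.pi)
    b1 = (153 - 19 * nf) / (24 * math.pi**2)
    return (mu * math.exp(-1.0 / (2.0 * b0 * alpha_s))
            * (b0 * alpha_s) ** (-b1 / (2.0 * b0**2)))

# alpha_s(mu = m_h) ~ 0.3798 at m_h = m_c, taking mu ~ 1.27 GeV (PDG m_c)
print(lambda_msbar_two_loop(0.3798, 1.27))   # ~ 0.34 GeV
```

With α_s ≈ 0.38 at µ ≈ 1.27 GeV this returns roughly 0.34 GeV, in the right ballpark of the quoted Λ^(n_f=3)_MS = 332(17) MeV.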
The running coupling is then evolved across the charm- and bottom-quark thresholds to obtain the five-flavor coupling; the running charm-quark mass in the MS scheme is obtained from R_8. The resulting value of α_s^(n_f=5) agrees with other determinations from the moments of quarkonium correlators within errors [3,4,5,7,28]. It also agrees with the averaged α_s^(n_f=5) from lattice determinations [1,2].
Before closing this section let us discuss the determination of the strong coupling constant from the ratios R 6 /R 8 and R 8 /R 10 . As mentioned in section II the heavy-quark mass drops out in these ratios and therefore they are well suited for the determination of α s .
Naively, one would expect that the continuum extrapolation of these ratios is simpler than for R_4, as the higher order moments are less sensitive to the short-distance physics. Using such reasoning, the strong coupling constant was determined in Ref. [7] using only these ratios. The ratios R_6/R_8 and R_8/R_10 have also been used in an attempt to determine α_s with additional cross-checks [8]. It turns out, however, that the cutoff dependence of R_6/R_8 and R_8/R_10 is far from simple, and it is challenging to describe it quantitatively. Furthermore, the finite volume effects are also significant for these ratios. The continuum extrapolations for R_6/R_8 and R_8/R_10 from simultaneous fits of the lattice data at different quark masses are discussed in the Appendix. We explain there why these continuum extrapolations are difficult.
It turns out that in order to obtain a consistent α_s determination from these ratios, additional priors have to be imposed. The continuum results for R_6/R_8 at m_h = m_c and m_h = 1.5m_c are consistent with the previous results [8]. However, it is not possible to obtain reliable continuum results for R_6/R_8 for m_h > 2m_c. In the case of R_8/R_10 the finite volume effects are quite severe for m_h = m_c, and therefore reliable continuum results can be obtained only for m_h ≥ 1.5m_c. The continuum results for R_8/R_10 turned out to be systematically larger than in Ref. [8]. As a result, the corresponding α_s values are larger than the α_s values obtained from R_8/R_10 in Ref. [8] and agree well with the corresponding ones obtained from R_4. On the other hand, the strong coupling constant extracted from R_8/R_10 has a larger error and therefore does not improve the precision of our α_s determination. Nevertheless, it does provide a useful cross-check of our analysis.
Conclusion
In this paper we revisited the determination of the strong coupling constant from the moments of quarkonium correlators. Using previously published lattice results on the reduced moments in (2+1)-flavor QCD with heavy-quark masses m h = m c , 1.5m c , 2m c , 3m c and 4m c at several lattice spacings we estimated the continuum results on the fourth moment.
These estimates were based on simultaneous fits of the lattice spacing dependence of the reduced moments at several quark masses, similar to the analysis of the HPQCD collaboration [4,5]. The new continuum estimates turned out to be much more robust compared to the ones obtained from fits of the cutoff dependence of R_4 performed separately for each quark mass [8]. While both studies use the same form to parameterize the cutoff dependence of R_4, there is an essential difference. The present analysis strongly relies on the specific form of the cutoff dependence given by Eqs. (8) and (12), while in Ref. [8] it is just an effective way to parameterize the lattice spacing dependence of these quantities and is not essential for the final continuum result. In this study we constrain the lattice spacing dependence at each heavy-quark mass with the lattice spacing dependence of all other heavy-quark masses, while the previous analysis in Ref. [8] permitted independent variation of the coefficients at different heavy-quark masses. The continuum results at m_c and 1.5m_c are in good agreement in these approaches. This is reassuring for controlling the continuum extrapolation of these quantities, at least for the two lower values of the quark masses. We also revisited the continuum extrapolations of R_6/R_8 and R_8/R_10 using simultaneous fits of the lattice results at different quark masses. We have shown that the apparent weaker cutoff dependence of these ratios is misleading, and reliable continuum extrapolations are challenging. We were able to obtain reliable continuum extrapolations for R_6/R_8 only for m_h ≤ 2m_c. For R_8/R_10, reliable continuum results could be obtained only for m_h ≥ 1.5m_c, because of the severe finite volume effects at the charm-quark mass.
A Continuum extrapolation of the ratios and α s determination
In this appendix we discuss continuum extrapolations for R 4 which allow for terms proportional to log(am h0 ).
Furthermore, we discuss the continuum extrapolations of the ratios R 6 /R 8 and R 8 /R 10 and the determination of α s from these ratios.
As discussed in the main text, including terms proportional to log(am_h0) is challenging, as the logarithmic dependence on am_h0 is much weaker than the power-law dependence. Therefore, only a few logarithmic terms can be included in the fits to avoid over-fitting, and the number of terms in Eq. (8) should also be reduced. The resulting continuum estimates for R_4 are not significantly different from the ones given in Table 1.

As discussed in the main text, it is also possible to determine the strong coupling constant from the ratios R_6/R_8 and R_8/R_10, as the heavy-quark mass drops out in these ratios. The apparent cutoff dependence of the ratios R_6/R_8 and R_8/R_10 calculated on the lattice is indeed smaller than for R_4 [8]. As we have seen above, to describe the cutoff dependence of R_4 many powers of am_h0 are needed, and the coefficients often have opposite signs from one order in (am_h0)² to the next. Therefore, the apparent cutoff dependence of R_4 turns out to be smaller as we increase the heavy-quark mass, contrary to naive expectations. The situation could be similar for R_6/R_8 and R_8/R_10. Furthermore, the cutoff dependences of the numerator and denominator, while being significant, could cancel out in the ratios, thus fooling one into thinking that cutoff effects are small and can be modeled with a low order polynomial in (am_h0)². We should keep these issues in mind when performing continuum extrapolations of the ratios.
To obtain the continuum result for R_6/R_8 we perform simultaneous fits of the lattice data at different quark masses to Eq. (12). As in Ref. [8], we omit data on fine lattices to avoid finite volume effects when performing the fits. The χ²/df of the fit is large unless we use high order polynomials in (am_h0)². However, using high order polynomials in the fit results in many poorly constrained parameters. Furthermore, a closer look at the lattice data reveals that the slope of the (am_h0)² dependence is quite different for the various m_h, explaining why χ²/df is large. To deal with these problems we omit lattice results for m_h ≥ 3m_c; including these data would require adding many more parameters in Eq. (12).

Table 4. The continuum values of the ratios R_6/R_8 and R_8/R_10 and the corresponding coupling constants α_s(µ = m_h) for different values of the heavy-quark masses m_h, see text.

The fit for (am_h0)²_max = 0.6, as well as the main features of the lattice data for R_6/R_8, are demonstrated in Fig. 6. No significant dependence of the continuum result on (am_h0)²_max has been found. We choose the results of the fits with (am_h0)²_max = 0.6 for the final continuum estimate, which are shown in Table 4. We also used the AIC to obtain the continuum values, and these were very close to the central values from the above fits. The new continuum estimate for R_6/R_8 agrees with the results of Ref. [8] within errors.
From the continuum results on R_6/R_8 at m_c and 1.5m_c we determine the corresponding α_s(m_h) by comparing to the 4-loop perturbative results, which are also given in Table 4. Next we perform the continuum extrapolation for R_8/R_10. As for R_6/R_8, we fit the cutoff dependence of the lattice results at different quark masses with Eq. (12). The finite volume effects are the largest for R_10, and thus for R_8/R_10, and it is possible that the finite volume errors in Ref. [8] were not adequate for many of the β values, especially in the case of m_h = m_c. This may explain why the α_s values obtained from R_8/R_10 were systematically lower [8]. For m_h = m_c the lattice data for R_8/R_10 show a non-monotonic dependence in β for 6.74 ≤ β ≤ 7.28. We interpret this as an indication that the finite volume errors are not under control for β > 6.88 and m_h = m_c. We note that the low central value of R_8/R_10 is not unique to Ref. [8] but has been seen in other works [3,6,7] as well, with the exception of Ref. [4]. The non-monotonic dependence of R_8/R_10 with increasing β is also observed for m_h = 1.5m_c and 2m_c, but the maximum is shifted to significantly larger values of β. Finally, for m_h = 3m_c and 4m_c this non-monotonic behavior cannot be clearly observed because of the large errors on the finest lattices. The above differences in the cutoff dependence of R_8/R_10 at different quark masses make a simultaneous fit of the cutoff dependence very difficult. This difficulty is likely related to the finite volume effects. To solve this problem we discard data on R_8/R_10 with small spatial extent. For m_h = m_c the finite volume effects are under control for β up to β = 6.88, which corresponds to the bare charm-quark mass am_c0 = 0.48 and spatial extent N_s = 48 (cf. Table I of Ref. [8]). The corresponding continuum results are given in Table 4. We also applied the AIC to the different fit results, and the resulting continuum estimates turned out to be close to the central value of the fit with (am_h0)²_max = 0.8. From the continuum values for R_8/R_10 in Table 4 we determine α_s(m_h) by comparing to the 4-loop perturbative results.
"year": 2020,
"sha1": "aa803ddb2c8d783f1fa37e07bced53e1d8305928",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-022-09998-0.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "aa803ddb2c8d783f1fa37e07bced53e1d8305928",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Comparative dynamic analysis of morbidity in various age groups in Russian Federation
Aim of the study ― To perform a comparative analysis of morbidity rates in children aged 0-14 years and 15-17 years, in the population older than 18 years, and in women older than 55 years and men older than 60 years in the Russian Federation, based on the data of healthcare visits in 2004-2014. Material and Methods ― Data in statistical yearbooks published by Rosstat were studied by calculating mean morbidity rates and their amplitude through the years and by comparing them using deterministic factor analysis. Results ― An increase of the morbidity rate was observed (1) in children aged 0-14 years by 665.9‰ (1667.0 to 2332.9‰) with a mean rate of 2000.0±333.0‰, and in children aged 15-17 years by 358.6‰ (1060.2 to 1418.8‰) with a mean rate of 1245.8±185.6‰, a difference of 307.3‰ and an amplitude of 147.4‰; (2) in the population older than 18 years by 49.3‰ (515.4 to 564.7‰) with a mean rate of 541.8±26.4‰, and in women older than 55 years and men older than 60 years by 42.4‰ (2039.9 to 2082.3‰) with a mean rate of 2054.5±27.9‰, a difference of 6.9‰ and an amplitude of 1.5‰. Specific characteristics of morbidity in various age groups were determined. Conclusion ― The rate of healthcare visits in the Russian Federation was higher for children aged 0-14 years and the population older than 18 years. The rate of morbidity increase was higher in children aged 0-14 years and in women older than 55 years and men older than 60 years. Structural differences in disease groups were detected, which may be taken into account when planning preventive measures according to population age.
Introduction
Morbidity in various age groups has specific features, which are determined not only by anatomical and physiological differences, but also by the possibility of receiving medical and prophylactic care, and by lifestyle and living conditions, which are related to the levels of social and economic development in the regions of the Russian Federation [1]. The aforementioned problem exists in other countries as well. Foreign researchers in the field of healthcare organization also confirm that the lifestyle of a population lies at the foundation of national health, and that knowledge of the health condition of various age groups allows medical care and prophylaxis to be organized with the specific differences among them taken into account [2]. Sources indicate that while the health of various age groups in the Russian Federation tends to deteriorate [3], such negative age-related trends are observed in other countries too, including the countries of the European Union [4].
To a great extent, the basis of potential future health is determined in childhood. Unfavorable tendencies in children's health have been observed in the Russian Federation during the last decades, characterized by an increase in the incidence of functional disorders and chronic diseases and a decline in the rate of physical development [5]. According to British researchers, children in other countries also tend to suffer from negative changes in health [6].
The degree of health decline in a population is reflected by incidence rates, while their structural analysis makes it possible to define priority prophylactic measures for organizing healthcare. Dutch researchers state that it is necessary to study morbidity in all groups of a country's citizens in order to observe demographic trends and monitor progress towards national goals in healthcare development [7]. Organizing preventive medicine on the basis of differences in the health of various age groups in the Russian Federation is therefore a relevant issue.
The aim of this study was to perform a comparative dynamic analysis of general morbidity rates in children aged 0-14 and 15-17 years, in the population older than 18 years, and in women older than 55 years and men older than 60 years in the Russian Federation, based on healthcare visits and requests in 2004-2014.
Study of statistical yearbooks
Evaluation of general morbidity in the Russian Federation based on healthcare visits and requests was performed through a consecutive study of data on the population of different ages in state-published statistical yearbooks (Rosstat) for 11 years (2004-2014) [8,9].
Dynamic analysis
Absolute numerical data on population morbidity grouped by age (based on Rosstat yearbooks) were arranged into a dynamic range and, by calculating mean values for each group, transformed into relative parameters (extensive and intensive), which were characterized and compared against each other. The extensive parameter is the structural share of a certain group of diseases (ICD-10) [10] in the population, while the intensive parameter is the incidence of a disease in the population. The structural and logical relationship between those parameters was analyzed and interpreted.
Statistical analysis
Mean arithmetic values and their fluctuations through the years were calculated and compared by deterministic factor analysis [11] using Microsoft Office Excel 2010 and SPSS 11.5 software.
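For illustration only (the authors worked in Excel and SPSS), the basic dynamic-range quantities can be recomputed from the endpoint values reported in this paper; the snippet below uses the 0-14 age group figures, and the intermediate yearly values, which are not reproduced here, would be needed for the mean and amplitude.

```python
# Illustrative recomputation of the dynamic-analysis quantities for the
# 0-14 age group, using the endpoint morbidity rates reported in the text
# (1667.0 per mille in 2004 and 2332.9 per mille in 2014).
first, last = 1667.0, 2332.9          # per mille, 2004 and 2014
intervals = 2014 - 2004               # ten one-year intervals

absolute_increase = last - first                       # 665.9 per mille
relative_increase = absolute_increase / first * 100.0  # about 40 %, as reported
mean_yearly_growth = relative_increase / intervals     # about 4 % per year

print(f"absolute increase: {absolute_increase:.1f} per mille")
print(f"relative increase: {relative_increase:.1f} %")
print(f"average increase:  {mean_yearly_growth:.1f} % per year")
```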
Results
In order to deepen the understanding of disease incidence and possibly improve healthcare management, the levels of general morbidity and the priority disease classes were studied among children aged 0-14 years and 15-17 years, the population older than 18 years, and women older than 55 years together with men older than 60 years in the Russian Federation over the 2004-2014 time period.
It was established that the morbidity rate increased: 1) in children aged 0-14 years by 40.0% (from 1667.0 to 2332.9‰) with a mean value of 2000.0±333.0‰ and an average increase of 4.0% per year, and in children aged 15-17 years by 33.8% (from 1060.2 to 1418.8‰) with a mean value of 1245.8±185.6‰ and an average increase of 3.4% per year (Table 1); 2) in the population older than 18 years by 9.6% (from 515.4 to 564.7‰) with a mean value of 541.8±26.4‰ and an average growth of 1.0% per year, while in women older than 55 years and men older than 60 years it increased by 2.1% (from 2039.9 to 2082.3‰) with a mean value of 2054.5±27.9‰ and an average increase of 0.2% per year (Table 2). Among the different disease classes, the highest incidence is observed in the following groups: 1) in children aged 0-14 years: diseases of the respiratory system (1096.1‰), injuries and poisoning (104.6‰), and diseases of the skin and subcutaneous tissue (91.4‰), which held the top three positions, while infectious and parasitic diseases (86.6‰) and diseases of the digestive system (84.2‰), holding fourth and fifth positions, were also notable; the aforementioned classes include 84.3% of all disease cases; in children aged 15-17 years: diseases of the respiratory system (600.9‰), injuries and poisoning (140.5‰), and diseases of the skin and subcutaneous tissue (79.2‰), which held the top three positions, while diseases of the digestive system (67.8‰) and diseases of the genitourinary system (58.0‰) were also notable, holding fourth and fifth positions; the aforementioned classes include 77.7% of all disease cases (Table 3).
2) in the population older than 18 years: diseases of the respiratory system (154.1‰), injuries and poisoning (86.1‰), and complications of pregnancy and childbirth (77.9‰), which held the top three positions, while diseases of the genitourinary system (50.0‰) and diseases of the skin and subcutaneous tissue (40.0‰), holding fourth and fifth positions, were also notable; the aforementioned classes include 67.5% of all disease cases; in women older than 55 years and men older than 60 years: diseases of the respiratory system (126.3‰), injuries and poisoning (73.2‰), and diseases of the circulatory system (56.3‰), which held the top three positions, while diseases of the eye and adnexa (41.9‰) and diseases of the skin and subcutaneous tissue (38.5‰) were also notable, holding fourth and fifth positions; the aforementioned classes include 64.9% of all disease cases (Table 4).
Other classes of diseases in the studied population of the Russian Federation had a lower incidence, and their comparison was of little statistical interest.
Discussion
Russian researchers in the fields of social science, economics and medicine, including "public health and healthcare", state that an increase in morbidity rates in the population of the Russian Federation has been observed since 1990, pointing out certain structural differences among various age groups [12,13]. Comparison of the mean values of morbidity for different disease classes, with age taken into account, demonstrates the characteristics and specifics of disease incidence and health-seeking behavior.
National healthcare development researchers of the US National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) and the National Institutes of Health (NIH), in their articles on the results of the PROMIS research, have shown that studying the rates and structure of morbidity in populations of different age groups facilitates the formation of specific preventive practices [14].
As of now, however, foreign scientific papers available in open access do not include comparisons of morbidity rates and structure across a whole population stratified by age in specific countries, although some short reports on disease incidence in certain age groups do exist. Brief reports by authors from various countries cover morbidity rate and structure in children, adults and the elderly with different age stratifications [15]. That is why it is impossible to compare the evaluated morbidity rates between the Russian Federation and other countries. Age differences were observed when comparing morbidity rates in the Russian Federation through 2004-2014. The difference in morbidity level was: 1) 307.3‰ between children aged 0-14 years (665.9‰) and 15-17 years (358.6‰), with an amplitude of 147.4‰, which is evidence of increasing morbidity in children aged 0-14 years; 2) 6.9‰ between the population older than 18 years (49.3‰) and women older than 55 years and men older than 60 years (42.4‰), with an amplitude of 1.5‰, which confirms that the morbidity rate is higher in the population older than 18 years while the morbidity growth rate is higher in women older than 55 years and men older than 60 years.
People of various age groups have different lifestyles. Learning activity is dominant in children (0-17 years), whereas in the population older than 18 years work and other activities prevail. Consequently, the incidence rates of certain diseases will differ.
Main changes in disease incidence in the Russian Federation through the studied period are characterized by an increase of: 1) congenital malformations (by 67.3%), diseases of the blood and blood-forming organs (by 55.6%), certain infectious and parasitic diseases (by 52.4%), diseases of the respiratory system (by 45.2%), diseases of the ear and mastoid process (by 42.0%), diseases of the digestive system (by 19.5%), and diseases of the skin and subcutaneous tissue (by 13.3%) in children aged 0-14 years; 2) diseases of the genitourinary system (by 92.7%), diseases of the circulatory system (by 79.8%), endocrine, nutritional and metabolic diseases (by 48.5%), diseases of the musculoskeletal system and connective tissue (by 48.2%), and injuries and poisonings (by 34.3%) in children aged 15-17 years; 3) diseases of the blood and blood-forming organs (by 36.0%), certain infectious and parasitic diseases (by 32.5%), diseases of the nervous system (by 27.3%), diseases of the genitourinary system (by 24.8%), diseases of the respiratory system (by 18.0%), and injuries and poisonings (by 15.0%) in the population older than 18 years; 4) diseases of the circulatory system (by 80.4%), neoplasms (by 56.7%), diseases of the eye and adnexa (by 43.0%), diseases of the ear and mastoid process (by 37.8%), endocrine, nutritional and
Table 3. Comparative characteristics of morbidity rate and structure of children aged 0-14 and 15-17 years in the Russian Federation stratified by disease classes (mean values for the 2004-2014 period); columns: class of disease (ICD-10), incidence per 1000 children*, amplitude of variance by years.
Table 4. Comparative characteristics of morbidity rate and structure in the population older than 18 years and in women older than 55 years and men older than 60 years in the Russian Federation stratified by disease classes (mean values for the 2004-2014 period).
Table 2. Comparative dynamic analysis of general morbidity in the population older than 18 years and in women older than 55 years and men older than 60 years in the Russian Federation through 2004-2014.
* -value taken from official statistical yearbooks published by Rosstat; ** -calculated by authors. | 2016-10-10T18:24:48.217Z | 2016-09-01T00:00:00.000 | {
"year": 2016,
"sha1": "f98d62ee370777e30d3dcaf2c0c0666c18d40d44",
"oa_license": "CCBYNC",
"oa_url": "http://www.romj.org/files/pdf/2016/romj-2016-0307.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f98d62ee370777e30d3dcaf2c0c0666c18d40d44",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234076965 | pes2o/s2orc | v3-fos-license | Applying the Deep Learning Model on an IoT Board for Breast Cancer Detection based on Histopathological Images
In breast cancer diagnosis, pathologists evaluate microscopic images of tissue samples to determine if it is benign or malignant. The manual examination process could result in delayed diagnosis, which leads to late cancer treatment and can risk lives. In this paper, we proposed an automated, low-cost, and portable breast cancer detection based on histopathological images by using deep learning. The deep learning models were designed by using the Convolutional Neural Network (CNN). This paper compares the performance of the CNN model by using transfer learning utilizing a pre-trained model (VGG16) and the performance of a CNN model without transfer learning. The result shows that transfer learning provides a good base for classification of histopathological images. The model was successfully deployed on a Raspberry Pi, which demonstrates the model efficiency to run on a lightweight and portable processor.
Introduction
Breast cancer is the second most common cancer among women worldwide, and it is the most common cancer detected among Malaysian women. Some studies show that breast cancer develops between the ages of 20 and 40, and a woman's risk of breast cancer increases with age. Nearly 3,500 breast cancer cases are detected every year. Breast cancer is a significant cancer among women in Malaysia, followed by cervical cancer [1].
Traditionally, breast cancer is detected by pathologists manually examining tissue in the lab, using a microscope or x-ray images, to analyze whether the breast tissue is cancerous. The disadvantage of this process is the time taken for the pathologist to examine the tissues, so the results take a long time to obtain; the process is time-consuming. Fast detection is essential so that action can be taken earlier to cure the cancer and prevent it from becoming worse. There are numerous technologies available that can enhance breast cancer detection, and the utilization of artificial intelligence approaches in medical fields can be considered excellent assistance in the decision-making process of medical practitioners. Most of the research done on breast cancer detection uses machine learning methods to detect the existence of cancer from histopathological (microscopic) images [2] [3] [4] [5] [6]. Deep learning has demonstrated a high accuracy rate [6]. Deep learning can be considered a sophisticated version of neural networks with a higher number of layers, which results in more complex computation. The accuracy and efficiency of the algorithm depend on the deep learning network design, which is specific to the application. Hence, studies to improve deep learning structures for cancer detection based on microscopic images are desirable.
Also, the equipment for breast cancer detection is usually costly; it is expensive to buy and to maintain. Patients need to wait for their turn to get diagnosis results, since the resources (machines, money, and staff) are limited. Low-cost and portable devices are useful to speed up breast cancer diagnosis and to allow scalable examination.
A whole new impact can be brought to breast cancer detection by using deep learning on a low-cost, small, embedded device. Similar work has been conducted for skin cancer detection [7]. Hence, our research mainly focuses on the development of a deep learning model to detect breast cancer based on histopathological images on an IoT board, so that the system is low cost, portable, and easy to access.
The system examines microscopic images of breast tissue and decides whether the tissue is cancerous. Two deep learning design approaches have been used in this research. The first approach is a Convolutional Neural Network (CNN) model developed from scratch (without a pre-trained model). The second approach uses transfer learning, combining the convolutional layers of a pre-trained VGG16 model with fully connected layers that are fine-tuned for breast cancer pathological images. This paper compares the performance of the breast cancer detection models from these two approaches. The model was then successfully deployed on an IoT board (a Raspberry Pi) to demonstrate its low cost and portability.
The second section of this paper discusses related works, followed by the methodology. Next, we present the results, which include the performance of the breast cancer detection models and a demonstration of the model on a Raspberry Pi. Finally, we summarize our work in the conclusion.

Related Works

Breast cancer auto-detection and classification from breast histopathology images were developed in [2] using the DRYAD database. First, contrast enhancement is used to remove the stain, because the histological image is not clear, and the noise is reduced using a Gaussian blur filter, so that the contrast is improved and the noise reduced. Next, segmentation is applied using the K-means clustering algorithm, and then watershed segmentation and color thresholding were performed. Shape and morphology features were extracted during feature extraction. In this study, a simple rule-based classifier and a J48 decision tree were used for classification.
In [3], the histopathological images were taken from Mansoura University Hospital, with 72 microscopic image samples of each of the benign and malignant classes. The images are resized to one of two sizes: (1024 x 1024) or (512 x 512). The noise in the images was removed using a median filter. Next, unsharp masking was applied to improve the quality of the color image. K-means, C-means clustering and watershed were used during segmentation. The pre-processed images then underwent a feature extraction process that includes shape, texture, and color descriptors. The three classifiers used in this research are Support Vector Machine (SVM), K-Nearest Neighbours (K-NN), and Back-Propagation Neural Networks (BPNNs).
In [4], a system for diagnosis, prognosis and prediction of breast cancer was developed using an Artificial Neural Network (ANN) and Learning Vector Quantization (LVQ). A total of 683 microscopic images was taken from the Wisconsin Breast Cancer Diagnosis (WBCD) database. A set of 444 images, containing 260 benign and 184 malignant samples, is used for training, while the remaining 239 images are used for testing. In [5], a parallel neural network model to detect breast cancer was proposed. The data set used in this research is from the University of Wisconsin hospital and consists of 699 images, of which 458 are benign and 241 malignant. A feed-forward neural network model with a backpropagation learning algorithm, momentum, and a variable learning rate is trained. In this paper, single and multilayer networks are trained. The results show that a multilayer neural network is better than a single-layer one in terms of the training time required.
In [6], a new hybrid convolutional and recurrent deep neural network for breast cancer histopathological image classification is proposed. An initial set of 249 pathological images was taken from the Bioimaging 2015 data set, and 3771 extended images were collected in collaboration with Peking University International Hospital, for a total of 4020 pathological images.
In [7], an implementation on a low-cost embedded device, a Raspberry Pi, was proposed that classifies skin lesion images with dermatologist-level accuracy and works standalone without network connectivity. This application motivates the potential of developing such a low-cost device for the automated detection of breast cancer based on microscopic images.
Methodology
This project involved several stages: image pre-processing, design and implementation of deep neural network models to select the best possible hyperparameters, and deployment of the model on hardware.
Model Development
The breast cancer histopathological image dataset is taken from the Breast Cancer Histopathological Image Classification (BreakHis) database. After image pre-processing is completed, 70% of the images are split off for training, 15% for validation, and 15% for testing. All the pre-processed images are used as input data for training the Convolutional Neural Network (CNN).
By using transfer learning, we take a pre-trained VGG16 model and re-purpose it for solving the breast cancer detection problem. The fully connected layers of the pre-trained model are removed, and the remaining convolutional layers are used as feature extractors. The new fully connected networks are then trained with samples of microscopic images to fine-tune the model for breast cancer detection. After the complete CNN model (convolutional layers with fully connected networks) has been trained, the images from the testing set are applied to the trained model to classify the output. The final output determines whether a breast cancer histopathological image is cancerous (Malignant) or non-cancerous (Benign).
Breast Cancer Histopathological Database (BreakHis)
A total of 7,909 breast cancer histopathological images is taken from the BreakHis database, as shown in Figure 1 and summarized in Table 1. All the images are split into three sets of data: training, validation, and testing.
Image Pre-processing
Image pre-processing is a process to improve the input image by removing unwanted distortions or enhancing image features for further processing. Since the microscopic images have a very detailed and complicated pattern structure, image pre-processing is needed. The first technique used is resizing the images; the purpose of this technique is to reduce the number of pixels and make processing faster. The image pre-processing was done using the OpenCV (Open Source Computer Vision) library, one of the open-source machine learning software libraries specifically aimed at computer vision problems. The images come with different magnifying factors (40X, 100X, 200X, and 400X). All of the images are initially 700 x 460 pixels; they are resized to 200 x 200 pixels to make sure that all the images have the same square dimensions and the same number of pixels. The second technique used is the conversion of RGB to grayscale images. This technique is applied because, in color images, it is more difficult to identify image features such as edges compared to grayscale images. The images are kept in their original PNG (Portable Network Graphics) format. After all the pre-processing steps are done, the images are saved to Google Drive. Then, the images are divided into training, validation, and testing sets for use with the CNN model.
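The two pre-processing steps described above can be sketched with OpenCV as follows; the directory names are placeholders, and the exact script the authors used is not given in the paper:

```python
import os
import cv2  # OpenCV, the library used by the authors for pre-processing

SRC_DIR, DST_DIR = "breakhis_raw", "breakhis_processed"  # placeholder paths
os.makedirs(DST_DIR, exist_ok=True)

for name in os.listdir(SRC_DIR):
    if not name.lower().endswith(".png"):            # BreakHis images are PNG
        continue
    img = cv2.imread(os.path.join(SRC_DIR, name))    # original 700 x 460 colour image
    img = cv2.resize(img, (200, 200))                # unify size and speed up training
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)     # single-channel grayscale
    cv2.imwrite(os.path.join(DST_DIR, name), gray)   # save the processed image
```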
Convolutional neural network (CNN) Development
The model is developed using Python. Python is useful for developing machine learning and deep learning applications because it has extensive libraries such as Keras, TensorFlow, NumPy, and many more. A Convolutional Neural Network (CNN) is a deep artificial neural network that is specially used to recognize and classify images, whereas other neural networks are typically used in other applications such as voice recognition and pattern recognition. This breast cancer detection determines whether an image shows cancer or not. Therefore, there are only two output classes, namely Benign and Malignant. This means that binary classification is used, "0" or "1", where 1 stands for Malignant and 0 for Benign.
The four basic operations needed to build the breast cancer detection neural network model are convolution, non-linear activation, pooling, and classification via fully connected layers. The convolutional process involves three elements: an input matrix, a feature descriptor, and feature maps. An input matrix in this research would normally be a standard image consisting of three channels, the RGB (red, green, and blue) channels. In this preliminary research, grayscale images are used. Grayscale images consist of only one channel, so a single 2D matrix represents an image. The value of each pixel in the matrix ranges from 0 to 255, where 0 indicates black and 255 white. The feature descriptor (also called a kernel) is the element in the middle of the process: it takes the input matrix, filters it, and computes the dot product. Finally, the feature map is produced. The effect of the convolution depends on the operation, such as edge detection or sharpening, and the filter matrix changes based on the process that is needed. The objective of this step is to decrease the size of the image and make processing faster and easier; furthermore, the network will more quickly identify the patterns in a new image. Using many feature maps prevents the loss of image information, because every feature map identifies the location of features in the image. This convolution operation maintains the originality of the image.
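The convolution step described above can be sketched in a few lines of NumPy; the loop-based implementation and the 3x3 edge-detection kernel are illustrative only and are not taken from the paper:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1): slide the kernel
    over the input matrix and take the dot product at every position.
    (Strictly a cross-correlation, as implemented in CNN frameworks.)"""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

image = np.random.rand(200, 200)              # grayscale input matrix
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])        # example feature descriptor
print(convolve2d(image, edge_kernel).shape)   # (198, 198) feature map
```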
ReLU stands for the Rectified Linear Unit activation function, which is most commonly used after every convolution operation; it is a non-linear operation. ReLU aims to increase the non-linearity of the CNN, since images are made of different objects that are not linear to each other. In this project, max pooling is used. Max pooling is a down-sampling method that is used to retain the most important features of the image. It is needed because it enables the convolutional neural network to recognize an object when presented with the image in any form; it allows the system to learn and detect images with different positions, patterns, angles, or textures. This is useful, as it can detect small objects inside the image irrespective of where they are located, such as breast cancer tissues. There are several types of pooling: mean pooling, sum pooling, and max pooling.
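A compact NumPy sketch of max pooling; the 2x2 pool size is an assumption, since the paper does not state the pooling window used:

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Down-sample a feature map by keeping the maximum of each
    non-overlapping 2x2 block (assumes even height and width)."""
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.arange(16).reshape(4, 4)
print(max_pool_2x2(fm))   # [[ 5  7]
                          #  [13 15]]
```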
The classification process is then performed by the fully connected layer, where every neuron in the next layer is connected to every neuron in the previous layer. The objective of this process is to combine the features of the training and test data into a wider variety of attributes that make the convolutional neural network more capable of classifying images; in this project, into the classes cancerous and non-cancerous. The pooling and convolutional layer outputs give high-resolution features of the breast cancer image. Based on the training dataset, the fully connected layer processes these features and categorizes them into cancerous and non-cancerous.
Types of CNN Layers
In our design of the CNN model, there are five types of layers: flatten, dense, LeakyReLU activation, dropout, and softmax.
Flatten Layer
The flatten layer is where the two-dimensional matrix of features is flattened into a vector (see Figure 2). The flatten layer is placed between the convolutional layers and the fully connected layers. After the transformation into a vector, the data is fed into the fully connected CNN classifier.
Dense Layer
Generally, dense layers and fully connected layers are the same: each neuron in the next layer is connected to every single neuron in the previous layer. The dense computation is shown in equation (1):

output = activation(dot(input, kernel) + bias)    (1)

where activation is the element-wise activation function passed as the activation argument, kernel is a weight matrix formed in the layer, and bias is a vector generated by the layer.
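As a quick illustration, equation (1) can be evaluated directly in NumPy; the shapes and the choice of ReLU as the activation below are arbitrary examples, not taken from the paper:

```python
import numpy as np

x = np.random.rand(1, 64)            # input row vector
kernel = np.random.rand(64, 2)       # weight matrix formed in the layer
bias = np.zeros(2)                   # bias vector generated by the layer
relu = lambda z: np.maximum(z, 0.0)  # element-wise activation

output = relu(x @ kernel + bias)     # output = activation(dot(input, kernel) + bias)
print(output.shape)                  # (1, 2)
```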
LeakyReLU Activation Layer
LeakyReLU, as illustrated in Figure 3, is a type of activation layer often used in Keras as an alternative to other activation functions such as ReLU or sigmoid. Leaky ReLU has a small slope for negative values, instead of a slope of zero. For example, leaky ReLU may have y = 0.01x when x < 0. The big advantage of LeakyReLU compared to the normal ReLU is that it fixes the "dying ReLU" problem, because it does not have a zero-slope part; this activation can also speed up the training process.
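A two-line NumPy sketch of the leaky ReLU behaviour described above (the slope 0.01 follows the y = 0.01x example in the text):

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """y = x for x >= 0, y = alpha * x for x < 0 (small non-zero slope)."""
    return np.where(x >= 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))   # [-0.02  0.    3.  ]
```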
Softmax Layer
Softmax is implemented through a neural network layer just before the output layer. The softmax layer must have the same number of nodes as the output layer. Softmax assigns decimal probabilities to each class in a multi-class problem, and those decimal probabilities must add up to 1.0. This research has two classes: Malignant and Benign.
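A minimal NumPy illustration of softmax for the two-class case; the logit values are invented for demonstration:

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into class probabilities that sum to 1.0."""
    z = logits - logits.max()        # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

p = softmax(np.array([2.3, 0.4]))    # two classes, e.g. Malignant and Benign
print(p, p.sum())                    # approx. [0.87 0.13] 1.0
```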
Keras and Transfer Learning
Keras is a high-level neural network API (Application Programming Interface), written in Python and capable of running on top of TensorFlow. It is user-friendly and easy to implement because there are fewer variables to set for the model to run. When using Keras, there are a few steps to consider. First, the training data is defined. Second, the neural network model is defined; in Keras there are two ways to build a model, the sequential and the functional model, and a Keras sequential model is a model constructed as a linear stack of layers. Third, the learning process is configured: for the sequential model, three arguments are used, namely the Adam (adaptive moment estimation) optimizer, the categorical cross-entropy loss function, and the accuracy metric. Lastly, all the data is passed to the training model.
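As a hedged sketch (not the authors' exact architecture, which the paper does not report), a Keras sequential model built from the five layer types listed above and compiled with the stated Adam optimizer, categorical cross-entropy loss, and accuracy metric could look as follows; the layer sizes, dropout rate, and input shape are assumptions:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, LeakyReLU, Dropout

# Hypothetical layer sizes; the paper does not report the exact architecture.
model = Sequential([
    Flatten(input_shape=(200, 200, 1)),   # 2D feature matrix -> vector
    Dense(128),
    LeakyReLU(alpha=0.01),                # small slope for negative values
    Dropout(0.5),                         # assumed dropout rate
    Dense(2, activation="softmax"),       # two classes: Benign / Malignant
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```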
Transfer learning is usually realized through the use of pre-trained models. A pre-trained model is a model that has been trained with a large dataset to solve a problem similar to the one currently being solved. With the Keras API, many pre-trained architectures are available, such as Xception, MobileNet, DenseNet, InceptionV3, VGG16, VGG19, and many more. These pre-trained models can be used instead of building a model from scratch [12]. The pre-trained model then needs to be adapted by fine-tuning.
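A sketch of this transfer-learning recipe with the Keras VGG16 application (anticipating the next section): the convolutional base is kept as a frozen feature extractor and a new fully connected head is trained. The head dimensions are assumptions, and since VGG16 expects three-channel input, grayscale images would have to be stacked to three channels before use:

```python
from tensorflow.keras import Sequential
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout

base = VGG16(weights="imagenet",      # pre-trained on ImageNet
             include_top=False,       # drop the original fully connected layers
             input_shape=(224, 224, 3))
base.trainable = False                # keep convolutional layers as feature extractors

model = Sequential([
    base,
    Flatten(),
    Dense(256, activation="relu"),    # assumed size of the new head
    Dropout(0.5),
    Dense(2, activation="softmax"),   # Benign / Malignant
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```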
Pre-trained Model VGG16
VGG16 is used as the pre-trained model for transfer learning in this project. The model was trained on 14 million images from the ImageNet database covering 1000 classes. The training of VGG16 was performed on an NVIDIA Titan Black GPU and took weeks. As shown in Figure 5, the VGG16 architecture consists of 13 convolutional layers, five max-pooling layers, and three dense layers, 21 layers in total; however, only 16 of them are weight layers. The network has an image input size of 224 x 224 pixels. Transfer learning has an advantage in the training of deep learning models in terms of reducing the required size of the training dataset and the time needed to train the model. Nowadays, transfer learning has been suggested in the research field as one of the solutions to the insufficiency of training data in healthcare.

Figure 5. VGG16 Architecture [13]

Hardware

After the model has been completed and successfully trained and tested in Google Colab, the completed model and testing images are transferred to and deployed on a Raspberry Pi. A Raspberry Pi 3 B+ is used to show the efficiency of the model running on a lightweight processor, as shown in Figure 6.
The main limitation of the Raspberry Pi is that it cannot train a deep neural network, because it does not have enough memory or CPU power to train deep neural networks from scratch [14]. Table 2 shows the specification of the Raspberry Pi 3 B+.
Accuracy Results
The performance of breast cancer detection with and without transfer learning is presented in this section. The models are compared on both the training and validation datasets: the training set is used to train the model, while the validation set is used to evaluate model performance. Ten epochs were tested to assess whether the model is overfitting or underfitting. An epoch defines how many times the algorithm sees the entire data set [15]; every time the algorithm has seen all samples in the dataset, an epoch has finished. Training with too few epochs tends to underfit, while too many epochs lead to overfitting. The number of epochs used to train is therefore 10. Table 3 shows the accuracy and loss at each epoch using the transfer learning approach, and Figure 7 shows the graph of the training and validation accuracy of the model. In this result, the highest training accuracy is 79%, while the validation accuracy is 76%. This means that the training model is expected to perform with 76% accuracy on new data. From epoch 8 to 10, while the training accuracy increases, the validation accuracy slightly decreases. This means that the training model fits the training set better but slightly loses its ability to predict new data, which is the beginning of overfitting. At epoch 10, the training loss is 42% and the validation loss is 54%; the data has started to overfit, because the validation loss is higher than the training loss.

Without Pre-Trained Model (VGG16)

For the model without transfer learning, Figure 9 shows the graph of training and validation accuracy and Figure 10 the graph of training and validation loss. The highest training accuracy is 69%, while the validation accuracy is 67%. This means that the training model is expected to perform with 67% accuracy on new data. From epoch 4 to 10, while the training accuracy increases, the validation accuracy is inconsistent; at epoch 6 it drops drastically and then rises sharply again. This means that the training model does not fit the training set well and loses its ability to predict new data, which is the beginning of overfitting. Figure 10 shows that the training loss keeps decreasing, which means that the model is learning to recognize all the samples in the training set. Based on the graph, the minimum training loss is 55%, while the minimum validation loss is 60%. At epoch 10, the training loss is 55% and the validation loss is 63%, which shows that the data has started to overfit, because the validation loss is higher than the training loss.

Figure 10. Training and Validation Loss Graph without VGG16
After the model has been trained, the images were tested. Figure 11 shows the testing output for the test images of the 40X magnification factor on Google Colab, with 79.07% accuracy.

Figure 11. Testing Accuracy

Table 5 shows the testing accuracy and loss achieved for the tested histopathological images for each magnification factor (40X, 100X, 200X, and 400X). After an image has been processed by the model, the output displays ID 0 with the label Benign and ID 1 with the label Malignant, together with the accuracy percentage for the detected image. In the example output, the image was detected as Malignant with an accuracy of 95.41%, and it was also recognized as Benign with a low accuracy percentage of 4.59%; the final decision of that output is ID:1 with the label Malignant, based on the highest accuracy. Figures 13 and 14 show the output each time a single image was tested. The model has been deployed on a Raspberry Pi to show that it can run on a lightweight processor, and images with different magnifying factors can be chosen for testing. Figure 13 shows the output of breast cancer detection where the image is detected as Benign, with an accuracy percentage of 67.34% for Benign and 32.66% for Malignant; the highest accuracy gives the final output, which is recognized as Benign. A graphical user interface (GUI) is used to display the result on the Raspberry Pi. This GUI is created using the appJar platform and developed in Python; it combines the backend process with the CNN model that has been tested. appJar can run on Linux and Windows. Figure 15 shows that the verified image has been detected as Benign.
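A minimal sketch of the per-image inference flow described above; the model file name and test image are placeholders, while the ID-to-label mapping (0 = Benign, 1 = Malignant) follows the paper. On a Raspberry Pi the same script would run unchanged, provided TensorFlow is installed:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("breast_cnn.h5")            # placeholder trained-model file
labels = {0: "Benign", 1: "Malignant"}         # ID mapping used in the paper

img = cv2.imread("sample_400X.png")            # placeholder test image
img = cv2.cvtColor(cv2.resize(img, (200, 200)), cv2.COLOR_BGR2GRAY)
x = img.astype("float32")[None, :, :, None] / 255.0   # shape (1, 200, 200, 1)

probs = model.predict(x)[0]                    # softmax probabilities
for idx, p in enumerate(probs):
    print(f"ID:{idx} Label:{labels[idx]} {100 * p:.2f}%")
print("Final decision:", labels[int(np.argmax(probs))])
```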
Conclusion
Breast cancer detection using histopathological images on an IoT board is proposed using a deep learning method. The system detects whether a patient's histopathological image is Malignant or Benign. The histopathological images are taken from the BreakHis database with different criteria, such as magnifying factors (40X, 100X, 200X, 400X). Two image pre-processing methods were applied: RGB-to-grayscale conversion and image resizing. The type of neural network model used is a Convolutional Neural Network (CNN), which was trained using a GPU.
The model developed using transfer learning (VGG16) shows 76% accuracy, compared to 67.51% for the model developed without VGG16. We also found that the 200X magnifying factor of the microscopic images performs better in terms of testing accuracy and loss compared to 40X, 100X, and 400X. The model was successfully deployed on a Raspberry Pi to show that it can run on a lightweight processor and that it is a portable device. This breast cancer detection system is recommended for implementation at hospitals or clinics, since it is a low-cost system and easy to access.
In conclusion, this research outcome serves as a preliminary step for our research on using deep learning for breast cancer detection based on histopathology images and applying it on IoT boards and devices. In future work, we plan to improve the model through better network design and ensembles of deep learning algorithms incorporating transfer learning approaches. | 2021-05-10T00:03:38.034Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "52aeb660481c112b2b08c27e5a44cbf955b397c2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1755/1/012026",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6fd354c523275bedebe78364e9395d9b71f6fa31",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
81062499 | pes2o/s2orc | v3-fos-license | Focus on children murdered by parents in Italy : A sad reality
With a documented history of over a century, child murders perpetrated by the victims' own parents are an interesting and dramatic phenomenon in the Italian territory. There are three forms of child homicide: neonaticide, infanticide and filicide. Thanks to several legal reports and studies, it is possible to draw the profile of the typical murderer: usually a young, Northern Italian woman, unemployed, in a conflicting relationship and suffering from psychiatric disorders. In most cases, the crime takes place at home. No particular method for committing the murder is preferred, and the death of the child can be due to different causes. Precautionary measures should be taken: parents should never be left alone facing health or psychiatric problems, families should be helped and supported during difficult times, and women should be well-informed and aware of their rights.
INTRODUCTION
The murder of children by their parents can be divided into three forms: neonaticide, infanticide and filicide. The term neonaticide refers to cases where the homicide occurs immediately after birth.
Infanticide (from the Latin infantis caedium, which means the killing of those who cannot yet talk) is, on the other hand, the killing of a child within the first year of life. Filicide occurs when the victim is over one year of age. The legal definition currently in force in the Italian Criminal Code is provided by article 578, entitled "Infanticide in Conditions of Material and Moral Abandonment", which states: "The mother causing the death of her newborn immediately after childbirth, or of the fetus during childbirth, when the fact is determined by conditions of material and moral abandonment related to childbirth, is punished with imprisonment from four to twelve years". (1)
THE SITUATION IN ITALY OVER THE LAST CENTURIES
From 1880 to 1883, 30 cases of child murder occurred annually in Italy. From 1906 to 1911, there were 47 cases each year. From 1950 to 1959 there were 75 cases per year. In the following ten years, there were 54 child murders per year. There was a rapid decline after 1978; this evidence could be linked to the law on voluntary termination of pregnancy (2) and, later, to the law on anonymous childbirth, which allows the mother not to recognize the child and to leave it in the hospital to ensure its care and legal protection. (4) In contrast, there has been a sharp increase in the number of filicides in Italy over the past 15 years, with 379 cases in the whole period and a peak in 2014, when 39 filicides took place, one every 10 days, 77% more than in 2013. Filicides of under-14s have increased, from 9 cases in 2013 to 24 in 2014 (+166.7%). 61.5% of filicides were committed by fathers, mostly of older children, and 38.5% by mothers, especially of children under 14 years of age.
THE PROFILES OF THE MURDERER AND THE VICTIM AND OTHER DATA REGARDING THE CRIME
The outline of the murderous mother is varied. A study conducted on more than 50 reports carried out from 1967 to 2003 on the entire Italian territory asserts that in most cases the parents' age was from 26 to 32 years. 56% of them were born in the North of Italy and 30% in the South of Italy; 42% had dropped out of middle school and 25% of elementary school, while 19% had completed their secondary education and 3% had graduated. 61% were married, 14% single, 15% lived with their partners, and 9% were separated. They had a conflicting relationship with their partners in 33% of cases, a good relationship in 28%, and an absent husband in 10%. 58% were housewives, 8% were employed in factories or unemployed, 5% were students or pensioners, and 3% were secretaries. In 62% of cases they came from middle class families and in 28% from middle-low class families. 52% had only one child, 33% had two children, and 15% had three or more children. (25) In 74% of cases, at the time of the offense, the perpetrators already suffered from psychiatric disorders and received care at local health services (55% depression, 11% psychosis, 8% dissociative syndrome, etc.), while 29% had no disorder. In 69% of cases there were previous warning incidents: 35% had shown clear signs of distress, and 25% had been admitted into psychiatric hospitals. A quarter of the women with psychiatric problems had tried committing suicide at least once, and 5% had already tried to kill the future victim. With regard to the motive, the first cause is mental illness (61%), followed by Medea syndrome (14%). In the field of mental illness, we find psychotic disorder (49%), personality disorders (17%), anxiety disorders (25%), organic disorders (7%), and mood disorders (1%). (2,25) At the time of the offense, the culprit is unable to understand and take action in 68% of cases; in 28% this ability is greatly diminished, while 14% of murderous parents are perfectly alert and conscious of what they are doing. It is important to underline this last statement, because in these cases, that is, when the crime takes place in the absence of an impairment of the ability to understand and take action, the law considers imprisonment. (2) After the murder, the convicted person commits suicide (23%), immediately confesses the crime (21%), or conceals the corpse (10%). (2,25) After a finding of mental illness, inability to understand and take action, and social danger, hospitalization in a judicial psychiatric hospital is considered for 56% of cases; in 13% there is detention in prison; in 6% of cases, community or psychiatric hospitalization occurs. (2,25) The victims are males in 53% of cases and females in 47%; they are younger than one month of age in 16% of cases, between one month and one year in 20%, and between 2 and 6 years in 36%. In 89% of cases the victims did not exhibit any physical or mental illness. (2,25) The type of crime involved in filicide can differ: the murder or the attempted murder of just one child occurs in 61% and 24% of cases, respectively. In 6% of cases there are more victims. The crime takes place at home more often than outdoors (85% vs 11%). At home, homicide occurs most frequently in the bathroom (64%), then in the bedroom (20%), and rarely in the dining room (13%). No particular way to commit the murder is preferred: homicide is committed by drowning (19%), suffocation (18%), puncture and cutting (15%), defenestration (15%), strangulation (10%) or using a firearm (4%). (2)
THE CAUSES OF DEATH
The death of the fetus or of the infant may occur naturally, in relation to non-criminal causes such as prematurity, congenital disease, and umbilical haemorrhage. During delivery, infanticide is not so frequent, but it is possible, and it can be due to cranial contusion, perforation of the fontanelles or airway obstruction. (26) The main causes of death of a newborn child at the hands of the parents are:
■ skull fracture, caused, for instance, by blows against a wall or ceiling;
■ suffocation with hands, with pillows, with excessively tight hugs, by locking in crates or trunks and, rarely, burying the child while he/she is still alive;
■ strangulation using hands or a rope, or even the umbilical cord of the infant;
■ drowning;
■ wounds: generally caused by cutting tools aimed at mutilation to facilitate the concealment of the remains;
■ burns: fire is frequently used to hide the corpse;
■ poisoning by sponges soaked in toxic substances;
■ lack of the care needed to keep the baby alive (e.g. food deprivation).
The parent can cause the death of two or more children (enlarged homicide) by gas asphyxiation, stabbing, drowning, or shooting with firearms. (27)
CLASSIFICATIONS
In 1969, Resnick proposed an interesting classification of filicide. (28) As a result of a study carried out from 1951 to 1967, he highlighted five categories of filicide and underlined that the minors at highest risk are those up to six months of age. The five categories identified by the author are the following:
1. "Altruistic filicide" is when the mother kills her child with the intention of saving him or her from a pre-existing illness and then commits suicide ("extended suicide").
2. "Highly psychotic filicide" occurs when a parent kills his or her child during imperative command hallucinations.
3. "Unwanted child filicide" occurs when the child is seen as the result of an extramarital relationship, or because the mother is immature and adolescent or close to menopause. Suicide attempts are uncommon in this case.
4. "Accidental filicide" occurs when the mother causes the child's death in an impulsive gesture due to the frequent crying and screaming of the baby, even though she is generally not prone to violence. These mothers are often affected by personality disorders, irritability, and impulsive behaviour; they often suffered abuse in early life, and the husband is also often disinterested in the problems of his wife.
5. Lastly, there is "filicide due to revenge on the spouse".
Nivoli (29) presents another classification:
1. Filicide caused by passive and negligent mothers occurs when the mother, especially if she is young, does not adequately care for the child's needs (nourishment, clothing suitable for the temperature, protection, and medical care). These mothers see their child as a threat to their own well-being, or as somewhat intrusive. The death of the child is caused by passive and omissive behavior.
2. Mothers who kill their children because of their frustrations. They are mothers who believe that the child is the cause of a ruinous existence. They perceive that the child has "re-shaped" their body through pregnancy, conditioned them to live in an unpleasant environment or with a companion they do not love or do not live happily with, or forced them to spend the whole day taking care of his or her needs or whims.
3. Mothers who deny the pregnancy and consider the neonate as faeces. These are mothers who deny, in a hysterical way, their pregnancy, dressing in such a way as to conceal that they are pregnant, without requesting medical treatment during gestation or at birth, which is then carried out in solitude. In the immediacy of childbirth, they kill or abandon their baby (in dumps, public toilets, etc.).
4. Mothers who misplace the desire to kill their own mothers onto their child.
There is therefore an introjection of the desire to kill their own bad mothers, and only secondarily a shift of this aggression towards their child. Thanks to the observation of more than 500 psychobiographies, (30) infanticide mothers can be subdivided according to motive and/or psychopathology into 20 categories, divided between women able to understand and take action and those who have a totally or partially altered ability to understand and take action. When the filicide is caused by women capable of understanding and taking action, the motives are: life stressor events, pietas (altruistic murder), the mother's immaturity, the fact that the child is hyperactive, the fact that the child is perceived as the result of a sinful act, Medea syndrome, personality disorders (addicted, narcissistic or histrionic), the fact that the child is unwanted, depression in a narcissistic subject, and behavioral disorders due to alcohol and drug abuse. (30-34) If pathological causes with partial (greatly diminished) or total impairment of the ability to understand and take action are present, the motives are: postpartum psychosis, a hysterical background, major depression, schizophrenia, twilight state, psychotic disorder due to a general medical condition, epilepsy, oligophrenia, multiple sclerosis, and multiple personality. (30)
CONCLUSION
Even though filicide occurs in different scenarios, for different reasons, both by the mother and by the father, a profile of the perpetrator of infanticide can be traced. This is often a young woman aged between 18 and 32 years, of Italian nationality and with an average level of schooling; married but in a problematic and/or conflicting relationship with her partner; a housewife, often not as a result of her own inclinations but generally to please her husband/partner. She executes the crime in her home, especially in the bathroom and in the bedroom, usually on children under the age of 7, and she uses "immediate" modes to commit the crime, such as drowning, suffocation, or defenestration. The main motive is mental illness; in fact, after the crime, the perpetrator is often found in confusion at the crime scene, confesses or attempts suicide, and has often shown signs of psychological distress in the past (attempted suicide, psychiatric hospitalization, and in some cases attempted murder of the future victim). (25) Measures to prevent the murder of these children should be undertaken. For instance, 40 "cradles of life" are spread over the Italian territory, a legacy of the old "wheel of life" in convents and monasteries where infants were once left. They generally remain empty. To limit the number of infanticides and neonaticides, women should be better informed of this service, anonymous and free, but potentially life-saving. Women should also be aware of their rights, such as the right to give birth anonymously; however, there are still very few women making use of such a possibility. (35) Precautionary measures should also be taken during separation and divorce. Couples arriving at court should have free access to psychotherapy or family mediation paths to really understand what the psychological and legal consequences of divorce may be. Believing that a legal suit of separation is only a matter of law is quite irresponsible, considering the level of hatred that may arise in a couple that is disintegrating. We need a procedural reform of family law and, above all, an awareness campaign to make sure that children's rights have priority over all others. (3,36,37) Since in most cases women are the perpetrators of infanticide, and most of them suffer from psychiatric illnesses, experts suggest that "we need to intercept mothers" before the irreparable happens. Not only social workers, but everyone who is, for any reason, in close contact with mothers and fathers should be able to recognise any warning signs and give support. | 2018-11-03T06:04:22.643Z | 2018-05-03T00:00:00.000 | {
"year": 2018,
"sha1": "dd06d96a9226be8aeed64881a2d566ecf91d9cc7",
"oa_license": "CCBY",
"oa_url": "http://www.signavitae.com/wp-content/uploads/2018/05/SIGNA-VITAE-2018-141-49-52.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "dd06d96a9226be8aeed64881a2d566ecf91d9cc7",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
44397740 | pes2o/s2orc | v3-fos-license | Interface localisation-delocalisation transition in a symmetric polymer blend: a finite-size scaling Monte Carlo study
Using extensive Monte Carlo simulations we study the phase diagram of a symmetric binary (AB) polymer blend confined into a thin film as a function of the film thickness D. The monomer-wall interactions are short ranged and antisymmetric, i.e, the left wall attracts the A-component of the mixture with the same strength as the right wall the B-component, and give rise to a first order wetting transition in a semi-infinite geometry. The phase diagram and the crossover between different critical behaviors is explored. For large film thicknesses we find a first order interface localisation/delocalisation transition and the phase diagram comprises two critical points, which are the finite film width analogies of the prewetting critical point. Using finite size scaling techniques we locate these critical points and present evidence of 2D Ising critical behavior. When we reduce the film width the two critical points approach the symmetry axis $\phi=1/2$ of the phase diagram and for $D \approx 2 R_g$ we encounter a tricritical point. For even smaller film thickness the interface localisation/delocalisation transition is second order and we find a single critical point at $\phi=1/2$. Measuring the probability distribution of the interface position we determine the effective interaction between the wall and the interface. This effective interface potential depends on the lateral system size even away from the critical points. Its system size dependence stems from the large but finite correlation length of capillary waves. This finding gives direct evidence for a renormalization of the interface potential by capillary waves in the framework of a microscopic model.
Interface localisation-delocalisation transition in a symmetric polymer blend: a finite-size scaling Monte Carlo study
M. Müller and K. Binder
Institut für Physik, WA 331, Johannes Gutenberg Universität, D-55099 Mainz, Germany
(August 21, 2000; submitted to Phys. Rev. E)
I. INTRODUCTION.
Confining a binary mixture one can profoundly alter its miscibility behavior. [1][2][3][4][5] If a mixture is confined into a quasi one-dimensional (e.g., cylindrical) pore no true phase transition occurs, unlike the prediction of the mean field theory. Fluctuations destroy long-range order and only a pronounced maximum of the susceptibility remains in the vicinity of the unmixing transition in the bulk. In a two-dimensional system (e.g., a slit-like pore or a film) with identical surfaces a true phase transition occurs (capillary condensation) and the shift of the critical point away from its bulk value has been much investigated. [6] The confinement changes the universality class of the transition from 3D Ising critical behavior in the bulk to 2D Ising critical behavior in the film. The latter manifests itself in much flatter binodals in a film close to the unmixing transition than in the bulk. No such change of the critical exponents is observed in mean field theory.
The phase behavior of symmetric mixtures in a thin film with antisymmetric surface interactions has attracted abiding interest recently. [7][8][9][10][11][12] The right surface attracts one species with exactly the same strength as the opposite surface attracts the other species. In contrast to capillary condensation, the phase transition does not occur close to the unmixing transition in the bulk, but rather in the vicinity of the wetting transition. Close to the unmixing transition in the bulk, enrichment layers at the surfaces are gradually built up and an interface is stabilized in the middle of the film. In this "soft-mode" phase the system is laterally homogeneous; no spontaneous breaking of the symmetry occurs. If the wetting transition of the semi-infinite system is of second order, one encounters a second order localisation-delocalisation transition slightly below the wetting transition temperature. The system phase separates laterally into regions where the interface is located close to one surface (localized state). The order parameter, i.e., the distance between the interface and the center of the film, grows continuously. This prediction of phenomenological theories has been corroborated by detailed simulation studies [10,13,14] and it is also in accord with experimental findings. [15,16] If the wetting transition is of first order and the thickness of the film not too small, mean field calculations [17,18] predict the occurrence of two critical points which correspond to the prewetting critical point of the semi-infinite system. Unlike the wetting transition, [6] the prewetting transition can produce a critical (singular) behavior in a thin film, because only the lateral correlation length diverges at the prewetting critical point; the thickness of the enrichment layers at the surfaces remains finite. The mean field treatment invokes approximations and it cannot be expected to capture the subtle interplay between 2D Ising fluctuations at the critical points, "bulk"-like composition fluctuations, and interface fluctuations typical for the wetting transition. [13] Consequently, a detailed test of the mean field predictions via Monte Carlo simulations is certainly warranted and elucidates the role of fluctuations. Using Monte Carlo simulations of the Ising model, Ferrenberg et al. studied the interface localisation-delocalisation transition also for the case that the wetting transition of the semi-infinite system is of first order. [19] The simulation study was centered on the dependence on the film thickness, which is a convenient parameter to be varied in experiments. However, the study was restricted to the coexistence between strictly symmetric phases and many questions remained open.
The general features of the phase behavior are shared by all binary mixtures. Here, we present large scale Monte Carlo simulations aimed at investigating the phase behavior of a symmetric binary polymer blend confined between antisymmetric walls. Computationally, simulations of a polymer blend [20] are much more demanding than studies of simple fluids (e.g., the Ising model), but recent mean field calculations made detailed predictions for the phase behavior of confined polymer mixtures [17,18] and serve as guidance for choosing the model parameters in the simulations. Simulating polymer blends, we can, at least in principle, control the importance of fluctuations by varying the degree of interdigitation, i.e., the chain length. [18,20] The mean field theory is expected to become accurate in the limit of infinite interdigitation. In a binary polymer blend the wetting transition occurs at much lower temperatures than the critical temperature of the unmixing transition in the bulk. [21] Hence, "bulk"-like composition fluctuations are not important in the vicinity of the wetting transition temperature and we can isolate the effect of interface fluctuations. Moreover, these systems are also suitable candidates for examining the phase behavior experimentally. Indeed, one of the first studies of the "soft-mode" phase employed a binary polymer blend. [15] Our paper is arranged as follows: First, we present a phenomenological description of the phase behavior in a film with antisymmetric short range surface interactions. Using a standard model for the effective interface potential we calculate the phase behavior in mean field approximation, discuss the regime of validity of the mean field approach, and consider the crossover between the different critical behaviors. Second, we briefly describe our coarse grained lattice model for a binary polymer mixture. Then, we present our Monte Carlo results: We obtain the phase diagram for film thicknesses ranging from D = 1.1R_g to 7R_g, where R_g denotes the radius of gyration of the polymer chains, investigate the critical behavior, and present evidence that interface fluctuations renormalize the effective interface potential. We close with a comparison of the phase diagram to the behavior of the bulk and of films with symmetric boundary conditions.
II. BACKGROUND.
Rather than describing the configuration of the system by the detailed composition profile across the film, much qualitative insight into the thermodynamics can be deduced from the effective interface potential. Below the bulk critical temperature, enrichment layers of the preferred components form at the surfaces and stabilize an AB interface which runs parallel to the walls. The effective interface potential g_wall(l) describes the free energy per unit area as a function of the distance l between this AB interface and a wall. In the case of short range forces between the monomers and the walls, the interface profile becomes distorted in the vicinity of the walls, and this gives rise to an interaction which decays exponentially as a function of the distance l between the AB interface and a single wall:

g_wall(l) = a exp(-λl) - b exp(-2λl) + c exp(-3λl).    (1)

This expression retains only the lowest powers of exp(-λl) which are necessary to bring about a first order wetting transition of the semi-infinite system. The coefficient a is explicitly temperature dependent, while the temperature dependence of b and c is neglected. c > 0 is assumed throughout the discussion. All coefficients are of the same magnitude as the interfacial tension σ between the coexisting bulk phases. For polymer blends this quantity scales with chain length N and monomer number density ρ like σ ∼ √N̄/R_g², where N̄ = (ρR_g³/N)² measures the degree of interdigitation. 1/λ denotes the spatial range of the interactions and is of the order of R_g. b < 0 gives rise to a second order wetting transition at a = 0, and b = 0 to a tricritical one. For b > 0 one encounters a first order wetting transition at a_wet = b²/4c, where the thickness of the enrichment layer jumps discontinuously from l⁻ = (1/λ) ln(2c/b) to a macroscopic value. [18] The wetting spinodals are located at a = 0 (the wet phase remains metastable for all a > 0) and at a = b²/3c (the non-wet phase remains metastable for a < b²/3c). The concomitant prewetting line terminates at the prewetting critical point a_pwc = 16a_wet/9 and l_pwc = (1/λ) ln(9c/2b).
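The algebra behind these statements can be checked numerically. The following minimal sketch (Python, written for this text; parameter values are arbitrary illustrations) minimizes the potential of Eq. (1) on a grid and confirms that at a = b²/4c the bound minimum at l⁻ = ln(2c/b)/λ is degenerate with the wet state g(l → ∞) = 0.

```python
# Grid-based check of the first order wetting scenario of Eq. (1):
# g_wall(l) = a exp(-lam*l) - b exp(-2*lam*l) + c exp(-3*lam*l)
import numpy as np

lam, b, c = 1.0, 1.0, 1.0          # arbitrary units; b > 0 selects first order wetting
a_wet = b**2 / (4.0 * c)           # predicted location of the wetting transition

def g_wall(l, a):
    x = np.exp(-lam * l)
    return a * x - b * x**2 + c * x**3

ls = np.linspace(0.05, 20.0, 200001)
g = g_wall(ls, a_wet)
l_star = ls[np.argmin(g)]          # location of the bound minimum

print(l_star, np.log(2*c/b)/lam)   # both ~ 0.6931: the predicted l^-
print(g.min())                     # ~ 0: degenerate with g(l -> infinity) = 0
```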
We approximate the effective interface potential in a film to be the linear superposition of the interactions originating at each wall and analyze the resulting behavior. SCF calculations [18] lend support to this approximation. The interface potential in a film of thickness D then takes the form

g(l) = g_wall(l) + g_wall(D - l).    (2)

In general, the phase boundaries depend on the variables a/c, b/c, and λD. If we proceeded as in Ref. [13] by expanding the cosh in powers of [l - D/2], the further analysis would be rather cumbersome. A more transparent procedure employs a rescaled variable m̃ (Eq. (3)) to rewrite the interface potential in the form of Eq. (4), a Landau-type expansion in m̃ with coefficients r and t. The qualitative form of the effective interface potential has been inferred previously on the basis of a Landau expansion; [18] here it is derived explicitly from the standard form (1) of the interface potential for a first order wetting transition in the semi-infinite system. Negative values of r correspond to second order localization-delocalization transitions, r = 0 to a tricritical one, and positive values of r give rise to first order transitions. t measures the distance from the tricritical transition temperature (for r ≤ 0) and from the triple temperature in the case of a first order interface localization-delocalization transition (cf. below). For r ≤ 0 the phase boundaries depend only on the two parameter combinations r and t. In these variables the limit λD → ∞ is particularly transparent: cr → b/2, ct → a - a_wet and m̃ → exp(-λl).
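The superposition approximation of Eq. (2) is easy to explore numerically. The sketch below (illustrative parameters only) builds the film potential from the single-wall potential above and counts its local minima; for b > 0 and a slightly above a_wet it exhibits the three-valley structure (interface bound to either wall, or delocalized at l = D/2) discussed in the following subsections.

```python
# Three-valley structure of the superposed film potential, Eq. (2)
import numpy as np

lam, b, c = 1.0, 1.0, 1.0

def g_wall(l, a):
    x = np.exp(-lam * l)
    return a * x - b * x**2 + c * x**3

def g_film(l, a, D):
    return g_wall(l, a) + g_wall(D - l, a)

D, a = 8.0, 0.26                       # slightly above a_wet = 0.25
ls = np.linspace(0.2, D - 0.2, 4001)
g = g_film(ls, a, D)
# interior local minima: downward-to-upward sign change of the discrete slope
idx = np.flatnonzero(np.diff(np.sign(np.diff(g))) > 0) + 1
print(ls[idx])                         # three minima: near each wall and at l = D/2
```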
A. r ≤ 0: second order and tricritical interface localization-delocalization transition

A second order interface localization-delocalization transition (i.e., r < 0) will occur either if the wetting transition is second order (i.e., b < 0) or if the wetting transition is first order but the film thickness D is small enough to comply with 0 < b < 6c exp(-λD/2). This behavior is in accord with previous findings [9,19,18] and we shall corroborate it further by our present simulations. Since the coexisting phases are symmetric with respect to exchanging l and D - l, phase coexistence occurs at ∆µ_coex ≡ 0 or ∂g/∂l = (∂g/∂m̃)(dm̃/dl) = 0. From this condition we obtain the binodals in terms of m̃. The critical temperature is given by t_c = -r², and ∆t = t_c - t denotes the distance from the critical temperature at fixed r. For r < 0 the binodals at the critical point open with the mean field exponent β_2DMF = 1/2. This corresponds to mean field critical behavior (2DMF) of a system with a single scalar order parameter, i.e., m = [l/D - 1/2]. At larger distances the order parameter grows like m ∼ (∆t)^β_2DTMF with β_2DTMF = 1/4. The latter exponent is characteristic of the mean field behavior at a tricritical point (2DTMF). The crossover between mean field critical and tricritical behavior occurs around |∆t_cross| ∼ r². As we decrease the magnitude of r → 0 we approach the tricritical point and the regime where mean field critical behavior is observable shrinks. At the tricritical point only the tricritical regime (2DTMF) exists, i.e., ∆t_cross = 0, and the binodals take the particularly simple form m̃ = (∆t/3)^{1/4}. The crossover in the binodals for r = -0.4 is illustrated in the inset of Fig. 1(a). Of course, the above considerations neglect fluctuations, and the behavior close to the transition is governed by Ising critical exponents and two-dimensional tricritical exponents, respectively. The crossover between Ising critical behavior (2DI) and tricritical behavior (2DT) occurs at |∆t_cross| ∼ r^{1/φ_cross}, where the crossover critical exponent φ_cross is not 1/2 (as for the crossover between the mean field regimes) but rather 4/9. [22-24] Following Ref. [13] we calculate the critical amplitudes and estimate the location of the crossover between mean field critical behavior and the region where fluctuations dominate the qualitative behavior. For small values of the order parameter m = [l/D - 1/2] we approximate m ≈ m̃ exp(λD/4)/(λD) and obtain the mean field critical amplitudes. The susceptibility of the order parameter above the critical temperature is related to the inverse curvature of the interface potential in the middle of the film, 1/(χD²) = ∂²g/∂l²|_{l=D/2}. Using Eq. (4) we obtain the susceptibility for critical and tricritical mean field transitions. The ratio Ĉ⁺_MF/Ĉ⁻_MF of the critical amplitudes above and below the transition is universal and takes the mean field value 2 at the critical point and 4 at the tricritical point. At the transition the parallel correlation length ξ∥ diverges. This lateral length is associated with fluctuations of the local interface position, i.e., capillary waves. In mean field approximation the corresponding amplitude ratio is ξ⁺_MF/ξ⁻_MF = √2 and 2, respectively. Knowing the critical amplitudes we can estimate the importance of fluctuations via the Ginzburg criterion: [25] as is well known, mean field theory is self-consistent if the fluctuations of the order parameter in a volume of linear dimension ξ are small in comparison to the mean value of the order parameter.
For our quasi-two-dimensional system (d = 2) we obtain the Ginzburg numbers of Eq. (9): for a second order interface localization-delocalization transition fluctuations dominate for ∆t ≪ Gi_2DI ∼ |r| exp(-λD/2)/√N̄, in accord with Ref. [13], while upon approaching the tricritical point we obtain ∆t ≪ Gi_2DT ∼ exp(-λD)/N̄. This result is as expected. For bulk (d = 3) tricritical phenomena Landau theory is marginally correct.
Combining the above results we find the following behavior upon approaching the critical temperature. Far away from the tricritical point, i.e., r ≫ exp(-λD/2)/√N̄, we find mean field tricritical behavior (2DTMF) for ∆t ≫ r², mean field critical behavior (2DMF) for Gi_2DI ≪ ∆t ≪ r², and finally two-dimensional Ising critical behavior (2DI) for ∆t ≪ Gi_2DI ∼ |r| exp(-λD/2)/√N̄. Closer to the tricritical point, i.e., r ≪ exp(-λD/2)/√N̄, we find mean field tricritical behavior (2DTMF) for ∆t ≫ Gi_2DT ∼ exp(-λD)/N̄, two-dimensional tricritical behavior (2DT) for C r^{1/φ_cross} ≪ ∆t ≪ Gi_2DT, and Ising critical behavior (2DI) for ∆t ≪ C r^{1/φ_cross}. The prefactor C must be chosen such that all crossover lines (2DI ↔ 2DT, 2DT ↔ 2DTMF, 2DTMF ↔ 2DMF, and 2DMF ↔ 2DI) intersect in a common point. This yields C = (N̄ exp(λD))^{-1+1/2φ_cross}. Of course, the term "crossover line" is not meant as a sharp division between different behaviors, but should rather be understood as the center of a smooth crossover region. Likewise, the above constant C may involve a factor of order unity which has been suppressed for simplicity. The two different sequences can be clearly distinguished in the Monte Carlo simulations, because the probability distribution of the order parameter exhibits a three peak structure [36] only close to the tricritical point (2DT). We shall use this property to distinguish between the two different sequences in our MC simulations. The anticipated behavior is summarized in Fig. 1(a).
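To make the crossover map concrete, the following sketch encodes the two sequences of regimes as a decision rule. The boundaries are the ones quoted above with all order-unity prefactors set to 1; this is an illustration of the bookkeeping, not a quantitative tool.

```python
# Regime classifier for the crossover scenario of Sec. II A (prefactors ~ 1 dropped)
import numpy as np

phi_cross, nu_tri = 4.0/9.0, 5.0/9.0   # crossover and 2DT correlation length exponents

def regime(r, dt, lamD, Nbar):
    """Return the expected critical regime at reduced distance dt = |t_c - t|."""
    r = abs(r)
    gi_2di = r * np.exp(-lamD/2) / np.sqrt(Nbar)      # 2DMF <-> 2DI Ginzburg number
    gi_2dt = np.exp(-lamD) / Nbar                     # 2DTMF <-> 2DT Ginzburg number
    C = (Nbar * np.exp(lamD))**(-1.0 + 1.0/(2*phi_cross))
    if r > np.exp(-lamD/2) / np.sqrt(Nbar):           # far from the tricritical point
        if dt > r**2:        return "2DTMF"
        if dt > gi_2di:      return "2DMF"
        return "2DI"
    else:                                             # close to the tricritical point
        if dt > gi_2dt:                  return "2DTMF"
        if dt > C * r**(1.0/phi_cross):  return "2DT"
        return "2DI"

print(regime(r=-0.4, dt=1e-3, lamD=8.0, Nbar=100.0))  # lands in the 2DMF window
```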
In the Monte Carlo simulation this rich crossover scenario is further complicated by finite size rounding. The Monte Carlo results are subject to pronounced finite size effects whenever the correlation length becomes of the order of the lateral system size. In the mean field regime the correlation length scales like ξ∥ ∼ R_g exp(λD/4) ∆t^{-1/2}. Knowing the Ginzburg number for the crossover from 2DMF to 2DI behavior, we estimate the correlation length in the Ising critical regime by matching at the crossover, [13] where we have assumed that the scaling function f̃ assumes a power law behavior for small and large arguments and we have used the value ν_2DI = 1 appropriate for the divergence of the correlation length in the 2DI regime. Similarly, we determine the correlation length in the 2DT regime; ν_tri = 5/9 denotes the exponent of the correlation length in the 2DT universality class. [22-24] The correlation lengths at the various crossovers are compiled in Table I. The largest correlation length occurs at the crossover from 2DT to 2DI behavior,

ξ_2DT↔2DI ∼ R_g exp(λD(3/4 - ν_tri/2φ_cross)) N̄^{1/2-ν_tri/2φ_cross} |r|^{-ν_tri/φ_cross}.    (12)

In order to observe the true Ising critical behavior for negative values of r, the system size L has to exceed this correlation length. In the vicinity of the tricritical point (i.e., for small negative values of r) this requirement is very difficult to meet in computer simulations.
B. r > 0: first order interface localization-delocalization transition

For positive values of r the interface potential exhibits a three-valley structure. The three minima at m̃ = ±√r and m̃ = 0 have equal free energy at t = 0. This corresponds to the triple point. At lower temperatures an A-rich phase coexists with a B-rich phase, and since the two phases are symmetric the coexistence occurs at ∆µ_coex = 0; the binodals below the triple point follow directly from minimizing g at ∆µ_coex = 0. Above the triple temperature t > 0 there are two two-phase coexistence regions located symmetrically around m̃ = 0. These coexistence regions terminate in two critical points. Since the coexisting phases correspond to a thick and a thin enrichment layer of the preferred component at each wall, there is no symmetry between the coexisting phases, and the exchange potential ∆µ_coex at coexistence differs from zero. Unfortunately, the phase boundaries for t > 0 and r > 0 depend not only on r and t but also on λD explicitly, and we have determined them numerically. The dependence of the critical temperature t_c on r for several values of λD is presented in Fig. 1(b). The coexistence curves for b/c = 4.44 and various values of λD are presented in the inset of Fig. 1(b). As the film thickness is decreased, the critical temperature decreases and the critical points move closer to the symmetry axis of the phase diagram. They are determined by the condition that the second and third derivatives of the interface potential vanish simultaneously, ∂²g/∂l² = ∂³g/∂l³ = 0. In two limiting cases a simple behavior emerges: (i) If |λ(l - D/2)| ≪ 1 we can replace the derivatives with respect to l by derivatives with respect to m̃ and obtain [18] t_c = 7r²/5 and m̃_c = ±√(2r/5). This approximation holds for r ≪ exp(-λD/2). Expanding g in powers of δm̃ = m̃ - m̃_c (omitting an irrelevant term linear in δm̃) allows us to calculate the binodals in the vicinity of the critical points, the susceptibility, and the parallel correlation length. The presence of a 5th order term in δm̃ in this expansion (Eq. (15)) is a manifestation of the fact that the phase boundaries of the prewetting transitions are not symmetric around m̃_c. This lack of symmetry is also evident from the numerical results in the inset of Fig. 1(b). The critical amplitudes scale in the same way with r, N̄, and λD as for r < 0. In particular, we find for the crossover between 2DMF behavior and 2DI behavior Gi_2DI ∼ |r| exp(-λD/2)/√N̄. (ii) In the limit of large film thickness λD → ∞, the critical point tends towards the prewetting critical point at t_c = t_pwc = 7r²/9. In this limit confinement effects are negligible and the coexistence curves in the vicinity of the critical points correspond to the prewetting lines at the respective surfaces. We expect the same critical behavior as at the prewetting critical point. In this case the Ginzburg number does not depend on the film thickness; for λD → ∞ we employ the interface potential at a single wall to assess the validity of the mean field description.
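The critical-point condition ∂²g/∂l² = ∂³g/∂l³ = 0 can be verified against the quoted single-wall values: for Eq. (1) it yields exactly a_pwc = 16a_wet/9 and l_pwc = ln(9c/2b)/λ. The sketch below (illustrative parameters) solves the same two equations for the film potential of Eq. (2) by finite differences, locating one of the two off-axis critical points; its partner sits at D - l_c by the A ⇌ B symmetry.

```python
# Numerical location of an off-axis critical point: g''(l) = g'''(l) = 0 for Eq. (2)
import numpy as np
from scipy.optimize import fsolve

lam, b, c, D = 1.0, 1.0, 1.0, 8.0

def g_film(l, a):
    x, y = np.exp(-lam*l), np.exp(-lam*(D - l))
    return (a*x - b*x**2 + c*x**3) + (a*y - b*y**2 + c*y**3)

def conditions(v):
    l, a = v
    h = 1e-3                                   # finite-difference stencils
    g2 = (g_film(l+h, a) - 2*g_film(l, a) + g_film(l-h, a)) / h**2
    g3 = (g_film(l+2*h, a) - 2*g_film(l+h, a)
          + 2*g_film(l-h, a) - g_film(l-2*h, a)) / (2*h**3)
    return [g2, g3]

l_c, a_c = fsolve(conditions, x0=[1.5, 0.44])  # seeded at the single-wall values
print(l_c, a_c)   # close to l_pwc = ln(4.5) ~ 1.504 and a_pwc = 4/9 for lam*D = 8
```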
III. THE BOND FLUCTUATION MODEL AND SIMULATION TECHNIQUE.
Modeling polymeric composites from the chemical details of the macromolecular repeat units up to the morphology of the phase separated blend within a single model is not feasible today, even with state-of-the-art supercomputers. Yet, there is ample evidence that by a careful choice of simulation and analysis techniques, coarse grained models of flexible polymers, such as the bond fluctuation model, [20,26] provide useful insights into universal polymeric features. In the framework of the bond fluctuation model each effective monomer blocks a cube of 8 neighboring sites from further occupancy on a simple cubic lattice in three dimensions. Effective monomers are connected by bond vectors of length 2, √5, √6, 3, or √10 in units of the lattice spacing. The bond vectors are chosen such that the excluded volume condition guarantees that chains do not cross during their motion. [27] Each effective bond represents a group of n ≈ 3-5 subsequent C-C bonds along the backbone of the chain. [28] Hence, the chain length N = 32 employed in the present simulations corresponds to a degree of polymerization of 100-150 in a real polymer. If we increased the chain length N, the mean field theories would yield a better description of the equilibrium thermodynamics (self-consistent field theory is believed to be quantitatively accurate in the limit N → ∞), but the length scale of the ordering phenomena would be larger. Hence, our choice of N is a compromise determined by the computational resources. The statistical segment length b enters the relation R_g² = b²N/6 for the radius of gyration; for N = 32 the chains have R_g ≈ 7 in units of the lattice spacing. We study thin films of geometry L × L × D. Periodic boundary conditions are applied in the two lateral directions, while there are hard impenetrable walls at z = 0 and z = D + 1 modeling a film of thickness D. The average number density in the film is ρ_0 = 1/16, i.e., half of the lattice sites are occupied by corners of monomers. This density corresponds to a melt or concentrated solution. The density profile of occupied lattice sites, normalized by the bulk value, is presented in Fig. 2 for film thicknesses D = 24 and 48. For this choice of temperature and monomer-wall interaction an interface is stabilized in the center of the film. Due to the extended shape of the monomers and the compressibility of the fluid there are packing effects at the walls. [21] Overall the walls are repulsive and the monomer density is slightly reduced in the boundary region. The spatial extension of this region is independent of the film thickness. Moreover, the density is reduced at the center of the interface so as to reduce the energetically unfavorable contacts between unlike species. [29] Neither effect is incorporated into the mean field calculations, [17,18] and together they cause the density in the "bulk"-like region of the film to be slightly larger for thinner films than for thicker ones. In the following we employ the density of occupied lattice sites in the layers 5 ≤ z ≤ 8 as a measure of the density of the film. For large D the data are compatible with a behavior of the form ρ = ρ_0(1 + 0.85/D). The film thickness ranges from D = 8 ≈ 1.1R_g to D = 48 ≈ 7R_g and we vary the lateral extension over a wide range 48 ≤ L ≤ 264 to analyze finite size effects. In the two layers nearest to the walls, monomers experience a monomer-wall interaction. An A-monomer is attracted by the left wall and repelled by the right wall; the interaction between B-monomers and the walls is exactly opposite.
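As an illustration of the model's two constraints, the following sketch (hypothetical helper functions, not the production code) tests a single-monomer move of the bond fluctuation model: the 2×2×2 excluded-volume cubes must not overlap, and both adjacent bonds must stay in the allowed set {2, √5, √6, 3, √10}.

```python
# Bond fluctuation model constraints: excluded volume + allowed bond lengths
import numpy as np

ALLOWED_SQ = {4, 5, 6, 9, 10}          # allowed squared bond lengths

def bond_ok(r1, r2):
    d = np.subtract(r1, r2)
    return int(d @ d) in ALLOWED_SQ

def cube_sites(r):
    x, y, z = r                        # monomer blocks a 2x2x2 cube of sites
    return {(x+i, y+j, z+k) for i in (0, 1) for j in (0, 1) for k in (0, 1)}

def move_ok(chain, i, step, occupied):
    """Accept a one-lattice-unit displacement of monomer i? (geometry only)"""
    new = tuple(np.add(chain[i], step))
    freed, needed = cube_sites(chain[i]), cube_sites(new)
    if (needed - freed) & (occupied - freed):          # excluded volume violated
        return False
    return all(bond_ok(new, chain[j])                  # both bonds stay allowed
               for j in (i - 1, i + 1) if 0 <= j < len(chain))
```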
Each monomer-wall contact changes the energy by an amount ε_w = 0.16 in units of k_B T. For these parameters the wetting transition and the phase diagram of a blend confined between symmetric walls have been investigated previously. [21] Binary interactions between monomers are catered for by a short ranged square well potential, ε ≡ ε_AB = -ε_AA = -ε_BB (in units of k_B T), which extends up to a distance √6. The phase separation is brought about by the repulsion between unlike species. The Flory-Huggins parameter is χ = 2z_eff ε, where z_eff ≈ 2.65 denotes the effective coordination number in the bulk [30,20] at ρ_0 = 1/16. For ε_w = 0.16 previous simulations find a strong first order wetting transition at T_wet = 1/ε_wet = 14.1(7). [21] This value corresponds to χN ≈ 12, which is well inside the strong segregation limit.
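A quick consistency check of these numbers (a two-line computation, added here for the reader):

```python
# chi*N at the wetting transition: chi = 2*z_eff*eps, eps_wet = 1/14.1, N = 32
z_eff, N = 2.65, 32
chi_N = 2 * z_eff * (1.0 / 14.1) * N
print(round(chi_N, 1))   # 12.0, i.e. chi*N ~ 12 as quoted (strong segregation)
```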
The polymer conformations are updated via a combination of random local monomer displacements and slithering-snake moves. The latter relax the chain conformations about a factor of N faster than the local displacements. [30] We work in the semi-grandcanonical ensemble, [31] i.e., we control the temperature T ≡ 1/ε and the exchange potential ∆µ between the two species, while the composition fluctuates. This semi-grandcanonical ensemble is realized in the Monte Carlo simulations via switches of the polymer identity A ⇌ B at fixed chain conformation. The different Monte Carlo moves are applied in the ratio slithering snake : local displacements : semi-grandcanonical identity switches = 12:4:1. During production runs, we record the composition, the energy, and the surface energy every 150 slithering-snake steps and obtain the joint probability distribution in the form of a histogram. We use the semi-grandcanonical identity switches in conjunction with a reweighting scheme, [29,32] i.e., we add to the Hamiltonian of the system a reweighting function W(φ), H_rw = H_orig + W(φ), which depends only on the overall composition φ = n_A/(n_A + n_B), where n_A and n_B denote the numbers of A and B polymers in the simulation cell, respectively. The choice W(φ) ≈ -ln P(φ), where P(φ) denotes the probability distribution of the composition in the semi-grandcanonical ensemble, encourages the system to explore configurations in which both phases coexist in the simulation cell. Otherwise these configurations would be severely suppressed due to the free energy cost of interfaces. In the framework of this reweighting scheme the system "tunnels" often from one phase to the other, and this allows us to locate the phase coexistence accurately and to measure the free energy of the mixture as a function of the composition φ. Use of the histogram extrapolation technique [33] permits histograms obtained at one set of model parameters to be reweighted to yield estimates appropriate to another set of model parameters. We employ this analysis technique to obtain estimates for the reweighting function W(φ).
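A minimal sketch of the biased identity switch may clarify the bookkeeping. Sign conventions are chosen here so that the quoted choice W(φ) ≈ -ln P(φ), applied as a multiplicative bias, flattens the sampled composition histogram; `dE_pair` stands in for the actual pair-energy change of flipping one chain's species (a hypothetical helper, not the original code).

```python
# Metropolis acceptance for a semi-grandcanonical A <-> B identity switch,
# biased by a reweighting function W(phi) (energies and dmu in units of k_B T)
import math, random

def switch_accept(dE_pair, dmu, dn_A, W, phi_old, phi_new):
    # bias factor exp(W(new) - W(old)); with W ~ -ln P, rare compositions are
    # promoted, so the run tunnels between the coexisting phases
    ln_ratio = -dE_pair + dmu * dn_A + (W(phi_new) - W(phi_old))
    return ln_ratio >= 0.0 or random.random() < math.exp(ln_ratio)

# toy usage: a flat (zero) bias and an energetically neutral switch
print(switch_accept(0.0, 0.0, +1, lambda phi: 0.0, 0.5, 0.51))   # True
```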
IV. RESULTS.
Firstly, we locate the critical points of the phase diagrams. For very small film thickness we find a second order localization-delocalization transition even though the wetting transition is of first order. Swift et al. predicted this behavior in the framework of a square gradient theory, [9] and such behavior is also borne out in our self-consistent field calculations for polymer blends [17,18] and in simulations of the Ising model. [19] Upon increasing the film thickness we encounter a nearly tricritical transition. A truly tricritical transition cannot be achieved by tuning the film thickness alone, because of the discreteness of the lattice, but it could be brought about by varying the monomer-wall interaction. In an experiment using real materials, of course, the film thickness can be varied continuously, and a truly tricritical transition is in principle always accessible. For even larger film thickness the interface localization-delocalization transition is first order and we find two critical points at compositions off the symmetry axis φ = 1/2.
Secondly, we locate the triple line for the two largest film thicknesses and discuss how capillary waves lead to a strong dependence of the effective interface potential on the lateral system size.
Thirdly, we detail our results on the thickness dependence of the phase diagram and relate our findings to the binodals of the bulk and the mixture confined into a film with symmetric boundaries.
A. Critical points.

1. D = 8 and D = 12: second order interface localization-delocalization transitions

For film thicknesses which are comparable to the radius of gyration of the molecules, the effective interface potentials originating from the two surfaces interfere strongly. This can change the order of the interface localization-delocalization transition from first to second. In this case, a single critical point occurs on the symmetry axis φ = 1/2 of the phase diagram. The transition is expected to belong to the 2D Ising universality class. In Fig. 3(a) we present the probability distribution of the composition for various inverse temperatures ε, film thickness D = 8, and lateral film extension L = 80. Upon increasing the monomer-monomer interaction ε, the probability distribution P(φ) changes from single-peaked to bimodal, which indicates that a phase transition occurs in this temperature range. No signature of a trimodal distribution occurs and, hence, we conclude that the system is far away from the tricritical point, i.e., |r| > exp(-λD/2)/√N̄. In this case, we expect a crossover from 2DMF to 2DI behavior. Along the coexistence curve ∆µ = 0 and its extension to higher temperatures we use the cumulant intersection method to locate the critical point. [35] In the vicinity of the critical point the probability distribution of the order parameter m = φ - φ_coex = φ - 1/2 scales to leading order like [35] P(m, L, t) ∼ L^{β/ν} P*(L^{β/ν} m, L^{1/ν} t), where t = (ε_c - ε)/ε_c denotes the distance from the critical point along the coexistence curve and β and ν are the critical exponents of the order parameter and the correlation length. P* is characteristic of the universality class and has been obtained from simulations of the Ising model [34] at the critical temperature t = 0. Cumulants of the form ⟨m²⟩/⟨|m|⟩² are expected to exhibit a common intersection point for different system sizes L at the critical temperature, [35] and the value of the cumulant at the intersection point is universal. Our simulation data are presented in panel (b) and exhibit some corrections to scaling due to the crossover from 2DMF to 2DI behavior. Similar corrections were observed in simulations of a second order interface localization-delocalization transition in the Ising model. [13] From the intersection points of neighboring system sizes and from the intersection of the cumulant with the universal value of the Ising model we estimate the critical temperature to be ε_c = 0.0520(5).
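The cumulant analysis itself is simple to reproduce. The sketch below (hypothetical inputs: arrays of order-parameter samples on a common grid of ε) evaluates U_L = ⟨m²⟩/⟨|m|⟩² and interpolates the crossing of two system sizes linearly.

```python
# Cumulant intersection estimate of the critical temperature
import numpy as np

def cumulant(samples_m):
    m = np.asarray(samples_m, dtype=float)
    return np.mean(m**2) / np.mean(np.abs(m))**2

def crossing(eps_grid, U_small, U_big):
    """Linearly interpolate the first crossing of two cumulant-vs-eps curves."""
    d = np.asarray(U_small) - np.asarray(U_big)
    i = np.flatnonzero(d[:-1] * d[1:] < 0)[0]          # bracketing interval
    t = d[i] / (d[i] - d[i + 1])
    return eps_grid[i] + t * (eps_grid[i + 1] - eps_grid[i])
```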
In the inset of Fig. 3(b) we show the probability distribution, normalized to unit variance and norm, at our estimate of the critical temperature ε_c = 0.052 and compare it to the universal scaling curve of the 2D Ising universality class. The probability distributions for the smaller system sizes are slightly broader than the universal scaling curve, but the deviations decrease as we increase the system size.
The simulation data for D = 12 are presented in Figs. 3(c) and (d). As we lower the temperature, the probability distribution of the composition for L = 48 changes from single-peaked to bimodal. At intermediate values of ε, however, a three-peak structure is clearly discernible. This is characteristic of the 2DT regime and indicates the vicinity of the tricritical point. In the phenomenological considerations this regime occurs only for |r| < exp(-λD/2)/√N̄. We note that for such a small lateral system size the distribution does not resemble the universal shape of the order parameter distribution of the 2D Ising model at any value of ε. We conclude that the finite size rounding for this lateral system size sets in before we observe the crossover from 2DT to 2DI behavior, i.e., the correlation length ξ_2DT↔2DI in Eq. (12) exceeds the lateral system size L. For such small lateral extensions the universal properties of the transition are completely masked. Larger system sizes and a careful finite size scaling analysis are indispensable to determine the type of transition and to accurately locate the transition temperature.
The temperature dependence of the cumulant is presented in Fig. 3(d). There is no unique intersection point, and the value of the cumulants at the crossings is larger than the universal value of the cumulant of the Ising class. This behavior indicates pronounced corrections to scaling due to the crossover from 2DT behavior away from the critical point to 2DI behavior at the critical point. From the intersection points of neighboring system sizes and from the intersection of the cumulant with the universal value of the 2D Ising model we estimate the critical temperature to be ε_c = 0.0589(10).
The inset of panel (d) compares the distribution of the order parameter at our estimate of the critical temperature and the Ising scaling function. As we increase the lateral system size the "third" peak in the distribution vanishes and P (φ) gradually approaches the universal scaling curve. This indicates that our largest system sizes exceed the correlation length at the crossover from 2DT to 2DI behavior. The comparison of P (φ) with the universal scaling curve for several system sizes accurately locates the critical point and gives evidence that the transition belongs to the 2D Ising universality class.
For D ≤ 12 we find a single interface localization-delocalization transition of second order at φ = 1/2.
2. D = 14 ≈ 2R_g: tricritical interface localization-delocalization transition
The three-peak structure in the probability distribution for D = 12 and small lateral extensions L has indicated the vicinity of the tricritical interface localization-delocalization transition. Increasing the film thickness, we need larger and larger lateral extensions to observe the 2DI behavior, as ξ_2DT↔2DI diverges. Right at the tricritical point the distribution of the composition is expected to exhibit a three-peak structure for all lateral system sizes, and the distribution, when scaled to unit variance and norm, coincides with a universal scaling function. Wilding and Nielaba [36] have obtained this scaling function via simulations at the tricritical point of the spin-1 Blume-Capel model [37] in two dimensions. Assuming that the tricritical interface localization-delocalization transition belongs to the same universality class, we vary the film thickness D and the interaction strength ε so as to match the probability distribution of the composition onto the predetermined scaling function of the tricritical universality class. This strategy greatly facilitates the search for the tricritical interface localization-delocalization transition. Fig. 4(a) displays the probability distribution of the composition for film thicknesses ranging from D = 12 to D = 18 together with the universal scaling curve. The temperature was adjusted for each film thickness such that the relative heights of the central and outer peaks correspond to the ratio of the universal scaling curve. For small D < D_tri the "valley" between the peaks is too shallow, and for D > D_tri the probability between the peaks is too small. For D ≫ D_tri this situation corresponds to the triple point (cf. below), and the probability of finding the system between the peaks is suppressed by the free energy cost of the interfaces between the phases with composition close to 0 or 1 and the "soft-mode" phase with composition φ = 1/2. As we increase the film thickness, the temperature at which the ratio between the peak heights equals 1.2 shifts towards lower temperatures and approaches the wetting transition temperature from above.
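The matching procedure relies on a simple rescaling of the measured histogram, sketched below (hypothetical bin/count inputs): the distribution is brought to unit norm and unit variance so that it can be overlaid on the universal scaling function.

```python
# Rescale a composition histogram to unit norm and unit variance for matching
import numpy as np

def scaled_distribution(bins, counts):
    p = counts / np.trapz(counts, bins)          # normalize to unit area
    mean = np.trapz(bins * p, bins)
    var = np.trapz((bins - mean)**2 * p, bins)
    x = (bins - mean) / np.sqrt(var)             # zero mean, unit variance
    return x, p * np.sqrt(var)                   # Jacobian keeps unit norm
```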
Panel (a) of Fig. 4 suggests that the tricritical transition occurs close to the film thickness D = 14. This is further corroborated in Fig. 4(b), where we show the distribution function at ε = 0.06151 for various system sizes. Within the statistical accuracy of our data the distribution functions for the larger system sizes collapse well onto the universal scaling curve. For smaller systems the outer peaks are slightly sharper and centered at smaller values of the order parameter. Of course, no perfect data collapse can be expected, because we can tune the film thickness only in units of the lattice spacing. In view of the statistical accuracy and possible systematic corrections to scaling, however, we did not attempt to vary the monomer-wall interaction ε_w so as to achieve a better collapse. For D = 14 the system is very close to the tricritical transition.
3. D = 24 ≈ 3.5R_g and D = 48 ≈ 7R_g: critical points at φ ≠ 1/2

Though the system is strictly symmetric, the critical points for larger film thickness (D > D_tri) do not occur at φ = 1/2; rather, there are two critical points at critical compositions φ_c and 1 - φ_c. These critical points are the finite film thickness analogs of the prewetting critical points, which are recovered in the limit D → ∞. [17] Below the critical temperature the phase diagram comprises two miscibility gaps. The coexisting phases correspond to surfaces with a thin and a thick enrichment layer of the preferred component. Due to the missing symmetry between the coexisting phases, the coexistence value of the chemical potential ∆µ_coex differs from zero. We determine ∆µ_coex via the equal weight rule, [38] i.e., we adjust ∆µ such that the two peaks of the composition distribution carry equal statistical weight. Along this coexistence curve and its finite-size extension to higher temperatures we use the cumulant intersection to locate the critical temperature. This is shown in Fig. 5(a) for the film thickness D = 24. For the system sizes accessible in the simulations the intersection points between cumulants of neighboring system sizes systematically shift to lower temperatures, and the value of the cumulant at the intersection point gradually approaches the value of the 2D Ising universality class from above. The latter is indicated in the figure by the horizontal line. From these data we estimate the critical parameters to be ε_c = 0.061(1), with φ_c = 0.18(2) and φ_c = 0.82(2), respectively. This corresponds to a critical thickness l_c = Dφ_c = 0.62R_g of the enrichment layer. A similar procedure has been employed to locate the critical temperature in the film of thickness D = 48. The temperature and system size dependence of the cumulants is displayed in Fig. 5(c). From this we extract the estimates ε_c = 0.0625(10) for the critical temperature and φ_c = 0.09(2) and φ_c = 0.91(2) for the critical compositions. This value corresponds to a distance between the wall and the interface of l_c = 0.63R_g. Since increasing the film thickness from 3.5R_g to 7R_g does not change T_c or l_c substantially, we are in the regime λD ≫ 1 and the critical behavior is characteristic of the prewetting critical point in the semi-infinite system.
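Operationally, the equal weight rule can be implemented by histogram reweighting in ∆µ. The sketch below (hypothetical inputs; units of k_B T; n_chains the total number of chains, phi_split a composition separating the two peaks) bisects on the shift of the exchange potential until the two peaks carry equal probability.

```python
# Equal-weight determination of the coexistence exchange potential
import numpy as np

def reweighted_P(phi, lnP, d_dmu, n_chains):
    # shifting dmu by d_dmu reweights each state by exp(d_dmu * n_A), n_A = phi*n_chains
    w = lnP + d_dmu * n_chains * phi
    return np.exp(w - w.max())                   # unnormalized, overflow-safe

def imbalance(phi, lnP, d_dmu, n_chains, phi_split):
    P = reweighted_P(phi, lnP, d_dmu, n_chains)
    return P[phi < phi_split].sum() - P[phi >= phi_split].sum()

def equal_weight_dmu(phi, lnP, n_chains, phi_split, lo=-0.1, hi=0.1, iters=60):
    # imbalance decreases monotonically with d_dmu (more A favored); bisect the
    # root, assuming it is bracketed by [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if imbalance(phi, lnP, mid, n_chains, phi_split) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```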
The behavior of the cumulants and the very gradual approach of the probability distribution towards the Ising curve indicate pronounced corrections to scaling. For the simulation of the bulk phase diagram [39] a nice cumulant intersection has been obtained with system sizes in the range 24³ to 56³. In the present study we employ systems with about an order of magnitude more polymers and obtain no clear intersection of the cumulants! There are three reasons for strong corrections to the leading 2D Ising scaling behavior: (i) The aspect ratio D/L of our simulation cell is always finite. Truly two-dimensional behavior can only be observed for vanishing aspect ratio, and our data might fall into the broad crossover region between three-dimensional and two-dimensional critical behavior. Such a crossover has been studied in our polymer model for neutral walls [40] and for walls which both attract the same species (i.e., capillary condensation). [21] However, we note that unlike in these situations there is no three-dimensional critical point in the vicinity for antisymmetric boundary conditions: the temperature of the unmixing transition in the bulk is a factor of 4 higher than the critical temperature in the thin film. Since the critical point in a thin film is related to the prewetting transition of the semi-infinite system, i.e., a transition with no three-dimensional analog, we expect the corrections to be qualitatively different from the case of neutral or symmetric boundaries. (ii) Unlike the situation for small film thickness D = 12, the probability distribution of the order parameter is asymmetric, because the critical point does not lie on the symmetry axis of the phase diagram. This missing symmetry between the two phases gives rise to field-mixing effects, [34] which manifest themselves in corrections of relative order L^{-(1-α-β)/ν}. These corrections are antisymmetric to leading order and, hence, are not expected to influence even moments (like the cumulants) of the order parameter distribution profoundly. The effects are, however, detectable in the order parameter distributions which we present in Figs. 5(b) and (d). The distribution functions at our estimate of the critical temperature clearly lack symmetry and approach the symmetric scaling curve of the 2D Ising universality class only very gradually. (iii) Additionally, there are corrections to scaling from non-singular background terms. One source of (non-critical) composition fluctuations are "bulk"-like fluctuations in the A-rich and B-rich domains. In a bulk system, i.e., with periodic boundary conditions in all directions, the susceptibility is rather small: at ε = 0.065 it takes the value χ_bulk T = V⟨(∆φ)²⟩ = 0.047, with ∆φ = φ - ⟨φ⟩. In a system of size 96 × 96 × 24 this susceptibility corresponds to composition fluctuations of the order ⟨(∆φ)²⟩ ∼ 5·10⁻⁴. Therefore, we believe that "bulk"-like composition fluctuations are not the major source of background terms. However, we cannot rule out that the presence of an AB interface gives rise to enhanced composition fluctuations. Another source of corrections to scaling stems from the fluctuations of the average interface position itself. Since the effective interaction between the interface and the wall is rather weak, these give rise to a finite but large susceptibility away from the critical point.
We have estimated the susceptibility from the curvature of ln P(φ) close to the triple point (i.e., T ≈ 0.9T_c), and we have obtained values of the order χT ∼ 3·10² (a smaller value is obtained if the interface is close to a wall). For the same system size as above, this yields composition fluctuations of the order √⟨(∆φ)²⟩ ∼ 0.04, a value which should be compared to φ_c(D = 24) = 0.18(2). This observation partially rationalizes why the peak in the probability distribution of the composition close to φ = 1/2 is always broader than the peak which corresponds to the phase in which the interface is close to a wall. As we approach the critical temperature, composition fluctuations grow. At the critical point the typical composition fluctuations are of the order √⟨(∆φ)²⟩ ∼ √(L^{γ/ν-d}) ∼ L^{-1/8}, where we have used the critical exponents of the susceptibility, γ = 7/4, and of the correlation length, ν = 1, appropriate for the 2D Ising universality class. Hence, for small system sizes typical fluctuations yield compositions which differ substantially from the critical composition; only for very large sizes does the composition fluctuate in the vicinity of the critical value. Moreover, the critical composition is much displaced from the symmetry axis φ = 1/2, and typical fluctuations in a finite system are cut off by the constraints 0 < φ < 1. Therefore, the susceptibility of a small system is reduced compared to the value expected from the leading scaling behavior. This observation is in accord with our Monte Carlo data, and a similar reasoning has been used by Bruce and Wilding [41] in discussing background terms to the specific heat and the concomitant corrections to scaling in the energy distribution.
B. The triple point.
For the largest two film thicknesses, D = 24 and D = 48, the interface localization-delocalization transition is first order, and the concomitant two miscibility gaps join in a triple point. At this temperature an A-rich phase, a B-rich phase, and a phase in which the interface is located in the middle of the film (φ = 1/2) coexist. The coexisting phases correspond to three peaks in the distribution of the composition. Upon increasing the lateral system size the peak positions do not shift (as opposed to the behavior at the tricritical point); the peaks become more pronounced, and configurations with intermediate compositions are more and more suppressed because of the presence of interfaces between the coexisting phases.
The composition of the system and the average interface position are related via l = φD (integral criterion), where we assume that the coexisting bulk phases are almost pure, i.e., φ_bulk,coex ≈ 0 or 1. From the probability distribution we then calculate the effective interface potential,

g(l) = -(k_B T/L²) ln P(l) + const.

In principle, not only fluctuations of the interface position but also "bulk"-like composition fluctuations contribute to the distribution. Since the wetting transition in a binary polymer blend occurs far below the critical point of the bulk, however, the bulk susceptibility is very small, and the latter contribution can be neglected.
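In code, this conversion is a one-liner on the sampled histogram; a sketch (hypothetical bin/count inputs, g in units of k_B T):

```python
# Effective interface potential from the composition histogram, g = -ln(P)/L^2
import numpy as np

def interface_potential(phi_bins, counts, D, L):
    p = counts / counts.sum()
    mask = p > 0                       # avoid log(0) in empty bins
    l = phi_bins[mask] * D             # integral criterion l = phi * D
    g = -np.log(p[mask]) / L**2        # free energy per unit area (k_B T units)
    return l, g - g.min()              # fix the arbitrary additive constant
```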
The dependence of the free energy per unit area on the position of the interface is a key ingredient of the theory of wetting. [42,43,3,44-47] The interface interacts with the boundaries, and the (bare) interface potential exhibits three minima, which correspond to the three coexisting phases. In the two phases with φ close to 0 and 1 the interface is localized close to a wall, the interaction between the wall and the interface is rather strong, and the effective interface potential possesses a deep minimum. In the "soft-mode" phase the interface is only weakly bound to the center of the film and the minimum is much broader. In Fig. 6 we present the effective interface potentials for film thicknesses D = 24 (a) and D = 48 (b) and various lateral system sizes in the vicinity of the triple temperature. The three minima are clearly visible; however, the shape of the interface potential and the values of the minima depend on the lateral system size L. Moreover, the minima which correspond to the localized states broaden and (slightly) shift to larger distances between wall and interface upon increasing L (cf. inset).
Fluctuations of the local interface position, i.e., capillary waves, lead to a renormalization of the effective interface potential g(l) and cause the dependence of g(l) on the lateral system size which we observe in a microscopic model of a polymer mixture. Describing the configuration of the system only via the local position l(x, y) of the interface (sharp kink approximation), we write the coarse grained free energy in the form of the capillary wave Hamiltonian [44,46,48]

H[l] = ∫ dx dy { (σ/2) (∇l)² + g(l(x, y)) },    (20)

where σ approaches the AB interface tension between the coexisting bulk phases for large separations between the wall and the interface. An increase of σ at smaller distances l, as revealed by previous MC simulations, is neglected. [21] In the vicinity of a minimum of g(l) we may approximate the interface potential by a parabola.
In the vicinity of a minimum, g(l) ≈ g_min + (σk∥²/2) δl², where δl denotes the deviation of the local interface position from the position at which g(l) attains its minimum and ξ∥ = 2π/k∥ is the parallel correlation length of the interface fluctuations. For lateral distances much smaller than ξ∥ the fluctuations of the local interface position are hardly perturbed by the interaction between the interface and the wall; the interface behaves like a free interface. For lateral distances which exceed ξ∥, capillary waves are strongly suppressed. ξ∥ is larger for the minimum of g(l) in the center of the film than for the minima in which the interface is localized at a wall. From the curvature of the effective interface potential g(l) for film thickness D = 24 we estimate k₁ = √[(d²g/dφ²)/(σD²)] = 0.26 and k₂ = 0.031, where we have used the bulk value σ = 0.0382 for the interfacial tension at ε = 0.068. For the thicker film we obtain k₁ = 0.3, but the curvature in the middle of the film could not be estimated accurately; the value is of the order k₂ ∼ O(0.005), and we expect it to decrease exponentially with the film thickness. Hence, this fluctuation effect is the stronger the larger the film thickness. For the system sizes employed in the MC simulations k∥L is of order unity.
In our Monte Carlo simulations the finite lateral system size L acts as an additional cut-off for the spectrum of interface fluctuations, [14] and upon increasing L we extend this spectrum. Allowing for interface fluctuations decreases the free energy of the system. Therefore, we expect the free energy density to decrease when we increase the lateral system size, and we expect the effect to be the stronger the larger ξ∥. Consequently, the free energy of the "soft-mode" phase decreases relative to the free energy of the phases in which the interface is located close to a wall when we increase L. This effect is clearly observed in the MC simulations. To be more quantitative, we consider a system where the laterally averaged interface position is at a minimum of g(l) and expand the deviation δl(x, y) from the minimum in a Fourier series,

δl(x, y) = Σ_{n,m} {a_nm cos(q_n x) cos(q_m y) + b_nm cos(q_n x) sin(q_m y) + c_nm sin(q_n x) cos(q_m y) + d_nm sin(q_n x) sin(q_m y)}    (22)

with q_n = 2πn/L. The coefficients a_00 = b_00 = c_00 = d_00 = b_0m = c_0m = d_0m = d_n0 vanish identically; all other coefficients can take any real value. Using this expansion (22) and the effective interface Hamiltonian (20), we calculate the average size of the fluctuations (Eq. (23)) and the free energy (Eq. (24)), where a factor η_nm takes the values η_00 = 0, η_n0 = η_0m = 1/2, and η_nm = 1 for n ≠ 0 and m ≠ 0 in order to account for the restrictions on the coefficients a, b, c, d. The additive constant is independent of the wavevector cut-off k∥. The dependence of the free energy on the system size is dominated by the small-q behavior. In this regime the discrete nature of wavevector space matters and, hence, we do not replace the sums over q by integrals (cf. the sketch below). Using the measured values of the wavevector cut-offs we calculate the lateral system size dependence of the free energy difference between the "soft-mode" phase and the localized state. The results are compared to the MC data in Fig. 7. Good agreement is found for large L, whereas there are deviations for smaller L: for small L the amplitude of the fluctuations becomes large, and a parabolic interface potential is no longer a good approximation, especially for the localized states, where the interface is located very close to the walls. We have used histogram extrapolation to adjust the temperature such that the difference ∆g = g₂ - g₁ of the minima vanishes. This corresponds to the equal height criterion for the triple point. The equal weight condition, which we have applied to determine the binodals close to the critical points, would instead require ∆g = (1/L²) ln(k₁/k₂). Both conditions agree, of course, when we extrapolate our results to L → ∞. From this procedure we obtain the following estimates for the triple point: 1/ε_triple = 14.7(4) with φ_triple = 0.015, 0.5, 0.985 for D = 24, and 1/ε_triple = 14.2(4) with φ_triple = 0.0066, 0.5, 0.9934 for D = 48. The thickness of the microscopic enrichment layer at the wetting transition temperature is of the order l_wet = 0.05R_g, a value which is consistent with expectations for strong first order wetting transitions.
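The mode sum behind Fig. 7 can be sketched as follows. Each independent Gaussian capillary mode is taken to contribute (1/2)ln(q² + k∥²) to the free energy per k_B T (up to k-independent constants, which drop out of differences), with the degeneracy factor η_nm as above; the precise normalization of Eqs. (23) and (24) is an assumption here. With the measured cutoffs k₁ = 0.26 and k₂ = 0.031 for D = 24, the fluctuation part of g₂ - g₁ becomes more negative as L grows, favoring the soft-mode phase.

```python
# Lateral-size dependence of the capillary-wave free energy difference (sketch)
import numpy as np

def mode_sum(L, k_par, nmax=400):
    n = np.arange(nmax + 1)
    q2 = (2*np.pi*n/L)**2
    Q2 = q2[:, None] + q2[None, :]               # q_n^2 + q_m^2 on the mode grid
    eta = np.ones_like(Q2)
    eta[0, :] = eta[:, 0] = 0.5                  # degeneracy bookkeeping
    eta[0, 0] = 0.0                              # zero mode fixed (average position)
    return (2.0 / L**2) * np.sum(eta * 0.5 * np.log(Q2 + k_par**2))

def delta_f(L, k1=0.26, k2=0.031):
    # fluctuation part of g2 - g1; the fixed nmax (UV cutoff) only shifts an
    # L-independent constant and cancels from comparisons across L
    return mode_sum(L, k2) - mode_sum(L, k1)

for L in (48, 96, 192):
    print(L, delta_f(L))                          # grows more negative with L
```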
The dependence of the critical temperature and the triple temperature on the film thickness is summarized in Fig. 8. When we increase the film thickness, the critical temperature 1/ε_c shows a non-monotonic dependence: at D = 14 the tricritical point (where the critical temperature and the triple temperature merge) occurs at ε_tri = 0.0615(5), for film thickness D = 24 we find ε_c = 0.0610(10), and at D = 48 we find ε_c = 0.0625(10). This non-monotonicity is rooted in two opposing effects. On the one hand, the self-consistent field calculations predict the critical Flory-Huggins parameter χ_c^SCF(D) of an incompressible fluid to decrease upon increasing the film thickness D; this shift decreases exponentially with the film thickness. On the other hand, packing effects, which are not incorporated in the self-consistent field calculations, increase the density in the "bulk"-like portion of the film when we decrease the film thickness. These packing effects at the walls depend strongly on the computational model, but qualitatively similar effects might occur in experimental systems as well. This thickness dependence of the density in the middle of the film modifies the relation between the depth of the square well potential and the χ-parameter. It leads to a behavior of the form ε_c ∼ χ_c^SCF/(1 + 0.85/D), where we use the dependence of the density profile (cf. Fig. 2) on the film thickness as obtained by direct measurement in the Monte Carlo simulations; a dependence of the fluid packing structure on the density is neglected. A similar 1/D correction to the difference in surface free energies between the A-rich and B-rich phases has been observed in previous simulations. [21] Attempting to separate these two effects, we also present [(1 + 0.85/D)ε_c]⁻¹, which corresponds to the inverse Flory-Huggins parameter. Within the error bars the behavior of this quantity is consistent with the mean field prediction. The critical value of the inverse Flory-Huggins parameter increases and the triple value decreases as we increase the film thickness; the latter approaches the wetting transition temperature [21] T_wet = 14.1(7) from above.
C. The phase diagram.
For film thickness D = 48 we have determined the complete phase diagram. Close to the critical points we assume 2DI behavior with an exponent β = 1/8 for the order parameter and employ finite size scaling to estimate the critical amplitude. Outside the critical region but above the triple temperature we have estimated the location of the binodals via the equal weight criterion in a system of size L = 64, but no finite size analysis has been applied. The phase diagram for a blend confined into a film with antisymmetric walls is presented in Fig. 9(a). Confinement into a film with antisymmetric boundary conditions enlarges the one-phase region up to the prewetting critical temperature. Since the wetting transition in binary polymer blends occurs far below the unmixing critical temperature of the bulk, the effect is quite pronounced. The temperature region between the prewetting critical point and the triple point is about 11% of the wetting transition temperature. This value depends strongly on the details of the structure at the walls: the stronger the first order wetting transition, the longer the prewetting lines and the more extended the region of the two miscibility gaps. The phase diagrams of the bulk and of a film with symmetric walls are displayed for comparison in Fig. 9. The symmetric film has the same thickness as the antisymmetric film, and the monomer-wall interactions at both walls are identical and attract the A-component. While prewetting at the wall which prefers the A-component leads to a two-phase region in the antisymmetric case, only a change in curvature of the binodal is detectable in the symmetric case.
Panel (b) of Fig. 9 presents the phase diagram as a function of temperature and exchange chemical potential. In the antisymmetric case ∆µ_coex = 0 up to the triple temperature. There, two coexistence lines emerge which are the thin film analogs of the prewetting lines at the two walls. Since the monomer-wall interactions are short ranged, the prewetting line in the bulk and the coexistence curves in the film deviate from the bulk coexistence value linearly (up to logarithmic corrections). [49] They end in two critical points. Though the system is strictly symmetric with respect to exchanging A ⇌ B, phase coexistence is not restricted to ∆µ = 0, and the coexisting phases are not related by the symmetry of the Hamiltonian. The coexistence curve of the symmetric film is shown for comparison. Its coexistence value of the chemical potential is shifted to values disfavoring the component attracted by both walls. There is a change in the temperature dependence of the coexistence curve close to the wetting transition temperature, but the coexistence curve stays far away from the prewetting line. If the two lines intersected, there would also be a triple point in the symmetric case. [21,50] Since the shift of the chemical potential ∆µ is roughly proportional to the inverse film thickness (Kelvin equation), we expect a triple point to occur only for much larger film thicknesses. This is in accord with self-consistent field calculations. [21] The typical distance l between the interface and the wall at coexistence is of order D/2 in the antisymmetric case, while it is only of order R_g ln(D/R_g) in the symmetric case. Hence, smaller film thicknesses are sufficient to study the interaction between the interface and the wall, and antisymmetric boundary conditions are computationally more efficient for investigating the wetting behavior.
V. SUMMARY AND DISCUSSION.
We have studied the phase diagram of a symmetric polymer mixture in a thin film with antisymmetric boundary conditions via large scale Monte Carlo simulations. The walls interact with the monomers via a short range potential: one wall attracts the A component and repels the B component, while the interaction at the opposite wall is exactly reversed. The salient features of the phase diagram and its dependence on the film thickness as obtained by our MC simulations are in accord with the results of mean field theory. [9,17,18] Fluctuations, which are neglected in the mean field calculations, do not modify the qualitative phase behavior. However, they give rise to a rich crossover behavior between Ising critical behavior, tricritical behavior, and their mean field counterparts. This has been elucidated by phenomenological considerations and is qualitatively consistent with our simulation results.
Since the critical point of the thin binary polymer film occurs at much lower temperature than the unmixing transition in the bulk, "bulk-like" composition fluctuations are only of minor importance. The dominant fluctuations of the composition of the film arise from capillary waves at the interface between the A-rich and B-rich regions in the film. The interaction between the walls and the interface is rather small, because it is mediated via the distortion of the interface profiles at the walls and the strength of the interaction decreases exponentially with the distance. Hence, the interface is only very weakly bound to the minimum of the effective interface potential. These large fluctuations give rise to rather pronounced corrections to scaling in our systems of limited size. However, using the cumulant intersection method [35] and the matching of the order parameter distribution onto the predetermined universal scaling function, [34] we give evidence for the 2D Ising universal character of the critical points. The same strategy has proven computationally very convenient to locate the tricritical point as a function of the film thickness. [36] This technique allows us to locate the critical points of the confined complex fluid mixture with an accuracy of a few percent.
Interface fluctuations not only impart 2D Ising critical behavior onto the critical points; they are important over the whole temperature range. Monitoring the probability distribution of the laterally averaged interface position, we extract the effective interface potential g(l). Its dependence on the lateral system size yields direct evidence for the renormalization of the interface potential by interface fluctuations. Interface fluctuations lead to a broadening of the minima in the interface potential, a shift of the minima towards the center of the film, and a relative reduction of the free energy of the broader minimum. This leads to a systematic overestimation of the triple temperature by the mean field calculations.
Moreover, our simulations indicate that packing effects in thin films result in corrections of order 1/D to the density of the film, or equivalently to the effective Flory-Huggins parameter. Such corrections are likely to mask completely the subtle thickness dependence of the critical and triple temperatures predicted by the mean field calculations. For short range interactions between walls and monomers the predicted shifts decrease exponentially with the film thickness D. However, power-law dependences are expected for the case of long range (i.e., van der Waals) interactions between walls and monomers.
The gross features of the phase diagram as well as our simulation and analysis techniques are not restricted to binary polymer fluids but apply generally to binary liquid mixtures in confined geometries. Moreover, mean field calculations [17] indicate that a qualitatively similar phase behavior emerges for small deviations from perfectly antisymmetric boundary conditions. The stronger the first order wetting transitions at the boundaries, the larger the deviations from antisymmetry that are permissible without altering the topology of the phase diagram. Hence, a thin binary film on a substrate against air/vacuum, where the substrate energetically favors one component of the mixture while the other component has an affinity to the air surface, is an experimental realization of the boundary conditions discussed here. Our findings also imply that ultrathin enrichment layers at one surface are unstable in the temperature range T_wet < T < T_c. Such effects have been observed experimentally [51] in polymeric films, albeit for a liquid-vapor transition instead of a liquid-liquid demixing. Recent experiments have also observed the wetting transition in binary polymer blends. [52,53]

TABLE I. Compilation of the boundaries of the different regimes in the vicinity of the tricritical point and the correlation lengths at the crossovers. The latter quantity gives an estimate of the system size required to observe the crossover in the Monte Carlo simulations.

crossover | |∆t_cross| | ξ_cross/R_g
2DT ↔ 2DI | (N̄ exp(λD))^{-1+1/2φ_cross} r^{1/φ_cross} | exp(λD(3/4 - ν_tri/2φ_cross)) N̄^{1/2-ν_tri/2φ_cross} r^{-ν_tri/φ_cross}
2DI ↔ 2DMF | |r| N̄^{-1/2} exp(-λD/2) | |r|^{-1/2} N̄^{1/4} exp(λD/2)
2DMF ↔ 2DTMF | r² | exp(λD/4)/|r|

FIG. 1. (a) Illustration of the different regimes for a second order and tricritical transition. 2DTMF: mean field tricritical behavior; 2DMF: mean field critical behavior; 2DI: two-dimensional Ising critical behavior; 2DT: two-dimensional tricritical behavior. The inset shows the temperature dependence of the order parameter m for r = -0.4 as calculated within mean field theory (see Eq. (5)). For t_c - t ≪ 16r²/3, 2DMF behavior is found, while 2DTMF behavior is observed at larger distances from the critical point. (b) Dependence of the critical temperature t_c on the distance r from the tricritical point. The curves correspond to different values of λD as indicated in the key. Thick lines, which bracket the behavior, correspond to t_c = 7r²/5 (valid for small r) and t_c = 7r²/9 (valid in the limit λD → ∞). The inset presents the binodals at fixed strength b = 4.44 of the wetting transition of the individual surfaces and several values of λD as indicated in the key. For this choice of parameters, b/c = 4.44 > 3 exp(-λD/2) (and, hence, r > 0), there are two critical points for all values of the film thickness.
FIG. 2. Density of blocked lattice sites normalized by the bulk value as a function of the distance from the wall at ε = 0.06 and ε_w = 0.16 for film thicknesses D = 24 and D = 48. Note the strong packing effects at the wall for z ≤ 5. For these parameters an interface is stabilized at the center of the film. The position of the interface fluctuates in the interval R_g ≈ 7 < z < D − R_g (cf. Fig. 6). The inset presents the normalized density averaged over the layers 5-8. This region is marked by the arrow in the main panel. | 2018-04-03T05:46:03.257Z | 2000-10-20T00:00:00.000 | {
"year": 2001,
"sha1": "267dea449e718f952368a1dae6c2c7182bdbaabb",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0010325",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "267dea449e718f952368a1dae6c2c7182bdbaabb",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine",
"Materials Science"
]
} |
225843781 | pes2o/s2orc | v3-fos-license | Soil and banana crops (Musa paradisiaca L.) risk by chromium (Cr) accumulation through leachate and its health risk assessment
The leachate effluent of the Semarang landfill flows directly into the upper course of the Kreo river, so the river is potentially contaminated by heavy metals. Many banana trees are planted on the surrounding contaminated soil. This study was conducted to determine Cr toxicity in banana trees growing on contaminated soil and to assess the public health risk posed by the leachate of the Semarang landfill. Cr levels were measured to obtain the bioaccumulation factor, as basic data for environmental safety and for the evaluation of wastewater processing. The research shows: 1) the Cr concentration in the effluent leachate of the WWTP at the Jatibarang (Semarang) landfill in the rainy season exceeds the environmental safety limit; 2) the Cr concentration in the water before and after the effluent leachate contamination is below the standard; 3) Cr contamination in the soil around the Jatibarang landfill effluent leachate contamination area exceeds its limit; 4) the bioconcentration factor shows that banana trees can accumulate Cr from the soil; 5) most of the Cr accumulated in the banana trees is translocated to the air and to the roots; 6) banana plant parts such as the root, stem, and pseudostem from the Cr-contaminated soil are safe for consumption.
Introduction
Heavy metal pollution is one of the world's environmental problems, since its distribution and toxicity are dangerous for human health. Mismanagement of a landfill, as a waste recycling and processing area, can contaminate the ecosystem by damaging vegetation and the environment. Heavy metal accumulation in the ground reduces crop productivity and safety. Chromium is needed for the metabolism of proteins, fats, and carbohydrates. A lack of Cr(III) causes some metabolic dysfunctions, such as prolonged stress during pregnancy, physical trauma, and infection [1]. Excess Cr may cause nephrotoxicity; chromium exposure causes inflammation of the skin and respiratory system, liver damage, and ulcer formation [2]. The heavy metal intake in humans therefore needs to be limited, to around 0.05-0.2 mg·day−1 [3].
Parts of the banana tree, such as roots, stems, and pseudostems, are often used in traditional medicine and traditional cuisine. Applications of banana tree parts in traditional medicine include: the roots, used in remedies for blood and venereal diseases and as an anthelmintic; the stem, which has antilithiatic activity that helps reduce and break down magnesium ammonium deposits [4]; and pseudostem juice, used for the treatment of diarrhea, hemoptysis, cholera, and dysentery [4,5]. Parts of the banana tree are also used in traditional cuisine, among others: banana stems in dishes such as Sayur Ares (Lombok, NTB), Bura Piong Pa or banana-stem chicken (Toraja, South Sulawesi), and Sayur Bonggol Pisang (Yogyakarta); the banana pseudostem is used in a typical dish of the Dayak tribe in Kalimantan.
Further study of metal absorption by crops and the associated health risk from human consumption, especially of banana, is necessary. In Indonesia, investigations of heavy metal contamination in crops in relation to human health are uncommon, although there are studies by researchers abroad, e.g., [6,7,8]. The purpose of this research is to determine Cr toxicity in banana trees growing on contaminated soil and to assess the public health risk posed by the leachate of the Semarang landfill. The bioaccumulation factor, as basic data for environmental security and waste processing evaluation, is obtained by measuring Cr.
Experimental Design
Samples were divided into six research stations: station 1, the effluent leachate; station 2, the water before the effluent leachate; station 3, the water after the effluent leachate; and stations 4, 5, and 6, soils planted with banana in the Jatibarang landfill area. Soil sampling was conducted by excavating the soil to a depth of about 30 cm. Each 200-gram sample was packed in a labeled polyethylene bag. Sampling was also conducted on banana trees: the root, stem, and pseudostem were taken and packed in labeled polyethylene bags. All samples were properly packed in labeled polyethylene bags and stored in an airtight sterile icebox at 4 °C. The samples were analyzed at the Center for Environmental Health Engineering and Disease Control (BBTKLPP Yogyakarta). Cr concentrations were analyzed with an atomic absorption spectrophotometer.
Health Risk Index (HRI)
The ratio of the estimated contamination in the vegetation samples to the oral reference dose is expressed as the Health Risk Index (HRI). The contamination estimate is calculated by dividing the daily intake of metal (DIM) by its safe limit. An index value higher than 1 is not considered safe for human health [9].
For the human health risk assessment, the Health Risk Index (HRI) is calculated with the formula:
HRI = DIM / RfD
The daily intake of metal is written as DIM and the oral reference dose as RfD. The RfD value for Cr is 0.300 mg·kg−1 body weight·d−1 [10,11]. The daily intake of metal (DIM) is calculated with the formula [20]:
DIM = (C_metal × C_factor × D_food) / BW
The metal concentration in the banana tree is denoted C_metal (mg·kg−1), C_factor is the conversion factor, D_food is the daily intake of banana, and BW is the mean body weight. The conversion factor of green banana to its dry weight is 0.085. The estimated adult body weight is 55.9 kg and the daily vegetable intake per person is taken as 0.345 kg·d−1 [12,13]. In this research, the adult body weight is set to 60 kg [14]. The fruit and vegetable intake suggested by experts is less than 400 g/day [15].
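For concreteness, the DIM and HRI formulas above can be coded directly with the parameter values stated in the text (C_factor = 0.085, BW = 60 kg, daily intake 0.345 kg·d−1, RfD = 0.300 mg·kg−1·d−1). A minimal sketch; the example Cr concentration is hypothetical:

```python
def daily_intake_of_metal(c_metal, c_factor=0.085, d_food=0.345, body_weight=60.0):
    """DIM = (C_metal * C_factor * D_food) / BW, in mg per kg body weight per day.

    c_metal     -- Cr concentration in the plant part (mg/kg fresh weight)
    c_factor    -- fresh-to-dry weight conversion factor for green banana
    d_food      -- daily intake of the food item (kg/day)
    body_weight -- adult body weight (kg)"""
    return c_metal * c_factor * d_food / body_weight

def health_risk_index(dim, rfd=0.300):
    """HRI = DIM / RfD; values below 1 are considered safe for consumption."""
    return dim / rfd

# Hypothetical example: 5 mg/kg Cr in banana pseudostem
dim = daily_intake_of_metal(5.0)
print(f"DIM = {dim:.4f} mg/kg/day, HRI = {health_risk_index(dim):.3f}")
```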
Data Analysis
The laboratory results were analyzed with a descriptive quantitative approach supported by a literature study. The heavy metal concentrations in the water samples are given in Table 1. The Cr concentration in the WWTP Semarang landfill effluent leachate exceeded the environmental safety limit, at around 0.0840 mg/L, while the Cr concentrations in the water before and after contamination by the effluent leachate were below the standard. The laboratory examination showed that the pH of the effluent leachate and Kreo water samples was alkaline, at 8.0-8.3. According to FAO [16], the tolerance limit for irrigation water pH is 6.50-8.40. The alkaline pH of the water samples remains a concern, although it tends to be buffered by the soil [17].
Cr Concentration in Soil Sample
The Cr concentrations and pH values in the soil are given in Figure 2 and Table 2. The soil taken from the effluent leachate area contains Cr at concentrations from 4.569 to 19.899 mg/kg. Based on these data and on Government Regulation (PP) No. 101 of 2014 on the management of hazardous and toxic waste materials (Bahan Berbahaya dan Beracun, B3), Cr contamination in the surface soil of the Jatibarang landfill effluent leachate area exceeds the standard [18]. The excess Cr is caused by Cr input into the soil, driven by natural factors but dominated by human activities. The contamination risk estimate, obtained by dividing the daily intake of metal (DIM) by its safe limit, shows that banana plant parts such as the root, stem, and pseudostem grown in the contaminated soil are safe for human consumption (HRI < 1).
Chromium (Cr)
In the solid phase, Cr tends to be immobile, inert, and not dangerous, but this changes drastically in the liquid phase. Chromium mobilization occurs when Cr mixes into the groundwater and flows to lower ground, potentially enriching the Cr concentration in the soil. Cr can be immobilized through absorption onto organic and inorganic soil components or through precipitation as a pure solid. Cr exerts an effect when it reacts with different soil components, which then affects its availability, mobility, and solubility. Cr exists in the environment in both the solid and liquid phases [19].
Cr in The Water Sampling
The presence of Cr in the effluent leachate is connected with the contamination of hazardous and toxic waste materials (used batteries, woven fabrics, used packaging, iron, and steel) in the landfill, which releases Cr and other heavy metals into the Kreo river. The low Cr concentration in the water is caused by: 1) suboptimal waste treatment technology; 2) the application of phytoremediation to protect the water ecosystem, whereby Cr in the effluent leachate is controlled by bioaccumulator plants in the lower course of the Kreo river, namely Typha latifolia and Eichhornia crassipes; and 3) dilution of the heavy metal in the water. The continuous discharge of treated wastewater into the Kreo river leads to Cr accumulation in the soil [4]. The rise in pH can be caused by methane in the leachate composition. The presence of methane reflects the fact that the wastewater is not degraded completely before it is released into the Kreo river, which affects the effluent leachate in the river. In addition, the aeration system used in the wastewater treatment installation can produce an overpopulation of methanogenic bacteria, which transform organic acids and acetate into methane [19].
Cr Heavy Metal in The Soil
Cr enters the soil via the leaching of Cr-containing particles through geochemical processes such as weathering or diagenetic reactions, where it occurs as a cationic species [20]. Cr(III) tends to have low mobility and bioavailability since it forms strong bonds with alumino-silicate clays, Fe/Al hydroxides, and other soil organic matter. The soil in the Semarang landfill effluent leachate contamination area has a pH of about 7.05-8.20. The alkaline pH is caused by the increased affinity of the soil through the increase of negative charge in the soil. The negative charge of Cr(VI) is responsible for its increased mobility and bioavailability, as it is repelled by negatively charged minerals, humus, and clay in the soil; this process raises the pH, and increasing pH tends to promote metal precipitation [21]. The high soil metal concentrations are caused by contamination from the WWTP of the Semarang landfill. In this research, the banana plants were able to accumulate Cr from the soil; soil samples with high concentrations often show very high accumulation levels [22]. Cr accumulation in the soil likely occurs because of the contamination of hazardous and toxic waste materials in the Jatibarang landfill. Poor wastewater processing allows the waste to interact with its environment; this contact leads to chemical transformations in the environmental system, and the metal cannot be degraded [22]. The transformation of heavy metals from the solid to the liquid phase occurs when there are changes in soil cation exchange, pH, or oxidation-reduction potential [19].
Cr Heavy Metal in Plants.
The increase of Cr concentration in the roots is caused by the binding of Cr(III) and Cr(VI) to the cell walls, after which the root reductase enzyme reduces Cr(VI) to Cr(III). Increased Cr concentrations cause various kinds of damage to plants, including limited root growth, leaf damage, and reduced biomass [22,23].
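The bioconcentration factor (BCF) and translocation factor (TF) invoked in this study follow the standard definitions, BCF = C_plant/C_soil and TF = C_shoot/C_root; a minimal sketch, with hypothetical concentrations (the measured values are in the paper's tables, not reproduced here):

```python
def bioconcentration_factor(c_plant, c_soil):
    """BCF = metal concentration in the plant part / concentration in soil.
    BCF > 1 indicates the plant accumulates the metal from the soil."""
    return c_plant / c_soil

def translocation_factor(c_shoot, c_root):
    """TF = shoot (stem/pseudostem) concentration / root concentration.
    TF > 1 indicates efficient root-to-shoot translocation."""
    return c_shoot / c_root

# Hypothetical values (mg/kg): soil 19.9, root 3.0, pseudostem 1.2
print(bioconcentration_factor(3.0, 19.9), translocation_factor(1.2, 3.0))
```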
Health Risk
A health risk assessment for the banana trees planted in the effluent leachate contamination area of the Jatibarang landfill is needed, since they are often consumed by locals. The calculated DIM and HRI values are shown in Table 2. The results show that bananas planted in the effluent leachate contamination area of the Jatibarang landfill are safe for consumption. Nonetheless, the WWTP of the Semarang landfill must continue to be monitored, since the safety of consuming banana plant parts such as the root, stem, and pseudostem grown in this area relies on the capability of the banana tree to exclude the toxicant. Other crops planted in this contaminated area must also be monitored, since each plant has a different capability to exclude toxicants, especially Cr. This is supported by the HRI values below 1; the capacity for heavy metal absorption depends on the plant variety and can be changed by human and environmental factors [19].
Conclusion
The presence of Cr heavy metal in the effluent leachate is connected with the contamination of hazardous and toxic waste materials (used batteries, woven fabrics, used packaging, iron, and steel) in the landfill, which releases Cr and other heavy metals into the Kreo river. This study shows that the leachate and soil in the effluent leachate contamination area of the Semarang landfill exceed the environmental safety quality standards. The entry of leachate into the environmental system causes the accumulation of metals in the soil, which becomes a potential problem when it enters the human body through the food chain. Leachate discharged into the Kreo river must be monitored continuously to prevent it from polluting the environment. Bananas have the ability to extract Cr from contaminated land (phytoextraction) and to transpire the contaminant, vaporizing Cr into the atmosphere as a harmless material (phytovolatilization). This study confirms that banana plant parts such as the root, stem, and pseudostem are safe for human consumption. Plants that grow on contaminated land need to be assessed for safety if they are to be used by humans. Risk-management-based monitoring is needed so that the ecosystem balance is maintained [24]. | 2020-07-09T09:12:08.827Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "470da5622a57ad376b6408ac497fbea91398bf97",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1567/4/042058",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "fa96c6dca09fc293ec51bf8de50d3245aa97ccf1",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Environmental Science"
]
} |
231595276 | pes2o/s2orc | v3-fos-license | Metal-Organic Frameworks: Synthetic Methods and Potential Applications
Metal-organic frameworks represent a porous class of materials that are built up from metal ions or oligonuclear metallic complexes and organic ligands. They can be considered a sub-class of coordination polymers and can be extended in one, two, or three dimensions. Depending on the size of the pores, MOFs are divided into nanoporous, mesoporous, and macroporous items; the latter two are usually amorphous. MOFs display high porosity, a large specific surface area, and high thermal stability due to the presence of coordination bonds. The pores can incorporate neutral molecules such as solvent molecules, anions and cations (depending on the overall charge of the MOF), gas molecules, and biomolecules. The structural diversity of the framework and the multifunctionality of the pores render this class of materials candidates for a plethora of environmental and biomedical applications and also as catalysts, sensors, piezo/ferroelectric, thermoelectric, and magnetic materials. In the present review, the synthetic methods reported in the literature for preparing MOFs and their derived materials, and their potential applications in the environment, energy, and biomedicine, are discussed.
Introduction
Metal Organic Frameworks (MOFs) constitute a class of solid porous materials, which consist of metal ions or metallic clusters, acting as nodes, and polydentate organic ligands, acting as linkers between the nodes. The metal nodes act as connection points and the organic ligands bridge the metal centers through coordination bonds, thus forming networks in one, two, or three dimensions. The main structural features of MOFs, which are directly related to their properties and applications, are the high porosity, the large volume of the pores, which can reach 90% of the crystalline volume or more, the large specific surface area (several thousand m2·g−1), and the high thermal stability (250-500 °C) due to the presence of strong bonds (e.g., C-C, C-H, C-O, and M-O). An important sub-class of MOFs are the Isoreticular Metal Organic Frameworks (IRMOFs), which were first synthesized by the group of Yaghi [1]. The archetype IRMOF-1 was based on octahedral Zn-O-C clusters and 1,4-benzenedicarboxylic acid (BDC) bound to form a network with pcu topology. The series of IRMOFs retain the pcu topology and are based on varied organic linkers, resulting in variable pore volumes and surface areas. According to the terminology officially adopted by IUPAC in 2013 [2], MOFs are a sub-class of coordination networks (i.e., coordination compounds which extend in one, two, or three dimensions through repeating coordination entities), which are in turn a sub-class of coordination polymers (i.e., coordination compounds with repeating coordination entities extending in 1, 2, or 3 dimensions, which need not be crystalline). MOFs are dynamic systems susceptible to structural changes upon external stimuli, such as temperature and pressure, and may not be crystalline.
The chemistry of MOFs has evolved rapidly in recent decades, and it has become possible to adjust the size and shape of the pores, the network topology, and the surface area. The present review presents the synthetic methods used to prepare MOFs of various dimensionality and porosity, and outlines their potential applications in the adsorption of many compounds, such as biologically important compounds (drugs, antibiotics, etc.), toxic pollutants, and gases, in electrochemical energy storage systems and sensors, as catalysts and electrocatalysts, and as efficient drug delivery carriers.
Synthesis of MOFs
The synthesis of MOFs is determined by many factors related to the reaction time and temperature, the solvent used, the nature of the metal ions and the organic ligands, the size of the nodes and their structural characteristics, the presence of counterions, and the kinetics of the crystallization, which should lead to nucleation and crystal growth. In most cases, the synthesis of the MOFs is performed in the liquid phase by mixing solutions of the ligand and the metal salt. The choice of the solvent is based on its reactivity, solubility, and redox potential. The solvent also plays an important role in determining the thermodynamics and activation energy for each reaction. In some cases, solid state synthetic methods have been used even though difficulties in single crystal growth have been encountered. Slow evaporation of the reaction solution has been used very often to grow crystals of MOFs. In most cases, MOFs are synthesized under solvo(hydro)thermal conditions at a high temperature and pressure. This is the 'classic' method for preparing MOFs. Other alternative synthetic methods, such as mechanochemical, electrochemical, microwave, and sono-chemical methods, have been developed in recent years. These methods are low cost, faster, and yield cleaner products ( Figure 1) [14,15].
Slow Evaporation and Diffusion Methods
Both methods are performed at room temperature and do not need an energy supply. In the slow evaporation method, solutions of the reagents are mixed and left to evaporate slowly, and crystals form when a critical concentration is reached that favors nucleation and crystal growth. Mixtures of low boiling point solvents are often used to accelerate the process [16,17]. In the diffusion method, solutions of the reagents are placed one on top of the other, separated by a layer of solvent, or are gradually diffused across physical barriers. In some cases, gels are used as crystallization and diffusion media. Crystals form at the interface between the layers after the gradual diffusion of the precipitant solvent into the separate layer [8]. The diffusion technique is used especially when the products are not very soluble. MOF-5 (IRMOF-1), with formula {[Zn4O(BDC)3]·(dmf)8(C6H5Cl)}n (BDC2− = 1,4-benzenedicarboxylate), was prepared by diffusion of Et3N into a solution of Zn(NO3)2 and H2BDC in dmf/chlorobenzene, with the addition of a small amount of hydrogen peroxide to facilitate the formation of the O2− bound at the center of the SBU [18].
Solvo(Hydro)-Thermal and Iono-Thermal Method
Solvo(hydro)thermal reactions are carried out in closed vessels under autogenous pressure above the boiling point of the solvent [19]. Most of the MOFs reported so far have been synthesized under solvo(hydro)thermal conditions [20-24]. The reactions are usually carried out in polar solvents in closed vessels (autoclaves) at temperatures in the range of 50-260 °C and require long periods (hours and sometimes days). Teflon-lined autoclaves are used for reactions at high temperatures above 400 °C. The temperature of the reactions may be increased in order to facilitate bond formation, especially if kinetically inert ions are used, and to ensure proper crystallization. The temperature also affects the morphology of the crystals, while prolonged reaction times may lead to decomposition of the final product [25,26]. The cooling rate should be very slow, as it affects crystal growth. High boiling point solvents are most often used; the most common are dimethylformamide (dmf), diethylformamide (def), MeCN, MeOH, EtOH, H2O, Me2CO, or their mixtures. Under solvo(hydro)thermal conditions, the initial reagents may undergo unexpected chemical transformations, which are not achieved under milder synthetic conditions, leading to new ligands formed in situ.
The ionothermal synthesis is based on the use of ionic liquids as solvents and templates and can be considered as a subclass of solvo(hydro)thermal methods. Ionic liquids are environmentally-friendly reagents, compared to conventional organic solvents, because of their low vapor pressure, high solubility for organic molecules, high thermal stability, and nonflammability, which makes them excellent reagents for the synthesis of MOFs as well as other classes of materials (i.e., zeolites and chalcogenides). Ionic liquids also offer both anions and cations as counterions and/or as templates for the frameworks of MOFs, and have been widely explored in recent years as alternatives for the synthesis of MOFs [27].
Microwave-Assisted Method
The method is often used for the synthesis of organic and nanoporous inorganic materials [28]. More recently, the method has been used for the synthesis of metal clusters [29] and MOFs [30,31]. The advantages of the method are the short reaction time required, the high yield, and the low cost. The microwave-assisted synthesis of HKUST-1, with formula [Cu3(BTC)2(H2O)3] (BTC3− = 1,3,5-benzenetricarboxylate), gave crystals with improved yield and physical properties in a much shorter reaction time than its conventional hydrothermal synthesis [32]. Although the technique itself cannot produce large single crystals, the microwaves facilitate the motion of the molecules, leading to nucleation and the formation of crystals with controlled shape and size when the concentration and the temperature of the reaction are appropriately adjusted [33].
Mechanochemical Method
The method uses mechanical forces, instead of a solvent, at room temperature, to form coordination bonds, by either manual grinding of the reagents or, more often, in automatic ball mills. In some cases, a small amount of solvent may be added to the solid reaction mixture; this approach has yielded one-dimensional, two-dimensional, and three-dimensional coordination polymers [34]. The mechanochemical method facilitates mass transfer, reduces particle size, and heats and locally melts the reagents, thus accelerating the reaction. It constitutes an environmentally friendly green chemistry method, which produces materials of high purity in high yield at short reaction times [35]. The application of mechanochemistry to the synthesis of MOFs is additionally attractive because it is an alternative to high-temperature, high-pressure solvo(hydro)thermal synthesis. The biggest disadvantage of the method is the isolation of amorphous products, unsuitable for single-crystal X-ray structural studies.
Electrochemical Method
The method is used for the synthesis of MOF powders on an industrial scale. The metal ion is provided by anodic dissolution into reaction mixtures that contain the organic ligands and electrolytes. The major advantages of this method are the lower reaction temperatures and the extremely fast synthesis under milder conditions compared to the solvothermal method. Several MOFs, such as HKUST-1, ZIF-8, MIL-100(Al), MIL-53(Al), and NH2-MIL-53(Al), have been synthesized by this method in an electrochemical cell, and the influence of several reaction parameters on their yield and texture properties has been investigated [36].
Sonochemical Method
Sonochemistry deals with the chemical transformations of molecules under high-energy ultrasonic radiation (20 kHz-10 MHz). The bubbles formed when a reaction solution is irradiated with ultrasound create local hot spots of short lifetime with high temperature and pressure, which promote chemical reactions and the immediate formation of crystallization nuclei [37-39]. High-quality crystals of MOF-5 and MOF-177, with sizes of 5-25 µm and 5-20 µm, respectively, were prepared via the sonochemical method in the presence of 1-methyl-2-pyrrolidone as solvent, in a substantially reduced reaction time [40,41].
Microemulsion Method
The method is widely used in the preparation of nanoparticles and has recently been used to synthesize MOFs [42,43]. Water microemulsions contain nanometer-sized water droplets immobilized by a surfactant in the organic phase. The micelles of the microemulsions act as nanoreactors and control the kinetics of nucleation and crystal growth. The size and number of micelles in the microemulsion can be adjusted by varying the water-to-surfactant ratio and the type of surfactant. The method is advantageous because the dimensions of the nanoscale materials can be controlled, while the major disadvantages are the high cost and the fact that most of the surfactants used are environmental pollutants.
Post-Synthetic Modification
The method involves the introduction of desired functional groups into the MOFs after their synthesis (PSM, Post-Synthetic Modification) and is essentially a process of chemical transformation of the MOFs after their isolation. The method has been widely used to prepare isostructural MOFs with different physical and chemical properties [44-47]. For example, IRMOF-3, containing 2-amino-1,4-benzenedicarboxylic acid, can undergo chemical modification with a diverse series of anhydrides and isocyanates, yielding isostructural MOFs containing different functional groups [48]. Post-synthetic modification can involve the replacement of the primary structural units of the MOFs (BBR, Building Block Replacement), including solvent-assisted linker exchange (SALE) [49], replacement of the non-bridging ligands, and replacement of the metal nodes. Complete exchange of the organic ligands can occur during the SALE process, adding different functionality to the MOF. BBR reactions involve the heterogeneous exchange of ligands or metal ions by breaking and forming chemical bonds within the original MOF [50]. BBR methods are used when the direct synthesis of the desired MOF is not achieved, functionalizing the pores or nodes within the MOFs and affording or enhancing desired functional properties such as catalysis, selective gas adsorption, redox activity, and ionic conductivity. BBR reactions are observed only on or near the outer surface of MOF crystals [51-53]. Post-synthetic modification reactions can create defects in the MOFs, either by missing or replaced metal nodes or by missing or partially replaced organic linkers. Such defects can also be generated during the conventional synthesis of MOFs and during the crystallization process and crystal growth [54]. Mixed-metal MOFs, containing at least two metal ions in their framework, can be prepared by post-synthetic methods, as well as by one-pot methods or by using metalloligands, and possess new properties and activities due to the presence of the second metal ion [55]. Structural defects and inhomogeneities are often related to important material properties, and, hence, defect engineering has been effectively applied in order to modify and functionalize MOFs for applications in catalysis, gas sorption, separation and storage, and luminescent and magnetic materials.
Template Strategies
The use of template molecules in the reaction mixture can lead to novel MOFs, which are difficult to obtain by traditional synthetic methods [56]. The template molecules that have been widely used are small organic molecules, including organic solvents, organic amines, carboxylic acids, N-heterocyclic aromatic compounds, ionic liquids, surfactants, and other organic molecules. Each of these classes of organic compounds affects the synthesis and crystallization of the MOF differently: the solvent polarity and solubility affect the crystallization of the MOF, the organic amines adjust the pH of the reaction solution and facilitate the deprotonation of the organic ligands, carboxylate compounds act as ligands to the metal centers and can fill the pores of the MOF, aromatic heterocyclic compounds act as counterions when protonated and as weak organic bases, ionic liquids act as solvents and counterions, and surfactants form micelles in solvents, which determine the shape and size of the MOFs. Other molecules which may act as templates are coordination compounds (e.g., [Ru(2,2′-bipy)3]2+), polyoxometalates, block co-polymers, MOFs, polystyrene spheres, substrates such as graphene oxide, and, rarely, biomacromolecules. The template synthesis strategy is used for the preparation of hierarchical porous materials, with mesoporous and microporous channels for hosting large molecules such as proteins and enzymes. However, the most common synthetic approach for hierarchical MOFs is the reticular chemistry strategy, using ligands of extended length to obtain MOFs of the same topology but with a variable pore size.
Applications of MOFs
MOFs display a range of structural features, namely large surface area, high porosity, crystallinity, thermal stability, and functionality of the pores and frameworks, which render them promising materials for environmental and biomedical applications, as catalysts and sensors, and as adsorbents for toxic gases and metal ions.
Gas Adsorption/Separation/Storage for Energy and Environmental Applications
MOFs have been extensively studied for applications in gas storage. For example, H2 and CH4 represent alternative energy resources for future vehicles, and their effective usage still remains a challenge for the automotive industry. The capture of toxic industrial gases, such as NH3 and H2S, and volatile hydrocarbons, like benzene, as well as the removal of SO2 and NOx from flue gas, are of great importance for environmental protection. A very critical step in the chemical industry is the separation of gas mixtures, such as CO2 capture and CO2/CH4 and CO2/N2 separation, O2 purification, and so on. CO2 is the main greenhouse gas and is responsible for global warming and for water acidification. MOF-74-Mg, the magnesium analogue of MOF-74, shows the highest CO2 uptake capacity of 228 and 180 cm3·g−1 at 273 and 298 K and 1 bar, respectively (Figure 2) [57]. The exceptional CO2 uptake by MOF-74-Mg is attributed to the increased ionic character of the Mg-O bond, which imparts additional uptake beyond weight effects while maintaining the reversibility of adsorption. MOF-210 has a very high surface area of 10,450 m2·g−1 and shows a CO2 uptake value of 2400 mg·g−1 (74.2 wt%, 50 bar at 298 K), which is larger than that of MOF-177 or MIL-101(Cr) (60 wt% and 56.9 wt%, respectively) [58-60]. MOF-200 has a similar CO2 uptake to MOF-210 under similar conditions. Other MOFs which show considerably higher CO2 uptake compared with other solid materials are NU-100 (69.8 wt%, 40 bar at 298 K), MOF-5 (58 wt%, 10 bar at 273 K), and HKUST-1 (19.8 wt%, 1 bar at 298 K). Synthetic strategies for the preparation of MOFs with efficient CO2 uptake capacity have been developed and include amine incorporation, introduction of functional groups and additional metal ions, and control of pore size. The method used in industry for CO2 separation is amine scrubbing, which is highly energy consuming and presents disadvantages such as amine degradation and equipment corrosion. Alternatively, MOFs incorporating amines have been examined as potential candidates for CO2 separation. For example, incorporation of N,N′-dimethylethylenediamine (mmen) within the [Mg2(dobpdc)] MOF (dobpdc4− = 4,4′-dioxido-3,3′-biphenyldicarboxylate) afforded [mmen-Mg2(dobpdc)], which displays an exceptional capacity for CO2 adsorption at low pressures, 2.0 mmol·g−1 (8.1 wt%) at 0.39 mbar and 25 °C and 3.14 mmol·g−1 (12.1 wt%) at 0.15 bar and 40 °C, conditions relevant to the removal of CO2 from air and flue gas, respectively [61]. In addition, [en-Mg2(dobpdc)] and [dmen-Mg2(dobpdc)] (en = ethylenediamine, dmen = N,N-dimethylethylenediamine) display significant CO2 uptakes (3.63 and 3.77 mmol·g−1, respectively) at 0.15 bar [62]. Bio-MOF-11, [Co2(ad)2(CH3CO2)2]·2dmf·0.5H2O (ad− = adeninate), contains pyrimidine and amino groups within the pores of the framework and exhibits a high CO2 capacity (~6 mmol·g−1 at 273 K) and exceptional selectivity for CO2 over N2 at 273 K (81:1) and 298 K (75:1) [63]. MOFs functionalized with low-molecular-weight polymers containing amino groups, such as PEI (polyethyleneimine), have shown impressive CO2 uptakes, many times larger than those of the respective parent MOFs.
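Since the uptakes above are quoted interchangeably in cm3(STP)·g−1, mmol·g−1, mg·g−1, and wt%, a small conversion helper is useful. A minimal sketch; note that wt% conventions differ between papers (per sorbent mass or per total mass), so both are provided:

```python
MOLAR_VOLUME_STP = 22414.0   # cm^3 of ideal gas per mol at STP
M_CO2 = 44.01                # g/mol

def cm3stp_to_mmol(v_cm3_per_g):
    """Convert a gas uptake from cm^3(STP)/g to mmol/g."""
    return v_cm3_per_g / MOLAR_VOLUME_STP * 1000.0

def mmol_to_wt_percent(n_mmol_per_g, per_total_mass=False):
    """Convert mmol/g to wt%. Conventions differ: some papers report
    100*g(gas)/g(sorbent), others 100*g(gas)/g(sorbent+gas)."""
    g_per_g = n_mmol_per_g * M_CO2 / 1000.0
    return 100.0 * (g_per_g / (1.0 + g_per_g) if per_total_mass else g_per_g)

# MOF-74-Mg: 228 cm^3/g at 273 K and 1 bar
n = cm3stp_to_mmol(228.0)
print(f"{n:.2f} mmol/g, {mmol_to_wt_percent(n):.1f} wt% (per sorbent mass)")
```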
For example, PEI-modified MIL-101 samples (PEI-MIL-101-125) display CO2 uptakes of 3.95 and 4.51 mmol·g−1, over four times that of MIL-101-125, and PEI@UiO-66 shows a CO2 uptake of up to 1.65 mmol·g−1 and a CO2/CH4 selectivity of 111, much larger values than those of UiO-66 [64,65]. Yaghi and co-workers reported the functionalization of the organic ligand of IRMOF-74-III with a primary amine through ligand modification, yielding six analogues with different functional groups (-CH3, -NH2, -CH2NHBoc, -CH2NMeBoc, -CH2NH2, and -CH2NHMe). Spectroscopic data revealed that CO2 binds chemically to IRMOF-74-III-CH2NH2 and IRMOF-74-III-CH2NHMe to form carbamic species. The CO2 uptake of IRMOF-74-III-CH2NH2 is 3.2 mmol·g−1 at 800 Torr and 298 K [66]. The introduction of polar functional groups into the pores of MOFs through direct synthesis or post-synthetic modification has proved an efficient method to enhance the adsorption capacity and selectivity for CO2. For example, UPC-12 exhibits high selectivity for CO2 due to the formation of H-bonds between the CO2 molecules and the -COOH groups within the pores and the π-π stacking interactions between the CO2 molecules and the bpy moieties of the MOF [67]. NbO-type MOFs retain the NbO-type structure upon functionalization with different groups, such as amide and nitro groups and N-heterocycles, and display higher CO2 uptake than the parent MOFs [68-72]. The ligand-functionalized UiO-66, UiO-66(Zr)-(COOH)2, shows a high CO2/N2 selectivity of 56 for a 15/85 CO2/N2 gas mixture at 303 K and 1 bar, and the UiO-67-derived functionalized BUT-10 and BUT-11 show enhanced CO2 adsorption uptakes (50.6 and 53.5 cm3·g−1, respectively) and separation selectivity over N2 and CH4 (18.6 and 31.5 for a 15/85 CO2/N2 gas mixture, and 5.1 and 9.0 for a 10/90 CO2/CH4 gas mixture) [73,74]. Control of the pore size of the MOFs allows inclusion of smaller guests (e.g., CO2, 3.30 Å) and enables ultra-high selectivity; however, the precise control of pores with a size of 3-4 Å is very difficult. Among the isoreticular MOFs SIFSIX-2-Cu, SIFSIX-2-Cu-i, SIFSIX-3-Zn, and SIFSIX-3-Cu, the most efficient CO2 capture is shown by the latter, which exhibits the smallest pore size [75,76]. The approach of the 'Single Molecule Trap' (SMT) for the capture of a single CO2 molecule was developed by Zhou and co-workers, who prepared paddlewheel dicopper complexes (SMT-1, SMT-2) with an intramolecular metal-metal distance of 7.4 Å, suitable for the accommodation of one CO2 molecule. Incorporation of SMT-1 into the 3D framework of PCN-88 enhanced the CO2 uptake to 4.20 mmol·g−1 at 296 K and 1 bar, with respect to 0.63 mmol·g−1. Light hydrocarbon separation is a very important and crucial process in the petroleum industry, and efficient separation will reduce energy consumption and cost. Ethylene (C2H4) and propylene (C3H6) are used in the production of polymers. During the production of C2H4, an impurity of ~1% of C2H2 is also produced. The microporous MOF [Cu(atbdc)] (H2atbdc = 5-(5-amino-1H-tetrazol-1-yl)-1,3-benzenedicarboxylic acid), UTSA-100, displays high C2H2/C2H4 selectivity and high C2H2 uptake from mixtures containing 1% acetylene. At 296 K and 1 atm, the acetylene and ethylene uptakes of UTSA-100 are 95.6 and 37.2 cm3·g−1, respectively, much higher than those of M'MOF-3 [84,85].
Other examples of MOFs exhibiting high C2H2 uptake are UTSA-300 (3.41 mmol·g−1 at 273 K and 1 bar) [86], SIFSIX-1-Cu (8.50 mmol·g−1 at 298 K and 1 bar) [87], [Mn3(bipy)3(H2O)4][Mn(CN)6]·2(bipy)·4H2O (3.2 mmol·g−1 at 273-283 K and 1 bar) [88], and NOTT-300 (6.34 mmol·g−1 at 293 K and 1 bar) [89]. In addition, the raw propylene (C3H6) product contains a trace impurity of propyne (C3H4), which is highly undesirable. Chen and co-workers reported a flexible-robust MOF, ELM-12, which shows strong binding affinity and suitable pore confinement for propyne, and obtained propylene with a purity over 99.9998%, i.e., with the propyne impurity removed to a concentration below 2 ppm [90]. The separation of C2H4 and C3H6 from their mixtures with the respective alkanes by distillation shows low efficiency because of the similarity of their boiling points. Their separation can alternatively be achieved by the formation of π-complexes of the olefins with transition metal cations [91,92], and by using MOFs such as KAUST-7, which contains channels that allow the adsorption of C3H6 but do not permit C3H8 to diffuse/adsorb into the pore system [93,94]. Another important and difficult process in the chemical industry is the separation of benzene and cyclohexane, as well as C2H2/CO2 separation. Conventional distillation is highly energy consuming; therefore, alternative methods have been developed involving the use of suitable MOFs containing open metal sites, or introducing π-π stacking interactions between the π-electron-deficient pore surface and π-rich guest molecules [95-98].
Alternative technologies for H2 storage and its potential use as a renewable fuel for vehicle applications were explored extensively during the early years of MOF development. MOFs have also been examined for the removal of hazardous and toxic species produced by coal combustion and refinery processes, such as CO, NH3, NO2, SO2, H2S, benzene, etc. Besides their toxic pollutant character, these materials are important in the chemical industry as sources for the production of commodity chemicals. A Cu(I)-loaded MOF, Cu(I)@MIL-100(Fe), shows a CO adsorption capacity of 2.78 mmol·g−1 at 298 K and 1 bar, about seven times that of MIL-100(Fe), and a CO/N2 adsorption selectivity of 169, due to strong π-complexation between Cu(I)@MIL-100(Fe) and CO [110]. Defect-engineered MOFs [Ru3(btc)2−x(pydc)xXy] (X = Cl, OH, OAc; x = 0.1, 0.2, 0.6, 1.0; 0 ≤ y ≤ 1.5; H3btc = benzene-1,3,5-tricarboxylic acid; H2pydc = pyridine-3,5-dicarboxylic acid), derived by incorporation of the organic ligand H2pydc into the framework of the mixed-valence Ru(II/III) MOF [Ru3(btc)2Cl1.5], display a total CO uptake of up to 3.88 mmol·g−1, 2-3 times larger than that of the parent MOF [111]. NH3 is among the industrial chemicals of highest toxicity, and the development of materials for its adsorptive removal from air is of high importance. Dincă and co-workers reported MOFs with NH3 uptakes of 15.47, 12.00, and 12.02 mmol·g−1 at 298 K and 1 bar [112]. Lan and co-workers reported the removal of carcinogenic benzene by the isoreticular MOFs NENU-511, NENU-512, NENU-513, and NENU-514 with uptakes of 1556, 1519, 1687, and 1311 mg·g−1, respectively [113]. Zr(IV)-based MOFs, such as UiO-66-NH2, urea-modified UiO-66, and UiO-66-ox, have been widely investigated for the removal of NO2 due to their high chemical stability [114-116]. [M(bdc)(ted)0.5] (M = Ni, Zn; bdc = 1,4-benzenedicarboxylate; ted = triethylenediamine), NOTT-202a, MFM-300(In), and SIFSIX-1-Cu are among the MOFs examined for the removal of residual SO2 in flue gas, a process of fundamental importance because traces of SO2 (500-3000 ppm) are produced by coal combustion along with CO2 (10-12%) and react with the organic amines used during the removal of CO2 in the scrubbing process, causing permanent loss of amine activity and decreasing the efficiency of the process [117-120]. The removal of H2S from refinery off-gases and natural gas is necessary in order to avoid poisoning of the gases and of the catalysts involved in the subsequent utilization of H2 and CH4. Prominent examples of MOFs proposed for H2S removal are Ga-soc-MOF, rare-earth-based MOFs with fcu topology, kag-MOF-1, and composites containing Cu-BTC and S-doped or N-doped graphite oxides [121-124].
Sensing Applications
MOFs are especially attractive as novel sensing materials because they display a high surface area, which enhances detection sensitivity; specific structural features (open metal sites, tunable pore sizes, etc.), which promote host-guest interactions and selectivity; and flexible porosity, which enables reversible release and uptake of small molecules, cations and anions, biomolecules, and so on. The guest molecules can induce visible changes, including a shift of the emission spectrum or a change in the emission color, and changes in the fluorescence intensity, such as 'turn-on' and 'turn-off' processes.
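'Turn-off' quenching responses of the kind described below are commonly quantified with the Stern-Volmer relation, I0/I = 1 + Ksv[Q]; this is standard practice rather than something stated in this review. A minimal fitting sketch with hypothetical data:

```python
import numpy as np

def stern_volmer_constant(conc, intensity, i0):
    """Fit the Stern-Volmer relation I0/I = 1 + Ksv*[Q] by least squares
    and return Ksv (in inverse concentration units).

    conc      -- quencher concentrations [Q]
    intensity -- measured emission intensities I at each concentration
    i0        -- emission intensity without quencher"""
    conc = np.asarray(conc, dtype=float)
    y = i0 / np.asarray(intensity, dtype=float) - 1.0
    # Linear fit through the origin: Ksv = sum(x*y)/sum(x*x)
    return float(conc @ y / (conc @ conc))

# Hypothetical quenching data for a 'turn-off' sensor
c = [0.0e-6, 2.0e-6, 4.0e-6, 8.0e-6]   # mol/L
i = [1000.0, 820.0, 700.0, 540.0]      # arbitrary units
print(f"Ksv = {stern_volmer_constant(c, i, i0=1000.0):.3g} L/mol")
```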
A thermostable Mg-based MOF, [Mg(pdda)(dmf)] (H2pdda = 4,4′-(pyrazine-2,6-diyl)dibenzoic acid), which contains nanoholes and non-coordinating nitrogen atoms inside the walls of the holes, displays high selectivity for Eu3+ ions at low concentrations in aqueous solutions [125]. A luminescent Ln-MOF, [Me2NH2][Tb(bptc)] (H4bptc = biphenyl-3,3′,5,5′-tetracarboxylic acid), exhibits rare chiral helical channels despite the achiral nature of the organic ligand. Luminescence studies showed a highly selective fluorescence quenching response to Fe3+ ions in a liquid suspension, rendering it a potential chemosensor for Fe3+ ions [126]. A bimetallic Eu-Tb MOF with 1,4-benzenedicarboxylate ligands showed Pb2+ selectivity in polluted environmental waters. The color of the luminescent Ln-MOFs could be fine-tuned from green to red by doping the MOFs with different Tb/Eu ratios, and, in the presence of Pb2+, the emission color of the MOFs changes from red-orange to green, which is observable by the naked eye [127]. A Cd-MOF, [Cd(edda)] (H4edda = 5,5′-[ethane-1,2-diylbis(oxy)]diisophthalic acid), exhibits a ratiometric fluorescence response to Hg2+ for the first time, with a fast response (~15 s) and especially high sensitivity of ~2 nM, below the permissible limits in drinking water set by the U.S. Environmental Protection Agency. This behavior is attributed to the collapse of the crystal structure of the Cd-MOF induced by Hg2+ [128]. MOFs have proven to be very promising materials for uranium extraction during radionuclide separation and seawater mining, due to their amenability to post-synthetic grafting with functional groups with strong affinity for uranium ions and to porous functionalization for the storage of hydrated U(VI) ions. HKUST-1, UiO-66, MILs, and ZIF-8 display stability under gamma irradiation. The phosphoryl-urea-functionalized UiO-68(Zr) MOF was the first organo-modified MOF that exhibited uranium extraction behavior. Examples of phosphonate-functionalized, amidoxime-functionalized, and amine-functionalized MOFs, among others, display adsorption capacities of up to 360 mg·g−1 [129].
Volatile organic molecules and explosive compounds can be efficiently detected by MOFs based on guest-dependent luminescent responses either by shifting of the emission spectrum or by changes in the luminescent intensity [130][131][132].
Various MOFs can detect anionic species with high selectivity and sensitivity [140]. ZnO quantum dots on MOF-5 constitute an effective fluorescent sensing platform for phosphates, tested for the assessment of phosphates in environmental aqueous samples [141], and CdSe/CdS/Cd0.5Zn0.5S/ZnS quantum dots on MOF-5 (the QD@MOF-5 composite) display size-selective thiol sensing [142]. MOF-based sensors for humidity measurements have been studied based on changes of fluorescence or electrochemical signals, such as a Cu-MOF, a thin film of HKUST-1, a Cu-BTC film, and the amine-functionalized MOF nanoparticles NH2-MIL-125(Ti) [143-147]. pH and temperature sensors based on luminescent MOFs have been extensively studied for monitoring pH changes in biological environments and as luminescent thermometers. Core-shell nanocomposites with an MOF core have been developed for sensing biological molecules, such as human serum albumin, bacterial endospores, and cancer cell apoptosis [152-154]. Luminescent MOFs are also successful in detecting DNA, RNA, proteins, and other biomolecules and present advantages over other sensing materials for biomolecules (e.g., single-walled carbon nanotubes, graphene oxide, carbon nanoparticles, gold nanoparticles), such as structural diversity, high sensitivity, and biodegradability. Biocompatible and non-toxic metal clusters need to be developed in order for in vivo sensing to be realized [155].
MOFs and their derived materials are suitable for the construction of electrochemical sensors. The water-stable Cu MOF [Cu2(HL)2(OH)2(H2O)5]·H2O (H2L = 3,4-ethylenedioxythiophene-2,5-dicarboxylic acid) was used to construct an electrochemical sensor for the simultaneous detection of ascorbic acid and L-tryptophan [156]. Composites containing carbon spheres and Al-MIL-53-(OH)2 MOFs on Nafion polymer were used to modify a glassy carbon electrode for the construction of a dopamine sensor. The dopamine signals were enhanced due to the good electrical conductivity and large surface area of the MOF nanocomposite and the film-forming ability of Nafion [157]. Cu-BTC MOFs electrodeposited onto a glassy carbon electrode and modified by graphene oxide were used to construct an electrochemical sensing platform for 2,4,6-trinitrophenol (TNP). The sensor can detect TNP in the presence of other nitrophenols due to the high electrical conductivity and high electrocatalytic activity of the nanocomposite [158]. The first example of an electrochemiluminescence (ECL)-active Ru/Zn MOF shows high stability and high ECL due to the large electron transfer of the reaction system, and was used to construct an ECL sensor for cocaine in serum samples [159]. A turn-on ECL immunosensor for the detection of N-terminal pro-B-type natriuretic peptide (NT-proBNP) was based on MOFs consisting of zinc and tris(4,4′-dicarboxylic acid-2,2′-bipyridyl)ruthenium(II) dichloride combined with the antibodies. The MOFs can enhance the loading of the ECL probe, [Ru(dcbpy)3]2+, and improve the loading of NT-proBNP-specific antibodies [160]. Recently, field-effect transistor (FET) sensors based on MOFs and their derived materials have been developed for practical applications. FET sensors consist of a source and a drain electrode, both of which contact a semiconductor layer. For example, a molecularly imprinted polymer (MIP) film in the presence of MOF-5 was used to construct an FET sensor for the detection of recombinant human neutrophil gelatinase-associated lipocalin [161]. Quartz crystal microbalance (QCM) sensors and piezoelectric sensors based on MOFs have also been developed for the detection of small organic molecules (e.g., MeOH, EtOH, MeCN, Me2CO). For example, KAUST-7 (NbOFFIVE-1-Ni) and KAUST-8 (AlFFIVE-1-Ni) were used in a QCM sensor for SO2, a Cu-BTC/polyaniline nanocomposite in a QCM-based hydrogen sensor, and MIL-101(Cr) in a QCM-based pyridine sensor [162-164]. The MOF [Mn5(NH2bdc)5(bimb)5]·(H2O)0.5 (NH2bdcH2 = 2-amino-1,4-benzenedicarboxylic acid, bimb = 4,4′-bis(1-imidazolyl)biphenyl) displays typical ferroelectric behavior, suggesting that MOFs can potentially be applied in the construction of piezoelectric sensors [165].
Catalytic Applications
MOFs have been extensively used as heterogeneous catalysts for the synthesis of fine chemicals, which are extremely important in the chemical industry. The properties that render MOFs suitable heterogeneous catalysts are their robust nature, required for catalysis under extreme conditions; their porosity and large surface area, which facilitate catalytic activity; the presence of pores and channels, needed for catalytic selectivity; and the organic ligands, which can tune the catalytic reactivity and selectivity. The catalytically active sites of MOFs may be the metal nodes, the functionalized ligands, and the pores of the structure. The synthesis of fine chemicals is most commonly realized through oxidation reactions (e.g., epoxidation, sulfoxidation, aerobic oxidation), 1,3-cycloaddition reactions, transesterification reactions, C-C bond formation reactions (e.g., the Heck reaction, Sonogashira coupling, and Suzuki coupling), and hydrogenation reactions of unsaturated organic molecules. MOFs as heterogeneous catalysts may act as Lewis acids, through the metal ions or metal nodes as well as the organic ligands, or as supports for the moieties that carry the oxygen or the noble metals necessary for the catalytic reaction. The zinc of MOF-5 was partially substituted with manganese, and the bimetallic MnFe-MOF-74 was used for the epoxidation of alkenes with high selectivity (up to 99%, Figure 5) [166,167]. Composites of metal complexes immobilized on MOFs can also act as Lewis acids in epoxidation reactions, such as post-synthetically modified NH2-MIL-101(Cr), UiO-66 and UiO-67 post-synthetically modified with a salicylaldehyde molybdenum complex, and copper-functionalized UiO-66 [168-170]. The aerobic oxidation of alcohols to aldehydes or ketones requires the presence of noble metals inside the pores of the MOF or attached to the modified ligands; palladium and gold nanoparticles introduced into nanoporous MOFs are used for selective aerobic oxidation [171,172]. Cu-based MOFs are usually used as catalysts for 1,3-dipolar cycloaddition reactions, i.e., the formation of five-membered ring compounds [173,174]. Several examples of MOF catalysts in transesterification reactions have been reported, such as UiO-66 and UiO-67 [175,176]. C-C bond formation reactions, such as Heck reactions, Sonogashira coupling, and Suzuki coupling, are extremely important for organic synthesis and require palladium or palladium nanoparticles as catalysts, which are incorporated in the pores or attached to the functionalized organic ligands. Examples include palladium complexes, such as bis(tri(1-piperidinyl)phosphine)palladium chloride or bis(triphenylphosphine)palladium dichloride, incorporated in a Ni-MOF for the Heck reaction of estragole with iodobenzene [177]; palladium incorporated in a Zr-MOF based on 2,2′-bipyridine-5,5′-dicarboxylate ligands, applied in the carbonylative Sonogashira coupling at atmospheric pressure in the presence of CO [178]; and palladium dichloride immobilized on a mixed-ligand MOF containing bipyridyl and biphenyl moieties for Suzuki catalysis [179]. Palladium nanoparticles incorporated in the Zr MOF-808 are an excellent heterogeneous catalyst for the Heck reaction without an additional base [180], whereas palladium nanoclusters in NH2-UiO-66(Zr), used in Suzuki catalysis in the presence of light, give 99% conversion and selectivity to biphenyl compounds [181].
A wide range of unsaturated organic compounds, such as α,β-unsaturated aldehydes, cinnamaldehyde, nitroarenes and nitro compounds, alkenes and alkynes, quinoline, benzene, and other aromatic compounds, can be hydrogenated with very high yield and selectivity under mild conditions in the presence of MOFs and derived materials as heterogeneous catalysts. For example, Pt nanoparticles incorporated within MIL-101(Fe,Cr) were used as catalysts for the hydrogenation of α,β-unsaturated aldehydes to unsaturated alcohols [182]. MIL-120 incorporating Ni particles showed a better result in gas-phase benzene hydrogenation than the Ni/Al2O3 catalyst [183], a well-defined hollow Zn/Co ZIF composite with rhombic dodecahedron shape displayed superior activity and selectivity toward the semi-hydrogenation of acetylene [184], and Ir nanoparticles encapsulated in ZIF-8 were used in the hydrogenation of phenylacetylene [185]. The catalytic activity for the CO2→CO reduction with [Ru3(btc)2−x(pydc)xXy] catalysts (X = Cl, OH, OAc; x = 0.1, 0.2, 0.6, 1.0; 0 ≤ y ≤ 1.5; H3btc = benzene-1,3,5-tricarboxylic acid; H2pydc = pyridine-3,5-dicarboxylic acid), as monitored by UHV-FTIR spectroscopy, showed peaks characteristic of the presence of (CO)Ruδ+ species. The CO2→CO conversion at 90 K is attributed to charge transfer from the Ru 3d orbitals to the 2πu CO2 antibonding orbital, possibly yielding chemisorbed CO2δ− species that might act as a reaction intermediate to produce CO [111]. These defect-engineered MOFs also act as olefin hydrogenation catalysts after activation with H2 to produce Ru-H species, assisted by the presence of the basic pyridyl-N atom of the pydc linkers [111]. The Cu-based MOFs [Cu3(btc)2] HKUST-1 (btc3− = benzene-1,3,5-tricarboxylate) and [Cu3(btb)2] MOF-14 (btb3− = benzene-1,3,5-tribenzoate) display high catalytic activity toward CO oxidation at low temperatures (105 K), which is related to the CO species adsorbed on the coordinatively unsaturated Cu2+ sites upon exposure to various amounts of O2 [186]. Several MOFs, for example NU-1000, UiO-66, HKUST-1, and MIL-101(Cr)-DAAP, have been tested as heterogeneous catalysts for the catalytic destruction of the phosphate ester bonds and phosphate-fluoride bonds in chemical warfare agents, such as DMNP (dimethyl 4-nitrophenyl phosphate), DENP (diethyl 4-nitrophenyl phosphate), BNPP (bis(4-nitrophenyl) phosphate), and the highly toxic GD (O-pinacolyl methylphosphonofluoridate), known as Soman [187]. Two-dimensional MOFs have recently been developed as catalysts of outstanding intrinsic reactivity, as support materials for catalysts, and as catalysts with multifunctional activity for diverse organic transformations. Their enhanced catalytic activity is associated with their ultra-thin thickness and more accessible active sites, which decrease the diffusion resistance and increase the host-guest interactions, rendering these materials much better than the corresponding bulk MOFs [188]. For example, 2D MOFs based on tetrakis(4-carboxyphenyl)porphyrin display unique photochemistry and high efficiency in light-harvesting applications and show catalytic activity in photooxidation reactions (Figure 6) [189-191]. Incorporation of nanoparticles or enzymes, as well as post-synthetic modification, provided new materials with enhanced catalytic activities [192].
For example, [Zr12O8(OH)14(BPYDC)9] (H2BPYDC = 2,2'-bipyridine-5,5'-dicarboxylic acid), MON-19, loaded with platinum nanoparticles, displays efficient hydrogenation of C=C bonds under mild conditions without external high-pressure hydrogen [193]. The electrocatalytic activity of MOFs has been investigated in the fields of the hydrogen evolution reaction (HER), oxygen evolution reaction (OER), oxygen reduction reaction (ORR), carbon dioxide reduction reaction (CO2RR), and electrochemical sensing [194]. The requirement for MOFs to display electrocatalytic activity is to possess three electrochemical factors, i.e., a suitable onset potential, current density, and redox-active metal sites. MOF-derived electrocatalysts for HER have been extensively studied, such as a bimetallic NiMo-MOF composite with a current density of 10 mA·cm^-2 at a low overpotential of 80 mV and a Tafel slope of 98.9 mV·dec^-1, whose enhanced HER activity is due to the structural merits of the MOF and the synergy between the MOF and the Ni/Mo metal atoms [195,196]. A cobalt phosphide 2D-MOF nanosheet showed excellent electrocatalytic performance for water splitting, i.e., HER and OER, in acidic and alkaline media, with Tafel slopes of 59 and 64 mV·dec^-1 and a current density of 10 mA·cm^-2 at overpotentials of 140 and 292 mV, respectively, which are comparable to those of commercial noble-metal catalysts [197]. Other examples for OER and ORR are a composite of NiCo/Fe3O4 hetero-particles within MOF-74, with a Tafel slope of 29 mV·dec^-1 and a current density of 10 mA·cm^-2 at an overpotential of 238 mV [198]; CoNi2-MOF (GTGU-10c2) nanobelts with a small Tafel slope of 58 mV·dec^-1 and a current density of 10 mA·cm^-2 at an overpotential of 240 mV, with long-term stability of more than 50 h in alkaline medium [199]; a Co-MOF with a high turnover frequency of 93.21 s^-1 at an overpotential of 350 mV and a current density of 10 mA·cm^-2, compared to RuO2 [200]; and bimetallic Ni/Zn-MOFs whose electrocatalytic performance increases in the samples with higher Ni ratio [201]. Recently, Cu-MOF, Zn-BTC-MOF, and Cu-HKUST have been reported for the electrochemical reduction of CO2 in a standard three-electrode set-up in ionic liquids [202-205].
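As an illustration of how the Tafel slopes and benchmark overpotentials quoted above are extracted from polarization data, the following minimal sketch fits the Tafel relation η = a + b·log10(j) to hypothetical HER data; the numerical values are invented for illustration and are not taken from the cited studies.

import numpy as np

# Hypothetical HER polarization data: overpotential eta (V) vs current density j (mA cm^-2)
eta = np.array([0.05, 0.08, 0.11, 0.14, 0.17])
j = np.array([1.0, 2.1, 4.3, 8.8, 18.0])

# Tafel analysis: eta = a + b*log10(j); the slope b is reported in mV per decade
b, a = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope: {b * 1000:.1f} mV/dec")

# Benchmark figure of merit: overpotential at j = 10 mA cm^-2 (log10(10) = 1)
print(f"eta at 10 mA/cm^2: {(a + b) * 1000:.0f} mV")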
MOFs have been extensively studied as potential photocatalysts due to their porous nanostructures and controllable semiconductor properties, as well as their ability to incorporate co-catalysts such as metals and metal oxides. Their photocatalytic activity is realized through different mechanisms involving the parts of the MOF that absorb light, such as the ligand-to-metal charge-transfer (LMCT), ligand-to-ligand charge-transfer (LLCT), metal-to-ligand charge-transfer (MLCT), or metal-to-metal charge-transfer (MMCT) mechanisms, and dual excitation pathways. For example, [Zr6O4(OH)4L6] (H2L = 2,2'-diamino-4,4'-stilbenedicarboxylic acid) was examined for the CO2-to-CO photocatalytic reduction and displays a narrow band gap that absorbs in the visible region, with a formation rate of 96.2 µmol·h^-1·mmolMOF^-1 through an LMCT mechanism [206]. [Zr6O4(OH)4L6] (H2L = 4,4'-(anthracene-9,10-diylbis(ethyne-2,1-diyl))dibenzoic acid), NNU-28, displays high efficiency for visible-light-driven CO2 reduction, with a formate formation rate of 183.3 µmol·h^-1·mmolMOF^-1 through dual excitation pathways involving both the Zr6 oxo cluster and the anthracene-based ligand [207]. Encapsulation of the photosensitizer [Ru(bpy)3]2+ into the porous structure of PCN-99, an anionic indium MOF with H3DCTA = 10,15-dihydro-5H-diindolo[3,2-a:3',2'-c]carbazole-3,8,13-tricarboxylic acid, gives Ru(bpy)3@PCN-99, which displays heterogeneous photocatalytic activity toward the aerobic hydroxylation of arylboronic acids through the MLCT or MMCT mechanism: the photosensitizer absorbs light and emits an electron, which migrates to the LUMO of the organic ligand or to the metal node of the MOF [208]. Graphene oxide (GO)-MOF composites have been examined as photocatalysts in water-oxidation reactions. The GO-MIL-LIC-1(Eu) composite, in the presence of [Ru(bpy)3Cl2] sensitizer and Na2S2O8 electron acceptor, under nitrogen atmosphere and visible-light irradiation, displays O2 production of 125 µmol, which is more than two times that of the MIL-LIC-1(Eu) MOF [209]. Encapsulation of perovskite quantum dots, CH3NH3PbI3 (MAPbI3), in the pores of the Fe-porphyrin MOF PCN-221(Fex) gave the composite photocatalyst MAPbI3@PCN-221(Fe0.2), which exhibits a remarkably high total yield of 1559 µmol·g^-1 for photocatalytic CO2 reduction to CO (34%) and CH4 (66%), 38 times higher than that of the parent MOF, due to transfer of the photogenerated electrons in the quantum dots to the Fe catalytic sites of the MOF [210]. The core-shell HKUST-1@TiO2 composite shows a photocatalytic CO2-to-CH4 reduction efficiency five times that of TiO2, and improved selectivity over hydrogen in the photocatalytic reduction compared to parent HKUST-1 and TiO2 [211]. Quantum-dot nanoparticles in MOFs have been extensively studied as potential photocatalysts, such as the CdS/UiO-66-NH2 composite for the selective visible-light oxidation of benzyl alcohol to benzaldehyde with molecular oxygen as the oxidant [212], CdS@MIL-101(Fe) nanocomposites for the selective oxidation of benzyl alcohol to benzaldehyde using visible light under mild conditions [213], and CdS/Zn-MOF composites for photocatalytic water splitting under visible-light irradiation [214].
MOFs and their composites, especially the environmentally-friendly Fe-MOFs, are used in advanced oxidation processes (AOPs) as photocatalysts for the removal of organic compounds from water and wastewater by oxidation through reactions with hydroxyl radicals [215,216]. For example, an Fe-MOF with 1,4-piperazinediylbis(methylene)phosphonic acid, STA-12(Fe), used for H2O2 activation under natural sunlight irradiation, displays highly efficient photocatalytic decomposition of organic dyes in aqueous solution and demonstrates excellent reusability, suggesting potential application in water depollution [217]. MOFs have also been examined as heterogeneous photocatalysts for the destruction of chemical warfare agents, such as Zr-based MOFs (PCN-57 analogues) with benzothiadiazole and benzoselenadiazole, which display selective photocatalytic activity for the oxidation of the mustard gas simulant 2-chloroethyl ethyl sulfide (CEES) to the nontoxic 2-chloroethyl ethyl sulfoxide (CEESO) [218], and the post-synthetically modified Zr6-based MOF NU-1000 with the photosensitizer BODIPY (boron-dipyrromethene) ligand, which shows enhanced singlet oxygen generation for the selective detoxification of the sulfur mustard simulant CEES to CEESO with a half-life of ~2 min [219].
Piezo/Ferroelectric, Thermoelectric, and Dielectric Applications
The piezoelectric materials convert mechanical energy into electrical energy through the direct piezoelectric effect and can be considered energy harvesters to generate energy when direct electricity or batteries are not available. A subclass are the ferroelectric materials, which exhibit spontaneous electric polarization whose direction can be reversed by applying external electric fields. Piezo/ferroelectric materials, such as crystalline and ceramic materials, polymers, and liquid crystals, find potential applications in piezoelectric quartz crystals as ultrasonic transducers, sensors and actuators, filters, ultrasonic motors, energy harvesters, optical devices and so on. Besides traditional piezo/ferroelectric materials, MOFs have been investigated for potential applications, among these [Zn2(mtz)(nic)2 ...

The thermoelectric materials, which can generate an electric potential from a temperature difference, constitute an environmentally-friendly approach to energy generation from waste heat. Besides inorganic compounds, such as oxides and alloys, the approach of conductive MOFs as new potential thermoelectric materials has been developed. This approach includes first-row transition metal MOFs with thiolate ligands, such as [Cu(pdt)2] (pdt = 2,3-pyrazinedithiolate), and the inclusion of guest molecules in known MOFs, such as TCNQ@HKUST-1 (HKUST-1 = [Cu3(BTC)2], BTC = benzene tricarboxylate), I2 and metal nanoclusters, in order to improve the conductivity of the material. 2D MOF nanosheets of bis(thiolato) ligands and light transition metals, i.e., π-d conjugated systems, and post-synthetically modified MOFs, i.e., guest@MOFs and conductive-polymer grafted MOFs, are promising candidates for the fabrication of thermoelectric devices due to their excellent conductivity [227].
Semiconducting devices require interlayer dielectric materials with ultra-low dielectric constants (κ < 3.9, the value for SiO2). MOFs feature ultra-low dielectric constants and are considered promising materials for the future microelectronics industry. The requirements are thermal stability at high temperature, predictable mechanical behavior, electrical insulation, and adhesion to other interlayers. DFT calculations on various MOFs, such as the IRMOF-1 family, UiO-66, UiO-67, MIL-140, and MOF-74-M (M = Mg, Mn, Fe, Co, Ni, Zn), revealed the influence of the structural and chemical characteristics on their electronic and dielectric properties, demonstrated their ability to behave as insulators and low-dielectric-constant materials, and predicted dielectric constants in the range of 1.25 to 2.0 [228,229]. Surface-anchored HKUST-1 thin films grown by liquid-phase epitaxy (LPE) were studied by spectroscopic ellipsometry (SE), yielding an optical constant of n = 1.39 at a wavelength of 750 nm (κ ~ 1.93) [230]. ZIF-8 thin films deposited on silicon wafers were studied by SE, and the dielectric constant was measured by impedance analysis at different frequencies and temperatures, yielding κ = 2.33 at 100 kHz [231]. Other MOFs which display ultra-low dielectric constants (below that of SiO2) are, for example, [Sr2(1,3-dbc)2(H2O)2] (1,3-dbc = 1,3-bis(4,5-dihydro-2-oxazolyl)benzene), which retains its crystallinity up to 420 °C with κ ~ 2.4 [232], and [Zn2(Hbbim)2 ...
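For context, the optical κ values quoted above follow from the standard relation between the optical dielectric constant and the refractive index of a non-magnetic, non-absorbing film, κ ≈ n^2, so that n = 1.39 gives κ ≈ (1.39)^2 ≈ 1.93, matching the value reported for the HKUST-1 thin film.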
Biomedical Applications
MOFs and their derived materials have been increasingly studied as drug carriers, bioimaging agents, and therapeutic agents due to their excellent physicochemical properties. The majority of known drug carriers, such as liposomes, nanoparticles, and micelles, show poor drug loading (less than 5%) and rapid drug release. Therefore, porous MOFs with high drug loadings are considered candidates for delivery applications. The requirements for efficient drug carriers are a high drug load, control of the drug release, control of matrix degradation, and low toxicity. Drugs can be loaded into MOFs by non-covalent encapsulation into the MOF by physisorption, by post-synthetic modification of the organic ligands after synthesis of the MOF, by using the drugs as organic ligands in building the MOFs, or by attaching the drugs to the subunits of the MOF [246].
MIL-100 and MIL-101, based on trimetallic nodes and BTC (1,3,5-benzenetricarboxylic acid) or BDC (1,4-benzenedicarboxylic acid), were the first MOFs suggested as drug delivery systems, in 2006. It was shown that both MIL-100 and MIL-101 are able to absorb very large amounts of ibuprofen (up to 1.4 g per gram of MIL-101), which was completely released under physiological conditions in three (MIL-100) or six (MIL-101) days [247].
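For reference, the quoted uptake corresponds to a drug loading by weight of 1.4 g / (1.4 g + 1.0 g) ≈ 58 wt%, well above the <5% loadings cited above for conventional carriers such as liposomes, nanoparticles, and micelles.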
[Gd(BDC)1.5 ...] Reactive oxygen species (ROS) are produced by the uncontrolled growth of the tumor and by dysfunction in metabolism; hydroxyl radicals are able to cause more damage to tumor cells than any other ROS. Modern anti-cancer therapies, such as photodynamic therapy (PDT), sonodynamic therapy (SDT), and chemodynamic therapy (CDT), are based on the development of the above studies, and MOFs have been applied in PDT, SDT, and CDT over the last decade [249]. An Hf-porphyrin nanoMOF, DBP-UiO (DBP = 5,15-di(p-benzoato)porphyrin), acts as an excellent PDT photosensitizer, as indicated by its efficient generation of 1O2 and its cytotoxicity assay [246]. The drug delivery system Fe-MIL-53-NH2-FA-5-FAM/5-FU is based on the Fe-MIL-53-NH2 MOF, which displays a high loading capacity for the anti-cancer drug 5-fluorouracil (5-FU); conjugated with the fluorescence imaging agent 5-carboxyfluorescein (5-FAM) and folic acid (FA), it exhibits improved toxicity toward cancer cells due to targeted 5-FU release and acts as a potential contrast agent for MRI (Figure 9) [250]. Nanospheres of [Zn(bix)] (bix = 1,4-bis(imidazole-1-ylmethyl)benzene) can encapsulate and release known anticancer drugs, such as doxorubicin (DOX), camptothecin (CPT), SN-38, and daunomycin (DAU), and show very strong cytotoxic effects against human promyelocytic leukemia cells (HL60) due to the release of DOX from the MOF spheres, causing the death of the cancer cells [251]. One of the most important issues for the application of MOFs as potential drug delivery systems is biotoxicity; that is, MOFs may be harmful to humans. For this reason, biological MOFs (BioMOFs) based on active pharmaceutical ingredients, such as amino acids, proteins and peptides, and on low-toxicity metal ions, such as zinc and iron, have been developed. For example, [Zn(cys)2] (cys = cystine) tailored with methylene blue (MB) and sorafenib (SOR) was tested as a drug delivery system against colorectal cancer and Leishmania in PDT (for MB) and against hepatocellular carcinoma (for SOR) [252]. Iron-based MOFs, such as MIL-100 and MIL-88A, showed no cytotoxicity to mouse J774.A1 macrophages, human leukemia cells, or human multiple myeloma cells, and low iron concentrations in tissues after 1, 7, and 30 days of treatment. Other important issues for potential drug delivery applications are the size, shape, and biological stability of the MOFs [253]. UiO-66 and UiO-67 modified with poly(ε-caprolactone) have been applied as potential drug carriers for the anti-cancer drugs paclitaxel and cisplatin [254]. Fe-MIL-100, Zr-UiO-66, Fe-MIL-53, and Fe-MIL-127 have been used as caffeine carriers [255,256]. Sodium diclofenac was loaded into ZJU-101 through ion exchange and penetration procedures and showed quicker release of the drug in inflamed tissues with lower pH (5.4) than in normal tissues (pH 7.4) [257]. Ion exchange of dimethylammonium cations with procainamide in {[Zn8O(ad)4(BPDC)6]·2Me2NH2·8DMF·11H2O} (ad = adeninate; BPDC = biphenyldicarboxylate), known as bio-MOF-1, showed drug loading up to 22 wt% after 15 days and slow drug release in pure water [10]. Current trends in nanomedical applications of MOFs in PDT and other anti-cancer treatments involve functionalization of the external surface of the MOFs in order to fit specific requirements.
For example, grafting functional polymers such as PEG in order to improve colloidal stability, tailoring fluorophores for bioimaging applications, and functionalizing with targeting molecules such as peptides for target binding [258]. For example, the azide group in UiO-66-N3 can react with an alkyne group via click reactions, thus attaching the target molecule to the surface of the MOF [259]. The coordinative incorporation of oligohistidine-tags on metal-organic framework nanoparticles based on MIL-88A, HKUST-1, and Zr-fum was investigated for the cellular uptake of peptides and proteins with MOF-NPs [260].
MOFs have been investigated as antibacterial and antifungal agents. For example, BioMIL-5, derived from Zn(II) and azelaic acid, displays a 3D nonporous framework and shows interesting dermatological and antibacterial effects against the Gram-positive bacteria S. aureus and S. epidermidis [261]; [Zn(hzba)2]·2.4H2O (hzba = 4-hydrazinebenzoate) inhibited the bacterial growth and metabolic activity of Staphylococcus aureus [262]; Ag(I)-MOFs were tested against S. aureus and E. coli and showed significant antibacterial activity [263]; and the first antibacterial Co-MOF, Co-TDM (TDM = tetrakis[(3,5-dicarboxyphenyl)oxamethyl]methane), inactivates the Gram-negative bacterium E. coli [264]. HKUST-1 showed strong antifungal activity against Saccharomyces cerevisiae and Geotrichum candidum due to the release of copper ions into the medium after breakdown of the MOF crystals [265], and [Cu3(BTC)2(H2O)3] (BTC = 1,3,5-benzenetricarboxylate) was investigated against Aspergillus oryzae, Candida albicans, Fusarium oxysporum, and Aspergillus niger and exhibited powerful antifungal activity due to its ability to reduce oxygen gas and produce ROS, which damage the cell and inhibit the microorganisms [266].
Concluding Remarks
Metal-organic frameworks have attracted increasing interest due to their specific structural features, related to their porous nature and large specific surface area, as well as their high thermal stability. MOFs can be easily synthesized under ambient or extreme conditions at high temperature and high pressure, and also by green chemistry methods, such as mechanochemical, electrochemical, and sonochemical methods. Post-synthetic modification of MOFs has been widely used to introduce functional groups and impart desired physical and chemical properties. For large-scale commercial production, MOFs are synthesized in continuous-flow solvo(hydro)thermal, tank, microfluidic or milli-fluidic reactors. The possibility of tuning the size and shape of the pores offers high potential for applications in energy and the environment, including gas sorption, storage, and separation; capture of metal ions and toxic molecules for analytical and sensing purposes; and inclusion of drugs and biologically important molecules as smart carriers for anti-cancer and anti-bacterial therapies. MOFs can be modified with nanoparticles, polymers, and cyclodextrins to form hybrid composite materials, which have been tested as smart materials for catalysis, drug delivery systems and new therapeutic agents, textiles for air filters, radiation blocking and noise reduction, sensors for gases, and ion-separation media.
The multifunctional nature of MOFs and their composite materials offers high impact for the development of new materials for clean and emerging technologies in the automotive industry, energy production, clean air and water, and health. MOF-based commercial products have already moved to market from startups in the US and Europe for carbon capture, storage of highly toxic gases in the semiconductor industry, capture of water from humid air, selective separation of lithium ions for electric vehicles, removal of toxic metals and ions from water, and adsorbent nanomaterials. MOFs and derived materials are at the 'heart' of the smart materials needed to drive the fourth industrial revolution in our century. | 2021-01-14T06:16:24.200Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "4ba897af7a1c5ba6c60f40c66e21c2b9e35273a9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/2/310/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd4995fd7915b0d875792887954acdd81512051f",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
243094719 | pes2o/s2orc | v3-fos-license | THE EFFECT OF SEPARATION OF INITIAL ASSESSMENT DOCUMENTS ON THE LENGTH OF STAY IN EMERGENCY INSTALLATION IN KENDARI CITY GENERAL HOSPITAL
Background: A preliminary survey by the researchers at the Kendari City General Hospital, particularly in the Emergency Installation (ER), showed that the number of ER visits had decreased over the last 3 years: in 2017 the total number of patients treated in the ER was 10,869; in 2018 the number of visits decreased to 10,768; and in 2019 the number of visits fell to 9,747. Meanwhile, based on data from the first months of 2020, there was a very sharp drop in the last month: in January 2020 the number of patients who visited was 849, in February 2020 it was 1,202, in March 2020 it was 1,216, and in April 2020 it fell to 451. The purpose of this study is to analyse the effect of separating the initial assessment documents on the Length of Stay in the Emergency Installation of the Kendari City General Hospital. Methods: The research design was experimental, with a one-group pretest-posttest design. The research location was the Emergency Room of the Kendari City Regional General Hospital, and the study was carried out during February 2021. The population was all hospitalized patients older than 17 years, while the sample was 18 people, selected using a separated random sampling technique. Result: This study found that separation of the initial assessment documents significantly affected the Length of Stay of patients in the Emergency Room of the Kendari City General Hospital (p-value = 0.045 < 0.05). Conclusion: The technique of separating the initial assessment documents was effective in reducing the Length of Stay of patients in the Emergency Room of the Kendari City Hospital.
INTRODUCTION
The main health services in Indonesia that are able to provide curative, preventive, outpatient and inpatient services as a whole are part of the health service arrangements provided by hospitals. Education and training facilities for health workers and research are also part of a hospital's functions (1). Hospitals are health services that require good management and service systems to develop into quality institutions (2).
The main task of the hospital is to meet the demands and needs of customers who want solutions to their health problems and who seek services from health workers in an effort to find healing and recover from the illness they suffer. Customers always want services that are ready, responsive, comfortable and fast. In an effort to fulfill the demands of the patient, the main requirement is excellent service by the hospital (3). It also takes a lot of energy to change people's behavior in line with the health development program (4). Excellent service is the most important element in a hospital, which is always asked to provide services, especially in public health, that meet optimal health service standards, so that in the future service to patients can improve in the globalization era (5).
Length of Stay (LOS) in the ER is usually used to monitor crowding and the length of service for all patients in the ER, calculated from the time the patient arrives until the time the patient leaves the ER (6). Length of Stay (LOS) is also an effective measuring tool for assessing the performance and quality of an Emergency Room; the total Length of Stay (LOS) consists of arrival service times, laboratory examination service times, radiology examination service times and the availability of beds in other inpatient rooms (7).
The accuracy and speed of safety efforts for patients who enter the emergency room require standards adjusted to the capabilities and competencies of health workers, so that they are able to take responsibility for proper and fast emergency handling. By improving human resources, emergency service management, and facilities and infrastructure, compliance with these standards can be achieved (8).
The process of admitting patients to the Emergency Room focuses on the stages of patient entry and the achievement of targets at predetermined times. The Emergency Model of Care sets a target of only four hours for the throughput stage, divided into three time frames. In the first time frame, the throughput process is set at 2 hours, calculated from the time the patient enters the ER (registration), followed by triage, i.e., sorting patients based on acuity level, and then an assessment by health workers in the ER, namely the initial examination, supporting diagnostics and a clinical management plan. The second time frame in the throughput is set at 1 hour, consisting of a review by a team of specialists, consultation, and disposition by doctors to determine admission to hospitalization, discharge, or any other action. The third time frame of the throughput, 1 hour, is the time for the patient to wait to be discharged from the ER for referral, discharge, or hospitalization (9).
METHOD
The research design was experimental with a one-group pretest-posttest design. The research location was the Emergency Room of the Kendari City Regional General Hospital, and the study was carried out during February 2021. The population was all hospitalized patients older than 17 years, while the sample was 18 people, selected using a separated random sampling technique. Nine samples were assigned to have the initial assessment documents separated, and nine samples continued to follow the existing procedures in the emergency department of the Kendari City Regional General Hospital.

RESULT

Table 1 shows that, based on the Length of Stay in the phase 1 time frame model, 14 patients/respondents (77.8%) completed phase 1 in less than 2 hours, while 4 patients/respondents (22.2%) took more than 2 hours to complete phase 1 of the time frame model (10). Under normal conditions, the initial assessment documents in the emergency services at the Kendari City Regional General Hospital are an integral part of 79 other documents, all contained in the Medical Record sheet, which cannot be separated. This often hinders the flow of patient care, because the initial assessment documents occupy positions 13, 14, 15 and 19 in the emergency medical record bundle of the Kendari City Regional General Hospital, which must be completed sequentially according to the numbering of the documents; as a result, the initial assessment cannot proceed before the first through twelfth medical record documents are completed.
In this study, the 4 initial assessment sheets were separated from the medical record sheet, so that the initial assessment process for the patient could run concurrently with the other documents on the medical record sheet, especially the patient administration process, which is generally in the initial documents of the medical record sheet. In its application, this process is expected to be able to cut the length of stay of patients in the ER at the Kendari City Regional General Hospital.
Length of Stay is the time lag between when a patient is in a room, area, installation, department, or special unit in a hospital and when they move to another place; in the Emergency Room setting, LOS is defined as the length of time the patient is in the Emergency Room, starting from registration until the patient physically leaves the Emergency Room (11).
The Emergency Model of Care divides the stages of patient throughput in the Emergency Department into three time frames or phases (the 2:1:1 time frame model): the initial stage, to complete the initial assessment and clinical management planning, which usually takes two hours; the second stage, a review by a specialist team or a consultation and disposition, with a predicted duration of one hour; and the final stage, the transfer of the patient, whether referred, discharged home or admitted to the inpatient unit, also predicted to be completed in one hour.
This study does not examine Length of Stay as a whole, but focuses only on the first phase of the time frame model, i.e., the initial stage of completing the initial assessment and clinical management planning, which generally takes two hours. This is because by phase 2, the review by the specialist team, the initial assessment process has been completed, so it has no lag effect on service time to patients.
Based on the results of the Independent Samples t-Test, it can be concluded that there is an effect of separation of the initial assessment documents on Length of Stay. This can be seen from the sig. (2-tailed) value of 0.045, which is smaller than 0.05.
The results of this study indicate that the separation of the initial assessment documents does have an overall effect on the phase 1 time frame model, which cuts the length of stay of patients in the ER. Several officers at the ER of the Kendari City Regional General Hospital also acknowledged that this separation cut the service time for patients by up to 50% of the usual time: service that usually takes up to 120 minutes can be cut to 60 minutes.
This research is in line with the results of research conducted by (7), who argued that assessment time affects the length of stay of patients in the emergency room. This is also in line with the research conducted by Bukhari and colleagues in 2014, who re-evaluated the LOS of patients in the ER and the factors that influence it, and found that LOS was associated with arrival time, triage level, consultation time, laboratory examination time, radiological examination time and physical disposition time (waiting time for transfer to an inpatient bed). These findings fit the time frame concept of the emergency model of care, which states that the phase 1 (assessment) time frame of the ER model is the dominant one, consuming half of the targeted ER LOS.
The results of the observation of a total of 18 visits in the phase 1 time frame model showed that the average time needed for the initial assessment was 112 minutes for patients whose initial assessment sheet was not separated from the medical record bundle, with the fastest time being 60 minutes and the longest 192 minutes, while patients whose initial assessment sheet was separated from the medical record bundle needed an average of only 43 minutes to pass through the phase 1 time frame model, with the longest time being 90 minutes and the fastest only 15 minutes.
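The reported comparison can be reproduced in outline with a standard independent-samples t-test. The sketch below uses hypothetical phase-1 times (in minutes) constructed only to match the reported group means of 112 and 43 minutes and the stated extremes; these are not the study's raw data.

from scipy import stats

# Hypothetical phase-1 LOS (minutes), n = 9 per group, means 112 and 43 as reported
not_separated = [60, 75, 90, 100, 112, 120, 124, 135, 192]
separated = [15, 20, 30, 38, 43, 47, 50, 55, 90]

t, p = stats.ttest_ind(not_separated, separated)
print(f"t = {t:.2f}, p = {p:.4f}")  # p < 0.05 indicates a significant difference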
The facts above show that, in general, the separation of the initial assessment documents makes the service time in phase 1 of the time frame model more efficient, which allows a reduction in the length of stay of patients in the ER at the Kendari City Regional General Hospital. Dividing the overall time frame model, which is usually 2:1:1 with a total of 4 hours, the time can be shortened to 1:1:1 with a total of only 3 hours. If the LOS target is set to follow the 4-hour Emergency Model of Care, then this target has generally been met.
The researchers suggest that if this study becomes the standard practice in the ER of the Kendari City Regional General Hospital, i.e., separating the initial assessment documents from the Medical Record bundle, it will improve the timeliness and quality of service in the ER. In the future, the hospital need only modify the second- and third-phase services in the time frame model, which may further narrow the service time in the ER, so that the total Length of Stay of patients in the ER can be shorter. This is likely to increase the number of patients served, because the faster a service is, the higher the number of patients that can be served.
CONCLUSION
There is an effect of separation of the initial assessment documents on Length of Stay, which shortens the Length of Stay in the Emergency Room at the Kendari City Regional General Hospital. The system of separating the initial assessment documents from the medical record bundle can therefore be applied as a standard by the management of the Kendari City Regional General Hospital for use by the ER. | 2021-09-01T15:08:14.974Z | 2021-06-26T00:00:00.000 | {
"year": 2021,
"sha1": "99e281fa71280090d3736bff50dec47fb23f711f",
"oa_license": "CCBYSA",
"oa_url": "https://ijhsrd.com/index.php/ijhsrd/article/download/83/58",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "414f198f64618238b653b4b7871c8ab3931ec285",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
237427352 | pes2o/s2orc | v3-fos-license | Correlation between physico-chemical water quality and river ecosystems in Malaysia rivers with different land uses
River water quality is one of the major issues in water supply in Malaysia. Most people understand that water pollution is caused by discharge from factories and municipal waste, but is that the real case? In order to answer this question, this study was done to determine the interaction or correlation between physico-chemical water quality and river ecosystems. The components of river ecosystems measured are river riparian composition, large woody debris (LWD), canopy cover and substrate composition (D50). This study was conducted in Sungai Mengkibol, Sungai Madek and Sungai Dengar in Johor. There were a total of five sampling sites, three impact stations and two reference stations, including one highland station. For water quality, six in-situ parameters were measured, namely temperature, conductivity, dissolved oxygen (DO), pH, turbidity and salinity, using a multi-parameter probe as well as a single-parameter probe. Meanwhile, a field survey form was used to assess river habitat, namely river riparian composition, canopy cover and LWD. In addition, the Pebble Count Method was used to measure substrate composition (D50). Results show that physico-chemical water quality was correlated (p < 0.05) with riparian cover, LWD and canopy cover, but was not correlated (p > 0.05) with the substrate composition (D50) of the river. Based on the results obtained from the study, it can be suggested that physico-chemical river water quality is not determined by pollution alone; river ecosystems also play a significant role in determining the quality of water. Hopefully, this finding can be used by the responsible authorities for effective river management in order to sustain our drinking water supply.
Introduction
Basically, a river has its own ecosystem which consists of several components or attributes. A change in any one or more attributes will change the whole river ecosystem, including the water quality. The pollutants that enter and remain in a body of water usually depend on the type of land use in the river area; different land uses produce different pollutants. Normally, physico-chemical parameters are used as indicators to show whether the river water quality is good or bad [1]. Salam et al. [2] determined the heavy metal components in the Perak River, Malaysia, as one of the parameters to gauge the water quality status of the Perak River, and they expected the sources to be wastewater and industrial discharges. This is in line with the findings of Camara et al. [3], who found agricultural and forest-related activities to be a cause of river pollution. However, lately more and more studies are being conducted by researchers to look not only at the physico-chemical quality but also at the health of the river, by correlating aquatic life, especially benthic macroinvertebrates, with physico-chemical water quality [4]. Townsend and Riley [5] use the term 'integrity' to represent the natural state of an ecosystem (having little or no human disturbance), while on the other hand Karr [6] defined river health as a river which can still be used by society at large. In the same breath, Townsend and Riley [5] pointed out certain considerations to be taken into account before quantifying the state of health of a river. Firstly, there is a need to determine whether some river ecosystem attributes are more fundamental to the maintenance of their functions than others. This is very important if physico-chemical or biotic measures are going to be used to assess whether river ecosystems are being protected. Furthermore, there is a need to determine whether any single measure is sufficient to assess a river's health and, if not, the minimal parameters or attributes of measures have to be identified and incorporated into monitoring programmes. Trush and McBain [7] proposed 10 general attributes of river ecosystems which need to be considered when performing river health assessment or monitoring: channel morphology, flows and water quality, riverbed surface, riverbed scour and fill, fine and coarse sediment, channel migration, functional floodplain, channel resetting floods, riparian vegetation, and groundwater table. Moreover, a published report suggested that, to maintain a riverine environment, biota, habitat and water quality be used as indicators of ecosystem health [8]. These indicators cover three river ecosystem attributes, namely biological, physical and chemical, so that together they form an integrated approach to assessing river health. According to the report [8], aquatic biota is used as a key indicator of river health because damage to the biota is often the end-point of environmental degradation. The availability and quality of habitat can affect the characteristics of the biotic community in a river system, so evaluating habitat is an important component of ecosystem health assessment [9]. Water quality is also one of the river ecosystem attributes and can thus affect the biotic community present. The observation of flow can also be used as a supplementary element of the assessment.
The observations will broadly indicate whether flow persisted along the length of the river system during the study period, and they provide useful contextual information for interpreting the biotic data. However, the management of water resources requires the cooperation of all parties, whether the public, project developers, the private sector or the government; according to Nilsson and Malm Renofalt [10], though, the biggest role should be played by politicians or those who govern the country. This is because they are the ones who can make decisions, who can make policies and regulations, and who can find the allocations to implement related projects.
This study was conducted to determine the correlation between physico-chemical water quality and river ecosystems for rivers receiving run-off from different land uses.
Materials and Methods
This study was conducted within the Sungai Endau watershed in the districts of Segamat, Kluang and Mersing in the state of Johor. The main tributary of these catchments is Sungai Sembrong, which in turn is fed by several tributaries such as Sungai Madek, Sungai Mengkibol and Sungai Dengar. The river tributaries selected as study sites were of Order 2 to 3. These orders were chosen to reduce the size of the catchment area, so that all the attributes could be assessed. There were a total of five sampling sites, three impact stations and two reference stations. Two sampling stations per site and three sampling points per station were identified, except for the most upstream station at Gunung Berlumut, which had only one station. Two sampling stations were identified for Sungai Mengkibol, which is located in the middle of the town of Kluang. This station was categorized as an impact station (urban river) and was a receiving basin for all kinds of domestic waste, including wastewater from industries and agricultural waste located upstream of the sampling stations. Two sampling stations were identified for Sungai Madek, which is located in the Lenggor Forest Reserve, Kahang, in Kluang. These stations were categorized as impact stations (logging area). The study site for agricultural land use was the Sungai Dengar Oil Palm Plantation: two sampling stations were identified for Sungai Dengar, located at Gunung Berlumut, Kluang, and these stations were categorized as impact stations (agricultural activities). Two sampling stations were identified for Sungai Hulu Dengar, located at the foot of Gunung Berlumut, Kluang; these stations were categorized as reference points. One further sampling station was identified for Sungai Hulu Dengar at the top of Gunung Berlumut itself, about 300 m above mean sea level. This station was also categorized as a reference point.
There were two types of data obtained from the river water quality sampling exercise, i.e. in-situ and laboratory analysis data; all sampling and data analysis were carried out based on the standard procedures provided by the USEPA [11] and the Standard Methods for the Examination of Water and Wastewater [12]. The Water Quality Index (WQI) and the National Water Quality Standards for Malaysia (NWQS) published by the Department of Environment Malaysia were used to interpret the data obtained.
All the river ecosystem components were recorded in the prepared field survey form prior to statistical analysis. The sampling method was mainly observation with the aid of some instruments, such as a measuring tape to measure the size of large woody debris (LWD) and a metric caliper as well as a metre ruler to measure the size of substrate during the pebble counts. The LWD survey was basically undertaken to estimate total LWD volume or density, and data from all categories were collected for the entire basin. Measurements were designed such that, for each piece of LWD encountered, volume could be approximated using the formulae for various geometric shapes. For small streams, at least 100 m of channel was surveyed to stabilize the volume estimate, as suggested by Janisch et al. [13]. On the other hand, the canopy cover and riparian vegetation assessment followed the Field Methodology for the Christchurch River Environment Assessment Survey (CREAS) [14] and also the study conducted by Timbol et al. [15]. In addition, statistical analyses were performed using the one-way ANOVA test to compare whether there were significant differences between treatment means, or in other words, whether there were significant differences in water quality parameters between sampling events and sampling stations. The chi-square test was also performed to determine the level of association between one variable and another; the Pearson chi-square value or p-value was used to determine the level of association. Numerical or quantitative data were transformed into categorical data, where the data were grouped into different categories and the value of each datum was then given a simple code.
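A minimal sketch of this analysis pipeline is given below, assuming the coded data are arranged as a 2x2 contingency table of water-quality class against canopy-cover class; the counts are hypothetical and serve only to illustrate the chi-square and Pearson's R computations reported in the next section.

import numpy as np
from scipy import stats

# Hypothetical 2x2 cross-tabulation: rows = water quality (good, poor),
# columns = canopy cover (dense, sparse)
table = np.array([[12, 3],
                  [4, 11]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # p < 0.05 implies an association

# Pearson's R on the coded cases (expanding the table back to observations)
wq = [0] * 15 + [1] * 15                          # 0 = good, 1 = poor
canopy = [0] * 12 + [1] * 3 + [0] * 4 + [1] * 11  # 0 = dense, 1 = sparse
r, p_r = stats.pearsonr(wq, canopy)
print(f"Pearson's R = {r:.3f}")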
Results and Discussion
Physico-chemical water quality was cross-tabulated with river discharge, substrate composition, LWD, canopy cover and riparian composition (Table 1). There was no association or correlation between water quality and river discharge (p > 0.05). Similarly, for the cross-tabulation between water quality and substrate composition, there was no association or correlation (p > 0.05). However, there was a significant association between physico-chemical water quality and large woody debris (p < 0.05), canopy cover (p < 0.05) and riparian composition (p < 0.05). At the same time, the correlations between water quality and LWD and canopy cover were very strong, with Pearson's R values of 0.825 and 0.664 respectively. On the other hand, there was a weak correlation between physico-chemical water quality and riparian cover, with a Pearson's R value of 0.389. The results recorded support the concept of river health, which is usually associated with good physical and chemical water quality as well as the maintenance of natural habitat, natural river morphology and sustainable aquatic life. As each attribute has its specific and intended use, and each on its own may not give a true picture of the health of a river when an assessment is conducted, an alternative approach for a complete river health assessment would entail integrating these attributes in a unified manner. Integration here means connecting or creating links between one attribute and the others in the system. In this respect, the integration of the river ecosystem attributes basically means connecting or linking the factors usually used, such as riparian cover, canopy cover, LWD, substrate composition, river discharge, river bank type, shape of the river channel, river meander, physico-chemical water quality and aquatic life. Riparian cover and composition, as well as canopy, were the main factors which contributed to changes in all the other factors of the ecosystem, such as LWD, substrate composition, river discharge, river bank types, shapes of the river channel, river meanders, physico-chemical water quality and aquatic life forms. The added advantage of determining each of them is an indication of how a river system has been affected by human intervention. For example, river riparian cover acts as a filter for suspended solids before water flows into a river and also helps to impede and slow down the flow, indirectly helping to minimize river bank erosion. The river canopy, on the other hand, plays a role in reducing water temperature. This is amply exhibited in the results obtained, where rivers with rich canopy cover recorded good water quality compared to those with little or poor canopy cover. Usually the undisturbed rivers located in the uppermost parts of the stream channel have good canopy cover compared to locations in the downstream channel, which almost always have poor canopy as a result of human encroachment. The results from this study revealed that good water quality was recorded in rivers which had good canopy cover (Sungai Hulu Dengar, Dengar and Madek) compared to the river with the least or poorest canopy cover (Sungai Mengkibol), especially in terms of a lower temperature regime and higher dissolved oxygen content. This was confirmed by the study of Fatimah and Zakaria Ismail [16], who found very good water quality in the Hulu Selai River, Endau-Rompin National Park.
All the results obtained were summarized in the form of a diagram showing the interaction of the different components, as illustrated in Figure 1. LWD, canopy cover and riparian cover are the most crucial components to determine in order to ascertain physico-chemical water quality. The direction of the arrows in the diagram towards the physico-chemical water quality attribute represents the association, in addition to indicating which attributes are dependent variables and which are independent. In Figure 1, the parameters given in circles surrounding the diamond box represent independent variables, i.e. attributes such as riparian cover, canopy cover, LWD, substrate composition and river discharge, which can influence physico-chemical water quality. However, this study revealed that D50 and river discharge did not influence water quality, meaning that these two parameters are not crucial for determining physico-chemical water quality. The information obtained could be used as a basis for river rehabilitation programmes. Additionally, the determination of river riparian and canopy cover is also important to ascertain the degree of channel deformation and the density of LWD. These two attributes, together with physico-chemical water quality, would have to be put in place first and must form a compulsory feature when beginning the process of river rehabilitation. The first-tier tool or indicator for the river rehabilitation process comprises the primary (mandatory) parameter, which is physico-chemical water quality, and the secondary parameters, which are canopy cover, riparian cover, LWD, substrate composition (D50) and river discharge.

Figure 1. Interaction between physico-chemical water quality and riparian cover, canopy cover, LWD, river discharge and substrate composition.
Conclusion
There are correlations between water quality and LWD, canopy cover and riparian cover, but there was no association or correlation for river discharge and substrate composition (D50). The correlations between water quality and LWD and canopy cover were very strong; however, there was a weak correlation between physico-chemical water quality and riparian composition. It can be concluded that river water quality changes with the density of LWD, canopy cover and riparian cover: the higher the density, the better the quality of the river water. | 2021-09-07T20:02:26.633Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "60243c0157eaef1aaaeb2c6a9c67a1b9dbb87049",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1755-1315/842/1/012041/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "60243c0157eaef1aaaeb2c6a9c67a1b9dbb87049",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
232185575 | pes2o/s2orc | v3-fos-license | The Edge Geometry of Regular Polygons -- Part 1
There are multiple mappings that can be used to generate what we call the 'edge geometry' of a regular N-gon, but they are all based on piecewise isometries acting on the extended edges of N to form a 'singularity' set W. This singularity set is also known as the 'web' because it is connected and consists of rays or line segments, with possible accumulation points in the limit. We will use three such maps here, all of which appear to share the same local geometry of W. These mappings are the outer-billiards map τ (Tau), the digital-filter map Df and the 'dual-center' map Dc. In 'Outer-billiards, digital filters and kicked Hamiltonians' (arXiv:1206.5223) we show that the Df and Dc maps are equivalent to a 'shear and rotation' in a toral space and the complex plane respectively, and in 'First Families of Regular Polygons and their Mutations' (arXiv:1612.09295) we show that the τ-web W can also be reduced to a shear and rotation. This equivalence of maps supports the premise that this web geometry is inherent in the N-gon. Since the topology of W is complex, we hope to make some progress by studying the region local to N. The edges of every regular N-gon are part of a τ-invariant region that should include at least 1/4 of the 'First Family' S[k] tiles defined by N. The emphasis here is on the S[1] and S[2] tiles adjacent to N, but we will also study their interaction with neighboring tiles. Since all S[k] tiles evolve in a multi-step fashion, it is possible to make predictions about the 'next-generation' tiles which survive in the web. The Edge Conjecture defines just 8 classes of N-gons, so there is an 'Eightfold Way' for regular polygons.
The τ-singularity set W (a.k.a. the 'web') can be generated by iterating the extended edges of the N-gon as shown below. If these extended edges are truncated they form classical star polygons, and the cyan First Family S[k] tiles are defined to be 'conforming' to these nested star polygons. For N = 14 the maximal S[k] is S[5], also known as D. Here it is congruent to N, but for N odd it is a 2N-gon with edge length identical to N. This D tile is always globally maximal among regular tiles that can evolve in W (with any extended edge length), and rings of these D tiles guarantee that the resulting 'generalized star polygon' in (iii) is invariant. Therefore it is sufficient to iterate just the star polygon edges to define the default web W. If full extended edges are iterated, concentric rings of these D tiles will guarantee global stability (and introduce no additional scaling or geometry). See [VS].
Figure 1
The web development for the regular tetradecagon known as N = 14. In part (iii) the truncated edges of N are iterated under τ to form the 'web' W, which is bounded by a ring of maximal 'D' tiles.
These 'generalized star polygons' in (iii) share the same scaling and dihedral symmetry as N, so they are inherent in N and their geometry is determined by the matching cyclotomic field Q(ζ_N). Each S[k] in the First Family of N defines a star[k] point and a scale[k]. The number of 'primitive' scales (gcd(k,N) = 1) matches the 'algebraic complexity' of N, namely φ(N)/2, where φ is the Euler totient function. This is the rank of the maximal real subfield Q(ζ_N)+ of Q(ζ_N).
Based on a 1949 result of C.L. Siegel communicated to S. Chowla [Ch], the primitive scales form a unit basis for Q(ζ_N)+, so this is what we call the 'scaling field' of N. The traditional generator of Q(ζ_N)+ is 2cos(2π/N), which is ζ + ζ^-1 where ζ = exp(2πi/N). We will typically use primitive scales as alternate generators of Q(ζ_N)+ because the resulting scaling polynomials will be more meaningful than the generic polynomials in 2cos(2π/N).
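A small computational check of this count: the primitive scales correspond to the indices k with 1 ≤ k ≤ Floor[N/2] and gcd(k, N) = 1, and their number equals φ(N)/2. The sketch below is ours, not notation from the paper.

from math import gcd

def primitive_scale_count(N):
    # indices of primitive star points: 1 <= k <= N//2 with gcd(k, N) = 1
    return sum(1 for k in range(1, N // 2 + 1) if gcd(k, N) == 1)

for N in (7, 11, 14):
    print(N, primitive_scale_count(N))  # N = 14 gives 3, the 'cubic' case below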
Our ultimate goal is to understand the topology of W, but currently the only cases where the topology of W is understood are the 'linear' or 'quadratic' cases of N = 3, 4, 5, 6, 8, 10 and 12. The N = 14 case described here has φ(N)/2 = 3, so it is classified as 'cubic', and there are two nontrivial primitive scales along with scale[1] = 1, so W is probably multi-fractal.
As noted by J. Moser in 1978, this τ-web can be regarded as a discontinuous version of the phase space of a Hamiltonian system, so all three maps will have geometry related to the classical 1969 Standard Map of Chirikov. This connection is outlined in [A3] of [H5]. At this time the τ-representations are the most meaningful, since there is a well-developed theory of τ dynamics for both regular and non-regular N-gons. But for computational purposes the Df and Dc maps are simpler and more efficient, and we will sometimes use these alternative maps to generate W.

Our primary concern here is the geometry local to N, and this will always include the S[1] and S[2] tiles of N; but as noted above, this geometry is typically shared by adjacent tiles in an invariant region local to N. These invariant regions are driven by 'resonances' between N and the S[k] satellites. As N increases the minimal step-3 resonance will dominate, so for N = 200, S[66] will be the 'shepherd' and for N = 101 it will be S[34]. However the twice-odd case has a very strong step-4 resonance, so S[Floor[N/4]] appears to be the shepherd. In general the divisors of N should be a major factor determining these regions, but the relationship is not well understood.
Our primary concern here is the geometry local to N and this will always include the S[1] and S[2] tiles of N, but as noted above, his geometry is typically shared by adjacent tiles in an invariant region local to N. These invariant regions are driven by 'resonances' between N and the S[k] satellites. As N increases the minimal step-3 resonance will dominate so for N = 200, S[66] will be the 'shepherd' and for N = 101 it will be S [34]. However the twice-odd case has a very strong step-4 resonance so S[Floor [N/4]] appears to be the shepherd. In general the divisors of N should be a major factor determining these regions, but the relationship is not well-understood. . Each of the 6 large-scale invariant regions for N = 60 can be further subdivided into smallerscale invariant regions since invariance exists at all scales. When N is twice-even there will be a 'mutation' in S [N/4] since the web steps of S[k] are k = N/2-k and a resonance will occur when N/gcd (N.k ) > 2. Here S [15] of N = 60 will be formed from two distinct web cycles of length 4 so it will be the octagon 'weave' of two squares with different radii. For N twice-even this step-4 resonance is usually only a precursor to the invariant region which may be closer to step-3.
The Generalized First Family Theorem (GFFT) of [H5] uses the web evolution of an S[k] to define its local 'families'. Only N itself has a 'normal' step-1 web evolution; the S[k] evolve with k' = N/2 - k steps. For example, S[25] below has k' = 5, so it has step-5 local families. These satellite families include central 'penultimate' tiles which we call Mk. Because these tiles span two families, their steps will be additive, so M1 will be step 4+5, M2 will be step 5+6 and M3 will be step 6+7. Some of these tiles foster their own families, but recursion is very limited.
It is not surprising that a secondary tile like S[6] can inherit the same k' = N/2 - 6 = 24 step sequence as the original S[6], because for N twice-even S[25] is an N-gon that can play the role of a step-5 surrogate N. This means that both S[6]s can share a similar mutation. Since N/gcd(60,24) = 5, they will both be the weave of two pentagons with 'base' spanning gcd(60,24) star points, but this can occur in multiple ways, and the Mutation Conjecture of [H5] predicts that the minimal surviving star point will be the (absolute) minimum of N/2 - 1 - jk'. This correctly predicts star[5] to star[7] for S[6], but the surrogate S[6] below runs from star[11] to star[1].
Figure 5 Detail of S[24] and S[23] also showing a mutation in S[10]
In the pages to follow it will be clear that most displaced S[k] will have displaced mutations so S[6] (and S[10]) are not exceptions. For S [24], k = 6 and N/gcd(60,6) = 10 so the mutation is two decagons with base star[1] to star [5]. For S [10], k = 20 so N/gcd(60,20) = 3 and the base will be 20 steps. The original S[10] goes from star[9] to star [11] as predicted but here it is star [3] to star [17]. This star [17] point also defines a step-13 family of M 3 consisting of just S[10] and S [23] and this implies that S[10] can play the part of an M tile with step 13 +7 to match k . Indeed S[10] has its own short step-20 family consisting of S[3] and S [23]. Following the blue lines of symmetry all of this geometry references back to N N). For the 8k+2 family such chains appear to exist, but in general all we know is that for Neven,S[2] will have a step-4 family of DS [k] where the matching S[1] of N is DS [N/2-2]. Here for N = 60, counting backwards mod-4 from DS[28] yields the DS [k] shown below. This 8k+4 family is the only one where S[2] is mutated since N/gcd (N/2-2,N) = N/4. So S[2] below is the weave of two regular 15-gons and the 8k+4 Conjecture says that for the mod-16 subfamily 12+16j the DS[4] will have an alternative 'parent' which we call Px. For N =12,Px = S[3] of N.
Figure 6
The geometry local to S[2] for N = 60
Table 1 below summarizes the 8 classes of geometry that make up the 'Eight-Fold Way'. The DS[k] are the 'next-generation' tiles of S[2] predicted by the Edge Conjecture, so they arise in the early magenta web. The limiting web is black.
Table 1 (only the column headings survive here): 8k family | 8k+2 family | 8k+4 family | 8k+6 family | 8k+1 family | 8k+3 family | 8k+5 family | 8k+7 family
Organization of the three sections of this paper
Section 1. The singularity set of the outer-billiards map
(i) Definition of the outer-billiards map and the primitive domains (atoms)
(ii) Definition of the singularity set W (also known as the 'web')
(iii) Default web based on the star polygons of N
(iv) The three maps
Section 2. The local evolution of the singularity set W
(i) The web W is partitioned by the star points of N, so it can be regarded as the disjoint union of the local webs of the S[k] tiles.
(ii) When N is even each S[k] is formed in a step k′ = N/2-k fashion, and when N is odd these indices are doubled to k′ = N-2k. The 'effective' star points of the S[k] will also be step-k′, and the Generalized First Family Theorem of [H5] defines the right-side 'families' of the S[k].
(iii) The S[1] and S[2] tiles will have retrograde (ccw) steps 2 and 3 for N even and twice this for N odd. This simplifies the analysis of the joint S[1]-S[2] web and shows that S[2] has an effective step-4 web for N even and step-8 web for N odd.
(iv) The Edge Conjecture predicts that the 'next-generation' DS[k] tiles of S[2] will exist at least in a mod-4 fashion for N even and a mod-8 fashion for N odd. These are called the Rule of 4 and the Rule of 8, and together they define 8 distinct classes of dynamics for N of the form 8k + j.
Section 3. Catalog of edge geometry for N ≤ 25
Appendix. 'Deep-Field' maps of the edge geometry for N = 19 and N = 200
Section 1. The singularity set of the outer-billiards map
Definition 1.1 (The outer-billiards map) Suppose that P is a convex n-gon in Euclidean space with origin internal to P. If p is a point external to P that does not lie on a blue 'trailing edge' of P, then the (clockwise) outer-billiards image of p is a 'central' reflection about the nearest clockwise vertex of P, so τ(p) = 2c - p where c is the nearest clockwise vertex of P as shown on the left below. Since τ is not defined on P or on the trailing edges, the level-0 web is defined to be W0 = E ∪ Et, where E is the set of edges of N and the Et are the extended 'trailing' edges of N. (For a counterclockwise τ these would be extended forward edges.) W0 is called the level-0 exceptional (or singular) set of τ. Since W0 is connected, the complement of W0 external to P consists of n disjoint open (convex) sets which are known as level-0 tiles or 'atoms'. Using these primitive Tk tiles, the mapping τ can be defined as τ(p) = τk(p) if p ∈ Tk, where τk(p) = 2ck - p. Therefore the domain of τk is Tk, which we write as Dom(τk) = Tk.
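For readers who want to experiment, a minimal Mathematica sketch of τ follows. This is our own illustration, not the software used for the figures: the supporting-vertex test and the sign convention (which of the two tangent vertices counts as 'clockwise') are assumptions, and points on the singular set are not handled.

(* vertices of a regular n-gon, one vertex at the top, listed clockwise *)
verts[n_] := Table[{Sin[2 Pi k/n], Cos[2 Pi k/n]}, {k, 0, n - 1}];
cross[{ax_, ay_}, {bx_, by_}] := ax by - ay bx;
(* tau reflects p through the supporting vertex c of P; c is chosen so that
   every vertex lies on one side of the ray p -> c (flip <= to >= for the
   opposite orientation, i.e. the inverse map) *)
tau[p_, vs_] := Module[{c},
  c = SelectFirst[vs, Function[v, AllTrue[vs, cross[v - p, # - p] <= 0 &]]];
  2 c - p];
(* a short orbit outside a regular 7-gon *)
NestList[tau[#, verts[7]] &, {2.3, 0.1}, 5]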
It follows that Dom(τ) is the union of the primitive tiles Tk. Since the Tk are called the 'level-0' tiles, after iteration k of the web algorithm (defined below) any new tiles which arise will be called level-k tiles.
By definition Dom(τ²) is Dom(τ) \ τ⁻¹(W0). The union of W0 and τ⁻¹(W0) is called the level-1 web, W1. In general the level-k (forward) web is defined to be Wk^f = W0 ∪ τ⁻¹(W0) ∪ ... ∪ τ⁻ᵏ(W0). The level-k inverse web is defined in a similar fashion using τ and the extended forward edges: Wk^i = W0^i ∪ τ(W0^i) ∪ ... ∪ τᵏ(W0^i). At each iteration these webs are distinct and there are computational advantages to using both webs, so we will define the level-k web Wk to be the union of Wk^f and Wk^i.
It is not necessary to implement τ⁻¹ explicitly because this map can be obtained from τ by reversing the orientation of the generating polygon. As long as the origin of the coordinate system is inside the polygon, the orientation can be reversed by taking a reflection about the vertical axis, which we call Tr.
Figure 1.2 The mapping Tr and its inverse
Our arbitrary choice is to implement τ clockwise, but for a counterclockwise polygon P, it is just a matter of reflecting P to get Tr[P], then using the clockwise τ, and reflecting the result back as shown above. For regular polygons centered at the origin, P and Tr[P] are identical (except for orientation), so the procedure for generating the web is very simple: implement τ for a clockwise P and recursively apply τ to the forward edges to obtain the inverse web Wk^i. Sometimes this is sufficient, but if a true web Wk is needed, apply Tr to the resulting inverse web to get Wk^f. This works because there is no loss of generality in assuming that the original polygon was counterclockwise. Since Wk^i and Wk^f are reflections, any analysis can be done with either web. The web W5^i is in blue and the forward web W5^f is in magenta. These webs are just reflections of each other about the center of N, so using both webs is computationally very efficient. The tiles S[1] and S[2] get their names from the fact that their centers have τ-orbits around N that skip 1 or 2 vertices on each iteration.
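The generation procedure just described can be sketched directly; this is a toy version (sampling density, extension length and depth are arbitrary choices here, and it reuses verts, cross and tau from the sketch in Definition 1.1; which extension counts as 'forward' depends on the orientation convention assumed there):

(* level-k inverse web: iterate sampled points of the extended edges under tau *)
n = 7; vs = N[verts[n], 35]; m = 100;
edges = Table[{vs[[i]], vs[[Mod[i, n] + 1]]}, {i, n}];
extend[{a_, b_}] := Table[a + t (b - a), {t, 1 + 2/m, 3, 2/m}];
pts = Join @@ (extend /@ edges);    (* points on the extensions only *)
webI = Join @@ NestList[Map[tau[#, vs] &, #] &, pts, 25];   (* ~ W_25^i *)
webF = ({-1, 1} # &) /@ webI;       (* Tr: reflect about the vertical axis *)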
The star polygon web W for a regular polygon
As described in [H5], every regular N-gon defines a sequence of nested star polygons formed by extending the edges until they meet. The 'maximal' star polygon in this process is simply W0 = W0^f ∪ W0^i, and this will be our traditional choice for the initial W0 because it is easy to show that the resulting web W will be invariant. This maximal star polygon will always be bounded by a ring of 'D' tiles which are maximal among all regular tiles that can be formed in the web W.
By our remarks above, τ⁻¹ is applied to a horizontal reflection of N, so Wk^f and Wk^i are also related by a simple reflection, and it is our convention to first generate Wk^i by mapping the 'forward' extended edges under τ; if desired, a reflection then gives Wk^f as well. In the limit W^f and W^i must be identical, but at every iteration they differ, so it is efficient to utilize both for analysis.
Example 1.4
The star polygon webs of N = 7 and N = 14. The forward and trailing edges are shown in blue and magenta. Here we generate the level-k (inverse) webs Wk^i by iterating the blue forward edges under τᵏ for k = 0, 1, 2, 3 and 50. The magenta trailing edges are shown for reference. At every stage these images could be enhanced by taking the union with the horizontal reflection. In the limit it would not matter.
We call these 'generalized star polygons'. They retain the dihedral symmetry group of N. Because the region bounded by the D tiles is invariant under τ, our default 'region of interest' will be the regions outlined above. By Lemma 4.1 of [H5] the orbital 'step-sequence' of D is maximal at step <N/2>, which is N/2-1 for N even and Floor[N/2] for N odd. The points in these star regions cannot have steps that exceed that of D. The Twice-Odd Lemma of [H5] says that the web of N = 7 can be faithfully embedded in the web of N = 14. Except for scaling, they both share the magenta 'darts' outlined on the right. This will typically be our region of interest. Implementing singularity sets by iterating all the extended edges of N is very inefficient. Because of rotational symmetry it should be sufficient to iterate a subset of the edges, and this is where the Df and Dc maps are more efficient. They are based on just one or two extended edges. So for N = 14 shown below, the 14 primitive regions reduce to just three for the Digital Filter map and two for the Dual Center map. This timing is a little misleading because by convention one iteration of the τ-web iterates τ on all 14 edges of N, and these edges interact to produce a web which is accelerated relative to Df or Dc. The acceleration factor is variable, but for N = 14 it is about 20 times for a large-scale web; this decreases for small-scale webs where the inter-edge interaction is less significant.
The biggest issue with Df or Dc is that the dynamics are very different from τ, so they do not reproduce the global τ-web faithfully. The horizontal axis defined by the extended 'base' edge of N is a line of discontinuity of all three maps, but to preserve the rotational symmetry relative to this horizontal axis it will be necessary to rotate the web cw one step. This is still very fast.
Df and Dc are comparable in efficiency, but Df is only defined for N even and the points must be rectified, so we will typically use the dual-center map here, along with τ. Since all exact calculations take place inside the cyclotomic field of N, there are sometimes advantages to using a complex-valued implementation of the web.
The key to the simplicity of the dual center map is the extra level of symmetry which results from shifting the origin to a vertex of N. Now the webs of N and -N can be generated together in a very efficient fashion, and these two interact locally in the same fashion as the web development of W described in [H5]. This means that Dc First Families will be faithful to the First Family Theorem as illustrated below. This top-bottom juxtaposition of families actually occurs in the global τ-web, so the Dc web is locally compatible with the τ-web. Since N = 14 and N = 7 share the same web, it would be possible to study them together, but typically we will study N = 7 at the origin where the dynamics are different. This is true for τ also. Here w = 2Pi/N to 35 decimal places. Dc extends z by 1, -1 or 0 depending on whether z is above, below or on the real axis, and then rotates the result about the origin by 2π/N. The code that generates the 8th iteration on the right above is sketched below. Under exact arithmetic, these extensions in level 8 would not exist and the web would be periodic after 7 iterations. Using Dc with an approximate rotation angle w will generate such extensions because when the two intervals [-1,0] and [0,1] map back to the x-axis there will be a small systematic error in the Sign function, and most of the [-1,0] points will think they are negative while the [0,1] points will tend to be positive. This is easy to see using the Tally function for the 2000 points in the 7th iteration of Dc. It is necessary to use an approximate w for extended calculations because exact evaluation of the Sign function may not be feasible. Therefore these extensions will occur in a more-or-less random fashion, and there is no reason to iterate any points in the interval [-1,1]. Since both N and -N are known, the solution is to use an initial interval of the form [1, x] or [-x,-1]. For the local geometry on the edges of N it is sufficient to set x = 2. Because of the reflective edge symmetry of N, it would be sufficient to iterate just [-1.5,-1]. The resulting web will also have +/- symmetry, so it is efficient to combine the resulting Wk with -Wk, as shown by the following example from [H5].
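The code itself did not survive in this copy; the following sketch matches the verbal description above (the direction of the shift and of the rotation are assumptions on our part):

(* dual-center map: extend z by +1, -1 or 0 according to Sign[Im[z]],
   then rotate by w = 2 Pi/N, here for N = 14 at 35 decimal places *)
nn = 14; w = N[2 Pi/nn, 35];
Dc[z_] := (z + Sign[Im[z]]) Exp[I w];
pts = Table[N[x, 35], {x, -2, -1, 1/1000}];  (* initial interval [-2, -1] *)
level8 = Nest[Map[Dc, #], pts, 8];           (* the 8th iteration *)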
Below is an example from N = 14 where we iterate 1,000 points in the interval H = {-2,-1} at a depth of 5,000. (The interval {-1,1} would generate N and -N in a period-N orbit.) Here we crop these 5 million web points and their negatives and reflections to the desired region. (Less than 1 minute to generate and 1 minute to crop on a modest computer.) To generate the full star polygon web for the dual-center map as shown in the insert above, we can use the blue interval shown on the left below. This interval includes [-1,1] so that N and -N will be part of the web. Shown here are levels 0, 1, 2, 3, 8, 25, 50 and 200.
The regions above and below the real axis are identical to the outer-billiards web. Since the web has rotational symmetry about the center of N, a local rotation of this Dc web will yield correct geometry above and below the horizontal axis. This web algorithm is very fast, and there is a bonus of plus-minus symmetry as in the Df map. This maps the lower web to the upper as shown below in magenta for level-8. In addition there is reflective symmetry in green. The reflective symmetry here is relative to the center of N, so it only applies to the upper half plane. These symmetries yield an accelerated web with amazing efficiency and make it possible in two lines of code to generate webs with greater detail than ever before.
Note: This web algorithm is so efficient that processing the graphical data can be a major issue. For Mathematica the raw data is typically points in 35-decimal place 'postscript' format, so the data files are large. This size can be an issue when converting to raster form for display. This is usually done with Photoshop but the data points need to be in high-speed memory for processing and this can be a major issue for billions of points. Since these images are generated inside Mathematica, which is itself a very powerful image processor, an alternative is to save the raw data but use Mathematica to generate on-screen images which can be preserved by software or by a simple screen capture on a 4K monitor with resolution of 3840 by 2160. Some graphics cards allow the user to double this resolution to 8K and we will occasionally do this. On a typical monitor the screen resolution is a relatively coarse 72 dpi but inside Photoshop a 4K image captured with Screen Save will be 23Mb and 33 by 50 inches, which will still have good quality at 200 dpi (and smaller size) for printing. See the Appendix for more on postscript files.
Section 2. The evolution of the outer-billiards web W
In Section 4 of [H5] we described the evolution of the early web, and we will summarize the results here. For every regular N-gon, the star points partition the extended edges, so the web W can be regarded as the disjoint union of local webs determined by these intervals. Each local web evolves with 'star angle' k′θ, where θ = 2π/N and k′ = N/2-k for N even and k′ = N-2k for N odd. We will use N = 22 to illustrate the even case. By rotational symmetry it is sufficient to analyze the evolution of W in a single domain (atom) of τ or τ⁻¹, and this is done below for N = 22.
The magenta trailing edges of N are lines of discontinuity of τ, so they partition each domain of τ⁻¹. For N even there are N/2-1 star points which partition the horizontal forward extended edge L as shown below. The outermost region will be unbounded. For N = 22 we will see how this 10th region defines the S[10] (D) tile of the First Family. This tile will be congruent to N, but it will evolve in a 'retrograde' fashion, so if the center was shifted to D, its extended edges would generate N. By symmetry D also generates a left-side copy of N, and this defines endless rings. Our default region is the first ring bounded by 22 Ds. In this region S[9] is in a central position, and the Twice-Odd Lemma of [H5] says that its 'in-situ' web geometry will match N = 11.
Figure 2.2
The second iteration of W in a single domain of τ⁻¹ for N = 22. As indicated above, the evolution local to each star[k] is a simple outwards 'shear' of magnitude sN (as τ changes from one target vertex to the next) and then a variable rotation that would align each magenta trailing edge with the horizontal 'base' edge L. These rotations are the star angles k′θ where k′ = N/2-k for N even. So D with k′ = 1 evolves in the same fashion as N itself. The star[1] point of D is first extended outwards by the shear and then rotated by θ. This occurs recursively because the shear applies to all points on the horizontal line L as well as its image τ(L). The rotation is a constant in the magenta region defined by D, so D will be maximal among all S[k] (and among all possible regular tiles in the web W).
For S[9] of N = 22, the star[1] point is star[9] of N and now the rotation is 2θ. This is also applied recursively, so S[9] evolves from the interval star[9]-star[10] in a step-2 fashion, which is why S[9] (and all odd S[k]) will be N/2-gons. This analysis applies only to the interior of the region between consecutive blue edges. On the opposite side of these blue forward edges the shears must be reversed, as shown by the arrows above. Two consecutive forward edges will be N/2+1 steps apart. Therefore for D the top edge will be step N/2+1 = 12, and that remains true for all S[k].
When N is twice-odd this offset between top and bottom will be even, so tiles like S[9] will be formed in a redundant fashion from either cycle. Therefore all odd S[k] will be N/2-gons with skips 2,4,6,... For the even S[k] the rotation angle will be odd and hence not synchronized top to bottom, so the even S[k] will be regular N-gons formed from both cycles. (When N is twice-odd, these k′ = N/2-k steps in the web evolution of S[k] will cause 'mutations' in the S[k] when gcd(k′,N) > 1, because the web cycles will be shortened. When N is twice-even the N/2-1 offset between top and bottom cycles is odd, so the cycles are no longer redundant and the mutation condition is a more forgiving gcd(k′,N) > 2. Therefore for N = 12, the S[4] tile is not mutated.) When N is odd the (relative) shears are unchanged from the even case and the star angles are compatible since they are of the form (θ/2)(N-2k) = θ(N/2-k). Since the S[k] are 2N-gons their local indices are k′ = 2(N/2-k) = N-2k, and D will again have index 1 with rotation angle θ/2. Therefore D will be a 2N-gon with the same side as N. Since k′ = N-2k must be odd, the primary web cycle will be odd, and it will be shortened iff gcd(N-2k, 2N) > 1. The top cycle will also be odd since it is based on edge N+2 of D, so these two cycles will be synchronized mod 2 as in the twice-odd case; but now both cycles are odd relative to the base edge, so there will be no gender-based mutations in the S[k]. Of course D will have the full spectrum of gender changes in its S[k] tiles, which we call the DS[k].
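These gcd conditions are easy to tabulate. The following convenience function is ours, not from [H5]; it simply encodes the two thresholds just stated (gcd(k′,N) > 1 for N twice-odd, > 2 for N twice-even):

(* which S[k] of an even N-gon should be mutated, with k' = N/2 - k *)
mutatedSk[n_?EvenQ] := With[{t = If[OddQ[n/2], 1, 2]},
  Select[Range[1, n/2 - 1], GCD[n/2 - #, n] > t &]];
mutatedSk[12]   (* -> {2, 3}: S[2] and S[3], but not S[4], as noted above *)
mutatedSk[60]   (* includes 15: the octagonal 'weave' S[15] of N = 60 *)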
Local webs of the S[k]
Our convention for numbering the S[k] is right to left, but the star angles increase left to right, so for N even the k′ steps of S[k] are N/2-k and this is doubled to N-2k for N odd. Below is a symmetry plot for N = 39 where the step-8 symmetry of S[2] is partitioned into 4 vertex-based steps corresponding to the step-8 effective star points. Even though N is odd, the DS[k] will evolve relative to S[2], which is a 2N-gon. Therefore the web steps of the DS[k] will be k′ = 2N/2 - k = N - k, just like the even case. This will imply that the 2nd generations for N even and N odd are similar, although the relative steps here are doubled. Compare this with the symmetry lines of the First Family where the 'parent' N-gon is step-1.
The Edge Conjecture and Recursion
Below we summarize these results about the evolution local to the S[2] tile of N. (i) We conjecture that these M[k] and D[k] tiles will always exist and converge to GenStar[N] with geometric scaling GenScale[N/2] and temporal scaling given by (ii) below. (ii) If Dk denotes the period of D[k], it will satisfy the second-order difference equation Dk = nDk-1 + (n+1)Dk-2 where n = N/2, D1 = n and D2 = n². The solution to this equation is Dk = n((n+1)^k - (-1)^k)/(n+2). Therefore the periods of the D[k] for N = 2n will satisfy this closed-form equation.
(iii) The ratio of the periods will be Dk/Dk-1 = ((n+1)^k - (-1)^k)/((n+1)^(k-1) - (-1)^(k-1)). This sequence of ratios will clearly approach n + 1 = N/2 + 1 in the limit. It will begin with n and exceed n+1 on the second iteration and then alternate low-high relative to the limit.
(iv) Since 'most' M[k] form on the edges of the D[k-1], they will have the same limiting ratios and also satisfy the same basic difference equation as the D[k], but the new initial conditions will be M1 = period of M[1] = n and M2 = period of M[2] = n(3(n-1)/2 + 2). (This latter period can be explained by the local geometry shown below.) For example, for N = 26 the recursion gives D[3] at 13³ + 14·13, so the proposed difference equation for the periods Dk of the D[k] is Dk = nDk-1 + (n+1)Dk-2 where n = N/2, D1 = n and D2 = n². Mathematica's solution is the closed form given in (ii); a sketch of the computation follows below. Any twice-odd pair like N = 26 and N = 13 will share the same web, but the dynamics of the embedded N = 13 may be very different from the dynamics local to N. Here with N = 10 there is only one non-trivial scale, so N = 5 and N = 10 share the same geometric scaling, and since this geometry is self-similar, they also share the same temporal scaling. This implies that N = 10 and N = 5 must share the same difference equations for the D[k] and M[k] (but possibly with distinct initial conditions). The resulting four equations are given in Section 5 of [H5] and will be reproduced below.
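The Mathematica computation referred to above can be reproduced with RSolve; a sketch (symbol names are ours):

(* closed form for the D[k] periods, with n = N/2 *)
RSolve[{d[k] == n d[k - 1] + (n + 1) d[k - 2], d[1] == n, d[2] == n^2},
  d[k], k] // FullSimplify
(* simplifies to d[k] = n ((n + 1)^k - (-1)^k)/(n + 2), the form in (ii) *)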
This invariance occurs at all scales and these regions are typically 'nested' as shown below.
As noted earlier every N-gon has an invariant local region that includes the S[1] and S[2] tiles.
All the tiles in this region would be expected to share similar dynamics, and neighboring tiles like S[3] and S[4] can have a significant effect on the geometry. Because of rotational symmetry, rotated copies of S[k] will share this same geometry. Here we look at these rotations and the resulting vertical 'towers' of S[k] which also share star points. Once again the odd cases have shared indices that are double the even case. These embeddings are of theoretical interest, but at least for the purposes of edge geometry we will regard the odd cases as relatively independent of the matching twice-odd case. This seems prudent until we fully understand how they are related. Therefore mutations should occur whenever gcd(N/2, k′) > 1, and the mutation will be the weave of two (N/2)/gcd(N/2, k′)-gons.
The 8k+2 Conjecture described above says that these N-gons will have an edge geometry driven by sequences of self-similar D[k] and M[k] tiles with known geometric and temporal scaling. The 8k+4 Conjecture says that the geometry is driven by the mutation of S[2]. The 8k+1 Conjecture says that these families will have a volunteer DS[2] to go along with the predicted DS[5]. The 8k+7 Conjecture says that the predicted DS[3] will generate dual DS[1]s at S[N-3] with step-2 webs which support at least S[1] tiles. The Twice-even S[1] Conjecture says that since S[1] has a step-2 web it can support 'step-2' tiles called Skx which are D tiles relative to S[k] (like N odd). These Skx include S[2], and S3x is an S[2] tile of S[3] for N ≥ 12. Every N-gon has a local web which is invariant, and this web would be expected to contain at least 1/4 of the S[k], so there is a link between edge geometry and the large-scale geometry. Both are driven by the cyclotomic field and the corresponding scaling field S_N with complexity φ(N)/2. Hopefully the examples below may shed some light on the issue of 'nature' (algebraic complexity) vs. 'nurture' (web and edge complexity under τ). As N increases there appears to be a surprising amount of diversity within the 'algebraic families' shown below, and there is evidence in [T] and [H3] to support that hypothesis. The blue 'darts' shown here are anchored by D[k] tiles, so they scale geometrically by GenScale[5], and we want to show that their temporal scaling is 6. The light-blue and dark-blue darts are the 2nd and 3rd 'generations'. Note that 5 copies of the 1st generation dart will almost tile the full generation '0', but tiling the 1st generation dart with light-blue darts will take at least 7 copies, and it will take at least 41 dark-blue darts to cover the previous dart. In [H3] we derived the following difference equations to describe this growth of decagons and pentagons: dn = 3dn-1 + 2pn-1; pn = 6dn-1 + 2pn-1. This yields the table below for one blue dart.
Eliminating pn gives the second-order difference equation dn = 5dn-1 + 6dn-2. This equation must also describe the growth of the D[k] for N = 10 as in the 8k+2 Conjecture, and indeed it is identical to Dk = nDk-1 + (n+1)Dk-2 where n = N/2. It is a simple linear second-order homogeneous difference equation which can be solved by assuming that Dk = C1 z1^k + C2 z2^k, where z1 and z2 are the roots of the characteristic equation z² - nz - (n+1) = 0 and C1 and C2 depend on the initial conditions. The characteristic equation factors as (z + 1)(z - (n+1)) = 0, so in both cases z1 = -1 and z2 = 1+n, and this explains why in both cases the ratio must approach 1+n in an alternating fashion. The first few periods are given below. In Section 4 of [H3] (2013) we gave this same difference equation for N = 5, but these results and others were obtained years earlier and communicated to Richard Schwartz at Brown. He passed them on to his colleague S. Tabachnikov, who had already studied this case, and in the following years we exchanged e-mails as their interests evolved to other areas. We did not realize that N = 5 was just a special case of a family of 8k+2 polygons based on N = 10. For us personally it was a simple case of 'bias' in favor of the beauty and elegance of primes like N = 5. What can N = 10 offer that is not already known for N = 5? Now there are likely an infinite number of 'generalized' N = 10s, but we know almost nothing about the matching N/2-gons. This seems to be a very gender-specific world, especially when dynamics are involved.
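The characteristic roots quoted above can be checked in one line:

Factor[z^2 - n z - (n + 1)]   (* (z + 1)(z - (n + 1)) up to sign arrangement *)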
As expected the geometry of these 'chaotic' regions is a mixture of scales. The first such region shown below is presided over by a virtual MS2[4] with a central S1[4] (D[4]) embedded in a 'sea' of larger S2[4] (and M[k]) tiles. These scale by 9, and this same limiting scaling will clearly apply to the entire default star region. Therefore this web W will have similarity dimension Log[9]/Log[1/GenScale[8]] ≈ 1.2465. Below we will compare this with N = 5 and 12.
N = 9
N = 7 and N = 9 are the 'boundary' cases of the 8k+7 and 8k+1 families, so it may not be prudent to make generalizations about these families based on these two cases. Concerning N = 9, the Edge Conjecture makes no predictions before DS[5], but the 8k+1 Conjecture says that volunteer DS[2]s will exist with potential for extended family structure. We will see that this is true for N = 9, but there is little evidence to show that this conjecture is the result of the embedding of 8k+1 in the 'well-behaved' 8k+2 family. As explained at Figure 17, there is what appears to be a multi-fractal convergence with an unknown spectrum of temporal scaling. For N = 9 the star[1] convergence appears to be much simpler, with a limiting temporal scaling of 20. This may be true because this convergence appears to take place inside 'islands' of single-scale self-similar dynamics which characterize N = 9 and 18, and to a lesser degree N = 7 and 14. The step-2 web of S[1] allows it to support M[1]s at its vertices, and this seems to be a stabilizing influence since these M[1] tiles serve an important role. The next level of this sequence is shown in the enlargement below. The local geometry of S[1][2] shows signs of cubic instability, but these issues are localized and do not affect the convergence. This is true for N = 18 also. (This is a Dc plot, so star[2] of S[2] is the {-1,0} vertex of -N.) It is an easy matter to track the tiles in this sequence to estimate their temporal scaling using the τ-periods of their centers. Since N = 10 has quadratic complexity, the web it shares with N = 5 has just one non-trivial scale and should be fractal in nature. As indicated earlier, N = 5 and N = 8 are the only non-trivial regular cases where the dynamics and singularity sets have been studied in detail. In [T] (1995) S. Tabachnikov derived the fractal dimension of W for N = 5 using 'normalization' methods and symbolic dynamics, and in [S2] (2006) R. Schwartz used similar methods for N = 8. In [BC] (2011) Bedaride and Cassaigne reproduced Tabachnikov's results in the context of 'language' analysis and showed that N = 5 and N = 10 had equivalent sequences. (The Twice-Odd Lemma implies equivalent webs but not equivalent dynamics.) For N = 5 earlier we gave an independent analysis of the temporal scaling based on difference equations, and here we will give the implications of the 8k+2 Conjecture for the matching cases of N = 10 and N = 5. As noted earlier, S[2] will have period N/2, so it will define two disjoint invariant regions and it is sufficient to track just the blue regions shown here. For N = 10 the first few periods are Dk: {5, 25, 155, 925, 5555, 33325, 199955, 1199725, 7198355} and Mk: {5, 40, 230, 1390, 8330, 49990, 299930, 1799590, 10797530}. For N = 5 the first few periods are Dk: {5, 35, 205, 1235, 7405, 44435, 266605, 1599635, 9597805} and Mk: {10, 50, 310, 1850, 11110, 66650, 399910, 2399450, 14396710} = 2Dk of N = 10.
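These listed periods agree with the closed form of the 8k+2 Conjecture; a quick check for N = 10 (n = 5):

n = 5; Dk[k_] := n ((n + 1)^k - (-1)^k)/(n + 2);
Table[Dk[k], {k, 1, 9}]
(* -> {5, 25, 155, 925, 5555, 33325, 199955, 1199725, 7198355} *)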
N = 11
This is the lone 'quintic' polygon (along with the matching N = 22). N = 11 is the first nontrivial member of the 8k+3 family. The Edge Conjecture predicts that for N odd, S[1] will be the DS[N-4]. On the positive side N will have the same edge length as D, even though they have different genders.
Figure 11.4 The shared geometry of S[1] and S[2]
By convention we will choose to study the left-side web of S[1] because the clockwise early web defines initial star points on this side of S[1], and these 'effective' star points are step-4 as noted above. It is our contention that the overall dynamics and geometry of this region are driven by these effective star points, as in the generalized First Family Theorem. Of course N = 11 is special in many ways, and the true nature of the 8k+3 family and the efficacy of these step-4 star points will be better illustrated by larger N values, such as N = 19, 27, 35 and 43. In general these step-4 families will contain very few remnants of normal First Families. The star points themselves will typically not have extended orbits because they will map to trailing edges of N within a few iterations. But these star points typically have one-sided limiting orbits which can be calculated exactly using surrogate initial points, and we will do this here for both Gx and Sk.
It appears above that StarS1[7] is vertex v6 of Sk2, but there is a small horizontal offset involved, and the same is true for τ¹⁶(StarS1[7]) and Sk. The Sk and Sk2 tiles are related by a simple rotation about the center of Gx, so we can work with either Sk or Sk2 once Gx is known.
Calculations for Gx (based on N = 11 at the origin and with hN = 1)
(i) Any exact point on an extended edge of Gx will define the matching star point, so to find StarGx[4] all that is needed is p1 = τ¹⁶(p0) where p0 = StarS1[7]. This is a simple calculation that Mathematica will do with exact arithmetic, but only if the correct vertex points in the orbit are known. Here it will likely generate an error because these star points will either have no image at all under τ or will soon map to a trailing edge of N. Therefore we will use a surrogate neighbor such as pn = p0 + {0, .000001}. This will put pn inside Sk2, which is an advantage because all points in a tile such as Sk2 must map together.
Tiles like Sk2 and Gx are prominent in the web, so it is easy to probe them with test points and find their periods. Every point inside Sk2 has period 338, except the center, which has period 169. These are called 'period-doubling' tiles. Here we only need to track 16 iterations of pn for Gx, but we will later track further iterations of pn to get the vertices of Sk, as shown by the dotted blue arrows above.
(ii) Ind = IND[pn,16] will generate the indices of the first 16 vertices of N in the orbit of pn. These are {8,11,3,7,10,3,6,9,11,1,2,4,6,8,9,10}, which means that the vertices visited will be c8, c11, etc., where by convention c1 is the 'top' vertex of N. (In general we prefer to use step-sequences of orbits, which are the first differences of these indices, because all the Sk will have the same step sequences, but not the same indices.) We will assume that p0 will have this same 'corner sequence', and if this is false the error will be large and easily detected.
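IND and PIM belong to our own software; for concreteness, here is a hedged sketch of IND consistent with its description (it reuses cross and the vertex search from the tau sketch in Section 1, and the verts ordering there happens to put c1 at the top):

(* indices of the vertices of N visited by the first k iterations of tau *)
IND[p0_, k_, vs_] := Module[{p = p0, c, inds = {}},
  Do[
    c = SelectFirst[vs, Function[v, AllTrue[vs, cross[v - p, # - p] <= 0 &]]];
    AppendTo[inds, First@FirstPosition[vs, c]];
    p = 2 c - p,
    {k}];
  inds]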
Calculations for Sk
(i) Now that cGx is known we can work with either Sk or Sk2 because they are related by RotationTransform[8π/11, cGx]. This implies that τ¹⁶(p0) is simply a rotation by 8π/11 of StarS1[7], but this calculation depends on knowing cGx.
(ii) As indicated earlier, it is possible to use the same pn point as a surrogate and simply extend the orbit to 169 iterations. So IND[pn,169] will generate the indices, but our software for PIM is based on the return map τ² and 169 is odd. One solution is to replace p0 with p2 = τ(p0) (and pn with τ(pn)). This is easy since the first index is known to be 8, so τ(pn) = 2c8 - pn, and IND[τ(pn),170] will yield the corner sequence for τ(pn), which should also be the corner sequence for p2 = τ(p0). Therefore P = PIM[p2,85] will give the exact τ² orbit for p2, which will end with the point px.
(iii) Any interior point can be used to find the center of a period-doubling tile because all points map to a reflection about the center under half of the period. The calculations above yield two candidate lines that must pass through the center. Using their intersection for the center avoids a distance calculation, which might be an issue with the scaling field S11. Now map cSk2, v5 and v2 to Sk using RotationTransform[8π/11, cGx], which is exact relative to S11.
(iv) In Sk2, v2 defines v6 and the 'height', which we call h1. Since v5 is known, the height defines v3, and the horizontal center line defines v1 and v4 because the slopes of the edges are known. This defines Sk, and the remaining Sk2, Sk3, Sk4 and Sk5 are either rotations or reflections about the center line joining Gx with StarS1[1].
(v) Sk clearly has a close geometric relationship with Gx and we will try to make this precise. Later we will show that these two tiles generate an 'offspring' called Sxx in a manner similar to the Sx tile of S[5] and Px at D.
There are two nested isosceles triangles here with heights h1 and h2, and Sk is embedded in the smaller triangle. The 'star[k] angles' of any odd N-gon have the form π - kθ where θ is 2π/11, so the angle at StarGx[4] is π - 8π/11. Therefore the tangent of this angle determines the ratio of the heights h1 and h2.
Calculations for Sxx
(i) We will find the parameters of the Sxx on the left side of Gx. This is a simple application of the rotation above, but the resulting exact trigonometric expressions are very lengthy (the expression itself is omitted here). The expression for Sxx[[6]][[1]] (which is star[1] of Sxx) is much worse and would run for more than 30 pages of normal print. The first few terms of that expression are shown below:
Figure 11.8 The first few terms in the trigonometric expression of the horizontal coordinate of vertex 6 of Sxx. For expressions like this that run for multiple pages, Mathematica asks if you want more (or less). At each stage there are 'unresolved' numbered terms that are eventually evaluated. Note that the whole expression is in grey as a warning that it is partial.
But because calculations within the scaling field S11 are so efficient, Mathematica only takes a few seconds to find the polynomial for these coordinates relative to GenScale, e.g. AlgebraicNumberPolynomial[ToNumberField[Sxx[[1]][[2]]/hN, GenScale], x]. This is identical to the lengthy trigonometric expression in Figure 11.8 above, but Mathematica has a very hard time simplifying such an expression unless it is told to do so in the context of S11.
Here the vertical coordinate of vertex 6 is -1, but in general the vertical coordinates must be determined relative to hN, not sN. Once again Mathematica has no problem doing this. There is little doubt that Sxx will have some form of local extended family. Earlier we showed that geometrically Sxx is closely related to Gx, and based on this plot it seems that there is also a close relationship with Sk2, because Sxx appears to be the first tile in a sequence of tiles converging to the star point shared by Sxx and Sk2. The second tile in that sequence can be seen above. Even when webs match in this way, the dynamics are different, and this combined count helps to minimize these differences. The dynamics of any composite N-gon allow for the possible 'decomposition' of expected orbits into groups of orbits with smaller periods. This makes it difficult to match tile counts with periods, but for self-similar webs the effect of these exceptions diminishes with each generation, and in the limit the period ratios will match the geometric ratios.
A comparison of the fractal dimension of the quadratic N-gons: N = 5, 8, 10 and 12
N = 5, 8, 10 and 12 have φ(N)/2 = 2, so they have quadratic complexity, where the only nontrivial scale is GenScale[N]. Since the webs are naturally recursive, a single scale should yield a self-similar web, and we will derive the similarity dimension of these webs below. Since the geometric scaling is known, the only issue is the 'temporal' scaling, which describes the limiting growth in the number of tiles. For self-similar webs this temporal scaling can be derived from a simple 'renormalization' process, where a representative portion of the web is scaled by GenScale[N] and mapped to itself under τᵏ as shown by the magenta lines below. (The web for N = 10 is identical to the web for N = 5, and the N = 8 and 12 cases are closely related since their cyclotomic fields are generated by {√2, i} and {√3, i}.)
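Once the temporal scaling T and the geometric scale s are known, the dimension computation is one line. The value used below for GenScale[8] is an assumption on our part (3 - 2 Sqrt[2], the inverse of 3 + 2 Sqrt[2]); it reproduces the N = 8 figure quoted earlier:

simDim[T_, s_] := Log[T]/Log[1/s];   (* similarity dimension *)
N[simDim[9, 3 - 2 Sqrt[2]]]          (* -> 1.2465, as quoted for N = 8 *)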
(i) As with N = 5 described earlier, each self-similar 'dart' is anchored by a D tile. There are 2 components to each dart and they are anchored by same-generation M tiles. Locally each of these M tiles is surrounded by 3 next-generation D tiles, so the D tiles will scale by 6 in the limit.
(ii) N = 8 is twice-even, so the D has an edge geometry that can support a next generation. We will discuss this issue below, but first we present an overview of this second generation on the edges of N. D[2] in this sequence is virtual, as shown in Figure 14.1 above. This yields a four-generation sequence with local geometry which is a mixture of single-scale self-similarity in blue and multi-scale dynamics in magenta, as shown in the insert. The temporal scaling here appears to be 113, and this is consistent with the convergence at star[2] of N.
N = 15
Below is an overview of the First Family of N = 15. For N odd the web steps are k′ = N-2k, so S[5] will have k′ = 15-10 = 5. The S[k] are 2N-gons, so the mutation should be the weave of two 30/gcd(30,5)-gons as shown below. S[6] is not shown here, but it is also mutated since k′ = 15-12 = 3 and 30/gcd(30,3) = 10. This is similar to S[3] with k′ = 9 and 30/gcd(30,9) = 10, so both S[3] and S[6] will be the weave of two decagons in barely perceptible mutations. For the 8k+7 family the volunteer DS[1]s will always be S[N-3]. The DS[1] survivors seem to benefit from the step-2 web combined with star[2] being 'effective'. This occurs because the count-down starts from star[N-2].
N = 16
N = 16 has 'quartic' complexity along with N = 15, 20, 24 and 30. Both N = 16 and N = 24 show similar behavior, as illustrated on the left below using DS[4] as a stand-in for S[4]. MuDS4 in black is a scaled reflected copy of the mutation of S[4] shown above. The revised Mutation Conjecture predicts that the DS[4] of S[2] will share the same mutation as S[4] itself, but for N = 16 it seems that the interaction with S[4] turns this into a simple extended-edge mutation. These 'lazy' mutations are actually very common, as witnessed by N = 11, 15 and 18. They are not 'canonical' riffles or weaves of two regular polygons, but if 2-gons are allowed this DS[4] mutation can be regarded as a 2-step canonical mutation. Since MuDS4 will have step-8 symmetry it has an effective k′ = 8 and N/gcd(8,N) = 2, so MuDS4 can be regarded as the rhombus weave of two 'tuples' shown in magenta and blue below. This appears to be a multi-fractal sequence with distinct even and odd temporal scaling, in keeping with the 'quartic' nature of N.
N = 17
N = 17 is order 8 and the second member of the 8k+1 family. Since N = 9 was strongly influenced by mutations, this is an important test case for the family. As predicted by the 8k+2 Conjecture there is an extended family structure at the foot of D, and this will be discussed in the context of N = 34. The 8k+1 Conjecture predicts that there will be a volunteer DS[2] to go along with the predicted DS[5]. Below in Figure 17.3 we give a plausible explanation for this fact. Except for the special case of N = 9, this DS[2] will have a step-4 web because k′ = N-2 and Mod[2N, N-2] will always be 4. This matches S[1], which is DS[N-4]. This web is highly fractured but clearly has overall step-4 symmetry, as can be seen by the rotated copies of S[6] above and below. This implies that star[7] is effective and star[3] is also effective. This in turn appears to support an S[2] and matching S[1] pairs. This is very similar to N = 25 to follow.
N = 18
Convergence does not always imply self-similarity of generations, but for N = 18 it appears that the 3rd generation at the foot of S[2] is self-similar to the 2nd generation, and this chain of scaled 2nd generations should continue. Even though the first generation is not part of this sequence, this is still a strong version of convergence involving generations instead of just D[k] and M[k]. The revised 8k+2 Conjecture says that all the DS[k] predicted by the Edge Conjecture will survive, and this leaves open the possibility of self-similar generations, but we believe that in general these generations will vary.
Figure 18.1 -The edge geometry
In the twice-odd family, the odd S[k] and DS[k] will be 'mutated' into N/2-gons, and it is our convention to display them in this form. The revised Mutation Conjecture of Section 2 predicts that there will be further (matching) mutations of S[3] and DS[3], since both have k′ = 9-3 = 6 with 9/gcd(9,6) = 3. Therefore S[3] and DS[3] will be the weave of blue and magenta triangles, as shown above for S[3] (but with reversed orientation because their webs are reversed).
The Mutation Conjecture says that since N/2-1-k′ = 2, the mutation base will run from star[2]. The elongated octagons which surround PM are collectively called Dx. Our Web Conjecture says that their edges (relative to the edge of N) should be in the scaling field S9 along with PM. One way to see this is to embed First Family members from DS[4] (or S[2]) as shown above.
N = 19
For N = 19, S[1] will be DS[N-4] = DS[15]. Algebraically N = 19 has complexity 9 along with N = 27, which is the next member of the 8k+3 family.
There may be an 8k+3 Conjecture which says that for 'most' family members there will be a conforming volunteer between pairs of DS[k]. The one known exception is N = 35.
N = 20
The (known) families of these M tiles show no clear relationship with D2. This D2 is clearly a 2N-gon and appears to share a vertex with Dx below. This Dx has period 2546 and period doubling, so the center (and height) can be found to arbitrary precision. This means that D2 is also known to arbitrary precision, but we do not have the exact value of a second star point. Since S[2] has k′ = N/2-2 = 8 and gcd(20,8) = 4, S[2] will consist of two 'woven' N/4-gons, so here it will be an equilateral decagon which is the 'Riffle' or weave of two regular pentagons with slightly different radii. The 8k+4 Conjecture gives the expected implications of this mutation. This is a 'mod-16' conjecture, and the two branches are anchored by N = 12 and N = 20. In all cases the predicted DS[4]s will be 'almost-vertex' tiles of the larger N/4-gons, and there will be real or virtual Px tiles that will be actual vertex tiles of the smaller N/4-gons. These Px tiles will always be 'parent' tiles of the DS[4], but that will occur in two different ways.
N = 21
N = 21 has algebraic order 6 along with N = 13, 26, 28, 36 and 42. Both N = 21 and N = 13 share this complexity. It is clear that Px is like a 'swing-state' between these two influences. This will have a lasting effect on the limiting geometry, as can be seen in the plot above. Returning to the Px region, it is clear that the Sx tile is more closely aligned with Px than DS[5], but they both have copies of Sx as satellites. At this scale it is hard to see, but the (right-side) star[4] of Sx supports a small tile which is weakly conforming to this mutual star point. It is highly likely that there is a convergent sequence of such tiles. Throughout this region there are clusters of tiles which appear to be invariant. These colonies have the potential to foster endless chains of future generations with their own unique geometry. The edges of Px support similar colonies. It is likely that almost all N-gons will have tiles on all scales, and it is equally likely that these tiles will exist in diverse environments. Here this future geometry would be expected to retain some quartic influence.
N = 22
GenScale[22] is our traditional generator of the scaling field S11 = S22. When N is 8k+2, there should be a matching convergent sequence of tiles, but here the sequence is almost entirely virtual. However the local geometry of this GenStar point is still replicated everywhere, so at all scales there should be copies of the origin. This means there should be colonies like those seen above throughout the geometry. Below we will track some of these invariant colonies in the vicinity of Mx. One way to probe a star point is to generate the parameters of ideal tiles that would exist and then iterate the centers under τ to look for temporal scaling. It is not surprising that there are no obvious coherent sequences here, but it is easy to find colonies similar to those in the vicinity of Sx and the Sk tiles of N = 11; see Figure 11.7. Here we will examine one of these 'island' colonies in the vicinity of Mx. Over a period of years we have studied these invariant islands local to Mx. One of them is called N40, and it is shown at the right below. In the left panel N40 is shown with its reflection near the S2 tile, which is just off the screen above. See the overview in Figure 22.5. It was no surprise to discover that the geometry of N40 is congruent to a region close to a star point. There is a similar (but not identical) pair that points to the center of the satellite of S1.
These islands have their own scaling and geometry, which appears to be more uniform than the surrounding geometry. This may be because they all have a similar origin in the vicinity of the 4th or 5th (virtual) generations at GenStar. In this sense they could provide some insight into the geometry of 'future' generations. One clear indication of its origin is that it is symmetric with respect to its center line, which is just a continuation of the blue line of symmetry running from the center of S[1] to star[1].
We have seen with N = 14 and N = 18 that in these twice-odd cases the S[1] tile may have 'hidden' structure based on its dual role as N/2-gon and N-gon. This is also true for N = 22, as shown below.
N = 24
The D[k] converge to GenStar[N], and it has displacement -1 with the Dc convention. D[3] can be seen in the 3rd generation enlargement below. This failure may be due to the interaction of the mutations for N = 24. From experience with N = 9, 12 and 16, it is clear that individual mutations can evolve in a predictable fashion, but there is no theory that attempts to explain how distinct mutations interact. Here it is clear that D[2] is not formed in a 'normal' step-2 fashion, and the only candidate for an M[3] is highly mutated.
The S[6] tile of D[1] is also mutated, but it is possible to construct the resulting tile based on the star points of the unmutated S[6]. However each generation becomes more difficult to track, and the 4th generation shown below is almost unrecognizable. There is no theory for '2 out of 4' constructions as there is for '2 out of 2' with N twice-even, but this 3-out-of-4 web splitting does occur naturally as the web evolves. In general these step-4 webs are not well behaved. Typically they have step-4 rotational symmetry with large-scale structures like those shown here.
Appendix: 'Deep Field' maps of the edge geometry for N = 19 and N = 200
In his Wikipedia article on the outer billiards map (https://en.wikipedia.org/wiki/Outer_billiard), Richard Schwartz listed the foremost unanswered questions. One of these was: Show that outer billiards relative to a regular polygon has almost every orbit periodic.
This says that the points with non-periodic orbits should have Lebesgue measure 0. This Hamiltonian 'phase-space' conjecture has a long history beyond outer billiards and has never been proven, but we believe it is true. From a practical standpoint it means that every web W should be dominated by periodic tiles and this 'white matter' should have 'full measure' leaving only measure 0 for 'dark matter'. For the quadratic cases of N = 5,8,10 and 12, this is clearly true because the web W has a simple fractal structure where the non-periodic exceptional points are at most countable. For regular N-gons, these non-periodic orbits cannot originate inside tiles because every point in a tile has the same period and the 'inner star' region around N is invariant and bounded. Therefore no tile with non-zero measure can have a non-periodic orbit.
Any non-periodic point can potentially be of value because it may be possible to use the orbit to illuminate the tile borders. Indeed some of these orbits appear to be locally 'dense' in the limiting web. The star points of N are technically non-periodic, because they have no image under τ or τ⁻¹, but the neighborhoods of these 'saddle points' have the potential to act as 'candles' to illuminate the web structure. No point on an extended edge of N can be periodic because these points have no inverse image, but such points could be non-periodic and never quite map to an extended trailing edge. N = 5 has such orbits. Any orbit with very long period can possibly be used to trace the details of the web. We conjecture that any 'dark matter' always dissolves into tile structure on closer examination, so 'most' N-gons will have structure on all scales. By default, the images generated by Mathematica are vector-based Postscript files and we typically use 35-decimal-place accuracy for each point, so there is virtually no loss of detail on enlargement. But from a practical standpoint it is necessary to convert images from vector form to 'raster' pixel form for display or printing. This is usually done with a program like Photoshop or Adobe Illustrator. To keep the file size down, most of the images in this paper use a modest 200 dpi, which would enable one or two levels of screen enlargement. These sample raw images were scanned by Photoshop at 600 dpi to give 7200 by 3500 pixels, which is about 25 Mb raw and 4 Mb compressed. The original Postscript file from Mathematica was 400 Mb. Even on a fast computer it can be a time-consuming process for Mathematica to generate these files. This N = 19 data set had about 1 million points and took more than 20 minutes to generate the Postscript file. It is not clear whether large N values with higher algebraic complexity may yield denser webs with more potential for non-zero Lebesgue measure. With the Dc map the edge length is fixed at 1 and there is a nominal price to pay for larger N values, primarily the smaller rotation angle w = 2Pi/N and inherent loss of accuracy. For N = 200 the relevant scale is on the order of .0002467, so it will be a challenge to explore future generations. These images of N = 19 and N = 200 each took about 5 billion iterations to generate a paltry 1 million points. In the limit with the convention of edge length 1, the rotation angle of N would vanish and the N-gon would become an unbounded ray.
"year": 2021,
"sha1": "e1732362126d424386fffc44d8cf59c30844708d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e1732362126d424386fffc44d8cf59c30844708d",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Weaker Braking Force, A New Marker of Worse Gait Stability in Alzheimer Disease
Background: Braking force is a gait marker associated with gait stability. This study aimed to determine the alteration of braking force and its correlation with gait stability in Alzheimer disease (AD).
Methods: A total of 32 AD patients and 32 healthy controls (HCs) were enrolled in this study. Gait parameters (braking force, gait variability, and fall risk) in the walking tests of Free walk, Barrier, and Count backward were measured by the JiBuEn® gait analysis system. Gait variability was calculated as the coefficient of variation (COV) of stride time, stance time, and swing time.
Results: The braking force of AD patients was significantly weaker than that of HCs in the three walking tests (P < 0.001, P < 0.001, P = 0.007). Gait variability of AD patients was significantly higher than that of HCs in the Count backward test (COVstride: P = 0.013; COVswing: P = 0.006). Fall risk of AD patients was significantly higher than that of HCs in the three walking tests (P = 0.001, P = 0.001, P = 0.001). Braking force was negatively associated with fall risk in the three walking tests (P < 0.001, P < 0.001, P < 0.001). There were significant negative correlations between braking force and gait variability in Free walk (COVstride: P = 0.018; COVswing: P = 0.013) and Barrier (COVstride: P = 0.002; COVswing: P = 0.001), but not Count backward (COVstride: P = 0.888; COVswing: P = 0.555).
Conclusion: Braking force was weaker in AD patients compared to HCs, reflecting the worse gait stability of AD. Our study suggests that weakening of braking force may be a new gait marker to indicate cognitive and motor impairment and predict fall risk in AD.
INTRODUCTION
Alzheimer disease (AD) is a neurodegenerative disease characterized by impairment of cognitive functions such as recall, orientation, calculation, attention, and execution, resulting in decline of life quality, disability, and mortality (Tsai et al., 2019). AD is a chronic and progressive disease with a clinical duration of 8-10 years (Masters et al., 2015). The pathological hallmarks of AD are the deposition of β-amyloid (Aβ) plaques and the formation of neurofibrillary tangles, recognized as unique characteristics of the disease (Jack et al., 2018). The increasing incidence of AD has placed a great economic burden on societies and families (Masters et al., 2015; Alzheimer's Association, 2019). Currently, no therapy can reverse the underlying mechanisms of the disease. The management of AD is focused on delaying disease progression and treating comorbidities (Masters et al., 2015).
In AD, gait stability usually worsens and fall risk increases (Sheridan et al., 2003). The motor abilities of AD patients gradually deteriorate, eventually leaving patients unable to walk. Impaired motor abilities and fall events complicate the disease, leading to a poorer prognosis and placing a huge burden on caregivers (Schirinzi et al., 2018a). Therefore, a better comprehension of AD-related changes in gait stability may contribute to the development of methods to assess cognitive and motor abilities and of interventions to delay dementia and prevent falls.
Cognitive function plays an important role in normal walking; it is required to receive and analyze environmental information and to adjust posture to avoid tripping or falling. The impairment of cognition, especially attention, execution, and working memory, may lead to poor gait performance and fall events. Gait abnormalities are not only concomitant symptoms of AD, but also signs of cognitive decline (Montero-Odasso et al., 2012a).
Studies using structural and functional brain imaging have shown that cognition and motor control share the same brain regions, particularly in the frontotemporal lobes (Montero-Odasso et al., 2017). The 'dual-task paradigm' (walking while performing an attention-demanding task) has been recognized as the optimal test to study the interaction between cognition and motor control (Pelosin et al., 2016). In a dual-task, two simultaneous tasks interfere with each other, competing for cortical resources (Montero-Odasso et al., 2012a), thus making the paradigm more sensitive in detecting impairment of cognition and motor control, which has been demonstrated in several neurological disorders. With the addition of cognitive tasks, gait abnormalities were more pronounced in patients with AD (Sheridan et al., 2003), Parkinson disease (PD; Pelosin et al., 2016), and multiple sclerosis (Liparoti et al., 2019) than in controls.
Braking force, a gait marker associated with gait stability, is defined as an active force that reverses the fall of the center of mass (COM), before the swing leg touches the ground during the single-support phase (Chastan et al., 2009).
The goal of postural control while walking is to maintain the COM (a point equivalent of the total body mass in the overall reference system) within the base of support to maintain balance. When an individual swings a leg forward (the raised leg is called the swing leg), the COM falls vertically because of gravity. During the single-support phase, the fall of the COM needs to be halted by braking force before the swing leg touches the ground in order to keep the COM within the base of support. For patients with balance disorders, such as PD patients without medication, braking force is absent, and the fall of the COM is halted passively by the swing leg touching the ground (Chastan et al., 2010; Maillot et al., 2014).
Braking force has become an important parameter in the field of biomechanics. The altered braking mechanism could indicate gait instability and falls in the elderly (Maillot et al., 2014), PD, progressive supranuclear palsy (PSP; Chastan et al., 2009), and peripheral neuropathy (Meier et al., 2001). However, the alteration of braking force and its correlation with gait stability in AD have not been studied.
Gait variability is an established marker of gait stability used to predict fall events (Montero-Odasso et al., 2012b). It is commonly used to evaluate the gait performance of patients with cognitive impairment in the dual-task paradigm (Montero-Odasso et al., 2012a). Patients with AD and the behavioral variant of frontotemporal dementia (FTD) showed higher gait variability, manifested as worse stability parameters in single-task walking, and these stability parameters further deteriorated in dual-task walking (Rucco et al., 2017).
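For reference, the COV used for these stability measures is the conventional ratio of standard deviation to mean, COV = (SD/mean) × 100%, computed separately for stride time, stance time, and swing time; the formula is the standard definition, and the parameter list is the one given in our Methods.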
In summary, we hypothesized that braking force in AD is weakened and differs between single- and dual-task conditions, and that this is associated with worse gait stability reflected by higher gait variability and increased fall risk. This study aimed to determine the changes in braking force, gait variability, and fall risk under single- and dual-task walking in AD patients compared with healthy controls (HCs), and to explore the relationships among them.
Study Subjects
In this clinical study, 32 AD patients were recruited from the Memory Clinic of the Neurology Department, the First Affiliated Hospital of Wenzhou Medical University, and 32 HCs were recruited from the physical examination center of the outpatient department. Demographic data such as gender, age, educational level, height, weight, previous medical history, and medication history were collected. All participants completed the Mini-Mental State Examination (MMSE) and met the previously reported criteria for intact cognition (illiteracy >19; primary school >22; middle school and above >26; Zhang et al., 1999). The MMSE assessments were carried out by a well-trained neuropsychologist blinded to performance in the gait tests.
The disease duration of each patient was recorded, and magnetic resonance imaging (MRI) data were available for all patients. Based on clinical manifestations combined with MRI characteristics and MMSE scores, all patients met the 1984 "probable AD" diagnostic criteria of the National Institute of Neurological and Communicative Disorders and Stroke (McKhann et al., 1984). The exclusion criteria were as follows: (1) walking that required assistance or auxiliary equipment, such as crutches or a four-wheel walking aid; (2) diseases of the lower limbs, including muscle atrophy of the lower limbs, knee replacements, hip replacements, or a history of leg fractures within a year; (3) previous history of stroke or other neurological diseases, including PD, multiple sclerosis, myasthenia gravis, cerebellar disease, and myelopathy; (4) severe mental illness (major depression, bipolar disorder, schizophrenia, alcohol abuse, drug addiction); (5) severely impaired cognitive function or inability to understand and complete the three prescribed walking tests; and (6) unwillingness to sign the informed consent.
HCs were free from the above exclusion criteria and from other chronic diseases requiring long-term medication, such as hypertension and diabetes. HCs were also required to score within the normal MMSE range described above and to be free from cognitive impairment.
This study was approved by the ethics committee of the First Affiliated Hospital of Wenzhou Medical University. Participants signed the informed consent in the presence of a neurological physician before enrollment. A guardian or family member could sign the informed consent on behalf of an AD patient whose cognitive function was insufficient to understand the content of the protocol.
Gait Analysis
Gait performance under single- and dual-task conditions was assessed with an electronic walkway system (JiBuEn gait analysis system, developed and produced by Hangzhou Zhihui Health Management Co., Ltd.; Xie et al., 2019). Under the supervision of professional doctors, the 32 AD patients and 32 HCs completed all walking tests, comprising one single-task condition (Free walk) and two dual-task conditions (Barrier and Count backward). With the doctors' help, all participants wore the gait analysis equipment (a pair of shoes for gait detection and related sensor transmission modules worn on the waist, left thigh, right thigh, left calf, and right calf). All walking tests were conducted in a quiet environment with a clean floor and a professional escort. Signs were placed at both ends of the walkway, 10 m apart. In Free walk, participants walked at a comfortable pace without any outside intervention. In Barrier, participants walked past two obstacles placed 30 cm apart on the walking path, without receiving any prompts during walking. In Count backward, participants counted backward from 100 while walking; no corrections were given even when counting errors occurred.
Stride time was measured as the sum of stance time and swing time (Darweesh et al., 2019). Stride time variability was determined as the coefficient of variation (COV) of stride time and is denoted COV_stride. Likewise, stance time variability and swing time variability were the COVs of stance time and swing time, denoted COV_stance and COV_swing, respectively (Boripuntakul et al., 2014). The normal range of COV_stride, COV_stance, and COV_swing is 0% to 10%. Braking force was defined as the ratio of the heel-strike force to its theoretical extremum; it does not exceed 1 (normal range, 0.65-1). Fall risk was calculated by a formula developed by the system from the collected data (Verghese et al., 2009; Taylor et al., 2013), and its normal range is 0% to 15%.
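To make these definitions concrete, the sketch below computes the three variability parameters from per-cycle timing data; the timing values are hypothetical placeholders, and the COV is simply the sample standard deviation divided by the mean, expressed in percent.

```python
import numpy as np

def cov_percent(times):
    """Coefficient of variation: sample SD divided by mean, in percent."""
    times = np.asarray(times, dtype=float)
    return times.std(ddof=1) / times.mean() * 100.0

# Hypothetical per-gait-cycle timings (seconds) from one walking test
stance = np.array([0.68, 0.71, 0.69, 0.72, 0.70])
swing = np.array([0.42, 0.40, 0.44, 0.41, 0.43])
stride = stance + swing  # stride time is the sum of stance and swing time

for name, series in (("COV_stride", stride), ("COV_stance", stance), ("COV_swing", swing)):
    print(f"{name} = {cov_percent(series):.2f}%")  # normal range reported as 0-10%
```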
All parameters obtained from the three walking tests were calculated and exported automatically according to the motion signals collected and transmitted by sensors, and the data were stored on a local hard disk.
Statistical Analysis
Continuous variables are described as mean ± standard deviation (SD) or median [interquartile range (IQR)], depending on whether the data were normally distributed. Normality was verified with the Kolmogorov-Smirnov test. Categorical variables are reported as counts (percentages). The Student t-test was used to compare normally distributed variables; asymmetrically distributed variables were compared with the Mann-Whitney test. Categorical variables were compared using the χ² test. The Pearson correlation test was used for bivariate correlations. P ≤ 0.05 was considered statistically significant. SPSS software (version 22.0 for Windows) was used for the statistical analyses.
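A minimal sketch of this test-selection logic in SciPy is shown below; the data arrays are placeholders, not study data, and the Kolmogorov-Smirnov call fits the normal parameters from the sample, which is a simplification of what SPSS does.

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Student t-test if both samples pass a Kolmogorov-Smirnov normality check, else Mann-Whitney U."""
    def normal(x):
        return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue > alpha
    if normal(a) and normal(b):
        return "Student t-test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(a, b).pvalue

# Placeholder braking-force values for the two groups
ad = np.array([0.52, 0.58, 0.49, 0.61, 0.55, 0.57])
hc = np.array([0.78, 0.81, 0.74, 0.80, 0.77, 0.79])
test, p = compare_groups(ad, hc)
print(test, f"P = {p:.3f}")  # P <= 0.05 is treated as significant

# Bivariate correlation, e.g., braking force vs. fall risk within one group
fall_risk = np.array([18.0, 14.5, 21.0, 12.0, 16.5, 15.0])
r, p_corr = stats.pearsonr(ad, fall_risk)
```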
Demographic and Clinical Characteristics
The demographic and clinical characteristics of the patients with AD and the HCs are summarized in Table 1. There was no difference in age or body mass index (BMI) between the two groups (P > 0.05). The proportion of women was higher in the AD group (P = 0.044), and the MMSE score of the AD group was significantly lower than that of the HCs (P < 0.001). The medication history of the AD group is provided in Supplementary Table S1. Medications taken by patients included cholinesterase inhibitors (ChEIs), memantine, selective serotonin reuptake inhibitors (SSRIs), and antipsychotics; the specific proportions of use and doses are listed in Supplementary Table S1.
Gait Parameters
Gait parameters are listed in Table 2. COV_stride and COV_swing showed no between-group differences in Free walk and Barrier, but differed significantly in Count backward (COV_stride: P = 0.013; COV_swing: P = 0.006) between patients with AD and HCs. COV_stance showed no difference in any walking test. Braking force (P < 0.001, P < 0.001, P = 0.007) and fall risk (P = 0.001, P = 0.001, P = 0.001) differed significantly between patients with AD and HCs in all three walking tests. The braking force of patients with AD and HCs in the three tasks is illustrated in Figure 1; COV_stride, COV_stance, and COV_swing are illustrated in Figures 2-4. Fall risk is not shown in the figures.
The Pearson correlations between gait variability, braking force, and fall risk are shown in Table 3. There were significant negative correlations between braking force and gait variability in Free walk (COV_stride: P = 0.018; COV_swing: P = 0.013; Supplementary Figures S1A,B) and Barrier (COV_stride: P = 0.002; COV_swing: P = 0.001; Supplementary Figures S2A,B), but not in Count backward (COV_stride: P = 0.888).
FIGURE 1 | Comparison of braking force between patients with Alzheimer disease (AD) and healthy controls (HCs) in Free walk, Barrier, and Count backward. *P < 0.05, **P < 0.001.
DISCUSSION
Our present study demonstrated that, compared with HCs, braking force was reduced and fall risk was increased in AD under both single- and dual-task walking, whereas gait variability in AD was significantly higher than in HCs only under dual-task walking. Braking force correlated with fall risk in all three walking tests, but correlated with gait variability only in the walking tests without much cognitive distraction. The fall risk of AD patients was significantly higher than that of HCs, consistent with previous research (Sheridan et al., 2003; Montero-Odasso et al., 2012a). Braking force was related to fall risk, indicating that braking force can reflect the gait stability of AD to some extent.
The neural mechanism by which braking force contributes to gait stability is not fully understood (Chastan et al., 2009), and research on the correlations between braking force and brain structure and function is sparse. A study in healthy individuals showed that the right prefrontal lobe was activated during the braking process, especially the inferior frontal gyrus and the premotor cortex. In addition, degeneration of the hippocampus impairs visual, vestibular, and proprioceptive perception, resulting in incomplete reception of the environmental information necessary to maintain normal walking (Annweiler et al., 2012). During the braking process, the brain must collect visual information and control muscle locomotion to maintain the proper magnitude and direction of the braking force (Meier et al., 2001). Disruption of visual and somatosensory inputs has been reported to decrease braking force in healthy adults (Chastan et al., 2010). Therefore, the atrophy of the hippocampus and cortex in AD pathogenesis would be expected to weaken braking force. Moreover, midbrain atrophy has been observed in PD and PSP patients with impaired braking force (Chastan et al., 2009, 2010); the role of the midbrain in the braking force of AD remains to be explored. It is worth noting that AD, PD, and PSP are all neurodegenerative diseases in which cognitive and motor impairments constitute the most common manifestations (Schirinzi et al., 2020). A reduction of Aβ42 in cerebrospinal fluid (CSF) has been observed in AD, PD, and PSP, and is typically associated with a higher load of Aβ42 accumulation in the brain (Blennow et al., 2016; Schirinzi et al., 2018b), which disrupts neurotransmission and synaptic plasticity and triggers neurodegeneration (Martorana et al., 2015). The concentration of Aβ42 in the CSF of AD and PSP patients was inversely proportional to the severity of motor impairment, as reflected in motor-ability scores (Schirinzi et al., 2018a,b). Among PD patients, the diffuse malignant subtype, with more severe cognitive and motor impairment, showed an AD-like CSF profile with lower levels of Aβ42 compared with other subtypes; this is consistent with postmortem studies in which the diffuse malignant subtype had more Aβ plaques and more cortical degeneration (Fereshtehnejad et al., 2017). These findings suggest that amyloidosis may be associated with motor impairment in neurodegeneration.
Furthermore, Aβ42 can deposit in cholinergic nuclei, interfering with the cholinergic system, which is susceptible to Aβ42-related degeneration (Schirinzi et al., 2018b). The cholinergic system plays a vital role in motor control. Cholinergic transmission is involved in attention and executive function, and its disruption can lead to impaired attentional ability and increased fall risk. Cholinergic transmission has been reported to be disrupted in AD (Schirinzi et al., 2018a) and in PD and PSP (Gilman et al., 2010), and cholinergic dysfunction correlates with motor impairment in AD (Schirinzi et al., 2018a). The aforementioned studies on AD, PD, and PSP, in which braking force was shown to be weakened, suggest a possible underlying mechanism: the amyloidosis of AD pathogenesis is likely to impair motor control and weaken braking force by interfering with cholinergic transmission.
Central cholinergic activity shows progressive attenuation from older non-fallers to older fallers to PD patients, and it is negatively correlated with dual-task cost. Cholinergic dysfunction may disrupt attention distribution, which is particularly demanded in dual-tasks, resulting in impaired motor control and increased risk of falling (Pelosin et al., 2016). In dual-tasks, attention is allocated to the cognitive task, so gait stability is not well maintained. Cholinergic dysfunction in AD heightens dual-task cost, and a more difficult cognitive task, demanding more cortical resources, produces a greater dual-task cost. Accordingly, the differences in COV_stride and COV_swing between patients with AD and HCs enlarged as the walking tasks became more complicated, and significant differences were found only in dual-task walking while counting backward. Thus, gait variability was more sensitive to dual-task conditions in reflecting cognitive function, consistent with previous research (Montero-Odasso et al., 2012a,b). However, braking force did not show this trend in our results. Further studies are required to verify the effect of cholinergic activity on braking force and gait variability.
We then explored the correlation between braking force and gait variability. To our knowledge, this is the first study to list stride time together with its components, stance time and swing time, and to compare the correlations among them. COV_swing showed the same trends as COV_stride across groups and conditions, whereas COV_stance did not, from which it can be inferred that COV_stride was mainly influenced by COV_swing. The swing phase of gait therefore appears to play a vital role in gait stability. Braking force acts in the single-support phase, which is the swing phase, to maintain gait stability, supporting the correlation between braking force and gait variability. The Pearson correlation results shown in Table 3 and Supplementary Figures S1-S3A,B also verified that braking force was correlated with gait variability, but only in Free walk and Barrier, not in Count backward. Cognitive demands significantly increased gait variability but only slightly reduced braking force.
A kinematics study in patients with AD and FTD can shed some light on the braking-force results. That study found that the range of motion (RoM, reflecting the magnitude of joint excursion) was reduced in both groups compared with HCs in single- and dual-tasks (Rucco et al., 2017). From the biomechanical analysis, in single-task the RoM of AD patients was impaired only in the swing phase, a critical period for joint motion to maintain dynamic stability (Rucco et al., 2017), whereas with the addition of a cognitive task, the RoM of AD patients was impaired in both the stance and swing phases (Rucco et al., 2017). Because braking force does not act in the stance phase (Maillot et al., 2014), this suggests that the effect of the cognitive task on gait has little correlation with braking force. As for FTD, the impairment of RoM was worse than in AD but did not deteriorate with the addition of cognitive tasks (Rucco et al., 2017), similar to the behavior of braking force. The pathogenesis of FTD is atrophy of the frontotemporal lobes (Rucco et al., 2017). As mentioned previously, the motor control area lies mainly in the frontotemporal lobes (Montero-Odasso et al., 2017), and the area associated with braking force is the prefrontal cortex. It might therefore be inferred that weaker braking force involves only the lesions of the frontotemporal lobes responsible for motor control in the course of AD. This inference needs further investigation through biomechanical research.
In addition, the effect of drugs on gait must be mentioned. ChEIs and memantine have been developed for the treatment of AD (Sharma, 2019). ChEI and memantine treatment can not only decrease gait variability in AD (Montero-Odasso et al., 2009; Beauchet et al., 2013), but also improve balance and stability in PD (Devos et al., 2010; Henderson et al., 2016; Lauretani et al., 2016). This suggests that the gait abnormalities in our AD sample were underestimated because of antidementia drug use. In addition, a relatively low percentage of patients took SSRIs and antipsychotics for accompanying emotional and psychotic symptoms, and SSRI and psychotropic use has been reported to increase fall risk (Liu et al., 1998; Leipzig et al., 1999). Drug use, as a covariate of gait, likely exerts some effect on the gait performance driven by AD pathogenesis itself; more rigorous studies are needed to control for drug use.
Individuals with cognitive impairment are at high risk of falling, yet interventions that work for individuals with normal cognition may not work well for them, suggesting that the mechanisms of falling may differ (Montero-Odasso et al., 2012a). Measuring cognition-related gait markers, such as braking force and gait variability, could better reflect the disease severity and fall risk of AD, providing a reference for nursing and rehabilitation.
We acknowledge that this cross-sectional study was preliminary and has some limitations. The confounding of covariates and the lack of longitudinal observation make us conservative about interpreting weaker braking force as a cause of gait instability in AD; further confirmatory studies are required that adjust for potential covariates, including drug use, and include follow-up over time. Moreover, the small sample limited the analysis of multiple cognitive domains; it would be informative to clarify the relationship between braking force and specific cognitive domains, particularly visual-spatial capacity and executive function. Analyzing the association of braking force with cortical and subcortical alterations by functional imaging, and with muscle-joint movement by biomechanical studies, may also explain more about its decline in AD. Another important limitation is the incomplete characterization of the AD patients: we did not perform lumbar puncture or positron emission tomography neuroimaging to detect Aβ pathology. Furthermore, even though each subject was screened with the MMSE, it cannot be ruled out that some HCs were Aβ-positive. Further efforts to optimize the inclusion criteria and research methodology are necessary to enhance the reliability of the results.
Despite these limitations, our study has several strengths. We pioneered the study of braking force in dementia and explored its correlations with common gait markers. Our data suggest that weaker braking force is related to worse gait stability in AD and that patients with AD may benefit from gait examination, providing a preliminary foundation for practical methodologies.
CONCLUSION
Our study is the first to show that weaker braking force is a sign of worse gait stability in AD. Braking force was negatively correlated with fall risk and correlated with gait variability under conditions without much cognitive distraction. Braking force is therefore expected to be a novel gait marker for estimating fall risk without the addition of cognitive tasks. Further prospective research is warranted to investigate its correlation with cognition, motor control, and gait variability.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the ethics committee of the First Affiliated Hospital of Wenzhou Medical University. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
ZW designed the study. QC, MW, YW, YH, YF, and XS participated in organizing subjects and data acquisition. QC performed the statistical analysis, interpreted the data, and prepared the corresponding tables and figures. QC and MW drafted the manuscript. WK revised the manuscript. ZW, JH, and XY supervised the study. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported in part by the Natural Science Foundation of Zhejiang Province (LY19H090013) and the Science and Technology of Medicine and Health Project of Zhejiang Province (2020KY637). The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the article.
"year": 2020,
"sha1": "0807475539cc3ec9651beb3ec6fabca77664381a",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2020.554168/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0807475539cc3ec9651beb3ec6fabca77664381a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Nonlinear vibration behavior of graphene resonators and their applications in sensitive mass detection
Graphene has received significant attention due to its excellent mechanical properties, which have driven the emergence of graphene-based nano-electro-mechanical systems (NEMS) such as nanoresonators. The nonlinear vibration of a graphene resonator and its application to mass sensing (based on nonlinear oscillation) have been poorly studied, even though a graphene resonator can easily reach the nonlinear vibration regime. In this work, we have studied the nonlinear vibration of a graphene resonator driven by a geometric nonlinear effect due to an edge-clamped boundary condition, using a continuum elastic model, namely a plate model. We show that an in-plane tension can modulate the nonlinearity of the resonance of a graphene. We find that the detection sensitivity of a graphene resonator can be improved by using nonlinear vibration induced by an actuation-force-driven geometric nonlinear effect, and that an in-plane tension can control the detection sensitivity of a graphene resonator operating in both the harmonic and nonlinear oscillation regimes. Our study suggests design principles for graphene resonators as mass sensors and for developing novel detection schemes using graphene-based nonlinear oscillators.
Background
Graphene has recently attracted the scientific community due to its excellent electrical [1-3] and/or mechanical properties [4-8]; these remarkable properties have enabled the exploitation of graphene for the development of nano-electro-mechanical systems (NEMS) such as nanoresonators [9,10]. Specifically, since a pioneering work by researchers at Cornell [11], graphene has been extensively considered for designing nanoresonators that exhibit a high-frequency dynamic range [11-13] with favorable high Q factors [13-17]. The high-frequency dynamics of graphene is attributed to its excellent mechanical properties, such as a Young's modulus of approximately 1 TPa [4-8,18]; note that the resonant frequency is linearly proportional to the square root of Young's modulus when a device operates in harmonic oscillation [9,10]. Until recently, most research (except a work by Eichler et al. [17]) has focused on the harmonic oscillation of graphene resonators [11-15]. The nonlinear vibration of a graphene resonator, however, has not been well studied, although a recent study [17] reports an experimental observation of such nonlinear vibration. Nonlinear elastic deformation of graphene is ubiquitous because a monolayer graphene is an atomically thin sheet whose out-of-plane deflection is much larger than its thickness [19], so graphene can easily undergo nonlinear elastic deflection. Moreover, as discussed in our previous studies [9,20,21], nonlinear vibration is a useful route to novel sensitive detection schemes based on nanoresonators made of nanomaterials such as carbon nanotubes.
To gain detailed insight into the underlying mechanisms of graphene resonator vibration, atomistic simulations such as molecular dynamics (MD) simulation have been widely utilized. For instance, Park and coworkers [22,23] studied various effects, such as edge effects and internal friction, on the vibrational behavior of graphene resonators using MD simulation. Furthermore, Park and coworkers [24] investigated the energy dissipation mechanism of vibrating polycrystalline graphene fabricated by chemical vapor deposition using MD simulation. Despite the ability of MD simulation to provide detailed characteristics of the vibrational behavior of graphene resonators, it is computationally restricted to graphene resonators whose length scale is <10 nm (e.g., see refs. [22,24]), whereas most experimental studies consider graphene resonators whose length scale is >1 μm (e.g., see refs. [11-15]). This clearly indicates that current atomistic simulations cannot be used to analyze the experimentally observed vibrational behavior of a graphene resonator whose length scale is on the order of micrometers.
The computational limitation of atomistic simulations in depicting the underpinning principles of the experimentally observed mechanics of graphene resonators has led researchers [19,25-27] to consider a continuum elastic model, particularly a plate model, for unveiling the vibrational characteristics of a graphene resonator. In order for a continuum elastic model to capture the atomistic features of graphene mechanics, the elastic constants of the model (e.g., plate model) have to be determined from an atomistic simulation such as MD simulation, as was done for deciding the elastic constants of atomic structures (e.g., lattices) [7,8]. Recently, a plate model with elastic constants obtained from MD simulation has allowed the mechanisms of graphene mechanics to be revealed. More remarkably, in a recent study by Isacsson and coworkers [19], a plate model was utilized for studying the vibrational behavior of a graphene resonator; the vibrational behavior predicted from a plate model, whose elastic constants were determined from an atomistic model, is consistent with the experimentally observed vibration of a graphene resonator. However, that study [19] concentrated only on the harmonic oscillation of a graphene resonator, even though a graphene resonator can easily reach the nonlinear vibration regime. To the best of our knowledge, despite recent studies [28,29] theoretically reporting the nonlinear vibration of a graphene resonator, the nonlinear oscillation of a graphene resonator (particularly nonlinearity tuning), as well as atomic mass detection using graphene-based nonlinear oscillators, has not been well studied based on a continuum elastic model and/or MD simulation.
In this work, we study the nonlinear vibration of a graphene resonator using a continuum elastic model, i.e., a plate model. We find that nonlinear oscillation is a useful avenue for improving the detection sensitivity of a graphene resonator and that the detection sensitivity of a graphene-based nonlinear oscillator is governed by both the actuation force (which determines the nonlinearity of vibration) and the size of the resonator. We show that the nonlinearity of a graphene resonator's vibration can be tuned by an in-plane tension and that such tension can modulate the detection sensitivity in both harmonic and nonlinear oscillation. In particular, an in-plane tension improves the dynamic frequency range and detection sensitivity of a graphene resonator operating in harmonic oscillation, while it deteriorates the dynamic frequency range and sensing performance of a graphene-based nonlinear oscillator. Our study sheds light on the use of a continuum elastic model to gain insight not only into the mechanisms underlying nonlinear vibration-based enhancement of the dynamic frequencies and sensing performance of a graphene resonator, but also into the role of an in-plane tension in modulating the nonlinearity of a graphene resonator.
Theory and model
Graphene can be modeled as a plate whose mechanical deformation is governed by a strain energy composed of the bending energy U_B = (κ/2)∫_Ω (∇²w)² dΩ and the stretching energy U_S [30] (Equation 1). Here, κ, E_S, h, and ν represent the bending rigidity, axial stretching modulus, thickness, and Poisson's ratio of a graphene, respectively; w(x, y, t) is the out-of-plane deflection of the graphene; x and y are the coordinates along the in-plane directions; N_0 is a constant axial tension (due to pre-strain) applied to the graphene; and the symbol Ω in an integrand indicates the surface integral. The strain energy can be related to the potential field prescribed to the atomic structure of a graphene, as elucidated in the Cauchy-Born model [31-34], through U = Σ_{i=1}^{N} U_i^atom(r) (Equation 2). Here, U_i^atom is the potential field prescribed to the i-th carbon atom of the graphene, r is the atomic coordinates of the graphene, and N is the total number of carbon atoms. It should be noted that when the elastic constants of a graphene (i.e., κ and E_S) are determined from Equation 2, the axial tension N_0 is assumed to be zero (i.e., no pre-stress is applied to the graphene). As described in the literature [25], the force field parameters provide the elastic constants of graphene as κ = 1.5 eV and E_S h = 2,000 eV/nm².
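For orientation, these constants convert to SI units with nothing more than 1 eV = 1.602×10⁻¹⁹ J; the sheet thickness used below to back out a bulk Young's modulus (h ≈ 0.335 nm, the graphite interlayer spacing) is an assumed reference value, not a quantity the plate model itself requires.

```python
EV = 1.602e-19  # joules per electron-volt

kappa = 1.5 * EV              # bending rigidity: ~2.4e-19 J
Es_h = 2000 * EV / 1e-18      # 2,000 eV/nm^2 -> ~320 N/m (stretching modulus x thickness)

h = 0.335e-9                  # assumed effective thickness (graphite interlayer spacing, m)
Es = Es_h / h                 # implied bulk Young's modulus ~ 1e12 Pa, consistent with ~1 TPa
print(f"kappa = {kappa:.2e} J, E_S*h = {Es_h:.0f} N/m, E_S ~ {Es:.2e} Pa")
```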
In order to obtain the equation of motion, we need the kinetic energy T of the vibrating graphene, T = (ρ_0/2)∫_Ω (∂w/∂t)² dΩ (Equation 3), where ρ_0 is the areal mass density of the graphene; ρ_0 can be straightforwardly determined from the relation ρ_0 = Nm_C/S, where m_C is the atomic mass of a carbon atom and S is the surface area of the graphene. The equation of motion for a vibrating graphene is obtained from the minimization of the Hamiltonian H defined as H = U + T − W, where W is the work done by an external force field such as the actuation force. The variation of the Hamiltonian is [30,35] δH = ∫_Ω [ρ_0 ∂²w/∂t² + κ∇⁴w − ∂_α(N_αβ ∂_β w) − f] δw dΩ (Equation 4), where f is the actuation force per unit area, the symbol δ indicates a variation, δw is a virtual out-of-plane deflection of the graphene, a Greek index denotes the in-plane coordinates, i.e., α = x (for α = 1) or y (for α = 2), a repeated Greek index follows Einstein's summation rule, and N_αβ is the in-plane tension tensor, comprising the applied tension N_0 together with the deformation-induced stretching contribution. The equation of motion is therefore ρ_0 ∂²w/∂t² + κ∇⁴w − ∂_α(N_αβ ∂_β w) = f (Equation 5). Note that in-plane displacements are ignored in Equation 5, since they are small in comparison with the out-of-plane displacement w(x, y, t). In this work, for theoretical convenience, we assume the axial force N_0 to be a biaxial loading of the form N_0 = N_0(e_x + e_y), where e_x and e_y are the unit vectors in the x and y directions, respectively. Furthermore, the actuation force f is assumed to take the form f = f_0 cos Ωt, where f_0 is the amplitude of the actuation force and Ω is the driving frequency. For solving Equation 5, we assume that the out-of-plane displacement can be decomposed as w(x, y, t) = z(t)ψ(x, y) (Equation 6) [9,36-38]. Here, z(t) is a time-dependent amplitude and ψ(x, y) is the deflection eigenmode of the vibrating graphene. We presume that the monolayer graphene has a rectangular shape with all edges clamped. A deflection eigenmode satisfying the clamped boundary conditions is ψ(x, y) = [1 − cos(2πx/a)][1 − cos(2πy/b)] (Equation 7), where a and b are the lengths of the graphene edges (see Figure 1). Substituting Equation 6 into Equation 5, followed by integration by parts, turns the equation of motion of Equation 5 into the Duffing equation [39-41] μ z̈ + α z + λ z³ = p_0 cos Ωt (Equation 8), where the parameters μ, α, λ, and p_0 are mode-shape integrals (Equations 9.a-9.d): μ = ρ_0 ∫_Ω ψ² dΩ (Equation 9.a) is the modal inertia, α collects the linear bending and tension stiffness (Equation 9.b), λ is the cubic stiffness arising from the stretching energy (Equation 9.c), and p_0 = f_0 ∫_Ω ψ dΩ (Equation 9.d) is the modal forcing amplitude. The vibrational behavior of a graphene resonator can then be described numerically by solving the Duffing equation, Equation 8.
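As a minimal numerical sketch of this modal equation (not the paper's actual computation), the Duffing equation can be integrated directly with SciPy. A small linear damping term c is added so that a steady state exists, since the undamped Equation 8 has no settled resonant amplitude, and all parameter values are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters for mu*z'' + c*z' + alpha*z + lam*z^3 = p0*cos(Omega*t)
mu, c, alpha, lam, p0 = 1.0, 0.02, 1.0, 0.5, 0.1

def duffing(t, y, Omega):
    z, v = y
    return [v, (p0 * np.cos(Omega * t) - c * v - alpha * z - lam * z**3) / mu]

def steady_amplitude(Omega, cycles=200):
    """Integrate from rest and return the settled response amplitude."""
    t_end = cycles * 2 * np.pi / Omega
    sol = solve_ivp(duffing, (0.0, t_end), [0.0, 0.0], args=(Omega,), rtol=1e-8, max_step=0.05)
    tail = sol.t > 0.9 * t_end           # discard the transient
    return np.abs(sol.y[0][tail]).max()

# Sweep the drive frequency around the harmonic resonance Omega0 = sqrt(alpha/mu) = 1;
# a hardening cubic stiffness (lam > 0) pushes the response peak above Omega0
for Omega in np.linspace(0.8, 1.6, 9):
    print(f"Omega = {Omega:.2f}  |z|_steady = {steady_amplitude(Omega):.3f}")
```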
For the case in which atoms are adsorbed onto a graphene resonator, as shown in Figure 1, only the parameter μ has to be updated, while the other parameters remain identical to those given by Equations 9.b to 9.d, since atomic adsorption only affects the inertia term dictated by μ. When atoms are locally adsorbed onto the graphene at an adsorption site (x_m, y_m), as shown in Figure 1, the inertia term becomes μ = ρ_0 ∫_Ω ψ² dΩ + Δm ψ²(x_m, y_m) (Equation 10), which follows from adding a point-mass density Δm δ(x − x_m)δ(y − y_m) to ρ_0, where δ(x) is the Dirac delta function and Δm is the total mass of the adsorbed atoms. For the case in which atoms are uniformly adsorbed onto the graphene (i.e., mass adsorption occurs over the entire surface of the resonator), the inertia term is μ = (ρ_0 + Δm_0) ∫_Ω ψ² dΩ (Equation 11). Here, Δm_0 is the mass of atoms adsorbed per unit area of the graphene, while Δm is the total mass adsorbed over the entire surface. Based on the inertia terms for the mass-adsorbed graphene (Equation 10 or 11) and the bare graphene (Equation 9.a), it is straightforward to compute the resonant frequency shift Δω due to mass adsorption: Δω = ω(m + Δm) − ω(m), where ω(m) is the resonant frequency of a bare graphene resonator whose effective mass is m (with m = ρ_0 ab), and ω(m + Δm) is the resonant frequency of the resonator after mass adsorption of amount Δm. This frequency shift is typically negative, since mass adsorption increases the overall mass of the resonator and consequently reduces its resonant frequency relative to that of the bare resonator.
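The harmonic-regime shift implied by Equations 9.a and 10 is easy to evaluate, since ω = (α/μ)^(1/2) and α cancels in the relative shift. The sketch below assumes the fully clamped mode shape written above as Equation 7, and the areal mass density used is an approximate textbook value for monolayer graphene, not a fitted quantity from this work.

```python
import numpy as np

a = b = 250e-9                       # edge lengths of the square resonator (m)
rho0 = 7.4e-7                        # areal mass density of graphene (kg/m^2), approximate

def psi(x, y):
    # assumed fully clamped mode shape (Equation 7)
    return (1 - np.cos(2 * np.pi * x / a)) * (1 - np.cos(2 * np.pi * y / b))

# integral of psi^2 over the sheet: int (1 - cos u)^2 dx over one period = 3a/2 per axis
mu0 = rho0 * (1.5 * a) * (1.5 * b)   # Equation 9.a

dm = 10e-21                          # 10 ag point adsorbate (kg)
mu1 = mu0 + dm * psi(a / 2, b / 2)**2  # Equation 10, adsorption at the sheet center

# omega = sqrt(alpha/mu), so alpha cancels in the relative shift
print(f"relative shift Delta-omega/omega = {np.sqrt(mu0 / mu1) - 1:.3f}")
```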
Resonance behavior of graphene
To verify the robustness of the continuum elastic model described in the 'Theory and model' section, we considered the vibration behavior of a graphene resonator with a size of 6 μm × 6 μm actuated by a force amplitude of p_0 = 0.001 aN (where 1 aN = 10⁻¹⁸ N) so as to induce harmonic oscillation. Moreover, we considered the case in which the in-plane tension N_0 is driven by pre-strain, N_0 = E_S hE_0/(1 − ν), where E_S, h, and ν are the stretching modulus, thickness, and Poisson's ratio of the graphene resonator, respectively, and E_0 is the pre-strain applied to the resonator. With E_0 = 4×10⁻⁵, the resonant frequency of a graphene undergoing harmonic oscillation is predicted to be ω_0 = 20.63 MHz, consistent with the experimentally measured resonant frequency of 19.8 MHz (see Figure 2a and ref. [11]). This indicates that a continuum elastic model (i.e., plate model) is suitable for understanding the dynamic behavior of a graphene resonator. It should be noted that the continuum elastic model slightly overestimates the resonant frequency of a graphene compared with experiments (see below).
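As a sanity check on the ~20 MHz figure, the tension-dominated (membrane) limit admits a closed-form fundamental frequency for a square clamped membrane, f_11 = (1/(2a))·(2N_0/ρ_0)^(1/2). The Poisson ratio and areal density below are assumed textbook values for graphene, so this is an order-of-magnitude estimate rather than the plate-model prediction.

```python
import numpy as np

EV = 1.602e-19
Es_h = 2000 * EV / 1e-18      # stretching modulus x thickness ~ 320 N/m (from the model)
nu = 0.16                     # assumed Poisson's ratio of graphene
eps0 = 4e-5                   # pre-strain used in the text
N0 = Es_h * eps0 / (1 - nu)   # biaxial tension per unit length, as defined above

a = 6e-6                      # side of the square resonator (m)
rho0 = 7.4e-7                 # areal mass density of monolayer graphene (kg/m^2), approximate

# fundamental mode of a square clamped membrane: f_11 = sqrt(2*N0/rho0) / (2*a)
f11 = np.sqrt(2 * N0 / rho0) / (2 * a)
print(f"f_11 ~ {f11/1e6:.1f} MHz")   # lands in the tens-of-MHz range reported
```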
As described above, the resonant frequency of a graphene resonator predicted from a continuum elastic model is overestimated in comparison with that measured from experiment. This may be attributed to the boundary condition used in our simulation: we utilized fully clamped boundary conditions, ψ(x, y) = ∇ψ(x, y) = 0 along all edges, referred to as the "fully clamped" boundary condition. To understand the effect of the boundary condition on the resonant frequency of a graphene resonator, we considered the deflection eigenmode ψ(x, y) = sin(πx/a)·sin(πy/b), which satisfies ψ(x, y) = 0 at all edges but ∇ψ(x, y) ≠ 0 along the edges; this is referred to as the "weakly clamped" boundary condition. As shown in Figure 2b, the resonant frequency of a weakly clamped graphene resonator is close to the theoretical prediction of the membrane model [15]. Moreover, the resonant frequency of a graphene resonator depends critically on the boundary condition: weak clamping reduces the resonant frequency. In this work, we consider a fully clamped graphene resonator unless otherwise specified. The resonant frequencies of graphene resonators predicted from our continuum elastic model (i.e., plate model) are consistent with experimentally measured frequencies of graphene resonators [15]. In addition, pre-strain increases the resonant frequency of a graphene (Figure 2c). Next, we considered the nonlinear oscillation of a square graphene resonator with a size of D = 250 nm, to which a pre-strain of E_0 = 10⁻⁵ is applied. When the amplitude of the actuation force is on the order of 0.01 fN, the graphene resonator undergoes harmonic oscillation. On the other hand, when the resonator is actuated by force amplitudes >0.05 fN, it experiences nonlinear vibration (Figure 3a). This is attributed to the geometric nonlinear effect arising from the fully clamped boundary condition. To quantitatively characterize the nonlinear vibration, we introduce a dimensionless parameter θ = (ω − Ω_0)/Ω_0, where ω is the resonance frequency of the nonlinearly oscillating graphene and Ω_0 is its harmonic resonance defined as Ω_0 = (α/μ)^(1/2); θ thus represents the degree of nonlinearity of the resonance. Figure 3b shows the nonlinearity of the graphene resonance (dictated by θ) as a function of the actuation force amplitude. When the graphene resonator bears a pre-strain of 10⁻⁵, an actuation amplitude on the order of 0.5 fN yields θ = 0.15, indicating that the resonance behavior at this amplitude is close to harmonic oscillation. As the amplitude increases, θ increases significantly, indicating that the nonlinearity of the graphene resonance can be induced by a large actuation amplitude. In particular, when the resonator is actuated with an amplitude of 5 fN, θ reaches 0.75, indicating highly nonlinear resonance behavior.
This shows that the nonlinear vibration of a graphene resonator can be observed easily even when the actuation amplitude is on the order of 1 fN, which is attributed to the fact that the deflection amplitude of a graphene resonator is typically much larger than its thickness.
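The force dependence of θ can be anticipated from the textbook weakly nonlinear (backbone) approximation for the Duffing equation, ω ≈ Ω_0[1 + 3λA²/(8α)], where A is the steady response amplitude, which grows with the drive p_0. The sketch below uses the same placeholder parameters as before and is an estimate, not the full numerical response.

```python
import numpy as np

# Illustrative modal parameters in arbitrary units (same placeholders as above)
mu, alpha, lam = 1.0, 1.0, 0.5
Omega0 = np.sqrt(alpha / mu)

def theta_backbone(A):
    """Backbone estimate: theta = (omega - Omega0)/Omega0 = 3*lam*A^2/(8*alpha)."""
    return 3 * lam * A**2 / (8 * alpha)

for A in (0.2, 0.5, 1.0, 2.0):  # steady amplitude grows with the drive p0
    print(f"A = {A:.1f} -> theta ~ {theta_backbone(A):.3f}")
```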
Resonant response of graphene resonators to mass adsorption
Although the graphene resonator has recently been extensively considered as a NEMS device for actuation applications, it has barely been employed as a mass sensor enabling highly sensitive atomic detection (e.g., measurement of atomic weight). The high detection sensitivity of a graphene resonator is attributed to its high-frequency dynamic range, which results from its high elastic stiffness and low mass density. In this study, we scrutinized the resonant responses of graphene resonators, undergoing not only harmonic oscillation but also nonlinear vibration, to atomic adsorption onto the resonator surface. Our study of the nonlinear response to mass adsorption is motivated by our previous finding [9,20,21] that nonlinear oscillation is useful for improving the detection sensitivity of a nanoresonator.
We considered a square graphene resonator of size D = 250 nm without any applied pre-strain (i.e., N_0 = 0). Figure 4a depicts the resonant frequency shift of the resonator due to atomic adsorption as a function of the adsorbed mass and the actuation force amplitude. In this work, we assumed that the atomic mass was adsorbed at the center of the resonator; in general, as shown in Additional file 1: Figure S1, the frequency shift due to mass adsorption depends on the adsorption location. Moreover, the stiffness of the adsorbed molecule is neglected, since the elastic modulus of graphene is on the order of 1 TPa [6], much higher than that of adsorbed molecules such as proteins, whose elastic modulus is on the order of 10 GPa [42,43]. The resonant frequency shift due to mass adsorption is linearly proportional to the adsorbed mass when the resonator is actuated by a small force amplitude (e.g., p_0 = 1 aN). By contrast, when the resonator is excited by a large force amplitude (e.g., p_0 = 5 fN), the frequency shift is no longer proportional to the adsorbed mass. It is also found that as the actuation amplitude increases, the frequency shift due to mass adsorption increases significantly, indicating that nonlinear oscillation increases the detection sensitivity of the resonator, as anticipated. To gain deeper insight into the effect of nonlinear oscillation on detection sensitivity, we considered the resonant response to atomic adsorption of Δm = 10 ag. As shown in Figure 4b, the frequency shift for Δm = 10 ag depends critically on the actuation amplitude; Figure 4b also shows the dimensionless parameter θ as a function of the actuation amplitude. When the actuation amplitude is <0.5 fN, corresponding to harmonic oscillation (θ < 0.2), the frequency shift of a D = 250 nm resonator (without pre-strain) due to 10 ag of adsorbed mass is on the order of 0.1 GHz. When the actuation amplitude is increased to 5 fN, corresponding to highly nonlinear oscillation (θ = 1.6), the frequency shift increases about sixfold (i.e., Δω = 0.6 GHz). This clearly elucidates that nonlinear vibration increases the frequency shift due to mass adsorption, highlighting that nonlinear vibration improves the detection sensitivity of a graphene resonator.
We also studied the effect of graphene size (D) on the mass-adsorption-induced frequency shift Δω with respect to the actuation amplitude p_0. For a resonator of size D ≥ 150 nm, increasing the actuation amplitude enhances the frequency shift due to mass adsorption, consistent with our conjecture that nonlinear vibration improves the detection sensitivity of a graphene resonator. On the other hand, for a resonator of size D ≤ 100 nm, increasing the actuation amplitude does not significantly amplify the frequency shift in comparison with a large-scale resonator (i.e., D > 200 nm). This indicates that even though a large actuation amplitude induces nonlinear oscillation of a small-scale resonator (e.g., D < 100 nm), the nonlinear vibration does not remarkably increase the frequency shift due to mass adsorption. This result suggests that the length scale of a graphene resonator plays a central role not only in its dynamic frequency range but also in the sensing performance of graphene-based nonlinear oscillators.
Effect of pre-strain applied to graphene resonators on their resonance behaviors and sensing performances
As described in previous studies [9,20,21], a mechanical tension (due to pre-strain or pre-stress) applied to a resonator increases both its dynamic frequency range and its sensing performance. In this study, we investigated how a pre-strain applied to a graphene resonator affects not only its dynamic behavior but also its detection sensitivity in both harmonic and nonlinear oscillation.
We studied the frequency change of a graphene resonator due to an in-plane tension with respect to the actuation amplitude (Figure 5a). Here, the frequency change is defined as the difference between the resonant frequencies of a resonator bearing an in-plane tension and of a bare resonator. For a resonator operating in harmonic oscillation, an in-plane tension increases the resonant frequency, which is attributed to the fact that tension stiffens the system. When the resonator is actuated by an amplitude on the order of 1 fN (leading to nonlinear vibration), we interestingly found that an in-plane tension <6 pN/nm reduces the resonant frequency, indicating that in-plane tension is not useful for increasing the dynamic frequency range of a graphene operating in nonlinear vibration. This is consistent with our previous studies [20,21] reporting that mechanical tension decreases the dynamic frequency range of a nanoresonator undergoing nonlinear oscillation. Remarkably, however, when the in-plane tension exceeds 6 pN/nm, applying it increases the resonant frequency of a resonator actuated by an amplitude of 1 fN. This may be attributed to the conjecture that an in-plane tension of 6 pN/nm, applied to a resonator actuated by an amplitude of 1 fN, induces a transition from nonlinear vibration to harmonic oscillation.
To validate our conjecture that an in-plane tension can drive the transition from nonlinear vibration to harmonic oscillation, we plotted the dimensionless parameter θ (representing the degree of nonlinearity) as a function of actuation amplitude and in-plane tension (Figure 5b). When the actuation amplitude is on the order of 0.01 fN, the vibration is almost harmonic regardless of in-plane tension, as anticipated. For a resonator actuated by an amplitude of 1 fN, the dimensionless parameter of a bare resonator (i.e., N_0 = 0) is on the order of 10, indicating almost fully nonlinear oscillation. On the other hand, when an in-plane tension of 10 pN/nm is applied to the same resonator, the dimensionless parameter drops to the order of 0.5, showing that in-plane tension reduces the nonlinearity of the resonance. As the in-plane tension increases further, the nonlinearity is reduced even to the order of 10⁻¹, meaning the resonance behavior is almost harmonic. Our result suggests that an in-plane tension plays a central role not only in increasing the dynamic frequency range of a graphene resonator but also in inducing the transition from nonlinear vibration to harmonic oscillation. Figure 5c depicts the critical in-plane tension responsible for this transition, defined as the in-plane tension at which nonlinear oscillation transitions to harmonic vibration. The role of in-plane tension in the transition is highly correlated with the resonator size. For instance, for a resonator of size D = 200 nm, the resonance behavior under actuation amplitudes ≤0.3 fN is close to harmonic oscillation, whereas a 350-nm resonator driven by an amplitude of only 0.1 fN already exhibits nonlinear oscillation. This indicates that the resonator size determines the actuation amplitude required to induce nonlinear vibration. Moreover, the smaller the resonator, the smaller the in-plane tension needed to induce the transition from nonlinear oscillation to harmonic resonance, suggesting that the tension-driven transition is governed by the size of the resonator. Next, we studied the role of in-plane tension in the detection sensitivity of a graphene resonator undergoing either nonlinear vibration or harmonic oscillation (Figure 6). For a resonator operating in harmonic oscillation (e.g., actuated by an amplitude of 1 aN), an in-plane tension critically amplifies the frequency shift due to mass adsorption, consistent with our conjecture that in-plane tension increases detection sensitivity through the tension-driven increase of the resonant frequency.
On the other hand, for a graphene resonator undergoing nonlinear vibration (e.g., actuated by an amplitude of 1 fN), an in-plane tension of <7 fN (corresponding to the critical in-plane tension that induces the transition from nonlinear vibration to harmonic oscillation) decreases the frequency shift due to mass adsorption, which suggests that an in-plane tension is ineffective in improving the detection sensitivity of a graphene resonator operating in nonlinear oscillation. However, when the in-plane tension exceeds this critical value of 7 fN, it increases the frequency shift due to mass adsorption, which is attributed to the fact that the resonator then follows harmonic oscillation. Moreover, we also investigated the frequency shift due to mass adsorption (Δm = 20 ag) for resonators operating in either harmonic oscillation or nonlinear vibration as a function of resonator size and in-plane tension. Interestingly, for graphene resonators operating in both nonlinear oscillation and harmonic vibration, the in-plane tension-induced improvement of detection sensitivity depends significantly on the resonator size: an in-plane tension is useful for increasing the detection sensitivity of a resonator whose size is D ≈ 100 nm, whereas it is ineffective for enhancing the sensing performance of a graphene of size D > 300 nm in comparison with the detection sensitivity of a resonator of size D = 100 nm. Our study sheds light on the important role of in-plane tension in modulating not only the resonance behavior of a graphene resonator but also its detection sensitivity.
Conclusions
In this work, we have studied the vibrational behavior of graphene resonators as well as their sensing performance based on a continuum elastic model, namely a plate model. We have shown that nonlinear vibration is useful for improving the detection sensitivity of a graphene resonator and that an in-plane tension can tune both the nonlinearity of vibrating graphene resonators and their detection sensitivity. It should be noted that our continuum model is only applicable to a monolayer graphene resonator; modeling a multilayered graphene resonator requires the interactions between graphene sheets to be included in the continuum model [27], which will be the subject of future work. Moreover, our continuum elastic model discards the finite-size (i.e., edge) effect on the dynamic behavior and sensing performance of a monolayer graphene resonator; the edge (stress) effect arises from the imbalance between the coordination numbers of edge atoms and bulk atoms [25,44] and is conceptually identical to the surface stress effect on a nanowire resonator [9]. This edge effect on the frequency behavior of a graphene resonator and its sensing performance will also be studied in future work.
Figure 6 | Effects of pre-strain on a graphene resonator in either harmonic or nonlinear oscillation. (a) Frequency shifts of a graphene resonator, operating in either harmonic oscillation or nonlinear vibration, due to atomic adsorption with a mass of Δm = 20 ag as a function of in-plane tension. (b) Frequency shifts of graphene resonators undergoing harmonic oscillation due to atomic adsorption (Δm = 20 ag) as a function of resonator size and in-plane tension. (c) Frequency shifts of graphene resonators operating in nonlinear vibration due to atomic adsorption (Δm = 20 ag) with respect to resonator size and in-plane tension.
"year": 2012,
"sha1": "7b5a6ed81cb3db34dcccd3161640130c97a1eec4",
"oa_license": "CCBY",
"oa_url": "https://nanoscalereslett.springeropen.com/track/pdf/10.1186/1556-276X-7-499",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "770f01e7a17841eaa3708358027ae5544aba0ea5",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
Factors influencing e-commerce development: Implications for the developing countries
The rapid growth of E-Commerce initiatives around the world reflects its compelling advantages, such as enhanced governmental performance, a lower cost structure, greater flexibility, broader scale and scope of services, greater transparency, accountability, and faster transactions. Determining the connection between attitudes and e-commerce, and the effect attitudes have on it, is paramount to developing e-commerce. In developing countries, growth in IT and communication, and hence in e-commerce, is substantial. Technological effectiveness is essential to E-Commerce success; however, human, economic, and other organizational issues must be taken into account as well. In this study, we evaluate the current status of E-Commerce in developing countries. The evaluation reveals opportunities that organizations should tackle seriously if they are to survive the consequences of globalization and open markets. A governmental infrastructure to support e-commerce should be implemented without delay.
INTRODUCTION
The availability and continued growth of Internet technologies (IT) have created great opportunities for users all over the globe to benefit from IT services and use them in a variety of ways. The use of IT to conduct business online is known as Electronic Commerce (E-Commerce).
We are witnessing a boom of new technologies, especially in the service sector (IT, telecommunications, the Internet, etc.). Due to technological advances, economic transactions have become much easier and faster, mainly because of the development of e-commerce. A real engine of the new economy, e-commerce is a remarkable source of competitive advantage for businesses and a new space for consumers. In the coming years, growth and profitability will most likely depend on the ability to introduce these emerging technologies and adopt new methods of business transactions. For many years now, computers, appliances, plane tickets, and many other items have been available for purchase on the Internet using cards issued by local banks. Although this technological trend could significantly strengthen national economic structures, its role and place in the economic structures of developing countries remain unclear and raise many questions: Where is e-commerce today?
What are the obstacles to e-commerce? Does e-commerce have a bright future as a mainstream business for growth, and what steps must be taken to get there?
While developed countries have harnessed and adopted E-Commerce, developing countries have not yet fully adapted to its adoption. The aim of this study is to investigate the factors that play a role in the adoption and development of E-Commerce and, hence, to develop strategies that conceptualize the influential factors acting as enablers and disablers of E-Commerce. In this paper, we provide some answers about the current situation of e-commerce and then consider the prospects that would enable the benefits from all the advantages offered by this new mode of trade to be realized. This paper is organized as follows. Firstly, the concept of e-commerce is briefly introduced, followed by the construction of the research model, including all the aspects of e-commerce that are the object of our investigation. Finally, implications drawn from the study results and analysis are discussed, followed by the research limitations and a conclusion.
UNDERSTANDING THE CONCEPT OF E-COMMERCE
Information and communication technology (ICT) is radically transforming the way individuals, organizations, and governments work. In today's information societies, the internet has become an essential channel for the dissemination of information, products, and services. People prefer to use the internet as a transaction tool in different areas, such as learning, shopping, marketing, travel, and trading. Carter and Belanger (2003) emphasized the use of ICT to improve efficiency and access to government services across all stakeholders in G2C, G2E, G2G and G2B services. Additionally, governments have realized the importance of the internet and have undertaken critical transformations to use it to deliver public services, so that citizens can always access them regardless of their location (Abdulkarim, 2003). Fang (2002) has described e-government (part of e-commerce) as a method for governments to use the most innovative ICT services, particularly web-based internet applications. These applications are able to provide citizens and businesses with more convenient access to government information and services, to improve the quality of services and to provide more opportunities for democratic institutions and processes. E-Commerce involves many issues, such as trust, security, privacy, accessibility, familiarity, awareness, and the quality of public services (Jaeger, 2003). For instance, the rapid growth of E-Commerce initiatives in the MENA (Middle East and North Africa) region reflects its compelling advantages, such as enhanced governmental performance, lower cost structure, greater flexibility, broader scale and scope of services, greater transparency, accountability, and faster transactions. However, getting people to be continually engaged in e-commerce services is a challenge, since they can move away with only a few mouse clicks. There seems to be agreement that better customer service enhances online satisfaction and reuse. Notably, online satisfaction is not only the primary driver of online customers' continuous behavior but also the key to building and retaining a loyal base of long-term customers. Many institutions, such as the World Bank, the United Nations, Europe's Information Society DG, the Canadian Common Measurement Tool (CMT) of satisfaction, the European Customer Satisfaction Index and the American Customer Satisfaction Index, evaluate e-commerce progress and satisfaction using various methods and indices (Fitsilis, Anthopoulos, & Gerogiannis, 2010).
Businesses implementing E-Commerce in developing countries face substantially greater challenges than businesses in developed countries due to the unreliability of the internet connection, the poor availability of access owing to poor infrastructure, the high cost of access, and the low level of ICT penetration throughout the country (Molla and Licker, 2005b; Molla and Licker, 2005a). Aleid (2009) carried out an investigation of different E-Commerce schemes in a number of countries with regard to culture, infrastructure and human behavior, and found that a number of factors may inhibit the diffusion of E-Commerce into developing countries (e.g., infrastructure, security, E-Commerce laws). This study focuses on developing countries, which constitute a booming marketplace for E-Commerce activities, particularly in the Middle East (Eid, 2011). Developing countries require further Internet access, exploring opportunities for the Internet in education, government and commerce. However, for these things to be achieved, certain requirements need to exist in which certain factors play an important role. Next, we discuss the most essential factors for the development and effectiveness of e-commerce.
TRUST
Johnson-George and Swap (1982: 1306) asserted that "willingness to take risks may be one of the few characteristics common to all trust situations." Kee and Knox (1970) argued that to appropriately study trust there must be some meaningful incentives at stake and that the trustor must be cognizant of the risk involved. The definition of trust proposed in this research is the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party (Park and Kim, 2003). Trust can be a vital factor in business-to-consumer (B2C) E-Commerce. It gives consumers faith to buy products or services even if an e-trader is unknown. It encourages more use of E-Commerce technologies, makes the e-transaction process easier, enhances the level of acceptance and adoption of E-Commerce, leads to the improvement of consumer commitment, raises customer satisfaction, introduces the concept of loyalty, sustains long-term relationships with customers and assists in acquiring a competitive advantage. Future purchases can be motivated and increased prices tolerated. It reduces customer worries about information privacy, and helps customers to tolerate the occasional mistakes made by the e-trader (Pittayachawan, 2008). Trust is a complicated concept and has a multitude of sides to be addressed. A number of researchers have continually approached the "trust" issue from a technical side, such as Internet and network security and even web interface design (Fernandes, 2001; Clifford et al., 1998; Pittayachawan, 2008). Nonetheless, according to Klang (2001) and Ratnasingham and Kumar (2000), considering just the technical perceptions will not guarantee trust in e-commerce.
SECURITY, FRAUD AND HACKING
It is widely acknowledged by both government and industrial organizations that, from a consumer point of view, issues of information security are a major obstacle to the growth of E-Commerce. The perception of risk regarding Internet security has also been recognized as a concern for both experienced and inexperienced users of Internet technologies (Miyazaki and Fernandez, 2001). Furthermore, Miyazaki and Fernandez (2001) have identified fraudulent behavior by online retailers as a key concern for Internet users and, therefore, E-Commerce users. Rose et al. (1999) identify hackers as an obvious security threat to E-Commerce.
This happens because the online availability and accessibility of the stored data of many corporations give any hacker on the Internet the chance to steal data from these corporate databases. These threats have been identified in several recent studies (Aleid et al., 2009; Al-Ghaith et al., 2010). Dixit and Datta (2010) studied the acceptance of e-banking among adult customers in India. The findings showed that many factors, such as security and privacy, trust, innovativeness, familiarity, and awareness level, increase the acceptance of e-banking services among Indian customers.
AWARENESS AND PERCEIVED USEFULNESS
Within the context of the information systems (IS) domain, much research has outlined the significance of the influence of perceived usefulness on attitude towards the use of e-commerce.
The real reason why customers would use E-Commerce is that they find it a useful facility for conducting shopping online (Alghamdi, 2011). Furthermore, according to Sathye's (1999) research, the use of online banking services, which is a good example of e-commerce, is new knowledge to many customers, and the lack of awareness of online banking is a crucial factor preventing customers from adopting it. In his study of 500 Australian customers, he concluded that customers were not aware of the potential benefits of online banking. This was supported by another study by Howcroft et al. (2002), in which they found that the lack of awareness and knowledge of online banking services contributes to e-commerce adoption challenges. Suki and Ramayah (2010) studied user acceptance of e-Government services in Malaysia. Their results indicate that the important determinants of user acceptance of e-Government services are perceived usefulness, ease of use, compatibility, interpersonal influence, external influence, self-efficacy, facilitating conditions, attitude, subjective norms, perceived behavioral control, and intention to use the e-Government services/system.
ACCESSIBILITY
As the internet is fast becoming a major source of information and services, a well-designed e-commerce website has become essential so that citizens can access public information and improve their participation. E-commerce websites can serve as a tool for both communication and relations for customers and the general public. Information and data can easily be shared with and transferred to external stakeholders (Moon, 2002). Henry (2006) defines web accessibility as enabling people to use, perceive, understand, navigate and interact with the web. The International Standards Organization (ISO) has defined accessibility as "the usability of a product, service, environment or facility by people with the widest range of capabilities". Gummerus et al. (2004) define the user interface as the channel through which customers are in contact with the e-service provider. Park and Kim (2003) found that the quality of the user interface affects customer satisfaction directly, since it provides physical evidence of the service provider's competence as well as facilitating effortless use of the service. Because of its importance to customer satisfaction, Tan, Tung, and Xu (2009) identified fourteen key factors for developing effective B2C e-commerce websites. Also, Cyr (2008) investigated the effect of B2C e-commerce website user interface design factors (such as information design, navigation design, and visual design) on trust and satisfaction across three countries: Canada, Germany, and China. Cyr found that these user interface design variables are key antecedents to website trust and website satisfaction across cultures.
PERCEIVED QUALITY
The perceived quality of a service has two dimensions: the technological dimension, which refers to what is delivered, and the functional dimension, which refers to how the service is delivered. Speed of response, offer updates, and site effectiveness refer to the technical quality (Rust & Lemon, 2009). Interactive communication, personalization of the communication and of the service, as well as new forms of customer access refer to the functional aspect of quality. Product/service quality is defined as the customer perception of the quality of information about the product/service that is provided by a website (Park & Kim, 2003). According to McKnight et al. (2002), website content quality is an antecedent of online customer trust. In addition, Park and Kim (2003) found that information quality affects customer satisfaction directly. Karunasena and Deng (2012) identified the critical factors for evaluating the public value of e-Government in Sri Lanka. The study showed that the delivery of quality information and services, the user-orientation of information and services, the efficiency and responsiveness of public organizations and the contributions of public organizations to environmental sustainability are the critical factors for evaluating the public value of e-Government in Sri Lanka.
ROLE OF GOVERNMENT
The government's role in developing countries is an important one: it facilitates the essential requirements for the development of E-Commerce, such as providing robust, secure online payment options, ensuring a solid ICT infrastructure, providing educational programs and building awareness through different means such as the media and educational institutions. The results of their study show the significance of government promotion and support as a crucial factor (AlGhamdi et al., 2011), and it has been stated that the government demonstrates strong commitment to promoting E-Commerce. In Saudi Arabia, Eid (2011) posits in his study that the Saudi Government's support was recognized as an important element in the development and growth of local E-Commerce. According to Eid's study, some Saudi citizens believe in the importance of the government's role. Interviewee 8 commented on the diffusion of E-Commerce through government and private accreditation in providing basic facilities, such as a house address for every citizen to be used online for accurate delivery of products, documents and special services. If there is no reliable postal service, there will be no e-government.
CONSTRUCTING A RESEARCH MODEL
The development of E-Commerce in this research is measured along the four facets stipulated in the diagram below. This study allows the researcher to discover general attitudes and perceptions that people have on personal, technological and transactional levels. Determining the connection and effects these attitudes have on e-commerce is paramount to developing e-commerce. The facets surrounding this study revolve around the following:
Security and privacy: the perception of e-commerce portals as secure platforms without any uncertainty or adverse consequences after e-commerce use, and the ability to determine when and to what extent information about oneself is communicated to others for maintaining confidentiality.
Trust and Loyalty: the willingness of people to rely on and to frequently use e-commerce portals for conducting transactions, based on feelings of confidence and assurance.
Accessibility and Awareness: the perception of user interface quality and the degree of awareness of information about products and services delivered by conducting transactions from any location at any time through e-commerce portals.
Quality and benefits: the perception of the quality of products and services offered by e-commerce portals and the benefits arising from conducting such transactions.
IMPLICATIONS FOR E-COMMERCE IN DEVELOPING COUNTRIES
In developing countries, the growth of IT and communications, and of e-commerce in particular, is substantial. Technology effectiveness is essential to E-Commerce success. However, human, economic, and other organizational issues must be taken into account as well. In this study, we evaluated the current status of E-Commerce in developing countries. The evaluation reveals opportunities that should be seriously tackled by organizations if they are to survive the consequences of globalization and open markets. There should be an immediate implementation of a governmental infrastructure to support e-commerce. This study explored the enablers of, and hurdles to, the development of e-commerce. Online consumers face problems concerning security and privacy. They are exposed to online risks such as hacker mischief. Moreover, when buyers make payments using credit cards, they expose their banking information, which could also be manipulated by hackers. The results of this research showed that the majority of the respondents felt that internet shopping is risky for the same reason. Among the perceived risks are financial loss, product performance, and social, psychological and time/convenience loss. Other than stolen credit card information, there are also risks in delivery. The time taken for delivery may be considerable; therefore, anything could happen in the process of delivery. Buyers may lose the item, online vendors might not take responsibility for the loss, and this leaves the buyers to bear all the consequences. When the perceived risk is greater, the relationship between intention and online purchasing will be weakened.
The implementation of an effective Internet e-commerce solution in developing countries, or in any other country that wants to develop its e-commerce system, can consider the following key steps:
Developing strategy: before implementing Internet e-commerce, an organization must clearly define its goals. Many companies create goals that are not measurable or specific.
Assessing readiness: before taking on the complexities (and risks) associated with implementing Internet e-commerce, an organization and its management should take stock of their current systems and capabilities. Four key drivers predict an enterprise's ability to succeed in e-commerce: leadership, governance, competencies, and technology.
Designing the project: although projects will differ greatly in the details, there are some common requirements for implementing Internet e-commerce, including managing the project, developing an outsourcing strategy, selecting an Internet service provider, selecting e-commerce service providers, and designing website security.
Integrating the solution: in developing an Internet e-commerce platform, an organization must also consider how to integrate its e-commerce applications with its other business processes. For example, the richness of corporate intranet applications positively affects e-commerce capabilities. Extending intranet applications into the Internet permits an organization to provide more value to customers in several ways: real-time access to information and the ability to perform business transactions.
Measuring effectiveness: given the major investment that implementing e-commerce entails, it is only common sense to measure the return. Successful e-commerce companies have serious and accountable metrics and clear agreements about using them across the organization. It is the appropriateness and completeness of the metrics selected that typically set successful e-commerce implementations apart from unsuccessful ones.
CONCLUSION
Under the guidance of grounded theory and through analyzing and synthesizing the gathered data, a content analysis of e-commerce enablers and disablers in developing countries was constructed.
This research highlights the most important factors that need to be considered in order to support the proliferation and advancement of e-commerce. Countries need to encourage and improve e-commerce developments. This research sheds light on the potential factors that may play a significant role in supporting the proliferation and advancement of E-Commerce in developing countries. The outcomes of this study may contribute to market stakeholders' understanding of their potential customers' needs and current concerns. Exploring the market, especially at this time while e-commerce is still in its development stage, is critical for industry stakeholders in order to ensure the success of this emerging market. Future research should focus on studying the development of e-commerce and testing the research model. Consequently, potentially important dimensions of the study could include an investigation in multiple cities, and especially in more rural areas, which may lead to more accurate and comprehensive results and analysis. Also, comparative research in different parts of the world would produce more complete findings. The results of this study could then be compared with those of other developing countries having similar conditions to see if there is a significant difference. | 2019-05-28T13:14:13.896Z | 2015-04-30T00:00:00.000 | {
"year": 2015,
"sha1": "08f3aa930219ad665e7cfc506abe0248ae076144",
"oa_license": "CCBY",
"oa_url": "https://researchleap.com/wp-content/uploads/2015/05/6.-Factors-influencing-e-commerce-development-implications-for-the-developing-countries1.pdf",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ddffaceec5cf5a156ce453ee3837f078805bbf95",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
6100749 | pes2o/s2orc | v3-fos-license | Evolutionary Constraints on the Norovirus Pandemic Variant GII.4_2006b over the Five-Year Persistence in Japan
Norovirus GII.4 is a major cause of global outbreaks of viral gastroenteritis in humans, and has evolved by antigenic changes under constantly changing human herd immunity. A major shift in the pandemic GII.4 strain occurs periodically, concomitant with changes in the antigenic capsid protein VP1. However, how a newly emerged strain evolves after the onset of a pandemic remains unclear. To address this issue, we examined the molecular evolution of a pandemic lineage, termed GII.4_2006b, using the full-length viral genome and VP1 sequences (n = 317) from stools collected at 20 sites in Japan between 2006 and 2011. The phylogenetic tree showed a radial diversification of the genome sequences of GII.4_2006b, suggesting rapid genetic diversification of the GII.4_2006b population from a few ancestral variants. Impressively, the amino acid sequences of the variable VP1 in given seasons remained as homogeneous as those of the viral enzymes, despite an annual increase in the nucleotide diversity of the VP1 coding region. The Hamming distances between the earliest and subsequent variants indicate strong constraints on amino acid changes, even for the highly variable P2 subdomain. These results show the presence of evolutionary constraints on the VP1 protein and viral enzymes, and suggest that these proteins gain near maximal levels of fitness benefits in humans around the onset of the outbreaks. These findings have implications for our understanding of molecular evolution, the mechanisms of the periodic shifts in the pandemic NoV GII.4 strains, and the control of the NoV GII.4 pandemic strain.
The VP1 protein is the major structural protein of the mature virion; it protrudes from the virion surface and plays pivotal roles in viral interactions with hosts. The VP1 protein is composed of two domains, protruding (P) and shell (S) (Prasad et al., 1999). The P domain is further divided into two subdomains, P1 and P2 (Prasad et al., 1999). The P2 subdomain is placed at the tip of the VP1 protein and constitutes the major antigenic site around the binding site for the putative receptor(s) for infection (Donaldson et al., 2010). This structural feature causes sequence variation (Lindesmith et al., 2008, 2011, 2012a, 2013; Bok et al., 2009; Debbink et al., 2012) and structural diversity (Chen et al., 2004, 2006; Donaldson et al., 2010), particularly in the P2 subdomain. Meanwhile, the functional importance of the P2 subdomain can cause suppression of deleterious changes and/or changes that reduce viral replication fitness. However, very little is known about the evolution of the VP1 protein during viral maintenance in human populations.
To address this issue, we examined the molecular evolution of the VP1 protein of a pandemic lineage, termed GII.4_2006b, which is also known as GII.4 Den Haag 2006b. In the autumn/winter of 2006, the national epidemiological surveillance of infectious diseases in Japan reported an unusual increase in the number of outbreaks of NoV infections (Infectious Disease Surveillance Center, http://idsc.nih.go.jp/iasr/prompt/graph-ke.html). This augmentation was associated with the nationwide spread of a newly emerging GII.4 variant, termed GII.4_2006b (Motomura et al., 2008). The GII.4_2006b initially coexisted as a minority strain among various other NoV lineages in Japan, but starting in October of 2006 it spread extremely rapidly and remained the major epidemic variant across Japan (Motomura et al., 2008). In this study, we characterized the nucleotide and amino acid diversities of the VP1 proteins, using 317 serially collected full-length NoV genome and VP1 sequences from infections in Japan between 2006 and 2011. The obtained results show the long-term persistence of GII.4_2006b in human populations in Japan as a dominant GII.4 subpopulation. Interestingly, both the VP1 protein and the viral enzymes remained highly homogeneous populations, indicating strong evolutionary constraints on changes in these proteins following the onset of the outbreaks.
NoV Genome Sequencing
Stool specimens were collected from individuals with acute gastroenteritis at 20 regional public health institutes in Japan between May 2006 and March 2011 in compliance with the Food Sanitation Law of Japan, according to the methods for the protection of personal information (including methods for anonymization in an unlinkable fashion). The research was approved by the research and ethics committee of the National Institute of Infectious Diseases. Three to five stool specimens were collected at each site in each year. NoV genome sequences were obtained from the stool specimens as described previously (Motomura et al., 2008, 2010).
Genotype Determination
Norovirus genotype was determined by construction of phylogenetic trees of viral genome sequences. Multiple sequence alignments were done as described previously (Motomura et al., 2008, 2010) using MAFFT (Katoh et al., 2009) and the alignment tools implemented in the MEGA software suite (Tamura et al., 2011). Phylogenetic trees were constructed as described previously (Motomura et al., 2008, 2010) using MEGA software (Tamura et al., 2011). The reliability of interior branches in the tree was assessed by the bootstrap method with 1,000 resamplings.
Analysis of Diversity of Sequence Population
Mean diversity in the entire sequence population was computed with the "Sequence Diversity" menu in MEGA software suite (Tamura et al., 2011). The overall pairwise mean distance between the sequences was computed with the "Distances" menu in MEGA. As substitution models, a maximum composite likelihood and a Poisson model were used for nucleotide and amino acid sequences, respectively. Variance was estimated by the bootstrap method with 100 to 500 bootstrap replications.
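For readers unfamiliar with the distance measures behind MEGA's "Distances" menu, the following minimal sketch shows the Poisson-corrected amino acid distance, d = -ln(1 - p), averaged over all sequence pairs. It is a simplified stand-in, not MEGA's implementation (bootstrap variance estimation and the maximum composite likelihood nucleotide model are omitted), and the toy sequences are hypothetical.

```python
# Minimal sketch of the Poisson-corrected amino acid distance used by MEGA's
# Poisson model: d = -ln(1 - p), where p is the proportion of differing sites
# between two aligned sequences (gap columns are skipped for simplicity).
import math
from itertools import combinations

def poisson_distance(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    p = sum(a != b for a, b in pairs) / len(pairs)
    return -math.log(1.0 - p)  # diverges as p -> 1, as expected

def mean_pairwise_distance(seqs: list[str]) -> float:
    dists = [poisson_distance(s1, s2) for s1, s2 in combinations(seqs, 2)]
    return sum(dists) / len(dists)

# Toy aligned amino acid sequences (hypothetical, for illustration only)
aln = ["MKTAYIAKQR", "MKTAYIAKQK", "MKSAYIAKQR"]
print(f"mean pairwise Poisson distance: {mean_pairwise_distance(aln):.4f}")
```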
Analysis of Individual Amino Acid Variation
Amino acid variation at each position of the VP1 (residues 1-530) was calculated from a multiple sequence alignment, as described previously for other viral proteins (Naganawa et al., 2008; Oka et al., 2009; Takahata et al., 2017), on the basis of Shannon's equation (Shannon, 1997):

H(i) = -Σ p(x_i) log2 p(x_i)

where H(i), p(x_i), and i indicate the amino acid entropy score of a given position, the probability of occurrence of a given amino acid at that position, and the position number, respectively, and the sum runs over the amino acids observed at position i. An H(i) score of zero indicates absolute conservation, whereas 4.4 bits for amino acids or 2.0 bits for nucleic acids indicates complete randomness.
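A minimal sketch of this per-position entropy calculation is given below; the toy alignment is hypothetical, and gap handling is omitted for brevity.

```python
# Sketch of per-position Shannon entropy H(i) over a multiple sequence
# alignment, following the equation above: H(i) = -sum_x p(x_i) log2 p(x_i).
# H(i) = 0 means absolute conservation; ~4.4 bits means near-random residues.
import math
from collections import Counter

def column_entropy(column: str) -> float:
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def alignment_entropies(aln: list[str]) -> list[float]:
    # Assumes all sequences have equal length (i.e., they are aligned)
    return [column_entropy("".join(seq[i] for seq in aln))
            for i in range(len(aln[0]))]

# Toy alignment (hypothetical): only the middle position varies
aln = ["MKT", "MAT", "MKT", "MRT"]
print([round(h, 3) for h in alignment_entropies(aln)])
# -> [0.0, 1.5, 0.0]: only the variable middle position has nonzero entropy
```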
Analysis of Amino Acid Substitutions by Hamming Distance
We used the Hamming distance to assess the changeability of the earliest NoV GII.4_2006b variant in Japan. In information theory, the Hamming distance between two sequences indicates the minimum number of substitutions required to change one sequence into the other. Because the lengths of the amino acid sequences of the VP1 proteins of the NoV GII.4_2006b subpopulations were identical (540 amino acid residues), the Hamming distance measured in this study equals the number of differing amino acid residues between the earliest and subsequently emerged variants in two aligned sequences. Python was used as the programming language to compute Hamming distances. Hamming distances between the sequence from May 2006 (accession number AB447443; the earliest GII.4_2006b sequence in our NoV genome dataset) and the later GII.4_2006b sequences (n = 249) were computed by creating a sequence that marks matches and mismatches at corresponding positions in the two sequences, and then counting the number of mismatches.
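Since the authors state that Python was used for this computation, a minimal sketch of the described match/mismatch counting is shown below; the toy sequences are hypothetical stand-ins for the 540-residue VP1 sequences.

```python
# Minimal sketch of the Hamming-distance computation described above: for
# equal-length aligned VP1 sequences, count positions where residues differ.
def hamming(ref: str, query: str) -> int:
    if len(ref) != len(query):
        raise ValueError("Hamming distance requires equal-length sequences")
    return sum(a != b for a, b in zip(ref, query))

# Hypothetical toy sequences standing in for the earliest (May 2006) VP1
# variant and a later variant; real sequences are 540 residues long.
earliest = "STAPLGAVD"
later    = "STGPLGAVN"
print(hamming(earliest, later))  # -> 2
```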
Nucleotide Accession Numbers
The DDBJ database accession numbers of the 317 NoV GII.4 genome sequences used in this study are provided in Supplementary Table S1 (n = 250, GII.4_2006b) and Supplementary Table S2 (n = 67, GII.4 non-2006b).
Persistence and Diversification of NoV GII.4_2006b Genome in Japan between 2006 and 2011
We obtained 317 genome sequences of NoV GII.4 from the stool specimens collected at 20 sites in Japan between 2006 and 2011 (Figure 1A). Eight distinct lineages of NoV GII.4 were identified (Motomura et al., 2008, 2010; Supplementary Tables S1, S2). Among the eight newly emerged GII.4 lineages, only the pandemic variant GII.4_2006b was detected dominantly and continually throughout Japan (Figure 1B). The GII.4_2006b represented about 79% (n = 250) of the total GII.4 genomes detected during the 5 years. Phylogenetic analysis shows that the GII.4_2006b genome sequences diverged radially from a few roots (Figure 1C). These data suggest genetic bottlenecks followed by a rapid genome diversification of GII.4_2006b between 2006 and 2011.
Diversity of NoV GII.4_2006b ORFs
The GII.4_2006b RNA genome encodes three open reading frames, ORF1, ORF2, and ORF3 (Figure 2A). ORF1 encodes viral enzymes and non-structural proteins. ORF2 and ORF3 encode the structural proteins VP1 and VP2, respectively. We first examined whether sequence diversity differs among the three ORFs, using the 250 GII.4_2006b genome sequences obtained in this study. The phylogenetic tree and the mean diversity in the entire sequence population show that the nucleotide diversity was similar among the three ORFs (Figure 2B). In contrast, a marked difference was observed in the diversity of amino acid sequences: the ORF1 and ORF2 amino acid sequences remained significantly less diversified than those of ORF3 (Figure 2C). The data suggest the presence of constraints on amino acid changes in the proteins encoded by ORF1 and ORF2.
Temporal Changes in the Sequence Diversity of NoV GII.4_2006b
The GII.4_2006b RNA genome encodes eight viral proteins (Figure 3A). The Shannon entropy of the amino acid sequences of the 2006b ORF1 in the present genome dataset indicates that the potential sites for the internal cleavage of the ORF1 precursor protein were perfectly conserved at the amino acid level [H(i) = 0] for p48/NTPase (Q/G), NTPase/p22 (Q/G), and VPg/Pro (E/A).
[Figure 2 caption, fragment: sequences as in Figure 1C. The analyses were done with tools included in the MEGA software suite (Tamura et al., 2011). (B) Nucleotide sequences. (C) Amino acid sequences.]
To assess the changeability of individual viral proteins, we examined temporal changes in the sequence diversity of the eight protein-coding regions using the 250 GII.4_2006b genomes. The genome sequences were divided into five groups based on the seasons in which they were collected, and the overall mean distance of the sequences in a season was calculated using MEGA. The nucleotide mean distance sequentially increased for all eight protein-coding regions (Figure 3B, Nuc), indicating a continuous increase in the dissimilarity of every gene segment in the GII.4_2006b variant population in Japan. In contrast, the temporal change in amino acid sequence diversity was very different among the eight proteins (Figure 3B, Ami). Interestingly, the amino acid mean distance of the generally hypervariable VP1 protein sequences remained comparable to that of the three viral enzymes (NTPase, Pro, Pol) for 5 years, with the mean distance remaining at less than 0.01 with small variances (Figure 3B, Upper). After the 3rd epidemic season, the VP1 amino acid distance even decreased.
Meanwhile, the amino acid mean distances of the p22 and VP2 proteins sharply increased in parallel with an increase in the nucleotide distances (Figure 3B, p22 and VP2), suggesting the continuous diversification of these proteins in association with nucleotide diversification. The mean amino acid distance for the p48 protein increased with time yet less extensively than those for the p22 and VP2 proteins (Figure 3B, p48). The mean amino acid distance for the VPg protein stayed at relatively low levels with large variances (Figure 3B, VPg). In sum, these data suggest the presence of strong constraints on amino acid changes in the capsid protein VP1 and enzymes (NTPase, Pro, Pol) of the GII.4_2006b under the diversification of nucleotide sequences.
Long-term Circulation of the NoV GII.4_2006b Subgroup Carrying the Identical Capsid Protein VP1
We identified a GII.4_2006b subpopulation (n = 23) whose nucleotide sequences differed from each other, yet encoded exactly the same VP1 amino acid sequences (Figure 4A). The members of this population, tentatively termed group 1, were detected at 11 distantly located sample collection sites in Japan during the study period (Supplementary Table S1 and Figure S1). They emerged in the second epidemic season in 2007 and continuously circulated without changes in the VP1 amino acid residues, representing about 6-13% of the GII.4_2006b genomes in each season (Figure 4B). The group 1 genomes continuously accumulated nucleotide substitutions in the VP1-coding region (Figure 4C), but only synonymous substitutions (Figure 4A). The GII.4_2006b variants at the onset of the epidemics generally had 10 substitutions at the potential epitopes A, B, D, and E (Lindesmith et al., 2012a) (Figure 4D). The group 1 VP1 protein had an additional substitution (S393G) in epitope D.
FIGURE 4 | Long-term circulation of the NoV GII.4_2006b subgroup carrying the identical capsid protein VP1. (A) Identification of a GII.4_2006b genome subpopulation, group 1, that encodes identical VP1s in distinct genetic backbones. Nucleotide (Upper) and deduced amino acid (Lower) sequences of the group 1 genomes (n = 23) were aligned using MAFFT software (Katoh et al., 2009), and Shannon entropies at individual positions were calculated as described previously (Naganawa et al., 2008; Oka et al., 2009; Takahata et al., 2017). The distribution of Shannon entropy scores in the GII.4_2006b genome is shown. (B) Detection frequency of the group 1 genomes in five seasons between 2006 and 2011 in Japan. (C) Neighbor-joining tree of the GII.4_2006b VP1 nucleotide sequences (1620 nucleotides). Colored circles indicate the group 1 sequences. (D) P domain dimer model of the GII.4_2006b VP1 protein, constructed as described (Motomura et al., 2008, 2010). Blue residues indicate the GII.4_2006b-specific amino acid substitutions at potential epitopes in the P2 subdomain of the GII.4 VP1 (Lindesmith et al., 2012b).
Temporal Change in Hamming Distance for the NoV GII.4_2006b Capsid Protein VP1
The NoV VP1 protein has an architecture similar to that of the VP1 proteins of other single-stranded RNA viruses (Prasad et al., 1994, 1999; Figure 5A). The S domain is highly conserved, whereas the P2 subdomain is hypervariable among GII.4 variants. To assess the changeability of the P2 subdomain of the GII.4_2006b, we examined the temporal accumulation of amino acid substitutions in the S, P1, and P2 regions of the GII.4_2006b VP1 using the Hamming distance between the earliest and subsequent VP1 variants. As the earliest VP1 variant of the GII.4_2006b, we used a sequence from a May 2006 sample, which was collected in spring, about 5 months before the onset of the nationwide epidemics of the GII.4_2006b in October of 2006 in Japan (Motomura et al., 2008).
For the S domain, the Hamming distances of the variants in given seasons were at a constant peak of 0 for 5 years (Figure 5B, VP1 Shell). The data suggest that the GII.4_2006b variants having amino acid substitutions in the S domain were mostly cleared during epidemics. For the P1 and P2 subdomains, the peaks of Hamming distances were fixed at 1 and 3 after the second and first epidemic seasons, respectively ( Figure 5B, VP1 P1 and P2). The data indicate that most of the GII.4_2006b variants in the early epidemics had a few amino acid substitutions in the P domain but they could not accumulate more mutations after the second epidemic season. Thus the P domain was more variable than the S domain in the GII.4_2006b variants, as has generally been documented for other NoVs. However, the accumulation of amino acid substitutions was strictly constrained in the P domain of the GII.4_2006b variants during epidemics. In contrast, the Hamming distances of VP2, a minor structural protein in virion (Glass et al., 2000), continuously increased and showed no evidence of fixation of the peak distance during the study period (Figure 5B, VP2).
DISCUSSION
In this report, we studied the molecular evolution of the NoV capsid protein of a pandemic lineage, GII.4_2006b. This NoV subpopulation predominated over other coexisting NoV GII.4 subpopulations between 2006 and 2011 in Japan (Figure 1). Notably, the amino acid sequences of the variable VP1 protein of the GII.4_2006b populations remained as homogeneous as those of the viral enzymes for the 5 years, despite an increase in nucleotide diversity (Figures 2, 3). A GII.4_2006b population possessing an identical amino acid sequence in the VP1 protein even persisted throughout the study period (Figure 4). Even the hypervariable antigenic P2 subdomain of the VP1 protein resisted sequential accumulation of amino acid substitutions (Figure 5). These results suggest the presence of strong evolutionary constraints on the VP1 protein of the NoV pandemic strain. The finding has implications for our understanding of molecular evolution, the mechanisms of the periodic shifts in the pandemic NoV GII.4 strains, and the control of the NoV GII.4 pandemic strain.
First, the finding has implications for understanding the fitness landscape and evolution of the VP1 protein of the NoV GII.4 pandemic strain. The strong constraints on changes imply that the VP1 protein and enzymes of the GII.4_2006b variants had already gained near maximal levels of fitness benefits in humans around the onset of the outbreaks, and that new mutations in the VP1 protein were mostly cleared from the GII.4_2006b population, probably due to a reduction in viral fitness for spread in humans. In order to predominate over other coexisting GII.4 variants, the pandemic variant should have the VP1 structure that confers the best ability to evade the preexisting herd immunity against NoV at that time, while also retaining affinity for the receptor(s) on human cells. Because the antigenic sites are located near the receptor-binding site, new antigenic mutations always carry the risk of attenuating VP1 protein function and thereby reducing viral replication fitness in humans. Thus, it is possible that the VP1 protein of the pandemic strain remained conserved in human populations primarily because of the necessity to simultaneously maintain the advantageous physical properties of the VP1 protein for immune evasion and infectivity.
Second, the finding has implications for the periodic shifts of the pandemic NoV GII.4 strains. Provided that the VP1 protein sequence of a given pandemic variant remains conserved following the onset of epidemics, as seen in the GII.4_2006b, the human herd immunity against the VP1 protein would become increasingly more effective in association with the spread of the virus in humans. Consequently, the niche for the pandemic variant in humans would be reduced, and the pandemic variant would eventually be replaced by an alternative variant that has the fittest capsid structure under the human herd immunity at that time. Consistently, the number of reported NoV infection cases in Japan decreased annually from late 2007, and the GII.4_2006b was replaced by a new global pandemic strain, GII.4_Sydney 2012, in the 2013/2014 season, as reported in other countries (van Beek et al., 2013; Eden et al., 2014).
Finally, the finding has implications for the control of NoV pandemic strains. Although the development of vaccines and antiviral agents is of special importance to reduce the damage from NoV infections, structural variations in the viral proteins can be problematic. In this regard, the present study suggests the presence of strong constraints on changes in the capsid protein and enzymes of a NoV GII.4 pandemic variant over the course of its 5-year persistence across Japan. The finding provides a rationale for developing vaccines and antiviral agents against a pandemic strain. A basic premise of such control is that the sequences of the VP1 protein and viral enzymes of a given pandemic variant remain highly homogeneous after the onset of the pandemic. Therefore, it is important to further accumulate information on the evolution of newly emerged pandemic strains to clarify whether the present observations of amino acid conservation in the VP1 and viral enzymes can be extended to other GII.4 pandemic variants. In parallel, it would be important to study the genetic diversity of NoV in nature in order to develop systems to predict a new pandemic variant in advance.
AUTHOR CONTRIBUTIONS
HS conceived the study. MY prepared the computing environment for information science. KK, NT, MN, and TT organized the collection of stool specimens. TO, HN, and KM performed sequencing. HN and HS performed the variation analysis. HS prepared the manuscript. All authors read and approved the final manuscript. | 2017-05-04T00:11:42.423Z | 2017-03-13T00:00:00.000 | {
"year": 2017,
"sha1": "3f05512fca52cd55793ce8b8331eabc1192919ea",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.00410/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f05512fca52cd55793ce8b8331eabc1192919ea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
226297404 | pes2o/s2orc | v3-fos-license | Performance and Stability of Tenofovir Alafenamide Formulations within Subcutaneous Biodegradable Implants for HIV Pre-Exposure Prophylaxis (PrEP)
A critical need exists to develop diverse biomedical strategies for the widespread use of HIV Pre-Exposure Prophylaxis (HIV PrEP). This manuscript describes a subcutaneous reservoir-style implant for long-acting delivery of tenofovir alafenamide (TAF) for HIV PrEP. We detail key parameters of the TAF formulation that affect implant performance, including the TAF ionization form, the selection of excipient, and the exposure to aqueous conditions. Both in-vitro studies and shelf-stability tests demonstrate enhanced performance for TAF freebase (TAF FB) in this long-acting implant platform, as TAF FB maintains higher chemical stability than the TAF hemifumarate salt (TAF HF). We also examined the hydrolytic degradation profiles of various formulations of TAF and identified inflection points for the onset of accelerated drug hydrolysis within the implant using a two-line model. The compositions of unstable formulations were characterized by liquid chromatography-mass spectrometry (LC-MS) and correlated to the predominant products of the TAF hydrolytic pathways. The hydrolysis rate of TAF is affected by pH and water content in the implant microenvironment. We further demonstrate the ability to substantially delay the degradation of TAF by reducing the rate of drug release and thus lowering the water ingress rate. Using this approach, we achieved sustained release of TAF FB formulations over 240 days and maintained >93% TAF purity under simulated physiological conditions. The opportunities for optimization of TAF formulations in this biodegradable implant support further advancement of strategies to address long-acting HIV PrEP.
Introduction
There are an estimated 38 million people presently living with HIV globally [1]. Although the annual number of newly infected people has steadily declined, 1.7 million new infections were reported in 2019 [2]. Progress toward further reduction of new infections is challenging, as some countries have experienced rising rates and more than 50% of key populations still do not have access to current HIV prevention services [1,2]. This suggests that coverage with the currently available HIV prevention product regimens, such as oral pre-exposure prophylaxis (PrEP), is insufficient. A broader range of HIV prevention products that extend beyond daily oral therapy, such as long-acting (LA) injectables, implants and topical methods, could further reduce the rate of new infections overall and in key populations.
The hydrolysis rate of TAF within the implant is pH-dependent and inversely correlated to the rate of water ingress. Devices with thicker PCL walls are used to delay the hydrolysis of TAF within our implants. As a result, we achieved a high level of TAF purity after 240 days of in-vitro exposure. This manuscript supports the continued advancement of new long-acting delivery systems to address HIV PrEP.
Excipient Screening
Excess TAF FB (~40 mg) was mixed with neat excipient (~1 mL) in a 20 mL scintillation vial. Similarly, excess TAF HF (~30 mg) was mixed with excipient (~0.4 mL). Each excipient mixture was incubated at 37 °C over a period of 2 days, after which the concentration of TAF FB or TAF HF was measured by high-performance liquid chromatography (HPLC) to determine the solubility of the API.
To determine the stability of the API, the excipient mixtures were prepared as specified above and incubated at 37 °C for an additional 7 days prior to being analyzed by HPLC.
Implant Fabrication
Research-grade PCL pellets were purchased from Sigma-Aldrich, referred to as "Sigma-PCL" throughout this paper (weight-average molecular weight (Mw) = 103 kDa, Cat# 440744, St. Louis, MO, USA), and medical-grade PCL was purchased from Corbion, referred to as "PC17" throughout this paper (average Mn = 93 kDa, PURASORB PC 17, Amsterdam, The Netherlands). PCL tubes were fabricated via a hot-melt, single-screw extrusion process using solid PCL pellets at GenX Medical (Chattanooga, TN, USA). All tubes were 2.5 mm in outer diameter (OD) and had wall thicknesses of 70, 100, 150, 200 or 300 µm, as measured with a 3-axis laser measurement system and light microscopy at GenX Medical.
PCL tubes were sealed at both ends using injection sealing wherein the PCL tube was marked and trimmed to the correct length to achieve an implant with a 40 mm paste length with 3 mm of headspace at both ends for sealing. The initial seal was then created on one end of the implant by placing the tube over a stainless steel rod that filled all the tube except for a 3 mm headspace at one end, placing a Teflon collar around the headspace to support the tube wall and injecting molten PCL into the cavity of the headspace. After the injected PCL was solidified, excess PCL was trimmed and the collar was removed to form a cylindrical seal approximately 2 mm long that is compatible with commercial contraceptive trocars.
TAF FB and TAF HF were mixed with excipients at varying mass ratios prior to loading into the implant. Each mixture was first ground with a mortar and pestle to create a smooth paste and then backloaded into a 1 mL syringe fitted with a 14-gauge blunt-tip needle. The TAF formulation was then extruded through the needle into the empty tube. Alternatively, the TAF formulation was loaded into the PCL tube using a modified spatula. After the filled formulation reached the 40-mm mark, the interior tube wall was cleaned with a rod and sealed in a manner similar to the first seal. After fabrication, all implants were weighed to determine the total payload and photographed with a ruler to record the final dimensions. Paste area was measured with ImageJ (Version 1.50e, National Institutes of Health (NIH), Bethesda, MD, USA), and release rates were normalized to the surface area of a full-sized implant (2.5 mm OD, 40 mm in length), 314 mm². The ends of the implants (i.e., end-seals) were not included in calculations of the implant surface area.
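The surface-area normalization described above follows from the lateral area of a cylinder; the sketch below reproduces the stated 314 mm² figure. The final normalization step with a measured paste area is a hypothetical illustration of how such a correction could be applied, not a protocol taken from the paper.

```python
# Sketch of the surface-area normalization described above: release rates are
# normalized to the lateral area of a full-sized cylindrical implant
# (2.5 mm OD x 40 mm length); end-seals are excluded, so only pi*d*L is used.
import math

def lateral_area_mm2(od_mm: float, length_mm: float) -> float:
    return math.pi * od_mm * length_mm

area = lateral_area_mm2(2.5, 40.0)
print(f"implant surface area: {area:.0f} mm^2")  # -> 314 mm^2, as in the text

# Hypothetical use: scale a measured release rate to the full-size area
measured_rate_mg_day = 0.20  # assumed measured rate for a test article
paste_area_mm2 = 250.0       # assumed ImageJ-measured paste area
normalized = measured_rate_mg_day * area / paste_area_mm2
print(f"normalized release rate: {normalized:.2f} mg/day")
```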
Implant Sterilization
All implants were fabricated and handled under aseptic conditions using a biosafety cabinet. Certain implants were exposed to gamma irradiation, as indicated in the text. Implants exposed to gamma irradiation were first packed in amber glass vials and then irradiated with a dose range of 18-24 kGy at room temperature, using a Cobalt-60 gamma-ray source (Nordion Inc., Ottawa, ON, Canada) at Steris (Mentor, OH, USA). Samples were exposed to the source on a continuous path for a period of 8 h.
In Vitro Release Studies
In vitro release characterization involved incubation of the implants in 40 mL of 1X phosphate-buffered saline (PBS) (pH 7.4) at 37 °C on an orbital shaker. TAF species in the release media were measured by ultraviolet-visible (UV) spectroscopy at 260 nm using a Synergy MX multi-mode plate reader (BioTek Instruments, Inc., Winooski, VT, USA). The release buffer was sampled three times per week, during which the implants were transferred to 40 mL of fresh buffer to maintain sink conditions. The quantity of TAF released into the PBS buffer during each time interval was calculated, and the cumulative mass of drug released as a function of time was determined. All in-vitro release and stability studies had sample sizes of 20 implants, unless noted otherwise. At defined timepoints, 2 devices were taken down to determine the chromatographic purity of TAF and the water content inside the implant reservoir.
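The cumulative-release bookkeeping described above can be sketched as follows; the concentration values are hypothetical, and the calibration of UV absorbance at 260 nm to concentration is assumed to have been done separately.

```python
# Sketch of the cumulative-release bookkeeping described above. At each media
# change, the TAF concentration (from the UV measurement, converted via an
# assumed calibration curve) times the buffer volume gives the mass released
# in that interval; intervals are then summed. Values are hypothetical.
BUFFER_VOLUME_ML = 40.0  # fresh buffer volume per interval, as in the text

def cumulative_release(conc_mg_per_ml: list[float]) -> list[float]:
    """conc_mg_per_ml: TAF concentration measured at each buffer change."""
    total, cumulative = 0.0, []
    for c in conc_mg_per_ml:
        total += c * BUFFER_VOLUME_ML  # mass released in this interval [mg]
        cumulative.append(total)
    return cumulative

# Hypothetical thrice-weekly measurements over two weeks
concs = [0.013, 0.012, 0.014, 0.013, 0.012, 0.013]
print([round(m, 2) for m in cumulative_release(concs)])
```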
Stability Analysis of TAF Formulation
The purity of TAF formulations inside the implant reservoir was evaluated by slicing open an implant, extracting the entire reservoir contents into an organic solution, and measuring TAF chromatographic purity using high-performance liquid chromatography coupled with UV spectroscopy (HPLC/UV). The analysis was performed using a Waters BEH C18 column (2.1 mm × 50 mm, 1.7 µm) under gradient, reversed-phase conditions with detection at 260 nm. Shelf-stability implants containing TAF FB formulations were analyzed using an Agilent Zorbax column (4.6 mm × 150 mm, 3.5 µm). For each implant, a single aliquot was prepared and quantitated by linear regression analysis against a five-point calibration curve. TAF purity was calculated as the % peak area associated with TAF relative to the total peak area including TAF-related degradation products (detected above the limit of detection (LOD) ≥ 0.05%). The TAF formulations within the implant were analyzed after exposure of the implant to a simulated physiological condition (i.e., 1X PBS, pH 7.4 at 37 °C) for up to 240 days. A scalpel was used to slit the implants lengthwise, and the reservoir contents were blotted on pH paper to assess the pH within the device core.
Loss on Drying Analysis
A glass-stoppered, shallow weighing bottle was placed in a vacuum oven at 40 °C for 30 min, cooled to room temperature and weighed (W_bottle). Retrieved implants from each timepoint were placed in individual bottles, their weight was recorded (W_w), and the bottles were placed in the vacuum oven at 40 °C overnight (the glass stopper was removed from the bottle but left in the vacuum oven as well). The glass bottle was closed prior to weighing it with the implant (W_d). This process was repeated until the newly recorded weight (W'_d) was within 0.1 mg of W_d. The implant loss on drying (i.e., the amount of water present in the implant) was calculated using the following equation:

Loss on drying (%) = (W_w − W_d) / (W_w − W_bottle) × 100
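A minimal sketch of the loss-on-drying equation above, with hypothetical weights:

```python
# Sketch of the loss-on-drying calculation from the equation above; all weights
# include the weighing bottle, and the values below are hypothetical.
def loss_on_drying_pct(w_bottle_g: float, w_wet_g: float, w_dry_g: float) -> float:
    water_mass = w_wet_g - w_dry_g           # water removed by drying
    implant_wet_mass = w_wet_g - w_bottle_g  # implant mass before drying
    return 100.0 * water_mass / implant_wet_mass

print(f"{loss_on_drying_pct(10.0000, 10.1520, 10.1475):.1f}% water")  # -> 3.0%
```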
Shelf Stability
Implants filled with TAF FB and TAF HF formulations were placed in aluminum foil pouches. Half of the pouches were sealed using an impulse heat sealer (AIE-110T, American International Electric Inc., Industry, CA, USA), while the remaining ones were left unsealed. The sealed pouches containing the implants were further split into two groups: one stored under ambient conditions and one stored in an incubator at 37 °C with 40% relative humidity (RH). The same was repeated for the unsealed pouches containing the TAF formulation implants. The implants were removed from the pouches and assessed for purity using HPLC at the following timepoints: 0, 90, and 180 days. Note: the implants containing TAF HF formulations were sterilized at >40 kGy using the same parameters as specified in Section 2.3.
Differential Scanning Calorimetry (DSC)
The melting behavior of PCL samples was assessed with modulated differential scanning calorimetry (MDSC) (TA Instruments Q200, RCS90 cooling system, New Castle, DE, USA). Approximately 8 mg of extruded polymer tubing was placed in a Tzero™ pan and sealed with a Tzero™ lid and a dome-shaped die, resulting in a crimped seal. Samples were then placed in a nitrogen-purged DSC cell, cooled to 0 °C, then heated to 120 °C at a rate of 1 °C/min with an underlying heat-only modulation temperature scan of ±0.13 °C every 60 s. The melting temperature (T_m) of the polymer was determined from the peak temperature of the melting endotherm, and the enthalpy associated with melting was determined by linearly integrating the area of the melt peak (between 25 and 65 °C) using the TA Universal Analysis software (version 4.5A, TA Instruments, New Castle, DE, USA). PCL samples did not exhibit exothermic peaks in the non-reversing heat flow signal, indicating that PCL did not experience cold-crystallization during the melting process; therefore, the total heat flow curve was used to assess the mass % crystallinity. The mass % crystallinity was calculated using the following equation, where X_c represents the mass fraction of crystalline domains in PCL, ∆H_m represents the enthalpy of melting measured by the DSC, and ∆H_fus represents the theoretical enthalpy of melting for 100% crystalline PCL, reported as 139.5 J/g [24,25]:

X_c = (∆H_m / ∆H_fus) × 100%
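A one-line sketch of the crystallinity equation above, using the literature ∆H_fus for PCL quoted in the text; the measured melt enthalpy is hypothetical:

```python
# Sketch of the mass % crystallinity calculation from the equation above,
# using the literature value dH_fus = 139.5 J/g for 100% crystalline PCL.
DH_FUS_PCL = 139.5  # J/g, theoretical melt enthalpy of 100% crystalline PCL

def percent_crystallinity(dh_melt_j_per_g: float) -> float:
    return 100.0 * dh_melt_j_per_g / DH_FUS_PCL

# Hypothetical measured melt enthalpy of 70 J/g -> ~50.2% crystalline
print(f"{percent_crystallinity(70.0):.1f}% crystalline")
```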
Gel Permeation Chromatography (GPC)
The MW of PCL was analyzed via GPC by first dissolving samples in tetrahydrofuran (THF) to 10 mg/mL and then injecting 40 µL of sample into an Agilent 1100/1200 HPLC-UV instrument (Santa Clara, CA, USA; flow rate of 1.0 mL/min). Polystyrene polymer standards (498 Da to 554 kDa) were used to calibrate the MW of the samples.
In Vitro Performance of PCL Reservoir Implants with TAF HF and TAF FB Formulations
In this study, we investigated various TAF formulations within a biodegradable implant under simulated physiological conditions. The implant configuration comprises a reservoir of formulated API encapsulated by a biodegradable PCL membrane (Figure 1). The implants were fabricated from PCL tubes produced via a hot-melt, single-screw extrusion process using solid PCL pellets, with an outer diameter (OD) of 2.5 mm and a length of 40 mm. Implants containing TAF in solid form with no additional excipients were first evaluated under in-vitro conditions that mimic physiological environments. Implants containing only TAF exhibited non-linear release profiles (see Figure S1), likely a consequence of the dissolution process of solid TAF within the reservoir and the lack of PCL membrane-controlled release of drug. Therefore, an excipient was incorporated into the API formulation to tailor dissolution rates. We conducted a screening study with several excipients identified from the FDA's inactive ingredient list. The excipient screen involved mixing TAF FB or TAF HF with various excipients and incubating the mixtures at 37 °C. The solubility and stability of each TAF form within the pharmaceutical-grade excipients were determined by an HPLC method after 2 or 9 days of incubation, respectively (Table 1). Lead excipients were identified for each API with the criterion of a <3% impurity level. Sesame oil and castor oil were therefore down-selected for TAF FB. Besides castor oil, a poly(ethylene glycol) (PEG) excipient was also considered for TAF HF, because of the high TAF HF stability reported by Schlesinger et al. within thin-film polymer implants [26]. We selected PEG600 as an excipient for TAF HF due to its larger molecular weight as compared to PEG excipients of shorter chain length (i.e., PEG300, PEG400).
After selection of the excipients, we conducted an in-vitro study to assess the release kinetics and the chromatographic purity of TAF HF and TAF FB within the reservoir over time when implants were immersed in aqueous, physiologically relevant conditions (pH = 7.4, 37 °C). TAF HF and TAF FB were first formulated with the lead excipients at a mass ratio of 2:1 and loaded into PCL extruded tubes (100 µm wall thickness) comprising PCL with Mw of 145 kDa. Table 2 shows the formulation, TAF payload and configuration of the tested implants. The cumulative release profiles for the various TAF formulations are shown in Figure 2. All implants exhibited a period of zero-order drug release. TAF FB implants exhibited linear release with a constant release rate of TAF species (non-degraded TAF and tenofovir-containing species) over 210 days, while the TAF HF implants demonstrated zero-order release of TAF species up to 120 days. The drug release rates for implants containing formulations with TAF HF and PEG600 exhibited relatively large variations and deviated from zero-order release near day 90, which was likely attributed to swelling of the implants.
After selection of the excipients, we conducted an in-vitro study to assess the release kinetics and the chromatographic purity of TAF HF and TAF FB within the reservoir over time when implants were immersed in aqueous, physiologically relevant conditions (pH = 7.4, 37 °C). TAF HF and TAF FB were first formulated with the lead excipients at a mass ratio of 2:1 and loaded in the PCL extruded tubes (100 µm wall thickness) comprising PCL with Mw of 145 kDa. Table 2 shows the formulation, TAF payload and configuration of the tested implants. The cumulative release profiles for the various TAF formulations are shown in Figure 2. All implants exhibited a period of zero-order drug release. TAF FB implants exhibited linear release with a constant release rate of TAF species (non-degraded TAF and tenofovir-containing species) over 210 days, while the TAF HF implants demonstrated zero-order release of TAF species up to 120 days. The drug release rates for implants containing formulations with TAF HF and PEG600 exhibited relatively large variations and deviated from zero-order release near day 90, which was likely attributable to swelling of the implants. This swelling behavior was previously observed for other hydrophilic excipients (i.e., glycerol, PEG300, PEG400) due to their high water solubility, which leads to high osmotic pressure within the implant core. The average release rates for the TAF HF-CO formulation and the TAF HF-PEG600 formulation were 0.38 ± 0.04 mg/day and 0.68 ± 0.20 mg/day, respectively. TAF FB formulations exhibited lower release rates than the TAF HF formulations, with average release rates of 0.26 ± 0.04 mg/day and 0.18 ± 0.03 mg/day for the castor oil and sesame oil formulations, respectively. The differences in release rates between implants containing TAF HF and TAF FB are likely related to the solubility of these APIs within the excipients and PBS buffer, since TAF HF showed a higher solubility within PBS (11.6 mg/mL) than TAF FB (5.8 mg/mL). In addition, the TAF FB-CO formulation demonstrated a faster release rate than the TAF FB-SO implants, which is also likely attributable to the higher solubility of TAF FB within castor oil. According to Fick's first law of diffusion, the release rate is directly proportional to the drug concentration gradient across the PCL membrane, which is equivalent to the solubility of the API within the excipients when zero-order release kinetics is achieved [27]. As shown in Table 1, TAF FB showed higher solubility within castor oil than sesame oil, resulting in higher release rates. Similarly, the TAF HF-PEG600 formulation demonstrated a faster release rate than the TAF HF-CO implants, which is likely attributable to the higher solubility of TAF HF within PEG600. Therefore, the release rate of the implant is dictated by the solubility of the API within the selected excipients and the release media. The excipient choice is critical for tuning the release rate of TAF.
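To make the Fick's-law picture concrete, the short sketch below estimates the zero-order release rate through a cylindrical PCL membrane. The diffusivity, partition coefficient and source solubility used here are illustrative assumptions chosen only to show the calculation, not measured properties of these implants; only the tube geometry (2.5 mm OD, 100 µm wall, 40 mm length) follows the devices described above.

```python
# Minimal sketch: membrane-controlled (zero-order) release from a cylindrical
# reservoir implant via Fick's first law. Parameter values are illustrative
# assumptions, not measured properties of the PCL implants in this study.
import math

def zero_order_release_rate(D, K, C_s, r_in, r_out, L):
    """Steady-state release rate (mg/day) through a cylindrical membrane.

    D:    drug diffusivity in the membrane (cm^2/s), assumed
    K:    membrane/reservoir partition coefficient, assumed
    C_s:  drug solubility in the excipient (mg/cm^3), the source
          concentration maintained by the saturated reservoir
    r_in, r_out, L: inner/outer radii and length of the tube (cm)
    """
    rate_mg_per_s = 2 * math.pi * L * D * K * C_s / math.log(r_out / r_in)
    return rate_mg_per_s * 86400  # convert to mg/day

# 2.5 mm OD, 100 um wall, 40 mm long tube; D, K and C_s are hypothetical.
rate = zero_order_release_rate(D=1e-9, K=1.0, C_s=15.0,
                               r_in=0.115, r_out=0.125, L=4.0)
print(f"predicted zero-order release: {rate:.2f} mg/day")
```

With these assumed transport parameters the estimate lands in the few-tenths-of-a-mg/day range observed here, and it makes the excipient dependence explicit: the predicted rate scales linearly with the drug's solubility C_s in the chosen excipient.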
In Vitro Stability Assessment of Implant Formulations

To evaluate the degradation profiles of TAF HF and TAF FB inside the reservoir of implants exposed to PBS at 37 °C, implants were periodically removed from the study and sacrificed to assess the stability of TAF using HPLC analysis. Figure 3a,b shows the chromatographic purity of TAF HF in the reservoir of implants over 120 days. By day 120, the purity of TAF HF decreased from 99.8% to 17.6% for TAF HF-CO formulations and from 99.7% to 1.36% for TAF HF-PEG600 formulations (raw data shown in Table S1). These results show that TAF HF-CO formulations exhibit a higher degree of stability than the TAF HF-PEG600 formulations after exposure to simulated physiological conditions, which is consistent with the stability results from the excipient screen. It is reasonable to expect that the TAF HF-PEG600 formulation degrades at a faster rate given the hydrophilicity of PEG600. In comparison, the chromatographic purity of TAF FB formulations was also monitored using the HPLC method. Results are shown in Figure 3c,d and raw data are listed in Table S2. Unlike the TAF HF formulations, a high level of purity was maintained for TAF FB formulations at 210 days. The TAF FB-SO formulation demonstrated a slightly higher purity (92%) than that of the TAF FB-CO formulations (85%) at 210 days, which is in good agreement with the excipient screen results.

Irrespective of the formulation, TAF within the implants remained unhydrolyzed and chemically stable at the beginning of the in-vitro study, then reached a point in time when the degradation rate accelerated. These results are consistent with our hypothesis of a two-stage degradation profile of TAF, where TAF in the solid state remains relatively stable and then hydrolyzes to TAF degradants once solid TAF dissolves in aqueous solutions. To assess the degradation rate constants for the initial and accelerated stages, the chromatographic purity of TAF was plotted as a function of time and a two-line model was applied to depict the non-linear degradation profiles of the various TAF formulations. Excellent fits to the experimental data were obtained (R² > 0.94 for all the fittings, see Figure 3). The two-line model also identified the inflection point of these degradation profiles, which represents the point in time when accelerated degradation of TAF begins.
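As an illustration of this fitting procedure, the sketch below fits a two-line (segmented linear) model to a purity-time series and extracts the inflection point. Only the end points (99.8% at day 0, 17.6% at day 120) follow the values quoted above for the TAF HF-CO arm; the intermediate points are invented for the example.

```python
# Minimal sketch of a two-line (segmented linear) fit used to locate the
# inflection point of a purity-vs-time profile. Intermediate data points are
# invented; see Tables S1-S2 for the actual raw data.
import numpy as np
from scipy.optimize import curve_fit

def two_line(t, t_inf, p0, k1, k2):
    """Purity(t): slope k1 before the inflection time t_inf, slope k2 after."""
    return np.where(t < t_inf, p0 + k1 * t, p0 + k1 * t_inf + k2 * (t - t_inf))

t = np.array([0, 15, 30, 45, 60, 75, 90, 105, 120], dtype=float)
purity = np.array([99.8, 99.5, 99.1, 98.6, 98.0, 96.5, 80.0, 50.0, 17.6])

(t_inf, p0, k1, k2), _ = curve_fit(two_line, t, purity, p0=[70, 100, -0.02, -1.0])
print(f"inflection at ~{t_inf:.0f} days; "
      f"initial slope {k1:.3f} %/day, accelerated slope {k2:.2f} %/day")
```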
As expected, the TAF HF-PEG600 formulation exhibited a higher initial rate of degradation with an early inflection point at ~38 days, compared to ~78 days for the TAF HF-CO formulation. Furthermore, the two-line models can project the purity of TAF HF along the time axis for these specific formulations. It is predicted that TAF HF would be completely hydrolyzed and depleted within the 100 µm thick implants with the TAF HF-CO formulation after exposure to the in-vitro environment for ~140 days. In contrast, TAF FB formulations exhibited a lower rate of degradation with a delayed onset of the accelerated degradation phase (~130 days). Based on the model, the TAF FB-SO and TAF FB-CO formulations can maintain >90% purity for up to 220 days and 155 days, respectively. In particular, the TAF FB-SO formulation showed a lower rate of degradation for both the initial and accelerated phases of degradation, as compared to the TAF FB-CO formulation. These results indicate that TAF FB formulations are more stable than the TAF HF formulations under simulated physiological conditions. This is likely attributable to the differences in the hydrolytic instability of the TAF FB and TAF HF formulations within the device cores, as both TAF FB and TAF HF degrade through hydrolysis [9,14]. Specifically, the hydrolytic degradation rate and pathway of TAF depend on the pH in the implant microenvironment [22]. The degradation rate of TAF increases in basic conditions due to the P-O bond in its structure, which is prone to hydrolysis in alkaline conditions [28]. In addition, a higher instability of TAF in low-pH conditions is also observed and is likely due to the presence of the P-N (phosphoramidate) bond, which is particularly susceptible to acid hydrolysis [29]. Thus, a pH "stability window" for TAF between pH 4.8-5.8 was determined by Grattoni et al. [22]. To elucidate why TAF FB formulations outperformed TAF HF formulations within our platform, we further measured the pH of the formulated implant core using pH paper prior to purity analysis. Both TAF FB formulations showed similar pH levels of ~4-5 at 210 days, whereas the pH of the residual TAF HF formulation inside the device core was ~2-3 at ~200 days (data not shown). An implant reservoir loaded with TAF HF formulation is expected to have a more acidic environment due to fumaric acid. This explains the low stability of the TAF HF formulation, because lower pH leads to more rapid TAF degradation.
Besides the pH level of the microenvironment in the device core, we also hypothesized that a correlation exists between the level of TAF degradation and the amount of water uptake by the implants. Therefore, we measured the water content within the implants using a loss-on-drying method. This approach is adapted from the United States Pharmacopeia (USP) Chapter for Loss-on-Drying, where we compared the weight of an implant before and after it was dried in a vacuum oven at 40 °C after removal from buffer. The percentage of water content for the various TAF formulations was plotted as a function of time (Figure 4). The amount of water ingress at different time points, along with the cumulative amount of drug released at a given time, is also listed in Tables S3 and S4. At day 120, the TAF HF-CO implants gained ~10 mg of water, whereas TAF HF-PEG600 implants gained ~18 mg of water. The significantly larger amount of water uptake by the TAF HF-PEG600 implants explains the low levels of chromatographic purity of TAF HF inside the implant reservoir. In contrast, TAF FB implants measured a much lower percentage of water ingress as compared to TAF HF implants at a given time. For instance, the amount of water ingress at 120 days for TAF FB-CO and TAF FB-SO only constitutes ~4.6% and 8.4% of the total mass of the implants, respectively, which is significantly lower than that of the TAF HF-CO (~11.6%) and TAF HF-PEG600 implants (~21.6%). In addition, the TAF FB-SO formulation exhibited a lower level of water ingress than the TAF FB-CO formulation, which is likely due to the lower release rate of the TAF FB-SO formulation. As shown in Table S4, the amount of cumulative drug release for the TAF FB-CO formulation is higher than for the TAF FB-SO formulation at a given time. Although both TAF FB-SO and TAF FB-CO formulations exhibited a comparable pH level in the implant core (pH of ~4-5), higher drug release may create more void space for water to permeate into the implants, resulting in faster degradation of TAF FB. Thus, reducing the release rate of drug from the implants can potentially slow down the rate of water ingress and improve the formulation stability.
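The loss-on-drying arithmetic itself is a one-liner; the sketch below shows the calculation with hypothetical implant masses.

```python
# Minimal sketch of the loss-on-drying water-content calculation: weigh the
# implant after removal from buffer, dry to constant mass in a vacuum oven
# at 40 C, and weigh again. The masses below are hypothetical.

def water_content_percent(mass_wet_mg: float, mass_dry_mg: float) -> float:
    """Percent water content relative to the wet implant mass."""
    return 100.0 * (mass_wet_mg - mass_dry_mg) / mass_wet_mg

# Hypothetical implant: 178 mg wet, 160 mg after drying (18 mg of water).
print(f"{water_content_percent(178.0, 160.0):.1f}% water")  # ~10.1%
```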
Similar to the TAF degradation profiles, the water ingress profiles also showed a two-step process, where the amount of water ingress was negligible at the beginning of the study, then increased dramatically at an accelerated rate (Figure 4). To assess the water permeation rates, two-line models were also applied to the water ingress profiles of the various formulations. Excellent fits to the water ingress data were obtained for the TAF HF-CO, TAF FB-CO and TAF FB-SO formulations (R² > 0.92 for all the fittings). Table 3 shows the inflection point for each formulation along with the quantities of drug remaining, TAF purities and water content at the given inflection point. The inflection points in the water ingress profiles closely aligned with those of the degradation profiles, confirming that the TAF purity is correlated with the amount of water ingress. In contrast, the water ingress profile for the TAF HF-PEG600 implants is sigmoidal and does not fit the two-line models. As a hydrophilic and water-soluble excipient, PEG draws more water into the implant core, resulting in swelling of the implants. Additionally, four TAF HF-PEG600 implants that were subjected to the loss-on-drying analysis were compromised due to swelling, resulting in an artificially higher amount of water ingress. It is worth noting that TAF FB formulations at the inflection point (~130 days) showed higher purity than the TAF HF formulations at their inflection points (~38-78 days), while the percentages of water ingress are comparable for these formulations, indicating that TAF FB is less susceptible to hydrolysis as compared to TAF HF. This is likely due to the differences in the pH level within the device core, as previously discussed. We further explored the relationship between the purity of TAF and the water content within these implants. Figure 5 illustrates the purity of TAF and the water content as a function of time for implants containing formulations of TAF HF and TAF FB. Both the TAF degradation and water ingress profiles appear to exhibit a two-stage process.
For example, the purity of formulated TAF HF inside the implant reservoir decreases at an accelerated rate after ~80 days in-vitro, while the rate of water ingress also increases after the same time (see Figure 5a). Similar degradation behaviors were observed for TAF FB around ~130 days for both the castor oil and sesame oil formulations (Figure 5b). These data suggest that the predominant reason for accelerated TAF degradation is related to water ingress, wherein the accumulation of water within the reservoir accelerates the rate of TAF hydrolysis. In addition, the increasing concentration of TAF degradants within the implant core could in turn draw a larger amount of water into the implant, as the most prominent TAF degradants (e.g., tenofovir, monophenyl-TFV) are hydrophilic and/or water-soluble. A possibility also exists that the increased concentration of solute resulting from the breakdown of TAF creates an osmotic gradient as a driving force for water to imbibe into the implant core. Taken together, the degradation of TAF is caused by water ingress, and the resultant degradation products may accelerate the rate of water permeation. Thus, a strong correlation between TAF impurity and water content within the implants was established. To summarize, as the degradation of TAF is dependent on the pH and the amount of water ingress, implants containing the TAF FB-SO formulation showed a pH level within the "stability window" that substantially mitigates TAF degradation and a relatively low release rate that slows down the rate of water ingress. Therefore, the TAF FB-SO formulation was identified as the lead formulation for the sustained delivery of TAF within our biodegradable drug delivery platform.

Figure 5. TAF purity and water uptake for the TAF HF implants (a) and TAF FB implants (b) as a function of time.
Hydrolytic Degradants of TAF
In addition to assessing the chromatographic purity of TAF, individual TAF degradants >0.05% were identified using known markers on HPLC, with confirmation by liquid chromatography-mass spectrometry (LC-MS) at the relative retention time (RRT). Figure 6 shows the levels of individual degradants as a function of time for the various TAF formulations. All degradants detected in the implant resulted from the hydrolysis of TAF, with the predominant species being TFV (the parent API) and monophenyl-TFV (an intermediate in the TAF hydrolytic pathway) for all formulations. These two degradants were measured at comparable levels within the TAF FB-CO, TAF FB-SO and TAF HF-CO implants, whereas the TAF HF-PEG600 implants contained a significantly higher level of monophenyl-TFV than TFV. The higher amount of monophenyl-TFV within the TAF HF-PEG600 implant is likely related to the larger amount of water ingress resulting from the compromised integrity of the implant. The observed TAF degradants are well aligned with the acidic solution-state degradation products reported in the literature. Figure 7 shows the postulated predominant TAF degradation pathway proposed by Golla et al. [14]. First, the phosphoramidate moiety of TAF undergoes hydrolysis to form monophenyl-TFV (RRT of ~0.6) upon the release of alanine isopropyl ester (RRT of ~0.5); monophenyl-TFV then undergoes phosphorus phenyl ester hydrolysis to yield TFV (RRT of ~0.36 and 0.05).
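A peak-assignment step like the one described here can be expressed compactly in code. The sketch below classifies HPLC peaks by the marker RRTs quoted in the text; the tolerance window is an arbitrary choice for illustration.

```python
# Minimal sketch: assigning HPLC peaks to TAF degradants by relative
# retention time (RRT), using the marker RRTs quoted in the text. The
# matching tolerance is an arbitrary illustrative choice.
RRT_MARKERS = {
    "monophenyl-TFV": [0.6],
    "alanine isopropyl ester": [0.5],
    "TFV": [0.36, 0.05],
}

def assign_peak(rrt: float, tol: float = 0.03) -> str:
    """Return the degradant whose marker RRT lies within tol of the peak."""
    for name, markers in RRT_MARKERS.items():
        if any(abs(rrt - m) <= tol for m in markers):
            return name
    return "unknown"

for peak_rrt in (0.61, 0.35, 0.05, 0.80):
    print(peak_rrt, "->", assign_peak(peak_rrt))
```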
Six-Month Shelf-Stability of Implants with TAF Formulations
To assess the stability of TAF formulations under different storage conditions, we conducted a 6-month shelf stability study of the lead TAF HF and TAF FB formulations identified in the in-vitro evaluations. All the TAF implants comprised Sigma-PCL with a wall thickness of 70 µm and formulations of a 2:1 weight ratio of TAF HF to castor oil or a 2:1 weight ratio of TAF FB to sesame oil. Implants were placed in open and closed foil pouches and then stored at 22 °C/50% RH and 40 °C/75% RH for six months. The long-term and accelerated storage conditions were selected based on FDA Guidance [30]. The purity of TAF within the implants was measured at 0, 90 and 180 days using the UPLC method. The chromatographic purity of each TAF formulation as a function of time is presented in Figure 8 and Table S5. As expected, both the TAF HF and the TAF FB formulations demonstrated higher stability under storage conditions than under the aqueous in-vitro conditions. When the package remained intact, the purity of TAF remained >97% for all formulations. Conversely, implants in the opened pouches, representing an unintentional breach in the packaging, exhibited substantially different stability profiles between TAF HF and TAF FB. At 180 days under accelerated stability conditions (40 °C/75% RH), the purity of TAF HF-CO substantially decreased to 91.2%, whereas the purity of TAF FB-SO remained at 97.5%. This result further shows that TAF FB outperforms TAF HF within the reservoir-style implant, but also illustrates the importance of packaging design for future product translation efforts.
This shelf-stability study also identified the individual TAF degradants >0.05% by RRT and LC-MS and showed that the predominant degradants were monophenyl-TFV and TFV, as in the in-vitro studies. Figure 9 illustrates the levels of individual degradants at 180 days for the various TAF formulations. Interestingly, TFV was detected at significantly higher levels than monophenyl-TFV for both TAF FB and TAF HF formulations under the solid-state conditions of the shelf stability tests. For instance, TAF FB-SO formulations contained 1.31 ± 0.04% of TFV and 0.03 ± 0.002% of monophenyl-TFV at 180 days, while TAF HF-CO formulations showed 1.87 ± 0.77% of TFV and 0.85 ± 0.57% of monophenyl-TFV at 180 days.
Improving the Stability of TAF FB Formulations
As discussed above, the degradation rate of TAF is pH-dependent and is related to the rate of water ingress. To further enhance the stability of TAF FB, Grattoni et al. included a trans-urocanic acid additive within an implant to preserve the optimal pH and maintain TAF purity >90% in vitro for over 9 months [22]. We are currently exploring various pH modifiers and hydrophile-lipophile balance (HLB) modifiers to further enhance the chemical stability of the TAF FB formulations [31,32]. Here, to improve the stability of TAF, we evaluated the ability to mitigate the amount of water ingress by reducing the release rate of TAF. We previously demonstrated that the release rates of TAF from the implant are inversely proportional to the wall thickness of the PCL tubes [18]. In this study, we evaluated the release rate of TAF FB-CO and TAF FB-SO formulations within PCL tubes of 150, 200 and 300 µm wall thickness. We used PCL tubes comprising PC17, a medical-grade PCL with Mw of 93 kDa, to support future preclinical studies. Figure 10 shows the cumulative release profiles of the TAF FB-CO and TAF FB-SO implants at different wall thicknesses. As before, implants containing TAF FB-CO formulations exhibited higher release rates than TAF FB-SO implants. We also observed the inverse relationship between the thickness of the PCL walls and the release rates of TAF for these medical-grade implants. For instance, as the wall thickness increased from 150 to 300 µm for implants comprising the TAF FB-CO formulation, the release rate of TAF decreased from 0.31 ± 0.06 mg/day to 0.10 ± 0.02 mg/day. The chromatographic purities of TAF FB-SO and TAF FB-CO formulations were assessed using the HPLC method at 210 and 240 days. As presented in Table 4, the chromatographic purity of the TAF formulations is inversely correlated with the release rates of TAF, demonstrating the ability to delay the degradation of TAF by lowering the release rates of the implants and thus reducing the water ingress rate. After 240 days of in-vitro exposure, we achieved a purity of 93.2% for the TAF FB-SO formulation within 300 µm implants. Importantly, thicker-walled implants also offer high mechanical strength and good device integrity. Although the release rate of the 300 µm TAF FB-SO implants is relatively low, the therapeutic level of TAF could potentially be achieved using multiple subcutaneously inserted implants, similar to Probuphine® [33] and Norplant® [34]. Although the use of multiple, low-dose implants will not circumvent the unavoidable hydrolysis of TAF within these current implant formulations, using multiple implants to achieve the desired dosing could extend the therapeutic duration to longer periods of protection. It is worth noting that the degradation profile of TAF formulations assessed under in-vitro conditions may not be reflective of in-vivo conditions, and efforts are currently underway to evaluate the degradation profiles of these TAF formulations in preclinical studies using animal models (i.e., rabbit, dog, non-human primate) [23].
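A quick back-of-the-envelope check of the inverse thickness-rate relationship is shown below. It anchors a simple 1/h scaling to the reported 150 µm TAF FB-CO rate and extrapolates; this is a rough consistency check, not the authors' model.

```python
# Minimal sketch: if release rate scales roughly as 1/wall-thickness for a
# membrane-limited implant, anchor the constant to the reported TAF FB-CO
# rate at 150 um (0.31 mg/day) and extrapolate to thicker walls.
ref_thickness_um = 150.0
ref_rate_mg_day = 0.31  # reported for TAF FB-CO at 150 um wall thickness

for h_um in (150.0, 200.0, 300.0):
    predicted = ref_rate_mg_day * ref_thickness_um / h_um
    print(f"{h_um:.0f} um wall -> ~{predicted:.2f} mg/day")

# The 300 um prediction (~0.16 mg/day) overshoots the measured
# 0.10 +/- 0.02 mg/day, so simple 1/h scaling is only a first approximation
# for these thick-walled cylindrical tubes.
```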
Conclusions
This manuscript highlights the importance of the ionization form of a drug when developing implantable drug delivery systems. Although the ionized hemifumarate salt form of TAF is currently used in clinically available oral formulations, we show that the non-ionized free base form of TAF is better suited for our reservoir-style implant due to its higher chemical stability over time. The development of implants as LA drug delivery systems requires API formulations that maintain a high degree of purity when exposed to physiological conditions over extended periods, in many cases for months to years. These studies show that TAF FB outperforms TAF HF by maintaining higher purity for a longer time in a reservoir-style implant. The higher purity of TAF FB is likely a result of multiple effects: the slowed rate of water ingress due to the lower release rates of TAF FB, the absence of the fumarate salt, and the achievement of an optimum pH window. To further delay the hydrolysis of TAF within our implants, we used thicker PCL walls to reduce the rates of drug release and water ingress. Using this approach, a purity of 93.2% of TAF was achieved with 300 µm implants comprising the TAF FB-SO formulation (release rate of 0.07 mg/day) after 240 days of in-vitro exposure. In general, the delivery of hydrolysable drugs from an implant is feasible but requires careful consideration of attributes that could affect drug breakdown, including the implant form factor, the mechanism of drug release, the drug ionization form and environmental exposures (e.g., pH, temperature, water content). For the reservoir-style TAF implant presented in this paper, many of these parameters can be controlled to improve the stability of TAF and ultimately improve the performance of the implant for achieving long-acting HIV PrEP.
Supplementary Materials:
The following are available online at http://www.mdpi.com/1999-4923/12/11/1057/s1. Figure S1: Daily release profiles of implants filled with TAF hemifumarate salt (TAF HF) (a) and TAF freebase (TAF FB) (b). Table S1: Chromatographic purity of TAF HF formulated with castor oil or PEG600 inside implants exposed to simulated physiological conditions over 120 days. Table S2: Chromatographic purity of TAF FB formulated with sesame oil or castor oil inside implants exposed to simulated physiological conditions over 210 days. Table S3: Water ingress within the implant reservoir of TAF HF formulated with castor oil or PEG600, measured via the loss-on-drying method. Table S4: Water ingress within the implant reservoir of TAF FB formulated with castor oil or sesame oil, measured via the loss-on-drying method. Table S5: Chromatographic purity profile of TAF HF castor oil and TAF FB sesame oil implants stored in open and closed foil pouches at 22 °C/50% RH and 40 °C/75% RH over 6 months.
"year": 2020,
"sha1": "b2d9bca3a5860f94380231f67ca8c2cea697a57c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/12/11/1057/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d0279a523c751db7da5ded516c11e0eb7f650fa",
"s2fieldsofstudy": [
"Medicine",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Energy Efficiency Maximization in Downlink Multi-Cell Multi-Carrier NOMA Networks With Hardware Impairments
In this paper, we investigate energy efficiency (EE) maximization in multicell multicarrier non-orthogonal multiple access (MCMC-NOMA) networks with hardware impairments (HIs). We formulate the optimization problem as a mixed-integer nonlinear NP-hard problem, which is difficult to solve efficiently. To solve this problem, we decompose it into two subproblems. The first subproblem is the user and base station (BS) association and subchannel assignment problem, where a binary whale optimization algorithm (BWOA) is proposed to handle it. For the second subproblem, the non-convex power allocation problem, successive pseudo-convex approximation (SPCA) is employed to establish the problem's pseudo-convexity. The approximate problem is separable into a sequence of equivalent problems that are easier to solve. Each problem of the obtained sequence has a stationary-point solution and is guaranteed to converge. The simulation results demonstrate that the proposed algorithm achieves a performance comparable to that of the successive lower bound maximization (SLBM) algorithm and outperforms both the fractional transmit power allocation (FTPA) benchmark for NOMA and conventional orthogonal multiple access (OMA).
I. INTRODUCTION
Non-orthogonal multiple access (NOMA) is considered one of the promising technologies for fifth-generation mobile networks (5G) and beyond 5G (B5G) [1]-[5] and has received great attention from researchers due to its excellent connectivity. NOMA gains its superiority over conventional orthogonal multiple access (OMA) from its spectral efficiency (SE). In power-domain NOMA, two users can concurrently occupy the same subchannel, and successive interference cancellation (SIC) is applied to extract the desired signal at the receiver [1]. In contrast, each user can solely occupy one subchannel in OMA. Therefore, the spectrum resources can be efficiently exploited using NOMA, and this leads to enhancing the throughput of the network [5], [6]. During this concurrent utilization of the spectrum resources, NOMA considers the users' channel conditions and prioritizes users with different channel conditions. Thus, fairness can be guaranteed and the throughput of the entire system can be improved [7], [8].
Although some of the studies in the above literature considered only power allocation, joint resource allocation is significant for improving system performance. For instance, the authors in [21] considered joint subchannel and power optimization to maximize the EE of a downlink NOMA heterogeneous network. Convex relaxation was used to solve the subproblem of subchannel allocation, while fractional programming and the Lagrange dual method were used to obtain the closed-form solution for the power allocation. Joint energy-efficient user-resource block (RB) association and power allocation were investigated for uplink hybrid NOMA-OMA in [22] by considering the tradeoffs between the feasibility and the complexity of the solution. Joint user scheduling and power allocation were studied in [23]. The authors formulated the EE maximization problem as a multi-objective optimization (MOO) problem. For the solution, the problem was decoupled into two single-objective optimization (SOO) problems. A non-cooperative game and global optimal search (GOS) were used to solve the scheduling subproblem, while successive convex approximation (SCA) was used to allocate the power across subchannels.
All the above studies in [9]-[14], [16]-[23] have assumed ideal hardware at the transceivers. However, in practice, wireless systems experience hardware impairments (HIs) due to quantization problems, phase imbalances and amplifier non-linearities [24]. To alleviate the impact of HIs, some algorithms have been introduced, such as compensation and calibration algorithms. However, those algorithms do not guarantee the elimination of the HI effect and suffer from problems such as calibration inaccuracy, estimation errors and correlation with noise types [25]. Consequently, there are still residual HIs in NOMA systems [26].
The impact of HIs on NOMA systems has been investigated in several studies. The performance in terms of outage probability has been studied in [26]-[31], and the throughput was investigated in [5], [33]. In [34], the authors studied the secrecy performance of a NOMA energy harvesting (EH) network with an untrusted relay in the presence of residual hardware impairments. The secrecy outage probability was used to evaluate the performance of the system. The study in [35] used outage probability and ergodic rate to evaluate the performance of full-duplex NOMA networks with HIs over Rician fading channels. Outage probability and ergodic rate were also used in [36] to investigate the effect of residual hardware impairments (RHIs), channel estimation errors (CEEs) and imperfect SIC on a cooperative NOMA system over Nakagami-m channels. The impact of HIs on two-way multiple-relay NOMA networks was investigated in [37], where opportunistic relay selection was applied to improve the spectral efficiency; the authors also studied the EE of the system.
In addition to the above research, some studies have been dedicated to investigating the impact of HIs on the EE of NOMA systems. The study in [5] considered EE maximization and throughput improvement in EH NOMA; the authors proposed a transmit time-switching based algorithm to handle both problems, together with a joint design that optimizes the power splitting ratio and beamforming vectors to maximize the EE of the system. The authors in [38] investigated EE maximization in cooperative NOMA systems with HIs. They designed a power and amplification gain allocation algorithm for the EE maximization problem based on fractional programming (FP), the Lagrange dual method and Dinkelbach's method.
From the above literature, few studies have focused on the impact of HIs on the EE of wireless NOMA networks. Moreover, since our focus here is on EE maximization in MCMC-NOMA networks, we identified a lack of literature on this topology; most of the aforementioned works focused on single-cell scenarios. In practice, wireless networks are usually deployed as multi-cell networks. Motivated by the above, we consider the MCMC-NOMA network topology with HIs and investigate the performance with regard to EE maximization. The formulated EE maximization problem is a mixed-integer NP-hard problem due to the existence of binary variables related to the association of users with base stations (BSs) via the multiple subchannels, as well as the intra-cell and inter-cell interference. Thus, we propose a novel algorithm to handle it. Our main contributions are summarized as follows: • Different from the existing works in [5] and [38], we consider the downlink MCMC-NOMA network with HIs working in the non-cooperative mode, where the BS sends the signal directly to the users. We derive the EE maximization problem by representing the effect of HIs as a distortion in the transceiver signal. The problem is a mixed-integer nonlinear programming (MINLP) problem, which is non-convex and NP-hard. Obtaining the solution to this problem is usually challenging.
• The optimization problem is decomposed into two subproblems. The first is the user-BS association and subchannel assignment problem, and the second is the energy-efficient power allocation problem. The first subproblem is handled using the binary whale optimization algorithm (BWOA). For the power allocation, we design a successive pseudo-convex approximation (SPCA)-based framework.
• Different from the power allocation schemes proposed in [12], [23] and [14], our proposed power allocation scheme has the advantage of parallelizing the solution process by converting the problem into separable subproblems, where each subproblem has a closed-form solution. Thus, the proposed power allocation scheme is more suitable for the MCMC topology compared to other schemes such as successive lower bound maximization (SLBM) [39].
• The simulation results have shown performance comparable to that of the SLBM method in terms of EE. Moreover, the simulation results demonstrate the superiority of the proposed algorithm over the fractional transmit power allocation (FTPA) benchmark for NOMA as well as the OMA scheme. Our work's scope mainly focuses on solving user association and subchannel assignment and power allocation. Since obtaining the optimal solution for the power allocation is complicated and entails very high complexity, we focus on approximating the problem. These can be considered significant delimitations. The remainder of the paper is organized as follows. Section II presents the system model and problem formulation. Section III discusses user-BS association and subchannel assignment. Section IV discusses the energy-efficient power allocation for MCMC-NOMA networks. Section V shows the simulation results and Section VI concludes the paper.
II. SYSTEM MODEL AND PROBLEM FORMULATION
A. SYSTEM MODEL

We consider a downlink MCMC-NOMA network in which the sets of BSs, users, and subchannels are denoted by $\mathcal{M} = \{1, \ldots, M\}$, $\mathcal{U} = \{1, \ldots, N_{ue}\}$, and $\mathcal{N} = \{1, \ldots, N_{sc}\}$, respectively. The system bandwidth $B$ is divided into subchannels such that the bandwidth of subchannel $n$ is $B_n = B/N_{sc}$. We assume each user is equipped with one antenna. The BS $m$ sends a superposition symbol $x_{m,n}$ to its associated users via subchannel $n$; this signal is given as

$$x_{m,n} = \sum_{i=1}^{N_{ue}} \upsilon_{m,i,n}\left(\sqrt{p_{m,i,n}}\, s_{m,i,n} + \chi_{m,i,n}\right), \tag{1}$$

where $\upsilon_{m,i,n} \in \{0,1\}$ is the user-BS and subchannel-BS indicator: $\upsilon_{m,i,n} = 1$ indicates that user $i$ is associated with BS $m$ and assigned subchannel $n$, and $\upsilon_{m,i,n} = 0$ otherwise. $p_{m,i,n}$ is the power allocated to user $i$ served by BS $m$ via subchannel $n$, and $s_{m,i,n}$ is the transmit signal of BS $m$ on subchannel $n$ with $\mathbb{E}\{|s_{m,i,n}|^2\} = 1$. $\chi_{m,i,n}$ is the distortion noise due to the hardware impairments between BS $m$ and user $i$ on subchannel $n$, with zero mean and variance $\phi_{m,i,n}^2 p_{m,i,n}$, where $\phi_{m,i,n}$ is the level of hardware impairments.

The received signal at user $i$ served by BS $m$ through subchannel $n$ is given as

$$y_{m,i,n} = h_{m,i,n}\, x_{m,n} + I_{m,i,n} + \eta_{m,i,n}, \tag{2}$$

where $h_{m,i,n} = g_{m,i,n}\, d_{m,i}^{-\alpha}$ is the channel coefficient between BS $m$ and its associated user $i$, $g_{m,i,n}$ is the Rayleigh fading channel gain between BS $m$ and user $i$ on subchannel $n$, $d_{m,i}$ is the distance between user $i$ and BS $m$, and $\alpha$ is the path-loss exponent. $\eta_{m,i,n}$ is AWGN with zero mean and variance $\sigma^2$. $I_{m,i,n}$ is the accumulated inter-cell interference received by user $i$ on subchannel $n$ from all BSs except BS $m$,

$$I_{m,i,n} = \sum_{t \in \mathcal{M},\, t \neq m} |h_{t,i,n}|^2\, P_{t,n}, \tag{3}$$

where $P_{t,n}$ is the total power allocated by BS $t$ on subchannel $n$, given by

$$P_{t,n} = \sum_{j=1}^{N_{ue}} \upsilon_{t,j,n}\, p_{t,j,n}. \tag{4}$$

Let $H_{m,i,n}$ represent the channel response to noise and interference ratio (CRNIR),

$$H_{m,i,n} = \frac{|h_{m,i,n}|^2}{I_{m,i,n} + \sigma^2}. \tag{5}$$

Without loss of generality, we assume the following order:

$$H_{m,1,n} \ge H_{m,2,n} \ge \cdots \ge H_{m,N_{ue},n}. \tag{6}$$

According to the NOMA protocol, SIC is carried out by the user to retrieve its intended signal. Then, the signal-to-interference-plus-noise ratio (SINR) for user $i$ served by BS $m$ via subchannel $n$ is expressed as

$$\gamma_{m,i,n} = \frac{p_{m,i,n}\, |h_{m,i,n}|^2}{I_{m,i,n}^{\mathrm{aggr}}}, \tag{7}$$

where $I_{m,i,n}^{\mathrm{aggr}}$ is the aggregated interference experienced by user $i$ associated with BS $m$ through subchannel $n$, defined as

$$I_{m,i,n}^{\mathrm{aggr}} = |h_{m,i,n}|^2 \sum_{j=1}^{i-1} \upsilon_{m,j,n}\, p_{m,j,n} + |h_{m,i,n}|^2\, \phi_{m,i,n}^2\, P_{m,n} + I_{m,i,n} + \sigma^2. \tag{8}$$

The achievable data rate of user $i$ served by BS $m$ via subchannel $n$ is defined as

$$R_{m,i,n} = B_n \log_2\left(1 + \gamma_{m,i,n}\right). \tag{9}$$

Then, the sum rate of the network can be expressed as

$$R_{\mathrm{tot}} = \sum_{m \in \mathcal{M}} \sum_{i \in \mathcal{U}} \sum_{n \in \mathcal{N}} \upsilon_{m,i,n}\, R_{m,i,n}. \tag{10}$$

Hence, the network's total energy efficiency is given as

$$\eta_{EE} = \frac{R_{\mathrm{tot}}}{P_T}, \tag{11}$$

where $P_T$ represents the total power, given by

$$P_T = \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} P_{m,n} + M p_c, \tag{12}$$

where $p_c$ is the circuit power consumption, excluding the transmit power [40].
B. PROBLEM FORMULATION
Based on the above analytical derivation, our objective is to maximize the EE of the network. The optimization problem can be formulated as follows:

$$\begin{aligned}
\max_{\boldsymbol{\upsilon},\,\mathbf{p}}\quad & \eta_{EE} \\
\text{s.t.}\quad
& \text{C1}: \ p_{m,i,n} \le p_{m,j,n}, \quad \forall\, i < j,\ \forall m \in \mathcal{M},\ n \in \mathcal{N},\\
& \text{C2}: \ \sum_{i \in \mathcal{U}} \sum_{n \in \mathcal{N}} \upsilon_{m,i,n}\, p_{m,i,n} \le P_m^{\max}, \quad \forall m \in \mathcal{M},\\
& \text{C3}: \ \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \upsilon_{m,i,n}\, R_{m,i,n} \ge R_{\min}, \quad \forall i \in \mathcal{U},\\
& \text{C4}: \ p_{m,i,n} \ge 0, \quad \forall m, i, n,\\
& \text{C5}: \ \upsilon_{m,i,n} \in \{0, 1\}, \quad \forall m, i, n,\\
& \text{C6}: \ \sum_{m \in \mathcal{M}} \sum_{n \in \mathcal{N}} \upsilon_{m,i,n} \le 1, \quad \forall i \in \mathcal{U},
\end{aligned} \tag{13}$$

where constraint C1 guarantees the successful performance of SIC in the specified decoding order. Constraint C2 limits the total transmit power of each BS by the maximum transmit power $P_m^{\max}$ of the $m$-th BS. Constraint C3 imposes the data rate requirement for each user. C4 ensures the non-negativity of each user's power. Constraint C5 represents the binary user-BS association and subchannel assignment indicator. Constraint C6 asserts that each user can associate with only one BS through only one subchannel.
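To make the objective concrete, the following minimal sketch evaluates the SINR, rate, and EE expressions in (7)-(12) for a hypothetical two-user, single-subchannel cell. All numerical values (bandwidth, noise power, HI level, channel gains, powers, circuit power) are illustrative assumptions, not simulation parameters from this work.

```python
# Minimal numerical sketch of (7)-(12) for a toy two-user NOMA subchannel
# with hardware impairments. Every value below is an illustrative assumption.
import numpy as np

Bn = 180e3        # subchannel bandwidth (Hz), assumed
sigma2 = 1e-13    # noise power (W), assumed
phi = 0.08        # hardware-impairment level, assumed

# Serving-cell channel gains |h|^2, ordered so user 0 has the stronger CRNIR
h2 = np.array([5e-12, 8e-13])
p = np.array([0.2, 0.8])             # W; the weaker user gets more power
I_inter = np.array([2e-13, 2e-13])   # inter-cell interference (W), assumed
P_sc = p.sum()                       # total power on the subchannel

rates = []
for i in range(2):
    intra = h2[i] * p[:i].sum()            # residual intra-cell interference
    distortion = h2[i] * phi**2 * P_sc     # HI distortion term
    sinr = p[i] * h2[i] / (intra + distortion + I_inter[i] + sigma2)
    rates.append(Bn * np.log2(1 + sinr))   # (9)

p_c = 1.0  # circuit power (W), assumed
ee = sum(rates) / (P_sc + p_c)             # (11)-(12)
print(f"sum rate = {sum(rates)/1e6:.2f} Mbit/s, EE = {ee/1e3:.1f} kbit/J")
```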
Since the EE maximization problem for a single cell is NP-hard [12], it can be observed that problem (13) is an MINLP problem, which is also NP-hard and difficult to solve. To effectively tackle this problem, we decompose it into two subproblems: first, the user-BS association and subchannel assignment subproblem; second, energy-efficient power allocation in MCMC-NOMA networks. Even after the decomposition, obtaining the optimal solution is difficult, and this is another limitation arising from the nature of the problem. The details are presented in Section III and Section IV.
III. USER-BS ASSOCIATION AND SUBCHANNEL ASSIGNMENT
In this section, we investigate the user-BS association and subchannel assignment subproblem. We assume that equal power is allocated to each user and propose a meta-heuristic algorithm to tackle this subproblem. The subproblem is defined as follows:

$$\max_{\boldsymbol{\upsilon}}\ \eta_{EE} \quad \text{s.t. C3, C5, C6}, \tag{14}$$

where equal power allocation is assumed across the multiplexed users.

Metaheuristic optimization algorithms have many advantages: the concepts are simple, they can be exploited to solve a wide range of problems, and they can bypass local optima [41]. Generally, metaheuristic optimization methods have two processing phases, exploration and exploitation [43], [44]. In the exploration phase, the optimizer utilizes operators to explore the search space and determine the areas of interest. This phase usually depends on the random generation of the variables. In the exploitation phase, the detailed search in the areas of interest takes place.
The proposed algorithm is BWOA, which imitates the hunting behavior of humpback whales [45]. BWOA has been tested on different problems and has shown competitive performance compared to state-of-the-art optimization methods. Humpback whales use a hunting technique called bubble-net feeding: they move in a shrinking circle while blowing bubbles under small prey, forcing them to the surface [41]. The algorithm includes three elements: encircling the prey, bubble-net attacking, and searching for the prey.
A. ENCIRCLING THE PREY
To formulate the procedural behavior, we define the following equations [45]:

$$\mathbf{D} = \left|\mathbf{C} \cdot \mathbf{X}^*(t) - \mathbf{X}(t)\right|, \tag{15}$$

where $\mathbf{X}^*(t)$ is the position of the best search agent, $t$ is the current iteration, and the symbol $\cdot$ indicates the element-wise product. $\mathbf{C}$ is a coefficient vector calculated by

$$\mathbf{C} = 2\mathbf{r}, \tag{16}$$

where $\mathbf{r}$ is a random vector in $[0,1]$. The positions of the agents are updated by

$$\mathbf{X}(t+1) = \mathbf{X}^*(t) - \mathbf{A} \cdot \mathbf{D}, \tag{17}$$

where $\mathbf{A}$ is a coefficient vector calculated as follows:

$$\mathbf{A} = 2a\,\mathbf{r} - a, \tag{18}$$

where $a$ is a control parameter linearly decreased from 2 to 0 over the iterations during the exploration and exploitation phases (i.e., $a = 2\left(1 - \frac{t}{T}\right)$), and $T$ represents the maximum number of iterations. Equations (16) and (18) balance exploitation and exploration. To enhance exploration and exploitation over the course of the optimization, the parameter $\mathbf{C}$ can be random in $[0,1]$.
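A compact numerical rendering of (15)-(18) may help. The snippet below performs one continuous-valued encircling update for a single agent; it follows the canonical WOA operators, not the binary variant used later.

```python
# Minimal sketch of the continuous WOA encircling update in (15)-(18).
import numpy as np

rng = np.random.default_rng(0)

def woa_encircle(X, X_best, a):
    """One shrinking-encircling position update for a single agent."""
    r = rng.random(X.shape)
    A = 2 * a * r - a               # (18)
    C = 2 * rng.random(X.shape)     # (16)
    D = np.abs(C * X_best - X)      # (15)
    return X_best - A * D           # (17)

X, X_best = rng.random(5), rng.random(5)
t, T = 10, 100
a = 2 * (1 - t / T)                 # a decreases linearly from 2 to 0
print(woa_encircle(X, X_best, a))
```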
B. BUBBLE-NET ATTACKING
The humpback whales swim around the prey in a shrinking circle and move along a spiral-shaped path (shrinking-encircling and spiral position-updating mechanisms). First, the step size for shrinking and encircling is defined through a transfer function that yields ϕ_SE (19), whose value is used to toggle between zero and one. The position of the search agent is then updated according to (20), where p_B is a uniform random number in [0,1] and C(·) indicates the complement operation. For the spiral position update, the step size ϕ_SUP is first defined in (21); the position is then updated by (22).
C. SEARCHING THE PREY
Applying a mechanism similar to shrinking encircling, the search for prey is performed with coefficient vectors satisfying |A| > 1, where |·| denotes the absolute value, and by replacing the position of the best agent X*(t) with the position X_rand of a randomly selected whale; this helps extend the search space. The step size is then defined and the position of the search agent is updated accordingly. Before applying the above procedure, we use the penalty method to convert the constrained problem into an unconstrained one by combining the objective function and the constraints [42]; the result is the fitness function, which we define using a penalty factor µ_i, introduced for ease of implementation, for each inequality constraint. Algorithm 1 illustrates the above procedure. Initially, we start with a random set of solutions and calculate the corresponding fitness, assuming channel diversity. The search dimension depends on the number of BSs. The best search agent is determined based on its fitness. Then the position of each search agent is updated either randomly or according to the best solution obtained so far, in which case the agents follow the best search agent: a random search agent is chosen to update the position when |A| > 1; otherwise, the best solution is chosen. Exploration and exploitation are attained through the decrement of the parameter a. Moreover, based on the value of p_B, a search agent chooses either the shrinking-encircling or the spiral movement. BWOA terminates when either of the termination conditions is satisfied.
Algorithm 1: BWOA-Based User-BS Association and Subchannel Assignment
1: Initialize the whale population X_i, i = {1, ..., N}, t = 1, and T.
2: Calculate the fitness of each agent according to (25) and determine the best search agent X*(t).
• Procedure for user-BS association
3: while t ≤ T do
4: for each search agent (user) do
5: Update a, A, C and generate p_B.
6: if p_B < 0.5 then
7: if |A| < 1 then
8: Update D according to (15) and ϕ_SE according to (19).
9: Update the position X(t) according to (20).
10: else
11: Select a random agent X_rand and update D.
12: Update ϕ_SP using (23) and X(t) by (24).
13: end if
14: else
15: Update D as follows.
16: Update ϕ_SUP by (21).
17: Update the position X(t) according to (22).
18: end if
19: end for
20: Calculate the fitness of each search agent by (25).
21: Update X*(t) of the best search agent.
22: t = t + 1
23: end while
• Procedure for subchannel assignment
24: Repeat the above procedure for user multiplexing over the subchannels.
It is noteworthy that there are two main parameters to be adjusted, A and C. As A decreases, the iterations can be divided into two sets: when |A| ≥ 1, the iterations are dedicated to exploration, and the remaining iterations are dedicated to exploitation. Furthermore, the defined mechanism of the algorithm confines the search space to the surroundings of the best solution; hence, BWOA is generally considered a global optimizer [41]. Next, we give the complexity analysis for Algorithm 1. The complexity of computing the fitness is O(N_ue · D), where N_ue is the number of whales (users) and D is the search dimension; in our case the search dimension depends on M and N_sc. The position of each search vector is updated in each iteration, so the complexity of this step is also O(N_ue · D). Since the algorithm performs the procedure twice with different search dimensions, the computational complexity of Algorithm 1 is O(T · N_ue · (M + N_sc)), where T is the maximum number of iterations.
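As a sketch of the control flow in Algorithm 1, the following minimal Java method updates one population of binary position vectors for one iteration; the sigmoid transfer and the complement rule are our assumptions, and the fitness evaluation (the penalized objective (25)) is left to the caller:

```java
import java.util.Random;

// Minimal sketch of one BWOA iteration over binary position vectors.
final class BwoaStep {
    static final Random RNG = new Random();

    static void updateAgents(int[][] pop, int[] best, double a) {
        for (int[] x : pop) {
            double r = RNG.nextDouble();
            double A = 2 * a * r - a;        // scalar stand-in for coefficient vector A
            double C = 2 * RNG.nextDouble(); // scalar stand-in for C
            double pB = RNG.nextDouble();
            for (int d = 0; d < x.length; d++) {
                if (pB < 0.5) {
                    // shrinking encircling: follow best if |A| < 1, else a random agent
                    int[] ref = Math.abs(A) < 1 ? best : pop[RNG.nextInt(pop.length)];
                    double D = Math.abs(C * ref[d] - x[d]);
                    double phi = 1.0 / (1.0 + Math.exp(-A * D));          // transfer function
                    x[d] = RNG.nextDouble() < phi ? 1 - ref[d] : ref[d];  // complement or copy
                } else {
                    // spiral movement mapped to {0,1} via the same transfer idea
                    double l = 2 * RNG.nextDouble() - 1;
                    double Dp = Math.abs(best[d] - x[d]);
                    double step = Dp * Math.exp(l) * Math.cos(2 * Math.PI * l);
                    x[d] = RNG.nextDouble() < 1.0 / (1.0 + Math.exp(-step)) ? 1 : 0;
                }
            }
        }
    }
}
```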
IV. ENERGY-EFFICIENT POWER ALLOCATION IN MCMC-NOMA NETWORKS
After obtaining the user-BS association and subchannel assignment, we introduce in this section an iterative solution to the power allocation subproblem. Although we have dealt with the binary variables, the problem is still non-convex due to the presence of interference and the constraints C1-C4; thus, obtaining the optimal solution is difficult. Therefore, we introduce an approximation to reduce the computational complexity. The power allocation subproblem is given in (28).
A. SUCCESSIVE PSEUDO-CONVEX APPROXIMATION METHOD
Let f(p) be a function equivalent to our objective in (28); our problem is then to maximize f(p) over the feasible set. Our objective is to introduce an approximate function of f(p) that can be decomposed into subproblems exhibiting closed-form solutions [46]. Let f̃(p, p^(t)) represent the approximate function of f(p) at iteration t around the point p^(t). Suppose the solution set P is closed and convex, and let Bp^(t) be the globally optimal point of the approximate problem. Then, according to SPCA theory [46], the approximate function f̃(p, p^(t)) is assumed to satisfy the following conditions: 1) f̃(p, p^(t)) is pseudo-convex for any p^(t) ∈ P.
2) f̃(p, p^(t)) is continuously differentiable in p for any p^(t) ∈ P and continuous in p^(t) for any p ∈ P. Since r̃_{m,i,n}(p_{m,i,n}; p^(t)) and p_{m,i,n} + p_c are continuously differentiable for any p^(t), the second condition is satisfied. The proof that the third condition is fulfilled is given in Appendix B. Finally, since the power solution is bounded, the fourth and fifth conditions are satisfied.
In each iteration t, the maximizer of f̃(p, p^(t)) is defined as in (33). Fractional programming is applicable to (33) since its numerator is concave and its denominator is linear. Applying Dinkelbach's algorithm [47], we obtain the parametric problem whose solution is p(λ^(t,τ)), where λ^(t,τ) is the auxiliary variable updated at iteration τ + 1 according to (35). Problem (34) can be decomposed into independent subproblems that can be solved in parallel. Problem (36) is convex and P has a nonempty interior, so the problem can be further decomposed in the dual domain by relaxing the constraints. Hence, recalling (34), we obtain the Lagrangian function (37), where β, κ and π are the Lagrange multipliers associated with the sum-power constraint, the QoS constraints and the nonnegativity of power, respectively. They can be updated as follows.
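As a sketch, a standard projected-subgradient update for these multipliers (our assumption, with step sizes s_β, s_κ, s_π and [x]^+ = max(x, 0); the paper's exact updates may differ) is:

$$
\beta^{(\tau+1)} = \left[ \beta^{(\tau)} + s_\beta \left( \sum_{i,n} p_{m,i,n} - P_m^{\max} \right) \right]^+ , \qquad
\kappa_{m,i}^{(\tau+1)} = \left[ \kappa_{m,i}^{(\tau)} + s_\kappa \left( R^{\min} - \sum_n r_{m,i,n} \right) \right]^+ ,
$$
$$
\pi_{m,i,n}^{(\tau+1)} = \left[ \pi_{m,i,n}^{(\tau)} - s_\pi \, p_{m,i,n} \right]^+ .
$$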
5: Compute λ^(t,τ+1) according to (35).
6: τ ← τ + 1
7: end while
8: Compute Bp^(t) according to (33).
9: Determine the step size δ^(t) by successive line search.
10: Update p^(t) accordingly.
11: t ← t + 1
12: end while
The closed-form expression for the power is equivalent to finding the root of a second-order polynomial, and it can be expressed as in (42), where ρ = |h_{m,i,n}|², and ω_{m,i,n}(p^(t)) and I_{m,i,n}(p^(t)) are given by (43) and (44), respectively. The power calculation procedure is illustrated in Algorithm 2. We now provide the complexity analysis for Algorithm 2. Let I and T respectively denote the maximum numbers of iterations of the outer loop (Dinkelbach's algorithm) and of the inner loop, which includes the subgradient method. The calculation of (42) requires M · N_ue · N_sc operations, while updating λ has computational complexity O(N_ue). Moreover, the initial value of λ and the step-size rules used to update the multipliers and other parameters greatly affect the required number of iterations. Since each iteration costs a polynomial number of operations, O(M · N_ue² · N_sc), the total computational complexity of Algorithm 2 in the worst case is O(M · N_ue² · N_sc · I · T).
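To make the outer Dinkelbach loop concrete, here is a minimal Java sketch, assuming callbacks for the numerator (sum rate), the denominator (total power) and the inner parametric solver; all names are ours, not the paper's:

```java
import java.util.function.Function;

// Dinkelbach's algorithm for max N(p)/D(p): iterate
//   p_tau = argmax_p { N(p) - lambda_tau * D(p) },
//   lambda_{tau+1} = N(p_tau) / D(p_tau),
// until N(p_tau) - lambda_tau * D(p_tau) falls below a tolerance.
final class Dinkelbach {
    static double[] solve(Function<double[], Double> num,
                          Function<double[], Double> den,
                          Function<Double, double[]> innerArgmax, // solves the parametric problem
                          double tol, int maxIter) {
        double lambda = 0.0;
        double[] p = innerArgmax.apply(lambda);
        for (int tau = 0; tau < maxIter; tau++) {
            p = innerArgmax.apply(lambda);
            double gap = num.apply(p) - lambda * den.apply(p);
            if (Math.abs(gap) < tol) break;        // converged: lambda equals the optimal EE
            lambda = num.apply(p) / den.apply(p);  // auxiliary-variable update, cf. (35)
        }
        return p;
    }
}
```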
V. SIMULATION RESULTS
In this section, we present simulation results and evaluate the performance of the proposed method. We consider an MCMC-NOMA system with one BS at the center of each cell, a cell radius of 500 m, and users randomly distributed within each cell. The total system bandwidth of 5 MHz is divided equally into N_sc = 20 subchannels.
We assume small-scale Rayleigh fading channels between users and BSs. The HI levels φ_{m,i,n} are set to 0, 0.01 and 0.02. The noise power spectral density is N_0 = −174 dBm/Hz. The error tolerance parameters ϵ, ϵ_p and ϵ_λ are each set to 0.001. For comparison, the same user-BS association and subchannel assignment scheme is used with the NOMA-FTPA scheme of [48], [49] and with the SLBM scheme of [46], [47]; for OMA, each user is allocated a separate subchannel. The decay parameter for NOMA-FTPA is set to 0.2. Table 1 lists the simulation parameters. Fig. 1 depicts the convergence of BWOA for the user-BS association and the subchannel assignment; we selected the first BS and the first subchannel to show the behavior. The number of BSs is M = 7 with 8 users each, and the number of subchannels is 20. It can be seen that the fitness improves with the number of iterations, because each user tends to associate with the BS that achieves higher EE and with the subchannel that has better channel conditions. The volatility in the curves arises because users tend to imitate the user with the best fitness.
To evaluate the iterative power allocation algorithm, Fig. 2 shows the convergence speed for different HI levels. We set the number of BSs to M = 7 and the number of users to 4 per BS; P_m^max is set to 40 dBm, R^min to 0.1 Mbit/s, and p_c to 20 dBm. One can see that the EE increases with each iteration until it converges within 6 iterations. Although SPCA and SLBM show comparable convergence behaviour in terms of the number of iterations, the proposed SPCA method is superior to SLBM for several reasons. First, the approximate problem of the proposed method is in fact a set of independent subproblems that can be solved in parallel, each with a closed-form solution as in (42). Conversely, when SLBM is applied to (28), the approximate problem can only be solved by a general-purpose solver [47]. The proposed scheme retains the same advantages when compared with other well-known methods such as SCA [12], [23]. Fig. 3 illustrates the performance of the proposed algorithm and the compared schemes for different maximum transmit powers P_m^max, with R^min and p_c set to 0.1 Mbit/s and 20 dBm, respectively. The EE of the proposed algorithm, of NOMA-SLBM and of OMA grows with P_m^max and then saturates at a certain level; this means that further increasing P_m^max does not help increase the EE. For NOMA-FTPA, the EE first increases with P_m^max and then decreases beyond a specific level. It is noteworthy that the performance is affected by the HI level, especially in the case of NOMA-FTPA, and this impact is more pronounced at higher values of P_m^max. In Fig. 4, we evaluate the proposed method as the number of users per BS varies from 2 to 20. The network EE rises with the number of users, but its growth slows at higher user counts because the allocated power becomes insufficient to increase the EE for all users. The proposed NOMA-SPCA scheme performs comparably to NOMA-SLBM; when the number of users per BS equals 20, the EE of the proposed algorithm is 7.7% and 10.6% higher than NOMA-FTPA for HI levels of 0 and 0.02, respectively. Compared with OMA, again with 20 users per BS, the proposed method performs 47% and 51% better for HI levels of 0 and 0.02, respectively.
The network EE for different values of the circuit power p_c is shown in Fig. 5. The number of BSs is set to 7 with 4 users per BS. The EE decreases as p_c increases. The proposed NOMA-SPCA scheme achieves performance comparable to that of the NOMA-SLBM scheme; for an HI level of 0.01 and p_c = 27 dBm, the proposed method performs 24.9% better than NOMA-FTPA and 68.9% better than OMA. Fig. 6 shows the network EE for different values of the QoS constraint R^min. As R^min increases, the network EE decreases because more power is consumed to meet the minimum rate requirements. While the EE of OMA tends to remain static as R^min increases, the proposed method still outperforms NOMA-FTPA and OMA while maintaining performance comparable to that of the NOMA-SLBM scheme.
VI. CONCLUSION
In this paper, we studied EE maximization in MCMC-NOMA networks with HIs. We formulated the optimization problem and decoupled it into two subproblems. We adopted a metaheuristic algorithm, the binary whale optimization algorithm (BWOA), to tackle user-BS association and subchannel assignment. For the power allocation subproblem, we used the SPCA method to approximate the original problem by a sequence of problems that can be solved in parallel and admit closed-form solutions. The proposed scheme has proven efficient in handling EE maximization and has surpassed conventional OMA as well as NOMA-FTPA. Although the proposed SPCA scheme demonstrates performance comparable to that of the NOMA-SLBM scheme, SPCA is advantageous because the problem can be solved in parallel. In subsequent research we will consider MIMO-NOMA networks, in which we shall examine the feasibility and efficiency of the proposed framework.
APPENDIX A
Following steps similar to [47], consider the function r̃_{m,i,n}(p_{m,i,n}; p^(t)_{m,−i,n}). For simplicity, let c₁ = |h_{m,i,n}|² / φ²_{m,i} > 0 and c₂ = (σ² + I_{m,i,n}) / φ²_{m,i} > 0. The first and second derivatives with respect to p_{m,i,n} are taken as in (45) and (46).
APPENDIX B
To verify the third condition, we take the gradients of the approximate function f̃(p; p^(t)) and the original function f(p) and show that they are identical at p_{m,i,n} = p^(t)_{m,i,n} (see (47)), taking into consideration that ∇_{p_{m,i,n}} r̃_{m,i,n}(p^(t)) = ∇_{p_{m,i,n}} Σ_{j=1}^{N_ue} r_{m,j,n}(p^(t)) and r̃_{m,i,n}(p^(t)) = r_{m,i,n}(p^(t)). | 2020-11-26T09:01:32.345Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "67c99ab13c00ad229e8fd253a4b28fb0b549d81c",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09264208.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "5b01aaf8ba9e1a01d701a7e53d7641710e1c32a0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
52047186 | pes2o/s2orc | v3-fos-license | Doubly F-Bounded Generics
In this paper we suggest how f-bounded generics in nominally-typed OOP can be extended to the more general notion we call `doubly f-bounded generics' and we suggest how doubly f-bounded generics can be reasoned about. We also (attempt to) prove, using a coinductive argument, that our reasoning method is mathematically sound.
Introduction
F-bounded generics, as found in mainstream OO programming languages such as Java, C#, Scala and Kotlin, allow a type variable to be used in defining the upper bound of the type variable, i.e., in defining its own upper bound. Examples of f-bounded generic class declarations include declarations like the ones sketched below. Simply stated, doubly f-bounded generics allow a type variable to be used in defining both an upper bound and a lower bound of the type variable.
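As a sketch, canonical Java declarations of this shape (our choices, not necessarily the paper's exact examples) include:

```java
// F-bounded declarations: the type variable T appears in its own upper bound.
abstract class Enum<T extends Enum<T>> { }   // mirrors java.lang.Enum

class Builder<T extends Builder<T>> {        // the "self type" idiom enabled by f-bounds
    @SuppressWarnings("unchecked")
    T self() { return (T) this; }
}
```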
Examples of doubly f-bounded generic class declarations include ones like those sketched below. Among them, class F is a useless declaration: no type argument can be used to instantiate class F, since no type argument can be simultaneously a subtype and a supertype of the same type yet be unequal to it.²
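Since Java has no syntax for declared lower bounds on type variables, the following sketch uses a hypothetical notation (ours, not the paper's) for such declarations, including the useless class F:

```java
// Hypothetical doubly f-bounded declarations -- NOT valid Java, since Java
// does not support declared lower bounds on type variables:
//
//   class D<T extends Upper<T> super Lower<T>> { }  // general shape
//   class F<T extends F<T> super F<T>> { }          // "useless": forces T = F<T>
//
class HypotheticalSyntaxPlaceholder { }  // keeps this sketch compilable
```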
Illustrating Example
To better understand f-bounded generics and doubly f-bounded generics, let's recall that the term 'f-bounded generics' actually means 'function-bounded generics' (or, more precisely, in category-theoretic language, 'functor-bounded generics'). This means that a (lower or upper) bound of a type variable of some generic class is not a constant type (even an infinite one) but that the bound varies with the value of the type variable that gets passed to the class. This in turn means that each type argument that may instantiate the generic class has two corresponding bounding types, defined by the functions specified as the bounding functions. A type argument is valid if it is a subtype of the corresponding upper bounding type and a supertype of the corresponding lower bounding type.
Unbounded Functions.
To illustrate more vividly how we view f-bounded generics, and more generally how doubly f-bounded generics can be modeled, let's consider functions from analysis, i.e., functions of type R → R from real numbers to real numbers (extended with −∞ and ∞).
Doubly (Constant) Bounded Functions.
To get a step closer to our model of doubly f-bounded generics, we first consider restricting the domain of a function using constants (also sometimes called 'constant functions', i.e., functions whose output value is independent of their input argument).
Example 2.
Consider restricting the domain of the function f (of Example 1) to be the closed interval [1, 3]. This domain-restricted function can be expressed as f(1 ≤ x ≤ 3).
Footnote 2. If such a declaration were allowed, the necessary antisymmetry property of subtyping would force T to be equal to F<T>, but only infinite types T can satisfy this equality. The nominality of subtyping, which necessitates the explicit declaration of inheritance/subtyping relations between classes, together with the prohibition of circular inheritance/subtyping relations between classes, prohibits the explicit expression of subtyping relations that involve infinite types (since only finite types can be expressed explicitly).
Footnote 3. The infinite values −∞ and ∞ here play a role similar to that played by the types Null and Object, respectively, in the OO subtyping relation.
Doubly F-Bounded Functions.
More interestingly, we can consider restricting or bounding the domain of f using two (non-constant) functions over x.
Example 3. Consider the function
whose parameter x is f-bounded (i.e., function-bounded) by the two functions l(x) = x/2 (for the lower bound) and u(x) = 3x (for the upper bound), plotted in Figure 2. Notice that for plotting f we had to first decide which values of x are valid arguments to f, i.e., which values simultaneously satisfy the two inequalities x/2 ≤ x and x ≤ 3x.
Using simple reasoning, it is easy to see that both inequalities are satisfied only for values of x ≥ 0 (check Figure 2); hence the plot of f in Figure 2 covers only x ≥ 0. To make things even more interesting and more "realistic", we can use slightly more complex bounding functions.
Example 4. Consider the f-bounded function
Finally, we make things even more interesting: the restricted domain of an f-bounded function can be the union of multiple intervals over R.
Example 5. Consider the f-bounded function
From the curves in Figure 2.8, and their crossing points, we deduce that no other intervals are included in the domain of f. (As noted earlier, valid values of x must have the corresponding red curve below the dotted green line and the corresponding blue curve above the dotted green line, but one or both of these two conditions fail in all intervals lying outside [−∞, 1.3] and [6, 7.7].)
Bounded Generics
Understanding the simple example of f-bounded functions over the real numbers that we presented in Section 2, particularly how the domain of these functions is decided, is key to understanding how we view doubly f-bounded generics.
It should be noted that in all the functions considered in Section 2 we had a fixed "template" that got filled/instantiated with different pairs of functions l(x) and u(x) defining the lower and upper bound for each value of x, respectively. As the reader may have intuitively guessed by now, the two most significant differences between f-bounded functions and our model of doubly f-bounded generics are, first, switching from the totally ordered set R of real numbers (ordered by less-than-or-equals, ≤) to the partially ordered set T of ground generic types (ordered by subtyping, <:), and, second, switching from functions over real numbers (which map real numbers to real numbers) to "functions" (more accurately, generic classes/type constructors) over types T (which map types to types).
The definition of a function over a partially ordered f-bounded domain may not be as visually intuitive as its totally ordered counterparts (as illustrated in Section 2), yet the abstract, non-visual understanding of how such functions are defined can be almost as simple as understanding the definitions of the example functions (defined over the totally ordered set R) presented in Section 2.
The iterative construction of the graph of T (the subtyping relation between ground generic types in nominally-typed OOP) was presented in [1], using the graph-theoretic notion of partial Cartesian graph products [2]. Similar to how the different domains of the function f were decided in the examples of Section 2, a type T_a ∈ T is valid as a type argument to some doubly f-bounded generic class if the bounding ground types l(T_a) ∈ T and u(T_a) ∈ T define an interval type in T [3]. More precisely, a type T_a ∈ T is a valid type argument if there exists a path in the graph of T that goes from the lower bound type l(T_a) to the upper bound type u(T_a) passing through T_a, or, equivalently, if both [l(T_a), T_a] and [T_a, u(T_a)] are interval types in T.⁴
Input-Side Recursion
The usefulness and value of the example of functions from analysis lies not only in providing a means to present (doubly) f-bounded functions in a simpler setting (i.e., that of a totally ordered set) but also in possibly offering inspiration when answering questions that may seem hard in the context of doubly f-bounded generics but are simpler to answer in the context of functions in analysis, as illustrated by the following example.
Example 6. Consider the generic class declaration class Enum<T extends Enum<T>>.
Footnote 4. While referring to the different plots of l(x) and u(x) in Section 2 (in which a dotted green line represents the identity function id(x) = x, a red curve represents the lower bounding function l(x), and a blue curve represents the upper bounding function u(x)), it should be noted that this condition corresponds to (i.e., is the partial-order counterpart of) the condition that the dotted green line lies above (i.e., ≥) the red curve and below (i.e., ≤) the blue curve.
This declaration is considered, by many OO software developers, to be among the most confusing class declarations, not only because of the use of type variable T in its own bound (which is the defining feature of f-bounded generics) but also because the very class getting declared (namely, class Enum) is also used to define the bound of the type variable T.
Fortified with the examples presented in Section 2, however, it should now be clear that this declaration is similar to the domain-restricted function f(x ≤ x³) = x³. Pondering this definition of f a little, it can easily be seen that it states that f is defined only for values of x that are less than (the value at x of) the unbounded function x³, which (as if accidentally) happens to have the same expression as f itself (but not the same domain).
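Working out this restricted domain explicitly (our computation, consistent with the plot discussed next):

$$
x \le x^3 \iff x^3 - x \ge 0 \iff x(x-1)(x+1) \ge 0 \iff x \in [-1, 0] \cup [1, \infty).
$$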
Given the plot of x³ in Figure 4.1 (which, except for the additional dotted green line for the identity function, is the same as the plot in Figure 2), the domain of f can be read off directly. It may be argued, for good reasons, that the x³ in the bound of x (in the definition of f) should actually be interpreted, as is customary, as a recursive definition of f (i.e., one that involves a self-reference), and thus that the definition of f should rather be written as f(x ≤ f(x)) = x³ and that the domain (i.e., the valid values of x) should be decided accordingly. However, it is our claim that for our purposes (namely, deciding valid values of x, i.e., deciding the domain of f) this makes no difference (i.e., the resulting domain of f is the same).
The reason behind our claim (which is corroborated by the example in Section 2, as well as by many examples one can think of⁵) is that self-references in genuine recursive definitions of functions affect the value of the function itself (i.e., the "return/output value" of the function, e.g., as in the recursive definitions of the factorial/Gamma function f(x) = x · f(x − 1) and the Fibonacci function f(x) = f(x − 1) + f(x − 2)), unlike the case we have at hand (i.e., f-bounded functions and f-bounded generics), where the self-reference plays a different role and is used rather differently, i.e., only to decide valid input values to the function. We tentatively call these two different uses of self-reference 'recursion on the output/codomain side of the function' (customary recursion) and 'recursion on the input/domain side of the function' (i.e., input-side recursion/self-reference), respectively.
Footnote 5. Can our claim be proven? We believe it can, and we believe the proof, even for general functions on partially-ordered sets, will likely be a simple one. As such, we believe we may be able to produce this proof soon, instead of having to depend on corroborating examples (and the lack of counterexamples) to support our claim. (See Appendix A for a proof attempt.)
Valid Type Arguments and Admittable Type Arguments.
An immediate implication of our claim for type checking and subtype checking in Java (and similar nominally-typed OO programming languages) is the following: when checking whether a type argument to a generic class with input-side recursion is a valid type argument to the class (i.e., checking that the type argument is a subtype of its upper bound and a supertype of its lower bound), no recursive reference back to the subtyping relation (involving the same particular pair of types) is necessary, since (according to our model) all type arguments passed to the bounding functions in such a case are indeed valid type arguments that (as long as they are well-formed types) need no validity checking.
Let us illustrate this with an example.
Example 7. Consider the Java class declarations class Enum<T extends Enum<T> > {} class Color extends Enum<Color> {}.
During type checking of a program containing these declarations, particularly when checking whether a type argument (such as Object or Color) is a valid type argument to class Enum (i.e., whether Enum<Object> or Enum<Color> is a valid type), the type checking algorithm must confirm that the type argument satisfies its bound(s) (i.e., whether Object is a subtype of Enum<Object>, or Color a subtype of Enum<Color>). By our model and claim, these second instantiations of class Enum (i.e., the types Enum<Object> and Enum<Color>), which appear while checking the validity of type arguments to Enum, need not be checked for the validity of their own type arguments (i.e., the types Object and Color), since (similar to the expression x³ in Example 6 of Section 4) class Enum is treated, in only this context where the type checking algorithm is checking the validity of a type argument to the class, as having unrestricted/unbounded type parameters; thus these second instantiations of Enum are valid types (i.e., in no need of validation themselves).
Given that class Object (the standard class) does not extend class Enum, and thus type Object is not a subtype of Enum<Object> (the second instantiation), the type checking algorithm concludes that the type Enum<Object> (i.e., the original/first instantiation that we started with during type checking) is not a valid type. On the other hand, given the extends clause in the declaration of class Color, type Color is a subtype of Enum<Color> (the second instantiation), and thus the type checking algorithm concludes that the first instantiation Enum<Color> is a valid type.
It should be noted that the reasoning method used above (suggested by our model of f-bounded generics) differs significantly from the reasoning method upon which current implementations of type checking in OO compilers and OO type systems are based, which, although reaching the same decisions regarding class Enum as those we reached above, resort to much more complex infinite/coinductive logical arguments to justify such typing/subtyping decisions.
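As a minimal Java sketch of this reasoning method for the Example 7 declarations (the type encoding, helper names, and main method are ours, not the paper's):

```java
import java.util.Map;

// Sketch of the paper's validity rule for:
//   class Enum<T extends Enum<T>> {}   class Color extends Enum<Color> {}
// A candidate Enum<A> is valid iff A is a subtype of Enum<A>; the inner
// Enum<A> is only treated as admittable (well-formed) and is NOT
// recursively re-validated, avoiding the usual coinductive argument.
final class EnumValidity {
    // direct superclass of each class, instantiated per its declaration
    static final Map<String, String> SUPER = Map.of("Color", "Enum<Color>", "Object", "");

    static boolean subtype(String sub, String sup) {
        if (sub.equals(sup)) return true;
        String parent = SUPER.getOrDefault(sub, "");
        return !parent.isEmpty() && subtype(parent, sup);
    }

    // validity of Enum<A>: A must satisfy its bound Enum<A> (an admittable type)
    static boolean validEnumOf(String a) {
        return subtype(a, "Enum<" + a + ">");
    }

    public static void main(String[] args) {
        System.out.println(validEnumOf("Object")); // false: Object is not a subtype of Enum<Object>
        System.out.println(validEnumOf("Color"));  // true:  Color <: Enum<Color>
    }
}
```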
Given the discussion and the example above, to formalize our reasoning method we make a distinction regarding type arguments, where we differentiate between admittable type arguments of a class and valid type arguments of the class.
In particular, for any generic class G a type TA is an admittable type argument of class G as long as TA is a well-formed (reference/object) type, particularly disregarding any declared bounds on the corresponding type variable in G. On the other hand, in all but one of the program contexts where a parameterized type can occur, an admittable type argument TA of G is a valid type argument if TA also satisfies the bounds declared in G on the corresponding type variable (i.e., if TA is a supertype of its lower bound and a subtype of its upper bound). That is, in all such contexts G<TA> should be accepted by the type checker as a valid parameterized type. In the context where bounds of the type variable(s) of G are declared, however, our model of f-bounded generics necessitates that all admittable type arguments of G are also considered valid type arguments.
In other words, our model of f-bounded generics (including doubly F-bounded generics) states that all valid type arguments of a generic class G are (by definition) admittable ones, in all contexts, and it requires that the converse (i.e., that admittable arguments are valid) holds in the special context of declaring bounds of type variables of G. In all other contexts, an admittable type argument of G is valid if and only if it also satisfies the declared bounds in G.
Discussion
In this paper, using a notion we call 'f-bounded functions' from analysis, we illustrated that a bound of a type variable in f-bounded generics is a function (over types, i.e., is of type T → T, where T is the set of ground types) that specifies a bound for each value of the type variable, which in turn decides whether the value (i.e., a type argument) is a valid type argument.
Our illustration immediately suggested how f-bounded generics can be generalized to doubly f-bounded generics, where both an upper and a lower bounding function (over types) can be specified.
Our illustrating example further allowed us to consider how we may reason about functions (in analysis) that have (what we call) 'input-side recursion,' i.e., functions where the definition of a function specifies that the value of the function at some input value is an (upper or lower) bound of the input value.
Accordingly, we suggested how we can reason, in the same way, about the declaration of a generic class with input-side recursion (i.e., where the instantiation of the generic class having the type variable as the type argument is a bound of the type argument, e.g., as in the class declaration class C<T extends C<T> >, where the particular instantiation of class C whose type argument is T is an upper bound of T).
We finally also discussed one of the possible implications of our model of fbounded generics on the type checking algorithm of nominally-typed OOP languages.
A.1. In the main body of this paper we suggested, via illustrating examples rather than an explicit coinductive argument [4], that the domains can be decided easily (i.e., without explicitly resorting to coinductive arguments in the decision procedure);⁶ hence this appendix.
A.2. Preliminaries. Let P be a partially-ordered set. Let f : P → P be a function defined over P (i.e., whose domain and codomain are the same set; such a function is sometimes also called an endofunction or endomap over P). Let l, u be two other endofunctions over P.
In this paper we consider restricting the domain of f using the functions l and u. In particular, we stipulate that a value x in the domain of f has to be greater than or equal to the value of l at x, i.e., that l(x) ≤ x, and that x has to be smaller than or equal to the value of u at x, i.e., that x ≤ u(x). This restricted-domain function f can be expressed succinctly as f(l(x) ≤ x ≤ u(x)). We call such restricted-domain functions doubly f-bounded functions (or dfbfs, for short).
A.3. Deciding Domains of Doubly F-bounded Functions.
In the main body of this paper we gave examples illustrating how the domain of dfbfs from analysis (i.e., defined over the real numbers R) can be decided, seemingly easily, using the plots of the functions involved. That included examples for the special cases (of practical interest) where the defined function f is itself one of the two bounds of its own parameter x (but not both), i.e., the cases⁷ where the definition of f can be expressed as f(x ≤ f(x)) or f(f(x) ≤ x). It should be noted that if f is used as the bounding function for both bounds of x, then the restricted-domain f is defined only for the fixed points of f (since f can then be expressed as f(f(x) ≤ x ≤ f(x)), which is equivalent to f(x = f(x)), stating that f is defined only at its own fixed points). To the best of our mathematical knowledge (as of today), fixed points of a function f can be found iteratively if P is a complete partial order (CPO) and f is monotonic (i.e., if ∀x, y ∈ P. (x ≤ y) ⟹ (f(x) ≤ f(y))). But for general functions (i.e., ones that may not be monotonic) defined over general (i.e., not necessarily complete) partial orders, no general method for finding fixed points exists.
Further, if P is a pointed CPO (i.e., has a least member ⊥, usually called 'bottom') and f is a monotonic function over P, then even a least fixed point of f is guaranteed to exist (e.g., by the Knaster-Tarski and Kleene fixed-point theorems). In that case the least fixed point (lfp) of f can be found simply by iterating the application of f over ⊥, i.e., by computing the sequence f(⊥), f(f(⊥)), f(f(f(⊥))), ··· until a fixed point is found (i.e., until two successive values in the sequence are the same).
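A minimal Java sketch of this iteration, assuming a poset in which every such chain stabilizes (e.g., a finite pointed poset with a monotone f); the example function and all names are ours:

```java
import java.util.function.UnaryOperator;

// Least fixed point by Kleene-style iteration from bottom:
// bottom, f(bottom), f(f(bottom)), ... until two successive
// values coincide (which requires the chain to stabilize).
final class Lfp {
    static <T> T lfp(UnaryOperator<T> f, T bottom) {
        T x = bottom;
        while (true) {
            T next = f.apply(x);
            if (next.equals(x)) return x;  // fixed point reached
            x = next;
        }
    }

    public static void main(String[] args) {
        // Hypothetical example over the poset ({0,...,10}, <=) with a monotone f
        Integer fix = lfp(n -> Math.min(n + 2, 7), 0);
        System.out.println(fix);  // prints 7
    }
}
```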
Given, however, that while deciding the domain of dfbfs we are not specifically and explicitly seeking to find fixed points, we are guessing that our problem (i.e., deciding the domains of dfbfs) may be simpler than finding fixed points, and thus in no need of a completeness condition on P, of a monotonicity condition on f, or of an explicit coinductive argument in solving it (i.e., deciding the domain, as suggested by the illustrating dfbfs from analysis). A further reason for us not to consider seeking fixed points is the context of our application (i.e., the context in which we wish to apply our result). As we pointed out in Footnote 2 in the main body of this paper, in doubly f-bounded generics, due to nominal subtyping (i.e., the fact that subtyping has to be explicitly declared), it is impossible for any type T to be equal to the instantiation of a generic class C with type T as the type argument of the class (i.e., in generic nominally-typed OOP, for no type T can we have T = C<T>).
Footnote 6. Informally, as a proof principle, coinduction states that a property holds if there is no good reason for it not to hold.
Footnote 7. We call these dfbfs ones with 'input-side recursion' or with 'input-side self-reference'.
As such we can safely, i.e., without loss of generality, restrict our attention to finding domains of dfbfs having definitions of the form f(x < f(x)) (without an equality possibility), whose domain (a subset of P) we call P_V (the subset of P whose values of x are valid as arguments to f).
It is our assertion that the domain P_V of such a function f is the same as the domain P′_V of a dfbf f′ (over P) with a definition of the form f′(x < g(x)), where g and f′ are functions that have the same "expression" as f, but where g has/gives valid values for all elements of P (i.e., the domain of g is the whole of P, not restricted to a subset of it).⁸ We reason by cases as follows: If x ∈ P_V (i.e., x is in the set of valid arguments to f), then ∃y ∈ P, x < y = f(x) = g(x), and thus we also have x ∈ P′_V (since x < g(x)).
If x ∉ P_V, then f(x) is undefined or, more precisely, is an "invalid value", and, by coinductive reasoning [4], we know that x ≮ g(x); thus, by the definition of P′_V (i.e., the domain of f′), we have x ∉ P′_V.⁹ As such, we have P_V ⊆ P′_V and P′_V ⊆ P_V, and thus P_V = P′_V. Secondly, since, by our choice of f′, we have ∀x ∈ P_V, f(x) = f′(x), then, using the extensionality of functions, and given that f and f′ have the same domain (and codomain), we have f = f′. | 2018-08-22T21:31:15.507Z | 2018-08-18T00:00:00.000 | {
"year": 2018,
"sha1": "79602579dd51983934098acdf15d4806e9350208",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aa45bfe10cbc16de009a89f1064138669d18d30e",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
248030327 | pes2o/s2orc | v3-fos-license | Colonic Pseudolipomatosis: A Rare but Characteristic Endoscopic Condition
Patient: Female, 65-year-old Final Diagnosis: Pseudolipomatosis Symptoms: Diarrhea Medication:— Clinical Procedure: — Specialty: Gastroenterology and Hepatology Objective: Rare disease Background: Colonic pseudolipomatosis (CP) can pose a diagnostic challenge due to its rare incidence and multiple presentations, most of them not very familiar to the endoscopist. Its etiology and pathogenesis have not been completely clarified. It can be related to mucosal iatrogenic injury caused during endoscopic examination or to chemical injury caused by residual disinfectants on the surface of the scope after cleansing. Imaging tests such as CT or MRI do not contribute to the diagnosis, but this condition has characteristic features that must be differentiated from pre-malignant lesions, like lateral-spreading tumors, in order to avoid further investigation and unnecessary treatment, such as endoscopic mucosal resection. Case Report: We report a case of a 65-year-old man who underwent to a screening colonoscopy due to his strong family history of colorectal cancer. Confluent whitish laterally-spreading lesions with a round pit-pattern in white-light HD scope were identified in the cecum and ascending colon. The lesion was biopsied with a cold forceps. Histopathologic analysis revealed multiples cysts filled with gas within the mucosal layer, associated with a mild inflammatory process, mainly composed of mononuclear cells and eosinophils. No giant multinuclear cells were identified. Moreover, although there was a mild inflammatory process in the epithelium, the architectural organization and tissue maturation were preserved with no nuclear atypia, consistent with a diagnosis of colonic pseudolipomatosis. Conclusions: Colonic pseudolipomatosis is a rare, benign condition that must be not mistaken for more serious conditions, as CP requires no further investigation or treatment. In this setting, proper diagnosis is key to avoid unnecessary procedures.
Background
Colonic pseudolipomatosis (CP) is a benign condition that can pose a diagnostic challenge due to its rare incidence and multiple presentations [1]. As this entity is not very familiar to the endoscopist, the diagnosis can be confused with other lesions.
There are few reports of this condition in the literature and it appears that the finding of CP is somewhat "endoscopistspecific" since there are extremely experienced endoscopists who have never seen a lesion of this type, while other endoscopists see it frequently. This would imply that some difference in technique or patient selection may play a part in the development or recognition of this lesion [2].
The etiology and pathogenesis of CP have not been completely clarified. It may be related to mucosal iatrogenic injury caused during endoscopic examination or to chemical injury caused by residual disinfectants on the surface of the scope after cleansing.
Imaging tests such as CT or MRI do not contribute to the diagnosis, but this condition has characteristic endoscopic features that must be differentiated from pre-malignant lesions, like lateral-spreading tumors (LSTs), in order to avoid further investigation and unnecessary treatment, such as endoscopic mucosal resection (EMR) [3,4].
We present a case of CP in an asymptomatic patient undergoing colorectal cancer screening videocolonoscopy, review the endoscopic and histologic features, and discuss its clinical significance.
Case Report
We report a case of a 65-year-old married man who was born in Rio de Janeiro, with mild constipation controlled by regular fiber intake and no comorbidities. Previous lower endoscopies revealed no polyps, except for the last exam, done in 2018, in which a microvesicular hyperplastic polyp was retrieved from the sigmoid colon.
The patient was asymptomatic, and due to his strong family history of colorectal cancer (CRC), he underwent a screening colonoscopy in 2021. A full exam after adequate bowel preparation (Boston score 9) was undertaken. Confluent whitish laterally spreading lesions with a round pit pattern under white-light HD endoscopy were identified in the cecum and ascending colon. The lesion was biopsied with cold forceps (Figure 1). The remaining colonic segments were unremarkable.
Histological evaluation revealed accumulation of gas in the colonic mucosa. The lamina propria presented clear (air) spaces of variable size, in aggregates, isolated or confluent, with an apparently empty center, surrounded by an inflammatory infiltrate composed of mononuclear cells and eosinophils. No giant multinuclear cells were identified. The epithelial lining appeared slightly reactive and hyperplastic, maintaining regular architectural organization, without nuclear atypia. The air spaces resembled fat cells (thus the name pseudolipomatosis). The focal nature, intramucosal location, and lack of nuclei distinguish this from adipose tissue (Figure 2).
Discussion
The term pseudolipomatosis was first coined by Snover et al in 1985 [2]. CP is a rare, benign condition, with underestimated prevalence [5] of about 0.02% to 0.3% in series of endoscopic exams [6]. It is most commonly found between the fifth and sixth decades of life [6], with no clear sex preference [1,5]. In general, patients are asymptomatic, but symptoms like chronic diarrhea, bloating, lower gastrointestinal tract bleeding, and positive fecal occult blood test have been associated with CP [1,5]. Mucosal lesions disappear over weeks or months without treatment [1,6,7].
CP is more often present in the left colon, while some reports [4] have described a similar incidence between rightand left-sided lesions [6,7]. Lesions have also been described in the rectum, skin, duodenum, stomach, endometrium, and oral/nasal mucosa [4].
Usually, CP manifests as yellow or whitish lesions, single or multiple, and sometimes as confluent plaques located in 1 or more colonic segments. Lesions may vary from 0.2 to 5 cm in width, most with a peripheral erythema [1]. More commonly they are visualized while withdrawing the endoscope, but sometimes large lesions are found during insertion [3].
Histopathology analysis reveals empty spaces in the lamina propria, varying from 20 to 240 µm wide [6]. Specific staining for fat and mucin is routinely negative, and cholesterol crystals are not visualized in the exam with polarized light. No lipid deposits are identified by Sudan Black staining or by immunochemistry, and it is usually negative for anti-CD 31, anti-CD 34, and anti-protein S100 [1,5].
Unlike lipomatosis, pseudolipomatosis shows no adipose cells [6]. In cystic pneumatosis, the colonic epithelial lining presents mild local edema, with some gas bubbles bursting as the mucosal layer retracts. Histologically, it is noteworthy that the empty cysts lie in the submucosa rather than in the mucosal layer, surrounded by an infiltrate of macrophages and giant cells [8].
Colonic lymphangiomas have large lymphatic vascular cavities in the colonic wall bounded by endothelial cells that are CD31- and CD34-positive. Hyperplastic lipomatosis of the ileocecal valve and colonic lipoma are both characterized by the presence of adipose cells. Malakoplakia of the colon reveals a chronic inflammatory process characterized by closely packed histiocytes containing calcospherites known as Michaelis-Gutmann bodies [9].
CP is not considered an infectious disease, but its etiology and pathogenesis are unclear. It can be related to iatrogenic mucosal injury caused by epithelial stretch, abrasion, biopsy, and hyperinsufflation, which allow gas to infiltrate the colonic wall. CP has also been associated with chemical injury caused by residual disinfectants on the surface of the scope after cleansing with hydrogen peroxide [1,2,5,7]. A pulmonary source of the gas seems very unlikely, since experiments in animals and in cadavers have rarely reproduced submucosal cysts in the colonic wall, although subserosal cysts were observed [8]. Snover and Cox have suggested that CP might be operator-dependent [2]. Kim and Baek [3] described an increase in the number of CP cases as glutaraldehyde was replaced with peracetic acid for scope cleansing. The authors reported a series of 12 cases out of 1276 patients who received colonoscopies during a 1-year period (0.94%), and they were able to induce mucosal lesions by comparing glutaraldehyde 2%, ethanol 70%, and peracetic acid 2%: in a pig model, lesions very similar to those observed in CP of human colonic mucosa were reproduced after use of peracetic acid, with the mucosal effect directly related to higher concentration, while no lesion was induced after use of glutaraldehyde or ethanol [10]. Likewise, Sheehan and Brynjolfsson [11] studied the effect of peracetic acid on rat colonic mucosa, with histopathologic results very similar to those of the Kim and Baek study, while other authors were not able to experimentally induce CP using glutaraldehyde. Similarly, Snover et al [2] were unable to create CP by injecting air directly into the colonic submucosa, and Waring et al [12] failed to induce CP after hyperinsufflation in a colon cadaver study.
Brevet et al [5] described 9 cases out of 2099 exams performed over 2 years (0.4% prevalence). All patients were male, with a mean age of 52 years. Three cases were right-sided, 4 were found in the transverse colon, and 2 were left-sided. Eight cases were observed during endoscope insertion, while 1 was observed during withdrawal, which suggests that the lesions found were probably not caused by, or at least were not induced by, the disinfection process, similar to the present case reported here.
Finally, Nakasono et al [13] classified CP into 2 groups by histopathologic analysis of 15 lesions from 14 patients, based on the size of the cysts. There was no difference between the groups with regard to sex, age, or other clinical conditions. Group A, with a major/minor cyst size ratio <3, presented lesions in the upper part of the muscularis mucosae layer with no submucosal involvement, smaller cysts, and no association with lymphatic follicles. Group B, with a major/minor cyst size ratio >4, presented lesions in the lower part of the muscularis mucosae layer, sometimes extending into the submucosa, and had more variable cyst sizes and a positive correlation with lymphatic follicles. Although the pathogenesis of group A was not clearly explained, the authors observed that the findings of CP were closely related to the histopathologic findings observed in pneumatosis coli, suggesting that these 2 conditions have the same pathogenesis, i.e., the penetration of gas through the colonic crypts during colonoscopy [8,14].
Conclusions
Colonic pseudolipomatosis is a rare, benign condition that must be not mistaken for more serious conditions, as CP requires no further investigation or treatment. In this setting, proper diagnosis is key to avoid unnecessary procedures. | 2022-04-09T15:19:37.879Z | 2022-03-31T00:00:00.000 | {
"year": 2022,
"sha1": "1b1818ac75dc49c6dfcccedd72e276ccd237034e",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6f60bc4481003b20cc1e9eb78a06a2284915d485",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
53992261 | pes2o/s2orc | v3-fos-license | Occurrence of Infectious Laryngotracheitis Virus ( Iltv ) in 2009-2013 in the State of São Paulo – Brazil
Infectious laryngotracheitis is a very important respiratory disease because it causes significant economic losses in the poultry industry. The target of ILTV infections is the respiratory system, and the main organ in which the virus remains latent is the trigeminal ganglia. However, the virus has demonstrated tropism for other organs as well. The present study was conducted to determine the presence of Infectious Laryngotracheitis Virus (ILTV) in the state of São Paulo. Samples submitted to LABORUSP during the last four years (20092013) analyzed by a nested/PCR technique. Out of the 682 samples from layers tested for LTIV, 12.46 % were positive, and derived from in both traditional (trachea and trigeminal ganglion) and untraditional (cecal tonsils, digestive tract and kidneys) organs utilized for ILTV diagnosis. The present work showed that ILTV is circulating in commercial layer flocks in São Paulo State, and that the LTIV is present in other organs in addition to the respiratory tract and trigeminal ganglion; however, it was not determined if the circulating virus is a vaccinal or field strain.
INTRODUCTION
Infectious laryngotracheitis (ILT) of birds is a highly contagious disease that primarily affects chickens, pheasants, and partridges, with hens as the primary host. Starlings, sparrows, crows, pigeons, and ducks seem to be resistant to the virus (Guy & García, 2008). The causative agent is a pneumotropic virus of the family Herpesviridae, genus Iltovirus. Taxonomically, this virus is classified as Gallid herpesvirus 1 (Zhao et al., 2013). This disease is included in the OIE list of terrestrial and aquatic animal diseases of mandatory notification. Its notification to the Brazilian Official Service is also mandatory (http://www.oie.int/en/animal-health-in-the-world/oie-listed-diseases-2014/).
Infectious laryngotracheitis was first described in 1925 (May & Tittsler, 1925), and since then it has been reported in many countries, where it is endemic, especially in regions of intensive poultry production with large concentrations of birds and multi-age farms, such as North America, China, Europe (especially Poland), Australia, Africa, southwest Asia, New Zealand, and South America (Chacón & Ferreira, 2009; Hidalgo, 2003). Viral transmission occurs via horizontal transfer, and the primary replication sites are the tracheal mucosa and conjunctiva, where the virus can cause inflammation, mucoid or serous discharge, cough, and dyspnea. Poor egg production and weight gain are also observed (Coppo et al., 2013b).
The virus invades the trigeminal nerve during the lytic phase of infection, resulting in a latent infection that may be present during the entire life of the animal. Some stressors, such as the onset of lay and placement with other birds, may reactivate virus replication and shedding (Coppo et al., 2013a; Hughes et al., 1991; Hughes et al., 1989; Williams et al., 1992). Recent experimental studies indicate that the virus can also be detected in other organs, such as the heart, liver, spleen, lung, kidney, tongue, thymus, proventriculus, pancreas, duodenum, small intestine, large intestine, cecum, cecal tonsils, bursa, and brain (Oldoni et al., 2009; Wang et al., 2013; Zhao et al., 2013).
Molecular techniques, such as polymerase chain reaction (PCR), have been successfully used for the detection of ILTV in respiratory and other organs (Chacón et al., 2007; Oldoni et al., 2009; Rodríguez Avila et al., 2007). Polymerase chain reaction presents high sensitivity and specificity for the detection of ILTV. It is able to detect viral DNA when other tests, such as histopathology and immunofluorescence, yield negative results (Crespo et al., 2007). Therefore, PCR provides a rapid diagnostic test that can aid in the differentiation between field and vaccinal strains (Clavijo & Nagy, 1997).
The aim of this study was to evaluate the occurrence of Infectious Laryngotracheitis Virus in samples submitted to the Laboratory of Avian Diseases of the School of Veterinary Medicine of the University of São Paulo between 2009 and 2013.
Sampling
A total of 682 samples from layer flocks from several cities of the state of São Paulo (Bastos, Iacri, Tupã, Guapiaçu, Rancharia, Parapuã, Ibiúna) were analyzed. Of these, 337 derived from the trachea, 308 from the trigeminal ganglia, 11 from the lungs, six from the cecal tonsils, seven from the digestive tract, two from the liver, two from the spleen, one from the pancreas, and eight from the kidneys (Table 2).
DNA extraction
Total DNA was extracted according to the method described by Chomczynski (1993). Briefly, samples were homogenized in sufficient 0.01 M phosphate-buffered saline (PBS; pH 7.4) to yield a 10% (w/v) suspension, and clarified at 3000 x g for 20 min at 4°C. The supernatant was separated and an aliquot (200 µL) incubated for 5 min at 37°C with 1000 µL of phenol/guanidine thiocyanate solution. Chloroform (200 µL) was then added to the solution, the mixture was centrifuged (12,000 x g for 15 min at 4°C), propanol (750 µL) was added, and the whole mixture was cooled at −20°C for at least 2 h. Precipitated DNA was collected by centrifugation (12,000 x g for 20 min at 4°C). Any DNA that remained adhered to the wall of the tube was rinsed with 70% ethanol and the solvent was allowed to evaporate. The total DNA sample was dissolved in 30 µL of Tris-EDTA buffer.
Viral Detection
Viral detection was performed by a nested-PCR technique targeting the gene that encodes glycoprotein E, as described by Chacón & Ferreira (2008).
In a DNase-free fresh microtube, 1X PCR Buffer, 1.25 mM of each dNTP, 0.5 pmol of each of the forward (GE1S) and reverse (GE2AS) primers (as described in Table 1), 1.25 U of Platinum® Taq polymerase (Invitrogen by Life Technologies, Carlsbad, CA, USA), and 2.5 µL of extracted DNA were added. DNA-free ultrapure water was included to bring the volume to 25 µL. Amplification was performed in a Mastercycler® Nexus X1 (Eppendorf AG, Hamburg, Germany). Thermal cycling consisted of an initial denaturation step of 3 min at 94°C, followed by 45 cycles of denaturation at 94°C for 1 min, annealing at 58°C for 30 s, and extension at 72°C for 45 s. The final cycle was followed by an extension step at 72°C for 10 min. The second round of amplification (nested-PCR) was performed in a similar manner, using a second set of primers (GE3S forward and GE4AS reverse), as described in Table 1. The PCR products were visualized after separation by electrophoresis in an agarose gel (1.5%) using Blue Green dye (LGC, São Paulo, Brazil) to stain the DNA. The size of the amplified product was estimated using the 100 base pair DNA Ladder molecular size marker (Invitrogen).
DISCUSSION AND CONCLUSION
Infectious laryngotracheitis (ILT) is a disease that causes significant economic losses due to increased mortality and reduced growth rates and egg production (Guy & García, 2008). The target of ILTV infections is the respiratory system, specifically the epithelium of the trachea and larynx, although the sinuses and lungs may also be infected. The target site of infection largely depends on the route of inoculation (Bagust & Johnson, 1995). In the present study, ILTV was detected in seven (63.6%) lung samples and 57 (16.6%) tracheal samples, showing a wide viral distribution and strong tropism for the respiratory organs, despite its detection in the intestines, cecal tonsils, and trigeminal ganglia.
The main organ in which ILTV remains latent is the trigeminal ganglia, possibly because the trigeminal nerve is the main innervator of the upper respiratory tract, tongue, and eyes, and its distal part is involved in the innervation of the trachea (Bagust & Johnson, 1995; Bagust, 1986; Williams et al., 1992). The presence of ILTV was detected in 19 of the evaluated trigeminal ganglia samples, showing that the virus became latent in a few chickens.
Our study indicates that the virus was detected at higher rates in the lungs, trachea, and trigeminal ganglia. This is consistent with previously reported results indicating that greater viral replication occurs in the respiratory organs (Oldoni et al., 2009; Rodríguez Avila et al., 2007). However, other organs, such as the cecal tonsils, digestive tract, and kidneys, were also positive. These findings are in agreement with other studies (Wang et al., 2013; Zhao et al., 2013), in which viral DNA was detected and quantified in the heart, liver, spleen, kidneys, tongue, thymus, proventriculus, duodenum, pancreas, small intestine, large intestine, cecum, cecal tonsils, bursa, and brain. These results indicate that circulating ILTV strains may show tropism for other organs in addition to the respiratory system. Further studies using ILTV-specific immunohistochemical techniques should be performed to confirm this finding.
Five hundred ninety-seven (597) of the 682 samples (87.5%) were negative by the aforementioned molecular tests, implying partial success of the vaccination program in reducing viral activity. The positive samples (12.46%) revealed viral presence in healthy chickens. In addition, viral presence could also mean that a field strain is circulating among layer flocks. This is suggested by the detection of ILTV in uncommon organs and indicates that the pathogenesis of the disease is not well understood. Several studies using molecular techniques such as PCR for detection of ILTV (Chacón et al., 2007; Chacón & Ferreira, 2008; Clavijo & Nagy, 1997; Crespo et al., 2007) prove that these techniques are very sensitive and specific. In the present study, a nested-PCR assay targeting the amplification of part of the gene encoding glycoprotein E was successfully used to investigate the presence and tissue location of ILTV in layer chickens.
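As a simple check of the reported proportions (an illustration only, using the counts given above):

n_total <- 682
n_neg   <- 597
n_pos   <- n_total - n_neg        # 85 positive samples
round(100 * n_neg / n_total, 2)   # 87.54%, reported as 87.5%
round(100 * n_pos / n_total, 2)   # 12.46%, matching the text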
The results of this study demonstrate that ILTV is circulating in laying flocks reared in São Paulo state; however, it is unknown whether the circulating virus is a vaccine or a field-derived strain. Further studies should focus on differentiating the nature of these strains. Additionally, it must be noted that, at the time of diagnosis, organs other than those of the respiratory system presented ILTV infection.
Figure 1 - Nested-PCR product (219 bp) of the partial amplification of the ILTV glycoprotein E gene. Lanes A, B, E, G, H: tested samples. Lane C: positive control. Lane F: negative control. Lane D: molecular size marker (100 bp).
Table 1 - Primers, nucleotide sequences, amplified products (in bp), and references used in the nested-PCR test.
Table 3 - ILTV distribution in the 682 analyzed samples from different organs of layers in São Paulo State.
"year": 2015,
"sha1": "8ac6bfb8f05fa30895dbb17ae2b1483eb3541cb2",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/rbca/a/h4jPrxDk5VSbMQQZMRcTw5v/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8ac6bfb8f05fa30895dbb17ae2b1483eb3541cb2",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
Social media for health promotion in diabetes: study protocol for a participatory public health intervention design
Background: Participatory health approaches are increasingly drawing attention among the scientific community, and could be used for health promotion programmes on diabetes through social media. The main aim of this project is to research how best to use social media to promote healthy lifestyles with and within the Norwegian population. Methods: The design of the health promotion intervention (HPI) will be participatory, and will involve both a panel of healthcare experts and social media users following the Norwegian Diabetes Association. The panel of experts will agree on the contents by following the Delphi method, and social media users will participate in the definition of the HPI by expressing their opinions through an ad hoc online questionnaire. The contents agreed between both parties will be posted on three social media channels (Facebook, Twitter and Instagram) over 24 months. The 3 months before starting the HPI and the 3 months after the HPI will be used as control data. The effect of the HPI will be assessed by comparing formats, frequency, and reactions to the published HPI messages, as well as by comparing potential changes in five support-intended communication behaviours expressed on social media, and variations in sentiment before versus during and after the HPI. The HPI's effect on social media users' health-related lifestyles, online health behaviours, and satisfaction with the intervention will be assessed every 6 months through online questionnaires. A separate questionnaire will be used to assess the panel of experts' satisfaction and perceptions of the benefits for health professionals of an HPI such as this one. Discussion: The time constraints of today's medical practice combined with the piling demand of chronic conditions such as diabetes make any additional request on health care professionals' time a challenge. Social media channels provide efficient, ubiquitous and user-friendly platforms that can encourage the participation, engagement and action necessary from both those who receive and provide care to make health promotion interventions successful.
Background
The prevalence of diabetes is increasing substantially worldwide. In 2014 it was estimated that 422 million adults had diabetes, a number around four times higher than the 108 million in 1980 [1]. This is a serious, chronic disease that can lead to severe complications and premature death [1]. After the diagnosis has been confirmed, prevention actions are initiated that aim to reduce factors that can worsen the health of those affected and reduce the risk of premature death. These risk factors include tobacco use, an unhealthy diet, physical inactivity and excessive consumption of alcohol [1,2]. The adequate education and counselling of patients with diabetes and their families has been emphasized by the World Health Organization [1]. These efforts have also been highlighted by the Norwegian Ministry of Health and Care Services as a key measure to promote patient commitment and self-treatment, and to reduce the development of complications [2]. However, medical practitioners may not always have the time to provide adequate and detailed answers to questions, or to offer health education or counselling to patients and their families during the infrequent and brief consultations they are allotted.
Social media channels have been proposed as effective educational tools through which to promote secondary prevention measures and behaviour change [3][4][5][6][7][8][9][10], and could also represent an effective platform for answering questions from patients and their relatives. Social media channels are powerful outlets for public health promotion due to their cost-effectiveness, precise evaluations of campaign success, and increased sustainability [11]. By the end of 2016, 2.34 billion users worldwide were using social media, and it is estimated that there will be 3 billion users by 2020 [12]. In highly connected countries such as Norway, social media are becoming increasingly important for seeking out and sharing health information [13]. Evidence suggests that patients are nowadays seeking and sharing health information on social media [13][14][15][16][17][18][19] as an additional resource to the consultations with their clinicians [20,21]. However, although much of the health information available on social media seems to be of reasonably good quality [19], social media users are subject to risks associated with misleading or inadequate health information [19,22].
Public health institutions, healthcare professionals, and other stakeholders could participate more actively in these outlets, not only to answer diabetes patients' questions and correct possible misinformation, but also to take advantage of the popularity of these channels to provide relevant health information for people with diabetes. In this sense, a participatory programme in which the contents of the health promotion are agreed upon with people affected by diabetes has the potential to engage the target audience, and therefore to enhance patients' wellbeing and satisfaction, and to improve health outcomes [23]. These participatory health approaches are increasingly drawing attention among the scientific community [24][25][26][27], and could be used for health promotion programmes on diabetes through social media.
Methods/design
Aim, design and participants

The main aim of this project is to research how best to use social media to promote healthy lifestyles with and within the Norwegian population. Secondary objectives of the project are: (1) to analyze health behaviour on social media; (2) to analyze online discussions concerning diabetes; and (3) to suggest systems and procedures to improve usage of social media for the dissemination of health information. The design of the health promotion intervention (HPI) will be participatory, and will involve both a panel of healthcare experts and social media users from the Norwegian Diabetes Association [28]. On one side, a panel of five healthcare professionals with expertise in diabetes and/or patient health education will agree on the contents that will be used for the health promotion intervention. The panel of experts will agree on the contents by following the Delphi method [29,30].
On the other side, all social media users from the Norwegian Diabetes Association will be invited to actively participate in the definition of the HPI by expressing their opinions through an ad hoc online questionnaire regarding contents, frequency and format. The questions included in the questionnaire will be agreed between researchers, healthcare professionals and personnel from the Norwegian Diabetes Association. The online questionnaire will be distributed using LimeSurvey, an open source online survey tool [31]. Links to the online questionnaire will be posted on the three social media channels (Facebook, Twitter and Instagram). The online questionnaire will not track any information that can identify or trace users. The questionnaire will include questions regarding topics of interest, preferred frequency of the messages, preferred format of the information, and preferred social media channels.
The agreed contents to be used in the health promotion intervention (between the panel of experts and social media users of diabetes groups) will be posted on the social media channels. These messages will draw on the Laugh Model [11], a framework that proposes using social media to change behaviour [11]. Because social media are attractive platforms through which the intervention can be presented, users are more likely to engage in them.
Intervention
The HPI is expected to start at the end of 2017 and last until the end of 2019. The HPI will last 24 months in total, and will be carried out through the social media channels of the Norwegian Diabetes Association, Diabetesforbundet [28], and other relevant organizations, with the aim of reaching all their social media users (over 35,000 by October 2017) [28]. During the first 12 months of the intervention (Phase I), health messages promoting secondary prevention measures and behaviour change for diabetes will be disseminated through the social media channels. In the following 12 months (Phase II), in addition to the health messages, the HPI will include an "ask the experts" service, whereby people with diabetes will be able to send their questions.
Control
The 3 months before starting the HPI and the 3 months after the HPI will be used as control data. The formats, frequency, and reactions to the published HPI messages on the three social media channels (Facebook, Twitter, and Instagram) will be tracked during the 3 months prior to and the 3 months after the HPI.
Patterns of users' online discussions and behaviours related to diabetes will also be actively monitored for 30 months: 3 months before starting the HPI, during Phase I, during Phase II, and 3 months after the HPI. All of the contents tracked on social media will be analysed according to the Social Support Behaviour Code [32] regarding the five support-intended communication behaviours: 1) information support, i.e. providing information or advice; 2) esteem support, i.e. communicating respect and confidence in abilities; 3) network support, i.e. communicating belonging to a group of persons with similar concerns or experiences; 4) emotional support, i.e. communicating love, concern, or empathy; and 5) tangible assistance, i.e. providing, or offering to provide, goods or services.
Additionally, content sentiment analysis will be carried out during the whole study (3 months before starting the HPI, during Phase I, during Phase II, and 3 months after the HPI). Sentiment analysis will involve automatic classification of social media comments on the HPI messages into positive, neutral and negative feelings [33]. Sentiment analysis will be used as an outcome to assess the impact of the health promotion intervention.
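To make the classification step concrete, a toy lexicon-based classifier is sketched below in R. The word lists are invented placeholders, not a validated Norwegian sentiment lexicon, and the study itself will rely on dedicated tools.

# Toy lexicon-based sentiment classifier; word lists are hypothetical
pos_words <- c("bra", "takk", "glad")      # placeholder positive terms
neg_words <- c("vondt", "redd", "sliten")  # placeholder negative terms

classify_sentiment <- function(text) {
  # split lowercased text into word tokens
  words <- unlist(strsplit(tolower(text), "[^a-zæøå]+"))
  score <- sum(words %in% pos_words) - sum(words %in% neg_words)
  if (score > 0) "positive" else if (score < 0) "negative" else "neutral"
}

classify_sentiment("Takk for gode råd, jeg er glad!")  # "positive"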
Primary outcome
The primary outcome is the effect of the participatory social media health promotion intervention on the reported health-related lifestyles and online health behaviours of people living with diabetes.
Users of diabetes social media groups will be surveyed through an ad hoc questionnaire in order to assess the HPI's effect on their health-related lifestyles, and also on their online health behaviours. The questionnaire will be distributed before starting the HPI, and every 6 months during the 24 months of the HPI, i.e. during Phase I, during Phase II, and at the end of the health promotion intervention.
A questionnaire will also be distributed to the health professionals involved in the project (the panel of experts) to assess their satisfaction and their perceptions of the benefits for health professionals of an HPI such as this one.
Secondary outcomes
Secondary outcomes will be assessed through monitored contents on the study participants' social media channels. These are: 1) identification of positive measures regarding the five support-intended communication behaviours, which will be identified and assessed according to the Social Support Behaviour Code [32]; 2) an increase in the positive mood of diabetes patients during the health promotion intervention, which will be assessed with automatic sentiment analysis tools; and 3) an increase in the perceived quality of health information after the health promotion intervention, which will be compared with the perceived quality of health information assessed before the intervention.
Analysis of the results derived from the users' involvement, the HPI, the monitoring of online health information, and the monitoring of health behaviour will be used to suggest systems and procedures for the Norwegian Diabetes Association and other stakeholders to improve their usage of social media for future health promotion interventions.
Statistical methods
The opinions of social media users and healthcare professionals revealed by the questionnaires will be analysed using descriptive statistics. Results will be expressed in the form of frequencies and percentages for categorical variables, and as the mean, standard deviation (SD) and 95% confidence interval (95% CI) for continuous variables. t tests, ANOVA, correlation and chi-square analyses will be performed as well. Descriptive statistics will also be used to summarize positive and negative mood in the sentiment analysis, as well as the five types of social support found on social media as reactions to the participatory health promotion intervention: 1) information support; 2) esteem support; 3) network support; 4) emotional support; and 5) tangible assistance, according to the Social Support Behaviour Code [32].
Quantitative data analysis will be performed with the latest version of the SPSS statistical package. For the sentiment analysis we will use any of the tools available in the Norwegian language (e.g. Polyglot, Lexalytics, or similar).
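As an illustration of the planned summaries, the descriptive statistics and a chi-square test might look as follows (the protocol specifies SPSS; base R is shown here purely as a sketch, with hypothetical values):

# Mean, SD and a normal-approximation 95% CI for a continuous variable
x  <- c(5.2, 6.1, 4.8, 7.0, 5.5, 6.3)   # hypothetical questionnaire scores
m  <- mean(x)
s  <- sd(x)
se <- s / sqrt(length(x))
ci <- m + c(-1.96, 1.96) * se
round(c(mean = m, sd = s, lower95 = ci[1], upper95 = ci[2]), 2)

# Chi-square test on a hypothetical 2 x 2 table of categorical responses
tab <- matrix(c(30, 20, 25, 35), nrow = 2)
chisq.test(tab)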
Discussion
Participatory health intervention approaches and collaborative research involving researchers and community representatives are increasingly drawing attention among the scientific community [24][25][26][27], and could be used for health promotion programmes on diabetes through social media.
Within the Norwegian health services, there is little systematic use of social media to promote healthy lifestyles or offer advice to its users. While the reason for this is unclear, one might speculate that the traditional health services are slow in adopting new means of communication with their users [34]. However, the time constraints of today's medical practice combined with the piling demands caused by chronic conditions such as diabetes make any additional request on the time of health care professionals a challenge. Social media channels provide efficient, ubiquitous and user-friendly platforms that can encourage participation, engagement and action necessary from both those who receive and provide care, to make health promotion interventions successful.
In the present project, there will be collaboration between patient users, patient user organizations, and health professionals/researchers. The HPI will be developed jointly by the participants and will rely heavily on feedback from real patient users. The project is unique and innovative in the Norwegian setting and may provide important insights that can be used for other health promotion interventions drawing on social media and heavy user involvement in Norway.
This research project will investigate the use of a participatory approach to promote healthy lifestyles on social media among Norwegians with diabetes. The project will contribute to improving quality of care and quality of life while reducing social inequalities in health, since it is based on media that are available and accessible for all.
"year": 2018,
"sha1": "ef8191cfaa29d77091dbd182c2b091f8d68b3da8",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/s12913-018-3178-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ef8191cfaa29d77091dbd182c2b091f8d68b3da8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Time for complete transparency about conflicts of interest in public health nutrition research [version 2; referees: 2 approved]
We are a group of researchers and academics with decades of experience in the protection and promotion of public health. We are writing to raise our concerns about how conflicts of interest are reported in public health nutrition research. We highlight examples of why it is important to accurately declare such conflicts, as well as situations in which conflicts of interest have been inadequately reported. We call on researchers, and others, to be transparent about conflicts of interest in research. Journal editors in particular have an important responsibility in fully understanding how conflicts of interest can impact on research findings and interpretations. They need to agree and adopt clear guidelines on conflicts of interest and ensure that authors abide by these to facilitate trust in the scientific process and the credibility of published articles.
Interactions between commercial food and drink companies1 and professionals and bodies responsible for improving public health and health promotion have generated concerns for decades [1][2][3]. These interactions are often hailed as unique opportunities to make a difference to the public's health that would otherwise not be possible without industry involvement3. In late 2018, a series of events attracted considerable media attention in the United Kingdom and beyond. In September, Public Health England announced their partnership with the alcohol industry-funded body DrinkAware on a campaign called 'Drink Free Days', which has the stated aim of helping people cut down on the amount of alcohol they are regularly drinking. This partnership was met with much criticism, with Public Health England's alcohol adviser, Sir Ian Gilmore, resigning from this role because of concerns that such interactions with alcohol industry actors and related industry-funded organisations come at the expense of public health4. Then, in late November, Diabetes UK announced that it had joined forces with sugar-sweetened beverage manufacturer Britvic in a three-year partnership. Again, this interaction was met with much public criticism, which Diabetes UK has rejected5. On a more positive note, in October 2018 the Dietitians Association of Australia terminated partnerships with food manufacturers and industry associations following longstanding criticism and internal member advocacy6.
Such interactions with industry are also common among individual researchers. In a recent article published in the British Medical Journal, van Tulleken reported that cow's milk allergy may be acting as a Trojan horse for the €44bn global breastmilk substitute industry to forge relationships with healthcare professionals in the UK and around the world7. He further highlighted that many of those involved in producing milk allergy guidelines declared interests with breastmilk substitute manufacturers either at the time of writing or subsequently. A series of recent studies have highlighted links between nutrition researchers and Coca Cola8,9, contributing to a narrative that pushes policy towards measures to increase exercise by children, which is of course a good thing, while deflecting attention from the role of sugar-sweetened beverages in obesity and poor nutrition. Such interactions between public health, paediatric and nutrition experts and commercial food and drink companies can undermine trust in researchers and their scientific integrity10,11.
Concerns about interactions between researchers and commercial food and drink companies are well-founded as corporate interests typically prioritise investing in research that supports their policy and legal positions, and this can divert research attention away from questions that are more pressing for public health 12,13 . Such interactions are also more likely to lead to findings that confirm the benefits or lack of harm of the sponsor's products 14 , even when independently sponsored research comes to differing conclusions. As early as 1965 the US sugar industry began funding research to downplay the role of sugar as a dietary risk factor for coronary heart disease, shifting the focus towards cholesterol and fat instead, with decades-long implications for nutrition guidance and policy 15 . A Cochrane review concluded that industry sponsored studies more often report findings in a direction that favours the sponsor 16 . Similarly, in a systematic review of the effects of soft drink consumption on nutrition and health, the authors found that studies funded by the food industry reported significantly smaller effects than did non-industry-funded studies 17 . Such industry-funded research generates doubt among scientists, policy-makers and the public by generating conflicting or confusing results 18 . In the light of these and other revelations, members of the public are increasingly sceptical about research that is supported by commercial funding 19 , as are members of the research community 20 .
1 Those involved in the primary production, manufacturing, wholesaling, or retailing of fresh, packaged, or hot or cold ready-to-eat foods and/or drinks, as well as third parties working for such companies, including trade associations and research bodies.
Amendments from Version 1
We thank the reviewers for their feedback on our open letter; we have addressed their comments as follows.
Lisa H. Amir: • Abstract -2nd last sentence: "can impact on research findings" -you could add "and interpretations" or similar: We have added the suggested text.
• Letter -6th paragraph: first sentence needs a ref for ICJME guidelines: Reference added.
• Last sentence of this paragraph needs rewording of "its significance" which is confusing, to something like "the influence of funding on research reporting": We have added the suggested text.
• 7th paragraph: COI, personal communications. Should the name of the editor and/or date of personal communications be included here?: This is not a requirement of HRB Open Research; we did not seek permission from the editor to include this information and therefore have not included these details.
• Another thought -it seems to me that public health journals could take a stance on these issues...: Thank you for this information. We have added the following sentence: "Some journals and search engines have clear policies around conflicts of interest. For example, it is the policy of the International Breastfeeding Journal to decline for publication any manuscript that has received funding, sponsorship or any other means of support from breast milk substitute manufacturers." Barrie Margetts: • I thought, but could not find the email links, that PubMed had agreed to include COI declarations in their abstracts -it would be good to check this out and add if confirmed: Thank you for this suggestion. We have added the following text: "Since March 8, 2017, PubMed has included conflict of interest statements below the abstract when these statements are supplied by the publisher".
• I have no substantive comments; one minor-WHO uses organization (not s). This has been amended.
An important element of maintaining public trust in the scientific process and the credibility of published articles is whether conflicts of interest are transparently disclosed during the planning, implementation, writing, peer review, editing, and publication of scientific work. Determining what constitutes a conflict of interest can be difficult for researchers and editors as there is limited guidance available. However, when researchers receive funding from a commercial company to undertake research related to their products, brand or area of interest, a conflict of interest exists21. Although this seems obvious, a number of corporations have supported positions that seek to dismiss concerns about such conflicts by arguing that everyone has some interest, for example, in progressing their scientific reputation to attract further funding, so commercial sponsorship should not raise particular concerns22.
Procedures for the reporting of conflicts of interest are covered within the International Committee of Medical Journal Editors (ICMJE) guidelines23. Where authors do not conform to ICMJE guidelines, journal editors must take responsibility for encouraging full disclosure. A common sentiment within the research community is that transparency is the key to appropriately managing and avoiding conflicts of interest; that is, as long as the authors are fully transparent, then readers can make up their own minds about conflicts of interest. However, this sentiment fails to acknowledge the limited understanding that both academic and clinical researchers have of this issue24,25. Of particular concern is the limited awareness of how research funding and unconscious bias work together. This relationship can result in researchers being influenced by funding even when they think they are being unbiased26. Further limitations of disclosure are apparent from research showing that it may give licence to researchers to exaggerate their findings, while reviewers often fail to take adequate account of the influence of funding on research reporting27.
Recently, in a scientific article published ahead of print in Annals of Nutrition and Metabolism, the authors stated that they had "no conflicts of interest or financial ties to disclose" despite declaring that the writing of the article was supported by the Nestlé Nutrition Institute28. This Institute has clear links with Nestlé29, the world's biggest breast-milk substitute and complementary baby food manufacturer30, and therefore it has a clear financial interest in the study31. We wrote a Letter to the Editor of the journal to raise our concerns about how conflicts of interest were reported therein. The Editor declined to accept our letter for publication, asserting that the authors had disclosed their funding source and that readers could apply their own interpretation. The Editor further stated that the Editorial Board would critically review and question conflict of interest (COI) statements where questions may arise, but added that COI declaration remains the responsibility of the authors (personal communications, 11 November 2018). While COI declaration is the responsibility of the authors, it is the responsibility of the journal to have robust policies and to clearly explain them in a way that leaves no room for ambiguity.
The practice of declaring no conflicts of interest while also reporting financial support from vested interests is not uncommon in early life nutrition research. This occurs despite the World Health Organization highlighting the need to avoid conflicts of interest in all areas relating to infant and young child feeding in at least eight World Health Assembly resolutions. In a paper outlining the recommendations of an International Expert Group around follow-up formula for infants, several authors reported financial ties with breast-milk substitute companies yet declared that "none of the authors reports a conflict of interest"32. Shortcomings in editorial policies toward conflicts of interest (financial and nonfinancial) of editors and other staff involved in manuscript decisions have previously been highlighted33. Indeed, the ICMJE guidelines state that all those involved in the peer-review and publication process, including authors, peer reviewers, editors, and editorial board members of journals, must consider their conflicts of interest and disclose all relationships that could be viewed as conflicts of interest.
Researchers and journals have important responsibilities regarding conflicts of interest34. Some journals and search engines have clear policies around conflicts of interest. For example, it is the policy of the International Breastfeeding Journal to decline for publication any manuscript that has received funding, sponsorship or any other means of support from breast milk substitute manufacturers35. Since March 8, 2017, PubMed has included conflict of interest statements below the abstract when these statements are supplied by the publisher36. It is time for researchers, journals, funders and others involved in the research process to engage more critically with the challenges of conflicts of interest in research. This requires a clear understanding of what is, and is not, a conflict of interest, how to identify conflicts, the impacts of conflicts of interest on scientific integrity, how to prevent them, and greater transparency in the reporting of conflicts of interest in research, something that is often lacking37. Journal editors in particular have an important responsibility in fully understanding how conflicts of interest can impact on research findings and the credibility of published articles for journals and authors.
Clear guidelines on managing interactions with commercial food and drink companies, including avoidance of damaging conflicts of interest, are urgently needed. Journals will need to play an important role in implementing such guidance. To aid in this process, a project funded by the UK's Medical Research Council has reviewed evidence and built international consensus on the principles that underpin governance of interactions between researchers and commercial food and drink companies. Guidance for researchers, journals and funders will be published in 2019 38 . It will enable researchers to identify and assess conflicts of interest at different stages of the research process and suggests governance strategies to manage these.
Journals - as well as research institutions, professional bodies and funders - should use this forthcoming guidance to formulate or update their own conflict of interest policies and ensure that authors, peer reviewers, editors, and editorial board members abide by these to promote trust in the scientific process and the credibility of published articles.
Disclaimer
The views expressed in this article are those of the author(s). Publication in HRB Open Research does not imply endorsement by the Health Research Board of Ireland.
Data availability
No data is associated with this article.
Barrie Margetts
Faculty of Medicine, University of Southampton, Southampton, UK

This is a very timely and well-written article that highlights a key issue in research. Stronger guidance for journals (and for funding and reporting in general) is key to a clearer, objective evidence base upon which decisions for action can be made. I thought, but could not find the email links, that PubMed had agreed to include COI declarations in their abstracts - it would be good to check this out and add if confirmed.
I have no substantive comments; one minor point: WHO uses organization (not s).
Does the article adequately reference differing views and opinions? Yes
Are all factual statements correct, and are statements and arguments made adequately supported by citations? Yes
Is the Open Letter written in accessible language? Yes
Where applicable, are recommendations and next steps explained clearly for others to follow? Yes

Competing Interests: For complete transparency, I am a Trustee of Firststeps nutrition, which is run by one of the authors. I have not discussed this paper with any of the authors, but feel I should inform the readers.
Reviewer Expertise: Public Health Nutrition
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
"year": 2019,
"sha1": "0afb9349dcd599eee22ea26019216ebf00a86642",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.12688/hrbopenres.12894.2",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "ef9e2bd0e562d44e4cfe47b22803024574d29b4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Political Science"
]
} |
Adjustment of foraging trips and flight behaviour to own and partner mass and wind conditions by a far-ranging seabird
Many animals are highly adapted to cover vast distances in search of ephemeral food resources. Pelagic seabirds have particularly wide-ranging foraging trips, made possible through efficient use of wind. During incubation, partners alternate long periods of fasting and so should adjust foraging and flight decisions according to the condition of the pair, as well as wind conditions experienced at sea. Here, we tracked incubating Juan Fernández petrels, Pterodroma externa, with GPS and immersion loggers, assigned at-sea behaviours using hidden Markov models, and weighed birds and their partners, to investigate the roles of wind and mass on flight and foraging behaviour, and the link between wind use and trip success. Birds conducted long anticlockwise looping trips, on average lasting 20.4 days and covering 10 741 km. They reached a region in the southeastern Pacific Ocean where prey search behaviour was concentrated, typically about 3400 km west of the colony. Outbound and return journeys appeared to broadly benefit from predictable southeasterly trade and westerly winds, respectively. Over finer scales, departure bearings were influenced by wind directions. Across trips, birds oriented predominantly with quartering tail winds which maximized ground speeds. Individuals experienced variable support from tail winds, and those that benefited more on outbound journeys (when winds were generally weaker) travelled faster, reached foraging areas more quickly and, over the entire trip, had higher mass gain per day at sea. Additionally, birds that were lighter on departure gained more mass and birds with heavier partners ranged further from the colony. Our results suggest that decisions involving where to go and how far, respectively, are based on prevailing wind patterns and an assessment of the condition of the pair. Consequently, while birds sought to benefit from wind assistance, those encountering greater tail wind support had more successful foraging trips, indicating that wind use may have direct fitness consequences. Published by Elsevier Ltd on behalf of The Association for the Study of Animal Behaviour. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
Foraging animals should favour strategies that minimize costs (e.g. travel time, energy expenditure) while maximizing benefits (e.g. energy gain; MacArthur & Pianka, 1966). For predators foraging on ephemeral or patchily distributed prey, the optimal searching strategy may be to increase the probability of finding prey (Andersson, 1981; Sims et al., 2008) by maximizing the distance covered and minimizing energy costs per unit distance travelled (Pyke, 1981). In pelagic ocean environments beyond coastal and continental shelf waters, the distribution of prey is considered to be particularly patchy or sparsely distributed, at least at the (sub)mesoscale (1–100 km), compared to coastal or continental shelf waters (Robinson et al., 2021; Weimerskirch, 2007). Faced with these constraints, oceanic predators that routinely commute between breeding and foraging areas often have morphological, behavioural or physiological adaptations that allow them to cover vast distances to forage successfully (e.g. Au & Pitman, 1986; Ballance et al., 1997; Sims et al., 2008).
Pelagic seabirds such as albatrosses and petrels (Order: Procellariiformes) are extremely well adapted for low-cost flight and many species routinely conduct transhemispheric migrations (Bonnet-Lebrun et al., 2021) and travel over 10 000 km in a single foraging trip (e.g. Clay et al., 2019; Taylor et al., 2020; Weimerskirch et al., 2000). Their high wing aspect ratios (long, narrow wings; Spear & Ainley, 1997a) and dynamic soaring flight style allow them to exploit vertical wind speed gradients near the sea surface (Kempton et al., 2022; Richardson, 2011; Sachs et al., 2013). Consequently, their flight performance is shaped by wind patterns over a range of spatiotemporal scales, from migration routes (e.g. González-Solís et al., 2009) to finer-scale flight decisions within foraging trips (e.g. Clay et al., 2020). Generally, procellariiform seabirds orient with tail or cross-winds (Spear & Ainley, 1997a), allowing them to maximize achieved ground speeds (i.e. flight speeds relative to the ground; Spear & Ainley, 1997b; Ventura et al., 2020; Wakefield et al., 2009) and reduce the energetic costs associated with flapping flight (Weimerskirch et al., 2000). Morphological traits such as body mass and associated wing loading (body mass per unit wing area) strongly influence flight performance, and so optimal flight and foraging strategies vary among and within species (Pennycuick, 1982; Spear & Ainley, 1997a; Wakefield et al., 2009). When breeding, many seabirds commute between breeding and often distant foraging sites, constraining where and for how long they can forage before having to return to incubate the egg or care for the chick. Thus, optimal foraging strategies vary over short timescales as a function of variable wind conditions encountered as well as changes in body mass that occur as food is ingested and energy used (Alerstam et al., 2019; Wakefield et al., 2009). This reliance on wind likely has important energetic consequences. While changes in wind influence foraging strategies and breeding success (Thorne et al., 2016; Weimerskirch et al., 2012), the extent to which wind use influences trip success (duration or mass gained) remains poorly understood.
Seabirds are monogamous and the majority exhibit biparental care, meaning that breeding success is dependent on the decisions of both an individual bird and its partner. Research on species with long-lasting pair bonds shows that cooperation between the pair is important to manage the costs of current and future reproduction, and parents often adjust their behaviour in response to their partner (Griffith, 2019; Lessells & McNamara, 2012). Since most seabirds share incubation, the foraging bird must gather not only enough food for its immediate needs but also an additional amount sufficient for its forthcoming incubation stint. Therefore, birds have to trade spending sufficiently long at sea to recover lost body condition against returning before the partner leaves, which may result in egg neglect and nest failure (Chaurand & Weimerskirch, 1994; Ronconi & Hipfner, 2009). The optimal strategy should take into account the condition of the partner, and experiments have shown handicapped Manx shearwaters, Puffinus puffinus, and Antarctic petrels, Thalassoica antarctica, return quicker if their partner is in poorer condition (Gillies et al., 2021; Tveraa et al., 1997). Most studies of pair coordination have focused on species with short (e.g. several hours to a day) trips, where regular changeover and communication facilitate an assessment of the partner's efforts and condition (Kavelaars et al., 2019). For practical reasons, this is much more difficult for species with long (and few) incubation shifts.
Gadfly petrels, Pterodroma spp., have extremely long (up to ca. 20 days) and few incubation bouts (Warham, 1990). While the 35 species have generally been poorly studied, recent tracking studies have shown that birds undertake some of the widest-ranging foraging trips of any seabird, ranging up to 5000 km from breeding colonies (e.g. Clay et al., 2019; Taylor et al., 2020). The vast distances covered by individuals are facilitated by the small (5–10%) amount of time spent resting on the sea surface (Bonnet-Lebrun et al., 2021; Clay et al., 2017; Ramírez et al., 2013). These birds are highly dependent on winds for their gliding flight (Spear & Ainley, 1997b), and appear to take advantage of ocean basin-scale wind circulation patterns to facilitate long trips (Adams & Flora, 2010; Clay et al., 2019; Ventura et al., 2020). Indeed, subtropical Desertas petrels, Pterodroma deserta, also adjust movements to finer-scale variation in winds, resulting in faster trips than predicted if birds just followed basin-scale wind patterns (Ventura et al., 2020).
We present the first tracking study of Juan Fernández petrels, Pterodroma externa, a large gadfly petrel endemic to Isla Alejandro Selkirk (33°46′S, 80°47′W), Juan Fernández Islands, Chile, in the southeast Pacific Ocean. Through combining GPS data with mass measurements of each tracked bird and its partner, we examined the effects of (1) pair mass and (2) wind on the flight behaviour and routes taken by birds, as well as (3) the effect of wind use on overall trip success (trip duration, mass gain). The species is one of the most commonly sighted seabirds in the subtropical southeast Pacific Ocean (Miranda-Urbina et al., 2015) and is abundant in its nonbreeding range in the eastern tropical Pacific Ocean (based on at-sea surveys; Ballance et al., 1997; Spear & Ainley, 1998), but little is known about its movements and foraging behaviour during breeding. It has long (19.5-day) incubation shifts (Brooke, 1987; Warham, 1990), and by analogy with other similarly sized congeners (Clay et al., 2019; Ventura et al., 2020) is predicted to range far from its colony; however, data from temperature loggers attached to chick-rearing adults suggested most foraging is within 1000 km of the colony (Smith, 2008).
We used hidden Markov models (HMMs) to classify major behavioural states at sea (directed flight, area-restricted search [ARS], rest) and overlaid tracks with maps of averaged wind conditions to test the prediction that birds conduct large-scale anticlockwise looping trips that broadly follow prevailing wind patterns in the southeast Pacific Ocean (sensu Clay et al., 2019; Ventura et al., 2020; hypothesis 1a, H1a). We examined the extent to which birds fine-tuned their routes and flight behaviour to the wind they encountered. First, if wind strongly dictated the travel paths of birds, they should adjust initial departure directions to orient favourably with wind directions experienced (H1b). Second, across trips, birds should orient with quartering tail winds (<90° angle between the bearing of the bird and wind direction), facilitating higher ground speeds (H1c; Spear & Ainley, 1997a). Moreover, given adults presumably have some prior knowledge of both the distribution of prey and synoptic winds at the macroscale (100 km to thousands of km; Ventura et al., 2020), detours taken from the most direct route to foraging areas should serve to take advantage of finer-scale variation in wind and allow birds to maximize overall distance travelled (H1d), presumably at low energetic cost. We also examined whether bird and partner mass influence trip decisions; specifically, we expected that birds departing at a lower mass would gain more mass at sea (e.g. Kim et al., 2018; Weimerskirch, 1995; H2a) and those with partners in better condition would travel further and/or take longer trips (Tveraa et al., 1997; H2b). Lastly, we determined whether a more efficient use of tail winds, which presumably promotes faster ground speeds (see H1c) or increased distance covered (see H1d), results in increased trip success, either a shorter trip and/or increased mass gain (H3).
Data Collection
Fieldwork was conducted on Isla Alejandro Selkirk between December 2019 and February 2020, where most birds nest in a mixed colony with Stejneger's petrels, Pterodroma longirostris, on the southern half of the island at an altitude of around 850 m (Brooke, 1987). The majority of study burrows were situated in a grassy area overlooking the pinnacles known as Tres Torres on the northern fringes of the colony (33°46′48″S, 80°47′24″W). Burrows were opened in mid-December using a standard procedure involving cutting a flowerpot-shaped sod (ca. 20 cm in maximum diameter) roughly above the nest chamber, allowing the temporary removal of the incubating bird. The sod was replaced, and a capping stone was put atop for safety. From the bird's perspective the nest was not altered. The procedure permitted regular checks of the nest chamber over the laying period, daily from 18 December until 9 January and thereafter on alternate days until 19 January. The first egg was laid on 17 or 18 December and the median lay date was 31 December (N = 47). After laying, the female, identified by her lower mass (<530 g on the day after laying) and distended cloaca, remained in the burrow for several days (range 1–12, N = 19) until relieved by her partner, with a greater mass (>550 g on the first day after return) and normal cloaca. The incubating bird, found shortly after laying, was ringed, weighed using a Pesola spring balance, and its wing length (maximum chord) measured. To assess daily mass loss of incubating birds for the purpose of estimating departure and return masses, females were weighed again roughly 5 days after laying if still present, while males were also weighed intermittently during their first long incubation stint. Overall, we obtained measurements of mass loss across 21 intervals for 18 birds. The balance was only able to measure masses up to 600 g, so for six focal birds and 13 partners which were measured on return with a mass >600 g, we waited a few days to weigh the bird, and then estimated its return mass using a daily mass loss estimate (see below). Of these, five focal birds could not be reweighed either because the field season had ended or because the birds (N = 2) had deserted by the second visit.
Birds were tagged prior to departure and the majority (23 of 33) of loggers that were successfully retrieved were from females, since males could only be tracked after their first long (ca. 20-day) incubation bout. Twenty-four nanoFix-GEO GPS loggers (30 × 14 mm and 19 mm high, 9 g; PathTrack, Otley, U.K.) and seven FastLoc GPS devices (38 × 13 mm and 12 mm high, 10 g; PathTrack; Clay et al., 2019) were attached to the four central tail feathers using Tesa tape and programmed to obtain a position every 20 min (nanoFix-GEO) or 40 min (FastLoc GPS) (Table 1). In addition, geolocator-immersion loggers (Intigeo C65-SUPER: 14 × 8 mm and 6 mm high, 1 g; Migrate Technology Ltd, Cambridge, U.K.) were attached to a Darvic plastic ring on the tarsus of 34 birds and programmed to sample wet/dry status every 6 s, providing a score out of 50 every 5 min.
Ethical Note
All capture, handling and tagging procedures were in accordance with permits provided by Servicio Agrícola y Ganadero, Chile (permit no. 9793/2019). Ethical approval was provided by the Department of Zoology, University of Cambridge, and the Corporación Nacional Forestal (CONAF; certificate 009/2019) provided authorization to work in the Archipiélago de Juan Fernández National Park. Birds were caught by hand in their burrows and the attachment of devices, always carried out in decent weather by a licensed bird ringer (BTO A 1871MP), took about 15 min. Device retrieval took 5 min. The total mass of the GPS and immersion loggers, rings and attachment materials (ca. 12 g) represented 2.7 ± 0.2% (range 2.2–3.1%) of birds' departure mass (446 ± 41 g, range 387–534 g). There were no detectable differences in trip duration (GPS: 19.6 ± 2.7 days, N = 24; no GPS: 19.4 ± 4.4 days, N = 5; t test: t27 = 0.12, P = 0.903) or mass gain (GPS: 117.5 ± 30.8 g, N = 22; no GPS: 126.7 ± 35.7 g, N = 3; t23 = −0.48, P = 0.637) between birds equipped with both a GPS and immersion logger and those carrying just an immersion logger, although we acknowledge the sample size for the latter group is particularly small.
Data Processing
Data processing and statistical analyses were conducted in R v. 4.0.3 (R Core Team, 2020). We assigned departure and return dates based on nest monitoring, specifically the day after the nights when the bird departed and returned, respectively. If daily burrow checks were not made at the time of departure or return, departure and return dates were verified based on either immersion or GPS data. Mass gain was calculated for each bird by subtracting the departure from the return mass (day after the nights of departure and return, respectively), which we estimated based on the mass at weighing, the time difference (if any) between capture and departure or return and capture and an estimated daily mass loss (Table 2). Daily mass loss while incubating was assumed to be constant (Brooke, 1995) and calculated based on a linear regression applied to multiple measurements of the same individuals (slope ± SE ¼ 6.8 ± 2.1 g/day). Indeed, the slope was remarkably similar among individuals (Fig. A1). The mass of the partner taking over incubation duties from the departing bird was also adjusted upwards if it was not weighed the first day after the night of its return. As the majority (13 of 25) of partners were initially weighed at >600 g and had to be reweighed and the return mass estimated, we consider partner mass to be an estimate and may not be the precise return mass. There was slight variation in wing length among individuals (5.9% difference between smallest and largest) in contrast to departure (38.0%) and return mass (19.5%) and mass gain (172.9%), and wing length did not correlate with mass variables (Spearman rank correlation: departure mass: r S ¼ 0.22, P ¼ 0.281; return mass: r S ¼ 0.24, P ¼ 0.287; mass gain: r S ¼ 0.05, P ¼ 0.819); thus we use mass as a proxy for body condition.
GPS data were run through an iterative forwards/backwards speed filter (90 km/h) in the 'trip' package (Sumner, 2020) to remove locations (N = 2, <0.01% of total) associated with unrealistic flight speeds. We used the 'track2kba' package (Beal et al., 2021) to remove incomplete trips and remove locations at or around the breeding colony, based on a distance buffer of 10 km, and to calculate the maximum range (maximum distance from the colony) and cumulative distance travelled (sum of straight-line distances between consecutive locations). Trip duration was based on data from birds monitored at the colony.
We linked GPS data to hourly wind data downloaded from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis data set (https://doi.org/10.24381/cds.adbb2d47; accessed May 2020) at a spatial resolution of 0.25°. Zonal (V_u) and meridional (V_v) wind components nearest in time to each interpolated tracking location were extracted using the 'raster' package (Hijmans et al., 2021), from which wind speed (V_w = √(V_u² + V_v²)) and wind direction (θ_w) were computed. For each location we calculated the relative wind direction (Δθ), the absolute difference between the bearing of the bird and the wind direction, scaled to between 0° (tail wind) and 180° (head wind). We calculated tail wind support (i.e. wind speed in the direction of travel; V_tw) using the formula V_tw = V_w cos(Δθ), after converting Δθ from degrees to radians. To determine wind conditions experienced at the colony, we extracted the mean hourly wind speed and direction within a 5 km buffer around the colony.
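The following R sketch illustrates these wind calculations. The direction convention (a compass bearing towards which the wind blows, obtained via atan2) is our assumption, as is the example input.

wind_metrics <- function(v_u, v_v, bird_bearing) {
  v_w     <- sqrt(v_u^2 + v_v^2)                   # wind speed (m/s)
  theta_w <- (atan2(v_u, v_v) * 180 / pi) %% 360   # bearing wind blows toward
  # relative wind direction, wrapped so 0 = tail wind and 180 = head wind
  d_theta <- abs(((bird_bearing - theta_w + 180) %% 360) - 180)
  v_tw    <- v_w * cos(d_theta * pi / 180)         # tail wind support
  c(speed = v_w, dir = theta_w, rel_dir = d_theta, tailwind = v_tw)
}

wind_metrics(v_u = 5, v_v = 5, bird_bearing = 45)  # pure tail wind: rel_dir = 0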
We fitted multivariate hidden Markov models (HMMs) to interpolated tracks within the 'momentuHMM' package (McClintock & Michelot, 2018) to identify behavioural states at sea (e.g. Clay et al., 2020; Halpin et al., 2022; Tarroux et al., 2020). Trips were projected to an azimuthal equal-area projection and linearly interpolated to 40 min intervals using the 'adehabitatLT' package (Calenge, 2006). We considered the following three states using two input variables, step lengths and turning angles: directed flight (high speeds, shallow turning angles), area-restricted search (ARS; moderate speeds, moderate to wide turning angles) and rest (low speeds, shallow to moderate turning angles; Appendix Table A1). A gamma distribution was chosen for step lengths and a von Mises distribution for turning angles. We initially incorporated wet events from the immersion data (as a proxy for prey capture on the sea surface) as a third input variable, and also tested whether searching behaviour could be split into dry and wet searching behaviour (the latter indicative of foraging; Carneiro et al., 2022; see Appendix for details). The inclusion of wet events did not improve model fit, and since birds are known to sometimes aerially pursue volant prey (meaning foraging may not be associated with wet activity; Spear & Ainley, 1998), we opted for the more parsimonious three-state model without immersion data. We used the Viterbi algorithm to estimate the most likely sequence of behavioural states from the fitted model (Rabiner, 1989), and initial values of step and angle distributions were chosen based on 100 iterations of random values within a biologically realistic range (Clay et al., 2020). The final HMM was assessed for goodness of fit and autocorrelation using QQ, pseudoresidual and autocorrelation plots.
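A heavily simplified sketch of such a three-state model in momentuHMM is given below; the data object, column names and starting values are illustrative assumptions rather than the configuration fitted in this study.

library(momentuHMM)

# `tracks` is assumed to hold interpolated, projected positions with columns
# ID, x and y (one row per 40 min step)
d <- prepData(tracks, type = "UTM", coordNames = c("x", "y"))

# Plausible starting values for (directed flight, ARS, rest); step lengths in
# metres per step, assuming no zero-length steps
Par0 <- list(step  = c(25000, 8000, 1000,   # state means
                       8000, 5000, 800),    # state SDs
             angle = c(5, 1, 2))            # von Mises concentrations

m <- fitHMM(data = d, nbStates = 3,
            dist = list(step = "gamma", angle = "vm"),
            Par0 = Par0)

states <- viterbi(m)  # most likely state sequence, as in the text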
We partitioned outbound, middle and return stages of trips (Fig. 1) using a methodology similar to Wakefield et al. (2009), based on distance and time thresholds as well as the proportion of time in ARS (see Appendix for details; Fig. A2). For outbound and return stages, we calculated sinuosity (S) as a measure of detours taken by birds from the most direct straight-line route to and from middle stages of trips, using the equation S = 1 − D1/D2, where D1 is the great-circle beeline distance between the first and last location of the entire outbound or return stage, and D2 is the sum of straight-line distances travelled between consecutive locations for each trip stage, both calculated using the 'fields' package (Nychka et al., 2021). A high value indicates that the bird has taken a substantial detour.
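The sinuosity index can be sketched in a few lines of self-contained R (the authors used the 'fields' package; a haversine great-circle distance is substituted here):

haversine_km <- function(lon1, lat1, lon2, lat2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 +
       cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(sqrt(pmin(1, a)))
}

sinuosity <- function(lon, lat) {
  n  <- length(lon)
  d1 <- haversine_km(lon[1], lat[1], lon[n], lat[n])           # beeline D1
  d2 <- sum(haversine_km(lon[-n], lat[-n], lon[-1], lat[-1]))  # along-track D2
  1 - d1 / d2
}

# A three-point dog-leg: a substantial detour gives S well above 0
sinuosity(lon = c(-80, -90, -100), lat = c(-33, -40, -33))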
Statistical Analysis
Effects of wind on foraging and flight behaviour

We plotted trips and core foraging areas in relation to the average values and predictability of wind speed and direction over the study period (Fig. 2) to examine whether trips took advantage of persistent wind fields (H1a). Predictability was based on the inverse of the scaled (between 0 and 1) coefficient of variation of hourly wind speed and direction in each grid cell. We defined core foraging areas as 50% kernel utilization distributions (UDs) of ARS locations calculated in the 'adehabitatHR' package (Calenge, 2006). UDs were calculated for each trip using a smoothing factor (h) of 50 km and a cell size of 5 km and averaged across trips.
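As a rough illustration of the predictability metric, assumed here to be one minus the min-max-scaled coefficient of variation computed per grid cell, the Python sketch below evaluates wind-speed predictability for two hypothetical cells; wind direction would additionally require circular statistics, which are omitted here.

```python
import numpy as np

def wind_predictability(hourly_speed):
    """Predictability of wind speed per grid cell over a study period.

    hourly_speed: array of shape (time, cells). Predictability is taken as
    the inverse of the coefficient of variation (CV), scaled to [0, 1] across
    cells, so 1 = most consistent winds and 0 = most variable winds.
    """
    cv = hourly_speed.std(axis=0) / hourly_speed.mean(axis=0)
    cv_scaled = (cv - cv.min()) / (cv.max() - cv.min())
    return 1.0 - cv_scaled

# Hypothetical cells: steady trade winds vs the variable centre of the gyre.
rng = np.random.default_rng(0)
speeds = np.column_stack([
    rng.normal(9, 1, 1000).clip(0),   # steady cell
    rng.normal(5, 3, 1000).clip(0),   # variable cell
])
print(wind_predictability(speeds).round(2))
```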
We tested whether the average θ_w experienced during the first 3 h of each trip explained bird departure bearings using circular–circular regression in the 'circular' package (Lund et al., 2017). Generalized additive mixed models (GAMMs) were then applied to all locations assigned as directed flight to model the potentially nonlinear effects of V_w and Δθ on ground speeds (H1c), using the 'mgcv' package (Wood, 2022). Ground speed (m/s) was calculated from step lengths and took a Gaussian error distribution. We included the factor DayNight to test for differences by day and night, calculated using the 'maptools' package with civil twilight (6° below the horizon) included as day. The two wind variables were not collinear and were standardized by subtracting the mean and dividing by the standard deviation. We included an individual identity random intercept as a smooth term and an autoregressive moving average (ARMA) autocorrelation term to control for serial autocorrelation. We compared a series of candidate models containing DayNight, the linear and smoothed effects of V_w, Δθ and the two combined, as well as their interaction in the form of a tensor product smooth, using the Akaike information criterion (AIC), with the best-supported model being that with the lowest AIC. The number of knots for smooths was set to five to reduce overfitting, and smooths were produced using cubic regression splines with shrinkage, allowing variables to be penalized out of the model during fitting (Wood, 2017), to reduce the risk of overparameterization. As a measure of model fit we calculated the root mean squared error (RMSE), the square root of the mean of the squared errors, using k-fold cross-validation whereby each fold was a separate individual (N = 18). Models with the lowest RMSE were deemed the best fitting. We tested for significant differences in ground speeds by day and night using Tukey's post hoc comparisons in the 'multcomp' package (Hothorn et al., 2016).
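The GAMMs themselves were fitted with 'mgcv' in R, but the cross-validation scheme is generic. The sketch below (Python, with an ordinary least-squares model standing in for the GAMM and all data simulated) illustrates leave-one-individual-out RMSE, where each fold holds out every location from one bird.

```python
import numpy as np

def loio_rmse(X, y, ids, fit, predict):
    """Leave-one-individual-out RMSE: each fold holds out one bird's data."""
    errs = []
    for bird in np.unique(ids):
        test = ids == bird
        model = fit(X[~test], y[~test])
        errs.append(y[test] - predict(model, X[test]))
    e = np.concatenate(errs)
    return np.sqrt(np.mean(e ** 2))

# Stand-in model: ordinary least squares with an intercept column.
fit = lambda X, y: np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
predict = lambda b, X: np.c_[np.ones(len(X)), X] @ b

# Hypothetical data: ground speed increasing with wind speed, three birds.
rng = np.random.default_rng(1)
ids = np.repeat([1, 2, 3], 50)
X = rng.uniform(0, 15, (150, 1))               # wind speed, m/s
y = 8 + 0.8 * X[:, 0] + rng.normal(0, 1, 150)  # ground speed, m/s
print(round(loio_rmse(X, y, ids, fit, predict), 2))
```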
We also subsequently ran a series of models on the outbound and return stages of trips to test whether V_w and the ground speeds of birds differed between the two stages (see Appendix for details).
Effects of bird and partner mass on trip characteristics and mass gain
We ran three linear models to investigate the effects of bird and partner mass on trip duration, maximum range and mass gain (H2a-b), with each metric taking a Gaussian distribution. We did not consider cumulative distance travelled as it was highly correlated (Spearman rank correlation: r_S > 0.7) with the other two variables (see Results). Bird and partner mass were not significantly correlated (Pearson correlation: r = 0.30, P = 0.143), permitting their inclusion, and their importance was assessed using backwards model selection and likelihood ratio tests in the 'lmtest' package (Hothorn et al., 2022).
Effects of route selection and tail wind support on trip outcomes
A series of tests was conducted to examine links between V_tw, path sinuosity, trip duration and mass gain. We first ran Spearman rank correlations to test for significant correlations among route sinuosity, durations and cumulative distances travelled during outbound and return stages (H1d). As these variables were not significantly correlated (see Results), we then ran a series of linear models examining the effect of average V_tw on sinuosity, durations, distances travelled and average ground speeds during outbound and return stages. Second, we ran three linear models with trip duration, mass gain and mass gain/day as response variables to test the effect of V_tw (H3). For the mass gain and mass gain/day models, bird mass was also included as a covariate (see above), as was the proportion of the trip spent in ARS, to test the alternative hypothesis that mass gain was related to foraging activity rather than wind use. Lastly, we ran a model with mass gain/day as the response and V_tw during outbound and return stages as separate covariates (which were not correlated). All linear models took a Gaussian distribution and likelihood ratio tests were used to select significant variables. Sinuosity during the return stage was square-root transformed to conform to the assumption of a normal distribution. Unless otherwise specified, means are provided ± 1 SD.
RESULTS
We tracked Juan Fernández petrels during incubation, recording 18 complete GPS trips (13 with geolocator-immersion loggers) and another 14 with geolocator-immersion loggers only. Sample sizes were reduced because (1) several devices were lost at sea or failed to download or (2) birds were not recaptured before fieldwork ended (Table 1). Trips based on the larger sample of colony monitoring data lasted 19.8 ± 3.4 days (range 13–30 days) and complete GPS trips lasted 20.4 ± 2.8 days (16–24 days), with birds travelling up to 13 178 km (mean ± SD = 10 741 ± 1672 km) and ranging up to 4166 km (3404 ± 630 km) from the colony. Trips generally took the form of looping anticlockwise journeys across the southeast Pacific Ocean towards a large oceanic region between the Foundation Seamounts and the East Pacific Rise (ca. 35–45°S and ca. 130–100°W), with more concentrated movements at middle stages, presumably associated with foraging around the Subtropical Convergence (Figs. 1 and 2a). Two birds went further south than the rest (ca. 50–55°S), with one reaching Point Nemo, the point in the world's oceans furthest from land (Fig. 1).
Foraging Behaviour during Outbound and Return Stages
Birds spent 45.8 ± 7.5% of their time in directed flight, 40.0 ± 4.9% in ARS and only 14.2 ± 3.6% resting. Immersion activity data corresponded well with the states assigned from GPS data: the most (94.3 ± 1.1%) and least (24.8 ± 4.8%) time was spent dry during the directed flight and rest states, respectively, and a high percentage (80.9 ± 3.1%) of ARS was also spent dry (i.e. in flight). Somewhat surprisingly, the number of wet events/h was highest during rest (1.78 ± 0.42), closely followed by ARS (1.57 ± 0.13), with a considerable number also occurring during directed flight (1.19 ± 0.11). This indicates that, at a 40 min GPS resolution, fine-scale foraging behaviour may also be captured within the rest state, while birds also make frequent feeding attempts during more directed travel (Fig. 2a). More time appeared to be spent in ARS during the outbound (25.2 ± 10.8%) than the return (19.3 ± 7.0%) stage, although the majority of ARS (63.8 ± 17.6%) occurred during the middle stages, a period that represented 41.1 ± 12.3% of total trip time (outbound: 33.1 ± 9.8%; return: 25.8 ± 6.1%).
Effects of Wind on Foraging and Flight Behaviour
Birds used predictable southeasterly trade winds to assist outbound journeys in a northwesterly or westerly direction, while returning birds headed southeast into the region of stronger, predictable westerlies associated with the Antarctic Circumpolar Current (H1a; Fig. 2). In contrast, middle stages of trips occurred in the southern portion of the South Pacific Gyre, where wind speeds are weaker and more variable, and wind directions are also more variable.
The best GAMM explaining variation in ground speeds over the whole trip included the tensor smooth of V_w and Δθ as well as DayNight (Table 3). Ground speeds increased with V_w and were highest (ca. 20 m/s) under the strongest winds encountered (10–15 m/s; Fig. 4). Ground speeds were similarly high for tail winds and cross-winds (<60°) and lowest for head winds (>90°; H1c). The two-dimensional tensor interaction showed that the optimum conditions for fast flight were V_w > 5 m/s and Δθ < 50–60° (Fig. 4c). Birds' ground speeds were 1.3 ± 0.1 m/s slower at night than by day (Tukey's post hoc test: P < 0.001). Ground speeds were slightly higher (modelled difference of 0.92 ± 0.04 m/s) during the return than the outbound stage, mainly due to the faster V_w experienced (0.75 ± 0.03 m/s), rather than because the birds flew significantly faster for a given V_w as a result of their increased return mass (see Appendix for details).
Effect of Route Selection and Tail Wind Support on Trip Outcomes
After controlling for significant effects of departure mass (above), we found no effect of average V_tw over the whole trip on trip durations (χ²₁ = 2.31, P = 0.128) or mass gain (χ²₁ = 0.90, P = 0.342; Fig. 6a). However, the negative relationship between V_tw and overall trip duration became significant when we removed one individual that experienced anomalously low (i.e. negative) wind support on the return voyage (χ²₁ = 4.62, P = 0.032; Fig. 6d). There was large variability in the distances travelled (range 2450–6743 km) and the time taken (range 2.7–11.7 days) to complete the outbound stage (Fig. 6e, i). Birds that took looping detours covered greater distances (r_S = 0.74, P < 0.001) and took more time to do so (r_S = 0.64, P = 0.004), but did not appear to benefit more from V_tw (χ²₁ = 0.04, P = 0.846; Fig. 6i, k; H1d). Instead, birds with greater V_tw had shorter outbound commute durations (χ²₁ = 4.51, P = 0.034) due to the faster ground speeds achieved (χ²₁ = 19.94, P < 0.001; Fig. 6e, g). Outbound V_tw did not influence overall mass gain (χ²₁ = 1.22, P = 0.268), but was associated with increased mass gain per day at sea (χ²₁ = 5.02, P = 0.025; Fig. 6b; H3). Return commutes were shorter and less variable in duration (range 3.3–7.5 days), mainly because birds experienced faster V_w (see above) rather than because they covered less distance (range 2715–7028 km; Fig. 6f, j). Yet, while higher V_tw enabled faster ground speeds (χ²₁ = 6.77, P = 0.009; Fig. 6h), there was no link between V_tw and the duration of return commutes (χ²₁ = 2.67, P = 0.102), overall mass gain (χ²₁ = 0.42, P = 0.519) or mass gain per day at sea (χ²₁ = 0.50, P = 0.481; Fig. 6c, f).
[Figure: departure bearing (°) plotted against wind direction (°).] The association between wind direction and departure bearing is plotted linearly but was modelled using a circular–circular regression (P = 0.031).
DISCUSSION
Our study reveals that incubating Juan Fernández petrels make long and extremely wide-ranging foraging trips, which are adjusted according to regional and local winds as well as the condition of the bird and its partner. Birds travelled between 2000 and >4000 km from Isla Alejandro Selkirk to forage in one of the most remote regions on Earth (Point Nemo). To do so, they exploited predictable trade and westerly winds on the outbound and return stages of trips, respectively, allowing them to maximize ground speeds and reduce the time spent travelling to and from foraging areas. Despite the broad similarity in foraging areas used by most individuals, there was substantial variation in the routes taken to and from those areas. Individuals that experienced greater tail wind support on outbound journeys reached foraging areas more quickly and, on returning to the colony, had achieved a higher mass gain per day spent at sea. Moreover, birds that were lighter on departure gained more mass at sea, and those with heavier partners ranged further from the colony. These results suggest that the decision of how far to go on foraging trips is based on the condition of the pair and on wind patterns, but that the success of trips is linked to how birds use winds on foraging commutes.
Large-scale Foraging Behaviour and Use of Prevailing Winds
Through GPS tracking, we have shown that incubating birds took long (16–24 days) foraging trips westwards and ranged much further than previously documented by limited at-sea surveys (Miranda-Urbina et al., 2015; Shirihai et al., 2015), to a region of concentrated search behaviour in the central South Pacific Ocean (35–45°S, 130–100°W). No tracked birds headed east to the productive waters associated with the Humboldt Upwelling (<850 km away), matching other evidence that gadfly petrels often do not exploit the productive areas closest to their breeding colonies (Taylor et al., 2020; Ventura et al., 2020). This probably reduces competition with other seabirds (e.g. Ballance et al., 1997) such as pink-footed shearwaters, Ardenna creatopus, and sooty shearwaters, Ardenna grisea, that feed off mainland Chile (Carle et al., 2019; Miranda-Urbina et al., 2015), and is possible because the petrels' greater mobility allows them to exploit areas too distant for other seabirds to reach. As we predicted (H1a), birds took long looping anticlockwise routes, matching those of Murphy's petrels, Pterodroma ultima, and wandering albatrosses, Diomedea exulans, which orient with prevailing anticyclonic winds in the South Pacific and Southern Oceans, respectively (Clay et al., 2019; Weimerskirch et al., 2000). By initially heading northwest, birds take advantage of predictable trade winds. Similarly, by taking a southerly route back, birds can use winds associated with the Antarctic Circumpolar Current (below 40°S, the 'Roaring Forties'), which persistently blow west to east. In contrast to other subtropical gadfly petrels, which on looping trips do not appear to concentrate searching in particular areas but instead forage intermittently during travel (Clay et al., 2019; Halpin et al., 2022; Ventura et al., 2020), Juan Fernández petrels had long periods of ARS behaviour in the middle portions of trips (representing ca. 40% of their time), during which landing rates were substantially higher than during directed travel. We note, though, that at the scale of our interpolated (40 min) GPS resolution many wet events also occurred during the rest state, which indicates the HMM may be misclassifying some finer-scale foraging behaviour as resting. Regardless, our results suggest birds shared a foraging region, although individuals' foraging areas were spread over a wide longitudinal band and did not appear to target a particular topographical feature. Juan Fernández petrels are known to be social foragers, and in their nonbreeding grounds in the eastern tropical Pacific Ocean they often feed in multispecies flocks in association with subsurface predators such as oceanic dolphins (Delphinidae) or tuna (Scombridae), which make schooling fish available to aerial predators (Au & Pitman, 1986; Ballance et al., 1997; Ribic et al., 1997). While little is known about the marine predator community in the central South Pacific Ocean where ARS activity clustered (e.g. Clay et al., 2017), this region is not used by commercial longline fisheries and so presumably does not have high tuna abundance (Lehodey et al., 2015). This suggests that breeding birds may be using alternative strategies to feed, such as targeting oceanic frontal zones around the Subtropical Convergence, where subsurface upwelling supports greater primary productivity, to feed on fish or squid that migrate to the sea surface (Weimerskirch, 2007).
Moreover, these foraging areas in the southern arc of the South Pacific Gyre are associated with low and variable wind speeds, which may facilitate manoeuvrability, while the flatter sea surface may help birds locate floating prey.
Effects of Wind on Flight Behaviour and Route Selection
Given their high aspect ratios, gadfly petrels are arguably the seabirds best adapted for efficient flight (Spear & Ainley, 1997a). In line with our predictions, birds tended to orient on departure with the prevailing wind directions (H1b), and across trips oriented favourably with tail and cross-winds to maximize achieved ground speeds of ca. 20 m/s (ca. 70 km/h; H1c). Birds had the highest ground speeds (10–20 m/s) in moderate-to-high wind speeds (above 5 m/s) and at a relative wind direction of less than ca. 60°. Almost 90% of bird locations were oriented within 90° of the wind direction, indicating that birds avoided head winds where possible. Gadfly petrels, which have lower wing loading and greater profile drag than albatrosses, may be less able to fly into head winds (Pennycuick, 1982; Spear & Ainley, 1997b), demonstrated here by the extremely low ground speeds attained by birds flying into head winds. Birds generally oriented with quartering tail winds (ca. 55°), similar to Desertas petrels (Ventura et al., 2020), likely because at finer scales they tack back and forth across the tail wind component, turning into cross- and head winds to gain lift and then using tail winds to maximize ground speeds during the longer descent phase (Kempton et al., 2022; Sachs et al., 2013). Indeed, Spear and Ainley (1997b) noted that unlike albatrosses, which rely on energy from waves for slope soaring (Pennycuick, 1982), gadfly petrels likely use a dynamic soaring flight strategy whereby they gain energy by tilting from one side to the other with the wind perpendicular to their wings, like a sailboat, aided by their large wing areas relative to their mass. By flying across the wind, birds likely also increase the chance of locating prey visually and enhance their sampling of air currents for odour plumes associated with prey (Nevitt et al., 2008).
Despite initially orienting favourably with tail winds, individuals varied widely in the degree of tail wind support they received across outbound journeys. Those individuals taking a more northerly outward journey appeared to orient more with tail winds. However, in contrast to our prediction (H1d), birds that took long detours from the most direct route did not appear to use winds more favourably (i.e. using tail winds) and tail wind support did not explain the distances travelled between the colony and foraging areas.
Changes in Mass and Pair Coordination of Foraging Trips
On average, foraging birds increased their mass by 25% of departure mass. As predicted (H2a), birds with a lower mass at departure gained more mass, in line with previous studies (e.g. Kim et al., 2018; Weimerskirch, 1995). However, they did not do this by taking longer trips, ranging further from the colony or spending a greater proportion of time in ARS behaviour. This contrasts with other seabird studies in which birds departing at a lower mass spend more time at sea (e.g. short-tailed shearwaters, Ardenna tenuirostris: Carey, 2011; Manx shearwaters: Gillies et al., 2022) and could be explained by the fact that petrels may aim for roughly the same target mass before returning, but that their return mass (and mass gain) is influenced in an unpredictable manner by the success of outbound and return commutes (Brooke, 2004). We also found that birds responded to their partner's condition at departure, as birds with higher partner mass travelled further from the colony (by ca. 1000 km per 100 g; supporting H2b), but did not make longer-lasting trips. Owing to logistical constraints, the majority of partner masses (birds that arrived weighing >600 g) were estimated using a daily mass loss estimate based on all individuals, rather than measured directly, so should be treated with some degree of caution. However, given that birds with lighter partners (<570 g) did not travel as far as those whose partners had moderate masses (around 600 g), and that the potential error associated with back-calculating return mass was likely an issue only for the heaviest birds (for which we had to wait several days to weigh), we believe this finding is robust to measurement constraints. This result contrasts with our prediction and with the findings of a handicapping study which showed that Antarctic petrels compensate for a reduction in partner condition through shorter trips (Tveraa et al., 1997). Foraging petrels with fewer time constraints (due to increased partner condition) might travel further to forage because the greater distances covered provide birds with more foraging opportunities than a direct beeline route westwards. These routes may also allow them to use more predictable, but perhaps not necessarily stronger, winds. Ultimately, our observational approach and modest sample size do not allow us to tease apart between- and within-pair effects; regardless, our study reveals that birds assess their partner's condition, likely to improve coordination (Gillies et al., 2021).
How birds are able to assess their partner's condition when there is often very little time together in the burrow at night at change-over remains a mystery, and future studies should explore the mechanisms through which birds reveal their physiological status, whether directly through vocalizations or indirectly through smell, touch or sight (Boucaud et al., 2016; Kavelaars et al., 2019).
Figure 6. Relationships between average tail wind support (V_tw, m/s) and trip characteristics and mass gain over the whole trip and for outbound and return stages of trips, separately: (a–c) mass (g) gained per day at sea, (d–f) durations (days), (g, h) ground speeds (m/s), (i, j) cumulative distances travelled (km) and (k, l) sinuosity (an index between 0 and 1 of detours to and from middle stages of trips). Significant effects of V_tw on metrics are shown by a modelled black line with 95% confidence intervals in grey shading. The dotted line in (d) shows that the relationship was significant with the removal of an individual (triangle) that experienced anomalously low tail wind support during the return voyage.
Tail Wind Support and Trip Outcomes
Several studies have established direct links between foraging tactics and energy balance (Tarroux et al., 2020; Weimerskirch, 1995); yet, while wind use dictates foraging energetics (e.g. Weimerskirch et al., 2000), the effects on trip success are not well established. We did not find a direct link between tail wind support and overall mass gain, indicating that orientation with wind directions that promote the fastest travel speeds (i.e. favourable winds), at the fairly coarse scale of our study (40 min GPS resolution), does not appear to directly reduce the amount of energy stored and, by inference, energy expended. However, stronger tail wind speeds on outbound journeys substantially reduced the time taken to reach the middle stage of trips, through their direct effect on ground speeds. Those individuals that were able to reduce the time taken to reach foraging areas had increased mass gain per day at sea as a direct result of their shorter trips, which suggests that reaching core foraging areas faster provides greater rewards than a longer circuitous trip through less productive waters. In contrast, while tail wind support facilitated fast movement on return voyages, wind speeds were higher and less variable, meaning that all birds returned rapidly to the colony. Ultimately, petrels seek to optimize routes according to winds, because doing so reduces time costs (Ventura et al., 2020). That said, we have shown that individuals experience variable winds, which often prevents an optimal strategy being achieved, and which may have consequences for breeding success.
Conclusions
Our study provides novel insights into the mechanisms promoting and constraining the foraging strategies of gadfly petrels, including their use of private (own and partner mass) and public (wind) information. We suggest that decisions involving where to go are broadly predetermined based on knowledge of prevailing winds and the location of oceanic prey aggregations, while the decisions involving how far to go and for how long are related to the condition of the pair. Crucially, we have also shown that while gadfly petrels are extremely well adapted for efficient flight, wind speed and direction provide a stochastic set of conditions to which birds have to adapt each time they set out on a foraging trip, and which either facilitate or impede foraging journeys, with downstream consequences for the success of trips.
Data Availability
Raw GPS tracking data can be viewed and requested on the BirdLife International Seabird Tracking Database (http://seabirdtracking.org/mapper/?dataset_id=1661).
Declaration of Interest
The authors declare that they have no conflict of interest.
Deriving Wet Events from Immersion Data
We defined a wet event based on several criteria: (1) a 5 min period when at least one wet sample was recorded (>0) following a 5 min period spent entirely dry (0); (2) if a 5 min period contained dry activity (1–49) and was sandwiched between periods spent entirely wet (50), then a take-off and a landing event must have occurred; (3) if there were several consecutive periods when the bird was both dry and wet (1–49), we used a sliding window to assess the minimum number of take-offs and landings that must have occurred.
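A simplified sketch of this logic (Python; assuming each 5 min block stores the number of wet samples, with 0 = entirely dry and 50 = entirely wet, and implementing only the first two criteria explicitly):

```python
import numpy as np

def min_landings(wet_counts):
    """Lower-bound count of landings from 5 min immersion blocks.

    wet_counts: integer array, number of wet samples per block
                (assumed scale: 0 = entirely dry, 50 = entirely wet).
    A conservative reading of the criteria: a landing is scored whenever
    the bird must have gone from flight to the sea surface.
    """
    landings = 0
    for prev, cur in zip(wet_counts[:-1], wet_counts[1:]):
        if prev == 0 and cur > 0:
            landings += 1            # criterion 1: wet after an entirely dry block
    for a, b, c in zip(wet_counts[:-2], wet_counts[1:-1], wet_counts[2:]):
        if a == 50 and 0 < b < 50 and c == 50:
            landings += 1            # criterion 2: a dry spell between fully wet
                                     #   blocks implies one take-off and one landing
    # Criterion 3 (a sliding window over runs of mixed blocks) is omitted here;
    # it only refines the bound within long sequences of partially wet blocks.
    return landings

print(min_landings(np.array([0, 0, 3, 50, 12, 50, 0, 0])))
```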
Incorporating Immersion Data Into Hidden Markov Models
We tested whether the inclusion of immersion data in hidden Markov models (HMMs) improved state classification (Carneiro et al., 2022) and could better explain the foraging behaviour of Juan Fernández petrels at a 40 min GPS resolution. We summed the number of wet bouts occurring 20 min either side of each interpolated GPS location and included that sum in HMMs as a third input data stream, along with the step lengths and turning angles derived from the GPS data, taking a Poisson error distribution. We compared our simple three-state model with and without wet bouts to a four-state model whereby area-restricted search (ARS) was split into ARS with no wet bouts and ARS with wet bouts, the latter indicative of foraging on the sea surface (hereafter foraging). We also ran a four-state model without immersion data. For both four-state models, we specified slightly smaller step lengths for the foraging than for the ARS state. The four models were compared using the Akaike information criterion (AIC). However, as the AIC tends to favour models with a greater number of states regardless of whether or not they are biologically informative (Pohle et al., 2017), we also manually screened tracks, examined pseudoresidual plots and checked model parameters and activity budgets for each state (Table A2).
As expected, the two four-state models had lower AICs than their three-state counterparts (Table A2). The inclusion of wet bouts resulted in models with higher AICs and did not appear to substantially change the step length and angle parameters; nonetheless, the number of wet bouts was higher during rest in the three-state model, and higher during foraging and ARS than during rest and directed flight in the four-state model, indicating higher landing rates in these states. However, pseudoresidual plots indicated that models with wet bouts fitted the data poorly. Lastly, the inclusion of a fourth state appeared to split directed flight into two states (i.e. two states with high step lengths), rather than splitting the ARS state as intended. As such, we deemed the original three-state model without immersion data the most biologically informative and parsimonious.
Segmenting Trips Into Outbound, Middle and Return Stages
We partitioned outbound, middle and return stages of trips (Fig. 1) using a methodology similar to that of Wakefield et al. (2009), based on distance and time thresholds as well as the proportion of time in ARS. We first plotted the proportion of the maximum distance reached as a function of the proportion of total trip time for each trip (Fig. A2a), which showed that most ARS behaviour occurred in the distal portions of trips. Using a 12 h moving window, we defined the start of the middle (end of the outbound) stage as the first location at which at least 75% of time was spent in ARS and the distance from the colony was at least 75% of the maximum distance. The end of the middle (start of the return) stage was defined as the last location meeting the specified criteria (Fig. A2b).
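A sketch of this segmentation rule (Python; the 18-location window corresponds to 12 h of 40 min fixes, and applying the thresholds to the window as a whole, as well as the toy trip, are assumptions of this illustration):

```python
import numpy as np

def segment_trip(dist_km, in_ars, window=18):
    """Return indices bounding the middle stage of a trip.

    dist_km : distance from the colony at each 40 min location
    in_ars  : boolean, location classified as ARS by the HMM
    window  : locations per moving window (18 x 40 min = 12 h)
    Start of the middle stage: first window with >= 75% of time in ARS and
    distance >= 75% of the trip maximum; end: the last such window.
    """
    thresh_d = 0.75 * dist_km.max()
    ok = np.array([
        in_ars[i:i + window].mean() >= 0.75
        and dist_km[i:i + window].min() >= thresh_d
        for i in range(len(dist_km) - window + 1)
    ])
    idx = np.flatnonzero(ok)
    if idx.size == 0:
        return None
    return idx[0], idx[-1] + window - 1   # outbound ends, return begins

# Toy trip: commute out, concentrated ARS far from the colony, commute back.
d = np.concatenate([np.linspace(0, 4000, 60), np.full(60, 4000.0),
                    np.linspace(4000, 0, 60)])
ars = np.concatenate([np.zeros(60, bool), np.ones(60, bool), np.zeros(60, bool)])
print(segment_trip(d, ars))
```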
Modelling Ground Speeds and Wind Speeds Encountered in Outbound Versus Return Stages
We ran a series of generalized additive mixed models (GAMMs) in the 'mgcv' package (Wood, 2022) to test whether birds had higher ground speeds on the return than the outbound stage of trips, and whether this was due to (1) stronger winds encountered on their more southerly return routes or (2) faster air speeds (ground speed minus wind speed), potentially due to an increased mass (and higher wing loading). Five models were run on just the outbound and return stages, with the following covariates: (1) DayNight; (2) TripStage (a factor encoding outbound or return stage) and DayNight; (3) the tensor smooth of wind speed and relative wind direction plus DayNight (the best-fitting model explaining ground speeds; Table 3); (4) the same as (3) but with TripStage; and (5) the same as (4) but with TripStage as an interaction on the tensor smooth. The best models were selected using AIC and RMSE as specified in the main text. We also ran a linear mixed model (LMM) to test whether wind speeds encountered by birds differed between outbound and return stages. We compared a model with wind speed as the response (taking a Gaussian error distribution), the factor TripStage as a covariate and individual identity as a random intercept, with the null model (i.e. no covariates), using a likelihood ratio test in the 'lmtest' package (Hothorn et al., 2022).
The best GAMM was that with the tensor effect of wind speed and relative wind direction and the factors DayNight and TripStage, but not the interaction between the tensor product smooth and TripStage (Table A3). Ground speeds were slightly higher (modelled difference of 1.07 ± 0.06 m/s) during the return than the outbound stage, likely because birds experienced marginally faster wind speeds on average (modelled difference of 0.72 ± 0.05 m/s) on return stages (LMM: χ²₁ = 197.47, P < 0.001), rather than because they flew faster for given wind conditions (due to their increased wing loading).
Table note: Mean values are provided with standard deviations in parentheses, except for turning angles, for which concentration parameters are provided in parentheses.
Table A3. Model selection for generalized additive mixed models (GAMMs) comparing ground speeds during outbound and return stages, while controlling for the effects of wind speed and direction and for differences by day and night.

Figure A1. Mass loss over time for each individual based on multiple measurements on the nest. Each individual is a different colour and measurements are indicated by coloured dots.

Table A2. Comparison of three- and four-state models with and without the inclusion of the number of wet bouts from immersion data as a third input variable, along with step lengths and turning angles derived from the GPS data. For the percentage of time in each state and the input variable columns, values are presented for each state in the order that the states are listed.
Step length values are means. Higher values of angle concentration represent more concentrated turning angles. AIC = Akaike information criterion; ARS = area-restricted search.
STUDY OF HYBRIDIZATION OF COMPLEMENTARY SINGLE-STRANDED POLYNUCLEOTIDES POLY(rA) AND POLY(rU)
The interaction of the intercalators ethidium bromide (EtBr) and methylene blue (MB) and the groove binding compound Hoechst 33258 (H33258) with the single-stranded synthetic polyribonucleotides poly(rA) and poly(rU) has been studied by the UV-melting method at ionic strengths of 0.04 M and 0.1 M. Mixing poly(rA) and poly(rU) at equimolar concentrations was shown to result in hybridization with formation of the double-stranded (ds-) structure poly(rA)-poly(rU). It was revealed that EtBr stimulates hybridization and stabilizes the formed ds-structure poly(rA)-poly(rU) to a greater degree than MB. It was also found that the hybridization process and the affinity of EtBr and MB depend on the ionic strength of the solution, and that these processes occur much more effectively at an ionic strength of 0.04 M. On the other hand, it was shown that the groove binding ligand H33258 has practically no effect on the stabilization of the formed ds-structure poly(rA)-poly(rU).
Introduction. Nowadays, one of the most topical and important subjects in molecular biophysics is the study of the interaction of low-molecular-weight compounds with nucleic acids (NA) [1,2]. These interactions, along with their fundamental value, lie at the basis of bioanalytical methods that use NA of various lengths and sequences for molecular recognition [3,4]. Bioanalytical methods are based on biosensor technologies, which permit solving a number of important biological, medical and genetic problems [5,6]. Genochips and genosensors belong to these instruments; they make it possible to detect the interaction of complementary NA chains of different lengths and sequences. This interaction occurs through the formation of stable adenine-thymine (uracil) and guanine-cytosine pairs; the process is called hybridization and is highly specific. In fact, using genosensors (genochips), one can register the complementary binding of different NA chains and modulate this process by varying different factors [7]. Hybridization between various ss-molecules of NA can be assessed by different physical methods: absorption or fluorescence spectroscopy, or the UV-melting method. It should be mentioned that hybridization is the most important phase of operations on NA-chips or NA-sensors [8,9].
NA are important biological targets for numerous compounds. Many of these compounds have applied value, since they make it possible to control transcription processes and the reactions of DNA with RNA, including the formation of hybrid DNA-RNA helices and of triplexes of ds-DNA and RNA. Among such compounds, intercalators are of special value: since intercalation is possible only into ds-NA, intercalators can also serve as markers of the hybridization process. In particular, the optical and fluorescence properties of the intercalators ethidium bromide (EtBr), acridine orange (AO) and methylene blue (MB) change upon intercalation, which can be used for signal registration in biochips and biosensors [10–13].
The present work is aimed at studying the hybridization process between the complementary ss-polynucleotides poly(rA) and poly(rU), both in the presence and in the absence of EtBr and MB, as well as of the non-intercalator Hoechst 33258 (H33258).
UV-melting of hybrid poly(rA)-poly(rU) duplexes and of their complexes with the mentioned ligands was carried out on a Unicam SP-8-100 UV-VIS spectrophotometer with thermostatted cell holders. Sample solutions were placed in hermetically closed quartz cuvettes with a 1 cm path length. Heating of the solutions was controlled via a Temperature Program Controller SP 876 Series 2. All experimental data and melting curves were processed in Microsoft Excel. The experimental error was about 5–6%.
Results and Discussion. The effect of the intercalators EtBr and MB and of the groove binding compound H33258 on the hybridization of the synthetic complementary homopolyribonucleotides poly(rA) and poly(rU) was studied by the UV-melting method. Fig. 1 presents the melting curves of hybrid ds-poly(rA)-poly(rU) and of its complexes with MB at ionic strengths of 0.04 M (curves 1 and 2, respectively) and 0.1 M (curves 3 and 4, respectively). The data revealed that, when mixed at equimolar concentrations, poly(rA) and poly(rU) hybridize. Hybridization occurs practically completely, with formation of a ds-structure, as a result of which the hypochromic degree amounts to 36–40%. It should be mentioned that the hyperchromic degree of ds-poly(rA)-poly(rU) upon denaturation is about 35–36%, as obtained in [18].
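For reference, the hyperchromic degree and the melting temperature can be read off a recorded melting curve as in the following sketch (Python; the sigmoidal curve is synthetic, and defining Tm as the transition midpoint is an assumption of this illustration):

```python
import numpy as np

def melting_metrics(temps, absorbance):
    """Hyperchromic degree and melting temperature from a UV-melting curve.

    temps, absorbance: absorbance (e.g. at 260 nm) versus temperature for a
    duplex sample. The hyperchromic degree is the relative absorbance rise on
    denaturation; Tm is taken here as the temperature of the transition midpoint.
    """
    a_low, a_high = absorbance.min(), absorbance.max()
    hyperchromicity = 100.0 * (a_high - a_low) / a_low       # percent
    half = (a_low + a_high) / 2.0
    t_m = np.interp(half, absorbance, temps)  # assumes a monotonic melting curve
    return hyperchromicity, t_m

# Synthetic sigmoidal melting curve with a ~36% hyperchromic effect near 60 C.
T = np.linspace(20, 90, 141)
A = 0.50 + 0.18 / (1 + np.exp(-(T - 60.0) / 2.0))
h, tm = melting_metrics(T, A)
print(f"hyperchromicity = {h:.1f}%, Tm = {tm:.1f} C")
```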
The hybridization process between poly(rA) and poly(rU) takes place both in the absence and in the presence of MB. As a result of the formation of the ds-structure, the melting curves of the poly(rA)-poly(rU)-MB complexes (curve 2) were shifted toward higher temperatures compared to the melting curve of hybrid poly(rA)-poly(rU) (curve 1) at an ionic strength of 0.04 M, as is evident from Fig. 1. Evidently, MB, on binding to the hybrid ds-polynucleotide, stabilizes its structure. However, at an ionic strength of 0.1 M, the shift of the melting curves of the poly(rA)-poly(rU)-MB complexes toward the high-temperature region is small. This indicates that the interaction of MB with poly(rA)-poly(rU) depends on the structural state of this polynucleotide and is more favorable under conditions in which poly(rA)-poly(rU) is in the ds-state and is accessible for the binding of ligand molecules [19]. Although MB is an intercalator, it does not always bind to NA by this mode. The obtained data indicate that at an ionic strength of 0.04 M the ds-helix of poly(rA)-poly(rU) is more accessible for MB binding than at 0.1 M. Most likely, at higher ionic strengths of the solution poly(rA)-poly(rU) adopts a more compactly wound state, as a result of which it becomes less accessible for binding of this ligand by the intercalation mode. This allows us to conclude that MB can fully intercalate into ds-NA only when the helix is relatively unwound.
The melting curves of the complexes of hybrid poly(rA)-poly(rU) with the classical intercalator EtBr were obtained analogously and are presented in Fig. 2. As is evident from the figure, the stabilization of the ds-structure of poly(rA)-poly(rU) by this ligand is more pronounced than for MB, since a significant shift toward the high-temperature region, relative to the melting curve of poly(rA)-poly(rU), takes place at an ionic strength of 0.04 M. Moreover, the shift of the melting curves of the poly(rA)-poly(rU)-EtBr complexes at an ionic strength of 0.04 M exceeds the analogous shift at 0.1 M. Most likely, in the case of EtBr the structural state of ds-NA also plays an important role in binding. On the other hand, at an ionic strength of 0.1 M the shift toward the high-temperature region for the poly(rA)-poly(rU)-EtBr complexes is much larger than that for MB. These data are in good agreement with those obtained in [18] concerning the interaction of these ligands with ds-DNA. Our data also allow us to conclude that at an ionic strength of 0.1 M the ds-structure of hybrid poly(rA)-poly(rU) is more accessible for intercalation by EtBr than by MB. The melting curves of the complexes of hybrid poly(rA)-poly(rU) with the non-intercalator, groove binding compound H33258 were obtained and are presented in Fig. 3. This ligand specifically binds to AT-sequences of DNA and preferably interacts with its ds-structure. As is evident from the figure, a radically different scenario is revealed for this ligand compared to the intercalators EtBr and MB. Despite the fact that H33258 preferably binds to ds-NA, the melting curves of poly(rA)-poly(rU) and of the poly(rA)-poly(rU)-H33258 complexes coincide, practically independently of the ionic strength of the solution. In [20] it was shown that H33258 forms a complex with the synthetic ds-polynucleotide poly(rA)-poly(rU), which serves as a model for ds-RNA and preserves its ds-structure over the ionic strength interval 0.02 ≤ I ≤ 0.1 M. However, our data indicate that H33258 most likely does not affect the hybridization between poly(rA) and poly(rU), while EtBr and MB can facilitate this process, with subsequent stabilization of the ds-structure after hybridization.
Fig. 3. Melting curves of hybrid ds-poly(rA)-poly(rU) and its complexes with H33258 at the ionic strength of the solution 0.04 M (curves 1 and 2, respectively) and 0.1 M (curves 3 and 4, respectively).
The results obtained in [20] also indicated a biphasic character of the melting curves of the complexes of H33258 with poly(rA)-poly(rU), as well as with its deoxy-analogue poly(dA)-poly(dT). In the present case, the melting curves of the H33258 complexes with hybrid ds-poly(rA)-poly(rU) are monophasic, which also indicates that this ligand does not affect the hybridization of the single-stranded complementary polynucleotides poly(rA) and poly(rU) and does not induce conformational rearrangements of the hybrid RNA.
Conclusion.
The obtained data indicate that mixing the complementary polynucleotides poly(rA) and poly(rU) results in hybridization with formation of a ds-structure. The classical intercalator EtBr, the intercalator MB and the groove binding compound H33258 bind to the formed ds-structure of synthetic poly(rA) and poly(rU) at ionic strengths of 0.04 M and 0.1 M. The data show that EtBr stimulates hybridization and stabilizes the formed ds-structure poly(rA)-poly(rU) to a greater degree than MB, while the hybridization process and the affinity of EtBr and MB for these polynucleotides depend on the ionic strength of the solution. The data also indicate that these processes take place much more effectively at an ionic strength of 0.04 M. On the other hand, it was revealed that the groove binding ligand H33258 has practically no influence on the stabilization of the formed ds-structure poly(rA)-poly(rU). These data indicate that the intercalators EtBr and MB affect the hybridization between poly(rA) and poly(rU) more effectively, while the groove binding H33258 either does not influence this process or affects it only weakly.
Fig. 1. Melting curves of hybrid ds-poly(rA)-poly(rU) and its complexes with MB at the ionic strength of the solution 0.04 M (curves 1 and 2, respectively) and 0.1 M (curves 3 and 4, respectively).
The potential of milk fat for the synthesis of valuable derivatives
The overall decline in milk fat consumption experienced in the last decades has promoted global research efforts seeking alternate uses of this valuable natural fat. Milk fat possesses a pleasant flavor and a rich chemical composition, including a range of bioactive, health-beneficial minor components. The main drawbacks of milk fat from the consumer point of view are its poor spreadability at refrigeration temperature and its high content in saturated fatty acids, which raises health concerns. However, the rich fatty acid composition of milk fat could be utilized for the production of a wide range of added-value derivatives in the food and cosmetic industries, including nutritionally enhanced modified fats, food emulsifiers, flavors, and tailor-made lipids. A promising strategy for the revalorization of milk fat encompasses the isolation and commercialization of the valuable minor components of milk fat, coupled with a broader utilization of physically or nutritionally improved milk fat fractions and derivative products.
Introduction
Milk from livestock has traditionally been an important element of the human diet. Milk is composed of a balanced amount of fat, protein, sugars, vitamins, and minerals [1,2].
The fat component of milk has been used for centuries to produce valued and nutritious food products such as butter, cream, and cheese [1,3].
Milk fat has a complex and rich chemical composition. Its unique sensorial properties (flavor and mouthfeel) have been much appreciated historically, and its consumption was traditionally recommended [4] and associated with high living standards [3]. However, the consumption of milk fat in developed countries has been declining for the last decades, with a remarkable shift occurring since the 1980s due to the strong commercialization of margarines. The main reasons for this trend were evaluated in a US survey to be the following [3]:
• Price: milk fat is relatively expensive, unable to compete with vegetable oils as a food ingredient.
• Health image: its high content in saturated fatty acids and cholesterol is believed to increase the risk of coronary diseases and obesity.
• Limited functionality: due to its high solid fat content at refrigeration temperature, butter is poorly spreadable.
• Little product innovation and poor advertisement, in comparison with vegetable oil-based products.
The consumption decline of milk fat has led to the accumulation of milk fat stocks worldwide, which prompted global research efforts for the development of alternative uses of milk fat as a feedstock for added-value products [1,5–7]. The current situation of the European milk fat market, as well as the existing research trends and opportunities for milk fat revalorization, is reviewed and discussed in this paper.
The dairy sector
Anhydrous milk fat (AMF) manufacture generally serves as a "safety valve" for the dairy industry [3], as it absorbs excess milk supply above market requirements for other dairy products. Surplus milk is skimmed and the cream is converted consecutively to butter, butteroil, and AMF, as shown in Fig. 1. The skimmed milk is dried to produce skimmed milk powder (SMP). Upon a sudden shortage of milk supply for the manufacture of dairy products, stock AMF and SMP can be recombined and processed into the demanded products. However, even with a stationary milk supply, AMF tends to accumulate due to the imbalance between the market demands for low-fat and fat-rich products.
The European Union is a major world dairy producer [8]. About 135 million tons of dairy products are produced annually in the 27 member states of the European Union (EU-27), mostly for internal consumption (Table 1) [9,10]. About 16% of the milk produced in the EU is used for butter manufacture. Milk production is regulated by a quota system, implemented in 1984 in the frame of the Common Agricultural Policy (CAP). The quota is an effective limit on the amount of milk that dairy farmers produce every year [9], which prevents dairy overproduction and guarantees a minimum selling price of dairy products. The EU is a major exporter of dairy commodities (butter, cheese, and milk powder), together with New Zealand and Australia [8]. The EU's world export share of butter was 39% in 2004. However, as the EU market price for dairy products is higher than the world price (because other major exporters produce at lower costs), exports generally take place with the support of subsidies [9].
Subsidized exports are one of the means to absorb excess butter and AMF. Other measures are aids for private storage, public intervention (purchase of surplus by the government at a set price), and internal disposal in the EU market [9]. Through the schemes for internal disposal, excess butter is given to non-profit organizations or sold at reduced prices for commercial pastry and ice-cream manufacture, in competition with vegetable fats [3,9]. The amount is significant: in 2004, 600,000 tonnes of butter was disposed of in this way [9]. Table 1 shows the European market balances for milk and dairy production between 2004 and 2010, including more detailed information for butter products. The 2003 Common Agricultural Policy reform aims at the deregulation of the dairy sector by gradually reducing export subsidies and intervention on dairy commodities. The quota system is planned to disappear by March 2015 [9,10]. The final objective is to accomplish a self-regulated milk supply according to market demands and to align EU milk prices with global prices. It can be foreseen that further accumulation of butter products may be an immediate consequence of the new policies. New solutions should be found for increasing milk fat consumption and for avoiding losses derived from butter disposal at prices below production costs.
In the following paragraphs, we review the composition of milk fat and its potential for the manufacture of added-value derivatives that can contribute to improving the current situation of the butter market.
Milk fat composition
Milk fat is present in bovine raw milk at concentrations of about 3.5–5 wt% [11,12]. It is found in the form of small globules of diameter 0.1–15 μm, coated with a membrane derived from the secreting cells [13]. About 98% of milk lipids are triacylglycerols: glycerol molecules esterified to three fatty acids of variable chain length and saturation degree.
Many volatile and non-volatile compounds contribute to the unique flavor of milk fat, including lactones, ethyl esters, ketones, aldehydes, diacetyl, dimethyl sulfide, and free fatty acids. In addition, milk fat contains the fat-soluble vitamins A, D, and E and cholesterol (0.2–0.4%). The overall composition of milk fat is significantly affected by the cow breed, the cow diet, the stage of lactation, and the season [2,3,5]. Average values are summarized in Table 2.
The triacylglycerol composition of milk fat is the most complex of all edible fats [3,11]. More than 100 different fatty acids have been identified, of which about 11 constitute the vast majority (Table 3) [11]. On average, milk fat contains 20 mol% short-chain fatty acids (C4–C10). Over 70% of the total fatty acids are saturated. Milk fat contains a very small amount of polyunsaturated fatty acids, yet it is the richest natural dietary source of conjugated linoleic acid (CLA). CLA, a group of positional isomers of linoleic acid (C18:2), has been shown to possess anticarcinogenic, antiatherogenic [11,14,15], and immunomodulating activities, among other health benefits [4,16]. Recently, anti-tumoral activity has also been associated with butyric acid [4,15–17].
The distribution of fatty acids in milk fat triacylglycerols is non-random, as indicated in Table 3. The majority of the short-chain fatty acids C4–C10 are esterified in the primary positions (sn-1 and sn-3), while the middle position of the acylglycerol (sn-2) is mostly occupied by medium- and long-chain saturated fatty acids.
Most triacylglycerols contain 24–54 acyl carbon atoms. Due to this large variety of components, milk fat exhibits a large melting temperature range, between −30 °C and 37 °C [1]. Therefore, both a solid and a liquid phase are present at the temperatures normally encountered during processing and use. The solid phase forms a fine network of small crystals, which traps and holds the liquid phase by surface tension. The network structure of the solid and liquid phases is responsible for the plasticity and consistency of the fat [18]. The milk fat globule membrane (MFGM) is composed of lipids and proteins, in a ratio of approximately 1:1 by weight. Butyrophilin (40%) and xanthine oxidase (12–13%) are the main protein components of the MFGM [13], while the lipid components include triacylglycerols (66%) and phospholipids (22%), in particular sphingomyelin, phosphatidylcholine, and phosphatidylethanolamine [11]. Both the proteins and the lipids of the MFGM have been associated with a variety of positive health effects, including anticarcinogenic, antidepressant, and bactericidal activity [5,13].
Milk fat revalorization
The success of any consumer product depends on a good balance between price, perception, and performance [3]. Milk fat is perceived by consumers as a natural, high-quality product, but its relatively high price compared to vegetable oils hinders its consumption. On the other hand, the content of saturated fat and cholesterol still raises health concerns, although nowadays the weakness of the link established for many years between saturated fat consumption and hypercholesterolemia and cardiovascular disease is starting to be recognized [4,16,19].
Saturated fats play a key role in providing structure to food. In this respect, one of the main drawbacks of milk fat is its limited functionality, related to the consistency of the fat over a range of temperatures. For example, milk fat is too firm to spread easily at refrigeration temperature, but not firm enough for certain pastry applications [6].
In the authors' opinion, the revalorization of milk fat implies finding applications in which other fats or oils cannot compete. For this purpose, the special qualities (flavor, texture, melting profile) or components (short-chain fatty acids, CLA, bioactive minor compounds) of milk fat have to be exploited. Different approaches can be considered for the revalorization of milk fat:
• enhancing the nutritional or functional performance of milk fat, while maintaining its inherent qualities (milk fat engineering).
• producing added-value food or cosmetic ingredients by isolation or modification of major milk fat components.
• isolating and commercializing the high-value minor components of milk fat.
Hettinga [3] suggested that the success of milk fat revalorization relies on finding a large number of relatively small outlets for milk fat derivatives and/or innovative applications. This approach was applied with considerable success to the protein fraction of milk [3]. The health-beneficial molecules of milk fat (MFGM components, CLA) could be isolated for commercialization or concentrated further in milk fat products. Although the potential commercial value of the minor components can be very high, only the usage of the major components can lead to a reduction of butter stocks. Commercialization of minor components would, however, contribute to raising the value of milk fat, thus allowing the major part of the fat to be used in lower-value applications. Table 4 shows an overview of the different products and application fields of milk fat, milk fat fractions, and (potential) milk fat derivatives achieved either commercially or by different research groups around the world in recent years.
Milk fat engineering
Improving the performance of milk fat, both in nutritional and in physical terms (spreadability), is a promising area for research. Milk fat engineering concerns the modification of the fat structure or composition in order to achieve better properties of food products containing or composed of milk fat [1].
Currently, the major trends in milk fat engineering involve its physical or chemical modification, or altering the diet of the cow. Modification of the cow's diet, by introducing polyunsaturated oils or other dietary supplements, is a technique with relative success [5,17]. Milk fat with a lower content of saturated fatty acids or enriched in CLA has been obtained from cows under special feeding regimes. For example, farmers in Ireland are producing milk for the manufacture of a naturally spreadable butter [5].
In the next paragraphs, only the "downstream" modification of milk fat (after separation from milk) will be discussed.

Physical modification

Physical modification of milk fat mainly involves the improvement of spreadability [3]. This can be achieved by different techniques, including mechanical work (texturization), temperature profiling, blending with other oils, or fractionation [3,6]. Fractionation consists of creating milk fat fractions with different melting points and crystallization patterns [1,20]. Melt crystallization is the most developed fractionation technique, and it is widely applied commercially [3]. The low-melting and high-melting milk fat fractions produced by melt crystallization differ in the fatty acid composition of their triacylglycerols. Short-chain and unsaturated fatty acids predominate in the low-melting fractions, and vice versa [21]. Short-path distillation and supercritical carbon dioxide fractionation have also been investigated. With these techniques, the separation is based on the difference in the molecular weight of the triacylglycerols, rather than on their melting point differences [7,22].
Nowadays, about 800 tonnes per day of milk fat are fractionated worldwide [23]. In general, all fractionation techniques yield milk fat fractions of similar functionality, but high-melting fractions that retain the flavor of milk fat can only be produced by melt crystallization [22]. Since short-chain, long-chain, saturated, and unsaturated fatty acids are mixed within milk fat triacylglycerols, complete separation of the fatty acid species is not possible by physical fractionation alone.
Reduction of cholesterol was also an active area of research in the 1980s. However, it is nowadays recognized that the contribution of dietary cholesterol to coronary diseases is minor [1,3], which has decreased consumer demand for low-cholesterol products.
Chemical or enzymatic modification
Interesterification using chemical catalysts results in a random rearrangement of the fatty acids in the triacylglycerols. This has an impact on the melting profile of milk fat, resulting in more spreadable products. However, the nutritional properties of the fat might also be affected by the rearrangement of fatty acids in the glycerol sn positions [12]. In addition, the loss of the natural milk fat flavor in the process hinders the application of this technique [3,24].
Lipase-catalyzed modification reactions are milder than chemically catalyzed reactions and have a lower tendency to affect the natural milk fat flavors. They have been mainly oriented to (a) releasing or concentrating flavor components by hydrolytic or transesterification reactions [1,25] and (b) improving the nutritional properties of milk fat [24,26-34]. From a nutritional point of view, the goal is generally to reduce the content of the saturated fatty acids generally believed to be hypercholesterolemic (C12-C16) and/or to increase the content of (poly)unsaturated fatty acids without altering the sensory properties of milk fat. To this end, several lipase-catalyzed (trans-)esterification reactions have been investigated. Some researchers found that lipase-catalyzed interesterification resulted in better spreadability of milk fat, but had the adverse effect of producing a wax-like mouthfeel [24]. Transesterification of milk fat with vegetable oils (soybean, rapeseed, corn) or with polyunsaturated fatty acid concentrates has been studied for producing physically or nutritionally enhanced spreads [26-32]. The obtained products were richer than milk fat in unsaturated fatty acids and also softer, being spreadable at cold temperatures [26].
The natural structure of milk fat triacylglycerols, mostly containing a long-chain saturated fatty acid in the sn-2 position, makes it appropriate for the synthesis of human milk substitutes (HMS), by incorporation of polyunsaturated fatty acids in the sn-1,3 positions. This has been achieved by acidolysis using selective lipases [35].
Nevertheless, commercial production of interesterified or transesterified milk fat has not been realized so far. Milk and milk products enriched in oleic acid or PUFA exist in the market, but those are produced merely by blending with vegetable or fish oil concentrates. Lipase-catalyzed production of nutritionally improved oils and fats at industrial scale is limited to the production of structured lipids of vegetable origin, namely the human milk substitute Betapol®, cocoa butter replacers, and diacylglycerol-based oils [36-38]. The relatively high cost of specific lipases is still one of the major impediments to the development of such processes.
Use of major components
Butter and cheese flavor
Production or isolation of buttery and cheese flavors is an existing field of milk fat utilization. Lipolyzed milk fat or milk fat fractions develop a stronger flavor. This is often an undesired effect, a consequence of humidity in the fat. However, it can be exploited to create a range of cheese flavors that can be incorporated into a variety of products. Established commercial applications can be found, in particular in the ice-cream and cheese industries [1,24,38]. Milk fat flavor has also been isolated by extraction with supercritical carbon dioxide [40].
Mono- and diglycerides
Mixtures of mono- and diacylglycerols are extensively used as emulsifiers in the food industry. MAG and DAG mixtures produced from milk fat are more hydrophilic than emulsifiers derived from other oils, because short-chain fatty acids are more polar than long-chain fatty acids. The hydrophilic/lipophilic balance of milk fat mono- and diacylglycerols can be useful for certain product applications [1,38,39]. The synthesis of a diacylglycerol-based milk fat analog with potentially enhanced nutritional properties has also been investigated recently [34]. Kaylegian [41] proposed the utilization of milk as a fatty acid reservoir for the production of structured lipids (SL). Structured acylglycerols contain certain fatty acids in specific positions of the glycerol molecule, which gives them special nutritional properties. The short-chain fatty acid fraction of milk fat can be used for the synthesis of medium-chain triacylglycerols (indicated for parenteral and sports nutrition), low-caloric fats (triacylglycerols containing at least one short- and one long-chain fatty acid) [37], and short-chain sucrose polyesters to be used as fat substitutes [41]. Short-chain fatty acids for the synthesis of SL are currently obtained from coconut oil fractions containing C8-C10 fatty acids or synthetically. Milk fat, with its relatively high content of C4-C10 fatty acids, therefore appears to be an interesting fatty acid source for the production of SL. Recent work by the authors reports the development of a process making integrated use of lipase-catalyzed ethanolysis and supercritical carbon dioxide extraction for the isolation of a short-chain fatty acid concentrate from milk fat [42].
Use of minor components
Components of the milk fat globule membrane (MFGM)
Research is ongoing into isolating the bioactive proteins and lipids of the MFGM [5,44]. Micro- and ultrafiltration, coagulation, and solvent or supercritical extraction are being investigated. The raw material used is often buttermilk, where the MFGM molecules are most concentrated [44]. Isolated components of the MFGM could be used in the pharmaceutical industry or incorporated as nutraceutical ingredients in food products.
Conjugated linoleic acid (CLA)
Some research has been oriented toward the development of techniques for concentrating conjugated linoleic acid (CLA) in milk fat, including transesterification with CLA concentrates [32] or supercritical fluid fractionation [45]. However, the resulting enrichment in CLA was relatively low and does not seem to be economically justified. A more promising approach for increasing the CLA content of milk fat has been altering the diet of the cow by incorporating dietary supplements based on fish oils [5].
Cholesterol
Cholesterol, although not recommended in adult diets, is an important molecule for infant brain development and, as such, is suitable for incorporation in infant food formulations. Cholesterol removed from milk fat can be utilized in this way, while the cholesterol-free milk fat obtained has an increased value as well. Several methods exist for removing cholesterol from milk fat: steam stripping, short-path distillation, absorption, extraction, and enzymatic techniques [3]. Absorption using cyclodextrins has been commercially applied for producing low-cholesterol cheese and butter, although it results in relatively expensive products.
Conclusions
Despite the promising results in many research areas, the production and utilization of milk fat derivatives are nowadays limited to the production of flavors and milk fat fractions. A great potential therefore exists for the development and commercialization of a variety of innovative, added-value products, which would contribute to the reduction of the butter stocks and the overall revalorization of milk fat. | 2019-03-22T16:08:59.481Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "60dd47843a723df278b5909a6a72df9043a0ffa6",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00217-010-1387-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "27089805ce5c58a7ba7cdf7c4c13580ac2fb2e19",
"s2fieldsofstudy": [
"Chemistry",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
255077998 | pes2o/s2orc | v3-fos-license | Anti-apoptotic and autophagic effect: Using conditioned medium from human bone marrow mesenchymal stem cells to treat human trabecular meshwork cells
Introduction Glaucoma is a vision-threatening disease associated with accelerated aging of the trabecular meshwork (TM), which results in elevated intraocular pressure (IOP). Increased oxidative stress in the TM plays an important role in cellular molecular damage that leads to senescence. Autophagy is an intracellular lysosomal degradation process that is activated when cells are under stressful conditions, and emerging studies have demonstrated increased expression of modulators of apoptosis and of the autophagic cascade in ex-vivo TM specimens or cultured TM cells under oxidative stress. Recently, studies have shown neuroprotective and IOP-lowering effects after transplanting mesenchymal stem cells (MSCs) or injecting conditioned medium (CM) of MSCs into ocular hypertension animal models. However, knowledge of the underlying mechanism accounting for these effects is limited. Using CM from human bone marrow-derived mesenchymal stem cells (BM-MSCs), we investigated the effects of the CM derived from BM-MSCs on TM autophagy and apoptosis. Methods H2O2 was added to the culture medium of human TM cells to mimic oxidative damage in glaucomatous eyes, and the autophagic and anti-apoptotic effects of BM-MSC-derived CM were explored on the oxidatively damaged cells. Mitochondrial ROS production was examined by MitoSOX™, apoptosis was evaluated using terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining, and the expression of proteins involved in autophagy as well as the extracellular matrix was investigated via Western blot. Results There were no significant differences in TM cell viability when the cells were treated with different concentrations of CM in the absence of oxidative stress. Cell viability was significantly higher in oxidatively damaged TM cells treated with 1X or 5X CM compared to untreated TM cells under oxidative stress. The mitochondrial ROS level significantly increased with oxidative stress, which was mitigated in the CM treatment groups. DNA fragmentation significantly decreased in oxidatively stressed TM cells after treatment with CM. The LC3-II/LC3-I ratio was significantly elevated in the oxidative stress group compared to the control group and was significantly decreased in the CM treatment groups. Expression of fibronectin was not significantly different among the groups. Conclusion The CM derived from human BM-MSCs has the capacity to rescue oxidatively damaged human TM cells, associated with decreased autophagy and apoptosis. The BM-MSC CM has potential for slowing down age- and disease-related degeneration of the TM in patients with glaucoma, facilitating success in the control of IOP.
Introduction
Glaucoma, a leading cause of irreversible blindness worldwide, is a degenerative optic neuropathy that manifests as progressive visual field loss. An estimated 8 million people will suffer from bilateral blindness caused by this disease in the near future [1,2]. Elevated intraocular pressure (IOP) and aging are two of the most important risk factors for glaucoma. Currently, IOP-lowering therapy is the only effective treatment to slow the progression of visual field loss [3,4].
The IOP depends on the balance between the production of aqueous humor from the ciliary process and excretion of it through the trabecular meshwork (TM). As the production of aqueous humor in patients with glaucoma remains comparable to that of individuals without glaucoma [5,6], the balance relies mostly on the function of the TM, a reticular structure in the anterior chamber angle of the eye. Studies on human TM specimens have shown significantly fewer TM cells and increased extracellular matrix (ECM) accumulation in glaucomatous eyes compared to those of age-matched controls [7-9]. In addition, reactive oxygen species (ROS) generated through a light-dependent reaction with melanin in the iris may induce mitochondrial dysfunction and oxidative damage of TM cells, impairing aqueous humor outflow [7-10]. Human studies have shown that glaucomatous eyes have a higher level of oxidative damage to both the nuclear and mitochondrial DNA, which is proportional to the severity of the visual field defect. This phenomenon is present even in eyes with well-controlled IOP treated by glaucoma medication, indicating ongoing oxidative damage in the TM despite treatment [11-14].
Autophagy is a survival response that occurs when tissue is under stress or environmental change. It has a variety of physiological and pathophysiological roles and acts as a cellular housekeeper for quality control. Autophagy has been reported to be associated with the development of some neurodegenerative diseases and with aging [15,16]. Porter et al. demonstrated a decrease in autophagic activity in porcine TM cells using an experimental model mimicking chronic oxidative stress, which is in line with the notion that oxidative stress may decrease autophagic activity [16,17].
In the field of ophthalmology, stem cell therapy is a promising strategy for the treatment of glaucoma [18-20]. In particular, bone marrow-derived mesenchymal stem cells (BM-MSCs) have been broadly explored as a new therapeutic option, acting via the secretion of cytokines and growth factors [21-23]. Compared to the stem cells themselves, stem cell-derived conditioned medium (CM) has the advantages of being easier to manufacture and simpler to pack and transport [24]. Recent studies have shown the neuroprotective and IOP-lowering effects of transplanting MSCs or injecting the CM of MSCs (MSC CM) into a rat model of ocular hypertension [25]. However, the cellular mechanisms by which MSCs or MSC CM achieve these effects on TM cells have not been discussed in the literature. Here, we investigate the capacity of CM derived from human BM-MSCs to promote the survival and maintain the functions of human TM cells by evaluating cell proliferation, autophagy, and apoptosis.
Preparation of CM from human BM-MSCs
BM-MSCs (7500, ScienCell, USA) were cultured in Mesenchymal Stem Cell Medium (7501, ScienCell, USA) composed of basal growth medium, 5% FBS (0025, ScienCell, USA), 1% Mesenchymal Stem Cell Growth Supplement (7552, ScienCell, USA), and 1% P/S (0503, ScienCell, USA). The cell culture medium was changed every 3 days. BM-MSCs at passage 9 were seeded in the culture dish for 16 h and washed three times with PBS. The BM-MSCs were then cultured in basal medium for an additional 24 h at 37 °C in a 5% CO2 atmosphere. The medium was collected and concentrated 40x using a 10 kDa centrifugal filter (Amicon Ultra-15, Millipore, USA) [25]. The CM from human BM-MSCs (BM-MSC CM) was stored at 4 °C until use.
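As a minimal illustration of the dilution arithmetic behind the 1X, 5X, and 10X media used below, assuming they are prepared by diluting the 40x stock into basal growth medium (the mixing step itself is not spelled out here), the following Python sketch computes the required volumes; the function name and volumes are illustrative only:

```python
def cm_dilution(target_fold, final_volume_ul, stock_fold=40.0):
    """Volumes of concentrated CM stock and basal medium (in uL)
    needed to prepare CM at a given fold-concentration.
    Uses c_stock * v_stock = c_target * v_final."""
    if target_fold > stock_fold:
        raise ValueError("target fold exceeds the stock concentration")
    v_stock = final_volume_ul * target_fold / stock_fold
    return v_stock, final_volume_ul - v_stock

# e.g., 200 uL of 5X CM from the 40x stock:
print(cm_dilution(5, 200))  # (25.0, 175.0) -> 25 uL stock + 175 uL basal medium
```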
Evaluating the effects of cell proliferation by BM-MSC CM on human TM cells
The effect of 1, 5, and 10-fold concentrated BM-MSC CM on the viability of TM cells was determined using Cell Counting Kit 8 (CCK-8, Dojindo, USA) after cultivation for 16 h. Human TM cells were cultured at a density of 5000 cells per well in a 96-well plate and maintained for 1 day. The TMCM containing detached cells was removed and replaced with basal growth medium or the BM-MSC CM. The basal growth medium and 1, 5, or 10-fold concentrated BM-MSC CM were then separately added to the wells (200 μl per well) as the culture medium throughout the culture. At 16 h, cells were washed with PBS and then incubated with CCK-8 reagent following the manufacturer's instructions. The optical density (OD) at 450 nm was measured by enzyme-linked immunosorbent assay (ELISA, Sunrise remote, TECAN, USA). The cell viability was calculated using the following formula: Cell viability (%) = [(OD experimental group − OD blank) / (OD control group − OD blank)] × 100
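The viability formula above is easy to mechanize; a minimal sketch with hypothetical OD readings (NumPy assumed available):

```python
import numpy as np

def cck8_viability(od_experimental, od_control, od_blank):
    """Cell viability (%) from CCK-8 OD450 readings, per the formula
    in the text: (OD_exp - OD_blank) / (OD_control - OD_blank) * 100."""
    od_experimental = np.asarray(od_experimental, dtype=float)
    return (od_experimental - od_blank) / (od_control - od_blank) * 100.0

# Hypothetical replicate wells:
print(cck8_viability([0.82, 0.79, 0.85], od_control=0.90, od_blank=0.10))
# -> [90.   86.25 93.75]
```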
Mitochondrial ROS production
Mitochondrial ROS production was evaluated with MitoSOX™ (M36008, Invitrogen, USA). At the end of culture, cells were collected and 1 mL of 5 μM reagent working solution was added. Cells were protected from light and incubated at 37 °C for 30 min. Cells were trypsinized and then washed twice with PBS. Ten thousand cells were analyzed using a BD Biosciences FACSCalibur flow cytometer with excitation and emission wavelengths of 510 and 580 nm, respectively.
Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) staining
Apoptotic cells were identified by the TUNEL assay (11684795910, Roche, USA) according to the manufacturer's instructions. The cells were washed with PBS and then fixed in a 4% paraformaldehyde solution for 1 h at room temperature. The cells were washed with PBS and then permeabilized using 0.1% Triton X-100 (T8532, Sigma, USA) and 0.1% sodium citrate (71497, Sigma, USA) for 2 min on ice. After washing twice with PBS, 50 μl of the TUNEL reaction mixture was added, and the cells were incubated at 37 °C. After 1 h, the samples were washed with PBS and 10,000 cells were analyzed using a BD Biosciences FACSCalibur flow cytometer.
Cell viability
Cell viability was evaluated using CCK-8 as described in Section 2.3. The OD was measured at a wavelength of 450 nm by ELISA. The cell viability was calculated and compared to the control group (100%).
Western blot assay of proteins involved in autophagy and extracellular matrix
The levels of microtubule-associated protein 1A/1B-light chain 3 (LC3) I and II and of fibronectin were detected by Western blot analysis. Cells were collected at the end of culture. The protein concentration in each sample was determined using the bicinchoninic acid (BCA) protein assay kit (500-0001, Bio-Rad, USA) according to the manufacturer's instructions. Total cellular protein was separated on sodium dodecyl sulfate-polyacrylamide gels and transferred to polyvinylidene fluoride membranes. The membranes were blocked with 5% nonfat dry milk and incubated at 4 °C overnight with specific primary antibodies against fibronectin (F3648, Sigma, USA) or LC3I/LC3II (2775S, Cell Signaling, USA) and GAPDH (8245, Abcam, USA). The membranes were then incubated with secondary anti-rabbit or anti-mouse antibodies conjugated to horseradish peroxidase. For data quantification, the samples were analyzed using the UVP BioSpectrum Imaging System.
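For the densitometric readout, the autophagy marker is commonly summarized as the LC3-II/LC3-I band-intensity ratio. A minimal sketch with hypothetical band intensities (the exact normalization scheme used here is not stated, so the GAPDH handling below is an assumption):

```python
def lc3_ratio(lc3_ii, lc3_i, gapdh=None):
    """LC3-II/LC3-I ratio from band intensities. Dividing both bands by
    a GAPDH loading control cancels out in the ratio itself, but the
    normalized values are what allow comparisons across lanes."""
    if gapdh is not None:
        lc3_ii, lc3_i = lc3_ii / gapdh, lc3_i / gapdh
    return lc3_ii / lc3_i

print(lc3_ratio(1200.0, 800.0))                # 1.5
print(lc3_ratio(1200.0, 800.0, gapdh=1500.0))  # 1.5 (ratio is unchanged)
```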
Statistical analysis
Statistical analysis was performed using the Student's t-test, one-way or two-way analysis of variance (ANOVA) test, and Tukey's test as appropriate. Data are reported as the means ± standard deviation (SD) of at least three experiments. Significance was set to p < 0.05.
Cellular viability of TM cells
After culturing for 16 h under different concentrations of BM-MSC CM, TM cell viability was not significantly different between the control and the 1X, 5X, and 10X CM treatments (Fig. 1). Therefore, we used the 1X and 5X CM concentrations for TM culture under oxidative stress and analyzed mitochondrial function, cell apoptosis, cell autophagy, and cellular ECM protein expression. In order to check the rescue effect of the basal growth medium without BM-MSC CM, concentrated basal growth medium was added to a final concentration of 1X or 5X. The cell viability of the control, the H2O2-damaged TM cells (H group), and the damaged cells treated with 1X and 5X concentrated basal growth medium (H-1X BM and H-5X BM groups) was determined (Supplement 1).
Anti-apoptotic effect of TM cultured with BM-MSC CM under oxidative stress
TM cells were exposed to 250 μM H2O2 for 30 min and then cultured with or without CM. After 16 h, mitochondrial ROS were significantly increased in the H group but decreased in the H-1X CM and H-5X CM groups (Fig. 2). In addition, the TUNEL assay (Fig. 3) showed increased DNA fragmentation in TM cells exposed to 250 μM H2O2, which significantly decreased after treatment with 1X or 5X CM. TM cell viability decreased under H2O2-induced oxidative stress, but treatment with 1X or 5X CM significantly increased the cell viability over that of the H group (Fig. 4).
Discussion
We analyzed the treatment effect of BM-MSC CM on oxidatively damaged TM cells by evaluating its apoptotic and autophagic effects. In recent years, some research has shown that TM-derived MSCs are progenitors of the mature TM and play a key role in regenerating diseased TM tissue. Furthermore, studies have shown that TM-derived MSCs possess similarities with MSCs derived from other tissues, including surface markers, cytoskeletal constituents, and transcription factor expression [25,32]. In the field of ophthalmic research, CM has been reported to stimulate the proliferation of corneal endothelial cells and maintain their phenotypes [33,34]. In the present study, cellular viability after CM treatment of TM cells did not vary with CM concentration.
The therapeutic potential of BM-MSCs has been broadly studied, including in the treatment of ocular diseases [21-25]. Li et al. [21] described a beneficial effect of BM-MSCs on TM cells under oxidative stress, with the potential to predict candidate genes associated with this process. In another study [25], injection of CM from BM-MSCs significantly decreased the IOP in a laser-induced rat model of open angle glaucoma. We used BM-MSCs because, like MSCs from other sources, they show a neuroprotective profile and secrete growth factors and cytokines. After treating oxidatively damaged human TM cells with BM-MSC CM, we found a high level of correlation between autophagy and cellular functions. Thus, MSC CM could be a promising strategy for the treatment of glaucoma.
Under stressful conditions, autophagy occurs as an intracellular lysosomal degradation process. It is a highly evolutionarily conserved mechanism of cellular degradation and recycling, eliminating damaged cellular constituents and providing raw materials for energy and substrates for reconstruction in the body [35]. Defects in autophagy have been associated with the progressive deterioration that occurs during aging [15,16]. Some studies have shown an autophagic effect in TM senescence [36-38]. The autophagolysosome, the final product of autophagy, degrades its cargo under the action of lysosomal enzymes [39]. Activation of the microtubule-associated LC3 binding system is involved in elongation of the phagophore. LC3-II is specifically associated with autophagosome formation and is used as a marker of autophagosome accumulation [40]. Therefore, autophagic activity can be assessed by evaluating LC3-II and LC3-I protein expression.
Conclusion
The cell viability analysis showed no cytotoxic effects when TM cells were cultured with 1, 5, and 10-fold concentrated CM from BM-MSCs. Treatment of H2O2-damaged TM cells with CM improved cell viability by decreasing mitochondrial ROS and apoptosis. H2O2-damaged TM cells treated with MSC CM had low levels of autophagy based on the LC3-II/LC3-I ratio and normal levels of fibronectin. These results suggest that CM from human BM-MSCs has potential for promoting the survival and maintenance of human TM functions.
Funding
This work was supported by Taipei Veterans General Hospital, Taiwan (number V109C-178).
Declaration of competing interest
None. | 2022-12-25T16:03:51.137Z | 2022-12-23T00:00:00.000 | {
"year": 2022,
"sha1": "7abc58a49567f76d174bde37c2d17382efbfe587",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.reth.2022.12.002",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2d31dfa679fd1abd6cc623aac5982d28d6779607",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
10694339 | pes2o/s2orc | v3-fos-license | Bioaccessibility of phenolic compounds, lutein, and bioelements of preparations containing Chlorella vulgaris in artificial digestive juices
Chlorella vulgaris Beijerinck is a spherical, green alga belonging to the genus Chlorella and family Chlorellaceae. It has high nutritional value and shows multiple biological effects. Dietary supplements that contain extracts of C. vulgaris are sold in the form of tablets, capsules, powders, and aqueous solutions. To the best of our knowledge, this is the first study to determine the content of bioelements (zinc, iron, and magnesium), phenolic compounds, and lutein in preparations containing C. vulgaris before and after incubation with artificial digestive juices. In this study, we used commercial preparations in the form of powder and tablets. The samples were incubated in artificial gastric juice and then in artificial intestinal juice for 30 and 90 min. The contents of bioelements were determined using the flame atomic absorption spectrometric method. Lutein and phenolic compounds were analyzed by high-pressure liquid chromatography. We also aimed to evaluate the quality of chlorella-containing formulations using the methods described in the European Pharmacopoeia 8th edition. According to the results, the preparations containing C. vulgaris demonstrated the presence of phenolic compounds and lutein, which substantiates the usefulness of their daily supplementation for humans. The qualitative composition of the examined organic substances and bioelements was in accordance with the manufacturers' declarations on the packaging; however, compared with the control samples, the contents of bioelements were negligible after incubation with artificial digestive juices. This shows that the examined preparations containing C. vulgaris are not good sources of bioelements such as zinc, iron, or magnesium.
Introduction
A sharp increase in the sales of nutritional supplements for particular uses, and also an increase in over-the-counter (OTC) medicines, has been observed in recent years. This can be attributed to the fast pace of life and, most of all, to the lack of time for people to follow the rules of a well-balanced diet. Therefore, there is considerable interest among people in balancing their nutritional status with pharmacological sources of essential bioelements (e.g., zinc, iron, and magnesium) and biologically active substances taken in the form of readily available and assimilable preparations (e.g., tablets, powders, and syrups) distributed primarily through pharmacies. It is important to check and analyze not only the market of dietary supplements and the honesty of promises presented by the manufacturers in their advertisements but also the content of active ingredients in these preparations. Some of the OTC formulas, dietary supplements, and functional foods fulfilling the demand for most of the nutrients affecting health contain algae. Preparations containing algae are available as ready-made preparations: powders (lyophilizates), tablets, pills, and capsules; they are also commonly used in the production of cosmetics (Görs et al. 2010).
Chlorella vulgaris Beijerinck is a spherical, single-celled freshwater alga from the Chlorellaceae. It contains numerous bioactive organic and inorganic substances that exhibit health-promoting properties; for example, it has antihypertensive, anti-inflammatory, antioxidant, anticancer, and immunostimulatory properties, and it also improves brain function (Suetsuna and Chen 2001; Tokusoglu 2003; Terés et al. 2008; Seyfabadi et al. 2011; Přibyl et al. 2013). Kwak et al. (2012) performed an experiment on a group of 40 healthy volunteers and demonstrated the immunomodulatory effects of a C. vulgaris extract. According to their results, there was an increase in the cytotoxic activity of natural killer cells and an increase in the concentrations of interferon-γ and interleukin-1β after 8-week administration of C. vulgaris extract in the form of tablets. Oral administration of an aqueous extract of C. vulgaris in mice decreased the production of IgE antibodies and simultaneously increased the mRNA expression of T helper cell cytokines, including interferon-γ and interleukin-12 (Hasegawa et al. 1999). The mechanism of the anticancer activity of C. vulgaris extracts also involves the stimulation of the production and maturation of granulocytes and macrophages (Justo et al. 2001).
The compounds responsible for the aforementioned biological activities include, among others, phenolic compounds, xanthophylls such as lutein, and bioelements such as zinc, iron, and magnesium. These substances are also specified by the manufacturers of C. vulgaris dietary supplements.
Phenolic compounds exhibit a wide spectrum of biological activities that are attributed to their strong antioxidant activity and have the ability to protect important cellular structures such as cell membranes, structural proteins, enzymes, membrane lipids, or nucleic acids against oxidative damage (Terpinc and Abramovic 2010). Phenolic compounds found in the methanolic extract of C. vulgaris may be responsible for its higher antioxidant activity (Aremu et al. 2016; Muszyńska et al. 2016).
It has been demonstrated that phenolic compounds found in C. vulgaris counteract the activity of free radicals, thereby preventing the peroxidation of the cell membranes of liver cells. This indicates that C. vulgaris has hepatoprotective activity (Peng et al. 2009). Phenolic compounds from C. vulgaris show potential antioxidant activity by neutralizing free radicals and preventing DNA damage, which in turn prevents tumorigenesis. Furthermore, extracts of C. vulgaris activate apoptosis in tumor cells. Yusof et al. (2010) demonstrated in vitro antitumor activity using HepG2 hepatocellular carcinoma cells after incubating the cells with extracts of C. vulgaris obtained using a hot extraction method. Their results showed increased expression of proteins such as p53 (a transcription factor regulating the activation of DNA repair mechanisms and apoptosis in response to DNA damage), enhanced activity of Bax (a protein accelerating apoptosis) and caspase-3, and a decrease in the production of anti-apoptotic Bcl-2 (B-cell lymphoma 2) proteins, which accelerates the apoptosis of tumor cells (Yusof et al. 2010). Naturally occurring lutein is produced primarily in higher plants and algae. Lutein is an important compound with antioxidant activity found in C. vulgaris and is essential for humans (Koushan et al. 2013). Lutein is an intracellular product of C. vulgaris, and thus lutein-rich Chlorella may be developed as a high-value health food (Shi et al. 1997).
In this study, we aimed to determine the content of bioelements because of their physiological role in human metabolism (as building blocks and enzyme activators). Zinc is responsible for growth and the proper functioning of the immune system (Livingstone 2015). Iron is an essential element in cellular aerobic respiration. Magnesium is the second most abundant intracellular cation and is an essential element for the maintenance of life; it is involved in various cellular functions and enzymatic reactions (Baaij 2015). Because of their high nutritional value and multiple beneficial effects, dietary supplements containing C. vulgaris extracts are available in the market in the form of tablets, capsules, powders, and aqueous solutions. Numerous studies have described the content of biologically active substances in dietary supplements of C. vulgaris (Seyfabadi et al. 2011; Koushan et al. 2013; Přibyl et al. 2013). However, to the best of our knowledge, this is the first study to determine the content of bioelements (zinc, iron, and magnesium), phenolic compounds, and lutein in preparations containing C. vulgaris after incubation with artificial digestive juices (under conditions that simulate the human gastrointestinal tract), which indicates their bioaccessibility. The secondary aim was to evaluate the quality of chlorella-containing formulations using the methods described in the European Pharmacopoeia 8th edition (2013).
Materials
Dietary supplements containing Chlorella vulgaris of commercial origin, two preparations in powdered form and four in tablet form, were evaluated (Table 1). The names of the dietary supplements were changed to Chlorella S, A, O, M, B, and C to preserve confidentiality.
Reagents
All phenolic compounds used in this study were standards of high-pressure liquid chromatography (HPLC) grade. p-Coumaric acid was from Fluka (Switzerland), and p-hydroxybenzoic acid, cinnamic acid, kaempferol 7-rhamnoside, apigenin, and the xanthophyll lutein were purchased from Sigma-Aldrich (USA). Epigallocatechin and epigallocatechin gallate were from ChromaDex (USA).
Artificial saliva
Briefly, 100 mL of KH2PO4 at a concentration of 25 mmol L−1, 100 mL of Na2HPO4 at a concentration of 24 mmol L−1, 100 mL of KHCO3 at a concentration of 150 mmol L−1, 100 mL of MgCl2 at a concentration of 1.5 mmol L−1, 6 mL of C6H8O7 at a concentration of 25 mmol L−1, and 100 mL of CaCl2 at a concentration of 15 mmol L−1 were subsequently added to a flask, and then four-time-distilled water was added to bring the total volume to 1000 mL (Arvidson and Johasson 1985).
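The final concentration of each component after dilution to 1000 mL follows from c_final = c_stock × V_added / V_total. A small Python sketch checking the numbers in the recipe above (stock concentrations and volumes taken directly from the text):

```python
# (stock concentration in mmol/L, volume added in mL), per the recipe
stocks = {
    "KH2PO4": (25.0, 100.0),
    "Na2HPO4": (24.0, 100.0),
    "KHCO3": (150.0, 100.0),
    "MgCl2": (1.5, 100.0),
    "C6H8O7": (25.0, 6.0),
    "CaCl2": (15.0, 100.0),
}
V_TOTAL_ML = 1000.0
for salt, (c_stock, v_ml) in stocks.items():
    print(f"{salt}: {c_stock * v_ml / V_TOTAL_ML:.3f} mmol/L final")
# e.g., KH2PO4: 2.500 mmol/L, C6H8O7: 0.150 mmol/L
```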
Artificial gastric juice
Briefly, 2.0 g of NaCl and 3.2 g of pepsin were dissolved in four-time-distilled water; then, 80 mL of HCl at a concentration of 1 mol L−1 was added to bring the volume to 1 L (Polish Pharmacopoeia 2014).
Artificial intestinal juice
Briefly, 20 mg of the pancreatic extract, 120 mg of a bile salt, and 8.4 g of NaHCO3 were dissolved in four-time-distilled water to obtain a total volume of 1 L (Neumann et al. 2006).
Apparatus
The release of active compounds from the preparations containing C. vulgaris was examined using the prototype Gastroel-2014 apparatus, which was constructed at the Department of Inorganic and Analytical Chemistry at the Faculty of Pharmacy, Medical College, Jagiellonian University (Opoka et al. 2016). This apparatus was used to examine the release of compounds into the artificial digestive juices; it imitates gastrointestinal motions and provides a constant temperature of 37°C.
Mineralization of the preparations containing C. vulgaris was performed in a Magnum II microwave mineralizer (ERTEC, Poland) for 1 h in three magnetron cycles: 15 min at 60% power, 15 min at 80% power, and 30 min at 100% power. Mineralization of the solutions obtained after digestion with artificial digestive juices using Gastroel-2014 was performed in a UV R-8 mineralizer (Poland) by UV irradiation of the test solution in a quartz reaction vessel in 5 cycles of 6-8 h each.
A Thermo Scientific iCE 3000 Series AA spectrometer (UK) was used to measure the metals in the samples.
Sample preparation
Analysis of metals in the preparations containing C. vulgaris
The samples were mineralized to determine the content of metals (Mg, Zn, and Fe) in the preparations containing C. vulgaris. Then, 0.2 g of each preparation was weighed with an accuracy of 0.1 mg and transferred into a Teflon vessel, to which 2 mL of perhydrol and 6 mL of concentrated nitric acid were added. Mineralization was performed in a closed system (microwave mineralizer) until a clear, colorless solution was obtained. The solutions after mineralization were transferred to quartz evaporators and evaporated to "almost dry" on a heating plate at a temperature of approximately 200 °C to remove the excess of reagents. Four-time-distilled water was added to the residue for a quantitative transfer to a volumetric flask, which was then filled with water to obtain a volume of 10 mL.
Analysis of metals and organic compounds in the extracts of preparations containing C. vulgaris
Extracts of the C. vulgaris preparations were obtained as a result of in vitro digestion using Gastroel-2014. The samples were incubated with artificial gastric juice and then with artificial intestinal juice for the same time intervals. Initially, 0.5 g of each sample was weighed and transferred to a 100 mL Erlenmeyer flask, and then wetted with a solution of artificial saliva (2 mL, 1 min). To this, 100 mL of gastric juice was added; the flasks were closed with a stopper and placed in the apparatus. The incubation process continued for 30 or 90 min. Then, the contents of the flasks were filtered using a Büchner funnel and a vacuum set. The residue was transferred to the Erlenmeyer flasks together with the filter, and then 100 mL of intestinal juice was added. The digestion process lasted 30 or 90 min, and then the extracts were filtered again. Next, 5 mL of the obtained filtrates was collected for the determination of metal content and of organic compounds. A control sample was prepared in the same manner without adding the C. vulgaris preparation. The content of bioelements in the analyzed, mineralized samples was examined by flame atomic absorption spectrometry (F-AAS). Lutein and phenolic acids were analyzed by reversed-phase high-pressure liquid chromatography (RP-HPLC).
Analysis of Zn, Fe, and Mg content before and after incubation with artificial digestive juices by using the F-AAS method
Concentrations of Zn, Fe, and Mg were determined using the F-AAS method. Thermo Scientific AA Spectrometer iCE 3000 series was used for all the measurements. Each sample was analyzed in quadruplicate, and the results are presented as mean values. Satisfactory agreement between the determined and the certified element concentration values was achieved.
RP-HPLC analysis of phenolic compounds
The extracts obtained from the digestive juices were analyzed for their contents of phenolic compounds by the RP-HPLC method. These analyses were performed according to the procedure developed by Sułkowska-Ziaja et al. (2017). The analyses were performed at 25 °C, with a mobile phase consisting of solvent A: methanol and solvent B: methanol:0.5% acetic acid, 1:4 (v/v). The gradient was as follows: 100% B for 0-20 min; 100-80% B for 20-35 min; 80-60% B for 35-55 min; 60-0% B for 55-70 min; 0% B for 70-75 min; 0-100% B for 75-80 min; and 100% B for 80-90 min at a flow rate of 1 mL min−1, with λ = 254 nm (phenolic acids and catechins) and λ = 370 nm (flavonoids). Phenolic compounds were quantified by measuring the peak area with reference to a standard curve derived from five concentrations (0.03-0.50 mg mL−1). The quantitative analysis of phenolic compounds was performed using a calibration curve, assuming a linear relationship between the area under the peak and the concentration of the reference standard. The results were expressed in milligrams per 100 g of dry weight (d.w.).
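Quantification against a five-point linear calibration curve can be sketched as below; the calibration areas and sample peak areas are hypothetical, and only the concentration range (0.03-0.50 mg mL−1) is taken from the text:

```python
import numpy as np

def quantify_from_peak_area(peak_areas, cal_concs, cal_areas):
    """Back-calculate concentrations from peak areas using a linear
    least-squares calibration curve: area = slope * conc + intercept."""
    slope, intercept = np.polyfit(cal_concs, cal_areas, 1)
    return (np.asarray(peak_areas, dtype=float) - intercept) / slope

cal_concs = [0.03, 0.10, 0.20, 0.35, 0.50]    # mg/mL, five standard levels
cal_areas = [12.1, 40.5, 80.3, 141.2, 201.0]  # hypothetical detector response
print(quantify_from_peak_area([95.0, 130.0], cal_concs, cal_areas))  # mg/mL
```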
RP-HPLC analysis of lutein
Lutein in the artificial digestive juice extracts was separated and analyzed using an RP18 column (4.6 × 250 mm, 5 μm) at 30 °C. The mobile phase consisted of solvent A: methanol:water, 80:20 (v/v), and solvent B: methanol:dichloromethane, 75:25 (v/v). The following gradient procedure was used: starting at sample injection, 20% B for 5 min, 20-60% B for 5 min, 60-100% B for 25 min, 100% B for 5 min, 100-20% B for 10 min, and 20% B for 10 min. The flow rate was 1.0 mL min−1 (Yuan et al. 2008). Comparison of the UV spectra at λ = 450 nm and of the retention times with the standard compound enabled the identification of the lutein present in the analyzed samples.
Analysis of tablet properties
The tablets were evaluated as per the standard procedure described in the European Pharmacopoeia 8th edition (2013.) for uniformity of weight, hardness, friability, and disintegration time. Tablets were also tested for variation in thickness to determine any variability associated with the tablet press or the method of preparation.
The average weight of the tablets was obtained according to pharmacopoeial limits by weighing 20 randomly selected tablets on an analytical balance (OHAUS Adventurer Pro). Hardness was determined for at least ten tablets using an Erweka TBH 20 hardness tester (Erweka GmbH), adopting a minimum hardness of 40 N as the acceptance criterion. For each formula, friability was evaluated from the percentage weight loss of 20 tablets tumbled in an Erweka TAR 120 friabilator (Erweka GmbH, Hausenstamm, Germany) at 25 rpm for 4 min. The tablets were dedusted, and the loss in weight caused by fracture or abrasion was recorded as the percentage weight loss. Friability of less than 1% was considered acceptable. The respective disintegration times of the tablets were measured in 900 mL of purified water with disks at 37 °C using an ERWEKA ZT 222 tester (Erweka GmbH). Six tablets were randomly selected from each formulation and put into a basket rack. The disintegration time was recorded until all the fragments of the disintegrated tablet passed through the screen of the basket. For non-modified tablets, the disintegration time should be no longer than 15 min. The thickness of the tablets was determined for 20 tablets using a digital vernier caliper (0-150 mm).
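A compact way to express the acceptance criteria quoted above (weight variation ±7.5% below 250 mg, else ±5%; hardness > 40 N; friability < 1%; disintegration ≤ 15 min) is a pass/fail screen. The sketch below encodes only the simplified limits as stated in this section, not the full pharmacopoeial procedures; all input values are hypothetical:

```python
def tablet_qc(weights_mg, hardness_n, mass_before_mg, mass_after_mg,
              disintegration_min):
    """Screen tablet test results against the limits quoted in the text."""
    mean_w = sum(weights_mg) / len(weights_mg)
    tol = 0.075 if mean_w <= 249 else 0.05  # weight-variation tolerance
    friability = 100.0 * (mass_before_mg - mass_after_mg) / mass_before_mg
    return {
        "weight_variation_ok": all(abs(w - mean_w) / mean_w <= tol
                                   for w in weights_mg),
        "hardness_ok": min(hardness_n) > 40.0,
        "friability_pct": round(friability, 3),
        "friability_ok": friability < 1.0,
        "disintegration_ok": disintegration_min <= 15.0,
    }

# Hypothetical batch of three tablets (a real test uses 20):
print(tablet_qc([400, 402, 404], [62.8, 70.1, 65.0], 8040.0, 8036.0, 36.67))
```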
Statistical analysis
Values are presented as mean ± standard deviation (SD). All experiments were performed four times. Statistical analysis was performed using one-way ANOVA with the Tukey-Kramer post hoc method of multiple comparisons, with p < 0.05 accepted as the level of statistical significance. Chemometric tools were used to facilitate the analysis and interpretation of the data obtained in the experiment; these included two main methods: cluster analysis (CA) and principal component analysis (PCA). CA enabled the identification of groups of similar objects (preparations containing C. vulgaris) described by nine parameters (concentrations of metals and organic compounds). PCA allowed the reduction of the data dimensionality and the demonstration of correlations between the objects in a two-dimensional space. Calculations were performed using GraphPad InStat (USA) and Statgraphics Centurion XVII.
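The ANOVA/Tukey-Kramer workflow can be reproduced in Python as a cross-check of the commercial packages named above; the group values here are hypothetical, and SciPy and statsmodels are assumed to be installed:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical quadruplicate measurements for three preparations
groups = {
    "Chlorella S": [310, 298, 305, 312],
    "Chlorella A": [289, 295, 284, 291],
    "Chlorella O": [255, 262, 250, 259],
}
f_stat, p_val = stats.f_oneway(*groups.values())  # one-way ANOVA
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
if p_val < 0.05:  # significance level used in this study
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()),
                       [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```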
Results
The preparations of C. vulgaris were incubated with artificial digestive juices (Gastroel-2014 apparatus) to estimate the actual quantities of bioelements, phenolic compounds, and lutein available to humans. The incubation was performed under conditions imitating those in the human gastrointestinal tract (a temperature of 37°C and movements mimicking peristalsis).
The following phenolic compounds were determined using the RP-HPLC method after incubation of the preparations containing C. vulgaris with artificial digestive juices: p-hydroxybenzoic acid, p-coumaric acid, cinnamic acid, kaempferol 7-rhamnoside, epigallocatechin gallate, and apigenin, as well as lutein from the xanthophyll group (Table 2).
The highest amounts of phenolic compounds released into the artificial digestive juices, as compared to the control samples (methanol extracts), were as follows: p-hydroxybenzoic acid, cinnamic acid, kaempferol 7-rhamnoside, and apigenin. With respect to p-hydroxybenzoic acid, the largest amounts for all preparations were extracted in the artificial intestinal juice after an extraction time of 30 min (0.86-2.74 mg (100 g)−1 d.w.). p-Coumaric acid was determined only in tablets, and its content was significantly lower (0.27-1.15 mg (100 g)−1 d.w.) than that of the methanolic extracts (1.62-4.48 mg (100 g)−1 d.w.). Cinnamic acid was found in similar contents in both artificial digestive juices and at each time interval, ranging from 0.03 to 0.34 mg (100 g)−1 d.w.; lower levels of this metabolite were noted in the methanolic extracts (0.02-0.1 mg (100 g)−1 d.w.). Apigenin and kaempferol 7-rhamnoside were determined in significantly higher quantities at every time variant; their concentrations were up to 15 times greater than in the control samples (methanol extracts from the C. vulgaris-containing preparations). Epigallocatechin was extracted in higher quantities from the intestinal juice after 90 min in the case of the powder (Chlorella A) and the tablets (Chlorella B), 50.41 and 54.27 mg (100 g)−1 d.w., respectively, which was approximately 2.5 and 6.5 times greater than the control (20.09 and 8.42 mg (100 g)−1 d.w., respectively). A phenolic compound that was not released into the artificial digestive juices, but was extracted in methanol, was epigallocatechin gallate (1.14-2.17 mg (100 g)−1 d.w.).
Lutein, the primary metabolite present in algae, was released into the artificial digestive juices only from the tablets, in amounts ranging from 42.91 to 70.58 mg (100 g)−1 d.w.
F-AAS, one of the most common analytical techniques used to analyze bioelements, was used to determine the content of the metals Zn, Fe, and Mg in the preparations containing C. vulgaris (powder and tablets) and in the extracts obtained after incubation with digestive juices. The developed mineralization conditions for the lyophilized material and the applied analytical method allowed an effective determination of the elements in the preparations and in the extracts of artificial digestive juices. In this study, the preparations containing C. vulgaris were subjected to quantitative determination of Zn, Mg, and Fe (Table 3). According to the literature, C. vulgaris is rich in macroelements such as phosphorus (1761.5 mg (100 g)−1 of dry matter), potassium (749.9 mg (100 g)−1), calcium (593.7 mg (100 g)−1), and magnesium (344.3 mg (100 g)−1), and in microelements such as iron (259.1 mg (100 g)−1) (Tokusoglu 2003). According to these data, an intake of 3 g of the C. vulgaris extract fulfills the daily iron requirement for men, whereas 7 g is needed for women (according to RDA standards, Institute of Medicine 2001) (Fig. 1a). According to the results of our study, the amount of iron found after incubation with artificial digestive juices is insufficient to supplement the deficiency of this element in humans, as is also the case for the release of zinc and magnesium from the preparations into the artificial digestive juices (Table 3). Thus, the zinc content after digestion of the samples for 30 min in artificial digestive juices (usually 9-15 tablets of these supplements are administered, corresponding to 4 g of extract per day) was found to be on average only 0.804 μg, whereas the daily requirement of men and women is 11 mg (Fig. 1b). This implies that only 0.01% of the daily zinc requirement is supplied by the preparations containing C. vulgaris. Chlorella vulgaris contains chlorophyll, which constitutes 1-2% of dry matter, and thus it provides significant amounts of magnesium (Bashan et al. 2002). The magnesium content in the control samples (mineralized formulations) was in the range of 1521-3221 μg g−1 d.w. However, after digestion with digestive juices, it was found to be much lower; using dosage assumptions similar to those for zinc, the average magnesium available was 306.57 μg (Fig. 1c). This quantity can supplement only 7.7% of the daily requirement for men and 9.9% for women. In the case of iron, 8.74 μg was found in the sample after digestion with artificial digestive juices, which supplements 10.9% of the daily requirement for men and 4.8% for women. Tablets containing C. vulgaris disintegrated in the artificial digestive juices only after approximately 30 min. Therefore, the physical properties of the tablets containing C. vulgaris were evaluated according to the European Pharmacopoeia 8th edition (2013). Table 4 lists the physical properties of the prepared tablets in terms of the uniformity of weight, hardness, friability, and disintegration time. The tablets were also tested for variation in thickness to determine any variability associated with the tablet press or the method of preparation.
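The percent-of-RDA arithmetic above is simple unit conversion; a minimal Python sketch reproducing the zinc figure (0.804 µg released versus the 11 mg/day RDA quoted in the text):

```python
def pct_rda(amount_ug, rda_mg):
    """Percent of the recommended dietary allowance supplied."""
    return 100.0 * (amount_ug / 1000.0) / rda_mg

print(f"Zn: {pct_rda(0.804, 11.0):.4f} %")  # ~0.0073 %, i.e., about 0.01 %
```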
The thickness of the tablets ranged from 4.3 ± 0.05 to 5.78 ± 0.28 mm. In the case of Chlorella B (tablets), the percentage deviation of the thickness exceeded 5% (acceptable range of thickness: ± 5%). The average weight, hardness, and friability were within the pharmacopoeial specifications. Variation in weight ranged from 196.95 ± 2.31 to 401.95 ± 15.72 mg (acceptable range of weight variation: ± 7.5% for tablets weighing up to 249 mg and ± 5% for tablets weighing more than 250 mg). The hardness of the tablets ranged from 62.8 ± 8.3 to 109.2 ± 16.94 N (acceptable range of hardness: > 40 N), and friability ranged from 0.03 to 0.152% (acceptable range of friability: < 1%) (European Pharmacopoeia 8th edition, 2013). The disintegration times of the investigated tablets exceeded 15 min. All tablet formulations had excessively long disintegration times, which ranged from 36.67 to 125.67 min; the average disintegration time was 61.17 min.
Discussion
The primary phenolic compounds determined in preparations containing C. vulgaris are both benzoic and cinnamic acid derivatives. According to the literature, C. vulgaris contains phenolic compounds such as salicylic acid, trans-cinnamic acid, chlorogenic acid, and caffeic acid (Miranda et al. 2001). In this study, we detected the presence of the following phenolic compounds in preparations containing C. vulgaris: p-hydroxybenzoic acid, p-coumaric acid, and cinnamic acid. In addition, the samples contained kaempferol 7-rhamnoside, epigallocatechin gallate, and apigenin. According to the literature, these compounds are bioavailable for humans.
Chlorella extracts rich in phenolic compounds exhibit strong antioxidant activity. Peng et al. (2009) performed in vivo experiments using rats to test the antioxidant activity of Chlorella extracts. The animals were fed a diet enriched with tetrachloromethane, an organic compound in which all hydrogen atoms are replaced by strongly electronegative chlorine atoms; it exhibits strong hepatotoxicity, leading to jaundice and, in severe cases, to cirrhosis. In the liver cells, tetrachloromethane is metabolized to the trichloromethyl radical, which reacts with oxygen to form a more reactive radical, •CCl3O2•. According to their results, the phenolic compounds found in Chlorella extracts prevented the damage caused by the free radical attack and the peroxidation of liver cell membranes, indicating the hepatoprotective activity of Chlorella extracts (Peng et al. 2009).
Lutein is a yellow organic carotenoid pigment. Its content in C. vulgaris ranges from 5 to 383 mg (100 g)−1 dry matter (Kitada et al. 2009; Safi et al. 2014). In humans, the highest concentration of lutein is found in the yellow spot, a 7-8-mm round area on the inside of the retina with a distinctive yellow color due to the presence of lutein and zeaxanthin. Lutein exhibits a strong antioxidant effect against free radicals of the reactive oxygen species (ROS) class, which are generated by oxidative phosphorylation in the mitochondria of the photoreceptor outer segments. Furthermore, the increased production of ROS may be due to hypoxia of the photosensitive retinal cells (Koushan et al. 2013). The amounts of lutein, although less than in the control samples, were sufficient to supplement approximately 40-60% of the daily requirement for humans (the daily requirement ranges from 10 to 20 mg day−1) (Otten et al. 2006).
According to the results provided in Tables 2 and 3, six objects (preparations containing C. vulgaris) were characterized by 11 features (variables: the concentrations of Mg, Zn, and Fe and the contents of the organic compounds). A large number of variables whose values change over a relatively wide range does not give a clear indication of systematic changes. In this case, chemometric tools are useful. Chemometric tools allow the analysis of data and their interpretation in an easy and accessible manner. The task of chemometrics is to "extract" meaningful information about relationships between the measured variables or objects. This is possible after applying appropriate mathematical analytical methods to eliminate the anomalies associated with the measurements (Johnson 1984; Sharaf et al. 1986; Miller and Miller 1999).
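The CA/PCA workflow described here can be sketched with standard Python libraries (SciPy and scikit-learn assumed); the data matrix below is a random placeholder standing in for the 6 preparations × 11 variables table:

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder for the 6 preparations x 11 measured variables
X = np.random.default_rng(0).normal(size=(6, 11))
Xs = StandardScaler().fit_transform(X)  # autoscale the variables

# CA: Ward's algorithm, which minimizes squared Euclidean distances
Z = linkage(Xs, method="ward")
tree = dendrogram(Z, labels=["S", "A", "O", "M", "B", "C"], no_plot=True)

# PCA: keep three components, as in the text (75.4% of variance there)
pca = PCA(n_components=3)
scores = pca.fit_transform(Xs)
print(pca.explained_variance_ratio_)  # variance explained per component
```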
CA was used for the first set of the analytical dataset. This method makes it possible to indicate a similarity, or its distinct absence, between the objects being investigated (preparations containing C. vulgaris) or the variables (analyzed elements and organic compounds); the result is presented as a dendrogram, commonly called a "tree" (Fig. 2) (Aldenderfer and Blashfield 1985; Everitt et al. 2001; Massart and Vander 2004; Gemperline 2006). Based on our similarity analysis (CA, Fig. 2), we observed subgroups of parameters with similar variability. These parameters describe the concentrations of the individual variables. In practice, this implies that the course of change in these variables is similar, which at the same time proves a high correlation between the objects. Thus, two primary clusters were distinguished. The first cluster contains four formulas: Chlorella S, Chlorella A, Chlorella B, and Chlorella C, and the second cluster contains two formulas: Chlorella O and Chlorella M. Their classification into individual clusters indicates the similarity of their composition (content of organic constituents and metals). Furthermore, we found that the highest correlation occurred between the Chlorella O and Chlorella M formulations, as evidenced by the shortest arms of the dendrogram tree. The shorter the branch lengths are, the greater the similarity between the objects in question (Johnson 1984).
PCA was used as the complementary method in this study. PCA is a computational method that reduces the measurement data space to the size required to describe the interactions within the data. Mutually dependent parameters are replaced with new variables, the so-called principal components, without the loss of relevant information. Using PCA, we found that 75.4% of the variation occurring within the analyzed dataset could be described by the first three principal components (PC1, PC2, and PC3). Reducing the multidimensional data system to the three principal components allowed us to conduct the analysis on a flat projection of the three-dimensional space (Malinowski 1991; Henrion 1994; Massart and Vander 2004).
Fig. 2 Cluster analysis of the preparations with Chlorella vulgaris (the squared Euclidean distance and Ward's algorithm)
Fig. 3 Biplot graph: a three-dimensional space presented on the plane, showing the correlation between the analyzed ingredients found in the formulations containing Chlorella vulgaris and the site of their release into the digestive juices. Stomach juice (j. stomach) and intestinal juice (j. intestinal)
Considering the similarity of the objects with respect to the place in the digestive system at which metals and organic compounds were released from the formulations containing C. vulgaris (Fig. 3), two distinct groups were identified. The first group consisted of the ingredients analyzed in the gastric juice, while the other group contained those analyzed in the intestinal juice. Such a division indicates a correlation between a given ingredient and the site of its release in the body. Furthermore, taking into account the two-dimensional graph (Fig. 3) obtained from the three principal components, we traced the changes of the individual variables with respect to the region of the digestive tract into which the formulation components were released. Thus, we found that both metals and organic compounds were released into the artificial digestive juices from each of the investigated preparations; moreover, the release was targeted at a particular place in the digestive tract and depended on the component being analyzed. The organic compounds from the formulations were released mostly into the intestinal juices. Absorption in the human body is most likely in the intestine; thus, we conclude that these preparations provide organic compounds to the body. In contrast, metals were released to the greatest degree into the gastric juices, which suggests that they are only slightly absorbed by the human body.
Conclusions
In this study, the usefulness of preparations containing C. vulgaris in supplementing the daily diet with the examined compounds has been evaluated on the basis of the analysis of extracts incubated with artificial digestive juices and the concentrations of phenolic compounds and lutein in the digestive juices. The qualitative composition of bioelements was consistent with the manufacturers' declarations on the packaging of the preparations containing C. vulgaris, with respect to the controls, but the examined elements were found in negligible amounts in the artificial digestive juices. Therefore, these preparations cannot be considered a good source of elements such as iron, magnesium, or zinc. An important element in studying the effect of dietary supplements on humans is also, primarily, the way the dosage form is prepared, such that the active substances are released from it in the most effective manner. | 2018-04-03T00:16:09.807Z | 2017-12-06T00:00:00.000 | {
"year": 2017,
"sha1": "e840be8b54ed686aae289ef32f3c20285c1ee6c5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10811-017-1357-2.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e840be8b54ed686aae289ef32f3c20285c1ee6c5",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
42074041 | pes2o/s2orc | v3-fos-license | Dynamical spin-density waves in a spin-orbit-coupled Bose-Einstein condensate
Synthetic spin-orbit (SO) coupling, an important ingredient for quantum simulation of many exotic condensed matter phenomena, has recently attracted considerable attention. The static and dynamic properties of a SO coupled Bose-Einstein condensate (BEC) have been extensively studied in both theory and experiment. Here we numerically investigate the generation and propagation of a dynamical spin-density wave (SDW) in a SO coupled BEC using a fast moving Gaussian-shaped barrier. We find that the SDW wavelength is sensitive to the barrier's velocity while varying only slightly with the barrier's peak potential or width. We qualitatively explain the generation of the SDW by considering a rectangular barrier in a one-dimensional system. Our results may motivate future experimental and theoretical investigations of the rich dynamics induced in the SO coupled BEC by a moving barrier.
I. INTRODUCTION
Spin-orbit (SO) coupling plays an important role in the emergence of many exotic quantum phenomena in condensed matter physics [1,2]. In this context, the recent experimental realization of SO coupled neutral atoms provides an excellent platform for the quantum simulation of condensed matter phenomena because of the high controllability and freedom from disorder of cold atoms [3][4][5][6]. By dressing two atomic internal states through a pair of lasers, a Bose-Einstein condensate (BEC) with equal Rashba and Dresselhaus SO coupling has been achieved [3,[7][8][9][10][11][12]. The static and dynamic properties of such SO coupled BECs [13][14][15][16][17][18][19][20][21][22][23] have also been investigated. Notable experimental progress in SO coupled BECs includes the observation of spin Hall effects [24] and the Dicke-type phase transition [10], the study of collective excitations such as the dipole oscillation [8] and roton modes [25,26], as well as the dynamical instabilities [27] in optical lattices, etc. Recently, the generation of another type of SO coupling, the spin and orbital-angular-momentum coupling, was also proposed [28][29][30].
Moving potential barriers have been used in the past for the study of the superfluidity of ultra-cold atomic gases. For instance, by stirring a small impenetrable barrier back and forth in a condensate, the evidence of the critical velocity for a superfluid was observed [31]. When a wider and penetrable barrier was swept through a condensate at an intermediate velocity, the condensate is filled with dark solitons [32].
In this paper, we study the moving barrier induced dynamics in a SO coupled BEC. We find that a fast moving penetrable barrier may generate a dynamical spin-density wave (SDW) in the wake of the barrier. Static SDW, which was proposed in solid state physics by Overhauser [33,34], has been widely studied in many different solid state materials such as chromium [35,36]. Our generated SDW in a SO coupled BEC is induced by the moving barrier and vanishes when the SO coupling is turned off. The spatial periodic modulation of the spin density is not static, i.e., the local spin polarization oscillates in time periodically, and could last for a very long time.
The paper is organized as follows. Section II describes the model of the SO coupled BEC. In Section III, we study the dynamics induced by a suddenly turned-on stationary barrier or a slowly moving barrier. Section IV includes the main results of the paper. We generate a dynamical SDW with a fast moving barrier, study its propagation, parameter dependence, and finally explain its mechanism using a simple one-dimensional (1D) system. Section V is the discussion.
II. THEORETICAL MODEL
The SO coupled BEC is realized by shining two counter-propagating laser beams on cold atoms [3]. Two atomic internal states can be regarded as pseudo-spins |↑⟩ = |F = 1, m_F = 0⟩ and |↓⟩ = |F = 1, m_F = −1⟩ of ⁸⁷Rb atoms. To stimulate the two-photon Raman transitions, the two lasers are chosen to have a frequency difference comparable with the Zeeman splitting ω_Z between the two spin states. The experimental configuration and level diagram are shown in Fig. 1. In our simulations, we consider a realistic elongated BEC with N = 10⁴ atoms in a harmonic trap with trapping frequencies ω_{x,y,z} = 2π × {20, 120, 500} Hz. The strong confinement along the z and y directions reduces the dimension to quasi-1D. We use E_r = ℏ²k_L²/2m as the energy unit, where k_L is the recoil momentum along the x direction (i.e., the SO coupling direction).
A barrier, which can be created by the dipole potential of another laser beam [31,32], is suddenly switched on in the BEC or is swept from the left to the right side with a velocity v ranging from 1 to 80 µm/ms. The barrier peak potential is 5 to 25 E_r, which is much larger than the chemical potential of the system. The width of the barrier is of the order of µm. The external potential barrier sweeping through the BEC is modelled as a Gaussian potential of the form

V(x, y, t) = V_b exp[−(x − x_0 − vt)²/w_x² − y²/w_y²],

where w_x = W is the Gaussian barrier width along the SO coupling direction, w_{y/z} are much larger than the BEC widths in these two directions, x_0 and v are the initial position and velocity of the barrier potential, and V_b is the peak potential of the barrier. The dynamics of the SO coupled BEC are governed by the Gross-Pitaevskii (GP) equation

iℏ ∂Ψ/∂t = (H_0 + H_int + V + V_trap) Ψ,   (2)

where the single-particle Hamiltonian with SO coupling is given by

H_0 = ℏ²(k_x − k_L σ_z)²/2m + ℏ²k_y²/2m + (Ω/2)σ_x + (δ/2)σ_z,

where σ_i (i = x, y, z) are the Pauli matrices, δ is the detuning of the Raman transition and Ω is the Raman coupling strength. The trapping potential is of the form V_trap = mω_x²x²/2 + mω_y²y²/2 + mω_z²z²/2. For the sake of simplicity, we consider a 2D geometry in our calculations by integrating out the z-dependent degree of freedom in the GP equation (2), which is valid because the strong confinement along the z direction restricts the BEC to the ground state of the harmonic trap along that direction. The interaction between atoms is determined by the mean-field Hamiltonian H_int, whose reduced nonlinear coefficients for the 2D system are obtained from the 3D interaction parameters by integrating over the Gaussian ground state along z. The harmonic oscillator characteristic length is a_z = √(ℏ/mω_z) and the s-wave scattering lengths are given by c_0 = 100.86 a_0 and c_2 = −0.46 a_0 (a_0 is the Bohr radius).
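As a rough illustration, the following Python sketch evaluates the moving Gaussian barrier at a given time. The exponent convention 1/w² is our reconstruction of the (lost) displayed equation, and all parameter values are placeholders within the ranges quoted above.

```python
# Sketch of the moving Gaussian barrier V(x, y, t) used to stir the BEC
# (assumed form; the exponent convention 1/w^2 is our reconstruction).
import numpy as np

def barrier(x, y, t, V_b=15.0, W=1.0, w_y=50.0, x0=-20.0, v=50.0):
    """Peak V_b in units of E_r, widths in um, velocity v in um/ms."""
    xc = x0 + v * t                       # barrier center moves at constant speed
    return V_b * np.exp(-((x - xc) ** 2) / W**2 - (y**2) / w_y**2)

x = np.linspace(-40, 40, 801)
print(barrier(x, 0.0, t=0.4).max())       # ~V_b once the peak is on the grid
```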
In most cases the many-body interaction is weak; therefore much of the interesting physics can be well understood from the single-particle band structure, which for δ = 0 is either of a double-well type (for Ω < 4E_r) or a single-well type (for Ω > 4E_r). For Ω < 0.2E_r, the BEC occupies both wells and the two dressed states interfere to form a stripe pattern; for 0.2E_r < Ω < 4E_r, the BEC chooses either of the two wells as the true ground state, which is usually called the plane wave phase or magnetized phase, with a finite spin polarization |⟨σ_z⟩| = √(1 − (Ω/4E_r)²); for Ω > 4E_r, the BEC condenses at k_x = 0 and the spin polarization is zero, |⟨σ_z⟩| = 0. In our calculations, we focus on the latter case and take Ω = 6E_r and δ = 0, where the generated SDW can be identified easily.
One of the effects of the SO coupling is to change the speed of sound of the condensate. For a regular BEC, the speed of sound is given by v_s = √(Uρ/m), where U = 4πℏ²Nc_0/m is the nonlinear coefficient and ρ is the condensate density. When the BEC is dressed by the Raman lasers, the band structure of the system is modified, so the speed of sound is modified by replacing the atomic mass with the effective mass m_eff, i.e., v_s = √(Uρ/m_eff). For experimentally relevant parameters, the speed of sound is of the order of 1 µm/ms. The speed of sound and the collective excitation spectrum have been measured in recent SO coupling experiments [25,26]. In order to generate the SDW, the velocity of the moving barrier should be much larger than the speed of sound.
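The following Python sketch is a minimal single-particle calculation in dimensionless units (energies in E_r, momenta in k_L): it computes the lower dressed band for Ω = 6E_r, extracts the effective mass from the curvature at its minimum, and shows the corresponding suppression of the sound speed. The numerical values are illustrative, not taken from the paper's simulations.

```python
# Sketch: lower SO-coupled band, effective mass at its minimum, and the
# resulting suppression of v_s = sqrt(U*rho/m_eff).  Omega is the Raman
# coupling in units of E_r (here 6, as in the text).
import numpy as np

def lower_band(k, omega):
    # E_-(k) for equal Rashba-Dresselhaus coupling, dimensionless units
    return k**2 + 1 - np.sqrt(4 * k**2 + (omega / 2) ** 2)

omega = 6.0
k = np.linspace(-2, 2, 4001)
E = lower_band(k, omega)
i0 = E.argmin()                           # single minimum at k = 0 for omega > 4
curv = np.gradient(np.gradient(E, k), k)[i0]
m_eff_over_m = 2.0 / curv                 # a free particle has curvature 2 here
print(f"k_min = {k[i0]:.2f}, m_eff/m = {m_eff_over_m:.2f}")
# v_s is reduced by sqrt(m_eff/m) relative to the bare-mass value sqrt(U*rho/m)
print(f"v_s suppression factor: {np.sqrt(m_eff_over_m):.2f}")
```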
III. EFFECT OF A SUDDENLY TURNED-ON STATIONARY BARRIER AND A SLOWLY MOVING BARRIER
Before discussing the generation of the SDW in the SO coupled BEC through a fast moving barrier, we consider two limits: in one, a stationary barrier is suddenly switched on in the middle of the condensate; in the other, a slowly moving barrier is swept through the BEC from the left side to the right side. Suddenly switching on a barrier potential in the middle of a superfluid has previously been used to measure the speed of sound of BECs and Fermi gases [37,38].
In our calculations, the barrier potential is strong and therefore induces strong perturbations to the condensate, which might be observed in experiments. Without SO coupling, the barrier excites two wave fronts propagating in both directions with the same speeds for the two spins. For the SO coupled BEC, as shown in Fig. 2, the two wave fronts propagate differently with respect to the direction of the spin. We see that only one clear wave front propagates to the left (right) side for spin up (down), with a speed of around ∼10 µm/ms for the current geometry and atom number. This anisotropic and spin-dependent propagation of the density perturbation is a direct consequence of the SO coupling. Note that besides the propagation of the wave fronts, a series of density modulations is excited at the same time; these are due to the perturbations induced by the suddenly switched-on barrier and occur for a regular BEC as well.
When the barrier is slowly swept from the left side to the right side of the BEC, it is impenetrable for the condensate because the barrier potential V_b = 15E_r is much larger than the chemical potential (∼1 E_r) of the condensate. Therefore the BEC is pushed along in front of the barrier, with similar density modulations excited for the two spins (see Fig. 3).
IV. SDW FROM A FAST MOVING BARRIER
In this section, we focus on a fast moving barrier with a velocity larger than the speed of sound. Because of the increased relative velocity between the barrier and the condensate, the barrier is now a penetrable potential for the atoms. In a certain parameter regime, the barrier induces a dynamical modulation of the densities of the two spins in its wake, while it does not lead to any observable perturbations in front of it. Figure 4 shows the density distributions of the two spin components while and after a fast moving barrier is swept through the condensate. The velocity of the barrier, v = 50 µm/ms, is much larger than the speed of sound. We see that the density oscillations of the two spin components are out of phase; thus the barrier generates a SDW. The SDW can last a very long time and does not relax in the trap if all the parameters remain unchanged.
A. Generation and propagation of SDW
To characterize the SDW, we calculate the wavelength λ (the distance between two peaks for one spin component) and the contrast C_σ = (n_{σ,max} − n_{σ,min})/(n_{σ,max} + n_{σ,min}) near the center of the BEC and plot them as functions of the barrier's peak potential V_b, velocity v, and width W. As shown in Fig. 5, the wavelength of the SDW is roughly constant as a function of barrier height and width. However, the wavelength is almost proportional to the velocity of the barrier. This is easy to understand because the SDW is not static. Each local spin polarization oscillates rapidly as a function of time, as shown in Fig. 6, where we plot the local spin polarization ⟨σ_z(x = 0)⟩ in the middle of the BEC as a function of time. Before the barrier reaches x = 0, the density is constant. Right after the barrier is swept through x = 0, the densities of the two spins start out-of-phase oscillations, and therefore a spin polarization oscillation appears. The oscillation period is roughly a constant T for a given barrier potential V_b. Considering that the barrier is swept through the BEC, the left and right sides of the BEC are perturbed consecutively. When the barrier moves at a constant speed, the wavelength of the SDW should be proportional to the moving velocity if T does not depend significantly on the velocity, i.e., λ = v × T. Note that this relation fails for much larger velocities, where the spin oscillations at different local points may be generated within a very short time and the above physical picture does not apply. In our numerical calculations, we verify that this oscillation curve remains the same when the interaction strength is varied over a large range, showing that the dynamical SDW is a phenomenon governed by single-particle physics. However, the phenomenon changes when the interaction strength is strong enough that the speed of sound of the condensate becomes comparable to the moving barrier velocity. As we studied previously in Fig. 3, the barrier becomes impenetrable in this limit and pushes the BEC to one side. In Fig. 7, we demonstrate the slow propagation of the SDW when the barrier is suddenly turned off after it moves to the center of the BEC. Since the dynamical SDW is actually a local spin polarization oscillation rather than a travelling wave, the removal of the moving barrier at the center of the BEC should stop the generation of the SDW on its right side. However, because of the superfluid properties of the BEC and the nonlinear interactions, the unperturbed neighboring atoms are eventually perturbed, and we see that a new SDW with much smaller amplitude and velocity propagates to the right side of the BEC. The propagation speed of the new SDW is of the order of the speed of sound, as expected.
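A quick numerical check of the kinematic relation λ = v × T might look as follows; T is treated here as an assumed, velocity-independent placeholder value rather than a number read off Fig. 6.

```python
# Quick check of lambda = v * T linking the SDW wavelength to the barrier
# velocity (T is the local spin-oscillation period; placeholder value).
T_ms = 0.05                    # oscillation period in ms (assumed)
for v in (20.0, 50.0, 80.0):   # barrier velocity in um/ms
    print(f"v = {v:4.0f} um/ms  ->  lambda = {v * T_ms:.1f} um")
```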
B. Mechanism of SDW generation
In this section, we provide a qualitative theoretical understanding of the SDW generation by considering a rectangular potential sweeping through a 1D SO coupled BEC system. The rectangular potential can be written as where H(x) is Heaviside function: Similar as the Gaussian barrier, here V b is the barrier potential and W is the barrier width. Figure 8 shows density distributions of the two spins at the moment when the rectangular barrier moves to a position around X = 8µm for different barrier widths. We see that the width of the barrier changes the generated SDW significantly. When L is integer times larger than the SDW wavelength L = nλ, there are only n well developed complete SDW oscillations inside the barrier and the BEC that behind the barrier seems to be unperturbed at all (Fig.8b,c,d). When L and λ are incommensurate, the SDW is generated in the wake of the barrier as shown in Fig. 8a. The wavelength of the SDW is also proportional to the velocity as we have demonstrated in Fig. 5(c) for the Gaussian-shaped barrier, while rarely depends on the barrier width and height. All these features of the dynamics can be well understood in the following way.
The condensate is initially prepared at the ground state with Ω = 6E r while the band structure has only one single minimum at k x = 0, i.e., the quasi-momentum of the SO coupled BEC is zero. The initial wave function at some point x is The effect of a moving barrier could be explained using a simple single particle picture. In the laboratory frame, the real momentum of spin up and spin down components is k ↑ = k x + k L = k L and k ↓ = k x − k L = −k L respectively because k x = 0. An external barrier that moves along the SO coupling direction has different relative velocities for the two spins and therefore induces spindependent dynamics. Take the traveling external potential as the frame of reference, the momentum are then k ↑B = −mv/ + k L for spin up and k ↓B = −mv/ − k L for spin down. In the presence of the fast moving barrier, the velocities of the two spin components will be changed. From the conservation of the energy, we have the new velocities for the two spins: r . Now converting to the laboratory frame, the velocity is k ′ ↑ = k ′ ↑B + mv/ for spin up, and k ′ ↓ = k ′ ↓B +mv/ for spin down. We find that for the large barrier velocity, k ′ ↑ − k ′ ↓ ≈ 2k L , which means the quasimomentum for this states is now k ′ x = (k ′ ↑ + k ′ ↓ )/2 that agrees with the appearance of new momentum states in our GP simulations. Because of different group velocities of two spins in the presence of SO coupling, the moving barrier drives BEC to a new nonzero quasi-momentum states which is equivalent to an effective detuning ∆ e . Now the SDW related phenomena could be modelled as a quench dynamics where the effective detuning ∆ e is suddenly added to induce a coupling between the two new bands. Consider the effective Hamiltonian for a twolevel system (ignoring other irrelevant constants): We denote Ω e = Ω 2 + ∆ 2 e , then the evolution of the local wave function for the point x 0 within the potential is given by: Ignoring the normalized factor and the small terms of the order ∆ 2 e , a straightforward calculation gives the spin polarization at this local point, where C 1 > 0. It is quite clear that when the barrier moves to x 0 with a fast velocity, it perturbs the local condensate n σ (x 0 , t) by coupling the two new bands and thus induces a local spin polarization oscillation σ z (x 0 , t) . Due to the fact that the perturbation is applied from left to right, there is a relative phase between neighboring atoms. Therefore the spin polarization for a point on the left of x 0 is We see from the above equation that the local spin polarization within the rectangular potential is always negative and the period of the oscillation is T in = 2π /Ω e . Therefore the wavelength is λ in = 2π v/Ω e . Note that the point which has a separation of nλ in with the right edge of the rectangular barrier has a vanishing spin polarization. If the left edge of the barrier coincides with these spin polarization vanishing points, i.e., W = nλ in , then a striking effect occurs as we have seen from Fig. 8b-d: the condensate in the wake of the barrier seems to be unperturbed at all. That is because the potential has been removed for these points, where the dynamics is now governed by the original Hamiltonian with ∆ e = 0. At the same time the wave function returns to its eigenstate with σ z = 0.
If W ≠ nλ_in, then even though the governing Hamiltonian returns to the original one, the wave function is not in its eigenstate, and the coupling between the original two bands continues with the new Rabi frequency. Therefore, in the wake of the barrier, we have the period T_out = 2πℏ/Ω > T_in and the wavelength λ_out = vT_out, which is a little larger than λ_in because Ω_e is slightly larger than Ω. A similar calculation shows that the spin polarization can be positive or negative within one period, in agreement with the GP simulation (Fig. 8a). Furthermore, when W = (2n + 1)λ_in/2, the spin polarization oscillation amplitude behind the barrier is largest. The dynamics of the system are equivalent to the precession of a spin in a magnetic field (Fig. 8e). Without the barrier, the system is in the eigenstate of a horizontal magnetic field B_x and thus does not precess. A moving barrier is equivalent to suddenly quenching on a z-component magnetic field B_z. The spin then precesses about the axis determined by the total magnetic field. After the barrier passes through the point, the spin can be in any possible orientation (a SDW appears) or return to the eigenstate of B_x (no SDW appears).
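The quench picture can be sketched numerically with the two-level model described above. In this Python sketch we set ℏ = 1 and pick an assumed value of Δ_e; it illustrates the mechanism only and is not the paper's GP simulation.

```python
# Sketch of the quench behind the SDW: the local spinor starts in the ground
# state of (Omega/2) sigma_x; the moving barrier adds an effective detuning
# (Delta_e/2) sigma_z, and <sigma_z>(t) then oscillates with period
# 2*pi/Omega_e, Omega_e = sqrt(Omega^2 + Delta_e^2) (hbar = 1 here).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

omega, delta_e = 6.0, 1.0                 # in units of E_r (delta_e assumed)
H = 0.5 * (omega * sx + delta_e * sz)     # post-quench two-level Hamiltonian
_, evecs = np.linalg.eigh(0.5 * omega * sx)
psi0 = evecs[:, 0]                        # pre-quench ground state (lowest eigenvalue)

w, V = np.linalg.eigh(H)
pol = []
for t in np.linspace(0, 4.0, 400):
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # exact propagator
    psi = U @ psi0
    pol.append(np.real(psi.conj() @ sz @ psi))

print(f"expected period 2*pi/Omega_e = {2*np.pi/np.hypot(omega, delta_e):.3f}")
print("polarization stays non-positive:", max(pol) < 1e-9)  # as in the text
```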
V. DISCUSSION
In our calculations, we have chosen Ω = 6E_r. Similar SDWs can also be generated for Ω < 4E_r, where the initial state is polarized and the population oscillation amplitudes of the two spins are then different. If the barrier is even faster than the speeds used in this paper, all atoms are perturbed almost at the same time, and the local spin polarization oscillations may be in phase. Since the delay is always present, no matter how small, the pattern may look complicated. For a two-component BEC without SO coupling, a fast moving barrier does not induce any observable effects because the dynamics are governed by two uncoupled bands.
The two-photon recoil momentum and recoil energy correspond to a velocity of v_r = ℏk_L/m = 4.14 µm/ms and a kinetic energy of 1E_r. According to our simulations, to generate the SDW with a fast moving barrier (velocity v, peak potential V_b), we need to focus on the parameter regime in which the barrier is penetrable, i.e., the kinetic-energy scale KE_b = (1/2)mv² exceeds V_b, where KE_b is the kinetic energy of a particle with a relative speed of |±v_r − v| ≈ v (because v ≫ v_r) with respect to the barrier.
In summary, we present a scheme to observe the generation and propagation of a SDW in a SO coupled BEC using a moving supersonic potential. The period of the SDW is almost independent of the peak potential and width of the barrier but is very sensitive to the barrier's velocity. The SDW originates from the different group velocities of the two spin components in the presence of SO coupling. The SDW can last a long time in the trap without relaxation and therefore provides a good system for studying other, more complicated dynamics. For instance, by lowering the Raman coupling and changing the band structure, we may observe the opposite motion of the density modulations of the two spins and their relaxation in the presence of SO coupling. Furthermore, in other parameter regimes (smaller barrier potential or velocity) or with a much narrower stirring barrier, it may be possible to generate solitons or vortices. | 2015-07-01T19:14:18.000Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "aeed9766ecf5ced075fbad86065d698036da6948",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevA.92.013635",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "aeed9766ecf5ced075fbad86065d698036da6948",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3807171 | pes2o/s2orc | v3-fos-license | Application of EN-14065 in New Management of Hospital Laundry in Order to Reduce HAIs, Use of Antibiotics and to Slow the Development of Superbugs
SUMMARY Introduction Despite the steps taken to reduce HAIs (Healthcare Acquired Infections), they still remain one of the world's leading and most costly healthcare challenges. Most are preventable. All are extremely costly in human and financial terms. The aim of this work is to summarize data on the persistence of different nosocomial pathogens on hospital textiles and laundry, and to reduce HAIs and the use of antibiotics through a New Safety Management of Hospital Laundry based on the application of EN-14065 and PVA bags, all in order to slow the development of resistant bacteria and prevent the spread of resistant infections. Methodology We reviewed the cited articles and tested environmentally friendly PVA bags to prove that they minimize the possibility of textiles posing as a source of infection. Topic The reviewed articles confirmed the persistence of nosocomial pathogens on hospital surfaces. The efficiency of using PVA bags to minimize transmission was confirmed in the Clinical Center of Serbia. The EN-14065 standard came into force in Serbia in 2012 and requires controlling the RABC system, which provides an acceptable level of microbiological quality of laundered hospital laundry in a hygienically clean state (12 CFU/25 cm²).
INTRODUCTION
Healthcare Acquired Infections (HAIs), also called nosocomial infections, are infections that first appear between 48 hours and four days after a patient is admitted to a hospital or other health care facility. They include urinary tract, surgical site, lung and bloodstream infections [1]. They still remain one of the world's leading and most costly healthcare challenges. Most are preventable. All are extremely costly in human and financial terms.
Infection prevention policies do not address the most prevalent and overlooked component of the healthcare environment: soft surfaces. The potential for survival and spread of pathogens between humans and clothing or linens is shown by various laboratory studies. These studies show that survival does occur but varies considerably between different microbial strains and depends on factors such as temperature, relative humidity, type of fabric, and inoculum size. Health and safety are paramount in a medical environment. Whether handling linens, instruments, or clothing, it is vital to eliminate the risk of cross-infection to patients, staff and visitors. Despite today's extensive efforts to prevent the spread of germs and bacteria, Healthcare Acquired Infections (HAIs) continue to be one of the world's most pressing and expensive healthcare problems. The transfer of Gram-positive bacteria, particularly MRSA and VRE, among patients is a growing concern [2].
METHODOLOGY
We reviewed the cited articles and tested environmentally friendly PVA bags to prove that they minimize the possibility of textiles posing as a source of infection.
Soft surfaces - the most overlooked problem area in the hospital
While today's standards for preventing HAIs focus on patient screening, hand washing, hard surfaces and sterilizing, soft surfaces remain unaddressed. And this missing step is perhaps the greatest challenge to reducing HAIs.
The spread of pathogenic microorganisms from hospital undergarments in medical institutions has become a major problem in the health sector [3]. Studies show that soft surfaces - bed linens, curtains, uniforms and scrubs - can harbor bacteria and pathogens, causing recontamination during frequent contact. However, they are often ignored by infection prevention protocols [4].
The hospital laundry can be contaminated from the body of the patient, through employees' hands, from contaminated secondary air or from other infected patients. Sheets, gowns, uniforms, towels, cleaning products (brooms, mops) and furniture in the patient's room are important to anyone who works in or visits the hospital: they become a passage for infection if not controlled [5]. Children and the elderly are especially vulnerable to infection, due to their less efficient immune systems. Although dirty hospital laundry has been identified as the source of a large number of pathogenic organisms, the actual risk of transmission of pathogens is belittled. For laundries, this is an opportunity to correct or improve their processes to ensure that healthcare clients receive the highest-quality textile product possible.
Rotavirus is the most common cause of severe diarrhea in children ("stomach flu") resulting in hospital admission for treatment - in America alone, about 55,000 patients every year, of whom unfortunately about 600 children succumb, while worldwide more than 600,000 cases succumb each year. These viruses multiply in the epithelial cells of the intestine, causing gastrointestinal problems accompanied by diarrhea in humans and animals worldwide. Children and the elderly are especially vulnerable. The viruses are spread by the faecal-oral route, and infection is most intense in the winter months. There is evidence that in the underwear of people who have incomplete control of the stool, even one-tenth of a gram (1/10 g) of stool contains about a billion rotavirus particles. Washing removes 99.99% of the viruses and drying eliminates another 90%, but 100,000 live viruses capable of causing infection can still remain in the underwear; therefore ironing after washing is necessary, both for the sick and in private homes [7].
These findings are very important for patients who require very long intensive inpatient treatment, because their immune systems are severely weakened for multiple reasons.
For example, mortality caused by rotavirus in the general population is 1 in 10,000, but for patients in intensive care it is 1 in 100 patients treated. A patient suffering from salmonella has a 1 in 1,000 chance of dying in the general population, while a patient in intensive care has a 1 in 25 chance. Adenovirus type 40 causes diarrhea of lower intensity in children, and even less in adults, but in immunosuppressed patients, such as cancer patients, mortality is around 50% [8].
Pathogen Persistence
Numerous studies indicate that pathogenic microorganisms persist in the inanimate environment. Kramer, et al. (2006) summarized data on the persistence of various pathogens on inanimate surfaces [9]. They report that most Gram-positive bacteria, such as Staphylococcus aureus (including MRSA), Enterococcus spp. (including VRE), or Streptococcus pyogenes survive for months on dry surfaces [10].
Candida albicans, the most important nosocomial fungal pathogen, can survive up to four months on surfaces. Persistence of other yeasts, such as Torulopsis glabrata, was described as similar (five months) or shorter (Candida parapsilosis, 14 days).
Most viruses from the respiratory tract, such as corona, coxsackie, influenza, SARS or rhino virus, can persist on surfaces for a few days. Viruses from the gastrointestinal tract, such as astrovirus, hepatitis A virus (HAV), polio, or rota virus, persist for approximately two months. Bloodborne viruses, such as hepatitis B virus (HBV) or human immunodeficiency virus (HIV), can persist for more than one week [13]. Herpes viruses, such as cytomegalovirus (CMV) or herpes simplex virus (HSV) types 1 and 2, have been shown to persist from only a few hours up to seven days [14]. Boyce, et al. (1997) sought to study the possible role of contaminated environmental surfaces as a reservoir of MRSA in hospitals through a prospective culture survey of inanimate objects in the rooms of patients with MRSA in a 200-bed university-affiliated teaching hospital. Thirty-eight consecutive patients colonized or infected with MRSA were included; the patients represented endemic MRSA cases. Ninety-six (27 percent) of 350 surfaces sampled in the rooms of affected patients were contaminated with MRSA [15].
It is very important to stress that when patients had MRSA in a wound or urine, 36 percent of surfaces were contaminated. In contrast, when MRSA was isolated from other body sites, only 6 percent of surfaces were contaminated (odds ratio, 8.8; 95% confidence interval, 3.7-25.5; P<.0001). Environmental contamination occurred in the rooms of 73 percent of infected patients and 69 percent of colonized patients. Frequently contaminated objects included the floor, bed linens, the patient's gown, overbed tables, and blood pressure cuffs.
Sixty-five percent of nurses who had performed morning patient-care activities on patients with MRSA in a wound or urine contaminated their nursing uniforms or gowns with MRSA. Forty-two percent of personnel who had no direct contact with such patients, but had touched contaminated surfaces, contaminated their gloves with MRSA [16].
Neely and Maley (2000) sought to examine the survival of several clinical and environmental staphylococcal and enterococcal isolates on fabrics and plastic commonly used in hospitals. One critical aspect of bacterial transfer is the ability of the microorganism to survive on various common hospital surfaces. The potential for survival and spread of pathogens between humans and clothing or linens is shown by various laboratory studies. These studies show that survival does occur, but varies considerably between different microbial strains and depends on factors such as type of fabric, temperature, relative humidity, and inoculum size [17].
In a study of fungal persistence, Neely and Orloff (2001) report that tests of the survival of Candida spp., a Fusarium spp., a Mucor spp., Aspergillus spp., and a Paecilomyces spp. on hospital fabrics and plastics indicated that viability was variable, with most fungi surviving at least one day but many living for weeks [18]. The researchers say their findings reinforce the need for appropriate disinfection and conscientious contact control precautions.
Bacterial survival, however, is not an entirely accurate measurement of viral survival. Viruses, in general, are far more resistant to disinfection by chlorination and detergents than are bacteria. If these viruses remain infectious throughout laundering, they may be transmitted to other individuals in a hospital or household setting through direct contact (laundry, hand, mouth) or through more indirect routes (mouth, hand, food, laundry).
This is no surprise given the fact that contaminated textiles often contain high numbers of microorganisms from body substances, including blood, stool, urine, skin, vomitus and other body tissues and fluids. According to the CDC, when textiles are heavily contaminated with potentially infective body substances, they can contain bacterial loads of six to eight logs CFU/100 cm² of fabric [19]. Standard precautions must be observed while moving, loading, and unloading soiled textiles.
Outbreaks associated with soft surface textiles
A number of studies have been reported in which transfer via soft surface textiles was identified as the possible cause of an infection outbreak.
In a study out of Denver in 2011 by Cervantes et al., white coats and newly laundered short-sleeve uniforms of 100 residents and hospitalists on an internal medicine service in a university-affiliated hospital were cultured during an eight-hour work day. They found that bacterial contamination occurred within hours after donning newly laundered uniforms. Colony counts of the newly laundered uniforms were essentially zero, but after just three hours of wear they were nearly 50% of those counted at eight hours [20].
Das et al. reported in 2002 on a multiple-antibiotic-resistant Acinetobacter baumanii that was first isolated from a patient in the general intensive care unit of a tertiary-referral university teaching hospital in Birmingham [21]. Similar strains were subsequently isolated from 12 other patients, including those in another intensive care unit within the hospital. Environmental screening revealed the presence of the multiple-resistant Acinetobacter species on fomite surfaces in the intensive care unit and on bed linen [22].
In a nosocomial outbreak reported by Shah et al., 13 staff and 11 patients in an acute and chronic health care facility were infected with Microsporum canis [23]. The dermatophyte was apparently introduced into the facility by a single infected patient; the authors concluded that a likely mode of disease transmission was handling of contaminated laundry. Evidence of the fungus was found in stored linen.
Ineffectiveness of Laundering
Laundry processes do reduce the microbial load on clothing and linens. During ineffective laundering, however, data indicate that transmission of pathogens to other items in the load can occur. These risks have been assessed in a number of studies.
A study of home-laundered uniforms involved taking surveillance cultures from five patients. Results showed that three of the patients were colonized with the same strain of microorganisms as that cultured from the healthcare providers' uniforms. In a study in Great Britain in 2011, healthcare workers who washed their uniforms in domestic washing machines did not kill all the MRSA and Acinetobacter [24].
According to AORN, surgical attire should be laundered in a healthcare-accredited laundry facility. These facilities are preferred because they follow standardized industry standards for proper disinfection of fabrics. As discussed, we know organisms can live and proliferate on fabrics, and we depend upon healthcare workers to effectively wash their uniforms to remove the germs. Best practices recommend that these fabrics be washed after each day's wear [25].
Standard Precautions and Linen Committees
Standard Precautions combine the major features of Universal Precautions (UP) and Body Substance Isolation (BSI) and are based on the principle that all blood, body fluids, secretions, excretions except sweat, non-intact skin, and mucous membranes may contain transmissible infectious agents. Standard Precautions include a group of infection prevention practices that apply to all patients, regardless of suspected or confirmed infection status, in any setting in which healthcare is delivered. These include: use of gloves, hand hygiene; gown, mask, eye protection, or face shield, depending on the anticipated exposure [26].
We encourage infection control professionals to serve on their linen committees, and if they do not have one, we encourage them to create one. The linen committee can have a dynamic combination of people: those who work solely in the linen room can explain, from a practical standpoint, how to make the process more efficient, and infection control nurses can bring their unique perspective and expertise [27].
PVA biodegradable water soluble laundry bags
Equipment or items in the patient environment likely to have been contaminated with infectious body fluids must be handled in a manner that prevents transmission of infectious agents (e.g. wear gloves for direct contact, contain heavily soiled equipment, properly clean and disinfect or sterilize reusable equipment before use on another patient) [28].

Water-soluble bags are new environmentally friendly products, specifically designed for use in health care facilities for the disposal of contaminated hospital laundry, in order to avoid the risk of contamination and cross-infection of hospital staff, laundry employees, patients and visitors to healthcare institutions [29].

Water-soluble laundry bags are specifically made for hospital and healthcare industry operators. These bags are made from water-soluble PVA film, which is a green, environmentally friendly material; it is 100% biodegradable and will not leave any environmental pollutant residue [30]. Once the bag is dissolved in the washing process, the solution is decomposed to water and carbon dioxide [31]. These bags are impermeable to bacteria and viruses. The water-soluble laundry bags are intended to enhance infection control processes in handling and transferring infected linens or other materials.

PVA fully-soluble laundry bags help to meet care guidelines by ensuring the safe isolation, transportation, and disinfection of soiled and compromised linens [32]. Soiled linen is placed into the bag, the bag is sealed using the integral cold-water-soluble pink tie, and it is placed into an outer bag ready for transportation to the laundry washer. The cold-water-soluble tie dissolves in the initial rinse cycle; the bag itself fully dissolves in the wash cycle (Figure 1). Bags are supplied interleaved on a roll, reducing the risk of dispersing airborne viruses during dispensation. Laundry bags are available in several sizes, in clear or red (Figure 1). Application of the 100% water-soluble laundry bag: • The infected contents of the laundry bag do not need to be handled by staff until the wash and drying cycles are completed. Consequently, this eliminates exposure to the contaminated material during the whole course of transferring, washing and drying [33].
• The water-soluble laundry bag dissolves completely in water during the washing process.
• The water-soluble laundry bag leaves no potentially infected waste.
• Water-soluble bags are anti-static, non-toxic, and fully biodegradable.
• Tests have verified that water-soluble bags are impermeable to bacteria and viruses.
• The water-soluble laundry bag has excellent gas barrier properties.
• It does not pass nanoparticles and can be used for protection against nuclear particles, e.g. in nuclear medicine, preventing radiological contamination of staff.
• Being impervious to microbes, it protects against the dissemination of dangerous microorganisms from hospital laundry into the nosocomial environment.
Clinical Centre of Serbia practical experience: In the Clinical Center of Serbia, we successfully used tested PVA bags for the first time in 2006, in our Clinic for Infectious and Tropical Diseases, during outbreaks of bird flu, to minimize the possibility of textiles posing as a source of infection or danger to the patient or healthcare worker. We confirmed that the water-soluble laundry bags are a convenient precaution tool that enables soiled-linen handlers to isolate, store, transport and clean washable dirty items [34].

It is safe to say that the use of PVA bags in the handling of contaminated patients' undergarments reduces the risk posed by the excessive use of antibiotics, whose residues are discharged with hospital effluent into the environment, usually not metabolized but in their original form; as a result, we see increased resistance of the bacterial flora. The increasing resistance of bacteria and other flora leads to reduced adherence to treatment, economic losses, increased mortality of treated patients, and dangerous environmental pollution with consequences not yet assessed.
Hospital Textiles as a Possible Vehicle for Healthcare-Associated Infections
Contamination of textiles in healthcare settings is confirmed. Textiles are a common material in healthcare facilities; therefore it is important that they do not serve as a vehicle for the transfer of pathogens to patients or hospital workers. During the course of use, hospital textiles become contaminated and laundering is necessary [35]. Laundering of healthcare textiles is most commonly adequate, but in some instances, due to inappropriate disinfection or subsequent recontamination, the textiles may become a contaminated inanimate surface with the potential to transfer pathogens.

Healthcare professionals may unknowingly spread infectious germs by wearing scrubs and lab coats between work and home. In a recent study, up to 60% of hospital staff uniforms were found to be colonized with potentially pathogenic bacteria and drug-resistant organisms. Contamination has been shown to transfer from fabrics to hands [36]: hand imprint cultures demonstrated that these pathogens were easily acquired on hands. Once contaminated, uniforms and white coats can harbor pathogens for a long time. Neely and group summarized, in the Journal of Clinical Microbiology in 2000 and 2001, multiple studies showing the survival of pathogens on fabric; they noted that MRSA in one study lived more than 20 days on cotton fabric and 40 days on polyester, and the same holds true for VRE, which survived more than 80 days on both fabrics. It is thought that almost 30% of nosocomial infections could be prevented with better hand washing and safe management of hospital facilities, thereby reducing the mortality and costs of treating patients. Therefore it can be said with certainty that good management practice for hospital laundry is a very important factor in controlling nosocomial infections. It is therefore necessary to develop a standard for the management of hospital laundry, analogous to the standards for the production and use of drugs [37].
EN 14065
In Serbia, the identical text of this standard came into force in 2012.
Nowadays the need for the prevention of microbiological contamination of individuals, products, materials or the environment is of increasing significance. Consequently, assured microbiological quality becomes necessary. Therefore the laundry industry is adopting new process control techniques to assure the microbiological quality of laundered textiles. The purpose of this standard is to provide a management system that delivers an agreed level of microbiological quality according to the intended use of the textile [38].

Hospital laundry management has become a fundamental point in any care centre. It is not only a matter of having the necessary linen available for both the patients and the professionals in the centre; all the linen from a sanitary centre has to be considered contaminated by germs and as such must be treated in such a way that at the end of the cycle it can be supplied to the users and nurses free of any infectious pathogenic agents [39].

The laundry is one of the intervening factors in the fight to eliminate any source of microbiological contamination and the risk of recontamination of the patients.

Sensory cleanliness is obtained during the laundry cycle through physico-chemical treatments such as mechanical action, temperature, addition of detergents and auxiliary products, bleaching agents, dilutions and rinses in successive baths, in combination with sufficient time. With these procedures, most micro-organisms have a low probability of survival.
Principles of EN-14065:
- Principle 1: The list of microbiological hazards and a list of control measures: a) identification of microbiological hazards at every stage of the product cycle or from staff; b) assessment and classification of the risk levels of biocontamination of textiles in all phases of the management of hospital laundry, as a result of the hazards; c) identification of the control measures for eliminating or reducing biocontamination of textiles in order to achieve the accepted level of microbiological quality in the use of hospital laundry.
- Principle 2: Determination of control points (steps and environmental conditions that can be controlled) for the elimination or reduction of risk.
- Principle 3: Target levels and limit values. Setting the threshold values at each control point which must not be exceeded, in order to ensure the microbiological quality of the textiles processed for hospital services.
- Principle 4: Monitoring system. Establishing the observations or measurements for monitoring the control points.
- Principle 5: Corrective measures. Determination of corrective actions to be taken when monitoring shows that a particular item, procedure, operational stage, or environmental condition is not controlled. (A minimal monitoring sketch follows below.)
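The following Python sketch shows how Principles 3-5 could be encoded in software; the class and function names are hypothetical, and the 12 CFU/25 cm² limit is the hygienically clean target cited in this article.

```python
# Minimal RABC-style monitoring sketch (hypothetical structure): each control
# point carries a target/limit value (Principle 3); monitoring results
# (Principle 4) above the limit trigger a corrective action (Principle 5).
from dataclasses import dataclass

@dataclass
class ControlPoint:
    name: str
    limit_cfu_per_25cm2: float   # threshold for hygienically clean textiles

def check(point: ControlPoint, measured_cfu: float) -> str:
    if measured_cfu <= point.limit_cfu_per_25cm2:
        return f"{point.name}: {measured_cfu} CFU/25 cm2 - within limit"
    return f"{point.name}: {measured_cfu} CFU/25 cm2 - CORRECTIVE ACTION required"

finished_linen = ControlPoint("post-wash linen", limit_cfu_per_25cm2=12.0)
print(check(finished_linen, 4.0))
print(check(finished_linen, 30.0))
```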
As we will see, every step in the healthcare laundry process is exacting and regulated within an accredited laundry or laundry service, and this kind of quality may not be achievable in an unaccredited healthcare laundry [43]. In addition, the HLAC says that accredited healthcare laundries provide a level of efficiency that often cannot be matched by in-house hospital laundries, and explains that today's healthcare laundries are state-of-the-art, high-tech and efficient [44].

In addition to uniforms, bed linens, curtains and patient gowns should be washed at an accredited laundering facility [45].

In order to protect the hospital environment from possible contamination by hospital laundry, and therefore the patients and staff, it is necessary to change the procedures for the management of hospital laundry.

The use of water-soluble bags in the procedure for the secure management of contaminated patients' undergarments fulfills the requirements for the prevention of nosocomial infections and fits within the application of EN 14065. The imperative for a safe hospital environment is the safe handling of hospital laundry contaminated with blood, body fluids, excretions and secretions, with special emphasis on the contaminated clothes of patients with, or clinically suspected to be suffering from, the following diseases [46]. Good laundering practice is, of course, based on relevant laundry-process technologies that ensure decontamination of the materials. Because of the potential infection risk, it is crucial that healthcare textiles be properly processed and delivered to the customer in a hygienically clean state (12 CFU/25 cm²). However, this is only part of the safe management of hospital laundry, because it is also necessary to ensure that the laundry is stored, sorted and transported correctly, so as to minimize the possibilities of recontamination [47].
What the infection prevention community would like to see, however, is evidence from the medical literature addressing the efficacy of the new laundry chemicals on the market today.
CONCLUSIONS
- Healthcare-associated infection (HAI) outbreaks and patient notifications are often the result of failures in infection control practices, medications, or contaminated devices [48].
- Despite the steps taken to reduce HAIs (Healthcare Acquired Infections), they still remain one of the world's leading and most costly healthcare challenges. Most are preventable. All are extremely costly in human and financial terms.
- The most common nosocomial pathogens may well survive or persist on surfaces (especially on hospital laundry) for months, and can thereby be a continuous source of transmission.
- It is thought that almost 30% of nosocomial infections could be prevented with better hand washing and safe management of hospital facilities, thereby reducing the mortality and costs of treating patients [49].
- We confirmed that the use of environmentally friendly PVA bags in the Clinical Center of Serbia in 2006, during an outbreak of bird flu, effectively minimized the possibility of textiles posing as a source of infection or danger to the patient or healthcare worker.
- The new EN-14065 standard came into force and requires, for the first time, that the safe management of hospital laundry be classified as one of the most important links in the prevention of nosocomial infections, particularly in preventing the spread of lethal and potentially lethal bacteria.
- Hospital managers and health workers can no longer ignore the problem of the safe management of hospital laundry and its role in the spread of nosocomial infections [50,51].
- The facts that healthcare laundry continues to pose as a source of infection, and that HAIs are mostly preventable, confirm the need for a New Safety Management of Hospital Laundry, applying EN-14065 and PVA bags to maintain effective infection control. This must be adopted in the Serbian health care system to slow the development of resistant bacteria (superbugs VRE, MRSA), all in order to reduce HAIs, the use of antibiotics, morbidity and mortality [52].
- Antibiotic-resistant bacteria, germs that do not respond to the drugs developed to kill them, threaten to return us to the time when simple infections were often fatal.
- The New Safety Management will strengthen Serbia's own health surveillance efforts to combat resistance.
Figure 1. New procedure for the treatment of hospital laundry [30].
This European Standard EN 14065 (November 2002; ICS 07.100.99; 59.080.0) was approved by CEN on 23 September 2002. It was to be given the status of a national standard, either by publication of an identical text or by endorsement, at the latest by May 2003, and conflicting national standards were to be withdrawn at the latest by May 2003.
The use of water-soluble bags with the clinically suspected diseases listed above. | 2018-03-09T20:54:16.188Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "3c5bf93cbc9543c23800e304216f43f4c62e8ff6",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/2334-9492/2015/2334-94921501199G.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3c5bf93cbc9543c23800e304216f43f4c62e8ff6",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Engineering"
]
} |
257074750 | pes2o/s2orc | v3-fos-license | Innovative Inertial Response Imitation and Rotor Speed Recovery Control Scheme for a DFIG
: This paper proposes an innovative inertial response imitation (IRI) and rotor speed recovery (RSR) control scheme for a doubly-fed induction generator (DFIG, Type 3 wind turbine generator) to provide better frequency support and RSR services for an electric power grid with high wind power penetration. To achieve the first benefit, a coupling relationship between the control coefficient of the DFIG and the frequency deviation is established using an exponential function, so that the control coefficient becomes large with increasing frequency deviation and disturbance size. After supporting the system frequency, the exponential function is employed to schedule a dynamic control coefficient that alleviates the negative effects of RSR on the instantaneous system frequency. The benefits of the proposed IRI and RSR strategy were investigated in a test system under various scenarios of disturbance size and wind speed. Test results clearly demonstrate that the proposed IRI and RSR strategy is capable of improving the maximum system frequency excursion and reducing the negative influence on the system frequency during the speed recovery period.
Introduction
The increasing integration of wind generation will bring significant challenges to system frequency stability, since power-converter-interfaced doubly-fed induction generators (DFIGs, Type 3 wind turbine generators) decouple the rotor speed from the instantaneous power system frequency [1][2][3], and the system inertial response will be weakened [4,5]. Therefore, not only do the maximum instantaneous system frequency excursion (ISFE, ∆f) and the maximum rate of change of frequency (df/dt) become worse, but the possibility of triggering under- and over-frequency relays also increases [6,7]. In fact, a DFIG retains a wider rotor operating range than a traditional synchronous generator (TSG) due to its characteristics; thus, DFIGs can be a better option for inertial control to support the frequency [8].
Present inertial response imitation (IRI) strategies of DFIGs fall into three types, characterized by the form of the active power reference: df/dt response, ∆f response, and fixed power trajectory response [9][10][11][12][13][14][15][16][17][18]. The references in [9,10] modify an additional control signal, proportional to df/dt, to imitate the inertial response (df/dt response). The authors of [11,12] suggested an additional supplementary control proportional to ∆f to emulate the droop response. IRI strategies with a fixed power trajectory response are based on preset power trajectories [14][15][16].
As studied in [13,17], the intensity of the IRI strategy based on df/dt and ∆f mainly depends on the control coefficient.Once the coefficients are not defined appropriately, the intensity of the IRI strategy might be insufficient to contribute to the frequency response service inadequately.In contrast, stalling of the wind turbine is prone to being caused and then results in a large secondary system frequency drop (SSFD).To avoid the stalling of the DFIGs, speed-based coefficient-based IRI schemes are suggested [11,13].The control coefficient is dependent on the rotor speed to provide a frequency support response for various speeds of the rotor.However, under large system frequency disturbances, the electric power grid requires more active power from the DFIG to counterbalance the power imbalance, so special attention should be paid to assigning the control gains.
The rotor speed should be returned to its initial operating state after sustaining the instantaneous system frequency. If there is no additional output to offset the power absorbed to restore the rotor speed, a severe SSFD may be produced, which can even be lower than the maximum ∆f caused by the original frequency disturbance [5]. With the proliferation of wind generation, the existing schemes can improve the maximum ∆f, but mitigating the SSFD has become a crucial issue for the deployment of IRI schemes. Consequently, a trade-off between rotor speed recovery (RSR) and reducing the SSFD needs to be developed [18]. To reduce the SSFD, an extended state observer-based IRI scheme was suggested in [18]. A two-stage variable coefficient-based IRI strategy was addressed in [10]; however, its performance depended on the pre-determined training logic of the fuzzy control. The strategy in [19] suggests a dynamic RSR-oriented droop control to reduce the SSFD with a comprehensive function. Thus, special attention should be paid to determining the control coefficient so as to achieve the trade-off between RSR and reducing the SSFD.
Given the shortcomings of the conventional IRI schemes, the contributions of this study are as follows: (1) the coupling relationship between the control coefficient of the DFIG and the frequency deviation is established by using an exponential function, so that the control coefficient becomes large with increasing frequency deviations and disturbance sizes; and (2) an exponential function is employed to schedule the dynamic control coefficient to alleviate the negative effects of RSR on the instantaneous system frequency.
The rest of this paper is organized as follows. Section 2 introduces the modeling of the DFIG. The proposed IRI and RSR schemes are presented and verified in Sections 3 and 4. Sections 5 and 6 provide the discussion and conclusions, respectively.
Modeling of a DFIG
The typical DFIG configuration model comprises a control system, wind turbine model, shaft model, power electronics, and induction generator (see Figure 1).
The control system, which comprises a rotor-side converter (RSC) and a grid-side converter (GSC), determines the references and receives the measured values of voltage, power, DC-link voltage, and currents, as shown in Figures 2 and 3. Active power control, including the maximum power point tracking (MPPT) operation and the inertial control, is achieved in the RSC controller; the DC-link voltage is regulated by the GSC [20]. The mechanical power is a function of the air density (ρ), rotor radius (R), power coefficient (c_p), and wind speed (v_w), as in

P_m = (1/2) ρ π R² c_p(λ, β) v_w³, (1)

where λ and β denote the tip-speed ratio and the pitch angle, respectively, and c_p is an empirical function of these two variables. In Equation (1), c_p retains a maximum value (c_P,max) at the optimal tip-speed ratio (λ_opt) for capturing the maximum P_m. The power reference of the MPPT operation, P_MPPT, is expressed as in Equation (5) by substituting Equation (4) into Equation (1).
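For concreteness, the following Python sketch evaluates the mechanical power of Equation (1) and the cubic MPPT reference. The numerical c_p surface and all parameter values are illustrative assumptions, since Equations (2)-(5) are not reproduced in this text.

```python
import numpy as np

def cp_model(lam, beta):
    """Illustrative c_p(lambda, beta) surface (assumed form, not the paper's)."""
    lam_i = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta**3 + 1.0))
    return 0.5176 * (116.0 / lam_i - 0.4 * beta - 5.0) * np.exp(-21.0 / lam_i) + 0.0068 * lam

def mechanical_power(v_w, omega_r, rho=1.225, R=40.0, beta=0.0):
    """Equation (1): P_m = 0.5 * rho * pi * R^2 * c_p(lambda, beta) * v_w^3."""
    lam = omega_r * R / v_w                      # tip-speed ratio
    return 0.5 * rho * np.pi * R**2 * cp_model(lam, beta) * v_w**3

def p_mppt(omega_r, k_opt=0.512):
    """MPPT reference: substituting the optimal tip-speed ratio into Eq. (1)
    yields a cubic law in the rotor speed (k_opt is an assumed constant)."""
    return k_opt * omega_r**3
```

At the optimal tip-speed ratio the wind speed can be eliminated from Equation (1), which is why the MPPT reference reduces to a cubic function of the rotor speed alone.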
The electrical equivalent circuit of the DFIG is illustrated in Figure 4. The stator and rotor voltage equations and the stator and rotor winding flux linkages are written in the synchronously rotating dq reference frame. The reactive power, active power, and torque are then expressed in terms of the dq voltages and currents; in particular, the stator active power is

P_s = u_ds i_ds + u_qs i_qs. (15)

Innovative Inertial Response Imitation and Rotor Speed Recovery Control of a DFIG

Conventional Scheme #1

Figure 5 displays the structure of conventional scheme #1 (the fixed-gain scheme). The reference (P_ref) comprises the outputs of the df/dt control loop (∆P_in, top loop) and the ∆f control loop (∆P_dr, bottom loop), together with the MPPT control (P_MPPT), as in

P_ref = P_MPPT + ∆P_in + ∆P_dr. (17)

Before a frequency disturbance, P_ref is equal to P_MPPT; after a disturbance, ∆P_in and ∆P_dr, which depend on the measured system frequency, are added to P_MPPT. ∆P_in and ∆P_dr can be expressed as

∆P_in = −K_in (df_sys/dt),  ∆P_dr = −K_droop (f_sys − f_nom), (18), (19)

where f_sys represents the system frequency and f_nom its nominal value. K_in and K_droop indicate the control gains of the df/dt control loop and the ∆f control loop, respectively.

During the initial period of the frequency disturbance, ∆P_in is dominant since df/dt retains a large value, whereas ∆P_dr is dominant around the frequency nadir. In addition, ∆P_in decreases with df/dt and reaches zero once the steady state is achieved. Thus, the combination of the df/dt and ∆f loops can boost the frequency support capability.

For conventional scheme #1, the energy released to the grid grows with the control coefficient; however, the frequency nadir might become lower, since excessive released energy causes a significant second frequency drop (SFD). Furthermore, the DFIG operates in a mode deviating from the MPPT operation because of the ∆f loop, which adversely affects the economic operation of the DFIG. In addition, when the conventional RSR schemes are implemented, an SFD is inevitable due to the sudden power drop. Thus, conventional scheme #1 has two issues: (I) difficulties arise in determining the control coefficient; and (II) an SSFD might be caused.
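A minimal sketch of the fixed-gain loops of conventional scheme #1 follows. Only the proportional structure of the two loops is stated above, so the sign convention (positive power injection for under-frequency events) and the gain values are assumptions.

```python
def supplementary_power(f_sys, dfdt, p_mppt_ref, K_in=10.0, K_droop=20.0, f_nom=60.0):
    """Conventional scheme #1: P_ref = P_MPPT + dP_in + dP_dr (Eq. (17)).
    dP_in is proportional to df/dt, dP_dr to the frequency deviation."""
    dP_in = -K_in * dfdt                 # inertial (df/dt) loop
    dP_dr = -K_droop * (f_sys - f_nom)   # droop (delta-f) loop
    return p_mppt_ref + dP_in + dP_dr
```

With fixed gains, the same K_in and K_droop are applied regardless of the rotor speed or the size of the disturbance, which is precisely the limitation the adaptive schemes below address.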
Conventional Scheme #2

The kinetic energy available from the rotating rotor of the DFIG is

E_avail = H_DFIG (ω_0² − ω_min²), (20)

where H_DFIG represents the inertia constant of the DFIG, and ω_0 and ω_min are the rotor speed ω_r before the frequency disturbance and its minimum admissible value, respectively.
In [13], to boost the frequency support capability while avoiding stalling of the wind turbine, the control gain of the frequency deviation control loop is defined to be proportional to E_avail, as in Equation (21). According to Equation (21), and as shown in Figure 6, the expression of K_droop for conventional scheme #2 is

K_droop = δ (ω_r² − ω_min²), (22)

where δ is the operating condition factor of the DFIG and regulates the benefit of boosting the frequency support capability. There are two features of Equation (22). The first is that K_droop is zero when ω_r = ω_min; as a result, conventional scheme #2 avoids stalling of the wind turbine. The second is that K_droop increases with the rotor speed, which effectively enhances the frequency support capability under various wind conditions (refer to [13]).

However, under various sizes of disturbance, ∆f differs, so different amounts of additional power are required from the DFIG. With increasing frequency deviation, more active power is required by the grid, particularly under large disturbances, and conventional scheme #2 might be unable to sustain the system frequency effectively. Therefore, the implementation of conventional scheme #2 faces the following challenges: (I) finding a control coefficient suitable for various frequency disturbances; and (II) as in conventional scheme #1, an SSFD is caused when restoring the rotor speed.
Proposed Inertial Response Imitation and Rotor Speed Recovery Control Scheme of a DFIG
To boost the frequency nadir and mitigate the negative influence of RSR on the system frequency, an adaptive control coefficient (ACC) is suggested, determined over two periods: the inertial response imitation period (K_sup(f_sys, ω_r)), which aims to boost the frequency nadir, and the RSR period (K_rec(t)), which aims to mitigate the negative influence of RSR on the system frequency (see Figure 7).
Determining the Control Coefficient for the Inertial Response Imitation Period

Under various sizes of disturbance, a control coefficient should be determined that is suited to the power deficit, so as to improve the frequency support capability. Thus, to enhance the frequency support capability while avoiding stalling of the DFIG, the control coefficient for the inertial response imitation period is expressed in Equation (23) as a function of two factors, σ(ω_r) and η(f_sys), which capture the operating condition of the DFIG and the instantaneous system frequency, respectively. |∆f| indicates the absolute value of the frequency deviation, and α reflects the frequency support term and adjusts the performance of boosting the frequency support capability.

As shown in Figure 8, σ(ω_r) makes K_sup(f_sys, ω_r) proportional to the rotor speed and zero at ω_min, so as to avoid stalling of the DFIG while making use of the significant amount of available kinetic energy to support the frequency under various wind conditions. η(f_sys) makes K_sup(f_sys, ω_r) depend on |∆f|; thus, as |∆f| increases, a large coefficient is obtained, reducing the ISFE under various disturbances.

As studied in [21], ∆P_dr/P_MPPT reflects the capability of reducing the maximum frequency deviation. Figures 9 and 10 compare the control coefficient and ∆P_dr/P_MPPT of the proposed and conventional schemes at various frequency deviations. K_sup(f_sys, ω_r) and ∆P_dr/P_MPPT are always larger than those of the conventional inertial control scheme; furthermore, the differences grow with the frequency deviation, so the proposed IRI scheme with the ACC can boost the frequency support capability, particularly for severe deviations of the system frequency.
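Since the exact form of Equation (23) is not reproduced above, the sketch below uses one plausible realization of the two factors: a σ(ω_r) that vanishes at ω_min and grows with rotor speed, and an exponential η(f_sys) that grows with |∆f|, as described in the abstract. The functional forms and constants are assumptions for illustration only.

```python
import numpy as np

def K_sup(f_sys, omega_r, alpha=100.0, delta=50.0,
          omega_min=0.7, f_nom=60.0):
    """Adaptive control coefficient for the IRI period (illustrative form).

    sigma(omega_r): zero at omega_min and increasing with rotor speed,
    so the turbine cannot be driven into stall.
    eta(f_sys): exponential in |delta f|, so the coefficient grows with
    the size of the disturbance.
    """
    sigma = delta * max(omega_r**2 - omega_min**2, 0.0)   # operating condition
    eta = np.exp(alpha * abs(f_sys - f_nom) / f_nom)      # frequency deviation term
    return sigma * eta
```

With this structure, a small disturbance at low rotor speed yields a mild coefficient, while a large disturbance at high rotor speed releases substantially more kinetic energy, which is the behavior shown in Figures 8 and 9.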
Determining the Control Coefficient for the RSR Period

During the RSR period, to avoid the SFD, an instantaneous decrease in the output power should be prevented [17,18]. To address this demand, an exponential function is employed to schedule the dynamic control coefficient K_rec(t), as in

K_rec(t) = K_sup(t_1) e^(−γ(t − t_1)), (26)

where t_1 indicates the beginning of the RSR and γ represents the regulating factor that adjusts the scheduled time for decreasing the coefficient, as shown in Figure 11. K_sup(t_1) indicates the control coefficient at t_1.

As illustrated in Equation (26), a large γ accelerates the RSR but produces a severe SFD; therefore, γ should not be set to too large a value, otherwise an unexpectedly severe SFD is caused. A sufficiently small γ avoids the SSFD, but delays the RSR period. Since the exponentially scheduled K_rec cannot decrease to zero, P_ref was changed to P_MPPT 20 s after the rotor speed was recovered.
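The scheduled decay of the coefficient can be visualized with the short sketch below; it assumes the exponential form of Equation (26) as reconstructed above, with γ controlling the trade-off between recovery speed and the depth of the SFD.

```python
import numpy as np

def K_rec(t, t1, K_sup_t1, gamma=0.12):
    """Equation (26): exponential decay of the control coefficient after t1."""
    return K_sup_t1 * np.exp(-gamma * (t - t1))

# A larger gamma shrinks the coefficient faster (quicker rotor speed
# recovery) but drops the injected power more abruptly, deepening the SFD.
for gamma in (0.05, 0.12, 0.5):
    print(f"gamma={gamma}: K_rec at t = 20 s is {K_rec(20.0, 0.0, 1.0, gamma):.3f}")
```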
Model System

To study the effectiveness of the proposed IRI and RSR scheme, four cases under various wind speed conditions and disturbance sizes were carried out using the test system shown in Figure 12. As disturbances, SG_4, which generated 80 MW, was tripped for Case 1 and Case 2, and SG_4, which generated 140 MW, was tripped for Case 3 and Case 4.

The performance of the proposed inertial response imitation and RSR control scheme was compared to conventional scheme #2 [11] (denoted as the conventional scheme in the simulation results) without the RSR, and to the conventional scheme with the proposed RSR. In both the conventional and proposed inertial control schemes, δ was set to 50, α in Equation (23) was set to 100, and γ was set to 0.12.

Case 1: Wind Speed = 10 m/s, Disturbance = 80 MW

Figure 13 illustrates the results for Case 1. The frequency nadirs with the MPPT operation, the conventional scheme, and the proposed inertial control scheme were 59.378 Hz, 59.578 Hz, and 59.638 Hz, respectively. The frequency nadir of the proposed IRI scheme was the highest, since the output power was significantly larger than in the conventional scheme owing to the coupling of the control coefficient with the frequency deviation, as illustrated in Figure 13a,b.
In the conventional scheme there was only a small second frequency drop, owing to the smaller power drop during the RSR (see Figure 13b); when the proposed RSR scheme was applied to both the conventional scheme and the proposed scheme, the SFD was minimized thanks to the smooth power decrease, as indicated in Figure 13a.

Case 2: Wind Speed = 8 m/s, Disturbance = 80 MW

The maximum ISFE with the MPPT operation was 0.628 Hz, almost the same as in Case 1, since only the synchronous generators support the dynamic frequency. However, the frequency nadirs with the conventional and proposed IRI schemes were 59.478 Hz and 59.549 Hz, respectively, lower than in Case 1 due to the decreased rotating energy of the rotor; the improvement in the frequency nadir for the proposed inertial control scheme was 0.071 Hz. Since the gap between the power reference and the MPPT curve becomes small, a smaller SFD of the conventional scheme is caused, as shown in Figure 14a,b. As in Case 1, when the proposed RSR control coefficient was applied to the conventional and proposed schemes, the SFD was minimized by smoothly decreasing the output power.

Case 3: Wind Speed = 8 m/s, Disturbance = 140 MW

Compared to Case 2, a larger disturbance occurred in this case. As a result, the frequency nadirs of all schemes, at 58.886 Hz, 59.060 Hz, and 59.217 Hz, became lower. The improvement in the frequency nadir between the proposed and conventional IRI schemes was 0.105 Hz, since the proposed control coefficient becomes large with increasing frequency deviation, as shown in Figure 15d. Thus, the proposed IRI scheme can boost the frequency nadir even under a severe disturbance. During the RSR period, an SFD down to 59.477 Hz was caused in the conventional scheme due to the sudden power drop (see Figure 15b). However, as in the previous cases, when the proposed RSR scheme was applied to the conventional scheme and to the proposed scheme, the SFD was minimized due to the smooth power drop during the RSR period, as indicated in Figure 15.

Case 4: Random Wind Speed, Disturbance = 140 MW

Compared to Case 3, random wind speed conditions were employed instead of a fixed wind speed, as shown in Figure 16. As a result, the frequency nadirs of the proposed and conventional IRI schemes were 59.187 Hz and 59.016 Hz, respectively, as indicated by the red and blue lines. These were lower than in Case 3 due to the decreasing wind speed during the frequency disturbance. In the RSR period, the proposed RSR scheme reduced the second frequency drop, as indicated by the solid red and solid blue lines in Figure 17.
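The qualitative behavior reported in Cases 1-4 can be reproduced with a heavily simplified single-machine swing-equation model. Everything below — inertia constant, damping, governor response, and disturbance size — is an assumed toy parameterization, not the test system of Figure 12; the sketch only illustrates why the supplementary loops raise the frequency nadir.

```python
import numpy as np

def simulate(K_in=0.0, K_droop=0.0, H=4.0, D=1.0, R_gov=0.05,
             dP_dist=-0.1, dt=0.01, T=30.0, f_nom=60.0):
    """Toy single-machine model: 2H d(df)/dt = dP_gov + dP_wind + dP_dist - D*df,
    where df is the per-unit frequency deviation."""
    n = int(T / dt)
    df = np.zeros(n)
    dfdt = 0.0
    for i in range(1, n):
        dP_gov = -0.1 * df[i - 1] / R_gov             # crude primary (governor) response
        dP_wind = -K_in * dfdt - K_droop * df[i - 1]  # DFIG supplementary loops
        dfdt = (dP_gov + dP_wind + dP_dist - D * df[i - 1]) / (2.0 * H)
        df[i] = df[i - 1] + dfdt * dt
    return f_nom * (1.0 + df)

for label, (k_in, k_dr) in [("MPPT only", (0.0, 0.0)),
                            ("with IRI loops", (5.0, 20.0))]:
    f = simulate(K_in=k_in, K_droop=k_dr)
    print(f"{label}: frequency nadir = {f.min():.3f} Hz")
```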
Discussion

In the proposed scheme, a control coefficient for the inertial imitation period and a control coefficient for the rotor speed recovery period were suggested to improve the frequency nadir and to alleviate the negative effects of RSR on the system frequency. The capability in both the IRI period and the RSR period is demonstrated by the simulation results.
As shown in the simulation results, the proposed IRI scheme improves the frequency nadir since the control coefficient is related to the rotor speed and the frequency deviation. As the rotor speed decreases, the control coefficient for the IRI period decreases, weakening the improvement in the frequency nadir, as indicated in Case 1, Case 2, and Case 4. As the size of the disturbance increases, the control coefficient becomes greater and improves the frequency nadir, as indicated in Case 2 and Case 3. As shown in the results for the RSR period, the control coefficient applied to the conventional scheme and the proposed scheme effectively alleviates the negative effects of RSR on the system frequency, because the suggested coefficient decreases gradually. In contrast, the conventional scheme without RSR scheduling results in a large SSFD.
From the viewpoint of the frequency nadir, the proposed scheme (solid red line) performed better than the conventional scheme due to its higher frequency nadir. From the viewpoint of reducing the second frequency drop, both the proposed scheme (solid red line) and the conventional scheme with the RSR scheme (solid blue line) could remove the second frequency drop.
The joint probability of the tripping of a synchronous generator is low. As the wind power penetration level increases, wind turbine generators will become the dominant frequency support devices. Therefore, a wind turbine with kinetic energy but without reserve power can participate in inertial response imitation to support the system frequency while effectively regaining the rotor speed without causing an SSFD.
Conclusions
This paper has proposed an innovative IRI and RSR control scheme to provide better frequency response and RSR services for an electric power grid. To this end, the coupling relationship between the control coefficient of the DFIG and the frequency deviation was established by using an exponential function, so that the control coefficient becomes large with increasing frequency deviation. Then, an exponential function was employed to schedule the dynamic control coefficient of the RSR to alleviate the negative effects of RSR on the system frequency.
The simulation studies clearly indicate that the proposed method improves the system frequency stability more than the conventional schemes under various disturbance and wind speed scenarios. As the disturbances and wind speeds became large, the improvement in the frequency nadir was evident. Furthermore, the proposed adaptive control coefficient alleviates the negative effects of RSR on the system frequency.
The benefits of this study can be summarized as follows.
(1) The control coefficient during the IRI period was defined as a function of the rotor speed and the frequency deviation based on the exponential function. The control coefficient becomes large with increasing frequency deviation and rotor speed, improving the frequency nadir under various disturbance and wind conditions.
(2) The exponential function was employed to schedule the dynamic control coefficient during the RSR period. The control coefficient gradually decreases to avoid a sudden reduction in the output power, thereby alleviating the size of the SSFD.
Figure 2. Diagram of the rotor-side converter controller.
Figure 3. Diagram of the grid-side converter controller.
Figure 7. Structure of the proposed inertial control scheme.
Figure 8. K_sup for the proposed IRI scheme during the IRI period.
Figure 9. K_sup for the proposed and conventional IRI schemes.
Figure 10. Diagram of ∆P_dr/P_MPPT for the proposed and conventional IRI schemes.
Figure 11. Diagram of the control coefficients for the proposed scheme during the RSR period.
Figure 12. Single-line model system with a DFIG-based wind farm.
"year": 2023,
"sha1": "fea3d0bdc5e5f8eb98eea6ed9d28db5f0bf58fa8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2079-9292/12/4/1029/pdf?version=1676871902",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c582948b27efec59501034e5b0dc97920c88582f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
Entanglement is a fundamental resource in quantum information processing, yet understanding its manipulation and transformation remains a challenge. Many tasks rely on highly entangled pure states, but obtaining such states is often challenging due to the presence of noise. Typically, entanglement manipulation procedures involving asymptotically many copies of a state are considered to overcome this problem. These procedures allow for distilling highly entangled pure states from noisy states, which enables a wide range of applications, such as quantum teleportation and quantum cryptography. When it comes to manipulating entangled quantum systems on a single copy level, using entangled states as catalysts can significantly broaden the range of achievable transformations. Similar to the concept of catalysis in chemistry, the entangled catalyst is returned unchanged at the end of the state manipulation procedure. Our results demonstrate that despite the apparent conceptual differences between the asymptotic and catalytic settings, they are actually strongly connected and fully equivalent for all distillable states. Our methods rely on the analysis of many-copy entanglement manipulation procedures which may establish correlations between different copies. As an important consequence, we demonstrate that using an entangled catalyst cannot enhance the asymptotic singlet distillation rate of a distillable quantum state. Our findings provide a comprehensive understanding of the capabilities and limitations of both catalytic and asymptotic state transformations of entangled states, and highlight the importance of correlations in these processes.
Introduction
Entanglement is a key feature of quantum mechanics, and it plays a vital role in many areas of quantum information science. Being a strong form of correlations between quantum systems, entanglement enables a wide range of applications and protocols that have the potential to revolutionize information processing and communication [1,2]. The study of entanglement and its properties has led to significant advancements in our understanding of quantum mechanics, and it has provided insights into how to manipulate and harness its power for practical applications [3].
To understand the pivotal role of entanglement as a resource in quantum information processing, we can consider the distant lab paradigm [4,5]. This scenario involves two parties, Alice and Bob, who are located in different quantum laboratories and can exchange classical messages to communicate with each other. In this setting, entangled states shared between Alice and Bob become a valuable resource, allowing them to perform tasks that would otherwise be impossible [3].
One of the most significant applications of entanglement is in the field of quantum communication, including quantum teleportation [1] and quantum cryptography [2]. These tasks typically rely on singlets, which are pure highly entangled states of two qubits. However, in practice, Alice and Bob may only have access to noisy states. In order to use noisy states for singlet-based protocols, they can employ entanglement distillation [4,6], which is a special case of asymptotic state transformations. In this process, n copies of an initial state are transformed approximately into rn copies of the final state, where r is the transformation rate. Quantum states which can be distilled into singlets at a nonzero rate are called distillable. There exist noisy entangled states which cannot be distilled into singlets, a phenomenon known as bound entanglement [7].
Another way how Alice and Bob can gain access to singlets from noisy states is to use entanglement catalysis. In this process, an auxiliary entangled state, known as a catalyst, is employed to aid in the transformation of one entangled state to another without altering the catalyst itself [8]. Recent work [9][10][11][12] extended this idea to approximate catalysis, where the transformation can be achieved with a certain degree of inaccuracy. This concept has proven to be instrumental in advancing our understanding of catalytic entanglement manipulation and its potential applications [13].
At first glance, catalytic and asymptotic transformations may seem like distinct concepts, but recent research has uncovered a strong connection between them. Initial evidence for a connection between these concepts was presented in [14,15], and subsequent work has made significant progress in this direction, particularly through the use of approximate catalysis [9,11]. Furthermore, it has been shown that in quantum thermodynamics, catalysis and many-copy transformations with a unit rate are fully equivalent [16,17]. Given the shared features between quantum entanglement and thermodynamics [18][19][20][21], it is plausible that a similar equivalence may exist in entanglement theory.
In this article, we resolve this question by considering catalytic and asymptotic protocols which can establish a nonvanishing amount of correlations. This provides a more flexible and practical approach for studying catalysis and asymptotic transformations and their applications in quantum information processing. In this setting, we prove that for distillable states, catalysis and asymptotic transformations with unit rate are fully equivalent notions of entangled state manipulation. We discuss several applications of our results, including the crucial finding that the addition of a catalyst cannot increase the distillable entanglement of a noisy distillable state.
Asymptotic entanglement manipulations and catalysis
As previously discussed, asymptotic transformations are a powerful tool for understanding the structure and manipulation of quantum entanglement. For instance, consider two bipartite pure states |ψ⟩ and |φ⟩. The objective is to use local operations and classical communication (LOCC) to transform n copies of |ψ⟩ into m copies of |φ⟩, allowing for an error margin that vanishes in the limit of large n. The maximal ratio m/n defines the transformation rate. This framework is particularly useful if the target is the singlet state |ψ⁻⟩ = (|01⟩ − |10⟩)/√2, in which case the optimal rate is known as the distillable entanglement [4,6,22]. It coincides with the entanglement entropy E(|ψ⟩) = S(ψ_A) of the initial state, where S(ρ) = −Tr[ρ log₂ ρ] is the von Neumann entropy [6]. In a similar way, it is possible to define transformation rates for noisy states; we refer to the Supplemental Material for more details. A state is called asymptotically reducible onto another state if the transformation can be achieved with a rate of at least one [23]. This reflects the intuition that if |ψ⟩ is reducible onto |φ⟩, then |ψ⟩ is at least as valuable as |φ⟩ for any application that allows for asymptotic transformations.
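As a concrete illustration of the formula E(|ψ⟩) = S(ψ_A), the following Python snippet computes the entanglement entropy of a two-qubit pure state from its reduced density matrix; for the singlet it returns one ebit, as expected.

```python
import numpy as np

def entanglement_entropy(psi, dA=2, dB=2):
    """E(|psi>) = S(psi_A): von Neumann entropy of the reduced state on A."""
    psi = psi.reshape(dA, dB)                 # coefficient matrix of the bipartite state
    rho_A = psi @ psi.conj().T                # partial trace over B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]              # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)
print(entanglement_entropy(singlet))   # -> 1.0 (one ebit)
```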
Entanglement catalysis is a phenomenon where an entangled catalyst is used to facilitate the single-copy transformation of an entangled state into another without changing the state of the catalyst [8,24]. Given an entangled state |ψ⟩ and a target state |φ⟩, the aim is to find a catalytic state |η⟩ such that the transformation |ψ⟩ ⊗ |η⟩ → |φ⟩ ⊗ |η⟩ is possible by LOCC. The catalyst is particularly useful if it enables the transformation of |ψ⟩ into |φ⟩ which is not possible without the catalyst. Recently, the notion of catalysis has been extended to approximate catalysis in [9-12], which allows for some degree of inaccuracy in the catalytic transformation. The notion of approximate catalysis provides a more realistic model for practical implementations of catalytic entanglement manipulation and enables a broader range of applications [13]. It has been demonstrated in [9] that transformations between bipartite pure states in this scenario are fully determined by the entanglement entropy of the corresponding states. Therefore, for bipartite pure states approximate catalysis is fully equivalent to reducibility. Catalytic phenomena have been extensively studied not only in the context of entanglement, but also in other areas of quantum physics, such as quantum thermodynamics [16,17,25-27], where they are essential for understanding and manipulating quantum systems subject to constraints imposed by energy conservation.
As has been shown in [9], there is a close connection between asymptotic state transformations and catalysis. More precisely, if a state ρ is asymptotically reducible to another state σ, then a transformation from ρ into σ can also be achieved on the single-copy level with approximate catalysis [9]. However, it has remained a crucial open question whether the converse is also true, i.e., whether catalysis and asymptotic reducibility are fully equivalent notions for entangled state transformations. In this article, we introduce the frameworks of marginal asymptotic transformations and correlated catalysis, which allows us to resolve this question and establish the equivalence between catalysis and reducibility for all distillable quantum states.
Correlated catalysis and marginal reducibility
In the context of entanglement catalysis, an important generalization is to consider correlated catalysis, where the catalyst is allowed to have non-vanishing correlations with the system throughout the transformation process. This means that the system and the catalyst can remain correlated in the final state. More precisely, we say that ρ can be converted into σ via correlated catalysis if for any error margin ε > 0 there is an LOCC protocol Λ and a catalyst state τ such that

μ_SC = Λ[ρ_S ⊗ τ_C],  Tr_S[μ_SC] = τ_C,  ‖Tr_C[μ_SC] − σ_S‖₁ ≤ ε. (1)

Here, S denotes a possibly multipartite system, C denotes the catalyst, and ‖M‖₁ = Tr√(M†M) is the trace norm. In other words, the state μ_SC is obtained by applying an LOCC protocol Λ to the state ρ_S ⊗ τ_C, such that the marginal on C is preserved and the resulting state on S can be made arbitrarily close to the target state σ. Previous studies in quantum thermodynamics have explored the significance of correlations for catalytic state transformations, revealing that the presence of correlations between the system and catalyst can increase the transformation power of the procedure [16,17,27,28].
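The two conditions in Equation (1) — the catalyst marginal is returned exactly, and the system output is ε-close to σ in trace norm — can be checked numerically for the output μ_SC of any given protocol. The sketch below provides the necessary trace-norm and partial-trace helpers; the dimensions are placeholders.

```python
import numpy as np

def trace_norm(M):
    """||M||_1 = Tr sqrt(M^dagger M) = sum of the singular values of M."""
    return float(np.sum(np.linalg.svd(M, compute_uv=False)))

def partial_trace(rho, dS, dC, keep):
    """Reduce a state on S (dim dS) x C (dim dC) to S ('S') or C ('C')."""
    r = rho.reshape(dS, dC, dS, dC)
    return np.trace(r, axis1=1, axis2=3) if keep == "S" else np.trace(r, axis1=0, axis2=2)

def is_correlated_catalysis(mu_SC, tau_C, sigma_S, dS, dC, eps):
    """Check Eq. (1): mu_C = tau_C exactly, and ||mu_S - sigma_S||_1 <= eps."""
    mu_C = partial_trace(mu_SC, dS, dC, keep="C")
    mu_S = partial_trace(mu_SC, dS, dC, keep="S")
    return np.allclose(mu_C, tau_C) and trace_norm(mu_S - sigma_S) <= eps
```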
We now introduce the notion of marginal reducibility. We say that ρ can be reduced into σ in the marginals if for any arbitrarily small error margin there exists an LOCC protocol which can transform n copies of ρ into a state with approximately n marginals, each marginal being close to the desired state σ. Specifically, we require that for any ε, δ > 0 there exist an LOCC protocol Λ and integers m ≤ n such that the following conditions hold for all i ≤ m:

μ_m = Λ[ρ^⊗n],  ‖μ_m^(i) − σ‖₁ ≤ ε,  m/n ≥ 1 − δ. (2)

Here, μ_m is a state on m subsystems, each shared by Alice and Bob, and μ_m^(i) is the reduced state of μ_m on the i-th subsystem. Marginal asymptotic transformations have been previously studied in continuous variable systems in [29].
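The marginal conditions of Equation (2) can be verified in the same spirit. The helper below extracts the i-th single-copy marginal of a state on m subsystems of local dimension d; the example state is an arbitrary placeholder.

```python
import numpy as np

def marginal(mu, m, d, i):
    """Reduced state of mu (on m subsystems of dimension d) on subsystem i."""
    t = mu.reshape([d] * m + [d] * m)
    # trace out every subsystem except i (ket and bra indices pairwise)
    for j in reversed([k for k in range(m) if k != i]):
        t = np.trace(t, axis1=j, axis2=j + t.ndim // 2)
    return t

# Example: a product of identical states has every marginal equal to sigma
sigma = np.array([[0.75, 0.1], [0.1, 0.25]])
mu = np.kron(np.kron(sigma, sigma), sigma)
print(np.allclose(marginal(mu, 3, 2, 1), sigma))   # True
```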
Figure 1. Equivalence between catalysis and reducibility. The left part of the figure shows a state ρ being converted into another state σ through a catalytic transformation by using a catalyst in the state τ. The right part of the figure shows that ρ is asymptotically reducible into σ. Our results demonstrate that the two processes are equivalent for any pair of distillable states, assuming that both procedures can establish correlations.

It is worth discussing the difference between marginal reducibility and the notion of reducibility introduced in [23]. The latter is more stringent, as it requires that the final state μ_m is close to m copies of σ as a whole. However, for many quantum information processing tasks that rely on pure states |φ⟩, such as singlets in the bipartite case or GHZ states in the multipartite case, small perturbations of the state do not significantly affect its usefulness. In other words, the state μ_ε = (1 − ε)|φ⟩⟨φ| + ε𝟙/d is also useful for small enough ε. For marginal reducibility, it suffices that ρ^⊗n can be approximately converted into μ_ε^⊗n for any ε > 0, which, as we have argued above, is enough for many tasks based on pure states. Therefore, we suggest that the framework of marginal reducibility is particularly suitable when one aims to produce pure entangled states of high quality that are intended to be used independently.
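For the perturbed state μ_ε above, the deviation from |φ⟩ can be worked out in closed form: the eigenvalues of μ_ε − |φ⟩⟨φ| are ε(1/d − 1) and ε/d (with multiplicity d − 1), so ‖μ_ε − |φ⟩⟨φ|‖₁ = 2ε(1 − 1/d) vanishes linearly in ε. A quick numerical confirmation for a random pure state:

```python
import numpy as np

d, eps = 4, 0.01
phi = np.random.randn(d) + 1j * np.random.randn(d)
phi /= np.linalg.norm(phi)
proj = np.outer(phi, phi.conj())
mu_eps = (1 - eps) * proj + eps * np.eye(d) / d

# trace norm = sum of singular values, as in the earlier sketch
print(np.sum(np.linalg.svd(mu_eps - proj, compute_uv=False)))  # ~ 2*eps*(1 - 1/d) = 0.015
```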
In the remainder of this article we will focus on the relationship between correlated catalysis and marginal reducibility. Unless otherwise specified, we will refer to these concepts simply as catalysis and reducibility, respectively.
Catalysis-reducibility equivalence
As previously noted, there have been indications that catalysis and reducibility are interchangeable concepts for entangled state transformations [9,11,14,15]. The key contribution of this article is to establish this equivalence for any pair of distillable states, using the notions of catalysis and reducibility which include correlations, as outlined above in this article. We recall that a distillable state is a quantum state which can be converted into singlets at nonzero rate in the asymptotic limit.
Theorem 1. For any pair of distillable states ρ and σ reducibility and catalysis are fully equivalent.
We present a brief overview of the techniques employed to prove the theorem; more details can be found in the Supplemental Material. Firstly, we demonstrate that if the state ρ can be reduced to the state σ, then it is possible to achieve a catalytic transformation from ρ to σ, using techniques similar to those presented in prior works [9-11,16]. Subsequently, we establish the converse by explicitly constructing a reduction protocol that utilizes a catalytic conversion protocol from a distillable state ρ into σ. This involves several technical steps that are described in detail in the Supplemental Material. By combining these two results, we conclusively demonstrate the full equivalence of reducibility and catalysis for any pair of distillable states ρ and σ; see also Fig. 1.

Since all entangled two-qubit states are distillable [30], Theorem 1 implies that catalysis and reducibility are fully equivalent for all two-qubit states. For states beyond two qubits, Theorem 1 also applies if the target state σ is not distillable. Moreover, catalysis is generally at least as powerful as reducibility. With this in mind, Theorem 1 leaves open the possibility that there exist bound entangled states ρ that cannot be reduced to some state σ, yet a catalytic conversion from ρ to σ is possible. Thus, if catalysis and reducibility are not equally powerful on all quantum states, catalysis must show an advantage on some bound entangled initial states. This underscores the importance of investigating the relationship between these concepts in the general case, as it can provide insights into the nature of bound entanglement and the power of entanglement catalysis. Additionally, our findings can have practical implications for quantum information processing tasks where bound entangled states are known to play a significant role [31-33].
Going one step further, we investigate the role of catalysis for asymptotic transformation rates. Our findings reveal that the addition of a catalyst does not alter the asymptotic rate of transformation from a distillable state ρ into another state σ, again under the assumption that correlations can be established in the procedure. An important application of this result pertains to the scenario where the target state is a singlet |ψ⁻⟩. In this context, our analysis reveals that the correlations, which are typically established in the catalytic and asymptotic procedures considered earlier, vanish. This property allows us to explore the features of distillable entanglement when a catalyst is incorporated into the transformation, bringing us to the second main result of this article.
Theorem 2. Catalysis cannot increase the distillable entanglement of a distillable state.
The proof of the theorem combines the previously mentioned results on asymptotic transformation rates with the additional finding that correlations usually established in the involved procedures disappear if the target state is pure. We refer to the Supplemental Material for the proof and more details. Recalling that all entangled two-qubit states can be distilled into singlets [30], it follows that Theorem 2 applies to all two-qubit states. In general, our results leave open the possibility that bound entangled states could be activated into singlets through catalysis.
Our results have implications also beyond the scope of bipartite systems. It is worth noting that Theorem 1 can be generalized to the multipartite scenario. To this end, we consider multipartite distillable states, which are those multipartite states that can be distilled into singlets between each pair of parties with some nonzero rate in the asymptotic limit, see also Fig. 2. This includes all pure states which are entangled across any bipartition [34,35]. With this in mind, we can extend Theorem 1 to state that for any pair of multipartite distillable states, reducibility and catalysis are fully equivalent. Furthermore, Theorem 2 is also applicable to this scenario, indicating that the addition of a catalyst cannot enhance the multipartite distillable entanglement of any multipartite distillable state, we refer to the Supplemental Material for more details. The results obtained in the multipartite setting are in line with those in the bipartite setting and imply that if catalysis offers any benefit over reducibility, it can only be observed when the initial state is not distillable.
The limitation to distillable states in Theorem 1 can be overcome by allowing the borrowing of a pure state, that is, considering transformations from ρ ⊗ ψ to σ ⊗ ψ with some entangled pure state |ψ⟩. In this case, the state ρ ⊗ ψ is distillable, leading to the equivalence of catalysis and reducibility for any ρ and σ. Interestingly, this applies even if the borrowed state |ψ⟩ has arbitrarily little entanglement. Similarly, we can extend Theorem 2 to state that catalysis cannot increase the distillable entanglement of the state ρ ⊗ ψ, where ρ does not need to be distillable.
These findings offer a better understanding of the relationship between entanglement catalysis and many-copy transformations, and can have practical implications for the exploitation of entanglement in quantum information processing tasks.
Conclusions
In conclusion, our work establishes the complete equivalence between reducibility and catalysis for any pair of distillable states, which extends and confirms previous indications that these concepts are interchangeable for entangled state transformations. Furthermore, we have demonstrated that the addition of a catalyst does not alter the rate of asymptotic transformations between distillable states.
Our results shed new light on the nature of entanglement catalysis and entanglement-based protocols. The full equivalence between catalysis and reducibility for distillable states provides a clearer understanding of the limitations and capabilities of these tasks. We emphasize that our results assume that correlations can be established in the transformation procedures involved. This suggests that taking correlations into account can provide a more complete and accurate understanding of quantum information processing tasks which make use of catalysis. The methods developed in this article can guide the design of new protocols, where catalysis and correlations play a significant role. On the other hand, correlations disappear naturally for transformations into pure target states. This allows us to conclude that the addition of a catalyst cannot increase the asymptotic singlet distillation rate, provided that the initial state has non-zero distillable entanglement to begin with.
The manipulation of entanglement in the multipartite setting is a complex and challenging problem [36], and further research in this direction is required to fully understand and effectively utilize the power of multipartite entanglement in quantum information processing. Our findings are particularly relevant in this context, as they demonstrate the full equivalence of catalysis and reducibility for transformations between multipartite distillable states. These findings have significant implications for understanding the role of catalysis in communication protocols that rely on multipartite entangled states, such as quantum secret sharing [37,38].
Furthermore, our work opens up new avenues for research into the relationship between reducibility and catalysis in the general case, where the assumption of distillability cannot be made. Investigating this relationship can help us better understand the nature of bound entanglement and unlock the full potential of entanglement and general quantum resources in quantum information processing.
Note added. In an independent work [39] it has been shown with different methods that by using entanglement catalysis it is not possible to distill singlets from bound entangled states having positive partial transpose.
We thank Ludovico Lami for insightful comments on our manuscript. This work was supported by the "Quantum Optical Technologies" project, carried out within the International Research Agendas programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund, and the "Quantum Coherence and Entanglement for Quantum Technology" project, carried out within the First Team programme of the Foundation for Polish Science.

Supplemental Material

Theorem 1 of the main text states that marginal reducibility and correlated catalysis are equivalent for bipartite distillable states. The proof of the theorem follows from Propositions 3 and 4, which are given below. In the following, S denotes a possibly multipartite quantum system.

Proposition 3. Marginal reducibility from ρ_S onto σ_S implies that ρ_S can be converted into σ_S via correlated catalysis.
Proof. Let Λ be an LOCC protocol converting n copies of an initial state ρ into a state Γ = Λ[ρ^⊗n], which is a quantum state of the system S_1 ⊗ S_2 ⊗ · · · ⊗ S_n, where each S_i is a copy of the system S. In the following, Γ_i denotes the reduced state of Γ on S_1 ⊗ S_2 ⊗ · · · ⊗ S_i with Γ_0 = 1. Moreover, Γ_i^(j) is the reduced state of Γ_i on S_j for j ≤ i. Marginal reducibility of the state ρ into σ implies that for any ε > 0 and any δ > 0 there are integers m ≤ n and an LOCC protocol Λ such that

‖Γ_n^(i) − σ‖₁ ≤ ε for all i ≤ m,  m/n ≥ 1 − δ. (4)

We are now ready to present a state of the catalyst τ achieving the transformation ρ → σ,

τ_C = (1/n) Σ_{k=1}^{n} (Γ_{k−1} ⊗ ρ^{⊗(n−k)}) ⊗ |k⟩⟨k|_K, (5)

in analogy to the construction presented in [9] (see also [10,15]). Here, the states |k⟩ are orthonormal states of an auxiliary system K maintained by Alice. Using the LOCC protocol described above Eq. (7) in [9], the state ρ_S ⊗ τ_C is transformed into a state μ_SC with the property that Tr_S[μ_SC] = τ_C. Here, C denotes the system of the catalyst. What remains is to show that ‖μ_S − σ_S‖₁ can be made arbitrarily small. Indeed, we note that the state μ_S can be written as

μ_S = (1/n) Σ_{k=1}^{n} Γ_k^(k). (6)

Using Eqs. (4) we can further write

‖μ_S − σ_S‖₁ ≤ (1/n) Σ_{k=1}^{n} ‖Γ_k^(k) − σ‖₁ ≤ (m/n) ε + 2 (1 − m/n) ≤ ε + 2δ. (7)

The proof is complete by noting that ε > 0 and δ > 0 can be chosen arbitrarily.
To complete the proof of Theorem 1 we also need to prove the converse, which is established in the following proposition. Here we focus on bipartite systems, i.e., S = AB; the extension to multipartite settings is discussed below.
Proposition 4. If a distillable state ρ S can be converted into σ S via correlated catalysis, then ρ S is reducible onto σ S in the marginals.
Proof. Let τ be a state of the catalyst such that

μ_SC = Λ[ρ_S ⊗ τ_C],  Tr_S[μ_SC] = τ_C,  ‖μ_S − σ_S‖₁ ≤ δ (8)

for some δ > 0. We will now show that the conditions for marginal reducibility in Eqs. (2) are fulfilled in this case.
Since the state ρ is distillable, it is possible to distill some singlets and therefore to approximate any state τ via LOCC from a finite number of copies of ρ. In more detail, for any ε > 0 there is an integer k and an LOCC protocol Λ′ such that

‖Λ′(ρ^⊗k) − τ‖₁ ≤ ε. (9)

In the following, the state τ_ε = Λ′(ρ^⊗k) will be called the ε-approximation of the catalyst. Consider now the following protocol, acting on ρ^⊗n:

1. The last k copies of the state ρ^⊗n are converted into τ_ε via LOCC, i.e., ρ^⊗n → ρ^⊗(n−k) ⊗ τ_ε.
2. Each of the remaining n − k copies of ρ is converted approximately into the desired state, making repeated use of the state τ_ε.
We will now analyze the procedure described above in more detail. After the first state $\rho^{S_1}$ is converted using the $\varepsilon$-approximation of the catalyst $\tau_\varepsilon^C$, Alice and Bob share the state $\mu_1 \otimes \rho^{\otimes(n-k-1)}$, where the state $\mu_1$ is given as $\mu_1 = \Lambda(\rho^{S_1} \otimes \tau_\varepsilon^C)$, and $\Lambda$ is the same LOCC protocol as in Eqs. (8). Recalling that $\mu^C = \tau^C$ and that the trace norm does not increase under LOCC, we obtain the inequalities $\|\mu_1^C - \tau^C\|_1 \le \|\tau_\varepsilon - \tau\|_1 < \varepsilon$. Thus, the state $\mu_1^C$ is also an $\varepsilon$-approximation of the catalyst state, having the same precision as $\tau_\varepsilon$.
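A minimal sketch of the per-round bookkeeping, assuming only contractivity of the trace norm under the fixed LOCC map $\Lambda$ and the exact catalyst-return property $\Lambda(\rho \otimes \tau)^C = \tau$:

$$ \|\mu_i^C - \tau\|_1 \le \|\mu_{i-1}^C - \tau\|_1 \le \cdots \le \|\tau_\varepsilon - \tau\|_1 < \varepsilon, \qquad \|\nu^{S_i} - \sigma\|_1 \le \|\mu_{i-1}^C - \tau\|_1 + \|\mu^S - \sigma\|_1 < \varepsilon + \delta . $$

In particular, the approximation error does not accumulate over the $n - k$ rounds, which is what makes the repeated reuse of $\tau_\varepsilon$ possible.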
Alice and Bob now convert the second copy of the state $\rho^{S_2}$, using the catalyst approximation $\mu_1^C$. If we define $\mu^{S_2 C_2} = \Lambda(\rho^{S_2} \otimes \mu_1^C)$, then by the same arguments as above we find that $\mu^{C_2}$ is again an $\varepsilon$-approximation of the catalyst. Iterating this procedure for each copy of $\rho$, we arrive at a state $\nu^{S_1 \ldots S_{n-k}}$ on $S_1 \ldots S_{n-k}$; due to Eqs. (8) and the triangle inequality, the reduced states on $S_i$ fulfill $\|\nu^{S_i} - \sigma\|_1 < \delta + \varepsilon$ for all $i \le n-k$, which is Eq. (18). The above arguments show that for every $\varepsilon > 0$ and $\delta > 0$ we can convert $\rho^{\otimes n}$ into a state $\nu^{S_1 \ldots S_{n-k}}$ fulfilling Eq. (18). Interestingly, while the integer $k$ depends on $\varepsilon$ and $\delta$, the integer $n$ does not depend on these parameters. Thus, we can choose $n$ large enough, making $(n-k)/n$ arbitrarily close to 1. This proves marginal reducibility from $\rho$ to $\sigma$, and the proof of the proposition is complete.
Marginal and catalytic asymptotic transformation rates
A key quantity in asymptotic entanglement theory is the transformation rate, describing the optimal performance of an asymptotic transformation of a state $\rho^S$ into another state $\sigma^S$, where $S$ denotes a possibly multipartite system. We say that an asymptotic transformation from $\rho$ into $\sigma$ is possible at rate $r$ if for any $\varepsilon, \delta > 0$ there are integers $m, n$ with $m/n + \delta > r$, and an LOCC protocol $\Lambda$ such that $\|\Lambda(\rho^{\otimes n}) - \sigma^{\otimes m}\|_1 < \varepsilon$. The supremum over such achievable rates $r$ is the asymptotic transformation rate $R(\rho \to \sigma)$. Rates of this form have initially been studied in the context of singlet distillation [4,6,22], which will also be discussed in more detail below. A state $\rho$ is reducible to $\sigma$ in the notion of [23] if $R(\rho \to \sigma) \ge 1$.
Moreover, we say that $\rho$ can be converted into $\sigma$ with correlated catalysis at rate $r$ if for any $\varepsilon > 0$ and any $\delta > 0$ there exist integers $m, n$ with $m/n + \delta > r$, a catalyst state $\tau$, and an LOCC protocol $\Lambda$ such that $\mu = \Lambda(\rho^{\otimes n} \otimes \tau^C)$ fulfills $\|\mu^{S_1 \ldots S_m} - \sigma^{\otimes m}\|_1 < \varepsilon$ and $\mu^C = \tau^C$. The supremum over all such rates will be called the catalytic transformation rate $R_c(\rho \to \sigma)$. Analogously, we say that a marginal asymptotic transformation from $\rho$ to $\sigma$ is possible at rate $r$ if for any $\varepsilon > 0$ and any $\delta > 0$ there exist integers $m, n$ with $m/n + \delta > r$, and an LOCC protocol $\Lambda$ such that $\mu^{S_1 \ldots S_m} = \Lambda(\rho^{\otimes n})$ fulfills $\|\mu^{S_i} - \sigma\|_1 < \varepsilon$ for all $i \le m$. Here, $\mu^{S_1 \ldots S_m}$ is a state of the system $S_1 \otimes S_2 \otimes \cdots \otimes S_m$, where each $S_i$ is a copy of the system $S$. The largest value of $r$ fulfilling these properties will be called the marginal transformation rate $R_m(\rho \to \sigma)$. We note that a rate of this form has been defined previously in [29]. A state $\rho$ is said to be reducible to $\sigma$ in the marginals if $R_m(\rho \to \sigma) \ge 1$. Finally, we say that a state $\rho$ can be converted into $\sigma$ via marginal asymptotic transformations with correlated catalysis at rate $r$ if for any $\varepsilon, \delta > 0$ there exist integers $m, n$ with $m/n + \delta > r$, a state of the catalyst $\tau^C$, and an LOCC protocol $\Lambda$ such that $\mu = \Lambda(\rho^{\otimes n} \otimes \tau^C)$ fulfills $\|\mu^{S_i} - \sigma\|_1 < \varepsilon$ for all $i \le m$ and $\mu^C = \tau^C$. The maximal such rate will be called the marginal catalytic transformation rate $R_{mc}(\rho \to \sigma)$. It is straightforward to see that $R(\rho \to \sigma) \le R_c(\rho \to \sigma) \le R_{mc}(\rho \to \sigma)$ and $R(\rho \to \sigma) \le R_m(\rho \to \sigma) \le R_{mc}(\rho \to \sigma)$. We will now prove that for bipartite distillable states, catalysis cannot enhance marginal asymptotic transformation rates.
Proposition 5. For any two bipartite distillable states $\rho$ and $\sigma$ it holds that $R_{mc}(\rho \to \sigma) = R_m(\rho \to \sigma)$.

Proof. We will show that a marginal catalytic protocol achieving the rate $R_{mc}$ can always be used to construct a marginal protocol without the catalyst, achieving the same rate. For this, let $\tau$ be the state of the catalyst such that Eqs. (22) are fulfilled. In analogy to the proof of Proposition 4, recall that the state $\tau$ can be approximated by a state $\tau_{\varepsilon'}$, which can be obtained via LOCC from a finite number of copies of the initial state $\rho$, i.e., $\tau_{\varepsilon'} = \Lambda'(\rho^{\otimes k})$ and $\|\tau_{\varepsilon'} - \tau\|_1 < \varepsilon'$. Consider now the following LOCC protocol, acting on $n + k$ copies of $\rho$. In the first step, $k$ copies of the state $\rho$ are converted into $\tau_{\varepsilon'}$ via LOCC. After this step, the total state is given by $\rho^{\otimes n} \otimes \tau_{\varepsilon'}$. In the next step, Alice and Bob apply the LOCC protocol from Eqs. (22). The resulting state will be denoted by $\mu_1$, and can be written explicitly as $\mu_1 = \Lambda(\rho^{\otimes n} \otimes \tau_{\varepsilon'})$. Since the trace norm does not increase under LOCC, this implies the inequalities $\|\mu_1^C - \tau\|_1 \le \|\tau_{\varepsilon'} - \tau\|_1 < \varepsilon'$. The latter inequality implies that $\mu_1^C$ approximates the state $\tau$ with the same precision as $\tau_{\varepsilon'}$. Using Eqs. (22) and the triangle inequality we further obtain $\|\mu_1^{S_i} - \sigma\|_1 < \varepsilon + \varepsilon'$ for all $i \le m$.
We will now extend our analysis to $2n + k$ copies of the initial state $\rho$. Again, $k$ copies of $\rho$ are used to establish the state $\tau_{\varepsilon'}$, resulting in the total state $\rho^{\otimes n} \otimes \rho^{\otimes n} \otimes \tau_{\varepsilon'}$. The first $n$ copies of $\rho$ together with $\tau_{\varepsilon'}$ are converted into the state $\mu_1$, as described above in this proof, leading to the total state $\mu_1^{S_1 \ldots S_m C} \otimes \rho^{\otimes n}$. The remaining $n$ copies of $\rho$ are now converted with the LOCC protocol given in Eqs. (22), using $\mu_1^C$ as the catalyst state. Recall that $\mu_1^C$ approximates the state $\tau^C$ with the error $\varepsilon'$, which is the same as for $\tau_{\varepsilon'}$. The total state of the systems $S_{m+1} \ldots S_{2m}$ after this transformation will be denoted by $\mu_2^{S_{m+1} \ldots S_{2m} C} = \Lambda(\rho^{\otimes n} \otimes \mu_1^C)$. By the same arguments as above, we find that $\|\mu_2^{S_i} - \sigma\|_1 < \varepsilon + \varepsilon'$ for all $i \in [m+1, 2m]$. Iterating the above procedure $l$ times, we see that it is possible to convert the state $\rho^{\otimes(ln+k)}$ into a state $\nu^{S_1 \ldots S_{lm}}$ having the property that $\|\nu^{S_i} - \sigma\|_1 < \varepsilon + \varepsilon'$ for all $i \in [1, lm]$. Since this procedure works for any $l$, choosing $l$ large enough we can make $\frac{lm}{ln+k}$ arbitrarily close to $\frac{m}{n}$, and thus also arbitrarily close to $R_{mc}$. This proves that it is possible to convert $\rho$ into $\sigma$ at rate $R_{mc}$ via marginal asymptotic transformations, and the proof of the proposition is complete.
We note that the role of catalysis for many-copy transformations between bipartite pure states has been studied earlier in [15]. In particular, it was shown that multiple-copy transformations, with the aid of a pure catalyst, are equivalent to the single-copy catalytic transformation with arbitrary pure catalyst states [15].
Marginal and catalytic transformations for pure target states
Here we will focus on a bipartite setting with a pure target state $|\varphi\rangle$. A case of particular interest is when the target state is a singlet, in which case $R(\rho \to \psi^-)$ is known as the distillable entanglement $E_d(\rho)$ [4,6,22]. Using the fact that pure-state transformations are asymptotically reversible [6], it is straightforward to see that for pure target states $R(\rho \to \varphi) = E_d(\rho)/S(\varphi^A)$, where $S(\varphi^A)$ denotes the entanglement entropy of $|\varphi\rangle$. We will now show that $R_m$ and $R$ coincide for pure target states.
Proposition 6. The marginal transformation rate coincides with the standard transformation rate for pure target states: $R_m(\rho \to \varphi) = R(\rho \to \varphi)$.

Proof. If $S(\varphi^A) = 0$, the target state is not entangled, and both $R(\rho \to \varphi)$ and $R_m(\rho \to \varphi)$ diverge in this case. Without loss of generality we assume $S(\varphi^A) > 0$. We introduce a slightly different task of transforming a state $\rho$ asymptotically into a state with marginals having distillable entanglement close to $E_d(\varphi) = S(\varphi^A)$. Here, we say that a transformation at rate $r$ is possible if for any $\varepsilon, \delta > 0$ there exist integers $m, n$ with $m/n + \delta > r$ and an LOCC protocol $\Lambda$ such that the marginals $\mu^{S_i}$ of $\Lambda(\rho^{\otimes n})$ fulfill $|E_d(\mu^{S_i}) - E_d(\varphi)| < \varepsilon$ for all $i \le m$. The maximal such rate will be denoted $\tilde{R}_m(\rho, \varphi)$. Recall that the distillable entanglement is bounded from below and above in terms of entropic quantities [40,41]. This implies that $E_d$ is continuous in the vicinity of any pure state, and therefore $\tilde{R}_m(\rho, \varphi) \ge R_m(\rho \to \varphi)$.
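A sketch of the bounds presumably meant by the reference to [40,41] — the hashing (coherent-information) lower bound together with the marginal-entropy upper bound; the exact statements cited are our assumption:

$$ \max\{S(\rho^A), S(\rho^B)\} - S(\rho) \;\le\; E_d(\rho) \;\le\; \min\{S(\rho^A), S(\rho^B)\} . $$

Both sides are continuous and coincide with $S(\varphi^A)$ at any pure state $\varphi$, so $E_d$ is squeezed between continuous functions there, which yields the continuity used above.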
Consider now an LOCC protocol achieving Eqs. (35). Using Eq. (33) and the properties of distillable entanglement (see also Proposition 11) we find that the total distillable entanglement of the output marginals is bounded by $n E_d(\rho)$. Using this inequality and Eqs. (35) we further obtain $m(S(\varphi^A) - \varepsilon) \le n E_d(\rho)$, which implies that $\frac{m}{n} \le \frac{E_d(\rho)}{S(\varphi^A) - \varepsilon}$. Using Eqs. (35) once again we arrive at $r \le \frac{E_d(\rho)}{S(\varphi^A) - \varepsilon} + \delta$. Recalling that $\varepsilon, \delta > 0$ can be chosen arbitrarily, we conclude that $\tilde{R}_m(\rho, \varphi) \le E_d(\rho)/S(\varphi^A) = R(\rho \to \varphi)$. Collecting the above arguments we have $R(\rho \to \varphi) \le R_m(\rho \to \varphi) \le \tilde{R}_m(\rho, \varphi) \le R(\rho \to \varphi)$, which shows that these inequalities are actually equalities.
We note that this proposition can be extended to quantum resource theories beyond entanglement; we refer to Proposition 13.
From the above proposition, it follows that in this setting marginal reducibility is equivalent to reducibility as defined in [23]. This means that for pure target states in the bipartite setting, allowing for correlations between the marginals does not improve the transformation rate. Combining the above results, we can now prove the following proposition.
Proposition 7. For any bipartite distillable state $\rho$ and any bipartite pure state $|\varphi\rangle$ it holds that $R(\rho \to \varphi) = R_c(\rho \to \varphi) = R_m(\rho \to \varphi) = R_{mc}(\rho \to \varphi)$.

Proof. The proof follows by combining Propositions 5 and 6 with Eqs. (24) and (33).
Interestingly, it remains unclear whether Proposition 6 extends to the multipartite setting if $|\varphi\rangle$ is a general multipartite pure state. However, as we will see below, Proposition 6 also applies if $|\varphi\rangle$ is a specific multipartite state, comprising a singlet shared between each pair of parties.
We will now investigate catalytic transformations with pure target states. We say that a state $\rho$ can be converted into $\sigma$ via correlated catalysis with decoupling if for any $\varepsilon > 0$ there is a catalyst state $\tau$ and an LOCC protocol $\Lambda$ such that $\|\Lambda(\rho^S \otimes \tau^C) - \sigma^S \otimes \tau^C\|_1 < \varepsilon$ [9,12]. Note that in this framework, the correlations between the primary system $S$ and the catalyst $C$ can be made vanishingly small. As we show in the following proposition, for pure target states $\sigma = \varphi$, correlated catalysis is equivalent to correlated catalysis with decoupling. Here, $S$ denotes a possibly multipartite system.

Proposition 8. A state $\rho^S$ can be converted into a pure state $|\varphi\rangle^S$ via correlated catalysis if and only if the conversion is possible via correlated catalysis with decoupling.

Proof. Assume that the transformation $\rho \to \varphi$ is possible via correlated catalysis, i.e., for any $\varepsilon > 0$ there exists a catalyst state $\tau$ and an LOCC protocol $\Lambda$ such that Eqs. (1) are fulfilled. Using Lemma 10, which is given below, we see that the catalyst decouples in this procedure, and moreover $\|\Lambda(\rho^S \otimes \tau^C) - \varphi^S \otimes \tau^C\|_1$ vanishes with $\varepsilon$. This shows that the existence of a correlated catalytic transformation from $\rho$ into $|\varphi\rangle$ implies that the transformation is also possible via correlated catalysis with decoupling. The converse is straightforward, noting that correlated catalysis is at least as powerful as correlated catalysis with decoupling in general.
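A hedged quantitative reading of the decoupling step — the explicit route below, via the Fuchs–van de Graaf inequalities, is our reconstruction; Lemma 10 itself is stated further down:

$$ \|\mu^S - \varphi\|_1 < \varepsilon \;\Rightarrow\; F(\mu^S, \varphi) \ge 1 - \tfrac{\varepsilon}{2} \;\Rightarrow\; \lambda_1 \ge 1 - \varepsilon \;\Rightarrow\; \|\mu^{SC} - \varphi^S \otimes \mu^C\|_1 = O(\sqrt{\varepsilon}) , $$

so a correlated-catalytic protocol reaching a pure target automatically leaves the system and the catalyst in an almost-product state.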
Finally, we will now show that for pure target states in the bipartite setting, correlated catalysis with decoupling is equivalent to the notion of reducibility defined in [23].
Proposition 9. The following statements are equivalent for any bipartite distillable state $\rho$ and any bipartite pure state $|\varphi\rangle$:

1. $\rho$ is reducible onto $|\varphi\rangle$;

2. $\rho$ can be converted into $|\varphi\rangle$ via correlated catalysis with decoupling;

3. $R_c(\rho \to \varphi) \ge 1$.

Proof. From Theorem 1, we see that marginal reducibility from $\rho$ to $|\varphi\rangle$ is equivalent to the existence of a correlated catalytic transformation from $\rho$ to $|\varphi\rangle$. Proposition 6 implies that in this setting marginal reducibility is equivalent to the notion of reducibility defined in [23]. Proposition 8 further implies that correlated catalysis is equivalent to correlated catalysis with decoupling. This proves that conditions 1 and 2 are equivalent. The equivalence of condition 3 follows from Proposition 7.
We complete this section with the following lemma.
Lemma 10. For any quantum state $\mu^{SC}$, closeness of the marginal $\mu^S$ to a pure state $\varphi$ in the sense of Eq. (45), i.e., $\|\mu^S - \varphi\|_1 < \varepsilon$, implies the decoupling inequality (46): $\|\mu^{SC} - \varphi^S \otimes \mu^C\|_1$ is bounded by a function of $\varepsilon$ that vanishes as $\varepsilon \to 0$.

Proof. Note that Eq. (45) implies the inequality $F(\mu^S, \varphi) \ge 1 - \varepsilon/2$, with fidelity $F(\rho, \sigma) = \mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}$. The state $\mu^S$ has a purification with Schmidt coefficients $\lambda_i$ sorted in decreasing order, and due to Eq. (47) the largest coefficient $\lambda_1$ is bounded from below. Let now $|\nu\rangle^{SCD}$ be a purification of $\mu^{SC}$, and observe that it can be written with the same Schmidt coefficients $\lambda_i$ as in Eq. (48), where $\{|\alpha_i\rangle\}$ is an orthonormal basis on $CD$. Using the fact that the fidelity does not decrease under partial trace, we obtain a corresponding fidelity bound for $\mu^{SC}$. Using the inequality $\|\rho - \sigma\|_1/2 \le \sqrt{1 - F(\rho, \sigma)^2}$ we arrive at a trace-norm bound, and noting that the trace norm does not increase under partial trace, we obtain the analogous bound on the marginals. We now use the triangle inequality, together with Eq. (53) and Eq. (45), and once again the triangle inequality, to obtain Eq. (46).

Proof of Theorem 2

Let us introduce the catalytic distillable entanglement $E_d^c(\rho) = R_c(\rho \to \psi^-)$, that is, the optimal rate of obtaining singlets with the help of a correlated catalyst. Theorem 2 of the main text states that the catalytic distillable entanglement coincides with the standard distillable entanglement for any distillable state, i.e., $E_d^c(\rho) = E_d(\rho)$. This is a direct consequence of Proposition 7.
Extending Theorems 1 and 2 to multipartite settings
We can generalize Theorems 1 and 2 to a multipartite setting as follows: let us consider distillation into singlets shared between any pair of parties. Let us denote the parties by $A_1, \ldots, A_n$, and let us collect all pairs in $P = \{(A_i, A_j) \mid i < j\}$. The state that we would like to distill is $\Phi = \bigotimes_{p \in P} \psi^-_p$ (see also Fig. 2), and it lives in $\mathcal{H}_{\mathrm{total}} = \bigotimes_{p \in P} \mathcal{H}_p = \bigotimes_{(A_i, A_j) \in P} \mathcal{H}_{A_i} \otimes \mathcal{H}_{A_j}$. Note that to each party we associate a different subsystem for every pair, so the total number of subsystems is $n(n-1)/2$. Let us call $E_d(\rho) = R(\rho \to \Phi)$ the optimal standard asymptotic rate, and call states for which $E_d(\rho) > 0$ distillable. Theorem 1 follows from Propositions 3 and 4. Since Proposition 3 does not assume any bipartite structure, it also holds in the multipartite setting. Furthermore, for any state that is distillable, the construction provided in the proof of Proposition 4 also works, so in combination we have Theorem 1.
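As a concrete illustration (our own example, with $n = 3$ parties):

$$ P = \{(A_1, A_2), (A_1, A_3), (A_2, A_3)\}, \qquad \Phi = \psi^-_{A_1 A_2} \otimes \psi^-_{A_1 A_3} \otimes \psi^-_{A_2 A_3} , $$

so there are $n(n-1)/2 = 3$ pairwise singlets, and each party holds $n - 1 = 2$ local registers, one for each pair it participates in.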
To obtain Theorem 2, we relied on Propositions 5 and 6. It is clear that Proposition 5 extends easily to this setting, so we only have to show that Proposition 6 also holds, at least around $\Phi$. In the multipartite setting, $E_d$ is still monotonic, additive under tensor products, and superadditive. Therefore, it is enough to show that it is lower semi-continuous near $\Phi$. Now, suppose we have an LOCC map $T$ that maps a state $\rho$ into $\mathcal{H}_{\mathrm{total}}$. Let $r = \min_{p \in P} E_d^p(T(\rho))$, where $E_d^p$ is the bipartite distillable entanglement between the factors in $p$. Then, for any pair of parties we can obtain at least $r$ singlets per copy of $\rho$. Furthermore, these distillation protocols can be run independently because they act on different factors of $\mathcal{H}_{\mathrm{total}}$. Therefore, we have $E_d(\rho) \ge \min_{p \in P} E_d^p(T(\rho))$. Since $E_d^p$ is simply the bipartite distillable entanglement, the hashing bound [41] provides a continuous lower bound. Therefore we would be finished if we could show that it is tight for $\Phi$. Now, $\Phi$ contains a singlet for every pair of parties, so we can choose $T$ to separate these singlets for each party. We can verify that in this case the hashing bound gives $E_d(\Phi) \ge 1$, which is indeed tight. Therefore, Theorem 2 also holds in the multipartite setting.
Strong superadditivity of asymptotic transformation rates
We will show that the transformation rate into any pure state $\varphi$ is superadditive. In particular, this means that the bipartite distillable entanglement is strongly superadditive, and the same holds true for the multipartite distillable entanglement as defined above. In the following, $S_1$ and $S_2$ denote two (possibly multipartite) systems.
Proposition 11. For any state $\mu^{S_1 S_2}$ and any pure state $\varphi$, we have $R(\mu^{S_1 S_2} \to \varphi) \ge R(\mu^{S_1} \to \varphi) + R(\mu^{S_2} \to \varphi)$.

Proof. Let us take feasible rates $r_i < R(\mu^{S_i} \to \varphi)$ and show that $r_1 + r_2$ is a feasible rate for the transformation $\mu^{S_1 S_2} \to \varphi$.
To do so, we have to show that for any $\varepsilon, \delta > 0$ there exist $m, n$, and an LOCC protocol $\Lambda$ such that $\|\Lambda((\mu^{S_1 S_2})^{\otimes n}) - \varphi^{\otimes m}\|_1 < \varepsilon$ and $m/n + \delta > r_1 + r_2$. Fix arbitrary $\varepsilon, \delta > 0$; without loss of generality, let us assume $\varepsilon < 1$. In the following, we denote the space of $S_i^{\otimes n}$ by $\mathcal{S}_i$, where $i \in \{1, 2\}$. Since the rates $r_i$ are feasible, there exist $m_i$, $n$, and LOCC protocols $\Lambda_i$ such that $\|\Lambda_i((\mu^{S_i})^{\otimes n}) - \varphi^{\otimes m_i}\|_1 < \varepsilon'$ (63a) and $\frac{m_i}{n} + \frac{\delta}{2} > r_i$ (63b). Here, $\Lambda_i$ is an LOCC map acting on the space $\mathcal{S}_i$. Note that without loss of generality we can choose the same $n$ for both systems, since otherwise we can take the product $n = n_1 n_2$ and update $m_i$, $\Lambda_i$ accordingly. From Lemma 10 and Eq. (63a), it follows that the output of $\Lambda_1$ approximately decouples from the rest of the system. Using the data-processing inequality of the trace norm, and the triangle inequality along with Eq. (63a), we conclude that $\Lambda_1 \otimes \Lambda_2$ maps $(\mu^{S_1 S_2})^{\otimes n}$ close to $\varphi^{\otimes(m_1 + m_2)}$, with $\varepsilon'$ chosen small enough that the combined error is below $\varepsilon$. Also note that, from Eq. (63b), $\frac{m_1 + m_2}{n} + \delta > r_1 + r_2$, which completes the proof.
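A hedged sketch of how the two branch errors combine — the particular split of $\varepsilon$ and $\delta$ below is our choice, not fixed by the surviving text:

$$ \big\|\Lambda_1 \otimes \Lambda_2\big((\mu^{S_1 S_2})^{\otimes n}\big) - \varphi^{\otimes(m_1 + m_2)}\big\|_1 \;\le\; O(\sqrt{\varepsilon'}) + 2\varepsilon' , $$

where the $O(\sqrt{\varepsilon'})$ term comes from Lemma 10 (the marginal on the first output block is $\varepsilon'$-close to the pure state $\varphi^{\otimes m_1}$, forcing near-product structure of the joint output), the $2\varepsilon'$ term collects the two branch errors of Eq. (63a), and $\varepsilon'$ is then chosen small enough that the total is below $\varepsilon$.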
Properties of catalytic and marginal transformation rates
In the following we will provide a general upper bound on the catalytic and marginal transformation rates in terms of the squashed entanglement of the corresponding quantum states. Squashed entanglement is defined as [42] $E_{sq}(\rho^{AB}) = \inf \frac{1}{2} I(A;B|E)$, where the infimum is taken over all extensions $\rho^{ABE}$ of $\rho^{AB}$ and $I(A;B|E)$ denotes the conditional mutual information.

Proposition 12. The rates $R_m$ and $R_{mc}$ are bounded as $R_m(\rho \to \sigma) \le R_{mc}(\rho \to \sigma) \le E_{sq}(\rho)/E_{sq}(\sigma)$.

Proof. We introduce a slightly different version of the transformation rate, which we will call the squashed transformation rate. In this framework, we say that a state $\rho$ can be converted into $\sigma$ at rate $r$ if for all $\varepsilon, \delta > 0$ there exist integers $m, n$ with $m/n + \delta > r$, a catalyst state $\tau$, and an LOCC protocol $\Lambda$ such that $\mu = \Lambda(\rho^{\otimes n} \otimes \tau)$ fulfills $\mu^C = \tau$ and $|E_{sq}(\mu^{S_i}) - E_{sq}(\sigma)| < \varepsilon$ for all $i \le m$ (68b). The squashed transformation rate is the maximal such rate, and it will be denoted by $R_{sq}$. By continuity of squashed entanglement [43], it is clear that $R_{sq}(\rho \to \sigma) \ge R_{mc}(\rho \to \sigma)$.
Consider now an LOCC protocol $\Lambda$ and a catalyst state $\tau$ achieving Eqs. (68). Using the properties of squashed entanglement [42] — monotonicity under LOCC, additivity on tensor products, and superadditivity — we find $n E_{sq}(\rho) + E_{sq}(\tau) \ge E_{sq}(\mu) \ge \sum_{i \le m} E_{sq}(\mu^{S_i}) + E_{sq}(\tau)$, and thus $n E_{sq}(\rho) \ge \sum_{i \le m} E_{sq}(\mu^{S_i})$. Using this inequality and Eqs. (68) we further obtain $n E_{sq}(\rho) \ge m(E_{sq}(\sigma) - \varepsilon)$, which implies that $\frac{m}{n} \le \frac{E_{sq}(\rho)}{E_{sq}(\sigma) - \varepsilon}$. Using Eqs. (68) once again we arrive at $r < \frac{E_{sq}(\rho)}{E_{sq}(\sigma) - \varepsilon} + \delta$. Recalling that $\varepsilon, \delta > 0$ can be chosen arbitrarily, we conclude that $R_{sq}(\rho \to \sigma) \le E_{sq}(\rho)/E_{sq}(\sigma)$, and the proof is complete.
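As a direct illustration (our phrasing of the special case), taking the target to be the singlet, for which $E_{sq}(\psi^-) = 1$, Proposition 12 gives

$$ E_d^c(\rho) \;\le\; R_{mc}(\rho \to \psi^-) \;\le\; E_{sq}(\rho) , $$

i.e., the well-known squashed-entanglement upper bound on distillable entanglement survives in the catalytic and marginal settings.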
General quantum resource theories
We will now show a generalized version of Proposition 6 for general quantum resource theories. This, under some assumptions, will show an equivalence between asymptotic rates and marginal asymptotic rates in general resource theories.
Any quantum resource theory is defined by a set of free states $\mathcal{F}$ and a set of free operations $\mathcal{O}$ [44], such that the following property holds: $\Lambda_f(\rho_f) \in \mathcal{F}$ for all $\rho_f \in \mathcal{F}$ and all $\Lambda_f \in \mathcal{O}$ (75). With this introduction, we now state our result. As a notational convention, we write $R_\sigma(\rho)$ for $R(\rho \to \sigma)$.
"year": 2023,
"sha1": "df4822c52e58df57aa02dcc67ed03518a3975781",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "df4822c52e58df57aa02dcc67ed03518a3975781",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Are synovial biopsies of diagnostic value?
Synovial tissue is readily accessible by closed needle or arthroscopic biopsy. These techniques provide adequate tissue for most diagnostic requirements. Examination of synovial tissue can assist in the diagnosis of some joint infections, and in several atypical or rare synovial disorders. Histological confirmation is not normally required for diagnosis of the common forms of inflammatory arthritis, including rheumatoid arthritis (RA). In patients with either established or early RA, immunohistological measures of inflammation in synovial tissue are associated with clinical measures of disease activity, may predict the clinical outcome, and change in response to treatment. Surrogate markers of disease activity and outcome that have been identified in synovial tissue include components of the cellular infiltrate, and several mediators of inflammation and matrix degradation. There is evidence that the very early introduction of disease-modifying therapy inhibits progressive structural damage maximally. Clinicians exploiting this 'window of opportunity' therefore require very early indicators of the diagnosis and outcome in patients who present with an undifferentiated inflammatory arthritis. Some immunohistological features have been described that distinguish patients who are likely to develop progressive RA and who might benefit most from early aggressive therapeutic intervention. In this regard, the inclusion of pharmacogenomic and proteomic techniques in the analysis of synovial tissue presents some exciting possibilities for future research.
Recently there has been an upsurge in the use of arthroscopic techniques by rheumatologists, particularly those interested in the pathogenesis of arthritis and the effects of new therapeutic strategies [10]. Initially, arthroscopy required hospitalisation and a general anaesthetic. The production of high-definition, small-bore arthroscopes (1-2.7 mm), and the development of local and regional anaesthesia protocols [11,12], have permitted day-case arthroscopy to move from the operating theatre to procedure rooms, and even to the outpatient clinic [13].
Synovial biopsy in routine clinical practice
Synovial biopsy is not normally required for routine diagnostic or therapeutic purposes in patients with established arthritis. However, examination of synovial tissue can assist in the diagnosis of some joint infections [14]. In acute bacterial arthritis, the synovial membrane contains clusters or sheets of polymorphonuclear leukocytes. Bacteria can be demonstrated in synovial tissue by Gram's stain. Sometimes, cultures of synovial tissue may be positive even when blood and synovial fluid cultures have been negative. In chronic infections, such as tuberculosis and fungal diseases, characteristic synovial lesions may be focal, and multiple biopsies are advised. Mycobacterial granulomas in the synovium do not always demonstrate caseation. With appropriate staining, acid-fast organisms, fungi and spirochaetes (Lyme disease and secondary syphilis) can be demonstrated. The presence of bacterial DNA in synovial biopsy samples can provide important information in the diagnosis of infectious arthritis [15].
Occasionally, the diagnosis of chronic sarcoidosis is established after synovial biopsy [16]. The characteristic histological feature is a well-defined granuloma. The central area of the granuloma is occupied by lymphocytes, which are predominantly CD4 + , and by mononuclear phagocytes and their progeny, including epithelioid cells and multinucleated giant cells. Caseation is absent, but a small area of fibrinoid necrosis may be present. The outer zone of the granuloma is formed by CD4 + and CD8 + lymphocytes, fibroblasts, mast cells and other immunoregulatory cells.
Both gout and pseudogout can demonstrate tophus-like deposits in synovial tissue [14]. When handling tissues, special care is required to preserve the crystalline structures. Amyloid may be deposited in synovium in patients with primary amyloidosis, Waldenström's macroglobulinaemia, multiple myeloma and adult cystic fibrosis [17]. Arthropathy associated with ochronosis and haemochromatosis demonstrate characteristic histological features. Pigmented villonodular synovitis, multicentric reticulohistiocytosis and rare tumours of the synovial membrane require a biopsy for diagnosis.
Synovial biopsy can have a major role in the diagnosis of monarticular arthritis. A closed needle biopsy of the knee joint might provide sufficient tissue for histological, immunohistological and microbiological analysis. An open biopsy or needle arthroscopic biopsy is the procedure of choice when other joints are involved, and should be undertaken in the knee joint if closed needle biopsy fails to yield a diagnosis.
Established rheumatoid arthritis

General comments
The diagnosis of RA after the chronic polyarticular manifestations have become established is usually based on characteristic clinical, radiological and serological manifestations. Histological confirmation is not required. The gross changes that are characteristic of RA result from chronic synovial inflammation. Typically, the surface of the synovium becomes hypertrophic and oedematous, with an intricate system of prominent villous fronds that extend into the joint cavity.

Microscopic evaluation of synovial tissue inflammation in RA confirms marked cellular hyperplasia in the lining layer. T cells, plasma cells, macrophages, B cells, neutrophils, mast cells, natural killer cells and dendritic cells accumulate in the synovial sublining layer (reviewed in [18]). The appearances are not specific for RA. The dominant cell populations in the lining layer are fibroblast-like synoviocytes and macrophages, which release an array of proinflammatory cytokines and their inhibitors, promoting further intra-articular perturbations. There is abundant production of matrix metalloproteinases (MMPs), cysteine proteases and other tissue-degrading mediators, which accumulate in the synovial fluid and augment joint damage by interacting directly with exposed cartilage matrix. These features are present very early in the disease course.

T cells and plasma cells are prominent in the synovial sublining layer. Perivascular T cell aggregates are observed in 50-60% of patients with RA. These aggregates can be surrounded by plasma cells. There are two basic patterns of T cell infiltration. First, perivascular lymphocyte aggregates can be found, which consist predominantly of CD4+ cells in association with B cells, few CD8+ cells, and dendritic cells. The second pattern of T cell infiltration is the diffuse infiltrate of T cells scattered throughout the synovium. A subset of the CD4+ T cells in synovial tissue is activated. A possible biological effect of activated perivascular T cells in the synovium is the stimulation of migrating macrophage populations through direct cell contact. This mechanism is known to stimulate macrophage production of cytokines and MMPs in vitro. Many of the synovial tissue T cells are, however, in a state of hyporesponsiveness. Interdigitating dendritic cells, which are potent antigen-presenting cells, are located in proximity to CD4+ T cells in the lymphocyte aggregates and near the intimal lining layer. In addition, macrophages and lymphocytes infiltrate the areas between the lymphocyte aggregates. The macrophages often constitute the majority of inflammatory cells in the synovial sublining layer. B cells constitute a small proportion of the total number of lymphocytes in the synovial sublining layer. However, numerous plasma cells may be present throughout the synovium, sometimes exceeding the number of infiltrating T cells.
An issue that frequently arises in the context of possible associations between synovial tissue immunohistology and progressive structural damage relates to the acquisition of tissue samples from a knee joint and the evaluation of radiographic images, usually of the hands and feet. Such studies make the assumption that the immunohistological appearances in a knee joint are representative of pathophysiological events occurring at other sites. Evidence to support this hypothesis comes from a study of patients with RA who underwent biopsy of a knee joint and a small upper-limb joint on the same day [19]. Another important issue that requires consideration is the question of selection bias. This issue has been evaluated extensively, confirming that despite the degree of histological variation within a joint, representative measures of inflammation can be obtained by examining a limited area of tissue [20-23].
The intensity of the cellular infiltrate, the levels of activation and the amount of secreted products vary greatly between individual patients with RA and other arthropathies [20,24,25]. Many studies of synovial tissue have been reported that indicate associations between immunohistological features of inflammation and clinical measures of disease activity [20,26,27], as well as with local measures of synovitis [28]. The immunohistological measures of synovitis observed in the knee joint are reflected in other joints from the same patient biopsied at the same time [19]. Clinically uninvolved joints in patients with RA demonstrate similar immunohistological changes, although less intensely than in the affected joints [29,30]. Serial synovial biopsies in open therapeutic studies and in randomised clinical trials showed that the immunohistological features of RA and other arthropathies change after treatment with disease-modifying anti-rheumatic drugs (DMARDs) [26,31-37], oral corticosteroids [38] and targeted biological agents [39-42]. The mediators of inflammation that have been shown to change in therapeutic studies include mononuclear cell populations [26,31,32,35,36,39,40,42], adhesion molecule expression [35,36,38-40,42], levels of cytokine production [31,33,35,36,41] and MMPs [34,36,37]. Thus, synovial tissue analysis in patients with RA has revealed several surrogate markers of disease activity and response to treatment.
The value of synovial biopsy
In contrast to the studies of disease status and response to treatment in patients with established arthritis, limited attention has been given to the study of the immunohistological appearances and associations with disease outcome. One cross-sectional analysis demonstrated significant correlations between the number of lining layer and sublining layer macrophages, but not other mononuclear cell populations, and joint damage scores in RA [27]. A longitudinal study highlighted the association between the number of synovial tissue macrophages at baseline and increases in the joint damage scores over 1 year [43]. Other investigators showed that the predominant change in the synovial tissue of patients in remission after treatment with DMARDs was a striking decrease in the number of macrophages [44]. These observations are consistent with the hypothesis that chronic RA is a macrophage-mediated disorder and that a decrease in synovial macrophage content should be a primary aim of successful treatment.
Preliminary studies have evaluated possible associations between the known mediators of inflammation in synovial tissue, including cytokines, and outcome in established RA ( Table 1). The effect of blockade of tumour necrosis factor-α (TNF-α) on TNF-α production in synovial tissue was evaluated in patients treated with infliximab [41]. All patients in the study met the American College of Rheumatology 20% improvement response criteria (ACR20), and half of the patients met the ACR50. Patients meeting the ACR50 criteria were those with the highest baseline levels of TNF-α synthesis. There was a significant correlation between baseline levels of TNF-α expression and change in tissue TNF-α levels in response to therapy. The authors concluded that high levels of synovial tissue TNF-α production before treatment might predict responsiveness to anti-TNF-α therapy.
Table 1. Synovial biopsy and the determination of diagnosis or outcome in established rheumatoid arthritis

Interleukin-10 (IL-10) is a chondroprotective cytokine and functions in part by inhibiting the production of TNF-α, IL-1 and MMPs (reviewed in [45]). Treatment of experimental models of arthritis with recombinant IL-10 inhibited both the incidence and the severity of disease. In a cross-sectional biopsy study, IL-10 mRNA levels were measured in synovial tissue from patients with erosive RA and compared with those in patients with chronic non-erosive arthritis [46]. The patients with erosive RA were positive for IgM-rheumatoid factor (IgM-RF+) and had a mean disease duration of 16.8 years. The patients with non-erosive arthritis had a mean disease duration of 7 years, and most were seronegative. Synovial tissue IL-10 mRNA levels were significantly lower in the patients with erosive RA (P < 0.03). This observation from a cross-sectional analysis of patients with established RA was extended in a longitudinal study of IL-10 polymorphisms in 291 consecutive patients with early RA. During the first 6 years of follow-up, the increase in radiographic damage scores in the patients who were homozygous for the genotype -1082AA was significantly less than the increase in patients with the genotype -1082GG. The smaller number of erosions in patients with RA who had the -1082AA genotype could not be explained by other determinants of progressive joint damage, such as an increased concentration of IgM-RF, the presence of the shared epitope, or the baseline radiographic damage score. Taken together, these observations suggested that increased expression of IL-10 mRNA in synovial tissue might be required for protection against progressive erosive disease, and that patients with RA who have different IL-10 genotypes have a different disease course. Future research is necessary to confirm whether or not there is a baseline threshold of tissue IL-10 mRNA expression that will identify individual patients with early RA who are more likely to demonstrate an aggressive disease course.
Synovial angiogenesis, a mechanism that is central to synovial proliferation and pannus formation, is largely dependent on vascular endothelial growth factor (VEGF) [47]. In a small study of patients with RA, synovial tissue samples were evaluated for the presence of VEGF at the time of joint replacement surgery and, on average, 10 years later [48]. An association between the amount of VEGF production in endothelial cells and the rate of progressive joint damage was suggested. Further studies of proinflammatory cytokines, tissue-degrading enzymes, angiogenic factors and other mediators of inflammation and damage in the synovium, at the level of either gene expression or protein production, might reveal characteristics associated with a favourable or unfavourable outcome.
Early rheumatoid arthritis

General comments
The approach to treating patients with early RA has changed substantially in recent years. In most centres, early arthritis refers to patients who present within 1 year of the onset of symptoms. This change has occurred for several reasons. First, there has been a growing recognition that irreversible structural damage can occur very early in the course of inflammatory arthritis [49]. Second, the establishment of dedicated early arthritis clinics facilitates the early referral of patients with inflammatory arthritis [50]. Third, there is a wider recognition of reliable diagnostic factors [51]. Fourth, rheumatologists have access to effective therapeutic modalities that greatly reduce the rate of progressive joint damage [52-54]. Last, it has been established that DMARD therapy reduces the rate of progressive joint damage more effectively when introduced within 6 months of the onset of symptoms [55]. It is therefore now standard practice to introduce conventional DMARDs, such as methotrexate, and even targeted biological therapies, as first-line treatments in patients with RA [56].
The presence of some autoantibodies, including IgM-RF and anti-citrulline-containing peptide (anti-CCP) antibody, facilitates an early diagnosis of RA [57]. In addition, several clinical and laboratory factors at baseline reliably predict outcome. These include higher baseline joint counts, a high titre of IgM-RF, an elevated acute-phase response, the number of baseline erosions and the shared epitope [58]. However, these factors were identified in large cohorts and do not always apply to individual patients. Some clinical investigators have developed algorithms that incorporate selected prognostic factors to predict outcome [59,60].
The value of synovial biopsy
Studies of synovial tissue to identify indicators of outcome in RA, and changes after treatment, have been necessarily limited in size in comparison with similar studies that evaluated clinical and serum factors. Synovial biopsy is an invasive procedure and, when performed at arthroscopy, is technically complicated and expensive. Quantification of changes with digital image analysis is also costly and requires considerable expertise. However, the pathophysiological events occurring in tissue are more likely than dispersed serum factors to reflect the clinical status and outcome in individual patients.
Although there is no diagnostic role in early RA, synovial biopsy and tissue analysis may provide important prognostic information. A few biopsy studies have been reported that examined mediators of synovial tissue inflammation and joint damage that were found to be associated with unfavourable clinical and radiological outcomes (Table 2). In a limited longitudinal study of patients with early inflammatory arthritis, and a mean disease duration of 9.6 months (range 2 weeks to 18 months), the number of synovial lining layer macrophages at baseline was correlated with the number of new erosions on radiographs of the hands and feet 1 year later (P = 0.002) [25]. Most patients had RA. In all patients who developed new joint erosions it was observed that more than 60% of the infiltrating lining layer cells were macrophages, suggesting that an immunohistological analysis of synovial tissue at baseline might identify individual patients who were at increased risk of developing a more aggressive disease course. This observation is similar to the findings in patients with established RA [27,43]. Macrophages are the primary source of the proinflammatory cytokines IL-1 and TNF-α, which induce the production of MMPs by fibroblast-like synoviocytes. Employing in situ hybridisation techniques, it was observed that the number of MMP-1-producing cells in the synovial lining layer, in contrast to cells producing cathepsin B and cathepsin L, seemed to be strongly correlated with the number of new erosions that developed during the first year of follow-up (P = 0.0007) [25].
In a similar early synovitis cohort, the expression of MMP-2, MMP-9, MMP-14 and TIMP-2 (tissue inhibitor of metalloproteinases-2) was quantified in synovial tissue biopsies obtained at baseline [61]. Radiographs of the hands and feet were repeated after 1 year. The synovial tissue samples from patients who developed joint erosions had significantly higher levels of MMP-2 than those from the patients who did not develop erosions (P = 0.04). There seemed to be considerable overlap between the groups, and the authors did not distinguish between MMP-2 expression in the lining and sublining layers. Nevertheless, the observation suggested that baseline tissue MMP-2 levels might be a marker for more aggressive synovial inflammation.
Early undifferentiated arthritis

General comments
With the emergence of convincing scientific evidence that very early introduction of disease-modifying therapies inhibits progressive structural damage more effectively [55], it is inevitable that some patients who receive treatment will not meet the ACR criteria for RA and will have a self-limiting, non-progressive arthritis. Thus, clinicians will seek a balance between exploiting the early 'window of opportunity' in some patients, and delaying effective treatment until the appearance of sufficient diagnostic criteria in others. About 30% of patients have an undifferentiated inflammatory arthritis at the time of their first presentation to an early arthritis clinic [50]. Similarly, a diagnosis of RA can be established in about 30% of patients. During the period of follow-up, many of the patients with undifferentiated arthritis will develop features that enable a diagnosis of RA, or other categories of arthritis. Several factors have been identified that distinguish groups of patients with undifferentiated arthritis who acquire a diagnosis of RA. Thus, the presence in the serum of anti-perinuclear factor [62], anti-RA33 [63], anti-Sa [64], anti-keratin [65], antifilaggrin [66] and anti-CCP antibodies [51] has been associated with the diagnosis or outcome of RA. In addition, high-titre antibody against serum amyloid A in patients attending an early arthritis clinic with undifferentiated arthritis was associated with a subsequent diagnosis of RA [67].
The value of synovial biopsy
Some studies employing synovial tissue analysis to identify early diagnostic markers in patients with undifferentiated arthritis have been reported (Table 3). In one study, a synovial biopsy was obtained from 95 patients who presented with unclassified arthritis for less than 12 months [68]. The objective was to determine which immunohistological markers could best distinguish RA from other categories of arthritis. Using regression analytic approaches, it was observed that high scores for CD38+ plasma cells and CD22+ B cells were the best discriminating markers when comparing RA with non-RA categories. The authors concluded that immunohistochemical analysis of synovial tissue samples could be used to distinguish patients with RA from other diagnostic categories.
In another study, immunohistological differences between RA and other categories of arthritis were also observed in 71 patients, including 16 who had had RA for less than 12 months [69]. The intensity of infiltration by both T and B cells, and differential expression of αV integrin, seemed to distinguish patients with RA from those with spondylarthritis and those with osteoarthritis. The disease duration of RA did not influence the findings. However, the immunohistological features highlighted in both of these studies seem insufficiently disease-specific for routine use as diagnostic markers.
The demonstration of intracellular citrullinated proteins in synovial tissue samples from 18 of 36 patients with RA, and in none of 52 patients with spondylarthritis, osteoarthritis and other categories of arthritis, suggested a useful method of discriminating RA from other inflammatory joint diseases [70]. This observation was the first description of a specific histological marker for RA in synovial tissue. The specificity of intracellular citrullinated proteins to RA is the subject of continuing investigation, and it is clear that further biochemical characterisation of the citrullinated proteins present in the synovium of patients with RA, and other inflammatory joint diseases, is required [71,72]. Nevertheless, the possibility that demonstrating intracellular citrullinated protein in synovial tissue might be a new tool for the early diagnosis of undifferentiated arthritis is an important prospect.

Table 2. Synovial biopsy and the determination of diagnosis or outcome in early rheumatoid arthritis

Synovial tissue          Clinical association     Reference
Number of macrophages    Radiographic outcome     [25]
MMP-1                    Radiographic outcome     [25]
MMP-2                    Radiographic outcome     [61]

MMP, matrix metalloproteinase.

Table 3. Synovial biopsy and the determination of diagnosis or outcome in undifferentiated arthritis
Future challenges
There is increasing emphasis on the need to recognise potentially erosive disease in patients presenting with early undifferentiated arthritis, before sufficient criteria for RA have evolved. It is likely that pathophysiological pathways that directly or indirectly result in bone and cartilage degradation are preferentially activated in articular tissues from the earliest phases of the disease. The recognition of enhanced proinflammatory or degradative pathways, or the downregulation of inhibitory factors, that participate in the progression or prevention of arthritis, is most likely to emerge from studies of articular tissues. The preliminary studies of synovial tissues reported here support this hypothesis. The inclusion of pharmacogenomic and proteomic techniques in the analysis of synovial tissue from patients with different categories and stages of arthritis presents some exciting possibilities for future research.
"year": 2003,
"sha1": "d833a261a7f2cf14cf093829fd2f5c6396ac5959",
"oa_license": null,
"oa_url": "https://arthritis-research.biomedcentral.com/track/pdf/10.1186/ar1003",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80fd7de1e6611b975ff2c519a66129b1caf664fc",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
MwoI and SmaI RFLP polymorphisms of the porcine obese gene and their association with carcass and meat characteristics of heavy pigs
The obese gene encodes leptin, a 16-kDa protein involved in the regulation of fat deposition and energy consumption. Backfat is one of the peculiar characteristics of Italian ham, and represents a fundamental quality factor. Therefore, the obese gene can be considered as a candidate marker for determining economically important production traits such as backfat thickness, feed intake, and growth rate in swine. The aim was to investigate the relationship between obese gene polymorphisms and carcass and meat characteristics of pigs reared for ham production. In the present research, the analyses of three new RFLPs are reported. An MwoI polymorphism occurs at nucleotide 1792, within the intron. Pigs heterozygous at this position have heavier thighs with a thinner layer of fat. Two SmaI polymorphisms occur at nucleotides 5018 and 5410 within the 3′ UTR of the obese gene. Animals heterozygous at position 5410 have characteristics suitable for the production of San Daniele ham: lower backfat thickness and heavier thighs with a thinner fat layer, relative to other genotypes.
Introduction
San Daniele or Parma ham is a typical national product, covered by the D.O.P. mark and subjected to European regulations for products of protected origins. The main aims of ham producers are to improve meat quality, and to select and rear the most suitable pigs under defined conditions, to give a high quality product.
It is possible to improve thigh characteristics for ham production by supporting traditional selection, performed by breeders using mathematical-statistical methods, with molecular genetic tests that allow the most suitable animals to be identified by sampling the blood or hair of piglets.
Backfat is one of the peculiar characteristics of Italian ham, and represents a fundamental quality factor, together with intramuscular fat. Recent studies have indicated that backfat thickness can be correlated with some polymorphisms of the obese gene (Jiang and Gibson, 1999). The obese gene encodes leptin (Zhang et al., 1994;Farooqi et al., 1998), a 16-kDa protein with a hormonal function (Halaas et al., 1997). It is secreted by adipocytes and its concentration in the blood correlates highly with the level of adipose tissue (Frederich et al., 1995). Ramsay et al. (1998) reported that leptin mRNA expression is higher in fat pigs than in lean ones. Leptin binds to a specific receptor in the hypothalamus, which inhibits feed intake (Trayhurn et al., 1998), thus interacting with body weight and energy balance (Halaas et al., 1995). The association of leptin mRNA expression with fatness in pigs has been preliminarily investigated (Robeseert et al., 1998).
Considering the role of leptin in the regulation of fat deposition and energy consumption, the obese gene can be considered a candidate marker for determining economically important production traits such as backfat thickness, feed intake, and growth rate in swine (Wu et al., 2002). The obese gene in pigs has been sequenced and assigned to chromosome 18 (Bidwell et al., 1997;Cepica et al., 1999;Campbell et al., 2001). However, these authors collected data in pigs reared for meat production, and therefore with different characteristics from those of heavy pigs, which are normally used for ham production.
In the present research, analyses of three new restriction fragment length polymorphisms (RFLPs) are reported, one within the intron and two in the 3' untranslated region (UTR) of the obese gene. The aim was to investigate the relationship between obese gene polymorphisms and carcass and meat characteristics of pigs reared for ham production.
Animals
Six populations (batches) of heavy pigs, reared for the production of Italian cured ham, were studied.
Two populations were crossbreeds of Large White x Landrace (LWxL1, LWxL2) and four were commercial hybrids (SCAAPAG1, SCAAPAG2, JSR, PIC). From each population, 50 to 56 contemporary piglets (half females and half castrated males) were selected, and their growth was recorded until the final slaughter weight (about 160 kg live weight). Only those individuals that reached the appropriate weight in the expected time were included in the analysis. The initial total live weights for each batch were recorded, and the nutrition programme was established to ensure that the same commercial feeds were administered in equal amounts to the six batches. Animals were reared under standard conditions, according to the San Daniele Consortium protocol.
A complete description of the rearing and feeding conditions of the pigs used in the experiment is reported in a companion paper (Stefanon et al., 2004). Pigs with carcass weights within the range of 125-140 kg were selected after slaughtering. These and other culling reasons reduced the final number of pigs. Gender distribution within batches remained almost unchanged.
The weight of each carcass was recorded, and measurements of backfat thickness and the percentage of lean and fat cuts in the carcass were calculated using a Fat-O-Meter instrument. The thighs were then dissected and their weight and fat thickness recorded. Samples of both the vastus lateralis and biceps femoris were taken and stored immediately at -20 °C for DNA extraction. After 24 h, the pH was measured.
DNA analysis
DNA was extracted from frozen muscle samples using a phenol-chloroform/proteinase K method (Sambrook et al., 1989), and its concentration was determined by spectrophotometry at 260 nm. The quality of the genomic DNA was assessed by electrophoresis on 1% agarose gels stained with ethidium bromide.
Primers for the amplification of the porcine obese gene, based on available genomic sequences (GenBank U66254; Bidwell et al., 1997) were designed using the PRIMER 3 web program.
PCR reactions for fragments 1 and 2 were performed in a 25-µl total volume containing 50 ng genomic DNA, 1.5 mM MgCl 2 , 0.2 mM each dNTP, 0.5 µM each primer, 0.2 units Taq DNA polymerase (Roche) and 1 x the manufacturer's reaction buffer. The PCR profile was 95 °C for 3 min, 60 °C for 1 min, and 72 °C for 1 min, followed by 30 cycles of 94 °C for 1 min, 60 °C for 1 min, and 72 °C for 1 min, with a final extension at 72 °C for 10 min.
Amplified fragments were digested with MwoI (fragment 1) or SmaI (fragment 2) and separated electrophoretically on a 3% SeaKem (FMC) agarose gel stained with ethidium bromide. PCR products corresponding to each allele of the MwoI and SmaI DNA polymorphisms were sequenced on an Applied Biosystems ABI PRISM 3700 automated DNA sequencer.
NIRS analysis
Subsamples of meat were analysed for dry matter, protein, lipid, and ash contents with a chemiometric method using near-infrared reflectance spectroscopy (NIRS) equipment (Foss NIRSystems 5000), with scanning from 1100 to 2498 nm, and reading every 2 nm. Calibration and data collection was carried out using ISI 2.00 software, version 3.11 (Intrasoft International).
Chemical analysis
A group of 76 samples was used to draw a calibration curve. Samples were weighed, lyophilized for 72 h and, after re-equilibration with air, were weighed once more and homogenized in a blender. Intramuscular lipid percentage was determined by petroleum ether (40/60) extraction in a "Randall" apparatus. Dry matter and ash percentages were determined according to Martinotti et al. (1987). Protein percentage was calculated by subtraction (see the worked formula below).
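A plausible reading of the subtraction step — assuming protein is obtained as the remainder of the dry matter after lipid and ash, which the text does not state explicitly — is:

$$ \text{protein \%} \;=\; \text{DM \%} - \text{lipid \%} - \text{ash \%} . $$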
Data analysis
All variables were analysed with the general linear model (GLM) of the SPSS statistical software package (1995). Normal distribution of the data was assessed using the Kolmogorov-Smirnov (Lilliefors) test (SPSS, 1997). MwoI and SmaI genotypes, batch and sex, taken both singly and in combination, were treated as fixed factors in the models of carcass and meat composition. Because of the different numbers of samples in the batches, the means were calculated according to the least squares procedure (LSMEANS). Analysis of variance of carcass weight indicated significant differences among batches (LWxL 132.6 and 127.6, SCAAPAG 139.0 and 133.8, JSR 131.5 and PIC 130.9).
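A minimal sketch of this kind of fixed-factor model in Python with statsmodels; the data file and column names (genotype_mwoi, batch, sex, backfat) are hypothetical, and the original analysis was performed in SPSS rather than with this code:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical input: one row per pig with genotype, batch, sex and a trait.
df = pd.read_csv("pigs.csv")  # columns: genotype_mwoi, batch, sex, backfat

# General linear model with genotype, batch and sex as fixed factors.
model = smf.ols("backfat ~ C(genotype_mwoi) + C(batch) + C(sex)", data=df).fit()
print(anova_lm(model, typ=2))  # significance of each fixed factor

# Least-squares means per genotype: average the model predictions over the
# observed batch x sex combinations, which balances the unequal batch sizes.
grid = df[["genotype_mwoi", "batch", "sex"]].drop_duplicates().copy()
grid["fitted"] = model.predict(grid)
print(grid.groupby("genotype_mwoi")["fitted"].mean())
```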
Results and discussion
MwoI polymorphism at nucleotide 1792 of the obese gene

MwoI digestion of fragment 1 (264 bp) produced a consistent 28-bp band in each sample, plus two bands of 33 bp and 203 bp (allele A), or a unique band of 236 bp (allele B; Fig. 1). Allele A showed a low allelic frequency, below 0.11 (Table 1).
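To make the band-pattern logic concrete, here is a small, hypothetical Python helper (not part of the original study) that calls MwoI genotypes from observed digest fragment sizes, using the sizes reported above (28 bp constant; 33 + 203 bp for allele A; 236 bp for allele B):

```python
def call_mwoi_genotype(band_sizes_bp):
    """Call the MwoI genotype of the 264-bp fragment 1 amplicon
    from the set of digest fragment sizes (in bp)."""
    bands = set(band_sizes_bp)
    has_a = {33, 203}.issubset(bands)   # allele A: site present, 33 + 203 bp
    has_b = 236 in bands                # allele B: site absent, single 236 bp band
    if 28 not in bands:
        raise ValueError("constant 28-bp band missing: failed digest?")
    if has_a and has_b:
        return "A/B"
    if has_a:
        return "A/A"  # not observed in this population
    if has_b:
        return "B/B"
    raise ValueError("unrecognized band pattern")

print(call_mwoi_genotype([28, 33, 203, 236]))  # -> "A/B"
print(call_mwoi_genotype([28, 236]))           # -> "B/B"
```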
Direct sequencing of the fragment 1 PCR product revealed a C/G point mutation at position 1792, which abolishes the MwoI restriction site, creating a new restriction site for the MnlI enzyme. Analysis of the MwoI polymorphism associated with meat characteristics is reported in Table 2. Only two genotypes were considered because genotype A/A did not occur. Data recorded at the slaughterhouse included carcass weight, backfat thickness, and percentage of lean and fat in the carcass, as well as ham weight and fat thickness. Furthermore, the chemical composition of the vastus lateralis and biceps femoris muscles was evaluated.
All data were analysed as a function of the MwoI polymorphism and sex, with weight readings as a covariate.
Heterozygote A/B pigs had heavier thighs with a thinner fat layer (P < 0.05; Table 2). It is possible that this mutation affects mRNA stability or translation efficiency, resulting in specific biological effects. An intron not only coordinates correct splicing, but can also influence gene expression by interacting with transcription factors (Finkbeiner, 2001). The MwoI polymorphism might influence the interaction of transcription factors by altering the binding site. The reported polymorphism may also act as a molecular marker linked to a specific locus that controls backfat thickness.
There are different opinions on the influence of the obese gene intron on the regulation of the gene itself. Kennes et al. (2001) found that an A/T point mutation at position 2845 was associated with feed intake and growth rate (P < 0.0078) in two different populations selected for higher and lower backfat thickness. These data suggest that these polymorphisms influence such traits.
According to Jiang and Gibson (1999), the TaqI polymorphism at position 1112 does not influence meat characteristics. This result was probably affected by the low allelic frequency of the mutation within the pig population. According to the authors, the low allelic frequency is due to selection and indicates that the mutation is unfavourable.
The allelic frequency of the mutation at the MwoI polymorphic site is also low. It has been proposed that animals with this polymorphism might be less suitable for the production of thighs with the characteristics required for San Daniele ham. However, the data reported here indicate that only pigs with this marker produce the heavier ham with the low ham-fat thickness that is required by the San Daniele Consortium. Table 2 indicates that A/B animals are, on average, heavier and have a lower percentage of lean cuts, thicker backfat, and thinner back muscle. These parameters are the opposite of those pertaining to the pigs' thighs, which suggests the hypothesis that pre-emptive selection of young animals leads to a preference for pigs without this haplotype, with leaner carcasses but fatter thighs.
This hypothesis could be confirmed by widening the analysis to include a group of non-selected animals, and comparing the pigs' genotypes with the quality of the final product.
A significant interaction with sex was observed for the last data (P < 0.001). A complete analysis of the least squares means by sex is reported in a companion paper (Stefanon et al., 2004).
On farms, male pigs are castrated, and therefore tend to become excessively fat. Consequently, the sex of the animal influences all variables that correlate with lipidic content. Further investigation may elucidate this influence.
SmaI polymorphisms at the 3' UTR of the obese gene
Amplified fragment 2 (696 bp) has two SmaI restriction sites, with three different patterns. Three alleles were identified: allele C produces three bands of 61 bp, 244 bp, and 391 bp; allele D two bands of 244 bp and 452 bp; and allele E two bands of 61 bp and 635 bp (Fig. 2). The population analysed showed no homozygotes for the D and E alleles. Allelic frequencies are given in Table 3. Direct sequencing of the fragment 2 PCR product showed two point mutations: the first a C/T substitution at position 5018, which eliminates the first restriction site for SmaI (allele D); the second is a G/A substitution at position 5410, which causes the deletion of the second site (allele E). Table 4 reports the effects of the SmaI polymorphisms at positions 5018 and 5410 on meat characteristics.
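As an illustration of how the allele frequencies in Table 3 would be obtained from genotype counts — the counts below are hypothetical, since the table itself is not reproduced here:

```python
from collections import Counter

def allele_frequencies(genotypes):
    """Compute allele frequencies from a list of genotype strings like 'C/E'."""
    counts = Counter(allele for g in genotypes for allele in g.split("/"))
    total = sum(counts.values())  # two alleles per animal
    return {allele: n / total for allele, n in sorted(counts.items())}

# Hypothetical genotype calls for one batch (no D/D or E/E homozygotes,
# consistent with the observation above).
batch = ["C/C"] * 30 + ["C/D"] * 10 + ["C/E"] * 12
print(allele_frequencies(batch))  # e.g. {'C': 0.79, 'D': 0.10, 'E': 0.12}
```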
Analysis showed significantly different thigh weights (P < 0.01) among these genotypes, although total carcass weight seemed to have a strong influence as well (P < 0.001). The percentages of dry matter (P < 0.01) and ash content (P < 0.05) in the vastus lateralis muscle were also significantly different in these pigs: ash content is higher in C/E animals, which indicates a lower water content. Generally, compared with both heterozygote C/D and homozygote C/C animals, heterozygote C/E pigs display more suitable characteristics for the production of San Daniele ham: lower backfat thickness and heavier thighs with a thinner fat layer.
C/D heterozygote individuals for SmaI at position 5018 are less suitable for the production of San Daniele quality ham because the thickness of the thigh fat layer is greater in these animals.
The SmaI polymorphism at position 5410 lies within the 3' UTR of the gene, which determines the half-life of the mRNA (Pesole et al., 2001). This mutation could lie within an important site for mRNA regulation and influence thigh characteristics by fine-tuning leptin synthesis. A search of the "UTRScan" database neither confirmed nor rejected this hypothesis. On the other hand, this polymorphism could be a marker for the actual mutation that influences the translation or half-life of leptin mRNA.
Conclusions
Detection of a whole set of polymorphisms associated with particular meat characteristics would allow selection of the most suitable animals for the production of the prized San Daniele hams. The aim is to develop a reliable and objective instrument with which to optimise rearing protocols. Furthermore, it would be useful to determine the polymorphisms associated with San Daniele ham production, to create a genetic fingerprint detectable with a simple experiment.
"year": 2004,
"sha1": "7be648f26ab037ee0ebccddcba5570d53e1d8d99",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4081/ijas.2004.211",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "929cb25a2b0c9f8d7bd539e3bbf95782868a6ce2",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Imitation and recognition of facial emotions in autism: a computer vision approach
Background Imitation of facial expressions plays an important role in social functioning. However, little is known about the quality of facial imitation in individuals with autism and its relationship with defining difficulties in emotion recognition. Methods We investigated imitation and recognition of facial expressions in 37 individuals with autism spectrum conditions and 43 neurotypical controls. Using a novel computer-based face analysis, we measured instructed imitation of facial emotional expressions and related it to emotion recognition abilities. Results Individuals with autism imitated facial expressions if instructed to do so, but their imitation was both slower and less precise than that of neurotypical individuals. In both groups, a more precise imitation scaled positively with participants’ accuracy of emotion recognition. Limitations Given the study’s focus on adults with autism without intellectual impairment, it is unclear whether the results generalize to children with autism or individuals with intellectual disability. Further, the new automated facial analysis, despite being less intrusive than electromyography, might be less sensitive. Conclusions Group differences in emotion recognition, imitation and their interrelationships highlight potential for treatment of social interaction problems in individuals with autism.
Introduction
Facial expressions are an essential tool to communicate emotions non-verbally in social interactions [1]. Being able to understand as well as to generate these expressions is crucial to the exchange of inner states with others [2]. Impairments in reciprocal social communication and interaction are key diagnostic aspects of autism spectrum conditions (ASC) [3]. Although the diagnosis covers both the understanding and the generation of non-verbal signals, the latter, as well as the association between the two, has been neither fully understood nor systematically investigated.
In this context, the ability to generate facial expressions that match those of an interaction partner might play a crucial role. Neurotypical (NT) individuals (i.e. individuals without autism) tend to automatically mimic facial expressions in social interactions [4]. There is evidence that such spontaneous facial imitation, often referred to as mimicry, might help people to recognize emotions (e.g. [5][6][7][8][9]). Accordingly, many specific interventions for patients with ASC involve teaching the voluntary imitation of others' facial emotions (e.g. [10][11][12]). However, the actual benefit of voluntarily produced imitation remains unclear, especially in individuals with autism. Investigating the relationship between the voluntary imitation of facial expressions and the recognition of those expressions in autism therefore seems promising. It may be a key to elucidating the expression and recognition deficits, understanding their interaction, and targeting them therapeutically.
Although many studies have reported difficulties of individuals with autism in recognizing emotions, results remain inconsistent regarding specific emotions [13][14][15]. A reason might be the low sensitivity of many tasks. This becomes apparent in studies with high-functioning individuals, i.e. individuals who show only a mild level of symptoms and an intelligence quotient of 70 or above [16]. Generally, mixed results may be due to differences in the individuals' level of functioning, potential compensatory mechanisms and task demands [15].
Given the important role of imitation of non-verbal signals in social functioning, surprisingly little is known about the tendency to imitate facial expressions in individuals with ASC. Compared to healthy controls, there seem to be differences in the spontaneous imitation of facial expressions [17,18]; voluntary imitation, however, seems to be grossly unimpaired [18][19][20]. The limited evidence for aberrant voluntary imitation might be explained by ceiling effects [21], as most studies focused on the occurrence of imitation, ignoring its quality. However, the voluntary facial imitation capacity of individuals with ASC appears to differ from neurotypicals' regarding quality [22] as well as timing [23]. A recent meta-analysis [24] summarized a variety of differences in how people with ASC express facial emotions. However, the authors pointed out that the strength of the group differences may be overestimated due to confounding effects of age or intellectual functioning. In conclusion, the exact nature of facial imitation in adults with autism without intellectual impairment has not yet been fully understood and deserves further investigation.
Although most studies investigated voluntary facial imitation of individuals with autism in the context of emotion recognition paradigms, the performance in both areas has not been linked in those studies (e.g. [18,23]). One reason might be that the measures of emotion recognition as well as of imitation performance used in those studies were not sensitive enough, as they used easy-to-recognize emotions and measured only occurrence or speed but not the precision of imitation.
So far, most studies investigating the expression of facial emotions have either deployed electromyography, which has been reported as obtrusive and is limited to very few muscles, or have used time-costly coding of video-recorded expressions by observers. Due to recent advances in image and video classification [25,26], computer-based facial expression analysis offers new possibilities to measure facial expressions. This analysis classifies the purely visual input of facial features and facial motion into abstract classes [27]. Unlike electromyography, a computer-based analysis is neither expensive nor intrusive: it allows measurement of facial expressions without the physical contact with the participant that is required, for example, to apply EMG electrodes. This is especially relevant in studies including individuals with autism, as touch is often perceived as aversive and might induce irritation, thus introducing confounds. A study on the detection of an autism diagnosis successfully classified individuals with autism based on their automatically analysed facial expressions [28]. Another recent study [29] analysed the spontaneous production of facial expressions using automated facial expression analysis software and related it to alexithymia. Both studies clearly showed the value of automatic computer-based approaches.
Taken together, the relationship between recognition and imitation of facial expressions lacks rigorous investigation in adults with ASC and without intellectual impairment. A better understanding of the nature of both phenomena, as well as their association, might help to target the social struggle of individuals with autism. This study, therefore, seeks to examine the voluntary facial imitation capacity of individuals with and without autism in an emotion recognition paradigm. First, we expect to replicate the previously described emotion recognition deficit in individuals with ASC. Second, we assume quantitative as well as qualitative differences in facial imitation in individuals with autism. Third, we aim to elucidate the relationship between facial imitation and emotion recognition-especially for ASC.
Methods

Participants
Thirty-seven adults with ASC (18 female, mean age = 36.89, range 22-62) and forty-three NT individuals (22 female; mean age = 33.14) with no self-reported history of psychiatric or neurological disorders participated in the study. Three further participants had been excluded for not meeting the inclusion criteria. The remaining sample size of 80 exceeded the required sample size of 67 estimated by a statistical power analysis (evaluated for whole-sample bivariate one-tailed correlations with power = 0.80, α = 0.05 and a medium effect size ρ = 0.30). Participants in the ASC group were recruited through the autism outpatient clinic of the Charité - Universitätsmedizin Berlin. All of them were diagnosed according to ICD-10 criteria for Asperger syndrome, atypical autism, or childhood autism [30].
The diagnostic procedure included the Autism Diagnostic Observation Schedule (n = 36; ADOS-2; for readability, we use the term ADOS; the group's raw algorithm total score can be found in Table 1; all analyses were calculated on the social domain score of the ADOS-2 [31]) and, if parental informants were available, the Autism Diagnostic Interview-Revised (n = 22; ADI-R, diagnostic algorithm total score [32]).
Exclusion criteria were current antipsychotic and anticonvulsant medication, comorbid neurological disorders, and age over 65 years, to avoid possible confounding age-related neurodegeneration. Furthermore, high German language proficiency, assessed with a German vocabulary test (Wortschatztest (WST) [33]), was required. A further exclusion criterion was the use of any medical treatments (e.g. benzodiazepines) that could affect the cognitive abilities of the participants. In addition, in the control group, any history of psychiatric disorder led to exclusion.
Procedure
The experiments were conducted in a laboratory with constant lighting conditions. Participants were asked to engage in an emotion recognition and imitation task. During the experiment, the participants' faces were recorded with a webcam at 30 frames per second and a resolution of 640 × 480 pixels. An effort was made to disguise the aim of the video recording so that participants would not concentrate on their facial movements: the experimenter told the participants that the webcam was only placed to monitor their attention level. The video recordings of all participants were checked individually and were excluded if the instructions had not been followed correctly.
Emotion recognition
The Berlin Emotion Recognition Test (BERT) [34] is a computer-based task for sensitively assessing emotion recognition. The test consists of a total of 48 photographs of facial expressions of professional actors displaying one of the six basic emotions ( [35]; for stimulus production see [36]). The face is centred in front of a dark grey background. There are eight pictures per emotion, and each is expressed by four different female and four different male actors (see Fig. 1 for an example picture for each emotion). Below each picture, two emotional words are presented, and the participant is asked how the person is feeling. Only one of the two possible options correctly describes the emotion expressed. The emotion recognition score is the percentage of correct answers. The position of the correct answer, as well as the order of the picture, is randomized.
To develop a sensitive task, the pictures were extracted from video clips in which professional actors expressed the target emotions. The actors had been instructed with emotional scripts (e.g. imagine you receive an unexpected present) to perform the facial expressions, starting with a neutral expression. This yielded more naturalistic footage. From each video clip, frames of three different intensities were extracted. These pictures of facial emotion expressions built the item pool for the BERT. This pool was reduced to the most sensitive items in a pre-study at a public open house event in Berlin, Germany, where large scientific institutions welcome the general public. In this pre-study with a sample of opportunity, 46 participants were asked to recognize the emotion of each of the items. Each picture was presented with the six basic emotions as possible answers. Based on their responses, for each video clip we selected the picture which discriminated best between low- and high-scoring participants. Additionally, we identified for each item the most difficult distractor out of the five incorrect emotion labels. In a follow-up online study [37] with 436 participants, the selected pictures and distractors were tested and further improved with respect to reliability and discriminatory power by choosing the best eight items per emotion and the most difficult distractor. A more detailed description of the task development and the current version of the task can be found online at: http://www.hannadrimalla.de/bert.html.

Table 1 Demographic and diagnostic information for participants with ASC and NT individuals. WST = Wortschatztest, a German vocabulary test to measure verbal intelligence; AQ = Autism-Spectrum Quotient; ADOS-2 = Autism Diagnostic Observation Schedule 2; ADI-R = Autism Diagnostic Interview-Revised. For the ADI-R, an autism diagnosis is indicated when scores in all three behavioural areas meet the cut-off scores (social interaction: 10, communication and language: 8, restricted and repetitive behaviours: 3). For the ADOS-2, the cut-off for an "autism spectrum" diagnosis is 7 and the cut-off for an "autism" diagnosis is 10.
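The item-selection logic described above (keeping, per clip, the frame that best separates low- from high-scoring participants) can be sketched with a simple upper-lower discrimination index. The R sketch below is our illustration, not the authors' exact procedure; the 0/1 correctness matrix and the clip assignment vector are hypothetical.

# 'responses': hypothetical participants x items matrix of 0/1 correctness.
discrimination_index <- function(responses) {
  total <- rowSums(responses)                       # each participant's overall score
  upper <- responses[total >= median(total), , drop = FALSE]
  lower <- responses[total <  median(total), , drop = FALSE]
  colMeans(upper) - colMeans(lower)                 # higher = item separates groups better
}

# For each clip, one would keep the candidate frame with the highest index,
# e.g.: best <- tapply(discrimination_index(responses), clip_of_item, which.max)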
Imitation
Each emotional expression was preceded by a picture of the same actor showing a neutral expression, displayed for 1500 ms. This period was used as a baseline; thereafter, the emotional expression picture was shown for 6 s before the emotional words appeared. During this period, the participant's facial response to the picture was recorded on video, avoiding movement artefacts resulting from behavioural responses. The reaction time was calculated, for correct answers only, from the moment the emotion words appeared until a response was made. After the participant's reaction, the picture disappeared, a blank screen was shown for 100 ms, and the next trial began. Figure 2 displays the time course of a trial.
To investigate imitation, the BERT was presented in an imitation and a watch condition. In the imitation condition, the subject was instructed to move their facial muscles like the person in the photograph. The term "imitation" was not mentioned, to mask the hypothesis. In the watch condition, the participant was instructed to just watch the person in the picture. Each condition consisted of 23 different pictures randomly drawn from the BERT picture pool. Due to a technical error, not all participants saw the same stimuli (see Limitations).
Autistic traits
To assess autistic traits in both groups and to screen for ASC in the neurotypical group, the Autism-Spectrum Quotient (AQ) [38] was administered in its German version [39]. The AQ is a 50-item self-report questionnaire assessing different areas of behaviour and attitudes associated with autism spectrum conditions, such as social and communication skills, imagination and attention. On a 4-point scale, participants indicate how strongly they agree or disagree with a statement. Every slight or strong agreement with an autistic behaviour adds one point to the total score. A score of 32 or above is seen as an indicator of autistic traits that might be clinically significant. The AQ has been shown to have good test-retest reliability and inter-rater reliability [38] as well as good discriminative validity and screening properties in clinical practice [40].
Automatic analysis of facial imitation behaviour
We chose a sign-based approach to measure participants' facial expressions. Sign-based approaches are descriptive; they classify the visual input into abstract facial movements described by their location and intensity. As a coding system for these movements, the Facial Action Coding System (FACS [41]) is widely used in behavioural science and in automatic facial expression analysis. It breaks down facial expressions into 44 observable muscle movements, called action units (AU).
A major advantage of sign-based approaches is their objectivity, as they do not involve interpretation [27]. Moreover, they do not reduce the complex emotional facial expression of a person to a small set of more abstract prototypical emotional expressions [42]. Last but not least, sign-based approaches preserve more dynamic information, such as the time point, duration, and amplitude of an action [43]. This is crucial, as humans are very sensitive to the timing of facial actions [44].
We employed the OpenFace 2.0 toolkit [45] to extract facial action units from the video recordings of the participants' faces. OpenFace is an open-source tool capable of facial-landmark detection, head-pose estimation, facial-action-unit recognition and eye-gaze estimation. OpenFace 2.0 was trained on video data of people responding to an emotion-elicitation task, which corresponds to the conditions under which the BERT stimuli were recorded. Furthermore, it allows correcting for person-specific neutral expressions. OpenFace 2.0 has been tested on several emotion video data sets and demonstrated state-of-the-art results [45,46].

OpenFace extracts the intensity (on a scale from 0 to 5) and the presence of 18 action units (AUs) from each video frame (except for AU28, for which only presence is analysed). An overview of the AUs that can be detected by OpenFace is provided in Table 2 in the "Appendix".
To control for idiosyncrasies in the participants' expressions and their reactions to faces in general, we performed a baseline correction. For each trial of each individual, we calculated the mean activity of each action unit during the baseline phase (presentation of a neutral face). Then, for each trial, we subtracted this baseline activity from the corresponding action unit activity of each frame.
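A minimal sketch of this preprocessing is given below in R. It assumes one OpenFace output CSV per participant with per-frame AU intensity columns (named like "AU01_r" in OpenFace output), plus hypothetical 'trial' and 'phase' columns added when aligning frames to trial onsets; those alignment columns are our assumption, not part of OpenFace's own output.

# Read one participant's OpenFace output and baseline-correct per trial.
of <- read.csv("participant01.csv", check.names = FALSE)
names(of) <- trimws(names(of))                    # raw OpenFace headers can carry leading spaces
au_cols <- grep("_r$", names(of), value = TRUE)   # AU intensity (0-5) columns

corrected <- do.call(rbind, lapply(split(of, of$trial), function(tr) {
  base <- colMeans(tr[tr$phase == "baseline", au_cols])      # mean AU activity, neutral-face phase
  tr[au_cols] <- sweep(as.matrix(tr[au_cols]), 2, base, "-") # subtract from every frame of the trial
  tr
}))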
Measures of imitation
To assess the amount as well as the precision of the participant's imitation (see Fig. 3 for a conceptual overview), we used two approaches: an imitation imprecision score (IIS) and cosine similarity measures (see Fig. 4).
The imitation imprecision score (IIS) indicates the absolute deviation of the participant's facial expressions from the facial expressions displayed by the actors; it takes into account all AUs of the facial expression, and lower IIS scores indicate a higher imitation precision. The IIS was calculated for each subject in two steps: we first averaged the absolute AU deviations over frames and AUs for each picture (Eq. 1), and then averaged over pictures (Eq. 2):

$$IIS_{ps} = \frac{1}{a \cdot m_{ps}} \sum_{i=1}^{a} \sum_{f=1}^{m_{ps}} \left| x_{if} - x_{ip} \right| \qquad (1)$$

where $IIS_{ps}$ is the imitation imprecision score for picture $p$ and subject $s$; $a$ is the total number of tracked action units; $m_{ps}$ is the total number of frames for picture $p$ and subject $s$; $x_{if}$ is the intensity of AU $i$ in frame $f$; and $x_{ip}$ is the intensity of AU $i$ shown in picture $p$ by the actor.

$$IIS_{s} = \frac{1}{n} \sum_{p=1}^{n} IIS_{ps} \qquad (2)$$

where $IIS_{s}$ is the action-unit-based imitation measure for subject $s$, averaged across the $n$ pictures of a condition.

The cosine similarity between the participant's and the actor's AU vectors was computed per frame and averaged over all frames of a trial:

$$\bar{S}_{s} = \frac{1}{m} \sum_{t=1}^{m} \frac{\sum_{i=1}^{n} P_{ti} A_{i}}{\sqrt{\sum_{i=1}^{n} P_{ti}^{2}} \; \sqrt{\sum_{i=1}^{n} A_{i}^{2}}} \qquad (3)$$

where $P_{ti}$ is the participant's intensity of action unit $i$ at time point (frame) $t$; $A_{i}$ is the intensity of the actor's action unit $i$; $n$ is here the total number of action units; and $m$ is the total number of frames of the trial.
For each frame, we calculated the cosine similarity of the actor's and participant's AU vectors, which indicates whether the vectors point in the same direction, i.e. whether the expressions are similar (with 1 as the highest possible value). We analyzed both the average and the maximum cosine similarity (highest value) of each trial for each participant. For the averaged cosine similarity, we first calculated the mean over all frames of a trial and then averaged over all trials of a participant. For the maximum cosine similarity, we calculated the maximum over all frames of a trial and then averaged these maxima over all trials of a participant. To analyze the intensity of the imitation, we calculated the ratio of the length of the participant's vector to the length of the actor's vector at the time point of highest cosine similarity.
To analyze the speed of the imitation, we measured the time point (i.e. frame number) of maximum cosine similarity for each imitation of a participant. These 23 values were averaged for each participant.
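The measures above can be implemented compactly; the following is a minimal R sketch for a single trial. The matrix P (frames × AUs of baseline-corrected participant intensities) and the vector a (the actor's AU intensities) are our variable names, and frames without any tracked movement (an all-zero AU vector) would need explicit handling.

# P: frames x AUs matrix of baseline-corrected participant AU intensities;
# a: the actor's AU intensity vector for the displayed picture.
iis_trial <- function(P, a) {
  mean(abs(sweep(P, 2, a, "-")))      # Eq. 1: mean |participant - actor| over frames and AUs
}

cosine_per_frame <- function(P, a) {
  num <- as.vector(P %*% a)
  den <- sqrt(rowSums(P^2)) * sqrt(sum(a^2))  # zero for frames without movement
  num / den                            # one cosine similarity per frame (cf. Eq. 3)
}

sims            <- cosine_per_frame(P, a)
mean_similarity <- mean(sims)          # averaged cosine similarity of the trial
t_max           <- which.max(sims)     # frame of the most similar expression
intensity_ratio <- sqrt(sum(P[t_max, ]^2)) / sqrt(sum(a^2))  # vector-length ratio at peak
# Imitation speed: t_max, converted to time via the 30-fps frame rate and
# averaged over a participant's trials.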
Statistical analysis
In general, we used a significance level of p < 0.05. However, as we compared the imitation performance of the groups on four different aspects (imprecision, similarity, intensity and speed), we Bonferroni-corrected the level to α* = 0.0125 for these analyses. Data were analyzed using Python and R. In cases in which we found evidence for a strong violation of the normality assumption, we used medians and non-parametric statistical tests, indicated by the respective symbols. Otherwise, we used means in combination with parametric tests.
As expected, the groups differed significantly regarding the AQ, with the mean AQ score being significantly higher in the ASC group than in the neurotypical group.
Effects of autism diagnosis
Across both conditions of the emotion classification task, the NT group showed a higher percentage of correct emotion classification than the ASC group [NT: 79%; ASC: 73%; t(78) = 2.96; p = 0.004, d = 0.67] and faster responses (ASC: 4100 ms, NT: 2832 ms; z = 395, p < 0.001, r = 44.16). We calculated two mixed effects regression models regarding the emotion recognition abilities of the participants. The first model was built to predict the percentage of correct responses with group and imitation-instruction as fixed effects and a random intercept for each participant. The second model aimed to predict the reaction times of correct responses with group and imitation-instruction as fixed effects and a random intercept for each participant.
Facial imitation
Four participants who, irrespective of imitation condition, imitated the facial expression either never or always, were excluded. Thus, the analysis of imitation effects was calculated on 41 neurotypical subjects and 35 individuals with ASC. Further, for the analysis of the facial expression movements, four participants were excluded because tracking was flawed in these cases. The resulting sample size was 39 neurotypical individuals and 33 individuals with ASC. We used linear mixed-effects models to control for individual differences and to deal with missing values. We built a model with the fixed factors autism diagnosis, imitation instruction and their interaction. Additionally, we modelled a random intercept and a random slope for each participant to control for random individual baseline differences and differences in their reaction to the mimicry instruction. We aggregated the data for each participant separately for the two conditions.
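One plausible R implementation of this model structure uses lme4; the paper states only that linear mixed-effects models were used, so the data frame layout and column names below are our assumptions.

library(lme4)

# Assumed layout: data frame 'd' with one row per participant x condition,
# imitation measure 'iis', factors 'diagnosis' (ASC/NT) and 'condition'
# (imitate/watch), and a participant identifier.
m <- lmer(iis ~ diagnosis * condition + (1 + condition | participant), data = d)
summary(m)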
Due to the novel use of computational approaches in facial expression analysis and due to a lack of literature, it is unclear if such approaches are sensitive enough to detect even very small muscle movements, as is the case for spontaneous mimicry. Thus, for this study, we focused on the facial expressions in the instructed imitation condition only, for which we expected marked facial movements.
We focused on four measures of imitation performance: the imprecision (IIS, absolute difference of actor's and participant's AUs), the similarity of their expression (cosine similarity of their AU vectors), the intensity of their most similar expression (ratio of their vector's length at the time point of highest cosine similarity) and the speed of the imitation. First, we checked whether both groups were able to imitate, i.e. showed a higher mean and maximum similarity in the imitation than in the watch condition. Second, we compared individuals with and without autism on all imitation measures. Third, for the individuals with autism we analyzed whether these measures were associated with their level of symptoms on the social domain indicated by the respective ADOS subscore.
Comparison of imitation and watch condition (separated by groups)

Cosine similarity of imitation in NT individuals
The neurotypical participants successfully imitated the expressions. When instructed to imitate, they displayed expressions that were significantly more similar to the target expression (measured by cosine similarity averaged across time and all pictures) than in the watch condition.
Cosine similarity of imitation in individuals with ASC
Individuals with autism also imitated the presented facial expressions when they were instructed to. In the imitation condition, they displayed expressions that were significantly more similar to the target expression (measured by cosine similarity averaged across time and all pictures) than in the watch condition. In line with this finding, the maximum cosine similarity, i.e. the most similar expression during the complete trial, was also higher in the imitation condition (difference: Mdn = 0.086, Z = 36, p < 0.001).
Similarity of imitation
In accordance, we found no evidence that individuals with autism showed less similar expressions, averaged over all pictures, than neurotypical individuals. Figure 5a shows the intensity of the most similar expression averaged over trials for each participant, separated by emotion categories, and Fig. 5b shows the same measure averaged over trials for both groups, separated by emotion categories.
Imprecision of imitation
We measured the precision of the imitated expression by calculating the difference from the original stimuli (IIS_s). The facial expressions shown by individuals with autism differed from the actors' expressions (Mdn = 8.90) significantly more than did those of the neurotypical individuals (Mdn = 8.50), U = 444, p = 0.012, d = 0.3.

Post hoc: variance of imitation in individuals with ASC

Post hoc, we compared the variance of the imitation measures between both groups. There was significantly more variance in the group of individuals with autism than in the neurotypical group regarding the intensity of the imitation (F = 4.63, p = 0.035). The distribution of imitation intensity can be seen in Fig. 6. Further, there was a tendency towards more variance regarding the maximum similarity (p = 0.096); the distribution of maximum similarity is shown in Fig. 7 for both groups.
Dimensional Relationship of social ADOS and Imitation
Severity of autism social symptomatology (social ADOS) was positively associated with the maximum intensity of the imitation (r = 0.445, p = 0.009) and negatively associated with both the similarity of imitation and target expression (r = − 0.477, p = 0.005) and the maximum similarity (r = − 0.512, p = 0.0023). Further, severity of social autism symptomatology correlated positively with the imprecision of the imitation (IIS; r = 0.357, p = 0.041) in autistic individuals, although this did not survive Bonferroni correction.
Effects of amount of imitation on emotion recognition
To further investigate the relationship between imitation of an expression and emotion recognition performance, we calculated a model separately for the imitation condition. For the precision and occurrence of the imitation, we calculated a regression model that controlled for group as well as for the interaction. We calculated an ordinary least squares regression predicting the percentage of correct answers from the IIS and the autism condition. The accuracy of the emotion recognition scaled negatively with the individual imprecision of imitation (β = − 0.039; 95% CI [− 0.068, − 0.010]; z = − 2.67, p = 0.009). We found no effect on the speed of correct answers (β = 75.90, p = 0.706). Further, we found no relationship between emotion recognition and the cosine similarity measures (all p > 0.05).
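A minimal R sketch of this regression is shown below; the column names ('acc' for percentage correct, 'iis' for the imprecision score, 'group' for diagnosis) are illustrative.

# OLS model: accuracy predicted from imitation imprecision and group,
# including their interaction (column names are hypothetical).
fit <- lm(acc ~ iis * group, data = d)
summary(fit)  # the 'iis' coefficient estimates the accuracy change per unit of imprecision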
Discussion
Based on a large sample and computer-based facial analysis, we measured quantitative and qualitative differences in facial imitation as well as emotion recognition between individuals with and without autism. Both groups showed intact imitation of facial expressions when they were instructed to imitate. However, the group of individuals with autism differed from neurotypical individuals regarding the speed and precision of the imitation: their voluntary imitation was on average slower and less precise. A separate analysis of the imitation's intensity and its similarity with the actor's facial expression revealed an association between these measures and the severity of the social deficits in the individuals with autism. The more affected individuals expressed a less similar but more intense imitation. On average, individuals with autism recognized fewer emotions correctly and were slower in recognizing them than neurotypical individuals. While the effect was small for recognition accuracy, it was of greater magnitude for recognition time. For both groups, the instruction to imitate the emotional expressions was associated with decreased performance in emotion recognition compared to the watch condition. However, the precision of the imitation was positively associated with recognition performance across groups.
Group differences in emotion recognition
We replicated previously reported difficulties of individuals with autism in recognizing emotions from facial expressions (e.g. [36,47,48]). Using expressions of basic emotions with varying intensities, which were designed to be difficult to interpret and thus more sensitive, we were able to show that even high-functioning adults with autism seem to have difficulties in recognizing them, in that they need more time and make slightly more errors. These results are consistent with previous studies on emotion recognition in individuals with ASC that report differences for briefly presented stimuli [49,50]. In real life, emotions often occur briefly and in a subtle form comparable to our stimulus material, which might, at least partially, explain the social difficulties of high-functioning individuals with autism in daily life [15].
Group differences in facial imitation
In accordance with similar studies, we found that individuals with autism were on average capable of imitating facial expressions when instructed (e.g. [18,19,23]).
Using computer-based face analysis, we could also measure and quantify qualitative differences in imitation, especially in dependence of the level of autism symptoms in the social domain. Most studies so far have focused on the occurrence of facial imitation and ignored qualitative differences. Preliminary descriptions of such qualitative differences exist, e.g. Loveland et al. [22] stated that "the responses of subjects with autism contained many unusual behaviours, such as bizarre expressions and those that looked 'mechanical'". Another study, which measured the spontaneous and instructed imitation of facial expression in children with autism, pointed to an altered time course of the facial expressions in individuals with ASC, but only for spontaneous imitation [23].
We replicated this timing effect for imitation, as we found that individuals with autism needed on average more time than neurotypical individuals to imitate facial expressions voluntarily. As dynamic properties of facial expressions (e.g. time point of maximal expression, duration, etc.) play an important role in perceived genuineness [51], the group differences in timing might in part underlie the social interaction problems that individuals with autism show. Further, the imitation speed was associated with the recognition speed. This might point to the importance of imitation for emotion recognition. However, it could also be interpreted as generally slower processing times, which underlie both imitation and recognition. Although intelligence might further represent a factor underlying this association, it is less likely to play a role here, given that our participant groups were matched for IQ. In addition to the timing of facial expressions, individuals with autism differed on average from neurotypical individuals regarding the mean precision of their imitation of emotional expressions. This matches a finding by Brewer et al. [52], who reported that posed facial expressions of emotions of individuals with autism were recognized less accurately than those of neurotypical individuals, by both individuals with and without autism.
Further, the imitation quality of the individuals with autism was associated on average with their ADOS social subscores, evident in three different measures of imitation performance (similarity, intensity and, marginally, imprecision). In accordance with this finding, Yoshimura et al. [53] showed an association between the extent of facial imitation and social functioning. Our finding of a negative association between imitation performance and severity of autism also partially resonates with a study by Faso and colleagues [54]. The authors compared posed and evoked facial expressions of adults with and without ASC; naive observers rated the expressions of individuals with ASC as more intense and less natural. However, we did not replicate this difference regarding similarity and intensity of the imitation at the group level, presumably because of a less affected patient group. A post hoc analysis supports this interpretation, as there was more variance in the group of individuals with autism than in the neurotypical group regarding the intensity of their imitation. Furthermore, the negative association between imitation performance and social symptomatology in the absence of a group difference might be explained by the heterogeneity within the autism population, especially as some individuals predominantly show impairments in only one of the two domains, either the social communication and interaction domain or the domain of repetitive behaviour [55]. Thus, a clear group difference regarding imitation performance might only be evident if individuals with social deficits are compared with neurotypical participants. This interpretation is in accordance with the results of a recent study by Zane et al. [56], which compared facial expressions of neurotypical individuals and individuals with autism in an instructed imitation of emotional expressions task. Similar to our study, the authors found more variance regarding the intensity of facial expressions in the group of individuals with autism compared to the neurotypical group.
The difficulties of, especially more severely affected, individuals with ASC in generating facial expressions might be associated with their lower tendency to engage in impression management, such as displaying social laughter [57][58][59]. One possible reason might be a reduced motivation of individuals with ASC for social impression management [60]; another might be a reduced ability to fine-tune one's facial expressions. The second explanation corroborates recent evidence that individuals with ASC show less precise imitation of hand movements [61,62]. These findings favour the assumption that individuals with ASC demonstrate difficulties in the fine-tuning of imitation [63] rather than an inability to imitate [64].
Our findings also match, at least partially, the summary of a recent meta-analysis [24] investigating facial expression production in autism. Trevisan and colleagues concluded that participants with ASC display facial expressions less frequently, for a shorter amount of time and less accurately. Further, they stated that individuals with ASC express emotions neither less intensely nor more slowly. As explained above, the null effect regarding intensity might be explained by not considering the level of social impairments. In general, the comparison of our results with this meta-analysis should be made with caution, as the meta-analysis covers a large number of very different studies, including some on spontaneous expressions, mimicry and verbally prompted posing of facial expressions.
Relationship of imitation and recognition of facial emotions
Comparing the imitation condition to the watch condition revealed a negative effect of the instruction to imitate on emotion recognition performance across groups. This finding is consistent with that of Kulesza et al. [65], who asked healthy participants to recognize basic emotional facial expressions of an actress and found that participants who were instructed to imitate the expression recognized fewer facial displays of the emotions than participants who were instructed to inhibit spontaneous imitation of the expressions. In accordance with these findings, in a study with healthy individuals by Stel et al. [66], mimicking facial and behavioural movements of an interaction partner reduced another aspect of emotional understanding, i.e. detecting whether the partner was lying.
That being said, those results cannot rule out the possibility that imitation does foster emotion recognition after all. For example, another possible reason for the negative effect of the imitation on the accuracy of emotion recognition in our design is an additional cognitive load. Controlling the facial muscles might absorb cognitive energy in the imitation condition, and thereby worsen emotion recognition. In line with this interpretation are the results of a study by Lewis et al. [67]. The participants in this study performed an emotion recognition task twice, and half of the participants had to imitate the facial expression in the second round. As both groups recognized more emotions in the second round, it can be assumed that the participants' cognitive load for the emotion recognition task itself was reduced in the second round. While mimicking did not help the performance at the baseline test, the increase in performance in the second round was significantly higher for the mimickers. It seems plausible that only the lower cognitive load in the repeated condition allowed mimicry to take an effect. Thus, individuals' emotion recognition might benefit from imitation, if the imitation does not involve much extra cognitive load. In our design, all stimuli were presented only once, resulting in two equally difficult conditions. This might overshadow any possible positive effect of imitation.
That beneficial effects of imitation on emotion recognition might indeed exist is indicated by our finding that the intensity as well as the precision of imitation was positively associated with emotion recognition performance across the whole group of participants. However, given the correlational nature of this finding, the interpretation warrants caution: the finding could also be explained by recognition of an emotion mediating imitation, or by the severity of autism social symptoms acting as a confounding variable.
Most studies investigating facial expressions in autism suffer from low statistical power and might be biased by the low intellectual level or age of the participants (for an overview, see [24]). We collected a study sample of 80 individuals, including 37 individuals with autism without intellectual impairment, and ensured a balanced proportion of male and female participants. A further strength of this study is its unobtrusive measure of facial expressions, which is particularly relevant for individuals with autism and allowed us to study a large sample of this population.
Limitations
As our study investigated adults with autism spectrum conditions without intellectual impairment, we do not know whether our findings hold for children with autism or adults with intellectual impairment. A further limitation of this study is its unknown sensitivity to non-observable imitation, as OpenFace only assesses muscle movements detectable by camera, whereas EMG allows assessment of very subtle muscle movements [68]. However, OpenFace 2.0 and its precursor, the OpenFace toolkit, have shown their usefulness in studies aiming to detect suicidal ideation [69], psychotic symptoms [70] and autism [28] based on facial expressions, which speaks for their general sensitivity. A further potential limitation of our study is that we cannot rule out that participants moved their facial muscles voluntarily in the watch condition. However, the negative results for imitative behaviour in that condition speak against this having occurred. Additionally, individuals with gross voluntary movement during the watch condition were excluded based on the individual screening of all video recordings.
In our analysis, we first applied a general baseline correction to account for each participant's habitual facial expression. Second, we calculated a trial-wise baseline correction to measure the participant's imitation relative to their reaction to the actor's neutral face. As a result, our imitation measures are measures of change and movement of someone's face. This baseline correction implies, however, that a person who shows a certain emotional expression during the neutral phases of the experiment (e.g. because she feels anxious throughout the experiment) might receive a lower imitation score for showing a similar emotional expression as the person to be imitated. However, we are not interested in the absolute facial expression but in the change relative to someone's neutral face. This change, from a neutral baseline expression to a more emotion-specific expression, occurred significantly in both groups, evident as a higher cosine similarity averaged across all six basic emotions and participants. Due to a technical error, not all participants saw the same stimuli. However, as the differences were very small and arose by random choice, we do not assume that this affected our results.
Aiming at a voluntary imitation condition that would be as clearly defined as possible, while not necessitating explicit emotion processing, we asked the participants to "move their facial muscles like the person in the photo". We avoided mentioning the term "imitation", as it might activate popular-science beliefs about imitation and its effects on emotion recognition. However, people might scan faces differently if the instruction creates an explicit focus on the muscles rather than the emotion, e.g. by looking less at the eyes and more at other parts of the face. It is also possible that the NT and ASC groups respond to this instruction differently, with ASC participants potentially focusing more literally on muscles rather than the holistic emotion expression. Further studies including eye-tracking should elucidate this aspect, as well as the process of imitation, in a more fine-grained way.
Aiming for high standardization, we explicitly asked participants to imitate a static expression displayed in a photograph for a specific time, instead of collecting facial imitation in the wild. The aim of the study was to investigate the general ability of individuals with autism to imitate facial expressions if they are instructed to. In social interactions, emotions are sometimes expressed voluntarily to produce a certain impression or to present oneself in a socially desirable way [71]. However, the results need to be interpreted with caution, as it is not clear whether people would behave differently in the real world, e.g. due to different contexts, additional load, the dynamics of facial expressions or social motivations. Further, it has been shown that voluntary imitation relies on different underlying processes than spontaneous imitation [72]. Thus, it would be of great value to conduct a similar study in a real-world setting to see if the results generalize. In such an experiment, computer-based measures may help to enable an unobtrusive measurement of facial expression imitation. Still, as previous research has often claimed that voluntary facial imitation is not affected in individuals with autism [18], we consider it important to elucidate these differences in our work.
Indeed it is important to bear in mind that our understanding of how facial expressions are used in the real world is still very limited [73]. Further research is needed to better understand how people move their faces in different contexts of everyday life and how they use their facial movements to transfer social information.
Finally, yet importantly, the positive relationship of emotion recognition and imitation extent and precision could only be shown as a correlation. Further studies are needed to investigate the causal direction of this relationship.
Conclusions
To the best of our knowledge, this is the first study that successfully used computer-based analysis to measure facial expression in an imitation context. This unobtrusive and affordable method allowed us to measure qualitative differences in facial expressions between neurotypical individuals and individuals with autism. Using the newly developed sensitive emotion recognition task BERT, we were able to replicate the emotion recognition deficit in individuals with autism and provided some evidence for a positive association of imitation performance and the recognition of emotions.
Further research should explore facial expressions in social interactions with active and passive roles of the participants (expressing and recognizing emotions) to exclude the artificial load of the instruction to express an emotion in imitation paradigms. More broadly, research is also needed to determine the potential of training imitation as a possible mechanism to enhance emotion recognition. While imitation does not seem to help emotion recognition immediately (likely due to additional task demands), training imitation precision via instruction might enhance spontaneous imitation and thereby foster emotion recognition.

Table 2 Selected single action units from the Facial Action Coding System (FACS) [41]; table adapted from Rosenberg [75]. (AU45, which refers to an eye blink, is not listed here but is detected by OpenFace.) *AU cannot be detected by OpenFace [45]. **AU can be tracked by OpenFace but is not considered relevant for basic emotions in FACS [41], as cited by Gosselin.
"year": 2021,
"sha1": "b6c85bf07a9d5d29e9b541d48ab76473df452c6d",
"oa_license": "CCBY",
"oa_url": "https://molecularautism.biomedcentral.com/track/pdf/10.1186/s13229-021-00430-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c86d11b025244a8dd92381253b6d4896ebe90ce3",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Novel Classification of Glioma Subgroup, Which Is Highly Correlated With the Clinical Characteristics and Tumor Tissue Characteristics, Based on the Expression Levels of Gβ and Gγ Genes
Purpose Glioma is a classical type of primary brain tumor that is most common in adults, and its high heterogeneity has long served as a basis for subgroup classification. Glioma has been diagnosed based on histopathology, grade, and molecular markers including IDH mutation, chromosome 1p/19q loss, and H3K27M mutation. This subgroup classification cannot fully meet the current needs of clinicians and researchers. We therefore present a new subgroup classification for glioma based on the expression levels of Gβ and Gγ genes, to complement studies on glioma and Gβγ subunits and to support clinicians in assessing a patient's tumor status. Methods Glioma samples were retrieved from the CGGA database and the TCGA database. We clustered the gliomas into different groups using the expression values of Gβ and Gγ genes extracted from RNA sequencing data. The Kaplan–Meier method with a two-sided log-rank test was adopted to compare the OS of patients in the GNB2 group and the non-GNB2 group. Univariate Cox regression analysis was used to investigate the prognostic role of each Gβ and Gγ gene. KEGG and ssGSEA analyses were applied to identify highly activated pathways. The "estimate" package, the "GSVA" package, and the online analytical tool CIBERSORTx were employed to evaluate immune cell infiltration in glioma samples. Results Three subgroups were identified. Each subgroup had its own specific pathway activation pattern and other biological characteristics. High M2 cell infiltration was observed in the GNB2 subgroup. Different subgroups displayed different sensitivities to chemotherapeutics. The GNB2 subgroup predicted poor survival in patients with gliomas, especially in patients with LGG with mutant IDH and non-codeleted 1p19q. Conclusion The subgroup classification we propose has great application value. It can be used to guide the choice of chemotherapeutics and to predict the prognosis of patients with glioma. The unique relationships between subgroups and tumor-related pathways are worthy of further investigation to identify therapeutic Gβγ heterodimer targets.
INTRODUCTION
Glioma is a classical type of primary brain tumor that is most common in adults, and its high heterogeneity has long been a basis for subgroup classification (1). Historically, glioma was diagnosed based on histopathology and grade (2). The World Health Organization Classification of Tumors of the Central Nervous System, revised in 2016, added several molecular markers, including IDH mutation, chromosome 1p/19q loss, and H3K27M mutation, to an integrated glioma diagnosis (3). With the rise of genomic medicine, proposing a multigene signature as the basis of subgroup classification has become an increasingly common approach. One research group described a gene expression-based molecular classification of GBM into Proneural, Neural, Classical, and Mesenchymal subtypes (4). Other studies designed signatures of multiple genes related to m6A RNA methylation, ferroptosis, and lipid metabolism to stratify the prognosis of gliomas (5)(6)(7). Those studies focused on the effect of certain biological processes on gliomas. Based on our observation of the expression levels of Gβ and Gγ genes, we found that they have the potential to serve as molecular markers for subgroup classification of glioma.
G protein-coupled receptors (GPCRs), the largest family of cell-surface receptors in the human genome, are capable of mediating the signaling of a wide range of ligands, such as hormones, neurotransmitters, proteases, lipids, and peptides (8). GPCR activation is mediated by the binding of the GPCR extracellular domain with an agonist ligand. GDP on the Gα subunit is replaced by GTP, resulting in the dissociation of the Gα subunit from the Gβγ heterodimer. The Gβγ heterodimer acts on phospholipase C, voltage-dependent Ca2+ channels, phosphoinositide 3-kinases, and mitogen-activated protein kinases, and is also involved in microtubule polymerization, recycling of endosomes, and Golgi fragmentation (9)(10)(11)(12)(13)(14)(15). Furthermore, Gβ and Gγ may be involved in the assembly of particular GPCR complexes; the pool of Gβ and Gγ in a particular cell may drive and/or dictate which GPCR complexes can form in that cell (16). Gβ and Gγ are crucial participants in the malignant progression of tumors. GNB4 overexpression activates the Erk1/2 pathway, promoting epithelial-mesenchymal transformation of gastric cancer (GC) (17). The proliferation of SK-Mel28 human malignant melanoma cells was suppressed by GNG2 overexpression, and the mean tumor size of GNG2-overexpressing SK-Mel28 cells was smaller than that of control SK-Mel28 cells in nude mice after inoculation (18).
There are five β-subunits (β1, β2, β3, β4, β5) and 12 γ-subunits (γ1, γ2, γ3, γ4, γ5, γ7, γ8, γ9, γ10, γ11, γ12, γ13) in the human body. βγ pairs are specifically related to downstream signals (19). The Gβ1γ2 heterodimer activates PI3K, whereas the Gβ5γ2 heterodimer does not. Both of these heterodimers can activate PLCβ1 and PLCβ2, yet only Gβ1γ2 is able to activate PLCβ3 (20). Differences in affinities between the several types of G protein subunits restrict the formation of certain heterotrimers and, on the other hand, determine the activity of a given type of G protein in a cell (21). Gγ2 and Gγ3 are more likely to be bound to Gβ1, Gβ2, and Gβ4 subunits, whereas Gβ2 does not bind Gγ1, Gγ11, or Gγ13, and binds Gγ8 only weakly (22,23). The mutation rate of Gβ and Gγ genes in glioma is low, so the influence of mutations can be ignored in the subgroup classification.
Patients and Datasets
Nine hundred fifty-one glioma samples retrieved from the CGGA database (http://www.cgga.org.cn) and 672 glioma samples retrieved from the TCGA database (http://cancergenome.nih.gov/) were used in this study. The data included relapse samples; for each patient, only the RNA sequencing data of the first tumor were used. The FPKM-standardized mRNA sequencing data were log2-transformed for all analyses. The count-format mRNA sequencing data retrieved from TCGA were standardized with the voom function.
Bioinformatic Analysis
We first extracted the expression values of Gβ and Gγ genes from the mRNA sequencing data. We then clustered the gliomas into different groups with the "ConsensusClusterPlus" package for R v4.0.3 (https://www.r-project.org/). PCA was employed to study the gene expression patterns in the different glioma groups; we used the first three PC values of each sample's RNA sequencing data to establish a distribution map of the samples. Drug sensitivity analysis was performed with the "pRRophetic" package (24,25); a lower estimated IC50 indicated that a subgroup was more sensitive to the drug. We then screened for differentially expressed genes (DEGs) between each pair of subgroups with the "DESeq2" package (26). The DEG threshold was set at |log2 fold change| ≥ 1 and an adjusted P value <0.05. KEGG pathway enrichment analysis was used to annotate the DEGs, and the reliability of the results was verified using ssGSEA. Gene lists of pathways used in ssGSEA were downloaded from the KEGG website (https://www.kegg.jp/kegg/pathway.html). The "estimate" package, the "GSVA" package, and the online analytical tool CIBERSORTx (https://cibersortx.stanford.edu/) were employed to evaluate immune cell infiltration in the glioma samples (27).
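As a rough illustration of the clustering step, the R sketch below calls ConsensusClusterPlus on a hypothetical genes × samples matrix 'expr' restricted to the Gβ and Gγ genes; the resampling parameters are illustrative, since the paper does not report them.

library(ConsensusClusterPlus)

# 'expr': hypothetical genes x samples matrix of log2(FPKM) values for the
# Gbeta/Ggamma genes only; parameter values are illustrative.
res <- ConsensusClusterPlus(as.matrix(expr),
                            maxK = 6, reps = 1000, pItem = 0.8, pFeature = 1,
                            clusterAlg = "km", distance = "euclidean",
                            seed = 1, plot = "png")
subgroup <- res[[3]]$consensusClass   # sample-to-subgroup assignment at k = 3

# PCA of the same values; the first three PCs locate each sample in the map.
pcs <- prcomp(t(expr), scale. = TRUE)$x[, 1:3]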
Statistical Analysis
Student's t-tests performed in SPSS v26 were used to assess differences in Gβ and Gγ gene expression levels. When the expression level of a gene in one subgroup was significantly higher than in the other two subgroups, the gene was considered specifically highly expressed in that subgroup, and vice versa. Chi-square tests were used to compare the distribution of clinical features among the three groups. The Kaplan-Meier method with a two-sided log-rank test was used to compare the OS of patients in the GNB2 group versus the non-GNB2 group. Univariate Cox regression on the expression levels in the CGGA and TCGA datasets was used to investigate the prognostic role of each Gβ and Gγ gene. The Pearson method was used to evaluate the correlation between Gβ and Gγ genes and macrophage infiltration; an R value greater than 0.5 was considered a significant positive correlation, and a p value less than 0.05 was considered statistically significant.
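A minimal sketch of the survival analyses with the R "survival" package is given below, under assumed column names ('os_time', 'os_status', 'subgroup' in a clinical data frame 'clin'; 'expr' as in the clustering sketch, with samples in the same order as 'clin').

library(survival)

# Log-rank comparison of OS between the GNB2 subgroup and all other samples.
km <- survdiff(Surv(os_time, os_status) ~ I(subgroup == "GNB2"), data = clin)

# Univariate Cox regression for each Gbeta/Ggamma gene (hazard ratio, p value).
cox_tab <- t(sapply(rownames(expr), function(g) {
  fit <- coxph(Surv(clin$os_time, clin$os_status) ~ expr[g, ])
  summary(fit)$coefficients[1, c("exp(coef)", "Pr(>|z|)")]
}))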
Three Gβγ-Related Subgroups Exist in Glioma
Based on the clustering consistency in the two datasets (Figures 1A, B; an inflection point appeared at k = 4) and the correlation of samples between subgroups (Figures 1C-H; samples between subgroups were still highly correlated at k = 4, which improved markedly at k = 3), k = 3 appeared to be the sound choice (Figures 1I, J), and we found that the subgroups of the two datasets matched well (Figures 2A, B). GNB2, GNB5, GNG10, GNG11, and GNG12 were highly expressed, while GNB3, GNG2, GNG4, and GNG13 were expressed at low levels, in a subgroup that we named the "GNB2 subgroup". GNB3 was highly expressed while GNB1 and GNG12 were expressed at low levels in a subgroup named the "GNB3 subgroup". GNB5, GNG3, GNG7, and GNG13 were highly expressed, while GNB2 and GNB4 were expressed at low levels, in a subgroup named the "GNB5 subgroup".
Significant Differences Demonstrated in Molecular and Clinical Characteristics Between Different Subgroups
Particularly notable was that the GNB2 subgroup was almost entirely composed of samples without 1p19q codeletion in the TCGA (99.2%) and CGGA (98.6%) datasets (Figures 2E, F). This reflects the chromosomal positions of GNG5 and GNG12, both of which were highly expressed in the GNB2 subgroup and both of which are located on chromosome 1p; in addition, GNB1 is located on chromosome 1p and GNG8 on chromosome 19q. There was no significant difference in the 1p19q codeletion rate between the GNB3 subgroup and the GNB5 subgroup (Figures 2E, F). The GNB2 subgroup was also associated with higher rates of high pathological grade (Figures 2C, D), wild-type IDH (Figures 2E, F), and unmethylated MGMT promoter (Figures 2G, H), whereas the GNB3 subgroup was associated with higher rates of mutated IDH and methylated MGMT promoter (Figures 2G, H). There was insufficient evidence of a significant relationship between subgroups and tumor location.
We then investigated the response to chemotherapy in the three subgroups and found that 16 chemotherapeutic drugs displayed significant differences in estimated IC50 among the three subgroups (Figure 3). Patients in the GNB2 subgroup showed the highest sensitivity to 11 chemotherapies, including cisplatin (Figure 3B), cytarabine (Figure 3C), and etoposide (Figure 3H), consistent with the finding that subgroups of higher malignancy are more sensitive to chemotherapies (28). In contrast, patients in the GNB5 subgroup showed the lowest sensitivity to 11 chemotherapies. There was no significant difference between the GNB2 and GNB3 subgroups in sensitivity to methotrexate, which is used for intrathecal (CSF) injection in glioma patients with spinal dissemination; both were more sensitive than the GNB5 subgroup (Figure 3O).
Significant Biological Differences Among Subgroups
We screened for differentially expressed genes between each pair of subgroups in the TCGA dataset, and KEGG pathway analysis was carried out to determine which pathways the up-regulated and down-regulated genes were enriched in (Figures 4A-C). Sixteen tumor-related pathways with strong stability were selected for further ssGSEA analysis in the TCGA (Figure 4D) and CGGA (Figure 4E) datasets. The results from the TCGA and CGGA datasets showed strong consistency. The GNB2 subgroup was highly associated with activation of the PI3K-Akt signaling pathway, the JAK-STAT signaling pathway, and several immune-related pathways. The GNB5 subgroup was highly associated with activation of the calcium signaling pathway, the GnRH signaling pathway, the Ras signaling pathway, and other pathways. Finally, the GNB3 subgroup was not associated with activation of any of the 16 selected pathways.
Considering the relationship between the GNB2 subgroup and immune-related pathways, we evaluated immune infiltration with the ESTIMATE algorithm and ssGSEA of 29 immune-related gene sets. The results indicated that the GNB2 subgroup was associated with strong stemness and immune inflammation (Figures 4F, G). When characterizing the abundances of different immune cell types with CIBERSORTx, we found that the infiltration levels of M0 and M2 macrophages increased significantly in glioma samples of the GNB2 subgroup in both the CGGA (Figure 4H) and TCGA (Figure 4I) datasets. In gliomas, tumor-associated macrophages are driven by glioma-secreted cytokines to acquire an M1 or M2 phenotype, which differ in how they modulate the microenvironment (29,30). To further explore the association between the core genes of the GNB2 subgroup and macrophages, characteristic markers of TAMs, M1, and M2 were selected for Pearson analysis (31,32). The results showed that GNG5 and GNG12 were positively correlated with TAMs and M2, but not with M1 (Figures S1A, B). In patients with non-codeleted 1p19q, the correlation between GNG12 and M2 was significantly reduced, but this did not occur for GNG5 (Figures S1C, D). This suggests that the correlation between GNG12 and M2 macrophages does not simply reflect GNG12 expressed by a growing number of infiltrating M2 macrophages, but rather the involvement of GNG12 expressed by glioma cells in M2 macrophage infiltration. Through analysis of publicly available single-cell RNA sequencing data, we found that the cells with high GNG12 expression were mainly glioma cells, which supports this conclusion (Figures S1E, F).
Poor Survival in Patients With Gliomas Predicted by GNB2 Subgroup
The characteristics of the GNB2 subgroup, including IDH wild type, 1p19q non-codeletion, and high infiltration of M2 macrophages, all predict poor survival in patients with gliomas. Consequently, we conducted Kaplan-Meier survival analysis, and a significant association between the GNB2 subgroup and decreased OS was observed in patients with glioma (Figures 5A, B).
After combining the cluster and survival information of the two datasets, we found that patients in the GNB3 subgroup had longer OS than those in the GNB5 subgroup (Figure 5C). Given the reduced sample sizes after stratification and the compatibility of the two datasets, we used the combined data for the subgroup analyses. In patients with grade 2 (Figure 5D), grade 3 (Figure 5E), and grade 4 (Figure 5F) tumors, we consistently observed a significantly shorter OS in the GNB2 subgroup than in the non-GNB2 subgroup. The GNB2 subgroup also exhibited worse OS in patients with glioma with mutated IDH (Figure 5G), wild-type IDH (Figure 5H), and non-codeleted 1p19q (Figure 5I). Among both patients with mutated IDH and patients with non-codeleted 1p19q, those with grade 2 and grade 3 tumors in the GNB2 subgroup showed shorter OS than those in the non-GNB2 subgroup (Figures 5J-M). In patients with LGG with mutated IDH and non-codeleted 1p19q, a finely segmented patient set, the GNB2 subgroup likewise predicted poor survival (Figure 5N). This result is encouraging because no further officially recommended prognostic molecular markers are available for these patients. We then performed univariate Cox regression analysis on the expression levels in the TCGA (Figure S2A) and CGGA (Figure S2B) datasets to investigate the prognostic role of each Gβ and Gγ gene. The results showed that high GNB1, GNB2, GNG5, GNG10, GNG11, and GNG12 expression was associated with poor prognosis, and high GNB5 and GNG4 expression with good prognosis, in both the TCGA and CGGA datasets.
DISCUSSION
We also applied the cluster analysis to RNA sequencing data from other tumors, including LUAD and LUSC. The results, however, reflecting strong correlations and a lack of valuable pathways, were unsatisfactory. The positive clustering results of this study might be attributed to certain characteristics of glioma tissue, such as the glioma-specific 1p19q codeletion affecting the expression of GNB1, GNG5, GNG7, GNG8, and GNG12. Besides, compared with other somatic tumors, the relatively immune-privileged microenvironment of glioma, which is dominated by macrophages, reduces the confounding effect of gene expression from other immune cells on the RNA sequencing data of the whole tissue.
Owing to the blood-brain barrier, peripheral blood-derived macrophages and intracranial microglia, rather than T cells, are the crucial immune cells in the immune microenvironment of glioma (33,34). In the microenvironment of malignant tumors, M2 macrophages are the major macrophage subtype and important contributors to an immunosuppressive phenotype (35,36). High M2 macrophage infiltration is associated with poor prognosis in patients with glioma, which partly explains the short OS of GNB2-subgroup patients. GNG12 might play a distinct role in the formation of the immunosuppressive phenotype of glioma: a previous study showed that GNG12 regulates PD-L1 expression by activating NF-κB signaling in pancreatic ductal adenocarcinoma, and in the present study the expression level of GNG12 was also positively correlated with the expression of PD-L1.
Several molecular markers of the GNB2 subgroup are associated with tumor progression. Both mutation and overexpression of GNB2 cause leukemogenesis, and downregulation of GNB2 expression reduces the proliferative potential of tumor cells (37). Overexpression of GNG5 is associated with poor prognosis in patients with glioma (38). GNG4 was found to be one of the most hypermethylated and downregulated genes in GBM, and exogenous overexpression of GNG4 inhibited SDF1α/CXCR4-dependent chemokine signaling, leading to inhibition of proliferation and colony formation in GBM cell lines (39). High rates of high pathological grade and IDH wild type also contribute to the poor prognosis of patients in the GNB2 subgroup.
The limitations of our study are as follows. Owing to the increasing complexity of subunit pairs, we did not incorporate the Gα genes in this study. In addition, we could not determine the specific Gβ-Gγ pairs in the corresponding subgroups, which are difficult to infer from RNA sequencing data alone; a large number of experiments are still needed to determine the exact pairs, although the specificity of Gβ-Gγ combinations helps to narrow the scope. Moreover, the predictive capability of the subgroup model still needs to be validated on independently generated data. Besides, this classification was obtained by unsupervised consensus clustering, which does not presuppose specific conditions on the G-protein subunit gene expression values.
To determine which subgroup a glioma tissue belongs to, we would need the exact conditions on each gene expression value or a mathematical decision model, such as a neural network, which would require a certain number of samples for parameter optimization. The RNA sequencing data we analyzed were sourced from the TCGA and CGGA databases, which limited access to clinical data such as the extent of surgical resection and residual tumor volume. A new clinical cohort collecting substantial clinical data is necessary for verification and further study. We identified several important pathways corresponding to the subgroups, yet the role of Gβγ in these pathways and the effects of these pathways on tumor tissue remain to be investigated.
CONCLUSION
This paper has presented a new subgroup classification for glioma based on the expression levels of the Gβ and Gγ genes. Patients with glioma were divided into three subgroups that differed significantly from each other, each with its own specific pathway activation pattern and other biological characteristics. The unique relationships between subgroups and tumor-related pathways can be further investigated to identify therapeutic Gβγ heterodimer targets. High M2 cell infiltration was observed in the GNB2 subgroup, and GNG12 may act as a potential effector of the immunosuppressive phenotype of glioma. Different subgroups have different sensitivities to chemotherapeutics, so this study may inform clinical drug selection. Additionally, the GNB2 subgroup predicted poor survival in patients with gliomas, especially in patients with LGG with mutated IDH and non-codeleted 1p19q. This subgroup classification is expected to become a new molecular marker for predicting the prognosis of these patients. It can be used to identify, among patients with low pathological grade, those whose tumors are actually highly malignant, so as to recommend an optimal treatment window in advance and improve the likelihood of successful treatment.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the ethics committee of Beijing Tiantan Hospital. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements.
AUTHOR CONTRIBUTIONS
ZC was responsible for data analysis and article writing. WL was responsible for topic selection and research design. CY and YF were responsible for assisting in writing the paper. CW and QJ were responsible for guiding the statistical analysis. SL was responsible for assisting in writing the paper. FC directed the paper writing. All authors contributed to the article and approved the submitted version. | 2021-06-18T13:28:27.033Z | 2021-06-18T00:00:00.000 | {
"year": 2021,
"sha1": "4e096d18152f40716d75e20abd4de08d950eea23",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.685823/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e096d18152f40716d75e20abd4de08d950eea23",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228096321 | pes2o/s2orc | v3-fos-license | An Unusual Case of Swelling of Tuberculosis of Elbow and Forearm: A Case Report
Introduction: Mycobacterial infection of the upper extremities is rare, with the elbow joint being the most frequently affected, accounting for 2% to 5% of all skeletal localizations. Diagnosis is of paramount importance in tuberculosis of the elbow because a delay in diagnosis can lead to serious complications. Case Presentation: We describe a rare presentation of a 38-year-old male with tuberculosis of the elbow joint. Massive swelling of the forearm with a subcutaneous collection, without any significant involvement of the forearm muscles, has rarely been reported. This case will be a significant addition to the literature with respect to the clinical presentation of elbow tuberculosis. Conclusion: Tuberculosis of the elbow together with that of the forearm is rare, and surgical intervention can lead to better outcomes in these patients.
Introduction
Musculoskeletal tuberculosis has shown a resurgence in the past few years due to the increased number of immunocompromised individuals and the emergence of drug-resistant bacteria [1]. The musculoskeletal system is involved in 1-3% of patients with tuberculosis and accounts for 10% of all extra-pulmonary tuberculosis, the common sites being the spine (51%), pelvis (12%), hip and femur (10%), knee and tibia (10%), and ribs (7%). Mycobacterial disease of the upper extremities is rare, with the elbow joint most frequently affected, accounting for 2-5% of all skeletal localizations [1,2,3]. Diagnosis is of utmost importance in osteoarticular TB, as diagnostic delay will cause serious complications. Despite the wide array of investigations available for tuberculosis, the importance of the patient's history and clinical examination cannot be overstated; however, the clinical presentation is sometimes unusual, as in this 38-year-old male with massive forearm swelling. We report this case to highlight the clinical presentation of tuberculosis of the elbow and forearm, with the aim of adding something new to the literature.
Case Presentation
A 38-year-old male presented to us with complaints of pain and swelling in his right elbow and forearm for the past 1 year. The pain was insidious in onset, moderate in intensity, non-radiating, gradually progressive in nature, aggravated by movement, and partially relieved with rest. Initially, the swelling was localized to the elbow joint, extending to the forearm over the past few months. There was no history of weight loss or trauma, but the complaints were associated with episodes of fever. On examination, the patient appeared malnourished; locally, massive swelling extending from the elbow to the distal forearm was seen. The swelling was circumferential around the elbow joint; distally, it was localized to the anterolateral aspect of the forearm. It was tense, with prominent veins, and was tender around the elbow but non-tender around the forearm. It was soft and cystic in consistency, compressible, with positive fluctuation. Movements around the elbow joint were limited (Fig. 1). Distal neurovascular status was intact. The blood picture revealed a raised lymphocyte count with a raised erythrocyte sedimentation rate. Radiologically, X-ray of the elbow joint showed arthritic changes along with multiple ill-defined non-sclerotic lytic lesions involving the humeral condyles, the olecranon process of the ulna, and the radial head, along with a large soft-tissue shadow (Fig. 2). MRI was suggestive of a large collection along the anterolateral aspect of the right forearm, mainly restricted to the subcutaneous plane (Fig. 3). Incision and drainage of the swelling were done under general anesthesia. Intraoperatively, the swelling was filled with purulent exudate of around 1 L in volume (Fig. 4). Microbiological and histopathological examination of the synovial and necrotic tissue showed caseating granulomas with Langhans giant cells, consistent with tuberculosis. Postoperatively, the patient was put on antitubercular drugs, a four-agent regimen of isoniazid, rifampicin, pyrazinamide, and ethambutol (AKT-4). On follow-up at 4 weeks and at 3 months thereafter, the swelling did not recur (Fig. 5). At present, he is improving with no recurrence of swelling.
Discussion
Clinically, the diagnosis of osteoarticular tuberculosis is difficult, with gradual onset of joint pain, swelling, decreased range of motion, progressive loss of function, and deformity. During the early phase, tuberculous osteoarthritis might easily be mistaken for trauma, septic arthritis, or rheumatoid osteoarthritis. Tuberculosis affects non-weight-bearing joints, the elbow being the most frequently involved joint in the upper extremity, followed by the shoulder joint [1,2]. Mycobacterium tuberculosis is the main causative organism, with only a few cases attributable to Mycobacterium bovis and atypical mycobacteria [4]. Osteoarticular tuberculosis is the result of hematogenous, lymphatic, or local spread from adjacent or other areas of primary infection, with rare cases arising from direct inoculation of bacteria [5]. The pathogenesis of elbow joint tuberculosis involves reactive hyperemia resulting in marked juxta-articular bone demineralization, local bone destruction, and periosteal new bone formation, with forearm involvement ranging from the subcutaneous plane to the forearm muscles. Infection starts as synovitis, causing joint effusion, erosions, and destruction of bone and cartilage. When untreated, para-articular soft tissues are also involved; this involvement may be confined to muscles or, rarely, extend to the subcutaneous tissue.
"Osteoarticular TB should be suspected in patients of South Asian and African origin presenting with bony and soft-tissue infective lesions" [6]. Even though in most instances a biopsy or culture specimen is required to establish the conclusive diagnosis, it is critical that "radiologists and clinicians understand the typical distribution, patterns, and imaging manifestations of musculoskeletal tuberculosis" [7]. In the Indian subcontinent, the presentation of elbow tuberculosis is usually exudative, with abscess formation. Surgical intervention can appreciably alter the outcome, especially in patients with extra-articular involvement close to the joint. Massive swelling of the forearm with a subcutaneous collection, without any significant involvement of the forearm muscles, has rarely been reported, and this case will be a significant addition to the literature with respect to the clinical presentation of elbow tuberculosis. While similar studies by Protzman et al. [8] and Yazici et al. [9] prescribed a fairly conservative approach with conservative management alone, we recommend that if the swelling is massive or unusual, extending to the arm or forearm, surgical intervention is the better option.
In our case, the elbow swelling was overshadowed by the forearm swelling and was mistaken for a tumor. Changes on plain film radiography of the affected joint included periarticular osteoporosis, peripherally located osseous erosions, and gradual narrowing of the cartilage space, known as the Phemister triad. Round or oval lesions with poorly defined margins in the bone adjacent to the affected joint, together with joint effusion and soft-tissue swelling, are a common finding in extremity tuberculosis, as in our patient. MRI features include bone marrow changes indicating osteomyelitis or bone marrow edema, bone erosions, synovial thickening, and joint effusion. The synovial thickening associated with osteoarticular tuberculosis is hypointense on T2-weighted MRI images, distinguishing it from other proliferative synovial arthropathies.
Radiological findings in osteoarticular tuberculosis are nonspecific and require aspiration or synovial biopsy for definitive diagnosis. Cultures and synovial microscopy yield positive results in up to 80% of individuals with osteoarticular tuberculosis, while the remainder are identified through complete synovial or bone biopsies. Histology displays caseating granulomas even when the Ziehl-Neelsen stain is negative.
Conclusion
Although extra-articular involvement in elbow tuberculosis is rare, it has to be kept in mind in unusual presentations of forearm swelling, and surgical intervention can lead to better outcomes in these patients.
Clinical Message
Extra-articular tuberculosis is a rare entity; still, for any soft-tissue swelling adjacent to a joint, a thorough clinical and radiological evaluation should be done to rule out osteoarticular tuberculosis. | 2020-12-12T05:04:54.951Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "43f11c558fb8ecd802dc3bff14aec93a8424263c",
"oa_license": "CCBYNCSA",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "43f11c558fb8ecd802dc3bff14aec93a8424263c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14538387 | pes2o/s2orc | v3-fos-license | Why the Real Part of the Proton-Proton Forward Scattering Amplitude Should be Measured at the LHC
For the energy of 14 TeV, to be reached at the Large Hadron Collider (LHC), we have had for some time accurate predictions for both the real and imaginary parts of the forward proton-proton elastic scattering amplitude. LHC is now scheduled to start operating in two years, and it is timely to discuss some of the important consequences of the measurements of both the total cross-section and the ratio of the real to the imaginary part. We stress the importance of measuring the real part of the proton-proton forward scattering amplitude at LHC, because a deviation from existing theoretical predictions could be a strong sign for new physics.
We all know that, up to now, scattering amplitudes of PHYSICAL, strongly interacting particles (i.e. baryons and mesons) appear to satisfy dispersion relations, while in the particular case of quarks, since there are no asymptotic states, this statement would be meaningless. The most general versions of local quantum field theory lead to proving dispersion relations 1 and, more generally, analyticity properties in two variables in a rather large domain, if one makes use of the positivity properties of the absorptive part of the scattering amplitude 2. Furthermore, one can prove the Froissart-Martin bound for the total cross section, σ_total(s) ≤ C ln²(s/s₀) (1), where s is the square of the center-of-mass (c.m.) energy. The first question one faces regarding the above results is the composite nature of protons and mesons, i.e. their quark-gluon structure. Some physicists doubted that composite particles could be described by local fields. However, Zimmermann proved long ago that a local field operator could be used as an interpolating field to represent a composite particle 3. The asymptotic free in and out limits of this field could then be used to obtain S-matrix elements involving composite objects like a proton. In the sixties, it was realized that asymptotic theory, reduction formulae and standard analyticity properties hold also when particles, in particular composite particles, are created by polynomials in regularized local fields or local observables acting on the vacuum. It was also shown in Ref. 1 that even in these cases the scattering amplitudes are polynomially bounded, so that dispersion relations hold in the same way as for strictly local fields. We shall take this picture as our starting point.
Empirically, over many years, dispersion relations have always been consistent with the measured data for energies reached by fixed target machines (e.g. pion nucleon scattering) or colliders (ISR and SPPS colliders).
[Figure 1. Left: the prediction at √s = 24.3 GeV compared to the UA6 data 7 (taken from Ref. [5]). Right: dσ/dt for pp elastic scattering at the LHC energy, as a function of |t| in the small t region; the dashed curve is the pure hadronic contribution, while the solid curve includes both the hadronic and the Coulomb amplitudes.]
Unfortunately, the measurement of ρ (see Eq. (2)) at the Tevatron has such large errors that no useful information can be extracted from the data.
The question of what will happen at LHC energies is completely open. Here √s will be 30 times higher than the highest energy for which ρ has previously been accurately measured. From the point of view of some string theorists, the extra dimensions needed for string theory could be larger than the compact ones proposed in the early string days. Indeed, in some recent models 4, these could be of a scale not far above that of LHC and could introduce observable non-local effects. However, one should note that at present none of these string type theories has a clear definition of a scattering amplitude. If a(s, t) denotes the spin-independent amplitude for pp (and p̄p) elastic scattering, where t is the momentum transfer, we define the ratio of real to imaginary parts of the forward amplitude, the total cross section, and the differential cross section; in a standard normalization (Eqs. (2)-(4) did not survive extraction) these read

ρ(s) = Re a(s, t=0) / Im a(s, t=0), (2)
σ_total(s) = Im a(s, t=0) / s, (3)
dσ/dt = |a(s, t)|² / (16π s²). (4)

We recall that three of us, Bourrely, Soffer and Wu (BSW), proposed more than twenty years ago an impact picture approach 5, based on the work of Cheng and Wu 6, which describes accurately all available pp and p̄p elastic data. Several predictions have been made and, as an illustration, we show in Fig. 1 (left) the predicted cross section for p̄p in the Coulomb-Nuclear Interference (CNI) region, compared to the UA6 data 7.
Let us now come back to the important question of testing dispersion relations, which can be done in two ways:
- use an explicit model which reproduces very well all existing data and satisfies, by construction, dispersion relations, such as the BSW model;
- use fits of existing data, e.g. the one performed by the UA4/2 Collaboration 8.
The superiority of the first approach is that there is no flexibility in the predictions, while in the second it is essential to take a smooth fit, depending on a few parameters, because otherwise the predictive power is lost. At the LHC energy √s = 14 TeV, the BSW model predicts ρ = 0.122 with σ_total = 103.6 mb, (5) and for completeness we display in Fig. 1 (right) the predicted cross section in the very small t region. The UA4/2 fit predicts ρ = 0.13 ± 0.018 with σ_total = 109 ± 8 mb. (6)
We see that these numerical predictions are compatible. If the experiment gives numbers compatible with those above, it will mean that the scale of violation is very much above the LHC energy or, that the corresponding minimal size is much smaller. It will also mean that the predicted cross sections of the model and of the fit are valid at much higher energies. This will allow us to have a better idea about the magnitude of the cross sections at energies which might never be accessible, except by cosmic ray experiments.
The next question is: what can one conclude if the real part of the amplitude obtained from the dispersion integral over σ_total turns out to be significantly different from the measured one? There are three possible conclusions, all indicating new physics, that would result from such a disagreement. First, it could be that the total cross section beyond the LHC energy region is radically different from what we now believe, based on the indications coming from cosmic ray data or our expectation of a smooth slow logarithmic growth for σ_total. This would be a very important signal for new physics. Second, it is quite possible, though less likely, that in the gap we are faced with, between 0.5 TeV and 14 TeV, something unexpected is happening, e.g. one or more resonances or significant changes in σ_total, which is again new physics. This "gap" was imposed on physicists by the fear that LHC would be the last machine, and one had to go from √s = 2 TeV at the Tevatron to √s = 14 TeV at LHC. This fact makes the failure of the Tevatron ρ-measurement more significant.
Thirdly, there is the possibility that dispersion relations themselves do not hold. This would be a very significant result.
Due to the fact that no violation is seen at lower energies, the violation must be progressive, and it turns out, in our proposal, as we will see, that the violation is controlled by a single parameter. We assume that the initial analyticity domain obtained from local field theory (without the extension due to positivity, which needs polynomial boundedness) is still valid, but that polynomial boundedness is violated in unphysical, in particular complex, regions of the analyticity domain 9. It is not so easy to implement this violation; for instance, if one assumes a growth like exp(√s) in complex directions, one falls back, in the end, on a polynomial bound, on ordinary dispersion relations, and on the standard bound on the total cross section. The first case of non-trivial violation of dispersion relations is when the scattering amplitude is allowed to behave like exp(s/s₀) in unphysical and/or complex directions. If we assume that this bound also holds inside the "Lehmann ellipse" at fixed energy, we can prove, using unitarity, that the physical amplitudes, for t ≤ 0, are bounded by s⁴. This means that we can write a dispersion integral with four subtractions. However, this dispersion integral is not the scattering amplitude: the two differ by an entire function of order one. Another way is to multiply the scattering amplitude by a convergence factor, which guarantees that the modified amplitude has no exponential growth in complex directions. Such a factor is a crossing-symmetric term in s and u, where u is the third Mandelstam variable and m is the proton mass. Because the real part of the scattering amplitude is indeed small in existing data, as well as in models and fits, the effects of this exponential growth are very visible even if the scale of the exponential, s₀, is much higher than the square of the LHC energy. For instance, at LHC (√s = 14 TeV), as stated above, we expect naively: ρ = 0.12 to 0.13.
With a scale √s₀ = 50 TeV, the modified amplitude would lead to ρ = 0.21.
This means that we do not even need a very accurate measurement of ρ to see an effect. A measurement with 30% accuracy could be enough. This is why we are delighted that ρ will be measured by the ATLAS detector at CERN 10 , as a by-product of a luminosity measurement using the Coulomb interference region.
In closing we should stress the following: an experimental measurement of ρ giving a result consistent with Eqs. (5) and (6) will also be an important result and there will be no indication for new physics. However, we would then have an empirical test of local field theory at length scales 30 times smaller than what is presently known. | 2014-10-01T00:00:00.000Z | 2005-11-10T00:00:00.000 | {
"year": 2005,
"sha1": "8eecdf6f0815074db4822040bdf5efe1e78a4aba",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0511135",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f28f38d2c937099909dc5905aee8f3c3533bc49a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
268761640 | pes2o/s2orc | v3-fos-license | Study on the corrosion inhibition performance of Schiff base corrosion inhibitor on Q235 steel
Q235 steel is widely used in industry and daily life, and it is prone to corrosion during use, especially in hydrochloric acid solution, where corrosion is particularly severe. Schiff base organic compounds can significantly inhibit the corrosion of Q235 steel in hydrochloric acid solution and are green, environmentally friendly organic compounds. In this work, aniline, octadecylamine, and glutaraldehyde were used as raw materials to prepare two Schiff base corrosion inhibitors, a bisaniline Schiff base and a bisoctadecylamine Schiff base (named BXFJ and SXFJ, respectively). The corrosion inhibition performance and mechanism of these two Schiff base inhibitors on Q235 steel in hydrochloric acid solution were studied using the weight loss method, electrochemical testing, quantum chemical calculations, and molecular dynamics simulation. The results of the weight loss method and electrochemical testing show that, in 1 mol/L HCl solution, the corrosion inhibition performance of BXFJ at a concentration of 25 mg/L is higher than that of SXFJ, indicating that BXFJ can exert excellent corrosion inhibition performance in an HCl environment. Density functional theory (DFT) indicates that the active sites of the two Schiff bases are the C=N double bonds, and molecular dynamics (MD) simulation further confirms that both Schiff base corrosion inhibitors can adsorb on the surface of Q235 steel.
INTRODUCTION
The harm caused by corrosion to human life and industrial development is astonishing. It is estimated that about one-third of the steel produced annually in the industrial world is scrapped due to corrosion, and developed countries spend 3% to 4% of their GDP each year to address the problem. This reflects only the direct economic loss; the indirect damage caused by corrosion is more extensive and severe. For example, stress corrosion fracture can lead to aircraft crashes, turbine blades flying apart, and bridge collapses. Metal protection is therefore imperative, and the use of corrosion inhibitors to protect metal products is the most economical and effective method. Commonly used corrosion inhibitors include inorganic salts such as nitrite, phosphate, polyphosphate, carbonate, silicate, borate, and chromate [2], but these are highly toxic, can cause eutrophication of water, and have been banned. Some organic compounds containing heteroatoms such as P, O, N, and S, together with π bonds, aromatic rings, and heterocycles, combine low toxicity with high efficiency and are widely used as corrosion inhibitors to inhibit metal corrosion [3][4][5][6]. Excellent corrosion inhibitors are organic compounds containing conjugated double bonds, heteroatoms (i.e. sulfur, nitrogen, oxygen, phosphorus), aromatic rings, and similar structures. Among them, Schiff base corrosion inhibitors have many advantages, such as simple synthesis and being green and non-toxic, and they have important applications in the fields of medicine, catalysis, analytical chemistry, and corrosion. In-depth research has shown that Schiff bases have the potential to become efficient corrosion inhibitors for carbon steel in acid pickling solutions [7]. Schiff base corrosion inhibitors are organic corrosion inhibitors with a wide range of biological activities owing to their unique conjugated structure. They can coordinate with metals and adsorb on metal surfaces to form a film, thereby inhibiting corrosion. Organic compounds with excellent corrosion inhibition performance usually contain heteroatoms or π bonds. Heteroatoms such as N and O, which carry lone pair electrons, can form bonds with metals and adsorb on the metal surface, providing corrosion protection; π bonds exhibit excellent corrosion inhibition performance through the interaction between their orbitals and the metal. Schiff base compounds contain C=N double bonds and often contain benzene rings and several heteroatoms, making them highly susceptible to adsorption onto metals and giving them excellent corrosion inhibition behavior.
Weight loss method
The material used for the weight loss experiments in this work was Q235 steel, and the corrosive medium was 1 mol/L HCl solution.
Experimental steps: (1) Grind the Q235 steel smooth on a metal grinder using 80#, 100#, and 800# water-resistant sandpaper in sequence, then clean with distilled water and anhydrous ethanol, blow dry with cold air, and dry in an electric hot-air drying oven for more than 4 hours. After the test block is completely dry, weigh it on a balance and record the mass. Measure the length, width, and height of the test piece with a digital caliper and record the values.
(2) Measure out solutions of the Schiff base corrosion inhibitors of different types and concentrations with pipettes, add them to 1 mol/L hydrochloric acid solution (diluted with distilled water), and pour the mixture into a conical flask. Suspend the test blocks on fishing line in 200 mL of the corrosion test solution, placing three test blocks in each beaker and ensuring that the blocks are completely immersed in the corrosive solution. Vary the corrosion inhibitor concentration, reaction temperature, reaction time, and other conditions, and repeat the above experiment.
(3) After removing the test piece, rinse the surface with distilled water and anhydrous ethanol in sequence. Then wipe off the corrosion products on the surface of the test piece with degreased cotton dipped in acid solution. Finally, rinse again with anhydrous ethanol and blow dry with cold air. After complete drying, weigh the piece on an electronic balance. The average corrosion rate V of Q235 steel under different conditions and the mass-based corrosion inhibition efficiency η of the Schiff base corrosion inhibitors can be calculated according to equations (1) and (2), respectively.
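The equation images did not survive extraction; for the weight loss method they presumably take the standard form (the listing of ρ among the symbols suggests a density-normalized variant may also have been used, but with the stated units of g·cm⁻²·h⁻¹ the simple form applies):

$$V=\frac{\Delta m}{S\,t}\qquad(1)$$

$$\eta=\frac{\Delta m_{0}-\Delta m}{\Delta m_{0}}\times 100\%\qquad(2)$$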
In Eq. (1), V is the average corrosion rate, g·cm⁻²·h⁻¹; Δm is the difference in sample mass before and after the corrosion reaction, g; S is the total exposed area of the sample, cm²; ρ is the density of the sample, g·cm⁻³; and t is the total reaction time, h.
In Eq. (2), η is the mass-based corrosion inhibition efficiency, %; Δm₀ and Δm are the mass losses of Q235 steel in the absence and presence of the corrosion inhibitor, respectively.
Electrochemical testing
Electrochemical testing used a CHI660E workstation to measure electrochemical impedance spectra and potentiodynamic polarization curves with a three-electrode system: a Q235 steel electrode as the working electrode, a platinum plate electrode as the auxiliary electrode, and a saturated calomel electrode as the reference electrode. Before electrochemical testing, the Q235 steel electrode was immersed in the electrolyte (250 mL of test solution containing different concentrations of corrosion inhibitor) for half an hour to obtain a stable open circuit potential (OCP). After the OCP stabilized, impedance testing (EIS) was performed over a frequency range of 0.01 Hz to 100 kHz with a perturbation voltage of 20 mV. Zview software was used to fit the EIS data and obtain the corresponding impedance parameters and equivalent circuits. The scanning range of the potentiodynamic polarization curves was set to EOCP ± 250 mV, with a scanning rate of 0.1 mV/s. The corrosion inhibition efficiency was calculated from the polarization curve and impedance results according to equations (3) and (4).
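Equations (3) and (4) were likewise lost in extraction; from the symbol definitions below they are presumably the standard expressions:

$$\eta=\frac{I_{\mathrm{corr}}^{0}-I_{\mathrm{corr}}}{I_{\mathrm{corr}}^{0}}\times 100\%\qquad(3)$$

$$\eta=\frac{R_{t}-R_{t}^{0}}{R_{t}}\times 100\%\qquad(4)$$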
In Eq. (3), I⁰corr and Icorr are the corrosion current densities of Q235 steel measured without and with added corrosion inhibitor, respectively, mA·cm⁻².
In Eq. (4), R⁰t and Rt are the charge transfer resistances at the Q235 steel/solution interface measured without and with added corrosion inhibitor, respectively, Ω·cm².
Quantum chemical calculations
Quantum chemistry research is based on the fundamental laws of electronic motion, simulating the electron-nucleus system through calculations that integrate the Schrödinger equation. Various computational models are now used, such as the ab initio method [8], semi-empirical methods [8], and density functional theory (DFT) [10]; although they differ, they all rest on approximate computational foundations.
The molecular configurations of the corrosion inhibitors were all created in the Materials Studio Visualizer. To obtain the most stable molecular configurations, the DMol3 module was used for structure optimization. The global reactivity parameters obtained from the quantum chemical calculations after optimization, including the highest occupied molecular orbital energy (EHOMO), the lowest unoccupied molecular orbital energy (ELUMO), the hardness (η), the softness (S), the energy gap (ΔE), and the dipole moment (μ), were used to characterize the reactivity of the molecules.
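A minimal sketch of how these descriptors follow from the frontier orbital energies, using the common Koopmans-theorem approximations ΔE = ELUMO − EHOMO, η = ΔE/2, and S = 1/(2η); the softness convention and the numerical values below are assumptions for illustration, not results from this study:

```python
def reactivity_descriptors(e_homo: float, e_lumo: float) -> dict:
    """Global reactivity descriptors from frontier orbital energies (eV)."""
    gap = e_lumo - e_homo               # energy gap, Delta E
    hardness = gap / 2.0                # eta = (I - A)/2 with I = -E_HOMO, A = -E_LUMO
    softness = 1.0 / (2.0 * hardness)   # S = 1/(2*eta); some authors use 1/eta
    return {"gap_eV": gap, "hardness_eV": hardness, "softness_per_eV": softness}

# Illustrative placeholder orbital energies for the two inhibitors:
for name, (homo, lumo) in {"BXFJ": (-5.6, -1.9), "SXFJ": (-5.9, -1.2)}.items():
    print(name, reactivity_descriptors(homo, lumo))
```

Note that the ordering in the placeholder values mirrors the paper's argument: a smaller gap (here BXFJ) gives lower hardness and higher softness.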
Molecular Dynamics Simulation
Molecular dynamics simulation provides a better understanding and description of the adsorption process. A molecular model of each corrosion inhibitor was built with the Visualizer module in Materials Studio, after which an Fe crystal model was constructed and the simulation system established. Geometry optimization was carried out with the Geometry Optimization task of the Forcite module, and the dynamics were run with the Dynamics task of the same module, with the quality set to Fine and the COMPASS II force field selected. The NVT ensemble was used, with random initial velocities and a simulation temperature of 25 ℃ (298 K) controlled by an Andersen thermostat. This work simulates the adsorption behavior of the Schiff base corrosion inhibitors in vacuum.
The simulation system consists of the Fe (100) surface, one corrosion inhibitor molecule, and a vacuum layer. First, the Fe lattice was imported and optimized with the DMol3 module, and the Fe (100) surface was cleaved. A supercell was built with U = 12 and V = 12, and a vacuum layer 70 Å thick was added. A corrosion inhibitor molecule was then placed in the constructed system, with its position adjusted so that the molecular head group was perpendicular to the Fe surface. All Fe atoms were then fixed, and the constructed system was optimized in the Forcite module with the Geometry Optimization task and the COMPASS force field. Molecular dynamics simulation was then performed on the system with the NVT ensemble at 298 K for a total time of 1000 ps, outputting 10,000 trajectory frames.
The formula for calculating the adsorption energy is as follows:

E_adsorption = E_total − (E_molecule + E_surface)

where E_adsorption is the adsorption energy of the system; E_total is the total energy of the system after 120 ps of simulation; E_molecule is the energy of the corrosion inhibitor molecule; and E_surface is the energy of the Fe (100) surface.
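As a trivial worked example of this bookkeeping (the energies below are hypothetical placeholders, not values from this study):

```python
def adsorption_energy(e_total: float, e_molecule: float, e_surface: float) -> float:
    """E_adsorption = E_total - (E_molecule + E_surface)."""
    return e_total - (e_molecule + e_surface)

# Placeholder energies in kcal/mol:
e_ads = adsorption_energy(e_total=-9050.0, e_molecule=150.0, e_surface=-9100.0)
print(f"E_adsorption = {e_ads:.1f} kcal/mol (negative => favorable adsorption)")  # -100.0
```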
Results of the Weight Loss Method
The corrosion inhibition performance of the two Schiff bases at different concentrations on Q235 steel in 1 mol/L HCl solution was measured over 4 hours at 30 ℃, and the average corrosion rate V and the mass-based corrosion inhibition efficiency η were calculated with equations (1) and (2).
The test results are shown in Figure 2. During the experiment, a large number of bubbles formed on the surface of the carbon steel in the blank hydrochloric acid, while in the hydrochloric acid solutions containing the two Schiff base corrosion inhibitors the bubbles on the steel surface were significantly reduced. After the experiment, the carbon steel in the blank hydrochloric acid had turned black, with obvious corrosion pits and many corrosion products on the surface, indicating severe corrosion by the hydrochloric acid. In contrast, the surface of the carbon steel in hydrochloric acid containing the Schiff base corrosion inhibitors was relatively intact and bright, with no pitting and fewer surface corrosion products. From these surface observations, it can be preliminarily inferred that the addition of the Schiff base corrosion inhibitors inhibits the corrosion of the metal.
As shown in the figure, the corrosion rate decreases and the corrosion inhibition efficiency increases with increasing corrosion inhibitor concentration; at a concentration of 25 mg/L, both Schiff base corrosion inhibitors show their best corrosion inhibition effect.
Electrochemical Curve Results
At room temperature, the potentiodynamic polarization curves of the Q235 steel electrode were tested in blank hydrochloric acid solution and in solutions with different concentrations (5, 15, 25 mg/L) of BXFJ and SXFJ. The results are shown in Figure 3; the corrosion potential (Ecorr), corrosion current density (Icorr), and Tafel slopes (ba and bc) were obtained by Tafel extrapolation from Figure 3, and the corrosion inhibition efficiency was calculated with formula (3). The electrochemical parameters are listed in Table 1. From Figure 3 and Table 1, it can be seen that after the addition of BXFJ and SXFJ, the corrosion potential Ecorr of Q235 steel shifted negatively, indicating that BXFJ and SXFJ are mixed-type corrosion inhibitors. As shown in Figure 3, the curve shapes are similar, indicating that the addition of BXFJ and SXFJ does not change the electrochemical corrosion reaction mechanism at the electrode.
From Table 1, it can be observed that Icorr decreases with increasing dosage of the two Schiff base corrosion inhibitors, indicating that BXFJ and SXFJ have a corrosion inhibition effect and that the corrosion inhibition efficiency increases. The shapes of the cathodic and anodic branches of the polarization curves change relatively little, and the curves shift toward lower current density. The cathodic Tafel slope bc and anodic Tafel slope ba show no significant changes, indicating that the corrosion inhibitor forms an inhibitor film on the carbon steel surface, slowing both the anodic dissolution of the steel and the cathodic hydrogen evolution reaction. The trend of the polarization curves is consistent with the weight loss test results.
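As a worked illustration of Eq. (3) (the current densities below are hypothetical placeholders, not the values in Table 1):

```python
def eta_pdp(i_corr_blank: float, i_corr_inhibited: float) -> float:
    """Inhibition efficiency (%) from corrosion current densities, Eq. (3)."""
    return (i_corr_blank - i_corr_inhibited) / i_corr_blank * 100.0

# e.g., a blank Icorr of 850 dropping to 95 (same units) with 25 mg/L inhibitor:
print(f"eta = {eta_pdp(850.0, 95.0):.1f}%")  # eta = 88.8%
```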
Electrochemical Impedance Results
Figure 4 shows the electrochemical impedance spectra (EIS) of Q235 steel at room temperature in 1 mol/L hydrochloric acid solution with different concentrations (5, 15, 25 mg/L) of BXFJ and SXFJ. From Figure 4, it can be seen that Q235 steel exhibits similar capacitive loops in the hydrochloric acid solutions both without and with the two corrosion inhibitors, indicating that the corrosion reaction is mainly controlled by the charge transfer step at the metal/solution interface; the diameter of the capacitive loop without corrosion inhibitor is smaller than those with corrosion inhibitor. As the corrosion inhibitor concentration increases, the diameter of the capacitive loop also increases, indicating that the corrosion reaction of Q235 steel is inhibited and that the effect is enhanced. In addition, frequency dispersion and the non-uniformity of the electrode surface lead to depressed, imperfect semicircles after the corrosion inhibitors are added [11].
Frontier Orbital Theory
The electron distributions of the HOMO and LUMO frontier molecular orbitals are useful for studying the adsorption activity of the corrosion inhibitor molecules, as they relate to the molecules' electron-donating and electron-accepting abilities. The frontier orbitals comprise the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). From the data in Table 2, it can be seen that the ΔE value of BXFJ is smaller than that of SXFJ. The magnitude of ΔE determines the stability of a molecule: the larger the ΔE value, the more stable the molecule and the less likely it is to adsorb on metals; the smaller the ΔE value, the more reactive the molecule and the more readily it adsorbs on metals. Therefore, the smaller the ΔE value of a molecule, the better its anti-corrosion and corrosion inhibition effects will be [12].
The hardness (η) of BXFJ is smaller than that of SXFJ, while, conversely, the softness (S) and dipole moment (μ) of BXFJ are greater than those of SXFJ. According to hard-soft acid-base (HSAB) theory, the Fe ions on the surface of Fe-based metals are borderline species of relatively low hardness; therefore, as the softness of a corrosion inhibitor molecule increases and its hardness decreases, it coordinates more readily with the empty d orbitals of Fe, so inhibitor molecules with lower hardness show better corrosion inhibition effects. From Figure 6, it can be seen that after the system reaches equilibrium, the N atoms and double bonds in the corrosion inhibitor molecules lie close to the iron surface: the five-membered ring of OWES, the double bond and nitrogen atom on that ring, and the double bond on the branch chain, as well as the double bond and N atom of OWEE, are almost parallel to the iron surface. Both the N atoms and the double bonds are the main active centers of adsorption, and their interaction with iron contributes significantly. In molecules lacking double bonds, the branches bearing the N atoms do not adsorb parallel to the iron surface but form a certain angle with the metal surface. Both corrosion inhibitor molecules can form a protective film on the metal surface, effectively isolating the iron surface from contact with corrosive species and thereby preventing and delaying the corrosion of iron.
Calculation of Adsorption Energy
The adsorption energies of BXFJ and SXFJ are shown in Table 3. From Table 3, it can be seen that the calculated adsorption energies are all negative, indicating that the two corrosion inhibitor molecules interact with the metal surface and undergo chemical adsorption on the iron surface [13]. Moreover, the larger the absolute value of the adsorption energy, the better the corrosion inhibition effect [14]. The absolute value of the adsorption energy of BXFJ is greater than that of SXFJ, indicating that BXFJ has a better corrosion inhibition effect and stronger adsorption than SXFJ, slowing the corrosion of the metal; therefore, BXFJ has the stronger ability to inhibit metal corrosion.
CONCLUSION
(1) The results of the weight loss method indicate that both corrosion inhibitors have certain corrosion inhibition effects; as the concentration of the corrosion inhibitor increases, the corrosion inhibition efficiency increases, and the corrosion inhibition effect of BXFJ is better.
(2) The polarization curve results show similar curve shapes for the two corrosion inhibitors, indicating that the addition of BXFJ and SXFJ does not change the electrochemical corrosion reaction mechanism at the electrode; BXFJ and SXFJ are mixed-type corrosion inhibitors. The impedance results indicate that the corrosion reaction is mainly controlled by the charge transfer step at the metal/solution interface.
(3) Quantum chemical calculations and molecular dynamics simulations both indicate that the two corrosion inhibitor molecules can stably adsorb on the surface of Q235 steel. The active sites of the two corrosion inhibitor molecules mainly include the C=N double bonds and the N heteroatoms.
Figure 2. Relationship between the concentration of Schiff base corrosion inhibitor and the corrosion inhibition efficiency
Figure 3. Potentiodynamic polarization curves of Q235 steel in 1 M HCl solution with different concentrations of corrosion inhibitor. (a) BXFJ; (b) SXFJ
Figure 6. Configurations of BXFJ (a, b) and SXFJ (c, d) before and after adsorption on the Fe (100) surface
Table 1. Electrochemical parameters of Q235 steel in 1 M HCl solution with different concentrations of corrosion inhibitors
Table 2. Quantum chemical parameters of BXFJ and SXFJ
Table 3. The calculated adsorption energies | 2024-03-31T15:45:39.146Z | 2024-03-25T00:00:00.000 | {
"year": 2024,
"sha1": "360246ae412a21aa24bed1def24baa0cdddb9663",
"oa_license": "CCBYNC",
"oa_url": "https://wepub.org/index.php/IJMSTS/article/download/1010/1007",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "634d4e5be1b0fb27b6aa43e1961672de53a1b81b",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": []
} |
257649242 | pes2o/s2orc | v3-fos-license | Effect of alkalinity and light intensity on the growth of the freshwater sponge Ephydatia fluviatilis (Porifera: Spongillidae)
The adaptation of sponges to freshwater environments was a major event in the evolutionary history of this clade. The transition from a marine environment to freshwater ecosystems entailed a great number of adaptations to more unstable habitats, such as the ability to form resistance gemmules as a defense mechanism against environmental adversity. However, data on the parameters that modulate hatching and growth of these animals are scarce. In the present study, the growth response capacity of Ephydatia fluviatilis (Porifera: Spongillidae) has been evaluated in relation to both water alkalinity and light intensity. The results obtained revealed a positive association between the growth capacity of this freshwater sponge and high alkalinity values. On the other hand, exposure to light, regardless of its intensity, affected the development and distribution of the symbionts, which, in turn, corresponded to a higher growth rate of the sponge. The data obtained suggest an explanation for the greater distribution of this species in alkaline environments. The results of this work also shed light on the importance of the symbiosis phenomenon in E. fluviatilis.
Introduction
With origins dating back more than 630 million years (Ehrlich et al. 2018; Schuster et al. 2018), the phylum Porifera constitutes one of the earliest metazoan groups on the planet still extant. Structural simplicity and phenotypic plasticity are possibly the key factors behind the success of this phylum, which comprises 9,531 valid species (de Voogd et al. 2023), although it is considered that this number could be as high as 15,000 species (Degnan et al. 2015). Likewise, the great adaptive radiation of sponges has allowed them to colonize virtually any aquatic environment.
Despite this, the only clade of sponges that has adapted to inhabit freshwater environments is the family Spongillidae Gray, 1867 (Kenny et al. 2020). The members of this family have been able to occupy almost every kind of freshwater environment and have a global distribution (Manconi and Pronzato 2008). This has entailed significant changes in their physiology, as they have necessarily adapted to a more volatile environment, adopting strategies of tolerance to adverse ecological conditions such as abrupt temperature changes (Schill et al. 2006) and hypoxic conditions (Reiswig and Miller 1998).
To accommodate such circumstances, these sponges are able to form gemmules, resistance structures made up of a protective spiculated cover that stores in its interior a large number of totipotent cells, known as thesocytes (Calheira et al. 2019).
Most freshwater sponges present seasonal cycles of gemmulation, germination, and growth, all in accordance with the physical and chemical patterns of the biotope (Melão and Rocha 1999). Among the relevant parameters, temperature and illumination (Benfey and Reiswig 1982), food availability (Frost 1991), and water level stand out. However, data on the effect of the water conditions themselves on the growth capacity of these animals are scarce. One such condition that has not really been studied is the alkalinity of the water. As most sponges of this family are associated with both lotic and lentic ecosystems, this parameter may influence the growth ability of these animals and, consequently, alter the natural functioning of freshwater habitats, as these animals not only play an ecological role as filter-feeding consumers but also contribute to the recycling of nutrients (Bart et al. 2019) and to certain biogeochemical cycles (Tréguer et al. 2021).
One of the most interesting features of the ecology of sponges is their potential as hosts for a huge diversity of symbionts (Webster and Thomas 2016). Mutualistic interactions between metazoans and numerous groups of autotrophic organisms have been extensively studied and appear to be especially relevant in more primitive animal clades, like cnidarians, platyhelminths, certain mollusks, and urochordates (Hirose 2015; Jäckle et al. 2019; Rosset et al. 2021; Rola et al. 2022). Symbiotic interactions are particularly important in sponges and, in fact, are presumed to be one of the bases of their evolutionary success (Taylor et al. 2007a, b), given the presence of cellular receptors of various types on their surface, together with their complex innate immune system (Degnan 2015), elements that facilitate recognition of the host by its microbial symbionts (Usher 2008). All these factors seem to indicate that the genetic basis necessary for establishing these mutualistic relationships has been present in sponges since the dawn of the phylum. The abundance of symbionts is such that they can constitute up to 38% of the total biomass of freshwater sponges (Laport et al. 2019) and up to 50% of the biomass of the animal in the case of marine sponges (Anteneh et al. 2022). This is why the role of sponges goes far beyond the individual organism, as the holobiont per se forms a rich ecosystem with enormous functional diversity. Despite the joint importance of the different organisms in the sponge microbiome, possibly the most beneficial functional group for the host is the photoautotrophs. Thanks to these symbioses, sponges can incorporate a remarkable part of the products of the symbionts' photosynthetic metabolism (Matsunaga 2018), both through the input of photoassimilated carbon (Taylor et al. 2007a, b) and of nitrogen (Rix et al. 2020). In addition, sponges benefit from the oxygen produced as a by-product of symbiont photosynthesis. Such a relationship is not unilateral, as the photobionts also benefit from their host, not only through protection against external adversities (Pröschold and Darienko 2020), but also through the CO₂ generated by the sponge's metabolism (Achlatis et al. 2019).
The endosymbiotic relationship between sponges and photoautotrophic organisms has been studied especially in freshwater sponges. Unlike their marine counterparts, whose main interaction is with cyanobacteria (Carrier et al. 2022), freshwater sponges are mostly associated with eukaryotic photobionts (Chernogor et al. 2013). Although they can host a wide range of prokaryotic symbionts (Gernert et al. 2005; Keller-Costa et al. 2014), no evidence of symbiosis between cyanobacteria and freshwater sponges has been found under normal conditions (Wilkinson 1987; Adams 2000; Annenkova et al. 2011). The importance of photobionts in freshwater sponges has been questioned on numerous occasions (Wilkinson 1980; Jensen and Pedersen 1994; Hall et al. 2021), as most of these species are associated with shallow, stagnant waters and behave mostly as sciaphilic metazoans (De Santo and Fell 1996). However, the incorporation of photobionts has been shown to induce a higher growth rate in these animals (Frost et al. 1997; Skelton and Strand 2013), given the translocation of nutrients from symbiont to host.
The aim of the present study was to determine the growth capacity of the freshwater sponge Ephydatia fluviatilis (Linnaeus, 1759) under different simulated environmental conditions. This species, like the other representatives of the genus Ephydatia, is known for its wide cosmopolitan distribution (Erpenbeck et al. 2020), and it is usually found in brackish water bodies (Kohn et al. 2020), but also in alkaline fresh waters (Poirrier 1974; Gaino et al. 2012). Its ability to produce gemmules that can withstand freezing temperatures of around −80 °C (Leys et al. 2019), together with the ease of isolating its green algal symbionts, makes this species optimal for the purpose of this experiment. The growth capacity of this metazoan was tested under different alkalinity conditions, a characteristic parameter of the lotic ecosystems of the island of Mallorca, where the presence of this species has been verified. In addition, the response of E. fluviatilis to the exogenous incorporation of symbionts under different light conditions was evaluated. Together, these experiments test not only the role these organisms may play in sponges during their early germination stages, but also the response of the host to the presence of potential photobionts, in order to infer the importance of symbiosis under different light intensities.
Sample collection and purification of the gemmules
Sponge tissue samples were collected in July 2021 in the pools of the Comafreda stream, also known as Torrent des Guix (Mallorca, Spain; 39º 48′ N, 2º 54′ E). Specimens were found in several ponds along a transect, although samples were only collected from six individuals located in one of the pools (39º 48′ 02′′ N, 2º 54′ 16.7′′ E), at an altitude of 260 m above sea level (Fig. 1).
The selected specimens were all embedded in the limestone rock walls of the road, at a depth of between (Fig. 2). The physical and chemical parameters of turbidity, luminosity, pH, conductivity, temperature, and dissolved oxygen, among others, were measured in each of the areas where sponges were present, using a Hanna HI 9828 portable multiparameter instrument. Six tissue samples were taken from the six specimens in the pool by scraping with a scalpel, and were stored in 2 mL plastic tubes together with water from the pool itself. All samples were kept cold for about 3 h and thereafter stored in the dark at 4 °C. The entire sample collection process was minimally intrusive to the animals. Scientific authorization for the study was obtained from the Ministry of Environment and Territory of the Government of the Balearic Islands (SEN 0576/2020).
For the taxonomic identification of the sponge species in the area, subsamples of the collected tissue were used. Identification followed purely morphological criteria based on the structure of the animal's spicules; for their fixation and subsequent microscopic identification, the methodology described by Hajdu et al. (2013) was followed. For the isolation and subsequent culture of the gemmules in both experiments, a modification of the protocol of Leys et al. (2019) for E. fluviatilis was used. Mechanical disaggregation of the sponge tissue was performed to separate the gemmules, which were then resuspended in a 1% v/v H₂O₂ solution to remove non-viable gemmules. All putatively functional gemmules were then stored at 4 °C in the dark to prevent premature hatching.
Cultivation and maintenance of the sponges
Prior to conducting the alkalinity and light experiments, it was necessary to ensure the viability of the gemmules and verify their correct germination. The viable gemmules that had been kept cold were therefore transferred to Petri dishes, each containing 30 mL of medium M (Rasmont 1961).
Following the recommendations of Leys et al. (2019), once the gemmules were deposited on the plates they were kept in the dark at 20 ± 2 °C until germination. Once all the gemmules had hatched, they were fed every two days by inoculating 15 μL of an autoclaved suspension of Escherichia coli CECT 101 at a final concentration of 10³ CFU mL⁻¹. To prevent the accumulation of toxic metabolites, the medium was changed every two days.
Effect of alkalinity
To determine the growth capacity of the sponges as a function of the alkalinity of the culture medium, the attached sponges were transferred to new 85 mm divided Petri dishes. To study the effect of this parameter, individuals were divided into four groups according to the alkalinity of the medium: low (1 mEq L⁻¹), medium (2.5 mEq L⁻¹), moderate (4 mEq L⁻¹) and high (5 mEq L⁻¹). These values were selected because the range for the natural occurrence of E. fluviatilis oscillates between 0.379 and 3.993 mEq L⁻¹ (Poirrier 1974). Two replicates of each treatment were carried out. For this, medium M was modified to adjust the carbonate/bicarbonate/CO₂ balance: different concentrations of HCO₃⁻ and CO₃²⁻ were added to the culture medium to achieve the desired alkalinity at each level (Table 1), calculated using the equation proposed by Millero (1995) via the carb function of the seacarb package in R. In turn, NaCl was added to each culture plate to balance the Na⁺ concentration at 300 mg mL⁻¹, considered optimal for E. fluviatilis growth (Francis et al. 1982). The sponges were incubated at 22 °C with a 16:8 h photoperiod at an irradiance of 6 μE m⁻² s⁻¹. The treatment lasted 11 days, during which the growth of the specimens was monitored three times a week.
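The authors computed the required salt additions with seacarb::carb(); as a minimal base-R sketch of the underlying carbonate-system arithmetic, the following illustrates how the HCO₃⁻ and CO₃²⁻ additions for a target alkalinity could be estimated at a fixed pH. The function name and interface are illustrative (not the authors' code), the example assumes the field pH of 8.1, and it uses textbook freshwater equilibrium constants at 25 °C without the ionic-strength or temperature corrections that seacarb handles.

```r
# Simplified freshwater carbonate-system calculation: mg L-1 of NaHCO3 and
# Na2CO3 needed to reach a target total alkalinity at a fixed pH.
# Constants: pK2 = 10.33, pKw = 14 (freshwater, 25 degC).
salt_additions <- function(target_alk_meq_L, pH) {
  H   <- 10^(-pH)                 # [H+], mol L-1
  OH  <- 10^(-14) / H             # [OH-] from the water ionic product
  K2  <- 10^(-10.33)              # 2nd dissociation constant of carbonic acid
  alk <- target_alk_meq_L / 1000  # total alkalinity, eq L-1
  # Alkalinity balance: alk = [HCO3-] + 2[CO3 2-] + [OH-] - [H+],
  # with [CO3 2-] = [HCO3-] * K2 / [H+]
  hco3 <- (alk - OH + H) / (1 + 2 * K2 / H)
  co3  <- hco3 * K2 / H
  c(NaHCO3_mg_L = hco3 * 84.01 * 1000,   # molar mass of NaHCO3
    Na2CO3_mg_L = co3 * 105.99 * 1000)   # molar mass of Na2CO3
}

# The four alkalinity levels used in the experiment, at the field pH of 8.1
sapply(c(low = 1, medium = 2.5, moderate = 4, high = 5),
       salt_additions, pH = 8.1)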
Isolation and cultivation of symbionts
For the exogenous inoculation of the symbiotic green algae, they were first isolated from one of the sponge tissue samples preserved at 4 °C. Following the methodology proposed by Hall et al. (2021), the sponge fragment was mechanically homogenized in a mortar previously sterilized with ethanol, to which 500 μL of BBM medium (Stein 1980), used for culturing Chlorella spp., the most common symbiont of E. fluviatilis, had previously been added.
After purification of the sample by sequential cycles of centrifugation, the green pellet obtained was resuspended in a new Eppendorf tube with 500 μL of BBM and transferred to an Erlenmeyer flask containing the same culture medium. The flask was incubated with agitation at an irradiance of 80 μE m⁻² s⁻¹ and a 16:8 h photoperiod, considered optimal for the growth of Chlorella spp. (Amini Khoeyi et al. 2012), the expected symbiont in these sponges. Ampicillin (0.1 mg mL⁻¹) was added to the medium to prevent prokaryotic contamination.
Incubation of the holobiont
To measure the effect of the symbiosis on sponge growth under various irradiance conditions, attached sponges were selected in a manner analogous to that of the alkalinity treatment. In this case, the culture medium for all treatment plates was medium M.
The levels evaluated for the independent variable were direct exposure to light (75 μE m⁻² s⁻¹), penumbra conditions (5.75 μE m⁻² s⁻¹), and absolute darkness. Prior to symbiont inoculation, and to rule out any effect of potential photobionts intrinsically present in these structures, all gemmules were incubated in dark conditions at 20 ± 2 °C. Again, two replicates were carried out for each treatment. Once all the gemmules were fixed, the symbionts were inoculated into each of the plates: 1 mL of the liquid BBM culture in exponential growth phase, two weeks after seeding, was transferred to each plate; at that time the total Chlorella-like cell density was 9.63 × 10⁴ cells mL⁻¹ (for density counting, a direct count was carried out on 10 μL of the culture). The final concentration of algae in each plate was around 6 × 10³ cells mL⁻¹. Prior to the algae inoculation, the sponges were maintained for 3 days after hatching, until the development of a functional aquiferous system (Leys et al. 2019).
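As a quick worked check of the inoculation arithmetic above (C₁V₁ = C₂V₂), the reported stock density, inoculum volume, and final concentration imply the approximate working volume of each plate; the volume itself is not stated at this step, so it is solved for rather than assumed.

```r
# Worked check of the inoculation arithmetic (C1 * V1 = C2 * V2). The plate
# working volume is not stated here, so it is solved for from the reported
# densities rather than assumed.
stock_density <- 9.63e4   # Chlorella-like cells mL-1 in the BBM culture
inoculum_vol  <- 1        # mL transferred to each plate
final_density <- 6e3      # approximate final cells mL-1 in each plate
stock_density * inoculum_vol / final_density   # implied plate volume, ~16 mL
```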
After the inoculation of the algal cells into the cultures, the light experiment started and was carried out for 14 days under the respective irradiance conditions. The area of each sponge was measured every 3 days. To ensure the desired light intensities, this parameter was checked with a portable Delta OHM HD2302.0 radiometer. Unlike in the alkalinity experiment, the sponges were not fed, nor was the symbiont-containing medium changed, during the experiment.
Determination of the areas
To estimate the area of the sponges during the progression of both experiments, the specimens were photographed with a HAYEAR 5MP USB 2.0 C-mount camera coupled to a Leica S8APO stereomicroscope. Area measurements were taken five times, including the initial area of the individuals prior to the start of the treatments. Between 25 and 30 individuals were measured for each treatment in both experiments. The surface area of the organisms was determined using the ImageJ image processing software.
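A sketch of how such repeated projected-area measurements might be converted into per-individual daily growth rates is shown below. The file and column names (id, treatment, day, area_mm2) are hypothetical, and the rate is taken as the slope of a linear fit of area on time, which is one reasonable reading of "growth rate" here rather than the authors' exact formula.

```r
# Growth rates from repeated projected-area measurements (e.g. an ImageJ
# export). Column names are hypothetical: id, treatment, day, area_mm2.
areas <- read.csv("sponge_areas.csv")
rates <- sapply(split(areas, areas$id), function(d)
  coef(lm(area_mm2 ~ day, data = d))[["day"]])    # slope, mm2 per day
rates_df <- data.frame(id = names(rates), rate = unname(rates))
rates_df$treatment <- areas$treatment[match(rates_df$id, areas$id)]
aggregate(rate ~ treatment, data = rates_df, FUN = mean)
```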
Sample fixation and epifluorescence microscopy
To evaluate the presence of symbionts in the samples, the sponges were mounted and stained with DAPI, taking advantage of the autofluorescence of chlorophyll when irradiated with PAR wavelengths corresponding to green and blue. Sponges were isolated de novo in 85 mm Petri dishes in a 4% paraformaldehyde solution mixed with 25% Holtfreter medium and incubated overnight at 4 °C. After incubation, the isolated sponges were stained by adding 25 μL of DAPI at a concentration of 0.01 mg mL⁻¹, which allowed the cellular genetic material to be observed, and the samples were left in the dark for 5 min. Subsequently, they were mounted with a drop of glycerol, ready for observation with a Leica DM2500 epifluorescence microscope using the A and I3 filter cubes (DAPI and blue light, respectively). Photographs were taken with a Leica DFC420C camera.
Statistical analyses
All statistical analyses comparing sponge surface areas were performed in R. Welch's ANOVA with Games-Howell post hoc analysis was used to evaluate growth rate and projected area in the alkalinity study, while Kruskal-Wallis and Dunn's tests were used for these parameters in the light experiment. A one-way ANOVA was used to analyze the occupation of sponge tissue by algal cells.
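These tests map onto standard R calls; a minimal sketch follows, reusing the hypothetical rates_df data frame from above (an added occupation column is likewise assumed). The post hoc procedures require add-on packages, noted in comments, since base R does not provide Games-Howell or Dunn's tests.

```r
# Minimal sketch of the reported tests (rates_df columns assumed:
# rate, occupation, treatment as a factor).
# Welch's ANOVA for the alkalinity experiment (no equal-variance assumption):
oneway.test(rate ~ treatment, data = rates_df, var.equal = FALSE)
# Kruskal-Wallis for the light experiment:
kruskal.test(rate ~ treatment, data = rates_df)
# One-way ANOVA for the occupation of sponge tissue by algal cells:
summary(aov(occupation ~ treatment, data = rates_df))
# Post hoc tests would come from add-on packages, e.g.
# rstatix::games_howell_test() after Welch's ANOVA and
# FSA::dunnTest() after Kruskal-Wallis.
```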
In situ parameters and species identification
The physical and chemical analysis of the pool from which the sponge samples were extracted is presented in Table 2. The water from the basin was brackish and its pH was relatively basic, with reducing conditions throughout the pond. Moreover, although exposed to some illumination, the overall light intensity at the surface was low, more typical of penumbra conditions (similar to those to which the sponges were subjected in the luminosity experiment).
The six sponges from the pool were distributed on the surface of the limestone walls, adopting encrusting shapes. Morphological analysis of the spicules verified that the species present in the basin was E. fluviatilis. The megasclere oxeas had an average length of 352 ± 71 μm and a thickness of 13.8 ± 1.6 μm. The specific character that distinguishes this species, however, is its birotulate gemmuloscleres, which presented 15 rays per rotule in addition to a spiniform prolongation on the trunk (Evans and Montagnes 2019). Their average length was 58.6 ± 2.1 μm, with a diameter of 25.4 ± 5.1 μm.
Effect of alkalinity
A strong positive correlation was observed between alkalinity and total sponge growth. This trend was apparent not only in the total growth of the gemmules, but also throughout the entire progression of the experiment (Fig. 3).
The group subjected to the high-alkalinity treatment showed a significantly greater increase in size than specimens from the other treatments (F(3, 103) = 8.996, p < 0.001), reaching an average value of 1.03 ± 0.49 mm². The medium (2.5 mEq L⁻¹) and moderate (4 mEq L⁻¹) alkalinity groups did not differ significantly in final surface area (0.88 ± 0.51 mm² and 0.91 ± 0.54 mm², respectively); in fact, their growth progression followed a practically identical trend throughout the experiment. The low-alkalinity treatment (1 mEq L⁻¹) maintained a comparatively reduced growth rate relative to the other experimental groups, with an average final area of 0.73 ± 0.33 mm².
When the daily growth rates of the specimens were examined, the high-alkalinity treatment again showed the most pronounced growth relative to the other groups (F(3, 101) = 9.115, p < 0.001), up to 0.08 ± 0.03 mm² per day. No statistically significant differences were observed between the values of this rate in the other treatments.
Fig. 3 a Progression of the average projected area (vertical bars indicate standard error) among the different alkalinity treatments. b Comparison between the average growth rates of the different treatments. * Indicates significant differences with respect to the other alkalinity treatments (Welch's ANOVA test, p < 0.001). Values are expressed as mean ± S.E.M.
Light effect
As evidenced by the epifluorescence observations (Fig. 4), only sponges from the light-exposed groups showed Chlorella-like cells attached to their tissue; none of the specimens subjected to the dark treatment harbored these photobionts.
At the same time, it is worth noting the difference in symbiont distribution between sponge groups depending on the light intensity to which they were exposed. While those treated at 75 μE m⁻² s⁻¹ presented a more homogeneous organization of Chlorella-like cells throughout their structure, specimens subjected to dim light (5.75 μE m⁻² s⁻¹) showed a greater tendency toward aggregation at the periphery of the gemmule, as well as along the outer perimeter of the sponge (Fig. 4). Furthermore, symbiont density was significantly greater in sponges subjected to the higher light intensity than in those exposed to penumbra (p < 0.005) (Fig. 5). Although a minority compared to the sponge-associated forms, many free Chlorella-like cells were also evident in the medium in both light treatments, in contrast to the dark treatment. Regarding growth parameters, a larger final area was found in the light treatments, whether high intensity or penumbra (χ²(2) = 17.46, p < 0.001), compared to sponges incubated in the dark. The differences in size between the two groups of light-exposed individuals, with final surface areas of 1.77 ± 0.89 mm² and 1.21 ± 0.61 mm², respectively, are attributable to differences in gemmule surface area at the beginning of the experiment. The sponges subjected to high light intensity and to penumbra conditions showed parallel growth trends (Fig. 6). In contrast, the individuals not exposed to light initially decreased in size, recovering some growth capacity over the course of the experiment, although at a lower rate than the other two groups. These data are corroborated by the analysis of daily growth, which reveals a superior capacity for development under light exposure (χ²(2) = 49.81, p < 0.001).
Fig. 6 a Progression of the average projected area among the different light-exposure treatments after inoculation of the symbionts. b Comparison between the average growth rates of the different treatments. * Indicates significant differences with respect to the other light treatments (p < 0.001). Values are expressed as mean ± S.E.M.
Discussion
Throughout their evolution toward the freshwater environment, sponges of the family Spongillidae have followed adaptive paths distinct from those of other representatives of their phylum, such as their ability to gemmulate in response to environmental adversity (Cáceres 1997). Further understanding of the factors underlying this peculiar phenomenon would make it possible to determine the ecological importance of modulating this quiescent state not only for these animals individually, but also for the holobiont they comprise (Clark et al. 2021).
Ephydatia fluviatilis, the model species used in this study, although recorded in both lentic and lotic environments (Li et al. 2018), is more frequently found in lotic ecosystems (Waterston and Lyster 1979; Didžiulis 2012; Evans and Montagnes 2019), such as the Comafreda stream, where samples were collected and where the occurrence of this taxon is documented here for the first time. This also constitutes the second record of the species in the freshwater ecosystems of the Balearic archipelago (Travesset 1991). A higher prevalence and growth capacity of E. fluviatilis has been suggested in reducing environments (i.e., of negative redox potential) (Evans and Montagnes 2019), conditions that were present in the sampling area, with an average value of −61.9 mV pH⁻¹ at pH 8.1.
Alkalinity is a chemical characteristic of the waters of the Balearic archipelago which, in the case of the inland water bodies of Mallorca, ranges between 1.18 and 5.30 mEq L⁻¹ (Moyà and Ramón 1981). The results show a positive correlation between this parameter and sponge development, as high water alkalinity (5 mEq L⁻¹) favors faster sponge growth.
Our data not only extend the highest tolerable alkalinity previously proposed for this species, 4 mEq L⁻¹ or 4.6 mEq L⁻¹ according to Poirrier (1974) and Old (1932), respectively, but also indicate the importance of higher alkalinity values for E. fluviatilis, as they enhance its growth. This factor could explain the ecological preference of this species for carbonate-rich river bodies (Pisera and Sáez 2003). In addition to alkalinity, the high content of calcium cations in the stream, owing to its karstic nature, is probably beneficial for these sponges (Økland and Økland 1996), as calcium is an essential element in membrane permeability (Belas et al. 1989).
Ion incorporation is essential for freshwater sponges, as it allows the maintenance of internal homeostasis (Senatore et al. 2016). Species of the family Spongillidae have also been shown to exhibit a highly selective capacity for ionic regulation of their body, comparable to that observed in vertebrate epithelia (Adams et al. 2010). Given such regulatory capacity, increased alkalinity may act as a modulator of salt uptake, allowing not only more efficient buffering of the pH of the medium, but also higher solubility of calcium cations (Boyd et al. 2016). The latter factor, as indicated, would act as a signal transducer that, among other essential functions, would allow efficient modulation of the flow of nutrients into the sponge (Elliot and Leys 2010; Leys and Hill 2012). Future studies on the physiological effects of ionic concentration could contribute to a better understanding of the overall importance of alkalinity in these metazoans.
Apart from the effect of alkalinity on sponge growth, the role of symbiotic interaction was evaluated through exogenous inoculation of the chlorophytes present on the surface of the original sponge tissue. The importance of symbiosis with photosynthetic eukaryotes in members of the family Spongillidae has been repeatedly questioned (Wilkinson 1980; Sitte and Eschbach 1992), with such symbioses being restricted to the classes Trebouxiophyceae and Chlorophyceae (Chlorophyta) and Eustigmatophyceae (Ochrophyta). In species of the genus Ephydatia, only photobionts from the division Chlorophyta, more specifically Chlorella-like cells, have been documented (Pröschold and Darienko 2020), which could indicate the beginning of a more restricted coevolution, as has been suggested (Geraghty et al. 2021).
Although greater growth under higher light intensity was not empirically verified, both a different density and an unequal qualitative distribution of symbionts between the two types of holobiont were observed. Under exposure to peak light of 75 μE m⁻² s⁻¹, symbiont algae were more homogeneously distributed throughout the sponge tissue. In contrast, although there was also an aggregation of Chlorella-like cells around the gemmules under penumbra conditions, a greater arrangement was observed at the periphery of the sponge. In specimens treated under dark conditions, no such endosymbionts were observed inside the tissue.
The uneven distribution of Chlorella-like cells in the sponge tissue is explained by the establishment of the symbiosis itself. The incorporation of the algae occurs, in the first instance, by filtration; in fact, several experiments have shown that the uptake of symbionts by sponges can become effective in about 4 h (Imsiecke 1993; Hall et al. 2021). After this, algal cells are retained in the collars of the choanocytes, as well as inside the pinacocytes of the outer layer of the animal (Saller 1989). It has been suggested that it is at this point that molecular recognition by both participants in the symbiosis occurs. On the sponge's side, there is overexpression of certain genes involved in immune recognition (Geraghty et al. 2021), as well as genes associated with oxidative stress, together with different permeases that mediate the transport of photoassimilates to the host (Grozdanov and Hentschel 2007). This has previously been verified in the establishment of symbiosis between these chlorophytes and other potential hosts, such as Paramecium spp. (Kodama et al. 2014) or Hydra viridissima (Ishikawa et al. 2016). In turn, the ability of Chlorella sp. to secrete the glucose it produces directly into host cells has also been demonstrated (Fischer et al. 1989).
The transmission of the symbionts occurs by transfer of vacuoles between the different cells. These vesicles, known as perialgal vacuoles, contain a single symbiont cell (Reisser and Wiessner 1984), which is able to divide autonomously. It is considered that, within a period of 6 h, all the cells of the sponge can present these vacuoles, especially in the mesohyl (Saller 1990). There is also another type of vesicle that can contain the algae, although these have a digestive function. Their activity comes into play either when the symbionts themselves are dysfunctional, or in situations where these organisms do not provide a benefit to the host (Ereskovsky et al. 2022). It is speculated that this is the reason for the absence of symbionts in sponges subjected to dark conditions: unable to carry out photosynthesis, they represent an energetic expense for their host and are therefore phagocytosed. This would also explain the scarcity of free Chlorella-like cells in the dark treatments, as they would have been ingested by the sponge as the only available carbon source. Thus, digestion of the symbionts is the host's alternative in situations of metabolic stress, when it can no longer take advantage of the symbiosis. The mechanism underlying the sponge's discrimination of photobiont utility in different situations remains unknown.
It has been shown that the distribution of symbionts in E. fluviatilis is not arbitrary: in general, amoebocytes carrying Chlorella-like cells are distributed in the cortical region of the animal (Gaino et al. 2003). This distribution would allow greater light capture, thus favoring the growth of the holobiont. Our results also indicate a strong presence of Chlorella-like cells in the interior of the sponge, surrounding the gemmules, as well as at the periphery of the animal, a pattern more evident in sponges subjected to penumbra conditions. The reason for the higher density of symbionts in individuals exposed to higher light intensity is probably multifactorial. On the one hand, given the autonomous character of Chlorella-like cell replication with respect to the host (Saller 1990), exposure to their optimum light intensity favors their growth compared to penumbra conditions (Metsoviti et al. 2019). In turn, the greater aggregation of symbionts under maximum irradiance could be analogous to the light-dependent redistribution of plant chloroplasts (Maai et al. 2020). This would also explain the greater tendency of photobionts to spread toward the periphery in sponges treated under penumbra conditions, allowing maximum use of the available light (Wada 2013). It is also suggested that the increased aggregation of Chlorella-like cells around the gemmule could be a sponge-mediated mechanism of protection against damage from excessive light exposure, as documented in Paramecium bursaria (Summerer et al. 2009).
The results indicate a higher growth rate in the individuals exposed to light, which were also the only ones presenting associated photobionts in their tissue. Regardless of the light intensity, whether optimal for the symbiont (75 μE m⁻² s⁻¹) or penumbra conditions (5.75 μE m⁻² s⁻¹), the establishment of the link between E. fluviatilis and Chlorella-like symbionts could be verified. Therefore, based on the data obtained in this study, the establishment of the holobiont only takes place when the sponges are illuminated, consistent with the observations of Wilkinson (1980). Thus, under light exposure, the symbiosis with Chlorella sp. is particularly important for E. fluviatilis, since the translocation of photoassimilates by this alga supplied all the metabolic demands of its host (no external carbon source was provided during the whole experiment), thereby stimulating its growth. It is estimated that the contribution of glucose from the photobiont can range from 9 to 17% of the fixed carbon (Pröschold and Darienko 2020). Such nutrient translocation efficiency is comparable to values observed in interactions of Chlorella spp. with other species of the family Spongillidae, such as Spongilla lacustris (Fischer et al. 1989). This underscores the evolutionary importance of this symbiotic interaction for this family of sponges, as suggested by numerous studies (Jensen and Pedersen 1994; O'Brien et al. 2019; Hall et al. 2021).
However, compared to other endosymbioses between Chlorella-like organisms and other eukaryotes, the interaction with E. fluviatilis appears less efficient (Wilkinson 1984). For example, the rate of photoassimilate transfer between this genus of trebouxiophytes and Hydra viridissima can range from 25 to 30% (Cook 1983). The latter relationship also shows greater control of the symbiont by the cnidarian host (Bosch 2012), since the establishment of symbiosis is obligatory for Chlorella individuals coexisting with H. viridissima. Notably, this hydrozoan expresses a large diversity of genes exclusively in this interaction (Hamada et al. 2018), revealing a higher degree of coevolution between the two species compared to the symbiosis between Chlorella and freshwater sponges (Ereskovsky et al. 2022).
Despite this apparently lower symbiotic efficiency, the ecology of E. fluviatilis must be considered to correctly understand the establishment of this relationship. This species, like many other freshwater sponges, shows a clear sciaphilic tendency (De Santo and Fell 1996). This preferential distribution is not arbitrary, since gemmule formation has been shown to take place preferentially in dark conditions (Brønsted and Brønsted 1953), which guarantees survival by cryptobiosis in situations of environmental stress. The absence of light, on the other hand, limits the photosynthetic capacity of the symbionts. Although this is not a problem for other species in which such endosymbiotic relationships have been documented, such as hydrozoans of the genus Hydra, it should be remembered that these are capable of autonomous movement, an ability absent in sponges. The data obtained in this experiment suggest that the symbiosis between E. fluviatilis and Chlorella spp. is established preferentially when the host can obtain a direct benefit (glucose supply). However, significant growth was also observed in the absence of exogenous symbiont inoculation during the alkalinity experiment, which experimentally corroborates the observations carried out in the field (Wilkinson 1980; Evans and Montagnes 2019). All these data support the view that the establishment of the symbiosis is not essential for the growth of E. fluviatilis when nutrients are available. Under light exposure, however, it becomes an extremely important factor for the sponges, as it can mediate the transition from a heterotrophic filter-feeding lifestyle to one dependent on symbiont-mediated photosynthesis.
Final considerations
The results of this study reveal the important role of alkalinity in sponge growth. Consistent with the initial hypothesis, a positive association between this physicochemical parameter and the development of E. fluviatilis was observed, which explains the ecological tendency of this metazoan to grow in carbonate-rich waters.
On the other hand, the study of the effect of light intensity on the holobiont revealed a differential effect on growth. Although no evidence was found of a higher rate of development under higher light intensity, an unequal distribution of the symbionts was determined across exposure intensities. This, together with the absence of growth in the specimens incubated in the dark, indicates an important regulatory role of light not only in the arrangement of the symbionts, but also in the capacity of the sponges to take advantage of them. In conclusion, the data obtained support the relative importance of the aposymbiosis phenomenon in E. fluviatilis, as well as the role of the metazoan's discriminatory capacity in the establishment of an endosymbiotic association as peculiar as that between sponges and chlorophytes.
Acknowledgements
A. Sureda and S. Tejada were supported by the Spanish Government, Institute of Health Carlos III (Project CIBEROBN CB12/03/30038). We would like to thank Josep Homs from Guies de Tramuntana for his help with the field sampling. S. Pinya was supported by the project Biodibal, under the umbrella of the agreement between the University of the Balearic Islands and Red Eléctrica de España.
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Conflicts of interest
The authors declare no known competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2023,
"sha1": "893a67a4b9c61d2410de521a25ef2867b9f49f2a",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10452-023-10014-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "68e9c373b696a30a4fd83f97329e7c44a5bbd651",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": []
} |
Cluster-based psychological phenotyping and differences in anxiety treatment outcomes
The identification of markers of mental health illness treatment response and susceptibility using personalized medicine has been elusive. In the context of psychological treatment for anxiety, we conducted two studies to identify psychological phenotypes with distinct characteristics related to: psychological intervention modalities (mindfulness training/awareness), mechanism of action (worry), and clinical outcome (generalized anxiety disorder scale scores). We also examined whether phenotype membership interacted with treatment response (Study 1) and mental health illness diagnosis (Studies 1–2). Interoceptive awareness, emotional reactivity, worry, and anxiety were assessed at baseline in treatment-seeking individuals (Study 1, n = 63) and from the general population (Study 2, n = 14,010). In Study 1, participants were randomly assigned to an app-delivered mindfulness program for anxiety for two months or treatment as usual. Changes in anxiety were assessed 1 and 2 months post-treatment initiation. In studies 1–2, three phenotypes were identified: ‘severely anxious with body/emotional awareness’ (cluster 1), ‘body/emotionally unaware’ (cluster 2), and ‘non-reactive and aware’ (cluster 3). Study 1’s results revealed a significant treatment response relative to controls (ps < 0.001) for clusters 1 and 3, but not for cluster 2. Chi-square analyses revealed that phenotypes exhibited significantly different proportions of participants with mental health diagnoses (studies 1–2). These results suggest that psychological phenotyping can bring the application of personalized medicine into clinical settings. Registry name and URL: Developing a novel digital therapeutic for the treatment of generalized anxiety disorder https://clinicaltrials.gov/ct2/show/NCT03683472?term=judson+brewer&draw=1&rank=1. Trial registration: Registered at clinicaltrials.gov (NCT03683472) on 25/09/2018.
The goal of personalizing medicine-matching treatment at the level of the individual-to facilitate the diagnosis and treatment of diseases has challenged researchers and clinicians for decades 1 . This approach is based on the notion that inter-individual variability in various attributes (e.g. genetic, brain and physiological function, environmental exposure, behavioral and personality profile) is associated with heterogeneity in disease processes (disease progression, underlying factors/mechanisms) and treatment responses 1 . Personalized medicine began showing promise from genetic studies in which responses to drug treatments differed depending on an individual's specific genetic profile [2][3][4] . However, with respect to personalizing medicine in mental health (e.g. psychiatry), there is a need for more research directed toward personalizing psychological interventions to optimize treatment selection and efficacy for individual patients 5 . Ideal personalized medicine approaches would be low cost, easily implemented and scalable at a population level.
Particularly with respect to anxiety disorders, there is a need for establishing robust markers of illness susceptibility and treatment responses so that these findings can be applied to key clinical decision making processes (e.g. optimal treatment selection for individual patients at their first clinic visit) 6 . Individuals with anxiety disorders, such as generalized anxiety disorder (GAD), display varying responses to pharmacological agents and psychotherapeutic approaches: response rates to anti-anxiety medications vary from 30 to 68% 7 , while 46% of patients showed clinical improvement from psychotherapeutic treatment 8 . In addition, other recent reviews of the literature discuss variable response rates to different pharmacological agents and/or psychological treatment for anxiety disorders 9,10 . A recent meta-analysis also showed individual variability to psychological anxiety treatment outcomes, such that post-treatment symptom alleviation was significantly enhanced for patients exhibiting early response to treatment (in the first 4 weeks) vs those not showing such early improvements 11 .
There is currently no systematic process for the selection of anxiety treatments on the basis of empirically supported markers of intervention success 6 . In clinical practice, treatment selection is guided by factors such as cost/benefit evaluation of clinical vs. side effects, or the personal experience of the healthcare practitioner 6 .
Methods to identify subgroups with shared characteristics that are associated with treatment outcomes are being explored 12 . For example, unsupervised machine learning methods can be used to detect clusters of participants that share similar patterns or combinations of various characteristics 13 . Subgroups sharing common patterns of features can then be compared based on their treatment outcomes. However, studies using machine learning approaches to identify markers that relate to psychological clinical treatment responses are scarce, but proof-of-concept reports are emerging 14 .
A recent study used clustering methods to determine phenotypes of depression treatment resistance based on the most salient self-reported clinical features (e.g. socio-demographic characteristics, symptom severity), with models showing acceptable ranges of predictive accuracy over treatment outcomes 14 . These results suggest that "psychological phenotyping" may be a useful step forward in the implementation of precision medicine, and have the advantages of low-cost/high accessibility to researchers and clinicians as compared to genetic or neuroimaging biomarkers. The identification of psychological phenotypes could also guide prospective investigations of underlying genetic variants or neurobiological processes involved to uncover bio-/neuromarkers. Self-reported psychological assessments can also be rapidly collected and analyzed in clinical settings, thereby rendering them easily implementable in clinical practice, and scalable such that access can be provided at the population level.
Three key components may be needed to optimally determine an individual's response to a psychological intervention: (1) traits/skills the intervention is aimed at developing, (2) mechanistic factors that are related to the disease process, and (3) clinical outcomes.
With respect to treatment-related characteristics, mindfulness training (MT) is increasingly being studied as a treatment for several mental health disorders 15,16 . Mindfulness-based approaches are aimed at developing awareness of present-moment experiences with acceptance 17 . Such approaches foster non-reactivity or 'being with' unpleasant states, rather than reacting by carrying out a behavior to distract from or avoid such states (e.g. smoking, emotional eating, worrying) 18 , with demonstrated efficacy in reducing anxiety symptoms. However, many of these effects are based on studies lacking appropriate control groups, underscoring the need for further study [for reviews, see 15,16 ].
To more precisely identify psychological markers relevant to treatment outcomes, it would be informative to include not only characteristics relevant to the disordered process under study (anxiety), but also mechanistic aspects involved in its formation and maintenance that are targeted by the psychological intervention. One mechanism identified in the development and maintenance of anxious symptoms is the particular way people respond to, manage, or relate to their anxiety 19 . As such, worry, a central feature of anxiety 20 , can be described as being maintained by the relief it provides in avoiding deep-seated core emotions that could be perceived as threatening 19 . This immediate benefit from experiential avoidance may reinforce and maintain further engagement in worry 19 . To disentangle the maladaptive cycle of worry and anxiety, Unwinding Anxiety (UA) is a mindfulness training program aimed at building skills to develop awareness of these processes. In developing more mindful awareness of these processes, people can learn to be and work with unpleasant states 18,21,22 , rather than avoid or react to them and perpetuate the habitual cycle of worry 15,16 . A previous study showed that the reductions in GAD-7 scores produced by MT for anxiety were mediated by reductions in worry, supporting worry reduction as a mechanistic target for the impact of MT on anxiety.
Recently, app-delivered MT has shown preliminary efficacy in symptom reduction: single-arm trials with anxious physicians 23 and a randomized controlled trial from our laboratory conducted by Roy et al., which included individuals with GAD, showed 57% and 67% reductions in clinical symptoms of anxiety, respectively (Generalized Anxiety Disorder-7 scores, GAD-7) 24 . In the latter study, 36% of subjects in the experimental group did not achieve remission, suggesting inter-individual variability in treatment outcomes. Identifying baseline characteristics of participants who do not respond as robustly to treatment could help match individuals to MT vs. other therapeutic modalities, saving time and cost while potentially improving outcomes.
To date, neuroimaging, genetic, and behavioral findings on anxiety disorders have not yet translated into robust and established markers of anxiety and its treatment outcomes to guide clinical recommendations 6 . Here, we conducted two studies, in which we adopted a data-driven, unsupervised learning approach to identify psychological phenotypes that may interact with anxiety treatment matching and outcomes. In Study 1, we used a clustering approach to detect the presence of subgroups of participants with shared psychological attributes and whether group membership could interact with treatment outcomes in a randomized controlled trial (N = 63). We included measures that captured core aspects of treatment (non-reactivity and interoceptive awareness) 25,26 , mechanism (Penn-State Worry Questionnaire) 20,27 , and clinically-relevant outcomes (GAD-7) 28 . In Study 2, we examined whether the psychological phenotypic clusters that were discovered in Study 1 could be identified in the general population (N = 14,010). We then examined whether sub-group membership could be associated with mental health diagnoses (Studies 1 and 2).
Study 1: methods and materials
The overall effects of app-based MT for anxiety on anxiety outcomes have been reported elsewhere (NCT03683472, registered at clinicaltrials.gov on 25/09/2018 24 ). The trial protocol has not been previously published. Those results 24 are distinct from the ones presented here, which report on the impact of cluster membership on responses to anxiety treatment.
Intervention. The app-delivered MT program (Unwinding Anxiety, 'UA') used in this study, as well as the experimental procedure, are fully described in the Supplementary Materials section.
Data analysis. Cluster analyses.
A total of 63 participants were included in these analyses. Following previous recommendations 29 , we first used a hierarchical agglomerative method (after reducing the data using principal component analysis (PCA)) to determine the number of clusters present within the data. This was followed by an iterative partitioning approach (k-means) to minimize within-cluster distances and maximize between-cluster distances (see the Supplementary Materials section for the full cluster analysis procedure).
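A minimal R sketch of this two-step pipeline (PCA reduction, Ward agglomeration to choose k, then k-means refinement with a silhouette check) is given below. The input matrix X, the number of retained components, and the nstart value are assumptions for illustration, not the authors' settings.

```r
# Two-step clustering sketch. X is assumed to be a participants x items
# matrix of questionnaire responses.
library(cluster)                       # for silhouette()
pcs    <- prcomp(X, center = TRUE, scale. = TRUE)
scores <- pcs$x[, 1:5]                 # PCA-reduced data (5 components assumed)
# Step 1: hierarchical agglomerative clustering (Ward) to choose k
hc <- hclust(dist(scores), method = "ward.D2")
plot(hc)                               # inspect the dendrogram for cluster count
# Step 2: k-means with k = 3 to refine assignments
km  <- kmeans(scores, centers = 3, nstart = 25)
sil <- silhouette(km$cluster, dist(scores))
mean(sil[, "sil_width"])               # average silhouette width (~0.50 reported)
```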
Interaction between subgroup membership on responses to MT. Sixty-one participants were included in these analyses due to two participants not completing follow-up assessment surveys. To determine whether psychological phenotype significantly interacted with treatment responses to MT, a mixed measures 3-way ANOVA using TIME (baseline, 1, 2 months post-intervention) as a repeated measures factor, as well as GROUP (TAU + MT, TAU) and CLUSTER as between-subjects factors was conducted on GAD-7 scores. Follow-up analysis (within cluster GROUP X TIME mixed measures ANOVA and contrasts) were conducted to break down a TIME X GROUP X CLUSTER interaction effect, where applicable. To examine whether participants in different clusters may have exhibited different treatment responses due to reduced levels of engagement in the intervention, a TIME X CLUSTER ANOVA was conducted on engagement measures in the TAU + MT participants, for which levels of TIME included were one and two months post-intervention (since no module had been completed at baseline). These data analyses are described in fuller detail in the Supplementary Materials section.
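A hedged base-R sketch of the 3-way mixed design described above follows; the long-format data frame and its column names are hypothetical, and packages such as afex would give equivalent results with sphericity corrections.

```r
# TIME x GROUP x CLUSTER mixed ANOVA in base R. `long` is a hypothetical
# long-format data frame, one row per subject x timepoint
# (columns: subject, time, group, cluster, gad7); time is within-subjects.
long$subject <- factor(long$subject)
long$time    <- factor(long$time, levels = c("baseline", "1mo", "2mo"))
fit <- aov(gad7 ~ time * group * cluster + Error(subject / time), data = long)
summary(fit)
# Within-cluster follow-up (e.g. cluster 1 only):
summary(aov(gad7 ~ time * group + Error(subject / time),
            data = subset(long, cluster == 1)))
```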
Association between cluster membership and anxiety disorder or depression diagnosis.
To determine the presence of an association between cluster membership and anxiety disorder or depression diagnosis (panic disorder, OCD, GAD, social anxiety disorder, PTSD, agoraphobia, depression), we conducted chi-square analyses using a separate model for each type of diagnosis and a Bonferroni-corrected p value of 0.007 to account for multiple tests.
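The per-diagnosis chi-square models with the Bonferroni threshold (0.05/7 ≈ 0.007) map onto a simple loop; a sketch is shown below, assuming a data frame with a cluster factor and one 0/1 indicator column per diagnosis (column names are hypothetical).

```r
# Diagnosis-by-cluster chi-square tests, one model per diagnosis.
diagnoses <- c("panic", "ocd", "gad", "social_anxiety",
               "ptsd", "agoraphobia", "depression")
alpha <- 0.05 / length(diagnoses)   # Bonferroni threshold, 0.05/7 ~ 0.007
for (dx in diagnoses) {
  tst <- chisq.test(table(df$cluster, df[[dx]]))
  cat(dx, ": X-squared =", round(unname(tst$statistic), 2),
      ", p =", signif(tst$p.value, 3),
      ", significant =", tst$p.value < alpha, "\n")
}
```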
Study 2: methods and materials
Participants. The sample consisted of 14,260 participants. Potential participants were recruited by Sharecare Inc., utilizing Sharecare's QualityHealth recruitment funnel, which individuals opt into based on health conditions of interest to them. Email recruitment invitations were sent to those indicating interest in receiving health information specific to anxiety. Participants were considered eligible upon meeting the following inclusion criteria: (1) were at least 18 years of age; (2) lived in the United States or Canada. Two hundred and fifty subjects were not retained because they provided data in textboxes that were irrelevant to the question posed (e.g. checking 'other' in the gender category and providing the name of a city in the corresponding textbox); 14,010 participants were retained for inclusion in data analysis. These participants were on average 55 years old, with 3565 males, 10,435 females, and 10 individuals who selected 'other'. Due to the de-identified nature of this study's data (online survey completion), this study was deemed exempt from Brown University Institutional Review Board oversight and from obtaining informed consent, in accordance with federal regulations (US Department of Health and Human Services regulations for the protection of human subjects, 45 CFR 46.104). The experimental procedure is fully described in the Supplementary Materials section.
Self-reported mental health diagnosis. Participants indicated whether they had been diagnosed with any of the following mental health disorders: anxiety, depression, bipolar disorder, schizophrenia/schizoaffective disorder, or another mental health disorder.
Data analysis. Cluster analysis procedures are fully described in the Supplementary Materials section.
Association between cluster membership and mental health diagnosis. Follow-up chi-square analyses were conducted to determine whether a given cluster had a greater or lower probability of being associated with a mental health diagnosis label (a separate model was conducted for each of the four mental health conditions). A Bonferroni-corrected p value of 0.0125 was used to account for the four models performed.
Between-study cluster comparison. To determine the similarity of clusters across studies, within-cluster average z-scores for each item were computed, and the percentage of items whose z-scores had the same sign in both studies (i.e. the same direction relative to the item's average across the entire study sample) was obtained.
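Both similarity metrics reduce to one-line computations; a sketch follows, where z1 and z2 are assumed to be vectors of within-cluster average item z-scores from Study 1 and Study 2, respectively.

```r
# Between-study cluster comparison: sign agreement and correlation of
# within-cluster average item z-scores (z1, z2 are hypothetical vectors).
cluster_similarity <- function(z1, z2) {
  c(pct_same_sign = 100 * mean(sign(z1) == sign(z2)),  # direction agreement
    correlation   = cor(z1, z2))
}
# e.g. cluster_similarity(z_study1_c3, z_study2_c3)
# reported values for cluster 3: ~98.4% sign agreement and r ~ 0.92
```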
Cluster analyses determine the presence of 3 psychological phenotypes.
To determine the presence of clusters in self-report questionnaire data, we used a hierarchical agglomerative approach (Ward's method, see Supplementary Materials section for complete data analysis procedure details) to determine the number of clusters present. Results from the hierarchical agglomerative clustering approach revealed the presence of 3 clusters (Fig. 1). A k-means iterative partitioning approach was then performed to define clusters using k = 3 centroids, in order to maximize between-cluster distances and minimize within-cluster distance, yielding silhouette values of 0.503.
To determine items which were significantly distinct between clusters, we conducted one-way ANOVAs with CLUSTER as a between-subjects factor on each questionnaire item's z-score. Only significantly distinct items are included in cluster description. Figure 2A illustrates questionnaire item z-scores for each cluster. The clusters can be described in terms of intervention (FFMQ, MAIA), mechanism (PSWQ), and outcome (GAD-7) related features (see Table 1).
Cluster 1 (n = 13) demonstrated the highest scores among all phenotypes on mechanistic and outcome-related variables, and below-average scores on mindfulness-related intervention features (FFMQ). Awareness-related intervention features were mixed (MAIA subscales): above-average scores were observed on the majority of features assessing the ability to notice body sensations, emotional awareness, attention regulation, self-regulation, and listening to the body for insight. In contrast, we found below-average scores on a subset of items assessing tendencies to worry about body discomfort and to feel distrustful of or unsafe in one's body. Bringing these features together, cluster 1 can be summarized as 'severely anxious with body/emotional awareness'. Cluster 2 (n = 21) demonstrated scores at or above average on mechanism and outcome-related features, and average scores on mindfulness-related intervention features (FFMQ). This cluster also demonstrated below-average scores, the lowest of all phenotypes, on the majority of awareness-related intervention features (MAIA). Cluster 2 can be summarized as 'body/emotionally unaware'. Cluster 3 (n = 29) exhibited the lowest scores on mechanism and outcome-related features, yet the highest scores on intervention-related features of all phenotypes. Cluster 3 can be summarized as 'aware and non-reactive'.
Cluster membership is unrelated to demographic variables. To determine any association between cluster membership and demographic variables (age, sex, race, education, income, work and marital status), we conducted chi-square analyses on the categorical variables, and a one-way ANOVA with CLUSTER as a between-subjects factor was conducted on age. No significant association was found between cluster membership and sex, race, education, income, work, or marital status.
Fig. 2. Means are shown with standard error, and asterisks denote significant group differences at each assessment timepoint (relative to baseline) as follow-up contrasts from the mixed measures ANOVA. **p < 0.001; ns: non-significant.
Table 1. Study 1 cluster description of individual questionnaire items. Questionnaires' subscales are listed in parentheses when applicable. Items significantly contributing to cluster formation are included (see results from the one-way ANOVA on questionnaire item z-scores with Cluster as a between-group factor, Table S2). 95% confidence intervals for each item's within-cluster average were computed: upward arrows indicate that the within-cluster z-score average for the scale/subscale's items was positive and that > 50% of items had confidence intervals that exceeded the group mean (full upward arrows) or did not exceed the group mean (light upward arrows); downward arrows indicate that the within-cluster z-score average was negative and that > 50% of items had confidence intervals that preceded the group mean (full downward arrows) or did not precede the group mean (light downward arrows). PSWQ: Penn State Worry Questionnaire; GAD-7: Generalized Anxiety Disorder 7-item Scale; FFMQ: Non-Reactivity subscale of the Five Facet Mindfulness Questionnaire; MAIA: Multidimensional Assessment of Interoceptive Awareness. ## Indicates that the direction of > 50% of the scale/subscale's items relative to the group mean was distinct between Studies 1 and 2.
(F(1.53, 29.07) = 2.11, p = 0.149, ηp² = 0.10).
Post-hoc within-subjects comparisons (alpha corrected to 0.008 for 6 contrasts) were performed within each cluster by comparing GAD-7 scores between each assessment timepoint (30 and 60 days post-treatment initiation) and baseline for treatment group participants relative to controls. For clusters 1 and 3, these revealed significant contrasts between anxiety scores at the 30-day assessment and baseline, as well as between the 60-day assessment and baseline, for subjects in TAU + MT vs TAU (cluster 1: p < 0.001 for both contrasts; cluster 3: p = 0.005 for 30 days vs baseline, p < 0.001 for 60 days vs baseline). For cluster 2, no significant differences were revealed by the comparisons between anxiety scores at the 30-day assessment and baseline (p = 0.065) for the treatment relative to the control group, or between the 60-day assessment and baseline (p = 0.215). These data are illustrated in Fig. 2B.
To determine whether group mean GAD-7 scores differed between clusters at baseline, t-tests (alpha adjusted to 0.017 for 3 between-group comparisons) comparing baseline GAD-7 scores between pairs of clusters were conducted. These revealed that cluster 1 was higher in baseline anxiety than clusters 2 and 3 (ps < 0.01), while clusters 2 and 3 did not differ significantly from each other after adjusting for multiple comparisons (p = 0.044).
Finally, the TIME (1 month and 2 months post-treatment initiation) X CLUSTER ANOVA on the number of mindfulness program modules completed revealed no significant main effect of CLUSTER (F(2, 25) = 0.50, p = 0.611, ηp² = 0.04) or TIME X CLUSTER interaction (F(2, 25) = 0.27, p = 0.767, ηp² = 0.02), indicating no significant differences in engagement between clusters across assessment times in the treatment group. Table 2 presents means and standard deviations for each cluster by GROUP and assessment time for anxiety and engagement outcome measures.
Table 2. Anxiety (GAD-7 scores) and engagement (number of modules completed) by treatment group and cluster at each assessment timepoint. TAU + MT: treatment as usual + mindfulness training; TAU: treatment as usual; GAD-7: Generalized Anxiety Disorder 7-item scale. **p < 0.001, follow-up contrasts from the mixed measures ANOVA.
Study 2: results
Cluster analyses determine the presence of 3 psychological phenotypes. As in Study 1, a hierarchical agglomerative approach (Ward's method) was used to determine the presence of clusters in the self-report questionnaire data, revealing the presence of 3 clusters (Fig. 3). As in Study 1, we then used a k-means iterative partitioning approach to define clusters using k = 3 centroids (mean silhouette value = 0.60, indicative of strong clustering 31 ). Figure 4 illustrates questionnaire item z-scores for each cluster.
Mechanism (PSWQ), outcome (GAD-7), and intervention (FFMQ, MAIA) related features are presented in terms of within-cluster averaged z-scores relative to the entire sample mean (see Table 3 for a summarized description).
Cluster 1 (n = 5629) scored the highest of all phenotypes on outcome- and mechanism-related features. These participants exhibited below-average scores on the majority of intervention-related features, and showed a mixed pattern of scores on interoceptive awareness features: above-average scores were observed for the MAIA subscales assessing the ability to notice body sensations and emotional awareness, whereas below-average scores were observed for all other interoceptive awareness subscales ('not distract', 'not worry', attention regulation, self-regulation, body listening, trust). As in Study 1, this cluster was labeled 'severely anxious with body/emotional awareness'. Within-cluster average z-scores indicated a 77.4% similarity with cluster 1 from Study 1 (correlation r = 0.78 between the studies' within-cluster average z-scores).

Cluster 2 (n = 2982) exhibited below-average scores on outcome-related features. These participants scored below average on the majority of mechanism-related features, with the exception of above-average scores on the PSWQ features referring to time-related worry, tendency to worry, difficulty in dismissing worrying thoughts, and useless worry. They also scored below average on the majority of intervention-related (FFMQ, MAIA) items. As in Study 1, this cluster was labeled 'body/emotionally unaware'. Within-cluster average z-scores indicated a 72.6% similarity with cluster 2 from Study 1 (correlation r = 0.67 between the studies' within-cluster average z-scores).
Cluster 3 (n = 5399) exhibited the lowest scores of all phenotypes on outcome- and mechanism-related features, and the highest scores on intervention-related features. As in Study 1, this cluster was labeled 'aware and non-reactive'.
Within-cluster average z-scores indicated a 98.4% similarity with cluster 3 from Study 1 (correlation r = 0.92 between the studies' within-cluster average z-scores). Cluster composition is described in fuller detail in the Supplementary Materials Section, and Table S3 displays the results of the ANOVAs for each questionnaire item as well as z-scores and SE for each item by cluster.

Table 3. Study 2 cluster description of individual questionnaire items. Questionnaires' subscales are listed in parentheses when applicable. 95% confidence intervals for each item's within-cluster average were computed: full upward arrows indicate that the within-cluster z-score average for the scale/subscale's items was positive and that > 50% of items had confidence intervals that exceeded the group mean; full downward arrows indicate that the within-cluster z-score average was negative and that > 50% of items had confidence intervals that fell below the group mean. ## indicates that the direction of > 50% of the scale/subscale's items relative to the group mean was distinct between Studies 1 and 2. PSWQ: Penn State Worry Questionnaire; GAD-7: Generalized Anxiety Disorder 7-item Scale; FFMQ_NR: Non-Reactivity Subscale of the Five Facet Mindfulness Questionnaire; MAIA: Multidimensional Assessment of Interoceptive Awareness.

Cluster membership is associated with self-reported mental health diagnoses. To determine whether cluster membership was associated with mental health diagnoses (depression, anxiety, bipolar disorder, schizophrenia/schizoaffective disorders), we conducted a chi-square analysis. The proportion of individuals reporting an anxiety diagnosis significantly differed between clusters (Pearson chi-square(2) = 2982.2, p < 0.001), with the largest proportion found in cluster 1 (67%), followed by clusters 2 and 3 (21% each). A similar pattern was found for depression, bipolar disorder, and schizophrenia/schizoaffective disorder diagnoses (Table 4, all p's < 0.001).
Cluster membership is associated with demographic variables.
To determine whether cluster membership was associated with demographic variables (age, sex, income, education, living area), we conducted chi-square analyses on the categorical variables and a one-way ANOVA on the continuous variable of age. These revealed a significant association between cluster membership and sex (Pearson chi-square(2) = 365.6, p < 0.001), with the largest proportion of females within cluster 1 (83%), followed by cluster 3 (70%) and cluster 2 (67%). We found a similar pattern of results for the other demographic variables (see Table 4): participants in cluster 1 were more likely to report living in rural areas (46%, vs 38% in clusters 2 and 3), more likely to report lower income (65% earning < $25,000 yearly, vs 56% and 51% in clusters 2 and 3, respectively), and were overall younger (M = 51 ± 11.1 years) than participants in clusters 2 and 3 (all ps < 0.001), who were both on average 58 (± 12) years of age. A significant association between cluster membership and education was also found (see Table 4), such that the proportion of participants with lower education was greatest in cluster 1, followed by clusters 2 and 3 (5% in cluster 1 reporting middle school as their highest education level, vs 4% and 2% in clusters 2 and 3, respectively).

Table 4. Study 2 mental health diagnoses and socio-demographic variables for each cluster. Except for age, which is reported as M(SE), within-cluster % of participants are reported. **p < 0.001. Significant between-cluster age difference, F(2, 14007) = 585.34, p < 0.001. a Pearson chi-square statistics are reported.
Feature reduction.
To determine a reduced number of features preserving the composition of the clusters, we selected the features with the maximal amount of between-cluster variance in Study 2's dataset (chosen for its larger sample size), i.e., those with the largest effect sizes in the one-way ANOVAs on questionnaire item z-scores using cluster as a between-group factor. We then conducted a cluster analysis on the smaller set of features by applying a PCA to reduce the data to 2 dimensions, followed by hierarchical agglomerative clustering and a k-means analysis (the same analysis steps as in Studies 1 and 2). This was conducted on one half, one third, and one quarter of the items within each scale. The solution with the lowest number of features that preserved cluster composition for the majority of features and had the highest mean silhouette value (0.570) was the one retaining one third of the items from each scale (19 items in total), which yielded a comparable mean silhouette value (0.573) when applied to the data from Study 1 (using this reduced set of 19 features identified in Study 2 from between-cluster effect sizes). These items are highlighted in Table S3.
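A hedged sketch of this reduction step follows (again not the authors' code; the per-scale grouping is simplified to a global top-third selection, and the eta-squared helper, data, and labels are our own placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def eta_squared(x, labels):
    """Between-cluster effect size (eta^2) from a one-way ANOVA of one item."""
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_between = sum(
        (labels == g).sum() * (x[labels == g].mean() - grand) ** 2
        for g in np.unique(labels)
    )
    return ss_between / ss_total

# Placeholder data: item z-scores and cluster labels from the full solution.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 57))
labels = rng.integers(0, 3, size=X.shape[0])

# Rank items by between-cluster effect size and keep roughly the top third.
effects = np.array([eta_squared(X[:, j], labels) for j in range(X.shape[1])])
keep = np.argsort(effects)[::-1][:19]

# Re-cluster on the reduced set: PCA to 2 dimensions, then k-means (k = 3).
X2 = PCA(n_components=2).fit_transform(X[:, keep])
km = KMeans(n_clusters=3, n_init=50, random_state=0).fit(X2)
print("mean silhouette (reduced set):", silhouette_score(X2, km.labels_))
```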
Discussion/conclusion
This study demonstrated that low-burden, minimal-cost, self-report methods can accurately identify psychological phenotypes based on characteristics related to treatment (mindfulness, interoceptive awareness), mechanism (worry), and outcomes (anxiety) in individuals seeking treatment for anxiety (Study 1) and in the general population (Study 2). We found that these phenotypes also constitute a marker of mental illness susceptibility (Studies 1 and 2) and treatment response (Study 1). These results demonstrate the proof-of-concept that rapid, low-cost methods can be employed to match individuals with treatment to optimize outcomes; for example, mindfulness training for anxiety can be tailored to particular subgroups' baseline psychological phenotype.
Our results show a high degree of similarity in cluster-based solutions (between-cluster correlations of 0.67-0.92 across studies) both in treatment-seeking individuals and at the general population level, for psychological attributes relevant to a disorder's symptomatology, its mechanistic factors, and the type of psychological treatment. In addition, cluster membership significantly interacted (at a moderate to large effect size) with anxiety symptom improvement from our intervention (Study 1). In essence, clinicians could benefit from this information by recommending this type of mindfulness-based intervention as a higher-priority treatment option for anxious patients with interoceptive awareness (Cluster 1) or for patients with lower anxiety/worry and high awareness (Cluster 3), as these groups showed large effect sizes of symptom improvement, while opting for a different treatment selection for participants with some anxiety and lower interoceptive awareness (Cluster 2). Future studies optimizing outcomes for Cluster 2 could further guide recommendations for participants with this type of baseline psychological profile.
In Study 1, although baseline differences in anxiety were observed between clusters (highest anxiety scores in Cluster 1, followed by Cluster 2, with Cluster 3 lowest), Cluster 2's non-significant outcome improvement from the intervention was unlikely to be due to baseline GAD score differences: although Cluster 1 was higher in baseline anxiety than Clusters 2 and 3, Clusters 2 and 3 did not differ significantly from each other in baseline GAD scores, and the cluster with the lowest baseline anxiety (Cluster 3) nevertheless showed significant improvement from the intervention. This indicates that Cluster 2's lack of significant response was not explained by differences in baseline GAD levels between clusters.
The results of these studies could also guide future research in a number of ways: first, by applying the same clustering analysis framework to implement personalized medicine approaches in other mental health disorders, using attributes relevant to the disorder's symptomatology, mechanistic factors, and characteristics of the intervention. Brain and physiological correlates of the psychological phenotypes could be studied as well. Together, this type of framework could enhance the application of personalized medicine in clinical mental health settings, where heterogeneity in treatment response is present and empirically-based guidelines for treatment selection based on individual characteristics are lacking 6 . The low cost, accessibility, and speed of acquisition of combining validated psychological assessment tools could contribute to the feasibility of implementing such phenotyping in clinical mental health settings to facilitate personalized treatment selection or recommendations of preventive measures. Implications are discussed with respect to these studies' limitations, future study directions, and potential personalized medicine applications.

Impact on psychological phenotyping and personalized medicine. The results of this study indicate that psychological variables related to an intervention's modality, mechanism(s) of action, and outcome(s) can be used to characterize individuals into subgroups that are associated with mental health diagnoses and treatment outcomes. We demonstrated that incorporating key psychological markers of intervention and mechanism of change can identify cluster membership that has clinically-relevant impacts on outcomes for individuals with anxiety. The same approach can be applied to personalize treatment in other mental health conditions in which theoretical aspects of mechanism and treatment have already been identified (e.g., cognitive-behavioral therapy, depression, and changes in cognitive patterns 32 ).
Our results demonstrate a potentially important advance for personalized medicine: the development of individually-targeted treatment can be enhanced to optimize intervention effectiveness and/or individualize treatment selection. For example, individuals in the 'body/emotionally unaware' cluster (cluster 2 in Study 1), who showed non-significant clinical outcome improvement relative to clusters 1 and 3, may need the incorporation of treatment components targeting interoceptive awareness skills (e.g., yoga) and/or may benefit from trying a different treatment modality (e.g., CBT). Additionally, it will be important to establish this subgroup's response to medication treatment and/or whether these individuals are more generally resistant to psychological treatment, necessitating stepped-up care immediately upon the beginning of treatment (e.g., combining individual psychotherapy with a digital therapeutic).
The results of these studies also contribute to simple, low-cost methods for the identification of individuals who may be at risk for mental illness, thereby promoting the development of more effective prevention measures. For example, the higher-anxiety clusters identified in both studies exhibited a greater proportion of participants with mental health disorders, and may particularly benefit from engaging in prevention measures (e.g., physical exercise, healthy eating habits 33,34 ).
Practical utility in clinical settings. Studies in the pursuit of personalized medicine have largely focused on biomarker identification methods, which are costly, time- and labor-intensive. For example, cluster analysis of functional magnetic resonance imaging datasets has contributed to the identification of brain-based, mechanistically derived subgroups in clinical populations, e.g., clusters exhibiting distinct functional connectivity in particular brain networks in people with schizophrenia 35 . While the establishment of objective biomarkers is essential to the application of personalized medicine, these require infrastructure and technological expertise that are largely high-cost and limited in access. Nonetheless, reports on psychological phenotyping involving the use of machine learning and psychometric data show promise 14,36,37 . Our findings not only set the stage to guide and synergize with biomarker studies (e.g., combining psychological phenotyping with brain-region clustering in functional neuroimaging studies), but also have the advantage of practical utility in clinical settings. Psychological phenotyping can be developed and deployed rapidly and at low cost: someone with anxiety can fill out the questionnaire items in less than ten minutes, at home or on their smartphone in a clinic waiting room. The data can then be algorithmically cluster-analyzed and delivered to their electronic medical record in real time, where it can be viewed by a clinician to scientifically guide and inform treatment selection in a personalized manner.
Limitations and future directions. The present studies are not without limitations. Despite the observed large effect sizes for the interaction of cluster membership with treatment response in Study 1 30 , these results should be interpreted with caution and warrant replication in future, larger studies. With respect to analyses of cluster membership and associations with mental health diagnoses, the limited sample size of Study 1 may have masked the presence of significant effects, while, conversely, the large sample size of Study 2 may have yielded significant differences of lesser clinical relevance. Moreover, despite the convergence of results between studies in the association between cluster membership and mental health diagnosis, the difference in diagnosis assessment (diagnosed by an experimenter in Study 1, and self-reported in the larger Study 2 sample) also limits interpretation. Future replication of these analyses is therefore warranted to address these limitations. In addition, although the clusters showed a large confluence of features between the two studies (73-98% similarity and between-cluster correlations of 0.7-0.9), some distinctions were observed, particularly for the first phenotype on intervention (awareness) related features and for the second phenotype on outcome/mechanism-related features. It is possible that these differences stem from population differences in baseline anxiety: Study 1 was a treatment-seeking sample with, on average, moderate levels of anxiety, whereas Study 2 was a larger sample taken from the general population with, on average, mild levels of anxiety. Moreover, the use of adaptive intervention designs 5 manipulating specific aspects of interventions (e.g., duration, type, dose) may help to identify key features (e.g., experiential avoidance in the case of anxiety 38 ) and inform intervention development to bolster treatment outcomes in subgroups that may otherwise not be predicted to respond as robustly. Finally, the use of the reduced feature set, which yielded comparable strength of clustering and cluster composition across studies, would reduce computation costs when analyzing large datasets and speed up data collection with preserved clustering accuracy; this result, too, warrants replication in future independent datasets.

In conclusion, the results of these studies support the clinical practical utility of psychological phenotyping using machine learning and self-report questionnaire data involving characteristics related to a psychological treatment's modality, mechanism of action, and outcome. In other words, machine learning can be used to identify psychological phenotypes, i.e., subgroups of people based on baseline psychological characteristics relevant to a particular disorder, that can inform patients' response to its treatment or their susceptibility to psychological disorder, with implications for the use of personalized medicine in clinical mental health practice. The application of this framework also has high practical utility in such settings (ease, rapidity, accessibility, and low cost). Finally, these results set the stage for future studies incorporating neurobiological data to determine the correspondence between psychological phenotypes and underlying neurobiological correlates/markers.
Data availability
Data included in the present article will not be made publicly available due to resource sharing specifications included in the NIMH/NIH grant supporting this project. Brown University will adhere to the NIH Grants Policy on Sharing of Unique Research Resources including the "Sharing of Biomedical Research Resources: Principles and Guidelines for Recipients of NIH Grants and Contracts" issued in December, 1999 and the Data Sharing Policy (section 8.2.3.1) from the October 2019 revision. Specifically, material transfers would be made with no more restrictive terms than in the Simple Letter Agreement or the UBMTA and without reach through requirements. Nonetheless, data access requests from academic investigators can be directed to the corresponding author. | 2023-02-22T15:15:51.497Z | 2023-02-21T00:00:00.000 | {
"year": 2023,
"sha1": "efe9bf471e501639677af6690c8d7d893ca57c7a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "efe9bf471e501639677af6690c8d7d893ca57c7a",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214710703 | pes2o/s2orc | v3-fos-license | Patient’s Experiences and Difficulties at Home Following Day Surgery
Outpatient surgery, also called ambulatory surgery, is a surgical intervention in which patients whose health status permits a planned day procedure undergo an operation and are discharged home on the same day. Nowadays, the number of patients treated in outpatient surgery units has increased substantially compared with the number of those who have to be hospitalized after a surgical intervention.1-9
In outpatient surgery, patients are discharged home on the same day in a process that substantially shortens the hospital stay, in which the surgical procedure is postponed only in exceptional cases and which poses fewer risks with respect to infections. It is a cost-effective procedure that can be performed with less personnel, with which patients and families are more satisfied, and which causes them less stress. 4,6,10 If, however, the process is not effectively planned, outpatient surgery can cause several problems, as a result of which patients can experience difficulties. 5,7,10,11 Among the various problems that might arise in outpatient surgery, the most prominent are the need for a sufficient number of experienced personnel, the short span of time nurses can allocate to patients in the post-surgery period, the need for efficient training for the period after discharge set against the insufficient time available for such training, the lack of sufficient time for the assessment of complications that might arise in the post-surgery period, the impossibility of monitoring complications after discharge, and the need for a caregiver in the first 24-48 hours. 2,3,12 Patients can have difficulty coping with the problems they experience at home after discharge. Various complications, such as pain, nausea and vomiting, fever, fatigue, a feeling of asthenia, discharge, bleeding, hoarseness, and gag reflex, can be observed in patients who receive insufficient training at discharge. 2,7,12 In order to prevent such complications, patients should be equipped at discharge with the knowledge and skills that would enable them to manage their own self-care at home. The requirements of patients and their relatives with respect to care after outpatient surgery vary depending on the surgical intervention implemented and the personal traits of each patient. 13-15 On the other hand, in outpatient surgery, where patients are discharged home in a rapid process, other factors decrease the self-care capacity of patients and pave the way for probable rehospitalisation, including the surgery and treatment options, anaesthesia and the related complications, pain control, non-delivery of written guidance material with respect to care at home, and wound care. 13-16 It is thought that the present study would be beneficial especially for physicians and nurses serving in outpatient surgery units with respect to extending their knowledge in planning care, determining the conditions requiring support from a caregiver, and reviewing the training given before discharge in line with these needs, by taking due account of the most prominent and most persistent problems patients experience at home and the difficulties they encounter in performing their daily activities. No previous studies on this subject were found in the Republic of North Cyprus.
MATERIAL AND METHODS
This descriptive, cross-sectional study was conducted between 10 December 2014 and 31 March 2015 at a university hospital in Cyprus. The sample consisted of 160 patients, determined from the known population using the standard sample size calculation formula (95% confidence interval, 5% error margin). In each of the surgical areas involved, at least 10 surgical procedures are performed per week. Patients are discharged within 3-6 hours after the procedure. Discharge training is given verbally by physicians and nurses, and no written material is provided.
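The paper does not spell the formula out; assuming the commonly used finite-population ("known population") version, the calculation would take the form

$$ n = \frac{N\, z^2\, p(1-p)}{d^2 (N-1) + z^2\, p(1-p)}, $$

with z = 1.96 for a 95% confidence interval, d = 0.05 as the error margin, and p = 0.5 for maximum variability. Under these assumptions, a target of n = 160 corresponds to an accessible population of roughly N ≈ 270.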
Patients included in the study were over 18 years of age, able to express themselves, had undergone an outpatient surgical intervention, and were willing to take part in the study. Individuals who met these criteria and agreed to participate were included.
The question forms used in the study were designed by the researcher on the basis of models available in previous research 2,6-9,12 and were applied after being reviewed by three specialist academic nurses. After this review, the forms did not require any changes.
1) Introductory Information Form:
This form, which comprises 15 questions, contains demographic data, contact information, and questions about the patients' general health condition.
2) Post-Procedure Assessment Form:
This form contains 12 questions about the outpatient surgical procedure, complications experienced by the patient in the post-operative period such as pain, bleeding, nausea and vomiting and those that would provide insight with respect to the duration of clinic stay in the post-surgery period.
3) Discharge Inspection Form:
This form contains questions that might provide a hint as to whether the patients experienced fatigue, nausea, pain, bleeding, constipation, flatulence, or difficulty in taking liquids by mouth in the first 24 hours at home, along with the patients' experiences and coping strategies and the difficulties they experienced in daily activities such as dressing-undressing and eating. 7,10,13,14 In the first stage, the introductory information form was filled out based on the information in the patients' files created at admission to the clinic and on the information obtained in a face-to-face interview. In the second stage, the post-procedure assessment form was filled out based on a face-to-face interview with the patients at the clinic after the procedure. In the third stage, the discharge inspection form was filled out by the researcher through a telephone interview 24 hours after the patient had been discharged. Patients were asked the questions of the discharge inspection form again, in a face-to-face interview, when they came to the clinic for a control visit 7 days after discharge, in order to observe whether there had been any change over the intervening period of 2-7 days.
Ethical Considerations
Ethical approval was obtained from the Near East University Scientific Researches and Ethics Committee (2014/26-164). Before starting the study, permissions in written form were obtained from the Office of the Chief Physician of the hospital. Before applying the question form, patients were informed about the objective of the study, then their consent in oral and written form was obtained. The study was carried out in accordance with the principles of the Helsinki Declaration.
Statistical Analysis
The Statistical Package for the Social Sciences v20.0 software (SPSS Inc.; Chicago, IL, USA) was used for assessing the data. Personal data of the patients included in the study, such as gender and age group, were subjected to frequency analyses and displayed in frequency distribution tables. Frequency analyses were also performed on other indicators, including general health condition, the clinic involved, the type of anaesthesia, the symptoms seen after the procedure, the level of information communicated in the pre- and post-surgery periods, the troubles which patients experienced in the first 24 hours as well as within 2-7 days after discharge home, and the difficulties that patients experienced in performing their daily activities over the same periods; these were likewise displayed in frequency tables.
RESULTS
Of the patients who participated in the study, 60% (n:64) were women, 37.5% (n:60) were in the 26-35 age group, and 20.0% (n:32) were in the over-56 age group. Of the patients included in the study, 73.7% (n:118) had not been diagnosed with any chronic disease; of the patients who had a chronic disease, 64.2% (n:27) had hypertension and 30.9% (n:13) had diabetes. While 24.3% of the patients stated that they regularly used medicines, 78.1%
With regard to the level of information relayed to patients, 90.6% (n:145) stated that they had been informed in detail before the intervention. While 69.0% (n:110) of the patients received this information from a physician, a nurse informed 31% (n:50) of the patients. As for the discharge training, 72.5% (n:116) said that they received training for the post-discharge period, whereby 96.5% (n:154) of these patients were trained by a physician, and 51.0% (n:82) stated that they found the discharge training sufficient. Besides, 82.0% (n:131) of the patients said that they would have liked to have written material with respect to home care after discharge.
Taken together, the results show that the patients experienced several problems, such as nausea/vomiting, difficulty in taking liquids orally, distention, difficulty in breathing, and orientation difficulties, within 2-7 days after their discharge home.
With respect to the difficulties experienced by the patients in the first 24 hours after discharge, 64.3% (n:103) had difficulty in walking and moving around, 79.3% (n:127) in going up the stairs, 75.0% (n:120) in managing their own self-care, 67.5% (n:108) in bathing, 69.3% (n:111) in dressing and undressing, and 80.6% (n:129) in going to the toilet.
In respect of the difficulties which patients had at home within 2-7 days after discharge, 3.1% (n:5) had difficulty in walking and moving around, 5.6% (n:9) in going up the stairs, 10.0% (n:16) in self-care, 1.2% (n:2) in bathing, 6.8% (n:11) in eating and drinking, and 2.5% (n:4) in shopping. Of the participating patients, 69.3% (n:111) stated that they could not take care of the household, and 9.3% (n:15) said that they could not do their shopping.
DISCUSSION
Most of the patients who participated in the study were women, were in the 26-35 age group, and underwent surgery in the gynecology clinic (Table 1). Previous research has provided evidence that patients undergoing a surgical intervention experience several troubles at home, those associated with pain, oedema, exercise, self-care, and the wound area being the difficulties experienced by the majority, and that they need support from the family in performing daily activities during the recovery period. 14,16-19 The study results have shown that while 75% (n:120) of the patients experienced pain in the first 24 hours after discharge, 35.6% (n:57) said that they had pain within 2-7 days after their discharge home (Table 2). Even though patients can return home in a relatively short time after an outpatient surgical intervention, they do experience problems if the preparations for the surgical procedure and the post-surgery period are not planned well. The most common problem seen in patients undergoing day surgery is pain, and it is the reason that delays discharge or leads to hospitalization in most patients. It is reported in previous research that patients can be hospitalized, or consult healthcare institutions themselves, due to pain, even in cases where all measures have been taken to provide sufficient analgesia. 7,18 Dal et al. found that the most frequent troubles which patients experience at home after discharge are those connected with pain, oedema, exercise, and self-care, whereby the majority of the patients had pain problems at home. 18 The findings of our study are similar to those observed in the studies indicated above, giving rise to the thought that proposed solutions to prevent pain, one of the most prominent problems experienced by patients after an outpatient surgical intervention, should be taken into account and implemented.
The study has further shown that 46.2% (n:64) of the patients felt fatigue/weakness (Table 2). Erkal reported that patients who were examined by cystoscopy in the outpatient surgery department and later monitored through telephone interviews for three days after discharge reported fatigue, restriction in activities, and insufficient liquid intake as the most frequent problems. 1 Gilmartin reported that patients felt intense fatigue and weakness on the first day after the operation, which, however, declined over the following days. 6 In our study, fatigue/weakness was the second most frequent trouble, following pain, among the problems which patients experienced in the first 24 hours.
Feeling tired, fatigued, or weak after surgery is usual for most patients, and there are several reasons for this outcome, some of which begin even before surgery. For example, many patients have anxiety about undergoing any type of surgery and find it difficult to sleep, especially right before the date of surgery. Consequently, many patients have a sleep deficit even before they undergo surgery.
The patients included in the study experienced nausea and vomiting in the first 24 hours, as well as discharge/bleeding at the wound site, loss of appetite, difficulty in urination, insomnia, and difficulty in taking liquids by mouth. In their study conducted to investigate the problems which patients undergoing nose surgery experience at home in the first 3 days of the post-surgery period, and the solutions for these problems, Çilingir and Bayraktar reported that problems frequently observed after ambulatory surgical interventions, such as bleeding and discharge/leakage at the operated area, were seen in the post-discharge period. 7 Gilmartin, on the other hand, showed that nausea and vomiting, with an incidence of 7%, ceased by the 5th day after the operation in some patients. 6

Postoperative nausea and vomiting (PONV), related to anaesthetic methods and drugs, is among the most important postoperative problems. Age, gender, weight, personal factors, anxiety, preoperative medications, the operation area and surgical method, the anaesthetic method and drugs, and postoperative factors all influence PONV. PONV is one of the complex and significant problems in anaesthesia practice, given the growing trend toward ambulatory and day-care surgeries. Combinations of drugs from different classes with different mechanisms of action are administered for optimized efficacy in adults with a moderate risk for PONV. A multimodal approach combining pharmacological and non-pharmacological prophylaxis, along with interventions that reduce the baseline risk, is employed in patients with a high PONV risk. 20-22

Several studies in the literature report that patients' inability to maintain their self-care at home after an outpatient operation is one of the disadvantages of this type of surgical intervention. 2,7,18 In respect of the daily life activities within the first 24 hours of the post-surgery period, our study found that patients had difficulty in walking and moving around, going up the stairs, meeting their self-care needs, bathing, and dressing-undressing (Table 3). Patients undergoing a surgical intervention need more care in the first three days, the time in which they experience more difficulty in performing their daily activities. Several studies in the literature also report that patients discharged after an outpatient surgical intervention have difficulty in moving, wound care, concentration, driving, and performing household chores. 2,6,18 Our study has also found that the difficulties patients experience in the first 24 hours after discharge from an outpatient surgical intervention show a declining trend within 2-7 days of the post-surgery period (Table 3). Tepe et al. demonstrated that the daily life activities patients had the most difficulty with in the post-discharge period were walking/moving, going up stairs, and dressing/undressing. 2 The care needs of patients who underwent surgical procedures are higher on the first day, and during this period patients may have more difficulty in performing their daily activities. The patients in our study stated that they experienced less difficulty in performing these activities thereafter. We think that the availability of relatives who helped the patients with household chores and cooking, or the fact that patients themselves had done the household chores and cooked their meals prior to the operation, played a role in this respect.
Some of the patients, though few in number, stated that they also needed support in activities such as cooking, housekeeping, and shopping within 2-7 days after the operation.
A closer look into the relationship between the clinics/units where the operation was performed and the difficulties patients experienced in the first 24 hours and within 2-7 days at home after discharge revealed that patients treated in the general surgery department experienced the most pain and fatigue, and those treated in the ear, nose and throat department had the highest incidence of nausea, vomiting, discharge/bleeding, and loss of appetite. Nausea and vomiting are complications that can also be seen in the post-discharge period in patients undergoing a surgical intervention. 6,12,20 Costa found that patients treated in the orthopaedics department had the highest incidence (16.1%) of pain in the pre-discharge period, and that 5.3% of patients suffered from moderate to severe pain in the first 24 hours. 21 Rawal reported that 65% of patients suffered from moderate to severe pain depending on the type of surgical intervention, of whom 41% were patients who had undergone day surgery. 20 Of the patients included in the study, 90.6% (n:145) received information about the intervention before the operation, of whom 69.0% (n:100) were informed by a physician and 31% (n:45) by a nurse. On the other hand, 72.5% (n:116) stated that they received training for the post-discharge period, of whom 96.5% (n:154) were trained by a physician. It is observed that physicians play an active role in informing patients in the post-surgery period. Previous research has reported that the relatively short time from admission for outpatient surgery to discharge is a factor that restricts the assessment of patients by nurses in the pre-surgery period, the informing of patients and their families before the operation, their monitoring in the post-surgery period, the provision of care in line with their needs, and the monitoring of problems arising in the post-surgery period. 1,5,18,19 It is thought that this situation arises because nurses cannot fully meet their training responsibilities, probably owing to their increased workload. There is also a shortage of nurses, and nurses' workload is high, in the hospital where the research was conducted.
Asked about the discharge training, 51.0% (n:82) of the patients said that they found it satisfactory. Besides, 82.0% (n:131) of the patients said that they would have liked to have written material containing information about home care after their discharge.
Several researchers have underlined that, because oral information may not be fully understood and is forgotten in the course of time, the training should in any case also be provided in written form. 14,15,18 It is also reported in existing accounts that information relayed in written form plays an important role in removing the uncertainties patients experience both during their stay at hospital and at home after discharge, and that the majority of patients want to have the training materials in written form. 2,7,9,18 It is very important to direct nurses toward patient care practices and patient education in order to provide a quality and safe service. Informing the patient reduces medical error, improves care quality and patient satisfaction, and increases the visibility of the nurse. 23,24

CONCLUSION

The present study has found that the most frequent problems patients experience in the post-surgery period after an outpatient operation are pain, a feeling of weakness/fatigue, loss of appetite, and nausea/vomiting. The study has also demonstrated that the most frequent difficulties which the patients experience in activities at home are walking and moving around, going up stairs, self-care, bathing, dressing, undressing, and going to the toilet.
Another result is that most of the patients received detailed information about the surgery to be applied before the operation, whereby the majority was informed by a physician, and fewer number of patients received information from a nurse. The majority of patients do want to have the training materials in written form.
The study suggests that nurses should play an active role in informing patients about the difficulties they may experience at home and the care to be provided after an outpatient surgical intervention. It further proposes that nursing interventions should be reviewed to increase the effectiveness of pain management and to manage the problems patients experience in performing daily life activities after day surgery. The study results indicate that delivering written material to patients would be a reasonable approach to propose.
Source of Finance
During this study, no financial or spiritual support was received from any pharmaceutical company that has a direct connection with the research subject, nor from a company that provides or produces medical instruments and materials which may negatively affect the evaluation process of this study. | 2020-03-19T10:52:51.122Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "21f6b7466f99d71819cd73c4d9bfa88ff13c1f27",
"oa_license": null,
"oa_url": "https://doi.org/10.5336/nurses.2019-66571",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e67d932c4e840a5749bcebf2cf36a1495a85a31a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216595443 | pes2o/s2orc | v3-fos-license | Electronic and Crystallographic Examinations of the Homoepitaxially Grown Rubrene Single Crystals.
Homoepitaxial growth of organic semiconductor single crystals is a promising methodology toward the establishment of doping technology for organic opto-electronic applications. In this study, both electronic and crystallographic properties of homoepitaxially grown single crystals of rubrene were accurately examined. Undistorted lattice structures of homoepitaxial rubrene were confirmed by high-resolution analyses of grazing-incidence X-ray diffraction (GIXD) using synchrotron radiation. Upon bulk doping of acceptor molecules into the homoepitaxial single crystals of rubrene, highly sensitive photoelectron yield spectroscopy (PYS) measurements unveiled a transition of the electronic states, from induction of hole states at the valence band maximum at an adequate doping ratio (10 ppm), to disturbance of the valence band itself for excessive ratios (≥ 1000 ppm), probably due to the lattice distortion.
Introduction
Impurity doping is a key technology that boosts the functionalities of semiconductors. This is true not only for conventional inorganic semiconductors; in fact, a monumental work on bromine-doped perylene enkindled the dawn of "organic semiconductor" research [1,2] and led to the discovery of conductive polymers [3], charge transfer complexes [4,5], and molecular superconductors [5][6][7][8].
Unlike the case of silicon, controlled doping of organic semiconductor single crystals with minimized lattice distortion and phase separation has been a serious challenge. Recently, Ohashi and coworkers proposed a novel methodology for the production of organic semiconductor single crystals including dopant molecules [15]. This technique is based on molecular beam epitaxy, where an organic semiconductor species is deposited on a single crystal substrate of the same kind of molecule (i.e., homoepitaxy) in vacuo [16,17]. Through rigorous tuning of the evaporation rates of the 'host' molecule and the dopant, introduction of FeCl3 molecules as an acceptor into the bulk crystal lattice was achieved.

In the present study, homoepitaxially grown single crystalline overlayers of rubrene were examined by means of photoelectron yield spectroscopy (PYS) and grazing-incidence X-ray diffraction (GIXD). Rubrene is a representative small-molecule p-type organic semiconductor and is known to exhibit remarkably high mobility of conductive holes in its single crystal phase [19-21], which is understood in the band transport framework [22,23]. PYS is a suitable methodology for the electronic characterization of specimens of low electric conductance [24,25], which is the case for intrinsic rubrene as a wide-gap (2.8 eV [26]) semiconductor, and particularly for detecting small modifications of the highest-occupied electronic states with extreme sensitivity [27-29]. GIXD is a technique for the determination of the crystal structures of thin films, and has successfully been applied to hetero-epitaxial organic semiconductor junctions built on molecular single crystals for specification of the lattice orientations [30-32] and even for evaluation of their crystallographic quality by using high-resolution apparatuses [33-35]. It was confirmed by the present GIXD measurements that homoepitaxial rubrene single crystals grow without any apparent lattice distortion up to a thickness of 100 nm. In addition, high-sensitivity PYS analyses clearly unveiled the evolution of the electronic structure of the homoepitaxial rubrene upon increase in the doping ratio of FeCl3: induction of holes at the top of the valence band in the initial stage was followed by disturbance of the valence band for excessive doping ratios.
Materials and Methods
Single crystals of rubrene were produced by a horizontal physical vapor transport (PVT) technique [36]. Details of the PVT equipment used in this work can be found elsewhere [37]. The obtained crystals were subsequently bonded onto Au-coated Si wafer pieces using conductive Ag paste for the electronic measurements [Figure 1b-f], or held on Si pieces covered with the native oxide by electrostatic force for the crystallographic analyses, to prepare "substrates". The homoepitaxial rubrene overlayers were produced by vacuum deposition onto the rubrene single crystal samples at a low evaporation rate of ca. 3 pm/s by using a vacuum chamber built into a glove box [15]. Bulk-doped homoepitaxial rubrene single crystal samples were fabricated by simultaneous deposition of FeCl3 as a p-type dopant together with rubrene onto the single crystal rubrene surfaces. For doping ratios of less than 1000 ppm, rotating disks with various aperture ratios (e.g., 10:1 and 100:1 for the doping ratios of 100 ppm and 10 ppm, respectively) were interposed between the FeCl3 evaporation source and the samples, while the evaporation condition at the source was maintained as that for 1000 ppm doping [15].
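As a back-of-the-envelope illustration of this dilution scheme (a sketch on our part; the function name and the assumption that the dopant flux scales linearly with the open fraction of the disk are ours, though they are consistent with the aperture ratios and doping levels stated above):

```python
# Effective doping ratio set by the rotating-disk dilution (sketch).
# base_ratio_ppm: doping ratio with the FeCl3 source fully open (1000 ppm);
# the disk transmits only a fraction open/total of the dopant flux.
def effective_doping_ppm(base_ratio_ppm, aperture_open, aperture_total):
    return base_ratio_ppm * aperture_open / aperture_total

print(effective_doping_ppm(1000, 1, 10))   # 10:1 disk  -> 100 ppm
print(effective_doping_ppm(1000, 1, 100))  # 100:1 disk -> 10 ppm
```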
The crystallinity of the homoepitaxial rubrene samples was evaluated by GIXD at BL46XU of SPring-8. The X-ray wavelength and glancing angle were set at 0.100 nm and 0.12 • , respectively. Overall diffraction patterns of the sample surfaces were collected by two-dimensional (2D) GIXD measurements by using a 2D X-ray detector PILATUS300K, which was set approximately perpendicular to the incident X-ray, and crystallographic coherent lengths along in-plane directions were evaluated by high-resolution (HR) GIXD spot analyses in the 2θ directions using a NaI scintillation counter and a Ge(111) analyzer crystal. Detailed descriptions of the experimental conditions are found elsewhere [30,35]. The GIXD analyses were conducted in the ambient atmosphere.
The electronic states of the homoepitaxial rubrene samples were analyzed by using a home-built PYS apparatus [29]. For the PYS analyses, exposure of the samples to the ambient atmosphere was thoroughly avoided by the following procedures: (1) the rubrene single crystal "substrates" recrystallized in a purified nitrogen stream were directly conveyed to a glove box filled with a nitrogen atmosphere; (2) the rubrene single crystal substrates were prepared in the glove box and were transferred in an air-tight container filled with nitrogen and a de-oxidation agent; (3) the samples were introduced into the aforementioned glove box equipped with the vacuum chamber; (4) the homoepitaxial rubrene overlayers were deposited up to an overlayer thickness of 50 nm; (5) the samples were transferred back to the first glove box; and (6) the samples were introduced into the ultra-high vacuum system for the PYS measurements by using a vacuum vessel.

Crystallographic Analyses

Figure 2a shows the two-dimensional (2D) X-ray diffraction pattern of a rubrene single crystal sample obtained by integration of 2D-GIXD images taken during continuous rotation of the in-plane azimuthal angle φ of the sample over 180°. The diffraction spots seen in this image were reproduced by a simulated pattern, as indicated with circle marks, for the (100) surface assuming the known crystal structure of rubrene [38], whereas diffraction ascribable to Si powder (blue arc around |q| = 20.04 nm−1 [39]) probably originated from the wafer piece that the sample was fixed on. φ-integrated 2D-GIXD images taken on rubrene single crystals with 50-nm- and 100-nm-thick overlayers of rubrene are displayed as Figure 2b,c, respectively. While an increase in the background intensity (|q| < 8 nm−1) was observed to some extent for the 100-nm-thick overlayer, as seen in Figure 2c, the obtained diffraction patterns agreed with that of the bare rubrene single crystal, irrespective of the thickness of the rubrene overlayer. It has been reported that rubrene can be crystallized in different polymorphs, triclinic and monoclinic phases, from solution [40,41], and one theoretical work has predicted that the triclinic phase is more stable than the bulk orthorhombic phase [42]. However, emergence of the different polymorphs can be considered negligible or non-existent, as no sign of diffraction spots attributable to the triclinic or monoclinic phase was detected in the present 2D-GIXD images.

The φ-dependence of the 2D-GIXD results indicated that the diffraction intensity at q ≡ (qxy, qz) = (8.74 nm−1, 0 nm−1), which corresponds to the {010} diffraction spots for the (100) surface of the rubrene single crystal, appeared only at specific φ geometries with a 180° periodicity, as expected from the symmetry of the crystal lattice. Other spots, e.g., the {111} spots at (qxy, qz) = (9.76 nm−1, 2.34 nm−1), were also confirmed to come out at the expected φ-intervals for all samples. These results corroborated homoepitaxial growth of the rubrene overlayer through complete matching of the crystallographic orientation with the underlying rubrene single crystals.
Figure 2. (a) 2D X-ray diffraction pattern of a rubrene single crystal sample obtained by integration of 2D-GIXD images taken during continuous rotation of the in-plane azimuthal angle φ of the sample over 180°; the diffraction spots are reproduced by a simulated pattern (circle marks) for the (100) surface assuming the known crystal structure of rubrene [38], while the diffraction ascribable to Si powder (blue arc around |q| = 20.04 nm−1 [39]) probably originated from the wafer piece on which the sample was fixed. (b,c) φ-integrated 2D-GIXD images taken on rubrene single crystals with 50-nm- and 100-nm-thick overlayers of rubrene. (d) FWHM of 2θ-profiles for the Rub{010} diffraction spots collected by high-resolution grazing-incidence X-ray diffraction (HR-GIXD) measurements, plotted as a function of the rubrene overlayer thickness; the corresponding crystallographic coherent length (up to 2 µm) estimated from the Scherrer equation is also indicated on the right axis for reference.

Figure 2d shows the full-widths-at-half-maximum (FWHM) of 2θ-profiles for the Rub{010} diffraction spots collected by HR-GIXD measurements, plotted as a function of the thickness of the homoepitaxial rubrene overlayers. Individual marks correspond to data obtained at various sample orientations from several samples for each thickness. The spot width is formally related to a crystallographic coherent length (or mean crystallite size) through the Scherrer equation, which is also indicated on the right axis of Figure 2d for reference. Since the actual crystalline domain size of the rubrene single crystals used in this study was at least several mm wide, as exemplified in Figure 1, the "coherent size" for the 0-nm-thick sample (i.e., the bare rubrene single crystal) does not give any reasonable dimension of the sample but has to be considered to be restricted by the angular resolution of the present experimental setup [43]. The present results suggest an absence of structural deterioration, at least on this length scale, upon growth of the homoepitaxial rubrene crystals to a thickness of 100 nm. Therefore, it should be concluded that, at least in terms of structure, the homoepitaxial rubrene overlayers and the bulk single crystal rubrene underneath are identical and indistinguishable, as previously suggested [15].
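As a rough numerical illustration of this FWHM-to-coherent-length conversion (the Scherrer shape constant K = 0.9 and the example FWHM value are assumptions on our part; the wavelength and the Rub{010} spot position are taken from the text):

```python
import math

lam = 0.100   # X-ray wavelength in nm (stated in the text)
q = 8.74      # |q| of the Rub{010} spot in 1/nm (stated in the text)
K = 0.9       # Scherrer shape constant (assumed)

# Bragg angle from q = 4*pi*sin(theta)/lambda
theta = math.asin(q * lam / (4 * math.pi))

def coherent_length(beta_deg):
    """Scherrer equation L = K*lambda/(beta*cos(theta)),
    with beta the FWHM of the 2-theta profile in radians."""
    beta = math.radians(beta_deg)
    return K * lam / (beta * math.cos(theta))

# A FWHM of ~0.0026 degrees maps to a ~2 um coherent length, i.e., the
# resolution-limited regime discussed for the bare single crystal.
print(coherent_length(0.0026))  # ~2000 nm
```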
Electronic Analyses
The variation in PYS spectra of 20-nm-thick homoepitaxial rubrene overlayers depending on the FeCl3 doping ratio is presented in Figure 3a. The spectra of a bare rubrene single crystal and an amorphous rubrene thin film (thickness of 20 nm) grown on an indium-tin-oxide substrate are also displayed. While the homoepitaxial overlayers without FeCl3 doping and with 10 ppm FeCl3 exhibited substantially the same trends of photoelectron yield Y versus photon energy hν as that of the bare rubrene single crystal, increase in the doping ratio led to a rightward shift of the spectra, approaching that of the amorphous rubrene.

The photoelectron yield from organic molecular solids can be approximated by the following empirical cube law as a function of hν [24,29,44-46]:

Y(hν) = A(hν − Is)³ · S(hν − Is),

where Is is the ionization energy of the sample, A is a material-dependent parameter related to the photoemission cross section, and S(x) is a step function that switches from zero to unity as x goes from negative to positive. This formulation is valid only in a range where (hν − Is) is not too large [46]. The Is values for the bare rubrene single crystal and the amorphous rubrene samples were derived by least-squares fitting of the spectra as (4.88 +0.06/−0.09) eV and (5.33 ± 0.08) eV, respectively, in good accordance with previous PYS [45] and ultraviolet photoelectron spectroscopy (UPS) works [47-52]. For hν > Is, this formula can be transformed into

[Y(hν)]^(1/3) = A^(1/3)(hν − Is),

which means that the x-intercept of a linear onset in a [Y(hν)]^(1/3) plot corresponds to Is of each sample. In Figure 3b, the PYS spectra of the 20-nm-thick homoepitaxial overlayers given in Figure 3a are replotted on the [Y(hν)]^(1/3) scale. The Is values as a function of the FeCl3 doping ratio are plotted in the inset graph, where the error bars indicate possible ranges covering the variation of the results depending on the fitting conditions and sample individuals. This suggests a jump in Is when the doping ratio was increased from 100 ppm to 1000 ppm.
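As a minimal sketch of this fitting procedure (not the authors' analysis code; the synthetic spectrum, threshold value, and fitting window below are assumptions for illustration):

```python
import numpy as np

# Synthetic PYS spectrum following the empirical cube law (placeholder values)
I_s_true, A = 4.88, 1.0                      # eV; assumed for illustration
hv = np.linspace(4.5, 5.6, 200)              # photon energy axis in eV
Y = np.where(hv > I_s_true, A * (hv - I_s_true) ** 3, 0.0)

# Cube-root the yield and fit the linear onset region above threshold
y3 = np.cbrt(Y)
onset = (hv > 5.0) & (hv < 5.4)              # window chosen by inspection
slope, intercept = np.polyfit(hv[onset], y3[onset], 1)

# The x-intercept of the linear fit gives the ionization energy Is
print("estimated I_s:", -intercept / slope)  # ~4.88 eV
```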
Discussion
The ionization energy of organic semiconductors is not material-specific but depends largely on the molecular packing in the solid state [53]. In the case of rubrene, it was proposed that the difference in Is between single crystals and amorphous films should be attributed mostly to the formation of inter-molecular electronic "bands" in the former [45]; i.e., the occurrence of energy dispersion upshifts the upper edge of the highest occupied electron energy in comparison with the original discrete molecular orbital position. Indeed, the formation of a highest-occupied molecular-orbital (HOMO) band (valence band) ca. 0.45 eV wide was demonstrated for single crystal rubrene [48,49,54], and a narrower photoemission linewidth of the HOMO peak was observed for amorphous rubrene films, suggesting an absence of band dispersion [55]. These observations suggest a correspondence between the Is magnitude and the crystalline order of rubrene.
As indicated in Figure 3, the PYS spectra and the Is values for the non-doped and 10-ppm-doped samples revealed good accordance with those of the rubrene single crystal itself, which corresponds to the fact that these homoepitaxial rubrene overlayers exhibit good crystallinity. On the other hand, Is for the 1000-ppm-doped and 10000-ppm-doped rubrene overlayers approached the value for amorphous rubrene, which should be ascribed to structural disordering due to the presence of a significant amount of the dopant molecules, as previously suggested by atomic force microscopy (AFM) results [15]. The case of the 100-ppm-doped rubrene may be an intermediate one. In the previous work [15], while the AFM images did not exhibit any signs of distortion up to this doping ratio, the Hall mobility decayed significantly upon increasing the doping ratio from 10 ppm to 100 ppm, suggesting the emergence of lattice disturbances acting as scattering centers. The present Is position for the 100-ppm-doped sample was close to that of the bare rubrene single crystal, which implies that the lattice of the homoepitaxial rubrene was not largely disturbed by the presence of 100 ppm FeCl3. However, this sample was not entirely free from structural distortion, which may diminish the photoelectron yield from the well-crystallized rubrene and/or slightly reduce the Is value.
Discussion
The ionization energy of organic semiconductors is not material specific but depends largely on molecular packing in the solid state [53]. In the case of rubrene, it was proposed that the difference in I_s between single crystals and amorphous films should be attributed mostly to the formation of inter-molecular electronic "bands" in the former [45]; i.e., the occurrence of energy dispersion upshifts the upper edge of the highest occupied electron energy in comparison to the original discrete molecular-orbital position. Indeed, the formation of a highest-occupied molecular-orbital (HOMO) band (valence band) about 0.45 eV wide was demonstrated for single-crystal rubrene [48,49,54], and a narrower photoemission linewidth of the HOMO peak was observed for amorphous rubrene films, suggesting an absence of band dispersion [55]. These observations suggest a correspondence between the I_s magnitude and the crystalline order of rubrene.
As indicated in Figure 3, the PYS spectra and the I_s values for the non-doped and 10-ppm-doped samples revealed good accordance with those of the rubrene single crystal itself, which corresponds to the fact that these homoepitaxial rubrene overlayers exhibit good crystallinity. On the other hand, I_s for the 1000-ppm-doped and 10,000-ppm-doped rubrene overlayers approached the value for amorphous rubrene, which should be ascribed to structural disordering due to the presence of a significant amount of dopant molecules, as previously suggested by atomic force microscopy (AFM) results [15]. The case of the 100-ppm-doped rubrene may be intermediate. In the previous work [15], while the AFM images did not exhibit any signs of distortion up to this doping ratio, the Hall mobility decayed significantly on increasing the doping ratio from 10 ppm to 100 ppm, suggesting the emergence of lattice disturbances acting as scattering centers. The present I_s position for the 100-ppm-doped sample was close to that of the bare rubrene single crystal, which implies that the lattice of the homoepitaxial rubrene was not largely disturbed by the presence of 100 ppm FeCl3. However, the sample was not entirely free from structural distortion, which may diminish the photoelectron yield from the well-crystallized rubrene and/or slightly reduce the I_s value.
Whereas the three PYS spectra in Figure 3 for the bare rubrene single crystal, the non-doped homoepitaxial rubrene, and the 10-ppm-doped sample resemble each other, small but significant differences were found on taking a closer look at the photoemission threshold regions. As shown in Figure 4a, the spectrum for the bare rubrene single crystal rose along the cube-law curve on the whole. It is noteworthy that previously reported PYS and UPS spectra for single-crystal rubrene exhibited photoemission signals even in the energy region beyond the main valence band edge determined by the cube-law fitting [48], a feature ascribed to the so-called "oxygen-related band gap state" found on rubrene single-crystal samples that had been exposed to air [56]. Since the present samples kept their surfaces 'fresh' through the thorough avoidance of exposure to ambient conditions before the PYS experiments, the spectral profile given in Figure 4a, and thus the cube-law curve, can be considered an archetype of the Y(hν) pattern for the intrinsic valence band of single-crystal rubrene. On the other hand, the spectra of the non-doped homoepitaxial rubrene exhibited extra intensity on the low-energy side of the cube-law curve, as shown in Figure 4b. This means that some electrons did exist even in that energy range, which lies within the band gap beyond the valence band onset. In contrast, for the 10-ppm-doped samples, the photoelectron yield only in the vicinity of the valence band edge was shaved off from the expected cube-law curve, which suggests that the electrons accommodated at the valence band maximum (VBM) were taken away. The Y(hν) magnitude in principle reflects the occupied density-of-states (DOS) of the specimen above that energy (hν) position with respect to its vacuum level.
While a conclusive understanding of a formulation for deducing the accurate DOS distribution from the PYS spectrum [25,57-59] has not yet been reached, the characteristics of the electronic structures of the non-doped and 10-ppm-doped homoepitaxial rubrene can nevertheless be outlined as in Figure 4d,e, respectively. The additional photoemission feature for the non-doped homoepitaxial rubrene indicates the emergence of occupied electronic states tailing into the energy gap range. Note that this sample was not exposed to air either; the "oxygen-related" state should therefore be excluded as a possible origin of these mid-gap states. Instead, like the Urbach tail of inorganic semiconductor materials [60], the presence of slight structural disordering in the homoepitaxial overlayers presumably caused the rise of the DOS in the energy gap region. Even though the crystallographic structure of the homoepitaxial overlayer is identical to that of the rubrene single-crystal substrate, it was proposed that the growth in vacuo of the homoepitaxial rubrene is not at thermodynamic equilibrium, and thus growth stress and strain have to be present in the interior of the homoepitaxial rubrene films [17]. Indeed, the relatively low Hall mobility (0.2 cm2 V−1 s−1) of the non-doped homoepitaxial rubrene crystals [15], in comparison to that of bare rubrene single-crystal samples (~0.6 cm2 V−1 s−1) measured by the same group [61], implies the existence of latent structural disturbances acting as scattering centers for charge carriers. P-type doping of semiconductors generally pulls the Fermi level down toward the deeper electron-binding-energy side. In fact, it was reported that the work function of amorphous rubrene films increased from 4.69 eV to 5.02 eV upon doping with 10 ppm FeCl3 [15]. In the present homoepitaxial rubrene case, the doping of 10 ppm FeCl3 made little impact on the crystal lattice, and thus on the main valence band, but only shifted the Fermi level downwards, which swept the electrons in the mid-gap states out and induced holes at the top of the valence band.
Conclusions
The homoepitaxial overlayers of rubrene grown on single-crystal rubrene were examined by GIXD and PYS in terms of their crystallographic qualities and electronic structures, respectively. The GIXD results confirmed the absence of apparent structural disturbance, within the present resolution limit, for the homoepitaxial rubrene single crystals up to a thickness of 100 nm. The PYS results indicated that the ionization energy of the main valence band edge of the homoepitaxial rubrene is identical to that of the rubrene single crystal itself, that doping of FeCl3 up to 10 ppm hardly affects the main valence band, and that excessive doping over 1000 ppm leads to a transition of the electronic states to those of the amorphous phase of rubrene. High-sensitivity PYS analyses revealed the emergence of filled mid-gap states above the VBM of the non-doped homoepitaxial rubrene and of induced hole states at the VBM upon doping with 10 ppm FeCl3.
"year": 2020,
"sha1": "4f0b88e9b9f7d17afa9d7c7a813b250244e47402",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/8/1978/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65ae763cee4c2e5ebe667e977a5c29862d30ce93",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Relationship Analysis of Participation and Participation Factors of the Water User Farmers Association (P3A) in the Management of Irrigation at the Bantimurung Irrigation Area
Farmers' need for irrigation water in farm management is higher nowadays. Beyond the demand to fulfill the water needs of rice production, the ability of farmers to manage water resources well has also become a necessity. Saptana et al. (2001) stated that water and irrigation resource management is considered one of the key components for improving food security. Irrigation water management depends on the level of the Water User Farmers Association (P3A) as an organization that has the authority over the utilization, development, and management of irrigation water. This research aims to analyze the relationship between participation factors and the participation rate of P3A members in the management of irrigation channels in Maros Regency. The research uses a qualitative-quantitative approach. Results showed that age, land area, and the distance between paddy fields and irrigation channels are factors that have a significant relationship with the participation rate of farmers in the management of irrigation channels, while the number of family dependents and the education level have no significant relationship with that participation rate.
Introduction
Utilization, development, and management of irrigation systems at the tertiary level are the responsibility of the Water User Farmers Association (P3A), and to realize good development and management of irrigation water, a strong, independent, and empowered institution is needed that is ultimately able to increase agricultural productivity and production and, in turn, raise citizens' welfare and food security [1]. The objectives of participatory management are: 1. to increase the sense of togetherness, ownership, and responsibility in the management of irrigation between the government and the Association of Water User Farmers (HIPPA); and 2. to fulfill irrigation services that meet the expectations of farmers through efforts to increase the efficiency and effectiveness of sustainable irrigation management [2]. Some research related to the participation of farmers in the management of irrigation water has been conducted; one such study, by R. Putriany et al. (2018), showed that the participation rate of P3A member farmers in three groups was influenced by several factors, namely age, the number of family dependents, and the farmers' experience as farm managers [3].
Meanwhile, according to the Director General of Agricultural Infrastructure and Facilities (PSP) of the Ministry of Agriculture (Kementan), Sarwo Edhy, there are five pillars of irrigation modernization, among them the reliability of the water supply, the reliability of the irrigation network, and water management [4]. Important elements of the five pillars are institutions and human resources, so the role of P3A, as an institution formed by a group of farmers in the same region, becomes an important component in managing and maintaining the tertiary irrigation network and in finding more independent solutions to the irrigation water issues emerging at the farm level.
Participation is ideally divided into several phases [5], namely the decision-making stage, the implementation stage, the evaluation stage, and the stage of enjoying the results. At each stage, the participatory level will differ, because there are many factors that become the basis or reason for someone to participate. Research conducted by Hastika et al. (2019) indicated that the participation of P3A member farmers in irrigation network development activities, which includes participation in drawing up group proposal plans and participation in the construction of tertiary irrigation tracts, was low across the various forms of participation, such as contributions of thought, labor, and funds [6]. However, Hastika's research did not explicitly explain the reasons for the low participation rate of P3A in irrigation network development. Nevertheless, the research of A. Y. Antika (2017) explains that the low participation rate of P3A members is due more to the low involvement of all P3A members at every stage of program implementation, unaccompanied by mentoring and supervision from the related parties, as well as to the personal reasons of P3A members [7].
Community participation, according to Cohen, J. and Uphoff (1977), is determined by internal factors, which include individual characteristics such as age, gender, status in the family, education level, ethnicity, religion, language, occupation, income level, distance of the home from the location of activities/programs, and land tenure, and by external factors, which are all of the stakeholders in the program [8,9], among others community leaders, local governments, NGOs, and third parties (NGOs, social foundations, and colleges). A similar result was also reported by Wijayanti, N.A. (2011): the factors that influence community participation in an activity/program include age, level of education, income, and the total number of family dependents [10].
Lokita's (2011) research results show that participation is determined by internal factors, including attitude and motivation as well as the knowledge, skills, and experience of individuals, and by external factors, namely the opportunities, in the form of access, that encourage individuals to participate in the program [11]. Meanwhile, the research conducted by Hadi Suroso et al. (2014) shows that differences in the level of community participation in development planning in Banjaran village through Musrembangdes are caused by [12]: a. the level of education, where at a certain level of education the community has a tendency to participate actively, and the higher the level of education, the higher the participation; b. the level of communication, where intensive communication of fellow citizens with their leadership, and of the social system in the community with outside systems, is able to improve the role and participation of the community; c. age, where older or more senior people give more opinions, whether input, advice, or in terms of making a decision; d. the kind of community work, where work that is more flexible in its working hours allows people to participate more; and e. the level of leadership, where leaders who are able to recognize and capture the needs of the community tend to encourage participation.
Referring to the previous research elaborated above, and to the fact that the level of community or farmer participation in various government programs/activities is generally in the low or medium classification, with varied factors underlying this activeness or participation, the researchers were interested in analyzing the relationship between participation factors and the participation rate of P3A farmers in the management of irrigation in the Bantimurung Irrigation Area.
Research Methods
The objects of this research were the farmer members of P3A, divided into members of P3A Karya Bersama (upstream), P3A Samaturu (middle), and P3A Sare Te'ne (downstream) in the Bantimurung Irrigation Area of Maros Regency (Table 1). The method used in this research is a descriptive method that begins with data collection, followed by data analysis and data interpretation, applying a qualitative approach. Through interviews, data and information were collected consisting of the participatory levels of P3A member farmers at each stage of the irrigation management activities and the factors that underlie their participation in irrigation management. The participation rate of P3A member farmers was measured using a Likert scale, while the determining factors of participation were measured by identifying age (years), education level (years), number of family dependents (people), land area (hectares), distance of the residence from the irrigation channel, and distance between the paddy field and the irrigation channel (meters). Since this research analyzes the relationship between participation rates and the determining factors of participation, the research design is correlational. The relationship between the participation rate and the determining factors of participation was analyzed through Chi-square analysis with the hypotheses: H0: there is no significant linkage between the socio-economic and physical variables and the participation rate of farmers in the management of irrigation channels; and H1: there is a significant relationship between the socio-economic and physical variables and the level of farmer participation in the management of irrigation channels. Data were processed using Microsoft Excel 2013 and SPSS/PC (Statistical Package for the Social Sciences/Personal Computer) software.
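As an illustration of the Chi-square analysis described above, the following short Python sketch tests the independence of one participation factor and the participation level; the contingency table is hypothetical and does not come from the study's data.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: rows = age groups, columns = participation level
table = np.array([
    [8, 12, 5],   # younger farmers: low / medium / high participation
    [4, 18, 9],   # middle-aged farmers
    [10, 9, 3],   # older farmers
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")

# Reject H0 (no relationship) at the 5% significance level
if p_value < 0.05:
    print("Significant relationship between the factor and participation level")
else:
    print("No significant relationship")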
Results and Discussion
The level of participation of farmers in the management of irrigation was measured using a scoring method, grouping the management activities into four phases in accordance with the statement of Rahmawati and Sumarti T. (2011) [5], namely participation in planning, implementation, and evaluation, which include the activities of cleaning, payment of IPAIR, rehabilitation, water distribution, and periodic checking of irrigation channels, and the utilization of the results, which includes the delivery of water to farmland, water distribution in the right amount and at the right time, an increase in farming productivity, and fair, transparent, and accountable financial management. The results of the data analysis of the participation rate of P3A farmers can be seen in Table 2. The level of participation of the members of P3A Karya Bersama, P3A Samaturu, and P3A Sare Te'ne in the management of irrigation was on average in the medium category, which indicates that each member of P3A was already aware of the role and responsibility to participate in utilizing and managing the irrigation channels that will affect the results. The members of P3A have begun to realize their role in devoting energy, ideas, or thoughts that genuinely affect the final goal to be achieved from the management of irrigation channels. The active participation of farmers in irrigation management activities is certainly due to the support of the capabilities of the human resources involved in the organization, each with varying characteristics [13].
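A minimal Python sketch of the Likert-based scoring described above is shown below; the item count, 1-5 scoring, and category cut-offs are illustrative assumptions, not the authors' exact values.

def participation_category(item_scores, n_items=10):
    """Classify a respondent's total Likert score into low/medium/high."""
    total = sum(item_scores)
    low_cut = n_items * 2     # illustrative thresholds only
    high_cut = n_items * 4
    if total < low_cut:
        return "low"
    return "medium" if total <= high_cut else "high"

# Example respondent with ten items scored on a 1-5 scale
print(participation_category([3, 4, 3, 2, 4, 3, 3, 2, 4, 3]))  # -> "medium"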
The determinants of P3A member farmers' participation in irrigation management at the Bantimurung Irrigation Area in this study were formulated based on the descriptions of Cohen, J. and Uphoff, Sunarti, Wijayanti, N.A., and Lokita, namely age (years), level of education (years), number of dependents (people), land area (hectares), distance of residence to the irrigation channel (meters), and location of rice fields from the irrigation channel (meters) [8-11]. The results showed that the average age of farmers in P3A Samaturu was older compared to that of farmers in the other P3As. If age is taken as the basis for measuring one's maturity, it can be said that farmers in P3A Samaturu are relatively more mature than farmers in the other P3As. Hurlock (2002) affirmed that at the age of 40 years a person is in the category of early middle age, and at that age there are mature physical and mental changes that form the basis for someone to maintain the achievements previously attained [14]. In addition to age, the education level, the number of dependents, the distance of the residence to the paddy field, and the location of the paddy field relative to the irrigation network were on average higher for P3A Samaturu farmers compared to farmers in the other two P3As. The conclusion that can be drawn from the data on the determining factors of participation of P3A member farmers is that, with average values that do not differ much between one P3A and another, the desire of the farmers to participate in irrigation management is certainly not too different. The level of participation of P3A member farmers and the determinants of participation of the three P3As in the Bantimurung Irrigation Area have been described, showing that each P3A has tried to increase its participation even though it is not yet optimal, and that this participation is motivated by several determinants attached to the farmers themselves. The extent of the relationship between the level of participation and the determinants of participation was then analyzed, with the results shown in Table 3.
Conclusion and Suggestions
In general, the level of participation of farmers in the management of irrigation channels in P3A Karya Bersama (upstream), P3A Samaturu (central), and P3A Sare Te'ne (downstream) is classified as medium, where age, land area, distance of residence from the irrigation channels, and location of the rice fields from the irrigation channels are determining factors of participation. Meanwhile, the number of family dependents and the education level are factors that have no relationship with the level of farmer participation in the management of irrigation channels in the Bantimurung Irrigation Area, Maros Regency.
"year": 2020,
"sha1": "a566735216516c87bd14517c839c063b0d7bb077",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/575/1/012213",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d8e968017202db7408d6da0aa76efeeabbdc10b0",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
Pricing and Budget Allocation for IoT Blockchain with Edge Computing
Attracted by the inherent security and privacy protection of the blockchain, incorporating blockchain into the Internet of Things (IoT) has been widely studied in recent years. However, the mining process requires high computational power, which prevents IoT devices from directly participating in blockchain construction. For this reason, edge computing services are introduced to help build the IoT blockchain, where IoT devices can purchase computational resources from the edge servers. In this paper, we consider the case where IoT devices also have other tasks that need the help of edge servers, such as data analysis and data storage. The profits they can get from these tasks are closely related to the amounts of resources they purchase from the edge servers. In this scenario, IoT devices will allocate their limited budgets to purchase different resources from different edge servers such that their profits are maximized. Moreover, edge servers will set the "best" prices such that they get the biggest benefits. Accordingly, there arises a pricing and budget allocation problem between edge servers and IoT devices. We model the interaction between edge servers and IoT devices as a multi-leader multi-follower Stackelberg game, whose objective is to reach the Stackelberg Equilibrium (SE). We prove the existence and uniqueness of the SE point and design efficient algorithms to reach it. Finally, we verify our model and algorithms through extensive simulations, and the results show the correctness and effectiveness of our designs.
I. INTRODUCTION
In the past few decades, the Internet of Things (IoT) has developed greatly and attracted more and more attention in academia and industry. IoT technology helps integrate data by connecting different types of devices and has played an irreplaceable role in many fields, such as smart homes, smart factories, smart grids, and so on. In a traditional centralized IoT system, all IoT devices are connected to a centralized cloud server, which is used to manage devices and coordinate communications among them. The most serious drawback of this centralized architecture is that it faces many problems, such as a single point of failure, poor scalability, and network congestion [1]. Some studies introduce distributed IoT [2] and peer-to-peer (P2P) networks [3] to overcome these problems. However, the above studies did not solve the inherent threats and vulnerabilities of the IoT, such as security and privacy issues [4].
A very effective way to address the above issues is to incorporate blockchain into IoT [5]. Blockchain technology has been widely used since it was first implemented for Bitcoin in 2009 [6]. A blockchain records data as a decentralized public ledger; it does not require a third-party server to store the data. Instead, data are stored in the form of blocks and maintained by all of the members of the blockchain network. This distributed feature allows a blockchain to avoid the single point of failure that may occur in centralized systems. The blocks are linked by cryptography, and thus any change in a block will affect the subsequent blocks. The security of a blockchain mainly comes from the way a new block is generated, which is called mining. To generate a new block, the members of the blockchain network need to win a competition to solve a hash puzzle, which is very computation-consuming, and the winner gets a reward from the blockchain network platform. In this paper, we consider that the IoT blockchain network adopts the Proof-of-Work (PoW) consensus mechanism. It is worth mentioning that some researchers have proposed a Directed Acyclic Graph (DAG) based blockchain, known as the tangle, for lightweight IoT applications, such as IOTA [7]. However, the DAG-based blockchain has many vulnerabilities: it faces the threats of denial-of-service attacks and spam attacks [8], and it is vulnerable to double-spending attacks [9]. Therefore, we do not adopt the DAG-based blockchain in this paper.
As mentioned before, solving the hash puzzle is computation-consuming, so it is hard for lightweight IoT devices to participate in the mining process. Fortunately, edge computing services are helpful for establishing an IoT blockchain [10], where IoT devices can purchase computational power from edge servers. Consider an IoT blockchain network that contains many IoT systems, such as smart factories or smart homes. Each IoT system can be seen as a group, and all of the groups together maintain the operation of the blockchain. The IoT blockchain network will attract nearby IoT systems to join it because of its security and privacy protection. Motivated by the reward from the blockchain network platform, the IoT devices in an IoT system will purchase computational resources from the edge server that provides hash computing service (hash-server) to participate in the mining process. In addition, these IoT devices may have other tasks that require the help of the edge server that provides task processing service (task-server). For example, IoT devices used for building smart cities or realizing augmented reality (AR) need to store and process large amounts of data [11], which is very difficult for lightweight IoT devices to accomplish. Thus it is necessary for these devices to purchase resources from the task-server, so that they can perform their tasks with its help. Generally, the more resources they purchase, the faster and better they perform their tasks, and the more profits they can get from the tasks. As IoT devices usually have limited budgets, how to allocate the budgets to purchase different resources from the two kinds of servers so as to maximize profits is, therefore, an important problem for these IoT devices.
Driven by profit, edge servers will set the unit prices of their resources to maximize their utilities, and accordingly there arises a pricing and budget allocation problem between edge servers and IoT devices. We model the interaction between edge servers and IoT devices as a multi-leader multi-follower Stackelberg game, where edge servers are leaders and IoT devices are followers. The main contributions of this paper are summarized as follows.
• We introduce the IoT blockchain network with edge computing and describe the operation of the IoT blockchain system.
• We establish a multi-leader multi-follower Stackelberg game to model the interaction between edge servers and IoT devices. We prove that the Stackelberg equilibrium of the game exists and is unique, and then propose algorithms to find the Stackelberg equilibrium in a limited number of interactions.
• We perform extensive simulations to validate the feasibility and effectiveness of our proposed algorithms; simulation results show that our algorithms can quickly reach the unique Stackelberg equilibrium point.
The rest of this paper is structured as follows. Section II introduces the related works. Section III describes the IoT blockchain with edge computing. Section IV presents the Stackelberg game. Section V designs algorithms to reach the Stackelberg equilibrium. Section VI performs numerical simulations. Finally, Section VII concludes this paper.
II. RELATED WORKS
Due to the inherent security and privacy protection properties of the blockchain, incorporating blockchain technology into IoT has been widely studied in recent years. Novo [12] designs an architecture for scalable access management in IoT based on blockchain technology. To address the privacy and security issues in the smart grid, Gai et al. [13] present a permissioned blockchain edge model combining blockchain and edge computing technologies. Guo et al. [14] design a blockchain-enabled energy management system to ensure the security of energy trading between the power grid and energy stations. Li et al. [15] propose a resource optimization for delay-tolerant data in blockchain-enabled IoT; they use blockchain technology to improve data security and efficiency in the IoT system. Liu et al. [16] propose a blockchain-based approach for data provenance in IoT, which ensures the correctness and integrity of query results. Qi et al. [17] build a compressed and private data sharing framework with the help of blockchain technology, which provides efficient and private data management for industrial IoT. Lei et al. [18] design the groupchain, a two-chain structured blockchain that ensures the scalability of IoT services with fog computing.
There are some works that use the Stackelberg game to study the interaction among the participants in edge computing-based blockchain networks, which are closely related to our work. Chang et al. [19] study the incentive mechanism for edge computing-based blockchain networks, in which they aim to find the Stackelberg equilibrium between the edge service provider and the miners. Yao et al. [20] use a Stackelberg game to model the pricing and resource trading problem between the cloud provider and industrial IoT devices, and they find a near-optimal policy through a multi-agent reinforcement learning algorithm. Xiong et al. [21], [22] formulate a Stackelberg game to jointly maximize the profit of mobile devices and the edge server in mobile blockchain networks. Ding et al. [23] investigate the interaction between the blockchain platform and IoT devices, where their objective is to find the Stackelberg equilibrium such that both the blockchain platform and the IoT devices maximize their utility and profits, respectively. Guo et al. [24] study a Stackelberg game and double-auction based task offloading scheme for mobile blockchain. However, all of these existing works only considered the computational power demand of IoT devices, and the game models in these works have only one leader, which is fundamentally different from our work.
III. IOT BLOCKCHAIN WITH EDGE COMPUTING
In this section, we introduce the model of the IoT blockchain with edge computing and describe the operation of the blockchain system. Moreover, we analyze the security and reliability of the IoT blockchain network.

A. System Model

Fig. 1 depicts the architecture of the system model of this paper. Consider an IoT blockchain network that has been running for a period and adopts the proof-of-work consensus mechanism. The IoT blockchain network consists of many IoT systems, such as smart factories or smart homes. Each IoT system includes a set of IoT devices and can be seen as a group. Due to the security and privacy protection brought by the blockchain, the IoT blockchain network will continuously attract other IoT systems to join. In each IoT system, there are two edge servers, which provide hash computing service (hash-server) and task processing service (task-server), respectively. Motivated by the reward from the blockchain network platform, devices in the IoT system would like to be miners of the blockchain network; that is, they will compete with other miners for the right to generate a new block by solving a hash puzzle. Due to their limited computational power, these devices will purchase computational resources from the hash-server and then offload their hash puzzles to the hash-server during the mining process. Moreover, each IoT device has its own tasks, such as data collecting, data analysis, and data processing. IoT devices can benefit from performing these tasks. However, when the amount of data is relatively large, it is difficult for these IoT devices to perform the tasks. Then the IoT devices will purchase task processing resources from the task-server to perform their tasks. Generally, the more resources they purchase, the faster and better they perform their tasks, and the more benefit they get from these tasks.

B. Blockchain System

1) System Initialization: Before each IoT device joins the blockchain network, it needs to register with the Authentication Server (AS), which is authorized by the blockchain platform. The process is as follows. A device s_n first selects its own identifier ID_n and then generates its public/private key pair (PK_n, SK_n) with the Elliptic Curve Digital Signature Algorithm (ECDSA) asymmetric cryptography [25]. The public key is open to the whole blockchain network, and the private key is known only by the device itself. To avoid public key replacement attacks, each IoT device s_n needs to get its digital certificate Cert_n from the Certificate Authority (CA). The digital certificate Cert_n is generated with the CA's private key SK_ca from s_n's identifier ID_n and public key PK_n, i.e., Cert_n = SK_ca(ID_n, PK_n). The CA sends the encrypted message m = PK_n(Cert_n) to s_n, and s_n decrypts the message with its private key to get its digital certificate, that is, Cert_n = SK_n(m). The digital certificate is used to uniquely identify the IoT device. After getting its digital certificate, the device s_n submits its registration information reg_n = PK_as(ID_n, PK_n, Cert_n) to the AS. Upon receiving reg_n, the AS obtains the registration information by decrypting reg_n with its private key, (ID_n, PK_n, Cert_n) = SK_as(reg_n). Then the AS checks the identity of s_n; if (ID_n, PK_n) = PK_ca(Cert_n), s_n passes the authentication. Device s_n gets its wallet address WAD_n from the AS, which is generated from its public key with the SHA256, RIPEMD-160, and BASE58 algorithms [26]. The AS stores the information (ID_n, PK_n, Cert_n, WAD_n) about IoT device s_n.
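As a rough illustration of this registration flow, the Python sketch below (using the third-party cryptography package) generates a device key pair, lets a CA-like key sign (ID_n, PK_n) to play the role of Cert_n, and derives a wallet-address digest with SHA256 and RIPEMD-160. The identifiers and the helper make_wallet_address are hypothetical, BASE58 encoding is omitted, and this is a sketch of the general scheme, not the paper's implementation.

import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Device s_n generates its ECDSA key pair (PK_n, SK_n)
sk_n = ec.generate_private_key(ec.SECP256K1())
pk_bytes = sk_n.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint)

# The CA signs (ID_n, PK_n) with its private key to issue Cert_n
sk_ca = ec.generate_private_key(ec.SECP256K1())
cert_n = sk_ca.sign(b"device-001" + pk_bytes, ec.ECDSA(hashes.SHA256()))

# The AS verifies the certificate before registering s_n (raises if invalid)
sk_ca.public_key().verify(cert_n, b"device-001" + pk_bytes,
                          ec.ECDSA(hashes.SHA256()))

def make_wallet_address(pub: bytes) -> bytes:
    # SHA256 then RIPEMD-160, as in Bitcoin-style addresses; note that
    # RIPEMD-160 availability in hashlib depends on the OpenSSL build.
    return hashlib.new("ripemd160", hashlib.sha256(pub).digest()).digest()

print(make_wallet_address(pk_bytes).hex())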
2) Create Transactions: In the IoT blockchain network, devices can trade with each other and purchase or exchange sensing data with each other. For example, suppose a device s_n wants to purchase sensing data from s_m. s_n first generates a request req = {ID_n || WAD_n || DataMes_n || T_stamp}, where DataMes_n is the description of its request and T_stamp is the timestamp of the message. Then s_n signs the request message with its private key SK_n and attaches its digital certificate Cert_n to prove its identity: Sreq = {req || Sign_SK_n(Hash(req)) || Cert_n}. At last, s_n encrypts the message with s_m's public key, EncSreq = PK_m(Sreq), and sends the message EncSreq to s_m. Upon receiving the message, s_m first decrypts it with its private key, Sreq = SK_m(EncSreq), and then checks the signature and digital certificate of s_n. If Hash(req) = PK_n(Sign_SK_n(Hash(req))), s_m can be sure that the message was sent by s_n. The digital certificate is used to validate the authenticity of the signature: s_m uses the CA's public key to decrypt the digital certificate Cert_n, and if (ID_n, PK_n) = PK_ca(Cert_n), it is known that the signature was signed by s_n. Then s_m will send a response message to s_n; similar to the above process, all of the messages between s_n and s_m are sent in an encrypted way. After finishing the trade, s_n will broadcast the trading record to the blockchain network, where it waits to be stored in the blockchain. Besides the trading information between devices, some devices may want to store sensing data that are important and sensitive in the blockchain. Both the trading records and the sensing data are considered transactions.
3) Building Blocks: IoT devices collect a certain number of transactions in a period and package them into a block. Each block is composed of two parts: the block content and the block header. The block content records the details of the transactions in a Merkle tree structure. The block header consists of the previous block's hash value, which is used as the cryptographic link that creates the chain; a version number that is used for tracking software or protocol updates; a timestamp that records the time at which the block was generated; the Merkle tree root of all the transactions; a hash threshold value that records the current mining difficulty; and a nonce, which is used for solving the PoW puzzle. The mining process is similar to that in the Bitcoin system. Denote by h_data the block header excluding the nonce; then the mining process is to find a nonce a such that Hash(h_data + a) < difficulty [6], where difficulty is a 256-bit binary number controlled by the blockchain platform to adjust the block generation speed. As the hash operation is very costly, the hash task of each IoT device will be offloaded to the hash-server, and each device is only responsible for generating new blocks after receiving the result from the hash-server.
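The following minimal Python sketch illustrates this nonce search; the toy 20-bit difficulty and the header bytes are illustrative only, not the network's actual parameters.

import hashlib

def mine(h_data: bytes, difficulty: int) -> int:
    """Search for a nonce with Hash(h_data + nonce) below the threshold."""
    nonce = 0
    while True:
        digest = hashlib.sha256(h_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < difficulty:
            return nonce
        nonce += 1

difficulty = 1 << (256 - 20)  # about 2^20 expected attempts
header = b"prev_hash|version|timestamp|merkle_root|threshold"
print("winning nonce:", mine(header, difficulty))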
4) Carrying Out Consensus Process:
The device that first solves the PoW puzzle gets the right to generate a new block, and then the new block needs to be verified by other devices. By adopting the group signature and authentication scheme proposed in [27], each IoT system in the blockchain network can be seen as a group. A new block generated by a device in group i needs to pass a two-round validation before being added to the blockchain. In the first round, the block is checked by the devices in group i, and each device validates the transactions recorded in this block. The block gets a signature if it passes the validation of a device, and it can be broadcast to other groups for a second round of validation only if it gets all the signatures of the devices in group i. In the second round, upon receiving the block, devices in other groups only check the signatures attached to the block. If more than 50% of the devices agree with the block, the block is added to the blockchain. Due to memory constraints, we let each IoT device store only a certain number of the latest blocks, as is also applied in [28], [29]. The whole blockchain is stored in the monitoring nodes [29] of each group (IoT system), as the monitoring nodes are authoritative and have larger memory capacity.
C. Security and Reliability Analysis
Different from traditional IoT systems, merging an IoT system into a blockchain network has many advantages, especially in terms of security and reliability. Specifically, the IoT blockchain network with edge computing inherits the security and reliability of the blockchain, as follows.

1) Getting rid of a third party: IoT devices carry out transactions in a P2P manner, in which each device has the same rights, and the trust between devices is built with the help of the smart contracts of the blockchain. Therefore, the IoT blockchain network guarantees robustness and scalability without involving a trusted third party.
2) Privacy protection: Blockchain can help deal with the increasing risk of sensitive data being exposed to malware. In the blockchain network, the communication between devices uses asymmetric encryption technology to protect sensitive data. A message sent to a device is encrypted with the receiver's public key and can only be decrypted with the receiver's private key. In this way, even if malicious devices intercept the message, they cannot learn its content.
3) Integrity: The blocks are duplicated and recorded in different devices in a distributed way, so it is very hard for attackers to tamper with the blockchain. Besides, the blocks are linked together through cryptography. If an attacker attempts to tamper with the transactions in a block, the hash value of each subsequent block will change, and thus the attacker needs to redo the PoW puzzle of each subsequent block, which is nearly impossible.
4) Authentication: In this IoT blockchain network, each new block needs to pass a two-round validation before it is added to the blockchain. It is very hard for an attacker to control a whole group (IoT system), so a new block that contains illegal transactions cannot pass the first round of validation. Even if some attackers forge the signature of a group, the illegal block cannot pass the second round of validation.
IV. MULTI-LEADER MULTI-FOLLOWER STACKELBERG GAME

Consider that a new IoT system now joins the IoT blockchain network; the IoT devices in this system will purchase resources from the edge servers to participate in the mining process and perform their tasks. Driven by profit, the hash-server and task-server will adjust the unit prices of their resources to maximize their utilities. After the two servers publish their pricing strategies, each IoT device will determine its strategy for purchasing resources from the two servers according to the resource prices and its budget, such that its profit is maximized. In this section, we first give the utility functions of the two servers and the profit function of each device, and then describe the problem to be addressed in this paper; specifically, we model the interaction between the two servers and IoT devices as a multi-leader multi-follower Stackelberg game.
A. Utility Function
We assume that each IoT device has a unique budget for purchasing resources from the edge servers. The amounts of resources they purchase from the two edge servers depend on how much profit they can get from the trading and are limited by their budgets. Each IoT device will allocate its budget to purchase different services from the hash-server and the task-server to maximize its profit. The hash-server and the task-server, in turn, will set the unit prices of their resources to maximize their utilities. Moreover, there is competition between the two servers. For example, if the unit price of resources from the hash-server is too high, IoT devices will purchase more resources from the task-server, and vice versa. Naturally, we model the interaction between the two servers and IoT devices as a multi-leader multi-follower Stackelberg game, where the hash-server and the task-server act as leaders who first set the unit prices of their resources, and IoT devices act as followers who determine their strategies according to the leaders' bids.
We use S = {s_1, s_2, ..., s_n} to denote the set of IoT devices; the budget of each device s_i ∈ S for purchasing resources is b_i, where b_i > 0. The unit prices of resources from the hash-server and the task-server per day are denoted by p_h and p_t, respectively. Let x_i^h and x_i^t be the amounts of resources purchased by s_i from the hash-server and the task-server, respectively. The amount of resources purchased by each device is limited by its budget, that is, x_i^h p_h + x_i^t p_t ≤ b_i. The profit of each IoT device s_i ∈ S includes two parts. The first part comes from mining new blocks for the blockchain network, which is related to the amount of resources x_i^h purchased by s_i from the hash-server. The second part comes from performing tasks, which is related to the amount of resources x_i^t purchased by s_i from the task-server. We use P_i^h and P_i^t to denote the two parts of the profit, respectively. In the following, we describe how to calculate them.
As the blockchain network has been running for a period, we assume in this paper that the total hash computational power H of the blockchain network over a period of time in the future can be estimated. In the PoW consensus, the first miner that solves the hash puzzle has the right to generate a new block and will get the reward from the blockchain network. The probability of a miner winning the mining competition is directly related to its hash computational power. We use pro_i to denote the probability that device s_i ∈ S is the first one to solve the hash puzzle; pro_i can be estimated from the ratio of the computational power x_i^h purchased by s_i to the hash computational power of the whole network. Generally, the blockchain network adjusts the difficulty of the hash puzzle periodically according to the total hash computational power in the network, to stabilize the block generation speed. We assume that an average of N new blocks are generated per day and that the miners get a reward R for generating a new block. The expected reward obtained by device s_i per day is pro_i·R·N, and the cost is x_i^h·p_h. Then the expected profit that s_i gets from the mining process in a day is calculated as P_i^h = pro_i·R·N − x_i^h·p_h. The profit P_i^t that s_i gets from performing tasks is related to the amount of resources purchased by s_i from the task-server; we use a logarithmic function of x_i^t, with two constant parameters α > 0 and β ≥ 1, to estimate the benefit of device s_i for performing tasks, so that P_i^t is this logarithmic benefit minus the cost x_i^t·p_t. Therefore, the total profit of device s_i is calculated as P_i = P_i^h + P_i^t. We use U_h and U_t to denote the utilities of the hash-server and task-server, respectively. Assume that the unit hash resource cost of the hash-server is c_h and the unit task resource cost of the task-server is c_t. Then the utilities of the two servers are calculated as U_h = (p_h − c_h)·Σ_{s_i∈S} x_i^h and U_t = (p_t − c_t)·Σ_{s_i∈S} x_i^t. To make the problem reasonable, the conditions ∂P_i^h/∂x_i^h(0) ≥ 0 and ∂P_i^t/∂x_i^t(0) ≥ 0 should hold; otherwise, IoT devices would never purchase resources from the hash-server or the task-server. Hence, we have p_h ≤ RN/H and p_t ≤ αβ. As the servers will never sell their resources at a price below cost, p_h ≥ c_h and p_t ≥ c_t. Therefore, in this paper, we assume that c_h ≤ p_h ≤ RN/H and c_t ≤ p_t ≤ αβ.
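To make the model concrete, the Python sketch below implements the profit and utility functions numerically. The exact forms of pro_i and of the logarithmic task benefit are not reproduced in the extracted text, so the sketch assumes pro_i = x_h/(x_h + H) and P_i^t = α·ln(1 + β·x_t) − p_t·x_t, which are consistent with the stated bounds p_h ≤ RN/H and p_t ≤ αβ and with the strict concavity used below; they are our assumptions, not the paper's exact formulas.

import math

R, N, H = 300.0, 144.0, 1000.0   # reward per block, blocks per day, network hash power
ALPHA, BETA = 40.0, 2.0          # task-profit parameters (alpha > 0, beta >= 1)

def device_profit(x_h: float, x_t: float, p_h: float, p_t: float) -> float:
    """P_i = P_i^h + P_i^t under the assumed functional forms."""
    p_mine = (x_h / (x_h + H)) * R * N - p_h * x_h           # mining profit P_i^h
    p_task = ALPHA * math.log(1.0 + BETA * x_t) - p_t * x_t  # task profit P_i^t
    return p_mine + p_task

def server_utilities(purchases, p_h, p_t, c_h=10.0, c_t=10.0):
    """U_h = (p_h - c_h)*sum(x_h), U_t = (p_t - c_t)*sum(x_t)."""
    u_h = (p_h - c_h) * sum(x[0] for x in purchases)
    u_t = (p_t - c_t) * sum(x[1] for x in purchases)
    return u_h, u_t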
B. Problem Formulation
The interaction between the two servers and IoT devices has two stages. In the upper stage, the hash-server and the task-server offer the unit prices of their resources. In the lower stage, IoT devices determine their strategies to maximize their profits according to the prices of the different services. In the following, we give a detailed definition of the problem in each stage.

Problem 1. The problem in the lower stage (followers' side): each device s_i ∈ S chooses (x_i^h, x_i^t) to maximize its profit P_i, subject to the budget constraint x_i^h p_h + x_i^t p_t ≤ b_i and x_i^h ≥ 0, x_i^t ≥ 0.

Problem 2. The problem in the upper stage (leaders' side): the hash-server chooses p_h ∈ [c_h, RN/H] to maximize U_h, and the task-server chooses p_t ∈ [c_t, αβ] to maximize U_t.

Note that in the lower stage, each IoT device makes its decision independently, and in the upper stage, the two servers are also non-cooperative. Therefore, the problems in the two stages form a non-cooperative multi-leader multi-follower Stackelberg game. Our objective is to find the Stackelberg equilibrium (SE) point of the game, where none of the players of the game wants to change its strategy unilaterally. The SE point in our model is defined as follows.
Definition 1. Let x_i^* = {x_i^{h*}, x_i^{t*}} be a strategy of IoT device s_i, and use X^* = {x_1^*, x_2^*, ..., x_n^*} to denote the set of strategies of all of the IoT devices. Let p_h^* and p_t^* be the strategies of the hash-server and the task-server, respectively. The point (X^*, p_h^*, p_t^*) is the Stackelberg equilibrium point if the following conditions are satisfied: P_i(x_i^*, p_h^*, p_t^*) ≥ P_i(x_i, p_h^*, p_t^*), U_h(p_h^*, p_t^*, X^*) ≥ U_h(p_h, p_t^*, X^*), and U_t(p_t^*, p_h^*, X^*) ≥ U_t(p_t, p_h^*, X^*), where x_i = {x_i^h, x_i^t} is an arbitrary feasible strategy of any device s_i ∈ S, and p_h and p_t are arbitrary feasible strategies of the hash-server and the task-server, respectively.
V. SOLUTION OF THE MULTI-LEADER MULTI-FOLLOWER STACKELBERG GAME
In this section, we analyze the existence and uniqueness of the Stackelberg equilibrium point of the multi-leader multifollower Stackelberg game. We first analyze the lower stage of the game, where each follower purchases different resources from the two servers with a limited budget to maximize its profits. Then we analyze the upper stage of the game, where the two servers determine their pricing strategies to maximize their utilities.
A. Lower stage (followers side) analysis
The second-order derivatives of the profit function P_i of device s_i with respect to x_i^h and x_i^t are both strictly negative, and the cross partial derivatives are zero. Therefore, the profit function P_i is strictly concave, and the problem of each device s_i in the lower stage is actually a convex optimization problem. Consequently, we use the Karush-Kuhn-Tucker (KKT) conditions to solve the problem. Let λ_1, λ_2, and λ_3 be the Lagrange multipliers associated with the constraints in Eqs. (10) and (11). Then we define the Lagrangian function as L(x_i^h, x_i^t, λ_1, λ_2, λ_3) = P_i + λ_1(b_i − x_i^h p_h − x_i^t p_t) + λ_2 x_i^h + λ_3 x_i^t.
The KKT conditions (comprising four groups of conditions) of Problem 1 are listed as follows.

Stationarity conditions: ∂P_i/∂x_i^h − λ_1 p_h + λ_2 = 0 and ∂P_i/∂x_i^t − λ_1 p_t + λ_3 = 0 (Eqs. (23) and (24)).
Primal feasibility conditions: x_i^h p_h + x_i^t p_t ≤ b_i (Eq. (25)) and x_i^h ≥ 0, x_i^t ≥ 0 (Eq. (26)).
Dual feasibility conditions: λ_1 ≥ 0, λ_2 ≥ 0, λ_3 ≥ 0 (Eq. (27)).
Complementary slackness conditions: λ_1(b_i − x_i^h p_h − x_i^t p_t) = 0, λ_2 x_i^h = 0, λ_3 x_i^t = 0 (Eq. (28)).

The optimal solution of the problem is obtained in one of the following four cases.
(1) Case 1: x_i^h = x_i^t = 0. According to the KKT condition (28), we have λ_1 = 0 since b_i > 0. Substituting this into KKT conditions (23) and (24), we have λ_2 = p_h − RN/H and λ_3 = p_t − αβ. As p_h ≤ RN/H and p_t ≤ αβ hold, we have λ_2 ≤ 0 and λ_3 ≤ 0. The KKT conditions can be satisfied only when λ_2 = 0 and λ_3 = 0. Thus we need to check whether p_h = RN/H and p_t = αβ hold; if yes, the optimal solution is x_i^h = x_i^t = 0, otherwise the optimal solution is not in this case.
(2) Case 2: x_i^h = 0 and x_i^t > 0. Consider λ_1 > 0; according to (28), the budget constraint is tight, so with x_i^h = 0 we have x_i^t = b_i/p_t (Eq. (31)). Substituting (31) and λ_3 = 0 into KKT condition (24) yields λ_1 (Eq. (32)), and substituting (32) and x_i^h = 0 into (23) yields λ_2 (Eq. (33)). It is obvious that x_i^t = b_i/p_t > 0 is satisfied. If the λ_1 obtained from (32) satisfies λ_1 > 0 and the λ_2 obtained from (33) satisfies λ_2 ≥ 0, then all of the KKT conditions are satisfied, and the optimal solution is (0, b_i/p_t). Otherwise, the optimal solution is not in Case 2.
(3) Case 3: x_i^h > 0 and x_i^t > 0 with a non-binding budget. Suppose λ_1 = 0; substituting it, together with λ_2 = λ_3 = 0, into the stationarity conditions (23) and (24) gives the interior solution (x_i^h, x_i^t) of Eq. (37), at which the marginal profit of each resource equals its price. As p_h ≤ RN/H and p_t ≤ αβ, it is easy to see that condition (26) is satisfied. Then we check whether condition (25) is satisfied. If yes, the solution shown in Eq. (37) is the optimal solution. Otherwise, λ_1 = 0 is not feasible in this case.
(4) Case 4: x_i^h > 0 and x_i^t > 0 with a binding budget. Now we consider the case that λ_1 > 0. According to condition (28), the budget constraint holds with equality, x_i^h p_h + x_i^t p_t = b_i. Solving conditions (23) and (24) expresses x_i^h and x_i^t as functions of λ_1 (Eq. (38)); substituting them into the budget equality gives an equation (Eq. (39)) that is quadratic in an auxiliary variable t, whose roots are t = (B ± √(B² + 4Aα))/(2A). As t > 0, we keep the positive root (Eq. (40)), and by solving (40) we obtain λ_1 (Eq. (41)). Then we check whether the λ_1 obtained from (41) satisfies λ_1 > 0. If λ_1 ≤ 0, the solutions found in (38) cannot satisfy all of the KKT conditions, and thus the optimal solution is not in Case 4. Otherwise, we substitute (41) into (38) to get a solution (x_i^h, x_i^t) and check whether it is the optimal solution. If not, the optimal solution is not in Case 4.
The optimal solution will be found in one of the four cases, from Case 1 to Case 4. Based on the above analysis, we present the algorithm FOSD to solve Problem 1 in the lower stage. The pseudo-code is shown in Algorithm 1.
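As a numeric stand-in for FOSD (whose KKT case analysis is given above), the sketch below solves the same convex program for a single device directly with SciPy's SLSQP solver; it reuses the device_profit sketch from Section IV and is an illustration, not the authors' Algorithm 1.

from scipy.optimize import minimize

def fosd(budget: float, p_h: float, p_t: float):
    """Maximize device_profit subject to x_h*p_h + x_t*p_t <= budget, x >= 0."""
    objective = lambda x: -device_profit(x[0], x[1], p_h, p_t)
    cons = ({"type": "ineq",
             "fun": lambda x: budget - p_h * x[0] - p_t * x[1]},)  # budget
    res = minimize(objective,
                   x0=[budget / (2 * p_h), budget / (2 * p_t)],
                   bounds=[(0, None), (0, None)],
                   constraints=cons, method="SLSQP")
    return res.x  # (x_h*, x_t*)

print(fosd(50.0, 20.0, 30.0))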
B. Upper stage (leaders side) analysis
On the leaders' side, the hash-server and task-server set the unit prices p_h and p_t of their resources to maximize their utilities U_h and U_t, which are calculated by Eqs. (5) and (6), respectively. Note that the pricing strategies of the leaders directly affect the strategies of the followers, as analyzed in Section V-A. Therefore, the strategies of the hash-server and task-server affect each other's utility. The game between the two servers is non-cooperative and competitive. To maximize its utility, each server (leader) should give a suitable unit price for its resource. For example, for the hash-server, if the unit resource price p_h is too high, the followers will prefer to purchase more resources from the task-server, and thus the hash-server will get a very low utility. On the contrary, if p_h is set too low, although the followers tend to purchase resources from the hash-server, the total purchased resources are limited due to the budget limitations of these followers, which also results in a low utility. In the following, we show that the game between the two servers admits a Nash equilibrium (NE) point. By definition, at the NE point, none of the servers can improve its utility by unilaterally changing its strategy. Therefore, when the game reaches an NE point, the interaction between the two servers is suspended, and the pricing strategies of the two servers will never change again. Next, we prove the existence and uniqueness of the NE point of the game between the two servers.

Theorem 1. The NE point of the game between the two servers exists and is unique.
Proof. As defined in Problem 2, the strategy space of the two servers is [c_h, RN/H] × [c_t, αβ], which is a non-empty, closed, and convex subset of Euclidean space. Next, we calculate the second-order derivatives of the utility functions U_h(·) and U_t(·).
We first consider the utility of the hash-server. According to the analysis in Section V-A, the amount of resources x_i^h purchased by device s_i from the hash-server must take one of the four forms derived in Cases 1-4. The first- and second-order derivatives of x_i^h(·) with respect to p_h can then be calculated for each of the four cases. According to function (5), the second-order derivative of U_h(·) with respect to p_h follows from these derivatives; since p_h − c_h < p_h, combining the derivatives of the four cases shows that ∂²U_h/∂(p_h)² ≤ 0. Therefore, the utility function U_h(·) of the hash-server is a concave function with respect to p_h.
Similarly, we have ∂²U_t/∂(p_t)² ≤ 0, so the utility function U_t(·) of the task-server is a concave function with respect to p_t. Thus the interaction between the two servers forms a concave two-person game. According to [30], the NE point of the game between the two servers exists and is unique.
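The concavity claim can be sanity-checked numerically (this is an illustration, not a proof) by sweeping p_h at a fixed p_t with the fosd and server_utilities sketches above and inspecting the second-order finite differences of U_h:

import numpy as np

budgets = [50, 60, 70, 80, 90]
p_t = 30.0
prices = np.linspace(11.0, 40.0, 30)
u_h = [server_utilities([fosd(b, p, p_t) for b in budgets], p, p_t)[0]
       for p in prices]
second_diffs = np.diff(u_h, n=2)
print("max second difference:", second_diffs.max())  # expected <= ~0 up to solver noise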
After the leaders (the two servers) issue their strategies, the followers (IoT devices) determine their strategies for purchasing different resources from the two servers, as discussed in Section V-A. The interaction between servers and IoT devices forms a multi-leader multi-follower Stackelberg game, and the objective is to find the Stackelberg equilibrium (SE) point of the game, as given in Definition 1. Next, we prove that the Stackelberg equilibrium of the game between servers and IoT devices exists and is unique.
Theorem 2. The SE point of the game between servers and IoT devices exists and is unique.
Proof. As analyzed in Section V-A, each IoT device finds its optimal strategy in one of the three cases from Case 2 to Case 4, which implies that the strategy of each IoT device is unique once the two servers have given their pricing strategies. Since, by Theorem 1, the game between the two servers has a unique NE point, we conclude that the SE point of the game between servers and IoT devices exists and is unique.
To find the NE point of the game between the two servers, based on the optimal strategies of the IoT devices, we propose an algorithm, FNES, to find the final pricing strategies of the two servers. FNES is based on the sub-gradient technique [31], [32], and it invokes FOSD as its subroutine. The pseudo-code of algorithm FNES is shown in Algorithm 2.
In FNES, we first set a feasible pricing strategy for each server, and a small step $\Delta$ is used to update the strategies of the servers. We iteratively adjust the pricing strategy of each server in an alternating way. For the hash-server, we calculate its utility $U_h$ under the pricing strategies $p_h$, $p_h + \Delta$ and $p_h - \Delta$, and the best of the three is selected as the strategy for the next round. Note that the value of $U_h$ depends on the strategy of each IoT device, so we invoke the FOSD algorithm to obtain the strategy of each IoT device. The strategy adjustment of the task-server is analogous. In each iteration, we update the step $\Delta$ with the attenuation coefficient $\delta$, where $\delta \in (0, 1)$. FNES terminates when neither server changes its pricing strategy.
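To make the alternating adjustment concrete, here is a minimal Python sketch of the FNES loop; it is an illustration, not the authors' implementation. The callables `utility_h` and `utility_t` are placeholders for Eqs. (5) and (6) and are assumed to invoke the follower solver (FOSD, Algorithm 1) internally; `p_h_max` and `p_t_max` stand for the upper bounds $RN/H$ and $\alpha\beta$.

```python
def fnes(utility_h, utility_t, c_h, p_h_max, c_t, p_t_max,
         delta=1.0, decay=0.99, min_step=1e-6):
    """Alternating one-step price adjustment for the two leaders.

    utility_h(p_h, p_t) and utility_t(p_t, p_h) are assumed to evaluate
    the servers' utilities, calling FOSD internally to obtain each IoT
    device's purchases under the quoted prices.
    """
    p_h = 0.5 * (c_h + p_h_max)        # initialization, as in Algorithm 2
    p_t = 0.5 * (c_t + p_t_max)
    while delta > min_step:
        old_h, old_t = p_h, p_t
        # Hash-server: keep, raise, or lower its price by one step.
        p_h = max((p_h, min(p_h + delta, p_h_max), max(p_h - delta, c_h)),
                  key=lambda p: utility_h(p, p_t))
        # Task-server: the same one-step adjustment, given the new p_h.
        p_t = max((p_t, min(p_t + delta, p_t_max), max(p_t - delta, c_t)),
                  key=lambda p: utility_t(p, p_h))
        if p_h == old_h and p_t == old_t:
            break                       # neither server moved: NE reached
        delta *= decay                  # shrink the step each round
    return p_h, p_t
```

With the default settings of Section VI ($c_h = c_t = 10$, $RN/H = 43.2$, $\alpha\beta = 80$), the initialization evaluates to $(26.6, 45)$, matching the starting point used for Fig. 2(a).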
VI. SIMULATIONS
In this section, we conduct numerical experiments to validate the feasibility and effectiveness of our algorithms.
A. Experimental settings
In the experiments, we assume the total hash computational power $H$ of the IoT blockchain network in the next period is estimated to be 1000 GH/s. The mining reward $R$ from the blockchain platform is set to 300, and the number of new blocks generated per day, $N$, is set to 144. For the profit function of performing tasks, we set $\alpha$ to 40 and $\beta$ to 2. The unit resource cost of the hash-server and the task-server is set to 10, that is, $c_h = c_t = 10$. We consider 5 IoT devices in the IoT system that would like to purchase resources from the two servers; the budgets of the devices are 50, 60, 70, 80, and 90, respectively. For the algorithm FNES, we set the step $\Delta$ to 1 and the attenuation coefficient $\delta$ to 0.99. Unless otherwise declared, the above are the default settings of our experiments.

Algorithm 2 Find Nash Equilibrium for Servers (FNES)
Input: The game between the two servers and IoT devices;
Output: The pricing strategy $(p_h, p_t)$ for the two servers;
1: Initialize: $p_h = \frac{1}{2}(c_h + \frac{RN}{H})$, $p_t = \frac{1}{2}(c_t + \alpha\beta)$;
2: Set a small step $\Delta$, and the attenuation coefficient $\delta$ of the step;
3: while true do
4:   $p_h' = p_h$, $p_t' = p_t$; // record the current strategies
   // Adjust the strategy of the hash-server
5:   Calculate $U_h(p_h, p_t)$, $U_h(p_h + \Delta, p_t)$ and $U_h(p_h - \Delta, p_t)$ by invoking Algorithm FOSD for each IoT device with parameters $(p_h, p_t)$, $(p_h + \Delta, p_t)$ and $(p_h - \Delta, p_t)$, respectively;
6:   if $U_h(p_h + \Delta, p_t)$ is the largest of the three then
7:     $p_h = \min\{p_h + \Delta, RN/H\}$;
8:   else if $U_h(p_h - \Delta, p_t)$ is the largest of the three then
9:     $p_h = \max\{p_h - \Delta, c_h\}$;
10:  end if
   // Adjust the strategy of the task-server
11:  Calculate $U_t(p_t, p_h)$, $U_t(p_t + \Delta, p_h)$ and $U_t(p_t - \Delta, p_h)$ by invoking Algorithm FOSD for each IoT device with parameters $(p_t, p_h)$, $(p_t + \Delta, p_h)$ and $(p_t - \Delta, p_h)$, respectively;
12:  if $U_t(p_t + \Delta, p_h)$ is the largest of the three then
13:    $p_t = \min\{p_t + \Delta, \alpha\beta\}$;
14:  else if $U_t(p_t - \Delta, p_h)$ is the largest of the three then
15:    $p_t = \max\{p_t - \Delta, c_t\}$;
16:  end if
17:  if $p_h = p_h'$ && $p_t = p_t'$ then
18:    Break;
19:  end if
   // Reduce the step
20:  $\Delta = \delta \cdot \Delta$;
21: end while
22: return $(p_h, p_t)$
B. Results and Analyses
1) The convergence of algorithm FNES: In algorithm FNES, we set the default initial values of the pricing strategies of the two servers as $p_h = \frac{1}{2}(c_h + RN/H) = 26.6$ and $p_t = \frac{1}{2}(c_t + \alpha\beta) = 45$. As shown in Fig. 2(a), after 23 iterations the interaction between the two servers reaches the Nash equilibrium. However, when we set the initial values of the pricing strategies of the two servers as $p_h = RN/H = 43.2$ and $p_t = \alpha\beta = 80$, as shown in Fig. 2(b), it takes about 130 iterations to reach the Nash equilibrium, which is much slower than in Fig. 2(a). This indicates that the initialization significantly affects the convergence speed of algorithm FNES. We can also see that Fig. 2(a) and Fig. 2(b) reach the same Nash equilibrium even though the initializations differ, which supports the correctness of our analysis in Theorem 2. We investigate the effect of the step $\Delta$ on the convergence of algorithm FNES in Fig. 3. It can be seen that more iterations are needed to reach the Nash equilibrium when a smaller $\Delta$ is adopted. However, although a larger $\Delta$ approaches the equilibrium point quickly, more iterations are required to obtain a more accurate solution, as we must wait for the step $\Delta$ to decay to a sufficiently small level. Therefore, the choice of a suitable value of $\Delta$ depends on the trade-off between convergence speed and solution accuracy. If we care more about convergence speed than accuracy, we should adopt a relatively large $\Delta$; otherwise, we should adopt a small $\Delta$. An alternative way to quickly reach a more accurate equilibrium point is to use a large $\Delta$ to quickly approach the equilibrium and then reset $\Delta$ to a small value to improve the accuracy.
2) The effect of the mining reward R: We investigate the effect of the mining reward $R$ on the final solution of our problem, as shown in Fig. 4. When the mining reward $R$ increases from 200 to 400, the miners (IoT devices) obtain more profit from the mining process, and thus these devices prefer to spend their budget on purchasing hash computational power. Therefore, the hash-server sets a higher price to obtain a larger utility, and the task-server has to lower its price to attract the devices. The results are shown in Fig. 4(a) and Fig. 4(b). From Fig. 4(c) we can see that the total purchased hash resource of the devices decreases as the reward $R$ increases. This is because the price of the hash resource has been raised while the devices have limited budgets. It indicates that if the blockchain platform tries to attract miners to contribute more computational power by increasing the mining reward, the effect may be the opposite. Devices obtain more profit as the mining reward $R$ increases, as shown in Fig. 4(d).

3) The effect of the unit resource cost of the servers: We keep the unit resource cost $c_t$ of the task-server unchanged and increase the unit resource cost $c_h$ of the hash-server from 10 to 20. As the unit resource cost $c_h$ rises, the hash-server raises its resource price to obtain a larger utility. Devices therefore allocate more budget to purchasing resources from the task-server, and the task-server then raises its resource price as it becomes more competitive. The results are shown in Fig. 5(a). The total purchased hash resource of the devices decreases for the above reasons; meanwhile, devices purchase more resources from the task-server, as shown in Fig. 5(c). The utility of the hash-server decreases with increasing unit resource cost $c_h$; the reason is that the total resources sold by the hash-server decrease as $c_h$ increases, and even though the price $p_h$ has risen, the margin $p_h - c_h$ remains almost unchanged. The results are shown in Fig. 5(b). As both servers bid higher prices as $c_h$ increases, devices obtain less profit, as shown in Fig. 5(d). From Fig. 5, we can conclude that if a server can reduce its unit resource cost, it will be more competitive and thus obtain more benefit.
4) The effect of the budget of devices: If we increase the budget $b_5$ of device $s_5$ from 50 to 190 while the budgets of the other devices remain unchanged, the profit obtained by $s_5$ increases because it can purchase more resources from the two servers, while the profits of the other devices decrease, as shown in Fig. 6(a). The reason is that the increase of the budget $b_5$ causes the servers to raise their resource prices, which in turn reduces the amount of resources purchased by the other devices, as shown in Fig. 6(b). The result indicates that the budgets of the devices affect each other's profit in an indirect way.
VII. CONCLUSION
In this paper, we study the pricing and budget allocation problem between edge servers and IoT devices in an IoT blockchain network. We first introduce the architecture of IoT blockchain with edge computing and describe the operation of the IoT blockchain system. Then, we model the interaction between edge servers and IoT devices as a multi-leader multi-follower Stackelberg game. We prove the existence and uniqueness of the Stackelberg equilibrium and design efficient algorithms to obtain the Stackelberg equilibrium point. Finally, we validate the correctness and effectiveness of our designs by conducting extensive simulations.
Persistence of Rabies Virus-Neutralizing Antibodies after Vaccination of Rural Population following Vampire Bat Rabies Outbreak in Brazil
Background Animal control measures in Latin America have decreased the incidence of urban human rabies transmitted by dogs and cats; currently most cases of human rabies are transmitted by bats. In 2004–2005, rabies outbreaks in populations living in rural Brazil prompted widespread vaccination of exposed and at-risk populations. More than 3,500 inhabitants of Augusto Correa (Pará State) received either post-exposure (PEP) or pre-exposure (PrEP) prophylaxis. This study evaluated the persistence of rabies virus-neutralizing antibodies (RVNA) annually for 4 years post-vaccination. The aim was to evaluate the impact of rabies PrEP and PEP in a population at risk living in a rural setting to help improve management of vampire bat exposure and provide additional data on the need for booster vaccination against rabies. Methodology/Principal Findings This prospective study was conducted in 2007 through 2009 in a population previously vaccinated in 2005; study participants were followed-up annually. An RVNA titer >0.5 International Units (IU)/mL was chosen as the threshold of seroconversion. Participants with titers ≤0.5 IU/mL or Equivalent Units (EU)/mL at enrollment or at subsequent annual visits received booster doses of purified Vero cell rabies vaccine (PVRV). Adherence of the participants from this Amazonian community to the study protocol was excellent, with 428 of the 509 (84%) who attended the first interview in 2007 returning for the final visit in 2009. The long-term RVNA persistence was good, with 85–88.0% of the non-boosted participants evaluated at each yearly follow-up visit remaining seroconverted. Similar RVNA persistence profiles were observed in participants originally given PEP or PrEP in 2005, and the GMT of the study population remained >1 IU/mL 4 years after vaccination. At the end of the study, 51 subjects (11.9% of the interviewed population) had received at least one dose of booster since their vaccination in 2005. Conclusions/Significance This study and the events preceding it underscore the need for the health authorities in rabies enzootic countries to decide on the best strategies and timing for the introduction of routine rabies PrEP vaccination in affected areas.
Introduction
Rabies is a viral zoonosis that affects mammals. It is caused by neurotropic viruses belonging to the family Rhabdoviridae, genus Lyssavirus. The International Committee on Taxonomy of Viruses (ICTV) currently recognizes 14 species [1,2]; this taxonomy is evolving rapidly, and the two most recently accepted lyssaviruses, isolated from a bat in Germany (Bokeloh bat lyssavirus) and from a civet in Africa (Ikoma lyssavirus), have been included as new species [3,4,5]. Most lyssavirus variants are found in bats and are known to cause rabies in humans and in domestic animals [6]. Interestingly, the isolates detected to date on the American continent all belong to the classical rabies virus (RABV), the species used in rabies vaccines. Lyssaviruses are neurotropic, causing acute encephalitis or "furious rabies" in about 70% of cases and a paralytic form of rabies in 30%. Not all exposures lead to illness, but once symptoms occur, rabies is almost always fatal. Therefore, proper prophylaxis to prevent infection must be administered promptly after exposure. Approximately 26,400 [95% confidence interval (CI) 15,200] human rabies deaths are estimated to occur worldwide each year using the "Cause of Death Ensemble" model, but the estimate rises to 61,000 (95% CI 37,000-86,000) when a probability decision-tree approach is used [7]. Rabies reservoirs and vectors include domestic as well as wild mammals, but human infection mostly results from bites by rabies-infected dogs. Animal control measures have decreased the incidence of urban human rabies transmitted by dogs and cats, and currently, in Latin American and Caribbean countries, most cases of human rabies are transmitted by bats [8,9].
The Pan American Health Organization (PAHO) implemented a multinational program against rabies in 1983, supporting intensive dog vaccination programs. The results have been very effective. In the Americas, canine cases decreased by 93% (from 15,686 to 1,131) and human cases decreased by 91% (from 355 to 35) between 1990 and 2003 [10]. In the countries where the circulation of canine rabies has been controlled, incidences of canine and human rabies continue to decrease in parallel, with 400 cases reported in dogs in 2010 and 10 in humans in 2012 in Latin America [11]. However, the number of human rabies cases caused by bats began to increase in Latin America in 2004, when more than half of the 87 reported cases were transmitted by vampire bats. Most cases were caused by outbreaks in Brazil (21 cases), Colombia (14 cases) and Peru (8 cases). In 2005, of the 60 reported cases of bat-transmitted human cases in Latin America, 42 were in Brazil and 7 in Peru (Amazonian area) [8]. Although human rabies cases have declined since 2006, cattle rabies in the region continues to increase, and a recent report from Peru estimated that the rabies seroprevalence in bats varied from 3 to 28% depending on the geographical region [12,13].
Although many human rabies outbreaks have been reported in northern Amazonian Brazil, few epidemiological studies have been performed. In 2004, a total of 21 people died during rabies outbreaks in two villages, Portel and Viseu, in Pará State, Brazil, following bat bites (i.e., as a result of bat rabies). In May 2005, 15 cases occurred in Augusto Correa, another rural municipality in the same region. These outbreaks, affecting populations living in remote areas, were of great concern to health authorities, prompting widespread vaccination of the exposed or at-risk populations [14]. Following the rabies outbreak in 2005, more than 3,500 inhabitants of Augusto Correa received either post-exposure (PEP) or pre-exposure (PrEP) prophylaxis. A few people were given booster vaccinations after possible rabies re-exposure, mostly following dog, bat, and monkey bites. As per national guidelines for PrEP in Brazil, if antibody titers are <0.5 IU/mL, the recommendation is to administer 1 booster dose via the IM route and to perform serological testing at D14. For re-exposed individuals who have previously received PEP, no serological testing is done. Within 90 days of completing PEP, no vaccine is administered, while within 90 days of an incomplete PEP, the missing doses have to be given. More than 90 days after completing full PEP, 2 doses of vaccine (D0, D3) are recommended, while if the PEP is incomplete, the recommendation is to administer the full 5-dose schedule based on the nature of the rabies exposure [15].
Study design
This was a single-site, prospective epidemiological study designed to evaluate the persistence of RVNA in a population at risk of vampire bat rabies and who had previously received either PrEP or PEP regimens. The study also aimed at providing additional data on the need for booster vaccination against rabies. The results of 3 years of follow-up are presented here.
Outbreaks of human rabies occurred in 2004 and 2005 in Augusto Correa, a rural municipality of approximately 27,000 inhabitants in Pará State, northern Brazil. After the second outbreak, approximately 3,500 local residents of Augusto Correa were given either the standard five-dose intramuscular (IM) PEP (with or without rabies immunoglobulin administration) on Days 0, 3, 7, 14 and 28, or a three-dose PrEP vaccination series on Days 0, 7 and 21 or 28 with purified Vero cell rabies vaccine (PVRV, Verorab; Sanofi Pasteur, France). This prospective study was conducted from 2007 through 2009 at the Arai health unit (USF Arai 3) in Augusto Correa to evaluate the persistence of RVNAs in those who had been vaccinated in 2005. Each study participant was followed up annually for 3 years (in 2007, 2008 and 2009). As recommended by WHO, an RVNA titer >0.5 International Units (IU)/mL was chosen as the threshold of seroconversion [16]. Participants with RVNA titers ≤0.5 IU/mL or Equivalent Units (EU)/mL at enrollment or at subsequent annual visits received booster doses of PVRV.
Anyone who had been vaccinated in 2005 was eligible to participate. Written informed consent was given by participants aged 18 years and above or by parents or legal guardians if younger. The study was conducted in accordance with the Edinburgh revision of the Declaration of Helsinki, International Conference on Harmonization (ICH) good clinical practice and applicable national and local requirements regarding ethical committee review.
The primary objective was to evaluate the persistence of RVNA following PrEP or PEP. Secondary objectives included describing RVNA titers following receipt of PVRV booster doses, estimating the incidence of clinical cases of rabies in the study population, and determining the correlation between the anti-rabies antibody titers obtained by the rapid fluorescent focus inhibition test (RFFIT) and a commercially available enzyme-linked immunosorbent assay (ELISA).
Laboratory methods
Blood specimens (5 mL) were collected from each study participant at enrollment and at each of the three annual follow-up visits (when the patient came to the health center) for testing by RFFIT and enzyme-linked immunosorbent assay (ELISA). Blood serum specimens were divided into four 0.5 mL aliquots for testing. The RFFIT method was adapted from the original one [17] while changing both the cell line support (BHK21 cells instead of MNA cells) and the rabies virus strain used. RVNA titers of all specimens against the Pasteur virus strain PV (instead of the Challenge Virus Strain CVS) were measured by RFFIT at the Centro de Controle de Zoonoses (CCZ) laboratory in São Paulo, Brazil. Ten percent of those specimens were randomly selected for RFFIT re-testing at Institut Pasteur laboratory in Paris, France, using a vampire bat virus strain (instead of the CVS strain). In addition, the concentration of rabies virus anti-glycoprotein antibodies (EU/mL) in each blood sample was determined by ELISA (Pasteur virus strain) at Institut Pasteur laboratory in Paris, France, using the Bio-Rad Platelia assay as per the manufacturer's instructions. The correlation between the RVNA titers measured by RFFIT and by fluorescent antibody virus neutralization (FAVN) assay (CVS in BHK21 cells) [18] was estimated at the CCZ laboratory, São Paulo, Brazil, using the specimens collected in 2007, the first year of the study.
Statistical analysis
The immunogenicity analysis was descriptive; no hypotheses were tested. Seroconversion (RFFIT titer >0.5 IU/mL) rates and geometric mean antibody titers (GMTs) were calculated with their 95% confidence intervals (CIs). The sample size calculation was based on an expected seroconversion rate of 90% at 5 years after the primary vaccination series. A total of 140 subjects was required to ensure 95% confidence with a two-sided precision of 5%. Assuming 30% of the participants would be lost to follow-up at 5 years after primary vaccination, a total of 200 subjects had to be included. However, to anticipate additional dropouts, subgroup analyses and insufficient sera for laboratory testing, the planned enrollment was 500 participants. The study populations included in the analysis comprised: 1) all the evaluable study participants in each follow-up year, 2) participants who received a booster dose of vaccine at enrollment or during a follow-up year, and 3) participants who did not receive booster doses of vaccine either at study entry or in any follow-up year. Missing data were not replaced.
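The stated figure of 140 can be reproduced with the usual normal-approximation formula for the sample size needed to estimate a proportion, n = z²p(1−p)/d². The short Python sketch below is illustrative only; the function name and rounding choices are ours.

```python
import math
import statistics

def sample_size_for_proportion(p, d, confidence=0.95):
    """Normal-approximation sample size for estimating a proportion:
    n = z^2 * p * (1 - p) / d^2, with two-sided precision d."""
    z = statistics.NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

n = sample_size_for_proportion(p=0.90, d=0.05)   # ~139, reported as 140
n_planned = math.ceil(n / (1 - 0.30))            # ~199, i.e. about 200
print(n, n_planned)
```

With p = 0.90 and d = 0.05 this gives n ≈ 139, consistent with the 140 reported, and inflating for 30% attrition gives roughly the 200 subjects initially planned.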
The primary study endpoint was the number and percentage of subjects with RVNA titers >0.5 IU/mL each year using the RFFIT assay. We performed an analysis by gender and age group (i.e., 2-5, 6-17, 18-40, 41-60, and >60 years of age). The number and percentage of subjects with RVNA titers >0.5 EU/mL using the ELISA test was calculated in the overall study population for each of their follow up visits.
For inter-group comparisons, quantitative variables and ordinal qualitative variables were compared using Student's t-test or ANOVA (parametric data) and the Wilcoxon or Kruskal-Wallis test (nonparametric data). Qualitative variables were compared using the Chi square test (or Fisher exact test when frequencies were less than five for at least one category). The correlations of GMTs measured by two different assays were determined by Pearson's correlation coefficient (r). The correlations between percentages of participants with titers >0.5 IU/mL or EU/mL measured by RFFIT, FAVN or ELISA were calculated using the Kappa coefficient (κ). Among the 95 participants who did not complete the study or did not attend all of the visits, 91 (95.8%) were lost to follow-up and four died (one following an epileptic coma and three of different cancers). In 2007, nine participants were excluded from analysis because they had not received a complete PEP schedule (i.e., <5 vaccine doses). Four additional participants were excluded from analysis in 2008 because of missing data (no booster dose information). One subject who had previously been excluded from analysis in 2008 withdrew from the study in 2009.
Participant demographics and rabies vaccination at enrollment
The 507 participants who were evaluated at the start of the study ranged from 2 to 83 years of age, with a mean ± SD of 21.4 ± 16.8 years, and 288 (56.8%) were male. The age and gender distributions are shown in Table 1. The mean time ± SD between the last vaccine dose and enrollment was 23.7 ± 1.7 months.
At enrollment, PEP had been given to 448 of the 507 participants (88.4%); 58 (11.4%) had received PrEP, and 1 (0.2%) had received a re-exposure PEP vaccination. In 2005, 340 subjects (78.0%) received rabies immunoglobulin. The number of vaccine doses administered in 2005 and participant age at inclusion are given in Table 1. To be eligible for the immunogenicity analysis, participants had to receive three vaccine doses for PrEP, five doses for PEP, or two booster doses for PEP following a suspected re-exposure. Most participants (439, 86.6%) had received five doses, six subjects (1.2%) had received four doses, 59 subjects (11.6%) received three doses, and only three subjects (0.6%) had received two doses. The number of doses does not exactly match the number and type of prophylaxis regimens given in 2005 because nine subjects who reported being given PEP had received fewer than five injections. Two of them had received only two vaccine doses, one received three doses and six received four doses. Those subjects were excluded from the analysis of both the boosted and non-boosted populations.
Booster vaccination
Participants with antibody levels <0.5 IU/mL or EU/mL at inclusion or at one of the annual study visits were considered no longer seroconverted against rabies and were boosted. At enrollment in 2007, 2 years after vaccination, nine of the 507 participants (1.8%) had been boosted since receiving their PrEP or PEP regimens; six were given one or two booster injections, but the number of doses was not known for the three others. In 2008, 3 years after vaccination, 43 of the 461 participants with booster dose information (9.3%) had been boosted in the previous year. Forty of the 43 received one or two booster dose injections, one received five doses, and the number of doses was not known for two participants. In 2009, 4 years after vaccination, 14 of 428 remaining participants (3.3%) had received booster doses since their 2008 follow-up visit. Thirteen received one or two booster dose injections and one received three doses (Table 2).
Possible rabies re-exposure

The subjects receiving booster vaccination included both those who received boosters due to a subsequent exposure and those who received boosters due to a serological result ≤0.5 IU/mL. Between enrollment and the 2008 study visit, 34 participants (7.3%) were bitten; 32 (94.1%) had an RFFIT titer >0.5 IU/mL and 12 (35.3%) had received a booster after enrollment. Between their 2008 and 2009 study visits, 29 participants (6.8%) were bitten; 24 (82.8%) had an RFFIT titer >0.5 IU/mL, and 7 (24.1%) had received a booster dose after enrollment. In total, 89 cases of re-exposure to rabies resulted from bites from rabid animals, mostly dogs (52 cases) but also bats, cats, and monkeys. No cases of rabies occurred among the study participants.
RVNA persistence
The serology results for both the non-boosted and boosted populations are shown in Fig 2. Nine (1.8%) of the 507 participants had received rabies vaccine booster doses between vaccination in 2005 and enrollment in 2007 (Table 3). The time since the last vaccination was not known for five of them, but it was 0-6 months for one, 12-18 months for two, and 18-24 months for one. Additionally, 43 (9.3%) of the 465 participants present at their follow-up visit in 2008 (3 years after vaccination) were boosted according to their titer measured during the 2007 campaign. The interval since the last vaccination was not known for 12 of the boosted participants, but was 0-6 months for 31 and 6-36 months for the remaining five. Fourteen (3.3%) of the 428 remaining participants had been boosted since their 2008 follow-up visit.
Age and gender differences in antibody persistence
In the non-boosted population, GMTs (Table 4) were significantly higher in young participants 2-5 and 6-15 years of age, and the proportion of subjects with RFFIT titers >0.5 IU/mL (Fig 3) decreased only slightly at each year of follow-up. In subjects aged 60 years or older, GMTs were lower although mostly >1 IU/mL, except for a drop between 2008 and 2009, when the seroconversion rate also decreased from 83.3% to 66.7%. However, the number of subjects was limited, and the proportion of those with RFFIT titers >0.5 IU/mL was not significantly lower compared with the other age groups. In the 16-40 years age group, both the GMTs (around 1 IU/mL) and the proportion of individuals with RFFIT titers >0.5 IU/mL were stable over the 4 years of follow-up. In the 41-60 years age group, the situation was far more contrasted, with significantly lower GMTs (0.53 to 0.77 IU/mL) and proportions of subjects with RFFIT titers >0.5 IU/mL at inclusion and at the follow-up visit in 2008, 3 years after vaccination (P < 0.05, Fisher exact test), compared with the general study population. However, both values tended to increase over the years, suggesting that poor responders were progressively removed from the non-boosted population. Males had lower seroconversion rates than females at each follow-up visit, with significant differences observed in 2008 (P < 0.0001) and 2009 (P < 0.008, Chi-squared test), 3 and 4 years after vaccination (Fig 4). Significant gender differences were also observed in titers, with males having lower RFFIT GMTs than females at each year of follow-up (Fig 4).

Table 3. Rabies virus-neutralizing antibody titers >0.5 IU/mL and GMTs at each follow-up visit.
Antibody persistence after pre-and post-exposure prophylaxis
At each study visit, similar percentages of neutralizing antibody (RFFIT) titers >0.5 IU/mL were observed in the non-boosted participants who were given PrEP (3 vaccine doses, n = 58) in 2005 and in those given a PEP regimen (5 vaccine doses, n = 448) (Fig 5). Similar GMTs were also observed in the PrEP and PEP groups using the RFFIT assay, ranging from 1.0 to 1.1 IU/mL at each year of follow-up (Fig 6).
RFFIT and FAVN assay results
All specimens collected in 2007 were retested with the FAVN assay to determine the correlations with the RFFIT assay and ELISA (Table 5). In both the non-boosted and boosted populations, the RVNA titers measured by RFFIT and FAVN were strongly correlated (Table 5), with r = 0.99 for the boosted participants (Table 6). There was good concordance of the seroconversion rates determined by the FAVN (86.5%) and RFFIT (84.6%) assays, with κ = 0.86. In the non-boosted population, the Pearson's correlation coefficient for FAVN and ELISA was 0.83 (95% CI: 0.80-0.86). The result in the boosted population was similar (r = 0.95).
RFFIT and ELISA results
There was a strong correlation between RFFIT and ELISA results in the non-boosted population (Pearson's correlation coefficient r = 0.82 at inclusion), which progressively decreased over the years to 0.71 at the 1-year follow-up and 0.62 at the 2-year follow-up. There was also good concordance of the proportions of titers >0.5 determined by the RFFIT (IU/mL) and ELISA (EU/mL) assays; however, the same trend was observed: the Kappa coefficient (κ) in the non-boosted population was 0.61 at inclusion, 0.54 at the 1-year follow-up and 0.42 at the 2-year follow-up (Table 7). In summary, the strength of the association between RFFIT and ELISA decreased with time, as the GMTs obtained by RFFIT remained relatively unchanged over the duration of follow-up while, unexpectedly, the ELISA values increased in the second and third years.
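For readers who wish to reproduce this kind of assay-agreement analysis, the following Python sketch computes Pearson's r on paired titers and Cohen's kappa on seroconversion status; the titer arrays are made-up placeholders, not study data, and SciPy and scikit-learn are assumed to be available.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired titers for the same sera, not study data.
rffit = np.array([0.2, 0.6, 1.1, 0.4, 2.5, 0.9, 0.3, 1.8])   # IU/mL
elisa = np.array([0.3, 0.45, 1.0, 0.5, 2.1, 1.2, 0.2, 1.5])  # EU/mL

r, p_value = pearsonr(rffit, elisa)                  # Pearson's r on titers
kappa = cohen_kappa_score(rffit > 0.5, elisa > 0.5)  # agreement on status
print(f"r = {r:.2f}, kappa = {kappa:.2f}")
```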
Discussion
The primary objective of this study was to evaluate the persistence of RVNA following PrEP or PEP with PVRV as measured by sero-neutralization assays. Secondary objectives included describing the effect of booster doses on RVNA titers, estimating the incidence of clinical cases of rabies in the study population, and determining the correlation between the RFFIT, FAVN and ELISA rabies virus antibody assays. Adherence to the surveillance protocol was high, with 84% retention over 3 years of follow-up. Possible re-exposure, mainly from dogs, bats, monkeys, and cats, was reported by 5-7% of participants each year. The low number of bat bites is probably evidence of the effective preventive measures implemented in the region, such as the reduction of the bat population using anticoagulants, improvement of dwelling places through the continuous supply of electric power and light (the absence of electric light is known to be associated with vampire bat attacks), and the protection of houses to avoid gaps in the walls, windows or doors [19,20].
Persistence of RVNAs
Overall, long-term persistence of RVNAs was good, with 85 to 88% of the non-boosted study population remaining seroconverted (RFFIT titer >0.5 IU/mL) over the 3 years of follow-up ending in 2009. The GMT of the population remained >1 IU/mL (twice the WHO-recommended threshold) at the end of follow-up. Persistence of RVNA following vaccination in 2005 was similar in participants given PrEP and those given PEP. These results are consistent with those reported in previous studies [21,22] and are discussed below in the context of routine PrEP vaccination. Our results are in accordance with other serological studies demonstrating that RVNA titers equal to or greater than 0.5 IU/mL, the WHO-recommended threshold of seroconversion, can persist for several years after administration of a complete vaccination series [23]. These results therefore highlight the need to maintain and intensify rabies PrEP and PEP. There were gender- and age-related differences in RVNA persistence. Overall, females had significantly higher GMTs and higher seroconversion rates than males in 2008 and 2009. These results are in line with some previous reports [24,25]; however, a correlation between gender and immune response to rabies vaccine has not been established [26]. While some gender differences in this study were statistically significant, their clinical significance remains doubtful because the seroconversion rates remained above 80% and the GMTs above 0.90 IU/mL in both genders. Also, the persistence of RVNA, as measured by the seroconversion rate, was shorter in the population >60 years of age than in younger participants, but the difference was not significant, and GMTs decreased only slightly. Participants 16-40 years of age had lower immune responses than the other age groups, but the GMTs and seroconversion rates observed in that age group, at 0.91-1.0 IU/mL and 80.6-84.5%, respectively, were similar to those observed in previous studies [22]. The GMT and seroconversion rate point values were lower in those 41-60 years of age than in the other age groups, and both increased over the duration of follow-up. These values may have been influenced by a relatively small sample size and broad 95% CIs. They also reflect the progressive removal of poor responders from the study, which mostly focused on the non-boosted population.
One of the limitations of the study is that only those subjects who responded well to the initial vaccination, i.e. remained seroconverted throughout follow up, were evaluated for antibody persistence. Subjects whose RFFIT antibody titer fell below or equal to 0.5 IU/mL were boosted and were excluded from the analysis to avoid any bias in evaluating antibody titers during subsequent follow up visits. Ideally, the analysis should have included all study subjects; however, it would have been both unethical and contrary to the design of our study (based on the recommendations presented in the leaflet of the rabies vaccine Verorab) not to vaccinate those with low antibody levels and expose them to the risk of rabies disease.
RVNA titers are generally measured by RFFIT [17] or FAVN, the gold standard assays recommended by the WHO [16]. Nevertheless, for additional analyses, an ELISA using rabies virus glycoprotein as antigen (Platelia Rabies II) is available [27,28]. Although this ELISA method does not measure human RVNA but all anti-glycoprotein G antibodies, it is easier and more rapid to perform. Rapid assays should be encouraged to facilitate diagnosis in rural settings lacking sophisticated techniques and qualified personnel. Increasingly discordant results were obtained with the RFFIT and ELISA assays from 2007 to 2009. The reason for these discordant results and for the decrease in correlation and concordance is not clear and deserves further investigation using well characterized proficiency panels. A previous comparison of these two assays found that the results of each corresponded closely except in samples with high RFFIT titers [27].
PrEP in rabies endemic countries
This study and the dramatic events preceding it underscore the need for the health authorities in rabies-enzootic countries to decide on the best timing for the introduction of routine rabies PrEP vaccination in affected areas, even if regular titer checks and boosters may not appear affordable for developing economies. This introduction would also prevent the need for serotherapeutic treatment, a real advantage in developing countries where human and equine rabies immunoglobulins are scarce and expensive. Ideally, routine pre-exposure rabies vaccination should be included in the Expanded Program on Immunization (EPI) schedule, given concomitantly with other pediatric vaccines. Two studies have evaluated the concomitant administration of rabies and DTP vaccines in Vietnam, and a third evaluated the concomitant administration of rabies and Japanese encephalitis vaccines.
The first Vietnamese study in infants showed that PVRV can be administered concomitantly with DTP-IPV as 2 IM doses at 2 and 4 months of age and a booster dose 1 year later, with satisfactory safety and immunogenicity results and with no interference between the 2 vaccines [29-31]. Similar findings were drawn from another study conducted in Vietnam, where PVRV was co-administered with DTP-IPV as 3 ID or 2 IM injections. The study showed that there was no apparent interference between the 2 vaccines and confirmed that their co-administration was safe in infants and toddlers [32,33]. Finally, a study conducted in Thailand confirmed that the co-administration of a purified chick embryo cell vaccine (PCECV) with Japanese encephalitis vaccine (JEV) is safe and confers a satisfactory immune response without interference between the two vaccines [34].
These clinical studies strongly suggest that rabies vaccine may be co-administered with routine pediatric vaccines and support integration of rabies PrEP vaccination into the childhood immunization schedules of countries where rabies is enzootic. This would minimize the costs and practical difficulties associated with the introduction of rabies PrEP into routine immunization practice.
Shortened PEP regimens
Shortened PEP vaccination regimens that require less than 1 month to complete are also particularly relevant for rural populations in rabies-endemic countries. They require fewer visits to the vaccination center, potentially resulting in better compliance. One option is an abbreviated 4-dose IM schedule, which requires 2 weeks for completion. Preliminary data from studies conducted in Thailand [35] and India [36] suggest that a 1-week 4-4-4 intradermal (ID) PEP regimen is an alternative option to consider. An ongoing study in the Philippines (ClinicalTrials.gov ID no. NCT01622062) is evaluating the 1-week 4-4-4 ID PEP regimen followed by a single-visit four-site ID booster vaccination at five years.
Conclusions
The surveillance results obtained in this study should encourage health authorities in rabies-enzootic countries to investigate the best strategies and timing for the introduction of routine rabies PrEP vaccination in affected areas. In terms of PEP regimens, our observation that a complete 3-dose PrEP schedule induced GMTs and percentages of vaccinees with RVNA titers >0.5 IU/mL similar to those of a complete 5-dose PEP schedule is in favor of abbreviated schedules.
Extraction, characterization and anti-oxidant activity of polysaccharide from red Panax ginseng and Ophiopogon japonicus waste
Red ginseng and Ophiopogon japonicus are both traditional Chinese medicines. They have also been used as food in China for thousands of years. These two herbs are frequently used in many traditional Chinese patent medicines. However, the carbohydrate components of these two herbs are normally not used during the production of such medicines, for example Shenmai injection, resulting in a large amount of carbohydrate-rich waste. In this study, the extraction conditions were optimized by response surface methodology, and the Shenmai injection waste polysaccharide (SMP) was extracted with boiling distilled water under the optimized conditions. SMP was further purified by anion-exchange chromatography and gel filtration, yielding a neutral polysaccharide fraction (SMP-NP) and an acidic polysaccharide fraction (SMP-AP). The results of structure elucidation indicated that SMP-NP is a type of levan, while SMP-AP is a typical acidic polysaccharide. SMP-NP exhibited potential stimulation activity on the proliferation of five different Lactobacillus strains, and SMP-AP promoted the antioxidant defense of IPEC-J2 cells. These findings suggest that Shenmai injection waste could be used as a resource for prebiotics and antioxidants.
Introduction
Shenmai San, which consists of Panax ginseng and Ophiopogon japonicus, is an ancient Chinese patent medicine that first appeared in "Medical Origins." It is used to treat diseases like heart attacks, congestive heart failure, and severe bronchitis induced by Qi and Yin deficiency (1). In modern China, Shenmai San has been developed into an injection preparation (Shenmai injection), a sterile aqueous solution prepared by combining Red ginseng ethanol extract and O. japonicus ethanol extract. Clinical pharmacy studies reveal that the Shenmai injection is more effective when used to treat cardiovascular diseases, such as coronary heart disease, viral myocarditis, and chronic pulmonary heart disease (1,2). It is often combined with other therapies in clinical practice.

Materials

The standards of fructose (Fru) and glucose (Glc) were purchased from Solarbio (Beijing, China). All other chemicals, such as phenol, sulfuric acid, acetone, boric acid, glycerin, etc., were of analytical grade and obtained from the Chengdu Kelong chemical factory (Chengdu, China).
Extraction and determination of polysaccharide from Shenmai injection waste
The powdered red P. ginseng and O. japonicus were mixed at a ratio of 1:1 (w/w) and extracted with distilled water under different extraction conditions. The aqueous extracts were collected and concentrated with a rotary evaporator (Shanghai Yarong Biochemical Instrument Factory Co., Ltd.). Four volumes of ethanol were added to the water extracts, which were kept at 4°C for 24 h. The mixture was centrifuged (3,500 rpm, 10 min), and the insoluble residue was separated. The polysaccharide was obtained after lyophilization. The carbohydrate content was determined by the phenol-sulfuric acid method (10), and the extraction yield of the polysaccharide was calculated from the carbohydrate content.
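As a worked illustration of the yield calculation, the sketch below converts an A490 reading into a carbohydrate concentration via a glucose standard curve and then into an extraction yield. All numeric values and the curve parameters are hypothetical, and the extract is assumed to be redissolved at a nominal 1 mg/mL, as in the single-factor runs described below.

```python
def carbohydrate_conc(a490, slope, intercept):
    """mg/mL of carbohydrate from a glucose standard curve
    A490 = slope * c + intercept (hypothetical curve parameters)."""
    return (a490 - intercept) / slope

def extraction_yield_pct(extract_mg, raw_mg, a490, slope, intercept,
                         test_conc_mg_ml=1.0):
    """Yield (%) = carbohydrate mass in the lyophilized extract / raw mass.

    The extract is assumed redissolved at test_conc_mg_ml (1 mg/mL in the
    single-factor runs), so the carbohydrate fraction of the extract is
    measured_conc / nominal_conc.
    """
    fraction = carbohydrate_conc(a490, slope, intercept) / test_conc_mg_ml
    return 100.0 * fraction * extract_mg / raw_mg

# Hypothetical example: 520 mg extract from 1,000 mg raw material,
# A490 = 0.62 against a curve A490 = 0.55 * c + 0.01 (c in mg/mL).
print(round(extraction_yield_pct(520, 1000, 0.62, 0.55, 0.01), 1))
```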
Design of extraction conditions

2.3.1. Single-factor experiment
The optimum extraction conditions for polysaccharides from Shenmai injection waste were determined by single-factor experiments and the response surface method (RSM). The single-factor experiments were carried out with extraction times ranging from 0.5 to 3 h, extraction temperatures ranging from 50°C to 100°C, and solvent-to-material ratios ranging from 10:1 to 50:1 (mL/g), using 1.0 g of the red ginseng and O. japonicus powder mixture (after ethanol extraction) (11,12). One factor was varied at a time while the others were held constant, and each group was tested in triplicate. After extraction, the aqueous extracts were centrifuged and freeze-dried. The powder was then dissolved at 1 mg/mL, and the carbohydrate component was determined by the method described above.
Optimization of extraction conditions by BBD
Box-Behnken design (BBD) is a type of response surface design. It is an independent quadratic design that does not contain an embedded factorial or fractional factorial design. In this design, the treatment combinations are at the midpoints of the edges of the process space and at the center. These designs are rotatable (or nearly rotatable) and require three levels of each factor. They are more efficient, and the experiments are easier to arrange and interpret, in comparison with other designs (13). Based on the single-factor experiments described above, a three-level, three-factor BBD experiment was adopted, as shown in Table 1. The extraction temperature (X1), the ratio of solvent to material (X2), and the extraction time (X3) were designed using SAS JMP 13.0 software (Statistical Analysis System, United States).
The variables were coded according to the following formula:

$x_i = \frac{X_i - X_0}{\Delta x}$

where $x_i$ is the coded value of the variable $X_i$, $X_0$ is the value of $X_i$ at the central point, and $\Delta x$ is the amplitude of variation. The results were analyzed and fitted to a second-order polynomial model.
$Y = A_0 + \sum_{i=1}^{3} A_i X_i + \sum_{i=1}^{3} A_{ii} X_i^2 + \sum_{i<j} A_{ij} X_i X_j$

In this formula, $Y$ is the response variable (the extraction yield of polysaccharide); $A_0$, $A_i$, $A_{ii}$, and $A_{ij}$ are the intercept, linear, quadratic, and interaction coefficients of $X_1$, $X_2$, and $X_3$, respectively; $X_i$ and $X_j$ are the coded independent variables, and the $X_i^2$ terms represent the quadratic terms. Analyses of variance were performed with the ANOVA procedure. The fitness of the predictive model was assessed by the coefficient of determination $R^2$ and the adjusted $R^2$. The statistical significance of the regression coefficients was then checked using the F-test at a probability (p) of 0.01 or 0.05.
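As an illustration of the model-fitting step, the following Python sketch builds the 17-run coded design of a three-factor BBD, fits the second-order polynomial by least squares, and computes R². The response values are synthetic placeholders, not the measured yields of Table 1.

```python
import itertools
import numpy as np

# The 12 edge midpoints of a three-factor Box-Behnken design
# (levels -1/+1 on two factors, 0 on the third) plus 5 center points.
rows = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1.0, 1.0), repeat=2):
        point = [0.0, 0.0, 0.0]
        point[i], point[j] = a, b
        rows.append(point)
X = np.array(rows + [[0.0, 0.0, 0.0]] * 5)
x1, x2, x3 = X.T

# Placeholder yields: a made-up quadratic surface plus noise, standing
# in for the measured values of Table 1.
rng = np.random.default_rng(0)
Y = (62 + 2.5 * x1 + 0.8 * x2 + 1.2 * x3
     - 1.5 * x1**2 - 0.9 * x3**2 + rng.normal(0, 0.3, len(X)))

# Design matrix: intercept, linear, interaction, and quadratic columns.
D = np.column_stack([np.ones(len(Y)), x1, x2, x3,
                     x1 * x2, x1 * x3, x2 * x3,
                     x1**2, x2**2, x3**2])
coef, *_ = np.linalg.lstsq(D, Y, rcond=None)

fitted = D @ coef
r2 = 1 - ((Y - fitted) ** 2).sum() / ((Y - Y.mean()) ** 2).sum()
print(np.round(coef, 2), round(r2, 3))
```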
DEAE-cellulose ion-exchange chromatography
300 mg of crude polysaccharide was dissolved in 20 mL of distilled water and filtered through a 0.45 μm filter. The polysaccharide solution was then loaded onto a DEAE-cellulose ion-exchange column (50 mm × 40 cm, Beijing RuiDaHengHui Science & Technology Development Co., Ltd.), with distilled water as the elution buffer. The neutral fraction was eluted with three column volumes of distilled water at a speed of 2 mL/min, with the elution monitored by the phenol-sulfuric acid method (14). The eluate was collected, concentrated, and lyophilized; the resulting fraction was named SMP-NP. The column was then further eluted with a gradient of NaCl (0-1.5 M), and the acidic fraction SMP-AP was obtained.
Molecular weight determination
Molecular weight was measured by size-exclusion chromatography. Five dextrans (10, 70, 200, 800, and 1,000 kDa) were used as standards. 2 mg of each standard dextran was weighed; the standards were then mixed, dissolved in 10 mmol/L NaCl solution, and filtered through a 0.22 μm filter. The mixed solution was injected into the column and eluted with 10 mmol/L NaCl solution at a speed of 0.2 mL/min, collecting 1 mL per tube. All eluted fractions were collected, and the carbohydrate fraction was determined using the phenol-sulfuric acid method. The linear relationship between the logarithm of the molecular weight of the dextran standards and the elution volume was calculated and used to obtain the molecular weight of the carbohydrate fraction.
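The calibration described above amounts to a log-linear fit of molecular weight against elution volume; the sketch below shows the calculation with hypothetical elution volumes for the five dextran standards.

```python
import numpy as np

# Standard dextrans (kDa) and hypothetical peak elution volumes (mL).
mw_kda = np.array([1000.0, 800.0, 200.0, 70.0, 10.0])
ve_ml = np.array([38.0, 40.5, 52.0, 61.5, 78.0])

# Fit log10(MW) = a * Ve + b (larger molecules elute earlier).
a, b = np.polyfit(ve_ml, np.log10(mw_kda), 1)

def molecular_weight(ve):
    """Estimate the MW (kDa) of a sample peak from its elution volume."""
    return 10.0 ** (a * ve + b)

print(round(molecular_weight(66.0), 1))  # e.g. a peak eluting at 66 mL
```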
Chemical compositions and linkage determination
SMP-AP was subjected to methanolysis with 3 M hydrochloric acid in anhydrous methanol for 24 h at 80°C to obtain the methyl glycosides. The monosaccharide composition was then determined by gas chromatography (GC) after derivatization via the hexamethyldisilazane (HMDS) and trimethylchlorosilane (TMS) reaction. Mannitol was added to the samples as the internal standard. Additionally, the presence of Fru was tested with the urea-HCl colorimetric method (15).
The glycosidic linkages were determined by methylation analysis. The carrier gas was helium (pressure control: 80 kPa). The relative amount of each type of linkage was determined from the peak area of each compound and related to the monosaccharide composition of each fraction (16).
The NMR spectroscopy
After three deuterium exchanges by freeze-drying in D2O (10 mg/mL), the 1H NMR and 13C NMR spectra of SMP-NP and SMP-AP were recorded on a Bruker AV600 spectrometer (600 MHz; Bruker, Rheinstetten, Germany) at 25°C. The peaks were labeled using MestReNova software (version 6.0, 2009).
Bacterial growth
The basal medium (per liter: 10 g each of peptone and tryptone, 5 g yeast extract, 1 mL Tween 80, 0.5 g L-cysteine hydrochloride, and 1 g of the carbohydrate source, in distilled water adjusted to pH 6.5) and the MRS medium were used as culture media after being autoclaved at 121°C for 30 min. Two commercial prebiotic products were used as positive controls for comparison with SMP-NP: P95s (96.1% fructo-oligosaccharides, DPn 2-9, with 2.7% glucose, fructose, and sucrose; the product of partial enzymatic hydrolysis of chicory inulin) and Orafti® HP (99.8% inulin, DPav ≥23, with 0.2% glucose, fructose, and sucrose). Both were obtained from Quantum Hi-Tech (China) Biotechnology Co., Ltd., Shenzhen, China. The five Lactobacillus strains were incubated in 50 mL of MRS medium at 37°C overnight in an anaerobic chamber (Thermo Scientific 1029; 5% N2, 10% H2, 5% CO2), then centrifuged (3,500 rpm, 10 min) and resuspended successively in saline and basal medium to remove the carbon source. Finally, they were resuspended in basal medium containing one of the three carbon sources described above (SMP-NP and the two commercially available prebiotics, P95s and Orafti® HP) at a concentration of 10^7-10^8 CFU/mL, adjusted with a McIntosh turbidimetric tube. Five-milliliter aliquots of the bacterial suspensions were dispensed into test tubes and incubated for 0 and 24 h; all test tubes were set up in triplicate. Two hundred microliters of each culture was added to 96-well plates, and the bacterial density was measured at a wavelength of 600 nm (A600) using a microplate reader (Thermo Scientific Varioskan Flash) after incubation for 0 h and 24 h. Bacterial growth was expressed as the increment in A600 (ΔA600) during 24 h of incubation in the anaerobic chamber. After 24 h of incubation, the pH was measured with a pH meter (A115200, Lichen Instrument Technology Co., Ltd., Hunan, China) after removing the bacteria by centrifugation at 4,000 rpm for 20 min. Each tube was tested three times, in triplicate each time, to ensure high accuracy and precision (11).
Establishment of the oxidative stress damage model
IPEC-J2 cells were seeded into 96-well plates at a density of 1.0 × 10^4 cells/well. After the cells adhered, the culture medium was removed and the wells were washed three times with phosphate-buffered saline (PBS, pH 7.4, Beijing Solarbio Science & Technology Co., Ltd.). H2O2 (200 μmol/mL; Sigma-Aldrich, United States) was added to the wells (n = 12), and the plates were cultured in a 37°C incubator. After 24 h of incubation, 10 μL of CCK-8 (Wuhan Boster Biological Technology Co., Ltd., Wuhan, China) was added to each well. After a 1 h incubation, the absorbance was measured at 450 nm with a microplate reader (Bio-Rad).
Measurement of IPEC-J2 cell viability
IPEC-J2 cells were seeded into 96-well plates and cultured in a 37°C incubator for 24 h. H2O2 (200 μmol/mL) was added to the plates, which were cultured for a further 24 h. Three concentrations of SMP-AP (20 μg/mL, 10 μg/mL, and 5 μg/mL) were then added to the 96-well plates. After culture at 37°C for 24 h, cell viability was determined by the CCK-8 method.
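The viability readout is conventionally expressed relative to untreated control wells; the following sketch shows the usual CCK-8 calculation with hypothetical A450 readings (the formula is the standard one and is not stated explicitly in this paper).

```python
import numpy as np

def viability_pct(a_sample, a_control, a_blank):
    """CCK-8 viability as a percentage of the untreated control."""
    return 100.0 * (np.asarray(a_sample) - a_blank) / (a_control - a_blank)

# Hypothetical A450 readings: untreated control 1.20, medium blank 0.10,
# H2O2-injured wells treated with three SMP-AP concentrations.
print(viability_pct([0.85, 0.74, 0.66], a_control=1.20, a_blank=0.10))
```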
Determination of antioxidant enzyme activities
IPEC-J2 cells were seeded into 6-well plates, and the oxidative stress damage model was established with 200 μmol/mL H2O2. Different concentrations of SMP-AP (20 μg/mL, 10 μg/mL, and 5 μg/mL) were added to the 6-well plates. After culture at 37°C for 24 h, the plates were washed three times with PBS. The cells were collected from the culture dishes and disrupted with an ultrasonic cell disruptor (Shanghai Huxi Industry Co., Ltd.). After disruption, the cells were centrifuged at 12,000 rpm for 3 min to obtain the supernatant for determination of antioxidant enzyme activity. The antioxidant enzyme activities were determined with biochemical detection kits (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
Quantitative real-time PCR
RNA extraction from the intestinal cells and real-time PCR for antioxidant gene detection were performed as previously reported (17). In summary, IPEC-J2 cells were lysed with Trizol Reagent (R1100, Beijing Solarbio Science & Technology Co., Ltd., China), and total RNA was extracted from the cells according to the manufacturer's instructions. RNA quality was checked by the agarose gel method, and the RNA concentration was determined with a spectrophotometer (NanoDrop 2000, Shanghai Institute of Thermal Sciences, China). Total RNA was reverse transcribed with reverse transcriptase according to the manufacturer's instructions (Enzyme Tower, Waltham, MA). The Bio-Rad CFX96 system was used for real-time quantitative PCR, and the expression of related genes was normalized to the internal control β-actin. The primer sequences of the SYBR green probes for the target genes are shown in Table 2.
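The paper does not state the quantification method; assuming the widely used 2^-ΔΔCt approach with β-actin as the reference gene, the normalization could be computed as follows (all Ct values are hypothetical).

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^-ddCt method, normalized to the reference
    gene (beta-actin here) and to the control group."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_ref)
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct - d_ct_ctrl)

# Hypothetical Ct values for one antioxidant gene in two treated wells.
print(relative_expression([24.1, 23.8], [17.0, 16.9],
                          ct_target_ctrl=25.0, ct_ref_ctrl=17.1))
```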
Statistical analysis
The statistical values are presented as mean ± SD. Statistical comparisons were performed with one-way analysis of variance (ANOVA) followed by Duncan's test using SPSS version 20.0. Values of p < 0.05 and p < 0.01 were considered statistically significant and highly significant, respectively.

Effects of single factors on the extraction yield

The crude polysaccharide was extracted from Shenmai injection waste (named Shenmai polysaccharide, SMP). As shown in Figure 1A, the extraction yield of SMP increased as the extraction temperature increased (p < 0.05), showing a linear relationship. When the water extraction temperature reached 100°C, the extraction yield of polysaccharide reached its maximum (60.25%) (Figure 1A). Therefore, 90°C was chosen as the center level of extraction temperature in the response surface design, with the other two temperature points set at 80°C and 100°C. With the temperature at 100°C and an extraction time of 30 min, the extraction yield reached 58.80% when the solvent/material ratio was around 30 mL/g (Figure 1B). However, when the solvent/material ratio was increased from 30 mL/g to 50 mL/g, the extraction yield was 60.75%, an increase of only 1.95% (Figure 1B, p > 0.05). Thus, 30 mL/g was set as the central value in the response surface design, with the other two values set at 20 mL/g and 40 mL/g. With the temperature at 100°C and a solvent/material ratio of 30 mL/g, the polysaccharide extraction yield increased with extraction time, reaching a maximum (56.79%) around 30 min (Figure 1C). Therefore, 30 min was set as the center point for extraction time, with the other points set at 20 min and 40 min.
Optimization of extraction yield using RSM
3.1.2.1. Model fitting

SAS JMP 13.0 software was used to carry out a regression analysis of the data in Table 1, which provided the predicted second-order regression equation relating the yield of SMP to the three factors, where $X_1$ is the extraction temperature, $X_2$ the solvent/material ratio, and $X_3$ the extraction time.
Analysis of response and contour surface plots
Variance analysis was conducted on the model (Table 3). The $R^2$ of the model was 0.97 (p = 0.0023 < 0.01), and the multiple regression relationship between the dependent variable and the independent variables was significant (Table 3). The first-order term $X_1$ (extraction temperature) was highly significant (p < 0.001), suggesting that extraction temperature had the greatest impact on the yield of SMP.

Figure 2. Effect of the interactions of the three factors on the extraction yield of Shenmai polysaccharide (left, response surface plots; right, contour plots): (A) extraction temperature ($X_1$) and solvent-material ratio ($X_2$); (B) extraction temperature ($X_1$) and extraction time ($X_3$); (C) solvent-material ratio ($X_2$) and extraction time ($X_3$).
Optimization of extraction conditions
The response surface plots obtained from the statistical analysis are shown in Figure 2. It can be seen that the yield of SMP was greatly affected by the three factors above, consistent with the regression coefficient results in Table 1. When the extraction time was fixed at the 0 level, the yield of SMP increased with increasing extraction temperature (X1, 80°C-97.70°C) and solvent/material ratio (20-40 mL/g) (Figure 2A); however, when the temperature exceeded 97.70°C, the yield of SMP decreased again. When the solvent/material ratio was fixed at the 0 level, the yield of SMP increased with increasing extraction temperature (X1, 80°C-97.70°C) and extraction time (20-35.92 min) (Figure 2B). When the extraction temperature was fixed at the 0 level, the yield of SMP increased with increasing solvent/material ratio (20-40 mL/g) and extraction time (20-35.92 min); when the extraction time exceeded 35.92 min, the yield of SMP decreased (Figure 2C). In addition, it can be seen from Figure 2 that the following pairs all have a linear relationship with the yield of SMP: extraction temperature (X1) and solvent/material ratio (X2), extraction temperature (X1) and extraction time (X3), and solvent/material ratio (X2) and extraction time (X3). This is similar to the analysis results in Table 3. In summary, extraction temperature, solvent/material ratio, and extraction time all affect the yield of SMP, in the order extraction temperature, then extraction time, then solvent/material ratio.
Verification of the models
The fitted equation from Section 3.1.2.1 was analyzed with JMP software. The analysis indicated that the yield of SMP reached its maximum value of 62.84% at an extraction temperature of 93.10°C, a solvent/material ratio of 40 mL/g, and an extraction time of 27.94 min. For convenient production, the optimal extraction conditions were determined as follows: an extraction temperature of 93°C, a solvent/material ratio of 40 mL/g, and two extractions of 30 min each, with a predicted extraction yield of 62.78%.
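Locating the stationary point of the fitted surface, as JMP does, can be sketched by numerically maximizing the quadratic model within the coded design region. This snippet reuses the `poly` and `model` objects from the fitting sketch above; the center levels and step sizes are taken from the design described earlier.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize the fitted response surface (reusing `poly` and `model`
# from the fitting sketch above) within the coded design region.
neg_yield = lambda x: -model.predict(poly.transform(x.reshape(1, -1)))[0]
res = minimize(neg_yield, x0=np.zeros(3), bounds=[(-1, 1)] * 3)

# Decode the optimum back to physical units
# (center level +/- step: 90 +/- 10 C, 30 +/- 10 mL/g, 30 +/- 10 min)
centers, steps = np.array([90, 30, 30]), np.array([10, 10, 10])
opt = centers + steps * res.x
print(f"Optimum: {opt[0]:.1f} C, {opt[1]:.1f} mL/g, {opt[2]:.1f} min; "
      f"predicted yield {-res.fun:.2f}%")
```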
To verify the accuracy of the fitted model, SMP was extracted twice for 30 min each under the optimal conditions. The final yield of SMP was 62.72%, close to the predicted value (62.78%), indicating a good fit with the regression equation. The model thus accurately captures the trend of SMP yield and meets the requirements of both experimental and actual production.
Chemical composition of crude Shenmai polysaccharide
The protein, polyphenol, and carbohydrate contents of the crude polysaccharide were determined. Crude SMP contained 81.89% carbohydrates, 8.82% protein, and 2.31% polyphenols. Purification was therefore performed to obtain purified Shenmai polysaccharides.
Purification of polysaccharide
The neutral polysaccharide fraction of SMP was eluted with distilled water: 238.8 mg of neutral polysaccharide, named SMP-NP, was obtained from 310.7 mg of crude SMP. An acidic polysaccharide fraction with a mass of 15.63 mg, named SMP-AP, was collected by DEAE anion exchange chromatography (Figure 3A). These data indicate that neutral polysaccharide (yield 76.86%) accounts for the majority of SMP. To obtain purified polysaccharide fractions, gel filtration was performed. A single fraction with a symmetrical, uniform peak was obtained for both SMP-NP (Figure 3B) and SMP-AP (Figure 3C), indicating that the two polysaccharides are homogeneous components.
Molecular weight and monosaccharide composition determination
The molecular weights of SMP-NP and SMP-AP were determined by size exclusion chromatography to be 30.8 kDa and 256.7 kDa, respectively.
The monosaccharide compositions of the two polysaccharide fractions are shown in Table 4. SMP-NP contains two monosaccharides, fructose and glucose, with a glucose content of 10.1% and a fructose-to-glucose molar ratio of about 9:1. SMP-AP consists of five monosaccharides, glucose, galactose, galacturonic acid, rhamnose, and arabinose, in a molar ratio of 95:2.0:1.6:0.8:0.5.
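Monosaccharide molar ratios of the kind reported above are typically derived by dividing each component's mass fraction by its molar mass and normalizing to the smallest component. A small sketch follows, with placeholder mass percentages rather than the measured values.

```python
# Convert monosaccharide mass percentages to molar ratios by dividing
# by molar mass and normalizing to the smallest component. The mass
# percentages below are illustrative placeholders, not measured values.
MOLAR_MASS = {"Glc": 180.16, "Gal": 180.16, "GalA": 194.14,
              "Rha": 164.16, "Ara": 150.13}

def molar_ratio(mass_percent: dict) -> dict:
    moles = {k: v / MOLAR_MASS[k] for k, v in mass_percent.items()}
    smallest = min(moles.values())
    return {k: round(v / smallest, 1) for k, v in moles.items()}

print(molar_ratio({"Glc": 90.0, "Gal": 4.0, "GalA": 3.0, "Rha": 2.0, "Ara": 1.0}))
```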
Glycosidic linkage units of SMP-NP
The glycosidic linkage determination results for SMP-NP are shown in Table 5. SMP-NP is mainly composed of Fru and Glc, consistent with the monosaccharide composition results. The linkage units of Fru are mainly 1,2-Fruf and 1,2,6-Fruf, at molar percentages of 53.0% and 27.4%, respectively; terminal Fruf is also present at 8.5%. The linkage units of Glc are 1,4-Glcp, 1,6-Glcp, and terminal Glcp, at molar percentages of 8.8%, 0.6%, and 1.6%, respectively.
Glycosidic linkage units of SMP-AP
The glycosidic linkage determination results for SMP-AP are shown in Table 6. SMP-AP is mainly composed of 1,4-GalAp and 1,4-Galp linkage units. At the same time, Ara in SMP-AP showed two kinds of glycosidic units, with T-Araf as the main linkage unit at a molar percentage of 6.1%. In addition, the Glc in SMP-AP is mainly connected as 1,3-Glcp.

FIGURE 3 Purification elution profiles. (A) DEAE anion exchange chromatography elution profile of Shenmai polysaccharide. Size exclusion chromatography elution profiles of fractions SMP-NP (B) and SMP-AP (C). A490 denotes the absorbance at 490 nm, as described for the phenol-sulphuric acid method.
NMR analysis
3.6.1. NMR analysis of SMP-NP
The 1H spectrum of SMP-NP is shown in Figure 4A and the 13C NMR spectrum in Figure 4B. In the 13C NMR spectrum, the signals concentrated at δ 102-104 ppm were assigned to the C-2 (anomeric) carbon of Fruf residues, with the three most prominent peaks at δ 103.79, 103.65, and 103.17 ppm. According to previous reports, the corresponding carbon signals at δ 62.63, 60.50, and 60.40 ppm, together with proton signals at δ 3.85, 3.84, and 3.66 ppm, respectively, indicate that the three Fruf residues in SMP-NP adopt the β-configuration (18-20). The anomeric carbon signals concentrated at δ 92-101 ppm were attributed to Glc residues, with three peaks at δ 92.13, 98.04, and 99.58 ppm. Combined with the H-1 signals at δ 5.27, 5.31, and 5.13 ppm in the 1H spectrum, these three residues were shown to be in the α-D-Glcp configuration (21-29). The specific 1H and 13C chemical shifts of the above sugar residues are shown in Table 7.
NMR analysis of SMP-AP
The 1H spectrum of SMP-AP is shown in Figure 5A and the 13C spectrum in Figure 5B. The signals at δ 17.69 ppm and δ 94.89 ppm in the 13C NMR spectrum were assigned to C-6 and C-2 of Rhap, and the signals at δ 1.31 ppm and δ 5.30 ppm to H-6 and H-2 of Rhap; taken together, these results indicate the presence of α-1,2-Rhap in SMP-AP (30, 31). The signals at δ 170-173.55 ppm were assigned to C-6 carboxyl carbons, indicating that SMP-AP contains GalpA residues. The chemical shifts of H-1 at δ 4.57 ppm and C-1 at δ 98.76 ppm suggested that this residue is α-D-GalAp. In addition, the NMR spectra showed that the H-1 proton signals of Ara were mainly concentrated around δ 5.0-5.25 ppm, while the C-1 carbon signals were concentrated around δ 107-112 ppm (31-37). According to the literature, the signal at δ 5.14 ppm was assigned to H-1 of α-D-Galp, whose C-1 carbon signal appears at δ 102.01 ppm (38).
Based on the above structural analysis, the main backbone of SMP-AP is α-(1→3)-Glcp, a typical glucan structure. At the same time, α-(1→4)-GalA and α-L-(1→4)-Rhap are present on the main chain of SMP-AP, a typical RG-I pectin structure. In addition, Araf and Galp residues are attached as branch chains at the 2-position of Rhap, so we speculated that a few arabinogalactan residues are present in SMP-AP. We conclude that SMP-AP is an acidic polysaccharide composed of glucan and RG-I pectin, with a small amount of arabinogalactan linked to it as branched chains (39).
Prebiotic activity
Five Lactobacilli strains were cultured in media with different carbon sources, and their density was evaluated after 48 h. As shown in Table 8, the bacterial density and density increment of the SMP-NP group were significantly higher than those of the saline group (p < 0.05), indicating that all five Lactobacilli strains can ferment and utilize SMP-NP as a carbon source to support their proliferation in vitro. We then compared SMP-NP with Orafti® HP (DPn ≥ 23) and P95 (DPn 2-9). When P95 was used as the carbon source, four of the Lactobacilli strains showed better growth and greater changes in bacterial density, suggesting that P95 is an easily utilized carbon source for these strains. The effect of SMP-NP on the proliferation of the five Lactobacilli strains was similar to that of P95.
Comparing across the five Lactobacilli strains, SMP-NP promoted the proliferation of all five strains in vitro, but to different extents. L. johnsonii BS15 showed the greatest fermentation utilization of SMP-NP: its bacterial density changed the most after 48 h of fermentation, reaching 30 times that of the saline group (p < 0.05). The utilization of SMP-NP by L. plantarum BS10 was also very good, its bacterial density reaching 16.5 times that of the saline group after 48 h. Utilization by L. buchneri BSS1 was relatively poor, with a bacterial density after 48 h of about twice that of the saline group.
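Fold changes of the kind quoted above (e.g., 30-fold for L. johnsonii BS15) come directly from the ratio of bacterial densities; a brief pandas sketch with illustrative OD600 placeholders shows the computation.

```python
import pandas as pd

# Illustrative OD600 readings after 48 h of fermentation; the saline
# column serves as the no-carbon-source baseline. Values are placeholders.
od = pd.DataFrame({
    "strain": ["L. johnsonii BS15", "L. plantarum BS10", "L. buchneri BSS1"],
    "saline": [0.05, 0.06, 0.05],
    "SMP-NP": [1.50, 0.99, 0.10],
})

# Fold change in bacterial density relative to the saline group
od["fold_change"] = (od["SMP-NP"] / od["saline"]).round(1)
print(od)
```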
As shown in Table 9, the five Lactobacilli strains cultured in an anaerobic environment displayed different pH values in the culture medium. The pH values of the five media supplemented with SMP-NP decreased to varying degrees, and the pH values of the media supplemented with MRS also decreased significantly. However, in the media supplemented with SMP-NP, there was no significant difference in the degree of pH reduction among the five Lactobacilli strains. The decrease in pH was due to metabolites produced by the Lactobacilli, such as lactic acid, acetic acid, and other short-chain fatty acids. Together, the increase in bacterial density and the lower pH indicate probiotic growth and effective utilization of SMP-NP.
Anti-oxidant activity
To evaluate the antioxidant effect of SMP-AP on intestinal epithelial cells, a porcine jejunal epithelial cell line (IPEC-J2) was treated with H2O2 to induce oxidative stress, with or without SMP-AP supplementation (Figure 6A).
FIGURE 4 The 1D NMR spectra of SMP-NP in D2O: (A) 1H NMR spectrum; (B) 13C NMR spectrum.

Further biochemical measurements showed that in the group supplemented with SMP-AP, total antioxidant capacity (T-AOC), glutathione peroxidase (GSH-Px) activity, and superoxide dismutase (SOD) activity increased, while the lipid peroxidation product MDA decreased, indicating that the protective effect of SMP-AP against oxidative stress may stem from its mediation of the cellular antioxidant defense of IPEC-J2 cells (Figures 6B-E). To analyze the regulatory mechanisms of SMP-AP in cellular antioxidant defense, we quantified the expression of genes associated with these processes. First, we noted that in H2O2-treated IPEC-J2 cells supplied with SMP-AP, the expression of several critical antioxidant genes was significantly increased (Figures 6C,F-I), including catalase (CAT), glutathione peroxidase (GPX), superoxide dismutase (SOD), and NQO-1. We concluded that this increase in expression was responsible for the enhanced cellular antioxidant defense activity. Second, we quantified the expression of the key transcription factor Nrf2, a direct regulator of these antioxidant genes, and found that SMP-AP also enhanced Nrf2 expression in H2O2-treated IPEC-J2 cells (Figure 6J). In summary, our results indicate that SMP-AP could be used as an effective component to treat oxidative stress-related defects by regulating cellular antioxidant defense.
Optimization of extraction process of polysaccharide from Shenmai injection waste by response surface methodology
Panax ginseng and O. japonicus are both traditional Chinese medicinal materials. Polysaccharides are among the main active components of these two herbs and have been studied previously. However, few studies have focused on extracting SMP from the waste of Shenmai injection production. Under the optimal extraction conditions, a polysaccharide yield of 62.72% was obtained in this study, demonstrating that Shenmai injection production waste contains a large amount of polysaccharide that warrants further development and utilization.
Polysaccharides are natural products available from many sources, and their extraction technology has long been a research focus. Recently, there have been many reports on the extraction of polysaccharides from P. ginseng or O. japonicus. One study investigated the effects of temperature, solid/material ratio, and extraction time on the polysaccharide yield from P. ginseng (40); the optimal process was a liquid-solid ratio of 12:1, an extraction time of 3.5 h, an extraction temperature of 100°C, and two extractions, giving a polysaccharide yield of 22%. Another study reported an optimal process with a material-to-liquid ratio of 1:8 and water bath extraction three times at 100°C for 3 h each (41), and yet another reported optimum conditions of 100°C, 4 h, and a liquid-solid ratio of 15:1 (42). In our study, the optimal extraction temperature of SMP was 93°C, the extraction time 27.94 min, and the solvent-to-material ratio 40 mL/g. These large differences in optimization may be due to different material treatment methods: most of the Chinese medicinal materials in the other studies were extracted after direct shearing, whereas in our study the two medicinal materials were ground into powder before extraction. Many studies have shown that the active components of plant cells dissolve only after solvent infiltration, swelling, further infiltration, and diffusion, whereas pulverization of the material significantly improves the wall-breaking rate of plant cells and thereby the dissolution of active components (43, 44). This is likely why we achieved a high extraction rate of SMP in a short time. Other studies reported that the optimal water extraction of O. japonicus polysaccharide was two extractions at 100°C for 2 h each at a liquid-to-material ratio of 6:1, with the influence of factors ranked as extraction time > extraction temperature > liquid-to-material ratio > number of extractions (45). In contrast, our results ranked the most influential factors as extraction temperature, then extraction time, then solvent-to-material ratio. These differences may arise from the variety of medicinal materials, harvest time, and polysaccharide content (46). Additionally, extraction rates and polysaccharide contents differ significantly among herbal medicines, so the combined extraction of red P. ginseng and O. japonicus may also contribute to the differences between studies.
Isolation, purification, and structure elucidation of Shenmai polysaccharide
The structure of a polysaccharide is closely related to its biological activity; polysaccharides with different structures have different pharmacological activities, and molecular weight is related to the higher-order structures that polysaccharides form. Polysaccharide GRS1-I, with a molecular weight of 4.611 kDa, was obtained from P. ginseng by amylase treatment and alcohol precipitation (47), while another polysaccharide with a molecular weight of 1.5 kDa was obtained from P. ginseng by ethanol precipitation at different concentrations (41). The molecular weights of these two polysaccharides were much smaller than those of SMP-NP and SMP-AP, possibly because of the different extraction conditions. In contrast, a neutral polysaccharide fraction named PGPW1 was isolated from P. ginseng with a Mw of 350 kDa, higher than other reported P. ginseng polysaccharides (48). Other reported polysaccharides (28, 47) also differed significantly in Mw from the two fractions obtained in the present study. This may be because the polysaccharides in our study were isolated from a mixture of P. ginseng and O. japonicus rather than from a single plant; studies have shown that the polysaccharides in a Chinese patent medicine may come from a single constituent herb, and that new polysaccharides may form during the extraction process. Our results showed that Fru greatly predominated over Glc in SMP-NP and that the main linkages between Fru were 2→1 or 2→6. Several studies obtained polysaccharides of similar structure from O. japonicus, but with Fru-to-Glc molar ratios of 30:1 (49), 15:1 (50), and 12:1 (51). WGPA-N and WGPN are neutral polysaccharides of P. ginseng eluted with distilled water (52); they were composed of Glc, Gal, and Ara at molar ratios of 3.3:95.3:1.3 and 18.0:66.3:15.7, respectively, with Glc mainly connected by 1→4 or 1→6 linkages, similar to the Glc linkages in SMP-NP. Two other groups extracted two neutral polysaccharides from P. ginseng using different methods (40) and found that they contained not only Glc, Gal, and Ara but also a small amount of mannose; the main Glc linkage units were 1→4 linked Glc with a small amount of 1→6 linked Glc, again similar to SMP-NP. In summary, the Fru in SMP-NP may come mainly from O. japonicus, while the Glc may come mainly from P. ginseng. Some studies have shown that when the production process differs, some polysaccharide components may change, such as the loss of a monosaccharide component or inconsistent monosaccharide molar ratios (53). This may be why the monosaccharide molar ratios of SMP-NP differ from those reported for P. ginseng and/or O. japonicus polysaccharides in other studies.

FIGURE 5 The 1D NMR spectra of SMP-AP in D2O: (A) 1H NMR spectrum; (B) 13C NMR spectrum.
Acidic polysaccharides are polysaccharides bearing carboxyl groups. Most acidic polysaccharides from P. ginseng are pectic polysaccharides, and fewer studies have reported acidic polysaccharides from O. japonicus. SMP-AP is an acidic polysaccharide composed of GalA, Gal, Ara, Glc, and Rha, very similar in structure to previously published P. ginseng acidic polysaccharides (54-56). In the acidic polysaccharide S-A-I from P. ginseng, Gal residues are connected by 1→6 linkages and further connected at the 1→5 or 1→3 positions by Ara, while GalA residues are present as 1→4 linked GalA; this is very similar to how the monosaccharides in SMP-AP are connected. The difference is that these P. ginseng acidic polysaccharides do not contain Glc. Structural elucidation of WRGP indicated that its main component was RG-I pectin with more AG-type side chains; unlike SMP-AP, however, WRGP also contained mannose. GPW-1 and GPW-2, isolated from P. ginseng, have monosaccharide compositions similar to that of SMP-AP (54), and another study showed that all six acidic pectins from P. ginseng have glycosidic linkages similar to those of SMP-AP (57). Therefore, given the similarity of glycosidic linkages and monosaccharide compositions, the acidic polysaccharide of Shenmai injection waste possibly comes mainly from P. ginseng. However, SMP-AP and the above pectins showed different monosaccharide compositions and molar ratios, which may be due to differences in materials and in extraction and purification methods, all of which can alter monosaccharide composition.
Prebiotic activity
Prebiotics are organic substances that are not digested and absorbed by the host but instead selectively promote the metabolism and proliferation of beneficial bacteria, thereby improving host health (58). Polysaccharides are among the most common prebiotics. In this study, the densities of multiple Lactobacilli strains cultured on different carbon sources were measured. The results showed that five different Lactobacilli strains could ferment and utilize SMP-NP as a carbon source, increasing bacterial density and indicating that SMP-NP promotes the proliferation of these strains in vitro. At the same time, as bacterial density increased, metabolites such as lactic acid and acetic acid increased (59, 60), lowering the pH of the culture medium. Measurement of the pH of the different Lactobacilli cultures showed that media supplemented with SMP-NP reached pH values much lower than those of the saline group. In summary, SMP-NP promotes the proliferation of the five Lactobacillus strains and reduces the pH of the culture medium, indicating potential prebiotic activity.
Anti-oxidant activity
Oxidative stress and disruption of the intracellular redox balance have been identified as key factors in the progression of animal diseases (61). SOD and CAT are important antioxidant enzymes. SOD plays a crucial role in balancing oxidative and antioxidant effects; it is a free radical scavenger that removes superoxide anion radicals, and its activity indirectly reflects the body's ability to scavenge free radicals (62). CAT is a ubiquitous enzyme that efficiently catalyzes the decomposition of H2O2 into H2O and O2 to prevent cellular oxidative damage (63, 64). MDA is the final product of lipid peroxidation by oxygen free radicals (65), and its level indirectly reflects the severity of free radical attack on body cells.
Over the past several decades, numerous natural polysaccharides and fructans have been shown to have significant antioxidant activity by various evaluation methods (60). Our results indicate that SMP-AP exhibits significant antioxidant activity. The SMP used in this study was extracted from a combination of red P. ginseng and O. japonicus. Numerous studies have shown that P. ginseng polysaccharides can significantly increase the levels of the antioxidant enzymes SOD, CAT, and GSH-Px, as well as the non-enzymatic compound reduced glutathione (GSH), and can decrease malondialdehyde (MDA) levels under oxidative stress (66). Shengmai injection has also been reported to have similar antioxidant activity, consistent with our study (67). However, the mechanism of the antioxidant activity of these polysaccharides remains unknown and needs further study.
Conclusion
In this study, the optimal conditions for extracting crude polysaccharides from Shenmai injection waste were obtained by RSM. At an extraction temperature of 93°C, a solvent-to-material ratio of 40 mL/g, and an extraction time of 30 min, the maximum yield of SMP was 62.72%. After purification, a neutral fraction (SMP-NP) and an acidic fraction (SMP-AP) were obtained; the neutral fraction was a levan, and the acidic fraction was a pectic polysaccharide. SMP-NP could be fermented by five strains of Lactobacillus and reduced the pH of the culture medium, suggesting it may be a potential prebiotic. The acidic polysaccharide SMP-AP exhibited potential antioxidant activity in vitro. Together, these results suggest that, owing to its polysaccharides, Shenmai injection waste could be used to develop potential prebiotics and antioxidants.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
"year": 2023,
"sha1": "96d5eea3356cf63deff8a77036fae2922837f196",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "96d5eea3356cf63deff8a77036fae2922837f196",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Transcriptomic analysis of rat brain response to alternating current electrical stimulation: unveiling insights via single-nucleus RNA sequencing
Abstract Electrical brain stimulation (EBS) has gained popularity for laboratory and clinical applications. However, comprehensive characterization of cellular diversity and gene expression changes induced by EBS remains limited, particularly with respect to specific brain regions and stimulation sites. Here, we presented the initial single-nucleus RNA sequencing profiles of rat cortex, hippocampus, and thalamus subjected to intracranial alternating current stimulation (iACS) at 40 Hz. The results demonstrated an increased number of neurons in all three regions in response to iACS. Interestingly, less than 0.1% of host gene expression in neurons was significantly altered by iACS. In addition, we identified Rgs9, a known negative regulator of dopaminergic signaling, as a uniquely downregulated gene in neurons. Unilateral iACS produced a more focused local effect, attenuating the proportion of Rgs9+ neurons in the ipsilateral hemisphere compared with bilateral iACS treatment. The results suggested that unilateral iACS at 40 Hz was an efficient approach to increase the number of neurons and downregulate Rgs9 gene expression without affecting other cell types or genes in the brain. Our study provided direct evidence that EBS could boost cerebral neurogenesis and enhance neuronal sensitization to dopaminergic drugs and agonists through its downregulatory effect on Rgs9 in neurons.
However, the latest research refutes this finding, claiming that flicker stimulation cannot regulate gamma oscillations in brain waves, nor can it reduce excessive Aβ deposition in the brains of AD mice or ameliorate AD symptoms (18). Compared to indirect neural stimulation by visual flicker, EBS may modulate brain activity and pathological processes more directly. Among the EBS approaches, transcranial alternating current stimulation (tACS) is considered the optimal therapeutic strategy for achieving direct brain stimulation non-invasively (19). However, studies confirm that 75%-85% of the therapeutic current of transcranial stimulation is shielded by the scalp and skull (20). In contrast, intracranial AC stimulation (iACS), which is achieved by implanting electrodes within the cranial bone, bypasses the shielding effects of the scalp and skull (21). Without penetrating the brain parenchyma, iACS can deliver full-intensity stimulating currents directly to the brain to achieve therapeutic effects. Therefore, while tACS remains the optimal clinical strategy, iACS, which reflects the modulation effects on the brain, is better suited to elucidating the modulating mechanisms of EBS on the healthy brain and in neural disorders.
Our previous studies have demonstrated the effectiveness of iACS in enhancing neurogenesis and modulating microglial activation in AD mice (22, 23). These preliminary studies inspired further exploration of the precise cellular and genetic mechanisms of EBS, an area of research that, to our knowledge, remains largely unexplored.
Here, we provided the first single-nucleus RNA sequencing (snRNA-seq) profiles of the rat brain under iACS, separately profiling 285,347 single-nucleus transcriptomes from the cortex, hippocampus, and thalamus of eight rats, aiming to profile distinct responding cell clusters and their gene expression patterns following the specific 40 Hz iACS trials. Our objective was to delineate the specific cell types and genes that respond to iACS under normal physiological conditions, prior to any administration for brain diseases. The results would provide essential early insights into the safety of neural cells and genes within the brain under 40 Hz iACS neural modulation and identify potential cellular targets. This research would be instrumental in shaping strategies for administering neural modulation to individuals in both health and brain disease conditions, such as AD, PD, and stroke.
Intracranial alternating current stimulation schemes and in-brain-derived electric field distribution
We designed the study to use randomly grouped rats subjected to bilateral or unilateral iACS trials. The cortex, hippocampus, and thalamus from each experimental group were collected separately for snRNA-seq, gene expression, and protein expression analyses (Figure 1A). The electrode implantation was designed for either bilateral or unilateral iACS treatment, as shown in Figures 1B and 2B. To deliver the current intracranially, the electrodes were drilled into the skull without penetrating the cerebral tissue; the end of each electrode was precisely positioned to contact the dura and thereby deliver the electrical current, as depicted in Figure 1B,C. For the sham and bilateral iACS (bi-iACS) groups, the paired electrodes were set symmetrically on each hemisphere (left lane in Figure 1B; sham and bi-iACS in Figure 2B) to deliver the fake or iACS current bilaterally. For unilateral iACS stimulation, the paired electrodes were set on the left hemisphere (right lane in Figure 1B), giving ipsilateral and contralateral iACS, shortened to ips-iACS and con-iACS in Figure 2B. The iACS trial started 24 h after the electrode implantation surgery. The sinusoidal current was delivered intracranially at 40 Hz and 250 µA for 1 h per day for 7 days (Figures 2A and S1A,B). The rats remained healthy during and after each day of iACS treatment, according to daily records of body weight (Figure S1C) and neurological severity score (NSS) assessment (Figure S1D), indicating no hazards to health or neurological function from iACS. We also performed hematoxylin and eosin staining of the brains at the end of the experiment, which showed no significant tissue damage from the iACS treatment (Figure S1E).
To quantify the coverage of iACS in the deeper brain, we performed finite element method (FEM) simulations and measured the in-brain-derived electric field (EF) in rat brains receiving unilateral or bilateral iACS. The FEM simulation (Figure 1D) showed a clearly derived in-brain EF when either bilateral or unilateral iACS was administered to the 3D rat brain model (upper row in Figure 1D). The peak EF magnitude was estimated at ~300 V/m around the electrode areas, attenuating deeper into the brain. Specifically, with bilateral iACS, the FEM simulation plotted a symmetric EF distribution across both hemispheres: in the cortex, the dentate gyrus (DG) of the hippocampus, and the sub-ventricular zone (SVZ) of the thalamus, the EF magnitudes were in the ranges of 200-250 V/m, 170-200 V/m, and 100-150 V/m, respectively (middle row in Figure 1D). The unilateral iACS simulation plotted an even higher EF peak, distributed mainly over the ipsilateral hemisphere (ips-iACS), with peaks of 225-275 V/m in the cortex, 200-225 V/m in the DG, and 150-200 V/m in the SVZ (left hemisphere, bottom row in Figure 1D). For the contralateral hemisphere (con-iACS), the simulation plotted a significantly lower EF than the ipsilateral side, with estimated peaks of 0-50 V/m in the cortex and 50-100 V/m in both the DG and SVZ (right hemisphere, bottom row in Figure 1D).
In the iACS-treated rat brain, real-time EEG revealed a marked peak in gamma oscillation (within the 35-60 Hz range) following 40 Hz iACS administration, irrespective of the unilateral or bilateral iACS scheme (Figure S2). The magnitude of the in-brain-derived EF, however, varied with the specific iACS scheme. Oscilloscope measurements showed that ips-iACS generated the highest EF in the cortex (237.3 ± 31.6 V/m), the DG (204.9 ± 21.8 V/m), and the SVZ (180.7 ± 8.9 V/m). These values were higher than those of the bi-iACS group, which recorded 222.0 ± 33.7 V/m in the cortex, 190.8 ± 18.4 V/m in the DG, and 117.8 ± 10.8 V/m in the SVZ. The con-iACS condition showed considerably lower EF values: 33.8 ± 8.2 V/m in the cortex, 75.4 ± 14.6 V/m in the DG, and 66.7 ± 11.2 V/m in the SVZ. Notably, these in-brain EF measurements were consistent with the FEM simulation plots (Figure 1E-G).
The results from both the FEM simulation and the in-brain EF measurements indicated that iACS could deliver the 40 Hz, 250 µA signal into the deeper brain while maintaining the full frequency, although the EF intensity decreased progressively with distance from the surface into the deeper brain. Furthermore, unilateral iACS produced a focused local impact within the ipsilateral hemisphere, whereas bilateral iACS covered a more extensive area across both hemispheres but with a less concentrated EF intensity.
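Group comparisons of the measured EF values (n = 3 rats per group) can be sketched with a two-sample t-test; the per-rat readings below are illustrative placeholders whose means roughly mirror the reported cortical values.

```python
import numpy as np
from scipy import stats

# Illustrative per-rat EF readings (V/m) in the cortex (n = 3 per group);
# means mirror the reported values, individual readings are made up.
ips = np.array([237.0, 210.0, 265.0])
bi = np.array([222.0, 190.0, 254.0])
con = np.array([34.0, 26.0, 41.0])

# Two-sample t-tests between stimulation schemes
for name, grp in [("bi-iACS", bi), ("con-iACS", con)]:
    t, p = stats.ttest_ind(ips, grp)
    print(f"ips-iACS vs {name}: t = {t:.2f}, p = {p:.4f}")
print(f"ips-iACS: {ips.mean():.1f} ± {ips.std(ddof=1):.1f} V/m")
```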
snRNA-seq profiling of cortex, hippocampus, and thalamus of the iACS rats
Following the 7-day iACS trial, the rats were sacrificed for brain-region snRNA-seq, real-time PCR, and immunofluorescence analyses. The brain collections were divided into four groups for sample labeling: sham (with fake iACS), ips-iACS (the ipsilateral hemisphere with unilateral iACS), bi-iACS (with bilateral iACS), and con-iACS (the contralateral hemisphere with unilateral iACS) (Figure 2A,B).
Here, we separately profiled the cortex, hippocampus, and thalamus from the four groups of brain hemispheres for region-specific snRNA-seq (n = 2 rats per group). In total, 285,347 snRNA-seq profiles were analyzed, including 108,996 cells in the cortex (Figure 2C,D), 88,592 cells in the hippocampus (Figure 2G,H), and 87,759 cells in the thalamus (Figure 2K,L).
To classify the major cell types in specific regions of the iACS rat brain, we clustered all cells jointly across the eight individual rat cortices, hippocampi, or thalami, producing six transcriptionally distinct cell-type clusters for each of the three brain regions, with highly consistent expression patterns across the individual iACS rat brains. We identified and annotated the major cell types of the cortex (Figure 2C), hippocampus (Figure 2G), and thalamus (Figure 2K) by interrogating the expression patterns of known gene markers (23, 24): neurons (marked by Syt1 and NeuN), oligodendrocytes (Cldn11), microglia (Tmem176b), choroid plexus cells (Vcan), astrocytes (Gja1), and others (Figures S3A-C, S5A-C, and S7A-C). We then tracked and annotated each single cell from all clusters of each individual rat under its specific iACS scheme (Figure 2D for the cortex, Figure 2H for the hippocampus, and Figure 2L for the thalamus). We used these trackings and annotations to quantify the cell-type ratios in the cortex, hippocampus, and thalamus, to identify cell-type-specific responses to iACS, to assess differences in region-specific cellular responses between unilateral and bilateral iACS, and to characterize the specificity of iACS-modulated gene expression.

FIGURE 1 Research design of intracranial alternating current stimulation (iACS) and derived in-brain electric field (EF). (A) Research design. (B) iACS scheme illustration: bilateral iACS (bi-iACS, left) and unilateral iACS (right). (C) The iACS electrode implanted in the skull for stimulating current delivery. (D) The 3D brain modeling and finite element method (FEM) simulation of the iACS-derived in-brain EF distribution. The upper row shows the 3D brain model, built from a volumetric atlas (including 118 brain structures) offering comprehensive anatomical delineations of the male Sprague-Dawley rat brain; the middle row shows the FEM simulation of the EF distribution derived from bilateral iACS at 40 Hz, 250 µA; the lower row shows the FEM simulation of the EF distribution derived from unilateral iACS of the left hemisphere at 40 Hz, 250 µA. (E) Measurement of the derived EF at the ipsilateral (ips-iACS)/contralateral (con-iACS) forebrain cortex under unilateral iACS, or at the forebrain cortex under bi-iACS. (F) Measurement of the derived EF at the dentate gyrus (DG) of the hippocampus under ips-iACS, bi-iACS, and con-iACS. (G) Measurement of the derived EF at the sub-ventricular zone (SVZ) of the thalamus under ips-iACS, bi-iACS, and con-iACS. The blue, pink, and yellow rectangles illustrate the ranges of the FEM-simulated EF magnitudes under ips-iACS, bi-iACS, and con-iACS from (D). Measured EF magnitudes are shown as mean ± standard deviation (SD); ***p < 0.001 and **p < 0.01 were considered significantly different between groups. n = 3 rats for each group.
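Quantifying cell-type ratios per group, as described above, reduces to a grouped count over the per-cell annotations. The sketch below assumes an annotated observation table with `group` and `cell_type` columns (hypothetical names, not the study's actual object layout); with Scanpy, this table would typically be `adata.obs`.

```python
import pandas as pd

def cell_type_percentages(obs: pd.DataFrame) -> pd.DataFrame:
    """Percentage of each cell type within each treatment group."""
    counts = obs.groupby(["group", "cell_type"]).size().unstack(fill_value=0)
    return (counts.div(counts.sum(axis=1), axis=0) * 100).round(2)

# Tiny illustrative table (placeholder values only)
demo = pd.DataFrame({
    "group": ["sham", "sham", "sham", "ips-iACS", "ips-iACS", "ips-iACS"],
    "cell_type": ["neuron", "astrocyte", "neuron",
                  "neuron", "neuron", "astrocyte"],
})
print(cell_type_percentages(demo))
```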
Neuronal populations were boosted by iACS
To dissect cell-type heterogeneity under iACS across brain regions, we separately quantified the percentage of each cell-type category in the cortex, hippocampus, and thalamus for the four groups: sham, ips-iACS, bi-iACS, and con-iACS. Combining the results from snRNA-seq, real-time PCR, and immunofluorescence analyses, the between-group comparisons demonstrated the following region-specific, cell-type-composition responses to the iACS treatments.
In the cortex, the region most directly exposed to the iACS current, the 7-day iACS trial increased the percentage of neurons under ips-iACS (by 63.1%), bi-iACS (by 13.3%), and con-iACS (by 9.3%) compared with sham, according to the snRNA-seq data (Figure 2E).
Consistent with the snRNA-seq results, real-time PCR showed that mRNA expression of NeuN (a neuronal marker) was upregulated by ips-iACS by 66.8% (p = 0.0089, n = 3 rats per group), by bi-iACS by 36.8% (p = 0.1271, n = 3), and by con-iACS by 32.6% (p = 0.1844, n = 3) relative to sham (Figure 2F). Thus, ips-iACS emerged as the most effective iACS scheme for increasing the neuronal population in the cortex. Alongside this increase, the percentages of oligodendrocytes, microglia, and choroid plexus cells decreased under ips-iACS, although no significant decreasing trend was detected in the bi-iACS or con-iACS groups. Astrocytes, on the other hand, exhibited irregular changes in response to the iACS treatments compared to sham (Figure 2E).
We then applied immunofluorescence, which verified that ips-iACS led to the most pronounced increase in NeuN+ (NeuN-encoded protein) neurons in the cortex, as depicted in Figure 3A,B. The zoomed-in images in Figure 3A1-12 display the expression patterns of NeuN in the experimental groups, focusing on the different layers of the cortex. Quantification of NeuN+ cells demonstrated a significant rise with ips-iACS: the proportion of NeuN+ neurons increased from 40.55% in sham to 65.01% (p = 0.0072, n = 3 rats per group). The increases to 52.39% with bi-iACS (p = 0.1682, n = 3) and 47.54% with con-iACS (p = 0.5137, n = 3) did not reach statistical significance compared to sham, indicating less pronounced effects. Quantification of GFAP+ glial cells revealed no significant changes following any of the three iACS treatments compared to sham (Figure 3C,D). These results aligned with the snRNA-seq profiling (Figure 2E) and the NeuN expression determined by real-time PCR (Figure 2F).
Combining the snRNA-seq, real-time PCR, and immunofluorescence results for the cortex samples, ips-iACS was the most effective scheme for increasing neurons in the cortex, while no significant or consistent change was detected in glial cells under any iACS scheme.
In the hippocampus, neurons remained the cell type most responsive to the three iACS schemes. According to the snRNA-seq profiling, ips-iACS led to a 46.1% increase, bi-iACS a 51.7% increase, and con-iACS a 34.5% increase in hippocampal neurons compared to sham (Figure 2I). Notably, bi-iACS was more effective than ips-iACS at increasing the neuron percentage in the hippocampus.
Real-time PCR confirmed consistent trends in hippocampal NeuN expression following the iACS treatments: compared to sham, NeuN expression increased by 61.0% (p = 0.0265, n = 3 rats per group) with ips-iACS, by 67.9% (p = 0.0155, n = 3) with bi-iACS, and by 42.9% (p = 0.1132) with con-iACS (Figure 2J). As for the other cell types in the hippocampus, oligodendrocytes decreased with the iACS treatments, while astrocytes and microglia showed no significant change across the experimental groups (Figure 2I).
Immunofluorescence revealed that all three iACS groups exhibited an increase in NeuN+ neurons within the dentate gyrus (DG) and Cornu Ammonis 4 (CA4) areas of the hippocampus. Notably, the percentage of NeuN+ neurons rose from 12.45% in sham to 23.52% (p = 0.0063, n = 3 rats per group) with ips-iACS and to 20.84% (p = 0.0272, n = 3) with bi-iACS, both statistically significant. An increase to 17.42% (p = 0.1955, n = 3) was also observed with con-iACS but did not reach statistical significance (Figure 3E,F). The zoomed-in images distinctly highlight the difference in NeuN+ cell numbers between the iACS-treated and sham groups within the DG and CA4 areas (Figure 3E). In contrast, GFAP+ glial cells showed no significant alterations following any iACS scheme compared to sham in the hippocampus (Figure 3G,H).
The snRNA-seq, real-time PCR, and immunofluorescence analyses indicated that ips-iACS and bi-iACS were highly effective in increasing neuronal numbers in the hippocampus, whereas con-iACS had no significant effect. Additionally, as in the cortex, none of the iACS schemes significantly affected glial cells in the hippocampus.
In the thalamus, which lies deeper in the brain and farther from the iACS electrodes, snRNA-seq analysis revealed an induction of the neuron population following ips-iACS and bi-iACS: ips-iACS led to a 33.4% increase in neuron numbers and bi-iACS to a 31.03% increase. However, con-iACS showed no meaningful effect, with a negligible change of −0.42% in neuron proportion compared to sham (Figure 2M). For the other thalamic cell types, ips-iACS and bi-iACS showed a non-significant trend toward decreased astrocyte, microglia, and oligodendrocyte numbers (Figure 2M).
Real-time PCR validated consistent patterns of NeuN expression in the thalamus after iACS treatment. Compared to sham, there was a 46.6% increase in NeuN expression with ips-iACS (p = 0.0430, n = 3 rats per group). A 38.4% increase was observed with bi-iACS (n = 3), but this change did not reach statistical significance. With con-iACS, NeuN expression was reduced by 0.042% (p = 0.9847), a non-significant result mirroring the trend observed in the snRNA-seq profiling (Figure 2N).
Although ips-iACS was again the most effective scheme for enhancing neuronal numbers, the snRNA-seq, real-time PCR, and immunofluorescence analyses indicated a weaker induction effect on neurons in the thalamus than in the cortex and hippocampus. Additionally, the three iACS treatments had irregular effects on the other thalamic cell types, regardless of whether ips-iACS, bi-iACS, or con-iACS was applied.
Taken together, these results demonstrated that neurons were the primary cell type responding to iACS across all three brain regions: cortex, hippocampus, and thalamus. Among the iACS schemes, ips-iACS, which generated an EF above 150 V/m (cortex: 237.3 ± 31.6 V/m; DG of the hippocampus: 204.9 ± 21.8 V/m; SVZ of the thalamus: 180.7 ± 8.9 V/m; Figure 1D-G), was identified as the most effective approach for increasing neurons in these brain regions.
Rgs9 in neurons was negatively regulated by iACS
Having identified neurons as the cell type most sensitive to ips-iACS and the other iACS schemes, we next sub-clustered the neurons (Figures S4, S6, and S8) and compared gene expression levels between ips-iACS and sham neurons, separately in the cortex, hippocampus, and thalamus, to identify the genes differentially expressed under iACS.
We profiled eight sub-clusters of neurons from the cortex (Figure S4B) and hippocampus (Figure S6B), and seven sub-clusters from the thalamus (Figure S8B). Upon sub-clustering analysis, we observed in all three brain regions a distinct sub-cluster with differential distribution between the ips-iACS and sham groups, characterized by a notable change in the Rgs9 gene in neurons following ips-iACS treatment (Figures 4A,B and S4A,B for the cortex; Figures 4E,F and S6A,B for the hippocampus; Figures 4I,J and S8A,B for the thalamus). Across all three brain regions, the proportions of Rgs9+ neuron populations were significantly decreased by ips-iACS (Figures 4C and S4C for the cortex; Figures 4G and S6C for the hippocampus; Figures 4K and S8C for the thalamus). According to the snRNA-seq data, the most pronounced change in Rgs9+ neuron numbers under ips-iACS occurred in the cortex, with a 97.5% decrease (Figure 4C), while the hippocampus and thalamus showed decreases of 59.8% (Figure 4G) and 54.0% (Figure 4K), respectively.
The real-time PCR results consistently showed decreased Rgs9 expression across the cortex, hippocampus, and thalamus following iACS treatment. In the cortex, Rgs9 expression was reduced to 21.26% with ips-iACS, 28.39% with bi-iACS, and 62.53% with con-iACS, relative to sham (100%) (Figure 4D). In the hippocampus, expression was downregulated to 45.72% with ips-iACS, 41.59% with bi-iACS, and 64.01% with con-iACS (Figure 4H). Similarly, in the thalamus, Rgs9 levels decreased to 51.85% with ips-iACS, 43.2% with bi-iACS, and 69.19% with con-iACS (Figure 4L).
Taken together, with neurons identified as the primary cell type affected by iACS, a specific neuronal subset exhibited a significant reduction in Rgs9 expression, most pronounced in response to ips-iACS. The decrease in Rgs9 expression diminished progressively from the cortex through the hippocampus to the thalamus, a trend correlated with the intensity of the in-brain EF generated by iACS, suggesting a dependency on EF intensity.
Rgs9 signaling in iACS-induced neurons
With Rgs9 pinpointed as a key iACS-responsive gene in neurons, we conducted pathway enrichment analysis of the genes upregulated in Rgs9-positive neurons within the cortex, hippocampus, and thalamus. Using Gene Ontology (GO) analysis in conjunction with the snRNA-seq data, we found that Rgs9 was associated with neuronal differentiation and maturation and with responses to drugs such as amphetamine, morphine, and nicotine (Figure 6A-C). The intracellular signaling associated with iACS-downregulated Rgs9 was further examined by analyzing protein expression in cortex tissue samples, which showed the most notable reduction in Rgs9 following ips-iACS (Figure 6D-H). Along with the downregulation of Rgs9 at the RNA level, RGS9 protein expression was downregulated with the same significance (Figure 6D,E). However, the protein expression of regulator of G protein signaling 7 binding protein (R7BP) (Figure 6D,F) and G protein subunit beta 5 (Gβ5) (Figure 6D,G) was unchanged by any of the iACS treatments. R7BP and Gβ5 are the other two members of the RGS9/R7BP/Gβ5 complex, which plays primary regulatory roles in neuronal drug responses and neuronal differentiation (25, 26). Meanwhile, nuclear β-catenin, previously identified as a downstream signal negatively regulated by RGS9 (27) and as a regulator of neuronal differentiation in response to electrical stimulation (28), was increased by ips-iACS, bi-iACS, and con-iACS (Figure 6D,H). These results indicated that iACS negatively regulates Rgs9 expression, which in turn affects the functioning of the RGS9/R7BP/Gβ5 complex, without directly affecting the R7BP or Gβ5 components; moreover, iACS triggers β-catenin activation through the downregulation of Rgs9. Both signaling routes offer a potential explanation for the neuronal enrichment observed in the iACS-treated brain.
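GO enrichment of the kind shown in Figure 6A-C can be sketched with the gseapy package against the Enrichr service; the gene list below is a hypothetical placeholder, and a real analysis would use the genes actually upregulated in Rgs9+ neurons (typically mapped to standard gene symbols first).

```python
import gseapy as gp

# Hypothetical list of genes upregulated in Rgs9+ neurons; placeholders only.
genes = ["Rgs9", "Drd2", "Adora2a", "Ppp1r1b", "Gng7"]

# Enrichr-based GO Biological Process enrichment (requires internet access)
enr = gp.enrichr(gene_list=genes,
                 gene_sets=["GO_Biological_Process_2021"],
                 outdir=None)
print(enr.results[["Term", "Adjusted P-value"]].head())
```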
DISCUSSION
EBS has been shown to be immediately safe and effective for a series of brain disorders, ever since deep brain stimulation (DBS) was approved by the Food and Drug Administration in 1997 for the treatment of PD, tremors, and other conditions (29). Since then, EBS has been widely used, both invasively and non-invasively, for neurological and neuropathological modulation. The underlying mechanisms have been explained as the modulation of neuronal polarization and spiking activity by direct current stimulation (DCS) (30), and as the entrainment and synchronization of neuronal oscillations by ACS (31, 32). Despite the accumulating studies on the neural modulating effects of DBS, DCS, and ACS, there has rarely been an investigation comparing global versus local stimulation, the cell-type response, or the gene expression changes under EBS.
Here, we provided the first snRNA-seq profiles, separately of the rat cortex, hippocampus, and thalamus, under bilateral or unilateral iACS at 40 Hz. Our primary goal was to identify potential cellular targets and assess the safety of these therapeutic interventions, particularly for healthy cells within the brain. This approach was anticipated to yield vital information that could significantly influence EBS treatment strategies for various neurological disorders. Across all the cell-type clusters under iACS, we surprisingly found neurons to be the only cluster significantly increased, in the cortex, hippocampus, and thalamus, especially with ips-iACS. The real-time PCR and immunofluorescence analyses corroborated this increase observed by snRNA-seq. The other cell types, including astrocytes, oligodendrocytes, and microglia, were either unaffected or changed irregularly without significance under any iACS scheme. These results suggested that neurons are the major cell type responding to EBS. Furthermore, ips-iACS was identified as the most effective stimulation for boosting neurons, especially in the cortex. Combined with the cell-type cluster profiles and the in-brain EF simulation and measurement data, this suggests that unilateral iACS generating an EF at 40 Hz and 100-250 V/m in the cortex, hippocampus, and thalamus is the most effective way to boost neurons in the brain, consistent with our previous findings that electrical stimulation at 100 V/m promotes neuronal differentiation of neural stem cells (28) and that iACS at 40 Hz boosts neurogenesis in the AD mouse brain (21).

When neurons across the cortex, hippocampus, and thalamus were identified as the primary cell type increased by iACS, it was presumed that the increased neurons were newly formed neurons or precursor cells, indicating iACS-induced neurogenesis. The notion of neurogenesis in the adult brain, particularly in areas such as the SVZ and hippocampus, has been debated but is increasingly supported by recent research, and endogenous neurogenesis can be induced by exogenous chemical and physical stimulation. Our work contributes to this discussion, showing that electrical stimulation can promote neuronal differentiation from neural stem cells and enhance endogenous neurogenesis in healthy and AD rat brains (21, 22, 28). Regarding the underlying molecular mechanism, we previously observed involvement of the phosphoinositide 3-kinase (PI3K)/Akt (also called protein kinase B, PKB)/glycogen synthase kinase-3 beta (GSK-3β)/β-catenin pathway. Here, to further explore the underlying mechanism and the primary regulating gene, we dissected the snRNA-seq data for iACS-induced gene expression changes and found Rgs9 to be uniquely downregulated under ips-iACS. Rgs9, also known as the regulator of G protein signaling 9, encodes RGS9, a member of the RGS family of GTPase-activating proteins that regulate various intracellular signaling pathways through G protein deactivation. Its expression and distribution pattern indicate enrichment in the brain and retina (33), where it regulates dopamine, opioid, and protein kinase A (PKA) signaling (34).
As a member of the RGS family of GTPase-accelerating proteins, Rgs9 is involved in the regulation of physiological processes, including membrane channel activity and ion flux. For instance, Rgs9 is reported to increase intracellular calcium efflux upon dopamine activation of D2 receptors (35, 36). In other studies, the encoded protein RGS9 is reported to play roles in neuronal differentiation and maturation through its complex with R7BP and Gβ5 (25, 26). Furthermore, while the β-catenin signaling pathway controls numerous cellular processes, including neuronal differentiation and neurogenesis, RGS9 is reported to negatively regulate β-catenin activation (27). Here, our GO enrichment results correlated Rgs9 with the neuronal response to amphetamine, morphine, and nicotine, as well as with the regulation of neuronal processes, all of which were modulated by iACS through Rgs9. As there has been no direct evidence linking Rgs9 function to induced neurogenesis, our GO enrichment and protein expression results indicate a potential mechanism of iACS-induced neurogenesis through negative regulation of Rgs9. Our further results provide new evidence linking RGS9 to β-catenin-controlled neuronal differentiation, which supports the hypothesis of iACS-induced neurogenesis and extends the PI3K/Akt/GSK-3β/β-catenin mechanism from our previous study, with Rgs9 as an initial upstream signal responding to iACS.
In conclusion, we provided snRNA-seq profiles of the rat cortex, hippocampus, and thalamus under iACS at 40 Hz. Neurons were identified as the specific cell type sensitive to iACS, responding with increased absolute numbers and relative proportions. Interestingly, iACS altered less than 0.1% of gene expression in neurons, with Rgs9 identified as a uniquely downregulated gene. Unilateral iACS produced a more focused local effect, attenuating the Rgs9+ neuron proportion in the ipsilateral hemisphere. As Rgs9 is a negative regulator of G protein and β-catenin activities, our results suggest a novel mechanism by which iACS could serve as a potential treatment for enhancing neurogenesis and neuronal sensitization through its unique downregulation of Rgs9 in neurons. This study advances our understanding of the effects of iACS and tACS neural modulation on healthy brain cells and genes, providing new information on target cells, responding genes, and efficient stimulation strategies for further applications of EBS in many brain diseases.
Animal and grouping
The male Sprague-Dawley rats used for this study, aged 4 months and weighing 250-300 g, were housed with ad libitum access to food and water in a room maintained at a constant temperature (20
Finite element method
The FEM was used to estimate the distribution of the EF in a 3D rat brain model. A simplified brain model was built based on MRI images of 118 brain structures with the T2 module (available at: https://www.nitrc.org/projects/whs-sdatlas). The Sim4Life platform (v7.01.8169, Zurich MedTech AG) was used to perform a quasi-electrostatic FEM simulation to calculate the electric current distribution in the brain model.
Electrode placement
Electrode implantation surgery was performed on the rats of each group 24 h before the iACS session. As described previously (21, 37, 38), the rats were anesthetized with 2% (v/v) isoflurane in an O2 flow (0.2-0.3 L/min) before surgery.
Two stainless steel screws (0-80, dia. 0.067 in.) were sterilized and implanted at the coordinates anteroposterior (AP): −4.2, mediolateral (ML): ±4.5 (for bilateral iACS), or at AP: −4.2, ML: −4.5 and AP: 1, ML: −2.5 (for unilateral iACS) (unit: mm). The coordinates were set according to "The Rat Brain in Stereotaxic Coordinates, sixth edition" (39). Throughout the surgery, the rats were placed on a thermostatically controlled warming pad, and body temperature was monitored with a rectal thermometer. Depth of anesthesia was monitored every 5 min by a toe pinch to elicit foot withdrawal. For the analgesic regimen, the rats received subcutaneous carprofen at 5 mg/kg at the time of surgery. Neurological function was assessed twice daily for the 2 days after surgery, and carprofen was administered if rats showed signs of pain or stress.
Intracranial AC stimulation
The iACS was delivered through the screw electrodes 24 h after the implantation surgery. For the treatment, the rats were anesthetized with 2% (v/v) isoflurane in O2 flow (0.2-0.3 L/min). The iACS was performed with the following parameters: 40 Hz at 250 µA (signal produced and monitored by the Neuroelectrics Starstim), for 1 h per day for 7 days. The sham rats were stimulated at 40 Hz and 250 µA for 59 s per day for 7 days as the sham-stimulation control.
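For illustration, a short sketch of the sinusoidal stimulation waveform follows; the sampling rate is an assumption, and since the source does not state whether 250 µA denotes the amplitude or the peak-to-peak current, amplitude is assumed.

```python
import numpy as np

fs = 10_000                    # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)  # one second of the 1-h session
freq_hz, amp_uA = 40, 250      # parameters from the text; amplitude assumed
current_uA = amp_uA * np.sin(2 * np.pi * freq_hz * t)
print(current_uA.min(), current_uA.max())  # -> approx. -250.0, 250.0
```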
Measurement of in-brain derived EF
For EF measurement, the rat was separately anesthetized with 2% (v/v) isoflurane in O2 flow (0.2-0.3 L/min) and placed in the stereotaxic apparatus. Two stainless steel screw electrodes were sterilized and implanted into the skull as the stimulating electrodes. Three pairs of Ag/AgCl measuring electrodes were implanted according to "The Rat Brain in Stereotaxic Coordinates", sixth edition. [37,39] In brief, for the con-iACS group, the coordinates of the three pairs of Ag/AgCl measuring electrodes were set at AP: −3, ML:
Nuclei isolation and library preparation
Centrifuge at 200 rcf at 4°C for 2 min and transfer 9 mL of supernatant to a new tube. Centrifuge at 500 rcf at 4°C for 5 min, remove the supernatant, and mix the pellet with 50 µL pre-chilled PBS. Add 200 µL DAPI staining solution, mix, and react for 2 min on ice in the dark. Add 5 mL pre-cooled PBSE, filter into a new tube, centrifuge at 500 rcf at 4°C for 5 min and remove excess liquid. Add 100-200 µL pre-cooled PBS and gently resuspend the nucleus pellet. Adjust the nucleus concentration to 3-4 × 10⁵ nuclei/mL with PBS (supplemented with RNase inhibitor and DL-dithiothreitol (DTT)), ready for use. The isolated nuclei were then resuspended in PBSE at a concentration of 10⁶ nuclei per 400 µL, filtered through a 40 µm cell strainer, and counted using Trypan blue. DAPI staining (1:1000; Thermo Fisher Scientific, D1306) was performed on the PBSE-enriched nuclei, with nuclei being identified as DAPI-positive singlets. The single-nucleus suspension concentration was adjusted to 3-4 × 10⁵ nuclei/mL in PBS and loaded onto a microfluidic chip (GEXSCOPE Single Nucleus RNA-seq Kit, Singleron Biotechnologies). The resulting snRNA-seq libraries were prepared in accordance with the manufacturer's instructions (Singleron Biotechnologies) and sequenced on an Illumina HiSeq X10 instrument to a sequencing depth of at least 50,000 reads per cell, using 150-bp paired-end (PE150) reads.
snRNA-seq data analysis
The gene expression matrices were generated from raw reads using scopetools (https://anaconda.org/singleronbio/scopetools). The first step involved filtering out reads without polyT tails and extracting cell barcodes and unique molecular identifiers (UMIs). Adapters and polyA tails were trimmed before aligning the reads to the pre-mRNA reference (Ensembl, Rnor6.0 genome). Then, reads with the same cell barcode, UMI and gene were grouped together to count the number of UMIs per gene per cell.
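As an illustration of that final counting step, the sketch below collapses reads sharing the same cell barcode, UMI and gene into single molecules and tallies UMIs per gene per cell. The tuple-based input format is a simplifying assumption, not the scopetools data layout.

```python
from collections import defaultdict

def count_umis(aligned_reads):
    """aligned_reads: iterable of (cell_barcode, umi, gene) tuples for reads
    that passed filtering and alignment (a simplified, hypothetical format).
    Reads sharing all three keys are collapsed into one counted molecule."""
    molecules = set(aligned_reads)                 # drop PCR duplicates
    counts = defaultdict(lambda: defaultdict(int))
    for barcode, umi, gene in molecules:
        counts[barcode][gene] += 1                 # one UMI = one molecule
    return counts

demo = [("AAAC", "TTG", "Rgs9"), ("AAAC", "TTG", "Rgs9"),  # duplicates
        ("AAAC", "GCA", "Rgs9"), ("CCGT", "TTG", "NeuN")]
print(count_umis(demo)["AAAC"]["Rgs9"])  # -> 2 distinct UMIs
```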
The cell number was determined using the "knee" method, a standard quality-control approach for snRNA-seq, which identifies the inflection point (or "knee") on a plot of the number of UMIs versus the number of cells. Barcodes to the left of the knee point, indicating high-quality cells, were retained for further analysis, while those to the right were excluded. The cell barcode files from the filtered matrix, corresponding to the cell fraction, were then analyzed using Scanpy v1.9 [40].
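A simple stand-in for the knee heuristic is sketched below: sort barcodes by UMI count and keep those left of the point of maximum distance from the chord of the log-log rank curve. This is a simplified heuristic applied to synthetic data, not the exact scopetools implementation.

```python
import numpy as np

def knee_cutoff(umi_per_barcode):
    """Return how many top barcodes to keep: on the log-log barcode-rank
    curve, the 'knee' is taken as the point farthest from the straight
    line joining the curve's endpoints."""
    counts = np.sort(np.asarray(umi_per_barcode))[::-1]
    x = np.log10(np.arange(1, counts.size + 1))
    y = np.log10(counts + 1.0)
    dx, dy = x[-1] - x[0], y[-1] - y[0]
    # perpendicular distance of every point from the endpoint chord
    d = np.abs(dx * (y - y[0]) - dy * (x - x[0])) / np.hypot(dx, dy)
    return int(np.argmax(d)) + 1   # barcodes left of the knee are kept

rng = np.random.default_rng(0)
fake = np.concatenate([rng.poisson(3000, 500),    # ~500 real nuclei
                       rng.poisson(30, 20000)])   # ambient-RNA barcodes
print(knee_cutoff(fake))                          # roughly 500
```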
Real-time PCR
Total RNA of brain tissue was extracted from the treated rats, followed by cDNA synthesis using PrimeScript RT Master Mix (Perfect Real Time) (TAKARA RR036A). [28] Real-time PCR reactions were performed on the Roche LightCycler 96 using the SYBR Green Premix Pro Taq HS qPCR kit (ACCURATE BIOLOGY AG11701). The primer sequences can be found in Table 1. The cycling conditions were set as: an initial preincubation step at 95°C for 2 min, followed by 40 cycles of two-step amplification at 95°C for 15 s; the melting stage was divided into three parts (95°C for 15 s, 60°C for 15 s, 95°C for 15 s), followed finally by a cooling stage at 37°C for 30 s. The mRNA expression of GAPDH was used as an internal control, and expression levels of target genes were normalized to a control sample using the 2^−ΔΔCt method for relative quantification of gene expression.
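For clarity, a minimal sketch of the 2^−ΔΔCt calculation described above is shown below; the Ct values in the example are hypothetical, with GAPDH as the internal control.

```python
def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative quantification by the 2^-ddCt method. All Ct values in the
    demo call below are hypothetical illustration numbers."""
    d_ct_sample = ct_target - ct_gapdh              # normalize to GAPDH
    d_ct_control = ct_target_ctrl - ct_gapdh_ctrl   # same for control sample
    dd_ct = d_ct_sample - d_ct_control              # normalize to control
    return 2 ** (-dd_ct)

# e.g. Rgs9 in an iACS-treated sample versus a sham control sample
print(relative_expression(26.0, 18.0, 24.5, 18.2))  # -> ~0.31-fold
```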
Immunofluorescence
As described previously, [21] the brain slices were fixed in 4% paraformaldehyde (PFA) for 30 min and then permeabilized with 0.1% Triton X-100 (Sigma-Aldrich) for another 30 min. After blocking non-specific binding with 3% bovine serum albumin (BSA)-PBS at room temperature for 1 h, the slices were incubated with the primary antibodies: NeuN
FIGURE 2 Single-nucleus RNA sequencing (snRNA-seq) profiling of the cortex, hippocampus, and thalamus of intracranial alternating current stimulation (iACS)-treated rats. (A) Photograph of a rat undergoing the iACS treatment. (B) Grouping illustration for electrode implantation, iACS and tissue collection. Uniform manifold approximation and projection (UMAP) embedding of analyzed transcriptomes from the rat cortex (C and D), hippocampus (G and H), and thalamus (K and L), annotated by cell type and treatment distribution. n = 2 rats per group for snRNA-seq. Distribution and cell numbers of identified cell types for the cortex (E), hippocampus (I), and thalamus (M). The mRNA expression of NeuN in the iACS-treated cortex (F), hippocampus (J), and thalamus (N) by real-time PCR, shown as mean ± standard deviation (SD); ***p < 0.001, **p < 0.01, and *p < 0.05 were considered significantly different between each iACS group and sham. n = 3 rats per group for the real-time PCR assay.
FIGURE 3 Immunofluorescence of neurons and glia in the cortex, hippocampus, and thalamus of intracranial alternating current stimulation (iACS)-treated rats. NeuN+ neurons in the cortex (A), hippocampus (E), and thalamus (I). Quantification of the NeuN+ neuron count percentage in the cortex (B), hippocampus (F), and thalamus (J). GFAP+ cells in the cortex (C), hippocampus (G), and thalamus (K). Quantification of the GFAP+ glia count percentage in the cortex (D), hippocampus (H), and thalamus (L). Nuclei were labeled with 4',6-diamidino-2-phenylindole (DAPI) staining. The percentages of NeuN+/DAPI+ and GFAP+/DAPI+ cells are shown as mean ± standard deviation (SD); ***p < 0.001, **p < 0.01, and *p < 0.05 were considered significantly different between each of the three iACS groups and sham. n = 3 rats per group. Scale bars: 200 or 100 µm as shown in each image.
FIGURE 4 Rgs9 gene expression differences induced by the intracranial alternating current stimulation (iACS) treatments. Volcano plots of gene expression changes in neurons between sham and ips-iACS groups in the cortex (A), hippocampus (E), and thalamus (I). Significant genes were called via DESeq2 (p < 0.05 and fold change > 2). Uniform manifold approximation and projection (UMAP) plots show Rgs9 expression in the cortex (B), hippocampus (F), and thalamus (J). Distribution and cell numbers of identified Rgs9+ neurons for sham and ips-iACS groups in the cortex (C), hippocampus (G), and thalamus (K). The mRNA expression of Rgs9 in the iACS-treated cortex (D), hippocampus (H), and thalamus (L) by real-time PCR, shown as mean ± standard deviation (SD); ***p < 0.001, **p < 0.01, and *p < 0.05 were considered significantly different between each iACS group and sham.
FIGURE 5 Immunofluorescence of Rgs9+ neurons in intracranial alternating current stimulation (iACS)-treated rat brain. NeuN+/RGS9+ cells in the cortex (A), hippocampus (D), and thalamus (G). Quantification of the RGS9+ cell count percentage in the cortex (B), hippocampus (E), and thalamus (H). Quantification of the RGS9−NeuN+/NeuN+ cell count percentage in the cortex (C), hippocampus (F), and thalamus (I). Nuclei were labeled with DAPI staining. The percentages of RGS9+/DAPI+ and RGS9−NeuN+/NeuN+ cells are shown as mean ± standard deviation (SD); ***p < 0.001, **p < 0.01, and *p < 0.05 were considered significantly different between each of the three iACS groups and sham. n = 3 rats per group. Scale bars: 200 or 100 µm as shown in each image.
FIGURE 6 Signaling pathway analysis of intracranial alternating current stimulation (iACS) regulation of neurons. Pathway enrichment analysis of downstream upregulated genes and pathways in Rgs9+ neurons from the cortex (A), hippocampus (B), and thalamus (C), using Gene Ontology (GO) biological process terms. (D) Protein expression of RGS9, R7BP, Gβ5, and β-catenin in the rat cortex after iACS treatments. (E) Quantification of RGS9 expression in (D). (F) Quantification of R7BP expression in (D). (G) Quantification of Gβ5 expression in (D). (H) Quantification of β-catenin expression in (D). Protein expression is shown as mean ± standard deviation (SD); ***p < 0.001, **p < 0.01, and *p < 0.05 were considered significantly different between each of the three iACS groups and sham. n = 3 rats per group.
TABLE 1 List of the primers used in this study.
Rational emotive health therapy for the management of depressive symptoms among parents of children with intellectual and reading disabilities in English language
Background: There is little data in developing countries such as Nigeria with regard to the impact of caring for children with intellectual and reading disability (IRD) on the quality of life of the parents and the risk of psychopathology. Objective: The main objective of the study was to assess the level of psychopathology, i.e., depression, among parents of children with intellectual and reading disabilities. Methods: This was a pretest/posttest control group design with 198 parents (99 fathers/99 mothers) of 100 children with a diagnosis of IRD. The measure used in this study for data collection was the Beck Depression Inventory (BDI). Repeated measures analysis of variance (ANOVA) was employed for data analysis. Results: The results showed a significantly high proportion of depressive symptoms among parents of children with intellectual and reading disabilities at initial assessment. Furthermore, the REHT intervention resulted in a significant reduction in depression among parents in the treatment group as compared to those in the control group. Conclusion: The presence of a child with intellectual and reading disabilities does not in itself cause parents to become depressed, but irrational beliefs about their children's mental and reading deficiencies may contribute to unhealthy thinking and feelings about the future of their children. REHT is very effective in assisting depressed parents of children with intellectual and reading disabilities to think rationally about their children and to work towards overcoming disability-related as well as behavior-related irrational beliefs. Mental health providers, therapists and counselors should apply REHT in managing people with psychological distress, especially parents of children with intellectual and reading disabilities who have a psychological diagnosis of depression.
Introduction
Parents are a critical source of support for children with special needs because they absorb the added demands on time and on emotional and financial resources that their children require. [1] Previous studies have indicated that the presence of a child with special needs, such as a child with intellectual disability (ID), is likely to trigger psychological distress among parents. [2,3] Intellectual and reading disability (IRD) is a serious source of concern for language and special educators as well as guidance counselors.
According to Masito, Warnick and Esambe, [4] intellectual disability involves impairments that significantly affect one's ability to read, write and reason. It also affects one's social judgement and interpersonal communication skills as well as one's ability to take care of oneself. Those suffering from ID include those with Down syndrome, fragile X syndrome or Rett syndrome. This matters for reading because reading is a cognitive function, and people with intellectual disabilities do not have the intelligence quotient to navigate through reading. However, those with reading difficulties do not necessarily suffer from intellectual disabilities. In this study, reading disability is not studied on its own but is taken as a fallout of intellectual disability.
Children with IRD are children with special needs, and most parents of children with intellectual and reading disability (CIRD) suffer from psychological distress. [5] In addition to this psychological distress, most parents of children with at least severe IRD tend to be marked by pessimism, anhedonia, a lack of initiative and feelings of hopelessness. [6] In psychological diagnosis, such distress often falls under the umbrella of depressive illness. [6] Studies have also indicated that parents of CIRD tend to exhibit a higher magnitude of various spectra of depressive symptoms, [7] including hypersomnia, poor concentration, social isolation, anger, frustration, unrealistic expectations, poor appetite, and loss of interest in previously enjoyed activities and of motivation.
Parents of CIRD report more psychological distress when their children begin to display dysfunctions such as communication difficulties, lack of social and practical skills, and delayed or limited mental functioning. [8] CIRD seem to be a source of disappointment, unhappiness and regret to parents, [9] and due to the ignorance of Nigerian citizens, CIRD are abused, discriminated against and unaccepted, with negative attitudes shown towards the children and their families. [10] These reactionary tendencies and unfavorable comments are likely to trigger depressive symptoms among parents. Symptoms of depression may not necessarily signify a diagnosis of depression, but they definitely raise a red flag.
Nigeria has one of the highest reported rates of childhood intellectual disability in the world, at 75/1000 for mild ID. [11] In other developing countries such as Pakistan, the rate of childhood ID is 65/1000 for mild ID. [12] Studies conducted in developing countries such as Nigeria have indicated consistently high rates of depression and anxiety disorders, with 10-44% of people suffering from depression and anxiety. [13] In addition, about 50 million people suffer from depression and other related disorders due to kidnapping, insurgency and starvation in the North-East and other parts of Nigeria. The Federal Neuro-psychiatric Hospital, Nigeria reported that about 60% of Nigerians attending primary healthcare have mental disorders ranging from depression and psychosis to posttraumatic disorders. [13] Apart from insurgency, these mental disorders may result from the presence of a child with IRD in the family. [14] The prevalence of depression appears to be skewed towards a higher magnitude among parents of CIRD, and it is widely reported that parents of CIRD have more mental distress than parents of children with other disabilities. [15] One empirical question is whether these psychological disorders stem from preexisting mental distress or are triggered by the burden of caring for the child with IRD. [6] Studies have indicated that, rather than preexisting mental illness, the burden of caring for the child with IRD has a direct bearing on the development of psychological disorders among parents. [16,17] There is a vast research database comparing psychological distress among parents of CIRD with parents of children without ID. For instance, Keskin [18] reported high traits of depression and anxiety in parents, and Mirza and Jenkins [19] reported prevalence rates of 34%. Heller, Hsieh and Rowitz, [20] Saloviita, Itälinna and Leinonen, [17] and Simmerman, Blacher and Baker [21] reported increased burden and stress; Kersh, Hedvat, Hauser-Cram and Warfield [22] reported poorer parenting efficacy; and Herring, Gray, Taffe, Tonge, Sweeney and Einfeld, [23] and Blacher and Baker [1] reported poorer marital adjustment. These findings have important implications for therapists working in behavioral intervention, as they suggest that improvement in the child's behavior may lead to a decrease in parents' stress and improved parental mental health. [24] Aside from behavior, other types of care demands can create distress and depression for parents, [24,25] including adaptive behavior deficits [17,26] and medical needs. [24] All of these have been associated with negative parental impact, stress and burden. Thus, the researchers suggest that parents' outcomes may be determined not simply by the presence or absence of a disability; maladaptive behavior and care needs may be the important risk factors for parental impact and stress. [27] In caring for CIRD and children with other disabilities, mothers are usually the primary caregivers. [28] However, fathers also share the care responsibilities with their spouses. [29] In this regard, it is worthwhile to note that a father's support can lead to significant improvement in maternal well-being and lower levels of parental psychological distress. [29] So, including fathers in this research is necessary to give a complete picture of parental mental health and to compare psychological stress such as depression between both parents in families having a child with IRD.
To date, the literature has shown that there is a need to address the depressive symptoms of parents of CIRD as a medical condition in order to ameliorate parental depressive disorders. [23] Different treatment measures, including cognitive restructuring, goal setting, respite care, cognitive and behavioral problem solving and, in particular, cognitive therapy, are usually used to assist those suffering from depressive symptoms. Previous studies have indicated that one form of therapy that has proved effective in addressing many types of psychological distress is rational emotive behavior therapy (REBT). For instance, Ellis and Grieger [7] explain that emotional disturbance is connected with dysfunctional cognitive behavior known as irrational beliefs. Ellis [30] noted that irrational beliefs may include catastrophizing a situation, judging oneself as worthless, incompetent or in despair, and turning these judgments into problems. With regard to depressive symptoms, the irrational beliefs may include: "I do not see my child as a worthy child"; "I cannot go along with my child in public, to avoid social stigma"; "I am not emotionally strong enough to see my child being an object of ridicule"; "I am horribly ashamed of myself whenever I see my child's disability"; "I feel disappointed whenever I see my child with normal children from other parents". REBT helps to reduce the emotional stress of parents by promoting more realistic, logical and flexible thinking. [7,31,32] Ellis [33] maintains that if individuals experience negative behavioral and emotional consequences, more positive consequences will emerge once irrational beliefs are disputed and replaced with new effective beliefs, as can be seen in the case of depressive symptoms.
The present study demonstrates the development of rational emotive health therapy (REHT) from the principles of REBT. REHT is a form of cognitive behavioral health therapy for the management of depressive symptoms. Omeje et al, [32] who used REHT in the treatment of alcohol use disorder among community-dwelling HIV-positive patients in Enugu, revealed a significant reduction in the use of alcohol by the participants. It is possible that REHT may assist parents who are depressed due to behavioral issues and care demands to reduce their irrational thinking about their children. In this regard, therefore, the objective of this study was to ascertain the effect of REHT on depressive symptoms among parents of children with intellectual disability. The researchers hypothesized that there would be a significant effect of REHT on measurable depressive symptoms among fathers and mothers of children with intellectual and reading disabilities.
Ethical approval
The Department of Educational Foundations at the University of Nigeria, Nsukka provided the researchers with approval to conduct the study. The researchers conformed to the Helsinki ethical principles of psychological research with human participants. [34]
Area of the study
The study was conducted in Enugu State, Nigeria.
Design of the study
This was a pretest/posttest control group design.
Participants
The participants were 200 parents who self-reported never having experienced symptoms of depression or related psychological distress prior to their children being diagnosed with ID. The study sample included 198 parents (99 fathers/99 mothers) of 100 children aged 5-20 years who met the other inclusion criteria for the study. Participants were randomized into experimental and control groups using Random Allocation Software. [35] The mean age of the group was 40.16 years (SD = 8.8), ranging from 24 to 45 years. After obtaining written informed consent, the participants were recruited from 3 health institutions in Enugu. About 98% of the parents whom the researchers approached willingly gave consent to participate in the study. The parents were assessed for depression using the Beck Depression Inventory (BDI).
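A minimal sketch of 1:1 random allocation follows, mimicking the idea behind a dedicated tool such as Random Allocation Software rather than that tool itself; the seed and ID format are arbitrary assumptions.

```python
import random

def allocate(participant_ids, seed=42):
    """Shuffle participants and split them 1:1 into treatment and control.
    A simplified stand-in for the Random Allocation Software used in the
    study; the seed value is an arbitrary choice for reproducibility."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

groups = allocate(range(1, 199))                         # 198 participants
print(len(groups["treatment"]), len(groups["control"]))  # -> 99 99
```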
Furthermore, 62.6% of the participants were self-employed, 20.2% were civil servants, and 17.2% were not doing any meaningful jobs. All the participants met the diagnostic criteria of the Medical Research Ethics Committee at the College of Medicine and Health Sciences. Parents of CIRD were assigned to treatment and control groups: 99 mothers to the treatment group and 99 fathers to the control group.
Measures
Beck Depression Inventory. This is a 21-item multiple-choice self-report inventory, originally developed in 1961, that measures the intensity, severity and depth of symptoms of depression in parents of CIRD. The BDI takes about 10 minutes to complete. The internal consistency of the BDI ranges up to r = .93, with a mean of .86; alpha coefficients of .86 and .81 have been reported for psychiatric and non-psychiatric patients, respectively. The internal consistency of the BDI in the present study was .84. The demographic characteristics are shown in Table 1.
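Internal consistency of the kind reported here is typically Cronbach's alpha; a minimal sketch with a synthetic 100 × 21 item-score matrix (not study data) is shown below.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                         # 21 items for the BDI
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
trait = rng.normal(1.5, 1.0, (100, 1))     # latent depression level
items = np.clip(np.round(trait + rng.normal(0, 0.8, (100, 21))), 0, 3)
print(round(cronbach_alpha(items), 2))     # high alpha for correlated items
```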
Procedures
The researchers advertised the study at the study site between November 2016 and June 2017. A total of 198 parents with a high score (≥28) on the BDI, indicating the presence of severe depressive symptoms with irrational beliefs, were selected and randomly assigned into treatment and control groups.
The treatment process for managing depressive symptoms was based on the REHT Treatment Manual for Depressive Symptoms (RTMDS), adapted from the REBT Depression Manual (RDM) developed by David et al (2004). Participants in the treatment group (n = 99) and participants in the control group (n = 99) took a pretest before the intervention (Time 1) in the REHT program. The participants in the treatment group received the REHT intervention program: ten sessions, each lasting 50 minutes, held once per week for 5 consecutive weeks. Participants in the control group (n = 99) received conventional counseling for ten sessions, each lasting 50 minutes, held once per week for 5 consecutive weeks. After the end of the intervention period, a posttest (Time 2) for depression was administered to both groups using the same measure, 2 months after completing the study.
Intervention
The Rational Emotive Health Therapy Treatment Manual for Depressive Symptoms was used for the intervention. This manual was adapted from the REBT principles developed by David et al [36] that are used for depressed people. The RTMDS is based on the framework of rational emotive and cognitive behavior therapy. Before using this manual, the researchers created rapport with the participants. At this point, the rules of the therapy, the rationale behind the use of REHT for reducing depression, and the goals of the study were explained to them clearly. The RTMDS focuses on problematic beliefs such as self-downing, catastrophizing and low frustration tolerance. Cognitive, behavioral and emotive techniques were used to change the participants' target problematic beliefs related to depressive disorders. Cognitive techniques included thinking about how to manage their depressive symptoms and disputing their unhealthy thoughts about their children. Behavioral techniques involved instructing the participants in practical steps to help them cope with depressive symptoms, such as hiding the child from people and having horrible feelings towards the child. Emotive techniques were used to assist the participants in changing their negative thoughts on an emotional level: humorous stories, child-related poems and negative satiric songs were used to generate feelings that helped change negative thoughts towards children with disabilities.
Following the principles of rational emotive health therapy, the researchers disputed the disability-related irrational thoughts of the participants, which included: "I do not see my child as a worthy child"; "I cannot go along with my child, to avoid social stigma"; "I am not emotionally strong enough to see my child being an object of ridicule"; "I am horribly ashamed of myself whenever I see my child's disability"; "I feel disappointed when I see normal children from other parents"; "Giving my child out to a motherless babies' home is a big relief to my problem"; and others. Participants were taught to do away with the discomfort and distress caused by irrational beliefs in the following ways: they were taught that a child is a gift from God, whether disabled or not, and to be patient and wait for God's intervention; they were taught to dispute the irrational beliefs that brought about the discomfort and distress and to adopt rational beliefs such as "I can accept my child no matter the disability" and "I can now plan for my child's future"; and they were taught to do away immediately with the discomfort and distress by eliminating the predisposing factors that caused it. The RTMDS served as an invaluable guide for the REHT intervention in the treatment group.
Table 1. Demographic characteristics of the participants.
Data analysis
The researchers employed IBM SPSS Statistics 20 to carry out the statistical analysis, including screening for missing values and violations of assumptions. A repeated measures analysis of variance (ANOVA) was employed to ascertain the effect of REHT on depressive symptoms in the treatment group as compared to the control group. Partial eta squared was used as the measure of effect size for this design.
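As an illustration of the analysis, below is a sketch of a two-group (between) by two-time-point (within) mixed ANOVA with partial eta squared, assuming the pingouin package is available; all numbers are synthetic, not the study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg  # assumed available; reports partial eta squared as 'np2'

# Synthetic long-format data mimicking the design: 2 groups x 2 time points.
rng = np.random.default_rng(7)
rows = []
for grp, drop in [("REHT", 35.0), ("control", 3.0)]:  # hypothetical effects
    pre = rng.normal(56, 3, 99)
    post = pre - drop + rng.normal(0, 3, 99)
    for i, (a, b) in enumerate(zip(pre, post)):
        rows += [(f"{grp}{i}", grp, "pre", a), (f"{grp}{i}", grp, "post", b)]
df = pd.DataFrame(rows, columns=["id", "group", "time", "BDI"])

# Group x time interaction; note np2 = F*df1 / (F*df1 + df2).
aov = pg.mixed_anova(data=df, dv="BDI", within="time",
                     subject="id", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```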
Results
In Table 2, the results of the analysis are presented. There were no baseline differences in depressive symptoms between participants in the treatment (M = 57.37, SD = 3.68) and control (M = 55.38, SD = 2.23) conditions, F(1, 196) = 11.917, p = .692, η²p = 0.690. The η²p value of 0.690 indicates a comparable burden of depressive symptoms in the treatment and control groups at pretest. This showed that depressive symptoms of the parents were significantly high across the treatment and control groups at baseline. As shown in Table 2, the repeated measures ANOVA revealed a significant treatment-by-time interaction effect for depression, F(1, 196) = 4.994, p = .000, η²p = 0.889. The results showed a significant reduction from Time 1 to Time 2 in depression (p = .000) for the REHT group, while for the control group there was no significant change over the same period. In line with the researchers' prediction, REHT significantly reduced the depressive symptoms of mothers in the treatment group (M = 17.71, SD = 2.21) as compared to fathers in the control group (M = 36.81, SD = 4.11).
Figure 1 shows how REHT significantly reduced the depressive symptoms of mothers in the treatment group as compared to fathers in the control group over time.
Discussion
The objective of this study was to ascertain the effects of REHT on depressive symptoms among parents of CIRD in Enugu, Enugu State, Nigeria.
The results showed that a significantly high proportion of parents of CIRD had a psychological diagnosis of depression across the treatment and control groups at initial assessment. This finding is congruent with previous studies, such as those of Floyd and Gallagher [2] and Greenberg et al, [3] which reported that the presence of a child with intellectual and reading disabilities can trigger psychological distress among parents, which may likely lead to depression. In addition, in agreement with the present study, Gramm and Neibour, [37] Khamia, [38] and Saloviita, Itälinna and Leinonen [17] reported that the burden of caring for a child with intellectual and reading disabilities has a direct bearing on the development of psychological disorders among parents. Crucially, REHT significantly reduced the depressive symptoms of mothers in the treatment group as compared to fathers in the control group.
The findings of the present study indicate that the REHT-based intervention was efficacious in the treatment of depressive disorders among parents of CIRD. Omeje et al, [32] who used REHT in the treatment of alcohol use among community-dwelling HIV-positive patients, reported that rational emotive health therapy was efficacious in reducing the level of alcohol-related irrational beliefs among the participants. The participants who were exposed to REHT were able to recognize and change their self-defeating thoughts and beliefs, develop healthy behavior, and become more thoughtful in managing emotional behavior. REHT participants were encouraged to learn that healthy thoughts, emotions and behaviors are key to understanding their situations. In contrast, the control group participants did not show any change in depressive symptoms between baseline and follow-up.
The finding of this study is in line with the argument that the mere presence of a child with IRD in the family does not by itself cause parents to develop symptoms of depression. Rather, it is irrational beliefs about the future of their children that lead to the unhealthy thinking and self-defeating behaviors that result in depressive symptomatology. In light of the study's results, parents are urged to support REHT interventions designed to reduce depressive symptoms in those who are already depressed.
Limitations and suggestions for further studies
Limitations of the study were the small sample size relative to the population and the lack of a comparison group of parents of children without intellectual disability. Further studies are required, including multiple studies across the state and country, comparing rural and urban settings, with a large sample size and a control group.
Although participants self-reported never having experienced symptoms of depression or related psychological distress prior to their children being diagnosed with ID before being included in the study, there are other correlates of depression that could trigger symptoms in parents even after their children have been diagnosed with ID. Therefore, future studies should endeavor to confirm ID as the central cause of the depression before intervention.
Conclusion
There was a high rate of depressive symptoms among parents of CIRD in this study. Rates of depressive symptoms were even higher among mothers as compared to fathers. Therefore, based on the findings of the study, we conclude that REHT assisted parents of CIRD in significantly reducing their depressive symptoms as compared to the control group. This study is of benefit to different groups of people, including mental health providers, caregivers, parents and therapists. For parents, REHT may enable them to learn how to change their depressive disorders and become more thoughtful in managing their emotions.
"year": 2022,
"sha1": "2a2d828ec86aeedb0a3f56e7ea4066304385191f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "WoltersKluwer",
"pdf_hash": "2a2d828ec86aeedb0a3f56e7ea4066304385191f",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
CAR T-Cell Therapy in Hematological Malignancies
Chimeric antigen receptor (CAR) T-cells are a promising therapeutic approach for treating hematological malignancies. CAR T-cells are engineered autologous T-cells expressing a synthetic CAR that targets tumor-associated antigens (TAAs) independent of major histocompatibility complex (MHC) presentation. The most common target is CD19 on B-cells, predominantly used for the treatment of lymphoma and acute lymphocytic leukemia (ALL), leading to the approval of five different CAR T-cell therapies for clinical application. Despite encouraging clinical results, treatment of other hematological malignancies such as acute myeloid leukemia (AML) remains difficult. In this review, we focus especially on CAR T-cell application in different hematological malignancies as well as strategies for overcoming CAR T-cell dysfunction and increasing their efficacy.
Introduction
In cell-mediated immune responses, T-lymphocytes (T-cells) play a pivotal role in surveilling and eliminating tumor cells or pre-malignant cells. If T-cell activity is impeded, cancer can develop [1]. Since many cancer types acquire the ability to silence anti-cancer immune responses, scientists have developed strategies to fight back with immunotherapy, based on boosting a patient's own immune system to attack the cancer cells [2]. T-cell-based adoptive immunotherapy is an approach to modify and redirect T-cells against cancer cells. As a part of this, CAR T-cell therapy is a relatively new treatment option, based on reprogramming a patient's own T-cells with a CAR construct and returning them into the patient's blood, where they start to attack cancer cells [3]. This technique was first demonstrated by the Eshhar lab, which paved the way for a chimeric cancer therapy [4]. The CAR itself functionally replaces the endogenous T-cell receptor (TCR) and is a hybrid protein composed of four different components. The extracellular domain is usually a single-chain variable fragment (scFv) derived from a Fab or a monoclonal antibody, coupled via a flexible linker, which determines the antigen specificity. The hinge region, derived from CD4 or IgG4, connects the extracellular to the transmembrane domain and is important for conformational flexibility. The intracellular domain is composed of a co-stimulatory domain such as CD28, 4-1BB, ICOS or OX40, imitating the costimulatory signal of the TCR during activation. The stimulatory domain represents the CD3ζ chain of a TCR or FcRγ, finalizing the activation process [5][6][7]. The activated CAR T-cells specifically identify targets on cancer cells, leading to their destruction. A main advantage herein is that the recognition is not restricted by the MHC. The first application field of CAR T-cell therapy has been hematological malignancies like ALL, chronic lymphocytic leukemia (CLL) and multiple myeloma (MM), since they are easier to target than solid cancers with regard to finding an adequate tumor antigen [8,9]. So far, five CAR T-cell therapies have been approved by the Food and Drug Administration (FDA), four of them targeting CD19, the most frequently used antigen. Recently, in March this year, an anti-BCMA CAR T-cell therapy (Idecabtagene vicleucel) for the treatment of multiple myeloma was approved [10]. However, various hematological diseases such as acute myeloid leukemia (AML) or Richter's syndrome still lack successful breakthroughs in CAR T-cell therapy [11]. In this review, we want to provide an updated overview of CAR T-cell treatment options in hematological malignancies, as well as address strategies to overcome CAR T-cell dysfunction and new approaches for combination with other therapies, which will undoubtedly change the field of autologous T-cell immunotherapy.
CAR T-Cell Therapy in Hematologic Malignant Neoplasms
Until today, CAR T-cell therapy is mainly performed in the context of hematological malignancies, but an increasing number of trials are also conducted in solid tumor patients (Figure 1; clinicaltrials.gov) [12]. In this section, we focus on CAR T-cell therapy in leukemias, lymphomas and myelomas.
Figure 1. CAR T-cell therapy in clinical trials. The left pie chart shows the number (n) of CAR T-cell therapies in clinical trials categorized into solid cancers, others and hematological malignancies (n = 934). The hematological malignancies are further listed within the right pie chart (n = 722). Data taken from clinicaltrials.gov and filtered for each disease separately [13]. The search criterion was "CAR" and all hits were manually filtered for each category shown.
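The tallying described in the caption can be reproduced offline; below is a sketch that filters a ClinicalTrials.gov CSV export by the "CAR" keyword and bins conditions into the three categories. The file path and column names are assumptions to adapt to the actual export format.

```python
import csv

def count_by_category(path, keyword="CAR"):
    """Tally trials per category from a ClinicalTrials.gov CSV export.
    'Title' and 'Conditions' are hypothetical column names; the keyword
    filter mirrors the manual search criterion described in Figure 1."""
    counts = {"hematological": 0, "solid": 0, "other": 0}
    with open(path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if keyword.lower() not in row["Title"].lower():
                continue
            cond = row["Conditions"].lower()
            if any(k in cond for k in ("leukemia", "lymphoma", "myeloma")):
                counts["hematological"] += 1
            elif any(k in cond for k in ("carcinoma", "sarcoma", "glioma",
                                         "tumor")):
                counts["solid"] += 1
            else:
                counts["other"] += 1
    return counts
```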
CAR T-Cell Therapy in Acute Lymphoblastic Leukemia
ALL is caused by malignant precursor B- or T-lymphocytes affecting normal blood cell production in the bone marrow [14]. It is the most common form of leukemia in children, with a better prognosis compared to adults [15]. The incidence of B-ALL in adults is much higher compared to T-ALL [16]. Frontline therapy usually consists of chemotherapeutics. In high-risk patients, classified based on immunophenotype, somatic genetic alterations, site of relapse, prior therapy and time until relapse, an allogeneic hematopoietic stem cell transplantation in first remission as well as targeted immunotherapy is additionally advised [17,18]. To date, one CAR T-cell therapy is approved by the FDA for this indication, namely Kymriah [19] by Novartis, demonstrating marked effects in treating B-ALL, with 81% overall remission within 3 months [20]. Kymriah targets CD19, a B-cell surface marker, leading to a well-tolerable B-cell aplasia as an off-tumor effect [21]. Since then, several other trials have started using different CAR constructs, stimulatory and co-stimulatory domains and adjusted manufacturing processes, because some patients still become insensitive to CAR T-cell therapy due to antigen loss of the tumor or CAR T-cell exhaustion [22]. Therefore, other targets instead of CD19 are being used. Currently, more than a hundred clinical studies are registered, investigating multiple targets varying from CD20 as a potential target to bispecific CAR T-cells using CD19 and B-cell maturation antigen (BCMA) [23]. The AMELIA study, using CD19 and CD22 as targets, achieved over 75% complete responders in three different groups varying in the administered dose of CAR T-cells (NCT03289455). To treat T-ALL, CAR T-cell therapies in clinical trials are targeting CD7 (NCT04572308, NCT04033302) or CD5 (NCT04594135), for example.
CAR T-Cell Therapy in Chronic Lymphocytic Leukemia
CLL is a very heterogeneous disease characterized by the accumulation of CD5/CD19 double-positive B-cells in peripheral blood and lymphoid compartments. CLL is accompanied by immune dysregulation, including T-cell abnormalities such as impaired synapse formation, impaired proliferative capacity of T-cells, an exhausted T-cell phenotype and a diminished ability of T-cells to execute cytotoxicity [24]. The risk of developing the disease increases with age, and CLL is more common in the Western world [24,25]. Conventional therapy for symptomatic CLL patients includes monoclonal antibodies, chemotherapy and immunotherapy, depending on the diagnosis and progression of the disease [25,26]. Although there is already a plethora of new therapeutics, including BTK inhibitors, PI3K inhibitors, BCL2 inhibitors and Fc-engineered monoclonal antibodies, CLL is still mostly incurable [25]. CAR T-cell therapy has been investigated for patients with relapsed or refractory disease, mostly using CD19 as the target. Compared to ALL or diffuse large B-cell lymphoma (DLBCL), response rates are markedly worse in CLL. In a study by Geyer et al., the overall response rate was only 38% and the complete response rate was 25%, with a median overall survival of 17 months [27]. Frey et al. observed an overall response rate of 44% with only 28% complete responders; the median overall survival was 64 months in this study [28]. Despite the challenges and relatively low response rates in CLL, there are potential applications for CAR T-cell therapy in this disease. Some clinical studies are focusing on CAR T-cells as a consolidation therapy for patients with incomplete remissions [29]. Furthermore, this could be a potential application in elderly patients with comorbidities, as a therapy with fewer adverse events compared to allogeneic transplantation.
CAR T-Cell Therapy in Richter's Syndrome
Richter's syndrome is usually the transformation of a CLL into a more aggressive malignancy, such as a diffuse large B-cell lymphoma, with a relatively poor prognosis. The disease is very aggressive, and the median survival is five to eight months [30]. Since many patients with Richter's syndrome have undergone extensive treatment before the transformation of the disease, treatment options are limited. In younger patients, an allogeneic hematopoietic stem cell transplant (HSCT) is indicated; in adult patients, immunotherapy is indicated [31]. Although CAR T-cell therapy was first examined in CLL, it may also help Richter's syndrome patients with limited treatment options [30,31]. The Mayo Clinic in Rochester, Minnesota, started a clinical trial very recently, in May this year, enrolling patients with relapsed/refractory B-cell malignancies, including Richter's syndrome, to be treated with CD19-directed CAR T-cell therapy (NCT04892277). Kittai et al. reported on nine patients at the Ohio State University James Comprehensive Cancer Center who received the CD19-directed CAR T-cell therapy axicabtagene ciloleucel [32]. Eight of the nine patients were pretreated with kinase inhibitors, and one patient died due to an infection. Five of these eight patients showed a complete response and three a partial response; so far, only one patient has relapsed. Despite these encouraging results, far more investigation in this field is needed.
CAR T-Cell Therapy in Lymphoma
The first approved CAR T-cell therapy was the CD19-directed Kymriah, for treating relapsed and refractory ALL and diffuse large B-cell lymphoma (DLBCL) [19]. DLBCL is one of the most common forms of non-Hodgkin lymphoma (NHL), making up to 40% of all lymphomas [33]. In the ZUMA study, patients with refractory large B-cell lymphomas were treated with CD19-targeted CAR T-cells (Yescarta), showing 58% complete responders and 25% partial responders [34]. Durable responses of over two years were seen, leading to the FDA approval of Yescarta (axicabtagene ciloleucel) in 2017 [35,36]. Recently, in March 2021, a new CAR T-cell therapy was approved by the FDA, namely Breyanzi (lisocabtagene maraleucel), for treating refractory large B-cell lymphomas such as DLBCL, high-grade B-cell lymphoma, primary mediastinal large B-cell lymphoma and follicular lymphoma [37]. For the treatment of mantle cell lymphoma (MCL), the FDA approved the anti-CD19 CAR T-cell therapy Tecartus (brexucabtagene autoleucel) [38]. So far, only CD19-targeted therapies for B-cell lymphoma are approved, indicating a need to shift the focus to other targets as well. A study from 2014 (NCT01735604) revealed a response in 4 out of 7 patients treated with CD20 CAR T-cells [39]. Another potential target is CD30, a membrane protein on activated B- and T-cells belonging to the TNF receptor family. In a study treating Hodgkin lymphoma (HL) patients with CD30 CAR T-cells, seven out of 18 patients achieved a partial response [40]. Further investigation will be necessary to unravel new targets, making CAR T-cell therapy applicable to a wide variety of patient characteristics [41].
CAR T-Cell Therapy in Multiple Myeloma
In multiple myeloma, malignant plasma cells accumulate in the bone marrow, repressing normal hematopoietic cell production and osteoblast function [42]. The malignant plasma cells produce complete and incomplete immunoglobulins, so-called paraproteins, which have no function. To date, the disease is almost incurable, and various therapies, including chemotherapy, HSCT and immunomodulatory drugs, can only keep the disease stable over time and relieve symptoms [43]. CD19-targeted CAR T-cell therapies seem to be incapable of curing MM and achieve only minor effects in MM patients, since CD19 is expressed only at low levels on their surface [44]. Consequently, several clinical trials are investigating different targets, above all BCMA, which is expressed on mature B-cells and plasma cells, making it a promising target for CAR T-cell therapy in MM [42]. In the phase I CRB-402 clinical trial, CAR T-cells targeting BCMA (NCT03274219) were tested, showing a response rate of 86%. Further studies are investigating new targets, and currently over a hundred trials are registered for treating MM with CAR T-cell therapy [13]. CD138, or syndecan-1, is especially expressed on MM cells and is therefore an interesting new target. A small clinical study (NCT01886976) assessed the safety and efficacy of a CD138-directed CAR T-cell therapy and found a response rate of 80%, with stable disease for over three months [45]. Recently, the first anti-BCMA CAR T-cell therapy, named Abecma (idecabtagene vicleucel), was approved by the FDA for the treatment of relapsed and refractory MM.
CAR T-Cell Therapy in Acute Myeloid Leukemia
AML is a disease of the myeloid blood cell lineage, arising mostly from genetic or epigenetic changes affecting normal blood cell production in the bone marrow [46]. Besides chemotherapy, an allogeneic hematopoietic stem cell transplantation can help to induce complete remission. Since AML is a genetically heterogeneous disease, characterization of the disease determines the therapy options [47]. The major limitation for the use of CAR T-cell therapy in AML is the absence of a targetable antigen, since many myeloid antigens are also expressed on healthy hematopoietic stem and progenitor cells (HSPCs), leading to destruction of the bone marrow [11]. As a consequence, targets have to be chosen carefully to achieve only minor and tolerable toxicities for the patients. The first AML CAR T-cell therapy was directed against the Lewis Y antigen, showing only very limited efficacy [48]. By now, over twenty clinical trials are enrolling and recruiting patients for CAR T-cell therapy in AML, targeting predominantly CD123, CD33 and CLL-1. CD123 and CD33 are mainly expressed on AML blasts; however, they can also be found on healthy HSPCs [49]. CLL-1 is highly expressed in AML but also on monocytes and other non-hematological cells [50]. Since response rates are limited so far, scientists are pursuing combinatorial targets in CAR T-cell therapy. For example, patients are currently being recruited at Zhujiang Hospital in China for treatment with CD38/CD33/CD56/CD123/CD117/CD133/CD34/Muc1 CAR T-cells (NCT03473457). A clinical trial at the Dana-Farber Cancer Institute in Boston, Massachusetts, used CAR T-cells targeting NKG2D ligands (NCT02203825), which showed very poor responses in acute myeloid leukemia/myelodysplastic syndrome and relapsed/refractory multiple myeloma, with all patients receiving follow-up alternative therapies [51]. Further clinical trials targeting other antigens, such as CD44v6 (NCT04097301), are currently recruiting patients.
Overcoming CAR T-Cell Dysfunction
Antigen recognition is a crucial point in CAR T-cell therapy, since many patients experience a relapse when the tumor cells become negative for the target antigen. Conversely, off-target cross-reactivity in CAR T-cell therapy is still a problem. Hence, a major challenge is to improve the antigen recognition and specificity of CAR T-cells. Bispecific CAR T-cells recognize two or more tumor-associated antigens simultaneously, for example, CD19 and CD20 [52]. Furthermore, mixing different CAR T-cells that target the same antigen, or tandem CARs (TanCARs) co-targeting two different tumor antigens, may enhance therapeutic efficacy [53]. Enhancing the proliferative capacity and persistence of CAR T-cells can be addressed by optimizing costimulatory signaling domains. Incorporating one or more costimulatory domains into the CAR construct can influence effector function. CD28 and 4-1BB are widely used, but ICOS, OX40, CD27 and many more are also under investigation [54][55][56][57][58]. CAR T-cells based on 4-1BB costimulation are known to have greater persistence, while CD28 costimulation enhances proliferation and tumor elimination [59]. Another strategy is to modify cytokine expression via so-called T-cells redirected for universal cytokine killing (TRUCKs). These 4th-generation CAR T-cells deliver a transgenic protein of interest to the targeted tissue upon antigen-triggered signaling. In detail, these CAR T-cells are synthetically engineered to carry an inducible expression cassette driven by a transcription factor, leading to expression of the transgenic cytokine upon signaling [60]. Furthermore, Shum et al. created transgenic T-cells with an IL-7 receptor construct (C7R) incorporated alongside the CAR. Constitutive signaling is promoted when an antigen is encountered, thus activating intracellular STAT5 signaling, the major nodal point of IL-7 signaling, and supporting anti-tumor activity [61]. Optimization of structural components can also include knocking out negative regulators, which is a powerful tool to overcome an immunosuppressive tumor microenvironment (TME). Immune checkpoint molecules play pivotal roles in tumor-T-cell interactions, leading to T-cell exhaustion, tolerance and ultimately dysfunction [62]. The CRISPR/Cas9 tool enables the knockout of immune checkpoint molecules such as PD-1, CTLA-4 and LAG3 in CAR T-cells [63]. The knockout of negative regulators such as transcription factors, for example NR4A, which correlates with PD-1 and TIM3 gene expression, can help to induce tumor regression [64]. Expression of a dominant negative receptor (DNR) on the surface of a CAR T-cell pursues the same goal: the engineered PD-1 DNR lacks the PD-1 transmembrane and intracellular signaling domains, augmenting CAR T-cell cytotoxicity [65]. Another synthetic biology approach is chimeric switch receptors (CSRs), which convert negative into positive signals by reversing the suppression exerted by inhibitory molecules [66]. Liang et al. engineered CD19-targeted CAR T-cells expressing a PD-1 CSR to treat patients after CD19 CAR T-cell failure and to suppress PD-1/PD-L1-mediated T-cell exhaustion; three of six patients achieved a complete response [67]. To abrogate and limit the cytotoxicity of CAR T-cells, they can be engineered with safety switches, which can inactivate and eliminate the CAR T-cells.
Safety switches include suicide genes such as inducible caspase 9 (iCasp9) fused with an FK506-binding protein, incorporated into the CAR construct, leading to dimerization and ultimately apoptosis upon addition of a synthetic dimerizer drug [68]. Moreover, limiting CAR T-cell long-term persistence can also prevent toxic effects. This can be achieved by using therapeutic antibodies that specifically recognize CAR T-cells, leading to their elimination. These are just examples of the powerful tools available to modify CAR T-cells. Increased development of synthetic biology interventions is needed to facilitate personalized medicine in the field of CAR T-cell therapy.
CAR T-Cell Therapy and Combination Therapies
For lymphoma and ALL, CAR T-cell therapy has shown remarkable results in treating patients, but for CLL, for example, results are not as promising [27]. Therefore, several studies are investigating CAR T-cell therapy in combination with other therapies to maximize therapeutic efficacy while preserving patient safety. A research focus also lies on CAR natural killer cells (NCT04887012) and CAR natural killer T-cells (NCT03294954).
Monoclonal Antibodies
Monoclonal antibodies used for cancer treatment either target tumor-associated antigens to induce cytotoxicity or are used to block receptor-ligand interactions. In this regard, immune checkpoint inhibitors are antibodies that block the inhibitory T-cell receptors CTLA-4 (ipilimumab) or PD-1 (pembrolizumab), which leads to reactivation of silenced cancer-specific T-cells [69,70]. As CAR T-cells also express multiple inhibitory receptors, combining CAR T-cell therapy with checkpoint blockade could possibly prevent the exhaustion and silencing of CAR T-cells. Chong et al. described a successful increase in CAR T-cell efficacy after treating a refractory DLBCL patient with pembrolizumab [71]. So far, only a few clinical studies are combining CAR T-cell therapy with monoclonal antibodies for the treatment of hematological malignancies (NCT04381741, NCT04703686, NCT03310619) and for the treatment of solid cancers (NCT03179007, NCT02862028, NCT01454596). In light of these results, combination therapy with CAR T-cells and monoclonal antibodies will become increasingly important for developing new strategies to fight cancer.
Small Molecule Inhibitors
Drugs smaller than 500 Daltons targeting distinct molecular sites are considered small molecule inhibitors. Due to their size, they are able to pass through the cell membrane and act intracellularly, antagonizing different pathways correlated with cancer development [72]. Tyrosine and serine kinase inhibitors are most frequently used to treat cancer patients, targeting tumor survival, growth and metastasis [73]. The most promising target is the mitogen-activated protein kinase (MAPK) pathway, since it is involved in multiple cellular functions. MEK inhibitors as well as BRAF inhibitors have shown impressive results in the treatment of solid cancer [74]. A clinical trial combining CAR T-cells and a BRAF inhibitor revealed mixed results, as tumor infiltrating lymphocytes (TILs) were inhibited, showing that the complexity of targeting this pathway in combination with adoptive T-cell therapy remains to be elucidated [75]. Since the PI3K/Akt/mTOR signaling cascade is a major key player in regulating the cell cycle, researchers demonstrated that Akt inhibition ex vivo could enhance antitumor immunity in CAR T-cell therapy [76]. Concerning mTOR inhibition, Huye et al. created rapamycin-resistant anti-CD19 CAR T-cells and found that these had increased antitumor activity in Burkitt's lymphoma and ALL cell lines [77]. One as yet unpublished clinical trial is currently enrolling CLL and DLBCL patients in the United States, Australia and Europe for a combination therapy of CAR T-cells with ibrutinib (NCT03960840). Furthermore, Fraietta et al. found that ibrutinib therapy administered before and during CAR T-cell treatment in CLL patients could improve CAR T-cell expansion and downregulate inhibitory receptors [78].
Oncolytic Viruses
Oncolytic viruses target and eliminate tumor cells without damaging healthy tissue in two different ways: first, through a direct attack, in which the virus infects the cells and causes lysis; second, through expression of viral antigens in infected cancer cells, which leads to their subsequent recognition and destruction by cytolytic T-cells [79]. This principle was studied in MM cells using adenovirus serotype 5, showing oncolysis in infected malignant cells and suggesting an application in other hematological malignancies as well [80]. Nishio et al. designed an oncolytic adenovirus armed with the chemokine genes RANTES and IL-15, leading to CAR T-cell recruitment, prolonged persistence and enhanced survival in neuroblastoma cell lines [81]. Combining CAR T-cells with oncolytic viruses is therefore an interesting approach for the treatment of hematological malignancies but also for solid cancer, where one phase I trial is running using a binary oncolytic adenovirus and HER2-targeted CAR T-cells (NCT03740256) for the treatment of HER2-positive solid tumors.
Proinflammatory Cytokines
Cytokines can tremendously influence T-cell functions such as expansion, persistence and effector activity. In addition to the engineered co-expression of cytokines in CAR T-cells discussed in Section 3, cytokines can be administered intravenously to patients. For example, interleukin 2 (IL-2) influences T-cell growth, expansion and cytotoxicity, and is approved by the FDA for use in cancer treatment [82]. Several clinical trials are testing the combination of CAR T-cell therapy with IL-2 (NCT00924326, NCT00019136, NCT04119024, NCT03098355), revealing enhanced persistence of CAR T-cells and durable remissions in vivo in different tumor entities such as lymphoma, ovarian cancer and melanoma [83]. However, IL-2 is a double-edged sword, as high IL-2 dosages can decrease central memory T-cells [84]. Other investigated cytokines such as IL-7 and IL-15 showed increased CAR T-cell cytotoxicity compared to IL-2 in ALL/CLL patients [85]. Several clinical trials are comparing IL-2 and IL-7/IL-15 activity in lymphoma patients (NCT02652910, NCT04186520, NCT03929107, NCT02992834), underscoring the demand for testing combination approaches of CAR T-cells and proinflammatory cytokines.
Adverse Events of CAR T-Cell Therapy
Toxic effects frequently accompany the curative effects of CAR T-cell therapy. The most frequent side effect is cytokine release syndrome (CRS), in which excessive release of cytokines is triggered by CAR T-cell activation, proliferation and enhanced killing, manifesting in a broad range of clinical symptoms such as fever and tachycardia, or even death [86]. Tocilizumab, a monoclonal antibody against the IL-6 receptor that acts as an immunosuppressant, is often used for the treatment of CRS [87,88]. Besides CRS, tumor lysis syndrome (TLS) is a common toxicity of CAR T-cell treatment. Due to the mass destruction of malignant cells, their cellular components are rapidly released, leading to hepato- and nephrotoxicity. Overlapping with CRS, TLS can also lead to cardiac arrhythmia. Management of TLS should therefore include prevention of cardiac dysrhythmias as well as preservation of renal function [89]. Another prevalent side effect is neurotoxicity, which is generally associated with CRS. As CAR T-cells also migrate into the cerebrospinal fluid, high levels of cytokines in the cerebrum can lead to aphasia, delirium, seizures and syncope, for example. For the management of neurotoxicity, corticosteroids are favored as they can pass the blood-brain barrier [90]. Furthermore, on-target off-tumor effects frequently occur when the CAR target antigen is not exclusively expressed on tumors but also on healthy tissue. For example, B-cell aplasia occurs as an on-target off-tumor effect, since CD19-targeted CAR T-cells also eliminate CD19-positive healthy B-cells. However, B-cell aplasia after CAR T-cell therapy is usually well tolerated [91].
Conclusions
Immunotherapy, and especially CAR T-cell therapy, has demonstrated outstanding response rates in subgroups of patients with hematological malignancies, making CAR T-cells a major breakthrough in cancer immunotherapy. Furthermore, the fifth CAR T-cell therapy has been approved by the FDA (Breyanzi), underlining that CAR T-cells have become a valid therapy option for refractory blood cancer and pointing to the promising potential of this approach. However, in some hematological malignancies, response rates are low and patients still relapse. Additionally, for some hematological malignancies such as Richter syndrome, the data are still very thin, with only a low number of patients enrolled in clinical studies so far. In addition, adverse events frequently accompany CAR T-cell therapy, showing that this therapeutic approach still needs to be optimized with regard to safety and efficacy. So far, four of the five FDA-approved CAR T-cell drugs target CD19 (Breyanzi, Kymriah, Tecartus and Yescarta) and only one targets a different antigen (BCMA, Abecma). Comparing these drugs with the expanding list of targets currently investigated in many clinical studies gives confidence that the number of approved CAR T constructs as well as the list of targets will continue to grow. This is particularly important, as suitable targets for some entities such as AML are still missing. Aside from the quest for novel targets, a large panel of innovative approaches is expected to markedly improve CAR T-cell therapy; these have been discussed in this review and comprise the development of bispecific CAR T-cells, improved CAR constructs, genetic modification of CAR T-cells and combination treatments with other drugs. Given all these technical possibilities, the next generation of CAR T-cells can be expected to serve as a safe and highly effective weapon against hematological malignancies.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2021,
"sha1": "60aca5f0f9bbd63b6bbde81f5ba07094499ed481",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/16/8996/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "92503dc4a88eb9560ec432721872e9f85ee542d2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Label-free dynamic imaging of mitochondria and lysosomes within living cells via simultaneous dual-pump photothermal microscopy
The dynamic activities of mitochondria and lysosomes, which play important roles in maintaining cellular homeostasis, were observed without labeling by using highly sensitive photothermal (PT) microscopy. This imaging modality allows for the direct observation of cellular organelles that contain endogenous chromophores, with high temporal and spatial resolution. We identified mitochondria and lysosomes inside living mammalian cells via simultaneous dual-color imaging. Moreover, dynamic imaging revealed that the lysosomes make contact with mitochondria and move between sites within the dynamic mitochondrial network. Since mitochondrial and lysosomal functions are intricately connected, PT microscopy should provide in-depth understanding of cellular functions associated with mitochondria–lysosome communication as well as insights into various human diseases caused by dysfunction of these organelles.
Introduction
Optical microscopy is indispensable for visualizing intracellular structures in biomedical research and medical diagnosis. Since optical methods are non-invasive, they are particularly useful for revealing the structure and dynamics of organelles in living cells and organs. Fluorescence microscopy is currently one of the most popular methods for cell imaging owing to the unprecedented sensitivity of its background-free detection. Fluorescent molecules such as fluorescent proteins, organic dyes, and semiconductor nanocrystals are used to label specific targets. However, the label-target complex may behave differently from the native target. Furthermore, this imaging modality is limited to observations of targets that can be readily labeled. Photobleaching of fluorescence signals is also a significant problem in fluorescence microscopy, especially in time-course imaging, where fluorescent molecules are exposed to successive light irradiation.
Photothermal (PT) microscopy can visualize non-fluorescent chromophores with high sensitivity and high spatial resolution [1][2][3]. In PT microscopy, two laser beams with different wavelengths are usually used for pumping and probing: the pump beam increases the temperature, ∆T, around the focal point of the optically absorbing sample, which results in variations in the local refractive index (typically ∆n ≈ 10⁻⁴ for ∆T = 1 K in water); the refractive index change is detected with the probe beam. This technique has been applied to the visualization of the distributions of endogenous chromophores in biological specimens, such as hemoproteins in mitochondria [4][5][6], red blood cells in microvascular networks [7], and melanin pigments in skin cancers [8][9][10]. Non-fluorescent molecules are usually less affected by photobleaching than fluorescent molecules, and hence provide more stable PT imaging contrast.
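To make the magnitude of this contrast mechanism concrete, the following minimal Python sketch converts a temperature rise into the corresponding refractive-index change. It is illustrative only: the thermo-optic coefficient used is the approximate value for water quoted above (∆n ≈ 10⁻⁴ per 1 K), not a measured parameter of this instrument.

```python
# Minimal sketch: photothermal refractive-index change in water.
# Assumption (from the text): |dn/dT| ~ 1e-4 per K for water near room temperature.
DN_DT_WATER = -1e-4  # thermo-optic coefficient of water, 1/K (approximate)

def refractive_index_change(delta_T_kelvin: float) -> float:
    """Local refractive-index change for a temperature rise delta_T."""
    return DN_DT_WATER * delta_T_kelvin

for dT in (0.1, 1.0, 5.0):
    print(f"dT = {dT:4.1f} K  ->  dn = {refractive_index_change(dT):+.1e}")
```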
So far, several label-free PT imaging studies of live cells or organs have been made [4][5][6][7][9]. However, little has been reported on the dynamic imaging of cellular processes because of the slow imaging speed of the PT microscopy modality. Image acquisition times in previous realizations of PT microscopy typically ranged from several to several tens of minutes [1]. Cell structures, organelles, and macromolecules inside the cell are dynamic entities, responding both to the internal state of the cell and to external stimuli. Dynamic time-resolved imaging or time-course analysis is essential for providing sufficient information to explain complex biological events. Furthermore, time-resolved imaging is needed to thoroughly examine which organelles or species can be visualized in PT microscopy, as the origin of the PT signal from live cells is not yet clear [4].
Improvements in signal-to-noise ratio (SNR) are critical for the successful application of PT microscopy to dynamic cellular imaging. Improved SNR is crucial for cellular imaging not only because it allows measurement times to be decreased, but also because it allows irradiation and thermal damage to live cells to be minimized. In previous work, we developed a highly sensitive laser scanning PT microscope by implementing a new detection scheme (space-divided balanced detection), using a low-noise balanced detector [11][12][13]. The detection limit for the temperature increase was evaluated experimentally to be 0.1-0.2 K for an integration time per pixel of 10 µs [12]. This system allows image acquisition with a temporal resolution of several seconds while maintaining a temperature rise of a few degrees. Furthermore, simultaneous multi-wavelength PT imaging has been demonstrated using several laser diodes (LDs) with different wavelengths, via frequency-division multiplexing [14]. Multi-wavelength imaging is a straightforward approach for differentiating or identifying multiple species within a sample, taking advantage of the fact that each PT signal is proportional to the molecular absorption coefficient. PT spectral imaging has also been achieved, using a wavelength-tunable laser as a pump beam, by switching the laser wavelength and making repeated scans [5,9]. However, simultaneous measurement at multiple wavelengths is crucial for imaging moving objects such as living cells, to minimize image displacement between the images acquired at different wavelengths.
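As an illustration of the frequency-division multiplexing idea, the following minimal NumPy sketch recovers two amplitude-modulated channels (300 and 400 kHz, as used in this study) from a single simulated detector trace with a dual-phase lock-in. The sampling rate, record length, amplitudes, and noise level are hypothetical choices, not the actual signal processing of the instrument.

```python
import numpy as np

# Minimal sketch of frequency-division multiplexing: two photothermal signals,
# modulated at 300 kHz and 400 kHz, recovered from one detector trace.
fs = 10e6                        # sampling rate, Hz (illustrative)
t = np.arange(0, 1e-3, 1/fs)     # 1 ms record
f1, f2 = 300e3, 400e3            # pump modulation frequencies (as in the text)
a1, a2 = 1.0, 0.6                # hypothetical channel amplitudes

trace = (a1*np.cos(2*np.pi*f1*t) + a2*np.cos(2*np.pi*f2*t)
         + 0.05*np.random.randn(t.size))   # combined trace plus noise

def lock_in(trace, t, f):
    """Dual-phase lock-in: mix with references, then average (ideal low-pass)."""
    i = np.mean(trace * np.cos(2*np.pi*f*t))
    q = np.mean(trace * np.sin(2*np.pi*f*t))
    return 2*np.hypot(i, q)      # recovered modulation amplitude

print(lock_in(trace, t, f1))     # ~1.0, channel at 300 kHz
print(lock_in(trace, t, f2))     # ~0.6, channel at 400 kHz
```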
In this study, we performed dynamic imaging of live cells by using highly sensitive, simultaneous dual-pump PT microscopy. We observed filamentous and punctate structures inside living cells, which were attributed to mitochondria and lysosomes, respectively. Moreover, time-course PT imaging revealed distinctive lysosome motion and a transformation of mitochondrial morphology in response to external stimuli. Figure 1 shows the PT microscopy experimental setup used in this study. The optical configuration is largely the same as that reported in our previous publications [11,12]. A 780-nm singlefrequency laser diode (LD) was used for probing. For two-color imaging, 520 and 640-nm LDs were used as pumps, with their intensities modulated at different frequencies (300 and 400 kHz, respectively) for the simultaneous lock-in detection. Variable neutral-density filters were used to control the pump-beam powers. The two pump beams were combined using a dichroic mirror and collimated using a single-mode fiber and an off-axis parabolic mirror for spatial-mode filtering. The combined pump and probe beams were directed to the sample via a galvo-scanner. An objective lens with a numerical aperture of 1.25 (Olympus UPLSAPO40XS) was used to focus the beams on a sample. The focused laser beams were scanned, point-by-point, sequentially, and the pixel information was assembled into an image. Transmitted beams were collected by a collection lens (Olympus U-AAC) whose front aperture was directly immersed in the culture solution. Improvement in the SNR is crucial for the dynamics imaging because SNR is proportional to the square root of the measurement time in the shot noise limited PT microscopy. To improve the signal-to-noise ratio, a spatially divided balanced detection scheme was implemented using a custom-made fiber bundle [11,13,15]. The fiber bundle separated the central and peripheral parts of the transmitted probe beams. A home-built balanced photodetector was used to detect the separated probe beams with shot-noise-limited sensitivity [12]. The PT signals at the two frequencies were simultaneously demodulated by frequency-division multiplexing [14] using a dual-frequency lock-in amplifier (NF LI5660), for which the time constants t c were set to 10 µs. The sample position was controlled in the axial direction using a positioning stage driven by a piezo actuator to acquire a set of images focused on adjacent parallel planes within the sample (a z-stack) [8]. A confocal fluorescence detection scheme was also set up, in which a photomultiplier tube (Hamamatsu H5784-04) detected the fluorescence signal from the sample through a pinhole [11]. More details of the experimental setup have been presented in our previous publications [8,[11][12][13][14][15] The spatial resolution of the PT microscope was evaluated by measuring individual 5-nm gold nanoparticles (Sigma Aldrich) dispersed in polyvinyl alcohol film [ Fig. 1(b-d)]. The full width at half maximum (FWHM) values for the lateral intensity profiles were measured to be 0.29 and 0.32 µm when the wavelength of the pump beam was 520 and 640 nm, respectively. The spatial displacement between the two-color image was within 20 nm. 
Figure 1(d) shows the axial point spread function (PSF), which was obtained by acquiring a stack of 50 images, shifting the sample position in the axial direction by 0.05 µm between each acquisition; the FWHMs of the axial PSFs are 0.86 and 0.89 µm for 520 and 640 nm, respectively, and the spatial displacement in the axial direction is 0.1 µm.
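As a sketch of how the FWHM values above can be extracted from a measured line profile, the following Python snippet locates the half-maximum crossings by linear interpolation. The Gaussian test profile is synthetic, with its width chosen to match the reported 0.29 µm lateral FWHM; a real analysis would operate on the measured nanoparticle profiles.

```python
import numpy as np

# Minimal sketch: estimate the FWHM of a PSF line profile from its
# half-maximum crossings. The Gaussian test profile is synthetic.
x = np.linspace(-1.0, 1.0, 2001)                 # position, um
sigma = 0.29 / (2*np.sqrt(2*np.log(2)))          # FWHM = 2*sqrt(2 ln 2)*sigma
profile = np.exp(-x**2 / (2*sigma**2))

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    l, r = above[0], above[-1]
    # linear interpolation at the rising and falling half-maximum crossings
    xl = np.interp(half, [y[l-1], y[l]], [x[l-1], x[l]])
    xr = np.interp(half, [y[r+1], y[r]], [x[r+1], x[r]])
    return xr - xl

print(f"FWHM = {fwhm(x, profile)*1000:.0f} nm")  # ~290 nm
```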
Materials
HeLa and COS-7 cells were provided by the RIKEN BRC through the National BioResource Project of the MEXT/AMED, Japan. Both cell lines were cultured in Dulbecco's modified Eagle's medium (DMEM), supplemented with streptomycin (100 µg/mL), penicillin (100 U/mL), and 10% bovine serum, in an incubator at 5% CO2 and 37°C. Phenol red solution, which is commonly used for pH testing, was not added to the medium because it absorbs visible light and would cause temperature increases during the measurements. For the live-cell imaging, cells were cultured in a glass-bottomed dish. Live-cell imaging was conducted at room temperature (22°C) within 1 hour of the culture dish being taken from the incubator. For the fluorescence imaging of lysosomes, a transfection reagent (Invitrogen Lysosome-RFP) was used to express red fluorescent protein (RFP), specifically labeling lysosomes with RFP. For the measurements of fixed cells, cells were cultured on a coverslip, which was then washed with distilled water and air-dried before being mounted on a glass slide using a drop of mounting solution (Matsunami MGK-S).
Results and discussion
Figure 2 shows bright-field (a) and PT images (b-e) of live HeLa cells. The bright-field image was acquired using a CMOS camera with an LED backlight. PT images at pump wavelengths of 520 and 640 nm are shown in Fig. 2(b) and Fig. 2(c), respectively. The images consist of 2000 × 2000 pixels, and the acquisition time was 44 s. The two different PT images are merged in Fig. 2(d), where the green and red colors represent the signal intensity at the 520- and 640-nm wavelengths, respectively. Figure 2(e) is a magnified view of Fig. 2(d). The pump powers incident on the sample were 2.9 and 12 mW at 520 and 640 nm, respectively. PT microscopy provides high-contrast images of organelles inside cells. In these images, filamentous and punctate structures are observed around the cell nucleus, which appears as a circular dark shadow. The filamentous structures can be associated with mitochondria, from their characteristic shape and localization [4,16]. Mitochondria are organelles that supply cells with energy in the form of adenosine triphosphate (ATP). The contrast agent of the PT signal is assumed to be the cytochrome c contained in the intermembrane space of the mitochondria [4][5][6]. Because cytochrome c has an absorption peak at a wavelength of ∼530 nm, the 520-nm pump produces a large PT signal. The ratio of the signal intensity at 520 nm to that at 640 nm is ∼1.4. Thus, the mitochondria appear green in the merged images. In contrast, the PT signal from the punctate structures observed using the 640-nm pump appears more intense than the same features seen at 520 nm, by a factor of ∼1.5. Thus, they appear orange or yellow in the merged images. It should be noted that most of the punctate structures are found within the mitochondrial network. The diameters of the punctate structures are in the range of 0.5-0.7 µm.
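The two-color assignment described above (mitochondria green-dominant with a 520/640 ratio of ∼1.4, lysosomes red-dominant by a factor of ∼1.5) can be expressed as a simple per-pixel ratio classification. The following sketch uses synthetic images, an illustrative ratio threshold of 1.0, and an arbitrary brightness cut; it is not the paper's actual analysis pipeline.

```python
import numpy as np

# Minimal sketch of the two-colour ratio logic: pixels whose 520-nm signal
# dominates are assigned to mitochondria, pixels brighter at 640 nm to
# lysosomes. All thresholds and images are illustrative, not from the paper.
rng = np.random.default_rng(0)
s520 = rng.uniform(0.0, 1.0, size=(64, 64))   # synthetic 520-nm PT image
s640 = rng.uniform(0.0, 1.0, size=(64, 64))   # synthetic 640-nm PT image

signal = (s520 + s640) > 1.2                  # keep only bright pixels (arbitrary cut)
ratio = np.divide(s520, s640, out=np.zeros_like(s520), where=s640 > 0)

mito = signal & (ratio > 1.0)                 # green-dominant -> mitochondria-like
lyso = signal & (ratio <= 1.0)                # red-dominant   -> lysosome-like
print(f"mitochondria-like pixels: {mito.sum()}, lysosome-like pixels: {lyso.sum()}")
```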
We also conducted PT imaging of fixed HeLa cells on a coverslip and acquired images showing very similar subcellular features to those seen in the live-cell images [Fig. 3(a)]. This result indicates that it is possible to acquire PT images even for fixed cells, as long as the cell structure and organelle distribution are maintained during the fixing process. In addition to HeLa cells, we conducted PT imaging of live COS-7 cells [Fig. 3(b)] and found that both filamentous and punctate structures are again visualized around the cell nucleus in these cells. In the HeLa cells, the punctate structures are widely scattered throughout the mitochondrial network. In contrast, in the COS-7 cells, the punctate structures are clustered around the boundaries of the cell nuclei. We examined the photodegradation of the endogenous chromophores caused by the laser irradiation by repeating the image acquisition [Fig. 3(c)]. Figure 3(d) shows the mean signal intensities of the sequential images, indicating that the signal intensity decreases by only 14% after 100 scans.
PT microscopy is capable of optical sectioning in a similar manner to confocal microscopy because it is based on a pump-probe technique [8]. Here, we demonstrate 3D imaging of organelles in live cells. A stack of 50 images of live HeLa cells was acquired by changing the sample position in the axial direction, using a step size of 0.1 µm, to obtain a complete structure of the cell and its organelles [ Fig. 3(e) and Visualization 1]. Although the filamentous and the punctate structures produce strong PT signals, a weak signal is also produced by the cell cytosol or cytoskeleton, allowing the outline shape of the cells to be determined.
To examine whether the punctate structures observed in the PT images form a part of the mitochondria, we observed morphological changes of the mitochondria in response to physiological treatments. For this purpose, carbonyl cyanide m-chlorophenylhydrazone (CCCP), an ionophore and decoupler of the respiratory chain, was added to the culture medium and time-course images were acquired (Visualization 2 and Visualization 3). It is known that CCCP induces membrane depolarization, leading to a morphological transformation of mitochondria from filaments to rounded structures [17]. Figure 4 shows PT images of HeLa cells before (a) and after (b) CCCP loading. The final concentration of CCCP was 20 µM. The filamentous structures change to a round shape with a diameter of ∼1.0 µm after depolarization. This finding establishes that the filamentous structures represent mitochondria. However, we observed that the punctate structures are not affected by CCCP. Their sizes, signal intensities, signal intensity ratios (520/640 nm), and subcellular locations showed little change during CCCP loading. This result suggests that the punctate structures are distinct from mitochondria.
Although the punctate structures cannot be attributed to the mitochondrial network, we consider that they are functionally linked to the mitochondria, as they colocalize around the cell nucleus. Lysosomes are membrane-enclosed organelles that function as the digestive system of the cell, and they are thought to have membrane contact sites with mitochondria, allowing mutual regulation of their functions [18,19]. In fluorescence imaging, lysosomes and mitochondria have often been observed to colocalize [20]. Thus, we speculated that the punctate structures seen here in the PT images are lysosomes. Since lysosomes can also be observed using a fluorescence probe, PT and confocal fluorescence images of lysosomes specifically labeled with RFP were acquired within the same cells (Fig. 5). Fluorescence imaging was conducted first, using the 520-nm LD for excitation, followed by PT imaging. One can see clear colocalization of the punctate structures in the PT [Fig. 5(a)] and fluorescence images [Fig. 5(b)]. This result indicates that the punctate structures can be attributed to lysosomes. Lysosomes contain various enzymes capable of degrading different types of biomolecules. One possible origin of the measured PT signals is lipofuscin granules. These lipid-containing residues of lysosomal digestion consist of pigments that absorb visible light. However, careful studies are still needed to ascertain the origin of the PT signal, as lysosomes contain various biomolecules and their degradation products [21]. There is a possibility that the PT signal originates from impaired mitochondria incorporated within lysosomes through autophagy pathways (mitophagy) [21,22]. Accumulation of specific substrates in lysosomes is implicated as a cause of various diseases, known as lysosomal storage diseases [23]. We anticipate that imaging of endogenous chromophores in lysosomes will prove to be a beneficial approach for improving the understanding and diagnosis of such diseases.
Finally, we observed the dynamic activities of the organelles by acquiring time-course images at 4 s/frame (Fig. 6 and Visualization 4), with each image consisting of 500 × 500 pixels. We can see that lysosomes move around the dynamic mitochondrial network. Figure 6(b-g) displays expanded views of the square region shown in Fig. 6(a) from the time-course images. Figure 6(h) illustrates the trajectory of the lysosome marked by the arrows in Fig. 6(b-g) as overlaid circles on the PT image. The temporal variations of the lysosomal position in the x and y directions are summarized in Fig. 6(i). We observed that most of the lysosomes are continually in contact with the mitochondria and that some of the lysosomes move along the mitochondrial structures. The movements of the lysosomes are not continuous; rather, these organelles appear to move in a stop-and-go fashion, and the movement direction changes frequently. This suggests that lysosome movement is subject to regulation by motor proteins [24,25]. We were able to confirm that laser irradiation does not cause cell death during measurements lasting several tens of minutes. The dynamic images shown in Fig. 6 were acquired with a pixel size of 0.1 µm. Although one can see pixel cross-talk along the fast scanning axis (transverse direction), it does not significantly hinder observation of the structure and dynamics of the cellular organelles.
In lock-in measurements, the modulation period should be much smaller than t_c. However, in PT microscopy, the signal intensity is determined by the heat diffusivity, and it decreases with a decrease in the modulation period. We examined the frequency dependence of the PT signal of live HeLa cells by changing the modulation frequency from 40 kHz to 3 MHz and found that the PT signal decreased by half at 1 MHz. This indicates that a low modulation frequency is better for achieving a high SNR. On the other hand, images are severely distorted when the modulation frequency is less than ∼200 kHz for t_c = 10 µs, because the modulation period is too close to t_c. In order to balance SNR with image fidelity, modulation frequencies of 300 and 400 kHz were selected. Since second-order low-pass filters were used for the lock-in integration, the frequency detuning between the two channels was set at 100 kHz so that the cross-talk between the two channels is less than -40 dB. We also examined the frequency dependence of the spatial resolution by measuring individual gold nanoparticles and confirmed that the spatial resolution shows little difference as long as the modulation frequency is above 300 kHz.
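The cross-talk argument can be checked numerically. The sketch below evaluates the attenuation of a two-stage (second-order) RC low-pass at a given detuning, assuming a corner frequency of 1/(2π t_c); note that commercial lock-in amplifiers define their time constants and filter orders in instrument-specific ways, so the exact dB figure depends on that convention.

```python
import numpy as np

# Minimal sketch: attenuation of a two-stage (second-order) RC low-pass,
# as used for the lock-in integration, at a given frequency detuning.
# Assumption: corner frequency f_c = 1/(2*pi*t_c); real instruments may
# define the time constant differently, so the exact dB value will vary.
def second_order_lowpass_db(f_hz: float, t_c: float) -> float:
    h1 = 1.0 / np.sqrt(1.0 + (2*np.pi*f_hz*t_c)**2)   # single RC stage
    return 20*np.log10(h1**2)                          # two cascaded stages

for f in (100e3, 200e3, 300e3):
    print(f"detuning {f/1e3:5.0f} kHz -> {second_order_lowpass_db(f, 10e-6):6.1f} dB")
```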
Conclusions
We have demonstrated label-free dynamic imaging of mitochondria and lysosomes in living cells by using highly sensitive, simultaneous dual-pump PT microscopy. To date, mitochondria and lysosomes have been visualized via fluorescence microscopy, using various fluorescent molecules as probes. Our label-free approach has advantages, especially for the dynamic observation of organelles, because labeling markers may disturb the motion of the target and result in behavior that is different from that of the native target [16].
Mitochondrial and lysosomal functions are thought to be intricately connected and critical for maintaining cellular homeostasis. A further direction of this study will be to provide a biological interpretation of the present findings and new insight into biological problems associated with lysosome-mitochondria interactions. Lysosomes play a crucial role in degrading damaged mitochondria in the mitophagy pathway. It would be interesting to use PT microscopy to observe mitophagy, or the consequences of its disruption, without the use of any labels.
"year": 2019,
"sha1": "e84645b3f69b2c4ace05ffa67cba03c93866b535",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1364/boe.10.005852",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "78544fd473875026ad6d578377e6782218b45a9c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Epidemic threshold: A new spectral and structural approach of prediction
I INTRODUCTION
Networks are everywhere. Several real phenomena such as disease spreading, behaviour contagion, and rumour propagation are described as spreading processes in complex systems; Table 1 summarizes the notation used in this work.

Table 1: Notations [14].
A: The adjacency matrix of the network.
⟨k⟩, ⟨k²⟩: The first (average connectivity) and second (connectivity divergence) moments of the degree distribution.
λmax: The spectral radius (largest eigenvalue) of the matrix A.
β: The infection rate, i.e., the rate of infection or transmission from an infected individual to a susceptible individual per effective contact.
γ: The recovery rate, i.e., the rate at which an infected individual recovers per unit time (in continuous-time models) or per time step (in discrete-time models).
λ: The transmissibility, i.e., the infection rate scaled by γ⁻¹ so that λ = β/γ.
λc: The epidemic threshold, i.e., the critical infection rate.
G: A connected network G = (V, E) with n nodes in V and m edges or links in E.

These processes are widely modelled using networks or graphs. Therefore, networks are greatly interesting and constitute fertile and flexible tools for the scientific modelling and analysis of complex systems [17], such as an infectious disease spreading over a contact network.
In the study of infectious disease spread, the basic reproduction number R0 is the average number of secondary infections expected to be caused by a primary infectious individual introduced into a fully susceptible host population. R0 is strongly correlated with the likelihood and extent of an epidemic. Critically, R0 depends not only on the disease but also on the host population structure [11]. Therefore, network-based models of epidemiological contact have emerged as an important tool for understanding and predicting the spread of infectious disease [4]. Understanding the network structure allows better control of micro- and macro-level propagation [11], [1], and even improves the predictions. Thus, we need more sophisticated tools for the analysis and visualization of network structure: one of these tools is the spectral theory of graphs [3], [4]. Hence, the critical point that determines whether a disease will die out or become an epidemic is known as the epidemic threshold.
The epidemic threshold τ denotes the incidence of a disease at which it can be considered an epidemic; it is the critical β/γ ratio beyond which an infection becomes an epidemic [21]. τ is commonly linked to R0, which underpins the definition of the epidemic threshold concept [7]. τ depends not only on the transmission and recovery rates of a disease but also, fundamentally, on the network structure [21]. Therefore, the accurate prediction and understanding of epidemic thresholds on complex networks is a challenge in the field of network science. To clarify the basic concepts of this work, Table 1 above defines the basic notation used throughout.
The aim of this paper is to design and test a new general structural and spectral approach for predicting the epidemic threshold. This approach should be substantially similar to those in the literature and accurately capture the full network structure without being limited by it. Therefore, we propose a new general spectral approach to analyse spreading processes in a network.
The layout of this paper is organised as follows: Section 2 reviews previous approaches and their limitations. Section 3 presents the epidemic threshold problem, the energy of a graph, and the spectral theory of graphs. Section 4 describes the proposed new approach, while Section 5 presents the experimentation, results, and discussion. We conclude in Section 6.
II THE PREVIOUS APPROACHES AND THEIR LIMITATIONS
In the literature, there are many successful theoretical approaches to the epidemic threshold. We consider the benchmarks generally used to approximate the epidemic threshold for spreading dynamics in real networks. These include the Mean-field (MF) approach, the Degree-based mean-field (DBMF), also called Heterogeneous mean-field (HMF), and the Quenched mean-field (QMF), also called Individual-based mean-field (IBMF).
The Mean-field (MF) approach
The Mean-field (MF) approach is based on the work of Kephart and White, who adopted a modified homogeneous approach in which directed graphs model the communication among persons [12]. Formally, in a homogeneous network, the epidemic threshold is given by Eq. 1:

λc = 1/⟨k⟩,    (1)

where ⟨k⟩ is the first moment of the degree distribution. The MF approach assumes that all nodes in the network are statistically equivalent: the interaction probabilities between any two nodes are the same. Therefore, the contact network structure is not considered. As a consequence, the MF approach can be inaccurate when the network degree distribution is asymmetric and heterogeneous.
The Heterogeneous mean-field (HMF) approach
To better capture the network structure, [16] improved the homogeneous MF approach into the HMF by assuming that a node cannot re-infect the node that infected it.
Here, the epidemic threshold is given by Eq. 2:

λc = ⟨k⟩/⟨k²⟩,    (2)

where ⟨k²⟩ is the second moment of the degree distribution. HMF is mostly used for uncorrelated networks [8]. It is most useful under the mean-field assumption of independence between nodes' infectious states. Owing to its parameters and assumptions, the HMF approach can be inaccurate for quenched connections among nodes. Moreover, the HMF neglects the dynamic correlations among the states of neighbours.
The Quenched mean-field (QMF) approach
Because neither the MF nor the HMF approach can sufficiently capture the contact network structure, the Quenched mean-field (QMF) approach was developed using the adjacency matrix A. This approach is widely used to study spreading dynamics [20]. In [21], the authors proposed a discrete-time formulation to predict the epidemic threshold without any assumption of homogeneous connectivity. Here, the epidemic threshold is given by Eq. 3:

λc = 1/λmax,    (3)

where λmax is the largest eigenvalue of the adjacency matrix A. The QMF approach depends only on the network structure. The QMF is an advanced approach that is more accurate than the MF and HMF [20].
The QMF approach has many variants such as the N-intertwined approach [18]; the Dynamical Message-Passing (DMP) using the non-backtracking matrix; the Simplified DMP (SDMP).
Nevertheless, in some specific situations, some research doubts the accuracy of the epidemic threshold value predicted by the QMF approach [8].
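For concreteness, the three classical estimates of Eqs. 1-3 can be computed directly from a network's degree sequence and adjacency spectrum. The sketch below uses the networkx and NumPy Python libraries, which are illustrative tool choices; the paper does not specify its software stack.

```python
import networkx as nx
import numpy as np

# Minimal sketch: the three classical epidemic-threshold estimates of
# Eqs. 1-3 for an arbitrary connected network (the example graph and
# seed are illustrative choices).
G = nx.newman_watts_strogatz_graph(n=24, k=6, p=1/6, seed=1)

degrees = np.array([d for _, d in G.degree()], dtype=float)
k1 = degrees.mean()            # <k>,   first moment of the degree distribution
k2 = (degrees**2).mean()       # <k^2>, second moment

A = nx.to_numpy_array(G)
lam_max = np.max(np.linalg.eigvalsh(A))   # spectral radius (A is symmetric)

print(f"MF : lambda_c = 1/<k>     = {1/k1:.4f}")
print(f"HMF: lambda_c = <k>/<k^2> = {k1/k2:.4f}")
print(f"QMF: lambda_c = 1/lam_max = {1/lam_max:.4f}")
```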
As noted, there are many approaches in the literature for predicting the epidemic threshold. However, we are interested in developing a new general structural and spectral prediction approach that better captures the full network structure, using structural and spectral properties of a network such as the number of nodes, the adjacency matrix, the spectral radius, and the energy of the graph. This new approach should be substantially similar to the earlier approaches and should also be accurate. Therefore, the new approach offers a general spectral way to analyse spreading processes in a network.
III THE EPIDEMIC THRESHOLD AND THE SPECTRAL THEORY OF GRAPH
The spectral theory of graphs and network science are used to understand how network topology can predict dynamic processes [10], such as the epidemic threshold in a complex system. It analyses the relationships between a graph's structure and its eigenvalues. Thus, the spectral theory of graphs plays a central role in the fundamental understanding of networks [6,5,4]. A large literature on the algebraic aspects of spectral graph theory and its applications can be found in several surveys, books and monographs such as [5], [6].
The eigenvalue of graph
The analysis of the eigenvalues allows us to obtain useful information about a graph that might otherwise be difficult to obtain [5]. Eigenvalues have a strong relationship with the structure of a graph. The largest eigenvalue of a graph, λ1 or λmax, is called the spectral radius.
The energy of graph
It is a graph-spectrum-based quantity. The original version of graph energy, from 1978, is based on the eigenvalues of the adjacency matrix [9]:

E(G) = Σᵢ |λᵢ|,

where λᵢ is the i-th eigenvalue. The energy of a graph has found unexpectedly broad applications in areas of science and engineering [10], such as the epidemiological applications in [15].
IV THE PROPOSED NEW APPROACH
In epidemic threshold studies, one of the challenges is to capture the essence of the full network structure accurately with as few parameters as possible. For any network, we present a new general structural and spectral prediction approach for the epidemic threshold. Our approach assumes neither homogeneous connectivity nor any particular topology, in discrete time. We assume that during each time interval, an infected node i tries to infect its neighbours with probability β; at the same time, i may be cured with probability γ. Thus, formally, the new epidemic threshold λc is given by Eq. 4:

λc^KSE = k (n/E(G)) e^(−1/λmax),    (4)

Here, E(G) is the energy of the graph, n is the number of nodes, and k is a real scale parameter. λc^KSE stands for the K Spectral Energy (KSE) approach to epidemic threshold prediction. Indeed, λmax has several applications in science, such as chemistry and computer science [6]. It has been proven that the more highly connected a network is, the larger λmax is [19], and the smaller the epidemic threshold 1/λmax is, which is strongly related to the R0 concept. This can be expressed through a basic exponential decay model ϕ, where ϕ(t) = e^(−t/λmax), with ϕ(0) = 1 and the single parameter λmax. To account for every eigenvalue, we turn to the energy of graph concept, according to its definition. Considering the fraction of the graph energy carried by each node, we define ∆ = E(G)/n. In the epidemic threshold context, given its salient features such as critical or threshold values, we consider the simple reciprocal model y = k(1/x), where x is a variable and k a constant or scale parameter. Hence, the reciprocal of ∆ is 1/∆ = n/E(G). Related to this reciprocal, our intuition is to observe the rate of ϕ at t = 1, namely ϕ(1) = e^(−1/λmax). Thus, the new approach to predicting the epidemic threshold λc^KSE is an application that associates each adjacency matrix to a specific decay composition of the eigenvalues relative to ∆.
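Under the reading of Eq. 4 reconstructed above, the KSE threshold can be computed from the adjacency spectrum alone. The sketch below takes k = 1 as an illustrative scale choice; both the formula as written and this value are inferred from the surrounding definitions rather than quoted from the original.

```python
import networkx as nx
import numpy as np

# Minimal sketch of the KSE quantities, under the reading of Eq. 4 given
# above: lambda_c = k * (n / E(G)) * exp(-1 / lambda_max). The scale k = 1
# and the example graph are illustrative assumptions.
def kse_threshold(G: nx.Graph, k: float = 1.0) -> float:
    A = nx.to_numpy_array(G)
    eig = np.linalg.eigvalsh(A)
    energy = np.abs(eig).sum()        # E(G) = sum of |eigenvalues|
    lam_max = eig.max()               # spectral radius
    n = G.number_of_nodes()
    return k * (n / energy) * np.exp(-1.0 / lam_max)

G = nx.newman_watts_strogatz_graph(n=24, k=6, p=1/6, seed=1)
print(f"KSE: lambda_c = {kse_threshold(G):.4f}")
```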
V EXPERIMENTATION, RESULTS AND DISCUSSIONS
Using data analysis and data visualisation techniques on the experimental dataset described in Figure 1, simulations were run to answer the question of whether the new epidemic threshold prediction approach is substantially similar to, and in practice performs at least as well as, the earlier approaches, including the most widely used QMF.
The dataset described in Figure 1 contains real networks of infectious disease spread, together with small-world, random, and regular networks, for spreading processes over 31 networks of different types and topologies: 17 real social networks, 9 generated social networks, 3 random networks, and 2 regular random networks. Here, Id refers to the network identifier, kmax to the maximum node degree in a network, k to the first moment of the degree distribution, k2 to the second moment, den to the density of the network, and cc to the clustering coefficient. Using data visualization techniques based on numerical and graphical simulations over all these networks, the sets of predicted MF, HMF, QMF and new KSE epidemic threshold values were computed, analysed, visualised, and discussed.
In Figure 2, we can see that networks Id 5, 9, 11, 12, 13, 14, 15, 17, 18, 19, and 21 have very close predicted epidemic threshold values. Thus, the new proposed KSE epidemic threshold approach shares substantial common features with the earlier approaches, specifically with the widely used and accurate QMF. Summary descriptive statistics of the MF, HMF, QMF and proposed KSE values are given in Table 2. For the widely used QMF approach, we observed that the new proposed KSE approach has a very similar second quartile (Q2). The new proposed KSE approach is also similar for the other major descriptive statistics, such as the mean, standard deviation, Q2, Q3 and range, relative to the QMF. This means that the new KSE approach is similar to the earlier ones and shares major features with them, specifically with the widely used and accurate QMF. Theoretically, these results stem from the eigenvalue concept at the root of both the QMF and the KSE approach.
Moreover, the area, curve and shape of each set of epidemic threshold values can be observed in Figure 3. All epidemic thresholds have a substantially similar area, curve and shape over the range of the 31 experimental networks in the dataset. They share the same shape, curve and sense of variation, which again indicates that the new proposed KSE approach is similar to the earlier ones. Furthermore, the gap, or difference, between the predicted epidemic threshold values of the earlier approaches and those of the new KSE was analysed. A summary of its descriptive statistics is shown in Table 3.
Here, for any epidemic thresholds p and q, e_p_q denotes the Euclidean gap or difference of p to q: p − q. In Table 3, the standard deviation of the gap between the QMF and the KSE is 0.078. All the gaps are relatively low; in particular, the gap with respect to the most used QMF is the lowest among the earlier approaches. Moreover, the new KSE approach shares major common features with the earlier ones, specifically with the most used and accurate QMF. Furthermore, to analyse the statistical difference among these experimental sets of predicted epidemic threshold values, we used a univariate ANalysis Of VAriance (ANOVA) test with an Ordinary Least Squares (OLS) model, via the Bioinfokit Python package. The summarized ANOVA output is given in Table 4 (Table 4: The ANOVA F and p-value using Ordinary Least Squares for the MF, HMF, QMF prediction approaches related to the KSE), where sum_sq denotes the sum of squares, df the degrees of freedom, F the F-statistic, and PR the p-value. Here, the p-value 0.44 > 0.10; hence, the null hypothesis is not rejected. Thus, there is no statistically significant difference between the different sets of epidemic threshold values. So, once again, ANOVA shows that the new proposed KSE epidemic threshold is similar to the ones generally used in the literature.
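A minimal sketch of the one-way ANOVA described here, using an OLS model as in the text, is given below. The threshold values are placeholders rather than the study's dataset, and statsmodels is used in place of the Bioinfokit wrapper mentioned above.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Minimal sketch of the one-way ANOVA described above, via an OLS model.
# The threshold values below are placeholders, not the paper's dataset.
df = pd.DataFrame({
    "approach": ["MF"]*4 + ["HMF"]*4 + ["QMF"]*4 + ["KSE"]*4,
    "threshold": [0.20, 0.15, 0.18, 0.25,
                  0.10, 0.08, 0.12, 0.14,
                  0.09, 0.07, 0.11, 0.13,
                  0.10, 0.08, 0.12, 0.15],
})
model = ols("threshold ~ C(approach)", data=df).fit()
table = sm.stats.anova_lm(model, typ=1)   # columns: sum_sq, df, F, PR(>F)
print(table)
```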
Overall, we observed that the new KSE prediction approach for the epidemic threshold is substantially similar to the earlier approaches in the literature. Both KSE and QMF perform better than the other approaches in terms of accuracy. Moreover, KSE offers a new way to predict the epidemic threshold using the number of nodes, the spectral radius and the energy of the graph. Hence it constitutes a new general and spectral approach to analyse spreading processes in a network through the structural and spectral properties of the network.
The potential advantages and benefits of the KSE new approach compared to the earlier approaches
We established an analytical comparative study in Table 5. Here, the term "relatively" is tied to the context and dataset of this study; it refers to possible suggestive theoretical interpretations, or to missing formal proofs. In Table 5, the criterion accuracy refers to the quality of capturing the full network structure; transparency is the ability to assess the rule and function of each parameter in the formula, including the assessment of the parameters in relation to one another; flexibility refers to the ability to change or rescale easily; and parameter refers to the quality of the parameter(s), their number, and their meaning in the relationship. Nevertheless, no model or approach is perfect; the new KSE can offer an appropriate balance of accuracy, transparency, flexibility, and parameters.

Table 5: The potential advantages and benefits of the new approach over the earlier ones: a qualitative comparison between the MF, HMF, QMF and new KSE prediction approaches of the epidemic threshold.
MF. Transparency: relatively easy, single parameter ⟨k⟩. Flexibility: relatively poor, due to its assumptions. Parameter: the use of a single parameter ⟨k⟩.
HMF. Accuracy: relatively poor fit, as its parameters can be inaccurate. Transparency: relatively medium, can assess the role of ⟨k⟩, ⟨k²⟩. Flexibility: relatively medium, due to its assumptions.
QMF. Accuracy: relatively medium fit, captures network structure using only λmax. Transparency: relatively easy, due to its single parameter λmax. Flexibility: relatively good, due to its assumptions. Parameter: the use of a single parameter λmax.
KSE. Accuracy: relatively high fit, captures the full network structure using {λmax, E(G), n, k}. Transparency: relatively medium, as assessing the parameters in relationship can be complex. Flexibility: relatively improved, due to its assumptions, using {λmax, E(G), n} and a scale k. Parameter: the use of the structural and spectral parameters {λmax, E(G), n, k} in relationship.

Furthermore, given the relationship between the epidemic threshold and R0, we carried out some real case studies related to previous work in the literature on R0:
• The dataset used in [2]: small-world networks of the Newman-Watts-Strogatz model with 24 nodes, each of which is connected to 6 nearby nodes, where the probability of an extra link is 1/6.
• The dataset used in [13]: β = 0.005, δ = 0.9, γ = 0.9. The authors used these parameters for their simulations and their differential equations.
Table 6 shows the structural information of the datasets used.

Table 6: The summary of structural information from the dataset.
Id 1: Newman Watts Strogatz network, small-world type, n = 24, m = 83, ⟨k⟩ = 6.916, ⟨k²⟩ = 48.583, den = 0.301, cc = 0.536.

However, under the assumption of density-dependent transmission, by definition R0 = βn/γ, while λc = β/γ; thus R0 = λc × n. So, we obtain the results in Table 7. We can observe that the structural R0 agrees with the R0 obtained using differential equations in [13, 2]. These results highlight the similar accuracy of the KSE relative to the earlier approaches, specifically to the most used QMF. Besides, these results bring the network-based structural approach to R0 closer to the mathematical modelling approaches to R0 that use systems of differential equations. This result emphasises the usefulness of the network-based structural approach for the prediction of key epidemiological parameters such as λc and R0.
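Under density-dependent transmission, the relation R0 = λc × n turns any threshold estimate into a structural R0. The sketch below applies this to the QMF threshold on the small-world example described above (24 nodes, 6 nearest neighbours, extra-link probability 1/6); the random seed is arbitrary, so the generated network, and hence the numbers, are illustrative.

```python
import networkx as nx
import numpy as np

# Minimal sketch of the density-dependent relation used above, R_0 = lambda_c * n,
# with the QMF threshold lambda_c = 1/lambda_max on the small-world example.
G = nx.newman_watts_strogatz_graph(n=24, k=6, p=1/6, seed=1)
lam_max = np.max(np.linalg.eigvalsh(nx.to_numpy_array(G)))
lambda_c = 1.0 / lam_max
print(f"lambda_c (QMF) = {lambda_c:.4f},  "
      f"R_0 = lambda_c * n = {lambda_c * G.number_of_nodes():.2f}")
```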
VI CONCLUSION
In this paper, we addressed the accurate understanding and prediction of the epidemic threshold on complex networks in the context of spreading processes. Network structure fundamentally influences the dynamics of spreading processes, with the epidemic threshold acting as a boundary condition for spreading over networks. Therefore, to improve structural prediction approaches, we designed and tested a new general structural and spectral prediction approach for the epidemic threshold, called KSE. The new approach further captures the full network structure using the number of nodes, the spectral radius, and the energy of the graph. We ran simulations on 31 networks of different structures and topologies: 17 real social networks, 9 generated social networks, 3 random networks, and 2 regular random networks. Using data analysis and data visualization techniques, the simulations show that the new KSE approach is similar to the earlier MF, HMF and QMF approaches and shares major features with them, specifically with the most used and accurate QMF approach. The new epidemic threshold prediction approach opens a new general and spectral avenue for analysing spreading processes over a network. The results are of both fundamental and practical interest for improving the control and prediction of spreading processes over networks. They are particularly meaningful to decision-makers in public health, who can use them to improve the control of infectious disease spread and to inform policy towards successful mitigation and eradication strategies. Future research could examine the temporal evolution of a specific infectious disease in a network, as well as enhance the proposed epidemic threshold approach with other concepts from the spectral theory of graphs.
Figure 1: The summary of structural information about networks in the dataset.
Figure 3: The area visualization of MF, HMF, QMF and the proposed KSE prediction approach of the epidemic threshold relative to the original value of R0.
Table 2: The summary of the descriptive statistic values of the MF, HMF, QMF and the proposed KSE prediction approach of the epidemic threshold.
Table 3: The summary of the descriptive statistic values of the gap or difference between the MF, HMF, QMF prediction approaches and the KSE.
"year": 2023,
"sha1": "9b075b090b57a680914f3a0fd668dd181ddf9362",
"oa_license": "CCBYNC",
"oa_url": "https://arima.episciences.org/12642/pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "131cbbefaa551cd64e11a00b7057086da32ed78e",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Computer Science"
]
} |
Effects of steel fibers on flexural strength and impact resistance for self-consolidating concrete plates
Self-Consolidating Concrete (SCC) can be considered a major innovative expansion in the future of concrete materials and technology over the recent three decades. It has evolved to achieve adequate compaction with easy concrete placement in constructions with congested, restricted zones and reinforcement. This research aims to experimentally study the effect of the steel fiber volumetric ratio (Vf) and the effect of chicken wire mesh on enhancing the behavior of normal strength SCC. Two types of SCC mixes were prepared and cast for this purpose: a normal strength SCC serving as a reference mix, and a second mix containing 0.75% steel fibers. Slump-flow, L-box, J-ring and V-funnel tests were conducted to investigate the properties of the fresh SCC. The compressive strength of the reference SCC was about 33.5 MPa, and that of the 0.75% steel fiber SCC was about 39.7 MPa. In addition, six identical plate specimens of (400×400×20) mm were cast and tested to study the effect of steel fibers and chicken wire on the impact load resistance and flexural strength of SCC. Results show that the impact load resistance of the steel fiber SCC increased by about 60% compared with the reference normal strength SCC mix, while an increase in impact load resistance of about 50% was gained by using chicken wire mesh. It was noted that the 0.75% steel fiber SCC and chicken wire SCC plates show an increase in impact resistance and flexural strength of about 20% and 2% compared to the reference normal strength SCC, respectively. The results also indicate that steel fibers are useful in delaying the formation of transverse cracks and providing effective restraint to the subsequent growth of cracks in the plate specimens.
Introduction
SCC, or Self Consolidating Concrete, is considered a major innovative improvement in concrete technology over recent decades. SCC is a unique type of concrete that can flow through and fill the gaps between reinforcement bars and the corners of moulds with no need for vibration and compaction during the casting procedure. SCC results in long-lasting concrete structures and saves labour and consolidation noise. Despite its high performance, SCC is like ordinary concrete in being a brittle material with low elasticity and tensile strength and poor crack resistance; thus, SCC mixes can be provided with steel fibers in order to increase their tensile capacity, improve their stiffness and fracture capacity, and maximize their load carrying capacity. Few published studies deal with using steel fibers to enhance the properties of SCC; some of these studied the structural behavior of SCC, while others dealt with its mechanical behavior. Pioneering work on Self Compacting Concrete dates back to the 1980s in Japan. Okamura and Ozawa [1] studied the mechanism for achieving self-compactibility, which requires high deformability of the mortar or paste and prevents segregation between gravel and mortar when the concrete moves through a region confined by steel reinforcing bars. The frequency of collision and contact between aggregate particles can increase as the relative distance between the particles decreases; consequently, the internal stresses in the particles can increase as the concrete deforms, predominantly close to obstacles. It was concluded that the flowing energy required is consumed by the increased internal stress, resulting in blockage by aggregate particles. Limiting coarse aggregate collisions (whose energy consumption is especially high) to below the normal level is effective in avoiding this kind of blockage.
Sonebi et al. [2] studied the compressive strength of SCC. Five mixes were used in the investigation, with cement contents ranging between 280 and 515 kg/m³ and water-to-cement ratios between 0.43 and 0.68. A superplasticizer admixture was used in the investigation. The results indicated that all mixes were highly flowable, with slump flows ranging between 650 and 690 mm and flow times ranging between 2.3 and 4 s. The 28-day compressive strength ranged between 47 and 79.5 MPa. Koning et al. [3] and Hauke [4] reported an acceptable strength increase of 13.5% and 9.1%, respectively, in SCC made with the addition of 15% fly ash.
Al-Jabri [5] investigated the properties of SCC produced using locally available materials in Iraq and the influence of the dosage and fineness of a High Reactivity Metakaolin (HRM) admixture on the properties of SCC in the fresh and hardened states. The concrete mixes contained 500 kg/m³ of cement with water-to-cement ratios of 0.34-0.36. The results show that the slump diameter was greater than or equal to 650 mm, the filling height ranged between 0 and 50 mm, and flow times ranged between 3 and 9.5 s. The compressive strength was up to 85 MPa, and the splitting tensile strength was up to 6.1 MPa. HRM improved the compressive, splitting tensile, and modulus of rupture strengths by up to 23.9%, 4.26% and 4%, respectively.
The purpose of the present experimental work is to investigate the possibility of utilizing locally available materials in Iraq to produce SCC that conforms to well-known international specification documents, and to evaluate the filling ability and passing ability of the fresh concrete and the compressive and splitting tensile strength of the hardened SCC produced, in addition to studying the effect of adding steel fibers and one or two layers of chicken wire on the impact load resistance and flexural strength of normal strength SCC.
Experimental program
An experimental program was planned and prepared in order to scrutinize the structural performance of SCC strengthened by adding steel fibers and one or two layers of chicken wire mesh. Many laboratory experiments were done in the Materials and Structural Engineering laboratories of the Structural Engineering Branch / Civil Engineering Department / University of Technology / Baghdad / Iraq in order to complete the present experimental work. A total of 42 specimens were cast to investigate the properties of SCC. Six concrete plates with dimensions of (400×400×20) mm were cast to study the impact load capacity and flexural strength of the SCC plates after strengthening by adding steel fibers and one or two layers of chicken wire.
18 concrete cubes of (150×150×150) mm were used to determine the compressive strength of the SCC in accordance with BS 1881: Part 116, 1989 [6], and 18 concrete cylinders of (150×300) mm were used to determine the splitting tensile strength of this type of concrete in accordance with ASTM C496-04 [7]. By its nature, SCC does not require any compaction; therefore, the mixes were poured into the moulds until the moulds were completely filled, with no compaction. After casting, the moulds were covered with polyethylene sheets for around twenty-four hours to avoid loss of moisture from the top surface of the specimens, which could result in plastic shrinkage cracks during the first few hours after casting. Afterwards, the specimens were demoulded and placed in water curing containers. Figure 1 shows some of the specimens after casting.
Concrete Ingredients
In this research, Tasluja factory ordinary Portland cement was used in all concrete mixes. According to the chemical and physical test results, this cement conformed to the requirements of Iraqi Specification No. 5/1984 [8]. Natural sand (Al-Ukhaider) was used as fine aggregate, with a maximum size of 4.75 mm, conforming to the limits of Iraqi Specification No. 45/1984 [9]; it is classified as zone No. 2 according to the Iraqi Specification. Locally available normal crushed aggregate of maximum size 10 mm was used as gravel (coarse aggregate). The grading and sulfate content of the adopted gravel comply with the requirements of Iraqi Specification No. 45/1984 [9].
In general, it is necessary to use superplasticizers in order to obtain high mobility in SCC mixes. Therefore, a high range water reducing admixture (HRWRA) was used to produce a better-performing concrete mix. The HRWRA is commercially known as Top Flow SP703. This admixture conforms to the requirements of ASTM C494 [10]. Silica fume was also used as a pozzolanic admixture; its pozzolanic activity index and chemical oxide composition meet the requirements of ASTM C1240-05 [11]. In addition, limestone powder (LS) was used as a filler to increase the quantity of fine materials in the mixture in order to improve segregation resistance and upgrade its cohesiveness. The LS used has a particle size of 0.125 mm (passing through sieve No. 200), conforming to the EFNARC 2005 recommendations [12].
Chicken Wire Mesh
Rhombic-shaped reinforcement meshes fabricated from steel wire of 0.54 mm nominal diameter were used in the present work. The openings in the long and short directions were 10.6 and 7.92 mm, respectively. Three samples of chicken wire with dimensions of (30x150) mm were prepared for tensile tests and determination of mesh properties according to ACI 549-1R-99 [13]. Figure 2 shows the mesh measurement and test. The chicken wire mesh was tested using the computerized equipment available in the Production Engineering and Metallurgy Department, University of Technology. One or two layers of this chicken wire were used as internal reinforcement in two (400×400×20) mm plate specimens in order to study the impact strength of SCC.
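As a side note, the tensile strength of a single wire follows from the breaking load and the nominal diameter. A minimal sketch, using an illustrative breaking load rather than a value measured in this work:

```python
import math

def wire_tensile_strength(p_newtons: float, diameter_mm: float) -> float:
    """Tensile strength (MPa) of a single wire: sigma = P / (pi * d^2 / 4)."""
    area_mm2 = math.pi * diameter_mm**2 / 4.0
    return p_newtons / area_mm2

# Example with the 0.54 mm nominal wire diameter reported above and an
# illustrative breaking load of 90 N (not a measured value from this paper).
print(round(wire_tensile_strength(90.0, 0.54)))  # ~393 MPa
```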
Micro Steel Fibers.
Micro straight steel fibers with a volume fraction of 0.75% and an aspect ratio of L/d = 75 were used in the present work. Table 1 shows the properties of the micro steel fibers. This type of fiber conformed to the ASTM A820-01 [14] requirements. Figure 3 shows the micro steel fibers used.
Concrete mix properties
Two types of concrete mix were used in the present work. A number of trial mixes were prepared in order to arrive at the most appropriate mix design for normal strength SCC. The final mix proportions were 1:2:2 (cement:sand:gravel) by weight of cement. A total of 18 standard cubes of (150x150x150) mm were cast and tested at 28 days to determine the concrete compressive strength according to BS 1881-1983 [6], at a rate of six standard cubes per batch (a batch contains an adequate amount of concrete for casting two plate specimens on each working day). In addition, 18 cylinders of 150 mm diameter and 300 mm length were cast from the same batches and tested to determine the tensile strength of the concrete. The mix design compositions are listed in table 2.
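For illustration, batch quantities follow directly from the 1:2:2 weight proportions once a cement content is chosen. A minimal sketch with placeholder quantities; the actual mix compositions are those given in table 2:

```python
# Weight proportions reported above (cement : sand : gravel = 1 : 2 : 2)
PROPORTIONS = {"cement": 1.0, "sand": 2.0, "gravel": 2.0}

def batch_masses(cement_kg_per_m3: float, volume_m3: float) -> dict:
    """Material masses (kg) for one batch of a given volume."""
    return {
        material: ratio * cement_kg_per_m3 * volume_m3
        for material, ratio in PROPORTIONS.items()
    }

# Illustrative cement content of 450 kg/m3 for a small 0.05 m3 lab batch
print(batch_masses(cement_kg_per_m3=450, volume_m3=0.05))
```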
Fresh SCC tests and results.
In order to confirm that the concrete used in the present experimental work has the properties of SCC, the fresh normal strength concrete of each mix was verified using the typical SCC test procedures: slump flow with T50 cm (T500) time, V-funnel, L-box, and J-ring tests.
All tests were performed at the Concrete Laboratory, Civil Engineering Department, University of Technology, and were carried out according to EFNARC. Details of each test are listed below:
Slump-flow and T500 test.
The slump-flow test measures the horizontal free flow of SCC using a regular slump cone.
The experimental values of the slump flow and T500 tests were 720 mm and 3.8 sec, respectively. Hence, all mixes are considered to have excellent consistency and workability from the filling ability point of view. The fresh-property results for the mix are listed in table 3, including the T500 time, i.e., the time required for the concrete flow to reach a circle of 500 mm diameter. Figure (4-a) shows the slump flow test.
L-Box test.
The L-Box test values were between 0.91 and 0.96, which satisfies the acceptance criteria for SCC. The mix showed no blockage through the closely spaced obstacles and was therefore considered to have good passing ability. The fresh-property results for the mix are listed in table 3. Figure (4-b) shows the L-Box test.
V-Funnel test
Concrete flowability was measured through the V-funnel test. The V-funnel test values were between 6.2 and 7.5 sec. These results are within the acceptance criteria for SCC, and no segregation was observed for any mix. The V-funnel flow times of the fresh mix are listed in table 3.
J-Ring Test.
The J-Ring test result is represented by the value of (D), defined as the final spread diameter of the slump flow measured within the J-ring. The obtained J-Ring test results ranged between 716 and 755 mm; these results are within the acceptance criteria for SCC. Thus, all mixes are considered to have an excellent passing ability. The J-Ring test results for the fresh mix are listed in table 3. Figure (4-c) shows the J-Ring test.
Figure 5. Cube compressive strength test. Figure 6. Splitting strength test.
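Taken together, the fresh-property checks above amount to comparing each measurement against an acceptance range. A minimal sketch of such a check, using ranges commonly quoted from the EFNARC guidelines (slump flow 650-800 mm, T500 2-5 s, V-funnel 6-12 s, L-box ratio at least 0.8); confirm these against the specific EFNARC edition used here before relying on them:

```python
# Acceptance ranges assumed from commonly cited EFNARC guidance; verify
# against the edition referenced in this work before use.
CRITERIA = {
    "slump_flow_mm": (650.0, 800.0),
    "t500_s": (2.0, 5.0),
    "v_funnel_s": (6.0, 12.0),
    "l_box_ratio": (0.8, 1.0),
}

def check_scc(results: dict) -> dict:
    """Return pass/fail for each measured fresh property."""
    return {
        name: lo <= results[name] <= hi
        for name, (lo, hi) in CRITERIA.items()
        if name in results
    }

# Values reported in this work: slump flow 720 mm, T500 3.8 s,
# V-funnel 6.2-7.5 s, L-box 0.91-0.96
print(check_scc({"slump_flow_mm": 720, "t500_s": 3.8, "v_funnel_s": 7.5, "l_box_ratio": 0.91}))
```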
Flexural test
Two standard test methods are available to determine the flexural strength of a concrete beam according to the C78/C78M-16 specifications [15]. Center point loading: in this method, the total load is applied at the center of the span length of the beam, and the maximum stress occurs at the center of the beam.
Third point loading: this method applies half of the load at each third point of the beam's span length. The flexural strength, or modulus of rupture, obtained by center point loading is higher than the modulus of rupture obtained by third point loading.
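For reference, the standard modulus-of-rupture expressions for the two loading arrangements are given below, where P is the peak load, L the span, b the specimen width, and d the depth (the third-point expression assumes fracture within the middle third of the span):

```latex
R_{\mathrm{center}} = \frac{3PL}{2bd^{2}}, \qquad R_{\mathrm{third}} = \frac{PL}{bd^{2}}
```

This makes explicit why, for the same peak load, the center point arrangement yields a modulus of rupture 1.5 times that of the third point arrangement.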
In the present work, the maximum stress is present over the center one-third portion of the loaded plate. The load was applied continuously at a steady rate, without shock or interruption, until the breaking point. Figure 7 indicates the failure modes of the SCC plates. The use of 0.75% steel fibers and two layers of chicken wire (CHW) increased the flexural strength by about 20% and 3%, respectively, compared with the plain normal strength SCC mix. The results of the flexural strength test at 28 days are shown in table 5.
Impact load test.
Six (400×400×20) mm plate specimens were used for the impact resistance test. This test was done in accordance with the procedure proposed by ACI Committee 544.2R-89 [16], which has been used by several researchers [17,18]. The impact load was applied using a 4.45 kg hammer with a 60.2 mm diameter ball, dropped repeatedly and directly from a height of 457 mm onto the center point of the specimen's top surface (figure 8). The plates were simply supported along all edges. The number of blows required to cause the initiation of the first crack was recorded as the initial crack strength (N1), while the number of blows that caused specimen breakdown or failure was recorded as the failure strength (N2). The shear cracking load can be defined as the load at which a considerable change in the load-carrying mechanism takes place, resulting in a redistribution of the stresses within the plates [19].
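Each blow delivers the hammer's potential energy, so the cumulative impact energy at first crack and at failure follows directly from the blow counts. A minimal sketch of this conversion; the blow counts below are illustrative placeholders, the measured values being those in table 6:

```python
G = 9.81          # gravitational acceleration, m/s^2
MASS_KG = 4.45    # drop hammer mass reported above
DROP_M = 0.457    # drop height reported above

def impact_energy(blows: int) -> float:
    """Cumulative impact energy (J) after a given number of blows, assuming
    the full potential energy of each drop is delivered to the plate."""
    return blows * MASS_KG * G * DROP_M

# Illustrative blow counts (N1 = first crack, N2 = failure), not from table 6
for label, n in [("first crack (N1)", 30), ("failure (N2)", 45)]:
    print(f"{label}: {impact_energy(n):.1f} J")
```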
Generally, it was found that concrete is an exceptionally strain rate sensitive material. Both the peak bending loads and the fracture energies were higher under dynamic loading conditions than under static loading conditions. Steel fibers were found to considerably increase the ductility and the impact load resistance of the SCC. Results of the impact test on plates are presented in table 6. It is well indicated that the distributed micro steel fibers gave a significant increase in the number of blows, by about 40% to produce cracks and 50% to failure, compared with SCC without steel fibers. This increase could be attributed to the uniform distribution of the steel fibers in the concrete mix and their effectiveness in three dimensions, unlike the chicken wire, which is effective only in two dimensions; hence, the steel fibers enhanced the impact behavior of the specimens more than the chicken wire did. For further illustration, the results of table 6 are shown graphically in figure 9. Figure 10 shows the failure modes of the tested SCC plates.
Conclusions
The following points are concluded based on the test results of the present investigation:
1- The distributed micro steel fibers gave a significant increase in the number of blows, by about 40% to produce cracks and 50% to failure, compared with SCC without steel fibers; this increase could be attributed to the uniform distribution of the steel fibers in the concrete mix.
2- Steel fibers have a definite adverse effect on all workability properties of fresh SCC. Consequently, higher water content or higher chemical admixture dosages may be needed to keep the targeted workability values within the suitable ranges.
3- All SCC mixes that incorporated steel fibers have slightly higher compressive strength and splitting strength than the reference normal strength SCC mixes.
4- Impact results indicated that the presence of fibers causes more cracks and greater energy absorption. | 2020-03-12T10:45:14.975Z | 2020-03-06T00:00:00.000 | {
"year": 2020,
"sha1": "167b16e372c6f9598dde8c046e43673a9b530e88",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/737/1/012010",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3d3d237585f0be223625c69cb43eb537040d0168",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
244269357 | pes2o/s2orc | v3-fos-license | Theme 10 - DISEASE STRATIFICATION AND PHENOTYPING OF PATIENTS
L. Bouvier, S. McKinley, J. Truong, A. Genge, N. Dupré, A. Dionne, S. Kalra and Y. Yunusova Hurvitz Brain Sciences Program, Sunnybrook Research Institute, Toronto, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, Canada; Montreal Neurological Institute-Hospital – The Neuro, Montréal, Canada; Centre de recherche du CHU de Québec – Université Laval, Québec, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Canada; Division of Neurology, University of Alberta, Edmonton, Canada; University Health Network – Toronto Rehabilitation Institute, Toronto, Canada
Background: In ALS, the development of bulbar signs is associated with faster disease progression, shorter survival, and lower quality of life (1). Thus, effective measurement tools that improve early detection and progression monitoring of bulbar signs are essential for clinical management of the disease. According to recent literature, acoustic measures of speaking and pause times during passage reading show sensitivity to different stages of bulbar disease in English speakers (2,3). However, the utility of these measures in French is not known. Considering the importance of language-specific characterization of dysarthria, cross-linguistic validation of acoustic biomarkers is needed. Objectives: Primary objective: To determine if speech and pause measures differ between French speakers with ALS with and without bulbar symptoms and healthy controls. Secondary objective: To determine if these measures can reflect the decline in bulbar symptoms in a French cohort with ALS. Methods: 46 Canadian French speakers (29 ALS; 17 controls) were recorded reading a passage during up to three follow-up visits (total of 92 recordings). ALS speakers were classified as bulbar symptomatic or bulbar pre-symptomatic based on their ALSFRS-R bulbar subscore (3). Recordings were analyzed using a semi-automated speech and pause segmentation procedure (4) to provide measures of speaking rate, total duration, duration of speech, and number of pauses. These measures were compared between the three groups and correlated with ALSFRS-R total and bulbar scores. Results: Group comparison revealed that the ALS symptomatic group significantly differed from the pre-symptomatic and control groups for speaking rate, total and speech durations, and number of pauses (p < 0.05). None of the measures allowed differentiating the pre-symptomatic and control groups. Speech and pause measures were all moderately correlated with ALSFRS-R total and bulbar scores (p < 0.05). Discussion: As demonstrated in English speakers, measures of speech and pause behavior during passage reading are sensitive to the progression of bulbar disease in French speakers with ALS. Contrary to English, those measures were unable to detect ALS prior to the development of bulbar symptoms in French speakers. This may be due to language-specific factors (e.g. prosody) as well as our relatively small sample size. Including kinematic assessments (5) may improve early detection in French. Studies with larger cohorts of French speakers progressing from pre-symptomatic to symptomatic stages are needed to help identify early markers of bulbar disease.
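For readers unfamiliar with these measures, the sketch below illustrates how speech and pause measures of the kind described can be derived from a labeled segmentation of a reading passage. The intervals, word count, and minimum-pause threshold are illustrative assumptions, not the study's actual parameters:

```python
# (label, start_s, end_s) intervals from a semi-automated segmentation;
# these values are invented for illustration only.
segments = [
    ("speech", 0.0, 2.1), ("pause", 2.1, 2.8), ("speech", 2.8, 5.0),
    ("pause", 5.0, 5.6), ("speech", 5.6, 8.3),
]
n_words = 30  # words in the read passage (illustrative)
MIN_PAUSE_S = 0.3  # assumed minimum duration for an interval to count as a pause

total_duration = segments[-1][2] - segments[0][1]
speech_duration = sum(e - s for lab, s, e in segments if lab == "speech")
pauses = [e - s for lab, s, e in segments if lab == "pause" and (e - s) >= MIN_PAUSE_S]

print(f"total duration: {total_duration:.1f} s")
print(f"speech duration: {speech_duration:.1f} s")
print(f"number of pauses: {len(pauses)}")
print(f"speaking rate: {n_words / total_duration:.1f} words/s")
```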
liziane.bouvier@sri.utoronto.ca
previous falls, all of which are common in the progression of ALS. This study investigates the use of home telemonitoring with a smartphone-connected sensor to track gait as a method for rapidly responding to changes in patients' ambulatory health. Objectives: The first aim of the study is to determine whether walking speed measured with a wearable sensor is comparable to speed measured using standard clinical or laboratory methods, regardless of whether the recording was performed at home or in the laboratory. We also ask whether there is high adherence to the practice of sensor-based home gait measurement. Methods: Patients with symptom onset in the last 3 years and an ALSFRS-R walking score of 2-3 were prospectively recruited from the Penn State Health ALS Center. At the study visit, patients completed a 10-m walk task, during which walking speed was measured using a stopwatch and a foot-worn triaxial sensor (MMR+, MbientLab Inc). Over 24 weeks, subjects completed twice-weekly, 5-min home recordings of walking using the foot sensor and a custom smartphone application. Walking speeds derived from lab recordings and the next available home recording were tested for equivalence using ANOVA. Control subjects were also recruited and completed a lab-based 10-meter walk task, followed by 4 weeks of home recordings. Control subjects underwent additional video-based walking speed analysis during lab assessments, which was compared to the other methods. Results: 8 patients (6 male) and 2 controls (1 male) were enrolled in the study. Patients (6 ongoing) completed a median of 21.5 (3-52) recordings over 86 (7-178) days. All but one subject submitted at least one recording per week while in the study; in total, only 4.3% of recordings were sent 7 or more days after the previous recording. Four patients completed an in-person gait assessment. No differences in 10-meter walking speeds were found when comparing those calculated from stopwatch measurements versus those derived from foot sensor data. Similarly, the two control subjects had 10 m walk task speeds that were comparable across sensor, stopwatch, and video-based methods, although the video-based method tended toward greater speed estimates (4-10% higher than the other methods). In all subjects, walking speeds derived from home measurements were significantly lower than all lab-based measures (p < 0.001). Discussion: Preliminary results validate the accuracy and feasibility of wearable-derived walking speed tracking in patients experiencing early gait changes due to ALS. Lab-based walking speeds were consistently greater than home assessments, perhaps as a result of subjects modifying their natural gait due to being observed. Future results will determine whether gait telemonitoring may act as a functional biomarker for fall risk.
ageronimo@pennstatehealth.psu.edu
Background: Amyotrophic lateral sclerosis (ALS) clinical trials rely on a standard set of outcome measures, including the revised ALS Functional Rating Scale (ALSFRS-R), vital capacity (VC), and handheld dynamometry (HHD). Digital Quantitative Monitoring (DQM) uses tasks performed on digital devices to obtain more frequent, quantitative and granular measurements of function alongside patient-reported outcome measures, in order to improve on standard ALS outcome measures. Methods: The study originated with two intensive clinic visits separated by a week, during which daily self-administered tests and continuous passive data (DQM) were collected remotely. With COVID-19, the study was redesigned to a fully remote and longitudinal format, comprising telemedicine visits at baseline, 12, and 24 weeks, weekly self-administered testing, and continuous passive data collection. During telemedicine visits, study staff administered traditional ALS outcome measures including the ALSFRS-R, the Neurological Fatigue Index – Motor Neuron Disease (NFI-MND), and a quality of life scale. DQM assessments were delivered via mobile application (Digital Artefacts) on a provided iPhone and Apple Watch, as well as via web browser on the participants' computer. The mobile app included a symptom questionnaire, a self-administered ALSFRS-R, fine motor, gait, stance, speech, and cognitive tests, and collected continuous passive data. Participants used their home computer and mouse to complete a point-and-click task assessing fine motor movements. Twenty-five healthy controls (HC) and 25 people with ALS (PALS) will be enrolled. Results: All PALS participants have been enrolled and HC enrollment is projected to complete in August 2021. Thirteen PALS and 3 HC have completed participation. Of 456 scheduled mobile application sessions, participants have completed 385, with 50 further partially completed sessions. Test-retest reliability at baseline varies across tests, but ICC values above 0.9 have been observed (alternating finger-tapping rate, passage reading speaking rate). Correlations with relevant baseline ALSFRS-R subscores (i.e. bulbar, fine motor, gross motor, respiratory) are moderate (0.4-0.6) or weak (0.2-0.4) for most test features analyzed. Considering participants with at least two sessions, the median value of several computer mouse task features demonstrated strong correlations (0.6-0.9) with baseline ALSFRS-R handwriting and/or total scores. These features included normalized jerk, execution time, maximum speed, and temporal location of the main submovement. Available data will be presented. Conclusion: This pilot study in PALS and HC is helping to clarify the utility of a variety of mobile technology-based DQM tools in ALS, to compare these tools to traditional ALS outcome measures, and to extend our ability to assess cognition in people with ALS. Early results suggest compliance is acceptable and that at least a subset of the digital tests included in this study may have promise as reliable measurements of function and cognition in people with ALS. Introduction: Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease that affects both upper and lower motor neurons. We assembled the largest Taiwanese cohort to investigate the natural history and prognostic factors of ALS. Methods: We recruited 227 patients diagnosed with definite or probable ALS. All patients were tested for common disease genes including C9ORF72, SOD1, FUS and TARDBP.
Detailed clinical characteristics were acquired and neurological examinations were performed after informed consent. The patients were followed up biannually for evaluation of the ALSFRS-R score. Results: The cohort consists of 127 men and 100 women. Eleven patients had a family history of ALS and 32 patients exhibited bulbar onset. The average diagnostic delay since symptom onset was 16.8 months. On average, the functional outcome, evaluated by ALSFRS-R score, declined from 36.1 to 24.4 over the first year after diagnosis. Age of onset, presence of disease-causing genes, and gender did not affect the rate of functional decline. Initial ALSFRS-R score, bulbar onset, BMI and older age of onset resulted in worse survival outcome. Discussion: This is the largest ALS cohort with an analysis of its natural history in Taiwan. This study advances the understanding of clinical characteristics, natural disease course and risk factors of ALS in the Taiwanese population. Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder involving upper and lower motor neurons. The causes of neuron degeneration are still unclear, but some predisposing factors have been put in relation with an increased risk of developing the disease; these include head traumas, agonistic athletic practice, and toxic exposure. According to literature data, ALS is sporadic in 90% of cases and familial in about 10% of cases. The mean age at onset ranges from 58 to 63 years, and some phenotypes have a gender prevalence. These include the predominantly bulbar phenotype, seen mainly in women with an older age at onset, and the flail arm phenotype, which shows a strong male predominance. Here we describe the recent motor neuron disease (MND) population of the Santa Chiara Hospital, Pisa, Italy, to compare it with what is available in the literature. In the last year, 153 patients (77 men and 76 women) with a mean age at onset of 63.4 years came to our attention and underwent regular neurological evaluation for MNDs. Among them, 33 had a predominant involvement of the first (upper) motor neuron at onset (15 women and 18 men with a mean age of 58.2 years). 50 men and 30 women (mean age at onset of 61.8 years) had a prevalent involvement of the second (lower) motor neuron. Among them, two male patients had a flail arm clinical picture and one woman had monomelic arm involvement. 28 women and 5 men (mean age at onset of 68.7 years) had a bulbar picture. Two men with a mean age at onset of 60 years had cognitive impairment and three patients (two female and one male) had a respiratory onset (mean age 68.3). 8 out of 143 ALS patients (5%) had a positive genetic test (2 SOD1 mutations, 3 C9Orf72 mutations, 3 FUS mutations). Among all patients, we had three cases of Kennedy's disease and seven of HSP, genetically confirmed. 38 patients had a family history of neurodegenerative diseases, mainly AD and PD, but also ALS. Among the ALS patients, 100 underwent high-field cerebral MRI to study the motor cortex: 68 patients had a positive MRI. Almost all negative MRIs were in patients with predominant lower motor neuron involvement at onset. MRI was also negative in four familial ALS cases (2 C9Orf72-related, one FUS-related, and one with NGS still ongoing but with a family history of ALS).
17 patients had predisposing risk factors (7 agonistic athletes, 6 toxic exposures, 1 head trauma, 2 consanguinities among parents, and one with a previous history of MG supporting a dying-back hypothesis). Our database is substantially aligned with previous literature reports, supporting the validity of the current body of knowledge on ALS prevalence. Similar databases are very useful to effectively track patients and increase the efficiency of follow-ups.
Lu.becattini@gmail.com
DSP-06 Higher Troponin T levels positively correlated with the extent of body regions affected on EMG in ALS patients S. Chamoun 1,2, U. Kläppe 1,2, S. Imrell 1, F. Fang 2 and C. Ingre 1,2
Background: The troponin complex is a well-studied, small protein complex involved in the regulatory function of skeletal and cardiac muscle contraction. The complex is subdivided into three smaller proteins - Troponin I (cTnI), Troponin T (cTnT) and Troponin C (cTnC). Both cTnT and cTnI are mainly used in the diagnosis of cardiac pathologies, where an elevated level indicates damage to the cardiomyocytes. During regeneration of skeletal muscle tissue after denervation, isoproteins of the troponin protein family are re-expressed in skeletal muscle (1). Several studies have detected elevated levels of cTnT in patients with neuromuscular disorders (1,2). We have also recently shown that cTnT in plasma is elevated in ALS patients compared to ALS mimics and healthy controls and increases longitudinally as the disease progresses (3). Objective: To determine the correlation between cTnT in plasma and levels of re-innervation analysed through neurophysiological examinations in patients with ALS.
Methods: We conducted a retrospective cross-sectional analysis of plasma cTnT levels of patients diagnosed with ALS at Karolinska University Hospital during 2015-2018. We only included patients who had undergone an electromyography (EMG) examination within 6 months of the cTnT measurement. EMG investigations had to have been performed in at least three of the four regions defined for ALS diagnostics. We then created an EMG protocol which graded the amount of fibrillations or positive sharp-wave potentials on a scale of 0 to 4 or as a percentage (number of muscles affected divided by number of muscles investigated). We used Pearson's correlation coefficient to assess the correlation between cTnT levels and EMG findings. Results: Among the 50 patients included in the study, none had any known cardiac conditions. Age at diagnosis varied between 37 and 82 years (median 64.5 years) and cTnT levels varied between 6 and 124 ng/L (median 30.9 ng/L). cTnT levels were statistically significantly correlated with the number of EMG regions affected (Pearson's r = 0.344; p = 0.015) and the percentage of EMG regions affected (Pearson's r = 0.431; p = 0.002). Conclusions: To our knowledge, this is the first study to examine the correlation between plasma cTnT and neurophysiological findings in patients with ALS. We found a clear correlation between higher levels of plasma cTnT and a greater number of body regions affected on EMG. A possible explanation is that elevated plasma cTnT levels in ALS patients are due to a re-innervation process occurring after damage to the motor neurons and the subsequent skeletal muscle tissues.
sanharib.c@gmail.com
DSP-07 Amygdala TDP-43 pathology is a sensitive pathological correlate of behavioral dysfunction in amyotrophic lateral sclerosis J. Gregory, E. Elliott, T. Ritakari, J. O'Shaughnessy, S. Chandran, C. Smith and S. Abrahams
University of Edinburgh, Edinburgh, United Kingdom
Background: Cognitive and behavioural deficits are a well-recognised symptom in up to 50% of patients with amyotrophic lateral sclerosis. Whilst cognitive deficits are thought to be driven, at least in part, by the pathological accumulation of phosphorylated TDP-43 (pTDP-43) aggregates in extra-motor brain regions, a sensitive pathological correlate of behavioural deficits is yet to be determined. The brain areas thought to be predominantly associated with behavioural dysfunction are the (i) amygdala, (ii) orbitofrontal cortex (BA11/12), (iii) ventral anterior cingulate (BA24) and (iv) medial prefrontal cortex (BA6). Objectives: To identify a sensitive pathological correlate of behavioural dysfunction in ALS. Methods: Here we examined post-mortem tissue from these four brain regions in a cohort of 30 sporadic ALS (sALS) patients, a proportion of whom had also undergone the same neuropsychological behavioural assessment as part of the Edinburgh Cognitive ALS Screen. Results: We show that overall, the behavioural screen done as part of the ECAS predicted TDP-43 pathology with 100% specificity and 86% sensitivity in behaviour-associated brain regions, with the amygdala demonstrating the best sensitivity and specificity when analysed alone. Furthermore, in the amygdala of sALS patients, we show variation in morphology, cell-type predominance and severity of pTDP-43 pathology, and that the presence and severity of intra-neuronal, but not glial, pTDP-43 pathology is associated with a clinically detectable behavioural deficit. Discussion: Taken together, our data suggest that the behavioural questionnaire done as part of the ECAS is a reliable correlate of pTDP-43 pathology in behaviour-associated brain regions. We also show that, of the four regions profiled, the amygdala is the most sensitive correlate of behavioural deficits and that, in this region, neuronal pTDP-43 pathology is a better correlate of behavioural dysfunction than glial pathology. These data would be supportive of recent MRI imaging studies evaluating the amygdala as a key imaging correlate of behavioural dysfunction, aimed at improving monitoring and stratification of patients with behavioural symptoms. Background: The Amyotrophic Lateral Sclerosis Rating Scale-revised (ALSFRS-r) is the primary outcome measure utilised in clinical trials and research in ALS. This scale is limited such that clinically meaningful changes for subjects are often missed, impacting upon the evaluation of new drugs and treatments. Technology has the potential to provide sensitive, objective outcome measurement. Objective: To provide a state-of-the-art review of current and future trends in the measurement of upper limb function. Methods: A general review of upper limb measurement tools, including subjective paper-based questionnaires and the objective sensors that aim to supersede them, was conducted. Due to the relatively low incidence of ALS, this review spanned neurological conditions in general. Results: Current assessment methods include multi-item functional scales such as patient-reported functional scales (e.g. ABILHAND questionnaire), clinician-rated scales (e.g. the ALSFRS-r upper limb component or the DASH (Disabilities of the Arm, Shoulder and Hand) questionnaire) and objectively rated task performance scales (e.g. ARAT (Action Research Arm Test)). Objective tests providing continuous data include the nine-hole peg test (9HPT), box and block test and hand grip dynamometry.
Scales and questionnaires provide a broad overview of functional tasks but may be influenced by the rater. Tests such as the 9HPT and grip dynamometry provide reliable continuous data but suffer floor effects and may be limited by the patient's ability to follow instruction. Novel technological sensors and solutions have been applied in an attempt to provide objective outcome measurement (e.g. Kinect reachable workspace) (1), but many are time consuming or expensive in clinical practice. Discussion: Heretofore, paper-based subjective questionnaires such as the ALSFRS-r have been the gold standard in upper limb function. They have to a degree been supplemented by objective tests such as the 9HPT, but many of these are crude measures that are also impacted by floor and ceiling effects. Technology has the potential to radically change the upper limb measurement field, with sensors such as accelerometers and gyroscopes already being trialed in research and commercial applications. Barriers to widespread adoption include the absence of consensus regarding the most appropriate sensors, demonstration of validity, clinical practicality and clinician adoption. There is a requirement for a simple, sensitive and clinically meaningful test of upper limb function in ALS.
haydenco@tcd.ie
Background. Motor neuron diseases (MNDs) encompass a wide pathological continuum ranging from classic amyotrophic lateral sclerosis (ALS) to pure/predominant upper motor neuron (pUMN) and pure/predominant lower motor neuron (pLMN) disease forms. While it is widely accepted that these phenotypes are characterized by distinct survival rates, their longitudinal trajectories of clinical decline are still largely unknown. Additionally, the majority of prognostic studies in MNDs have mainly focused on classic ALS, while the need to evaluate distinct prognostic features in pUMN and pLMN phenotypes has been largely neglected. Objective. To investigate longitudinal trajectories of clinical functional decline across the main MND phenotypes and to develop phenotype-specific prognostic models. Methods. 60 patients with a clinical diagnosis of MND (26 classic ALS, 14 pUMN and 20 pLMN) were recruited and followed longitudinally with clinical evaluations approximately every 3 months, for up to 15 months. Motor examinations included the following assessments: overall degree of functional impairment (evaluated using the ALS functional rating scale revised, "ALSFRS-r"), muscle strength (evaluated using the Medical Research Council "MRC" scale) and UMN involvement (evaluated using the UMN score). For each of these measures, a baseline progression rate was further estimated. Cognitive/behavioral and mood examinations included the following assessments: cognitive and behavioral impairment (evaluated using the Edinburgh Cognitive and Behavioural ALS screen "ECAS") and mood disorders (evaluated using the Hospital Anxiety and Depression scale "HADS"). Based on longitudinal ALSFRS-r data, individual slopes of decline were generated, and linear regression models were then applied to isolate, among baseline clinical features, significant predictors of a more aggressive disease course in each clinical phenotype. Results. Longitudinally, the ALSFRS-r delta of variation was higher in classic ALS patients (−13.67), followed by pLMN (−11.89) and pUMN cases (−5.76); significant differences were selectively observed for pUMN compared to classic ALS (p = 0.05). In classic ALS, significant predictors of a more aggressive longitudinal decline included greater baseline rates of overall functional impairment (p = 0.003) and UMN involvement (p = 0.04), as well as greater baseline lower limb UMN involvement (p = 0.02). In pUMN, significant predictors of a more aggressive longitudinal decline were male gender (p = 0.05) and side of symptom onset (right p = 0.001, bilateral p = 0.003). In pLMN, significant predictors of a more aggressive longitudinal decline included greater baseline cognitive impairment (total ECAS score p = 0.003, ECAS ALS-specific functions score p = 0.01) and more severe mood disturbances (p = 0.01). Discussion. In conclusion, our study confirms the urgent need for phenotype-specific prognostic models in order to improve patient management and clinical trial implementation in MNDs. Background: Clustering techniques could be invaluable in providing robust ALS stratification candidates. Unfortunately, the ALS literature overlooks many critical aspects of these techniques (1). The importance of the stratification problem has made reliance on prognosis models' "goodness" as a surrogate validation of a stratification a common practice. Objectives: The study aims at answering two questions: (1) Does a better prognosis model imply a "higher-quality" stratification?
(2) Is it possible to settle on an agreed-upon ALS subtyping? Methods: Five clustering algorithms were used to partition the same dataset, and partitions with clusters ranging from 2 to 11 were generated. Seven Internal Clustering Validation Indexes (CVIs) were computed on the obtained partitions to assess their inherent geometrical characteristics. To quantify the resemblance of different stratifications, the Adjusted Rand Index (ARI) was calculated on each pair of partitions having the same number of clusters. The prognosis task evaluating the quality of the partitions is the prediction of the ALSFRS slope for a cohort of 2187 patients from the ALS PRO-ACT dataset. A top-ranking solution from the 2015 ALS Stratification challenge served as a baseline. The Root Mean Squared Error (RMSE) gauged the potential impact of a given stratification on the slope prediction model. Results: The ensemble of CVIs used was able to rule out a subset of possible cluster numbers, restricting the acceptable number of groups to a maximum of 6. The analysis of the CVIs' optimal values led to the appearance of 4 as an "optimal" number of clusters, but with a score of 9 out of 35 possible optimal values. By analysis of the ARI matrices for the same cluster range, two ALS subpopulations could be selected as the most suitable subtyping, given that the subpopulations are too similar to each other across all clustering algorithms (ARI of 0.78 out of 1). Trying to draw a clear-cut stratification, a correlation between the RMSE scores and the ARIs of the top 3 best-performing partitions for each number of clusters was computed. Close RMSEs of regression models built on top of the selected partitions are negatively correlated with too-similar partitions (Pearson correlation coefficient = −0.59). Discussion: Choosing the "best" partitioning of ALS patients given an "optimal" number of clusters is far from being solved. The surrogate assessment provided by the score of a prognosis model makes matters worse: partitions with the same number of clusters could have the same RMSE while being different in terms of their ARIs. In the absence of a gold standard of ALS subtypes, launching discussions to agree on a computationally quantifiable set of desirable characteristics of the partitions to be obtained could be a solution to the ALS patient stratification problem. Background. Amyotrophic lateral sclerosis (ALS) progression is now known to be highly variable across patients (1). Given its relentless nature, the functional decline is usually expected to be continuous. However, previous studies using the revised ALS Functional Rating Scale (ALSFRSr) showed that about 25% of ALS patients experience at least one 6-month pause and that such pauses could be even longer in a smaller percentage of cases (2). The ALSFRSr is a standard measure in ALS progression studies; nonetheless, it has some flaws that could undermine its ability to grasp the real disease evolution over time. Background: While symptoms of cognitive and behavioural impairment in ALS/MND (ALSci/ALSbi) are well known (1), how these manifestations evolve over time, and who is or is not at risk for change, is unclear. Cross-sectional studies suggest that cognitive impairment is associated with advancing disease stage (2,3) and, in some cases, C9ORF72 repeat expansion (4,5).
Longitudinal studies have been hampered by high attrition, small sample sizes, and the lack of neuropsychological tests suitable for repeated administration and accommodating ALS/MND-related physical disabilities. Objectives: To explore the prevalence and temporal patterns of cognitive and behavioural change in ALS/MND patients, and the potential association with age, sex, education (years), bulbar onset, and C9ORF72 status. Methods: Subjects were ALS, PMA, and PLS participants in the CReATe (Clinical Research in ALS and Related Disorders for Therapeutic Development) Consortium's Phenotype-Genotype-Biomarker (PGB) study. The Edinburgh Cognitive and Behavioural ALS Screen (ECAS) was used to assess ALS-specific (ALSsp; language, executive, verbal fluency) and nonspecific (ALSnsp; memory, visuospatial) cognition. Informants reported behavioural symptoms through a semi-structured interview. N = 238 English speakers with ECAS at ≥3 visits (3-6 months apart, max 5 visits) were included in latent class growth and mixed model analyses. Results: Initial ALSci was uncommon (N = 18). Evidence for three subgroups representing different patterns of cognitive performance was found. Two showed normal scores that either remained unchanged on all ECAS measures (p = ns) or increased marginally over time for ECAS total (ECAStot; p = 0.002) and ALSsp (p = 0.001), but not ALSnsp (p = 0.134), scores. One subgroup (~10% of participants) showed low initial scores on all ECAS measures and small but significant declines in ECAStot and ALSnsp (p < 0.001), but not ALSsp and their relative progression rates. Mann-Whitney and Chi-squared tests were used to identify significant differences between pUMN and classic ALS as well as between pLMN and classic ALS. Logistic regression analyses were then applied to isolate significant predictors of pUMN and pLMN diagnoses. Results: Significant predictors of a pUMN diagnosis included younger age at onset (p = 0.02), longer disease duration (p = 0.05), and greater cranial (p = 0.03), upper limb (p = 0.03) and lower limb (p = 0.01) UMN involvement. Significant predictors of a pLMN diagnosis included longer disease duration (p = 0.01), more severe gross motor functional impairment (p = 0.01), lower muscle strength of the right (p = 0.001) and left (p = 0.007) lower limbs, less severe cranial (p = 0.02), upper limb (p = 0.002) and lower limb (p = 0.01) UMN involvement, and a slower rate of progression of UMN signs (p = 0.01). Discussion: Our findings suggest that specific clinical features at the time of diagnosis may help differentiate between more benign and more aggressive MND phenotypes. These findings have the potential to facilitate appropriate stratification for clinical trial enrollment, clinical management, and prognosis estimation. Background: Spreading across anatomical regions is a hallmark of Amyotrophic Lateral Sclerosis (ALS), but disease progression in respiratory-onset ALS patients has scarcely been reported. Respiratory-onset patients are a particular model for studying contiguous and predominant-side involvement in ALS spreading and the role of UMN vs LMN in the spreading pattern. Objectives: To assess the spreading pattern in a group of respiratory-onset patients followed in our center, taking into account their specific phenotype.
Methods: From a consecutive ALS population with probable/definite ALS, followed in our Unit and evaluated according to the ONWebDuals protocol, we included all respiratory-onset patients without concomitant involvement of other regions. We considered the phenotype (predominant UMN vs LMN) as well as the following affected regions (upper limbs, UL; lower limbs, LL; bulbar, B; cognition; respiration, R; axial, A). Results: Of the 625 ALS patients included, 18 (2.88%) had the respiratory-onset form (15 males, mean onset age 69.2 ± 11.6 years, mean disease duration 10.7 ± 9.0 months). Sixteen were Caucasian and all were right-handed. Two other patients had concomitant respiratory and bulbar onset and a third had a concomitant axial onset form; these were excluded. At the first visit, 38.9% of the patients reported significant weight loss (>10%), while 83.3% had resting fatigue and 88.9% had orthopnea. Weak cough was present in 83.3% and paradoxical respiration in 77.8%. All patients had predominant LMN involvement. Depression was present in 88.9%. Cognitive involvement was present in only one patient. No sensory changes were noted. The 2nd affected region was UL in 7 (4 of whom then progressed to LL involvement), LL in 3 (who progressed to UL involvement), bulbar in 4 (2 progressed to UL involvement and one to LL) and axial in 3 (1 progressed to LL involvement). An asymmetric progression was seen in 33.4%, from the respiratory region to the UL or LL (4 of them to the left segment and 1 to the right), while the remaining progressed to bilateral or midline involvement. The progression interval to the 2nd region was 4.7 ± 5.7 months and to a 3rd region was 6.1 ± 8.7 months. All patients were adapted to NIV (disease duration to NIV 9.9 ± 7 months, min 2-max 28); only two patients are alive, with a total survival of 31.7 ± 21.9 months (min 8-max 91). Discussion: A more medial, symmetric, rapid and contiguous spreading, with a predominant LMN pattern, is observed in respiratory-onset ALS patients, as compared to patients with spinal or bulbar onset. This respiratory-onset phenotype was seen in older men, with no sensory and rare cognitive involvement. Despite early NIV adaptation, total survival does not seem to differ from patients with the bulbar-onset form. Background: ALS heterogeneity points to the existence of potential subphenotypes which may require distinct treatment strategies. Neurophysiological methods, such as spectral electroencephalography (EEG) measures that point to network-level (dys)function, have shown promise in this regard. Previous findings from our group (1) point to the presence of at least four phenotypes based on resting-state EEG networks. For refinement of these findings, task-based (motor) paradigms that activate more specific motor brain networks during EEG recordings have the potential to provide further information for ALS stratification. Objectives: To explore the potential of subphenotyping based on electrophysiological data recorded during functional motor tasks, which would provide a data-driven, noninvasive approach for stratifying ALS patients. Specifically, we sought to employ cluster analysis and statistical testing to investigate whether EEG/EMG data contain subgroups that are both significant and stable.
Methods: Hierarchical and spectral clustering were applied to the following EEG/EMG-derived data: four measures of cortico-muscular coherence (CMC) taken from 20 patients during motor task performance (1) and five measures of event-related spectral perturbations taken from 24 different patients during sustained attention to response task (SART) performance (2). These nine measures were previously found to be significantly different between patients and controls. Cluster Validity Indices (CVIs) were used to assess the goodness of each cluster assignment, and p-values for each cluster assignment were calculated using several Monte-Carlo methods. Stability under small perturbation was quantified using the Adjusted Rand Index (ARI; [0,1], with 1 the most stable). Results: Hierarchical clustering of the CMC data found one significant (p = 0.004 to 0.008), stable (ARI = 0.90) solution with 5 clusters (of varying small and large sizes: one large cluster containing 11 patients, and two smaller clusters containing only 1-2 patients). Spectral clustering of the SART data also found one significant (p = 0.049), stable (ARI = 0.88) solution with 5 clusters. The latter clusters were comparable in size, each containing 4-7 subjects. Discussion: We found that the EEG/EMG data contain cluster structure that was both significantly above chance level and stable. However, low sample sizes meant that some of the resulting clusters were small, making it difficult to assess whether the proposed subgroups capture real effects in ALS electrophysiology. While it is not possible to draw more general and strong conclusions from these specific subgroups, the findings do demonstrate that EEG/EMG recorded during functional motor tasks contains rich information for ALS subphenotyping. With a larger sample, the cluster structure should become more apparent, and the resulting subgroups may reflect important aspects of ALS heterogeneity, which will be instrumental for stratification in clinics and for clinical trials.
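As an illustration of the stability analysis described in the two preceding abstracts, the sketch below scores a clustering against clusterings of perturbed copies of the data using the Adjusted Rand Index. The data and the Gaussian-noise perturbation scheme are placeholders, not the studies' actual features or procedure:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(24, 5))  # placeholder for 24 patients x 5 EEG/EMG features

def stability(X, n_clusters=5, n_trials=20, noise=0.05):
    """Mean ARI between the clustering of X and clusterings of perturbed copies."""
    base = SpectralClustering(n_clusters=n_clusters, random_state=0).fit_predict(X)
    scores = []
    for _ in range(n_trials):
        Xp = X + rng.normal(scale=noise * X.std(), size=X.shape)
        labels = SpectralClustering(n_clusters=n_clusters, random_state=0).fit_predict(Xp)
        scores.append(adjusted_rand_score(base, labels))
    return float(np.mean(scores))

print(f"stability ARI: {stability(X):.2f}")  # values near 1 indicate a stable solution
```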
sirenkov@tcd.ie
DSP-19 Monitoring progressive loss of walking ability in amyotrophic lateral sclerosis using the Timed Up and Go test E. Sukockienė 1,2, R. Iancu Ferfoglia 1, A. Poncet 3, J.P. Janssens 4 and G. Allali 5,6
the key determinants of disability. Tracking the decline in walking capacity can provide critical clinical information; however, walking evaluation has been scarcely studied as a potential predictive factor for survival in motor neuron disease. Objectives: Our goals were to assess the progression of gait decline and evaluate its association with mortality in ALS using the Timed Up and Go test (TUG). Specifically, the objectives of the study were: (1) to determine the feasibility of using the TUG, a reliable measure of mobility, in ALS patients; (2) to compare the TUG in bulbar and non-bulbar ALS patients; and (3) to examine whether the TUG could be a marker of disease progression similar to the ALSFRS-R score. Methods: Patients with confirmed ALS according to the El Escorial criteria (definite, probable or possible ALS) were recruited in the Centre for ALS and Related Disorders of Geneva University Hospitals and followed prospectively. At baseline, demographic and clinical parameters, including ALSFRS-R score and ALS form (bulbar, non-bulbar), were recorded. Exclusion criteria were the presence of other neurological or orthopedic disorders interfering with gait. The TUG was performed at baseline, and subsequent evaluations occurred every 3 months. At inclusion, patients were classified as unable to perform the TUG, "slow TUG" (>10.6 s) or "fast TUG" (≤10.6 s). Results: In total, 68 patients with ALS (mean ± SD age: 69 ± 12 years; 50% female) were included. Baseline TUG times were negatively correlated with the total ALSFRS-R score (r = −0.63, p < 0.001). At baseline, ALS patients with bulbar onset performed the TUG faster (9.9 ± 3.7 seconds) than non-bulbar ones (17.3 ± 14.9 seconds, p = 0.008). Thirty of 68 (44%) patients died by the end of the follow-up period. The TUG performance at the first visit did not predict mortality. Discussion: While we did not find any association between mortality in ALS and gait performance, the TUG was feasible in a majority of ALS patients and was correlated with functional status. The main advantage of the TUG is its easy access in clinical settings to provide a multiple-component assessment of balance and mobility, as well as cognition. Furthermore, the TUG could be included in the battery of assessments as an additional measure in the follow-up of non-bulbar ALS patients. egle.kazlauskaite@hotmail.com | 2021-11-18T06:23:02.887Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "cf0861c4afa7fa5b634875139d5b5b3aba58713a",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21678421.2021.1985797?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "cd9e0e190fe9620823d469cca1bcffbeea6e2665",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
208957265 | pes2o/s2orc | v3-fos-license | Serum bilirubin level is associated with exercise capacity and quality of life in chronic obstructive pulmonary disease
Background Bilirubin has antioxidant properties against chronic respiratory diseases. However, previous studies are limited by acquisition of the serum bilirubin level at a single time point and its analysis with clinical parameters. We evaluated the association of serum bilirubin levels with various clinical outcomes of chronic obstructive pulmonary disease (COPD) in the Korean Obstructive Lung Disease (KOLD) cohort. Methods We included 535 patients with COPD from the KOLD cohort. Serum bilirubin levels and various clinical parameters, such as lung function, 6-min walking (6 MW) distance, quality of life (QoL), and exacerbation, were evaluated annually; their association was analyzed using generalized estimating equations and the linear mixed model. Results Among the 535 patients, 345 (64.5%) and 190 (35.5%) were categorized into Global Initiative for Chronic Obstructive Lung Disease (GOLD) I-II and GOLD III-IV groups, respectively. The 6 MW distance was positively associated with serum bilirubin levels, especially in the GOLD I-II group (estimated mean = 41.5). Among QoL indexes, the COPD assessment test score was negatively associated with serum bilirubin levels only in the GOLD I-II group (estimated mean = −2.8). Higher serum bilirubin levels were independently associated with a higher number of acute exacerbations in the GOLD III-IV group (estimated mean = 0.45, P = 0.001). Multivariate analysis revealed that lung function and mortality were not associated with serum bilirubin levels. Conclusions Higher serum bilirubin levels were associated with a longer 6 MW distance and better QoL, especially in the GOLD I-II group, whereas they were related to a higher risk of acute exacerbation, especially in the GOLD III-IV group. Bilirubin levels may represent various conditions in COPD.
Background
Chronic obstructive pulmonary disease (COPD) is characterized by persistent respiratory symptoms and airflow limitation caused by significant exposure to noxious particles or gases [1]. Oxidative stress is an important mechanism in the development, progression, and exacerbation of COPD. Biomarkers of oxidative stress are elevated in the exhaled breath, sputum, and blood of patients with COPD [2].
Bilirubin is known as a potential antioxidant and possesses anti-inflammatory properties [3]. Elevated bilirubin levels have a protective effect in cardiovascular disease and related conditions [4]. Several reports demonstrate the relationship between bilirubin level and respiratory diseases. In the United Kingdom, a higher level of bilirubin was associated with a lower risk of COPD, lung cancer, and all-cause mortality [5]. In the Swiss general population, serum bilirubin was positively associated with lung function [6]. In COPD, bilirubin level was inversely related to disease severity and progression, and a higher bilirubin level was associated with a lower risk of acute COPD exacerbation, suggesting that bilirubin can be a biomarker of COPD exacerbation [7,8].
However, previous studies have a potential limitation in that the serum bilirubin level was acquired at a single time point and analyzed against serial clinical parameters. Brown et al. analyzed serum bilirubin levels through repeated measurements, but the measurement interval and the total duration of the study were too short to assess the relationship of serum bilirubin level with various clinical outcomes in COPD [8]. Given the conflicting results of previous studies, we aimed to evaluate the association of both baseline and repeatedly measured serum bilirubin levels with various clinical outcomes of COPD in Korea, including lung function, quality of life (QoL), exercise capacity, exacerbation, and mortality.
Study subjects
The study population initially consisted of 547 patients from the KOLD cohort, in which patients with COPD or asthma had been recruited from the pulmonary clinics of 17 hospitals in South Korea between June 2005 and December 2011 and are under continuous follow-up. The inclusion criteria were a postbronchodilator ratio of forced expiratory volume in 1 s to forced vital capacity (FEV 1 /FVC) < 0.7, age > 40 years, smoking history ≥10 pack-years, and no or minimal abnormality on chest radiography. Twelve patients were excluded (one had a high bilirubin level, nine were hepatitis B virus carriers, and two patients had moderate-to-severe liver diseases). The severity of disease was based on the FEV 1 % predicted in accordance with previous Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria [1].
Bilirubin measurement
Venous blood samples were collected from the patients at baseline and annually, unless they refused for any reason. A high bilirubin level was defined as a bilirubin concentration > 2.34 mg/dL for men and > 1.75 mg/dL for women [9]. These limits represent concentrations 1 standard deviation (SD) above the mean serum total bilirubin related to the most common variant of Gilbert syndrome, a benign hereditary cause of indirect hyperbilirubinemia [5].
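A minimal sketch of this sex-specific definition, directly encoding the cutoffs stated above:

```python
# Sex-specific high-bilirubin cutoffs (mg/dL) as defined in the text
CUTOFF_MG_DL = {"male": 2.34, "female": 1.75}

def is_high_bilirubin(total_bilirubin_mg_dl: float, sex: str) -> bool:
    """True if the total bilirubin exceeds the sex-specific cutoff."""
    return total_bilirubin_mg_dl > CUTOFF_MG_DL[sex]

print(is_high_bilirubin(0.68, "male"))  # False: near the cohort mean reported below
print(is_high_bilirubin(2.50, "male"))  # True
```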
Clinical assessments
Clinical assessments included pulmonary function test results, 6 MW distance, QoL measured by the COPD Assessment Test (CAT) and St. George's Respiratory Questionnaire (SGRQ), exacerbations, and mortality. These assessments were performed annually on the same day as blood collection. The 6 MW test was conducted on a 20-m walking course [10]. Exacerbation was defined as an unscheduled hospital visit or hospitalization due to aggravation of one of three respiratory symptoms (cough, sputum, or dyspnea) for 2 days or more.
Statistical analysis
We assessed the association between serum bilirubin levels and clinical outcomes. First, to assess how the baseline bilirubin level affects changes in lung function, 6 MW distance, CAT and SGRQ scores, and exacerbation over time, we applied the linear mixed model and the generalized estimating equations (GEE) approach to the dichotomous and continuous outcomes, respectively, for regression analysis. Second, the marginal GEE approach was implemented to account for the clustered measurements of clinical outcomes and serum bilirubin levels within the same patient. Age, sex, body mass index (BMI), smoking status, and baseline FEV1 were included in the model. We also examined the association of serum bilirubin levels with mortality through a Cox regression model using time-dependent repeated measurements. Statistical analysis was conducted with SAS version 9.4 (SAS Institute, Inc., Cary, NC, USA). We performed univariate analyses with the χ2 test for categorical variables and Student's t-test for continuous variables. A two-sided P-value < 0.05 was considered statistically significant.
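As an illustration of a clustered GEE analysis of this kind, a minimal sketch using statsmodels in Python (the study itself used SAS). The file and column names are hypothetical placeholders, and the exact model specification in the study may differ:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per patient-visit, with columns
# patient_id, six_mw, bilirubin, age, sex, bmi, smoking, fev1_base
df = pd.read_csv("kold_visits.csv")

# Marginal GEE with an exchangeable working correlation, clustering the
# repeated annual measurements within each patient
model = smf.gee(
    "six_mw ~ bilirubin + age + sex + bmi + smoking + fev1_base",
    groups="patient_id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())  # the coefficient on bilirubin is the estimated mean effect
```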
Baseline characteristics
The serum bilirubin levels did not vary between visits over the follow-up period [see Additional file 1]. Table 1 shows the baseline characteristics of the whole study group and the GOLD I-II/III-IV groups. Among the 535 patients, 345 (64.5%) and 190 (35.5%) were categorized into GOLD I-II and GOLD III-IV groups, respectively. The mean age was 67.9 years, 97% were male, 35.8% were current smokers, the mean FEV 1 was 1.72 L (56.8% of predicted), the mean follow-up duration was 5.4 years, and the mean bilirubin level was 0.68 ng/mL in the whole group. BMI, lung function, and proportion of current smokers were higher, and the 6 MW distance was longer in the GOLD I-II group than in the GOLD III-IV group. QoL was better in the GOLD I-II group than in the GOLD III-IV group according to CAT and SGRQ scores. The number of exacerbations and the mortality rate were higher in the GOLD III-IV group than in the GOLD I-II group. Serum bilirubin level was not different between the GOLD I-II and GOLD III-IV groups. Table 2 shows the results of the regression analysis of the relationship of serum bilirubin levels and pulmonary function. Baseline serum bilirubin level was associated with FEV 1 (estimated mean = 0.19, P = 0.038) and FVC (estimated mean = 0.34, P = 0.017) in the GOLD I-II group according to the univariate analysis. However, the multivariate analysis adjusted for age, sex, BMI, and smoking status showed no significant association between serum bilirubin levels and pulmonary function in the analysis of the baseline level or in the clustered analysis. Table 3 shows the results of the regression analysis for the association of serum bilirubin levels with 6 MW distance and QoL index. In the whole group, the 6 MW distance (estimated mean = 31.0, P = 0.005) was a Data are presented as numbers (percentages) or means ± standard deviation † P-value for comparison between COPD patients with GOLD I-II and those with GOLD III-IV BMI body mass index, FEV 1 forced expiratory volume in 1 s, FVC forced vital capacity, GOLD Global Initiative for Chronic Obstructive Lung Disease, AST aspartate aminotransferase, ALT alanine aminotransferase, 6 MW 6-min walking, CAT COPD assessment test, SGRQ St. George's Respiratory Questionnaire positively associated with serum bilirubin level in the multivariate clustered analysis. According to the subgroup analysis, this positive association was observed only in the GOLD I-II group (estimated mean = 41.5, P = 0.002). Among the QoL index, SGRQ score was not associated with serum bilirubin level in the analysis of baseline level or in the clustered analysis. However, the CAT score (estimated mean = − 1.9, P = 0.048) was negatively associated with serum bilirubin level in the multivariate clustered analysis in the whole group. A negative association between the serum bilirubin level and the CAT score (estimated mean = − 2.8, P = 0.017) was observed only in the GOLD I-II group when subgroup analysis was conducted.
6 MW distance and QoL
Exacerbation

Table 4 shows the relationship between serum bilirubin levels and the number of acute exacerbations per year. In the whole group, a higher baseline serum bilirubin level was independently associated with a higher risk of exacerbation (estimated mean = 0.62, P = 0.001). According to the subgroup analysis, this significant relationship was observed only in the GOLD III-IV group. The number of acute exacerbations per year was positively associated with serum bilirubin both in the analysis of the baseline bilirubin level (estimated mean = 0.75, P = 0.005) and in the clustered analysis (estimated mean = 0.45, P = 0.001) in the GOLD III-IV group (Table 4).
Mortality
Serum bilirubin level was not associated with mortality in the whole group or in the subgroups in the analysis of the baseline level or in the clustered analysis (Table 5).
Discussion
This study investigated the association of serum bilirubin levels with several clinical aspects, such as lung function, exercise capacity, QoL, exacerbation, and mortality, through the Korean COPD cohort study. We found that a higher serum bilirubin level was associated with a longer 6 MW distance, better QoL, and a higher risk of acute exacerbation after adjusting for age, sex, BMI, smoking status, and baseline FEV1. When stratifying patients according to the severity of airflow limitation, the association with better exercise capacity and better QoL persisted only in the GOLD I-II group, whereas the association with a higher risk of exacerbation persisted only in the GOLD III-IV group.
Bilirubin is a potent antioxidant against peroxyl radicals and protects cells from toxic levels of hydrogen peroxide [3]. The powerful antioxidant action of bilirubin arises from a redox cycle: after being oxidized to biliverdin, bilirubin is rapidly regenerated via biliverdin reductase [11,12]. Additionally, bilirubin attenuates vascular endothelial activation and dysfunction in response to proinflammatory stress [13].
These mechanisms may explain how higher serum bilirubin levels are associated with better exercise capacity in COPD. This is the first study to demonstrate a positive association between serum bilirubin level and 6 MW distance in COPD patients. However, the association was observed only in the GOLD I-II group; in COPD patients with severe airflow limitation, other factors such as BMI, muscle mass, baseline saturation, level of dyspnea, and lung function may have a greater influence on exercise capacity. Handgrip strength was independently and positively related to serum total bilirubin level in both sexes among Japanese community-dwelling persons [14]. Reactive oxygen species and reactive nitrogen species are generated in skeletal muscle both at rest and during contractile activity [15]. Intense and prolonged exercise can result in oxidative damage to both proteins and lipids in contracting myocytes. Low, physiological levels of reactive oxygen species are required for normal force production in skeletal muscle, but high levels promote contractile dysfunction, resulting in muscle weakness and fatigue [16]. Bilirubin is one of the numerous nonenzymatic antioxidants located within skeletal muscle fibers, and it inhibits both lipid and protein oxidation [14]. Albumin-bound bilirubin also protects human ventricular myocytes against oxyradical damage [17]. However, the relationship between bilirubin and clinical outcomes should be cautiously assessed in various settings where other health conditions could confound the results. Brown et al. showed that a higher bilirubin level was associated with a lower risk of acute exacerbation of COPD in secondary analyses of data from the Simvastatin for Prevention of Exacerbations in Moderate-to-Severe COPD (STATCOPE) and the Azithromycin for Prevention of Exacerbations of COPD (MACRO) studies [8]. However, we found contradicting results, as a higher bilirubin level was associated with a higher risk of exacerbation of COPD, especially in the GOLD III-IV group. This result could be cautiously explained by the relationship between bilirubin and right-heart function. Poelzl et al. showed that the median total bilirubin level increased with every New York Heart Association class [18]. Moreover, Samsky et al. reported that several echocardiographic indices of right-heart dysfunction were related to elevated total bilirubin levels, with an increased portal vein pulsatility index having the best predictive value in patients with exacerbation of chronic heart failure; abnormal liver function test results were observed in patients with heart failure as a result of impaired perfusion or increased right-sided cardiac pressures [19]. COPD patients with right-sided heart failure were at higher risk of severe exacerbations [20]. Relative pulmonary artery enlargement (a ratio of the main pulmonary artery diameter to the aortic diameter > 1) was a significant biomarker for predicting future exacerbation in the COPDGene study [21]. The ratio of the main pulmonary artery diameter to the aortic diameter was positively related to right ventricular pressure, and a high ratio was a significant risk factor for COPD exacerbation in Korean COPD patients [22].
In this study, the relationship between serum bilirubin and exacerbation was significant only in the GOLD III-IV group, which had more severe airflow limitation, but the cardiac function status in this group is not exactly known. In STATCOPE and MACRO, COPD patients with FEV1 < 80% predicted were enrolled, and one-third were categorized into the GOLD II group [23,24]. Moreover, the study durations of STATCOPE (median = 1.74 years) and MACRO (median = 0.55 years) were shorter than that of our study (mean = 5.4 years).
In contrast to the findings of several previous studies, serum bilirubin level was not associated with pulmonary function in our COPD patients. Curjuric et al. studied the association of bilirubin with lung function in the Swiss Study on Air Pollution and Lung Disease in Adults (SAPALDIA) cohort: high bilirubin levels were significantly associated with a higher FEV1/FVC ratio and forced expiratory flow at 25-75% of the pulmonary volume (FEF25-75%) overall [6]. Leem et al. found significant associations of serum bilirubin levels with FEV1, FVC, and FEF25-75% in the general population, especially in never-smokers; moreover, serum bilirubin levels were related to the annual decline in FEV1, FVC, and the FEV1/FVC ratio [9]. Apperley et al. reported that bilirubin is inversely related to COPD disease severity and progression: a higher serum bilirubin concentration was associated with a higher FEV1 and less annual decline in FEV1 [7]. In the study by Apperley et al., participants were active smokers with mild to moderate airflow limitation, defined as FEV1 between 55 and 90% predicted, and their mean FEV1 was 75% predicted. In the GOLD I-II group of our study, the mean FEV1 was 66.7% predicted, and half of the group had FEV1 < 70% predicted (data not shown), meaning that airflow limitation was more severe in our study than in the study by Apperley et al. Milevoj Kopcinovic et al. assessed bilirubin as an oxidative stress marker in stable COPD patients; although the number of participants was small, total bilirubin levels did not differ between patients with different COPD severities [25]. The association of serum bilirubin level with mortality is controversial, depending on the subtype of mortality. In accordance with our findings, an association with overall mortality in COPD has not been reported previously. In mild COPD, bilirubin was inversely correlated only with coronary heart disease mortality, not with overall mortality [7].
Our study had several limitations. First, methodologically, the causality between serum bilirubin level and the clinical consequences of COPD could not be established. Although it is currently impossible to conduct a prospective study with supplementation of bilirubin to investigate its clinical effects in patients with COPD, pretreatment with gavage of indirect bilirubin attenuated smoking-induced pulmonary injury in a rat model of smoking-induced emphysema by suppressing inflammatory cell recruitment and proinflammatory cytokine secretion and by increasing anti-inflammatory cytokine levels and antioxidant superoxide dismutase activity [26]. Second, a smaller number of patients was included in our study compared with previous large-cohort studies. However, the strength of our study is that data from repeated measurements over a long-term period were used. We evaluated the association of serum bilirubin levels with clinical outcomes using two different methods: (1) analysis of the relationship between the baseline serum bilirubin level and serial data of various clinical parameters, and (2) analysis of all repeated serum bilirubin levels with serial measurements of clinical parameters using the clustered analysis method. Third, this cohort study did not include other factors that may affect bilirubin levels, such as previous medication, alcohol consumption, or diet. Moreover, parameters of cardiac function were not fully evaluated in the KOLD cohort; therefore, our assumption of a relationship between heart failure and high serum bilirubin levels in cases of severe airflow limitation could not be proved in this study. Additionally, mortality could not be attributed solely to the effects of acute exacerbation because death was defined as all-cause death in our study.
Conclusion
This is the first COPD cohort study to investigate the relationship between serum bilirubin levels, measured repeatedly every year, and COPD-related clinical outcomes. A higher serum bilirubin level was independently associated with increased exercise capacity and better QoL in patients with mild-to-moderate COPD, and with a higher risk of exacerbation in those with severe-to-very severe COPD. Bilirubin possesses potential antioxidant and anti-inflammatory effects. However, bilirubin should be cautiously considered as a biomarker for predicting clinical consequences in various settings of COPD.
Additional file 1: Serum bilirubin levels over the follow-up period. Serum bilirubin levels did not vary between the visits over the follow-up period. | 2019-12-10T15:17:16.453Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "61679db5a1316fd7b050726a11ca0b23177f321d",
"oa_license": "CCBY",
"oa_url": "https://respiratory-research.biomedcentral.com/track/pdf/10.1186/s12931-019-1241-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "61679db5a1316fd7b050726a11ca0b23177f321d",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
33949349 | pes2o/s2orc | v3-fos-license | A note on the interaction between stock prices and exchange rates in Middle-East economies
Abstract Ample studies have been conducted to analyse the interaction between stock prices and exchange rates in developed and developing countries. However, studies on Middle-East economies are limited. Moreover, many existing studies test for Granger causality in a bi-variate setting which in turn leads to conflicting causality results. The goal of this study is to investigate the causal interaction between stock prices and exchange rates empirically in Iran, Kuwait, Oman and Saudi Arabia from January 2004 to December 2011. Among four Middle-East economies, we find that stock prices and exchange rates have bi-directional causality in Iran, Oman and Saudi Arabia, but the variables do not interact in Kuwait. Additionally, the recursive causality tests reveal that these relationships are stable over the analysis period. Therefore, stock prices and exchange rates affect each other at least in Iran, Oman and Saudi Arabia.
Introduction
Knowledge of the actual direction of causality between stock prices and exchange rates would provide additional information to investors and/or policymakers in forecasting and monitoring stock market performance and/or exchange rates. Therefore, the causality between stock prices and exchange rates has received widespread attention in the finance and economic literature and there is a growing literature on this topic, especially after the 1997 Asian financial crisis. 1 In such a scenario, understanding and foreseeing the relationship between stock prices and exchange rates might enable policymakers to formulate appropriate policies before the spread of the crisis (Parsva, 2012).
Theoretically, there are two competing schools of thought which underpin this causal relationship. First, the traditional school of thought, also known as the flow-oriented model, articulated that exchange rates Granger-cause stock prices because appreciation (depreciation) in exchange rates would decrease (increase) the competitiveness of a firm in the global markets, which in turn decreases (increases) the firm's profits and its stock price (e.g. Dornbusch & Fischer, 1980). Second, the portfolio balance school of thought, also known as the stock-oriented model, argued that stock prices lead exchange rates to change rather than the other way around. The model claimed that a stock price increase (decrease) would trigger massive capital inflows (outflows); thus, exchange rates would appreciate (depreciate) owing to an increase (decrease) in the demand for domestic currency (Branson, 1983; Frenkel, 1983). Therefore, the causality should be running from stock prices to exchange rates.
This study attempts to provide further evidence on the causal relationship between stock prices and exchange rates in Middle-East economies, namely Iran, Kuwait, Oman and Saudi Arabia. This study contributes to the existing literature in two directions. First, a number of studies on this topic performed Granger causality tests in a bi-variate framework (e.g. Hatemi-J & Roca, 2005; Lean, Narayan, & Smyth, 2011). Thus, these studies are likely to be biased owing to the omission of relevant variables influencing stock prices and exchange rates (Lütkepohl, 1982). The omission of a relevant variable from a system has two effects which can force the expected value of the estimated coefficient away from the true value of the population correlation coefficient, as follows. First, it might invalidate the causality inference between the variables of the incomplete system. The argument that any change in variable X causes changes in variable Y or vice versa, drawn from bivariate causality tests, may be invalid, as invalid causality inferences can result from omitting important variables (Caporale, Howells, & Soliman, 2004). Second, there is evidence that, by omitting relevant variables, the entire estimated equation generally becomes suspect due to the likely bias in the coefficients of the variables remaining in the equation (Studenmund, 2006).
The oil price and inflation rate are two important variables influencing stock prices and exchange rates. If oil is one of the inputs for production, an increase in the oil price would increase the cost of production, which in turn affects the cash flow and depresses the stock prices (Arouri & Rault, 2012; Chortareas, Cipollini, & Eissa, 2011; Narayan & Narayan, 2010). Besides, Krugman (1983) articulated that an increase in the oil price would also cause the exchange rate of oil-exporting countries to appreciate via the wealth transfer effect (see also Golub, 1983). Apart from that, a rise in the inflation rate is usually accompanied by an equal rise in the domestic interest rate to cushion the inflationary effect. According to the theory of interest rate parity, the high domestic interest rate may attract foreign capital inflows and lead to domestic currency appreciation. Since inflation would rise together with the interest rate according to the Fisher effect hypothesis, an increase in the inflation rate would also affect the firm's cash flow and profit and, thus, depress stock prices. Brown (2002) documented that the Middle-East is an oil-rich region that covers 65 per cent of the world's reserves of crude oil. Moreover, a number of leading oil exporters are located in this region. Therefore, changes in the oil price and inflation rate would have significant implications for the stock prices and exchange rates of Middle-East economies. Motivated by these promising reasons, we investigate the causal relationship between stock prices and exchange rates in Iran, Kuwait, Oman and Saudi Arabia within a multivariate framework by including the oil price and inflation rate as additional control variables. Thus, the current plan of this study is to address and clarify the question of what the impact is on the relationship between stock prices and exchange rates of including additional relevant variables.
Besides the omission of relevant variable(s), the second contribution of this study to the existing literature pertains to the use of more advanced econometric approaches to test the degree of integration and the causal relationship between stock prices and exchange rates.
Unlike the earlier studies, we employ the standard unit root test, Augmented Dickey-Fuller (ADF), as well as the unit root test with structural break advocated by Perron (1997) to determine the maximal order of integration of the series under investigation. Additionally, the causal relationship between stock prices and exchange rates may not be stable due to changes in the global economic and financial environments over time. This could also be a factor explaining the variation of results provided by the previous studies. Therefore, assessing the stability of the causal relationship between stock prices and exchange rates in Middle-East economies via the time-varying Granger causality test is essential. To the best of our knowledge, the unit root tests with structural breaks and also the time-varying Granger causality test have not been applied to this topic, particularly those relevant to the Middle-East economies.
The remainder of this paper is structured as follows. A literature review will be discussed in the next section. Section 3 will report the methodology and results. The conclusion of this study will be provided in Section 4.
Review of past literature
In an open economy, the impact of unexpected changes in exchange rates on the present value of a firm's assets, liabilities and cash flows exposes the economic value of the firm to exchange risk. This implies that exchange rates play a significant role in the movements of stock prices. In other words, stock prices of the firms that involve foreign direct investment (FDI), export and import of goods and services are likely to be influenced by exchange rate fluctuations (Soenen & Hennigar, 1988). Generally, the economic exposure of firms to exchange rate risks has increased and stock markets may respond to the excess movement and increase the volatility of exchange rates. On the other hand, exchange rates have been more sensitive to stock market movements and global portfolio investments over the past decades.
The first era of theoretical studies on the relationship between stock prices and exchange rates was sparked after the Bretton Woods agreement on the fixed exchange rate system was abandoned by most countries in 1973 (Stavarek, 2005). Thus far, the existing studies of the causality between stock prices and exchange rates have mainly concentrated on developed and developing economies in the Americas, Europe and Asia regions (e.g. Ajayi, Friedman, & Mehdian, 1998; Ajayi & Mougouė, 1996; Caporale, Hunter, & Menla Ali, 2014; Hatemi-J & Irandoust, 2002; Hatemi-J & Roca, 2005; Lean et al., 2011; Tsagkanos & Siriopoulos, 2013). Not much attention has been given to Middle-East economies (e.g. Chortareas et al., 2011; Parsva, 2012; Parsva & Lean, 2011). Moreover, the causality results are inconclusive among the existing studies. The plausible reasons for the conflicting causality results may be the omission of relevant variables and/or the instability of the causal relationship between stock prices and exchange rates. For example, Hatemi-J and Roca (2005) found that before the Asian financial crisis period, exchange rates Granger-cause stock prices in Indonesia and Thailand, but stock prices Granger-cause exchange rates in Malaysia. During the crisis period, they failed to find any evidence of causality between stock prices and exchange rates in the ASEAN economies. In addition to that, Parsva and Lean (2011) employed the Johansen cointegration and Granger causality tests to analyse the relationship between stock prices and exchange rates in six Middle-East economies in the periods before and during the global financial crisis. In sum, they found that the causality results for the pre-crisis and during-crisis periods are inconsistent among the Middle-East economies, except for Egypt and Oman. Clearly, the Granger causality test is very sensitive to the choice of the analysis period.
Methodology and Results
This study covers monthly data (from January 2004 to December 2011) of stock prices, nominal exchange rates in terms of local currency relative to the euro, crude oil price in US dollars per barrel, and the inflation rate for Iran, Kuwait, Oman and Saudi Arabia. 2 All data are obtained from Datastream, International Financial Statistics (IFS) and the Tehran Stock Exchange website. With the exception of the inflation rate, all data are transformed into natural logarithms.
Prior to the Granger causality test between stock prices and exchange rates, it is necessary to determine the order of d_max. The ADF test indicates that all variables are stationary at the first difference but non-stationary in levels. Therefore, the ADF results in Table 1 suggest that all variables under investigation belong to an I(1) process. [Table 1, the results of the ADF unit root test: *** and ** denote significance at the 1 and 5 per cent levels, respectively; Δ is the first difference operator; the lag order for the ADF test is set by AIC; figures in brackets denote the optimal lag order; the critical values are obtained from MacKinnon (1996); source: authors' calculations.] Nonetheless, if the series contain structural breaks, these ADF results may not be accurate (Perron, 1989). To confirm the order of integration of each series, we also employ the Perron (1997) unit root test with a break. The results of the Perron test are presented in Table 2. Importantly, the Perron test finds no additional evidence against the standard ADF test. We affirm that all variables under investigation are I(1) processes, suggesting the choice of Toda and Yamamoto (1995) and Dolado and Lütkepohl (hereafter TYDL). Next, we use the TYDL Granger causality test to ascertain the direction of causality between stock prices and exchange rates. To perform this test, we estimate the following augmented VAR models in levels:

ln SP_t = α_1 + Σ_{i=1..p} β_i ln SP_{t-i} + Σ_{i=1..p} δ_i ln EX_{t-i} + Σ_{i=1..p} φ_i ln OP_{t-i} + Σ_{i=1..p} γ_i INF_{t-i} + ε_{1t}   (1)

ln EX_t = α_2 + Σ_{i=1..p} θ_i ln EX_{t-i} + Σ_{i=1..p} η_i ln SP_{t-i} + Σ_{i=1..p} λ_i ln OP_{t-i} + Σ_{i=1..p} μ_i INF_{t-i} + ε_{2t}   (2)

where ln is the natural logarithm; SP, EX, OP and INF denote stock prices, exchange rates, the oil price and the inflation rate; and the residuals ε_{1t}, ε_{2t} are assumed to be normally distributed and non-serially correlated. k is the optimal lag structure of the VAR models selected by the multivariate Akaike Information Criterion (AIC), while p = k + d_max, where d_max is the maximal order of integration of the series. Here, we specify d_max = 1 because our unit root test results suggest that all variables are I(1). Moreover, the Monte Carlo simulation results in Dolado and Lütkepohl (1996) also confirmed that d_max = 1 is superior to other orders of d_max. In equation (1), rejecting the joint restriction δ_1 = δ_2 = ⋯ = δ_k = 0 implies that exchange rates Granger-cause stock prices (i.e. supporting the traditional approach), whereas in equation (2), rejecting η_1 = η_2 = ⋯ = η_k = 0 indicates that stock prices Granger-cause exchange rates (i.e. supporting the portfolio-balance approach). The results of the TYDL Granger causality test are reported in Table 3. The results indicate that stock prices and exchange rates have bi-directional causality in Iran, Oman and Saudi Arabia. Nevertheless, we find that in the case of Kuwait there is no causality flow in either direction.
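As a sketch of the mechanics only (the authors do not state their software, and the helper name and simulated data below are ours), the TYDL test can be implemented by fitting the levels regression with p = k + d_max lags of every variable and Wald-testing just the first k lag coefficients of the candidate causal variable, leaving the d_max augmentation lag unrestricted:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def tydl_granger(y, x, controls, k, dmax=1):
    """Test 'x does not Granger-cause y' in the TYDL sense: regress y
    (in levels) on p = k + dmax lags of y, x and the controls, then
    Wald-test that the first k lag coefficients of x are jointly zero.
    The dmax augmentation lags stay unrestricted, which is what keeps
    the statistic chi-squared despite the I(1) series."""
    p = k + dmax
    data = pd.concat([y, x] + list(controls), axis=1)
    lags = pd.DataFrame({f"{c}_L{i}": data[c].shift(i)
                         for c in data.columns for i in range(1, p + 1)})
    frame = pd.concat([y.rename("dep"), lags], axis=1).dropna()
    X = sm.add_constant(frame.drop(columns="dep"))
    res = sm.OLS(frame["dep"], X).fit()
    restriction = ", ".join(f"{x.name}_L{i} = 0" for i in range(1, k + 1))
    return res.wald_test(restriction, use_f=False, scalar=True)

# Toy usage on simulated monthly series standing in for the real data.
rng = np.random.default_rng(0)
n = 96  # January 2004 - December 2011
sp = pd.Series(np.cumsum(rng.normal(size=n)), name="lnSP")
ex = (pd.Series(np.cumsum(rng.normal(size=n))) + 0.3 * sp).rename("lnEX")
oil = pd.Series(np.cumsum(rng.normal(size=n)), name="lnOP")
inf = pd.Series(rng.normal(size=n), name="INF")
print(tydl_granger(ex, sp, [oil, inf], k=3))  # H0: SP does not cause EX
```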
In addition to the full-sample TYDL Granger causality test, we also incorporate a time-varying procedure to verify the stability of the causal relationship between stock prices and exchange rates. One of the advantages of the TYDL Granger causality test is that it can be applied without knowing the unit root and cointegration properties. Toda and Yamamoto (1995) suggested estimating a vector autoregressive (VAR) model in levels with lag k and then augmenting it with additional lag(s) determined by the maximal order of integration, d_max. In doing so, statistical inference can be made based upon the standard asymptotic distribution.
To this end, we first tested the direction of causality with the full sample. However, the causal relationships between stock prices and exchange rates may be unstable due to the frequent change of global economic and financial environments (Tang, 2008), and policy-making based upon a full-sample result alone may be inappropriate. To address the stability issue, we extend our study by conducting the recursive-based TYDL Granger causality test proposed by Tang (2008). The advantage of using the recursive rather than the rolling-based TYDL Granger causality test is that the latter is subject to the problem of choosing the best rolling window, where the choice is likely to be arbitrary. Owing to this limitation, some recent studies such as Tang (2013, 2015) and Tan (2013, 2015) used the recursive-based causality test. To perform the recursive-based TYDL Granger causality test, we set the initial sample size T and add one new observation to the end of the sample (i.e. T+1); this process continues until the last observation is consumed.

Table 3. The results of the TYDL Granger causality test (likelihood ratio test statistics).

Null hypothesis    Iran (k = 12, p = 13)   Kuwait (k = 3, p = 4)   Oman (k = 12, p = 13)   Saudi Arabia (k = 12, p = 13)
ln EX ↛ ln SP      52.0081***              1.1486                  24.8706**               22.9398**
ln SP ↛ ln EX      41.2179***              2.2231                  23.0333**               39.5837***

Note: *** and ** denote rejection at the 1 and 5 per cent significance levels, respectively. The Lagrange multiplier (LM) test suggests that the selected VAR models for the causality test are free from serial correlation up to order eight. Source: authors' calculations.
With this procedure, we compute the Likelihood Ratio (LR) statistics for each sub-sample. The plots of the 10 per cent normalised LR statistics for H0: ln EX ↛ ln SP and H0: ln SP ↛ ln EX are depicted in Figure 1 and Figure 2, respectively. The null hypothesis of non-Granger causality can be rejected if the normalised LR statistic is above the unity line. By and large, we find that the causal relationships between stock prices and exchange rates are stable, because the causality inferences for each sub-sample are consistent. Specifically, we observe that the plots of the normalised LR statistics for Iran, Oman and Saudi Arabia fluctuate above the unity line, while the normalised LR statistics for Kuwait tend to fluctuate below the unity line in all sub-sample periods. Unlike Ajayi et al. (1998), Hatemi-J and Roca (2005) and Lean et al. (2011), we confirm that there is a stable bi-directional causality between stock prices and exchange rates in Iran, Oman and Saudi Arabia. However, there is also a stable neutral (no) causality in the case of Kuwait, which is contrary to the finding of Parsva and Lean (2011). In sum, our empirical results show supportive evidence of the traditional and also the portfolio-balance approaches, at least for Iran, Oman and Saudi Arabia.
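Continuing the same hypothetical sketch, the recursive procedure is an expanding-window loop around the helper above; here a Wald statistic stands in for the paper's LR statistic and is normalised by the 10 per cent critical value, so values above 1 correspond to points above the unity line in Figures 1 and 2:

```python
from scipy.stats import chi2

def recursive_tydl(y, x, controls, k, t0, dmax=1):
    """Expanding-window TYDL test a la Tang (2008): start with the
    first t0 observations, add one observation per step, and normalise
    each statistic by the 10% critical value of chi2(k) so that values
    above 1 reject the null of non-Granger causality."""
    crit = chi2.ppf(0.90, df=k)
    path = {}
    for t in range(t0, len(y) + 1):
        res = tydl_granger(y.iloc[:t], x.iloc[:t],
                           [c.iloc[:t] for c in controls], k, dmax)
        path[t] = float(res.statistic) / crit
    return pd.Series(path, name="normalised statistic")

# Sub-sample path for H0: SP does not Granger-cause EX.
print(recursive_tydl(ex, sp, [oil, inf], k=3, t0=48).tail())
```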
Conclusion
We examined the causal relationship between stock prices and exchange rates in Iran, Kuwait, Oman and Saudi Arabia using a multivariate framework. The TYDL Granger causality results revealed that stock prices and exchange rates in Iran, Oman and Saudi Arabia are closely interacting; in other words, these three countries follow both the traditional and the portfolio-balance approaches. Nevertheless, stock prices and exchange rates in Kuwait interact loosely, because we do not find any evidence of causality between them, so Kuwait follows neither the traditional nor the portfolio-balance approach. Moreover, the recursive-based causality results affirm that these causal relationships are stable in the selected countries. Therefore, investors may be able to forecast the stock market's performance based upon the exchange rate pattern, or the other way around. Furthermore, economic policymakers may be able to stimulate the stock market's performance, at least in Iran, Oman and Saudi Arabia, by adjusting the exchange rates. It is also worth noting that this close relationship would allow policymakers to predict currency crises based upon the stock market's performance. Therefore, alternative policies could be implemented before such a crisis, such as reinforcing financial market transparency and accountability in the countries under review to prevent high volatility in stock prices and unreliable movements in currency value in the foreign exchange market (Parsva, 2012). Moreover, other monetary and fiscal policies should also be implemented to ensure macroeconomic stability in the Middle-East economies.
However, this study is still imperfect. The newly established markets in some Middle-East countries and the lack of financial and economic data in the global databases are the main reasons that the financial time series in this area are inevitably restricted to domestic sources. With more observations, the study might obtain more significant results. Moreover, the implications of the results could possibly be improved by applying daily data in the multivariate model. The use of higher-frequency observations would possibly better capture the dynamic nexus of the stock market and the foreign exchange market.
Notes
1. It is believed that the 1997 Asian financial crisis, which started as an exchange rate crisis in Thailand and then led to the depreciation of other currencies in the region, resulted in the collapse of the stock markets (Hatemi-J & Roca, 2005; Khalid & Kawai, 2003).
2. Most of the sample countries in this paper have fixed their currencies to the US dollar.
Therefore, the investigation has been conducted by using the monthly time series data of nominal exchange rate against the euro, as the euro is the second most distributed and traded currency in the world after the US dollar (Bank for International Settlements, 2007). | 2017-09-01T23:37:15.092Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "cfef08caddd74e8ec568f7d5d8304a38f31331af",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1331677X.2017.1311222?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "9fb105fe5509cdadc00d99094f4dbe645652f313",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Economics"
]
} |
114642957 | pes2o/s2orc | v3-fos-license | Modernization of high-power (5 kW) broad ion beam source
In the course of the long-term operation (over 5 years) of a high-power source of gas ions (25 keV, 0.2 A, 600 cm2) with a plasma emitter based on a cold-cathode discharge, the character and rate of faults in key structural elements were determined, which allowed the inter-repair time and the complexity and cost of repair to be estimated. The peculiarities of the gas-discharge system and the ion beam forming system that limit the effectiveness of ion beam treatment were revealed as well. Conditions favorable for a decrease in the discharge voltage by 50-200 V and in the ignition voltage by a factor of 1.5-2 are determined. The possibility of lowering the minimal flow of working gas is demonstrated. A design of the discharge system with a reduced sputtering rate of local areas of the hollow cathode is proposed. The changes introduced into the ion source design are described; they aim to extend the lifetime of the plasma chamber, which is exposed to cyclic heating by the back electron beam leading to the development of through cracks, and to extend the service life of the glow-discharge hollow cathode by optimizing its configuration and the conditions of discharge ignition and burning. The upgraded design of a multislit ion-optical system with enhanced performance ensures a uniform surface distribution of ion fluence.
Introduction
The gas ion source with a large beam cross section was developed for modifying large surfaces of materials by ion implantation and for use in coating deposition processes assisted by an ion beam (IBAD) [1]. To generate the ion-emitting plasma, fast electrons are injected into an extended cathode cavity (plasma chamber) containing a low-area anode. The electrons are emitted from the plasma of a self-sustained glow discharge with a hollow cathode through a grid electrode and accelerated in a double layer of space charge while crossing the grid electrode [2]. As a result of electron oscillations in the cathode cavity, a volume plasma is formed whose spatial inhomogeneity depends on the gas pressure across the cathode cavity and the method used for feed gas input. It was shown that, under one-side injection of electrons into the plasma chamber, an acceptable (±10%) inhomogeneity of plasma density over a 600 mm length is reached by regulated gas feed from the opposite ends of the plasma chamber [3]. In order to reduce the rate of ion sputtering of the hollow cathode, its area was enlarged up to ~1300 cm2. To extend the grid electrode lifetime, the space between the grid and the cathode aperture was enlarged to 100 mm; as a result, the electron flow, after passing through the cathode aperture of 0.5-1 cm2 area, broadened up to 100 cm2. The 100-fold reduction of the emitting plasma density allowed a perforated electrode with 4 mm apertures to be used instead of a fine-grained grid, solving the problem of the grid lifetime limited by ion sputtering [4]. The choice of a multislit ion-optical system for ion beam formation was dictated both by the quasi-linear geometry of the beam cross section and by the need to align many apertures over a long distance and to keep the alignment while the temperature of the electrodes varies over a wide range. One-side rigid fixing of the electrodes and the single rods forming the slits, together with their floating arrangement on the opposite side, provided reliable functioning of the ion-optical system. The set of required ion beam parameters (25-40 kV, 0.2 A) also impeded beam formation, as it demanded a long (6-8 cm) accelerating gap. All this not only complicated the aperture alignment but also increased the rate of gas ionization in the gap at a plasma-forming gas pressure of 0.05 Pa and caused a back flow of accelerated electrons. As a result, heating of the source and excessive consumption of high-voltage supply power take place [1]. The substantial extra energy release required changes in the design and productivity of the cooling system. Therefore, in the course of designing and testing the ion source, the main problems hampering its operation were determined and measures for overcoming them were taken. In the present work, the lifetime of key elements of the source under conditions of real pilot use in an installation for modifying the surfaces of turbine blades was estimated. Results of experiments aimed at improving the technical and operational characteristics of the ion source, realized in the new source design, are described.
Service conditions and technical maintenance of the ion source
The ion source has been used for 5 years in the «Victoria-2M» vacuum coater for ion-plasma modification of aviation components at the Scientific-Production Association «Technopark of Aviation Technologies», working in cooperation with the Public Joint Stock Company "Ufa Engine Industrial Association" and Ufa State Aviation Technical University. The ion source is used for the treatment of low- and high-pressure compressor blades and allows ion cleaning (etching) and ion implantation of 200 oversized parts in one cycle. Modifying the surfaces of gas-turbine engine components by ion implantation enhances the fatigue strength of titanium alloys by 6-10% and of nickel alloys by 12-14%. Besides, ion beam treatment increases the lifetime of components and their microhardness and adhesion strength after subsequent coating deposition. The source parameters are as follows: the accelerating voltage is 25 kV, the beam current of nitrogen or argon ions is 0.2 A, the initial cross section of the beam is 600 x 100 mm2, and the operating gas pressure in the treatment chamber equals 0.05 Pa. The ion source is operated in cyclic mode: 1-3 cycles a day, one cycle taking 150 min. The total running time of the source in continuous beam generation mode amounted to ~3500 hours, on average 5 hours a day. The ion source design is shown in Fig. 1. At both ends of the water-cooled hermetic case 1, ceramic high-voltage bushing insulators 2 are arranged; at one of them hollow cathode 3 is mounted, and at the opposite one plasma chamber 4 and rod tungsten anode 5 are fixed. The plasma chamber and the cathode are placed inside metallic screen 6, which equalizes the electric field strength in the high-voltage gap and blocks plasma feeding from the gap between grid electrode 7 and cathode 3. The large length of the gap (30 mm) is caused by the significant thermal expansion of the plasma chamber in the axial direction when the ion source is used for a long time. The ion beam is formed by the ion-optical system consisting of emitter electrode 8 and accelerating electrode 9 and propagates into the treatment chamber through the rectangular aperture in output flange 10.
Routine technical maintenance of the ion source was performed by the working staff and consisted in the periodic (once per 6 months, after 600-700 hours of operation) replacement of diaphragm 11, made of stainless steel 12X18H10T, in the hollow cathode (3, 13, 14), because owing to ion sputtering the diameter of its aperture increased from 8 to ~15 mm (Fig. 2). This aperture growth led to an increase in the discharge voltage and required a higher gas flow. Tungsten igniting electrode 16 (Fig. 1) was replaced at the same time because of its shortening by 10-15 mm. The reduction of the electrode length worsened the conditions of discharge ignition, which also made it necessary to increase the minimal gas flow. The tungsten rods (Ø2 mm) (12, Fig. 1) of accelerating electrode 9 of the ion-optical system (IOS) (8, 9), sputtered by the ion beam, were periodically (after 1300-1500 hours) replaced. A more complicated prophylactic repair was made once, after 2500 hours of operation; it included repair or total replacement of the plasma chamber and the hollow cathode. Multiple cyclic heating and cooling lead to a loss of stainless steel plasticity and the development of through cracks (Fig. 2); hence the plasma chamber construction loses its rigidity. The simplest solution to this problem is placing a cylindrical replaceable insert into the plasma chamber. Gradual degradation of this insert does not result in emergency effects.
The rather complicated configuration of the hollow cathode, including elements 3, 11, 13 and 14, is dictated by the necessity of a special space for electron flux expansion towards grid 7 and for the disposal of fixing
Optimizing the conditions of beam formation and discharge operation
A known disadvantage of forming a broad ion beam with a multi-slit ion-optical system (IOS) is that, even at high homogeneity of the plasma ion emitter, such an IOS provides an almost uniform distribution of beam current density along the long axis of its cross section only at a definite distance from the beam formation system and at definite operating modes, determined by the combination of beam current and accelerating voltage.
Uniformity of the distribution is achieved as a result of the angular divergence of the elementary beams formed in the individual apertures and their overlapping in the beam drift space. At less than optimal distances from the ion-optical system, a non-uniform beam current density distribution is observed, characterized by alternating maxima and minima of ion current density along the long axis of the beam. Therefore, even at an insignificant change of beam generation modes, or when treating complicated surfaces, the fluence of ion irradiation of the surface of articles moving across the long axis of the beam cross section will be distributed non-uniformly over the surface.
The proposed way to reduce the inhomogeneity of ion beam treatment implies changing the angle of slope of the slits in the ion-optical system electrodes: the angle between a slit and the short axis of the beam cross section has to differ from 0°. The minimal angle of deviation, arctg(h/l), is determined by the length l and width h of a single slit aperture. As a result, all parts of the surface of an article moving across the long axis of the beam receive approximately equal fluences of ion irradiation. The design and operation principle of the IOS are shown in Fig. 3.
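As a numeric illustration of the arctg(h/l) bound (the slit dimensions below are assumed for illustration; the paper does not state them):

```python
import math

# Hypothetical slit aperture: length l and width h in millimetres.
l_mm, h_mm = 80.0, 12.0
alpha_min = math.degrees(math.atan(h_mm / l_mm))
print(f"minimal slit rotation angle ~ {alpha_min:.1f} deg")  # about 8.5 deg
```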
Fig. 4 shows the distributions of ion current measured by a collector (of length l and width h/4) moving along the long axis of the beam, for optics with rectangular slit orientation and for optics with a slit rotation angle of 9°. The measurements were made at a distance of 3 cm from the ion-optical system, at which the level of inhomogeneity of the beam cross section is high. Even in such conditions the inhomogeneity of fluence was <±15%.
The research on optimizing the conditions of ignition and burning of the self-sustained glow discharge with a hollow cathode was aimed at reducing the minimum value of the gas flow, the ignition voltage of the pulse discharge in the cathode cavity, and the DC voltage applied between the cathode and the anode that provides stable ignition and burning of the constricted discharge. The research showed that a change in the potential of diaphragm 11 with the outlet aperture has a significant effect on the conditions of discharge ignition between the cathode and the anode. At the cathode potential of the diaphragm, disruption of the cathode sheath in the constricting aperture is required for discharge development; this is achieved by increasing the cathode-anode gap voltage and the pulse discharge current. At the anode potential of the diaphragm, the conditions of pulse ignition are practically stable; the ignition voltage of the constricted discharge decreases by 300-500 V (Fig. 5), and the discharge voltage falls by 50 V (Fig. 6.1). Besides, a diaphragm at a potential close to the anode one (40-50 V) is not subjected to ion etching. The effect of the cathode inserts (15, Fig. 1) on the burning voltage of the constricted discharge is shown in Fig. 6.2. The growth of the amplitude of the ignition pulse current from 12 to 50 A leads to a reduction of the minimum flow of feeding gas at which constricted discharge ignition is provided, from 11 to 7 sccm.
Conclusion
The experience of the long-term (5-year) pilot use of the high-power source of a broad gas ion beam (25 kV, 0.2 A, 600 cm2) with a plasma emitter based on a discharge with a cold cathode was summarized. The source is simple and reliable in operation and does not require frequent repair. The current prophylactic repair of the ion source, held twice a year, consisted in the replacement of the cathode diaphragm, because of the significant increase of the outlet aperture dimensions as a result of ion sputtering, and also the replacement of the tungsten rods in the accelerating electrode of the ion-optical system, which are sputtered by the ion beam. Regular (biyearly) prophylactic maintenance consisted in repair or total replacement of the plasma chamber and the hollow cathode. The loss of metal plasticity and the development of through cracks took place owing to cyclic heating of the plasma chamber wall by the back electron flow. The integrity of the hollow cathode design is destroyed because of intensive sputtering of its protruding parts. In order to increase the lifetime of these components, a new design of the hollow cathode was developed and the use of a replaceable insert inside the plasma chamber was recommended.
The results of the experiments allowed the lifetime of the diaphragm with the constricting aperture to be increased by altering its potential from the cathode to the anode one, and the inhomogeneity of ion beam treatment to be reduced by using the multislit ion-optical system with the angle between the slits and the long axis of the beam cross section different from 90°. | 2019-04-15T13:11:34.736Z | 2017-05-04T00:00:00.000 | {
"year": 2017,
"sha1": "8e3325eddce96b2920c0f58e572479727f67c42e",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/830/1/012050",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "81e2b21c597a45cf00f3ae44ade51a6ef23de0b3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
55241438 | pes2o/s2orc | v3-fos-license | ENVIRONMENTAL CRITIQUE ON WATER SECTORAL ENVIRONMENTAL IMPACT ASSESSMENT OF BANGLADESH
The water resources sector of Bangladesh relies on Environmental Impact Assessments (EIAs) to assess the possible positive and negative impacts on the environmental and social components of project-affected areas. The motivation of this research was to identify the key environmental components and the gaps and lapses of current EIA practices in the water resources sector of Bangladesh. Under this motivation, the study has determined the effectiveness of a water resources EIA (Gorai River Restoration Project) for the sustainable implementation of water resources development and management projects in Bangladesh. The component-based checklist method and an effectiveness review framework were used in this study to draw conclusions and to make environmental decisions on the important sections of the studied EIA. Review of the key aspects and the analysis of the effectiveness framework disclosed that the studied EIA is well performed and has considered sufficient information for decision making, but the residual and unavoidable impacts were not identified for all the important environmental components in the construction and operation phases. Inclusion of important environmental and social components under different intervention scenarios, consideration of alternative flow regimes, and suggestions and analysis of different project interventions ensuring public participation were the key strengths of the studied EIA. The environmental issues and aspects considered in this study can be used as guidelines for future EIAs under similar geo-environmental contexts. The developed review framework can be implemented in the water resources EIA review process to ensure the long-term sustainability of water resources projects.
INTRODUCTION
An Environmental Impact Assessment (EIA) is a government-mandated prerequisite for the implementation of a project which has a potential for significant impacts on the environment (Glasson et al., 1999). Wathern (1990) defines EIA as the assessment of environmental impacts; it helps to identify alternative options which ensure the project's sustainability from both environmental and socio-economic standpoints. Water resources interventions fall under the red category of industrial activities under the Environmental Conservation Act of Bangladesh (DoE, 1997).
For the water resources projects of Bangladesh, the EIA acts as a prerequisite for establishing the proposed project's feasibility and sustainability (WARPO, 2005). EIA review is the process of checking the standard of an EIA to decide whether the proposed project gains approval and proceeds to operation or not (UNEP, 2002). In this study, environmental considerations and aspects were reviewed, and the effectiveness of the conducted water resources sector EIA was assessed based on established EIA review methods.
The main objective of this study was to assess the effectiveness of a water resources EIA in Bangladesh. The extent of the executed tasks and the gaps of the study were identified and summarized considering the environmental and social aspects to ensure an effective EIA in the water resources sector.
MATERIAL AND METHODS
The effectiveness review framework, the component-based checklist method and reviews of experts' suggestions were used to summarize the gaps and lapses and to make a recommendation for the studied EIA (UNEP, 2002). Component-based checklists were prepared considering the main findings of the environmental baseline, and the detailed EIA was thoroughly reviewed using three classes (C: Complete, M: Moderate and P: Poor) with explicit remarks (FPCO, 1992).
Scoping was scrutinized considering the relevant impacts, key factors and reasonable alternatives (Saha, 2007). The analysis of the major environmental impacts, indirect and cumulative impacts, suggested mitigation measures with monitoring arrangements, and the contingency and compensation plans was reviewed sequentially under the effectiveness review framework method (UNEP, 2002). Consultations with EIA practitioners (15) and experts (8) were conducted to verify the review results and to justify the applicability of the studied EIA under the proposed project options. Following Sadler (1996), this study rated the identification of deficiencies, critical shortcomings, remedial measures and decision making on a scale of A (well performed) to F (very unsatisfactory) and considered the 'Triple A' test of appropriateness (coverage of key issues and impacts), adequacy (impact analysis) and applicability (effectiveness).
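Purely as an illustrative sketch of the review bookkeeping (the checklist classes and the Sadler scale come from the text above; the data-structure design, helper name and per-feature checklist assignments are ours, while the grades mirror the review matrix in the Results section), the component-based review can be tabulated as follows:

```python
from collections import Counter

# Checklist classes and Sadler grades as defined in the methodology.
CHECKLIST = {"C": "Complete", "M": "Moderate", "P": "Poor"}
SADLER = "ABCDEF"  # A = well performed ... F = very unsatisfactory

# Hypothetical entries: (EIA feature, checklist class, Sadler grade).
reviews = [
    ("Policy, legal and administrative framework", "M", "C"),
    ("Approaches and methodologies", "M", "C"),
    ("Project options", "C", "B"),
    ("Environmental and social baseline", "C", "A"),
    ("Alternative flow regimes", "C", "B"),
]

def summarize(entries):
    """Tally reviewed features by checklist class and by Sadler grade."""
    classes = Counter(c for _, c, _ in entries)
    grades = Counter(g for _, _, g in entries)
    return classes, grades

classes, grades = summarize(reviews)
print({CHECKLIST[k]: v for k, v in classes.items()})
print(dict(sorted(grades.items())))
```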
Overview of Gorai River Restoration Project (GRRP)
The Gorai River is the main distributary of the Ganges, supplying fresh water to the south-western part of Bangladesh (Mirza, 1998). The seasonal flow (November to May) of the Gorai River has been declining for the last twenty years, and a further decrease in the dry season flow will lead to rapid siltation at the mouth of the Gorai River. After signing the Ganges Water Treaty (GWT) with the Government of India (GoI), the Government of Bangladesh (GoB) asked the World Bank (WB), the Government of the Netherlands (GoN) and other donor agencies to assist the implementation of the GRRP. For this purpose, missions were undertaken in September 1997 and March 1998. The GRRP area covers about 1.616 million hectares of land between latitudes 21° 30′N and 24°N and longitudes 89°E and 90°E. The Gorai River off-take from the Ganges River forms the northern boundary, and the southern tip of the Sundarbans is the southern boundary of the GRRP area.
The GRRP was planned to start in 2001 under the auspices of the Bangladesh Water Development Board (BWDB) with funding support from the GoB, the WB and the GoN (DHV-Haskoning and Associates, 2000). The overall objectives of the proposed GRRP are to prevent environmental degradation in the southwest region and the coastal belt, and to save the Sundarbans (the world's largest mangrove forest), by undertaking restoration works of the Gorai River, ensuring the supply of fresh water flow in the wet season and augmenting flow during the dry season (BWDB, 1998). The proposed project will improve agricultural and fisheries production and navigation by mitigating adverse environmental effects due to salinity intrusion. The major components of the GRRP are (a) river training works at the Gorai mouth and the Ganges approach to the Gorai off-take, (b) restoration of the Gorai River distribution system, (c) community development and (d) participation and institutional capacity building for maintaining the restored river system while ensuring sustainable water distribution and use.
The GRRP includes investigation of every part in its preliminary stage, including the design, river training works and a program of maintenance dredging to augment the flows of the Gorai River. The proposed GRRP will update and supplement technical, social, environmental and economic assessments and will incorporate lessons learned from the recurrent dredging activities which are carried out under the priority work programme of the Government of Bangladesh. The proposed GRRP will be conducted on the basis of the project appraisal by the GoB, the World Bank and other bilateral donors (BWDB, 1998). Therefore, the EIA of the proposed project was conducted by the multidisciplinary team of EGIS-II, an environmental organization and a Trustee of the Government of Bangladesh, before the initiation of the project activities.
The main objectives of the studied EIA were to assess the positive and negative impacts on the environment due to the priority construction and river training works for the flow restoration, and to prepare an environmental management plan to monitor and mitigate any forthcoming adverse impacts on the natural environment and social life due to the proposed project interventions. The EIA study was carried out in the priority project area of the Gorai River, the Gorai Corridor and the Southwest Impact Zone, and in areas which are directly or indirectly impacted by the GRRP (EGIS-II, 2000b). The change in the salinity-affected area was found to be quite significant under the Future-Without-Project (FWOP) condition, from 97.944 to 199.316 km2 (9 to 18% of the Direct Impact Area), under the low flow regime. Different project options (Table 1) were suggested by DHV-Haskoning and Associates, including river training works and dredging at one or more channels of the Ganges, near the Gorai off-take and in the Gorai River itself (DHV-Haskoning and Associates, 2000).
Detailed EIA of GRRP
Among the different options (A1 to A7), A1 involves the least intervention in physical, biological and social aspects; options A2 to A5 include structural interventions; and A6 and A7 include extensive dredging but may create environmental degradation and are not economically feasible. The Sundarbans shows much more negative impacts under the FWOP condition. Under the FWOP condition, 12% of the GRRP area will be under the low salinity zone, in which timber productivity will decline, and the rest of the area will be under the high salinity zone; fish habitat will deteriorate, an imbalance in prey-predator relationships will emerge, and breeding grounds for fresh- and saltwater fishes will be diminished or disappear, resulting in an imbalance in ecosystem functioning.
Under the Future-With-Project (FWIP) condition, 33% of the Sundarbans forest will come under the low salinity zone, which will benefit the floral and wildlife composition; the fish population and biodiversity will improve; flushing of lagoons will support good seasonal vegetation succession; the dry season water flow will increase by about 10%; land reclamation will be possible; socio-economic development is expected with increased agricultural and fishing activities and a 190% improvement in fresh water and shrimp farming; and greater availability of fresh water (ground and surface) will have positive impacts on the health of the people and the surroundings of the south-west region of Bangladesh. Among the negative impacts of the FWIP condition, land acquisition will result in the land loss of 500 households, a 32% reduction in shrimp farming, and increased vulnerability to riverbank erosion in other parts, but not at the structural sites.
The Environmental Management Plan (EMP) included a mitigation plan for the different project phases, a compensation plan (land acquisition, land requisition, erosion) and a contingency plan (pre-construction, construction and operational phases). Important steps of the mitigation plan included a stipulation that the minimum possible amount of land is to be used and that the affected people are to be compensated properly. The enhancement plan included measures that will ensure derivation of the intended benefits; its components covered a plantation program, restoration of the connectivity of rivulets with the Gorai River and excavation of fish migration routes.
The monitoring plan for the GRRP was prepared to monitor changes taking place due to the restoration of flow through the Gorai River. The EMP summarizes hydrologic, soil, agricultural, ecologic and social monitoring programs. The total DIA of the GRRP was divided into a number of management units based on the commonality of interests regarding the management of water resources (EGIS-II, 2000b).
RESULTS
Feature-based review results of the requirements, limitations and recommendations of the EIA of GRRP are summarized in the review matrix (Table 2). The results section contains the major inclusions in the EIA study; the recommendations are suggested considering the results from the effectiveness review framework and component-based checklist methods, and the ratings are assigned using Sadler's rating scale (Sadler, 1996).

Table 2. Review matrix of the EIA of GRRP (limitations, recommendations and Sadler ratings)

Scope of the study. Limitation: the scope and limitations of the study and the main consultants' names or organizations were not provided (ADB, 1993). Recommendation: the scope and limitations of the study and a brief review of similar projects should be included to analyze the overall project situation. Rating: A (competently performed).

Policy, legal and administrative framework. Limitation: the existing Environmental Conservation Act, 1995 was not highlighted (DoE, 1997). Recommendation: policies relevant to the important environmental components should be analyzed to illustrate the legal and administrative framework. Rating: C (satisfactory).

Approaches and methodologies. Limitation: a major discussion of the used methodologies was not included; an overview of the used guidelines was not given and major data gaps (primary and secondary) were not discussed (ADB, 1993). Recommendation: a brief discussion of the methodologies and used guidelines would be more informative and effective for decision-making. Rating: C (satisfactory).

Project options. Limitation: the project options did not include additional information (about land requirement, resources, labor force and investment cost for each option), and the hierarchy and schedule of the interventions were not provided in the detailed EIA report. Recommendation: a complete project description considering ancillary essential information should be added to give an overall idea about the project description and implementation. Rating: B (well performed).

Environmental and social baseline. Limitation: chemical and biological properties of the surface water were not provided in the baseline section, and interventions of the previous projects were not considered (WARPO, 2005). Recommendation: surface water chemical and biological properties should be measured to define the current water condition; the base condition and dataset of each component should be used to analyze the area possibly affected by the project interventions. Rating: A (thoroughly performed).

Alternative flow regimes. Limitation: uncertainty of the flow regime was not considered. Recommendation: flow regimes should be considered with the seasonal and annual fresh water availability; as the upstream water supply is uncertain due to the construction of the Farakka Barrage, alternative flow regimes should be considered with the existing water availability (Mirza, 1998). Rating: B (well performed).

Environmental and social impacts of different options. Limitation: critical evaluation and positive and negative impacts were not highlighted individually for every single option (among the seven different options); key data gaps and opportunities for environmental enhancement were not considered; residual impacts were not highlighted separately; cause and effect relationships between planned project activities and the environmental components were considered in few aspects. Recommendation: key data gaps should be incorporated in the section, and cause and effect relationships should be explained in all aspects of the environmental and social perspectives. Rating: B (well performed).

Environmental and social impacts of the selected option. Limitation: component-based residual impacts on the natural environment of the selected option were not highlighted, and the uncertainty of the selected option and its impacts on environmental and social components were not discussed separately (FPCO, 1992). Recommendation: all the residual, unavoidable and uncertain impacts should be considered. Rating: B (well performed).
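One way to see how the checklist drives the overall verdict is to hold the review matrix as structured data. The sketch below is a minimal Python illustration: the criterion labels and letter grades are transcribed from Table 2 above, while the tallying helper is our own convenience for cross-checking, not part of Sadler's (1996) methodology.

```python
from collections import Counter

# Sadler (1996)-style ratings assigned in Table 2 of this review.
REVIEW_MATRIX = {
    "Scope of the study": "A",                                        # competently performed
    "Policy, legal and administrative framework": "C",                # satisfactory
    "Approaches and methodologies": "C",                              # satisfactory
    "Project options": "B",                                           # well performed
    "Environmental and social baseline": "A",                         # thoroughly performed
    "Alternative flow regimes": "B",                                  # well performed
    "Environmental and social impacts of different options": "B",     # well performed
    "Environmental and social impacts of the selected option": "B",   # well performed
}

def grade_distribution(matrix):
    """Tally how many criteria fall under each letter grade."""
    return Counter(matrix.values())

if __name__ == "__main__":
    print(grade_distribution(REVIEW_MATRIX))  # e.g. Counter({'B': 4, 'A': 2, 'C': 2})
```

Encoding the matrix this way also makes it straightforward to reuse the same checklist, with different grades, for other sectoral EIAs.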
DISCUSSION
In the EIA of GRRP, beneficial and adverse impacts are explained under the Future-Without Project (FWOP) and Future-With Project (FWIP) conditions for each of the proposed project options emphasizing construction and operation phases (EGIS-II, 2000b). Risks of adverse impacts were evaluated properly with an impact matrix (Canter, 1996). The project has impacts on environmentally sensitive areas, endangered species and their habitats and on aesthetics. All these impacts are considered for different project options and specifically for the selected options.
Comparing the FWOP and FWIP conditions, the "Without Project" scenario is not recommended because the Gorai River needs extensive human interference to restore its water flow during the dry season. The GRRP is necessary due to its environmental and social benefits to the surrounding area and about 96.7% of the stakeholders of the GRRP area shared their positive opinions during the participatory sessions (EGIS-II, 2000b).
Seven different options in various locations were suggested in the feasibility study of GRRP (DHV-Haskoning and Associates, 2000). No similar project had been implemented at the proposed GRRP site in the recent past. The unavoidable adverse impacts on the natural environment (ecology and biodiversity) were discussed for the construction and operation phases in the studied EIA. Concerns expressed by the people likely to be affected were considered, and the impacts were reviewed to assess the exact impacts on environmental and social components. The detailed EIA of GRRP addressed the environmental and social concerns adequately in all significant stages of the EIA process (EGIS-II, 2000b).
The proposed mitigating measures were reasonably feasible and the EMP was found effective for proper decision-making. Important environmental and social monitoring programs (e.g., hydro-morphological, surface and groundwater, soil, ecological and social monitoring programs) were included in the EIA of GRRP. Residual impacts on the natural environment were folded into the definition of the mitigation plan, and unavoidable impacts on the environment (especially on ecology and biodiversity) were discussed within the contingency and compensation plans, but not in separate sections (FPCO, 1992).
Rating of EIA of Gorai River Restoration Project
Considering the key aspects with crucial environmental and social issues, the EIA of GRRP was graded as "B" (well performed), based on Sadler's (1996) rating scale.
The EIA of GRRP was well performed, no major task was left incomplete in the detailed EIA, and the studied EIA contains sufficient information for sound decision-making in project approval and implementation.
Lessons Learned
Impact area consideration in the EIA of GRRP was carried out considering the Gorai River corridor and the priority project area. The direct impact area was specified considering surface and groundwater, which is a good mark of proper scoping and bounding.
Consideration of Important Environmental Components (IECs) and Important Social Components (ISCs) in the environmental baseline facilitated the visualization of the overall FWOP and FWIP conditions and improved the understandability of the impact assessment. IECs included hydrological (water level, discharge, salinity), morphological (planform analysis, sediment transport), aquifer system, groundwater level, soil and agricultural (cropping pattern, crop production and damage, agricultural inputs), fisheries (both capture and culture) and ecological parameters. ISCs included demographic data, land distribution and agricultural arrangements, income and quality of life (education, health, nutrition, water supply and sanitation).
Seven different alternative project options considering three different flow regimes (high, medium and low) provided alternative coping scenarios for the changing water conditions (Table 1). Reliable secondary data sources, national databases for secondary data and up-to-date environmental (RS and GIS) and social techniques were used in the methodologies and gave an overview of the environmental and social impacts on the direct impact area.
Environmental and social impact statements considered both FWOP and FWIP conditions for the seven different options, including a separate discussion (environmental and social consequences and land requirements) for each of the components. The selected project option facilitates decision-making and improves the understanding of the FWOP and FWIP scenarios. The categorized environmental and social management plans (mitigation, contingency, compensation and enhancement) and the monitoring plan were structured considering all the important environmental and social components and provide a strong basis to determine future environmental and social scenarios.
The involvement of existing institutions and nine local-level management units was found useful for gathering local knowledge and opinion to make the assessment and the project sustainable. Risk analysis of environmental and social impacts was performed using an impact matrix. Extensive public participation was found in all the major steps of the environmental impact assessment of GRRP. The overall study was conducted following the National Water Policy (NWP) of Bangladesh; therefore, all the major environmental and social components and issues were to be considered in the detailed EIA (MoWR, 1999). Canter and Canty (1993) studied the significance of impact determination in many international water resources EIA experiences (especially American and European), but their focus was on identified impacts that can be mitigated and on planning baseline and monitoring programs. A highlight of the studied EIA was its significant impact determination: a hierarchy of significance determination criteria considering the geographic situation, project type and size, and environmental problems due to the project interventions was applied under the defined sections (Table 2).
This EIA review study was developed around the fifteen (15) evaluation criteria identified by Thompson (1990), who reviewed twenty-four (24) established EIA methodologies and suggested a coherent approach to EIA for significant impact determination. Based on the past success of integrated river basin management in China (Sun, 1994) and Africa (Scudder, 1994), Barrow (1998) proposed evaluation criteria for the integrated river basin management concept and its management in the UK, suggesting SEA, ecosystem auditing and the setting of a regional environmental management system; the outcomes of these studies were used as guidelines to ensure the robustness of the EIA review. Momtaz (2002) described the EIA process in Bangladesh and reviewed an EIA of drainage rehabilitation projects based on the EIA framework of Modak and Biswas (1999), where a qualitative analysis was done under broad categories; in this review study, by contrast, the rating scale was used for every single component in the checklist, and the important evaluation criteria considered by Momtaz (2002) were also included to make the study more applicable to other sectoral EIAs.
The established review framework was cross-checked with the criteria and issues discussed under two prominent European directives, the Water Framework Directive and the Strategic Environmental Assessment Directive (Carter and Howe, 2006). The requirements discussed in those directives (e.g., collection of baseline data, assessment of alternative options and policies, mitigation and monitoring programs, consultation and public participation) were thoroughly reviewed for the studied EIA and found adequate, with no missing information for proper decision-making.
The review framework and the decision-based checklist incorporated all the effectiveness dimensions used by Hirji and Ortolano (1991), who checked EIA effectiveness for four water resources interventions in Kenya, namely the Masinga Dam Project, Munyu Dam Project, Kiambere Dam and Tana Delta Irrigation Project. The Tana Delta Irrigation Project had some similarities with the GRRP, so the environmental and social issues discussed for it were thoroughly checked and compared with the GRRP to ascertain the review results. The EIA review criteria of Sadler (1996) used in this study were also checked against the speculations defined by Hirji and Ortolano (1991), following Ortolano et al. (1987), to ensure the effectiveness of this EIA review; all the environmental and social considerations of the studied EIA complied with the speculations defined by Ortolano et al. (1987). The review criteria under the checklist methods covered all the important environmental criteria suggested by Colley et al. (1999), whose EIA review package was used to analyze twenty-eight (28) EIA reports of South Africa (Sandham and Pretorius, 2008), to review the EIA qualities of Egypt (Badr et al., 2011), of eight European countries (Barker and Wood, 1999) and of the Scottish forest sector of the UK (Gray and Edward-Jones, 1999). The review results from the checklist methodologies and expert opinions and the results using the review framework developed by Colley et al. (1999) came up with the same result (A: well performed; Colley et al., 1999). The review results from this study are thus comparable to other established and experimented EIA review methodologies implemented in environmental sectors all over the world and showed similar results, which increases the applicability of using descriptive and decision-driven checklist review methodologies to review large-scale water resources EIA studies.
Consideration of environmental and social components under different flow regimes with the associated impacts, delineation of the proposed alternative options, detailed environmental management and monitoring plans and overall public participation made the EIA of GRRP a noteworthy EIA to follow for future Environmental and Social Impact Assessment studies in Bangladesh and for other areas under a similar geo-environmental context.
CONCLUSION
Key aspects of an EIA were reviewed to assess the effectiveness of the studied EIA. The four main themes of the EIA of GRRP (quality, content, environmental management plan and conclusions) were thoroughly reviewed in this study, and the detailed EIA was found well performed with no major tasks left incomplete. The EIA provided sufficient information for the relevant decision makers who are responsible for deciding whether or not to implement the GRRP. The analysis of the possible impacts was conducted using descriptive, decision-focused checklists and expert suggestions. An environmental cost-benefit analysis could have given more insight into impact prediction, assessment and decision-making and communicated the results more efficiently to the decision makers, but unfortunately it was not conducted in this study due to the confidentiality of the project documents.
The GRRP is highly recommended for implementation given the present flow condition of the Gorai River and its associated present and future environmental and social impacts. Furthermore, the EIA of GRRP is a model for other Environmental and Social Impact Assessments of the water resources sector in Bangladesh and for water resources development and management projects under similar geographical and environmental contexts. The review matrix developed under this study can be used and improved by integrating other EIA review methods to cross-check the effectiveness of EIA in achieving long-term environmental sustainability for water resources projects.
ACKNOWLEDGEMENT
The author would like to acknowledge the Center for Environmental and Geographic Information Services (CEGIS) for giving access to the EIA and Feasibility Study Reports of the Gorai River Restoration Project (GRRP). | 2019-04-13T13:05:37.551Z | 2014-04-19T00:00:00.000 | {
"year": 2014,
"sha1": "2d86305c46b9902d6478caa15943be253cd113bb",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajessp.2014.236.243",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "abdddad4a7bc939fa0b9c651ed025176a76f6f11",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
220422073 | pes2o/s2orc | v3-fos-license | Implant Surface Microtopography – A Review
Osseointegration is the direct contact between the living bone and the implant surface without interposed soft tissue at the microscopic level, and it is a critical process for implant stability and consequent short- and long-term clinical success. Surface conditions are particularly important as they play a major role in the osseointegration process. Several characteristics of the implant surface, such as surface composition, physicochemical properties, surface wettability, and roughness, influence the rate and quality of osseointegration. The goal of this review is to analyze the currently available methods for implant surface modification and also to discuss the future trends in surface bioengineering and nanotechnology for improving osseointegration and, consequently, the biological performance of implants.
Introduction
Scientifically based implant therapy emerged at the end of the 1970s, following ground-breaking studies with 10-year clinical results presented by a research group in Sweden directed by Dr. Branemark et al. [1,2] A published study showed that more than 220 implant brands are present globally, producing more than 2000 different types of implants. [3] Considering the variety of materials, surface treatments, shapes, lengths, and widths available, clinicians have a wide array to choose from during treatment planning, but which one to choose is still a question of concern. Following implantation, events take place both on the biological side and on the materials side. According to the "interface scenario" of Kasemo and Gold, [4] primary molecular events lead to secondary events that ultimately result in particular cell and tissue responses. Development of the interface is complex and involves numerous factors. [5] These include not only surgical technique but also implant-related factors, such as material, shape, topography, and surface chemistry. To alter the surface characteristics to improve implant performance, much attention has been focused on changes in surface roughness and chemistry.
Smooth, polished surfaces show poor mechanical integration with bone because, without surface irregularities, these surfaces provide no resistance to mechanical forces at the bone-implant interface. [6] Machine-finished implants, such as the Branemark System implants (Nobel Biocare, Zurich, Switzerland), have a substantial history of use; although they may appear macroscopically smooth, the implants have a low roughness, in the range of 0.5-1 μm. [7] Surface characteristics directly and indirectly influence the way molecules in the biological environment act, and this might ultimately control new tissue formation, as cell proliferation and differentiation both depend on the quality of early adhesion. [8] Many research efforts have been directed toward improving the bone-implant interface, with the aim of accelerating bone healing and improving bone anchorage to the implant. [9] The interface is improved physically by the architecture of the surface topography. At the micrometer level, the reasoning for this approach is that a rough surface presents a higher developed area than a smooth surface, and thus increases bone anchorage and reinforces the biomechanical interlocking of the bone with the implant, at least up to a certain level of roughness. At the nanometer level, the
Historical Background
The history of the evolution of dental implants is a rich and fascinating travelog through time. The first evidence of dental implants is attributed to the "Mayan" population roughly around 600 AD, who utilized pieces of shell as implants for the replacement of mandibular teeth. [11] In 1913, Dr. EJ Greenfield placed a 24-gauge hollow latticed cylinder of iridium-platinum soldered with 24-K gold as an artificial root. [12] In the 1940s, Formiggini and Zepponi developed a post-type endosseous implant. Dr. Raphael Chercheve from France added to the spiral design by creating burs to ease the insertion of the implant for a better fit. Various implant designs emerged in the 1960s. [13] In 1978, Dr. Branemark et al. presented a two-stage threaded titanium root-form implant. [14] Two other ground-breaking persons of modern implantology were Dr. Schroder and Dr. Straumann of Switzerland. [15] It is widely accepted that the surface properties of a dental implant play a major role in the osseointegration process and biomechanical fixation due to their influence on implant-tissue interactions, as they directly affect the behavior of the surrounding tissues. [15][16][17][18] The surface features become extremely important in the initial healing period of an implant, as they directly influence the dynamics of the bone-implant interface and consequently determine the short- and long-term success rate of the prosthetic treatment. The implant surface characteristics, including topography, chemistry, surface charge, and wettability, are likely to be of particular relevance to the chemical and biological interface processes in the early healing stages after implantation. [19] Surface modifications influence cell proliferation and differentiation, extracellular matrix synthesis, local production of factors, and even cell shape, gene expression, protein secretion, differentiation, and apoptosis. This will consequently affect the retention and proliferation of osteogenic cells at the implant site. [20]
Methods of Implant Surface Treatments
Dental implant surface structure, morphology, and chemistry can be changed in two ways: additive or subtractive. The primary function of these techniques is to modify the implant surface characteristics, for example by increasing bone formation to improve peri-implant osteogenesis, improving corrosion and wear resistance, and removing surface contaminants. The following methods are used to change the surface topography of the implant.
Machined surface
The first generation of dental implants, termed turned implants, had a relatively smooth surface after being manufactured. [21] This surface is usually, and inadequately, called smooth, since scanning electron microscopy analysis showed that these implants have grooves, ridges, and marks derived from the tools used in their manufacturing, which provide mechanical resistance through bone interlocking. [22] However, the main disadvantage regarding the morphology of non-treated implants is the fact that osteoblastic cells are prone to grow along the grooves existing on the surface, which in terms of clinical implications means a longer healing time is required. [23] The machined implant is turned, milled, and polished. It is minimally rough, with a surface area roughness (Sa) value of 0.3-1.0 μm. [24]

Sandblasting/grit blasting

Sandblasting is one of the most commonly used types of surface modification processes because of its simplicity, low cost, and ease of application. Microspheres with diameters in the range 10-540 μm are typically accelerated toward the surface to be treated, using a compressed air or nitrogen blow. The main effect of sandblasting is to change the morphology of the treated surface, substantially increasing its roughness. The value of this parameter depends on several factors, including the type of grit material used, the dimension of the spheres, the energy and angle at which they hit the surface, and the duration of the treatment. Typical values of the Ra roughness are in the range 0.3-3 μm, as compared to Ra values lower than 0.1 μm for polished Ti surfaces. A side effect of the sandblasting process is the contamination of the surface by the material released by the microspheres during their interaction with the surface. [25] The grit blasting technique usually is performed with particles of silica (sand), alumina, titanium dioxide, or resorbable bioceramics such as calcium phosphate (CaP). Titanium oxide (TiO2) particles with an average size of 25 μm can produce moderately rough surfaces in the 1-2 μm range on dental implants. [9]

Acid-etched surface

The immersion of a titanium dental implant in strong acids such as hydrochloric acid, sulfuric acid, nitric acid, and hydrogen fluoride is another method of surface modification, which produces micropits on titanium surfaces with sizes ranging from 0.5 to 2 μm. The resulting surface shows a homogenous roughness, increased active surface area, and improved adhesion of osteoblastic lineage cells. Dual acid-etching consists in the immersion of titanium implants for several minutes in a mixture of concentrated HCl and H2SO4 heated above 100°C to produce a micro-rough surface [18] that may enhance the osteoconductive process through the attachment of fibrin and osteogenic cells, resulting in bone formation directly on the surface of the implant.
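Since several of the passages above quantify texture with the profile roughness Ra and the areal roughness Sa, a short worked example may help. The sketch below implements the standard arithmetic-mean definitions (Ra over a line profile, Sa over a 2-D height map); the synthetic height data are purely illustrative and not measured surfaces.

```python
import numpy as np

def ra(profile_um):
    """Arithmetic mean roughness of a 1-D profile: mean |z - mean(z)|."""
    z = np.asarray(profile_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

def sa(height_map_um):
    """Areal counterpart of Ra over a 2-D height map (Sa)."""
    z = np.asarray(height_map_um, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Illustrative synthetic surfaces (heights in micrometers), not measured data.
rng = np.random.default_rng(0)
polished = rng.normal(0.0, 0.05, size=1000)      # expect Ra well below 0.1 um
grit_blasted = rng.normal(0.0, 1.5, size=1000)   # expect Ra on the order of 1 um
print(f"Ra, polished:     {ra(polished):.3f} um")
print(f"Ra, grit-blasted: {ra(grit_blasted):.3f} um")
```

The same averaging underlies both parameters; the practical difference is only whether heights are sampled along a line (Ra) or across an area (Sa).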
Grit blasting and acid etching (sandblasted and acid-etched [SLA])
Following grit blasting, the surface is submitted to acid-etching to further enhance the topographic profile of the surface and remove processing by-products. The advantages of this method include an increase in the total surface area of the implant, achieved through selective removal resulting from electrochemical differences in the surface topography. [18] This process should be carried out under controlled conditions, as over-etching decreases the surface topography and mechanical properties and may be detrimental to osseointegration.
Anodic oxidation
To alter the topography and composition of the surface oxide layer of the implants, micro- or nanoporous surfaces may also be produced by potentiostatic or galvanostatic anodization of titanium in strong acids, such as sulfuric acid, phosphoric acid, nitric acid, and hydrogen fluoride, at high current density or potential. [26] When strong acids are used in an electrolyte solution, the oxide layer will be dissolved along current convection lines and thickened in other regions, which creates micro- or nanopores on the titanium surface. [27] This electrochemical process results in an increased thickness and modified crystalline structure of the TiO2 layer. However, it is a complex procedure and depends on various parameters such as current density, concentration of acids, composition, and electrolyte temperature. [6]

Laser treatment

Studies showed that direct laser fabrication (DLF) implants have structures with complex geometry and could allow a better osteoconductive process. Evaluation of cytocompatibility and fibrin clot extension was carried out using osteoblasts and human blood to compare cell growth and fibrin-clot-covered areas on several implant surfaces. The DLF implant surface showed lower cell density compared to machined, smooth-textured grit-blasted, and acid-etched implant surfaces. Inorganic acid etching slightly improved the extension of human blood by increasing the micro-roughness. Moreover, laser metal sintered implants were better adapted to the elastic properties of bone. Thereby, DLF implants could decrease stress-shielding effects and enhance implant long-term success rates. [28]

Titanium plasma-spraying (TPS)

TPS consists of injecting titanium particles into a plasma torch at high temperature. These particles are projected onto the surface of the implants, where they condense and fuse together, forming a film about 30 μm thick and resulting in an average roughness of around 7 μm. [18] The TPS processing may increase the surface area of dental implants up to approximately 6 times the initial surface area and is dependent on implant geometry and processing variables, such as initial powder size, plasma temperature, and distance between the nozzle output and target. [29] One of the major concerns with plasma-sprayed coatings is the possible delamination of the coating from the surface of the titanium implant and failure at the implant-coating interface, despite the fact that the coating is well-attached to the bone tissue. A major risk with high surface roughness concerns difficulties in controlling peri-implantitis, because the intercommunication between porous regions facilitates migration of pathogens to inner bone areas, potentially compromising the success of the implant therapy. [30]

CaP coatings

CaP coatings, mainly composed of hydroxyapatite (HA), have been used as biocompatible, osteoconductive, and resorbable blasting materials. The idea behind the clinical use of HA is to use a compound with a similar chemical composition to the mineral phase of the bone, to avoid connective tissue encapsulation and promote peri-implant bone apposition. [31] For this matter, the CaP coatings disclose osteoconductive properties, allowing for the formation of bone on their surface by attachment, migration, differentiation, and proliferation of bone-forming cells.
The HA ceramic particles are heated to extremely high temperatures and deposited at a high velocity onto the metal surface, where they condense and fuse together, forming a 20-50 μm thick film. [32] To improve coatings, a number of techniques have been developed with the aim of producing thin-film nanostructured bio-ceramic coatings, such as sol-gel deposition, pulsed laser deposition, sputtering coating techniques, electrophoretic deposition, and ion-beam-assisted deposition. [33] Sol-gel coatings can be prepared using a dip coating or a spin coating process; the method is capable of improving chemical homogeneity in the resulting HA coating, as it allows for better control of the chemical composition and macrostructure of the coating. [6] Pulsed laser deposition results in titanium surface microstructures with greatly increased hardness, corrosion resistance, and a high degree of purity, with standard roughness and a thicker oxide layer. The ion-beam-assisted deposition technology permits the formation of thin films at atomic and molecular levels, as well as low-temperature syntheses utilizing ionic effects. [34,35] Recurrent drawbacks include controlling the calcium-phosphate layer composition, resorbability, weak adhesion to the substrates, the use of high temperatures, and the costs involved in the process. In fact, there are several reports of cracking and/or delamination of the coating due to the generation of large thermal stresses during processing, which may affect the quality and rate of peri-implant bone formation. [36]

Biomimetic CaP coatings

Biomimetic coatings involve the use of microstructures and functional domains of organismal tissue function to deposit CaP on medical devices to improve their biocompatibility. [37] This bioinspired method consists in the precipitation of CaP apatite crystals onto the dental implant surface through simulated body fluids under near-physiological, or biomimetic, conditions of temperature and pH. [37]
Turned/machined
Studies on animal models and clinical studies have suggested a positive correlation between the implant surface roughness and bone-implant contact (BIC). The success rate of machined (non-treated) implants has been reported to be lower when they are placed in bone of low density compared to bone of good quality. [38] Osteoblasts are rugophilic; hence, they tend to grow along the grooves existing on the implant surface. The disadvantage regarding the morphology of non-treated implants is that they provide limited mechanical resistance for bone interlocking. A study by Sennerby et al. [39] examined the healing process of round, screw-shaped machined Ti implants in cortical bone after 3-180 days. They reported an early cellular response, a relative absence of inflammatory cells and a rapid formation of woven bone from the endosteal surface.
Acid-etched
Acid-etched implant surfaces produce a microtexture rather than a macrotexture. The dual acid-etched surfaces improve the osteoconductive process through the attachment of fibrin and osteogenic cells, enhancing bone formation directly on the implant surface. [40] When a higher temperature is used with an acid-etching method, it produces a homogeneous microporous surface with increased cell adhesion and greater BIC compared to TPS surfaces. [41] In in vitro reports on the cell response to hydrophilic SLA, osteoblast behavior was affected by altered protein absorption that directly induced differentiation via the assembly of focal adhesion sites (FAs) and intracellular signaling cascade activation. The FAs are important sites of signaling that control spreading, migration, cytoskeletal organization, cell cycle progression, gene expression, and matrix fibrillogenesis. [42]

Laser sintering

Laser-sintered Ti implants showed high purity, with sufficient roughness for good osseointegration compared to other treatments. Biological evaluation of the role of Ti ablation and chemical properties showed the ability of its grooved surface to orient osteoblast attachment and control the direction of ingrowth. [43]

CaP coating

Coating of dental Ti implants with CaP ceramic is commonly used to change the chemical composition of the implant surface.
After implant placement, the CaP particles are released into the peri-implant region, raising the saturation of body fluids and leading to the precipitation of a biological apatite onto the implant surface. Endogenous proteins present in this layer of biological apatite act as a matrix for osteogenic cell attachment and growth. Integrins mediate the cellular interactions with the apatite layer and its proteins on the implant surface. The signaling pathways through integrins can regulate bone-forming cell activity. The bone-stimulating action of CaP coatings at the implant surface enhances early osseointegration compared to non-CaP-coated dental implants. [44,45]

HA coating

Several methods have been used for applying HA coatings onto metals, and each method can result in different material properties. Plasma spraying, which forms a coating thickness of 40-50 μm, is the most commonly used technique for coating Ti implants. A synthetic form of HA has a similar chemical composition to the mineral matrix of bone. [46] HA can form a direct and strong bone-to-implant bond. After implant placement, HA acts as a bioactive material where a sequence of events results in precipitation of a CaP-rich layer on the implant surface. The CaP-incorporated layer will develop into a biologically equivalent HA that will be incorporated in the developing bone through octacalcium phosphate. [47] In many preclinical and clinical studies, the calcium-to-phosphate ratio, phase composition and crystal structure are used as chemical parameters to optimize the performance of CaP coatings. [17] HA coatings showed a persistent, significant improvement in the osteoconductivity of metallic implants.
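Because the calcium-to-phosphate ratio is cited above as a key chemical parameter of CaP coatings, a worked example may be useful. The sketch below computes the molar and weight Ca/P ratios of stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2, from standard atomic masses; the code is illustrative and not part of the cited studies.

```python
# Molar and weight Ca/P ratios for stoichiometric hydroxyapatite, Ca10(PO4)6(OH)2.
CA_ATOMIC_MASS = 40.078  # g/mol
P_ATOMIC_MASS = 30.974   # g/mol

ca_atoms, p_atoms = 10, 6                 # atoms per formula unit of HA
molar_ratio = ca_atoms / p_atoms          # = 1.67, the value usually quoted for HA
weight_ratio = (ca_atoms * CA_ATOMIC_MASS) / (p_atoms * P_ATOMIC_MASS)

print(f"Ca/P molar ratio:  {molar_ratio:.2f}")   # 1.67
print(f"Ca/P weight ratio: {weight_ratio:.2f}")  # ~2.16
```

Deviations of a deposited coating from this stoichiometric ratio are one reason phase composition and crystal structure must be characterized alongside it.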
Growth factors coating
Implant surfaces can be coated with biomolecules, such as bio-adhesives or growth factors, to promote osseointegration. The arginylglycylaspartic acid (RGD) sequence from fibronectin is the most commonly used bio-adhesive; it binds to adhesion receptors and promotes cell adhesion. RGD-functionalized, tissue-engineered constructs can improve early bone ingrowth and matrix mineralization in vivo. [48,49] However, BIC and osteoblast differentiation were not improved by RGD application to Ti implant surfaces. This might be due to the absence of crucial modulatory domains from the native fibronectin; the RGD signals can also be lost through non-specific adsorption of plasma proteins and interactions with inflammatory components. On the other hand, Germanier et al. [50] compared sandblasted implant surfaces that were either RGD peptide polymer coated or uncoated and placed in the maxillae of minipigs. They concluded that RGD coating might enhance bone apposition at the early stages of bone regeneration. Platelet-derived growth factor (PDGF) and insulin-like growth factor (IGF) were used in combination around implants, where they produced 2-3 times more new bone within 7 days compared to controls. However, after 21 days, in spite of a large volume of new bone formed around dental implants treated with growth factors, there was no significant difference between growth factor and control sites. Thus, the use of PDGF/IGF may only accelerate the process of bone formation. [51]

Electrochemical anodization

Electrochemical anodization can produce a mixed nano/submicron-scale TiO2 network layer (lateral pore size: 20-160 nm) on a polished Ti surface in 10 min. This TiO2 network layer improved whole blood coagulation and human bone marrow stem cell adhesion on a Ti dental implant surface. [52] The galvanic anodization of Ti in strong acids produces a thick layer of TiO2. Burgos et al. compared implant surfaces manufactured by anodic oxidation to turned surfaces in a rabbit model. BIC values were 20%, 23% and 46% around the oxidized surfaces, with a different osseointegration pattern, while they were 15%, 11% and 26% around the machined surfaces, after 7, 14 and 28 days, respectively. Huang et al. studied oxidized implant surfaces placed in the posterior maxilla. After 16 weeks, the recorded mean BIC was 74%. They stated that this oxidized surface showed a considerable osteoconductive potential, resulting in a high level of implant osseointegration in Type IV bone. [53]

Fluoride treatment

Ti can react with fluoride (F) ions, forming soluble TiF4, which enhances osseointegration of dental implants. The analysis of human mesenchymal cells showed no difference in cell attachment between the fluoride-treated and control grit-blasted implants. Fluoridated implants also sustained greater push-out forces and showed higher removal torque than control implants. In addition, fluoride treatment increased osteoblast differentiation, represented by increased expression of Cbfa1, osterix, and bone sialoprotein. [54]

Biologically active drugs

Bisphosphonate-coated Ti implants improved local bone density in the peri-implant region, due to an antiresorptive effect limited to the implant site. Du et al. [55] studied the effect of simvastatin, by oral administration, on implant osseointegration in osteoporotic rats and showed that it can enhance implant osseointegration. Tetracycline-HCl has the ability to kill microorganisms that may contaminate the implant surface and can remove the smear layer and endotoxins from the implant surface.
In addition, it prevented the action of collagenase, increased cell proliferation, attachment and bone healing, and improved blood clot attachment and retention on the implant surface during the early phase of healing, thus enhancing osseointegration. [56]

TPS

Al-Nawas et al. [57] compared different types of macro- and microstructured implant surfaces in dogs. After 8 weeks of healing and 3 months of loading, higher BIC values of TPS rough surfaces and blasted/acid-etched implants were reported in comparison to machined ones. The difference between the BIC values of the TPS and the blasted/acid-etched implants was not significant. An in vivo study [58] that evaluated TPS versus plasma-sprayed HA implants showed that the bone contact length for HA implants was significantly higher than for TPS at 12 weeks after implant placement and 1 year of loading.
Alkali treatment
NaOH treatment includes the formation of a bioactive sodium titanate layer on orthopedic Ti surfaces. Following immersion in simulated body fluid (SBF), bone-like apatite is deposited onto this layer. Sodium ions in the titanate layer are exchanged with H3O+ ions from the SBF, forming Ti-OH groups, which combine with Ca2+ ions to produce amorphous calcium titanate. The reaction with phosphate polyatomic ions forms amorphous CaP, which transforms into bone-like apatite. Alkali treatment and biomimetic precipitation of CaP coatings are techniques that can be used to coat the interior of porous metallic surfaces. [59,60]
Future Trends and Conclusion
Microtopography of the implant surface in contact with the biologic tissues is recognized to play a fundamental role in the healing process, but the exact mechanism underlying the osseointegration process remains poorly understood. Within the time frame of the present review, there has been a number of dental implants commercially available with a wide variety of surface characteristics, both in terms of structural and chemical properties. Most of the in vivo and in vitro studies examined several novel dental implant surfaces, mostly consisting of modifications of the commercially available ones. One of the main drawbacks in dental implant surface research is the empirical nature of the manufacturing process, as there is a lack of consensus on a uniform standard for obtaining controlled topographies. For this matter, several in vivo and in vitro studies are required, but they are often performed without a hierarchical approach and standardized parameters, using different surfaces, cell populations, or animal models.

There is an urgent need for more fundamental research in this area that would normalize and combine both in vitro and in vivo studies, ultimately leading to the appropriate clinical application. A large number of studies compare a specific rough surface with machined or turned surfaces as a control group. Since it is widely acknowledged that rough surfaces have better performance than machined or turned surfaces, the results typically tend to be positive. Therefore, the inclusion of a widely accepted positive control would be beneficial to evaluate the performance of a certain surface in a more realistic way. Clinical trials comparing different commercially available implant surfaces under similar clinical situations are rarely disclosed, making the outcome assessment between different surfaces quite difficult. | 2020-07-09T09:02:38.737Z | 2020-06-01T00:00:00.000 | {
There is an urgent need for more fundamental research in this area that would normalize and combine both in vitro and in vivo studies ultimately leading to the appropriate clinical application. A large amount of studies compare a specific rough surface with machined or turned surfaces as a control group. Since it is widely acknowledge that rough surfaces have better performance than machined or turned surfaces, the results have typically the tendency to be positive. Therefore, the inclusion of a widely accepted positive control would be beneficial to evaluate the performance of a certain surface in a more realistic way. Clinical trials comparing different commercially available implant surfaces under similar clinical situations are rarely disclosed, making the outcome assessment between different surfaces quite difficult. | 2020-07-09T09:02:38.737Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "877b63c39376c054ee2f996cb7e19e0916c69ace",
"oa_license": null,
"oa_url": "https://apjhs.com/index.php/apjhs/article/download/1050/980/1837",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f2ff85016862f46078cd358add4d7b437fef5bdd",
"s2fieldsofstudy": [
"Medicine",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
40186649 | pes2o/s2orc | v3-fos-license | Carcinoma of the urachus and the role of PET-CT in disease recurrence – case report
Urachal carcinoma accounts for 0.5%-2% of all bladder cancers. These tumours are commonly seen in patients 40-70 years of age, two-thirds of whom are men. Most urachal cancers are adenocarcinomas, but other histological subtypes are described. Patients with urachal cancer may have no symptoms until the late stage. The most common clinical signs are a suprapubic mass, haematuria and dysuria. The recommended treatment is primarily surgical, with extended partial cystectomy, en bloc excision of the urachal mass, urachal tract and umbilicus, and pelvic lymph node dissection. While some have advocated radical cystectomy as definitive therapy, this procedure can usually be reserved for larger tumours. Radiation and chemotherapy are ineffective against urachal carcinoma. The aim of this article is to present a patient with urachal carcinoma and describe the treatment, including the special role of PET-CT.
Introduction
The estimated incidence of urachal carcinoma in the general population is reported to be 1 in 5 million individuals. Most frequently it concerns regions of endemic Schistosoma haematobium infection [1], with a peak incidence at 40-70 years of age, two thirds of patients being men [2]. It is exceedingly rare, with a reported incidence of 0.5% to 2% of all bladder tumours. Urachal carcinoma may arise from any of the segments of persisting urachal remnants and manifests with no clinical symptoms for a long time. The urachus is formed around the 9th week of gestational age, adjacent to the base of the bladder; it lies between the umbilical ligaments and extends to the region of the umbilicus, measuring 3-10 cm in length and 8-10 mm in diameter. It normally involutes at about the 32nd week of gestational age, ultimately becoming a fibrous cord. Urachal remnants result in various anomalies both in children and adults [2]. Histopathologically, most urachal carcinomas are adenocarcinomas (accounting for 90%), originating from mucosal cells of the organ, similar to those found in colorectal cancer. In 75% of cases cancer cells are observed to be mucus-secreting [1]. In most cases cancer infiltration spreads along the bladder wall and patients suffer from abdominal pain, haematuria and dysuria. On physical examination a tumour palpable above the pubic symphysis is observed. In 50-70% of cases calcifications are found on ultrasound and in CT scanning of the abdomen.
Over the last years the use of positron emission tomography-computed tomography (PET-CT) in various diseases has been evaluated. Completing standard examinations with PET-CT allows more accurate determination of the cancer stage and may result in a change of treatment [4][5][6]. Up to the present there have been no data on the possible practical use of PET-CT in diagnosing urachal cancer. Equally interesting is its use in the detection of disease recurrence and/or confirmation of distant metastases.
In clinical practice several types of disease stage classification are used. One of the most widely used is the system developed by Sheldon, alongside the Ontario system (Tables I and II). It is assumed that stage T1-T3 patients may potentially be cured. In patients with advanced disease (T4) this is not possible, even with very aggressive treatment [7]. The recommended treatment is major surgery: wide en bloc resection of the tumour with the urachus and the adjacent organs: the bladder, bilateral pelvic lymph nodes, brown tissue, surrounding ligaments, the sigmoid, the umbilicus and part of the anterior abdominal wall [1]. It is a wide, mutilating surgery, hard for patients to accept. Moreover, in 1993 Henly et al. demonstrated that survival rates were not different between the group with partial resection of the bladder and the patients who underwent radical cystectomy, as long as the surgical margins were tumour free [8]. That is why the approach worldwide is to perform minimally invasive surgery such as laparoscopic removal of the tumour, the bladder dome, the peritoneum, the umbilical ligaments and the umbilicus, but only in cases of locally advanced cancer.
The attempts at combining surgery with radiotherapy do not deliver the anticipated results. Similarly, minimal or no benefit has been reported for systemic therapy. Currently there is no chemotherapy regimen proven effective for urachal carcinoma. In case of distant metastases the applied systemic therapy gives a median survival time of 20 months [1,9].
As the disease predominantly concerns young people and the prognosis in advanced stages is bad, there have been attempts to combine administration of cytostatics with radiotherapy or surgical removal of metastatic foci. There are publications in which combination therapy positively influenced the survival rate. On the other hand, the histopathological type occurring in urachal carcinoma seems to be unresponsive to radiation and systemic therapy [10].
The objective of this paper is to present a patient with urachal carcinoma and share insights on the diagnostic and therapeutic procedures which have been followed in his case. Apparently, it is one of the very few studies in which PET has been used for the assessment of disease recurrence.
Case report
A 27-year-old male diagnosed with urachal carcinoma presented to the Clinic of Radiotherapy at F. Łukaszczyk Oncology Centre in Bydgoszcz in May 2008, to be qualified for irradiation.
The disease started in November 2006 with abdominal pain and haematuria. After the initial diagnostic tests in Szczecin he was diagnosed with bladder cancer in stage T2NxMo. Due to the histopathological diagnosis (Infiltratio carcinomatosa - carcinoma (G3)), he underwent partial cystectomy. The result of the histopathological examination was carcinoma mucinosum. The patient was discharged with a final diagnosis of urachal cancer and a recommendation of close monitoring in the urological clinic.
In 2008 a follow-up ultrasound examination detected the presence of a hyperplastic change in the postoperative cavity after the previously removed urachus. A partial resection of the bladder wall with the tumour was performed. In the postoperative histopathological examination the patient was diagnosed with cancer recurrence: adenocarcinoma mucinosum in stage T3N0M0. After surgery the patient reported to the Oncology Centre in Bydgoszcz in order to qualify for systemic therapy and radiotherapy. Ultrasound examination of the abdominal cavity revealed a hypoechogenic area with a vague outline, sized 30 × 50 × 30 mm, in the postoperative scar, suggesting a keloid. After chemotherapeutic consultation the patient began systemic treatment KG (CBDCA 600 mg, 2000 mg Gemzar). Next, after consultation in the Clinic of Radiotherapy, a decision was taken to perform diagnostic tests and possibly qualify the patient for radiotherapy, due to his young age and the previously performed partial resection of the bladder with confirmed relapse.
The follow-up imaging examinations found a lesion measuring 34 × 53 × 42 mm to the right of the bladder, at the height of the common iliac artery. This area modelled the urinary bladder and connected with the postoperative scar at the front. In the postoperative scar a lesion with uneven contours, measuring 30 × 30 mm, was depicted. The whole image might have corresponded to tumour recurrence or a thickened postoperative scar. In connection with the vague image in tomography, it was decided to perform positron emission tomography (PET-CT), using labelled fluorodeoxyglucose, in order to confirm or exclude active tumour. The PET-CT, after the administration of 600 MBq of (18F) 2FDG, revealed a tissue mass measuring 32 × 59 × 41 mm, lying between the ilium and the bladder, showing increased radiotracer uptake on the edge of the up to 10 mm thick lesion, with the exception of the area on the side of the bladder; glucose metabolism evaluated with SUV was 4.1. On the abdominal side the lesion connected with the rectus abdominis and with the thickening of tissue a few centimetres above the pubic symphysis in the subcutaneous tissue, measuring 16 × 31 × 42 mm; this area showed a glucose metabolism SUV of up to 2.9. A similar level of glucose metabolism was visible in the thickening in the subcutaneous layers of tissue, just below the upper pole of the postoperative scar. Moreover, increased radiotracer uptake was observed in seminal vesicles on the right as well as in the prostate gland. The whole image, in its clinical interpretation, spoke in favour of an active malignancy (Fig. 1, 2, 3). After three courses of chemotherapy the patient was withdrawn from systemic treatment due to disease progression depicted in imaging examinations and PET-CT. Given the patient's young age, the previous treatment for urachal cancer recurrence and a suspicion of another relapse, after the medical consultation the patient underwent surgery. During the surgery the tumour located in the right pelvis was excised along with the lower pole of the scar from the previous surgery. In histopathological examination urachal cancer recurrence was not confirmed (Hp: granulatio et inflammatio, focalis cum fibrosis). Currently, the patient is feeling well and is under close observation of the Clinic of Radiotherapy at F. Łukaszczyk Oncology Centre in Bydgoszcz. Periodic imaging examinations and urine cytology are performed with no evidence of recurrence.
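For readers unfamiliar with the standardized uptake value (SUV) figures quoted above (4.1 and up to 2.9), the following minimal Python sketch shows the standard body-weight SUV calculation, including the usual decay correction of the injected 18F dose to scan time. The numeric inputs in the example are illustrative placeholders, not values taken from this patient's records.

```python
F18_HALF_LIFE_MIN = 109.77  # physical half-life of fluorine-18, in minutes

def suv_bw(tissue_kbq_per_ml, injected_mbq, body_weight_kg, minutes_since_injection=0.0):
    """Body-weight SUV: tissue activity concentration divided by
    (decay-corrected injected dose / body weight), assuming 1 g of tissue ~ 1 mL."""
    decayed_dose_kbq = injected_mbq * 1000.0 * 0.5 ** (minutes_since_injection / F18_HALF_LIFE_MIN)
    return tissue_kbq_per_ml / (decayed_dose_kbq / (body_weight_kg * 1000.0))

# Illustrative numbers only: a 600 MBq injection (as in this case), a hypothetical
# 80 kg patient, uptake of 21 kBq/mL measured 60 min post-injection.
print(round(suv_bw(21.0, 600.0, 80.0, minutes_since_injection=60.0), 2))
```

The calculation makes explicit why SUV is a dimensionless ratio: an SUV of 1 would mean the tracer distributed uniformly through the body mass.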
Discussion
Urachal cancer is a rare disease; hence the published works represent either small groups of patients or individual cases, on the basis of which it is difficult to draw the proper conclusions. The available literature suggests that major surgery is the best form of treatment. The histological type of urachal cancer seems to be not sensitive to systemic therapy and radiotherapy [10]. In such cases, one should try to resect all tissues and organs derived from the urachus, but such a decision is very difficult for the whole therapeutic team as well as for the patient. The extensive and highly mutilating surgery still arouses great controversy, especially when it concerns young people, forcing the team to search for new forms of treatment, which in the case of this disease are doomed to failure. In order to consider the need for invasive treatment in our case it seems necessary to recall that the prognosis for patients with urachal cancer is bad. Disease recurrence is recorded in 51% of patients after partial cystectomy, and the overall 5-year survival ranges between 11% and 55% [1]. Survival is primarily dependent on the stage of the disease, but also on the size of the margin in postoperative histopathological examination (Table I).
The lack of optimal primary treatment in our patient meant the necessity for reoperation, but taking this decision was very difficult. We do not deny that the suspicion of disease recurrence helped qualify the patient for the revision of the abdominal cavity. On the one hand we are very pleased that it did not confirm an active neoplastic process. On the other hand, the presented case shows how difficult it might be to correctly interpret the diagnostic tests, which may result in qualifying the patient for unnecessary surgical interventions. In the patient described it was necessary to open the abdominal cavity. Although the bladder was not removed, it is still a very controversial decision, but the scar tissue in which cancer cells may have remained, especially in the area around the bladder and the umbilicus, was excised.

Fig. 1. CT scan of abdomen and pelvis in patient with urachal carcinoma
Fig. 2. High uptake of FDG in scar of abdomen
Fig. 3. High uptake of FDG between bladder and iliac bone
The death rate from recurrent urachal carcinoma is high, even up to 67%; therefore performing radical surgery is of great importance. Herr et al. [7] performed extensive surgery in 50 patients diagnosed with urachal cancer. Survival was 70%, with a median follow-up of 5 years. Worse prognosis is correlated with advancing stages of the disease and metastasis to lymph nodes. In patients with stages I-IIIA the 5-year survival was 93%, falling to 41% in patients with advanced disease. Survival of patients without lymph node metastases compared to patients with positive nodes was 78% vs. 25%. Distant metastasis was confirmed in 32% of patients, with a mean survival of 22 months from the primary surgery. Median survival after confirmation of metastasis was 17 months. The author emphasizes that the predictors of outcome are excision of the urachus and the umbilicus [7].
Urachal carcinoma may metastasize to the liver and the lungs, less frequently to the brain [9]. Kaido et al. [9] presented a patient with urachal cancer in whom metastasis to the lungs was found and surgically removed 2 years after partial resection of the bladder and chemotherapy. Unfortunately, three years later MRI revealed a 3 cm metastasis in the frontal lobe of the brain, and four smaller ones in the left cerebellar hemisphere and left temporal lobe. The biggest lesion was surgically excised, and the others were removed with a Gamma Knife. On MRI 3 months later, there was evidence of recurrence in the postoperative cavity and 10 metastatic foci throughout the brain. The surgical excision of the lesion in the brain was repeated, and the other foci were treated using a Gamma Knife. Unfortunately, this was without effect, and three months later the patient died. This shows how aggressive this disease is and that we should seek to perform extensive en bloc resection at all costs in order to increase the chances of a cure [9].
The use of PET-CT in urachal cancer seems to be interesting. Performing this test before surgical procedures may help decide on the extent of primary surgery. In the case of our patient such evaluation was not possible, since the patient came to our centre after two operations. The PET-CT performed 3 months after surgery revealed increased glucose uptake, which might have indicated an active neoplastic process. Based on the result of histopathology, we conclude that the increased uptake of labelled glucose was associated with postoperative scar healing and/or a focus of fibrosis.
In conclusion, we wish to emphasise the need for careful preliminary analysis of urachal carcinoma patients and the pivotal role of radical surgery in the successful treatment of this disease. The usefulness of positron emission tomography in assessing clinical stage and diagnosing local recurrence and/or distant metastases in urachal cancer remains an open issue. Perhaps the lack of a baseline PET-CT prior to treatment significantly reduced the usefulness of this test in our patient. It is likely that the use of radiopharmaceuticals other than FDG will have a practical use in this disease.
Table 1. Staging system and outcome of patients with urachal adenocarcinoma

Sheldon et al. (1994): I: no invasion beyond urachal mucosa; II: invasion confined to urachus; IIIA: local extension into the bladder; IIIB: local invasion into abdominal wall; IIIC: local invasion into peritoneum; IIID: local invasion into viscera other than the bladder; IVA: metastasis to regional lymph nodes; IVB: metastasis to distant sites.

Nakanishi et al. (1996), with 5-year survival (%): A: invading into the bladder but not abdominal wall, peritoneum, or other viscera (58%); B: invading abdominal wall, peritoneum or viscera other than bladder (42%); C: metastasis to regional lymph nodes or distant sites (0%).

Table 2. Ontario staging system: T1: tumour invasion confined to urachus; T2: local extension into the bladder; T3: local invasion into tela adiposa; T4: invasion into peritoneum, abdominal wall, and other viscera. | 2017-08-15T02:44:20.023Z | 2011-04-29T00:00:00.000 | {
"year": 2011,
"sha1": "b5c9a6382b19405dbe54e58576c3c56c414d0a71",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.termedia.pl/Journal/-3/pdf-16604-10?filename=Carcinoma%20of%20the%20urachus.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b5c9a6382b19405dbe54e58576c3c56c414d0a71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17816961 | pes2o/s2orc | v3-fos-license | Mouse DRG Cell Line with Properties of Nociceptors
In vitro cell lines from DRG neurons aid drug discovery because they can be used for early stage, high-throughput screens for drugs targeting pain pathways, with minimal dependence on animals. We have established a conditionally immortal DRG cell line from the Immortomouse. Using immunocytochemistry, RT-PCR and calcium microfluorimetry, we demonstrate that the cell line MED17.11 expresses markers of cells committed to the sensory neuron lineage. Within a few hours under differentiating conditions, MED17.11 cells extend processes and, following seven days of differentiation, express markers of more mature DRG neurons, such as NaV1.7 and Piezo2. However, at least at this time-point, the nociceptive marker NaV1.8 is not expressed, but the cells respond to compounds known to excite nociceptors, including the TRPV1 agonist capsaicin, the purinergic receptor agonist ATP and the voltage gated sodium channel agonist, veratridine. Robust calcium transients are observed in the presence of the inflammatory mediators bradykinin, histamine and norepinephrine. MED17.11 cells have the potential to replace or reduce the use of primary DRG culture in sensory, pain and developmental research by providing a simple model to study acute nociception, neurite outgrowth and the developmental specification of DRG neurons.
Introduction
The cell bodies of sensory neurons of the peripheral nervous system reside in the cranial and dorsal root ganglia (DRG). Sensory neuron function is altered in response to the endogenous release of inflammatory mediators in myriad pathological conditions [1]. DRG neurons in primary culture have been used to study the molecular mechanisms of acute nociception and peripheral sensitisation as well as to screen for drugs targeting these pathways. The drawbacks of primary culture include limited material requiring large numbers of animals to be sacrificed, labour intensive isolation procedures, poor transfection efficiencies, heterogeneity of cytochemical phenotypes and the presence of non-neuronal cells that confound "omic" studies. Several DRG cell lines have been generated, including the rat DRG/mouse neuroblastoma hybrid cell lines [2,3] and a rat embryonic DRG cell line [4]. But given the large number of DRG neuron subpopulations, more cell lines are required to represent the diversity of phenotypes. Moreover since the development of transgenic and gene knockout technology, there has been an increased reliance on murine models to study the mechanisms of acute and pathological pain and peripheral nervous system development. To date, no murine DRG cell lines exist to complement such studies. Recently, cells with nociceptive properties were derived from human pluripotent stem cells (hPSCs) using combined small molecule inhibition [5,6]. However, the reported protocols involve painstaking maintenance and manipulation of stem cells and require up to seven weeks for the emergence of some nociceptive markers. For these reasons, we set out to create mouse DRG cell lines with nociceptive properties and to develop an efficient differentiation protocol. We used the Immortomouse [7] to clone immortalised sensory neuron progenitors. The Immortomouse expresses a thermolabile simian virus 40 large T antigen tsA58. The transgene is under the control of the Major Histocompatibility Complex (MHC) H-2K b Class I promoter, which is basally active in many tissues and can be further induced by interferon. At 33°C, the large T antigen tsA58 is stable, but at 39°C the protein is rendered non-functional. The Immortomouse has been used to create many conditional cell lines from mitotic cells. To increase the likelihood of isolating neurons of the nociceptive lineage, we isolated several lines from embryonic day E12.5 DRG, a developmental stage when proprioceptive and low threshold mechanoreceptive-lineage neurons have terminally differentiated but nociceptive lineage neurons are still dividing. Here we present the derivation and characterisation of the Mouse Embryonic DRG (MED) cell line, MED17.11. These cells express markers of committed sensory neuron progenitors. However, when cultured in our differentiation medium, they express markers of maturing DRG neurons including numerous ion channels. We also observed functional responses to noxious compounds and inflammatory mediators. Therefore the MED17.11 cells may provide a simple model to study both acute nociception, developmental specification of DRG neurons and potentially the mechanisms of peripheral sensitisation.
Animals and DRG culture
A small Immortomouse colony was maintained by the University of Sheffield Biological Services Unit. Breeding and maintenance of the mouse colony were carried out under Home Office Project License PPL 40/3430. Mice of all ages were sacrificed using a humane method as listed in Schedule 1 of the Animals (Scientific Procedures) Act 1986. E12.5 embryos from the H2kbtsA58 Immortomouse were killed by immersion in ice-cold PBS followed by decapitation. DRG from all vertebral levels were collected into PBS on ice, and tail tips were harvested for genotyping for the presence of the temperature-sensitive SV40 large T antigen (TSA58-sense 5'-TGCCAGGTGGGTTAAAGGAGCATGA-3' and TSA58-antisense 5'-AGCCAAGCAACT CCAGCCATCCA-3'). The DRG were digested for 40 minutes in a mixture of 0.6 mg/ml collagenase type IX (Sigma) and 1 mg/ml Dispase II (Gibco) in enzyme incubation solution [8] at 37°C in 5% CO2 and 95% air. Following trituration, the DRG were resuspended in medium permissive for T antigen expression. This comprised a basal medium of DMEM/F12 with stable glutamine, penicillin and streptomycin and 10% FBS (all PAA), supplemented with interferon gamma (100 units/ml during initial establishment of the DRG cell lines, later reduced to 50 units/ml); chick embryonic extract at 0.5% (Sera Labs) was added to augment the proliferation rate. The cells were cultured at 33°C for a few passages before cloning.
Establishment of DRG Cell Lines
Individual clones were isolated by cloning rings. We observed that cloning by limited dilution in multi-well plates led to differentiation and death of the cells. Primary clones were subjected to an initial screen to select for βIII-tubulin (Tuj1) immunoreactivity. These primary clones were subcloned until homogeneity of shape and of Tuj1 expression was observed. The cell lines were routinely cultured in permissive medium (proliferating conditions) and passaged weekly. We selected for clones with robust growth over more than 100 passages.
Differentiation in Non-Permissive Conditions
Before all experiments, the cells were plated onto polyornithine-coated glass coverslips or tissue culture-treated plastic and transferred overnight to medium non-permissive for T-antigen expression (medium as above, but without interferon gamma and chick embryonic extract). The large T antigen protein is rendered non-functional at 39°C [7]. However, this temperature is close to the thermal activation threshold of the heat and capsaicin receptor TRPV1 (~42°C), which is expressed early in developing DRG [9]; therefore, to avoid activation of the ion channel, the cells were maintained at 37°C.
Immunocytochemistry
Cells were washed in ice-cold phosphate buffered saline (PBS) and fixed in 4% paraformaldehyde for 10 minutes. Paraformaldehyde autofluorescence was quenched by incubation with 50 mM ammonium chloride for 20 minutes. The cells were permeabilised for 15 minutes in PBS with 0.1% Triton X-100, washed and placed in blocking solution for one hour at room temperature prior to overnight incubation with primary antibodies at 4°C. Following three washing steps in PBS, the cells were incubated with DAPI (1:5000) and Alexa Fluor anti-mouse or anti-rabbit secondary antibodies for two hours at room temperature (1:2000 each, Life Technologies). After another three washing steps, coverslips were mounted on slides using Prolong Gold (Life Technologies). Antibodies were verified in adult primary DRG in culture. We used monoclonal antibodies for Tuj1
RT-PCR
RNA isolation was performed using TRI-reagent (Sigma) and Direct-zol RNA mini-prep (Zymo Research) according to the manufacturers' instructions. Primers were designed to flank exon junctions and to span two exons using NCBI Primer BLAST (Table 1). Reverse transcription (2000 ng input RNA) was performed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems) with random hexamers. cDNA was amplified by PCR using Go Taq polymerase with 5x Green buffer (Promega). The synthesised cDNA (50 ng, calculated from the original RNA input) was used as a template for RT-PCR, using the primers summarised in Table 1. Following initial denaturation at 95°C for 10 minutes, the cDNA was amplified for 35 cycles using the following parameters: 95°C for 45 seconds, 57°C for 30 seconds, 72°C for 30 seconds. A final extension was performed at 72°C for 10 minutes. Primers were validated with cDNA synthesised from adult and E13.5 DRG mRNA.
Calcium Imaging
Cells were loaded with 2 μM Fura-2 AM (Molecular Probes) for 30 minutes and allowed to recover in standard extracellular solution for a further 30 minutes before experiments. All recordings were performed at room temperature. The cells were superfused with standard extracellular solution for at least 5 minutes before beginning recordings. Ratiometric measurements of intracellular calcium [Ca2+]i were made using a Cairn Dual OptoLED (excitation wavelengths: 350 and 380 nm) with a Hamamatsu C4742-95 camera and Simple PCI 6 software (Hamamatsu). Background subtraction and ratio calculations (350/380 nm) were performed within the software. Standard extracellular solution contained the following (mM): NaCl (140); KCl (4); CaCl2 (2); HEPES (10); NaOH (4.54) and glucose (5). For high-potassium extracellular solution, the concentration of KCl was increased to 40 mM and NaCl was reduced to 104 mM. Both solutions were adjusted to pH 7.4 at room temperature using NaOH. We tested three fields of cells for each compound. To avoid desensitisation mechanisms, only one stimulus was applied per coverslip, unless the cells failed to respond, in which case another drug was tested on the same coverslip.
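Although the background subtraction and ratioing were performed inside the acquisition software, the computation is simple enough to sketch. The following Python/numpy fragment is a minimal illustration of the 350/380 nm ratio calculation, not the vendor's implementation; the array names and the scalar background estimates are our own assumptions.

import numpy as np

def fura2_ratio(f350, f380, bg350, bg380):
    # Background-subtract each channel, then form the 350/380 nm ratio.
    # f350, f380: image stacks of shape (frames, height, width);
    # bg350, bg380: scalar backgrounds, e.g. from a cell-free region.
    num = f350.astype(float) - bg350
    den = f380.astype(float) - bg380
    den[den <= 0] = np.nan  # avoid dividing by noise outside cells
    return num / den

# hypothetical usage: mean ratio trace over one cell's ROI mask
# ratio = fura2_ratio(f350, f380, 90.0, 110.0)
# trace = np.nanmean(ratio[:, roi_mask], axis=1)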
Compounds
All drugs were applied in standard extracellular solution from the following stock solutions: capsaicin (
Data Processing and Statistical Analysis
Statistical analysis was performed using GraphPad Prism 6 software and SigmaPlot version 12.0. For the calcium imaging data, cells that responded with a rise in fluorescence ΔF/F0 ≥ 0.1 were considered to be responders. Cell surface area measurements were performed in ImageJ. A free-hand region of interest was drawn around the perimeter of the soma. The cell diameters were estimated from the area, which was assumed to be equal to πr².
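The two calculations above are easy to make explicit. The short Python sketch below (our own, for illustration) converts a measured soma area to a diameter under the disc assumption and applies the ΔF/F0 responder criterion; the choice of baseline frames for F0 is an assumption, since the text does not state it.

import math

def diameter_from_area(area_um2):
    # disc model: area = pi * r^2, hence d = 2 * sqrt(area / pi)
    return 2.0 * math.sqrt(area_um2 / math.pi)

def is_responder(trace, baseline_frames=10, threshold=0.1):
    # ΔF/F0 criterion: F0 taken as the mean of the first baseline frames
    f0 = sum(trace[:baseline_frames]) / baseline_frames
    return (max(trace) - f0) / f0 >= threshold

# reproduces the embryonic DRG diameters quoted later in the text:
print(round(diameter_from_area(80.6), 1))   # 10.1 um (TrkA-positive mean)
print(round(diameter_from_area(187.8), 1))  # 15.5 um (RET-positive mean)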
Transient Transfection
Transient transfection with pmaxGreenFP was performed using Lipofectamine LTX with the Plus reagent. Cells were seeded at a density of 12,500/cm² for transfection the following day. For each cm², the following were added in antibiotic-free medium: 0.25 μg of DNA, 0.25 μl of Plus reagent and 0.75 μl of Lipofectamine LTX. Transfection was allowed to proceed overnight.
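Since the protocol is stated per cm², scaling it to a given culture vessel is a one-line calculation. The helper below is a minimal sketch of ours, not part of the published protocol; the example well area is an assumption.

def transfection_mix(area_cm2):
    # per-cm2 amounts from the protocol above (DNA in ug, reagents in ul)
    return {
        "pmaxGreenFP_DNA_ug": 0.25 * area_cm2,
        "plus_reagent_ul": 0.25 * area_cm2,
        "lipofectamine_LTX_ul": 0.75 * area_cm2,
    }

# e.g. one well of a 24-well plate, taken here as ~1.9 cm^2
print(transfection_mix(1.9))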
Initial screen of immortalised Tuj1 positive clones
We selected dividing cells for cloning from E12.5 cultures on the basis of immunoreactivity for the neuron-specific marker Tuj1 and then selected 28 clones based on robustness of proliferation and uniformity of Tuj1 expression. Using RT-PCR and immunocytochemistry we screened these 28 clones for markers of neural crest cells, glia and post-mitotic sensory neurons, both in proliferating conditions and following differentiating conditions (data not shown). MED17.11 cells were identified as having an advanced sensory neuron-like profile coupled with a strong proliferation capacity. This clone was chosen for further detailed characterisation.
Proliferating MED17.11 cells express neuronal markers and can be transfected
When maintained in proliferating conditions, the cells adopted a flattened morphology (Fig 1) and were immunopositive for the generic neuronal markers Tuj1 and FOX3 (NeuN, Fig 1). MED17.11 cells were also immunopositive for the LIM homeodomain transcription factor Isl1 (Fig 1), which plays a role in sensory neuron survival and maturation, in particular for nociceptors [10]. Importantly, MED17.11 was negative for the glial markers GFAP, SOX10 and CNPase (data not shown).
Screens involving gene over-expression or knockdown in primary DRG neurons in culture are difficult, owing to the challenge of obtaining large quantities of purified neurons and to poor transfection efficiencies, particularly with standard lipid-based transfection reagents, which can cause a loss of cell viability. As proof of principle, we have demonstrated that MED17.11 cells can be transfected using a lipofection-based reagent (Fig 1, GFP).
Differentiated cells rapidly adopt a neuronal morphology and express the DRG neuron marker Advillin
The protocol for differentiation is represented in Fig 2A. Our differentiation medium contained bFGF (10 ng/ml, R&D Systems), dibutyryl cAMP (0.5 mM, Sigma), forskolin (25 μM, Cell Signalling Technology) [11] and the ROCK inhibitor Y-27632 (5 μg/ml, Chemdea), which promotes neurite outgrowth in embryonic DRG [12] and induces neural crest cell differentiation [13]. The growth factors NGF (100 ng/ml) and GDNF (20 ng/ml, both R&D) were also added to the differentiating medium, the former being required for nociceptor survival and both playing a role in phenotype specification. Within hours of switching to the differentiation medium, the majority of cells displayed a bipolar morphology typical of immature DRG neurons [14]. Fig 2B and 2C are bright-field images after three days in differentiating medium (day 5 of the differentiation protocol, Fig 2A). The majority of cells had a phase-bright rounded soma with two processes, which extended in length and branched extensively in culture. We found that morphological differentiation was strongly dependent on cell density and was optimal when the cells were plated at 5000-15000/cm². At higher densities a smaller percentage of cells showed visible differentiation, and at lower densities the survival rates were poor.
MED17.11 cells displayed some heterogeneity in terms of cell size. Fig 2D shows the frequency distribution of soma diameters measured on day 5 of the differentiation protocol. Cell body diameters ranged from 8 to 32 μm, but 77.8% of MED17.11 cells were between 14 and 20 μm, with a mean diameter of 18.2 ± 0.2 μm (n > 200 cells). The cell diameters showed a small but significant decrease over the days post-differentiation (Fig 2E). This is attributable to the loss of a small population of larger diameter cells over this period. The soma size is equivalent to small diameter adult neurons, but larger than reported in mouse at E15.5 [15], where TrkA-positive (nociceptive) and RET-positive (early RET-expressing, mechanoreceptive population) neurons had mean diameters of 10.1 and 15.5 μm respectively (reported as surface areas of 80.6 ± 1.8 μm² and 187.8 ± 7.6 μm²; diameter calculated assuming area = πr²). MED17.11 cells were uniformly immunoreactive for the sensory neuron marker Advillin (Fig 2F), which is selectively expressed in rat DRG and superior cervical ganglia [16] and has an even more restricted DRG expression in mouse [17].
MED17.11 cells are committed to the sensory neuron lineage and mature when differentiated
To understand whether morphological differentiation by our medium was coupled to cytochemical maturation, we screened the cells for markers of sensory neurons and neural crest cells using RT-PCR in proliferating and differentiating conditions (7 days, which is day 9 of the differentiation protocol, Fig 2A). We included some markers that are expressed in modality-specific subpopulations of DRG and whose expression pattern has been described in embryonic DRG. However, the primary focus was on markers of nociceptive neurons. The mRNA phenotype is summarised in Fig 3 and suggests that the undifferentiated cells are already committed to the sensory neuron lineage, but mature when cultured in our differentiation medium, as described below.
In agreement with immunocytochemical analysis, in proliferating conditions, we did not detect mRNA for SOX10, a marker of multipotent neural crest cells whose expression is subsequently maintained only in glial cells [18]. Importantly, we detected expression of FOXS1, which is restricted to sensory neuron committed cells and is not expressed in the sympathetic chain in E12.5 mice [15]. In agreement with immunocytochemical analysis, we also detected mRNA for Advillin in both proliferating and differentiated cells. Taken together this suggests that the phenotype of MED17.11 cells correlates well with committed sensory neuron progenitors.
Within peripheral sensory neurons, RUNX1 is selectively expressed in developing nociceptive lineage neurons and maintained in the non-peptidergic nociceptor population postnatally [19,20]. After just one day in differentiating conditions, MED17.11 cells expressed this transcription factor (Fig 3). TrkA is initially found in all nociceptors/thermoreceptors (but not large diameter neurons), but postnatally it is down-regulated in half of the population. These neurons start to express the GDNF receptor cRET, do not synthesise peptides such as CGRP and substance P, and label with the plant isolectin IB4 [15]. Differentiated MED17.11 expressed mRNA for cRET; interestingly, we also detected mRNA for CGRP, a marker of peptidergic nociceptors. This suggests that MED cells are capable of differentiating into different subpopulations of DRG neurons (note that the differentiation medium contains both NGF and GDNF). We also detected mRNA for TRPV2, which is expressed in post-mitotic DRG neurons and developing motor neurons from E11, but not in other spinal cord neurons [20]. Of note, we also detected mRNA for the mechanosensor proteins Piezo1 and Piezo2. Both are expressed in DRG, with Piezo2 being highly enriched in adult DRG [21], but their expression profiles have not been reported in embryonic DRG. (Fig 3 legend: a "+" indicates that the cell line was positive and a "-" that it was negative for the corresponding marker; a "?" indicates that the marker was not tested in the given condition. The "*" beside Runx1 indicates that this nociceptor marker was also tested after one day in differentiation conditions and was positive. LTM = low threshold mechanoreceptor; VGSC = voltage-gated sodium channel. Right: immunocytochemical analysis of MED17.11 cells differentiated for 7 days, showing immunoreactivity for the modality-specific markers TrkA and TrkC (top) and the embryonic VGSC NaV1.3 (bottom left); live MED17.11 cells also strongly labelled with IB4 (bottom right). Scale bar: 50 μm.)
To determine whether the cells expressed markers of proprioceptor/mechanoreceptor lineage neurons, we looked for expression of the receptor tyrosine kinases TrkB and TrkC, as well as the runt domain transcription factor Runx3. When differentiated, MED17.11 expressed TrkC and Runx3 (Fig 3).
Voltage-gated sodium channels (VGSCs) confer electrical excitability to adult sensory neurons, with NaV1.7, NaV1.8 and NaV1.9 being important molecular substrates underlying the excitability of nociceptors in DRG. Following differentiation, we could not detect mRNA for NaV1.8 at this time-point (7 days in differentiating conditions); however, the cells expressed mRNA for the VGSC NaV1.7, whose expression in sensory neurons is required for both acute and inflammatory pain [22,23], and for the TTX-resistant NaV1.9, which is enriched in non-peptidergic nociceptors [24,25].
To determine whether the mRNA expression analysis reflected actual protein translation, and to confirm the reproducibility of our protocol, we screened cells kept in differentiating conditions for 7 days using antibodies against TrkA and TrkC (Fig 3B). In agreement with the mRNA analysis, the cells were immunopositive for both receptor tyrosine kinases. While all cells were immunopositive, the intensity of labelling was heterogeneous, suggesting that each receptor was enriched in certain cells. Live MED17.11 cells labelled strongly with isolectin B4 (IB4, Fig 3), which, as stated above, predominantly labels small diameter, non-peptidergic neurons.
Although the mRNA and immunocytochemical evidence suggests that MED17.11 cells mature in our differentiation medium, it is unclear whether they still retain properties of embryonic neurons. While the VGSC NaV1.3 is not expressed in healthy adult DRG neurons, it is highly expressed in the developing DRG [26], making it a useful indicator of the degree of neuronal maturation. Indeed, all MED17.11 cells were immunoreactive for NaV1.3, suggesting that cells differentiated for 7 days are still embryonic in nature.
MED17.11 responds to noxious compounds and the voltage-gated sodium channel agonist veratridine
To investigate whether MED17.11 cells express functional receptors found specifically in nociceptors, we used Fura-2 AM calcium microfluorimetry to examine the responses of cells in proliferating and differentiating (3-6 days) conditions to compounds known to excite sensory neurons (Fig 4). Representative traces of responses are shown in Fig 4A. In agreement with the RT-PCR data, differentiated MED17.11 cells were sensitive to the TRPV1 agonist capsaicin (10 μM), a canonical nociceptor marker (Fig 4A, upper). 71% of differentiated cells were responsive, compared with proliferating conditions in which none responded to capsaicin. The mean capsaicin response magnitude was ΔF/F0: 0.43 ± 0.05 (Fig 4B).
KCl depolarisation of DRG neurons elicits calcium transients by activating voltage-gated calcium channels, which are selectively expressed in the neuronal cells of the DRG. In the developing DRG, capsaicin responses are only observed in neurons capable of being depolarised by KCl [9]; therefore we expected that a large percentage of MED17.11 cells could be depolarised by KCl. Indeed, more than 90% of MED17.11 cells responded to KCl, with a mean response amplitude of ΔF/F0: 0.38 ± 0.04 (Fig 4B). As with capsaicin, MED17.11 cells were not depolarised by KCl in proliferating conditions (Fig 4B).
DRG neurons express purinergic receptors and are activated by their endogenous ligand ATP [27]. 50% of MED17.11 cells responded to ATP (10 μM), with a mean response amplitude of ΔF/F0: 0.3 ± 0.05 (Fig 4B). Again, we did not observe responses to ATP in control cells cultured in proliferating conditions. We next examined the cells for sensitivity to the sodium channel agonist veratridine. This alkaloid neurotoxin elicits calcium transients in primary adult DRG neurons [28] and preferentially activates TTX-sensitive sodium channels by preventing the closure of the inactivation gate on open sodium channels [29]. Similar to the percentage of cells activated by capsaicin, 75% of MED17.11 cells responded to the compound with a sustained elevation in [Ca2+]i (Fig 4A and 4B), with a mean response amplitude of ΔF/F0: 0.23 ± 0.03. As with the other compounds, we did not detect veratridine responses in MED17.11 cells maintained in proliferating conditions.
TrkA-positive neurons give rise to innocuous thermoreceptors and noxious cold sensors. We looked for functional responses to 30 μM WS-12, a compound selective for the cold receptor TRPM8, whose expression has been reported in embryonic DRG from E16.5 onwards, initially in a subpopulation of TRPV1-positive neurons [9]. We did not observe any calcium elevation in response to this drug, suggesting the cells do not express functional TRPM8 (data not shown). TRPA1 is an irritant receptor that is predominantly expressed in a subpopulation of TRPV1-positive neurons [30]. Again, we failed to observe responses to the TRPA1 agonist cinnamaldehyde (200 μM; data not shown).
To validate the effectiveness of our differentiation medium, we cultured MED17.11 cells for 3 days in non-permissive conditions (at 37°C and without IFNγ) with NGF and GDNF but without bFGF, cAMP, forskolin and Y-27632 (Fig 4C). We did not observe any responses to KCl or veratridine, and only 8% of cells responded to capsaicin (compared with 71% in differentiation medium), confirming the effectiveness of the medium.
MED17.11 expresses receptors for inflammatory mediators
Inflammatory mediators released during tissue damage can directly activate nociceptors or sensitise them to subsequent stimuli leading to hyperalgesia [1]. To determine whether differentiated MED17.11 cells have the potential to be used as an in vitro model to study mechanisms of peripheral sensitisation, we examined their responses to several inflammatory mediators using calcium microfluorimetry. Representative traces are shown in Fig 4D. We observed responses to bradykinin (2 μM), histamine (10 μM) and norepinephrine (10 μM) but we did not observe any responses to serotonin (10 μM; data not shown).
Discussion
We have derived a mouse embryonic DRG cell line, MED17.11, from the Immortomouse. The expression profile of this cell line correlates best with committed sensory neuron progenitors, and the cells can be differentiated efficiently to acquire cytochemical and pharmacological properties consistent with nociceptor-lineage neurons. The cells also expressed markers of mechanoreceptor/proprioceptor lineage neurons, suggesting that MED17.11 was isolated from DRG-committed cells with multimodal potential.
Differentiated MED17.11 cells are a robust and rapid model for neurite outgrowth
The rapid and efficient induction of neurogenesis on a simple substrate of polyornithine following differentiation (Fig 2), together with their transfectibility, could make MED17.11 a particularly useful model system for the growing field of high content analysis, which combines automated microscopy with automated analysis for chemical/genetic screens. Quantitation of neurite outgrowth is the most popular phenotypic screen for neuronal cells, and neuronal-like cell lines are popular models for in vivo neurons in such screens. However, the speed and efficiency of differentiation are a specific bottleneck. Within hours of application of our differentiation medium, MED17.11 cells elaborate long processes that extend over time in culture. Moreover, unlike many widely used cell lines, such as PC-12, MED17.11 does not have a tendency to aggregate in our differentiation conditions (Fig 2B and 2C), which is a distinct advantage for high content screens.
MED17.11 cells have multimodal potential
Using our differentiation protocol, MED17.11 cells rapidly acquire properties of peptidergic and non-peptidergic nociceptors (Fig 3). Here, we have focused on probing their nociceptive phenotype, however TrkC and Runx3 expression indicates that they may also still have the ability to differentiate into low threshold mechanoreceptors/proprioceptive lineage neurons [31]. The differentiation medium contains the neurotrophic factors NGF and GDNF, which are involved in the specification of these populations [32]. We believe that a more defined medium could direct these cells to a specific phenotype. Regardless, this multipotentiality may be useful to study the mechanisms underlying DRG subpopulation specification. Further studies in serum-free conditions and perhaps using additional growth factors such as NT3 and BDNF are needed to determine the degree to which these cells can be directed towards a specific phenotype.
MED17.11 cells have unique phenotypes
Several other DRG cell lines exist, but more are needed to better represent the full complement of phenotypes observed in the highly heterogeneous DRG. Recently, Vetter et al. [33] characterised the endogenous calcium responses of various cell lines to compounds known to excite DRG neurons, including the rat embryonic [2] and neonatal [3] DRG/mouse neuroblastoma fusion cell lines, F11 and ND7/23, as well as the T-antigen-immortalised rat embryonic 50b11 cell line [4]. They used several compounds also tested here: bradykinin, histamine, ATP and capsaicin. Comparison with their results suggests that MED cells have unique properties. For instance, like MED17.11, F11 and 50b11 responded to ATP, whereas ND7/23 cells were insensitive. Vetter et al. did not detect responses to capsaicin in any of these cell lines; however, this contrasts with previous reports for 50b11 [4]. The reasons for this are unclear, but could be related to differences in experimental parameters [33]. F11 cells lose their capsaicin sensitivity over time in culture [34], with genetic loss being a characteristic drawback of somatic cell fusion lines. Unlike MED17.11, none of these cell lines responded to histamine. F11 and ND7/23, but not 50b11, cells responded to bradykinin, with F11 showing particularly large calcium transients; MED17.11 was sensitive to bradykinin. In addition, this is the first report of endogenous Piezo2 expression in a DRG cell line, so it will be interesting to evaluate the mechanoreceptive properties of MED17.11 cells.
We did not detect expression of NaV1.8 by mRNA analysis. This channel is first detected at E15 in rat DRG [26]; however, the onset of expression is differentially regulated within DRG subpopulations. MED17.11 strongly labelled with IB4, and in this population NaV1.8 first appears only after NaV1.9 expression, in the late embryonic period [26]. Immunoreactivity for NaV1.3 suggests that MED17.11 cells kept in our differentiation conditions for 7 days still have an embryonic phenotype. It is possible that culturing MED17.11 for longer periods could lead to the expression of NaV1.8, or that our culture conditions lacked an essential factor governing its expression. However, the mechanisms underlying the induction of NaV1.8 expression during development are unknown. We also failed to observe functional responses to the TRPA1 agonist cinnamaldehyde. Again, it is possible that the time-point chosen was too early to detect this marker, as the onset of functional TRPA1 expression occurs in the post-embryonic period (from P0 in peptidergic nociceptors and P14 in non-peptidergic nociceptors [9]).
A differentiation medium for rapid and efficient cytochemical differentiation
Using our differentiation medium, MED17.11 cells cultured on polyornithine rapidly acquired markers of post-mitotic sensory neurons and differentiated morphologically with high efficiency. Neural crest cell specification of DRG neurons is an active area of study. Recently, great strides have been made in this area to produce sensory neurons from human pluripotent stem cells (hPSCs) using combined small molecule inhibition [5,6]. After 10 days these cells begin to express early markers of sensory neurons, including Brn3a and Isl1, similar to the expression profile of non-differentiated MED17.11. These cells are then grown in basic medium supplemented with growth factors and ascorbic acid, and subsequently acquire nociceptor transcription factors and functional ion channel expression after a further 2-6 weeks in culture. This is in strong contrast to differentiated MED17.11 cells, in which we detected nociceptive markers after 1-7 days. Notably, the acquisition of the important nociceptor transcription factor Runx1 was not detected in hPSC-derived neurons until 4 weeks of culture in growth factors. In addition, veratridine and ATP sensitivity was not observed until 2 weeks post-differentiation in response to growth factors; capsaicin responses were only seen after 6 weeks and in just 1-2% of cells. This is in contrast to MED17.11 cells, which respond to both ATP (50%) and capsaicin (71%) after just a few days of differentiation. Indeed, when we cultured MED17.11 in just NGF and GDNF, we observed little functional differentiation (Fig 4C) compared with our primary differentiation medium (Fig 4B). Therefore it will be interesting to determine whether our differentiation medium can be applied to neuralised hPSCs to accelerate the acquisition of sensory neuron markers. The time saved (days instead of weeks) would have a significant impact on the suitability of MED17.11 or hPSC-derived neurons for high-throughput drug screens. Moreover, immortalising differentiated hPSCs at the same developmental stage as the MED cell lines might further simplify the derivation of sensory neurons from hPSCs by eliminating the need to maintain and manipulate stem cells. Recently, Wainger et al. derived nociceptors from fibroblasts by transfecting them with five transcription factors; moreover, the fibroblasts were derived from patients with familial dysautonomia [35], thus providing a novel way to model human neuronal disease in vitro. This is an extremely encouraging development for the field, but for many basic researchers a simpler, cheaper and quicker method for deriving sensory neurons in culture is necessary, and thus MED17.11 cells may be more suitable for this work.
In summary, MED17.11 cells have the capability of differentiating into sensory neurons of multiple modalities, with particularly strong evidence that they differentiate efficiently into nociceptive-lineage neurons. Moreover, like primary DRG neurons, they are sensitive to inflammatory mediators. The advantage of creating a temperature-dependent conditional cell line is that there is a continuous supply of material, while any confounding effects of the immortalising gene on the phenotype can be reduced. MED17.11 cells can be induced to undergo rapid and efficient maturation into sensory neurons using our differentiation medium, enabling large scale preparations for high-throughput and "omic" screens. The use of MED17.11 should aid basic and pharmaceutical research by providing an in vitro model to study the molecular mechanisms underlying nociception, neuronal development and phenotype specification, while at the same time reducing the number of animals used to derive primary cultures.
"year": 2015,
"sha1": "8a88ea9fbcab23682ba752f5e980db5e1c156353",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0128670&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8a88ea9fbcab23682ba752f5e980db5e1c156353",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18140809 | pes2o/s2orc | v3-fos-license | Commutative Languages and their Composition by Consensual Methods
Commutative languages with the semilinear property (SLIP) can be naturally recognized by real-time NLOG-SPACE multi-counter machines. We show that unions and concatenations of such languages can be similarly recognized, relying on -- and further developing -- our recent results on the family of consensually regular (CREG) languages. A CREG language is defined by a regular language on the alphabet that includes the terminal alphabet and its marked copy. New conditions ensuring that the union or concatenation of CREG languages remains in the family are presented and applied to the commutative SLIP languages. The paper contributes to the knowledge of the CREG family, and introduces novel techniques for language composition, based on arithmetic congruences that act as language signatures. Open problems are listed.
Introduction
This paper focuses on commutative languages having the semilinear property (SLIP). We recall that a language has the linear property (LIP) if, in any word, the vector of letter occurrences (also named the Parikh image) satisfies a linear equation; it has the semilinear property (SLIP) [5] if the vector satisfies one out of finitely many linear equations. A language is commutative (COM) if, for every word, all its permutations are in the language; thus, the legality of a word is based only on the Parikh image, not on the positions of the letters. Here we deal with the subclass of COM languages enjoying the SLIP, denoted by COM-SLIP, for which we recall some known properties. For a binary alphabet, COM-SLIP languages are context-free whereas, in the general case, they can be recognized by multi-counter machines (MCM), in particular by non-deterministic quasi-real-time blind MCM (equivalent to reversal-bounded MCM [7]). The COM-SLIP family is closed under all Boolean operations, homomorphism and inverse homomorphism, but it is not closed under concatenation.
Our contribution is to relate two seemingly disparate language families: on one hand, the COM-SLIP languages and their closure under union and concatenation (denoted by COM-SLIP ∪,· ), on the other hand, the family of consensually regular languages (CREG), recently introduced by the authors, to be later presented. We briefly explain the intuition behind it. Given a terminal alphabet, a CREG language is specified by means of a regular language (the base) having a double alphabet: the original one and a dotted copy. Two or more words in the base language match, if they are all identical when the dots are disregarded and, in every position, exactly one word has an undotted letter (thus in all remaining words the same position is dotted). In our metaphor, we say that, position by position, one of the base words "places" a letter and the remaining words "consent" to it. A word is in the consensual language if the base language contains a set of matching words, identical to the given word when the dots are disregarded. This mechanism somewhat resembles the model of alternating non-deterministic finite automata, but the criterion by which the parallel computations match is more flexible and produces a recognition device which is a MCM working in NLOG-SPACE. This MCM can be viewed as a token or multi-set machine; it has one counter for each state of the DFA recognizing the base language; each counter value counts the number of parallel threads that are currently active in each state. Our main result is that the COM-SLIP ∪,· family is strictly included in CREG; we also prove some non-closure properties of COM-SLIP ∪,· .
To construct the regular language that serves as the base for the consensual definition of a COM-SLIP∪,· language, we have devised a new method, which may also be useful to study the inclusion in consensual classes of other families closed under union or concatenation. It is easy to consensually specify a COM-LIP language by means of a regular base; however, in general, the union or concatenation of two regular bases consensually specifies a larger language than the union or concatenation of the components. To prevent this from happening, we assign a distinct numeric congruence class to each base, which determines the positions where a letter may be placed as dotted or as undotted. For a given word, such positions are not the letter orders within the word, but the orders of the letters in the projections of the word on each letter of the alphabet. The congruence acts as a sort of signature that cannot be mismatched with other signatures.
To hint at a potential application, COM-SLIP∪,· offers a rather suitable schema for certain parallel computation systems, such as Valiant's "bulk synchronous parallel computer" [16]. There, when all the threads in a parallel computational phase, which we suggest modelling by a commutative language, terminate, the next phase can start; the sequential composition of such phases can be represented by language concatenation, and the composition of alternative subsystems can be modeled by language union. As said, such a computation schema is not finite-state, but it is an MCM.
Paper organization: Sect. 2 contains preliminaries, some simple properties of COM-SLIP ∪,· and the consensual model. Sect. 3 introduces the decomposed form, states and proves the conditions that ensure union-and concatenation-closure, and details the congruence based constructions. Sect. 4 proves the main result through a series of lemmas. The last section refers to related work and mentions some unanswered questions.
Preliminary Definitions and Properties
The terminal alphabet is denoted by Σ = {a1, . . . , ak}, the empty word by ε, and |x| is the length of a word x. The projection of x on ∆ ⊆ Σ is denoted by π∆(x); |x|a is shorthand for |π{a}(x)| for a ∈ Σ, and |x|∆ stands for |π∆(x)|. The i-th letter of x is x(i), and x(i, j) is the substring x(i) · · · x(j). The Parikh image of a word x is the vector Ψ(x) = (|x|a1, . . . , |x|ak); it can be naturally extended to a language. The component-wise addition of two vectors is denoted by +. The commutative closure of a language L ⊆ Σ* is com(L) = {x ∈ Σ* | Ψ(x) ∈ Ψ(L)}; a language is commutative if it coincides with its commutative closure, and the corresponding language family is named COM. A language L ⊆ Σ* has the linear property (LIP) if there exist q + 1 > 0 vectors c, p(1), . . . , p(q) over N^k (resp. the constant and the periods) such that Ψ(L) = { c + n1 · p(1) + . . . + nq · p(q) | n1, . . . , nq ≥ 0 }.
A language has the semilinear property (SLIP) if it is the finite union of LIP languages. The families of commutative LIP/SLIP languages are denoted by COM-LIP/ COM-SLIP, respectively. It is well known that COM-SLIP is closed under the Boolean operations, inverse homomorphism, homomorphism and Kleene star, but not under concatenation, which in general destroys commutativity. However, the concatenation of COM-SLIP languages still enjoys the SLIP.
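These definitions are concrete enough to check mechanically. The following Python sketch (ours, not from the paper) decides membership in a COM-LIP language by brute-force search over the period coefficients; a COM-SLIP language would be handled by testing each of its finitely many (constant, periods) pairs.

from itertools import product

def parikh(word, alphabet):
    # Parikh image: the vector of letter counts, Psi(word)
    return tuple(word.count(a) for a in alphabet)

def in_com_lip(word, alphabet, c, periods):
    # word is in the COM-LIP language iff
    # Psi(word) = c + n1*p(1) + ... + nq*p(q) for some n1..nq >= 0
    v = parikh(word, alphabet)
    r = tuple(vi - ci for vi, ci in zip(v, c))
    if any(x < 0 for x in r):
        return False
    if not periods:
        return all(x == 0 for x in r)
    bounds = []
    for p in periods:
        ratios = [rv // pv for rv, pv in zip(r, p) if pv > 0]
        bounds.append(min(ratios) if ratios else 0)
    for ns in product(*(range(b + 1) for b in bounds)):
        s = tuple(sum(n * p[i] for n, p in zip(ns, periods)) for i in range(len(r)))
        if s == r:
            return True
    return False

# com((abb)+): constant (1, 2) and a single period (1, 2)
print(in_com_lip("babbab", "ab", (1, 2), [(1, 2)]))  # True
print(in_com_lip("abab", "ab", (1, 2), [(1, 2)]))    # False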
Let COM-SLIP ∪,· be the smallest family including COM-SLIP languages and closed under union and concatenation. Let BLIND denote the class of languages accepted by nondeterministic, blind multicounter machines [7], which, we recall, are restricted to perform a test for zero only at the end of a computation; they are equivalent to reversal-bounded counter machines. The following facts, although to our knowledge not stated in the literature, are straightforward. Proposition 1. Main Properties of COM-SLIP ∪,· .
1. Every COM-SLIP∪,· language on a binary alphabet is context-free.
2. The family COM-SLIP∪,· is strictly included in BLIND.
3. The COM-SLIP∪,· family is not closed under intersection and Kleene star.
Proof. Let L′ = com((ab)+). Statement (1) is immediate: since all COM-SLIP languages on a binary alphabet are context-free [9,13], so are their unions and concatenations. Statement (2) is also immediate, since COM-SLIP is clearly included in BLIND, and BLIND is closed under union and concatenation. The inclusion is strict, since BLIND also includes non-context-free languages on a binary alphabet [7]. To prove non-closure under intersection - Statement (3) - assume by contradiction that the language L0 = L′ ∩ a+b+ = {a^n b^n | n > 0} is in COM-SLIP∪,·. Hence, also the languages L1 = {a+ b^n a^n | n > 0}, L2 = {a^m b^m a+ | m > 0} and L1 ∩ L2 = {a^n b^n a^n | n > 0} are in COM-SLIP∪,·. But the latter language is not context-free, contradicting Statement (1). To complete the proof of Statement (3), if COM-SLIP∪,· were closed under Kleene star, then the language L3 = (L′c)* would be in COM-SLIP∪,·, with c ∉ {a, b}. However, COM-SLIP∪,· is included in BLIND, which is an intersection-closed full semiAFL (see Section 5 of [1] and also Theorem 1 of [7]), i.e., BLIND is closed under intersection, union, arbitrary homomorphism, inverse homomorphism, and intersection with regular languages. Hence, the language L4 = L3 ∩ (a+b+c)* = {a^n b^n c | n > 0}* would be in BLIND. Letter c can be deleted by a homomorphism, hence also the language {a^n b^n | n > 0}* would be BLIND, contradicting Corollary 3 of [1] and also Theorem 6, Part (2), of [7].
Consensual Languages.
We present the necessary elements of consensual language theory [2,3]. Let Σ̊ be the dotted (or marked) copy of alphabet Σ. For each a ∈ Σ, ã denotes the set {a, å}. The alphabet Σ̃ = Σ ∪ Σ̊ is named double (or internal). To express a sort of agreement between words over the double alphabet, we introduce a binary relation, called match, over Σ̃*. Definition 1 (Match). The partial, symmetrical, and associative binary operator, called match, @ : Σ̃ × Σ̃ → Σ̃ is defined as follows, for all a ∈ Σ: a@å = å@a = a, å@å = å, and it is undefined in every other case.
The match is extended to strings letter by letter. Hence, the match is undefined on strings w, w′ of unequal lengths, or else if there exists a position j such that w(j)@w′(j) is undefined, which occurs in three cases: when both characters are in Σ, when both are in Σ̊ and differ, and when either one is dotted but is not the dotted copy of the other. Syntactically, the precedence of the match operator is just below the precedence of concatenation. The match w of two or more strings is further qualified as strong if w ∈ Σ*, or as weak otherwise. By Def. 1, if w = w1@w2@ . . . @wm is a strong match of m ≥ 1 words w1, . . . , wm, then in each position 1 ≤ i ≤ |w| exactly one word, say wh, is undotted, i.e., wh(i) ∈ Σ and wj(i) ∈ Σ̊ for all j ≠ h; we say that the word wh places the letter at position i and the other words consent to it. Metaphorically, the words that strongly match provide mutual consensus on the validity of the corresponding word over Σ, thereby motivating the name "consensual" of the language family. The match is extended to two languages B′, B′′ on the double alphabet as B′@B′′ = {w′@w′′ | w′ ∈ B′, w′′ ∈ B′′, w′@w′′ is defined}. Definition 2 (Consensual language). The closure under match, or @-closure, of a language B ⊆ Σ̃* is B^@ = ∪_{i≥1} B^{i@}, where B^{1@} = B and B^{(i+1)@} = B^{i@} @ B. The consensual language with base B is defined as C(B) = B^@ ∩ Σ*. The family of consensually regular languages, denoted by CREG, is the collection of all languages C(B) such that the base B is regular.
It follows that a CREG language can be consensually specified by a regular expression over Σ̃.
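To keep the definitions concrete, here is a small executable rendition of Def. 1 and of the mappings used later in the paper; the encoding is ours and not part of the theory: undotted letters are lowercase, dotted letters are the corresponding uppercase characters.

def match_char(x, y):
    # a @ å = å @ a = a ;  å @ å = å ;  undefined (None) in every other case
    if x.islower() and y.isupper() and x.upper() == y:
        return x
    if x.isupper() and y.islower() and x == y.upper():
        return y
    if x.isupper() and y.isupper() and x == y:
        return x
    return None

def match(w1, w2):
    # letter-by-letter extension; undefined on strings of unequal length
    if len(w1) != len(w2):
        return None
    out = []
    for x, y in zip(w1, w2):
        m = match_char(x, y)
        if m is None:
            return None
        out.append(m)
    return "".join(out)

def dot(w):    return w.upper()      # dot every letter
def undot(w):  return w.lower()      # remove all dots
def switch(w): return w.swapcase()   # interchange a and å

# the strong match of Example 1 below: åab̊bc̊c @ aåbb̊cc̊ = aabbcc
print(match("AaBbCc", "aAbBcC"))     # -> 'aabbcc'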
Example 1.
The LIP language L = {a^n b^n c^n | n > 0} is consensually specified by the base (which we may call a "consensual regular expression") å*aå* b̊*bb̊* c̊*cc̊*. For instance, aabbcc is the (strong) match of åab̊bc̊c and aåbb̊cc̊. The commutative closure of L is also in CREG, with base com(abc) ⧢ Σ̊*, where ⧢ denotes the shuffle operation.
Similarly, the COM-LIP language L′ = com((ab)+) is specified by the base B1 = com(ab) ⧢ Σ̊*. The COM-LIP language L′′ = com((abb)+) is specified by the base B2 = com(abb) ⧢ Σ̊*. The languages L′ ∪ L′′ and L′ · L′′ are in CREG but, counter to a naive intuition, they are not specified by the bases obtained by composition, respectively B1 ∪ B2 and B1 B2. In general C(B1 ∪ B2) ⊃ C(B1) ∪ C(B2): in the example, C(B1 ∪ B2) contains also undesirable "cross-matching" words, such as ababb = abåb̊b̊ @ åb̊abb. A systematic compositional technique for obtaining the correct bases for the union and the concatenation is the main contribution of this paper.
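Continuing the same lowercase/uppercase encoding, consensual membership can be checked by brute force: search for a partition of the positions of the input among base words, each word placing the letters of its own block and consenting (dotted) elsewhere. This sketch of ours is exponential and purely illustrative; the actual recognizer is the multi-counter machine recalled below.

import re

def consensual_member(w, in_base):
    # try every partition of the positions of w; a group g yields the base
    # word that equals w on g and is dotted (uppercase) everywhere else
    n = len(w)
    def partitions(i, groups):
        if i == n:
            yield [set(g) for g in groups]
            return
        for g in groups:
            g.append(i)
            yield from partitions(i + 1, groups)
            g.pop()
        groups.append([i])
        yield from partitions(i + 1, groups)
        groups.pop()
    for part in partitions(0, []):
        words = ["".join(w[j] if j in g else w[j].upper() for j in range(n))
                 for g in part]
        if all(in_base(x) for x in words):
            return True
    return False

# base of Example 1, å*aå* b̊*bb̊* c̊*cc̊*, as a regular expression
base = re.compile(r"A*aA*B*bB*C*cC*").fullmatch
print(consensual_member("aabbcc", base))  # True:  in {a^n b^n c^n}
print(consensual_member("aabbc", base))   # False: unequal letter counts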
Summary of known and relevant CREG properties.
Language family comparisons: CREG includes the regular languages, is incomparable with the context-free and deterministic context-free families, is included within the context-sensitive family, and contains non-SLIP languages. CREG strictly includes the family of languages accepted by partially-blind multi-counter machines that are deterministic and quasi-real-time, as well as their unions [4]. Closure properties: CREG is closed under marked concatenation, marked iteration, inverse alphabetic homomorphism, reversal, and intersection and union with regular languages. The marked concatenation of two languages L1, L2 ⊆ Σ* is the language L1#L2, where # ∉ Σ, while the marked iteration of L ⊆ Σ* is the language (L#)*. A language family enjoying such properties is known as a pre-Abstract Family of Languages (see, e.g., [14]). A precise characterization of the bases that consensually specify regular languages is in [3]; an analysis of the reduction in descriptional complexity of the consensual base with respect to the specified regular language is in [2]. Complexity: CREG is in NLOGSPACE, i.e., NSPACE(log n) (often called NL): it can be recognized by a nondeterministic multitape Turing machine working in log n space. The recognizer of CREG languages is a special kind of nondeterministic, real-time multi-counter machine.
Useful notations for consensual languages.
The following mappings will be used: dot(x), which replaces every letter of x with its dotted copy; undot(x), which deletes all dots; and switch(x). These mappings are naturally extended to words and languages; e.g., given x ∈ Σ̃*, switch(x) is the word obtained by interchanging a and å in x, for every a ∈ Σ (a sort of "complement").
In the remainder of the paper, we assume that each base language is a subset of Σ̃* − Σ̊+, since words in Σ̊+ are clearly useless in a match. Let B, B′ be languages included in Σ̃* − Σ̊+: the pair (B, B′) is called unmatchable if B@B′ = ∅, and B is called unproductive if C(B) = ∅.
Consensual specifications composable by union and concatenation
Since it is unknown whether the whole CREG family is closed under union and concatenation, we first introduce a normal form of the base languages, named decomposed, which is convenient for ensuring such closure properties. Second, we state two further conditions, named joinability and concatenability, for decomposed forms, and we prove that they, respectively, guarantee closure under union and concatenation. Such results hold for every consensual language, but the difficulty remains of finding a systematic method for constructing base languages that meet such conditions. Third, in Sect. 3.1 we introduce an implementation of decomposed forms, relying on numerical congruences, that will permit us to prove in Sect. 4 that the (∪, ·)-closure of commutative SLIP languages is in CREG.
Definition 3 (Decomposed form).
A base B ⊆ Σ̃* − Σ̊+ has the decomposed form if there exists a (disjoint) partition of B into two languages, named the scaffold sc and the fill fl of B, such that fl is unproductive and the pair (sc, sc) is unmatchable.
The names scaffold and fill are meant to convey the idea of an arrangement superposed just once on each word of the base and, respectively, of an optional (but repeatable) component completing the letters which are dotted in the scaffold. Three straightforward remarks follow. For every base B there exists a consensually equivalent decomposed base: it suffices to take as scaffold the language {a dot(y) | ay ∈ B, a ∈ Σ, y ∈ Σ̃*}, and as fill the language {dot(x) y | x ∈ Σ̃, y ∈ Σ̃*, xy ∈ B}. For every s ⊆ sc, f ⊆ fl, the base s ∪ f is a decomposed form. The scaffold, but not the fill, may include words over Σ.
Consider a word w ∈ C(B). Since the fill is unproductive, its match closure cannot place all the letters of w, and such letters must be placed by the scaffold. Since by definition the match closure of the scaffold alone is the scaffold itself, the following fundamental lemma immediately holds.

Lemma 1. Let B be a base in decomposed form, with scaffold sc and fill fl. Then C(B) = (sc ∪ (sc @ fl^@)) ∩ Σ*.
Example 2. The decomposed bases of the languages com((ab)+) and com((abb)+) of Sect. 2.1 are considered, for brevity, only in the case where the number of a's is a multiple of 3. Let L′ = com({a^{3n} b^{3n} | n ≥ 1}), with scaffold sc′ and fill fl′, and L′′ = com({a^{3n} b^{6n} | n ≥ 1}), with scaffold sc′′ and fill fl′′. Clearly, every word in sc′ is unmatchable with every other word in sc′, hence sc′@sc′ = ∅. Similarly, every fill is unproductive. Every word in L′ is the match of exactly one word in the scaffold with one or more words in the fill. Analogous remarks hold for L′′.
Next, imagine consensually specifying two languages by bases in decomposed form, B′ = sc′ ∪ fl′ and B′′ = sc′′ ∪ fl′′. By imposing additional conditions on the bases, we obtain two very useful theorems about composition by union and concatenation.
Definition 4 (Joinability). Two bases in decomposed form, B′ = sc′ ∪ fl′ and B′′ = sc′′ ∪ fl′′, are joinable if their union B = B′ ∪ B′′ is in decomposed form, with scaffold sc = sc′ ∪ sc′′ and fill fl = fl′ ∪ fl′′, and the pairs (sc′, fl′′) and (sc′′, fl′) are unmatchable.

Theorem 1 (Union of consensual languages in decomposed form). Let the base languages B′ and B′′ be joinable. Then C(B′ ∪ B′′) = C(B′) ∪ C(B′′).

Proof. The inclusion C(B′) ∪ C(B′′) ⊆ C(B′ ∪ B′′) is obvious. For the converse, let x ∈ C(B), with B = B′ ∪ B′′. Since B is decomposed, by Lemma 1 it must be either x ∈ sc@fl^@ or x ∈ sc. In the latter case, x is in B′ or in B′′, and the inclusion follows. In the former case, there exist n ≥ 2 words w1, w2, . . . , wn, with n ≤ |x|, w1 ∈ sc, w2, . . . , wn ∈ fl and w1@w2@ . . . @wn = x. We claim that either w1 ∈ sc′ and every other wi ∈ B′, or w1 ∈ sc′′ and every other wi ∈ B′′, from which the thesis follows. Assume w1 ∈ sc′ (the case w1 ∈ sc′′ is symmetrical). If there exists j, 2 ≤ j ≤ n, such that wj ∈ fl′′ (with wj ∈ Σ̃+), then sc′@fl′′ is not empty (it includes at least w1@wj), a contradiction with the hypothesis that B′ and B′′ are joinable.

Example 3. Returning to Ex. 2, we check that the two bases are joinable. The union of the bases is in decomposed form: fl′ ∪ fl′′ is unproductive (because the letters at positions 3, 6, . . . cannot be placed); the pairs (sc′, fl′′) and (sc′′, fl′) are unmatchable, since each fill places letters only at positions where the other scaffold is undotted. For concatenation, a similar, though more involved, reasoning requires a new technical definition.
Definition 5 (Dot-product ⊙ and concatenability). Let B′, B′′ be in decomposed form, and define their dot-product as B′ ⊙ B′′ = (sc′ · sc′′) ∪ fl′ ∪ fl′′. The bases B′ and B′′ are concatenable if B′ ⊙ B′′ is in decomposed form, with scaffold sc′ · sc′′ and fill fl′ ∪ fl′′, and the next two clauses hold for all words w′, w′′ ∈ Σ̃+, y′ ∈ sc′, y′′ ∈ sc′′:
(1) w′ ∈ fl′ and w′@(y′ · y′′) is defined, if and only if there exists x′ ∈ fl′ such that w′ = x′ · dot(y′′) and x′@y′ is defined;
(2) w′′ ∈ fl′′ and w′′@(y′ · y′′) is defined, if and only if there exists x′′ ∈ fl′′ such that w′′ = dot(y′) · x′′ and x′′@y′′ is defined.
The two clauses are symmetrical. In loose terms, Clause (1) says that the fill fl′ contains a word w′ that matches y′y′′ if, and only if, the word has a prefix x′, also in fl′, which matches y′, hence aligned with the point of concatenation. Therefore, the match w′@(y′ · y′′) does not produce a word that is illegal for C(B′) · C(B′′). This reasoning is formalized and proved next.
Theorem 2 (Concatenation of consensual languages in decomposed form). Let the base languages B′ and B′′ be concatenable. Then C(B′ ⊙ B′′) = C(B′) · C(B′′).

Proof (sketch). We consider only Case (1), since the other is symmetrical. Since wj ∈ fl′ and wj@(w′1 w′′1) is defined, then, by definition of concatenability, there exists x′ ∈ fl′ such that wj = x′ · dot(w′′1), i.e., wj(1, q) = x′ (where q = |wj| − |w′′1|), a contradiction with the assumption of Case (1).
Example 4.
Consider again Ex. 2. It is easy to check that the pair (sc′ · sc′′, sc′ · sc′′) is unmatchable, for the same reason that (sc′, sc′′) is unmatchable. Then, we check that the bases sc′ ∪ fl′ and sc′′ ∪ fl′′ are concatenable. We only discuss the case of Clause (1), since Clause (2) is symmetrical. Let w′ ∈ Σ̃+, y′ ∈ sc′, y′′ ∈ sc′′. If there exists x′ ∈ fl′ such that w′ = x′ dot(y′′), then obviously both w′ ∈ fl′ and w′@(y′ · y′′) is defined. For the converse, assume that w′ ∈ fl′ and w′@(y′ · y′′) is defined. Consider the projections α = π_a(w′), α′ = π_a(y′) ∈ (aåa)+ and α′′ = π_a(y′′) ∈ (åaa)+. Then α ∈ (ååå)* åaå (ååå)*. Since w′@(y′ · y′′) is defined, the factor åaå of α must be matched with a factor of α′α′′: by its form and alignment, the only possibility is that it is matched with a factor of α′. Hence, α has the form (ååå)* åaå (ååå)* dot(α′′). We omit the analogous reasoning for the projections on b. Since w′@(y′ · y′′) is defined, w′ must then have the form x′ · dot(y′′) for some x′ ∈ fl′. Therefore L′ · L′′ = C(sc′ · sc′′ ∪ fl′ ∪ fl′′). This example relies on a numerical congruence with module 3 for positioning the dotted and undotted letters. We shall see how to generalize this approach to handle words of any congruence class (with respect to the length of the projections on each letter). The generalization will carry the cost of taking larger values for the congruence module.
Incidentally, we observe that the theorems of this section may have a more general use than for commutative languages. Moreover, the theorems do not require the base languages to be regular; in fact, Def. 2 applies as well to non-regular bases (as a matter of fact [3] studies context-free/sensitive bases).
A Decomposed Form Relying on Congruences
Having stated some sufficient conditions ensuring that the union/concatenation of two consensual languages can be obtained by composing (as described by Th. 1 and Th. 2) the corresponding base languages, we design a decomposed form, suitable for supporting joinability and concatenability, that uses module arithmetic for assigning the positions of the dotted and undotted letters within a word w over Σ; the preceding examples offered some intuition for the next formal developments. Loosely speaking, each decomposed base language is "personalized" by a sort of unique pattern of dotted/undotted letters, such that, when we want to unite or concatenate two languages, the match of two words with different patterns is undefined, thus ensuring that the union or concatenation of the two decomposed bases specifies the intended language composition.
For every a ∈ Σ, consider the projection of w on ã = {a, å} and, in there, the numbered positions of each a and å. Let m be an integer. By prescribing that, for each base language, each undotted letter a may only occur in positions j characterized by a specified value of the congruence j mod m, we make the bases decomposed. We need a new definition.
Definition 6 (Slots). Let m ≥ 4 be an even integer, called the module. A set of slots with module m is a nonempty set R ⊆ {1, . . . , m/2 − 1}. For every a ∈ Σ, define the finite language R_m(a) = { å a^{r−1} å a^{m−r−1} | r ∈ R }. The disjoint regular languages sc-R_m, fl-R_m ⊆ Σ̃* are defined as:
sc-R_m = { x | ∀a ∈ Σ, π_a(x) ∈ (R_m(a) ∪ a)* },   fl-R_m = switch(sc-R_m) − Σ̊*.
The definition of fl-R_m is clearly equivalent to { x | ∀a ∈ Σ, π_a(x) ∈ (switch(R_m(a)) ∪ å)* } − Σ̊*. It is fairly obvious that C(B) = Σ+ for B = sc-R_m ∪ fl-R_m, since Σ+ ⊆ sc-R_m. Also, sc-R_m@sc-R_m = ∅ and fl-R_m is unproductive. The following lemma is also obvious.

Lemma 2. For every set of slots R with module m, the base sc-R_m ∪ fl-R_m is in decomposed form, with scaffold sc-R_m and fill fl-R_m.

Example 5. Let m = 6 and R = {2}, so that R_6(a) = åaåaaa and R_6(b) = åbåbbb. Examples of words in C(B) are:
a^6 b^6 ∈ sc-R_6; also a^6 b^6 = åaåaaabbbbbb (in sc-R_6) @ aåaåååb̊b̊b̊b̊b̊b̊ (in fl-R_6);
a^9 b^8 ∈ sc-R_6; also a^9 b^8 = åaåaaaaaabbbbbbbb (in sc-R_6) @ aåaååååååb̊b̊b̊b̊b̊b̊b̊b̊ (in fl-R_6);
(ab)^4 aaabb ∈ sc-R_6; also (ab)^4 aaabb = åbabåbabaaabb (in sc-R_6) @ ab̊åb̊ab̊åb̊åååb̊b̊ (in fl-R_6).
To ensure that a base included in sc-R_m ∪ fl-R_m can be used when two such languages are concatenated, we need the next simple concept.
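The signature behaviour of the slots is easy to demonstrate with the lowercase/uppercase encoding introduced earlier (again, a sketch of ours, reusing the match function defined above): a fill block only matches a scaffold block built from the same slot value.

def scaffold_block(a, r, m):
    # å a^(r-1) å a^(m-r-1): the factor of R_m(a) for slot r (dots uppercase)
    A = a.upper()
    return A + a * (r - 1) + A + a * (m - r - 1)

def fill_block(a, r, m):
    # the switch of the scaffold block, i.e. a factor of switch(R_m(a))
    return scaffold_block(a, r, m).swapcase()

# module 6: a fill with slot 2 completes a scaffold with slot 2 ...
print(match(fill_block("a", 2, 6), scaffold_block("a", 2, 6)))  # 'aaaaaa'
# ... but is unmatchable with a scaffold using a different slot:
print(match(fill_block("a", 1, 6), scaffold_block("a", 2, 6)))  # None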
Definition 7 (Shiftability). A language F ⊆ Σ̃* is shiftable if F = Σ̊* · F · Σ̊*.
This means that any word in F remains legal when it is padded to the left or to the right with arbitrary dotted words.
Next we show that by taking disjoint sets of slots over the same module, we obtain two bases that are joinable; if, in addition, the fills are shiftable, the condition for concatenability is satisfied.
Theorem 3. Let R′, R′′ be disjoint sets of slots with the same module m, and let E′, E′′ be bases in decomposed form with sc-E′ ⊆ sc-R′_m, fl-E′ ⊆ fl-R′_m, sc-E′′ ⊆ sc-R′′_m, fl-E′′ ⊆ fl-R′′_m. Then: (1) E′ and E′′ are joinable; (2) if, in addition, fl-E′ and fl-E′′ are shiftable, then E′ and E′′ are concatenable.

Proof. Part (1): we show that the pair (fl-R′′_m, sc-R′_m) is unmatchable (the proof that the pair (fl-R′_m, sc-R′′_m) is unmatchable is symmetrical). By contradiction, assume that there exist x ∈ fl-R′′_m and y ∈ sc-R′_m such that x@y is defined. Let a ∈ Σ be a letter occurring undotted in x (one exists, since x ∉ Σ̊*) and consider the projection α = π_a(x). By definition of fl-R′′_m, there exist a position q of α and a value r′′ ∈ R′′ such that α(q) = α(q + r′′) = a. Then, for α@α′ to be defined with α′ = π_a(y), it must be that α′(q) = α′(q + r′′) = å. But in α′, by definition of sc-R′_m, two occurrences of å either belong to the same factor of R_m(a), and then lie at a distance r′ ∈ R′, or belong to different factors, and then lie at a distance greater than m/2. Since r′′ < m/2 and r′′ ∉ R′, both cases are impossible by definition of matching, a contradiction. The same argument can be applied to show that the other two pairs required for joinability are unmatchable.
The proof of Claim (6) requires another technical definition. Given a set R of slots with module m, for a ∈ Σ and for every α ∈ π_a(sc-R_m), a restarting point for the projection α is a position i, 1 ≤ i ≤ |α| − m + 1, such that α(i, i + m − 1) ∈ R_m(a). Hence, at i there begins a factor in R_m(a). A symmetrical definition holds if α ∈ π_a(fl-R_m): the factor α(i, i + m − 1) is in switch(R_m(a)). A restarting point always exists for all α ∈ π_a(sc-R_m) or α ∈ π_a(fl-R_m), provided that α contains at least one dotted letter (for sc-R_m) or at least one undotted letter (for fl-R_m). We claim that if s ∈ sc-R_m and f ∈ fl-R̄_m, for some (possibly equal) sets of slots R, R̄ with module m, and the match s@f is defined, then both the following conditions hold: R̄ ⊆ R, (7) and, ∀a ∈ Σ, the set of restarting points for π_a(f) is included in the set of restarting points for π_a(s). (8) Since f ∉ Σ̊*, there exists at least one a ∈ Σ such that π_a(f) has a factor in switch(R̄_m(a)), i.e., there exists a restarting point p for π_a(f). For brevity, let α = π_a(f); hence 1 ≤ p ≤ |α| − m + 1. Therefore, there exists r ∈ R̄ such that α(p) = α(p + r) = a. Consider now β = π_a(s). Since s@f was assumed to be defined, β(p) = β(p + r) = å. There are two possibilities: either p is a restarting point also for β, hence r ∈ R and the above claims follow, or p is not a restarting point for β. The latter case is, however, impossible. In fact, in this case p + r would be a restarting point for β, because of the form of R_m(a). Therefore, since β(p) = å, there would be a restarting point also at position p − r′, for some r′ ∈ R. However both r and r′ are, by definition, smaller than m/2, therefore 2 ≤ r + r′ ≤ m − 2. Hence, the restarting point at p − r′ would be at a distance less than m from the restarting point at p + r, which is impossible by the definition of R_m(a).
We prove Claim (6) to finish. For every a ∈ Σ, let q′_a = |π_a(y′)| and let q′′_a = |π_a(y′′)|. Consider the rightmost restarting point p_a for π_a(w′). By definition of fl-E′, there exists r′ ∈ R′ such that π_a(w′)(p_a, p_a + m − 1) = a å^(r′−1) a å^(m−r′−1). By Claim (8), p_a is also a restarting point for π_a(y′ · y′′): there exists r ∈ R′ ∪ R′′ such that π_a(y′ y′′)(p_a, p_a + m − 1) = å a^(r−1) å a^(m−r−1). We claim that p_a ≤ q′_a. In fact, if p_a > q′_a, then p_a must be a restarting point for y′′, hence r ∈ R′′: but r = r′, a contradiction with the hypothesis that R′ ∩ R′′ = ∅. If p_a ≤ q′_a then p_a must be a restarting point for π_a(y′), hence r = r′ and actually p_a ≤ q′_a − m. Since p_a is the rightmost restarting point, π_a(w′)(p_a + m, q′_a + q′′_a) ∈ Σ̊⁺. Choose x′ to be the prefix of w′ such that w′ = x′ dot(y′′).
Commutative SLIP languages and their (∪, ·)-closure
This section proves the main result:
Theorem 4 (Closure under union and concatenation). The family COM-SLIP ∪,· is strictly included in the family of consensually regular languages: COM-SLIP ∪,· ⊂ CREG.
Every language in COM-SLIP ∪,· can be defined by an expression that combines finitely many COM-SLIP languages, using union and concatenation; since COM-SLIP is the finite union of COM-LIP languages, we may assume that the expression includes only COM-LIP, rather than COM-SLIP, languages.
In the sequel, we prove that every COM-LIP language can be consensually defined in a decomposed form that satisfies the additional assumptions needed for union and concatenation; hence all COM-SLIP ∪,· languages are in CREG.
Decomposed form for COM-LIP languages
To expedite handling the constant terms of LIP systems, we introduce a new operation, append, that combines a language and a commutative language, the latter penetrating into the former. Definition 8 (Appending). Let B be a language over the double alphabet Σ̃. For a ∈ Σ, define the (unique) factorization B = B_a · B_{Σ−a}, where B_a ⊆ Σ̃* · {a, å} and B_{Σ−a} ⊆ (Σ̃ − {a, å})* are languages, resp. ending with a (or å), and not using the letters a, å.
If neither a nor å occurs in B, let B_a = ε. Let A ⊆ a⁺; we define the operation, named appending A to B, as B ✁ A = B_a · A · B_{Σ−a}. Given a commutative language F ⊆ Σ*, with Σ = {a₁, …, a_k}, the iterative application of the previous operation to every letter of the alphabet (in any order) defines the operation, named letter-by-letter appending F to B, as: B ✁ F = (…((B ✁ π_{a₁}(F)) ✁ π_{a₂}(F)) … ) ✁ π_{a_k}(F).
In the remainder of the section, let L be a COM-LIP language over Σ = {a₁, …, a_k}, k > 0, defined by a constant c and periods P = {p⁽¹⁾, …, p⁽q⁾}, for some q > 0, with the condition that for every p ∈ P, every component p_i is even.
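As a concrete reading of this setting, the sketch below checks membership in a COM-LIP language via its Parikh image, assuming (as the surrounding text suggests) that such a language is the commutative language whose Parikh vectors form the linear set c + n₁p⁽¹⁾ + … + n_q p⁽q⁾ with n_j ≥ 0; the function and variable names are hypothetical.

# Sketch: membership test for a COM-LIP language via its Parikh image,
# assuming the language is the commutative language whose Parikh vectors
# form the linear set c + n_1 p^(1) + ... + n_q p^(q), with n_j >= 0.
from collections import Counter
from itertools import product

def in_com_lip(word, alphabet, c, periods):
    counts = Counter(word)
    v = [counts[a] for a in alphabet]
    target = [vi - ci for vi, ci in zip(v, c)]
    if any(t < 0 for t in target):
        return False
    # crude search bound; assumes every period has a positive component
    bound = max(target, default=0) + 1
    for coeffs in product(range(bound), repeat=len(periods)):
        combo = [sum(n * p[i] for n, p in zip(coeffs, periods))
                 for i in range(len(alphabet))]
        if combo == target:
            return True
    return False

# com((a^2 b^4)*): null constant, single period (2, 4)
print(in_com_lip('aabbbb', 'ab', (0, 0), [(2, 4)]))  # True
print(in_com_lip('aabbb',  'ab', (0, 0), [(2, 4)]))  # False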
The next definition introduces some sets, called X, Y, W, used to define the COM-LIP language L by a base D in decomposed form. The assumption that each p_i is even will be lifted when defining COM-SLIP languages.
Definition 9.
For all even integers m ≥ 4, and for all sets of slots R of the form {r} with 0 < r < m/2, define the regular languages X, Y, D ⊆ Σ̃* and the finite commutative language W ⊆ Σ*, with D = X ∪ (Y ✁ W). It is obvious that X ⊆ fl-R_m. To see that Y ✁ W ⊆ sc-R_m, we first describe the relevant features of the formulae. By Eq. (11), W is the finite commutative language having as Parikh image the linear subspace included between c and c + (m/2 − 1) p⁽¹⁾ + … + (m/2 − 1) p⁽q⁾. For each a_i, the projection on a_i of a word in Y ✁ W ends with a tail of undotted a_i's defined by Eq. (11). While the projection on a_i of a word of Y necessarily has length a multiple of m, the tail does not need to comply with such a constraint, thus allowing, in principle, the language Y ✁ W to contain words whose projections on a_i have any length greater than or equal to c_i (within the specified subspace). The following lemma is immediate.

Lemma 3. Let X, Y, W, D be as in Def. 9. Then D is a decomposed base included in sc-R_m ∪ fl-R_m, with Y ✁ W ⊆ sc-R_m being the scaffold and X ⊆ fl-R_m being the fill; moreover, the fill of D is shiftable, i.e., X = Σ̊* X Σ̊*.

Example 6. Consider the language L′′_even = com((a²b⁴)*), having the periods p_a = 2, p_b = 4 and null constant. Notice that, to obtain the language com((ab²)*), it is enough to take the union of L′′_even and the language L′′_odd = com(abb(a²b⁴)*), which can be defined with the same periods p_a = 2, p_b = 4, and with constant c_a = 1, c_b = 2. If the module is m = 6 and the set of slots is R = {2}, then R₆(a) = åaåa³ and R₆(b) = b̊bb̊b³.
Also, fl-R₆ = { x | π_a(x) ∈ (aåaå³ ∪ å)*, π_b(x) ∈ (bb̊bb̊³ ∪ b̊)* } − {å, b̊}*. Let X and Y be instantiated accordingly; both X and Y satisfy Def. 9. To complete the base of the language L′′_even, we define the corresponding W and D. The fill {å, b̊}* X {å, b̊}* and the scaffold Y ✁ W are a decomposed form for L′′_even. Similarly, to define L′′_odd, we have to define the sets X′, Y′, W′; for X′, Y′ we select the set of slots R′ = {1}, which satisfies R ∩ R′ = ∅. The important property of the language in Eq. (9) is stated next.
Since the language C(D) is commutative and v ∈ C(D), also u ∈ C(D).
We can now complete the proof of Th. 4. Since a COM-SLIP language is a finite union of COM-LIP languages, a COM-SLIP ∪,· language is a combination, by union and concatenation, of COM-LIP languages. It can be assumed that these COM-LIP languages comply with Def. 9, having only even components in every vector of the set P of periods (since otherwise they can be represented as a finite union of COM-LIP languages with this property). Select the same module and pairwise disjoint sets of slots for the decomposed bases of these COM-LIP languages. By Th. 3, since each COM-LIP language is defined by a shiftable base with disjoint sets of slots, the various bases can be combined with ∪ and ⊙, resulting in a shiftable base. By Th. 1 and Th. 2, the result is still a consensual language (with a decomposed base). The inclusion is strict, since the language {ba^1 ba^2 ba^3 … ba^k | k ≥ 1} has a non-SLIP commutative image, but it is in CREG [2].
Related Work and Conclusion
By classical results, COM-SLIP ∪,· is included in the class of languages recognized by reversal-bounded multi-counter machines (MCMs) [1,8], a class which is also closed under concatenation. The latter class admits different, but equivalent, characterizations: as the class of languages recognized by (nondeterministic) blind MCMs [7], or as the minimal, intersection-closed full semi-AFL including the language com((ab)*) [1,6]. However, the cited papers are not concerned with actual construction methods for the MCMs. Although COM-SLIP languages have been much studied, we are not aware of any specific study of the effect on COM-SLIP of operations such as concatenation.
Concerning the techniques to specify COM-SLIP languages, our specification, using as patterns the commutative Parikh vectors, bears some similarity to Kari's [10] "scattered deletion" operation.
It is known that the family COM-SLIP, when restricted to a binary alphabet, is context-free [9,13], therefore it enjoys closure under concatenation and star. On the other hand, we observe that the intersection I = L′⁴ ∩ a⁺L′²b⁺, where L′ = com((ab)⁺), is not context-free, since I ∩ (a⁺b⁺)⁴ = {a^n b^n a^n b^n a^n b^n a^n b^n | n > 1}.
In [13], the context-free grammar rules for COM-LIP again resemble our consensual specification. Also, the context-sensitive grammars in [11], obtained by adding permutative rules of the form AB → BA to context-free grammars, include COM-SLIP and of course its closure by concatenation and star, but not its intersection with regular languages.
Last, the COM-SLIP languages are included in the SLIP language family recognized by a formal device based on so-called restarting automata, studied in [12], but the grounds covered by CREG and by that family are quite different. Beyond the mentioned similarities, we are unaware of anything related to our congruence-based decomposed form.
Unanswered questions
This paper has added a piece to our knowledge of the languages included in CREG; it has introduced a novel compositional construction for union/concatenation, which is very general and hence likely to be useful for other language subfamilies included in CREG. Some natural questions concern the closures of COM-SLIP under other basic operations: is the intersection of two COM-SLIP languages, or the Kleene star of a COM-SLIP language, in CREG?
A different kind of problem is whether the only commutative languages that are in CREG are semilinear; for instance, the nonsemilinear non-commutative language {ba 1 ba 2 ba 3 . . . ba k | k ≥ 1} is in CREG, but, for its commutative closure, we do not know of a consensually regular specification. Last, a more general problem is whether CREG is closed under union, concatenation, and star. A possible approach is to investigate whether every CREG language may be defined by a base which is joinable and shiftable, thus obtaining closure under union and concatenation by virtue of the lemmas presented in this paper. | 2014-05-21T19:13:46.000Z | 2014-05-21T00:00:00.000 | {
"year": 2014,
"sha1": "652725dfa9e8bdb7be5bd95372d29b6573c42af9",
"oa_license": "CCBYNCND",
"oa_url": "https://arxiv.org/pdf/1405.5604",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "652725dfa9e8bdb7be5bd95372d29b6573c42af9",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
258426645 | pes2o/s2orc | v3-fos-license | Effects of detection loophole on rival entanglement attestation techniques
Loopholes present in an experimental set-up can significantly affect the reliability of entanglement detection. We discuss two methods for the detection of entanglement: one uses the positive partial transposition criterion after quantum state tomography, and the other estimates the second and third moments of the partial transposition of the quantum state through random classical snapshots. We examine the impact of inaccuracies in these detection methods by considering the presence of spurious clicks or the suppression of valid clicks in the detectors. By comparing the two methods, we observe that the condition based on partial transposition moments is more robust to missing counts than the positive partial transposition criterion. Moreover, we find that in the presence of additional counts, neither criterion misinterprets any separable state as entangled. But in such a scenario, the condition based on the moments cannot certify any state as entangled unless the additional event efficiency is about 0.9 or higher.
I. INTRODUCTION
Entanglement [1][2][3] is a type of quantum correlation: when it exists between the subsystems of a composite system, the individual subsystems cannot be fully characterized while ignoring the others, even when full knowledge of the complete system is available [1]. Such a correlation was first highlighted by Einstein, Podolsky, and Rosen [4]. The main feature of this correlation is that it lacks any classical analog. The discovery of entanglement opened a new path towards the success of quantum technology and communication. Quantum teleportation [5] and quantum dense coding [6] are some examples of the various applications of entanglement. Entanglement is also used in quantum security protocols, such as quantum key distribution [7][8][9].
Because of entanglement's wide applications in numerous quantum protocols, the creation and detection of entanglement is a crucial field of research. Since the discovery of entanglement, various criteria have been suggested for its detection. The positive partial transposition (PPT) criterion [10,11], linear and nonlinear entanglement witnesses [2,12,13], the CHSH Bell inequality [2,14], and conditions based on partial transposition (PT) moments [15][16][17][18] are some examples of efficient detection methods.
The effectiveness of any particular criterion over the others depends on the context. For example, the PPT criterion provides a necessary and sufficient condition for the detection of entanglement shared between two-qubit or qubit-qutrit systems [19], but its experimental verification requires complete state tomography; whereas, though the condition based on moments of partial transposition does not provide a necessary criterion, it can be efficiently verified experimentally without the need for state tomography.
Transposition is an operation which is positive but not completely positive. This peculiar but interesting property of the operation gives us the PPT criterion for the detection of entanglement. The criterion was first introduced by Peres and Horodecki in Ref. [10]. According to the condition, if after partially transposing a density operator the resultant operator becomes negative, then the density matrix, be it pure or mixed, must represent an entangled state. Since partial transposition does not necessarily map positive operators to positive operators, it cannot be implemented experimentally. Thus, the usual method to verify the criterion is to obtain the complete matrix form of the state through tomography [20][21][22][23] and then analytically check the positivity of the operator found by applying partial transposition to the tomographically obtained state.
Based on the same non-complete positivity of partial transposition, another criterion for the detection of entanglement was introduced. According to this criterion, if the third moment of the partial transposition of a shared state is less than the square of the second moment of the same, then the corresponding state is entangled [18]. One of the advantages of this new criterion is that its constituents, i.e., the second and third moments, can be estimated in experiments without performing complete state tomography [15][16][17][18][24].
In general, theories are developed based on idealized assumptions that, in reality, may not be satisfied. For example, in physical experiments, the errors present in the apparatus cannot always be completely ignored, implying that they can have a significant impact on the result. Entanglement detection is no exception. In this paper, we consider an erroneous measurement process for entanglement detection. In particular, the motive is to observe the effect of the presence of detection errors in experiments. To this aim, we focus on two distinct methods for the detection of entanglement, viz., the PPT and moment-based criteria.
Detection loopholes have been previously discussed in a series of works in the context of entanglement detection using Bell's inequality [25][26][27][28][29][30][31] and entanglement witnesses [32][33][34][35]. Within a measurement set-up, erroneous detectors can give rise to additional clicks in any particular measurement direction as well as missed clicks in another or the same direction. This leads to the definition of two detector efficiencies: the additional event efficiency and the lost event efficiency. We explore two scenarios separately: in the first case the additional event efficiency is one with arbitrary lost event efficiency, and in the other case only the lost event efficiency is one. To examine the effect of the non-unit efficiencies, we focus on the set of Werner states and obtain the range of the parameter defining the Werner states within which a separable Werner state can appear to be entangled.
We compare the two ranges of the parameter corresponding to wrong detections, considering the PPT and moment-based criteria. It is observed that, in the presence of lost events in experimental detection using the moment-based criterion, there exists a threshold value of the lost event efficiency beyond which fewer entangled states can be detected but no separable state mistakenly appears to be entangled. Unfortunately, in the case of the PPT criterion, even a small deviation of the lost event efficiency from unity results in false detection. Even when both criteria fail to correctly detect the states, the volume of separable states being certified as entangled is smaller for the moment-based criterion than for the PPT criterion. In the presence of additional counts, the effectiveness of the two criteria somewhat reverses. In such a scenario, the PPT criterion can correctly detect a finite volume of entangled Werner states for a wide range of values of the efficiency and is free from erroneous detection, whereas the moment-based criterion is unable to certify any Werner state as entangled whenever the efficiency is less than 0.9.
The rest of the paper is organized as follows. In Sec. II, we briefly discuss the entanglement detection methods using the PPT and moment-based criteria. Sec. III consists of discussions of the detection processes using the PPT and moment-based criteria in the presence of imperfect detectors in the laboratory set-up. Finally, we present a concise conclusion in Sec. IV.
II. DETECTION OF ENTANGLEMENT
There exist various methods for the detection of entanglement. In the following two subsections, we briefly recapitulate two of them.
A. PPT criterion
None of the existing computationally efficient methods can detect all the entangled states in every dimension. Partial transposition is a widely used method which provides a necessary and sufficient criterion for detecting entanglement shared between qubit-qubit or qubit-qutrit systems.
Let ρ_AB be a bipartite quantum state shared between Alice and Bob, acting on the Hilbert space H_A ⊗ H_B. Here the suffixes A and B correspond to Alice's and Bob's parts, respectively.
Let ρ_{ab,cd} = ⟨a|⟨b| ρ_AB |c⟩|d⟩ be a particular element of the density matrix ρ_AB, where |a⟩ (|b⟩) and |c⟩ (|d⟩) are elements of Alice's (Bob's) basis. The partial transposition of a state ρ_AB, taken on Alice's subsystem, is denoted by ρ^{T_A}_{AB}. Under the action of the partial transposition on Alice's subsystem, any arbitrary element ρ_{ab,cd} of the density matrix ρ_AB transforms to ρ_{cb,ad}, i.e., the indices of Alice's subsystem get swapped but Bob's indices remain unchanged. Transposition is a positive map, but it is not completely positive. Thus, though ρ^T_{AB} (transposition of ρ_AB over the composite basis of H_A ⊗ H_B) is a positive operator, ρ^{T_A}_{AB} may not be positive. This property gives birth to the entanglement detection criterion which says that if ρ^{T_A}_{AB} is non-positive then ρ_AB is surely entangled. The converse is also true in 2 ⊗ 2 and 2 ⊗ 3 systems, but for higher dimensions the positivity of ρ^{T_A}_{AB} does not confirm the separability of ρ_AB. This is commonly known as the positive partial transpose (PPT) criterion or Peres-Horodecki criterion [10,11].
Let us discuss an example. The Werner state is given by ρ_w = P |ψ⁻⟩⟨ψ⁻| + (1 − P)/4 · I₄, where |ψ⁻⟩ = (|01⟩ − |10⟩)/√2 is the singlet state and I₄ is the 4×4 identity. Here the range of P can be considered to be [0, 1]. Therefore, the matrix form of ρ_w in the computational basis is

ρ_w = (1/4) [[1−P, 0, 0, 0], [0, 1+P, −2P, 0], [0, −2P, 1+P, 0], [0, 0, 0, 1−P]].

If we take the partial transposition of the Werner state ρ_w with respect to the first party, say A, then we have

ρ^{T_A}_w = (1/4) [[1−P, 0, 0, −2P], [0, 1+P, 0, 0], [0, 0, 1+P, 0], [−2P, 0, 0, 1−P]].

One can easily check that the four eigenvalues of ρ^{T_A}_w are (1 + P)/4, (1 + P)/4, (1 + P)/4, and (1 − 3P)/4. Three of the eigenvalues of ρ^{T_A}_w are equal and positive, but the eigenvalue (1 − 3P)/4 can be negative for a certain range of values of P. It can easily be checked that within the range 1/3 < P ≤ 1 this eigenvalue becomes negative, implying that the state ρ_w is entangled within this range and is separable otherwise.
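The eigenvalue computation can be reproduced numerically; the following NumPy sketch (basis ordering |00⟩, |01⟩, |10⟩, |11⟩, and the singlet-based Werner state as above) is an illustration, not the authors' code.

# Numerical check of the PPT criterion for the Werner state.
import numpy as np

def werner(P):
    psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)     # singlet
    return P * np.outer(psi_minus, psi_minus) + (1 - P) * np.eye(4) / 4

def partial_transpose_A(rho):
    # rho_{ab,cd} -> rho_{cb,ad}: reshape to indices (a,b,c,d), swap a and c
    r = rho.reshape(2, 2, 2, 2)
    return r.transpose(2, 1, 0, 3).reshape(4, 4)

for P in (0.2, 1/3, 0.5, 1.0):
    eigs = np.linalg.eigvalsh(partial_transpose_A(werner(P)))
    print(f"P={P:.3f}  min eigenvalue = {eigs.min():+.4f}  "
          f"(expected (1-3P)/4 = {(1 - 3 * P) / 4:+.4f})")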
B. Second and third moments
In this section, we discuss the PT-moments of a general bipartite state, which are another set of faithful quantities for detecting entanglement. In general, the nth PT-moment of a bipartite state ρ_AB is defined as p_n(ρ_AB) = Tr[((ρ_AB)^{T_A})^n], where n is any positive integer defining the order of the PT-moment. p₁(ρ_AB) is always 1 for any state ρ_AB, p₂(ρ_AB) quantifies the purity of ρ_AB, and p₃(ρ_AB) is the lowest-order moment that carries information about the partial transposition taken over subsystem A.
Let us mention an inequality by which we can detect the entanglement of a bipartite state: if the state ρ_AB is separable, then p₃(ρ_AB) ≥ p₂(ρ_AB)²; thus, if p₃(ρ_AB) < p₂(ρ_AB)², we can surely certify the state ρ_AB as entangled [15][16][17][18]. This condition is called the p₃-PPT criterion.
For the Werner state ρ_w, p₂(ρ_w) and p₃(ρ_w) are (1 + 3P²)/4 and (1 + 9P² − 6P³)/16, respectively. Using the p₃-PPT criterion, a range of P can be found beyond which ρ_w is certified as entangled, and it is found to be 1/3 < P ≤ 1. This range is the same as the range of P that was obtained using the PPT criterion.
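Continuing the previous sketch, the PT-moments and the p₃-PPT test can be checked directly; note that the closed form of p₃ carries a denominator of 16, consistent with the eigenvalues (1 + P)/4 (three times) and (1 − 3P)/4.

# Sketch: PT-moments p2, p3 of the Werner state and the p3-PPT test,
# reusing werner() and partial_transpose_A() from the previous snippet.
import numpy as np

for P in (0.2, 1/3, 0.6, 1.0):
    rt = partial_transpose_A(werner(P))
    p2 = np.trace(rt @ rt).real          # closed form: (1 + 3P^2)/4
    p3 = np.trace(rt @ rt @ rt).real     # closed form: (1 + 9P^2 - 6P^3)/16
    print(f"P={P:.3f}  p2={p2:.4f}  p3={p3:.4f}  "
          f"entangled by p3-PPT: {bool(p3 < p2**2)}")

At P = 1/3 the two sides coincide up to rounding, reproducing the boundary of the detection range 1/3 < P ≤ 1.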
III. PRESENCE OF DETECTION LOOPHOLE IN ENTANGLEMENT DETECTION
Any theory is usually built on some ideal assumptions. But in real life, when we perform experiments, these ideal assumptions may not remain valid. There can always be some noise present in the apparatus that can deflect the experimental set-up from its theoretical structure. These unavoidable errors can affect the final conclusion of the experiment.
Various sorts of loopholes can be present in entanglement detection experiments, for example the detection loophole [25][26][27][28][29][30][31][32][33][34][35], the locality loophole [36], the coincidence loophole [37], etc. Among them, we examine the detection loophole. In this context, we examine two particular detection methods: one based on the PPT criterion and the other on the moment-based criterion.
A. PPT criterion
Since partial transposition is not a physical operation, it cannot be implemented experimentally. Thus the usual method of detection of entanglement using the partial transposition criterion involves state tomography [20][21][22][23].
Experimental arrangements required for quantum state tomography are depicted in Fig. 1. Here the black box is nothing but a source of pairs of photons. Let |H⟩ and |V⟩ denote the two polarization states of each of the photons, i.e., horizontal and vertical, respectively. As shown in Fig. 1, a quarter wave plate (QWP) and a half wave plate (HWP) are placed within the path of each photon, along with a polarizer. After the polarizer, the pair of photons passes through a pair of detectors where the polarizations of the photons are measured [20].
From now on we will use this particular basis to represent any operator acting on the single-qubit Hilbert space. To express the polarization state of the two photons collectively we use the composite basis B = {|HH⟩, |HV⟩, |VH⟩, |VV⟩}. A polarizer, a quarter wave plate, and a half wave plate are used to project the light beams in a particular direction; the projected two-photon state |ψ_ν⟩ is determined by the unitary operations U_H(h) and U_Q(q) representing the action of the half and quarter waveplates, respectively. The parameters h and q are the angles made by the half and quarter wave plates with the vertical axis, respectively. For any arbitrary state ρ, the number of observed coincidences in a particular direction, defined using the set of angles S_ν = {h_ν1, h_ν2, q_ν1, q_ν2}, is given by N_ν = N ⟨ψ_ν| ρ |ψ_ν⟩. The constant N denotes the total number of outcomes when measuring in a particular basis. We assume that the measurement has been performed an equal number of times in every required basis; thus, ideally, N does not depend on the basis in which the measurement is done. In the ideal scenario, N equals the total number of times the measurement has been performed in a particular basis.
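For concreteness, a sketch of waveplate unitaries follows. These are the standard Jones matrices for ideal quarter- and half-wave plates at angle θ to the vertical axis (as in, e.g., James et al., Phys. Rev. A 64, 052312 (2001)); sign conventions vary between references, so treat these matrices as one assumed convention rather than the paper's definitive forms.

# Sketch of waveplate unitaries used in polarization tomography
# (assumed standard Jones-matrix convention).
import numpy as np

def U_qwp(q):
    # ideal quarter-wave plate at angle q
    return (1 / np.sqrt(2)) * np.array([[1j - np.cos(2 * q), np.sin(2 * q)],
                                        [np.sin(2 * q), 1j + np.cos(2 * q)]])

def U_hwp(h):
    # ideal half-wave plate at angle h
    return np.array([[np.cos(2 * h), -np.sin(2 * h)],
                     [-np.sin(2 * h), -np.cos(2 * h)]])

# a projection setting for one photon is U_hwp(h) @ U_qwp(q) applied to
# a fixed analyzer state; unitarity is easy to verify:
for U in (U_qwp(0.3), U_hwp(0.7)):
    assert np.allclose(U @ U.conj().T, np.eye(2))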
For two-qubit states, the minimum number of parameters required to be obtained through the state tomography is 16. Among them, 15 parameters determine the density matrix ρ, and the 16th determines the constant N. Thus, to complete the state tomography, 16 sets of angles S_ν are needed; therefore ν must take at least sixteen values. To present a condition on the 16 sets S_ν to be suitable for the state tomography, let us first introduce 16 linearly independent matrices Γ_μ, which satisfy the properties Tr[Γ_μ Γ_ν] = 4δ_{μν} and X = (1/4) Σ_μ Γ_μ Tr[Γ_μ X]. Here X is any arbitrary 4×4 matrix. An example of such Γ_μ matrices is σ_i ⊗ σ_j for i, j = 0, 1, 2, 3. Here, σ₁, σ₂ and σ₃ are Pauli matrices and σ₀ is the 2×2 identity operator. Any density matrix ρ can be expressed in terms of the Γ_μ as ρ = (1/4) Σ_{μ=1}^{16} r_μ Γ_μ. From the trace relations of the Pauli matrices, we have r_μ = Tr[Γ_μ ρ]. Another 16×16 dimensional matrix K can be defined as K_{ν,μ} = ⟨ψ_ν| Γ_μ |ψ_ν⟩. Therefore, the number of counts in a particular direction S_ν can be expressed as N_ν = (N/4) Σ_μ K_{ν,μ} r_μ. Hence the density matrix can be represented in terms of these experimentally accessible parameters N_ν in the following way: ρ = (1/N) Σ_ν M_ν N_ν, (1) where M_ν = Σ_μ (K⁻¹)_{μ,ν} Γ_μ. The matrices M_ν satisfy the completeness relation Σ_{ν=1}^{16} M_ν = I₄ (for a proof, see the appendix of Ref. [20]). To perform the tomography, one should select the sets S_ν in such a way that the inverse K⁻¹ exists.
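The linear inversion just described can be illustrated end-to-end with NumPy. The 16 projection states below are a hypothetical choice (tensor products of |0⟩, |1⟩, |+⟩, |+i⟩) made only so that K is invertible; the construction of K, of the M_ν and the reconstruction follow the relations above.

# Sketch of linear tomographic inversion with Gamma_mu = sigma_i (x) sigma_j.
import numpy as np
from itertools import product

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Gammas = [np.kron(a, b) for a, b in product([s0, sx, sy, sz], repeat=2)]

# hypothetical projection states: products of |0>, |1>, |+>, |+i>
kets1 = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex),
         np.array([1, 1]) / np.sqrt(2), np.array([1, 1j]) / np.sqrt(2)]
psis = [np.kron(u, v) for u, v in product(kets1, repeat=2)]

K = np.array([[np.vdot(psi, G @ psi) for G in Gammas] for psi in psis])
Kinv = np.linalg.inv(K)
M = [sum(Kinv[mu, nu] * Gammas[mu] for mu in range(16)) for nu in range(16)]
assert np.allclose(sum(M), np.eye(4))        # completeness relation

# reconstruct a state from ideal counts N_nu = N <psi_nu| rho |psi_nu>
rho = np.diag([0.5, 0.2, 0.2, 0.1]).astype(complex)
N = 10_000
counts = [N * np.vdot(psi, rho @ psi).real for psi in psis]
rho_rec = sum(n * m for n, m in zip(counts, M)) / N
assert np.allclose(rho_rec, rho)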
Till now we have discussed the experiment assuming an ideal setting. But in realistic situations, various types of noise present in the apparatus can significantly influence the results. Let us examine how the presence of imperfections in the detectors can affect the entanglement detection. Because of the imperfections, events can get lost or extra events can appear in the detector. In this inaccurate situation, the observed form of the density matrix will be ρ′ = (1/N) Σ_ν M_ν N′_ν, where N is the total number of outcomes when measuring on a particular basis and N′_ν are the recorded counts. The performance of an experimental set-up in the presence of errors is best quantified by its efficiency. In this context, two efficiencies can be defined, viz., the additional and lost event efficiencies, given by η₊ = N/(N + ε₊) and η₋ = (N − ε₋)/N, respectively. Here ε₊ and ε₋ denote the total numbers of additional and lost events when the measurements are done on a particular basis, and N denotes the total number of outcomes in the ideal scenario.
We will consider two situations separately: in the first scenario we fix the additional event efficiency at η₊ = 1 and, in the second one, we consider η₋ = 1 with arbitrary η₊. In particular, we consider that the measurement processes have only additional events or only lost events, not both together.
Let, because of the presence of imperfections in the detectors, the number of clicks in the direction S_ν be N′_ν = N_ν ± δ^±_ν, (2) where N_ν is the number of times it should have clicked in the ideal situation. In Eq. (2), the + and − signs correspond to additional and lost count type errors, respectively. To simplify the calculations and reduce notational complexity, we take δ^±_ν = δ^± for all ν. Substituting the expression of N′_ν from Eq. (2) into Eq. (1) we get ρ′ = ρ ± (δ^±/N) I₄. (3) Here we have used the completeness relation of the M_ν. To determine whether the shared state is entangled, the experimentalists may determine the eigenvalues of the partial transposition of the erroneous state ρ′, i.e., ρ′^{T_A}, and check their positivity. It is interesting to note that although the magnitudes of the eigenvalues of ρ′^{T_A} depend on the total number of outcomes N, their positivity is determined only by the ratio δ^±/N rather than by N alone. Moreover, whether there is any wrong detection cannot be controlled through the choice of the M_ν, that is, through the angles {h_ν1, h_ν2, q_ν1, q_ν2}. Case 1: To get a deeper understanding, we consider the lost count type error and a scenario where the actual shared state is a Werner state. The experimentalists do not have any information about the shared state other than that it is a two-qubit state; they want to determine whether the shared state is entangled. The experimentalists consider a particular set of values for each of the angles h_ν1, h_ν2, q_ν1, q_ν2 such that the inverse of K exists. Because of the erroneous detectors, the matrix form they get is not exactly equal to ρ_w, but has a distinct structure, say ρ⁻_w.
Using Eq. (3), we find the measured form of the Werner state in the presence of lost counts: ρ⁻_w = ρ_w − (δ⁻/N) I₄. Therefore, if the experimentalists check the PPT criterion, they will certify all the states having a P value within the range 1/3 − 4δ⁻/(3N) < P ≤ 1 as entangled. We see that, for any given value of δ⁻ and N, the states corresponding to the range 1/3 − 4δ⁻/(3N) < P ≤ 1/3 are separable, but will be falsely detected as entangled. Since in this case the state under consideration is two-qubit, that is, the dimension of the Hilbert space is 4, the total number of lost events on a basis is ε₋ = 4δ⁻. Hence, the lost event efficiency is given by η₋ = (N − 4δ⁻)/N. Thus the range of incorrect detection can be expressed in terms of the lost event efficiency η₋ as η₋/3 < P ≤ 1. This implies P_c = η₋/3 is the cut-off value of the parameter P beyond which all states will appear as entangled.
Case 2: Let us now move to the next situation, where the lost event efficiency η₋ is unity but the additional event efficiency η₊ has an arbitrary value. In this scenario, because of over counts, ρ⁺_w = ρ_w + (δ⁺/N) I₄. Following the same method as discussed in the previous case, we get the range of P for which the experimentalists will declare the Werner state as entangled: 1/3 + 4δ⁺/(3N) < P ≤ 1. Since, by considering ε₊ = 4δ⁺, we have η₊ = N/(N + 4δ⁺), the range of detected entangled states can be expressed in terms of the additional event efficiency η₊ as 1/(3η₊) < P ≤ 1. We see that in this case, although a smaller number of entangled Werner states would be certified as entangled, no separable state would be falsely detected as entangled.
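Both cut-offs can be verified numerically from the shifted-identity error model derived above (the sketch below reuses werner() from the earlier snippet; delta plays the role of the per-direction error δ⁻, and the additional-count case works the same way with a '+' shift).

# Numerical check of the lost-count cut-off P_c = eta_-/3.
import numpy as np

def min_ppt_eig(rho):
    r = rho.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)
    return np.linalg.eigvalsh(r).min()

N = 10_000
for eta_minus in (1.0, 0.8, 0.5):
    delta = N * (1 - eta_minus) / 4        # from eta_- = 1 - 4*delta/N
    cutoff = eta_minus / 3
    for P in (cutoff - 0.02, cutoff + 0.02):
        rho_obs = werner(P) - (delta / N) * np.eye(4)   # lost-count model
        print(f"eta_-={eta_minus:.2f}  P={P:.3f}  "
              f"flagged entangled: {min_ppt_eig(rho_obs) < 0}")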
B. Evaluation of p2 and p3 moments
In this section, we first discuss the detection of entangled states through the evaluation of the p₂ and p₃ moments of partial transposition [18]. Then we implement the same type of error and explore its effects.
Let us consider an n-qubit system. We partition this system into two parts, say A and B; |A| and |B| are the numbers of qubits in the subsystems A and B, respectively.
The usual method of experimental estimation of the p₂ and p₃ moments of a state, say ρ_AB, involves the operation of local random unitaries U_i on each qubit of ρ_AB. As a result, each of the qubits gets rotated arbitrarily with respect to the others. The composite unitary acting on the complete n-qubit state is U = U₁ ⊗ U₂ ⊗ U₃ ⊗ … ⊗ U_n. After the operation of U, the state is projected on the computational basis. Let the outcome set of this projective measurement be K = {k₁, k₂, k₃, …, k_n}. The combined operation of local unitaries and projective measurement can be performed on, say, M copies of ρ_AB. The classical snapshots are defined as ρ̂⁽ʳ⁾ = ⊗_{i=1}^{n} [3 (U_i⁽ʳ⁾)† |k_i⁽ʳ⁾⟩⟨k_i⁽ʳ⁾| U_i⁽ʳ⁾ − I₂], where U_i⁽ʳ⁾ denotes the unitary operated on the ith qubit of the rth copy of the state ρ_AB, k_i⁽ʳ⁾ is the outcome of the measurement on the same qubit of the same copy after the operation of the unitary, and I₂ is the identity matrix which operates on the individual qubits. The unbiased estimator of the lth PT-moment, say p̂_l, can be obtained in terms of all possible combinations of l snapshots among the M snapshots; the estimator, given in Ref. [38], involves the permutation operators Π_A and Π_B acting on the copies of subsystems A and B, respectively. To implement the measurements theoretically, we randomly generated the set K_r by following the probability distribution P_α = ⟨α| U ρ_w U† |α⟩, i.e., the probability of the state U ρ_w U† being projected on |α⟩. Finally, we calculate the unbiased estimators of the p₂ and p₃ moments using this estimator. To find the exact values of p₂ and p₃, we repeat the process 100 times and take the average over the estimators. In the ideal case, the estimated values of p₂² and p₃ are found to be equal to (1 + 3P²)²/16 and (1 + 9P² − 6P³)/16, respectively, up to numerical errors of the order of 10⁻².
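A toy NumPy version of the randomized-measurement protocol follows, estimating p₂ for the two-qubit Werner state from single-qubit classical shadows with Haar-random local unitaries (reusing werner() from the earlier snippet). The pairing of snapshots below is a simplification of the full U-statistic estimator of Ref. [38]; it agrees with the exact value within statistical fluctuations.

# Toy randomized-measurement estimate of p2 = Tr[(rho^{T_A})^2].
import numpy as np
rng = np.random.default_rng(7)

def haar_u2():
    z = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # Haar-random 2x2 unitary

def snapshot(rho):
    u1, u2 = haar_u2(), haar_u2()
    U = np.kron(u1, u2)
    probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
    k = rng.choice(4, p=probs / probs.sum())       # computational-basis outcome
    k1, k2 = divmod(k, 2)
    def single(u, k):
        e = np.zeros(2); e[k] = 1.0
        return 3 * u.conj().T @ np.outer(e, e) @ u - np.eye(2)
    return np.kron(single(u1, k1), single(u2, k2))

def pt_A(m):
    return m.reshape(2, 2, 2, 2).transpose(2, 1, 0, 3).reshape(4, 4)

rho = werner(0.8)
M = 20_000
shots = [snapshot(rho) for _ in range(M)]
# average Tr[pt(s_i) pt(s_{i+1})] over disjoint pairs of snapshots
est = np.mean([np.trace(pt_A(shots[i]) @ pt_A(shots[i + 1])).real
               for i in range(0, M - 1, 2)])
print(est, "vs exact", (1 + 3 * 0.8**2) / 4)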
To realize the effect of non-ideal detectors, we replace the probabilities P_α by modified distributions P̄⁻_α (considering only lost events) or P̄⁺_α (considering only additional events). We again assume δ⁻_α = δ⁻ and δ⁺_α = δ⁺ for all α.
The modified probabilities P̄⁻_ν and P̄⁺_ν can be expressed in terms of the efficiencies η₋ and η₊, for lost and additional counts respectively. By varying the efficiencies η₋ and η₊, we calculate the cut-off value of P, say P_c, above which all the Werner states will be certified as entangled. To find this point, we calculate the values of p₂² and p₃ for different P and particular values of the efficiency η± in the range 0 to 1. By plotting p₃ and p₂² with respect to P, for a fixed efficiency η±, we find the value P_c, that is, the value of P at which the p₃ and p₂² curves intersect. For the region P > P_c, p₃ is less than p₂², indicating that the state is entangled.
In Fig. 2a, we plot P_c as a function of the lost event efficiency η₋, taking η₊ = 1, using red plus points. To compare the moment-based method with the PPT criterion, we plot the corresponding PPT cut-off, P′_c, in the same figure using a blue line. We also plot the actual value of P above which the Werner state becomes entangled, i.e., P = 1/3, using a black dashed line. From the figure, it is apparent that there is no wrong detection in the moment-based method for efficiencies above 0.402. But in the case of state tomography, a minor deviation from the ideal scenario results in wrong detection, that is, in certifying separable states as entangled. The higher slope of the curve joining the red pluses, compared with the blue line, confirms that even if the efficiency η₋ is less than 0.402, the p₃-PPT condition certifies fewer separable states as entangled than the combined method using state tomography and partial transposition. In Fig. 2b, we plot the same quantities but by varying η₊ and keeping η₋ fixed at one. It is visible from the figure that all the points P_c and P′_c are above the dashed line; therefore neither of the two methods results in any wrong detection in the presence of additional counts. But the moment-based method cannot detect any entangled state for η₊ ≤ 0.9, whereas the PPT criterion can detect a finite volume of entangled states within the range 1/3 < η₊ ≤ 1. Even when η₊ > 0.9, the moment-based criterion detects only a very small volume of entangled states compared to the PPT criterion.

FIG. 2: Range of the parameter defining Werner states for which the state is certified as entangled. Lbw₋ and Lbw₊ are lower bounds on the parameter P above which all states appear as entangled when lost and additional counts are considered, respectively. We plot P_c and P′_c using red pluses and blue lines, respectively, along the vertical axis in the presence of lost and additional counts, characterized by the lost event efficiency (η₋) and the additional event efficiency (η₊) presented along the horizontal axes of Fig. 2a and Fig. 2b, respectively. In both figures, the dashed line depicts the P value of the Werner state above which the state actually becomes entangled. All the axes are dimensionless.
IV. CONCLUSION
We have investigated realistic entanglement detection methods and determined the parameter range, defining the Werner state, within which a separable state can erroneously be detected as entangled. The two methods under consideration were state tomography followed by application of the PPT criterion, and experimental determination of the second and third moments of the state followed by use of the entanglement detection condition based on these moments.
In ideal situations, both of these criteria are capable of detecting all entangled Werner states. But in realistic scenarios, experiments might be affected by many physical factors, such as interaction with the environment, errors present in the apparatus, etc., which can cause the results to deviate significantly from expectations. We have considered the presence of detection inaccuracies in the experiment, explored the new parameter range above which all states appear to be entangled, and determined whether there is any possibility of incorrect detection.
We have compared the two methods of entanglement detection. We can see that, in both cases, allowing additional clicks results in no incorrect detection. But in that scenario, fewer entangled states would be detected using the moment-based criterion compared to the PPT criterion. When we considered lost counts instead of additional ones, we encountered wrong detections, that is, in such a situation the experiment may detect separable states as entangled. But the range of the parameter representing wrong detection, for a particular efficiency, is smaller using the p₃-PPT criterion than the PPT criterion. Moreover, above a threshold value of the efficiency, the moment-based criterion becomes detection-loophole-free, whereas the PPT criterion can still erroneously detect states even when the efficiency is only slightly lower than perfect.
Thus we can conclude that detecting entanglement through moments of partial transposition is more reliable than performing full state tomography and checking the positivity of the partially transposed state, when the motive is the correct detection of states. On the other hand, if the aim is to detect a larger volume of states, the PPT criterion is more beneficial, but it comes with the drawback of possible false detections. | 2023-05-02T01:16:22.233Z | 2023-04-30T00:00:00.000 | {
"year": 2023,
"sha1": "ac9e731f063de25a9ea4d119f7989f859d4bbd66",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "ac9e731f063de25a9ea4d119f7989f859d4bbd66",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
99342691 | pes2o/s2orc | v3-fos-license | A research on application of water treatment technology for reclaimed water irrigation
Abstract
Water, taken as the source of life, is one of the most important constituent parts of the global ecological system. The water resources problem is becoming more and more severe as a result of the acceleration of urbanization in China and an increased population since reform and opening up. Therefore, reclaimed water irrigation is one of the effective measures to deal with the scarcity of water resources. In line with the present necessity and urgency of reclaimed water reuse, brief analyses are made of the determination of both the reclaimed water treatment and the relevant irrigation technology, with explanations and studies from such perspectives as the construction purpose, the water quality standards and the technological process. Finally, the relevant new technologies and technical requirements for reclaimed water treatment are put forward for reclaimed water irrigation, from such aspects as the techniques and technologies related to sewage treatment, the technical requirements and standards for reclaimed water irrigation, and the management and monitoring of reclaimed water irrigation.
Introduction
Water, taken as the source of life, is one of the most important constituent parts of the global ecological system. The water resources problem is becoming more and more severe as a result of the accelerated process of urbanization in China, industrial and agricultural development and an increased population since reform and opening up. China was projected to enter a period of severe water scarcity from 2010, with the water resources situation being especially severe in the northern region, according to a survey in Water Supply and Demand in China in the 21st Century from the Ministry of Water Resources. Therefore, it is extremely urgent to alleviate the water scarcity by strengthening studies on water recycling while improving efficiency in water utilization [1-3].
Recycling reclaimed water is one of the effective ways to reduce sources of pollution and solve the issue of water scarcity, and irrigation is one of the important measures to recycle the reclaimed water. As an effective method of saving water resources, reclaimed water has been widely adopted by developed countries to irrigate farms in lieu of drinking water. The traditional irrigation mode not only causes huge waste of water resources, but is also unable to meet irrigation requirements, as agricultural crops come in various varieties, with different requirements for irrigation water volumes, locations and irrigation methods. Therefore, it is of paramount importance to explore the technologies and modes of reclaimed water irrigation on farms [4,5]. It would be best to adopt a combination of technologies for the disposal and reuse of domestic wastewater as well as for irrigation, in order to achieve the purposes of transforming the sewage into resources and improving the water utilization ratio and labor productivity while saving water. The effluent from the sewage treatment device for irrigation application features stable and reliable volume and quality, which is not only capable of reducing the discharge of pollutants and relieving the shortage of water resources, but can even be used as a reliable alternative source of water and fertilizer. Thus, the study of technological modes for the treatment of reclaimed water and the recycling of reclaimed water in farm irrigation, as well as nationwide development and promotion, will be of great strategic significance [6,7].
Water quality standards of reclaimed water irrigation
In china, water shall be divided into water for agriculture, forestry, animal husbandry and fishery industry, water for urban miscellaneous use, industrial water, water for environment and water for supplementing source of water according to the purposes of wastewater treatment and reuse in The Reuse of Urban Recycling WatereClassified Standard (GB/T 18919e2002). At present, there are uniform national standards with respect to each of farm irrigation, urban miscellaneous use, industrial water, water for environment and groundwater recharge [8,9].
Water quality standards of reclaimed water irrigation for urban green land
Limits for the physicochemical indexes and hygienic indexes of the reclaimed water for irrigation of urban green land shall conform to the provisions in Table 1.
Water quality standards of agricultural reclaimed water irrigation
The basic control items of quality of reclaimed water for agricultural irrigation are shown in Table 2.
Reclaimed water treatment and irrigation technologies
Introduction to treatment methods of water for reclaimed water irrigation
The optimization and grouping shall be implemented by selecting rational treatment technology units according to the properties and characteristics of the sewage, the purpose of the recycled water, physiographic conditions, project investment and operation costs, because a single water treatment process can hardly meet the requirements for recycled water quality. At present, the urban sewage treatment process in China mainly includes such conventional processes as coagulation, sedimentation, filtering, sterilization and so on; there are also many other methods for advanced treatment, including coagulation and clarifying filtration, adsorptive filtering with activated carbon, ultrafiltration, semipermeable membranes, ion exchange, reverse osmosis, biological methods, micro-flocculation, contact oxidation and filtration, ozonation, etc. [10,11]. See Table 3 for the corresponding treatment methods. Moreover, the requirements for the quality of agricultural irrigation water continue to rise, and further requirements are being put forward concerning color, turbidity, pathogenic bacteria, etc., along with the progress in water treatment technologies and social development.
Application and study of MBR in reclaimed water irrigation
In recent years, the MBR has been applied to sewage reclamation and reuse on a wider and wider scale, and has brought about obvious economic benefits, considerable environmental benefits and social benefits [12-14].
The research group has established a test base and demonstration village in Yingzi Village, Saiwudang Administration, Maojian District, Shiyan, Hubei Province, where the design scale of the reclaimed water recycling project is 5 m³/d and the raw water is rural domestic sewage. The water quality is shown in Table 4. The sewage treatment system adopts the combined process in which a full-automatic membrane bioreactor (MBR) is combined with an advanced oxidation process (AOP). The procedure of the sewage treatment process is as follows: domestic sewage → pretreatment tank → anaerobic/aerobic membrane tank → reclaimed water (Fig. 1).
The combined process in which the membrane bioreactor (MBR) is combined with the advanced oxidation process (AOP) may be implemented for the treatment of sewage of any quality [15,16]. The MBR treatment process has the advantages of both the membrane separation technique and the bio-treatment technology; the all-in-one MBR reactor integrates membrane separation, bio-reaction, aerobic processing and aeration, featuring compact volume, rational structure and small land occupation, so the decomposition and oxidation rate of organic matter by organisms, as well as the removal rate of inorganic matters such as nitrogen and phosphorus, are greatly improved. Moreover, the water filtered by the ultrafiltration membrane has super-high quality, and the system hardly discharges any residual sludge. The AOP advanced oxidation process activates molecules from the MBR effluent through photoelectric chemical reaction and generates the oxidant ·OH (hydroxyl radical), with extremely strong oxidizability and the capability of rapidly degrading all poisonous, harmful, difficult-to-oxidize matters in water, so that the TOC in water is finally zero. Further, the hydroxyl radical has decolorization and deodorization functions, as well as a strong sterilization function, therefore delivering effluent of higher quality. According to test results, the quality of water after being processed by MBR + AOP can reach the latest standard of domestic drinking water issued by the State. The intermittent aeration system of the regulating tank forms excellent anoxic and anaerobic conditions and accelerates the nitrification and denitrification processes [17]. The sludge returning system of the membrane tank is conducive to the regulation of the microorganism concentration in the membrane tank and the maintenance of the activity of the microorganisms. The advanced program design of the PLC automatic control system ensures that the system can run automatically and normally for a long time, while achieving the purposes of saving energy and reducing consumption. Moreover, no additional flocculant or advanced oxidation disinfectant is required, owing to the low operating costs and the presence of the AOP system.
Major performance indexes of the technology: (1) the influent mainly refers to urban and rural domestic sewage or industrial wastewater of similar quality (Tables 1-5), wherein the quality of the effluent can be made superior to the Class-A sewage discharge standard stipulated by the State by adjusting appropriate parameters; the water may also be recycled for various purposes by adjusting parameters according to needs, so as to meet higher recycling requirements.
Drip irrigation and recycling technologies of reclaimed water
The quality of the effluent from rural domestic sewage processed using the "Membrane Bioreactor + High-efficiency Oxidation Disinfection" treatment technology is superior to the Class-A sewage discharge standard (Table 5) stipulated by the State [18,19], and the effluent may also be applied to farm irrigation after achieving the standard for agricultural irrigation water (Table 3) by adjusting the aeration time and other parameters. The technology was researched and developed in response to the need for recycling and high-efficiency utilization of domestic sewage in water source areas. The technical device and principle are shown in Figs. 2 and 3, and mainly include the sewage treatment system combining a full-automatic MBR (membrane bioreactor) with the AOP advanced oxidation process, and precise fertilization and irrigation technologies by means of drip irrigation. The main contents are as follows: (1) the MBR treatment process integrates the advantages of the membrane separation technology and the bio-treatment technology, so the decomposition and oxidation rates of organic matter by organisms, as well as the removal rates of inorganic matters such as nitrogen and phosphorus, are greatly improved. (2) Molecules in the MBR effluent are activated through photoelectric chemical reaction to generate the oxidant ·OH (hydroxyl radical), with extremely strong oxidizability and the capability of rapidly degrading poisonous and harmful matters that are difficult to oxidize in water, so that the TOC in water is finally zero; thereby decolorization, odor removal and sterilization are achieved. (3) The domestic sewage is collected by pipelines and sent to the reclaimed water reservoir after being centrally processed by the MBR + AOP filter system, serving as one of the water sources for precise fertilization and drip irrigation under combined dispatching of multiple water sources (reclaimed water and surface water). (4) The variable-frequency submersible pump installed in the reclaimed water reservoir is started automatically when the reclaimed water can meet the irrigation demand; when the reclaimed water cannot meet the irrigation demand during the irrigation peak, the valve installed in the original surface water pipeline is opened to supplement the reclaimed water reservoir and implement synchronous irrigation combining reclaimed water with surface water.
Technical effect: the treated water from domestic sewage is directly applied to agricultural irrigation as a liquid fertilizer for precise fertilization and drip irrigation, with a discharge rate to the water body of zero; at the same time, the treated water can reduce the application rate of chemical fertilizer, with the effects of improving crop quality and saving labor while protecting the environment, saving fertilizer, and increasing yield.
Technical requirement for reclaimed water irrigation
Whether the reclaimed water irrigation technology can be successfully popularized and applied depends not only on the technological requirements for the design of the sewage treatment system, but also on other aspects, such as water distribution and water supply in reclaimed water reuse, the irrigation system, and management and monitoring technologies.
Selection of appropriate irrigation method
Reclaimed water irrigation should adopt irrigation methods and facilities for high-efficiency water resource utilization, such as sprinkler irrigation and drip irrigation, from the perspectives of modern agriculture and the sustainable development of water resources; sprinkler irrigation and drip irrigation should also be adopted from the perspective of pollution control, in order to limit agricultural non-point source pollution and the leaching of nutrients along with water into the runoff.
Calculation of all kinds of balances
The calculation of the volume of irrigation water is especially important in both reclaimed water irrigation and clean-water irrigation, but the two cases are quite different from each other, as evidenced by the existence in the reclaimed water of certain amounts of organic load and nutrients, such as BOD, nitrogen, phosphorus, potassium, etc., which are beneficial for plants. Therefore, the calculation of the balances of water, nutrient, salt, oxygen and heavy metal contents shall be the key to successful reclaimed water irrigation.
(1) Water balance: Season and climate changes; the properties, water-holding capacity and penetration coefficient of the soil; and the underground water level and its distribution are key factors which will affect the calculation and shall be given due consideration beforehand. The plant root system absorbs water and nutrients in the soil, and plant transpiration also causes partial loss of water; meanwhile, the pull force generated by the transpiration redistributes the water and nutrients within the plant. Moreover, the water volume resulting from precipitation must be taken into account as well when calculating the water balance.
The calculation of the water balance can be simply expressed as Xh = (precipitation + design flow rate of irrigation) − (total loss as a result of water transpiration + diffusion loss) [20]; a small numerical sketch of this bookkeeping is given after this list.
(2) Salt balance: The salt content should be lower than the limit that the crops can bear, because different crops have different salt tolerances; otherwise, poisons may accumulate at the roots of the crop and affect growth. Moreover, the soil becomes fertile and fluffy when there is an appropriate amount of salt in the soil. (3) Nutrient balance: The balance calculation of nutrients mainly covers nitrogen, phosphorus, potassium and other elements which can be absorbed and utilized by the plant as nutrients in the soil. In the first place, one must account for the total amount of effective nutrients entering the land through reclaimed water irrigation, including the total load of nutrients in the water, the organic matter and fertilizer in the soil, and the residual nutrients in the crops. (4) Oxygen balance: Knowledge of the texture, structure and reoxygenation ability of the soil is the key to the calculation of the oxygen balance. The temperatures of different soils greatly affect the microbial activity and reoxygenation ability of the soil [21-23]. Most of the organic matter (COD) is located in the upper layer of the soil, and the microorganisms in the soil are very active during the intermittent period of irrigation; therefore the upper soil layer shows obvious degradation capacity and a strong reoxygenation ability. Thus, the irrigation load should be controlled so that the soil's reoxygenation is not impaired by exceeding the soil's tolerance. (5) Heavy metals: Heavy metals may damage the activity of the soil and its microorganisms, and can also enter the food chain after being absorbed by the crops, thereby forming accumulative poisons which are harmful to human health; therefore, it is necessary to strictly control the content of heavy metals. Reclaimed water irrigation should be implemented by first selecting an appropriate water source [4,24,25]. Rural domestic sewage basically contains no heavy metals and is suitable for reclaimed water irrigation; heavy metals must be removed in the reclaimed water treatment system if industrial wastewater is adopted as the source water, or the useful heavy metals may be separated and recycled by novel processes.
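As a purely illustrative sketch of the water-balance bookkeeping in item (1), with hypothetical names and values (units: mm per period):

# Minimal sketch of the water balance Xh described in item (1);
# all function names and numbers are hypothetical placeholders.
def irrigation_water_balance(precipitation_mm, irrigation_mm,
                             transpiration_loss_mm, diffusion_loss_mm):
    """Xh = (precipitation + irrigation) - (transpiration + diffusion)."""
    supplied = precipitation_mm + irrigation_mm
    lost = transpiration_loss_mm + diffusion_loss_mm
    return supplied - lost

# Example: a month with 60 mm rain, a design irrigation of 45 mm,
# 80 mm of crop transpiration and 15 mm of diffusion losses
print(irrigation_water_balance(60, 45, 80, 15))  # -> 10 mm surplus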
Management and monitoring measures
A complete automatic monitoring system is required first for the scientific management of the reclaimed water treatment and irrigation system; the accuracy, reliability and universality of the data in time and space should be ensured by observing and recording data through computer remote control [26,27]. Detailed management plans and rules for the reclaimed water irrigation system, together with ancillary preferential policies on reclaimed water utilization, should be formulated under the guidance of the government; further, the management and monitoring efficiency of the reclaimed water irrigation system should be continuously improved. The main targets of monitoring in reclaimed water irrigation should be the crops, the soil and the underground water. Information on nutrient balance and crop health can be obtained by monitoring crops irrigated with reclaimed water.
Conclusion and prospect
The relative shortage of water resources in China makes reclaimed water irrigation a practical choice, with obvious advantages as well as risks that must be prevented. The economic benefits should be realized while avoiding environmental damage, by adopting scientific and systematic policies, methods and technical measures that draw on the advantages and avoid the disadvantages. Reclaimed water irrigation should be adjusted according to local conditions and planned rationally; the sewage should be treated effectively before irrigation; the standardized management of reclaimed water irrigation should be strengthened; and scientific research on reclaimed water irrigation should be carried out in depth [22]. Plenty of research work in the technical field of reclaimed water irrigation has been implemented at home and abroad, while the research on reclaimed water irrigation in China is still at a preliminary stage; therefore, in view of China's actual conditions, the following items shall be the directions of agricultural application and research of reclaimed water irrigation in China: (1) study of new treatment processes for reclaimed water, and new technologies and systems of reclaimed water irrigation; (2) formulation of standards for reclaimed water irrigation; (3) the rules of transport and transformation of nitrogen, phosphorus, organic matter and heavy metals in the soil and vegetation system under reclaimed water irrigation; (4) safe and high-efficiency irrigation technology for reclaimed water, the coupling and yield-increasing effects of reclaimed water irrigation and fertilization, and the irrigation and fertilization amounts suitable for the farm under reclaimed water irrigation; and (5) research on and establishment of evaluation indexes and methods for the environmental implications and health risks of reclaimed water irrigation. Finally, implementing and popularizing the application of reclaimed water irrigation will be a complicated systematic project, which involves many units and organizations, such as urban planning, construction, environmental protection, municipal administration, water conservancy, agriculture, etc., and the difficulties in coordination and management are self-evident. Therefore, it is necessary to carry out a wide range of surveys and research, and to establish a complete management system for reclaimed water irrigation while specifying the administrative power and water rights. Moreover, the government should strengthen the concept of sewage reutilization and implement unified allocation and use in irrigation areas with reclaimed water, by bringing reclaimed water into the integrated planning of water resources utilization while gaining a thorough understanding of the position of reclaimed water irrigation in water resource allocation. | 2019-04-08T13:06:15.569Z | 2016-09-21T00:00:00.000 | {
"year": 2016,
"sha1": "9d17a337927901f4288a5d8bd0432af4f9158257",
"oa_license": "CCBYNCSA",
"oa_url": "http://ir.igsnrr.ac.cn/bitstream/311030/43329/1/Xu-2016-A%20research%20on%20applic.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "8ab6da1b26c09accdcb522adebf43cf5421d52ec",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |