248414928
pes2o/s2orc
v3-fos-license
Household Secondary Attack Rates of SARS-CoV-2 by Variant and Vaccination Status This systematic review and meta-analysis evaluates household secondary attack rates of SARS-CoV-2 by variant and vaccination status.
Reports excluded (n = 67):
-No data on uninfected contacts (n = 22)
-No data on household contacts (n = 19)
-Reported prevalence or overall household attack rate, which includes index cases (n = 10)
-Tested household contacts using antibody tests (n = 9)
-No original data (n = 4)
-Overlapping study population with another article included in meta-analysis (n = 2)
-Restricted to households with at least one confirmed case among household contacts (n = 1)
New studies included (n = 58)
Included: total studies included in review (n = 135)
Records identified from reference lists of eligible articles (n = 2)
Studies included in previous version of review (n = 87 studies)
Previous studies: reviewed methodology to exclude studies that did not include laboratory-confirmed infections, included asymptomatic index cases only, or were preprints that were subsequently published (n = 87)
Records excluded (n = 10):
-No lab-confirmed infections (n = 5)
-Preprints in first analysis that were subsequently published (n = 4)
-Asymptomatic index cases (n = 1)
eTable 1. Electronic Databases and Search Strategy for Household Secondary Attack Rate of SARS-CoV-2 (Database)
Figure 2 when restricting to studies with low risk of bias.
Received a booster dose was defined as having received an additional dose after completion of the primary COVID-19 vaccination series before the index date. Fully vaccinated was defined as completion of the primary vaccination series ≥2 weeks before the index date and stratified into completion <5 months or ≥5 months before the index date. Some persons who were fully vaccinated had unknown dates for completion of their primary vaccination series. Partially vaccinated was defined as having only 1 dose of a 2-dose series or completing the primary vaccination series <2 weeks before the index date.
eTable 5. Pairwise Analyses of Index Case Vaccination Status Using Only Studies in Which SARs Were Reported From Both Relevant Subgroups
De Gier et al. 11 Partly vaccinated was defined as having received the first dose of a two-dose schedule at least 14 days before onset of symptoms. Fully vaccinated was defined as having completed a two-dose schedule at least 7 days, or the one-dose Janssen schedule at least 14 days, before symptom onset. 13 Vaccination status was dichotomized to either non-vaccinated or fully vaccinated as per each vaccine's protocol.
Harris et al. 1 Vaccinated index cases were defined as having been vaccinated 21 days or more prior to testing positive for COVID-19, based on evidence of the time needed for the vaccine to provide a sufficient level of immunity. Non-vaccinated index cases were defined as not having received a vaccine prior to testing positive. Households where the index case received the vaccine less than 21 days before testing positive were excluded from this analysis. Most of the vaccinated index patients (93%) had received only the first dose of vaccine.
Gazit et al. 17 Participants were classified into one of three vaccination-status groups at the time of the index case (the confirmed exposure): Unvaccinated; Recently Vaccinated Once, i.e. those vaccinated with the first vaccine dose within 0-7 days before the index infection; and Fully Vaccinated, i.e. those who were 7 or more days post the second dose by the time of the confirmed exposure.
Jalali et al.
22 To define the vaccine status of the household contacts, they used the test date of the primary case and compared it with the contacts vaccination dates: 1.Unvaccinated: A contact was considered unvaccinated if the primary case's test date is before the contact's first dose. 2.Partially vaccinated: A contact was considered partially vaccinated if he/she had received 1 dose of vaccine (mRNA Vaccines or AstraZeneca vaccine) prior to the test date of his/her primary case. Contacts who had received dose 2 within the last week before the primary case's test date were also considered partly vaccinated. 3.Fully vaccinated: A contact was considered fully vaccinated if he/she had received dose 2 (mRNA) at least 1 week prior to the test date of his/her primary case. 4.Booster vaccinated: A contact was considered booster vaccinated if he/she had received dose 3 at least 1 week prior to the test date of his/her primary case. The time interval between the second and the third doses should be >= 120 days. The vaccine status of the primary cases was defined based on their test date and their vaccination dates. Individuals with J&J vaccine were excluded from the study. We excluded households where two individuals tested positive on the same day to ensure a unique index case in each household. Layan et al. 25 Cases were considered vaccinated if their infection occurred >7 days after the 2nd dose. Similarly, household contacts were considered vaccinated if their exposure to the index case occurred >7 days after the 2nd dose Martínez-Baz et al. 34 A person was considered fully vaccinated ≥ 14 days after receiving one dose of Janssen or the second dose of other vaccines, and partially vaccinated ≥ 14 days after receiving only the first dose of Spikevax, Comirnaty or Vaxzevria. Meyer et al. 36 Not defined, but the two secondary cases found among household contacts of vaccinated index cases were diagnosed 25 days after the second vaccination. Ng et al. 41 Both index cases and close contacts were considered partially vaccinated if they had received one vaccine dose before the day the quarantine order was issued, or were within 14 days of the second dose on the day the quarantine order was issued. If more than 14 days had elapsed after their second dose, they were taken to be fully vaccinated. Sachdev et al. 47 Partially vaccinated patients were defined as patients who received at least 1 dose of vaccine but were not fully vaccinated. Fully vaccinated patients were defined as patients who had received a second mRNA vaccine dose or a single-dose viral vector vaccine ≥14 days from symptom onset or collection of a positive specimen Singanayagam et al. 48 Participant defined as unvaccinated if they had not received a single dose of a COVID-19 vaccine at least 7 days before enrolment, partially vaccinated if they had received one vaccine dose at least 7 days before study enrolment, and fully vaccinated if they had received two doses of a COVID-19 vaccine at least 7 days before study enrolment.
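The study-specific definitions collected above are, in effect, date-based classification rules applied to each household contact, combined with a simple secondary attack rate calculation (secondary cases among exposed household contacts, with index cases excluded from numerator and denominator). A minimal sketch of how such rules could be encoded is given below; it loosely follows the Jalali et al. contact definitions quoted above, and the function names, field names and example data are illustrative assumptions rather than the protocol of any particular study.

```python
from datetime import date, timedelta

def classify_contact(index_test_date, dose_dates):
    """Classify a household contact's vaccination status at the index case's test date.

    Loosely follows the date-based rules quoted above (Jalali et al.):
    dose_dates is a list of the contact's vaccination dates in chronological order.
    """
    received = [d for d in dose_dates if d <= index_test_date]
    one_week = timedelta(days=7)
    if len(received) == 0:
        return "unvaccinated"
    if len(received) >= 3 and index_test_date - received[2] >= one_week:
        return "booster vaccinated"
    if len(received) >= 2 and index_test_date - received[1] >= one_week:
        return "fully vaccinated"
    return "partially vaccinated"

def household_sar(households):
    """Crude pooled household secondary attack rate: secondary cases divided by
    exposed household contacts (index cases excluded)."""
    secondary = sum(h["secondary_cases"] for h in households)
    contacts = sum(h["household_size"] - 1 for h in households)
    return secondary / contacts if contacts else float("nan")

# Illustrative example with hypothetical data
status = classify_contact(date(2021, 8, 1), [date(2021, 5, 1), date(2021, 6, 1)])
sar = household_sar([{"household_size": 4, "secondary_cases": 1},
                     {"household_size": 3, "secondary_cases": 0}])
print(status, round(sar, 2))  # fully vaccinated 0.2
```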
2022-04-29T06:23:08.823Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "da26eb671c2e5fefd7e0641e83013218bc061d3a", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "face146a37fbb6ff11f5f84fb726bc47e7cdd888", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
229291692
pes2o/s2orc
v3-fos-license
Moringa oleifera Lam and its Therapeutic Effects in Immune Disorders Moringa oleifera Lam., a plant native to tropical forests of India, is characterized by its versatile application as a food additive and supplement therapy. Accumulating evidence shows that Moringa plays a critical role in immune-related diseases. In this review, we cover the history, constituents, edibility, and general medicinal value of Moringa. The effects of Moringa in treating immune disorders are discussed in detail. Moringa can not only eliminate pathogens, including bacteria, fungi, viruses, and parasites, but also inhibit chronic inflammation, such as asthma, ulcerative colitis, and metabolic diseases. Additionally, Moringa can attenuate physical and chemical irritation-induced immune disorders, such as metal intoxication, drug side effects, or even the adverse effects of food additives. Autoimmune diseases, like rheumatoid arthritis, atopic dermatitis, and multiple sclerosis, can also be inhibited by Moringa. Collectively, Moringa, with its multiple immune regulatory bioactivities and few side effects, has a marked potential to treat immune disorders. INTRODUCTION Moringa oleifera Lam (MO), a frost- and drought-resistant plant of the monogeneric family Moringaceae native to the tropical forests of India, is characterized by its versatile applications as a food additive and supplement therapy (Anwar et al., 2007). MO is suitable for food applications because of its abundant nutritional ingredients, such as essential amino acids, oleic acids, vitamins, and minerals. MO is recognized for its medicinal uses, such as treating various infections, modulating the immune system, and displaying antioxidant, anti-diabetic, or anti-tumor effects (Dhakad et al., 2019). Moringa tree leaves were mostly used for cattle feed in ancient times (Sun et al., 2017), but gradually came to be used in the human diet to maintain mental and skin health (Anwar et al., 2007). With its growing popularity, different parts of MO, such as roots, seeds, and pods, were recognized as nutritious and medically valuable. Currently, MO is widely used in food ingredients, nutraceuticals, and medications and has been termed a "Miracle tree" (Dhakad et al., 2019). BIOACTIVE CONSTITUENTS AND GENERAL FUNCTION OF MORINGA OLEIFERA The bioactive constituents of MO have been identified in almost all parts of the plant (Liang et al., 2019). The specific constituents isolated from MO (detailed in Supplementary Table S1) mainly include flavonoids (mainly distributed in the leaves), glucosinolates and isothiocyanates (mainly distributed in the leaves), phenolic acids (all distributed in the leaves), alkaloids and sterols (distributed in the leaves, roots, and seeds), and terpenes (all distributed in the pods) (Anwar et al., 2007; Bichi, 2013; Baldisserotto et al., 2018; Dhakad et al., 2019). The constituents of the leaves and seeds are the most frequently reported. Based on phytochemical analysis, phenols and alkaloids are more abundant in the leaves than in the seeds, while flavonoids, saponins, and anthocyanins are more abundant in the seeds (Gupta et al., 2018). In addition, other nutrients are present at high levels in the processed products of MO, including a number of fatty acids derived from the seed oil (Leone et al., 2016), various minerals from the dried leaf powder (Witt, 2014), and high-quality carbohydrates from refined gum exudates (Kar et al., 2013; Gupta et al., 2018).
The addition of a small amount of MO is reported to significantly improve the nutritional value of foods such as bread, yoghurt, cheese, and soup (Williams, 2013; Stadtlander and Becker, 2017). The diverse parts of MO have been processed into many food products in more than eighty countries to improve mineral and vitamin deficiencies (Ali et al., 2017). Moreover, few side effects have been reported for the use of MO (Bichi, 2013; Palada et al., 2017; Dhakad et al., 2019). In terms of its therapeutic properties, the constituents isolated from the seeds and leaves of MO are reported to function in approximately 80 diseases (Mahmood et al., 2010), which can be mainly categorized as oxidative stress, glucose metabolism disorders, tumors, organ injury, and immune-related diseases (Anwar et al., 2007; Dhakad et al., 2019). MO contains more than 40 natural antioxidant compounds and is well known for its effect on eliminating free radicals (Pakade et al., 2013). For example, isoquercetin is recorded to have the highest antioxidative activity and exhibits a ROS inhibitory effect by increasing the expression of antioxidant enzymes, such as superoxide dismutase (SOD), glutathione peroxidase (GPx), and catalase (Vongsak et al., 2013, 2015; Ratchanee, 2014). In addition, the application of MO leaf powder can maintain malondialdehyde (MDA) levels and the ferric reducing ability of human plasma (Doerr, 2005; Ngamukote et al., 2016). MO has shown outstanding hypoglycemic activity in various diabetic animal models and in human volunteers, because it can not only stimulate insulin secretion from pancreatic β-cells, but also directly reduce blood glucose by reacting with anti-insulin antibodies (Mahmood et al., 2010; Tende et al., 2011; Villarruel-López et al., 2018). MO also exhibits antitumor properties, including cytotoxic, antiproliferative, chemoprotective, and anti-inflammatory activities in diverse types of tumors (Guevara et al., 1999; Biswas et al., 2012). Moreover, MO is also reported to protect organs from injury. MO not only exerts hypotensive activity that protects the cardiovascular system via its fully acetylated glycosides, but also has a calcium antagonist effect, a lipid removal function, and diuretic activity (Faizi et al., 1998; Sana et al., 2015). Furthermore, MO has shown antispasmodic, antiulcer, and hepatoprotective effects in treating diarrhea, gastrointestinal motility disorder, and fatty liver disease (Hamza, 2010; Kumar et al., 2010; Das et al., 2011). One of the most valuable aspects of MO is its immune-related functions, which have been reported in recent years, are involved in many immune disorders, and possess significant value in translational medicine. In the present review, we introduce and discuss the immune-related functions of MO. ANTI-INFECTIOUS ACTIVITY OF MORINGA OLEIFERA MO possesses a number of activities against infectious diseases. All parts of the plant can be made into various formulations against bacteria, fungi, viruses, and parasites. Bioactive components of natural medicinal herbs, including MO, exert anti-infectious effects against pathogens. Benzyl isothiocyanate, extracted from the seeds of MO, can significantly reduce the pathogenicity of bacteria by inhibiting bacterial conjugation (Padla et al., 2012).
A leaf extract containing silver and niaziminin, or flowers containing kaempferol, rhamnetin, and isoquercitrin (Dubey and Gupta, 1978; Das et al., 2013; Vongsak et al., 2013; Rajendran et al., 2014; Paikra et al., 2017), exerted a direct beneficial effect in the elimination of microbes. The anti-bacterial activity of MO is summarized in detail in Table 1. It is clear that MO has a relatively broad anti-microbial spectrum; however, it shows a slightly higher inhibitory effect against gram-negative bacteria. The other anti-infectious activities of MO are summarized in Table 1 as well. The leaves and seeds appear to possess a broader spectrum of anti-microbial activity than the other parts of MO. Although the bioactive components of MO have been extensively researched, the correspondence between the different parts of MO and their effects on particular infectious diseases remains to be clarified. Mechanistically, one of the most effective ingredients of MO is the moringa coagulant protein (molecular weight approximately 13 kDa), which can purify polluted water, regulate its acid-base balance, and exert an antiseptic effect, even in a crude salt extract form (Anwar et al., 2007; Abdul Hamid et al., 2016; Gupta et al., 2018). The coagulant protein is able to flocculate microorganisms through its functions of adsorption and charge neutralization (Broin et al., 2002; Mulugeta and Fekadu, 2014). In addition, a group of amino acids found in MO can interact with metal ions. These amino acids, together with the absorbed metal ions, generate a negatively charged environment that consistently influences the survival of pathogens (Sharma et al., 2007; Obuseng et al., 2012; Matouq et al., 2015). Moreover, the ingredients in MO can be chemically functional. Kaempferol, a natural flavonoid extracted from MO, exhibits a dose-dependent anti-microbial effect by disrupting the integrity of the bacterial cell membrane (Poklar Ulrih et al., 2010; Rajendran et al., 2014). Isoquercitrin, another active ingredient of MO, can strongly inhibit viral gene expression by attenuating the activation of the nuclear factor kappa B (NF-κB) signaling pathway (Vongsak et al., 2015). Several reports have attributed the pathogen-eliminating function of MO to its multiple effective components, including natural proteins and certain amino acids, which can destroy or neutralize microorganisms, modulate the microenvironment, and amplify immunity. Although basic research on MO is now relatively common, clinical research or applications based on single components of MO remain infrequent. More clinical trials of MO should be conducted once its safety has been confirmed.
The active ingredient of the extract, subsequently proved to be β-sitosterol, was believed to function by modulating the balance of Th1/Th2 cytokines. However, few studies have focused on the response of immunocytes to MO. Kooltheat et al. found that MO could eliminate the production of monocyte-derived macrophage factors, such as tumor necrosis factor alpha (TNF-α), interleukin (IL)-6 and IL-8 (Kooltheat et al., 2014, 2017). Notably, the decrease in these factors was evident at both the mRNA and protein levels. Most immune-related molecules originate from immunocytes; therefore, it is critical to study the effects of MO on immunocytes and the immune microenvironment. Ulcerative colitis (UC), a chronic intestinal disease characterized by bloody diarrhea, is a non-specific inflammatory disorder as well as a common precancerous lesion of colorectal cancer. Minaiyan et al. used a hydroalcoholic extract of Moringa to treat experimental colitis in mice, and observed downregulation of a group of secreted inflammatory factors, together with an increase in both colon length and the expression of glutathione S-transferase Pi 1 (GSTP1), a detoxifying enzyme mediated by NFE2-related factor 2 (NRF2). This effect was attributed to biophenol and flavonoid compounds in MO, acting in a dose-dependent manner (Minaiyan et al., 2014; Kim et al., 2017). Chronic inflammation is also associated with metabolic disorders, such as non-alcoholic steatohepatitis (NASH) caused by hepatic lipid accumulation, and high-fat diet-induced glucose intolerance. Almatrafi et al. measured the levels of hepatic cytokines in guinea pigs fed diets containing no, low, or high amounts of MO. They demonstrated that the expression of IL-1β, IL-10, and interferon gamma (IFN-γ) was lowest in the high-MO group, and no difference was found for IL-6, monocyte chemoattractant protein-1 (MCP-1) and TNF-α among the groups. The authors inferred that quercetin and chlorogenic acid might contribute to the anti-inflammatory effect (Bamagous et al., 2018). A similar diet (containing a fermented Moringa extract) was applied to experimentally obese mice, and the expression of proinflammatory cytokines was markedly decreased in their liver, epididymal adipose tissue, and quadriceps muscle (Joung et al., 2017). Traditional medicine has obvious advantages for chronic diseases; therefore, more clinical trials should be performed to understand the regulatory processes of MO. Symptomatic support and immunity improvement are currently the first-line approaches for the treatment of chronic inflammation, mainly because chronic inflammation is often accompanied by disorder of the immune microenvironment. MO, which not only inhibits the expression of a series of pro-inflammatory factors but also contributes to the regulation of immune cells, provides options for the control of chronic inflammation. Collectively, MO could attenuate the negative impact of chronic inflammation mainly by inhibiting the expression of a series of pro-inflammatory factors.
MO is capable of inducing a moderate inflammatory phase after injury, which is critical for the wound healing cascade, because it provides a suitable environment for the removal of harmful substances and tissue repair, prevents excessive leukocyte recruitment, and promotes the proliferation and migration of fibroblasts. Additionally, several studies have validated the central and peripheral analgesic effects of MO (Rao et al., 2008; Adedapo et al., 2015; Martínez-González et al., 2017; Paikra et al., 2017), not to mention its anti-infection property, which has been widely demonstrated. These studies provide scientific support for the use of MO by indigenous peoples of the Philippines and India, who collected MO to dress wounds (Nama, 2015; Palada et al., 2017). MO is well suited to the treatment of acute disorders because it is easily accessible, which is why it has saved many lives in developing countries. Chemical irritation mainly refers to metal intoxication and drug side effects, which induce systemic or organ-specific immune disorders and tissue damage. Adeyemi et al. found that MO-based diets could protect against nickel-induced hepatotoxicity in rats, partially by attenuating the systemic inflammatory response (Adeyemi et al., 2017). In another study, an ethanolic extract of MO was applied to chromium-treated male rats; MO significantly reduced the levels of inflammatory markers and ameliorated the effects of chromium on testicular local immunity (Sadek, 2014). In addition, several studies have reported that heavy metal ions, such as Cd(II), Pb(II), and Cu(II), can be removed using the bark and seeds of MO; therefore, in addition to its anti-oxidative effect, MO-based therapy is recommended as a valuable treatment for detoxification (Sharma et al., 2007; Obuseng et al., 2012; Reddy et al., 2012; Chatterjee et al., 2016). Edeogu et al. explored the protective effect of MO seed oil against gentamicin-induced pro-inflammation in rats (Edeogu et al., 2019). They found that gentamicin prominently increased the content of IL-6, TNF-α, inducible nitric oxide synthase (iNOS), and NF-κB in the kidney, while treatment with MO significantly decreased the levels of these inflammatory markers. Overdose administration of acetaminophen, commonly considered one of the leading causes of acute kidney failure, resulted in a significant elevation of pro-inflammatory cytokines (IL-1β, IL-6, and TNF-α) and a reduction in the anti-inflammatory cytokine IL-10 in renal tissue; all of these inflammatory changes were reversed by treatment with an MO leaf extract (Adil et al., 2016). Similar effects were found in the treatment of levofloxacin-induced hepatic toxicity, aspirin-induced gastric ulcer, and methotrexate-induced neurotoxicity (Akhtar and Ahmad, 1995; Verma et al., 2012; Famurewa et al., 2019; Farid and Hegazy, 2019). Moreover, the toxicity of food additives is partially attributed to inflammatory injury. MO has been used to ameliorate nephrotoxicity induced by titanium dioxide nanoparticles (TiO2 NPs) (Kandeil et al., 2019). TiO2 NP-treated rats were fed with an MO leaf extract, and the expression of kidney injury molecule 1 (KIM-1), NF-κB, TNF-α, and heat shock protein 70 (HSP-70) was markedly decreased, while the expression of NRF-2 and heme oxygenase 1 (HO-1) was significantly upregulated compared with that in control groups. Recently, Abd-Elhakim et al. showed that MO might exert a protective effect against melamine-induced hematoimmunotoxic hazards (Abd-Elhakim et al., 2018).
In that study, melamine had a markedly adverse impact on the global hematological system, while the application of MO not only attenuated the melamine-induced symptoms of anemia, leukopenia, and innate and humoral immune disorders, but also restored hematological parameters, including neutrophil, lymphocyte, serum IgG and nitric oxide (NO) levels. Physicochemical irritation tends to cause an acute stress response. Considering that MO is an inexpensive and easily available natural plant with few reported side effects, it is an advantageous choice for the treatment of such diseases. Moreover, MO has good analgesic effects and can be used to counteract the side effects of some medicines, so it may be worth exploring as a companion drug. AUTO-IMMUNE DISORDERS AND MORINGA OLEIFERA Rheumatoid arthritis (RA) is a typical auto-immune disorder, characterized by an increase in pro-inflammatory cytokines (including TNF-α, IL-6, and IL-1β) and inducible inflammation-related enzymes (such as cyclooxygenase and lipoxygenase) and a decrease in anti-inflammatory cytokines (such as IL-4 and IL-10). Several studies have reported the efficacy of MO in alleviating joint inflammation associated with RA; however, the exact mechanism remains unknown (Mahajan et al., 2007; Padmini et al., 2016). Saleem and colleagues used complete Freund's adjuvant to establish an RA model in rats. In this model, treatment with an MO methanolic extract markedly reduced the serum concentrations of C-reactive protein, prostaglandin E2, and TNF-α; markedly downregulated the levels of NF-κB, prostaglandin E2 (PGE2), cyclooxygenase 2 (COX-2), and IL-1β; significantly upregulated the mRNA levels of I-κB, IL-4, and IL-10; and remarkably restored the histopathological indices and arthritic index in the joints (Saleem et al., 2019). Atopic dermatitis (AD), a chronic inflammatory skin disease, is another classic auto-immune disorder (Brunello, 2018). It is generally accepted that AD is typically accompanied by excessive activation of T cells, elevated serum IgE levels, and skin infiltration by dendritic cells and T cells (Yamura et al., 1981). Choi et al. used TNF-α and IFN-γ to induce AD in HaCaT cells (human keratinocytes), and applied a Dermatophagoides farinae extract to model AD in BALB/c mice (Choi et al., 2016). MO not only reduced the expression of pro-inflammatory cytokine-related mRNAs and the levels of mitogen-activated protein kinases (MAPKs) in vitro, but also improved ear skin thickness and serum immunoglobulin levels in vivo. In addition, a decrease was observed in the levels of retinoic acid-related orphan receptor γT (RORγT, which regulates the expression and development of Th17 cells). Levels of thymic stromal lymphopoietin (TSLP, which triggers dendritic cells and the production of Th2 cytokines) and mannose receptor C-type 1 (CD206, which is expressed in various immunological cells) were also reduced. These results strongly suggest the efficacy of MO as a supplement to treat patients with AD. Similar effects were demonstrated in a model of multiple sclerosis. Galuppo et al. showed that MO could counteract the inflammatory cascade in an animal model of experimental autoimmune encephalomyelitis (EAE) (Galuppo et al., 2014). In that study, TNF-α was identified as one of the main targets of glucomoringin-isothiocyanate (GMG-ITC), a natural agent extracted from MO.
Tahiliani and colleagues revealed the therapeutic value of MO leaf extract in the regulation of hyperthyroidism, an autoimmune-related disorder, by inhibiting triiodothyronine (T3) synthesis and release (Tahiliani and Kar, 2000). Monotherapies provide limited benefits in curing autoimmune diseases, and natural drugs like MO are excellent alternatives. Unfortunately, studies of MO in auto-immune diseases have so far been conducted only in animals. Autoimmune diseases are mainly associated with genetic factors, immunomodulation, viral infection and antigenic variation. With the exception of genetic factors, MO has been reported to have unique positive effects on the other three aspects. Most related reports have focused on its value for organ-specific autoimmune diseases. However, MO is a medicinal plant with multiple active ingredients, which may have a better effect on systemic autoimmune diseases, such as systemic lupus erythematosus. More research on MO in systemic autoimmune diseases is warranted. CONCLUSION AND FUTURE PERSPECTIVES Current research shows that MO exerts its multiple immune-related effects primarily through directly eliminating pathogens or modulating the balance of pro- and anti-inflammatory mediators released from various kinds of immune cells by regulating the activity of signaling pathways, such as the canonical NF-κB pathway (Figure 1). Significantly, the bioactivity of MO is dependent on its active ingredients, which are related to the different parts of this plant and the extraction methods used. Notably, in some experiments, low-dose application of MO might have a better anti-inflammatory effect than higher doses (Ferreira et al., 2008; Almatrafi et al., 2017; Kapse et al., 2017), which suggests the necessity of identifying the appropriate dosage of MO before clinical application. Research evidence has demonstrated the therapeutic value of MO in treating immune disorders; however, a few problems remain to be solved. For example, will the active ingredients extracted from different parts of MO interact to invalidate its effects? Are there any side effects remaining to be discovered? Are there other molecular mechanisms underlying MO's immune-regulatory function? Immune disorders, whether resulting from infection or inflammation, might have severe consequences. MO, with few reported side effects, has a long history of use in curing diseases and is an inexpensive and credible natural medicine. More importantly, it can precisely modulate the immune balance because of its moderate and comprehensive bioactivities. Despite its huge potential to cure immune disorders, further quantitative and mechanistic research should be undertaken before MO can be developed into clinical applications. According to current reports, MO is most popular in East and Southeast Asian countries. It is also used in Central America and other tropical countries because it originated in tropical regions. The total extract of MO is cheap and effective, and has saved a large number of patients in less developed areas of the world. However, like all other natural medicines, it is unlikely to be widely used in the clinic until its risks have been thoroughly understood. Unfortunately, there has been little research conducted specifically on the side effects of MO, and further evidence is needed to confirm its safety.
AUTHOR CONTRIBUTIONS LG and LZ designed the manuscript. XX and JW wrote the manuscript with essential contributions from CM, WL, TW, BZ, YW, XL, LG, and LZ. XX produced the figure. FUNDING
2020-12-17T14:14:19.579Z
2020-12-17T00:00:00.000
{ "year": 2020, "sha1": "59fbd57fd4312e28cca22a9cf4a3edc48737123d", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2020.566783/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "59fbd57fd4312e28cca22a9cf4a3edc48737123d", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
221122252
pes2o/s2orc
v3-fos-license
Thyroglossal duct surgery. What is the acceptable recurrence rate? Objectives: To present experiences of different specialties in the treatment of thyroglossal duct cysts (TGDCs) and subsequent complications in multiple centers. Methods: A retrospective cross-sectional study of all cases of TGDC over a period of 11 years from 2008-2019 by different departments from 3 different centers in Jeddah, Kingdom of Saudi Arabia (King Faisal Specialist Hospital & Research Centre, Bakhsh Hospital and International Medical Center). Results: Forty-nine patients were included. The type of surgery performed plays a significant role in recurrence (p<0.001). The Sistrunk procedure had a lower recurrence rate (0%) than simple excision (70%) and showed a significantly longer recurrence-free interval (p<0.001). Higher recurrence rates are associated with higher rates of postoperative complications (p=0.002). Patients who underwent pre-operative fine needle aspiration did not have any recurrence during the follow-up period. Conclusion: The Sistrunk procedure is the gold standard technique with the highest recurrence-free interval rate. Fine needle aspiration could be recommended as a less invasive procedure to exclude malignancy. The thyroglossal duct cyst (TGDC) is one of the most common congenital midline neck masses; it is located at the level of or below the hyoid bone in approximately 80% of cases, and the remaining 20% are located above the hyoid bone. 1 Epidemiologically, TGDC has a bimodal distribution in children and young adults. 2 The clinical presentation of TGDC includes swelling, pain or tenderness, dysphagia, and dysphonia. 3 However, inflammation or infection in the form of an abscess or cellulitis is the most common complication seen in TGDC patients. Other complications include the presence of the tract posterior to the hyoid bone or ectopic thyroid follicles. 4 However, adults have been reported to experience more symptoms than children. 2 Surgical removal is the definitive treatment; however, recurrence is a known possible outcome. 3 To reduce the recurrence risk based on embryological principles, Schlange suggested removal of the middle third of the hyoid bone with the main cyst; Sistrunk then improved this technique further by removing the mid-portion of the hyoid bone and the tissue between the hyoid bone and the foramen cecum. 5 However, the recurrence rate is still 6.6% in the pediatric population, irrespective of whether the surgery performed was Sistrunk's operation or a modified Sistrunk's procedure. 6 Over time, it was found that central neck dissection or anterior wide local excision to remove the entire thyroglossal tract remnant reduced the risk of initial failure and was considered a favorable option for the management of recurrent cases, especially in cases with a history of infected cyst or incision, to avoid the risk of further recurrences. 7 Few studies have emphasized the factors associated with TGDC recurrence, which could increase the economic burden on the hospital and negatively influence the mental health of patients and their families. The purpose of this study was to determine the factors related to TGDC recurrence and present the experiences of different specialties in the treatment of TGDCs and their complications in multiple centers. We included all patients with a diagnosis of TGDC without restriction to age, nationality, or presenting symptoms. We excluded patients with missing documentation and those who were lost to follow-up within <12 months.
Data extracted from patients' records included basic demographic characteristics, clinical presentation, medical history, postoperative variables, possible outcome parameters, and complications. All patients were followed up for 24 months after the surgery. The literature review was carried out using PubMed, Medline, Embase, and the Cochrane Library. Key terms related to thyroglossal duct cyst, recurrence, and Sistrunk were used to search for related published studies. An institutional review board approved this retrospective study and attests to its scientific validity. Informed written consent was provided by all participants in this study. Statistical analysis. All statistical analyses were performed using the Statistical Package for the Social Sciences for Windows, version 25.0 (IBM Corp, Armonk, NY, USA). Descriptive statistics were generated from the data collected and expressed as means, standard deviations (SD), and frequencies. Statistical comparisons between categorical variables were performed using Fisher's exact test or the Chi-square test, as appropriate. Kaplan-Meier (KM) curves were used to estimate the recurrence-free rate after surgery for TGDC. Moreover, a log-rank test was performed to compare the recurrence-free rate among different procedures and specialties. A p<0.05 was considered significant. Results. A total of 49 patients were included in the study; of these, 32.7% were male. The mean±SD age of all patients was 11.8±11.4 years, and most of them were Saudis (83.7%). Only 5 patients (10.2%) presented with compression symptoms, and 14 patients (28.6%) had other comorbidities. Notably, only 7 patients (14.3%) had undergone previous surgery for the thyroglossal cyst, which had been performed in a different hospital from those included in our study and for which no records were available; most of these patients were male (71.4%). Meanwhile, a total of 2 patients with papillary thyroid carcinoma (PTC) were reported, a 21-year-old male and a 22-year-old female. There was no significant difference in the examined characteristics between men and women (Table 1). Association of recurrence with preoperative and operative variables. All patients with recorded recurrence (n=7) had been diagnosed with concomitant TGDC, and the cysts were situated in either the infrahyoid (71.4%) or the suprahyoid (28.6%) region. Regarding investigation modality, all patients underwent ultrasound preoperatively. Moreover, no recurrence was detected in patients with a history of fine needle aspiration (FNA) or in those who had their surgery performed by a general surgeon. There was no significant effect of any of the pre-operative variables on the recurrence rates (Table 2). For the operative factors, surgery duration (mean±SD) was slightly longer in the recurrence group (90±43.8 min) than in the no-recurrence group (85±59.5 min). In addition, a total of 5 surgeons with an average minimum experience of >15 years performed the surgeries. Additionally, the patients who underwent resection of the middle (central) part of the hyoid bone showed no recurrence. The general surgery group had no recurrences, compared with a recurrence rate of 13.3% in the pediatric surgery group, while the otolaryngology-head and neck surgery group had the highest rate at 23.1%. Nevertheless, these differences did not reach statistical significance when compared with the no-recurrence group (Table 2). Association of recurrence with postoperative variables. All recurrent cases underwent a simple excision.
A significant increase in the risk of recurrence (p<0.001) was found for simple excision compared with other procedures. Moreover, the recorded postoperative complications were significantly (p=0.002) more frequent in the recurrence group (71.4%) than in the no-recurrence group (11.9%). However, none of the other postoperative variables showed any significant difference between the recurrence and no-recurrence groups (Table 3). Disclosure. Authors have no conflict of interests, and the work was not supported or funded by any drug company. Recurrence-free rates (procedure and specialty). For both the combination of Sistrunk's procedure with total thyroidectomy and Sistrunk's procedure alone, the KM curve showed a similar recurrence-free rate (100%) during the follow-up period. In contrast, the simple excision procedure showed a significantly higher rate of recurrence (p<0.001) according to the log-rank test. In terms of the specialty performing the surgery, general surgery showed superiority, with no recurrent cases recorded in their sample. However, no statistical significance was found for this difference (p=0.202) according to the log-rank test. Thyroglossal cyst recurrence is common and depends mainly on the type of surgery and incomplete removal of the TGDC. In a retrospective study with a large sample size of 207 patients, the overall recurrence rate was 9.7%, and a significant difference was found among surgical types; the Sistrunk operation had a recurrence rate of 5.3%, which was lower than that of plain excision (55.6%). 8 In another retrospective review of a large number of patients (n=352), the overall recurrence rate was 4.5%. 9 By using a 3-dimensional reconstruction, it was shown that the TGDC penetrated the hyoid bone as a result of the forward growth of the hyoid bone, which could possibly result in recurrence. 10 Dissection of all tracts is recommended to decrease the risk of recurrence; however, dissection of the foramen cecum was not found to be as important as partial dissection of the hyoid bone. 9 Furthermore, central neck dissection is considered a favorable option for recurrent cases; it has been reported 7 that central neck dissection could reduce the risk of initial failure in difficult cases, and even though it is a safe procedure, it should still be performed with caution to avoid the risk of injury to the carotid artery, vagus nerve, or larynx. However, there are other risk factors that could increase the risk of recurrence. Rupture of the cyst could increase the risk of recurrence; however, in another study, rupture of the cyst in 53 out of 159 cases was noted to have no effect on recurrences or postoperative complications, despite the extension of the cyst posterior to the hyoid. 3,11 Furthermore, postoperative infection may increase the risk of recurrence. 8 Other possible factors could contribute to recurrence, such as the expertise of the surgeon and years of training, the persistence of infra- or suprahyoid tract remnants, and a misdiagnosis of TGDC. 12 Furthermore, our results revealed that only 2% were carcinomas or lymph nodes with metastasis. This finding is consistent with literature reporting that carcinoma could be present in approximately 1% of cases, but these cases had an excellent prognosis if provided with adequate treatment. 13 Moreover, in a retrospective analysis, PTC in the epithelium of the cyst was reported in 1.4% of cases.
9 However, FNA is still favorable to confirm the diagnosis and distinguish malignant features; it was carried out in 10.2% of our cases. 14 Contrary to malignant transformation, the incidence of ectopic thyroid in TGDC is unknown. Thus, preoperative imaging, such as sonography, computed tomography, and FNA biopsy are recommended as supplementary techniques to confirm the diagnosis. 15 In our study, we did not identify any patient with ectopic thyroid tissue in the postoperative histopathological examination which other studies demonstrated as commensurate finding. [9][10][11] Complications of the Sistrunk procedure are known to be minor ones and wound-related. 15 In our study, we report no complications in 30/40 of patients who underwent Sistrunk operation, and in 39 out of the 49 patients in the total cohort. Moreover, other complications were related to infection and no statistically significant difference was found among the different types of surgery. In addition, the study has shown that surgeries conducted by general surgeons had a better prognosis as no recurrences were detected in cases that underwent general surgeries. These results are consistent with another retrospective study of 102 patients who were followed up for 14 years. 15 The study showed that there was no recurrence detected in the general surgery group as compared to the 3% recurrence rate found in the pediatric surgery group during the follow-up period. 15 It is noteworthy that the pediatric surgery group (n=67) was larger than the general surgery group (n=35). Similarly, in our study, we only had 6 patients in the general surgery group, as compared to the 13 patients in the otolaryngology -head & neck surgery group and 30 patients in the pediatric surgery group. Study limitations. A small sample size with a short follow-up duration can make it hard to draw an efficient conclusion. Secondly, we included all age groups with no comparison regarding specific groups. Lastly, patients coming from 3 different centers and 3 different departments may give rise to variabilities in surgical techniques. Therefore, further multicenter studies with a larger sample size with more evaluation of the Sistrunk surgical procedure among all surgical specialties with focus on recurrent TGDC cases are strongly suggested. In conclusion, there is a significant difference in the recurrence rates and postoperative complications among different types of surgeries. No significant superiority has been detected when comparing the different specialties in terms of recurrence rates. The Sistrunk procedure is the gold standard technique, showing a higher recurrence-free rate as compared to a simple excision. Fine needle aspiration is highly recommended as a non-invasive modality to exclude malignancy and improve outcomes.
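To illustrate the kind of recurrence-free analysis described in the statistical methods above (Kaplan-Meier estimates compared across procedures with a log-rank test), a minimal sketch in Python is shown below. It assumes the third-party lifelines package and uses entirely hypothetical follow-up data; it is not the authors' SPSS analysis or their dataset.

```python
# Minimal sketch of a Kaplan-Meier recurrence-free analysis with a log-rank test.
# Assumes the third-party 'lifelines' package; all numbers are hypothetical.
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Follow-up time in months and recurrence indicator (1 = recurrence, 0 = censored)
sistrunk_t = [24, 24, 24, 24, 24, 24]
sistrunk_e = [0, 0, 0, 0, 0, 0]          # no recurrences observed
excision_t = [3, 6, 9, 24, 12, 24]
excision_e = [1, 1, 1, 0, 1, 0]          # several recurrences

kmf = KaplanMeierFitter()
kmf.fit(sistrunk_t, event_observed=sistrunk_e, label="Sistrunk")
print(kmf.survival_function_)             # recurrence-free probability over time

kmf.fit(excision_t, event_observed=excision_e, label="Simple excision")
print(kmf.survival_function_)

result = logrank_test(sistrunk_t, excision_t,
                      event_observed_A=sistrunk_e, event_observed_B=excision_e)
print(f"log-rank p-value: {result.p_value:.4f}")
```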
2020-08-14T13:01:28.904Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "eb03a0daedb2d849792e28040880c14398029f50", "oa_license": "CCBYNCSA", "oa_url": "https://smj.org.sa/content/smj/41/8/878.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "15bc94d8f4e344deb95f4f9f1cca8a8041911d57", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
16031054
pes2o/s2orc
v3-fos-license
A study of the potential anticancer activity of Mangifera zeylanica bark: Evaluation of cytotoxic and apoptotic effects of the hexane extract and bioassay-guided fractionation to identify phytochemical constituents The present study investigated the potential anticancer activity of the bark of Mangifera zeylanica, an endemic plant in Sri Lanka that has been traditionally used for cancer therapy. Cytotoxic and apoptotic effects were investigated in vitro using sulphorodamine assay, acridine orange and ethidium bromide staining, caspase-3 and −7 activity, DNA fragmentation and reverse transcription-quantitative polymerase chain reaction in estrogen receptor positive MCF-7 and triple-negative MDA-MB-231 breast cancer cell lines, SKOV-3 ovarian cancer cell line and MCF-10A normal mammary epithelial cells. Hexane extract demonstrated increased levels of cytotoxicity in cancer cells (IC50, 86.6–116.5 µg/ml) compared with normal cells (IC50, 217.2 µg/ml). Chloroform extract demonstrated increased cytotoxicity to normal cells (IC50, 92.9 µg/ml) compared with cancer cells (IC50, 280.1–506.5 µg/ml). Exposure to the hexane extract led to morphological changes characteristic of apoptosis and DNA fragmentation in the three cancer cell lines. Caspase-3 and −7 were significantly activated in MDA-MB-231 and SKOV-3 cells, indicating the occurrence of caspase-dependent apoptosis in these cells, and caspase-independent apoptosis in MCF-7 cells. Furthermore, upregulation of proapoptotic Bcl-2-associated X protein occurred in the three cancer cell lines, and antiapoptotic survivin was downregulated in MCF-7 and SKOV-3 cells; by contrast, tumor protein p53 was upregulated only in MCF-7 cells, suggesting p53-mediated apoptosis in MCF-7 cells and p53-independent apoptosis in the remaining cancerous cell lines. In addition, fraction M1 obtained from bioactivity-guided fractionation of the hexane extract demonstrated increased cytotoxicity in cancer cells (IC50, 15.4–38.7 µg/ml) compared with normal cells (IC50, 114.6 µg/ml), with the highest cytotoxicity observed in MDA-MB-231 triple-negative breast cancer cells. The hexane extract of M. zeylanica bark contained polyphenols and flavonoids, and caused free radical scavenging activity. Its gas chromatography-mass spectrometry profile revealed the presence of long-chain hydrocarbons, including β-sitosterol and β-amyrin. Fraction M1 contained seven unknown compounds and a small number of known non-cytotoxic compounds. Collectively, results obtained in the present study indicate that the hexane extract of M. zeylanica bark mediates cytotoxic activities through induction of apoptosis in three cancer cell lines; thus, the hexane extract may be used to isolate novel anti-cancer compounds. Introduction Breast cancer accounts for almost 1/4 of all cancers diagnosed in women (1). Among the molecular subtypes of breast cancer, estrogen receptor (ER)-positive subtypes respond to anti-estrogen therapy (2), but have been observed to develop resistance (3). Triple-negative breast cancer, which does not express ER, progesterone receptor or human epidermal growth factor receptor 2 (HER2), is more aggressive and has a reduced number of treatment options (4). Anti-estrogens and trastuzumab are not effective for the treatment of triple-negative cancer, as cells do not express ER or HER2 (5); therefore chemotherapy is the only effective treatment option (6). Besides being expensive, radiotherapy and chemotherapy may cause serious side effects (7). 
Therefore, it is necessary to discover novel anticancer compounds that cause fewer adverse effects. Plants and other natural sources have provided ~60% of anti-cancer agents currently in use (8); however, there are a number of traditionally used plants that remain to be scientifically validated. A study of the potential anticancer activity of Mangifera zeylanica bark: Evaluation of cytotoxic and apoptotic effects of the hexane extract and bioassay-guided fractionation to identify phytochemical constituents Mangifera zeylanica (family, Anacardiaceae) is a plant endemic to Sri Lanka, and is typically found in the intermediate and wet zone forests (9). It is commonly known as 'Etambaʼ, and grows as a wild species that bears edible fruit. M. zeylanica has been used traditionally for cancer therapy in Sri Lanka. However, these claims have not been scientifically validated. Mangiferin is the only reported compound isolated from M. zeylanica (10). Therefore, the present study was conducted to evaluate the potential cytotoxic and apoptotic effects of M. zeylanica on breast and ovarian cancer cells and to identify phytochemical constituents in active fractions obtained from bioactivity-guided fractionation. Materials and methods Plant material, chemicals, cell lines and cell culture reagents. Approval was obtained from the Department of Wildlife Conservation, Government of Sri Lanka (Columbo, Sri Lanka) for collecting M. zeylanica bark for research. The bark (2.5 kg) was collected from Imaduwa (Galle, Sri Lanka) and the plant was identified by a botanist at Bandaranayke Memorial Ayurvedic Research Institute (BMARI; Nawinna, Maharagama, Sri Lanka). The voucher specimen (#1221 A) was deposited at BMARI. All chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA) unless otherwise specified. Cell lines and 10% fetal bovine serum were acquired from the American Type Culture Collection (ATCC; Manassas, VA, USA). Extraction and preparation of plant extract. Finely powdered dried bark (2.5 kg) was subjected to sequential extraction using hexane, chloroform, ethyl acetate and methanol (thrice with each solvent) by sonicating for 3 h at room temperature. All resulting extracts were filtered and evaporated using an R-3 rotary evaporator (BÜCHI Labortechnik AG, Flawil, Switzerland) under reduced pressure at 40˚C to obtain crude extracts of hexane, chloroform, ethyl acetate and methanol. Stock solutions were prepared by dissolving in dimethyl sulfoxide (DMSO), and diluted to working solutions prior to use (the final DMSO concentration was 0.5% v/v). Preliminary phytochemical analysis, determination of total flavonoid and polyphenol content and free radical scavenging activity. Hexane extract of M. zeylanica was tested for the presence of polyphenols (11), flavonoids (12), lipids, sterols and saponins (13,14) using previously described methods with minor modifications as required. Polyphenol content was expressed as gallic acid equivalent, and flavonoid content as quercetin equivalent, per 1 g of plant extract. Free radical scavenging activity of the extract was investigated by 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay (15) with minor modifications. Hexane extract (0.5 ml) was added at various concentrations (25,50,100,200 and 400 µg/ml) to 0.5 ml of DPPH (Sigma-Aldrich) solution (5.9 g in 100 ml methanol) and incubated in the dark for 30 min, followed by absorbance (A) reading at 517 nm (Synergy™ HT Multi-Mode Microplate Reader; Bio-Tek Instruments, Inc., Winooski, VT, USA). 
Percentage scavenging ability was calculated using the following equation: percentage scavenging = [(A control - A sample)/A control] x 100, and the half maximal effective concentration (EC50) was determined from these values. Ascorbic acid was utilized as the positive control. Cell culture and cytotoxicity assay. MCF-7 human ER-positive breast cancer cells, MDA-MB-231 triple-negative breast cancer cells, SKOV-3 ovarian epithelial cancer cells and MCF-10A normal mammary epithelial cells were maintained in ATCC-recommended medium [MCF-7 cells, Dulbecco's modified Eagle's medium (DMEM; ATCC 30-2002); MDA-MB-231 cells, Leibovitz's L-15 medium (ATCC 30-2008); SKOV-3 cells, McCoy's 5A medium (ATCC); and MCF-10A cells, DMEM (ATCC)] with 10% fetal bovine serum, insulin (Sigma-Aldrich; 0.01 mg/ml), streptomycin (Sigma-Aldrich; 0.1 mg/ml) and penicillin (Sigma-Aldrich; 100 U/ml). All cells were cultured at 37˚C in an atmosphere of 5% CO2, with the exception of MDA-MB-231 cells, which were cultured without CO2. Cells were harvested by trypsinization and seeded into 96-well plates (product no. 3860-096; Iwaki Cell Biology, Iwaki, Japan) at a density of 5x10^3 cells/well. Following 24 h incubation, cells were treated with various doses (25, 50, 100, 200 or 400 µg/ml) of hexane, chloroform, ethyl acetate or methanol extracts, or mangiferin. The cytotoxic effect of the extracts was assessed by sulforhodamine B (SRB) assay following 24 h incubation (16). Briefly, cells were fixed using 50 µl of ice-cold 50% trichloroacetic acid, incubated for 60 min at 4˚C, washed with tap water five times and stained using 0.4% SRB solution (100 µl stain/well). Plates were subsequently incubated at room temperature for 15 min, the SRB solution was decanted and unbound dye was removed by washing with 1% acetic acid five times, followed by air-drying. Unbuffered Tris-base solution (200 µl/well) was added to the wells to solubilize the bound SRB dye. The contents were mixed on an agitator for 1 h at room temperature. Absorbance was read at optical density 540 nm (Synergy™ HT Multi-Mode Microplate Reader) and percentage cell viability was calculated as [(mean of control group - mean of treated group)/mean of control group] x 100%. All experiments were performed in triplicate. Paclitaxel (Sigma-Aldrich) was utilized as the positive control. Negative controls received ATCC-recommended medium and DMSO. Identification of active fractions of the M. zeylanica bark extract. The crude hexane extract, which was cytotoxic to cancer cells and less cytotoxic to normal cells, was subjected to a series of solvent-solvent partitions. It was initially partitioned between hexane and MeOH/H2O (9:1, v/v) and subsequently, following separation of the hexane layer, the aqueous layer was diluted with water to a composition of MeOH/H2O (6:4, v/v) and extracted with chloroform. The aqueous layer was subsequently concentrated under reduced pressure and partitioned between ethyl acetate and water. A total of four fractions, namely hexane-, chloroform-, ethyl acetate- and water-soluble fractions, were thus obtained. Cytotoxicity was contained in the chloroform-soluble fraction. The dried chloroform layer (1.1 g) was subjected to silica gel column chromatography (230-400 mesh; cat no. 177/03; Daihan Labtech India Pvt. Ltd., Delhi, India) and eluted with 100 ml each of hexane-ethyl acetate (8:2, 7:3, 6:4, 1:1, 4:6, 3:7, 2:8, 1:9, v/v), ethyl acetate-methanol (1:1, v/v) and methanol. All the solvents for chromatography separations were purchased from Sigma-Aldrich.
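The two calculations described in this section, percentage scavenging from DPPH absorbance and percentage cell viability from SRB absorbance, together with reading the half-maximal concentration (EC50 or IC50) off the concentration-response curve, can be sketched as follows. The function names and absorbance values are hypothetical, and linear interpolation is only one simple way of estimating a half-maximal concentration, not necessarily the approach used by the authors.

```python
import numpy as np

def percent_response(a_control, a_sample):
    """[(A_control - A_sample) / A_control] x 100, as in the scavenging and
    viability formulas described above."""
    return (a_control - a_sample) / a_control * 100.0

def half_maximal_concentration(concentrations, responses, target=50.0):
    """Estimate the concentration giving a 50% response by linear interpolation
    of the concentration-response curve (a simple approximation)."""
    c = np.asarray(concentrations, dtype=float)
    r = np.asarray(responses, dtype=float)
    order = np.argsort(r)                      # np.interp requires increasing x
    return float(np.interp(target, r[order], c[order]))

# Hypothetical DPPH example: control absorbance and extract-treated absorbances
conc = [25, 50, 100, 200, 400]                 # µg/ml, as in the assays above
a_control = 0.80
a_sample = [0.72, 0.62, 0.47, 0.30, 0.12]
scavenging = [percent_response(a_control, a) for a in a_sample]
print("EC50 ~", round(half_maximal_concentration(conc, scavenging), 1), "µg/ml")
```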
Active fractions identified by SRB assay were monitored by normal-phase thin-layer chromatography (TLC) using hexane-ethyl acetate (1:1, v/v) as the mobile phase. As all cytotoxic fractions produced almost a clear spot during normal-phase TLC, all fractions were pooled and concentrated to give T 1 . T 1 was monitored on reversed-phase TLC using methanol-water (9.5:0.5, v/v) as the mobile phase, fractionated in a reversed-phase column (C 18 ), and eluted with 10 ml each of methanol-water (7:3, 8:3, 9:3, v/v) and methanol. Fractions identified as most cytotoxic by SRB assay were monitored by reversed-phase TLC using methanol-water (9:1, v/v) as the mobile phase. Following observation of the behaviour of these fractions in reversed phase-TLC, 500 µl from each active fraction was pooled to give the final fraction (M 1 ) and its cytotoxicity to cancer cells and normal mammary epithelial cells was assessed. Evaluation of apoptotic effects. The potential apoptotic effects of the hexane extract were assessed by investigating its effect on caspase-3 and -7 activity, morphological changes and DNA fragmentation. The effect on caspase-3 and -7 activity was determined in the three cancer cell lines. Cells were treated with the hexane extract for 4 h (25, 50, 100, 150 and 200 µg/ml) or 24 h (5, 10, 25, 50 and 100 µg/ml). Caspase activity was assessed using ApoTox-Glo™ triplex assay according to the manufacturer's protocol (Promega Corporation, Madison, WI, USA) and compared with untreated controls. The three cancer cell lines (5x10 5 cells/ml) were treated with 200 and 400 µg/ml of the hexane extract for 24 h and harvested by trypsinization and centrifugation. The resulting cell pellets were subsequently incubated for 1 h at 55˚C in freshly prepared lysis buffer (5 mM Tris-HCl, pH 8; 1 M NaCl and 5 mM ethylenediaminetetraacetic acid, pH 8; 0.5% sodium dodecyl sulfate and proteinase K; 200 µg/ml). Following incubation with RNaseA (200 µg/ml) for 2 h at 50˚C, DNA was extracted using phenol-chloroform-isoamyl alcohol. Extracted DNA was visualised under ultraviolet light to assess the effect on DNA fragmentation (Quantum-ST4 1100/20 M; Fisher Biotec Pty Ltd., Wembley, Australia) following electrophoresis on a 2.0% agarose gel stained with ethidium bromide (EB). Cell morphology was assessed by examining acridine orange (AO)/EB-stained (17) treated cells. Cells at 70-80% confluence were harvested by trypsinization, seeded into 24-well plates (Iwaki Cell Biology) on cover slips (5x10 4 cells/well) and incubated for 24 h in a humidified atmosphere at 37˚C in 5% CO 2 . Cells were subsequently treated with 25, 50, 100, 200 and 400 µg/ml hexane extract, incubated for 24 h, rinsed with cold phosphate-buffered saline and fixed with 4% formaldehyde at room temperature. AO/EB solution (10-20 µl) was added to each well and cells were observed under a fluorescence microscope (BX51 TRF; Olympus Corporation, Tokyo, Japan). RNA isolation and reverse transcriptase quantitative polymerase chain reaction (RT-qPCR). The three cancer cell lines (200,000 cells/ml) were cultured in cell culture flasks, treated with the hexane extract at 100 or 150 µg/ml for 4 h, and 50 or 75 µg/ml for 24 h. Following treatment, cells were harvested and total RNA was extracted with TRIzol ® Reagent (Invitrogen; Thermo Fisher Scientific, Inc., Carlsbad, CA, USA) according to the manufacturer's protocol. 
Total extracted RNA (2 µg) and 50 ng of random primers (Integrated DNA Technologies, Coralville, USA) were mixed in a PCR tube (0.2 ml) and the total volume was made up to 13.5 µl with diethylpyrocarbonate (DEPC)-treated ultrapure water for reverse transcription. The resulting RNA-random primer mixture was denatured at 70˚C for 5 min and subsequently quenched on ice for 2 min to prevent the formation of secondary structures. Complementary (c)DNA was synthesized by adding 5 µl 5X buffer, 5 µl 10 mM deoxynucleotide mixture (deoxyadenosine triphosphate, deoxyguanosine triphosphate, deoxycytidine triphosphate and deoxythymidine triphosphate), 25 units of RNasin and 200 units of Moloney murine leukemia virus reverse transcriptase (all Thermo Fisher Scientific, Inc.), and the reaction mixture (25 µl) was incubated at 37˚C for 60 min using a thermal cycler. RT-qPCR was performed in a Stratagene Mx3000P using the MESA Green qPCR Master Mix Plus for SYBR Assay (Eurogentec, Seraing, Liège, Belgium) with the primers listed in Table I (Integrated DNA Technologies); p53 primers were omitted for SKOV-3 cells, which are p53-null. Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was utilized as the housekeeping gene.

Table I. Primers used for reverse transcription-quantitative PCR and the PCR product size.

Gene | Forward primer, 5'-3' | Reverse primer, 5'-3' | Size, bp
Bcl-2-associated X protein | TCCAGGATCGAGCAGGGCGAA | CGATGCGCTTGAGACACTCGCT | 109
Tumor protein p53 | TCTGGCCCCTCCTCAGCATCTT | TTGGGCAGTGCTCGCTTAGTGC | 369
Survivin | TGGCCGCTCCTCCCTCAGAAAA | GCTGCTGCCTCCAAAGAAAGCG | 190
GAPDH | GGCATTGCCCTCAACGACCAC | ACATGACAAGGTGCGGCTCCCTA | 283

PCR, polymerase chain reaction; Bcl, B-cell lymphoma 2; GAPDH, glyceraldehyde-3-phosphate dehydrogenase.

The reaction was performed in a total volume of 25 µl, containing 2 µl cDNA sample, 0.5 µl of each primer (0.5 µM), 12.5 µl SYBR Green reaction mix and DEPC-treated ultrapure water (9.5 µl). PCR amplification was performed in duplicate wells. The cycling conditions were as follows: denaturation step (95˚C for 10 min), and 40 cycles of three-step amplification (denaturation, 95˚C for 30 sec; annealing, 56˚C for 1 min; and extension, 72˚C for 1 min). In addition, the specificity of the products was examined by analyzing the melting point following each reaction. The formula ΔCq = Cq(target gene) − Cq(GAPDH) was used to determine the ΔCq values. Following this initial calculation, ΔΔCq values were calculated using the formula ΔΔCq = ΔCq(treated) − ΔCq(untreated). Expression of the gene of interest in the treated cells was measured relative to that of the untreated control cells, and results were quantified using the formula 2^−ΔΔCq (18). Groups were compared using Dunnett's post hoc test, and P<0.05 was considered to indicate a statistically significant difference.

Fraction M1 was strongly cytotoxic to the three cancer cell lines investigated in the present study and less cytotoxic to normal cells (Fig. 1A and B). Among the cancer cell lines studied, the highest cytotoxic response was observed in the MDA-MB-231 triple-negative cell line (15.42±0.41 µg/ml).

Apoptosis is induced by the hexane extract of M. zeylanica bark. In response to treatment with the hexane extract, caspase-3 and -7 activity significantly increased in MDA-MB-231 and SKOV-3 cells in a time- and dose-dependent manner compared with untreated controls (P<0.001); however, caspase-7 was not activated in MCF-7 cells at 4 or 24 h post-incubation (Fig. 2).
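The relative-quantification arithmetic used for the RT-qPCR data above (ΔCq, ΔΔCq and 2^−ΔΔCq) is simple enough to verify with a few lines of code. The sketch below, in Python, follows the formulas given in the text; the Cq values are invented for illustration and are not measurements from this study.

```python
def fold_change_ddcq(cq_target_treated, cq_gapdh_treated,
                     cq_target_untreated, cq_gapdh_untreated):
    """Relative expression of a target gene vs. untreated control (2^-ΔΔCq method)."""
    dcq_treated = cq_target_treated - cq_gapdh_treated        # ΔCq, treated cells
    dcq_untreated = cq_target_untreated - cq_gapdh_untreated  # ΔCq, untreated cells
    ddcq = dcq_treated - dcq_untreated                        # ΔΔCq
    return 2.0 ** (-ddcq)

# Illustrative Cq values only: a target gene whose ΔCq drops by ~2 cycles after treatment
print(round(fold_change_ddcq(24.1, 18.0, 26.3, 18.1), 2))  # ≈ 4.3-fold upregulation
```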
The EC50 values obtained for the hexane extract indicate that it has free radical scavenging activity, although its activity is lower than that of the ascorbic acid positive control (EC50 = 4.2 µg/ml); EC50 values higher than that of the positive control correspond to lower scavenging activity. AO/EB staining (Fig. 3A) revealed primary morphological evidence of apoptosis (including chromatin condensation, nuclear fragmentation and changes in the size and shape of cells) in the three cancer cell lines at 24 h post-incubation. DNA fragmentation, a characteristic of late apoptosis, was observed in the three cancer cell lines exposed to the hexane extract for 24 h, with no such evidence observed in control cells (Fig. 3B).

RT-qPCR analysis of p53, Bcl-2-associated X protein (Bax) and survivin genes reveals differential expression of various tumor-associated factors. The relative mRNA expression of the genes investigated in the three cancer cell lines is shown in Fig. 4.

Discussion

Of the four organic extracts of M. zeylanica bark, the percentage yield was lowest for the hexane extract. However, the hexane extract was selectively cytotoxic to the cancer cells investigated in the present study and contained secondary metabolites, including flavonoids, tannins, steroids, reducing sugars and phenolic compounds, while saponins were absent. The polyphenol content of the hexane extract was greater than the flavonoid content. The cytotoxicity of the hexane extract to ER-positive (MCF-7) and triple-negative (MDA-MB-231) breast cancer cells, and to ovarian epithelial cancer cells (SKOV-3), was dose-dependent, and this extract demonstrated reduced cytotoxicity to normal mammary epithelial cells. By contrast, the chloroform extract demonstrated reduced cytotoxicity in the cancer cells and increased cytotoxicity in the normal cells investigated in the present study. The M1 fraction, obtained from fractionation of the hexane extract, additionally demonstrated high levels of cytotoxicity in the three cancer cell lines and reduced cytotoxicity in normal mammary epithelial cells. Notably, the highest cytotoxicity was exerted on triple-negative cells. Mangiferin was not observed to exert cytotoxic effects on any of the cancer cell lines investigated in the present study. García-Rivera et al (19) similarly failed to identify any significant cytotoxicity of mangiferin in MDA-MB-231 cells. Thus, compound(s) other than mangiferin in M. zeylanica appear to mediate the cytotoxic and apoptotic effects observed in the present study. The homeostasis of organs and tissues depends upon the vital role of apoptosis, the dysregulation of which may be observed in cancer (20,21). Apoptosis involves the sequential activation of a cascade of proteases, known as caspases. There are two classes of caspase, initiators and effectors, and the latter class includes caspase-3 and -7 (22). The extrinsic and intrinsic pathways of apoptosis merge to form a common pathway, which is mediated by these effector caspases (23). In the present study, characteristic features of apoptosis, including activation of caspase-3 and -7 (except in MCF-7 cells), nuclear fragmentation and chromatin condensation, were clearly observed in the three cancer cell lines in response to treatment with the hexane extract of M. zeylanica bark. Activation of caspase-7 was not observed in MCF-7 cells, and these cells do not express caspase-3.
Thus, it is possible that the hexane extract caused caspase-independent apoptosis in MCF-7 cells through the intrinsic pathway, potentially via activation of apoptosis-inducing factor or endonuclease G, which are responsible for DNA fragmentation (24). Triple-negative breast cancer cells and ovarian epithelial cancer cells demonstrated typical activation of caspase-3 and -7 following exposure to the hexane extract. As the presence of caspase-3 and -7 alone is not able to signify whether the intrinsic or extrinsic pathway has been activated, additional components require investigation in order to ascertain the pathways activated. Bax and p53 genes have significant roles in apoptosis; increased expression of Bax is known to induce apoptosis (25), while p53, in addition to mediating apoptosis, regulates the antiapoptotic gene survivin (26). In the present study, the upregulation of Bax and p53, with concomitant downregulation of survivin, observed in MCF-7 breast cancer cells in response to the hexane extract suggested that apoptosis in these cells may be mediated via the intrinsic pathway. Triple-negative breast cancer cells, which carry a mutant p53, demonstrated upregulation of Bax, while p53 and survivin expression was not altered in these cells following treatment with the hexane extract; this suggested that a p53-independent pathway may mediate apoptosis in these cells. In the ovarian epithelial cancer cells, which are p53 null, proapoptotic Bax was upregulated and antiapoptotic survivin was downregulated. It is likely that a p53-independent pathway, such as the mitochondria-dependent 'intrinsic' cytochrome pathway, is involved in the mediation of apoptosis in these cells (27). The effect of the hexane extract on the activation of caspases and on mRNA expression of proapoptotic and antiapoptotic genes observed in the present study suggested that M. zeylanica exerts its antiproliferative effects, at least partly, via apoptosis; however, the underlying mechanism of apoptosis may differ between the three cancer cell lines investigated. Oxidants are able to damage DNA and cause mutations, which may lead to carcinogenesis, and are additionally able to stimulate cell division (28). Antioxidants reduce oxidative damage to DNA and reduce aberrant increases in cell division (29). The results of the present study demonstrated that the hexane extract of M. zeylanica possessed antioxidant ability, as revealed by the observed free radical scavenging activity. GC-MS analysis of the hexane extract identified that it was rich in sterols and long-chain hydrocarbons. β-sitosterol and β-amyrin detected in the hexane extract have been reported to be cytotoxic and apoptosis-inducing compounds in MCF-7 breast cancer cells and HL-60 leukemia cells, respectively (30)(31)(32). The M 1 fraction was identified to contain 7 unknown compounds. It additionally contained a small number of known compounds that are not cytotoxic. GC-MS profiles of active fractions gave the present study a strong direction for isolation of phytochemicals from the hexane extract, which is currently being investigated in additional studies. In conclusion, the results of the present study provide confirmatory evidence for the presence of anticancer compounds in M. zeylanica, an endemic plant used by traditional practitioners in Sri Lanka for the treatment of cancer. 
Of the two solvent extracts identified to be cytotoxic (hexane and chloroform extracts), the hexane extract demonstrated a greater cytotoxicity in the three cancer cell lines and reduced cytotoxicity in normal mammary epithelial cells. Furthermore, the hexane extract exerted apoptotic and antioxidant effects. The greater cytotoxic effect exerted by the active fraction, particularly on triple-negative cells, warrants additional studies investigating the anticancer effects of M. zeylanica.
Design, Synthesis, and Biological Evaluation of 5,6,7,8-Tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidines as Microtubule Targeting Agents

A series of eleven 4-substituted 5,6,7,8-tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidines were designed and synthesized and their biological activities were evaluated. Synthesis involved the Gewald reaction to synthesize ethyl 2-amino-4,5,6,7-tetrahydrobenzo[b]thiophene-3-carboxylate ring, and SNAr reactions. Compound 4 was 1.6- and ~7-fold more potent than the lead compound 1 in cell proliferation and microtubule depolymerization assays, respectively. Compounds 4, 5 and 7 showed the most potent antiproliferative effects (IC50 values < 40 nM), while compounds 6, 8, 10, 12 and 13 had lower antiproliferative potencies (IC50 values of 53–125 nM). Additionally, compounds 4–8, 10 and 12–13 circumvented Pgp and βIII-tubulin mediated drug resistance, mechanisms that diminish the clinical efficacy of paclitaxel (PTX). In the NCI-60 cell line panel, compound 4 exhibited an average GI50 of ~10 nM in the 40 most sensitive cell lines. Compound 4 demonstrated statistically significant antitumor effects in a murine MDA-MB-435 xenograft model.

Introduction

Microtubules have long been recognized as effective targets for the treatment of many human malignancies [1,2]. Microtubules are involved in a variety of cellular functions including mitosis, motility, intracellular transport, trafficking and organization, including positioning of organelles [1,3,4]. Molecules binding to tubulin and interrupting tubulin dynamics are recognized as microtubule targeting agents (MTAs), and they have been used clinically as single agents or in combinatorial regimens for the effective treatment of leukemia, lymphoma and various solid tumors [2,3,5]. MTAs are a highly diverse class of cytotoxic agents that include a variety of different chemical scaffolds (Figure 1) [2,3,6]. MTAs can be classified into two major groups: (1) microtubule destabilizers that initiate microtubule depolymerization; and (2) microtubule stabilizers that promote the polymerization of tubulin into microtubules [1,2]. Additionally, MTAs are further divided into seven groups based on their binding sites [6,7]. Two binding sites on tubulin/microtubules have been identified for microtubule stabilizers [8]. First, the taxane site is located on the interior of the microtubule, and all clinically approved microtubule stabilizers, including paclitaxel (Figure 1), docetaxel (Figure 1), cabazitaxel and ixabepilone, bind to this site [8]. The taccalonolides (Figure 1), zampanolide and cyclostreptin are compounds that bind covalently within the taxane site, but to date have not been evaluated clinically [8][9][10]. The second stabilizer site is the laulimalide/peloruside site, which is located on the exterior of the microtubule and named for the natural products that bind to this site [8].
The clinical development of compounds binding to this site has been limited by a lack of in vivo efficacy for laulimalide [11] and supply challenges for peloruside A [12]. In the class of microtubule destabilizers, five sites have been identified: the vinca site, the colchicine site (CS), the maytansine site, the pironetin site [6], and more recently the gatorbulin site (gatorbulin-1, Figure 1) defined by the cyclic peptide of the same name [7]. The vinca alkaloids vinblastine, vincristine, vindesine, and vinorelbine, as well as other structurally unique/unrelated compounds, including eribulin (Figure 1), bind within the vinca site, which is located at the interdimer interface between two tubulin heterodimers in a protofilament [6]. The colchicine binding site is located on β-tubulin at the intradimer interface between the α- and β-tubulin subunits. While the clinically useful vinca alkaloids vinblastine and vincristine were approved decades ago, new MTAs have been approved more recently for clinical use. Eribulin (Figure 1), a simplified synthetic analogue of the natural product halichondrin B, is a microtubule depolymerizer that has unique properties [5,15] and significant utility in the treatment of advanced breast cancer [15]. The dolastatin 10 analogue monomethyl auristatin E and maytansine (Figure 1) analogues are employed as the cytotoxic payloads of antibody-drug conjugates (ADCs) that have found clinical utility [16]. These unconjugated MTAs were too toxic for systemic administration, but their antibody-directed delivery to cancers was designed to reduce off-target toxicities [16]. Continuing challenges with clinically approved MTAs, including the taxanes and vinca alkaloids, are the incidence of dose-limiting side effects and limited efficacy due to multidrug resistance [17]. Cancer cells and patients demonstrate resistance to clinically used agents as a result of the expression of the drug efflux pump P-glycoprotein (Pgp) and the βIII-tubulin isotype [2,18,19], leading to efforts to identify new MTAs that can overcome these mechanisms of drug resistance. Compounds that bind within the CS, including colchicine and CA-4 (Figure 1), have been extensively studied, and several have been evaluated in clinical trials, including combretastatin A-4 phosphate (CA-4P/fosbretabulin), the combretastatin CA-1P prodrug (OXi4503), 2-methoxyestradiol, AVE8062, CKD-516, BNC105P, ABT-751, CYT-997, ZD6126, plinabulin (NPI-2358) and MN-029 [20][21][22]. Colchicine itself is approved for the treatment of gout but is not employed as an anticancer agent due to toxic side effects at the doses necessary for efficacy [23]. However, other CS agents have exhibited promising potential as anticancer candidates [20][21][22]. Many compounds that interact with the CS are able to overcome multiple mechanisms of drug resistance [24]. This suggests that the development of MTAs targeting the CS has the potential to overcome limitations associated with existing drugs and perhaps improve clinical outcomes. This has been challenging, however, and to date no CS agent has received FDA approval for anticancer indications [22]. There is an urgent need to develop new tubulin inhibitors with fewer side effects and good oral bioavailability that are less prone to clinically relevant drug resistance mechanisms.
Rationale

We previously reported [25,26] N4-substituted pyrimido[4,5-b]indoles, including the lead compounds 1-3 (Table 1), as microtubule targeting agents. The target compounds 4-14 of the present work were designed on the basis of the following considerations.

(1) Isosteric replacement: To explore the activities of compounds with the 4,5,6,7-tetrahydrobenzo[b]thiophene scaffold on both inhibition of cancer cell proliferation and microtubule depolymerization, we carried out the isosteric replacement of the scaffold -NH- of the lead compounds 1-3 by sulfur (-S-) to afford target compounds 4-14 (Table 1). Isosteric replacement of -NH- with -S- has literature precedence in improving antiproliferative and microtubule depolymerizing activities [27]. Moreover, pharmacological applications of 5,6,7,8-tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidines have been extensively illustrated in various reports in the literature [28][29][30][31][32][33][34][35][36][37][38]. In addition, the lead tricyclic compounds and the proposed target compounds incorporate a p-methoxyphenyl substitution akin to colchicine and CA-4 (Figure 1). The nature of the heteroatom substitution (S for NH) affects hydrogen bond (HB) strength [39]. Thus, it was also of interest to isosterically replace the oxygen atom of the 4'-OCH3 of 4, 8 and 9 with a sulfur moiety to afford 5, 10, and 11, in analogy to 2.

(2) Decrease in the number of sp2 carbons: Drug candidates show a higher clinical success rate with one or more sp3-hybridized carbon atoms as compared to "flat" molecules, due to the low aqueous solubility of purely aromatic compounds [40]. One of the major limitations of some MTAs, particularly the taxanes, is their poor water solubility [41]. Thus, water-soluble MTAs are highly coveted, and an enormous effort continues to chemically modify and/or formulate analogues to increase their water solubility. Increasing the 'aromatic proportion' in a molecule has a detrimental effect on solubility [40]. The fraction of sp3-hybridized carbon atoms (Fsp3), in other words the fraction of carbon atoms that are saturated, correlates positively with water solubility [40]. In an attempt both to increase the water solubility and to probe the potential interactions with the hydrophobic pocket in the CS, we designed target compounds 4-14 by incorporating sp3-hybridized carbon atoms in the tricyclic scaffold of the lead compounds 1-3.

(3) Variation of the substituents at the 2-position: Compound 7 was specifically designed to determine the effect of replacing the 2-NH2 in 4 with a 2-H. This allows an exploration of the 2-NH2 and hydrogen bond interactions with corresponding amino acids at the CS. It was also of interest to observe the effect of isosteric replacement of the 2-NH2 of compound 4 with a 2-CH3 to afford 8. This would also provide information regarding the activity upon replacement of H with CH3 at the 2-position of the tricyclic scaffold.

(4) Conformational restriction: Conformational restriction or rigidification of a ligand can decrease the entropic penalty [42]. The ligand can adopt a preferred conformation for binding, which might lead to enhanced potency for a given physiological target [42]. In an effort to better define the conformational requirements for biological activities, we systematically incorporated various groups to restrict bond rotations. The conformation of 9 (Figure 3) is determined by three rotatable single bonds: the 4-position C-N bond (bond a), the 1'-position C-N bond (bond b) and the 4'-position C-O bond (bond c). Conformational analysis via molecular modeling and 1H NMR studies [25] suggests that the methyl group on the aniline nitrogen in 1 restricted the free rotation of bond a as well as bond b (Figure 2) and consequently restricted the conformation of the anilino ring. To study the significance of conformational restriction on biological activities, we first designed compounds 8 and 9. In 9, the rotation of bonds a and b was restricted by incorporating a methyl group at the N4-position, affording compound 8. Incorporation of tetrahydroquinoline rings in 6 and 12 further restricted bond b of 4 and 8. The design of compound 13, via the incorporation of a 5-methoxynaphthalene ring, provided a further element of conformational restriction.

Table 1. In vitro activities of the lead compounds 1-3.

Compound | IC50 ± SD in MDA-435 cells (nM) | EC50 for microtubule depolymerization in A-10 cells (nM)
1 | 14.7 ± 1.5 | 130
2 | 89.1 ± 10 | 1100
3 | 130 ± 7.8 | 1200
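The Fsp3 argument in point (2) is easy to quantify for any candidate structure. Below is a minimal sketch assuming the open-source RDKit toolkit is available; the two SMILES strings are generic illustrations (naphthalene versus its partially saturated analogue tetralin), not structures of compounds 1-14.

```python
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def fraction_sp3(smiles: str) -> float:
    """Fraction of carbon atoms that are sp3-hybridized (Fsp3)."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Could not parse SMILES: {smiles}")
    return rdMolDescriptors.CalcFractionCSP3(mol)

# Fully aromatic vs. partially saturated bicycles (illustrative only)
print(fraction_sp3("c1ccc2ccccc2c1"))   # naphthalene -> 0.0
print(fraction_sp3("C1CCc2ccccc2C1"))   # tetralin    -> 0.4
```

Higher Fsp3 values, as in a tetrahydro-fused scaffold, are the property that the design rationale above associates with improved aqueous solubility [40].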
Molecular Modeling

Computational modeling studies were performed to elucidate the binding mode of the lead and target compounds 1-14 and probe the possible interactions with the CS. Compounds 1-14 were docked into the CS (PDB: 6BS2, 2.65 Å) [43] of tubulin using Maestro, Schrödinger 2020-2, New York, NY, USA [44]. Figure 4a shows the docked pose of 4 (cyan) superimposed with colchicine (pink) in the X-ray crystal structure of the CS [43].
Docking scores are listed in Supplementary Materials Table S1. The pyrimidine ring of 4 overlaps with the pyrimidine ring of the crystallized ligand of 6BS2 (Supplementary Materials Figure S1).

Chemistry

Compounds 4-14 were synthesized according to the synthetic routes outlined in Schemes 1-4. The Gewald reaction (Scheme 1) was carried out on a solution of sulfur in ethanol, to which cyclohexanone 15 and ethyl cyanoacetate were added. Morpholine was added dropwise to the solution to obtain 16. Cyclization of 16 with chloroformamidine hydrochloride, formamide and acetonitrile afforded 17, 18 and 19, respectively, using reported methods [25,45,46]. Chlorination [47] of 17-19 then afforded the corresponding 4-chloro intermediates 20-22, which were reacted with the appropriate anilines to provide the target compounds.

Antiproliferative and Microtubule Depolymerization Effects

We investigated the microtubule depolymerization and the antiproliferative activities of compounds 4-14 (Table 2). The compounds that caused at least 50% microtubule depolymerization at a concentration of 10 µM were further evaluated to determine their EC50 values, the concentration that causes the loss of 50% of cellular microtubules as visualized microscopically. Compounds that caused microtubule depolymerization at 10 µM were further evaluated for antiproliferative potency in the drug-sensitive MDA-MB-435 cancer cell line, and the IC50 (concentration required to cause 50% inhibition of proliferation) values were determined using the sulforhodamine B (SRB) assay. Compound 4, the 2-NH2 analogue of 5,6,7,8-tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidine, was the most potent compound of this series for both microtubule depolymerizing and antiproliferative effects, with an EC50 of 19 nM and an IC50 of 9.0 nM (Table 2). Compound 4 was 7-fold more potent than 1 for microtubule depolymerizing effects, indicating that the 5,6,7,8-tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidine ring is significantly better for microtubule depolymerizing activity than the pyrimido[4,5-b]indole ring of 1 (Table 2) and that it additionally contributes to improvements in antiproliferative potency. We next evaluated the importance of the 4'-OMe group of compound 4 by replacing it with an isosteric 4'-SMe (5). The resulting compound 5 was 15-fold more potent than the corresponding lead compound 2 with respect to microtubule depolymerization activity and was additionally 2.3-fold more potent for antiproliferative effects compared with 2. Clearly, this further substantiated the importance of an S in the scaffold over an NH. However, comparing compounds 4 and 5 indicated that the 4'-OMe was better than the 4'-SMe.
Compound 6, the tetrahydroquinoline-substituted compound, a conformationally restricted analogue of 4 around bond b, was 6-fold less potent for antiproliferative effects and for microtubule depolymerizing effects than 4, indicating that conformational restriction in 6 is detrimental to these biological activities. We next focused on substituting the 2-position of the 5,6,7,8-tetrahydrobenzo[4,5]thieno[2,3-d]pyrimidine. The corresponding 2-H analogue 7 of lead compound 3 displayed 27-fold increased potency in the microtubule depolymerizing assay as compared with 3 (Table 2), indicating a more effective engagement with tubulin. Compound 7 was 3.5-fold more potent than compound 3 for antiproliferative effects. The 2-Me analogue 8 displayed slightly lower potency compared to the 2-H compound 7 for both microtubule depolymerizing and antiproliferative effects, and it had 2.7- and 5.8-fold lower potency than the 2-NH2 compound 4 in the microtubule depolymerization assay and antiproliferative assay, respectively. Compound 8, however, with a 2-Me, had a lower EC50/IC50 ratio (1.0 for 8 as compared to 2.2 for 4), indicating a tighter correlation between the microtubule depolymerizing effects and the cancer cell cytotoxicity. Compound 10, the 2-Me analogue of compound 5 with a 4'-SMe group, had 2-fold lower potency than 5 in both assays. In compounds 12 and 13, conformational restriction of the N4-phenyl moiety of 8 about bond b with a 1,2,3,4-tetrahydroquinoline moiety and a 5-methoxynaphthalene, respectively, caused a 2.3- to 3-fold decrease in potency for 12 in antiproliferative and microtubule depolymerizing effects compared to 8 and a 1.5- to 2-fold drop in potency for 13 as compared to 8. Compounds 9, 11, and 14 did not show any activity in the microtubule depolymerization assay, and these were not evaluated for antiproliferative effects, corroborating our previous reports that the N4-Me is crucial for MT activity [26].

Inhibition of Tubulin Assembly and Colchicine Binding

Compounds 4, 5, 7, 8 and 10 were evaluated for their direct effects on purified tubulin assembly and for inhibition of colchicine binding (Table 3). Compounds 4, 5, 7, 8 and 10 inhibited tubulin assembly with activities better than those of the lead compounds 1-3 as well as CA-4. Compound 4 was 2-fold more potent than the lead 1 as an inhibitor of tubulin assembly. On the other hand, compounds 5, 7, 8 and 10 were 2-fold more potent than the standard CA-4. Moreover, compounds 5 and 7 were 5-fold more potent as inhibitors of tubulin assembly than the corresponding lead compounds 2 and 3, respectively. Compounds 4, 5, 7, 8 and 10 inhibited the binding of [3H]colchicine to tubulin by 89-99%, whereas the lead compounds 1, 2 and 3 showed 84, 67, and 62% inhibition of [3H]colchicine binding, respectively. Thus 4, 5, 7, 8 and 10 were more active than the initial lead compounds 1-3. These results clearly demonstrated that these compounds are CS MTAs.

Table 3. Inhibition of tubulin assembly and colchicine binding.

Effect on βIII-Tubulin and Pgp-Mediated Cancer Cell Resistance

Compounds 4-8, 10, 12, and 13 were evaluated for their abilities to overcome βIII-tubulin-mediated drug resistance using an isogenic HeLa cell line pair (Table 4). Consistent with the results obtained in MDA-MB-435 cells, compound 4 was the most potent in the series in the HeLa and HeLa WT βIII cell lines, with 1.6-fold higher potency than the lead compound 1.
Compounds 5 and 7 showed 2-fold higher potency than the lead compounds 2 and 3 in the HeLa and HeLa WT βIII cell lines. The Rr values (Table 4) were calculated by dividing the IC50 of the βIII-tubulin-expressing line by the IC50 obtained in the parental HeLa cells. The expression of βIII-tubulin is known to lead to paclitaxel resistance, and paclitaxel has an Rr value of 8.6 in this cell line pair (Table 4). The target compounds 4-8, 10, 12, and 13 have Rr values of ~1.0 (Table 4), suggesting that they circumvent βIII-tubulin-mediated drug resistance, in contrast to paclitaxel. The potent MTAs 4-8, 10, 12, and 13 were also evaluated for their activity in the SK-OV-3 ovarian carcinoma cell line and the Pgp-expressing subline SK-OV-3 MDR1-M6/6 (Table 4). In these cell lines, compound 4 was again the most potent compound in the series. Comparison of the IC50 values in the parental SK-OV-3 and genetically manipulated SK-OV-3 MDR1-M6/6 cell lines allows for the calculation of a relative resistance value, designated Rr. This value is calculated by dividing the IC50 value obtained in the Pgp-expressing SK-OV-3 MDR1-M6/6 cells by the IC50 obtained in the parental SK-OV-3 cells. Paclitaxel, a known Pgp substrate, has an Rr value of 240, while CA-4, a poor Pgp substrate, has an Rr value of 1.3 (Table 4). Compound 4 had IC50 values in SK-OV-3 and SK-OV-3 MDR1-M6/6 cells comparable to those of CA-4 and an Rr of 1.5, indicating that it is able to overcome drug resistance mediated by Pgp. Here, a correlation between a cell-based assay and a biochemical assay is not always observed, which might be due in part to the ability of the compounds to cross the cell membrane and accumulate intracellularly. Compounds 5-8, 10, 12, and 13 also had Rr values ≤ 1.5, suggesting that they are all poor substrates for Pgp-mediated transport and have advantages over the taxanes and vinca alkaloids in multidrug-resistant cancer cells.

Activity of Compound 4 in the NCI Cancer Cell Line Panel

Compound 4, the most potent compound of the series, was selected for evaluation in the NCI-60 cancer cell line panel [50], and it had a GI50 (concentration causing 50% inhibition of cell proliferation) of ~10 nM against 40 of the 60 cancer cell lines (Table 5). Compound 4 had better potency than the lead compound 1 in 50 cancer cell lines (better by 5- to 6-fold in leukemia, 2- to 17-fold in NSCLC, 2- to 6-fold in colon cancer, 2- to 5-fold in CNS cancer, 2- to 25-fold in melanoma, 2- to 5-fold in ovarian cancer, 2- to 9-fold in renal cancer, 2- to 5-fold in prostate cancer, and 2- to 6-fold in breast cancer compared to lead compound 1) [26]. Thus 4, the thiophene-fused analogue, is up to 25-fold more potent than our previously published lead [26]. Compound 4 was selected for further evaluation in an in vivo xenograft mouse study in light of its nanomolar potency in vitro in the NCI-60 cancer cell line panel and its potent microtubule depolymerization activity. The in vivo effects of 4 were tested in the MDA-MB-435 xenograft model (Figure 5). After conducting initial dose tolerance testing, 4 was administered at a dose of 75 mg/kg three times a week, at which it caused moderate weight loss yet had statistically significant antitumor effects as compared with the control at day 14, the end of the trial. In this trial, there was a trend toward antitumor effects with paclitaxel (15 mg/kg), but this did not reach statistical significance at any day or at trial conclusion.
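The relative resistance values (Rr) discussed above reduce to a simple ratio of IC50 values between a resistance-marker-expressing line and its parental line. The sketch below, in Python, encodes that ratio; the numerical inputs are illustrative and are not the measured values from Table 4.

```python
def relative_resistance(ic50_resistant_nM: float, ic50_parental_nM: float) -> float:
    """Rr = IC50 in the Pgp- or βIII-tubulin-expressing line / IC50 in the parental line.

    Rr close to 1 suggests the compound circumvents that resistance mechanism,
    while a large Rr indicates susceptibility to it.
    """
    return ic50_resistant_nM / ic50_parental_nM

# Illustrative values only
print(round(relative_resistance(14.0, 10.0), 1))    # ~1.4 -> resistance largely circumvented
print(round(relative_resistance(2500.0, 12.0), 0))  # ~208 -> strongly affected by the resistance mechanism
```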
Chemistry

All evaporations were carried out under a vacuum using a rotary evaporator. Analytical samples were dried in vacuo (0.2 mmHg) in a CHEM-DRY drying apparatus over P2O5 at 50 °C. Thin-layer chromatography (TLC) was performed on Whatman Sil G/UV254 silica gel plates (Whatman International Ltd., Maidstone, England), and the spots were visualized by irradiation at 254 nm. Proportions of solvents used for TLC are by volume. All analytical samples were homogeneous on TLC in at least two different solvent systems. Column chromatography was performed on a 70-230 mesh silica gel (Fisher Scientific, Waltham, MA, USA) column. The amount (weight) of silica gel for column chromatography was in the range of 50-100 times the amount (weight) of the crude compounds being separated. Columns were wet-packed with the appropriate solvent unless specified otherwise. Melting points were determined using a digital MEL-TEMP II melting point apparatus with a FLUKE 51 K/J electronic thermometer or using an MPA100 OptiMelt (Stanford Research Systems, Sunnyvale, CA, USA) automated melting point system and are uncorrected. Nuclear magnetic resonance spectra for protons (1H NMR) were recorded on Bruker Avance II 400 (Billerica, MA, USA) (400 MHz) and 500 (500 MHz) systems and were analyzed using MestReC NMR (Mestrelab Research, San Diego, CA, USA) data processing software. The chemical shift (δ) values are expressed in ppm (parts per million) relative to tetramethylsilane as an internal standard: s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet; br, broad singlet; exch, protons exchangeable by addition of D2O. Elemental analyses or high-performance liquid chromatography (HPLC)/mass analysis were used to determine the purities of the target compounds. Elemental analyses were performed by Atlantic Microlab, Inc., Norcross, GA, USA. Elemental compositions are within ±0.4% of the calculated values and indicate >95% purity. Fractional moles of water or organic solvents found in some analytical samples could not be removed despite 24-48 h of drying in vacuo and were confirmed where possible by their presence in the 1H NMR spectra. Mass spectral data were acquired on an Agilent G6220AA TOF LC/MS system using the nano ESI (Agilent chip tube system with infusion chip). HPLC analysis was performed on a Waters HPLC system using an XSelect CSH C18 column. The peak area of the major peak versus other peaks was used to determine purity. All solvents and chemicals were purchased from Sigma-Aldrich Co., USA, or Fisher Scientific Inc., USA,
and were used as received. (25 mL). The mixture was stirred at room temperature for 1 h, then at 60 • C for 12 h. The reaction mixture was cooled to room temperature, and the solvent was removed in vacuo. The crude product was purified by flash column chromatography on a silica column using hexane/ethyl acetate (10:1) as eluent to obtain compound 16 (7. The POCl 3 was evaporated, and the mixture was cooled in an ice bath. The mixture was neutralized using an aqueous NH 4 OH solution to yield a precipitate. The precipitate was collected by filtration, washed with water, dried and dissolved in MeOH. To the solution was added silica gel (1 g), and the solvent was removed under reduced pressure to provide a silica gel plug. Column chromatography was performed with hexane and ethyl acetate (10:1) to generate 20 (2.0 g, 6.19 mmol, 70%) as a brown solid. TLC Rf = 0.68 (hexane: EtOAc, 3:1); mp 234 • C; 1 H NMR (400 MHz, DMSO-d 6 ): δ 5.28 (s, br, 2H, exch., 2-NH 2 ), 2.74 (t, 2H, -CH 2 ), 1.92 (t, 2H, -CH 2 ), 1.56-1.50 (m, 2H, -CH 2 ), 1.43-1.38 (m, 2H, -CH 2 ). This compound was used for the next reaction without further characterization. 4-Chloro-5,6,7,8-tetrahydrobenzo [4,5]thieno [2,3-d]pyrimidine (21): Treatment of 16 (5.0 g, 22.19 mmol) with formamide (4.42 mL, 110.96 mmol) was carried out in a microwave vessel at 180 • C for 12 h. The reaction was cooled to room temperature, and 50 mL water was added to the mixture. The precipitate was collected and dried under high vacuum to afford 18 as a white solid in 72% yield (3.30 g). The product 18 was taken to the next step without characterization. Chlorination of 18 (3.0 g, 14.54 mmol) was performed using POCl 3 (1.4 mL, 14.54 mmol) and pyridine (1.17 mL, 14.54 mmol), and the mixture was kept at reflux for 8 h. The solvent was removed by evaporation, and the residue was neutralized with ammonia in water solution to generate a pale-yellow precipitate. The precipitate was collected by filtration. To the precipitate was added methanol and 2.0 g of silica gel. The solvent was removed under reduced pressure, and a silica plug was prepared. A flash column chromatographic separation was performed using ethyl acetate-hexane as eluent to afford 1.96 g of 21 ( Compounds 20-22 were dissolved in isopropanol, followed by addition of 1-2 drops of HCl and the appropriate anilines. The reaction mixture was stirred for 4-8 h at reflux. The reaction mixture was cooled, and silica gel was added to the solvent mixture to prepare a silica gel plug. A flash column chromatographic separation was performed using ethyl acetate-hexane as eluent to afford 4-7, 9, 11, 12 and 14 with yields of 48-68%. Compounds 9, 11 and 14 were added to NaH in DMF with drop-wise addition of iodomethane to obtain 8, 10 and 13, respectively, in 57-70% yield. N4-(4-methoxyphenyl)-N4-methyl-5,6,7,8-tetrahydrobenzo [4,5]thieno [2,3-d]pyrimidine-2,4-diamine (4): To a solution of 20 (250 mg, 1.04 mmol) in isopropanol (20 mL), 1-2 drops of HCl were added, followed by addition of 4-methoxy-N-methylaniline (157.3 mg, 1.15 mmol), followed by reflux for 6 h. The reaction mixture was cooled to room temperature, silica gel (500 mg) was added, and the solvent was removed under reduced pressure. Purification was performed by column chromatography using 1% MeOH in CHCl 3 as the eluant, and fractions containing the product (TLC) were pooled. The solvent was evaporated to give a white solid that was washed with CHCl 3 to afford 230. 
N4-methyl-N4-(4-(methylthio)phenyl)-5,6,7,8-tetrahydrobenzo [4,5]thieno [2,3-d] pyrimidine-2,4-diamine (5): To a solution of 20 (150 mg, 0.625 mmol) in toluene (8 mL), 1-2 drops of HCl were added, followed by addition of N-methyl-4-(methylthio)aniline (105.5 mg, 0.688 mmol), and the mixture was kept under reflux for 6 h. The reaction mixture was cooled to room temperature, silica gel (500 mg) was added, and the solvent was removed under reduced pressure. Purification was performed by column chromatography using 1% MeOH in CHCl 3 as the eluant, and the fractions containing the product (TLC) were pooled. The solvent was evaporated to give a pale-yellow solid that was washed with CHCl 3 to afford 109. Molecular Modeling Docking of target compounds 4-14 was carried out in the colchicine site of tubulin (PDB: 6BS2, 2.65 Å). The crystal structure PDBs were obtained from the protein database. All docking procedures were performed using various modules of the Schrödinger Maestro suite (Schrödinger, LLC, New York, NY, USA, 2020-2) [49]. The protein was optimized and prepared for docking using the Maestro Protein Preparation Wizard to assess bond order and add missing hydrogens, followed by energy minimization using the OPLS3e force field. Gaps in the protein structures were ignored, as they were far from the active site. The Maestro Induced-fit Grid Generation module was then used to define a 15 × 15 × 15 Å grid from the center of all the ligands. Ligands used in the computational docking study were built using the Maestro 2D Build module. The Maestro LigPrep module was then used to generate conformers of each compound subjected to energy minimization using the OPLS3e force field protocol. The resulting compounds were docked into the prepared protein using the Maestro Induced Fit Docking. Induced Fit Docking was performed with standard precision with flexible ligand sampling. A total of 20 initial poses were generated for each compound. Based on the pose score, the top 4 poses were selected and subjected to energy minimization using the OPLS3e force field. Finally, the top 2 poses per compound were generated and ranked according to the Glide score, which is an approximation of binding energy defined by receptor-ligand complex energies. The top pose was analyzed and presented in the Biological Evaluation and Discussion section. Docking scores are listed in Table S1 (Supplementary Materials). Effects of Compounds on Cellular Microtubules The effects of the compounds on cellular microtubules were evaluated in A-10 cells using indirect immunofluorescence microscopy. These cells were obtained from the American Type Culture Collection (ATCC) (Manassas, VA, USA). Cells were treated with the compounds of interest for 18 h, and the cells fixed with cold MeOH and microtubule structures were visualized using a β-tubulin antibody (Sigma-Aldrich, St. Louis, MO, USA). The concentration that caused loss of 50% of the interphase microtubules was defined as the EC 50 and calculated as previously described [52]. These values represent an average of at least three independent experiments. Sulforhodamine B (SRB) Assay The antiproliferative and cytotoxic effects of the compounds in cancer cells were evaluated using the SRB assay [53] as previously described [54]. MDA-MB-435 cells were obtained from the Lombardi Cancer Center of Georgetown University (Washington, DC, USA). SK-OV-3 and HeLa cells were purchased from ATCC. 
Details about the generation of the SK-OV-3 MDR1-M6/6 and HeLa WTβIII cells were described previously [54]. The IC 50 values represent an average of three independent experiments, each conducted using triplicate points. Quantitative Tubulin Studies Bovine brain tubulin was purified as described previously [55]. The tubulin assembly assay has been described in detail [50]. Briefly, 1.0 mg/mL of tubulin (10 µM) was preincubated for 15 min at 30 • C in 0.8 M monosodium glutamate (pH of 2 M stock solution adjusted to 6.6 with HCl), varying compound concentrations and 4% (v/v) DMSO as compound solvent. After the preincubation, the reaction mixtures were placed on ice and
2022-01-09T16:03:35.049Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "c37ebefbf9a24b66707cac3dadc5cac2ccd7375e", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8146dae7f1d2574f53695d25a60a8b6e9905b5be", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53078389
pes2o/s2orc
v3-fos-license
Corrosive Behavior and Physic-Chemical Characterization of Filtration Tanks

Most drinkable water supplied to the public in Mexico City comes from deep wells which extract water from the subsoil. Before being distributed, it is treated in steel filtration tanks. This water must be subject to evaluation through physic-chemical and bacteriological analyses in order to determine its quality. However, doubts always remain over the influence of the components of this water on the corrosive behavior of the filtration tanks. In light of this, this article studies the physic-chemical characterization values of the water and presents the results. This has also enabled the analysis of the corrosion speed of filtration tank components, boilers and water-cooled systems, where incrustations in pipes, obstructions and loss of heat-transfer efficiency occur, rendering drinkable water bad tasting and, after some time, causing pitting corrosion, although this type of corrosion only causes serious problems in the long term.

Introduction
During 2014 and 2015, an investigation was carried out into how the well water influences and affects the process of filtration in steel tanks [1]. Although we studied the microstructure of steel A284 Grade C and the depth of damage suffered by corrosion, we did not study the physic-chemical characterization of the process, nor was its influence analyzed. This is precisely what we present in this article. The water treatment plant in the study is shown in Figure 1. Regarding control of incrustations, it is known that three conditions are required for them to be formed. First, oversaturation, which occurs when the concentration of dissolved ions such as Ca²⁺, CO₃²⁻, Na⁺, Ba²⁺, and SO₄²⁻ increases, exceeding normal limits of process water solubility. Second, kinetic acceleration: temperature shocks, mechanical or hydrodynamic forces, optimal pH conditions, and sudden pressure changes may accelerate the kinetics of incrustation formation. Third, an optimal surface: non-uniform surfaces, such as those found in heat-exchanger pipes, are optimal substrates that promote adherence of inorganic microcrystals and allow incrustations to build up.

An analysis of the mechanics of the damage and the damage by corrosion can be seen in [1]. The method used for assessment of the corrosion speed of the steels under study is the weight loss undergone by a metal or alloy in contact with a corrosive medium, as can be seen in [2] [3] [4].

Before presenting the physic-chemical characterization, it is useful to present definitions of the main topics of this document: corrosion and physic-chemical characterization. We must do this in order to fully understand its actual influence on the corrosive behavior of well water filtration tanks throughout their useful life.

Corrosion is: 1) damage of a metallic material as a result of a chemical attack by the environment; 2) chemical or electrolytic damage of a material, preferably metallic, due to its reaction to the surrounding environment; and/or 3) impairment of a metal through some means (aside from those which are purely metallic), due to a chemical or electrochemical reaction to the environment, or damage of a metal by other means, aside from those which are purely metallic.
We may classify as humid corrosion: corrosion occurring when a liquid is present; and dry corrosion: corrosion caused by vapors or gases, generally related to high temperatures and direct chemical or electrochemical reaction occurring when there is a reaction between the metal and other non-metallic elements or compounds. Corrosion by electrochemical reaction occurs when electricity-conductive liquids (electrolytes) are present. Most corrosion is caused by liquids.

A metal may be corroded without being in contact with another metal. In this case, the various areas of the metal have various electrical potentials. This may occur due to differences in metallurgical properties of the metal or due to variations on the oxidized layer's surface, such as a crack, finishing, incrustation of factory lamination, direct pollution, etc. In this type of corrosion, the upper part of the metal has access to oxygen in the air and creates the cathode. In the pitted bottom, oxygen is reduced and the metal creates the anode, since the oxygen available is lower, and the corrosion speed increases. Other types of corrosion present throughout the useful life of filtration tanks are: crevice corrosion [5], stress corrosion, biological corrosion, wear corrosion, galvanic corrosion, and atmospheric corrosion.

As for the supply of drinkable water and water treatment plants in Mexico City, it should be known that this has always been a problem for both inhabitants and administrators. In the middle of the nineteenth century, exploitation of underground water began with artesian wells. Then, springs near the city (Desierto de los Leones and Santa Fe) were used. In recent years, water has had to be imported from far-away basins and extracted from deep wells. This has corresponding negative consequences: soil subsidence, weather changes and loss of biodiversity. As water supply is such a problem, consideration about its quality often comes second.

In order to cover the high demand for water, caused by the increasing population and the resulting economic activities, the possibility of finding a large supply has been explored. The main source of water supply to Mexico City has been the large aquifer underlying the city. However, from the beginning of the 20th century, extensive exploitation of this body of water has caused holes and differential subsidence in the area. This threatens to cause salinization of the resource due to low recharge levels.

The inefficient use of water and its lack of treatment have resulted in insufficient surface sources, overexploited aquifers, and pollution of most bodies of water and many aquifers. In addition, monetary charges for water are not appropriate and water is not duly measured. It has been estimated that 55% of water is wasted in the agricultural sector (which uses 78% of the water), and 40% of water is wasted in urban areas (which consume 12% of the total consumption) [6] [7].

Physic-Chemical Characterization
Related to the influence of well water components on the corrosion behavior of filtering tanks, Table 1 shows two heavy metals, iron and manganese, with high influent values: 0.428 mg/l for iron and 0.272 mg/l for manganese. Regarding the effluent, the 0.02 mg/l value is lower than the limit for iron, and 0.01 mg/l higher for manganese, which defines the standard for both cases. In addition, this shows that industrial activity exists in the area where the well is located. Once purification processes and the outflow to be rendered drinkable are determined, reinforced concrete, steel or other material is used to construct tanks, sumps, containers, etc.
Figure 2 shows the location of 16 water treatment plants in the eastern and southern areas of Mexico City, the operation area, as well as the corresponding boroughs [6] [7] [8]. Table 1 shows the physic-chemical characterization of the influent water (well) and the effluent (treatment plant).

Figure 2. Areas in Mexico City for operation of potable water infrastructure [7].

Let us now see the influence of well water components on the corrosive behavior of filtration tanks, through the values of the physic-chemical characterization of water registered in Table 1. Two heavy metals were found, iron and manganese, whose influent values are high: 0.428 mg/l for iron and 0.272 mg/l for manganese. From the effluent water point of view, the 0.02 mg/l value is lower than the limit value for iron, and higher than the limit value for manganese by 0.01 mg/l [8]. In both cases, it marks the norm, and it is indicative that there is industrial activity within the well's area. These concentrations of heavy metals are indicators of the damage and corrosion process already present in the tank, as well as of the gradual detachment of metals inside of it. As for the influent, those concentrations may also be due to the damage and corrosion process of the pipes feeding the well.

Turbidity of water is the optical effect generated when light beams are dispersed or interfered with as they go through a sample of water, due to mineral or organic particles the liquid may contain as a suspension, such as microorganisms, clay, various oxide precipitations, precipitated calcium carbonate, aluminum compounds, et cetera. This effect is generally used as a way to control raw waste water and to characterize the efficiency of secondary treatment, since it is related to the concentration of suspended solids. Its maximum allowable limit in drinkable water is 5 NTU (Nephelometric Turbidity Units). A high turbidity value indicates the probable presence of organic matter and microorganisms, increasing the quantity of chlorine or ozone used for disinfection of waters for the supply of drinkable water. In our case, the turbidity value in the influent is near the maximum allowable limit. The electrical conductivity of the well under study, showing the presence of dissolved solids, as well as the alkalinity, shows high values. The total dissolved solids value is also high, both for the influent and effluent parts. These four parameters together show the existence of inorganic salts in the well.

In addition, with the Chemical Oxygen Demand (COD) value high, the pH shows considerable variations towards alkalinity. A pH of 7 units is neutral; in our case, the influent is 7.96 pH units and the effluent is 8.08 pH units. High alkalinity is due to other factors such as chemical reactions among existing particles, resulting in a pH tending to be basic. Turbidity shows the presence of suspended solids. High COD values indicate pollution by organic and inorganic particles susceptible to being oxidized by potassium dichromate or permanganate, coming from turbidity, dissolved solids, and/or electric conductivity. Hardness is the chemical feature of water determined by the content of carbonates, bicarbonates, chlorides, sulfates and, occasionally, calcium and magnesium nitrates. It is known that it is undesirable in domestic and industrial washing processes, since it causes more soap to be used, producing insoluble salts. In boilers and systems cooled with water, this means incrustations in pipes, obstruction and loss of efficiency of heat transfer, water having a disgusting taste and, in addition, through time, the appearance of pitting corrosion, though it takes time for this type of corrosion to cause real problems in these facilities (see https://www.nace.org/Pitting-Corrosion/). High hardness values are undesirable and should be removed before water may be properly used. Most drinkable water supplies have an average of 250 mg/l hardness, but in this well, stricter parameters of 125 mg/l for magnesium and 30 mg/l for calcium have been established. In our case, the hardness values surpass the established limit values. Even so, corrosion by pH, calcium, magnesium, and microorganisms is discarded.

As for chlorides, the maximum acceptable amount for drinkable water is 250 mg/l, but for human use a range from 100 mg/l to 140 mg/l is recommended, while here it is present in a range of 214 mg/l to 268 mg/l [8]. Water with a high content of oxidizable matter, ammonia, nitrate and nitrite characterizes pollution and, therefore, chlorides have such an origin. If these substances are lacking, such a high content is often due to the fact that the water goes through land rich in chlorides.
However, we do not believe that this is true in our case. Its values pollute the air and are corrosive. Chlorates are soluble in water, pollute it, and, with organic composites, they produce explosive mixtures. Based on this interpretation of the data, corrosion by pH, calcium and magnesium, as well as by microorganisms, is discarded. The fact that the container works under pressure, which makes corrosion slower, must be taken into account.

After the first 8 years of operation of the eastern system water treatment plants in Mexico City, 3 out of 10 filters showed exceedingly high corrosion inside the filtering tanks with A284 Grade C steel plates and e = 9.8 mm wall thickness. Indeed, the corrosion rendered the filters useless. This problem made evident the need to conduct experiments regarding the evolution of corrosion and the structural behavior of the tank damaged by corrosion within the laboratory. Such damage by corrosion generates a local failure in the joint of the false bottom, supporting the filtering material, with the wall of the auto-supported tank. Such local failure causes total and/or partial suspension of the drinkable water supply to Mexico City (see https://www.steelconstruction.info/Corrosion_of_structural_steel). Due to the functioning of such pressurized steel tanks, it is not possible to periodically check the evolution of corrosion inside them: the operation of the filtering tank would have to be suspended, thus affecting the drinkable water supply for up to forty days, i.e. the period required to remove the filtering material and properly check the false bottom and the wall of the tank. We move now to look at the development of the experiment, in order to obtain data related to corrosion in the wall of the damaged filtering tank within a determined period of time; damage analysis, both mechanical and by corrosion; corrosion speed assessment; and discussion of results.

Assessment of Corrosion Speed
The degree of corrosion of the samples under study has been assessed through the corrosion speed, basically defined as the metal mass loss per surface and time unit:

Vc = (metal mass loss) / (surface × time)    (1)

where Vc is the corrosion speed and its units may be expressed in various manners, one of them being mm/year. Consequently, determination of the corrosion speed is made through weight loss. In real structures, this is only possible to ascertain if a small part of the corroded structure is cut, having previously been cleaned and weighed, and said weight is subtracted from the starting weight obtained from a material sample, together with the metal density.

Since in (1) the profile exposure surface value is involved, Table 2 shows the features of the test tubes under analysis. Analyzing the general behavior of the corrosion speed obtained from structural profiles, it was found that the corrosive phenomenon shows its maximum activity through three years of exposure. The total corrosion or mass loss undergone by the metallic material was intended to be quantified using climatic factors. However, it was not possible to apply such a criterion due to the lack of available meteorological data. Relative humidity periods equal to or higher than 80% must be present and, unfortunately, the number of available data was lower. Due to this, the data are not sufficiently representative to validate this criterion [10] [11]. In accordance with the data obtained, an acceptable corrosion speed lower than 0.2 mm/year was determined for test tubes made of steel A284 Grade C. This speed considers that, in a similar period after 8 years of operation, the structure shall keep a good corrosion speed, since corrosion composites create a passivation layer controlling and inhibiting corrosion. This anticipates proper behavior of the structure, even if it loses all the coating due to corrosion. This is in accordance with the Rules for Buildings in Mexico City, which is 1/6 of the calculated plate thickness, in this case, a 3 mm increase [12]. The corrosion criteria of [13] (Table 2) were taken as a reference. Since the plate is subject to tensile stress, failures due to corrosion under stress may occur (stress corrosion cracking). However, this type of failure was not found upon checking the tank.
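The weight-loss determination described above can be expressed as a corrosion speed in mm/year once the exposed surface, exposure time, and metal density are known. The sketch below applies the widely used mass-loss conversion (of the kind given in ASTM G1-type calculations); the specimen values are hypothetical, and only the 0.2 mm/year acceptance value is taken from the text.

```python
# Corrosion speed from weight loss, expressed in mm/year.
# The constant 8.76e4 converts mass loss in grams, area in cm^2, time in hours
# and density in g/cm^3 into a penetration rate in mm/year.

def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, time_h, density_g_cm3):
    """Vc = mass loss / (surface x time), converted to a penetration rate."""
    return 8.76e4 * mass_loss_g / (area_cm2 * time_h * density_g_cm3)

initial_mass_g = 152.40      # cleaned specimen before exposure (hypothetical)
final_mass_g = 151.95        # cleaned specimen after exposure (hypothetical)
area_cm2 = 48.0              # exposed surface of the test tube (hypothetical)
time_h = 3 * 365 * 24        # three years of exposure
density_g_cm3 = 7.85         # typical density of a carbon steel such as A284 Grade C

vc = corrosion_rate_mm_per_year(initial_mass_g - final_mass_g,
                                area_cm2, time_h, density_g_cm3)
print(f"Corrosion speed: {vc:.4f} mm/year (acceptable if below 0.2 mm/year)")
```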
Conclusions
The physic-chemical characterization of the well water shows heavy metals, iron and manganese, with high values: 0.428 mg/l for iron and 0.272 mg/l for manganese; such concentrations show damage and corrosion resulting in a progressive detaching of metals inside the containers already studied [1]. The electrical conductivity and total alkalinity values obtained in the study, which show the presence of dissolved solids, are high. In like manner, the total dissolved solids value is high both in the influent and effluent parts. These parameters together show the probable existence of inorganic salts in the area under study. The influent value is 7.96 pH units and the effluent value is 8.08 pH units. High alkalinity is due to factors such as chemical reactions among existing particles generating a pH tending to be basic. Turbidity shows the presence of suspended solids. High COD values show pollution due to organic and inorganic particles that may be oxidized by potassium dichromate or permanganate, from turbidity, dissolved solids and/or electric conductivity.

Most drinkable water supplies within the area have a 250 mg/l average hardness but here, a strict parameter has been defined from its origin: 125 mg/l for magnesium and 30 mg/l for calcium. In our case, the hardness values are over the defined limit values. That said, corrosion due to pH, calcium, magnesium, and microorganisms is rejected. As has been said, the maximum amount of chlorides acceptable in drinkable water is 250 mg/l, but it is recommended that, for human consumption, it range from 100 mg/l to 140 mg/l. Here, chlorides are within an interval from 214 mg/l to 268 mg/l. Finally, from this interpretation of the data, corrosion due to pH, calcium, magnesium, and microorganisms is rejected. It must be taken into consideration that the containers work under pressure, which renders corrosion slower.

Based on the bacteriological results for drinkable water, it is of paramount importance to carry out proper samplings. In accordance with the results obtained, it may be said that there is good bacteriological quality in most sampling sites (wells). Finally, analytical results in surface water depend on the number and type of discharges, their composition, the degree of urban, industrial or agricultural growth, and the season.
Table 2. Features of test tube obtained from a tank made of Steel A284 Grade C, used in filtration of deep well water.
2018-10-04T13:25:33.233Z
2017-11-16T00:00:00.000
{ "year": 2017, "sha1": "4c0558cd44f441d9298cea49067a138b51fc4391", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=80402", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "4c0558cd44f441d9298cea49067a138b51fc4391", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Environmental Science" ] }
257682537
pes2o/s2orc
v3-fos-license
Proteotranscriptomic analyses reveal distinct interferon-beta signaling pathways and therapeutic targets in choroidal neovascularization Aim To investigate the molecular mechanism underlying the onset of choroidal neovascularization (CNV). Methods Integrated transcriptomic and proteomic analyses of retinas in mice with laser-induced CNV were performed using RNA sequencing and tandem mass tag. In addition, the laser-treated mice received systemic interferon-β (IFN-β) therapy. Measurements of CNV lesions were acquired by the confocal analysis of stained choroidal flat mounts. The proportions of T helper 17 (Th17) cells were determined by flow cytometric analysis. Results A total of differentially expressed 186 genes (120 up-regulated and 66 down-regulated) and 104 proteins (73 up-regulated and 31 down-regulated) were identified. The gene ontology and KEGG pathway analyses indicated that CNV was mainly associated with immune and inflammatory responses, such as cellular response to IFN-β and Th17 cell differentiation. Moreover, the key nodes of the protein–protein interaction network mainly involved up-regulated proteins, including alpha A crystallin and fibroblast growth factor 2, and were verified by Western blotting. To confirm the changes in gene expression, real-time quantitative PCR was performed. Furthermore, levels of IFN-β in both the retina and plasma, as measured by enzyme-linked immunosorbent assay (ELISA), were significantly lower in the CNV group than in the control group. IFN-β treatment significantly reduced CNV lesion size and promoted the proliferation of Th17 cells in laser-treated mice. Conclusions This study demonstrates that the occurrence of CNV might be associated with the dysfunction of immune and inflammatory processes and that IFN-β could serve as a potential therapeutic target. Introduction Choroidal neovascularization (CNV), as a pathological process, generally extends along the choroid to the underside of the retinal pigment epithelium (RPE), causing hemorrhage and exudation (1). A variety of pathological conditions can lead to CNV, especially neovascular age-related macular degeneration (AMD) (2). In developed countries, CNV secondary to AMD is the almost universal cause of devastating central visual function damage in the elderly (3). It is predicted that approximately 288 million people will have AMD by 2040, up from 196 million in 2020. In addition, by that time, it is expected that Asia will be the region with the largest number of patients with AMD (4,5). CNV is a characteristic pathological change in wet AMD; it causes exudation, bleeding, and scars in the macular zone of AMD patients, leading to a decrease in central strength and subsequent loss of vision (6). Although patients with CNV secondary to wet AMD constitute only 20% of all patients with AMD, > 90% of these patients exhibit severe visual impairment (7). AMD-related visual impairment and blindness have become worldwide public health problems that aggravate the already heavy healthcare-related financial burden on society. The precise etiopathogenesis of CNV is still unknown, but it is generally presumed to result from imbalances in local angiogenesis stimulators and inhibitors that are the result of aging, oxidation, inflammation, and damage to Bruch's membrane (8). Vascular endothelial growth factor (VEGF) can specifically and directly act on vascular endothelial cells; it is a critical factor in promoting angiogenesis, particularly during the onset of CNV (9). 
In recent years, anti-VEGF drugs have been extensively used in clinics as first-line drugs for the treatment of wet AMD (10). Such drugs can reduce blood leakage from new blood vessels, reduce neovascularization, and promote foveal edema regression. However, although these drugs can help to partially restore vision and delay disease progression, they cannot cure wet AMD (6). Thus, there is a considerable need to clarify the mechanisms underlying the onset of secondary CNV during the course of AMD and, in turn, to identify potential targets for the treatment of AMD. Previous studies have demonstrated that the onset and progression of CNV are mediated by various regulatory factors (11,12). However, many molecular components of the pathophysiology of CNV remain unknown, especially in the adult population. Proteomic analysis is considered one of the most advanced exploratory research methods that can be used to discover new protein biomarkers of clinical significance (13). Multiple studies have employed proteomic analysis to study AMD (14). Furthermore, RNA transcriptome analysis of human AMD in donor eyes revealed that multiple pathogenic pathways (e.g., angiogenesis, extracellular matrix remodeling, inflammation, and immune responses) were perturbed in retinal pigment epithelium (RPE) cells (15,16). Although proteomics and transcriptomics have improved general knowledge regarding CNV, more systematic studies of the relationship between proteomes and transcriptomes may reveal novel molecular alterations in CNV biology. In the present research, we constructed a laser-induced CNV mouse model and then combined proteomic and transcriptomic (i.e., proteotranscriptome) analyses to identify the core genes, biological processes, and signaling pathways involved in the onset of CNV. In addition, we focused on the functions of interferon-b (IFN-b) and related potential therapy for CNV. Animals The experimental subjects were approximately 8-week-old male C57BL/6J mice, purchased from Gempharmatech Company. These mice were raised in the animal house of Fudan University at a constant temperature (23 ± 1°C) and humidity (40%-60%), with a light/dark cycle of 12 h, and freely available water and standard feed. At the end of the experiment, they were sacrificed by cervical dissection. The animal experiment operation procedures in the research were approved by the Animal Ethics Committee of Fudan University. Laser-induced CNV model Before the operation, the mice were intraperitoneally injected with 1.25% tribromoethanol (0.2 mL/10 g) for anesthesia. Subsequently, 0.4% oxybuprocaine hydrochloride was used for corneal anesthesia, and tropicamide was used for pupil dilation. The cornea was lubricated with carbomer eye gel to prevent drying. Three laser spots were distributed evenly around the optic nerve of each eye utilizing a fundus laser system (Zeiss, Germany) with an energy setting of 100 mW and a duration of 100 ms. Fundus photography and immunohistochemistry Seven days after the induction of CNV, the mice were anesthetized, and their pupils dilated and lubricated as described above. Thereafter, the lens of the fundus imaging system was placed in contact with the eye to be tested, and the focal length was adjusted as necessary for image clarity; photos were then acquired. After fundus photography, the mice remained anesthetized and were sacrificed by cervical dislocation. Each eye was then enucleated and stabilized in 4% paraformaldehyde at room temperature for 1 hour. 
Under a stereomicroscope, the eye was circumferentially dissected along the corneoscleral limbus; the anterior segment components (e.g., cornea, iris, and lens) were eliminated and the neural retina was detached, thus yielding RPE/choroidal flat mounts. Each RPE/ choroidal plane flat mount was radially cut into a petal shape centered on the optic disk. Thereafter, they were incubated with anti-plant lectin B4 antibody (1:100 dilution, Sigma, USA) overnight at 4°C after being permeabilized and blocked at room temperature for 1 hour. Subsequently, a secondary antibody (1:500 dilution, Sigma, USA) was incubated for 1 hour at room temperature in dark on each flat mount. For observation, the flat mount was placed on glass slides and inlaid in a fluorescence mounting medium. RNA sequencing and data analysis Total RNA was extracted from each retinal sample using RNeasy Kits (Qiagen, Germany), and its purity and concentration were evaluated. Thereafter, a cDNA library was constructed using a commercial reverse transcription kit (Illumina, USA). The sequencing of these libraries was performed on the HiSeq platforms (Illumina, USA). The results indicated the locations of the reads within the reference genome, along with information regarding sequence features specific to the sequenced sample. The number of counts for each sample gene was normalized (BaseMean values were used to estimate expression levels); fold change (FC) was calculated, and a negative binomial distribution was used to verify the significance of differences in reads. Differentially expressed genes (DEGs) were categorized according to FC and read numbers. Unsupervised hierarchical clustering demonstrated a clear separation between the two groups; up-regulation and downregulation trends were consistent, reflecting obvious differences between the two groups of genes. Furthermore, The Gene Ontology (GO) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) databases were analyzed to identify the DEGs, as follows: the species genes were used as the background list, while the differential gene list was used as the candidate list for screening relative to the background list. p-values were computed and corrected by the hypergeometric distribution test. The screening criteria for DEGs were FC ≥ 1.5 and p < 0.05. The RNAseq datasets that are presented in the study were deposited in the Genome Sequence Archive repository, accession number CRA009762. Mass spectrometry-based proteomic analysis Total protein was extracted from each retinal sample. After trypsinization and labeling, the remaining portions of each extract were mixed and chromatographically separated by weight. The sample was loaded onto a chromatographic column (C18, 100 µm × 2 cm), and then separated by a reversed-phase liquid chromatographic column (C1 8 , 75 µm × 15 cm) (both Thermo Fisher, USA). Later, tandem mass spectrometry (MS/MS) was conducted to analyze the separated samples using a HF-X mass spectrometer (Thermo Fisher, USA). A collision energy of 35 eV was applied for all MS/MS spectra acquired using data-dependent high-energy collisional fragmentation. Putative proteins were distinguished by comparing the MS outcome with the UniProt database after using the Sequest HT score > 0 and unique peptide ≥ 1. Significant differences between samples were defined as FC ≥ 1.2 and p < 0.05 for these putative proteins. GO and the KEGG analysis of differentially expressed proteins (DEPs) was conducted in accordance with the process described above. 
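The screening thresholds stated above (FC ≥ 1.5 with p < 0.05 for transcripts; FC ≥ 1.2 with p < 0.05 for proteins) amount to a simple filter over the differential-expression tables. A minimal pandas sketch of that filtering step is given below; the table layout, column names, numerical values, and the handling of down-regulation as FC ≤ 1/cutoff are assumptions for illustration, not the authors' pipeline.

```python
import pandas as pd

# Hypothetical differential-expression table (gene, fold change, p-value).
results = pd.DataFrame({
    "gene": ["Prok1", "Adcy1", "Col4a3", "Col4a4", "GeneX"],
    "fold_change": [2.1, 1.8, 1.6, 1.9, 1.1],
    "p_value": [0.003, 0.012, 0.040, 0.021, 0.300],
})

def screen_differential(table, fc_cutoff, p_cutoff=0.05):
    """Keep rows passing the fold-change and p-value thresholds (either direction)."""
    fc = table["fold_change"]
    passes_fc = (fc >= fc_cutoff) | (fc <= 1.0 / fc_cutoff)  # up- or down-regulated
    return table[passes_fc & (table["p_value"] < p_cutoff)]

degs = screen_differential(results, fc_cutoff=1.5)  # transcripts: FC >= 1.5
deps = screen_differential(results, fc_cutoff=1.2)  # proteins:    FC >= 1.2
print(degs[["gene", "fold_change", "p_value"]])
print(f"{len(degs)} DEGs and {len(deps)} DEPs pass their respective thresholds")
```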
In addition, the STRING database was employed to analyze DEPs to obtain protein-protein interaction (PPI) networks, and the results were visualized using Cytoscape software. Confidence scores of > 0.7 were considered statistically significant. The proteomic datasets were deposited in the ProteomeXchange repository, accession number PXD039971. Western blotting Sample proteins were extracted and quantified as previously described, and boiled with the loading buffer (Beyotime, China) for 5 min. Subsequently, they were separated using pre-cast gels in SDS-PAGE electrophoresis buffer (Beyotime, China) at 130 V for 90 min, and then transferred to polyvinylidene fluoride membranes at 350 mA for 40 min. After being blocked in 5% non-fat milk, the membranes were incubated with the primary antibodies, anti-alpha A crystallin (CRYAA, Abcam, USA) and anti-fibroblast growth factor 2 (FGF2, Abcam, USA), at 4°C overnight, and with goat antirabbit IgG (Thermo Fisher, USA) secondary antibody for 1 hour. Blots were developed using a gel imaging system (Image Quant 350, GE Healthcare, USA). Densitometric analysis was executed using ImageJ software. Enzyme-linked immunosorbent assay Mercantile enzyme-linked immunosorbent assay (ELISA) kits (Multisciences Biotech, China) were applied to determine the IFN-b levels in the plasma and retina of wild-type (WT) and CNV mice. The experimental procedure was implemented in strict accordance with the product instructions. IFN-b treatment The mice in the treatment group received intraperitoneal injections of 10,000 IU of recombinant human IFN-b1a (PeproTech, USA) every other day until the end of the experiment (17). In the control group, intraperitoneal injections of phosphate-buffered saline (PBS) were used. All treatment manipulations were performed by the same research staff. Mice in the experimental and control groups were mixed and housed in randomly assigned cages after they had been labeled. Flow cytometry On day 7, retinal neuroepithelium specimens were collected from the mice. The specimens were ground and lysed with erythrocyte lysis buffer to produce single-cell suspensions. The suspensions were centrifuged at 1,200 rpm for 5 min; the resulting cells were resuspended in PBS. The cells were then stained with CD4 at room temperature for 15 minutes and with interleukin 17A (IL-17A) at 4°C for 30 min after fixation and permeabilization, and finally analyzed by flow cytometry. The outcomes were evaluated using FlowJo software. Statistical analysis GraphPad Prism 9.0 (GraphPad, USA) and SPSS 22.0 (IBM Corp., USA) were used to conduct statistical analysis. Student's ttest was used to make comparisons between the two groups. pvalues of ≤ 0.05 were considered significant. Transcripts altered and biological pathways enriched in CNV retina The data were stratified according to group, and the corresponding net values were screened for differences. A total of 186 genes with significant differences were obtained: 120 were upregulated and 66 were down-regulated. As shown in Figure 1A, various genes were expressed differently between the groups. To visualize the expression profile of genes in each sample and the differences between groups, the top 20 altered genes were plotted in heatmap format ( Figure 1B). Biological process (BP) GO annotation revealed the functional items of the identified DEGs. Trends in enrichment in up-regulated and down-regulated gene expression of DEGs are depicted in Figures 1C, D, respectively. 
DEGs that showed significantly upregulated expression were those involved in cell development and regulation of the MAPK cascade, whereas DEGs whose expression was significantly down-regulated were those involved in smooth muscle contraction and chemotaxis. The KEGG database was analyzed to identify DEGs in retinal signaling pathways. The main pathways involved in the upregulation of genes in the CNV group included the relaxin signaling pathway and Th17 cell differentiation, as shown in Figure 1E. The main pathways involved in down-regulated genes in the CNV group included extracellular matrix (ECM)-receptor interaction and Th17 cell differentiation, as shown in Figure 1F. respectively. Significantly up-regulated proteins included those involved in the cellular response to interferon-beta and in the innate immune response, while significantly down-regulated proteins included those involved in muscle filament sliding and signal transduction. The differential proteins identified were retrieved and enriched through the KEGG signaling pathway database. A bubble plot was then constructed to illustrate the consequence of enrichment (pvalue), the amount of enrichment, and the enrichment index. As indicated in Figure 2E, the up-regulated proteins were chiefly involved in signaling pathways, including those for antigen processing and presentation and Th17 cell differentiation. As shown in Figure 2F, the down-regulated proteins were principally those involved in processes such as cardiac muscle contraction and the function of tight junctions. Integration of the transcriptomics and proteomics Matching analysis of the identified genes and proteins identified 39 overlapping targets ( Figure 3A), 16 of which were co-upregulated and six of which were co-down-regulated (Table 1). In addition, the biological processes in the interaction networks covered 13 proteins ( Figure 3B). Key nodes in PPI networks mainly involved up-regulated proteins such as CRYAA and FGF2. Validation of selected genes and proteins altered in CNV retina To validate the screening results, we extracted total RNA from the retinas of mice in both the CNV group and the control group, and then reverse transcribed this RNA into cDNA. From the screening results, we selected four genes (Prok1, Adcy1, Col4a3, and Col4a4) that may be involved in neovascularization to serve as qPCR target genes. b-Actin served as the reference gene. As illustrated in Figure 4A, the levels of Prok1, Adcy1, Col4a3, and Col4a4 were significantly increased in the retinas of mice with CNV (p < 0.05 in all cases), consistent with the results of the transcriptomic analysis. To validate the MS results, we conducted a semi-quantitative analysis and exploration of some differential protein molecules (i.e., CRYAA and FGF2) by Western blotting. As shown in Figure 4B, both CRYAA and FGF2 levels were strikingly up-regulated in the CNV group compared with the control group (p < 0.05 in both cases), consistent with the results of proteomic analysis. Enrichment of IFN-b in CNV retina To determine if the expression of IFN-b contributes to the progression of CNV, we analyzed IFN-b in plasma and retina samples from mice with CNV and control mice by ELISA. As shown in Figure 5, serum and retinal IFN-b levels were dramatically down-regulated in the CNV group compared with the control group (p < 0.05 in both cases). 
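For the qPCR validation described above, β-actin served as the reference gene, but the quantification formula is not spelled out in the text; a common choice for such designs is the 2^-ΔΔCt method. The sketch below shows that calculation on hypothetical Ct values and is an illustration under that assumption, not the authors' analysis.

```python
import statistics

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-ΔΔCt method (target normalized to reference gene)."""
    delta_ct_sample = ct_target - ct_reference              # ΔCt in the CNV sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl   # ΔCt in the control sample
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for Prok1 with beta-actin as reference.
cnv_replicates = [relative_expression(24.1, 17.0, 26.3, 17.1),
                  relative_expression(24.3, 17.2, 26.3, 17.1),
                  relative_expression(23.9, 16.9, 26.3, 17.1)]
print(f"Mean fold change vs. control: {statistics.mean(cnv_replicates):.2f}")
```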
IFN-β therapy stimulates Th17 proliferation and limits CNV
The IFN-β-treated group had significantly smaller CNV lesions than the PBS-treated control group (p < 0.05; Figure 6A) and substantially decreased expression levels of CRYAA and FGF2 (p < 0.05 in both cases; Figure 6B) on day 7 after laser photocoagulation. In addition, flow cytometry analysis was used to assess the activity expression of Th17 cells in the retina of CNV mice. The proportion of Th17 cells was much lower in the CNV group than in the control group; in the IFN-β-treated group, the proportion was noticeably increased compared with the PBS-treated control group (p < 0.05 in both cases; Figure 6C).

FIGURE 4 Verification of selected genes and proteins. (A) RNA expression of Prok1, Adcy1, Col4a3, and Col4a4 was measured in retinas using qPCR. n = 6; mean ± SD; Student's t-test; *p < 0.05, **p < 0.01. (B) The content of CRYAA and FGF2 proteins in retinas was detected by Western blot. n = 6; mean ± SD; Student's t-test; *p < 0.05, **p < 0.01. qPCR, quantitative real-time PCR.

Discussion
The onset of CNV involves many complex mechanisms and pathways, which have not been fully elucidated. To characterize CNV pathogenesis, we constructed a mouse model of laser-induced CNV, then conducted combined proteomic and transcriptomic analyses. In all, we found that 186 genes and 104 proteins were dysregulated in the retinas of mice with CNV compared with control mice. These altered genes and proteins are involved in numerous pathways, including inflammation, metabolism, and immune regulation. In addition, we identified 21 proteins that were consistent with DEGs, of which 16 were up-regulated and five were down-regulated. Furthermore, PPI network analysis revealed that CRYAA and FGF2 have important roles in CNV pathogenesis. Notably, genetic studies in patients with AMD have discovered multiple susceptibility loci for AMD; most of the genetic risk is shared between AMD and CNV (18). In this study, the levels of Col4a3 and Col4a4 were up-regulated in mice with CNV compared with mice in the control group. Using RNAseq analysis, Fletcher et al. found that the levels of many members of the collagen family (e.g., col4a1 and col4a2) were noticeably up-regulated in the RPE/choroid of patients with CNV (19). This divergence in test results may be related to the different tissue sources. Yu et al. found a protective locus for AMD in Col4a3 in a meta-analysis of European patients (20). Another meta-analysis indicated that Col4a3 was significantly connected with polypoidal choroidal vasculopathy, a subtype of neovascular AMD particularly prevalent in East Asians (21). These investigations demonstrate that Col4a3 may play an important role in AMD pathogenesis. Although Col4a4 is widely regarded as a pathogenic factor in diseases such as Alport syndrome (progressive glomerulonephritis, lens defects, and hearing loss) (22), it has rarely been regarded as a contributing factor in AMD or CNV. Additionally, Prok1 is an angiogenic growth factor with vital roles in regulating the growth of organ blood vessels and tumor blood vessels (23). In our study, Prok1 levels were dramatically higher in the CNV group than in the control group. To our knowledge, this is the first report of such a finding in a CNV model. Because our result is consistent with previous findings (24, 25), we presume that Prok1 has a vital role in the onset of CNV. In addition, as a member of the ADCY superfamily, Adcy1 is thought to be involved in tumor angiogenesis (26).
In our study, Adcy1 levels were radically up-regulated in the CNV group compared with the control group. This finding is presumably the first report of such a difference in a CNV model; it suggests that Adcy1 is involved in CNV formation. On the other hand, we reviewed other transcriptomic studies on CNV and AMD and found that the results of our study did not identify the presence of common genes related to neovascular diseases, such as PNPLA2, MFGE8, and DDIT4 (27). This may be because of the restricted sample numbers used in our study. In future studies, we will analyze more samples to verify the conclusions of this study. The GO and KEGG pathways were analyzed for the DEGs identified in this study. The results indicated that many abnormal BP and signaling pathways (e.g., the synthesis of ECM structural components, the function of cell junctions, actin filament binding, the relaxin signaling pathway, and the PI3K-Akt pathway) are involved in the pathological progression of CNV. These processes and components also participate in normal cell proliferation and migration during angiogenesis. Moreover, the results of the GO analysis of the DEGs were similar to the proteomic analysis of differential protein enrichment, indicating that the gene regulatory trends were consistent with protein regulatory trends. The ECM is an essential component of the vascular microenvironment, and directly or indirectly regulates all essential cellular functions critical for angiogenesis (e.g., cell adhesion, migration, proliferation, differentiation, and lumen formation) (28). Accordingly, the onset of CNV is inseparable from the synthesis (or function) of the structural components of the ECM. Relaxin is an insulin-like polypeptide hormone; most studies have focused on its role in the regulation of angiogenesis during pregnancy, but there have been a few reports of its association with CNV (29). Considering that relaxin can specifically induce VEGF expression to regulate angiogenesis (30), it also has been suggested to play a role in the development of CNV, and the results of this study support this viewpoint. The PI3K-Akt pathway plays pivotal roles in intracellular signaling, which regulates cell proliferation and motility (31). The pathogenesis of CNV involves the activation of the PI3K-Akt pathway, which causes lactic acid fermentation (i.e., the Warburg effect) and induces VEGF expression, in turn leading to the development of CNV (32). In addition, it has been shown that repressing the PI3K-Akt pathway in the choroid by various means can effectively block the onset of CNV (33,34). Therefore, the identification of suitable PI3K-Akt pathway inhibitors may be a means of counteracting the onset and development of CNV in wet AMD patients.

FIGURE 5 Level of IFN-β in the CNV mice. The levels of IFN-β were significantly lower in the plasma (A) and retina (B) of CNV mice compared with the WT cohort. n = 6; mean ± SD; Student's t-test; *p < 0.05, **p < 0.01. CNV, choroidal neovascularization.

Furthermore, in contrast to the proteomics findings, the KEGG pathway analysis of transcriptome data showed greater involvement of immune factors, including changes in pathways related to immune inflammation (e.g., Th17 cell differentiation). Therefore, we speculate that the variations in mRNA expression cause changes in the biological functions of cells targeted by immunity-related inflammation. Thus, an immune mechanism may participate in the onset and progression of CNV.
In this study, GO analysis of DEPs was implemented to explore the effects on biological functions. The DEPs had significant effects of CNV on molecular functions such as eye structure development, cellular structure and function, immune activity, ATP metabolism, and signal transduction activity. Furthermore, KEGG pathway analysis revealed significant changes in NLR signaling, Th17 cell differentiation, and tight junction regulation. These enriched biological functions and pathways were also identified in the enrichment analysis of DEGs. Notably, the choriocapillary diameter, blood flow, and oxygen tension are radically increased in the macula compared with the peripheral retina, although pericyte coverage is lower. Choroidal vessels in the macula are more susceptible to pathological changes under stress (35). In addition, the outer third of the retina remains physiologically completely avascular; it relies on the choroidal system to receive essential nutrients and oxygen. When the choroidal vasculature is damaged, this supply chain is disrupted and the onset of CNV begins (36). RPE dysfunction can also lead to disruptions in the supply chain. Critical factors involved in RPE dysfunction include age-dependent changes in phagocytosis and metabolism in postmitotic RPE cells (37). Previous studies have confirmed that differences in retinal tissue structure and cell function have a vital impact on the occurrence of CNV. The retina maintains its normal function through metabolism, which provides energy to the retina; metabolites have important roles in maintaining retinal homeostasis (38). Metabolic alterations (e.g., abnormal ATP metabolism) are presumed to result from combinations of genetic and environmental factors; abnormal cellular metabolism is therefore firmly related to the onset of disease, especially in multifactorial disorders such as AMD (39). The NLR pathway is known to regulate the formation of inflammasomes and stimulate the production of both IL-1b and IL-18, thereby participating in the inflammatory response (40). Furthermore, the NLR pathway has been shown to promote ocular inflammation by activating the production of anti-inflammatory and pro-inflammatory cytokines; such inflammation is closely associated with angiogenesis (41). In AMD, local immunity and inflammatory infiltration promote drusen formation, RPE atrophy, Bruch's membrane rupture, and CNV onset (42). In addition, inflammatory cytokines can also induce VEGF production, which in turn initiates CNV in AMD; macrophages and lymphocytes are present in the retina during the active phase of CNV (43). It was found that the level of IL-17 was greatly elevated in the eyes of AMD patients and that the inhibition of IL-17 had neuroprotective effects on the eyes of mice with focal retinal degeneration (44). As a characteristic secretory factor of Th17 cells, IL-17 can stimulate the production of VEGF. It also induces cell invasion, migration, and angiogenesis in endothelial cells (45). Based on the previous literature, we suspect that Th17 cells are involved in the onset of CNV. The hub genes of PPI networks mainly involved proteins that were up-regulated in our analyses, including CRYAA and FGF2. We selected the CRYAA and FGF2 proteins for validation of the results of MS screening; these findings confirmed that the levels of CRYAA and FGF2 in the CNV group were greatly up-regulated compared with the control group. CRYAA, a subunit of acrystallin, is present in the normal retina; it participates in various retinopathies (46). 
In addition, knocking out a-crystallin leads to the inhibition of pathological neovascularization through VEGF and VEGFR2 signaling (47). These findings prove that CRYAA may be a valuable target for CNV prevention. FGF2 is an effective factor to stimulate angiogenesis (48). In vivo and in vitro analyses revealed that FGF2 regulates pathogenic angiogenesis through the activation of STAT3 (49). Because it functions as a key mediator of abnormal neovascularization, FGF2 may be useful in the advance of multi-targeted therapies for blinding eye disorders. The above studies suggest that the production of new blood vessels in patients with CNV may be promoted through alterations to the PPI networks. In this research, the functional analysis of differential proteins illustrated that differential proteins up-regulated in the retina of the CNV group were involved in cell proliferation, migration, and angiogenesis; thus, CNV may be mediated by variations in the expression patterns of these proteins. Based on analysis of the aforementioned transcriptomic studies, we speculated that CNV was associated with the IFNb-mediated regulation of the Th17 cell-mediated inflammatory response. In this research, the size of CNV lesions was greatly reduced in CNV mice that received systemic IFN-b treatment compared with mice in the control group. Langmann et al. (17) found that the administration of systemic IFN-b treatment to CNV mice reduced lesion size considerably in the late stage of the disease. Kimura et al. (50) proposed that IFN-b could be able to delay the multiplication of human umbilical vein endothelial cells and enhance the proliferation of RPE cells. Therefore, we speculate that IFN-b can restrain CNV formation. Furthermore, we explored the numbers of Th17 cells among retina cells in CNV mice. In this study, the proportion of Th17 cells among retina cells was considerably lower in CNV mice than in mice in the control group; in contrast, the proportion of Th17 cells was meaningfully increased in the group receiving IFN-b treatment compared with the control group. Moreover, the IL-17 level in peripheral blood and macular cells was significantly increased in AMD patients (44, 51). IL-17 can independently promote CNV formation without the involvement of VEGF (52). IL-17, as a characteristic secretion of Th17 cells, is involved in ocular neovascularization (45). However, the findings of some studies suggest that the elevated level of IL-17 in CNV originates from gdT cells, rather than from Th17 cells (52). This discrepant conclusion may be related to the use of different intervention methods. In addition, IFN-b plays divergent roles in discrete stages of Th17 differentiation (53). Referring to the above information, we speculate that the use of IFN-b to promote the proliferation of Th17 cells can inhibit CNV progression, which cannot be solely achieved by the secretion of IL-17. Thus far, there have been few studies concerning the role that Th17 cells play in inhibiting the progression of CNV. These findings require validation in future studies. In conclusion, we used transcriptomic and proteomic methods to analyze CNV in this study; the results reflected overall changes in RNA and protein expression. Moreover, we identified a series of biological processes (e.g., inflammation and immune mechanisms) that are involved in CNV pathogenesis, along with the signaling pathways that may lead to CNV pathogenesis. 
Furthermore, the joint analysis of differential genes and differential proteins led to the identification of CRYAA and FGF2, key proteins involved in CNV; this finding is an important insight that will promote a further understanding of the onset and progression of CNV. In addition, IFN-b can inhibit CNV lesions while increasing immune cell activation. However, a notable shortcoming of this study was its lack of functional appraisal concerning the identified mRNAs and proteins. Consequently, the exact roles and mechanisms of altered mRNAs and proteins in CNV ought to be further studied. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/supplementary material. Ethics statement The animal study was reviewed and approved by the Animal Ethics Committee of Fudan University. Author contributions YH and SQ performed experiments and manuscript writing. HZ and QZ analyzed the data. YL and HK validated data collection. CZ and SZ revised and finalized the manuscript. All authors commented on and revised the manuscript. All authors contributed to the article and approved the submitted version.
2023-03-23T15:38:57.564Z
2023-03-21T00:00:00.000
{ "year": 2023, "sha1": "f7b32962e2856b2fb22d1cfae85a77376702b618", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2023.1163739/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "798aaa369ddb9f8e540d29f42de984fabe741d64", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5574478
pes2o/s2orc
v3-fos-license
Evaluation of internal reliability in the presence of inconsistent responses Background We aimed to assess the impact of inconsistent responses on the internal reliability of a multi-item scale by developing a procedure to adjust Cronbach's alpha. Methods A procedure for adjusting Cronbach's alpha when there are inconsistent responses was developed and used to assess the impact of inconsistent responses on internal reliability by evaluating the standard Chinese 12-item Short Form Health Survey in adolescents. Results Contrary to common belief, random responses may inflate Cronbach's alpha when their mean differ from that of the true responses. Fixed responses inflate Cronbach's alpha except in scales with both positive and negative polarity items. In general, the bias in Cronbach's alpha due to inconsistent responses may change from negative to positive with an increasing number of items in a scale, but the effect of additional items beyond around 10 becomes small. The number of response categories does not have much influence on the impact of inconsistent responses. Conclusions Cronbach's alpha can be biased when there are inconsistent responses, and an adjustment is recommended for better assessment of the internal reliability of a multi-item scale. Background Internal reliability is an attribute of a multi-item scale that refers to the extent to which items in the scale are related; it is very often evaluated to assess the reliability of patient-reported outcomes (PROs). The most common measure of internal reliability reported in psychometric studies of PROs is Cronbach's alpha [1], but unfortunately, it can be biased by the presence of inconsistent responses. Inconsistent responding occurs when respondents complete a questionnaire without comprehending the items, typically in self-reported questionnaires when the participants are unmotivated or the questions are sensitive [2]. Inconsistent responses are classified as random, when responses are given unsystematically, or fixed, when the same response is given to all items [3]. Although the literature has not stipulated the impact of inconsistent responses on internal reliability, fixed responses by their nature would result in high association among the responses of the associated items and thus inflate the observed reliability in scales whose items have the same polarity. They can also diminish it in scales when that is not the case as the association among the item responses would be lower. Moreover, a substantial number of random responses would diminish the internal reliability by the independent nature of random responses, but what it means by substantial and such an effect in general are less certain. In practice, inconsistent responses may not be easily identified since they can also be plausible responses. Random responses are particularly difficult to detect as they have no identifiable patterns. Nevertheless, there are tested personality scales, namely, the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and the Minnesota Multiphasic Personality Inventory-Adolescent (MMPI-A), that assess the level of inconsistency for a response [4,5]. Both of them have a variable response inconsistency (VRIN) scale for assessing random responding and a true response inconsistency (TRIN) scale for assessing fixed responding. Cutoff values have also been established for the detection of random and fixed responses [4][5][6]. 
Depending on the instrument used, the VRIN scale comprises at least 50 item pairs and the TRIN at least 23 item pairs. As their length does not always allow for concurrent use with PRO instruments, we can only assess the sensitivity of internal reliability within an anticipated range of the proportion of inconsistent responses. However, to the best of our knowledge, no method is available for adjusting the internal reliability due to inconsistent responses. In view of these, we aimed 1. to evaluate the impact of inconsistent responding on internal reliability; 2. to propose a method for adjusting Cronbach's alpha in the presence of inconsistent responses; and 3. to illustrate the use of the procedure in evaluating the internal reliability of the standard Chinese 12-item Short Form Health Survey (SF-12v2) for a large sample of adolescents.

Adjusting Cronbach's alpha for inconsistent responses
We consider a multi-item scale when the total score S is used as a health indicator. Cronbach's alpha requires adjustment when there are inconsistent responses. This could be done when the proportions of random and fixed responses, denoted by p R and p F , respectively, are known. Given these proportions, Cronbach's alpha based on the true responses (α T ) can be derived from formula (1), where α is Cronbach's alpha without the adjustment for inconsistent responses, and m is the number of items. The quantities a and b are obtained from equations (2) and (3) when all items have the same polarity. μ R and σ R 2 are the mean and variance of the random responses and can be taken as (K + 1)/2 and (K^2 - 1)/12, respectively, for scales composed of items responded to on a K-point Likert scale with each item scored from 1 to K. μ T is the mean of the true responses. Cronbach's alpha adjusted for inconsistent responding can be calculated from (1) after replacing the unknown quantities by the corresponding sample estimates. Note the adjustment assumes that both random and fixed responses to an item are uniformly distributed over the K-point Likert scale; i.e., there is no specific preference of a certain response category. Performance of the adjustment procedure is assessed by a small Monte-Carlo simulation study. Biases of the adjusted Cronbach's alpha are consistently smaller than those of the unadjusted alpha [see Additional file 2].

Assessing the impact of inconsistent responses on Cronbach's alpha
The impact of inconsistent responses as well as the number of items and item response categories on Cronbach's alpha is analytically assessed by using our derived formula in (1). The assessment is performed under the following four settings that were chosen to cover some common scenarios in practice: 1. The influence of random responses is assessed by varying its proportion (p R ) from 0 to 50% when p F is taken to be 0 or 5%. The mean difference between the true and random responses (μ T -μ R ) is 0 or 1, and the scale has 5 positive polarity items, each responded on a 5-point Likert scale. 2. The influence of fixed responses is assessed by varying its proportion (p F ) from 0 to 50%. The p R is taken to be 0 or 10%, and the number of positive polarity items is 5 or 3. Moreover, the mean difference between the true and random responses (μ T -μ R ) is 0, and the scale has 5 items, each responded on a 5-point Likert scale. 3.
The influence of the number of items is assessed by varying it from 2 to 20 when the proportion of positive polarity items is taken to be 0.5 or 1, and the mean difference between the true and random responses (μ T -μ R ) is 0 or 1. Moreover, all items are responded on a 5point Likert scale. 4. The influence of the number of item response categories (K) is assessed by varying it from 2 to 10 when the number of positive polarity items is 5 or 3, and the mean difference between the true and random responses is 0 or 0.2 K. Moreover, we assume that 20% and 5% of responses are random and fixed, respectively. For each of the four scenarios, Cronbach's alpha based on the true responses is defined to be 0.4, 0.5, 0.6, 0.7 and 0.8. A real example to illustrate the adjustment of inconsistent responses As an example, we evaluate the internal reliability of the standard Chinese SF-12v2. The questionnaire consists of 12 items in eight scales. For the sake of illustration, we considered only the Physical functioning (PF), Role emotional (RE) and Mental health (MH) scales, each of which consists of two items. All items in the three scales are positively worded except one item in MH that is negatively worded. Items in the PF scale use a 3-point Likert scale, while the other items use a 5-point Likert scale. The original scale scores are standardized in the range of 0-100, but for convenience, we just considered the total score after reverse coding the responses of the negative polarity items. Note, however, that the internal reliability is invariant to this standardization. Data in the standard Chinese SF-12v2 were collected from the Hong Kong Student Obesity Surveillance (HKSOS) project conducted in 2006-2007. This study was cross-sectional involving 42 high schools covering all 18 districts in Hong Kong. It administered a survey questionnaire that contained the SF-12v2. The project was approved by the Institutional Review Board of The University of Hong Kong and the Hospital Authority Hong Kong West Cluster. Results The impact of inconsistent responses on Cronbach's alpha Figure 1 shows the influence of random responses on the bias in Cronbach's alpha under setting 1. In general, the presence of random responses reduces the observed Cronbach's alpha (Figures 1(a) and 1(b)). In particular, when there are no fixed responses and the true responses are equal to the random responses on average, the reduction is more for higher Cronbach's alpha calculated from true responses. However, when the true responses are skewed relative to the random responses, Cronbach's alpha can be overestimated (Figures 1(c) and 1(d)). This is contrary to the common belief that the presence of random responses always reduces the internal reliability. The overestimation is higher when the true Cronbach's alpha is smaller. The influence of fixed responses under setting 2 is examined in Figure 2. The presence of fixed responses generally overestimates Cronbach's alpha when all items have the same polarity, but otherwise, it produces a smaller estimate. The bias is again higher when the true Cronbach's alpha is smaller. Figure 3 shows that the bias in Cronbach's alpha due to inconsistent responses may change from negative to positive with an increasing number of items under setting 3, but the effect of additional items beyond around 10 becomes small. On the other hand, a higher skewness of the true responses increases the differential in the bias under different true Cronbach's alpha levels (Figures 3(c) and 3(d)). 
Under setting 4, the number of response categories does not generally have much influence on the bias of Cronbach's alpha due to inconsistent responses (Figure 4). There could be a small differential when there are only a few response categories and the true responses are skewed. However, the effect becomes smaller when there are more response categories. Internal reliability of the standard Chinese SF-12v2 We illustrate the adjustment of Cronbach's alpha for inconsistent responses by evaluating the internal reliability of the standard Chinese SF-12v2. A total of 33,692 completed questionnaires from adolescents were received. A descriptive summary of the RE, PF and MH scales, including their Cronbach's alpha coefficients, is given in Table 1. Note the unusually low internal reliability of the MH scale, which may possibly be due to the presence of inconsistent responses. Although the survey questionnaire did not incorporate scales for tracking inconsistent responses, there were multiple-response items other than those in the SF-12v2 with "none of the above" as a response choice. Random responses may be indicated if one or more responses were chosen simultaneously with the contradicting response of "none of the above". Using one to six such items closest to the SF-12v2, we estimated that there would be 1.5% to 11% of random responses in the SF-12v2. On the other hand, one item in the SF-12v2 consists of three sub-items about how often one feels 1. calm and peaceful, 2. energetic, and 3. downhearted and depressed. The same 5-point response scale from "all of the time" to "none of the time" was used. As the three sub-items are closely related and worded in different polarities, the selection of the same extreme response for all of them is suggestive of fixed responding. There were 4% of students who chose "all of the time" or "none of the time" in all three sub-items; this figure was doubled if the less extreme responses of "most of the time" and "a little of the time" were also counted. Hence, we estimated the percentage of fixed responses to be 4% to 8%. We shall now illustrate the adjustment of Cronbach's alpha for inconsistent responses. The adjusted Cronbach's alpha, which is an estimate of α_T, is denoted by α_a. For the RE scale, K = 5, and thus μ_R can be estimated as 3 and σ_R² as 2. When p_R = 0.02 and p_F = 0.05, we may estimate μ_T as 3.850. By (2) and (3), we have a = 0.835 and b = 0.092. With α = 0.87, solving (1) yields α_a = 0.868. The values of α_a at other values of p_R and p_F are shown in Figure 5(a). The presence of random responses can reduce the internal reliability, and thus the true Cronbach's alpha can be underestimated. On the other hand, fixed responses inflate the observed association between the two positive polarity items and thus lead to over-estimation of the true Cronbach's alpha. Nevertheless, within our anticipated range of random and fixed responses, Cronbach's alpha for RE should be above 0.8. Therefore, the RE scale can be considered internally reliable. For the PF scale, K = 3, μ_R is estimated as 2 and σ_R² as 0.67. The values of α_a at different values of p_R and p_F are shown in Figure 5(b). While there remains an inflation of Cronbach's alpha when there are fixed responses, it is interesting to note a general decreasing trend of the true internal reliability after removing more random responses. In other words, the presence of random responses may also inflate Cronbach's alpha.
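The quantities in this worked example that are stated explicitly, namely the unadjusted Cronbach's alpha and the uniform-response moments μ_R = (K + 1)/2 and σ_R² = (K² − 1)/12, can be reproduced with a few lines of code. The following minimal Python sketch is for illustration only: the adjustment equations (1)-(3) themselves are not implemented here, the function names are ours rather than the authors', and the toy data at the end are made up.

```python
import numpy as np

def uniform_response_moments(K):
    """Mean and variance of a response drawn uniformly from 1..K."""
    return (K + 1) / 2.0, (K ** 2 - 1) / 12.0

def cronbach_alpha(X):
    """Unadjusted Cronbach's alpha for an n-by-m matrix of item scores."""
    X = np.asarray(X, dtype=float)
    m = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)        # per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score S
    return (m / (m - 1)) * (1.0 - item_vars.sum() / total_var)

# Moments quoted above for the RE/MH items (K = 5) and the PF items (K = 3)
print(uniform_response_moments(5))   # (3.0, 2.0)
print(uniform_response_moments(3))   # (2.0, 0.666...)

# Unadjusted alpha on a small made-up 2-item data set
toy = np.array([[4, 5], [3, 3], [5, 5], [2, 3], [4, 4], [1, 2]])
print(round(cronbach_alpha(toy), 3))
```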
A further examination of the scale items revealed that they were highly left skewed, with ceiling percentages of 80.4% and 81.5%, leading to 72.7% of the scale scores reaching the plausible maximum of 6 (Table 1). Indeed, random responses are systematically lower (μ_R = 2) than true responses (μ_T > 2). Thus, when there are random responses that spread uniformly over the plausible item values, small item values are more likely to be random responses than large item values. Consequently, individuals who gave random responses would be more likely to have small values on all items, and hence their presence would enhance the inter-item association. In fact, it can be shown that the presence of random responses increases the correlation between two positively worded items when the true correlation is below a certain threshold. Discussion The presence of inconsistent responses may positively or negatively bias Cronbach's alpha, making the assessment of internal reliability difficult. An adjustment to Cronbach's alpha was proposed for correcting the effects of inconsistent responses when one can estimate a possible range for the percentage of inconsistent responses. This enables a sensitivity analysis to assess the potential impact of inconsistent responses and facilitates a better understanding of the internal reliability of a multi-item scale. As one would expect, the presence of fixed responses overestimates Cronbach's alpha for scales composed of items mostly worded in the same direction but would otherwise lead to an underestimation. However, it is interesting to observe that random responses may indeed inflate Cronbach's alpha when the distribution of true responses is skewed or, more precisely, when the true mean response deviates from the random/fixed mean response. This is contrary to the common intuition that random responses would dilute the association among items and hence reduce the internal reliability. Indeed, when the true item responses are skewed to the same side, the addition of random responses that scatter around the mid-response could strengthen the association among the items if they are not too many. Thus, paradoxically, this kind of noise could inflate the internal reliability and hence Cronbach's alpha. Unfortunately, it is common for true responses to differ from random/fixed responses on average, especially in patients whose quality of life has deteriorated due to their adverse conditions. Hence, we should be careful when interpreting the observed Cronbach's alpha in such situations. To determine random and fixed responses, tested personality scales such as the VRIN and TRIN scales of the MMPI-2 and MMPI-A may be considered [4]. They are, however, rather lengthy, requiring at least 23 item pairs, and they may not be feasibly incorporated into large-scale studies. Nevertheless, we need an estimate of the proportion of inconsistent responses in a sample before the proposed method can be effectively applied. While determining whether an individual was endorsing inconsistent responses can be a challenge, modifying or adding a few items for tracking potentially inconsistent responses will be helpful. As in our illustrative example, the response option of "none of the above" in items allowing multiple response choices could easily be incorporated to track potential random responses. Fixed responses are more easily identified by the patterns that they follow. Incorporating items that would not likely receive the same response will be useful.
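The inflation paradox described above can also be checked by a direct simulation. The sketch below assumes a hypothetical two-item scale scored 1 to 3 with strongly left-skewed true responses and a modest true inter-item association, and then replaces a fraction of respondents with uniformly random answers; the distributions are our own illustrative choices, so only the direction of the change in alpha, not its magnitude, should be read from the output.

```python
import numpy as np

rng = np.random.default_rng(1)

def skewed_item(n):
    # Left-skewed "true" responses on a 1..3 scale (strong ceiling effect)
    return rng.choice([1, 2, 3], size=n, p=[0.05, 0.15, 0.80]).astype(float)

def cronbach_alpha(X):
    m = X.shape[1]
    return (m / (m - 1)) * (1.0 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

def observed_alpha(n=50_000, p_random=0.10):
    item1 = skewed_item(n)
    # Item 2 copies item 1 for 30% of respondents and is an independent draw
    # otherwise, giving a modest true inter-item association.
    copy = rng.random(n) < 0.30
    item2 = np.where(copy, item1, skewed_item(n))
    X = np.column_stack([item1, item2])
    # Replace a fraction p_random of respondents with uniform random answers.
    k = int(p_random * n)
    X[:k] = rng.integers(1, 4, size=(k, 2))
    return cronbach_alpha(X)

print("alpha, true responses only  :", round(observed_alpha(p_random=0.00), 3))
print("alpha, 10% random responders:", round(observed_alpha(p_random=0.10), 3))
```

In this configuration the second value is the larger one, because the random responders sit well below the ceiling on both items and therefore add a spurious between-group component to the inter-item covariance, which is exactly the mechanism described above for the PF scale.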
Cronbach's alpha of a scale has been known to be higher in scales with more items [7]. We have found that, when there are inconsistent responses, scales with more items would also increase any upward bias in Cronbach's alpha. Although the increase diminishes and may become negligible when there are many items, it is better to keep the number of items minimal to avoid reporting an overly optimistic Cronbach's alpha. Nevertheless, there remains a chance of under-estimating Cronbach's alpha, and it is probably better to be conservative when assessing the internal reliability of a scale. We have also shown that the number of response categories does not have much influence on the bias of Cronbach's alpha induced by the presence of inconsistent responses. There could be only a small positive increase in the bias for scales with items of 3 or fewer response categories. Previous studies have shown that scales with fewer response categories tend to have lower internal reliability and suggested the use of more than 3 response categories [8,9]. This recommendation is indeed also good to minimize the impact of inconsistent responses. However, the choice of the number of response categories may largely depend on the actual content of the scale [10]. Modern assessment of item characteristics utilizing item response theory is deemed more useful to determine an appropriate number of response categories [11]. We have illustrated how Cronbach's alpha can be adjusted for inconsistent responses by evaluating the standard Chinese SF-12v2 in a large sample of students. Note that each scale of the SF-12v2 consists of at most two items only. Although the Cronbach's alpha may in theory be used for scales of at least two items, its use for twoitem scales has been criticized [12]. The concern lies in whether two items are sufficient to represent the correspondingly larger domain comprising a much larger collection of items. Alternative forms of reliability that utilize more items in the same construct may be more desirable [13]. Hence, the internal reliability of the SF-12v2 may require further study. It is used here to merely illustrate the impact of inconsistent responses on Cronbach's alpha. The proposed adjustment to Cronbach's alpha for correcting the effects of inconsistent responses facilitates the assessment of the impact of inconsistent responses on internal reliability. In practice, as soon as respondents with inconsistent item-answer behavior had been identified, it would be simpler to exclude them from the calculation of Cronbach's alpha. However, when the identification of such responses is difficult and the anticipated range of inconsistent responses may be taken more conservatively than that of actually identified, the proposed adjustment may be used. We assumed the random and fixed responses to an item are uniformly distributed over a K-point Likert scale. When an individual is endorsing a random or fixed response to an item without referencing to the actual content of the time, there would likely be no specific preference on endorsing a particular response category. Therefore, unless there are particular response categories that would be generally endorsed more often in the population, the assumption of uniform distribution appears to be reasonable. Nevertheless, a non-uniform distribution may also be incorporated. Indeed, the adjustment procedure depends on only the first two moments of the random and fixed responses. 
A different mean of the random and fixed responses would either increase or decrease its difference from the mean of the true responses (i.e., μ_T − μ_R), whose influence has been examined in Figure 1. On the other hand, an increase in the variance of the random and fixed responses would increase the proportion of variance in the total score that is due to inconsistent responses (i.e., σ_R² relative to the variance of S), which reduces the observed Cronbach's alpha. We have not examined the impact of inconsistent responses on inference about Cronbach's alpha. However, it has previously been shown that the width of the corresponding confidence interval is negatively related to the estimated Cronbach's alpha [14,15]. Thus, a positively biased alpha would tend to result in a short confidence interval, leading to a nominal coverage less than the required level. Hence, the false positive error rate for testing the significance of Cronbach's alpha would also be increased. Cronbach's alpha has been criticized on the grounds that it is just a lower bound of reliability and that other measures may be considered better lower bound measures than the coefficient alpha [16]. Moreover, it implicitly assumes that the items are scored on an interval scale, which limits its use in PRO instruments when items are categorically scored. Besides, it assumes a fixed level of reliability across the whole range of the measurement, and it is not a measure of uni-dimensionality. Nevertheless, Cronbach's alpha may be interpreted as a measure of the proportion of the total score variance that can be attributed to true score variance, which may be affected by the extent to which the items are associated. Hence, we believe that the impact of inconsistent responses could be applicable to the general evaluation of the internal reliability of a scale. An analytical exploration of the impact of inconsistent responses would be desirable. A potential approach is modern psychometric assessment based on item response theory, which allows the examination of the response characteristics of individual items. It has gained much popularity, but it has been reviewed and found to be relatively unsuccessful in identifying dissimulation [17,18]. Further work appears necessary. Conclusions Cronbach's alpha may be inflated by inconsistent responses when either the mean of true responses differs from that of the random/fixed responses or all items in the scale have the same polarity. The inflation in the former situation is due to the presence of random responses, while the latter is due to the presence of fixed responses. It should not be assumed that random responses always diminish Cronbach's alpha.
2014-10-01T00:00:00.000Z
2010-03-12T00:00:00.000
{ "year": 2010, "sha1": "45f10a44275379a7bc671107b8375a06166b7fa4", "oa_license": "CCBY", "oa_url": "https://hqlo.biomedcentral.com/track/pdf/10.1186/1477-7525-8-27", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "45f10a44275379a7bc671107b8375a06166b7fa4", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231681064
pes2o/s2orc
v3-fos-license
Leadership or luck? Randomization inference for leader effects in politics, business, and sports A new method finds that leaders are highly consequential in politics and sports but not business. This PDF file includes: Text S1 Fig. S1 Text S1 Leadership or Luck: Transition Costs To explore the implications of transition costs for our test, we conducted an additional battery of Monte Carlo simulations. For simplicity, everything is identical to the analyses in Figure 1 with 20 units and 20 periods except there is no serial correlation and there is a transition cost in the first period each leader takes office. Specifically, the outcome in each period is drawn from a normal distribution with a mean of 0 and a standard deviation of 1, and a constant amount is subtracted in transition periods. This means that in non-transition years, the mean and standard deviation are 0 and 1, respectively, while in transition years the mean and standard deviation are −X and 1, respectively, where X corresponds to the magnitude of the transition cost. In Figure S1, we show results for transition costs of .1, .25, .5, 1, and 2. For each transition cost, we simulate 1,000 data sets and implement RIFLE along with a test in the spirit of Jones and Olken (2005). When implementing the latter test, we compute the absolute change in growth for each period, i.e., |y_t − y_{t−1}|, and we test whether this absolute change is greater in transition versus non-transition years using a t-test. Figure S1 shows the distribution of p-values resulting from both tests across different transition costs. As expected, the Jones and Olken test (top row of Figure S1) produces p-values that are skewed right, meaning that this test over-rejects the null hypothesis. Furthermore, the transition cost need not be too large to generate a significant bias. On the other hand, RIFLE performs much better in the presence of transition costs. Unless the transition cost is larger than the standard deviation of the outcome in non-transition years, the bias introduced by transition costs is negligible. Furthermore, when a bias is detectable, it goes in the opposite direction. Specifically, the distribution of p-values is skewed to the left, meaning that RIFLE under-rejects the null. The intuition for this result is provided in the main text. The test of Jones and Olken overstates leader effects in the presence of transition costs, while RIFLE performs much better. When there is a meaningful bias for RIFLE, which only occurs for very large transition costs, it leads us to understate leader effects. Overall, the implications of transition costs are minimal for RIFLE, and if anything, they lead RIFLE to be a conservative test of leader effects. Figure S1. Effect of Transition Costs for Jones and Olken vs. RIFLE. Each histogram shows the distribution of p-values resulting from 1,000 simulated data sets with transition costs (TC) of varying magnitudes. The top row presents results from a test in the spirit of Jones and Olken (2005), and the bottom row presents tests using RIFLE. See the text for more details. Theoretical Model of Endogenous Turnover As discussed in the main text, one concern for our test is that the outcome of interest influences the tenures of leaders. Suppose, for example, that governors do not matter for a particular outcome, but voters believe that they do, and therefore the values of that outcome variable influence the chances that the governor will stay in office. This kind of endogenous turnover could potentially bias the results of RIFLE.
To understand and illustrate the bias that arises from endogenous turnover, consider the following theoretical model. Suppose there is a binary outcome of interest that is good with probability p in each term, all leaders are the same (i.e., the outcome is unrelated to the identity of the leader), there is a two-term limit, and turnover for first-termers depends entirely on the outcome in the first term. Recall that R² ≡ 1 − RSS/TSS, where RSS is the residual sum of squares and TSS is the total sum of squares. With RIFLE, the TSS is identical for both the real data and the permuted data sets where the ordering of leaders is randomly shuffled. Therefore, to think about how RIFLE will perform, we can focus on the RSS. Under the null, we'd like the RSS to be identical, in expectation, for the real data and the permuted data. If the expected RSS is greater in the real data, that means the r-squared will be smaller, and we will under-reject the null. If the expected RSS is smaller in the real data, the r-squared will be larger, and we will over-reject the null. Since the sample size is the same for both the real and permuted data sets, we can also think about the average squared residual, and by comparing the expected average squared residual in the real and the permuted data, we can assess whether RIFLE will over- or under-reject the null. In this theoretical model, the data set of leaders and outcomes will include only three different kinds of leaders. Anyone who has a bad year in their first term will be removed from office, so there will be one-termers who had a bad outcome; let's refer to this type of leader as 0. To serve two terms, the outcome must have been good in the first term, but the outcome could have been either good or bad in the second term, so in addition to 0's there are also 1-0's and 1-1's. Of these, only the 1-0's show any within-leader variation: a 1-0's leader mean is 1/2, so its two residuals equal ±1/2. In our random permutations, instead of three kinds of leaders, there will now be six: 0, 1, 0-0, 0-1, 1-0, and 1-1. Four of these types have no variation in their outcome, so they make no contribution to the RSS, whereas the 0-1's and the 1-0's will again have residuals equal to ±1/2. Comparing the expected average squared residuals in the real and permuted data, we see that they equal each other if and only if p = 1/2. If p > 1/2, the squared residuals will be greater in the permuted data, meaning the r-squared is lower, and RIFLE will over-reject the null. Alternatively, if p < 1/2, the squared residuals will be smaller in the permuted data, meaning the r-squared will be greater, and RIFLE will under-reject the null. In other words, endogenous turnover can produce a bias, and that bias can go in either direction. To gain some intuition for the bias in the model, consider an extreme case where p is very close to zero but still positive. Remember that the r-squared of the regression of the outcome on leader fixed effects is determined entirely by the proportion of two-term leaders that have one good term and one bad term. In the rare case when there is a good outcome and a leader is retained, they will almost certainly be a 1-0; 1-1's will be exceedingly rare relative to 1-0's. This means that almost all of the two-termers in the real data will be 1-0's, contributing positively to the RSS. When we permute the leader tenures, most of those two-term leaders will happen to fall on two bad terms, and they'll become 0-0's, where they will not add to the RSS. The r-squared will be very high in both cases, but it will be almost exactly 1 in the permuted data, and it will be slightly lower in the real data, meaning that RIFLE will under-reject the null.
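To make the toy model concrete, the sketch below simulates a single unit's leader history under the model (binary outcomes that are good with probability p, removal after a bad first term, and a two-term limit) and compares the R² from leader fixed effects on the real tenure sequence against R² under random shuffles of the tenure lengths, in the spirit of RIFLE. It is a simplified, single-unit illustration with our own parameter choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_unit(T=100, p=0.7):
    """One unit's history: identical leaders, binary outcomes that are good
    with probability p, removal after a bad first term, two-term limit."""
    outcomes, tenures = [], []
    t = 0
    while t < T:
        first = rng.random() < p
        outcomes.append(int(first))
        if first and t + 1 < T:      # good first term: serve a second, final term
            outcomes.append(int(rng.random() < p))
            tenures.append(2)
            t += 2
        else:
            tenures.append(1)
            t += 1
    return np.array(outcomes, dtype=float), tenures

def r_squared(y, tenures):
    """R^2 from regressing the outcome on leader fixed effects."""
    rss, start = 0.0, 0
    for length in tenures:
        block = y[start:start + length]
        rss += ((block - block.mean()) ** 2).sum()
        start += length
    tss = ((y - y.mean()) ** 2).sum()
    return 1.0 - rss / tss

def rifle_pvalue(y, tenures, n_perm=300):
    """Share of tenure-shuffled placebo data sets with an R^2 at least as large."""
    real = r_squared(y, tenures)
    return np.mean([r_squared(y, list(rng.permutation(tenures))) >= real
                    for _ in range(n_perm)])

def rejection_rate(p, n_sims=200, level=0.05):
    return np.mean([rifle_pvalue(*simulate_unit(p=p)) <= level
                    for _ in range(n_sims)])

# Under the null of no leader effects the nominal rate is 5%; the model above
# predicts under-rejection for p < 1/2 and over-rejection for p > 1/2.
print("rejection rate, p = 0.3:", rejection_rate(0.3))
print("rejection rate, p = 0.7:", rejection_rate(0.7))
```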
Now consider the opposite extreme, where p is very close to, but still less than, 1. Almost all leaders are 1-1's, and there are roughly equal shares of 0's and 1-0's. When there is a rare bad outcome, half of those bad outcomes belong to one-termers, in which case they contribute nothing to the RSS. However, most of the leaders are two-termers, which means that in the random permutations, most of the bad outcomes get assigned to two-termers, creating 1-0's or 0-1's and increasing the RSS. Again the r-squared is very high in both cases, but it is higher in the real data than in the permuted data, meaning that we over-reject the null. Fortunately, the Monte Carlo simulation results in Table
2021-01-23T14:08:52.047Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "cee93a8609d5b31ce41df61236e801dcdae80428", "oa_license": "CCBY", "oa_url": "https://www.science.org/doi/pdf/10.1126/sciadv.abe3404?download=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "42c24915dc6a9054c55607e403e20aa627e43c5b", "s2fieldsofstudy": [ "Political Science", "Business" ], "extfieldsofstudy": [ "Medicine" ] }
270577514
pes2o/s2orc
v3-fos-license
Sensory-directed flavor analysis reveals the improvement in aroma quality of summer green tea by osmanthus scenting Flower scenting is an effective way to enhance the aroma of green tea (GT), including those osmanthus scented green tea (OSGT). However, the mechanism of aroma enhancement by scenting is still unclear. Here, the volatiles of GT, OSGT, and osmanthus were detected by GC–MS. The total volatile content of OSGT was significantly increased compared to GT, with the flowery and coconut aromas enhanced. Furthermore, 17 of 139 volatiles were responsible for the enhancement by GC–olfactometry and their absolute odor activity values (OAVs). Aroma recombination, omission and addition experiments showed that dihydro-β-ionone, (E)-β-ionone, (E, E)-2,4-heptadienal, geraniol, linalool, α-ionone, and γ-decalactone were the key aroma volatiles with flowery or coconut aromas. Additionally, the dynamics of the key volatiles (OAVs >1) from different scenting durations were analyzed, proving that the optimal duration was 6–12 h. This study provides new insight into the mechanism of aroma formation during OSGT production. Introduction Green tea is a popular drink worldwide with potential health benefits.According to the season of production, green tea can be divided into spring, summer and autumn varieties.The flavor quality of green tea harvested in spring is better than that harvested in autumn, while the flavor of green tea harvested in summer is the worst.In the summer, tea plants are exposed to high temperatures and strong light, resulting in the accumulation of caffeine and tea polyphenols, compounds related to carbon metabolism.As a result of these conditions, the content of amino acids, compounds related to nitrogen metabolism, decrease; at the same time, metabolic pathways associated with volatile accumulation are inhibited.Seasonal differences result in a much less coordinated taste and aroma of summer green tea compared to spring tea (Guo, Ho, Schwab, & Wan, 2021;Huang et al., 2023;Ji et al., 2018;Shao et al., 2022).This phenomenon causes an increasing amount of summer tea resources to be abandoned, resulting in the waste of tea resources.Therefore, improving the flavor quality of summer green tea is a basic research topic in the tea industry. 
Aroma is an important factor in evaluating the flavor of green tea.The charming, flower-like aroma properties are considered the hallmark of high-grade green tea.To further improve green tea's floral-like properties, a number of flowers have been used to scent the tea, including jasmine, osmanthus, and roses.Among these flowers, osmanthus is loved by consumers because of its rich aroma, leading to osmanthus scented green tea (OSGT), which has become popular among consumers.For OSGT, raw green tea (GT) is used as a chapi, absorbing the fragrance of osmanthus flowers so that it retains the original tea flavor but is also endowed with the fragrance of osmanthus flowers (An et al., 2022).Generally, osmanthus can be divided into three types: thunbergii, latifolius and aurantiacus.The color of thunbergii is orange-yellow, and the contents of α-ionone, β-ionone and γ-decalactone in thunbergii are higher, with a sweet and rich fragrance.The latifolius is yellowish-white or yellowish, and the content of (E)-β-ocimene in latifolius is higher, with a clear and mild fragrance.The aurantiacus is orange-red, and it lacks compounds such as α-ionone and β-ionone; the aroma of this type is inferior, and although it has a general osmanthus aroma, the sweet aroma is not as strong as that of thunberggius, and the fresh aroma is not as strong as that of latifolius (Cai et al., 2014;Sheng et al., 2020).During scenting, the GT absorbs volatiles in osmanthus flowers by capillary coagulation due to their loose porous structure.This process promotes the accumulation of a large number of floral-like volatiles, giving the GT an obvious osmanthus fragrance.Previous evidence shows that the main components of OSGT include palmitic acid, linolenic acid, (Z)-pyranoid linalool oxide, dehydrodihydro-3-epoxy-violyl alcohol, and methyl hexadecanoate (Guo et al., 2021).In fact, the aroma of OSGT is affected by many volatiles, and the different components and proportions of these volatiles affect the aroma intensity and persistence of OSGT.However, some unsolved problems continue to hinder the development of OSGT, that is, which volatiles are absorbed from flowers in large quantities during the scenting process and which volatiles are important contributors to the formation of the charming aroma of OSGT. Therefore, the aim of this study was to (a) explore the differential volatiles in GT, OSGT, and osmanthus by GC-MS; (b) identify and verify the key aroma volatiles that contribute to OSGT by sensory-directed flavor analysis; and (c) explore the accumulation pattern of these key aroma volatiles during scenting.The results of the study provide a theoretical basis for improving the osmanthus scenting process and the quality of OSGT, thus expanding the resource utilization of summer green tea. Sample preparation and collection Lu'an Guapian green teas were harvested in July 2022 from a tea plantation located in Jinzhai County, Anhui Province.The finished green teas were immediately sent to cold storage at a temperature of 0 • C until scenting.Fresh osmanthus was picked in September 2022 from osmanthus trees in Jinzhai County, Anhui Province.After picking, the fresh osmanthus flowers were sorted to remove impurities such as stalks and leaves.The subsequent processing of the green tea made from Osmanthus flowers is shown in Fig. 
1.Specifically, green teas and fresh osmanthus flowers were mixed by weight in a 2:1 ratio, and the scenting process lasted for 12 h.This process was carried out at room temperature.Upon completion of the scenting, the mixture of osmanthus and green teas was separated, from which the osmanthus was removed, leaving pure green teas.The obtained green teas were dried at a temperature of 80 • C to obtain OSGT with a moisture content of 6%.Fresh osmanthus, GT, and OSGT were collected as shown in Fig. 1, and tea samples from the scenting process, which lasted for 3, 6, 9, and 12 h, were also collected.These samples were taken in triplicate, sealed in sample bags and subsequently transported to the laboratory.All samples were freeze-dried prior to the experiment and subsequently analyzed for volatiles. Extraction of volatile compounds The volatile components of the three samples were extracted using headspace solid-phase microextraction (HS-SPME) and solvent-assisted flavor evaporation (SAFE).The extraction method of SAFE better retains the original flavor of the samples and can extract the high molecular weight compounds with low volatility in the samples.SPME is easy, efficient and quick, and is suitable for extracting low molecular weight and high volatile compounds.Therefore, we combined the two extraction methods to obtain a more comprehensive aroma profile (Huang et al., 2022). Preparation of internal standard solution and tea infusion For SPME, ethyl decanoate (12.50 mg) was first dissolved in anhydrous ethanol (10 mL), after which 80 μL of the solution was dissolved in pure water (10 mL).For SAFE, ethyl decanoate (21.95 mg) was first dissolved in anhydrous ethanol (10 mL), after which 2278 μL of the solution was dissolved in pure water (10 mL).Tea samples (3.0 g) were accurately weighed in a conical flask, 150 mL of pure boiling water was added, and the mixture was brewed for 4 min.The tea broth was filtered through 400 mesh gauze, put into an icewater bath, and cooled quickly to room temperature.The sample volatiles were extracted by three methods. Extraction of volatiles through headspace SPME A pipette was used to measure 10 mL of tea broth in a headspace bottle with a rotor, and the SPME internal standard (4 μL) was added and mixed thoroughly with NaCl (3.0 g) to promote the precipitation of aroma volatiles.The headspace flask was equilibrated in a constanttemperature water bath at 40 • C for 15 min and adsorbed under this water bath condition for 40 min (Huang et al., 2022;Zhang et al., 2023).Osmanthus volatiles were extracted in the same way as described above, but the ratio of osmanthus to water was changed.Specifically, 1 g of osmanthus was used and brewed with 100 mL of boiling water to simulate a 2:1 ratio by weight of green tea and osmanthus in scenting. Extraction of volatiles through SAFE Internal standard (6 μL) was added to 150 mL of tea broth and mixed thoroughly.The mixture was passed through the SAFE device under vacuum at 10 − 3 Pa at an extraction temperature of 40 • C and a condensation temperature of − 80 • C (liquid nitrogen).The collected SAFE fractions were thawed under tap water buffer, completely thawed, poured into a split funnel, extracted three separate times with distilled dichloromethane (30 mL), and finally dried with anhydrous sodium sulfate (until quicksand-like anhydrous sodium sulfate appeared). Finally, the extract was nitrogen-blown to 100 μL at 25 • C in a water bath (Huang et al., 2022). 
Detection and identification of volatile compounds The GC-MS analysis of volatiles was performed using an Agilent system consisting of a gas chromatograph (Model 7890B) and a mass spectrometer (Model 5975B, Santa Clara, CA, USA).During detection, the aroma volatiles were separated on an HP-5MS capillary column (30 m × 0.25 mm × 0.25 μm, J & W, Folsom, CA, USA).Pure helium (purity >99.99%) was used as the carrier gas at a constant flow rate of 1 mL/ min.The inlet temperature and injection method were 250 • C. The nonsplit mode was used.For SPME, the following procedure was employed for the GC: temperature of 40 • C for 5 min, then ramped from 40 to 180 • C at a rate of 4 • C/min.Then, the temperature was raised from 180 to 280 • C at a rate of 15 • C/min (held at 280 • C for 5 min).For SAFE, 2 μL of distillate was injected into the GC injection port.The temperature was maintained at 40 • C for 5 min and then ramped from 40 • C to 100 • C at a rate of 5 • C/min.Then, the temperature was raised from 100 • C to 200 • C at a rate of 3 • C/min and finally to 280 • C at 20 • C/min (held at 280 • C for 5 min).For SBSE, the thermal desorption procedure followed a previous study (Ma et al., 2021;Ma et al., 2023).The temperature was maintained at 40 • C for 5 min and then ramped from 40 • C to 100 • C at a rate of 3 • C/min.Then, the temperature was raised from 100 • C to 130 • C at a rate of 2 • C/min and finally to 250 • C at 10 • C/min (held at 250 • C for 5 min).The mass selective detector was operated in positron ionization mode with a mass scan range from m/z 30 to 350 at 70 eV.The linear retention index was determined by injecting n-alkanes C6-C40 using the same running procedure. Identification and quantification of volatiles All volatile compounds were first identified by comparison with mass spectra from the NIST17 library.The retention times (calculated using nalkanes C6-C40) were used to calculate retention indices, which were compared with those measured under the same conditions in the NIST17 library, and volatile compounds with retention indices within ±20 were retained.Finally, the concentrations of volatile compounds were relatively quantified by the ratio of the peak area of the compound to the peak area of ethyl arachidonate (Liu et al., 2023).Additionally, chemical standards with OAVs >1 were employed to achieve absolute quantification. GC-O analysis The aroma volatiles were extracted by SBSE, an experimental method that was slightly modified from previous studies (Ma et al., 2021;Ma et al., 2023).Ten milliliters of tea broth mixed thoroughly with NaCl (3.0 g), equilibrated in a constant-temperature water bath at 40 • C for 15 min, and adsorbed for 90 min were used.GC-O analyses were performed using the equipped sniffing port (ODP 3, Gerstel, Germany), and the extracted volatiles were fed into the MS (250 • C) and sniffing port (230 • C) in a 1:1 ratio.The injection procedure was consistent with SBSE in 2. 4. The GC-O analysis was conducted by a group of experienced evaluators (3 males and 2 females) who had been trained for up to 2 months, as described in Zhang et al. (2023).Then, GC-O analysis combined with the detection frequency method was used to analyze the aroma properties and odor intensity of the aroma volatiles (Ma et al., 2021;Wang et al., 2020).The average of the five evaluators represents the intensity of the aroma volatiles. 
OAV calculation OAV is widely used to assess the contribution of individual flavor volatiles to the overall aroma profile of food and tea samples. Individual volatiles with an OAV value >1 are generally considered to be aroma-active compounds and therefore contribute significantly to the overall aroma of the sample (Wang et al., 2020). The OAV is calculated as the ratio of the detected concentration of each volatile to its odor threshold (OT), the concentration at which a volatile can be smelled and identified in water. The OT values for the different volatiles were taken from earlier reports (Liao et al., 2020; Zhai, Zhang, Granvogl, Ho, & Wan, 2022; Zhu et al., 2015; Zhu et al., 2018). Recombination, omission, and addition experiments 2.6.1. Quantitative descriptive analysis (QDA) Ethical permission to conduct human sensory panel research is not customary for our institution. The appropriate protocols for protecting the rights and privacy of all participants were utilized during the execution of the research, such as no coercion to participate, full disclosure of study requirements and risks, written or verbal consent of participants, no release of participant data without their knowledge, and the ability to withdraw from the study at any time. The attached document is a statement signed by the participants. QDA was used to analyze the characteristics of the tea broths and to assess the differences between the broths. A total of 12 trained assessors participated in this experiment (7 females and 5 males). The evaluated tea products and chemical standards were safe for consumption. Representative tea samples (3.0 g) with a tea-water ratio of 1:50 were placed in an evaluation cup, filled with boiling water, and covered with a lid for 4 min. Twenty-five milliliters of tea broth was transferred to a 50 mL brown sniffing bottle, and each evaluator described the aroma attributes of the tea broth; the aroma descriptors that appeared with the top 6 frequencies were retained. These were flowery, roasted, cooked soybean-like, coconut, green, and chestnut-like. The corresponding standards and foods for the 6 aroma attributes were flowery ((E)-β-ionone), roasted (3-ethyl-2,5-dimethylpyrazine), coconut (γ-decalactone), green (hexanal), cooked soybean-like (cooked soybean), and chestnut-like (cooked chestnut) (Zhang et al., 2023). Evaluators used a 4-point scale (0-1 for weak odors; 1-2 for moderate; 2-3 for strong; 3-4 for very strong). Recombination experiments Recombination experiments were performed on the aroma volatiles with OAV > 1 that we screened in GT (16) and OSGT (18) to validate the qualitative and quantitative results of the aroma volatiles. The aroma specimens used were first dissolved in anhydrous ethanol and then added to 25 mL of deionized water at the concentrations detected in the tea broths (Zhang et al., 2023). The concentration of ethanol was below its OT (990,000 μg/L). The recombinant samples were evaluated using the same QDA method described above.
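The relative quantification and the OAV screening described above reduce to two simple ratios, illustrated in the short sketch below. The peak areas, internal-standard concentration, and odor thresholds used here are placeholder values chosen purely for illustration, not measurements or thresholds from this study.

```python
# Placeholder inputs: peak areas from a chromatogram, the internal standard's
# peak area and known concentration, and literature odor thresholds (OT).
internal_standard_area = 1.0e6        # peak area of the internal standard
internal_standard_conc = 50.0         # its concentration in the extract, ug/L

volatiles = {
    # name: (peak area, odor threshold in ug/L) -- illustrative numbers only
    "volatile_A": (2.4e5, 0.2),
    "volatile_B": (9.0e4, 6.0),
    "volatile_C": (1.2e6, 11.0),
}

for name, (area, ot) in volatiles.items():
    conc = area / internal_standard_area * internal_standard_conc  # relative quantification
    oav = conc / ot                                                # odor activity value
    status = "aroma-active (OAV > 1)" if oav > 1 else "below threshold"
    print(f"{name}: conc ~ {conc:.1f} ug/L, OAV ~ {oav:.1f} -> {status}")
```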
Omission experiments Flowery and coconut aroma were the two aroma attributes that differed most between GT and OSGT, and aroma recombination tests were conducted to verify the importance of single aroma volatiles in these two categories for the overall OSGT tea samples.The flowery and coconut aroma volatiles with OAVs >1 were missing in the recombinant OSGT samples.A total of 11 aroma volatiles were deleted separately.Twelve trained evaluators participated in this experiment (7 females and 5 males) and triangulated samples missing one aroma volatile (three randomly coded vials, two OSGT recombinant sample vials, and one vial missing one aroma volatile) (Wang et al., 2024). Addition experiments Seven key aroma volatiles screened by the omission experiments were added to Re-GT according to the concentration difference between GT and OSGT.Twelve trained evaluators participated in this experiment (7 females and 5 males), who evaluated and scored the key aroma volatiles of the samples to which a single key aroma volatile had been added.Samples were evaluated and scored to calculate the contribution of a single key aroma volatile to the aroma. Statistical analysis All experiments and samples analyzed were replicated at least three times, and the levels of volatile compounds detected are expressed as the mean ± standard deviation (SD).One-way analysis of variance (ANOVA) was performed using SPSS software (IBM, Armonk, NY, USA), with p values <0.05 considered significant.Principal component analysis (PCA) for volatile characteristics was performed using SIMCA software (Umea, Sweden).Data were plotted by Origin (OriginLab Co., USA) and GraphPad Prism 8 software (Shen et al., 2023). Volatile profiles of GT, OSGT, and osmanthus A total of 139 volatile compounds were detected by HS-SPME and SAFE (Fig. S1, Table S1).Among them, 97 were contained in GT, 121 in OSGT, and 74 in Osmanthus.The total volatile content of OSGT (1800.33 μg/L) was much higher than that of GT (495.28 μg/L) (Fig. 2A).These volatiles can be divided into 10 classes, including aldehydes, alcohols, ketones, esters, heterocyclics, terpenes, benzenes, oxides, sulfides, and acids.After scenting, the alcohols, ketones, esters and oxides were increased in OSGT, which were the main volatiles in osmanthus (Fig. 2B).Generally, alcohols and ketones are described as having a flowery aroma, aldehydes possess citrus and green flavors, esters provide sweetness and coconut aroma, and pyrazines are associated with baking and caramel aromas (Meng et al., 2024;Zhai et al., 2022).Therefore, osmanthus is an ideal flower for enhancing the aroma of tea, which is conducive to improving the aroma quality of GT. A total of 92 volatiles were found simultaneously in GT and OSGT, and 4 and 12 were detected separately in GT and OSGT, respectively (Fig. 2C).Forty-three of the 74 volatile compounds in osmanthus were detected in both GT and OSGT, 17 volatiles were common only to osmanthus and OSGT, and 13 were present only in osmanthus.The top 10 volatiles in GT (Fig. 2D), OSGT (Fig. 2E), and Osmanthus (Fig. 
2F) are listed.These dominant volatiles showed considerable variation among the three samples.Compared with GT, volatiles in OSGT are significantly increased in relative content, which is attributed to adsorption from osmanthus flowers.For example, the relative content of γ-decalactone, the most abundant volatile in OSGT, changed significantly from 2.44 ± 0.11 to 458.27 ± 14.67 μg/kg.(E)-β-ionone, dihydro-β-ionone, geraniol, and linalool are the main contributing compounds in osmanthus black tea, and they are also characteristic volatiles of osmanthus flowers.The black tea absorbed the fragrance of osmanthus, giving it a rich floral aroma (Meng et al., 2024).The significantly increased content of these volatiles may give OSGT its unique flowery flavor. Identification and quantitation of aroma volatiles GC-O was conducted to select the aroma volatiles from the volatile profiles.During GC-O, individual aroma volatiles are resolved under different temperature conditions and thus perceived by the assessor (Barba, Beno, Guichard and Thomas-Danguin, 2018).Under these conditions, interactions between aroma volatiles can be avoided, thus ensuring the authenticity of the aroma volatiles, and critical aroma volatiles are not missed.Accurate quantitative analysis using GC-O/MS and external standards ensures the accuracy of the original results. Screening of aroma volatiles responsible for aroma enhancement The volatiles with higher content do not indicate a larger contribution.OAV is used to assess and compare the contribution of volatiles.Nineteen of 33 volatiles were selected with OAVs >1 in GT and OSGT (Table S2).Among them, 10 volatiles were described as having a flowery aroma (Fig. 5A), and the other 9 had other aroma attributes (Fig. 5B). These volatiles are the key to distinguishing the flavor of OSGT from GT, especially in flowery and coconut aromas (Fig. S2A). Verification of key aroma volatiles The trained evaluation panelists gave characteristic attributes that matched the samples by sensory evaluation, and six descriptors were summarized and filtered based on the frequency of occurrence: flowery, roasted, cooked soybean-like, coconut, green, and cooked chestnut-like (Fig. S2A).The contributions of the above volatiles to the aroma were further verified by aroma recombination, omission, and addition experiments. Aroma recombination and omission experiments.In total, 16 and 18 responsible aroma volatiles were selected in GT and OSGT, respectively.Therefore, these 16 and 18 responsible aroma volatiles were added to deionized water as recombination samples at their true concentrations in the tea samples.Twelve trained panelists evaluated the sensory differences between the recombination (Resample) samples and the original samples.The similarity between the recombination and GT reached 82.75% (3.31/4), proving the successful recombination of all the responsible aroma volatiles in GT (Fig. S2B).The roasted score in the original GT sample was 2.03, while the score in the aroma recombination sample was only 1.63.Among the added responsible aroma volatiles, only 2,3-diethyl-5-methylpyrazine has the aroma attribute of roasted, and the reason for the higher score of roasted in GT may be that there may be an enhancement effect among the other roasted aroma volatiles.There may be an enhancing effect.The similarity between the recombination OSGT and OSGT reached 88.54% (3.54/4), demonstrating the successful recombination of all responsible aroma volatiles in OSGT (Fig. 
S2C).For the single aroma attribute, the recombination of OSGT and OSGT for roasted (1.14/1.62,70.47%), green (0.32/0.45, 70.37%), cooked chestnut-like (0.83/1.09, 76.45%), flowery (3.47/ 3.35, 103.58%), and coconut (2.49/2.37,105.06%) were successfully recombined. Aroma addition experiments.The aroma addition experiments were employed to further quantify the contribution of single key aroma volatiles.The six key aroma volatiles selected with flowery aroma attributes were added to each of the six groups, and they all contributed to the enhancement of the flowery aroma attributes of Re-GT (Table S4); among them, the two groups with the addition of dihydro-β-ionone and linalool achieved significant enhancement (p < 0.05).The addition of dihydro-β-ionone increased the Re-GT score from 1.94 to 3.10, and the addition of Re-GT + α-ionone increased the Re-GT score from 1.94 to 2.74.The addition of the other four groups had corresponding aroma enhancements with (E)-β-ionone of 26.72%, (E, E)-2,4-heptadienal of 20.45%, geraniol of 22.85%, and α-ionone of 13.96%.γ-Decalactone is the coconut aroma of interest for the volatiles, and the score of coconut aroma increased from 1.32 to 2.69 in the group with the addition of γ-decalactone only, an increase of 103.91%.It is noteworthy that in the group to which only γ-decalactone was added, his score was higher than that of Re-OSGT (2.49), suggesting that γ-decalactone is the major contributor to coconut aroma.The addition of all the key aroma volatiles from the flowery and fruity aroma categories to Re-GT resulted in a score of 3.37 for the flowery aroma and 2.41 for the coconut aroma, which increased by 73.54% for the flowery aroma and 82.45% for the coconut aroma.The group that added all the key aroma volatiles was compared with Re-OSGT (flowery: 3.47) (coconut aroma: 2.49), and there was not much difference between them.The aroma addition experiments also corroborated the accuracy of the aroma omission experiments, thus demonstrating that dihydro-β-ionone, (E)-β-ionone, (E, E)-2,4-heptadienal, geraniol, linalool, and α-ionone were the main causes of the differences in the flowery aroma of the two samples, and γ-decalactone was the main cause of the coconut aroma between GT and OSGT. Dynamics of aroma volatiles of OSGT during scenting Fig. S3A shows the total content of aroma volatiles for samples scented for 0 h, 3 h, 6 h, 9 h, and 12 h.The total volatile content in OSGT increased continuously with increasing aroma addition time and reached the maximum value after 12 h.In addition, a principal component analysis was performed to characterize the volatiles in the samples (Blasco et al., 2015;Granato, Santos, Escher, Ferreira, & Maggio, 2018).As shown in Fig. S3B, the tea samples with different scenting durations showed a clear distinction.The tea samples with 0 h and 3 h of scenting were in the positive semiaxis of PC1, and the tea samples with 6 h, 9 h and 12 h of scenting were in the negative semiaxis of PC1.This is consistent with the results of Fig. S3A. The absorption of aroma volatiles is a dynamic process.During scenting, the content of key aroma volatiles of tea samples at 3 h increased rapidly (Fig. S4) and tended to stabilize from 6 to 12 h, and the overall content of aroma reached a maximum at 12 h (Fig. 
S4).Regarding the dynamic changes in the aroma volatiles with flowery aroma, the content increased significantly (p < 0.05) at 6 h, and the content of aroma volatiles reached the maximum value at 12 h.In the comprehensive analysis, it was speculated that there may be a critical point for a better scenting time from 6 to 12 h.Overall, this study is the first to characterize the key aroma volatiles in OSGT, mainly adsorbed from osmanthus.Through sensory-assisted flavor analysis, we identified key volatiles responsible for the floral and sweet aromas, which all showed enrichment patterns during the scenting process.Nonetheless, this is a preliminary study.On the one hand, the aroma profile of green tea (also known as Chapi in Chinese) is influenced by various factors, such as cultivation measures and processing techniques (Wang et al., 2024).Therefore, in-depth exploration of the adsorption patterns of characteristic volatiles in osmanthus by different green teas is a basic topic for the future.On the other hand, how green tea adsorbs the characteristic volatiles from osmanthus, and the deeper mechanisms explaining this specific adsorption phenomenon are also worth being further explored. Ethical statement The appropriate protocols for protecting the rights and privacy of all participants were utilized during the execution of the research, such as no coercion to participate, full disclosure of study requirements and risks, written or verbal consent of participants, no release of participant data without their knowledge, and ability to withdraw from the study at any time.Attached is a statement signed by the participants. Fig. 1 . Fig. 1.The manufacturing process of osmanthus-scented green tea.Seven samples including GT, OSGT, osmanthus, T1, T2, T3, and T4 were obtained.(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.) Fig. 2 . Fig. 2. (A) Total volatile compounds detected by HS-SPME-GC-MS and SAFE-GC-MS in GT, OSGT, and osmanthus.(B) Categories and numbers of volatiles detected in GT, OSGT, and osmanthus.(C) Venn diagram of volatiles detected in GT, OSGT, and osmanthus.Bar graphs of the top 10 volatiles in contents detected by HS-SPME-GC-MS and SAFE-GC-MS in (D) GT, (E) OSGT, and (F) osmanthus. Fig. 5 . Fig. 5. (A) Box plots of flowery volatiles with OAVs >1 in GT, and OSGT.(B) Box plots of volatiles with OAVs >1 for other aroma attributes in GT, and OSGT.(C) Histogram of OAV ratios for GT and OSGT.
2024-06-19T15:24:31.208Z
2024-06-16T00:00:00.000
{ "year": 2024, "sha1": "1f19f8c7dbecb5972cafe84dc8aaaf50520a582f", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.fochx.2024.101571", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e8d4ce9ca95301867d162745f0f4c62900ecb91e", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
268022654
pes2o/s2orc
v3-fos-license
End-User Assessment of the Graduate Performance From Islamic Bank Department, Faculty of Islamic Studies, Universitas It is undeniable that external factors drive the development and growth of the Islamic bank industry in Indonesia. One of them is the supply of Islamic Human Resources (IHR) from various universities (PT), both private and public, that have opened Islamic banking study programs. However, several aspects of the growing supply of IHR from PT still need improvement, including the link and match between curricula and industry needs, the absorption of graduates into the Islamic bank industry, and the competence and performance of PT graduates. Therefore, this study aims to assess the performance of graduates of Islamic banking study programs who have worked in the Islamic banking and finance industry, using technical competence, conceptual competence, and interpersonal ability as the assessment dimensions. This type of research is descriptive quantitative. The sample is all alumni of the Islamic banking study program who worked in Islamic banks and Islamic financial institutions during 2019-2021, selected with snowball sampling techniques. The sampling method uses questionnaires distributed to company leaders. The findings showed that most alumni of Islamic banking study programs who work in the financial industry show high technical skills, conceptual abilities, and interpersonal skills. Keywords: End-User Assessment; Graduate Performance; Technical Competence; Conceptual Competence; Interpersonal Ability. Introduction The development of Islamic financial institutions today is one of the supporting tools in applying Islamic principles in the economic sector. Regarding Islamic financial institutions, their development has been dominated by Islamic banking. However, other institutions such as Islamic insurance, Baitul Maal Wa Tamwil (BMT), Islamic capital markets, Islamic mutual funds, and Islamic pawnshops have also operated to serve the needs of the community (Ryandono 2018; Wahyudi 2020). The Islamic bank industry has grown and developed rapidly in terms of total assets, the number of banks, and business volume (Desriani 2022; Anugrah and Irawan 2022; Utari et al., 2022). An emerging consequence of such growth is the need for Islamic Human Resources (IHR). Higher education institutions (PT) are among the suppliers of qualified Islamic finance IHR, in terms of both quality and quantity, needed to anticipate the growth of the Islamic bank industry and Islamic finance. It is noted that almost all universities, both public and private, have opened Islamic banking study programs and the like to ensure supply to the industry. The condition of IHR in the Islamic bank industry,
snowball sampling techniques.The sampling method uses questionnaires distributed to company leaders.The findings showed that most alums of Islamic banking study programs who work in the financial industry show high technical skills, conceptual abilities, and interpersonal skills. based on the latest research report, shows a gap, especially in the minority educational background that comes from the Islamic banking study program.The condition of IHR in the Islamic bank industry and Islamic finance in Indonesia, where the source of IHR Islamic finance in 2008 who were fresh graduates from universities was only 20 percent.At the same time, S1 (Strata one) graduates who come from non-Islamic scientific majors (for example, development economics majors, financial management majors, and so on), most Islamic finance IHR are taken from conventional bank IHR.Detailed data are presented in Table 1.This research is essential for Islamic banking study programs.First, the research findings will become benchmarking policies in curriculum evaluation that have been implemented in the learning process in the future.Second, the study program can rearrange the link and match the curriculum with practical needs in the Islamic banking industry.Third, based on the need for accreditation forms, one is the absorption and suitability of graduates who work in the scientific field of the study program.Four, the Islamic banking study program will carry out reaccreditation.Based on these four normative bases, the results of this study will contribute significantly to be used as evaluation and consideration material in taking banking study program policies Islamic in the future. IHR Competency Many methods, approaches, and elements are used as measures for IHR performance appraisal.The elements of performance appraisal in this study, referred to by Rivai and Jauvani (2009) there are technical competence, conceptual competence, and interpersonal skills used by researchers. According to by Rivai and Jauvani (2009), technical competence is the competence to use the knowledge, methods, techniques, and equipment used to carry out the task and the experience and training obtained.While conceptual competence, namely the competence to understand the complexity of the company and the adjustment of the field of movement of each unit into the company's operational field as a whole, in essence, the individual understands his duties, functions, and responsibilities as an employee.Interpersonal relationship competencies include cooperating with others, motivating employees, negotiating, and others. Previous research The previous research has been done a lot.Trimulato (2018) concluded that there must be a form of IHR management to develop the competence of Islamic bank employees, one of which is through ZIKR, PIKR, and MIKR.Pangesti and Sutanto (2020) stated that one of the factors that cause low financial performance is the fulfillment of the quality of IHR of Islamic banks. 
In line with these findings, Latifah and Ritonga (2020) argue that PT must play an essential role in supplying IHR with theoretical and practical competencies. Yuliar (2021) therefore recommends the implementation of Islamic-based IHR management as a possible solution to the low competence of Islamic bank IHR. Furthermore, Elvira (2015) analyzed the role of PT Ekonomi Islam in preparing Islamic IHR and found that, to produce competent graduates, PT must formulate a curriculum that combines financial theory, jurisprudence, and practicum, as well as supporting programs such as internships and on-the-job training. The study also suggests that PT instill moral values (creed and morals) in graduates. Nuroniah and Triyanto (2015) analyzed how to prepare the competitiveness of graduates of Islamic banking study programs; their results illustrate that the Islamic economics study program must play an active role in preparing qualified and professional Islamic human resources that meet the needs and expectations of the Islamic financial industry by compiling a KKNI-based curriculum. Another study, by Syahri and Panorama (2020), found that the curriculum and learning model of Islamic banking do not yet answer the level of knowledge needed in Islamic banking; therefore, it is necessary to reconstruct Islamic banking learning. Research conducted by Lestari (2021) analyzed the assessment and responses of Islamic banking graduates and found that a key competency needed by Islamic banking alumni in their work is the ability to manage time. In addition, Zubair (2018) argued that banking technology competencies, such as computerized accounting, are provisions that PT must provide. These previous studies focus on analyzing the role of PT and the competencies given to graduates; no research has revealed how PT graduates actually perform in the industry. Therefore, this study seeks to fill the existing gap, namely the industry's assessment of the performance of graduates of the Islamic banking study program at the Faculty of Islamic Studies, Ahmad Dahlan University.

Research Methods This research is descriptive and quantitative. The descriptive approach is used to determine how the industry assesses the performance of Islamic banking study program graduates. Elements of assessment include technical competence, conceptual competence, and interpersonal ability. The sample consists of all graduates of the Islamic banking study program of the Faculty of Islamic Religion UAD who have worked in Islamic banks and Islamic finance throughout Indonesia, selected with snowball sampling techniques. The number of respondents amounted to 24 graduates. Data were collected through a questionnaire measured on a Likert scale and distributed to leaders of Islamic banks and Islamic finance institutions. Before the questionnaire was distributed, validity and reliability tests were carried out. The data analysis used is descriptive percentage, namely the presentation of data through tabulations or tables, graphs or figures, and simple statistical figures. Table 1 shows that of the 24 questionnaires distributed to respondents, all were returned to the researchers; in other words, this study had a response rate of 100%. Based on the returned questionnaires, data from all 24 respondents could be used for analysis.
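To make the descriptive-percentage procedure concrete, the short Python sketch below groups per-respondent Likert scores into high and low categories and reports the share of respondents in each category. The data, item count, and cut-off rule here are illustrative assumptions only, not the study's actual instrument or scoring.

```python
import numpy as np

# Hypothetical 5-point Likert responses: 24 respondents x 5 items (illustrative only).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(24, 5))

# Total score per respondent and an assumed midpoint cut-off separating "high" from "low".
totals = responses.sum(axis=1)
cutoff = (1 + 5) / 2 * responses.shape[1]   # midpoint of the possible score range

categories = np.where(totals >= cutoff, "high", "low")

# Descriptive percentage: share of respondents falling in each category.
for label in ("high", "low"):
    pct = 100.0 * np.mean(categories == label)
    print(f"{label}: {pct:.0f}%")
```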
Profile Characteristics of respondents by workplace The data obtained are then classified into three groups. This classification uses standard score intervals for the categories.

Descriptive Analysis Descriptive analysis determines the results of respondents' responses to the variables used, through the questionnaire items submitted. The frequency distribution of each variable is processed by grouping the value scores of the respondents' answers. The frequency distribution of respondents' answers represents the tendency of perception towards the research variables, namely the performance of graduates of the Islamic banking study program, Faculty of Islamic Studies, Universitas Ahmad Dahlan, which is divided into technical abilities, interpersonal abilities, and conceptual abilities.

Technical Capabilities Technical ability is the capacity of individuals to carry out a job related to technical operations. The measurement indicators used are job knowledge, the ability to apply skills on the job, and accuracy. Figure 2 shows that of the 24 respondents, most were in the high technical ability category (92%), with the remainder in the low category (8%). This indicates the excellent technical ability of graduates of the Islamic Banking Study Program, Faculty of Islamic Religion, Universitas Ahmad Dahlan, covering job knowledge, the ability to apply work expertise, and accuracy. In other words, most Islamic Banking study program graduates have high technical abilities. From the available data, it can be concluded that most graduates can master and carry out tasks related to technical aspects in the field of Islamic banking well. By mastering job knowledge, applying expertise well, and maintaining rigor in the execution of tasks, individuals can improve their technical abilities and contribute effectively in a work environment that values technical and operational aspects.

Conceptual Abilities Conceptual ability is defined as the ability of employees to understand the company's complexity in relation to their tasks and responsibilities or job descriptions, with the measured aspects including quality of work, quantity of work, and the ability to solve work problems. Figure 3 shows that, regarding conceptual ability, most respondents are in the high ability category (96%), followed by the low category (4%). These results show the excellent conceptual ability of graduates of the Islamic Banking Study Program, Faculty of Islamic Religion, Ahmad Dahlan University, especially related to the quality of work, the quantity of work, and the ability to solve work problems.

Figure 3. Level of Conceptual Ability of Graduates from Islamic Banking Study Program

Conceptual abilities play an important role in employee performance in a corporate environment. Good conceptual skills include a deep understanding of the complex aspects of work, the ability to see relationships between different elements, and the ability to apply that understanding in the tasks performed. Thus, the development of conceptual abilities is an important factor in improving employee performance. Through deep understanding, the ability to apply that understanding in work tasks, and the ability to solve work problems, employees can improve the quality, quantity, and effectiveness of their work in complex environments.
Interpersonal Skills Interpersonal ability is defined here as a non-technical skill that graduates of the Islamic Banking Study Program, Faculty of Islamic Religion, Universitas Ahmad Dahlan need to possess, covering attitude towards superiors and colleagues, adaptability, ability to cooperate, discipline, and communication skills. Figure 4 illustrates the level of interpersonal skills of respondents, mainly in the high category (92%), followed by the low category (8%). This shows the excellent interpersonal skills possessed by graduates of the Islamic Banking Study Program in these respects.

Figure 5. Level of interpersonal ability of alumni from Islamic Banking Study Program

There are significant differences between the high category and the low category in terms of technical ability, conceptual ability, and interpersonal skills among graduates of Islamic banking study programs who work in the financial industry. In the high category, most alumni (87%) demonstrate strong technical, conceptual, and interpersonal abilities. This shows that most of them have solid knowledge, can apply expertise well in job tasks, understand the complexities of the financial industry, and have good interpersonal skills in interacting with customers and colleagues. It indicates that they have the potential to succeed and contribute positively to their work. In the low category, however, a small percentage of graduates (13%) fell short in technical ability, conceptual ability, and interpersonal skills, which indicates a deficiency in these aspects. Graduates in this category may require more attention in developing their technical abilities, a deeper understanding of the financial industry, and improvements in their interpersonal skills. Such improvement and skill development efforts can help them grow and improve their performance in a competitive work environment.

In conclusion, most graduates of Islamic banking study programs are in the high category and show good technical, conceptual, and interpersonal skills. This underlines the importance of developing these skills to succeed in the financial industry. Meanwhile, the few alumni in the low category need more attention to improve their abilities in these aspects. Educational institutions and the financial industry need to provide the necessary support and training to improve the qualifications and performance of graduates in the low category so that they can adapt and compete in the competitive financial industry.

Overall, technical abilities, conceptual abilities, and interpersonal skills are interrelated and complementary for graduates of Islamic banking study programs working in the financial industry. Technical skills provide a solid foundation for carrying out specific operational tasks, conceptual abilities help in a deeper understanding of the financial industry, and interpersonal skills facilitate good relationships with customers and colleagues. Combining these three abilities can produce effective and successful graduates in the financial industry, who can contribute positively to company growth and customer satisfaction.
Conclusion The findings and descriptive tests show that most graduates of the Islamic banking Department working in the financial industry show high technical, conceptual, and interpersonal abilities. On the other hand, a small number of alumni are still in the low category in terms of technical, conceptual, and interpersonal ability. Therefore, this research suggests that it is essential for the Islamic banking Department, Faculty of Islamic Studies, Universitas Ahmad Dahlan to continue improving the curriculum and skill development programs so that graduates of the Islamic banking study program are prepared with strong technical abilities, conceptual abilities, and interpersonal skills. These efforts will help ensure that graduates can best face the challenges of the financial industry and make significant contributions in their work.

Recommendation Further research is recommended to investigate the factors that affect technical ability, conceptual ability, and interpersonal skills in graduates of Islamic banking study programs.

Figure 2. Technical Ability Level of Graduates from Islamic Banking Study Program
Technical ability is an essential factor in individual performance in work involving technical and operational aspects. Solid technical ability includes good knowledge of the tasks, the procedures to be followed, and the expertise required to carry out the job. In addition, the ability to apply these skills well and high accuracy in carrying out tasks are also determining factors. Good technical skills enable individuals to carry out tasks efficiently, produce accurate results, and face technical challenges well. Solid technical capability is essential for achieving high-quality work and making a meaningful contribution in technical operations. Therefore, developing and improving technical capabilities is essential for individuals who want to improve their performance in jobs involving technical and operational aspects.

Table 1. Conditions of Islamic Financial IHR in Indonesia
Graduates of the UAD Islamic Banking Study Program are targeted to have two competencies in the field of Islamic banking: as practitioners of Islamic banking and as researchers in Islamic economics and banking. The Islamic Banking Study Program is a study program at the Faculty of Islamic Religion (FAI), Universitas Ahmad Dahlan (UAD). Established in 2016, it has received B accreditation, and its graduates obtain a Bachelor of Economics (S.E.).

Table 1 shows the jobs held by graduates of Islamic banking study programs by field of work. Ten graduates work as financial staff; two each serve as tellers, customer service officers, and accountants; and one each works in roles such as marketing staff, customer success specialist, content analyst, and bancassurance specialist. Based on these findings, graduates of Islamic banking study programs have diverse job opportunities in various fields. They can work in banks as well as in other companies related to finance and marketing. Most graduates of the Islamic banking department work in finance, for example as tellers, financial staff, and accountants. This shows that the Islamic banking study program provides a solid foundation of financial and management knowledge.

Respondents by type of field of work: based on the results of the questionnaire distribution, data were obtained as shown in Table 1, which briefly shows the number of samples and the rate of return of questionnaires answered by respondents.

Table 1. Number of Samples and Return Rate of Questionnaires (Source: primary data processed, 2022)

The performance appraisal of employees who graduated from the Islamic Banking Study Program, Faculty of Islamic Religion, Universitas Ahmad Dahlan can be analyzed from three aspects: technical, conceptual, and interpersonal. Figure 5 shows that the performance of graduates of the Islamic Banking Study Program, covering technical abilities, conceptual abilities, and interpersonal abilities, is mainly in the high category (87%), with the remainder in the low category (13%). These results indicate the excellent performance of the Islamic Banking Study Program graduates, Faculty of Islamic Religion, Ahmad Dahlan University.
Field-created diverse quantizations in phosphorenes

Electronic properties of few-layer phosphorenes are investigated by the generalized tight-binding model. They are greatly diversified by the electric and magnetic fields ($E_z$ and $B_z$). The $E_z$-induced gap transition, Dirac cones, oscillatory bands and critical points are present in the bilayer system, but absent in the monolayer one. The diverse magnetic quantization phenomena cover the coexistent two subgroups of Landau levels, the uniform and non-uniform energy spacings, and the crossing and anti-crossing behaviors. Specifically, the wavefunctions exhibit dramatic changes between the well-behaved and multi-mode oscillations. The feature-rich energy spectra are revealed in the density of states as many special structures which could be verified from scanning tunneling spectroscopy.

I. INTRODUCTION
The two-dimensional layered systems, with nano-scaled thickness and unique geometric symmetries, have stirred a lot of experimental and theoretical studies [1,2,3]. They have been successfully synthesized by various experimental methods, such as graphene [4], silicene [5], germanene [6], tinene [7], and transition metal oxides [8]. Such 2D systems are very suitable for studying novel physical, chemical and material phenomena. Specifically, few-layer phosphorenes have recently been produced by using the mechanical cleavage approach [9,10], liquid exfoliation [11,12,13], and mineralizer-assisted short-way transport reaction [14,15,16]. These systems inherently have energy gaps of 1.5-2.0 eV [17,18], as identified from optical measurements [10,19]. Such gaps are higher than that ($\sim 0.2-0.3$ eV) of the bulk system [20,21], and they are in sharp contrast with the zero or narrow gaps of 2D group-IV systems [22]. Transport measurements show that the phosphorene-based field-effect transistor exhibits an on/off ratio of $10^5$ and a carrier mobility at room temperature as high as $10^3$ cm$^2$/V$\cdot$s [23]. Furthermore, monolayer phosphorene displays unusual energy spectra and quantum Hall effect due to the magnetic quantization [24]. Few-layer phosphorenes are expected to have high potential in next-generation electronic and optical devices [23,25].

This work is focused on how to create diverse quantization phenomena in monolayer and bilayer phosphorenes by tuning a composite magnetic and electric field ($\mathbf{B} = B_z\hat{z}$ and $\mathbf{E} = E_z\hat{z}$). Each phosphorene layer possesses a puckered structure, mainly owing to the $sp^3$ hybridization of the ($3s$, $3p_x$, $3p_y$, $3p_z$) orbitals. The deformed hexagonal lattice on the $x$-$y$ plane is quite different from the honeycomb lattices of group-IV systems [26]. This unique geometric structure fully dominates the low-lying energy bands, which are highly anisotropic in the dispersion relations of energy versus wave vector, e.g., the linear and parabolic dispersions near $E_F$ along the $k_x$ and $k_y$ directions, respectively [26]. The anisotropic behaviors are clearly revealed in other physical properties, as verified by recent measurements of optical spectra and transport properties [23,25]. This is a unique advantage of phosphorene in comparison with MoS$_2$-related semiconductors. The unusual anisotropy could be utilized in the design of unconventional thermoelectric devices. For example, the thermal gradient and the potential difference can be applied in two orthogonal directions, leading to one direction with the higher thermal conductivity and another with the larger electrical conductivity [27].
Moreover, this intrinsic property will greatly diversify the quantization phenomena. The low-lying electronic structure is easily tuned by external electric and magnetic fields. A uniform perpendicular electric field can create a monotonic increase of the energy gap with field strength in monolayer phosphorene. Specifically, bilayer phosphorene presents drastic changes of the energy bands and becomes a gapless system beyond the critical electric field ($E_{z,c}$) [28,29]. There exist rich energy dispersions during the variation of $E_z$, including the parabolic bands, the graphene-like Dirac-cone structure, and the oscillatory bands. The unusual transition comes from the strong competitive or cooperative relations among the intralayer and interlayer atomic interactions and the Coulomb potentials. These will be directly reflected in the diverse magnetic quantization phenomena.

The generalized tight-binding (TB) model is further developed to explore the essential properties in detail [30]. The Hamiltonian is built from the tight-binding functions on the distinct sublattices and layers, in which all the interactions and external fields are taken into account simultaneously. This method can deal with the magnetic quantization of electronic states even in the presence of complicated geometric structures and external fields [30,31].

The dispersionless Landau levels (LLs) come from the magnetic quantization of neighboring electronic states. The main features are investigated for monolayer and bilayer phosphorenes in a composite electric and magnetic field, especially the $B_z$- and $E_z$-dependent energy spectra and the spatial distributions of the quantum modes. The generalized tight-binding model is suitable for studying the competitive quantization due to the multi-constant-energy loops and the coexistent extreme and saddle points in the energy-wave-vector space. This study shows that the LL spectra exhibit monotonous or non-monotonous dependences, and non-crossing, crossing or anti-crossing behaviors. Furthermore, there are two kinds of LLs according to the well-behaved and perturbed distribution modes. The anti-crossing spectra will be clearly illustrated by the obvious changes in the mixing modes. Specifically, two distinct subgroups of valence (conduction) LLs near the Fermi level are identified from the distinguishable localization centers. They have never been observed in other 2D systems up to now. The unusual energy spectra are directly revealed in the special structures of the density of states (DOS). They could be verified from experimental measurements of scanning tunneling spectroscopy (STS).

II. METHODS
Monolayer phosphorene, with a puckered honeycomb structure, has a rectangular unit cell (black lines in Fig. 1(a)). There are four phosphorus atoms, half of which are located at the lower or higher (A and B) sublattices. Similar structures are revealed in few-layer systems, e.g., bilayer phosphorene in Fig. 1(b). The low-lying energy bands are dominated by the atomic interactions of the $3p_z$ orbitals [26]. The few-layer Hamiltonian is characterized by the site energies, field-induced potentials, and hopping terms: $\varepsilon_i^l$ is zero in the monolayer; in a few-layer system it is the layer- and sublattice-dependent site energy due to the chemical environment. $U_i^l$ is the Coulomb potential energy induced by an electric field. $c_i^l$ ($c_j^{l'\dagger}$) is the annihilation (creation) operator.
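In a compact second-quantized form, the Hamiltonian described here may be written schematically as below. This is a sketch assuming the standard tight-binding structure implied by the listed terms (on-site energies plus field-induced potentials, intralayer hoppings, and interlayer hoppings), not necessarily the exact notation of Ref. [26]:

$$H=\sum_{l,i}\left(\varepsilon_i^{l}+U_i^{l}\right)c_i^{l\dagger}c_i^{l}
+\sum_{l}\sum_{i\neq j} t_{ij}^{ll}\,c_i^{l\dagger}c_j^{l}
+\sum_{l\neq l'}\sum_{i,j} t_{ij}^{\prime\,ll'}\,c_i^{l\dagger}c_j^{l'} .$$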
$t_{ij}^{ll}$ and $t_{ij}^{\prime\,ll'}$ are, respectively, the intralayer and interlayer hopping integrals, and the effective interactions used in the calculations cover up to the fourth and fifth neighboring atoms. These hopping parameters are adopted from Ref. [26]. Monolayer and bilayer phosphorene are assumed to exist in a uniform perpendicular magnetic field. The magnetic flux through a unit rectangle is $\Phi = a_1 a_2 B_z$, where $a_1$ and $a_2$ are the lattice constants of the rectangular unit cell.

III. RESULTS AND DISCUSSION
The special lattice structure and multiple hopping integrals are responsible for the rich energy bands. Monolayer phosphorene has a direct gap of $E_g \sim 1.6$ eV near the $\Gamma$ point (Fig. 2(a)), while group-IV systems present zero or narrow gaps at the K point [32]. Along the $\Gamma$X and $\Gamma$Y directions, the energy dispersions are approximately linear and parabolic, respectively. The effective mass of the former is much lighter than that of the latter, being associated with the preferred chemical bonding along $\hat{x}$ [33]. The conduction and valence bands are asymmetric about $E_F = 0$. They, respectively, arise from the linearly symmetric and anti-symmetric superpositions of TB functions on the two sublattices (A and B). This simple relation within the same layer is modified in bilayer phosphorene, mainly owing to the finite site energies (the first term in Eq. (1)). Furthermore, the layer-dependent TB functions make distinct contributions to the two pairs of energy bands.

A perpendicular electric field can greatly diversify the electronic properties. $E_g$ of the monolayer system grows monotonically with increasing field strength (Figs. 2(a) and 2(b)). As for the bilayer system, the first pair of energy bands approaches $E_F$ (blue curves in Figs. 2(c)-2(f)), while the opposite is true for the second pair (red curves; $E_z$ in units of V/Å). The parabolic bands of the former lead to a zero gap near the $\Gamma$ point at the critical field of $E_{z,c} = 0.3$ (Fig. 2(f)). With a further increase of field strength, their energy dispersions present dramatic changes (Fig. 2(g)). Along $\Gamma$Y and $\Gamma$X ($k_y$ and $k_x$), there exist the linearly intersecting bands and the oscillatory bands, respectively. Two split Dirac-cone structures are situated at the right- and left-hand sides of the $\Gamma$ point (along $+k_y$ and $-k_y$ in Fig. 2(h)). Furthermore, the extreme points are just at the $\Gamma$ point, accompanied by two saddle points at opposite $k_x$'s. All the critical points and the constant-energy loops in the energy-wave-vector space will dominate the main features of the LL spectra. Specifically, the Coulomb potential energy differences can create significant probability transfer between the two layers. For example, the four TB functions might become extremely non-comparable at a sufficiently high field strength. This will play a critical role in the unusual LL wavefunctions.

The highly anisotropic energy dispersions create a unique dependence of the LL energies on the quantum number ($n^{c,v}$; discussed later) and the magnetic field strength, as shown in Fig. 3. Each LL is four-fold degenerate for each ($k_x$, $k_y$) state because of the spin degree of freedom and the mirror symmetry about the $z$-axis. The ($k_x = 0$, $k_y = 0$) state in the reduced Brillouin zone is chosen for a systematic study. In monolayer and bilayer phosphorenes, the low-lying LL energies cannot be well described by a linear relation $n^{c,v} B_z$ (the dashed pink lines), especially for higher energies and field strengths.
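To illustrate how dispersionless Landau levels emerge from magnetic quantization in a tight-binding framework, the short Python sketch below diagonalizes a generic square-lattice hopping model with Peierls phases in the Landau gauge. It is a simplified stand-in for the phosphorene model: the lattice, hopping value, and flux are illustrative assumptions, not the parameters of Ref. [26].

```python
import numpy as np

def landau_gauge_hamiltonian(nx=20, ny=20, t=1.0, flux=0.05):
    """Square-lattice tight-binding Hamiltonian with Peierls phases (Landau gauge)."""
    n = nx * ny
    h = np.zeros((n, n), dtype=complex)
    idx = lambda x, y: x * ny + y
    for x in range(nx):
        for y in range(ny):
            if x + 1 < nx:                       # hopping along x: no phase in this gauge
                h[idx(x, y), idx(x + 1, y)] = -t
                h[idx(x + 1, y), idx(x, y)] = -t
            if y + 1 < ny:                       # hopping along y picks up exp(i*2*pi*flux*x)
                phase = np.exp(2j * np.pi * flux * x)
                h[idx(x, y), idx(x, y + 1)] = -t * phase
                h[idx(x, y + 1), idx(x, y)] = -t * np.conj(phase)
    return h

# At weak flux the low-lying eigenvalues bunch into nearly flat, almost equally spaced
# groups, i.e., Landau levels quantized from the parabolic band bottom.
energies = np.linalg.eigvalsh(landau_gauge_hamiltonian())
print(energies[:12])
```

For an isotropic parabolic band bottom the level spacing is nearly uniform; anisotropic, Dirac-like, or multi-valley dispersions modify both the spacing and the degeneracy, which is the regime analyzed for phosphorene here.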
This is different from the square-root dependence in monolayer graphene [34], and the linear dependence in AB-stacked bilayer graphene [35] and MoS$_2$ [36]. The valence LLs in range (I) have a well-behaved $B_z$-dependence when the magnetic field is below 20 T (Fig. 5(a)). Their energies can be fitted by a square-root relation, $\propto\sqrt{n_D^v B_z}$. This dependence is similar to that of monolayer graphene with the linear Dirac cone [34]. However, two entangled LLs, which arise from the anti-crossing of two subgroups, appear at higher magnetic fields. As for the valence LLs in range (II), their spectrum presents two coexistent subgroups with non-monotonous relations. Furthermore, the $n_D^v$ and $n_\Gamma^v$ LLs have opposite $B_z$-dependences. The crossings and anti-crossings occur continuously in an alternating form. This abnormal $B_z$-dependent spectrum has never been observed in other 2D materials; that is, the unusual LL spectrum of two competitive subgroups is absent in other layered systems. The anti-crossings clearly illustrate that such LLs are composed of multi-oscillation modes (Fig. 6). At deeper state energies, the LL spectrum changes into a monotonous $B_z$-dependence, directly reflecting the parabolic energy dispersion. All the LLs in range (III) belong to single-mode oscillations with many zero points.

The complicated LL spectra are also achieved by tuning the electric field. At $E_z > E_{z,c}$ (Fig. 5(b)), there exist frequent crossings and anti-crossings related to the two subgroups of LLs. The diverse LL spectra will be obviously revealed in the DOS as special structures, so that they could be directly identified from STS measurements. The anti-crossing spectra originate from the Landau states with multiple zero points.

The main features of the energy bands and LL spectra are directly reflected in the DOS. At zero or weak fields, the band-edge states of the parabolic bands create the gap-dependent shoulder structures, e.g., $E_z = 0.29$ in Fig. 7(a) by the blue curve. The initial structures are replaced by a valley-like structure due to the deformed Dirac cone (Fig. 2(g)) when the gap transition happens at $E_z = E_{z,c}$ (Fig. 7(b)). The symmetric peaks with a logarithmically divergent form arise from the saddle points.

The magnetic field induces a lot of delta-function-like peaks. The height and spacing of the peaks, respectively, reflect the state degeneracy and the energy dispersion at $B_z = 0$. At $E_z < E_{z,c}$, the low-frequency DOS peaks have the uniform height of the four-fold degeneracy and almost the same spacing (red curve in Fig. 7(a)), mainly owing to the quantization of the parabolic band (Fig. 2(f)). But for $E_z \geq E_{z,c}$, unusual features appear in the energy ranges (I) and (II) (Figs. 7(b)-7(d)). There is one pair of peaks centered about the Fermi level at $E_z = E_{z,c}$ (Fig. 7(b)). With the increase of $E_z$, a very prominent peak, with eight-fold degeneracy, is revealed at $E = 0$ (Figs. 7(c) and 7(d)). Similar peaks, which come from the quantized Dirac cone, survive at stronger electric fields (Fig. 7(d)). The double-peak structures at higher/deeper energies are due to the two anti-crossing LLs. Apparently, all the low-lying peaks present highly non-uniform spacings. STS measurements of the main features of the low-energy LL peaks could provide useful information about the diverse magnetic quantizations.

Phosphorenes are in sharp contrast with graphenes in electronic properties, such as the field-dependent band structures and LLs.
For monolayer systems, the former and the latter, respectively, present the middle and zero gaps associated with the parabolic and linear bands. The Dirac cone of graphene creates the square-root dependence in the $B_z$-dependent LL energy spectrum. Each LL is eight-fold degenerate because of the hexagonal symmetry (or two equivalent valleys) [35]. The AA- and AB-stacked bilayer graphenes are semimetals with band overlaps. The AA stacking could be regarded as the superposition of two monolayer graphenes in the magnetic quantization; that is, this system has well-behaved energy spectra and LL wavefunctions. However, the electric field in the AB stacking leads to an energy gap and valley-split LLs. The LL degeneracy is reduced to half under the destruction of the mirror symmetry about the $z = 0$ plane. Furthermore, each split LL subgroup exhibits the anti-crossing behavior. However, the coexistent magnetic quantization, which originates from the Dirac-cone structure and the two constant-energy loops in one valence (conduction) energy band, is absent in graphene systems. The lattice symmetries and the intralayer and interlayer atomic interactions are responsible for the critical differences.

IV. Concluding Remarks
The generalized tight-binding model is suitable for studying the quantization phenomena in phosphorene and group-IV systems [31]. Furthermore, it could be combined with the single- and many-particle theories to explore essential physical properties, such as magneto-optical and Coulomb excitations [39,40]. A single-layer phosphorene only exhibits a monotonous dependence on $E_z$ and $B_z$ in terms of energy spectra and wavefunctions. The electric and magnetic fields can create diverse phenomena in the bilayer system, such as the gap transition, the coexistent linear, oscillatory and parabolic bands, the two subgroups of LLs, the uniform and non-uniform LL energy spacings, and the frequent crossings and anti-crossings. The subenvelope functions present dramatic changes between the well-behaved and multi-mode oscillations during the hybridization of two mixed LLs. The main features of the energy bands and LLs are reflected in the DOS as various structures, including valleys, shoulders, and logarithmic and delta-function-like peaks. The number, form, height and energy of the LL peaks near the Fermi level are closely related to the magnetic quantization arising from the Dirac cone and the two constant-energy loops, e.g., a stronger peak at $E = 0$ and the double-peak structures. STS measurements of the low-lying special structures are useful in understanding the competitive or cooperative relations among the external fields and the intralayer and interlayer atomic interactions.
Parametric Representation for the Multisoliton Solution of the Camassa-Holm Equation

The parametric representation is given for the multisoliton solution of the Camassa-Holm equation. It has a simple structure expressed in terms of determinants. The proof of the solution is carried out by an elementary theory of determinants. The large-time asymptotic of the solution is derived together with the formula for the phase shift. The latter reveals a new feature when compared with the one for typical soliton solutions. The peakon limit of the phase shift is also considered, showing that it reproduces the known result.

§1. Introduction
In this paper, we report some new results associated with the multisoliton solution of the Camassa-Holm (CH) equation 1)
$$u_t + 2\kappa^2 u_x - u_{txx} + 3uu_x = 2u_x u_{xx} + uu_{xxx}. \qquad (1)$$
Here, $u = u(x, t)$, $\kappa$ is a positive parameter, and the subscripts $t$ and $x$ appended to $u$ denote partial differentiation. Originally, this equation was found in a purely mathematical search for recursion operators connected with integrable partial differential equations. 2) Recently, equation (1) has attracted considerable interest since it has been derived as a model equation for shallow-water waves. 1) In addition, the equation has been shown to be completely integrable. With this as a turning point, a large number of works have been devoted to studying the mathematical structure of the equation. A recent paper describes a short history and relevant references concerning the CH and related equations. 3) Almost all the works have been focused on the case $\kappa = 0$, for which the CH equation exhibits peakon solutions which are represented by piecewise analytic functions and whose dynamics are now well understood. 4) However, when $\kappa \neq 0$, several new features appear in the solutions. In particular, the solutions recover their analytic nature, but they are expressed in a parametric form like $u = u(y, t)$, $x = x(y, t)$, where $y$ is a new coordinate variable. A difficult technical problem is to find the inverse mapping $x = x(y, t)$. To date, this problem has been resolved only for particular cases. 3,5,6) In this respect, we remark that an approach based on the inverse scattering transform method (IST) provides an explicit form of the inverse mapping in terms of Wronskian determinants. 7) Nevertheless, the general $N$-soliton formula is not available yet. The main purpose of this paper is to present a complete description of the general $N$-soliton solution.

The CH equation (1) can be put into the form of Eqs. (2) and (3), where the boundary condition for $r$ is $r(\pm\infty, t) = \kappa$. Then, we define the coordinate transformation $(x, t) \to (y, t')$ by (4). In the following analysis, we use the time variable $t$ in place of $t'$ by virtue of the second relation in (4). Transforming (3) by means of (4), it becomes (5), and $u$ is expressed in terms of $r$ as $u = r^2 - r(\ln r)_{ty} - \kappa^2$. We term the system of equations (5) and (6) the associated CH equation. 6) If we substitute (6) into (5), we obtain an alternative form (7), more convenient in the following analysis, where $Q$ is defined by (8). By eliminating the variable $r$ from (7) and (8), we can see that $Q$ evolves according to the nonlinear wave equation (9), 3) where $\partial_y^{-1} = -\int_y^{\infty}\mathrm{d}y$ is an integral operator. An important observation is that equation (9) can be identified with a model equation for shallow-water waves. 3) This fact enables us to obtain the $N$-soliton solution of (9) in the $(y, t)$ coordinate system.
To complete the solution, however, one must revert to the original (x, t) coordinate system via the inverse mapping x y = 1 r(y, t) , x t = u(y, t). The most difficult ingredient in the analysis is to integrate (10) for the N -soliton solution. §3. Parametric representation of the N -soliton solution Now, the main result in this paper is summarized as follows: The N -soliton solution of the CH equation (1) can be written in a parmetric representation Here, f 1 = f 1 (y, t) and f 2 = f 2 (y, t) have the determinantal expressions with the N × N marices G and H whose elements are given respectively by where δ ij is Kronecker's delta and the parameter d in (12) is an integration constant. By virtue of the parametrization (17), the N -soliton solution is characterized completely by the 2N parameters k i and ξ i0 , (i = 1, 2, ..., N ) . In terms of the parameters k i , the phase variable ξ i of the ith soliton may be written in the form where we have put ξ i0 = −k i y i0 . Let us now outline the proof of (11) and (12) in which two bilinear identities (23) and (38) below will play an essential role. First, we write the N -soliton solution of equation (9) in a determinantal form 3,8,9) where F is an N × N matrix with elements Substituting (19) into (7) and integrating the resultant equation by y under the boundary condition r → κ, |y| → ∞, we obtain and f 2 as It follows from (21) and (22) that This is a bilinear identity among determinants. We note that analogous relations have been studied in the direct proof of the multiperiodic solutions of the Benjamin-Ono and nonlocal nonlinear Schrödinger equations while employing an elementary theory of determinants. 10) For later convenience, we first introduce some notations as well as formulas for determinants and then describe the main result. Matrices and cofactors associated with any N × N matrix A = (a ij ) are defined as follows : Here, A ij and A ij,kl are the first and second cofactors, respectively. The following formulas are used frequently in the present analysis: N r,s=1 Formula (28) is Jacobi's identity and formulas (29) and (30) Similarly, we find from (29) If we use formula (30) with A = F , f r = p r and g s = −q s as well as Jacobi's identity of the form |F |F li,lj = F ll F ij − F lj F il , we can derive the formula for the y derivative of F ij Differentiating (32) by y and inserting (31) and (32), we obtain the expression for |F | ty . With (31) and (32), this result is substituted in the right-hand side of (23) to obtain the relation Further simplication is possible by applying (27) to (34). This gives On the other hand, using (27) together with the basic formulas for determinants, we can show that The identity (23) follows immediately from (35), (36) and (37). The identity below also plays an important role: which we shall now show. First, it follows from (36) and (37) that Formulas similar to (31) now take the form Then, we calculate the quantity P ≡ f 2 − f 1 f 2 + κ(f 1,y f 2 − f 1 f 2,y ). Substitution of (36), (37), (40) and (41) into P yields Let P 1 be the sum of terms involving |F | and P 2 be the rest. 
Owing to the basic formula for deteminant, α|F (a i ; b i )|+β|F (a i ; c i )| = |F (a i ; αb i +βc i )| and the relation 1−κp i = κq i which follows directly from (17), P 1 reduces to Using Jacobi's identity and (17), we can show that After a few manipulations, we finally arrive at the relation whereF = (f ij ) is an N × N matrix with elements Thanks to (17) and (20), we see thatF is a symmetric matrix and hence |F (q i ; p i )| = |F (p i ; q i )|, implying that P = 0. Thus, we complete the proof of (38). Here, we address on this problem. The procedure for investigating the asymptotic behavior of the solution can now be performed straightforwardly. The core part of the calculation is to evaluate f 1 and f 2 by utilizing the formula for the Cauchy determinant To this end, we order the magnitude of the velocity of each soliton in the (x, t) coordinate system as c 1 > c 2 > ... > c N where We take the limit t → −∞ with the phase variable ξ i of the ith soliton being fixed. Since then other phase variables behave like ξ 1 , ξ 2 , ..., ξ i−1 → +∞, ξ i+1 , ξ i+2 , ..., ξ N → −∞, f 1 has the leading-order asymptotic of the form By invoking (49) and (17), we obtain Substitution of (52) into (51) gives where Similarly, in the limit of t → −∞, f 2 has the asymptotic form It turns out from (11), (53) and (55) that u is represented by a superposition of N solitons where u i is a one-soliton solution given by 3,5,6) u In the same limit, the mapping relation (12) becomes where x i0 = y i0 /κ. In the limit of t → +∞, on the other hand, the expressions corresponding to (56), (54) and (58) are given respectively by Let ∆ i be the phase shift of the ith soliton in the (x, t) coordinate system. This quantity can be evaluated simply with an appropriate use of (58) and (61). The result is 1, 2, .., N ). (62) The phase shift consists of two contributions. The first term on the right-hand side of (62) comes from u(y, t) and the rest terms from x(y, t). Note that the first term is the same as Introduction of the variables f 1 and f 2 also leads to an analytical form (12) for the inverse mapping. The explicit form of the N -soliton solution makes it possible to construct other class of solutions. For instance, the rational soliton solutions may be obtained from it by taking appropriate long wave limits k i → 0(i = 1, 2, ..., N ). 11) Recently, the CH equation has been generalized to a two-dimensional version by applying an asymptotic expansion method to a system of water-wave equations. 12) It is an interesting problem to investigating its integrability. The method developed in this paper may be useful in constructing multisoliton solutions, if they exist. Furthermore, the Degasperis-Procesi (DP) equation is a current interest in soliton theory. 13,14) Although the DP equation has a form similar to the CH equation, its mathematical structure is quite different from that of the CH equation. 14) Quite recently, we have succeeded in obtaining the multisoliton solution of the DP equation by means of a reduction procedure for the multisoliton solution of the Kadomtsev-Petviashvili equation. 15) The solution can be written in a parametric form analogous to the corresponding solution of the CH equation. However, the simple expressions like (11) and (12) are not at hand yet for the general N -soliton solution. This problem is currently being investigated.
The Impact of Curiosity and External Regulation on Intrinsic Motivation: An Empirical Study in Hong Kong Education The purposes of this paper are to identify: (1) the factors affecting the intrinsic motivation of university students in Hong Kong; and (2) gender differences in the perception of intrinsic motivation in Hong Kong higher education environment. The factors of curiosity and external regulation with intrinsic motivation are taken into investigation in this study, because these factors and intrinsic motivation of the local university students have seldom been examined. This study adopting a survey of 162 sampled students, was conducted in a local university in 2011. Findings showed that students with curiosity could lead to their higher intrinsic motivation, but external regulation was not found to be related to intrinsic motivation. In addition, there are no gender differences on the level of intrinsic motivation. Introduction Most Hong Kong people spend more than 20 years learning as much knowledge as they can to get high academic qualifications. Among all students, there is a question about how students can gain more than others when being in the same learning environment. Motivation is an essential element to affect students' learning and performance directly. Some students may feel that they are not active but under obligation to learn. It is due to their lack of motivation in learning, which would not result in good performance. According to Olsson (2008, p. 7), motivation is a reason or set, or reasons for engaging in a specific activity, especially in human behavior. The reasons can be basic needs, an object, or a goal. Deci and Ryan (1985;1991) stated that SDT (self-determination theory) is currently one of the most comprehensive theories of motivation. According to SDT, intrinsic motivation is defined as the doing of an activity for its inherent satisfactions rather than for some separable consequence (Xie, Debacker, & Ferguson, 2006). It is the degree to which an individual chooses to accomplish an activity for pleasure and enjoyment (Olsson, 2008, p. 2). This type of motivation is known as the most optimal kind of motivation as being entirely autonomous (Noels, Clement, & Pelletier, 2001;Remedios & Lieberman, 2008;Gao, 2008). Students with intrinsic motivation complete tasks for fun or challenge instead of external stimuli, pressures or rewards. They often have more interest, confidence and excitement in doing the task. According to Brophy (2010), intrinsic motivation emphasizes on motivation as self-determination of goals and self-regulation of actions rather than motivation as response to feel pressures. In view of this emphasis of intrinsic motivation, this study tries to investigate different aspects affecting students' learning so that the students can learn through their self-regulation of actions without pressure. With this improvement, their academic performance can be enhanced at the same time. Some previous studies showed that curiosity has positive relationship with intrinsic motivation (Litman, 2005;Shroff, Vogel, & Coombes, 2008). Curiosity causes internal desire or need to learn new information or acquire information that learners missed. This factor can directly affect students' intrinsic motivation as well as their academic performance. As few studies had focus on Hong Kong students, this study tries to investigate whether curiosity also has relationship with intrinsic motivation for students in Hong Kong. 
In contrast to improving students' intrinsic motivation, several researchers have investigated external regulation, which can lead to students being unmotivated or to a downgrading of intrinsic motivation (Vansteenkiste, Sierens, Soenens, Luyckx, & Lens, 2009; Boekaerts, 2002; Boekaerts & Cascallar, 2006; Vansteenkiste, Zhou, Lens, & Soenens, 2005). External regulation refers to students feeling obliged to study under externally pressured contingencies (Vansteenkiste et al., 2009; Pisarik, 2009). Vansteenkiste et al. (2009) and Pisarik (2009) noted that such students are mentally pushed to put effort into their studies. None of these previous studies was conducted in Hong Kong. Therefore, this study tries to look into the relationship between external regulation and intrinsic motivation among university students in Hong Kong.

Narayanan, Rajasekaran, and Iyyappan (2007) showed that, among engineering university students, females have higher intrinsic motivation in learning English than males do. Meanwhile, Shang (1998) found that females have lower intrinsic motivation in physical education classes than males do. In addition, Schatt (2011) focused his study on the subject of music and found that female students have a higher instrumental practice rate than males, and that the amount of time spent on practice correlates significantly with intrinsic motivational beliefs. This raises the question of whether females possess higher intrinsic motivation, which is investigated in this paper.

Ning and Downing (2010) conducted a study among 581 university students in Hong Kong and found that students' motivation is the strongest predictor of their academic performance. However, few attempts have been made to investigate whether more specific factors, such as curiosity and external regulation, affect intrinsic motivation among university students in Hong Kong. In this study, the relationships between these factors and intrinsic motivation are investigated in depth so as to improve students' intrinsic motivation. Whether males or females have a higher level of intrinsic motivation is also studied. These serve as the purposes of this paper. The authors attempt to fill in the research gap by asking the following research questions: (1) What is the impact of curiosity on intrinsic motivation for Hong Kong university students? (2) What is the impact of external regulation on intrinsic motivation for Hong Kong university students? (3) Is there any difference in the level of intrinsic motivation between males and females for Hong Kong university students?

Theory Background and Hypothesis Students in Hong Kong study in a highly competitive, examination-oriented environment with large classes and an excessive amount of homework (Moneta & Siu, 2002). Moreover, English is widely promoted as essential for individuals' social and career development (Gao, 2008; Davison & Lai, 2007), and English is the medium of instruction at all universities in Hong Kong. These characteristics of the Hong Kong education system tend to require students to memorize knowledge and reproduce it on examination papers. Hong Kong students therefore often adopt surface learning: they engage in the shortcuts allowed in some courses and carry them through to the end without deeper understanding (Moneta & Siu, 2002).
According to a study conducted by Ning and Downing (2010) in Hong Kong, which focused on investigating the relationship between intrinsic motivation and academic performance among university students, it was found that the relationship is positive. Besides, another research by Afzal, Ali, Khan, and Hamid (2010) among 342 university students in Pakistan generated the same findings that intrinsic motivation can promote more optimal learning and better academic performance. In this fast-paced society, people need to have high competitiveness, wide range of knowledge, and high capabilities in order to achieve eminent performance. Students who have good academic performance were found to have higher intrinsic motivation. To improve students' academic performance via improving intrinsic motivation, investigation of factors affecting individual's intrinsic motivation is needed. In this research, the focuses on elements influencing ones' intrinsic motivation are curiosity and external regulation. University students are the targeted group. Means of examining and identifying those factors contributing to improvement of students' general performance, relationships between each factor and students' intrinsic motivation will be investigated and discussed as follows. Factors analyzed are curiosity, goal, and external regulation. Curiosity Curiosity is defined as the intrinsic desire to know, to see, or to experience something, which motivates information seeking behavior (Zelick, 2007, p. 147). Acquiring knowledge out of curiosity is considered to be intrinsically rewarding and highly pleasurable, since it eliminates states of ignorance and uncertainty (Litman, 2005). There are two main theoretical accounts of curiosity. These two accounts of curiosity may seem different and incompatible. In the context of this circumstance, another theoretical approach, the I/D model ("interest/deprivation" model), will be presented later on. This model that can reconcile these two seemingly incompatible views was suggested by Zelick (2007). The first one is CDT (curiosity drive theory), which expresses the concept of curiosity as a drive state that arouses intrinsic motivation to seek information with the intention of reducing unpleasant feelings concerning uncertainty, in another word, it is curiosity reduction (Litman, 2005). The second one is OAT (optimal arousal theory), which states individuals who have intrinsic motivation to search for new information aim at maintaining and enhancing pleasurable feelings of interest. Organisms that are under-aroused are motivated to seek for new stimulation that can excite their curiosity (e.g., complicated sight, or events). The flaw in both CDT and OAT is that they missed considering that both inducing and reducing curiosity can motivate information seeking behavior. To reconcile both theories, the I/D model is suggested. There are two types of curiosity which are I-type and D-type curiosity within this I/D model. I-type curiosity motivates learners to acquire new knowledge since it induces positive feeling of interest. D-type curiosity can also motivate learners to acquire new information since it reduces negative feelings associated with uncertainty. For I-type curiosity, learners do not feel that they are lacking any information, but have recognition of an opportunity to learn something new or amusing. Contrarily, D-type curiosity motivates learners to learn as they feel that they are missing essential information that can improve their understanding. 
In other words, curiosity can involve both searching for information expected to be interesting (I-type) and searching for missing information resolving uncertainty (D-type). Disregarding which type of curiosity students possess, curiosity can be intrinsically motivated. It is an important element to drive learning activities such as academic behavior (Osterloh & Frey, 2009). It is common for university students to have assignments and projects that need research work from various sources. Osterloh (2009) suggested that this behavior is mainly curiosity-driven. Intrinsic motivation is a main determinant for the scholarly behavior. In accordance with the agency theory, it only includes people's interest as the main motivator. From a research study on factors promoting students' intrinsic motivation in online discussions based on individual-level done by Shroff, Vogel, and Coombes (2008), it was found that curiosity is positively related to students' intrinsic motivation. Furthermore, the study also showed that intrinsic motivation positively affects learning and academic performance. Therefore, it proves that improving curiosity can lead to higher intrinsic motivation, which in turn improves students' learning and academic performances. Based on the above evidence, the authors hypothesize: H1 (Hypothesis 1): Curiosity can positively affect students' intrinsic motivation. External Regulation External regulation is the most pressured and controlled type of motivation and is described as external perceived locus of causality, owing to its controlled nature with feelings of inner compulsion and conflict with those externally regulated students (Vansteenkiste et al., 2009;Olsson, 2008, p. 147). It is a kind of extrinsic motivation, as same as introjected regulation (Noels, Pelletier, Clement, & Vallerand, 2000;Gao, 2008). These two kinds of regulation can be combined and subsequently called as controlled motivation, which generates a series of undesirable outcomes of learning. Externally regulated students study to avoid punishment, to obtain rewards, or to meet external expectations (Vansteenkiste et al., 2009;Xie, Debacker, & Ferguson, 2006;Olsson, 2008, p. 147;Boekaerts & Cascallar, 2006). They feel that they are obliged to study. With the external pressured contingencies, they are mentally pushed to put effort into their studies. They tend to be less adaptive, engaged and concentrated, more anxious about tests and procrastination, and lower achievement. From a research study on the relationship between external regulation and the academic performance for Japanese students by Vansteenkiste, Zhou, Lens, and Soenens (2005), it was found that external regulation has a negative relationship with academic achievement and it predicted a work-avoidance orientation, while autonomous motivation has positive relationship with academic achievement, deep-level of processing, and mastery orientation. Moreover, according to Pisarik (2009), it was found that high levels of burnout among university students have high levels of external regulation and low levels of intrinsic motivation. Also, persons who have greater levels of intrinsic motivation experience higher levels of efficacy and lower levels of exhaustion and cynicism. People with lower levels of exhaustion and cynicism experience lower level of external regulation. 
One reason for this finding is a trend observed in this study: students pursue a college education for vocational rewards, such as obtaining a better job, rather than for moral and intellectual training. Based on the above evidence, the authors propose: H2 (Hypothesis 2): External regulation leads to lower intrinsic motivation among students. Difference in Gender Most previous studies tend to suggest that female students have higher motivation and more desirable learning than male students do. Narayanan, Rajasekaran, and Iyyappan (2007) found that female university students studying engineering or technology had higher motivation in learning English than males did, and concluded that female students learn English better than male students do. In their explanation, females have better listening skills, are more concerned with input (i.e., listening), and tend to have better attitudes towards learning. In contrast, males are less sensitive, more concerned with output (i.e., talking), and think in a more analytical way than females. These may be the reasons why females perform better in learning. It should be noted that the above research concerns university students learning English. Another study, conducted by Schatt (2011), focused on a different subject, music. It showed that female students have a higher rate of instrumental music practice than males do, while the amount of time spent on practice correlated significantly with intrinsic motivational beliefs. Motivational beliefs guide students' thinking, feelings, and actions when learning a subject, and can lead to success in learning (Boekaerts, 2002; Clayton, Blumberg, & Auld, 2010). Research on physical education, however, yields a different result. A study by Shang (1998) of physical education classes in Taiwanese high schools and junior high schools found that female students had lower intrinsic motivation, in terms of interest or enjoyment and perceived competence, than male students on most of the sub-scales, but put more effort into the learning tasks. This not only shows that the learning environment differs for male and female students, but also indicates that males perceive the physical education environment as more challenging and competitive, whereas females perceive a higher level of threat (Shang, 1998). Across studies of students' intrinsic motivation in different subjects, which gender shows higher intrinsic motivation varies with the subject. Therefore, no general conclusion can be drawn that one gender is inclined to have higher motivation across all subjects. Based on the above evidence, the authors predict: H3 (Hypothesis 3): There should be no difference between males and females in the level of intrinsic motivation. The proposed theoretical framework and hypotheses are shown in Figure 1.
Research Method Survey research among university students is used in this study to test the hypotheses stated above since the questionnaire as an instrument for studying research problems is a survey tool for collecting data from people about themselves, such as attitudes, thoughts, behaviors, and concerning a social unit such as a school (Lanthier, 2002;Siniscalco & Auriat, 2005). The research was completed in three universities in Hong Kong. Before the survey was mass produced and used to gather real data, a pilot study was carried out to disclose problems and refine the wording, ordering, etc. (Litwin, 1995;Hoinville, Jowell, & Associates, 1978). Ten of the author's friends were asked to complete the questionnaires and give feedback independently about the questionnaires. The survey was then conducted by distributing questionnaires with covering letter to explain the purpose of the research to the university students individually. The questionnaire was averagely completed within 10 minutes. Subsequently, 200 questionnaires were given out to undergraduates from various universities in Hong Kong. A total of 162 responses (with a return rate of 81%) were achieved, and the usability rate was 100% since no incomplete questionnaires were found. Data Analysis The purpose of this study is to test correlation between three variables and gender differences on level of intrinsic motivation. SPSS version 17 is used to analyze the data in this study. This is sophisticated software for many scientists and other professionals to analyze statistics. Data analysis including frequency distribution is used to analyze the personal data of respondents. After that, mean and standard deviation are used to study the perception of curiosity, external regulation and intrinsic motivation that university students have. Independent-samples t-test is then used to test the H3 (third hypothesis)-to see if there is any differences between males and females on the level of intrinsic motivation. This test is followed by correlation analysis that tests H1 and H2-to check if there is any relationship between the two elements (curiosity and external regulation) and intrinsic motivation. Before the analysis, the collected data were examined to ensure that it is valid and reliable. It involves checking the usability and the validity of the responses on the questionnaires collected. Subsequently, reliability analysis by using Cronbach alpha, which is a measure of internal consistency about how close elements are related to each other, is carried out to test the reliability of the variables (Nunnally, 1978;Prater & Ghosh, 2006). The test means the freedom from random error (Alreck & Settle, 1985). The Cronbach alpha values (see Table 2) of curiosity, external regulation, and intrinsic motivation are 0.753, 0.640, and 0.671, respectively. A value of 0.60 is also used as the practical lower bound (Narasimhan & Jayaram, 1998). Therefore, reliability figures in this study, which exceed the value of 0.60, can be perceived as acceptable. This study can be considered as reliable. Apart from reliability testing, factor analysis was also utilized to establish construct validity. Results of factor analysis can be used to ensure that questionnaires used in this study are valid (Field, 2005). Factor loading is used to analyze the validity of measurement scales with the general value of acceptance as 0.30 (Anderson & Gerbing, 1998;Fornell & Larcker, 1981). The variable of curiosity includes five items. 
Factor analysis was conducted on these five items; factor loadings ranged from 0.325 to 0.594. The variable of external regulation includes two items, both with factor loadings of 0.738. The variable of intrinsic motivation includes two items, both with factor loadings of 0.753. All factor loadings in the questionnaire are higher than 0.30, so the scale is retained. As a result, it can be concluded that the measurement scale is valid and reliable. Findings The demographic statistics of the respondents were analyzed. Table 3 shows the background of the 162 respondents, of whom 61.7% are male. Sixty-four point eight percent are between 21 and 25 years old. Half of them are university students in Grade 2 who entered university through JUPAS (Joint University Programmes Admissions System), indicating that they have been studying, and encountering different levels of motivation in learning, within the Hong Kong education system for at least 18 years. All respondents completed a questionnaire asking their reasons for studying, rating each statement as: (1) "Very true"; (2) "Sort of true"; (3) "Not very true"; or (4) "Not at all true". The reasons in the questionnaire pertain to the three variables (curiosity, external regulation, and intrinsic motivation) investigated in this study. Mean and standard deviation were used to examine the level of perception of each variable. The values of mean, standard deviation, and Cronbach alpha are shown in Table 2. Results showed that university students report a slight perception of curiosity and intrinsic motivation, but not of external regulation, as indicated by mean scores of 2.0494 for curiosity, 2.8210 for external regulation, and 2.0463 for intrinsic motivation. Correlation analysis was then used to test the relationships between curiosity or external regulation and intrinsic motivation; the relationships investigated are shown in Table 4. H1: The hypothesis that curiosity leads to higher intrinsic motivation was supported, since a positive empirical relationship was found between them (r = 0.185, p < 0.05). H2: The hypothesis that external regulation leads to lower intrinsic motivation was not supported by the results (r = 0.024, p = 0.757). An independent-samples t-test was subsequently used to test whether there is any difference in the level of intrinsic motivation between males and females. H3: The hypothesis that there is no significant difference in the level of intrinsic motivation between males and females was supported, since the t value is 1.140 and the significance value is 0.256, which is higher than 0.05. With a mean difference of 0.11419, there is no significant difference between males and females. Discussion and Implication More detailed discussion and implications for practice are elaborated in the following. Possessing Slight Perception of Curiosity The level of curiosity tends to be high, since all-round education in Hong Kong from primary school to university provides students with a variety of knowledge that can spark their interest in different subjects. Thus, students can find their interests easily and their curiosity will not be too low. However, the common goal for all students is to get high marks in examinations and secure a good job.
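As an aside, the reliability, correlation, and t-test figures reported above were computed in SPSS 17; the snippet below is only an illustrative sketch of how the same quantities could be reproduced in Python. The data file, column names, and gender coding are hypothetical, not taken from the study.

```python
# Illustrative sketch only: the study used SPSS 17; this reproduces the same kinds of
# statistics (Cronbach's alpha, Pearson correlation, independent-samples t-test).
# The CSV file, column names, and gender coding below are hypothetical.
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("survey_responses.csv")                    # hypothetical file, 1-4 Likert responses
curiosity_items = df[[f"cur_{i}" for i in range(1, 6)]]     # 5 curiosity items (assumed names)
curiosity = curiosity_items.mean(axis=1)
ext_reg = df[["er_1", "er_2"]].mean(axis=1)                 # 2 external-regulation items
intrinsic = df[["im_1", "im_2"]].mean(axis=1)               # 2 intrinsic-motivation items

print("alpha (curiosity):", cronbach_alpha(curiosity_items))

# H1 and H2: Pearson correlations with intrinsic motivation.
r1, p1 = stats.pearsonr(curiosity, intrinsic)
r2, p2 = stats.pearsonr(ext_reg, intrinsic)
print(f"H1 curiosity vs IM: r={r1:.3f}, p={p1:.3f}")
print(f"H2 external regulation vs IM: r={r2:.3f}, p={p2:.3f}")

# H3: independent-samples t-test on intrinsic motivation by gender (assumed 'M'/'F' coding).
t, p3 = stats.ttest_ind(intrinsic[df["gender"] == "M"], intrinsic[df["gender"] == "F"])
print(f"H3 gender difference: t={t:.3f}, p={p3:.3f}")
```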
Education in Hong Kong tends to be examination-oriented, which requires students to remember all knowledge and apply all the knowledge to the paper for the examination. Hong Kong students may always have surface learning that they will engage in the shortcuts, which are allowed in some courses, and will attain it till the end without deeper understanding (Moneta & Siu, 2002). It makes their curiosity lower than what they expected. Slightly Not Possessing Perception of External Regulation External regulation is the most pressured and controlled type of motivation (Vansteenkiste et al., 2009;Olsson, 2008, p. 147). Externally regulated students study to avoid punishments, to obtain rewards, or to meet external expectations (Vansteenkiste et al., 2009;Xie, Debacker, & Ferguson, 2006;Olsson, 2008, p. 147). Meeting external expectation is the most common and possible reason why some of the university students in Hong Kong have stress in learning. However, students do not have high level of external regulation, because they are trained to remember knowledge even without the complete understanding and within the logic. When they have good scores in tests for their memorization of knowledge without the complete understanding of them, it would not subsequently produce much external pressure to students. Possessing Slight Perception of Intrinsic Motivation As for the tendency of students to have slight perception of intrinsic motivation, this result is consistent with the result suggesting that students have slight perception of curiosity and slightly lower level of external regulation. Correlation Between Curiosity and Intrinsic Motivation The result showed that curiosity has significant relationship with intrinsic motivation. Students having higher level of curiosity possess higher intrinsic motivation. It supports H1 predicted above. It is consistent with previous research by Shroff, Vogel, and Coombes (2008) whose previous research suggested that curiosity as one of the six individual factors examined is positively related to intrinsic motivation. Improving curiosity leads to higher intrinsic motivation and in turn, improves students' learning and academic performance. Moreover, according to Litman (2005), acquiring knowledge out of curiosity is considered to be intrinsically rewarding and highly pleasurable since it eliminates states of ignorance and uncertainty. With Hong Kong unique educational system and its pattern of learning, students need to learn a wide range of subjects. It makes them easier to have curiosity about some particular subjects. Once they develop their curiosity about some subjects, their intrinsic motivation towards acquiring knowledge in these subjects is higher. As a result, their performance can be improved. Correlation Between External Regulation and Intrinsic Motivation There are many previous studies which stated that external regulation is associated with negative classroom learning (Vansteenkiste et al., 2009;Boekaerts, 2002;Boekaerts & Cascallar, 2006;Vansteenkiste, Zhou, Lens, & Soenens, 2005) and lower level of intrinsic motivation (Pisarik, 2009). Students who have controlled motivation tend to be less adaptive, engaged and concentrated, and more anxious about tests and procrastination, and have lower achievement. Boekaerts and Cascallar (2006) pointed out that controlled motivation is associated with students who comply with the task due to some external encouragements, rewards, or social pressures. 
One interesting finding in this study is that external regulation is not significantly related to intrinsic motivation. For most Hong Kong university students, they have been experiencing high pressures from their parents, teachers, and even peers for more than 15 years. Therefore, it can be comprehended that there is no significant relationship between external regulation (i.e., social pressure or external encouragement) and intrinsic motivation since they have got accustomed to the study stress (Gao, 2008). This phenomenon can be further interpreted by the education system in Hong Kong that emphasizes much on scores of tests and examinations (Moneta & Siu, 2002). Contrary to the education system in other countries, the emphasis of education is placed on the understanding of the students. The above analysis can explain the reason why H2 stating that external regulation leads to lower level of intrinsic motivation is rejected in this study. Difference on Level of Intrinsic Motivation Between Males and Females Results indicate that the level of intrinsic motivation for students in Hong Kong is nearly the same between males and females. It can be interpreted by the same education environment for both genders. They received the same education approaches under the same education system, which contributes to possess the same level of intrinsic motivation towards learning. Supported by several researchers with this result, Narayanan, Rajasekaran, and Iyyappan (2007) concluded that female students studying engineering or technology learn English better than male students do. Meanwhile, according to a research of Shang (1998) in Taiwan focusing on the physical education classes, it was found that females have lower intrinsic motivation than males do, but with higher effort put into the learning tasks. Another research conducted by Schatt (2011) focusing on the subject of music found that female students have higher instrumental musical practice rate than males do, while the amount of time spent on practice correlates significantly with intrinsic motivational beliefs. Therefore, it should not have any conclusion, saying that a particular gender is inclined to have higher motivation on all subjects since university students always involve studying English, Chinese culture, and their major subjects together. The result of this research shows that there is no difference between males and females on the level of intrinsic motivation, which supports the H3. Relationship Between Combined Variables (Curiosity and External Regulation) and Intrinsic Motivation For the effect on intrinsic motivation by the combined factors of curiosity and external regulation, it is the same as that by curiosity alone. This phenomenon may be due to Hong Kong students' learning atmosphere. In Hong Kong, students are trained to study under the pressure from others such as their parents and teachers, which make the students have no significant effect on their intrinsic motivation when external regulation is combined with curiosity. Implication for Practice The implication for practice in this study is to let universities identify different practical methods to improve students' curiosity and try to reduce their external regulation so that students' intrinsic motivation can be improved. The universities' professors and students should be aware of their ways of teaching or learning, and what methods should be used to strengthen the intrinsic motivation of students in Hong Kong higher education. 
Universities should think about changing the learning environment, shifting courses' emphasis from marks in examinations to students' understanding to the knowledge such as doing projects to develop deeper understanding among students. Limitations and Future Opportunities There are mainly two limitations in this project. Firstly, the sample size of some subgroups is not even. The sample size of males is 100 while that of females is 62. The significant level may be influenced, owing to unbalanced distribution of sample size. Also, the distribution of sample size among the five universities in Hong Kong providing engineering fields of study is not even either. With one of the universities accounting for a larger part of the samples, the survey result may not be representative to the general situations of university students in Hong Kong. The second limitation of this study is that the sample size is not large. Less than 200 samples were collected. It may make the survey result not representative enough to show the general learning environment for university students in Hong Kong. Apart from the limitations, there are several future research opportunities from this study. The first is to examine other factors that may also affect intrinsic motivation, such as ages, fields of engineering, etc.. This study only examined two factors (curiosity and external regulation) among university students. Whether there is a relationship between intrinsic motivation and students' academic performance among university students in Hong Kong can also be investigated. Secondly, this project focused on improving intrinsic motivation among university students. This type of research can also be applied to similar research studies in primary schools, secondary schools, overseas schools, or among students studying associate degree in Hong Kong. The factors contributing to their intrinsic motivation or discouragement may be different. This research study also lacks deep investigation. This study that involves only quantitative research is empirical. The survey was conducted in form of questionnaires, without face-to-face interviews. The focus of the investigations in this study is on the existence of the relationships. Further research can be done concentrating on deeply investigating why there are relationships between the elements and intrinsic motivation. For example, it was found in this research study that curiosity can promote intrinsic motivation. Thereby, all these can be a further research for the future development of education. Conclusions Throughout the study, factors of curiosity and external regulation have been examined as tools to improve intrinsic motivation of university students in Hong Kong. A survey was conducted to find out the perceptions of the targeted group towards their curiosity in learning and their external regulation. With investigation of relationships between the two elements and intrinsic motivation, there are also some comparisons between males and females to see if either of the genders possesses higher curiosity, lower external regulation, and higher intrinsic motivation. The survey results also support two of the three hypotheses defined in this research study. Firstly, curiosity leads to higher intrinsic motivation (H1). Secondly, external regulation has no significant relationship with intrinsic motivation, which rejects H2. Thirdly, there is no significant difference on the level of intrinsic motivation between males and females, which supports H3. 
Finally, more specific factors that may affect students' intrinsic motivation are investigated among university students in Hong Kong so that students' academic performance can be enhanced with higher level of intrinsic motivation (Afzal, Ali, Khan, & Hamid, 2010;Ning & Downing, 2010).
Glycogen Synthase Kinase-3 Regulates Production of Amyloid-β Peptides and Tau Phosphorylation in Diabetic Rat Brain The pathogenesis of diabetic neurological complications is not fully understood. Diabetes mellitus (DM) and Alzheimer's disease (AD) are both characterized by amyloid deposits. Glycogen synthase kinase-3 (GSK-3) plays an important role in the pathogenesis of AD and DM. Here we investigated the production of amyloid-β peptides (Aβ) and the phosphorylation of the microtubule-associated protein tau in DM rats and sought to elucidate the role of GSK-3 and Akt (protein kinase B, PKB) in these processes. Streptozotocin injection-induced DM rats displayed increased GSK-3 activity and decreased activity and expression of Akt. Aβ40 and Aβ42 were overproduced and the microtubule-associated protein tau was hyperphosphorylated in the hippocampus. Furthermore, selective inhibition of GSK-3 by lithium attenuated the Aβ overproduction and tau hyperphosphorylation. Taken together, our studies suggest that GSK-3 regulates both the production of Aβ and the phosphorylation of tau in rat brain and may therefore contribute to DM-induced AD-like neurological defects. Introduction Alzheimer's disease (AD) is characterized by the presence of two pathological protein deposits: extracellular senile plaques (SP) and intracellular neurofibrillary tangles (NFTs). The former is composed of β-amyloid (Aβ) [1] and the latter is formed by bundles of paired helical filaments (PHFs), which are mainly constituted by abnormally hyperphosphorylated tau protein [2]. Several protein kinases may take part in these pathological processes, including cyclin-dependent kinase 5 (cdk-5), GSK-3, and protein kinase A (PKA). Activated GSK-3 increases the production of Aβ peptides by promoting γ-secretase activity and induces the aggregation and deposition of Aβ [3]. On the other hand, GSK-3 regulates tau hyperphosphorylation at the ser198/ser199/ser202 and ser396/ser404 sites [4]. Furthermore, as a downregulator of insulin signaling, GSK-3 is regulated by Akt [5]. The available epidemiological data are largely inconclusive with regard to the contribution of diabetes mellitus to cognitive impairment and AD-type neurodegeneration [6,7]. Both diabetes and AD are characterized by localized amyloid deposits that progress during the course of the diseases [8]. In hippocampal neurons of diabetic mice, immunohistochemical staining for Aβ40 and Aβ42 is increased, but the expression of GSK-3 is weakened [9]. In AD-like Tg2576 mice, diet-induced insulin resistance promotes Aβ40 and Aβ42 generation in the brain. Further studies suggest that PI3-kinase/pSer473-Akt/PKB signaling is reduced in these brains and that GSK-3, as a downstream kinase of these kinases, may affect the production of Aβ [10]. In the skeletal muscle of type II diabetes mellitus patients, GSK-3 activity and its expression level are significantly higher [11]. Increased GSK-3 activity is also found during the development of insulin resistance and type II diabetes in the fat tissue of C57BL/6J mice [12]. Immunohistochemical results show that the ser198/ser199/ser202 sites are hyperphosphorylated in the hippocampus of diabetic mice, but the expression of GSK-3 is reduced [13]. In the brain of insulin knockout mice, hyperphosphorylation of tau at threonine 231 and of the neurofilament is exhibited, but GSK-3 activity was inhibited [14].
All of the above evidence implies that GSK-3 may be linked to both Aβ production and tau hyperphosphorylation during the course of diabetes. To better understand the mechanisms of AD-like changes in the diabetic rat brain, we investigated the activity of GSK-3 and Akt and the expression of Akt; Aβ production and tau phosphorylation were then determined. Furthermore, the role of GSK-3 was explored by using LiCl as a specific inhibitor [15]. Preparation of Rat Hippocampal Extracts. Following continuous injection of NaCl or LiCl for 10 days, the rats were killed. The hippocampus was immediately removed and homogenized at 4°C using a Teflon glass homogenizer in 50 mmol Tris-HCl, pH 7.4, 150 mmol NaCl, 10 mmol NaF, 1 mmol Na3VO4, 10 mmol β-mercaptoethanol, 5 mmol EDTA, 2 mmol benzamidine, 1.0 mmol phenylmethylsulfonyl fluoride, 5 µg/mL leupeptin, 5 µg/mL aprotinin, and 2 µg/mL pepstatin. The tissue homogenates were then divided into two portions. One portion of each homogenate was centrifuged at 12,000 ×g for 20 min at 4°C, and the resulting supernatant was stored at −80°C for assaying the activities of protein kinases. The other portion was mixed in a 2:1 (v/v) ratio with lysis buffer containing 200 mmol Tris-HCl, pH 7.6, 8% SDS, and 40% glycerol, boiled for 10 min in a water bath, and then centrifuged at 12,000 ×g for 30 min; the supernatant was stored at −80°C for Western blot analysis. The concentration of protein in the hippocampal extracts was measured with a BCA kit according to the manufacturer's instructions (Pierce, Cheshire, UK). Assay of GSK-3 and Akt Activity. The GSK-3 activity in rat hippocampal extracts was measured using phospho-GS peptide 2 (Upstate, Lake Placid, NY, USA) as described previously [16]. Briefly, tissue extracts (7.5 µg of protein) were incubated for 30 min at 30°C with 20 µM peptide substrate and 200 µmol [γ-32P] ATP (1,500 cpm/pmol ATP) in 30 mmol Tris, pH 7.4, 10 mmol MgCl2, 10 mmol NaF, 1 mmol Na3VO4, 2 mmol EGTA, and 10 mmol β-mercaptoethanol in a total volume of 25 µL. The reaction was stopped by the addition of 25 µL of 300 mM o-phosphoric acid. The reaction mixture was applied in triplicate onto phosphocellulose paper (Pierce). The filters were washed three times with 75 mmol o-phosphoric acid, dried, and counted in a liquid scintillation counter. GSK-3 activity was calculated as picomoles of phosphate incorporated/mg protein/min at 30°C and expressed as relative activity against control. The Akt activity was measured using histone 2B as a substrate as described previously [17,18]. Briefly, after the immunoprecipitates were washed with lysis buffer and kinase buffer, 40 µL of kinase buffer containing 200 µM [γ-32P]ATP (5 µCi), 100 µM ATP, and 1 µg/µL histone 2B was added; the samples were then incubated at 30°C for 15 min and spotted onto P81 filter papers; the filter papers were washed with 75 mM o-phosphoric acid, dried, and counted in a liquid scintillation counter. Akt activity was also expressed as relative activity against control. Measurement of Aβ in the Hippocampus of the Rats. The Aβ40 and Aβ42 in the hippocampus were measured by a sandwich enzyme-linked immunosorbent assay (ELISA) using antibodies as described previously [19]. Experiments were performed in 96-well plates. Briefly, affinity-purified mAb G2-10 (0.5 µg/well) was applied as the capture antibody for Aβ40, mAb G2-11 (1 µg/well) was used as the capture antibody for Aβ42, and mAb WO2 was used as the detection antibody.
Neutravidin-horseradish peroxidase and TMB were used as the reporter system, and absorbance values at 450 nm were determined with a microplate reader (TECAN, Austria). Levels of Aβ were expressed as relative levels against control. Each sample was tested in triplicate in each experiment. Evaluation of Tau Phosphorylation and Expression of Akt. The phosphorylation of tau at various sites was determined by Western blot as described previously [20]. For immunoblotting, about 20 µg of protein was loaded in each lane. Proteins were separated by SDS-PAGE and transferred to polyvinylidene difluoride membranes (Amersham Pharmacia Biotech, NJ, USA). After being blocked for 1 h in a solution of 5% nonfat dry milk in TBS/Tween 20, membranes were immunoblotted with the primary antibodies PHF-1 (1:500), Tau-1 (1:30,000), and 111e (1:3,000) at 4°C overnight and developed with alkaline phosphatase-labeled IgG (0.5 µg/mL, Amersham Pharmacia, NJ, USA) as secondary antibodies. 5-Bromo-4-chloro-3-indolyl-phosphate/nitro blue tetrazolium (BCIP/NBT) was used as the substrate [2]. Detection of the phosphorylation of Akt was performed with antibodies to phospho-Ser473 Akt and total Akt (Cell Signaling Technology; dilution 1:800); immunoblots were developed using horseradish peroxidase-conjugated goat anti-rabbit IgG (1:2,000) followed by detection with enhanced chemiluminescence. Statistical Analysis. The intensity of the protein bands from Western blots was analyzed with Image-Pro Plus software and 1D Image Analysis software (Kodak, USA), respectively. The data are presented as mean ± SD and were analyzed by analysis of variance. Measurement of the Level of Serum Glucose in Rats. 72 h after STZ injection, the fasting blood glucose (FBG) of the rats in the DM group, DM plus NaCl group, and DM plus LiCl group was over 16.7 mmol/L and was markedly increased compared with controls. The fasting blood glucose of the rats in the DM plus LiCl group did not show a significant decrease compared with the DM group, although there was a tendency toward a decrease (Figure 1). Assay of the Activity of GSK-3 and Akt. To reveal the role of GSK-3 and Akt during the course of diabetes, the activity of GSK-3 and Akt was measured (Figure 3). GSK-3 is an important kinase in the process of Aβ production and tau hyperphosphorylation [3,4]. GSK-3 is inactivated by phosphorylation of serine 9 in GSK-3β and serine 21 in GSK-3α [18]; Akt appears to be the predominant kinase mediating this phosphorylation of GSK-3. Both brain GSK-3 and Akt can be regulated by blood glucose in mice [18]. In our study, we found strong GSK-3 activity and lower Akt activity in the rat hippocampus when fasting blood glucose was greatly increased by STZ intraperitoneal injection. After treating the DM rats with LiCl, GSK-3 activity was decreased by approximately 46%. No apparent inhibition of GSK-3 activity was observed after treating the DM rats with NaCl (Figure 2). There were no obvious changes in Akt activity after treatment of DM rats with LiCl or NaCl. These results implied that hyperglycemia induced strong GSK-3 activity and lower Akt activity in rat brain, and that LiCl could directly inhibit the activity of GSK-3 rather than Akt. Assay of Aβ Production and Tau Phosphorylation and the Role of LiCl. Aβ deposition is an important mechanism in both AD and diabetes, and GSK-3 plays an important role in the regulation of Aβ production. A high level of GSK-3 activity and a low level of Akt activity were found in our study.
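For readers unfamiliar with filter-binding kinase assays, the sketch below illustrates the arithmetic implied by the Methods (scintillation counts converted to picomoles of phosphate incorporated per milligram of protein per minute, then expressed relative to control). The count values, the background subtraction, and the helper function are illustrative assumptions, not data from this study.

```python
# Hedged illustration of the specific-activity arithmetic described in the Methods:
# cpm on the P81 filter -> pmol of 32P-phosphate transferred -> pmol/min/mg protein.
# The numeric cpm values and the blank subtraction are invented for illustration.

SPECIFIC_ACTIVITY_CPM_PER_PMOL = 1500   # stated in the Methods: 1,500 cpm/pmol ATP
PROTEIN_UG = 7.5                        # protein per GSK-3 reaction (Methods)
REACTION_MIN = 30                       # incubation time (Methods)

def kinase_activity(sample_cpm: float, blank_cpm: float) -> float:
    """Return pmol phosphate incorporated per mg protein per minute."""
    pmol = (sample_cpm - blank_cpm) / SPECIFIC_ACTIVITY_CPM_PER_PMOL
    mg_protein = PROTEIN_UG / 1000.0
    return pmol / mg_protein / REACTION_MIN

control = kinase_activity(sample_cpm=12_000, blank_cpm=600)   # invented counts
dm = kinase_activity(sample_cpm=21_000, blank_cpm=600)        # invented counts
print(f"relative GSK-3 activity (DM / control): {dm / control:.2f}")
```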
To elucidate the effects of GSK-3 and Akt on Aβ production, Aβ production was determined following the measurement of GSK-3 and Akt activity. As shown in Figure 4, the production of Aβ40 (Figure 4(a)) and Aβ42 (Figure 4(b)) was increased significantly while activation of GSK-3 and inhibition of Akt were induced in the DM group. After treating the DM rats with LiCl, the production of Aβ40 and Aβ42 was reduced by 60% and 21%, respectively. There was no significant reduction of Aβ40 and Aβ42 following NaCl treatment. These data showed that the production of Aβ40 and Aβ42 is increased in the hippocampus of DM rats; both activation of GSK-3 and inhibition of Akt might play an important role in this process. GSK-3 is also an important kinase in the regulation of tau phosphorylation. When it was activated, tau was prone to phosphorylation at the ser198/ser199/ser202 and ser396/ser404 sites. Hence, we examined the state of tau phosphorylation in DM rats by Western blot using phosphorylation-dependent and site-specific tau antibodies. We found that in the DM rat hippocampus, the immunoreactivity of PHF-1 (which detects the phosphorylated ser396/ser404 sites) was markedly increased compared with the control group, but the PHF-1 staining was reversed after treatment with LiCl (Figure 5(a)). Moreover, the immunoreactivity of Tau-1 (which detects the nonphosphorylated ser198/ser199/ser202 sites) was decreased in DM rats compared with controls, and LiCl could reverse this staining (Figure 5(b)). NaCl treatment did not change the staining of PHF-1 and Tau-1 in the DM group. The total level of tau measured by R111e was not changed significantly in any of the four groups (Figure 5(c)). Assay of Akt Expression. GSK-3 is inactivated by Akt through phosphorylation at serine 9 and serine 21. To test whether Akt was regulated by streptozotocin-induced hyperglycemia or by the activation of GSK-3, we examined the phosphorylation level of Akt. Our results show that the phosphorylation of Akt at the Ser473 site was decreased in the DM rat hippocampus, that neither LiCl nor NaCl administration could recover the decrease, and that the total level of Akt was not changed. These results suggest that GSK-3 was inhibited by LiCl directly rather than through inhibition of Akt (Figure 6). Behavioral Testing. To further explore the cognitive dysfunction caused by the changes in Akt and GSK-3 and by tau hyperphosphorylation in DM rats, the step-down electronic inhibitory avoidance task was assessed. To examine which rats had a lower ability of learning and memory, we first trained all rats to stay on the platform for 3 min and not to step down. Ninety-four hours after the training, latencies to step down during the training session were not significantly different across the groups (data are not shown because the latency to step down in this session was essentially nonexistent). The results show significantly shorter latencies and more step-down errors in DM rats when compared with control groups (p < 0.05). Longer latencies and fewer errors were observed after LiCl administration to DM rats as compared with DM rats (p < 0.05). There was no significant difference between NaCl-treated DM rats and DM rats (p > 0.05). These results suggest that the inhibition of GSK-3 by LiCl might improve the memory of DM rats (Figure 7). Discussion Learning and memory dysfunction are the main phenomena of central nervous system complications in type I diabetes mellitus [21].
Cerebral atrophy, which is characteristic of AD patients, is also found in young patients with type 1 diabetes who are otherwise healthy [22]. In experimental animal models, increased staining with Aβ40 and Aβ42 antibodies is found in the hippocampus of DM mice [9]. Tau is also hyperphosphorylated at ser199/ser202 while the expression of GSK-3 is decreased in the DM mouse hippocampus [13]. Whether Aβ overproduction and tau hyperphosphorylation occur synchronously in the hippocampus of DM rats has been puzzling, and the role of GSK-3 and Akt in these processes is not clear. Both GSK-3 and Akt in the mouse brain can be regulated by alterations of blood glucose; in the brain of mice with streptozotocin-induced hyperglycemia, Akt activity increases and GSK-3 activity decreases, and this can be reversed by lowering blood glucose with insulin administration [18]. In an experimental model related to sporadic Alzheimer's disease, after intracerebroventricular injection of streptozotocin for 1 month, there is a decrease of GSK-3α/β activity in the rat hippocampus [5]. In our study, hyperglycemia was induced by intraperitoneal injection of rats with streptozotocin, but an increase of GSK-3 activity and a decrease of Akt activity were induced in the rat hippocampus. Hence, we propose that hyperglycemia might affect the activities of GSK-3 and Akt in the DM rat brain. In AD-like Tg2576 mice, diet-induced insulin resistance promotes Aβ40 and Aβ42 peptide generation in the brain that corresponds with increased γ-secretase activities. Further exploration of the apparent interrelationship of insulin resistance to brain amyloidosis reveals a functional decrease in insulin receptor- (IR-) mediated signal transduction in the brain; Akt/PKB inhibits glycogen synthase kinase-3α (GSK-3α) activity [3]. We also found an increase of Aβ production when GSK-3 activity was increased and PKB activity was decreased in the hippocampus of DM rats. Lithium, a specific inhibitor of GSK-3α and GSK-3β, reduces Aβ production by interfering with APP cleavage at the γ-secretase step and has been found to reduce Aβ production in mice expressing pathogenic familial Alzheimer's disease mutations [3]. In our study, after treating DM rats with LiCl, GSK-3 activity was decreased significantly in the hippocampus, and Aβ40 and Aβ42 were reduced by 60% and 21%, respectively, whereas administration of NaCl to DM rats reduced neither the activation of GSK-3 nor the overproduction of Aβ40 and Aβ42. The low level of Akt activity maintained during LiCl administration showed that lithium did not inhibit Akt in the DM rat brain. These data suggested that GSK-3 plays an important role in Aβ overproduction in the DM rat hippocampus. On the other hand, LiCl, as a specific GSK-3 inhibitor, has been confirmed to block tau hyperphosphorylation both in cultured neurons and in rat brain [23,24]. The major kinase for tau phosphorylation is GSK-3β; smaller contributions of GSK-3α, cdk-5, and MAPK have been suggested [25]. In the DM mouse hippocampus, tau is hyperphosphorylated at the ser199/ser202 sites with lower GSK-3 expression [13]. In our study, we found that tau hyperphosphorylation was induced at the ser198/ser199/ser202 and Ser396/Ser404 sites just when the activity of GSK-3 increased markedly. When the DM rats were treated with LiCl, GSK-3 activity decreased by about 46%, and tau hyperphosphorylation was reversed at the ser396/ser404 and ser198/ser199/ser202 sites in the hippocampus. These sites are precisely the targets of GSK-3 [26], while NaCl treatment showed no apparent changes at these sites as compared with DM rats.
These data also suggested that LiCl reduced tau hyperphosphorylation at the ser396/ser404 and ser198/ser199/ser202 sites in the DM rat hippocampus by inhibiting GSK-3 rather than Akt (PKB). Tau hyperphosphorylation may be associated with cognitive impairment [27,28]. Whether the changes in Akt and GSK-3 reduce memory retention in DM rats remains uncertain. Our results suggest that the increase in GSK-3, rather than Akt, may be responsible for the impairment in the step-down inhibitory avoidance task, because the activity and expression of Akt were not significantly different among the DM, DM + NaCl, and DM + LiCl groups; however, other studies show that learning impairment and hippocampal ERK and Akt inactivation are induced by scopolamine in male Sprague-Dawley rats [29]. In conclusion, our results demonstrate that GSK-3 has an important role in the pathogenesis of diabetic neurological complications by regulating Aβ production and tau hyperphosphorylation, and the present data suggest that GSK-3 might be a key target in the therapy of central nervous system neuropathy in diabetes mellitus.
TRB3 mediates advanced glycation end product-induced apoptosis of pancreatic β-cells through the protein kinase C β pathway Advanced glycation end products (AGEs), which accumulate in the body during the development of diabetes, may be one of the factors leading to pancreatic β-cell failure and reduced β-cell mass. However, the mechanisms responsible for AGE-induced apoptosis remain unclear. This study identified the role and mechanisms of action of tribbles homolog 3 (TRB3) in AGE-induced β-cell oxidative damage and apoptosis. Rat insulinoma cells (INS-1) were treated with 200 µg/ml AGEs for 48 h, and cell apoptosis was then detected by TUNEL staining and flow cytometry. The level of intracellular reactive oxygen species (ROS) was measured by a fluorescence assay. The expression levels of receptor of AGEs (RAGE), TRB3, protein kinase C β2 (PKCβ2) and nicotinamide adenine dinucleotide phosphate (NADPH) oxidase 4 (NOX4) were evaluated by RT-qPCR and western blot analysis. siRNA was used to knockdown TRB3 expression through lipofection, followed by an analysis of the effects of TRB3 on PKCβ2 and NOX4. Furthermore, the PKCβ2-specific inhibitor, LY333531, was used to analyze the effects of PKCβ2 on ROS levels and apoptosis. We found that AGEs induced the apoptosis of INS-1 cells and upregulated RAGE and TRB3 expression. AGEs also increased ROS levels in β-cells. Following the knockdown of TRB3, the AGE-induced apoptosis and intracellular ROS levels were significantly decreased, suggesting that TRB3 mediated AGE-induced apoptosis. Further experiments demonstrated that the knockdown of TRB3 decreased the PKCβ2 and NOX4 expression levels. When TRB3 was knocked down, the cells expressed decreased levels of PKCβ2 and NOX4. The PKCβ2-specific inhibitor, LY333531, also reduced AGE-induced apoptosis and intracellular ROS levels. Taken together, our data suggest that TRB3 mediates AGE-induced oxidative injury in β-cells through the PKCβ2 pathway. Introduction Type 2 diabetes is one of the most prevalent chronic diseases worldwide and has serious social and health consequences, and poses a heavy economic burden. Its clinical characteristics include insulin resistance, pancreatic β-cell dysfunction and reduced β-cell numbers (1). In the pathogenesis of type 2 diabetes, high blood glucose, inflammatory cytokines, high free fatty acids (FFAs) and amyloid deposits are the important factors in the progression of diabetes, all of which lead to β-cell apoptosis (2). The identification of the mechanisms responsible for β-cell apoptosis are necessary in order to understand the pathogenesis and to aid in the development of effective treatments for patients with type 2 diabetes. Recent studies have also demonstrated that AGEs play an important role in β-cell failure. The stimulation of AGEs in in vitro and in vivo models has been shown to directly cause the apoptosis of β-cells (3,6,(16)(17)(18). AGEs stimulate reactive oxygen species (ROS) generation, and, mediated by their receptor (RAGE), induce β-cell apoptosis (3,16). However, these above-mentioned studies have not fully elucidated the molecular mechanisms of action of AGEs in β-cells. Therefore, the roles of AGEs in β-cell apoptosis and their mechanisms of action warrant further investigation. Tribbles homolog 3 (TRB3) is one of the family members of tribble homologous proteins. It inhibits mitosis and is a regulatory factor of the protein kinase B (Akt) pathway (19). 
Through the inhibition of Akt activity, TRB3 negatively regulates the insulin-signaling pathway (20). Our previous studies demonstrated that TRBs play an important role in β-cell apoptosis. High blood glucose, high fat and endoplasmic reticulum (ER) stress upregulate TRB3 expression, which mediates β-cell apoptosis (21)(22)(23). The identification of TRB3 participation in AGE-induced β-cell apoptosis is worthy of investigation. Studies on cardiomyocytes, epithelial cells and retinal diabetic nephropathy have shown that the isoform of protein kinase C (PKC) and PKC β2 (PKCβ2) plays an important role in AGE-mediated cell damage and kidney damage. By increasing PKCβ2 expression, AGEs enhance PKCβ2 activity, as well as the effects and displacement of PKCβ2, increasing ROS formation, which ultimately causes oxidative damage (24)(25)(26)(27). Our previous study demonstrated that TRB3 activated PKCδ and was involved in high-fat-mediated β-cell apoptosis (22). In this study, we focused on AGE-mediated β-cell apoptosis. We also determined whether TRB3 triggered the activation and isoform(s) of PKC, and whether it mediated the damaging effects of AGEs. RNA interference. Lipofectamine 2000 (Invitrogen, Waltham, MA, USA) was used to transfect TRB3 small interfering RNA (siRNA siTRB3; purchased from GenePharma Co., Ltd., Shanghai, China) and the negative control small interference RNA (siNC) and into the INS-1 cells in accordance with the manufacturer's instructions. Target gene sequences were described in our previous study (23). Reverse transcription-quantitative PCR (RT-qPCR). Total RNA was extracted from the INS-1 cells after the corresponding treatments using an RNA extraction kit (Qiagen, Hilden, Germany). Two micrograms of total RNA were used to synthesize the cDNA in a reverse transcription reaction (reverse transcriptase was purchased from Promega, Madison, WI, USA). The RT-PCR reaction and data were analyzed as previously described (28). The MyiQ real-time PCR thermal cycler and SYBR-Green PCR Master Mix kit (both from Bio-Rad Laboratories, Inc., Hercules, CA, USA) were used for the qPCR analyses. Target genes were quantified using MyiQ system software. The specific sequences of the primers used in this study were as follows: β-actin forward, 5'-GACATCCGTAAA GACCTCTATGCC-3' and reverse, 5'-ATAGAGCCACCAAT CCACACAGAG-3'; RAGE forward, 5'-GGAAGGACTGAAG CTTGGAAGG-3' and reverse, 5'-TCCGATAGCTGGAA GGAGGAGT-3'; TRB3 forward, 5'-TGTCTTCAGCAACT GTGAGAGGACGAAG-3' and reverse, 5'-GTAGGATGGCC GGGAGCTGAGTATC-3'; nicotinamide adenine dinucleotide phosphate (NADPH) oxidase 4 (NOX4 forward, 5'-TAGCTG CCCACTTGGTGAACG-3' and reverse, 5'-TGTAACCATGA GGAACAATACCACC-3'. Materials and methods Western blot analysis of protein expression. Following the corresponding treatments of the INS-1 cells, all cellular proteins were lysed in RIPA lysis buffer (Roche Diagnostics) containing protease inhibitors and the concentration was measured using a BCA protein assay kit (Beyotime Institute of Biotechnology, Shanghai, China). Total proteins (20-40 µg)were separated by SDS-polyacrylamide gel electrophoresis (SDS-PAGE). The separated proteins were then transferred onto a PVDF membrane followed by blocking the non-specific antigen and incubating with the corresponding primary antibody overnight. 
The primary antibodies used in this study were: a mouse anti-rat β-actin antibody (A5316; 1:20,000) and a rabbit anti-rat RAGE antibody (R5278; 1:1,000) (both from Sigma-Aldrich); a rabbit anti-rat PKCβ2 antibody (07-873-I; 1:1,000) and a mouse antirat TRB3 antibody (ST1032; 1:1,000) (both from Calbiochem, Billerica, MA, USA), and a rabbit anti-rat NOX4 antibody (ab133303; 1:1,000; Abcam). The secondary antibodies used in this study were a goat anti-mouse IgG antibody (A3682) and a goat anti-rabbit IgG antibody (A0545) (1:20,000; both from Sigma-Aldrich). An analysis of the protein bands was performed using Quantity One gel analysis software (Bio-Rad Laboratories, Inc.). Detection of ROS levels. ROS levels in the INS-1 cells cultured in 96-well microplates following the corresponding treatments were measured using a ROS detection assay kit (Shanghai Genmed Gene Pharmaceutical Technology Co., Ltd., Shanghai, China) with strict adherence to the manufacturer's instructions. A fluorescence detection microplate reader was used to measure the fluorescence intensity of the assay. Statistical analysis. In this study, data are presented as the means ± standard error of the mean (means ± SEM). SPSS 16.0 software (SPSS, Inc., Chicago, IL, USA) was used for statistical analysis. A comparison between 2 groups was performed using the t-test. Comparisons among groups were performed using analysis of variance (ANOVA). A value of P<0.05 was considered to indicate a statistically significant difference. Results AGEs induce the apoptosis of INS-1 cells. Following exposure to the AGEs (200 µg/ml) for 48 h, apoptosis was increased in the INS-1 cells as compared to the control group, as shown by TUNEL staining (Fig. 1A) and flow cytometry (Fig. 1B). A statistically significant difference in INS-1 cell apoptosis was observed between the AGE-treated group and the control group (untreated group). AGEs upregulate intracellular TRB3 expression in INS-1 cells. To analyze the mechanism of action of AGEs, we first detected RAGE expression in INS-1 cells following exposure to AGEs. As shown in Fig. 2, the mRNA (Fig. 2A) and protein expression (Fig. 2B) levels of RAGE were upregulated following exposure to AGEs, suggesting that AGEs mediated the apoptosis of INS-1 cells through RAGE. These findings further validate the results of previous studies (3,9). In addition, AGEs upregulated intracellular TRB3 expression levels at the mRNA and protein level (Fig. 2). AGEs increase intracellular ROS levels. Our previous study demonstrated that the overexpression of TRB3 facilitated highglucose-induced oxidative stress (21). Thus, in this study, we detected intracellular NOX4 expression and ROS levels. As shown in Fig. 3A and B, AGEs upregulated the mRNA and protein expression levels of NOX4. NOX4 is a major enzyme for the synthesis of intracellular ROS (39). In this study, we detected an increase in intracellular ROS levels in the cells following exposure to AGEs (Fig. 3C). Our findings indicated that AGEs promoted ROS synthesis, and further induced INS-1 cell damage and apoptosis through TRB3. The silencing of TRB3 expression by siRNA suppresses AGE-induced ROS synthesis and the apoptosis of INS-1 cells. To further determine whether TRB3 participates in AGE-induced cell damage and apoptosis, we knocked down the expression of TRB3 in INS-1 cells using siRNA (Fig. 4A). Both AGE-induced cell apoptosis ( Fig. 4B and C) and the intracellular ROS levels were significantly reduced in the cells in which TRB3 was knocked down (Fig. 4D). 
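The relative mRNA levels reported in these experiments come from RT-qPCR quantified with the MyiQ system software; the exact calculation is not spelled out here, so the sketch below assumes the common 2^-ΔΔCt method with β-actin (the housekeeping primer listed in the Methods) as the reference gene. All Ct values and the fold-change example are invented for illustration and are not the authors' data.

```python
# Hedged sketch: relative expression by the 2^-ΔΔCt method, assuming β-actin as the
# reference gene (it is the housekeeping primer listed in the Methods). The paper only
# states that MyiQ software was used, so this exact calculation is an assumption, and
# the Ct values below are invented for illustration.

def relative_expression(ct_target: float, ct_ref: float,
                        ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """2^-ΔΔCt fold change of a target gene versus the untreated control."""
    d_ct_sample = ct_target - ct_ref              # ΔCt, AGE-treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # ΔCt, untreated control
    return 2 ** -(d_ct_sample - d_ct_control)

# Example: TRB3 in AGE-treated INS-1 cells vs untreated control (invented Ct values).
fold_trb3 = relative_expression(ct_target=24.1, ct_ref=16.0,
                                ct_target_ctrl=25.6, ct_ref_ctrl=16.1)
print(f"TRB3 fold change (AGEs vs control): {fold_trb3:.2f}")
```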
This result suggested that TRB3 is involved in AGE-induced oxidative damage and the apoptosis of INS-1 cells. TRB3 regulates AGE-induced ROS synthesis and the apoptosis of INS-1 cells through the PKCβ2 pathway. Previous studies have demonstrated that the PKCβ2 pathway plays a key role in AGE-induced oxidative damage to non-islet β-cells (26)(27)(28)(29). However, its exact role in β-cells remains unclear. In this study, we observed an upregulated PKCβ2 expression in INS-1 cells following exposure to AGEs (Fig. 5A). Following the knockdown of TRB3 expression, PKCβ2 and NOX4 expression was downregulated (Fig. 5A). Furthermore, following pre-treatment with the PKCβ2 specific inhibitor, LY333531, AGE-induced INS-1 cell apoptosis, the activity of NOX4 and the intracellular ROS levels were all significantly decreased (Figs. 5B and C, and 6A and B). This result indicated that TRB3 was involved in AGE-induced oxidative damage and the apoptosis of INS-1 cells through the upregulation of PKCβ2 activity. Discussion Studies using diabetic animal models and clinical specimens from diabetic patients have demonstrated that, with the progression of diabetes, the AGE levels in the body gradually increase (18,26). It has also been demonstrated that AGEs play an important role in diabetic retinopathy, kidney diseases, neuropathy and cardiomyopathy (29). Previous studies have shown that AGEs are the main factors which induce β-cell dysfunction and apoptosis (3,16,18). Thus, it is important to unravel the molecular mechanisms of action of AGEs in order to protect β-cells from injury. In this study, we found that AGEs upregulated TRB3 expression in INS-1 cells and mediated oxidative damage and the apoptosis of β-cells through PKCβ2. AGEs bind with RAGE on cell membranes and trigger cellular functional response. RAGE is a multi-ligand cell surface receptor and belongs to the immunoglobulin superfamily (30). RAGE can be activated by binding with different types of ligands, including AGEs, S100 proteins, HMGB1s and Aβ peptides (31)(32)(33)(34)(35). The activation of RAGE is associated with a number of chronic diseases, including different types of diabetic complications (e.g., neuropathy and nephropathy), microvascular disease and chronic inflammation (7). In this study, exposure to AGEs promoted the apoptosis of INS-1 cells and increased the expression of their receptor, RAGE; thus, RAGE mediates the damaging effects of AGEs on β-cells (3,16,18,36). During the course of diabetes, oxidative stress and ER stress are the direct factors causing β-cell dysfunction and apoptosis (37), which results in insulin resistance in type 2 diabetes and β-cell dysfunction (38). Factors involved in oxidative stress include high blood glucose, FFAs and cytokines (38). In recent studies, AGEs have been shown to induce β-cell damage through oxidative stress (3,16,18). In this study, following exposure to AGEs, the ROS levels in INS-1 cells were significantly elevated. In addition, NOX expression was downregulated. This result indicated that AGEs induced oxidative stress in INS-1 cells. NADPH oxidases are the major sources for intracellular ROS synthesis and generally have NOX1, NOX2, NOX4 and NOX5 types. A notable feature of NOX4 is its constitutive activity and preferential generation of a hydrogen superoxide anion that acts as an oxygen sensor (39). In addition, NOX4 has been confirmed to play an important role in glucocorticoid-induced INS-1 cell injury (28). 
Our results indicated that the AGE-induced oxidative injury to INS-1 cells may be an important cause of the apoptosis of INS-1 cells. Many pathways are involved in mediating oxidative damage in cells. Our previous study showed that TRB3 was associated with oxidative stress in high-glucose-induced β-cells failure (21). TRB3 is a homolog of Drosophila tribbles protein and mammalian protein. TRB3 is widely expressed in insulin targeted tissues and is closely associated with insulin resistance and glucose homeostasis (40). There is recent evidence to suggest that TRB3 plays an important role in apoptosis. However, its role remains controversial. Some studies have shown that TRB3 promotes the cytokine-induced apoptosis of pancreatic β-cells, as well as the ER stress-induced apoptosis of 293 cells, PC-12 cells (rat neuronal cell line) (41)(42)(43). Other studies have shown that TRB3 exerts an anti-apoptotic effect against the nutrient starvation-induced apoptosis of human prostate carcinoma PC-3 cells, and SaOS2 cells (44,45). These differences may be due to different cell types and stresses caused by different stimuli. Relevant studies on β-cell apoptosis have indicated that TRB3 plays a key role in high blood glucose, high fat, FFA and cytokine-induced apoptosis in β-cells (21,22,41). In this study, we found that AGEs stimulated INS-1 apoptosis and increased the expression of TRB3. The knockdown of TRB3 expression inhibited the apoptosis of INS-1 cells. Moreover, the NOX4 and ROS levels were also decreased, indicating that TRB3 plays an important role in the AGE-induced apoptosis of INS-1 cells by affecting ROS levels. The study by Gorasia et al demonstrated that β-cells were susceptible to injury caused by oxidative stress and ER stress (46) and an increased effect between oxidative damage and ER stress (47). TRB3 is an important regulatory molecule in the ER stress-induced apoptotic pathways (42). In this study, we also demonstrated that the knockdown of TRB3 expression affected AGE-induced ROS synthesis and provided evidence of the interaction between oxidative damage and ER stress in β-cells. Several studies in the past have indicated that the PKC path way is associated with oxidative stress induced by ROS synthesis (24)(25)(26)(27)48,49). PKC regulates NADPH oxidase activity and induces ROS synthesis. In addition, PKC plays an important role in AGE-induced oxidative damage in cells. Studies using glomerular microvascular endothelial cells and cardiomyocytes have demonstrated that AGEs enhanced NADPH oxidase activity through PKCβ2 and increased ROS synthesis and cell damage (24)(25)(26)(27). In this study, INS-1 cells exhibited an elevated expression of PKCβ2 following exposure to AGEs. Following the knockdown of TRB3, the expression of PKCβ2 was decreased and the activity of NADPH oxidase was also decreased. In addition, the application of specific inhibitors to suppress PKCβ2 activity significantly decreased the intracellular ROS levels and the apoptosis of INS-1 cells. TRB3 regulated NADPH oxidase and ROS levels which caused damage to INS-1 cells by affecting the activity of the PKCβ2 pathway. TRB3, as a related mole cule of ER stress-induced apoptosis, regulates PKC. PKC is the important regulatory molecule in the pathway of ROS synthesis. Hence, this study provided a new direction in determining the mechanisms responsible behind the interaction between oxidative damage and ER stress. 
In conclusion, this study demonstrated that AGEs mediate oxidative stress through TRB3, damaging INS-1 cells and resulting in their apoptosis. TRB3 regulated NADPH oxidase activity, promoted ROS synthesis and induced oxidative stress in INS-1 cells through the PKCβ2 pathway. Our data provide a new understanding of the mechanisms responsible for AGE-induced oxidative injury to β-cells and a new direction for studies aiming to identify methods with which to protect β-cells from damage.
Does robot-assisted surgery reduce leg length discrepancy in total hip replacement? Robot-assisted posterior approach versus direct anterior approach and manual posterior approach: a propensity score-matching study

Background
Advocates of the robot-assisted technique argue that robots could improve leg length restoration in total hip replacement. However, few studies have compared the robot-assisted posterior approach (RPA) with conventional posterior approach (PA) THA and direct anterior approach (DAA) THA in terms of LLD. This study aimed to determine whether robot-assisted techniques could significantly reduce LLD compared to manual DAA and manual PA.

Methods
We retrospectively reviewed the cohort of consecutive ONFH patients who underwent THA through the robot-assisted posterior approach, the manual posterior approach, or the manual DAA from January 2018 to December 2020 at one institution. One experienced surgeon performed all procedures. We calculated a propensity score for each patient by multivariate logistic regression analysis to match similar patients across groups. We included confounders consisting of age at the time of surgery, sex, body mass index (BMI), and preoperative LLD. Postoperative LLD and Harris hip scores (HHS) at two years after surgery were compared between cohorts.

Results
We analyzed 267 ONFH patients treated with RPA, DAA, or PA (73 RPA patients, 99 DAA patients, and 95 PA patients). After propensity score matching, we generated cohorts of 40 patients in the DAA and RPA groups and found no significant difference in postoperative LLD between the RPA and DAA cohorts (4.10 ± 3.50 mm vs 4.60 ± 4.14 mm, p = 0.577). The HHS at 2 years postoperatively were 87.04 ± 7.06 vs 85.33 ± 8.34 (p = 0.202). After propensity score matching, we generated cohorts of 58 patients in the manual PA and RPA groups, and there were significant differences in postoperative LLD between the RPA and PA cohorts (3.98 ± 3.27 mm vs 5.38 ± 3.68 mm, p = 0.031). The HHS at 2 years postoperatively were 89.38 ± 6.81 vs 85.33 ± 8.81 (p = 0.019). After propensity score matching, we generated cohorts of 75 patients in the manual DAA and PA groups, and there were significant differences in postoperative LLD between the DAA and PA cohorts (4.03 ± 3.93 mm vs 5.39 ± 3.83 mm, p = 0.031). The HHS at 2 years postoperatively were 89.71 ± 6.18 vs 86.91 ± 7.20 (p = 0.012).

Conclusion
This study found no significant difference in postoperative LLD between RPA and DAA, but we found significant differences between RPA and manual PA and between DAA and manual PA in ONFH patients. We found a significant advantage in leg length restoration in primary total hip arthroplasty with robot-assisted surgery.

Introduction
Total hip arthroplasty (THA) is one of the most successful surgeries in modern medicine [1]. Hip replacement has revolutionized the treatment of advanced osteonecrosis of the femoral head (ONFH) with excellent outcomes. However, leg length discrepancy (LLD) after THA has been associated with overall dissatisfaction [2-4] and has been identified as the leading cause of litigation against orthopedic surgeons [5-7]. The posterior approach (PA) is mainstream in THA because it is safe, easy to perform, and highly reliable in complex cases. However, with the patient in the lateral position it is challenging to assess limb length. Furthermore, the PA requires more soft tissue release. Surgeons often sacrifice leg length equality for additional stability when using the posterior approach.
The direct anterior approach (DAA) THA has become increasingly popular because of its advantages in shortening hospital length of stay [8] and its low dislocation rate [9,10]. Placing the patient supine is advantageous in evaluating the range of motion and limb lengths [11-13]. To date, only a few studies have compared RPA with conventional PA or DAA THA in terms of LLD [14,15]. However, these studies had significant baseline differences and lacked matching, or had sample sizes that were too small, which limits the reliability of their conclusions. Therefore, we conducted this study to determine whether a robot-assisted technique significantly reduces LLD compared to DAA and the manual posterior approach in matched primary THA cohorts. The hypothesis of this study was that RPA might provide a smaller LLD than PA, similar to DAA.

Inclusion and exclusion criteria
Institutional Review Board approval of the study was obtained. The consecutive cases that underwent THA through RPA, DAA, and manual PA from January 2018 to December 2020 were reviewed. All data were obtained from medical records. Study inclusion criteria were (1) a diagnosis of ONFH; (2) operation performed by one surgeon; (3) availability of proper postoperative pelvic radiographs and complete medical records; and (4) surgery performed through the robot-assisted posterior approach, DAA, or manual PA. Exclusion criteria were (1) incomplete clinical data or missing proper postoperative radiographs [16] (radiographs with a rotated or tilted pelvis, an included angle between the axial line of the femoral medullary cavity and the median line greater than 10 degrees, or radiographs on which at least one of the lesser trochanters or teardrops was difficult to define); (2) surgery performed through other approaches; and (3) a history of surgery or infection in the operative hip.

Surgical procedure
In DAA and PA surgery, standard radiographic templating was performed for patients scheduled for THA using the Orthoview software (Version 6.6.1, Materialise, Leuven, Belgium) to determine component sizing and positioning, the level of the neck cut, and the amount of leg lengthening or shortening needed. A tapered, cementless stem and cementless acetabular cups were used in all cases. The Accolade II femoral stem (Stryker, Mahwah, New Jersey, USA), Trident acetabular cups (Stryker), and Pinnacle acetabular cups (DePuy, Warsaw, IN, USA) were used for all patients. The surgeon in this study had performed more than 2000 THAs (more than 200 RPA THAs, 500 DAA THAs, and 1000 PA THAs) and performed more than 300 hip replacements annually. The surgeon had passed the THA learning curve for all three kinds of operations and had no preference for any procedure. Patients freely chose which operation to undergo according to their condition and costs, but patients were excluded as candidates for DAA if their body mass index (BMI) was ≥ 30 kg/m². In our institution, RPA did not add to the patient's cost, so patients were free to choose whether or not to use robot assistance in their surgery. All the benefits and risks of performing RPA were explained to patients preoperatively so that they could decide which surgical procedure to undergo. The Mako robotic arm interactive orthopedic system (Stryker) assisted surgeons in performing RPA during surgery. Computed tomography (CT)-based navigation software could directly measure changes in LLD [17]. All RPA surgeries were performed through a posterolateral approach under general anesthesia.
After attaching the pelvic arrays, the surgeon began the skin incision and initial exposure. Before hip dislocation, the proximal and a distal femoral checkpoint were captured to measure the preoperative leg length and hip offset. The surgeon then dislocated the joint and performed the femoral neck osteotomy. The position of the pelvis was confirmed by registering and verifying the position of patient-specific anatomical landmarks displayed on the screen.

The direct anterior approach was performed with the patient in the supine position on a standard operating table. An oblique skin incision starting 3 cm distally and laterally to the anterior superior iliac spine (ASIS) was used. The subcutaneous tissue and the fascia centrally over the tensor fascia lata muscle were divided, followed by blunt dissection to open the interval between the tensor fascia lata and the sartorius muscle. The joint capsule was exposed, and the anterior portion was removed. A double osteotomy of the femoral neck facilitated head removal, followed by traditional preparation of the acetabulum using an offset reamer, and the cup was positioned in place. Next, the femur was elevated to allow access to the femoral canal. The leg was then placed in external rotation, adducted under the contralateral leg, and the hip was extended by lowering the foot end of the table approximately 30°. The femoral canal was opened, followed by standard preparation using an offset reamer, and the stem was implanted. Leg lengths were checked by palpation of the medial malleoli.

The PA procedures for exposure and osteotomy were as described above. The smallest reamer was used to determine the acetabular bottom, and then larger reamers were used in turn to prepare the acetabulum. The acetabular cup and femoral stem were implanted manually. The lesser trochanter-prosthetic tip distance was measured and checked during the operation to ensure leg length restoration. In addition, leg lengths were checked by palpation of the patellae.

Hip stability and leg length were tested through the full range of motion of the hip. Stability testing was done with the components in place. Hip stability was tested in extension, first in abduction and external rotation and then in adduction and external rotation, while palpating the femoral head to ensure no impingement or subluxation. Testing in flexion and rotation followed, looking for any posterior subluxation or dislocation. The goal in all groups was to restore leg length under the premise of good stability, with no impingement, dislocation, or subluxation in any hip movement.

All charts and radiographs were retrospectively reviewed to collect information including age, sex, operative side, height, weight, preoperative Harris hip scores (HHS), and body mass index (BMI). Preoperative and postoperative LLD were measured on the pelvic AP radiographs. Postoperative HHS was collected at follow-up two years after surgery (Fig. 2).

Radiograph measurement
The plain pelvic AP radiographs used in this study were taken in the operating room under anesthesia after surgery, with the patient's patellae facing forward. The radiographic measurements were performed on digital radiographs using the Orthoview measurement software package (Version 6.6.1, Materialise, Leuven, Belgium). The contralateral hip was considered as a reference for measurement. Radiographs were calibrated using the known size of each ceramic head as a marker. The trochanteric technique, as described by Dorr et al.
[18], was used to measure the LLD on the low AP pelvis radiograph. LLD was measured using an inter-teardrop line as a pelvic reference. The teardrop line was marked bilaterally, creating a horizontal inter-teardrop line across the image. After that, two lines were drawn, each perpendicular to the teardrop line, starting from the most prominent portion of the lesser trochanter. LLD was defined as the difference in measurement between the operated and non-operated hip. The LLD was given a positive value if the operative limb was longer than the nonoperative limb; otherwise, the LLD was given a negative value. In patients undergoing bilateral surgery, the first operated side was used as the baseline. When calculating the mean value, the direction of length change (leg lengthened or shortened) was not considered. To eliminate bias and improve the accuracy of measurement, all the postoperative imaging measurements were done independently by two blinded observers who collected LLD data twice, two weeks apart. The observers were blinded to each other's results and to the type of surgery performed. Each patient's four measurements were averaged into a single number for LLD, and the absolute LLD values were used in all statistical analyses. There were strong interobserver and intraobserver correlations for all LLD measurements (r > 0.82 and p < 0.001).

Statistical analysis
Given the differences in the baseline characteristics between eligible participants in the three groups, propensity score matching (PSM) was used to identify cohorts of patients with similar characteristics. Patients in each group were matched with each of the other two groups. All analyses were performed using IBM SPSS Statistics software (Version 25; IBM, Armonk, New York, USA). The significance level was set at < 0.05 for all tests. Values are expressed as mean ± standard deviation. When calculating the propensity score by multivariate logistic regression analysis for each patient, we included confounders of age at the time of surgery, sex, body mass index (BMI), and preoperative LLD. In the matched cohorts, paired comparisons were performed using McNemar's test for binary variables and a paired Student's t test or paired-sample test for continuous variables. All reported P values are two-sided and have not been adjusted for multiple testing. A post hoc power analysis was performed to compare LLD (Fig. 3).
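To make the matching procedure concrete, the following is a minimal Python sketch of 1:1 nearest-neighbor propensity score matching with a caliper equal to 0.05 of the standard deviation of the logit of the propensity score, the settings used for each of the pairwise comparisons reported below. The analyses in this study were performed in SPSS; scikit-learn is used here only as a stand-in, and the column names and simulated data are illustrative placeholders rather than the study's actual variables.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def match_one_to_one(df, treat_col, covariates, caliper_sd=0.05, seed=0):
    """Greedy 1:1 nearest-neighbor matching on the logit of the propensity score."""
    X = df[covariates].to_numpy(dtype=float)
    y = df[treat_col].to_numpy(dtype=int)                # 1 = treated group, 0 = comparator
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    logit = np.log(ps / (1.0 - ps))
    caliper = caliper_sd * logit.std()                   # caliper = 0.05 SD of the logit

    treated = list(np.flatnonzero(y == 1))
    controls = list(np.flatnonzero(y == 0))
    np.random.default_rng(seed).shuffle(treated)         # randomize matching order

    pairs = []
    for t in treated:
        if not controls:
            break
        dists = np.abs(logit[controls] - logit[t])
        j = int(np.argmin(dists))
        if dists[j] <= caliper:                          # accept only matches within the caliper
            pairs.append((t, controls.pop(j)))
    return pairs

# Hypothetical usage with simulated covariates (age, sex, BMI, preoperative LLD).
rng = np.random.default_rng(1)
n = 172
df = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(24, 3, n),
    "pre_lld": rng.normal(5, 3, n),
    "group": rng.integers(0, 2, n),                      # 1 = RPA, 0 = DAA (illustrative labels)
})
matched_pairs = match_one_to_one(df, "group", ["age", "sex", "bmi", "pre_lld"])
print(f"{len(matched_pairs)} matched pairs retained")

A greedy match of this kind discards treated patients for whom no control falls within the caliper, which is why the matched cohorts below (40, 58 and 75 pairs) are smaller than the original groups.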
Results
Using propensity score matching, the 73 patients who underwent RPA were matched with the 99 patients who underwent DAA and with the 95 patients who underwent manual PA, respectively. After that, the 95 patients with PA were matched with the 99 patients with DAA.

Firstly, propensity score matching was performed between the 73 RPA and 99 DAA patients. Based on the propensity score, we generated 1:1 matched cohorts to facilitate comparison between RPA and DAA patients, matching the patients using the nearest neighbor technique with a predefined caliper width equal to 0.05 of the standard deviation of the logit of the propensity score. A total of 40 patients were included in the propensity score-matched analysis in each group. In the RPA cohort matched with the DAA cohort, 13 patients underwent bilateral THA; in the DAA cohort matched with the RPA cohort, 14 patients underwent bilateral THA (p = 0.814). In the first pair of matched cohorts, the mean LLD was 4.10 ± 3.50 mm in the RPA cohort versus 4.60 ± 4.14 mm in the matched DAA cohort. There was no significant difference in postoperative LLD between the two cohorts (p = 0.577). The power (0.99) of the LLD comparison was convincing (Table 1).

Secondly, propensity score matching was performed between the 73 RPA patients and the 95 manual PA patients. We also generated 1:1 matched cohorts to compare RPA and manual PA patients, matching the patients using the nearest neighbor technique with a predefined caliper width equal to 0.05 of the standard deviation of the logit of the propensity score. A total of 58 patients were included in the propensity score-matched analysis in each cohort. In the RPA cohort matched with the PA cohort, 20 patients underwent bilateral THA; in the PA cohort matched with the RPA cohort, 23 patients underwent bilateral THA (p = 0.566). In the second pair of matched cohorts, the mean LLD was 3.98 ± 3.27 mm in the RPA cohort versus 5.38 ± 3.68 mm in the matched manual PA cohort. There were significant differences in postoperative LLD between the two cohorts (p = 0.031). The power (1.00) of the LLD comparison was convincing (Table 2).

Thirdly, propensity score matching was performed between the 99 DAA patients and the 95 manual PA patients. We also generated 1:1 matched cohorts to facilitate comparison between DAA and PA patients, matching the patients using the nearest neighbor technique with a predefined caliper width equal to 0.05 of the standard deviation of the logit of the propensity score. Seventy-five patients were included in the propensity score-matched analysis in each cohort. In the DAA cohort that matched the PA cohort, 25 patients underwent bilateral THA (Table 3). In the third pair of matched cohorts, the mean LLD was 4.03 ± 3.93 mm in the DAA cohort versus 5.39 ± 3.83 mm in the matched PA cohort, a significant difference (p = 0.031).

When an LLD of more than 3 mm was set as an outlier, there were 18 RPA and 18 DAA outliers in the first pair of matched cohorts (p = 1), 29 RPA and 37 PA outliers in the second pair (p = 0.135), and 34 DAA and 48 PA outliers in the third pair (p = 0.022). When an LLD of more than 5 mm was set as an outlier, there were 11 RPA and 10 DAA outliers in the first pair (p = 0.801), 16 RPA and 25 PA outliers in the second pair (p = 0.024), and 21 DAA and 30 PA outliers in the third pair (p = 0.122). When an LLD of more than 10 mm was set as an outlier, there were 4 RPA and 3 DAA outliers in the first pair (p = 0.694), 3 RPA and 6 PA outliers in the second pair (p = 0.292), and 4 DAA and 7 PA outliers in the third pair (p = 0.349) (Table 4).
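The post hoc power values quoted above are reported without computational detail. A rough way such a check might be reproduced for a paired LLD comparison is sketched below; statsmodels stands in for whatever software was actually used, and the mean and standard deviation of the paired differences are illustrative placeholders rather than values reported in this study.

from statsmodels.stats.power import TTestPower

# Hypothetical inputs: mean and SD of the paired LLD differences (mm) and the
# number of matched pairs in the RPA vs manual PA comparison.
mean_diff = 1.4
sd_diff = 3.5
n_pairs = 58

effect_size = mean_diff / sd_diff                  # Cohen's d for paired differences
power = TTestPower().power(effect_size=effect_size, nobs=n_pairs,
                           alpha=0.05, alternative="two-sided")
print(f"post hoc power for the paired t test: {power:.2f}")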
Discussion
While THA is widely viewed as one of modern medicine's most successful surgical procedures, it is not perfect [5,19,20]. LLD after THA remains a significant problem. Our study results showed that RPA and DAA THA were equally effective in minimizing LLD: there was no significant difference in LLD between the matched RPA and DAA cohorts. The LLD in the matched RPA and DAA cohorts was smaller than in the matched manual PA cohort, and the differences were statistically significant in ONFH patients. Postoperative HHS was also significantly higher in the DAA and RPA cohorts than in the PA cohort. Early studies suggested that robotic systems have high accuracy [17], which may lead to reduced leg length discrepancies and restoration of the hip centers of rotation and offsets. These reductions in radiographic outliers will likely lead to better clinical outcomes and patient-reported functional outcomes, more durable implant survivorship, and lower rates of complications. Most studies have suggested that a discrepancy of less than 10 mm does not produce symptoms and is well tolerated [19]. Published studies used various methods to control limb length during THA, including DAA with fluoroscopic guidance; preoperative 2D or, more recently, 3D planning; and robot-assisted intraoperative navigation.

To date, only a few studies have been conducted to compare postoperative LLD in RPA, DAA, and PA patients [14]. The study by Bitar et al. reviewed 67 RPA, 29 DAA, and 59 PA patients with a diagnosis of hip osteoarthritis and showed that all groups achieved a clinically acceptable mean LLD. Their study concluded that the accuracy resulted from the high surgical volume, precise preoperative templating, and intraoperative clinical assessment. However, there were significant differences among the groups at baseline, and their study lacked matching. Unlike their study, ours used propensity score-matched cohorts for the comparisons, which increases comparability between groups. Kayani et al. [15] compared 25 RPA and 50 manual PA procedures performed by one surgeon. Their study showed no difference between robotic-arm-assisted and conventional manual THA in achieving the planned leg length correction, but its sample size was small.

One reason for the accuracy of the DAA group could be that the approach allows intuitive feedback to the surgeon [21]. Placing the patient in the supine position allows the combination of radiographic and palpation checks of leg length, potentially leading to more accurate leg length restoration. However, this approach has a steep learning curve [10,22,23]. Surgeon experience might have played an essential role in minimizing LLD regardless of the technique and approach used for THA. The expert surgeon in our study was far beyond his learning curves. Unlike the DAA, robot-assisted surgery monitors limb length in real time during the operation and compares it with the contralateral side, providing more information to the surgeon via a screen. Our surgical goal was to restore limb length while maintaining hip stability. The results of both techniques are reasonable because they meet this clinical ideal, and any difference between groups may therefore be masked. The findings of the study cannot be generalized to surgeons with less experience or those in the early part of the DAA learning curve. Although prior studies have suggested that there may be some benefit to using robot assistance during THA, our study results indicate that equivalent radiographic outcomes are achievable without robot assistance. Our study shows that the improved accuracy did not translate into a significant LLD improvement compared with DAA surgery.

We noticed that LLD in the matched RPA cohort was smaller than in the matched manual PA cohort, and postoperative functional scores were also better in the RPA group. When an LLD of more than 5 mm was set as an outlier, there were fewer outliers in the RPA cohort than in the PA cohort, and the difference was statistically significant. This shows that RPA has advantages in restoring leg length through the same approach. Experienced surgeons performing simple primary surgery will achieve a clinically acceptable LLD, no matter which procedure is performed.
However, this conclusion cannot be further generalized, especially to beginners or complex cases. It is essential to know that these techniques may carry risks such as higher intraoperative complication rates and increased radiation, blood loss, and operation time [24,25]. In addition, robotic technology is associated with additional expenses, such as set-up and maintenance overheads, beyond the costs of additional operating room time.

The main strength of this study is that it was a single-surgeon study assessing radiological parameters and postoperative function. Outcomes were recorded by blinded observers using standardized techniques with high observer agreement on all outcomes. Because the groups were significantly different at baseline, the best way to compare robot-assisted PA with the other procedures was to match similar patients across groups [26,27].

This study is not without limitations. Firstly, as a retrospective group study, its findings may not be as unbiased as those of a randomized study. Some selection bias might have been part of patient selection, especially after the introduction of the robot. All patients in this study had ONFH because ONFH is one of the most common reasons for performing joint replacement surgery in Chinese patients. In order to increase comparability, patients with ONFH were selected for all groups, but this limits the generalizability of the findings. Secondly, we acknowledge that the study lacks a long-term assessment of clinical outcomes; however, this was beyond the scope of the study. Further studies are needed to investigate this procedure's complications, clinical outcomes, and specific indications. Thirdly, although full lower-body X-rays and EOS X-rays have been proposed as a gold standard for LLD measurement, our study did not use them. This is one of the weak points of our study, and future studies should focus on using EOS X-rays to measure leg length. Moreover, robot-assisted technology could be applied to DAA surgery to explore whether combining the two techniques can bring better results: DAA, robot-assisted DAA, and robot-assisted PA surgery should be compared for differences in leg length restoration. Long-term follow-up should also be included in future studies. Based on this study's post hoc power analysis, the present sample size was adequate, although a similar or larger sample size could be used in future studies.

Conclusion
This study found that LLD in the RPA cohort was smaller than in the manual PA cohort, but there was no significant difference in postoperative LLD between the RPA and DAA operations. Therefore, before one can fully advocate for robotic technology, further research is needed to determine whether robotic assistance will translate into a leg length restoration that justifies the increased cost and operation time.
Intervening on the Developmental Course of Children With Borderline Intellectual Functioning With a Multimodal Intervention: Results From a Randomized Controlled Trial

An adverse social environment is a major risk factor for borderline intellectual functioning (BIF), a condition characterized by an intelligence quotient (IQ) within the low range of normality (70–85) together with difficulties in academic achievement and adaptive behavior. Children with BIF show impairments in planning, language, movement, emotion regulation, and social abilities. Moreover, the BIF condition exposes children to an increased risk of school failure, the development of mental health problems, and poverty in adulthood. Thus, an early and effective intervention capable of improving the neurodevelopmental trajectory of children with BIF is of great relevance.

Aim
The present work aims to report the results of a randomized controlled trial (RCT) in which an intensive, integrated and innovative intervention, the movement, cognition and narration of the emotions treatment (MCNT), was compared to standard speech therapy (SST) for the treatment of children with BIF.

Methods
This was a multicenter, interventional, single-blind RCT with two groups of children with BIF: the experimental treatment (MCNT) and the treatment as usual (SST). A mixed factorial ANOVA was carried out to assess differences in effectiveness between the treatments. Primary outcome measures were: WISC-III, Child Behavior Checklist (CBCL), Vineland II, and Movement ABC.

Results
MCNT proved to be more effective than SST in improving full-scale IQ (p = 0.0220), performance IQ (p < 0.0150), socialization abilities (p = 0.0220), and behavior (p = 0.0016). No improvement was observed in motor abilities. Both treatments were linked to improvements in verbal memory, selective attention, planning, and language comprehension. Finally, children in the SST group showed a significant worsening in their behavior.

Conclusion
Our data show that an intensive and multimodal treatment is more effective than a single-domain treatment for improving intellectual, adaptive and behavioral functioning in children with BIF. These improvements are relevant as they might represent protective factors against the risk of school failure, poverty and psychopathology to which children with BIF are exposed in adulthood. Limitations of the study are represented by the small number of subjects and the lack of a no-treatment group.

Clinical Trial Registration
ISRCTN Registry (isrctn.com), identifier ISRCTN81710297.

INTRODUCTION
Several factors related to the social environment, such as low socio-economic status, maltreatment, and high levels of maternal stress, represent the major causes of borderline intellectual functioning (BIF) (Bradley and Corwyn, 2002; Marcus Jenkins et al., 2013; Peltopuro et al., 2014; Hassiotis et al., 2019). BIF is a condition characterized by mental functioning at the border between normal intellectual functioning and intellectual disability, that is, an IQ between 1 and 2 standard deviations below the mean of the normal distribution of intelligence, with an impact on adaptive abilities (Salvador-Carulla et al., 2013; Wieland and Zitman, 2016).
At primary school age, children with BIF present major difficulties in school achievement due to learning difficulties in more than one domain, difficulties in executive functions (such as attention, concentration, planning, and inhibition of impulsive responses) and memory, and motor skill limitations (Alloway, 2010; Vuijk et al., 2010; Salvador-Carulla et al., 2013; Pulina et al., 2019). Furthermore, limitations in social skills and emotional competencies and behavioral problems affect the social participation of these children (Kavale and Forness, 1996; Baglio et al., 2016). Children with BIF are thus at high risk of school failure and dropout (Karande et al., 2008; Shaw, 2008, 2010) and of developing psychiatric problems in adulthood (Chaplin et al., 2006; Douma et al., 2007; Ali and Hassiotis, 2008; Gigi et al., 2014; Hassiotis, 2015; Hassiotis et al., 2019). Recent studies established a prevalence of BIF ranging from 7 to 12% (Salvador-Carulla et al., 2013; Hassiotis, 2015).

Although intelligence is one of the most heritable behavioral traits, its heritability seems to account for about 20% to 40% in infancy (Plomin and Deary, 2015). Intelligence appears to be stable from adolescence to adulthood, but the childhood environment can play a crucial role, especially in families with low socio-economic status (SES). The complex interplay between genes and environment during development is supported by findings from a longitudinal study that followed a large cohort of 14,853 children (von Stumm and Plomin, 2015). Results showed that 2-year-old children from low SES environments had an IQ on average six points lower than that of their high SES peers; by the age of 16, this gap had nearly tripled. The link between BIF and the social environment is likely related to the interplay between adverse life conditions and brain development. Childhood is indeed a critical period because of the dramatic changes that occur in the brain. It has been demonstrated that low SES correlates with both reduced learning abilities and abnormal brain development in several critical regions, including the hippocampus, amygdala, parahippocampal and sensory-motor cortices, and limbic system connectivity (Hanson et al., 2011; Baglio et al., 2014; Hair et al., 2015; Blasi et al., 2019). These data are relevant as they indicate that children with BIF might be at risk of learning difficulties and emotional problems at a very early age.

All the aforementioned findings highlight the necessity of an early and effective intervention capable of improving the clinical and neurodevelopmental course of children with BIF by exploiting the substantial plasticity of the developing brain (Johnston, 2009). No specific rehabilitation approach or guidelines are currently available for children with BIF. The usual care provided by the national health system in Italy is focused on the learning difficulties and consists of standard speech therapy (SST). Furthermore, in the mainstream Italian school system, children with BIF are classified as having Special Educational Needs (SEN). Children with SEN have a personalized and simplified school program (PSSP) whose purpose is to guarantee compensatory tools and dispensatory measures (i.e., the prescription of facilitation devices such as a calculator and/or a computer) as well as to simplify the educational approach. Both PSSP and SST, though, are focused only on academic abilities, without considering the complexity and multiplicity of the difficulties and needs of this population.
As specific interventions for children with BIF are lacking, we developed a multimodal treatment (Blasi et al., 2017a) based on three main theoretical considerations. First, intelligence seems to be a multidimensional and dynamic process that plays a pivotal role in the development of truly adaptive abilities (Gottfredson, 1997). Intelligence is one of the best "predictors of important life outcomes such as education, occupation, mental and physical health and illness, and mortality" (Plomin and Deary, 2015). For this reason, a treatment that is effective in increasing IQ can represent a protective factor against social disadvantage in adulthood. Second, the development of emotional, cognitive and motor skills is highly correlated, both in typical development (Wassenberg et al., 2005; Inkster et al., 2016) and in children with BIF (Houwen et al., 2016). Therefore, effective rehabilitation interventions during childhood should include all these domains. Finally, higher levels of education and living in cognitively stimulating environments result in greater cognitive reserve that can positively impact neurodevelopment (Schapiro and Vukovich, 1970).

Based on these considerations, we designed a treatment named the movement, cognition and narration of the emotions treatment (MCNT) (Blasi et al., 2017a). Central aspects of MCNT are the intensity and the integration of the approach. Children attend the program for a whole school year (9 months), 3 h per day, Monday through Friday. MCNT operates through a highly enriched and motivating approach in which children are divided into three "teams" that, in rotation, attend three laboratories, one for each domain: cognition, movement, and emotions. The MCNT program is integrated with the school programs and with the families through the engagement of teachers and parents in the aims and strategies of the program (Blasi et al., 2017a).

The aim of the present work is to report the results of the previously published Study Protocol (Blasi et al., 2017a), in which a detailed description of all the procedures and treatments adopted is available. The aim of the trial was to investigate the efficacy of the MCNT intervention in the recovery of the BIF condition and to compare it with SST in promoting complex reasoning, motor, behavioral and adaptive skills.

Study Design and Participants
This was a multicenter, interventional, single-blind, randomized controlled study (RCT) originally designed with three groups of children with BIF: group 1, children treated with SST (treatment as usual, N = 20); group 2, children treated with MCNT (experimental treatment, N = 20); and group 3, children on the waiting list for SST (no treatment; N = 20) (Figure 1). The study was approved by the Ethics Committee of the Don Gnocchi Foundation (DGF) and of the ASST S. Paolo and S. Carlo Hospital. All parents signed a written informed consent at the first meeting. Seventy children were recruited from the Child and Adolescent Neuropsychiatry Unit of the two Medical Centers involved (DGF, and ASST S. Paolo and S. Carlo Hospital), to which they had been referred for difficulties in school achievement and/or socialization. All children were allocated, evaluated and treated at DGF (Figure 1). Ten children were excluded because they did not meet the inclusion criteria described below and/or declined to participate.
Moreover, according to the Ethics Committee's recommendations, subjects in the no-treatment group could not be kept on the waiting list and deprived of the conventional treatment when that treatment became available. Due to the unexpected opening of new treatment opportunities outside our Institution, 14 children belonging to this group exited the study before the T1 assessment. For this reason, the final sample included forty children belonging to the two treatment groups (Figure 1).

The primary and secondary outcome measures were determined at two time points: within 2 months prior to the beginning of the treatment (T0) and within 2 months after the end of the treatment (T1). Two psychologists, blinded to the intervention received, evaluated the children before and after treatment. Two outcomes, the Child Behavioral Checklist (CBCL 6-18) (Achenbach and Rescorla, 2001, 2007; Achenbach, 2011) and the Vineland II (Sparrow et al., 2005), were not blind to the experimental condition because the questionnaires were completed by the parents. Following two drop-outs in each group, 18 children completed MCNT and 18 completed SST.

The inclusion criteria were: age between 6 and 11 years and attending mainstream primary school; a Full Scale Intelligence Quotient (FSIQ) score ranging from 70 to 85 (±5) determined with the Wechsler Intelligence Scale for Children-III (WISC-III) (Wechsler, 2006); presence of learning disabilities assessed with the standardized test battery for developmental dyslexia and dysorthographia (DDE-2) (Sartori et al., 2007) and dyscalculia (AC-MT 6-11) (Cornoldi et al., 2012); and presence of an impact of the above-mentioned difficulties on daily life as measured by the Child Behavioral Checklist (CBCL 6-18) (Achenbach and Rescorla, 2001, 2007; Achenbach, 2011). Exclusion criteria were: presence of major neuropsychiatric disorders (such as ADHD and autism spectrum disorder); and presence of neurological conditions such as epilepsy, traumatic brain injury, brain malformation and infectious disease involving the central nervous system. Other exclusion criteria were the presence of systemic diseases such as diabetes or dysimmune disorders, and genetic syndromes such as Down syndrome or Fragile X syndrome. Furthermore, a positive history of psychoactive drug use, particularly current or past use of psychostimulants, neuroleptics, antidepressants, benzodiazepines and antiepileptic drugs, was also considered an exclusion criterion.

Randomization and Blinding
Randomization occurred after screening and baseline assessment (T0). Subjects were randomly assigned to the groups. The randomization process was performed using a computer algorithm by an independent operator not involved in the study. The pre- and post-treatment evaluations were conducted by two psychologists blind to group allocation.

Sample Size and Statistical Analysis
Due to four drop-outs (two children in each group), the final sample consisted of 36 children: mean age was 8.23 (SD 1.46) for the MCNT group (M/F = 8/10) and 8.22 (SD 1.26) for the SST group (M/F = 10/8). Due to the drop-out of the waiting list group and the consequent change in the study design, we performed a new a priori power calculation. We calculated the effect size on preliminary data from a separate sample of 45 children treated with MCNT and 47 with SST for the primary outcome measure (FSIQ) using G*Power version 3.0.10.
Results showed a mean difference between groups after treatment of eight points, with a standard deviation of 10, and a correlation among repeated measures of 0.3. For a given expected power of 0.82 and an effect size of 0.41, the estimated sample size was 36. Considering a 10% drop-out rate, the number of subjects required was 40.

Statistical analysis on the outcome measures was conducted using SPSS Statistics 24. All variables were tested for skewness and kurtosis to check for normality. An independent-samples t-test assessed baseline differences between groups for demographic and IQ data. A mixed factorial ANOVA, with type of intervention (MCNT and SST) as the independent variable and outcome measures (IQ, M-ABC, Vineland II, CBCL, and neuropsychological data) as the repeated measures, was carried out to assess the main effect of treatment (Time, T0 vs. T1) and differences in effectiveness between treatments (Time by Group interaction). Post hoc comparisons were carried out to test for simple main effects. Due to the small number of subjects included in the study, and to avoid missing possible effects, we applied a false discovery rate (FDR) correction according to Benjamini and Yekutieli (2001) to account for multiple comparisons. Moreover, due to the small number of subjects, we did not perform an intention-to-treat analysis for the missing data.

FIGURE 1 | CONSORT flow diagram of the RCT. *According to the Ethics Committee's recommendations, subjects in the no-treatment group could not be kept on the waiting list and deprived of the conventional treatment when the same treatment became available. Consequently, 14 children belonging to this group exited the study before the follow-up assessment. MCNT, Movement Cognition and Narration of the emotions Treatment; SST, standard speech therapy; TAU, treatment as usual.

Interventions
In our study, two types of interventions were carried out: MCNT, the experimental intervention, and SST, the treatment as usual. In Italy, SST is the only treatment offered by the National Health System for children with BIF, with the aim of improving their difficulties in learning and verbal comprehension. Both treatments were carried out at DGF in a hospital setting, lasted for 9 months, and involved regular meetings between the professionals, the families and the teachers of the children. Both treatments were also discussed during regular weekly meetings among professionals. MCNT was based on a multidimensional approach and children worked in small groups, while SST was focused on learning abilities and children worked one-to-one with the speech therapist. For a comprehensive description of both rehabilitative approaches, see the Study Protocol (Blasi et al., 2017a).

The Movement Cognition and Narration of the Emotions Treatment (MCNT)
Children worked in small groups (five to six children each), for 3 h each day, 5 days a week, Monday to Friday, for 9 months. To encourage cooperative learning within each group and to promote a degree of competition between groups, children were divided into three "teams" named Red, Blue, and Green for the whole duration of the intervention, according to their global functioning, grade and/or special educational needs.
The treatment consisted of: (1) a Movement Lab, to improve motor planning and fine and gross motor abilities with a Game Therapy approach using the Wii and Xbox video game platforms; (2) a Cognitive Lab, for the empowerment of executive functions such as working memory, planning abilities, problem solving, reasoning and language comprehension, with the use of the multimedia interactive whiteboard (MIW); and (3) an Emotion Lab, to learn how to narrate the emotions and help the child to cope with the experiences of her/his daily life.

The Movement Lab involved exercises aimed at improving balance, fine and gross motor abilities and hand-eye coordination, to make movements more fluid, economical, quicker and functional, as well as inhibition of impulsive motor responses, planning, praxic abilities and attention. For instance, the child used the Wiimote to point to a moving target to train attention and higher visual-motor integration, or played Wii Sports with the Wiimote and the Wii Balance Board to train balance and coordination of both upper and lower limbs. Moreover, Wii Music and Wii Party games were used to train rhythm, timing of movement and inhibition of impulsive motor behavior. Throughout, advanced executive functioning, such as planning competence, working memory and inhibitory control, was involved.

The Cognitive Lab aimed at promoting language comprehension and expression; executive functions such as deductive and inductive reasoning, working memory, planning and problem solving; attention and concentration; and inhibition of impulsive verbal responses. Moreover, children were encouraged to view each problem from multiple perspectives, examining possible alternatives, monitoring the decisional processes and building links among pieces of knowledge with explicit metacognitive strategies. For instance, to promote working memory the neuropsychologist used concrete daily tasks, such as thinking of all the sequential acts needed to prepare for an activity such as painting. Targeted cognitive stimulation was avoided for two reasons: (1) to avoid introducing a bias in the evaluation of the outcome by using tasks that could resemble those used in the assessment; and (2) to promote metacognitive strategies that are more easily fixed in long-term semantic as well as autobiographical memory and that can generalize to different contexts (Baddeley, 2013). Active participation in the activities was promoted through a cooperative learning approach in which children helped each other and were all responsible for the achievements of the group. Throughout the training, the neuropsychologist referred explicitly to the importance of effort and practice in improving their abilities and to the idea that intelligence is not a fixed entity but a malleable quality (Blackwell et al., 2007).

The Emotion Lab concerned emotions and social skills. The objective was to help children to express, recognize and cope with their own emotions (Blasi et al., 2017b). The underlying idea stems from the psychoanalytic model of Bion (1962), in which the comprehension of the emotional experience is central to the development of thought and to learning. The therapist, a psychologist with a psychotherapy degree, used different approaches to promote the narration of the emotions: symbolic play, reading, inventing and/or dramatizing a story, drawing and talking.
Treatment as Usual: Standard Speech Therapy (SST)
Standard speech therapy consisted of individual sessions of 45 min each, twice a week for 9 months. The focus was on the training of the academic abilities compromised in the child, as assessed by the evaluation at T0 (pre-treatment). To empower these skills, SST used both pencil/paper tools and specific rehabilitation software. In the event of dyslexia or dysorthographia, the main objectives of SST were to increase information processing speed and transcoding, reduce spelling mistakes, and expand personal vocabulary and text comprehension. For dyscalculia, images were used to aid reasoning and problem solving, as in the analogical method (Bortolato, 2014; Mehrnoosh and Fusi, 2016). The empowerment of transversal competences such as phonological competences, verbal comprehension, perception, visual-spatial ability, attention, memory and executive functions was also considered, together with the use of compensatory tools.

Assessment Design and Outcome Measures
All children were evaluated at two time points, within 2 months prior to the beginning of the treatment (T0) and within 2 months after the end of the treatment (T1). Primary outcome measures were: 1. the WISC-III (Wechsler, 2006), to measure intellectual functioning and evaluate the cognitive profile in light of the Verbal and Performance IQ; 2. the Movement Assessment Battery for Children (M-ABC) (Henderson and Sugden, 1992), for the assessment of motor skills; the test provides four scores for manual dexterity, ball skills, static-dynamic balance, and a total score; 3. the CBCL 6-18 (Achenbach and Rescorla, 2001, 2007; Achenbach, 2011), to evaluate a child's adaptive behavior and functioning as seen by the parents; the main scoring for the CBCL is based on eight syndrome scales from the DSM-5, grouped into two "broad band" scales, Internalizing problems and Externalizing problems, along with a Total problems score; the standard scores are scaled so that 50 is average for the youth's age and gender, with a standard deviation of 10 points, and higher scores indicate greater problems; 4. the Emotional Quotient Inventory-Youth Version (Bar-On and Parker, 2000), used at T0 for the evaluation of emotional competencies; data from this test, though, were not considered interpretable due to the difficulty encountered by children in the comprehension of the items, and for this reason we did not include the test in the post-treatment evaluation, since no statistical comparison between T0 and T1 could be performed; and 5. the Socialization Scale of the Vineland II (Sparrow et al., 2005), administered to assess social adaptive abilities.

Secondary outcome measures included: the Modified Bells Test (MBT) (Biancardi and Stoppa, 1997), a barrage test to assess visual scanning efficiency and visual selective attention; the Tower of London (TOL), to evaluate executive functions and specifically planning ability, strategy decision making and problem solving (Shallice, 1982; Fancello et al., 2006); from the Neuropsychological Evaluation Battery for developmental age 5-11 (BVN 5-11), the Speech Fluency tests using both phonological and semantic keys for verbal executive functions, the Selective Word Retrieval tests for short- and long-term verbal memory, and the Corsi test for visual-spatial short-term memory (Bisiacchi et al., 2005); and the Test of Reception of Grammar-2 (TROG2), to evaluate the comprehension of syntactically complex sentences (Bishop, 2003; Suraniti et al., 2009). The scores from all tests are calculated as Z-scores, with the exception of the TROG2, which is expressed in standard scores.
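As a minimal illustration of the analysis plan described in the Statistical Analysis section, the sketch below tests the Time by Group interaction for each outcome and then applies the Benjamini and Yekutieli FDR correction. For a two-level time factor, the interaction in a 2 x 2 mixed factorial ANOVA is equivalent to comparing the T1 - T0 change scores between groups with an independent-samples t-test. The authors used SPSS; scipy and statsmodels stand in here, and the simulated scores and outcome labels are illustrative placeholders rather than the study data.

import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_per_group = 18
outcomes = ["FSIQ", "Performance IQ", "Vineland Socialization", "CBCL Total"]

p_values = []
for name in outcomes:
    # Simulated T0 and T1 scores for the two groups (placeholders only).
    mcnt_t0, mcnt_t1 = rng.normal(80, 8, n_per_group), rng.normal(86, 8, n_per_group)
    sst_t0, sst_t1 = rng.normal(80, 8, n_per_group), rng.normal(81, 8, n_per_group)
    change_mcnt = mcnt_t1 - mcnt_t0
    change_sst = sst_t1 - sst_t0
    t_stat, p = stats.ttest_ind(change_mcnt, change_sst)   # Time by Group interaction
    p_values.append(p)

# Benjamini-Yekutieli FDR correction across the outcome measures.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_by")
for name, p, pa, r in zip(outcomes, p_values, p_adj, reject):
    print(f"{name}: p = {p:.4f}, FDR-adjusted p = {pa:.4f}, significant: {r}")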
RESULTS
Table 1 shows the baseline comparison between the two groups for age, SES, IQ, motor abilities, adaptive skills, and behavior. No significant differences between the two groups were detected for age, SES, baseline IQ, motor abilities or behavior, with the exception of the Socialization Scale of the Vineland II, on which children belonging to the MCNT group had significantly lower scores (p = 0.002).

We then proceeded with the factorial ANOVA to assess changes in the primary outcome measures throughout the study (Tables 2, 3). Overall, a significant time by group interaction was observed for the Full Scale IQ (p < 0.022) and the Performance IQ (p < 0.015), with significant post hoc pairwise comparisons for the MCNT group only (p < 0.001 in both cases). Moreover, the M-ABC evaluation did not show any significant effect in either group, while the Socialization Scale of the Vineland II showed a significant time by group interaction (p = 0.022), with post-treatment improvement in the MCNT group (p = 0.02). Finally, the factorial ANOVA comparing the effect of the treatments on the CBCL scores (Table 3) demonstrated significant time by group interactions for all CBCL scores: the Internalizing (p = 0.0016) and Externalizing problems scales (p = 0.0027), and the Total score (p = 0.0016). The pairwise post hoc analyses revealed a significant decrease (improvement) in the scores for the MCNT group (p = 0.01, p = 0.01 and p = 0.00 for the internalizing, externalizing and total scores, respectively), while the SST group had a significant increase (worsening) of the scores (p = 0.01; p = 0.03; p = 0.04).

Table 4 reports the data relative to the factorial ANOVA assessing the secondary outcome measures derived from the neuropsychological evaluations. The results showed a significant time by group effect only for the Corsi test (visual-spatial memory, p = 0.0148), with a significant post hoc pairwise comparison for the MCNT group (p = 0.05). For all other variables, no significant time by group effect was observed. A significant time effect was observed for short-term (p < 0.001) and delayed verbal memory (p < 0.001), immediate selective attention (Modified Bells Test rapidity, p = 0.0065), planning executive functions (Tower of London, p < 0.001), and grammar comprehension (TROG 2, p < 0.001); post hoc analyses revealed significant effects for both groups. Finally, for sustained selective attention (Modified Bells Test accuracy) a significant time effect was observed (p = 0.0022), with a significant post hoc effect only in the SST group (p < 0.001).

DISCUSSION
In this paper, we presented data from an RCT whose aim was to determine the effectiveness of an experimental intervention, MCNT, for the treatment of children with BIF, and to compare it with usual care. Children with BIF are at high risk of school failure and dropout. For these reasons, an effective intervention able to reduce the occurrence of these events is highly relevant. Results showed that children in the MCNT group had a significant improvement in their intellectual functioning while children in the SST group did not. This is the principal finding of this study, and it is likely due to the improvement in the performance skills that are strictly related to fluid intelligence, which is the capacity to reason in a creative way and to cope with new situations.
Moreover, the verbal component of the IQ, associated with crystallized intelligence, showed only a trend toward significance. These results are in agreement with the type of approach used in MCNT, which gave priority to reasoning and planning skills and was less focused on academic knowledge. In particular, the experimental treatment group received intensive training of cognitive abilities with special attention to metacognitive strategies, brainstorming techniques, and the elicitation of semantic associations to make conceptual links and improve long-term memory. Several studies have investigated the possibility of increasing fluid intelligence with targeted cognitive training, with controversial results. A meta-analysis on the topic showed effective changes in cognitive skills in adults (Au et al., 2015), while another claimed that working memory training produced only short-term effects that do not generalize to tasks remote from the trained ability (Melby-Lervag et al., 2016). The increase in the IQ scores observed in the present study cannot be attributed to any of these considerations, since we did not use targeted cognitive training but a metacognitive approach. Moreover, several pieces of evidence suggest that during development the role of the environment can be crucial, especially for children growing up in adverse social environments (Masten and Coatsworth, 1998; Repetti et al., 2002). Our data seem to support this evidence and underline the importance of intervening with effective approaches during childhood.

Moreover, the MCNT treatment included a set of motivational strategies, such as explicitly underlining the importance of effort and practice in improving their abilities, to support the children's self-efficacy. This approach promoted the motivational systems by making explicit that intelligence is not a fixed entity but a malleable quality that, for any given individual, can always be further developed. An individual's motivation toward achievement is shaped by his or her implicit theory of intelligence: conceiving of one's intelligence as a fixed entity is associated with a maladaptive tendency to perform actions to appear capable and avoid negative judgments, whereas conceiving of intelligence as a malleable quality is associated with a more adaptive attitude toward the learning goal of developing that quality (Blackwell et al., 2007).

Another peculiarity of the MCNT treatment was the promotion of cooperative learning, in line with Vygotsky's idea of the importance of learning through communication and interactions with others (Doolittle, 1997). In the MCNT intervention, one of the main objectives of the group setting was the involvement of all the children in the group's activities. To favor positive interdependence, each member was encouraged to participate in the activities according to his/her own strengths, and children could seek the help of the others. The group as a whole was responsible for the achievement of specific goals. This approach has been proven useful in promoting positive collaboration and social interactions, with greater academic achievement compared to individualistic learning (Johnson et al., 1990). Unfortunately, the complexity and interdependency of the many factors involved in the MCNT treatment make it difficult to determine which aspect was most efficacious in ameliorating the BIF condition.

In terms of motor skills, the results of this study showed no improvement for either group.
A possible explanation is the use of exergaming devices, which probably did not allow for optimal training of fine motor skills. These data indicate the necessity of reconsidering the activities of the Movement Lab.

According to the Psychodynamic Diagnostic Manual-2 (Lingiardi and McWilliams, 2015), all aspects of the mental functioning of the child (including the capacity for regulation, attention, and learning; the capacity for relationships and intimacy; and the capacity for affective experience, expression, and communication) are relevant for the development of the personality. In line with this perspective, MCNT was focused on multiple domains, and the improvement of children's emotional and relational competences was one of the main goals of the intervention. Our results demonstrated that the MCNT group improved significantly in terms of socialization abilities and behavior. Conversely, the SST group not only did not improve on either scale but also showed a worsening on the CBCL. It should be noted that children belonging to the SST group showed a higher baseline score for Socialization compared to the MCNT group; for this reason, we cannot rule out a ceiling effect in this group. Nevertheless, the data show a significant improvement in socialization skills and behavior in the experimental group, and this is highly relevant due to the importance of these abilities for academic achievement.

In the experimental intervention, the Emotion Lab was aimed at improving the relational skills of children by means of a better comprehension and narration of their own emotions in everyday experiences. Children were "trained" and helped to increase their emotional competence through a therapeutic intervention centered on the possibility of attributing an emotional meaning to experiences. Behavioral problems in children are often due to the inability to cope with very disturbing emotions and sensations that are not fully understood. Emotional competence is indeed inversely related to several anxiety-related disorders (Mathews et al., 2016). The idea was that taking care of the emotional-relational aspects of children with BIF and working toward the improvement of these skills might represent a protective factor against the risk of school failure and the development of psychopathology later in life. Our results are thus in line with several studies showing the value of mental state talk, mentalization, and symbolic play in emotional understanding, affect regulation, symptom remission and the decrease of disruptive behavior, all relevant elements for the clinical population considered in this study (Halfon et al., 2017; Gatta et al., 2019; Halfon and Bulut, 2019; Prout et al., 2019a,b).

Moreover, regarding the importance of the intensity of the treatment, two recent studies reported on the efficacy of two intensive interventions for young adults with BIF in the Netherlands: Assertive Community Treatment (ACT) and Flexible ACT (Neijmeijer et al., 2018, 2019). These treatments consisted of a wide range of supportive interventions such as psychological treatment, emotion regulation, somatic care, support regarding living arrangements, and so on.
Data showed the efficacy of these interventions over a longitudinal period of 5 years, during which patients had significant improvement in social and psychological functioning, in association with a decrease in the number of admissions to mental health care, the number of contacts with police and justice, and the number of behavioral disorders, although financial and employment problems persisted. Furthermore, a pilot study of a cognitive-behavioral group training of social abilities for adolescents with BIF showed promising positive results for social competences and social problem solving, with negative results on related cognitive domains (Nestler and Goldbeck, 2011). Data from these studies, in our opinion, support the idea that a multi-domain approach that also includes training of cognitive abilities is necessary for this vulnerable population. Finally, changes in specific cognitive abilities were observed in both groups. The lack of a no-treatment group does not allow us to make a final inference about these data, because factors other than treatment, such as maturation and test-learning effects, could be involved. Nevertheless, since all tests used were corrected for age, and given the long interval between pre- and post-treatment evaluations, we consider it plausible that the data reflect the effects of both treatments. In particular, children in the MCNT group improved in tasks exploring selective attention, visual-spatial short-term memory, verbal long- and short-term memory, verbal comprehension, and executive planning. These findings likely reflect the type of work that was done in the cognitive lab, which was centered on cognitive flexibility, memorization strategies, problem solving, verbal comprehension, planning, and executive functions. The SST group, working on learning abilities and transversal aspects such as attention, memory, and verbal comprehension, also showed improvement in sustained selective attention, verbal short- and long-term memory, as well as verbal comprehension and executive planning. Given the broad influence that verbal comprehension has on virtually all cognitive abilities, both treatments trained children in this aspect. In the SST group, improved cognitive abilities were not coupled with changes in adaptive/behavioral skills. Although our study involved only a small sample of subjects, this finding suggests that focusing only on cognitive performance in this population is not sufficient to prevent behavioral, social, and mental problems. Because of the high level of stress and adversity that children with BIF face in their school, family, and social life, they are at much greater risk of developing a problematic personality profile, with a consequent risk of psychopathology, which highlights the need for them to be properly supported in their emotional and relational needs. The present study has some major limitations. The first relates to the lack of a no-treatment group, which prevents us from distinguishing between treatment effects and potential biases related to children's maturation or learning effects. The second limitation concerns the different intensity of the two treatments. It is possible that some of the changes observed were due to this bias. Considering the precise domains in which the improvements occurred in each group, and the worsening observed in the behavior of the SST group, it is unlikely that treatment intensity can explain all the changes that we observed.
In particular, children in the SST group, despite the lower intensity of the treatment, did show improvement in all the abilities that were trained, such as verbal memory, verbal comprehension, selective attention, and executive planning, whereas no improvements were observed for visual-spatial memory, IQ, and behavior, which were not trained. These results are not conclusive evidence, but they do suggest that intensity of training alone cannot explain all the differences observed. Another significant limitation is the small number of participants, which prevents generalization of the present data to the whole population of children with BIF. Larger-scale studies will be necessary to further explore the efficacy of the MCNT approach in the treatment of children with BIF, also in the long term. CONCLUSION Considering the poor long-term prognosis of children with BIF, with educational and vocational failures and the risk of developing psychopathology, we consider our data highly relevant, as they demonstrate the possibility of improving competences at multiple levels with an intensive and integrated training. Although additional studies with a long-term follow-up will be necessary, we hypothesize that the improvements obtained after MCNT might represent a protective factor able to reduce the risk of poor outcome. Indeed, improving fluid intelligence and emotional/behavioral competencies is likely to enhance the ability of children with BIF to cope with their everyday challenges in school, family, and social contexts, promoting resilience (Goldman et al., 2016). The results of the present study support the implementation of multimodal, intensive, and timely rehabilitation interventions in children with BIF. Cost-efficacy analyses will be necessary to determine the feasibility of incorporating this approach into the healthcare provided by the national health system. These analyses should also consider the high risk that children with BIF will develop mental and physical health problems and experience poverty. DATA AVAILABILITY STATEMENT The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethics Committee of the IRCCS Fondazione Don Carlo Gnocchi Onlus and the Ethics Committee of the ASST S. Paolo and S. Carlo Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS MZ, FB, MC, and VB conceived the study and wrote the manuscript. GB, VB, AG, MW, and MZ executed the study and MPC helped with implementation. SD helped with statistical analyses. All authors contributed to refinement of the manuscript and approved the final content. FUNDING This study was funded by Regione Lombardia (to VB, Ricerca Indipendente, 2014-2017) and by the Ministry of Health (Ricerca Corrente 2018-2020). The funders did not have any influence on the design, implementation, analysis, or interpretation of the data in this study.
The Determination and Practical Application of the Kinetic Constants of Rock Destruction with a Modified Zhurkov Formula. Any kind of destruction is bound to microcracking. One theoretical model of microcracking is the kinetic concept of strength (KCS) of S.N. Zhurkov. This theory relates the time to failure to the activation energy and the activation volume in a single formula. However, there are modes of destruction for which this formula is incorrect. In this article we present a new modification of Zhurkov's formula, and we describe two possible practical applications of the kinetic constants of destruction. Introduction The problem of durability is important for all types of mining. Early deterioration and destruction of the structures in use can lead to technogenic accidents and human casualties. One such technical facility is the automobile roads of mining enterprises, whose quality is very important for the technical and economic efficiency of the enterprise. The quality of the roads of enterprises extracting minerals by open-pit methods is especially important. This is caused by difficult climatic conditions: rainfall, temperature drops, long winters, and mechanical impacts. In underground mining and seismology, very important problems are forecasting the energy and location of the rock burst source and the duration of source formation. These tasks are related to solving a direct problem about the electric field of a rock burst source or technogenic earthquake in the atmosphere. One of the methods for calculating road durability or the duration of source formation is based on S.N. Zhurkov's formula [1]. This formula was obtained as a generalization of experimental data on the destruction of laboratory samples under tension. However, real loads can be both compressive and tensile and, in the case of a pavement, compressive loads dominate. Therefore, calculating durability and the kinetic constants of durability directly from Zhurkov's formula is not justified. Currently, there are several practical applications of the KCS and modifications of the basic Zhurkov formula [2-15]. In the following article we 1) present a new modification of S.N. Zhurkov's formula, based on experimental data on the destruction of composite and rock samples, and 2) describe two possible practical applications of the kinetic constants of destruction. 2 Results and discussion 2.1 The existing formulas and their modifications The KCS of S.N. Zhurkov et al. divides the deformation process of materials into two stages: the chaotic, uncorrelated creation of microcracks and the formation of the main rupture crack. The transition from the first stage to the second occurs when the breaking criterion is met, which is given by the formula [1]

n^(-1/3) <= e * l, (1)

where n is the average microcrack concentration, m^-3; l is the average microcrack linear size, m; and e ≈ 2.72 is the average distance between cracks in a unit volume of the sample, expressed in multiples of their average size l. S.N. Zhurkov et al. [1] showed that the durability of the first stage can be calculated as

tau = tau0 * exp((U0 - Omega * sigma) / (k * T)), (2)

where tau0 is a typical period of atomic fluctuations, s; Omega is the stress sensitivity factor, m^3; U0 is the zero-stress activation energy, J; k is the Boltzmann constant, J/K; T is the absolute temperature, K; and sigma is the average external stress, Pa. Further research showed that formula (2) is not universal. For example, for elastomers the time to fracture is better described by the expression proposed in [9] (formula (3)), in which E is the Young elastic modulus, Pa.
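To make the use of formula (2) concrete, the short sketch below evaluates Zhurkov's expression for the time to failure at a few stress levels. It is only an illustration: the numerical values of tau0, U0, omega, and T are placeholder assumptions chosen for plausibility, not constants reported in this paper.

```python
import numpy as np

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def zhurkov_durability(sigma, tau0=1e-13, U0=2.0e-19, omega=1.0e-28, T=293.0):
    """Time to failure from formula (2): tau = tau0 * exp((U0 - omega*sigma) / (k*T)).

    sigma : average external stress, Pa
    tau0  : typical period of atomic fluctuations, s  (placeholder value)
    U0    : zero-stress activation energy, J          (placeholder value)
    omega : stress sensitivity factor, m^3            (placeholder value)
    T     : absolute temperature, K
    """
    sigma = np.asarray(sigma, dtype=float)
    return tau0 * np.exp((U0 - omega * sigma) / (K_BOLTZMANN * T))

# Durability drops sharply as the applied stress grows.
print(zhurkov_durability([0.5e9, 1.0e9, 1.5e9]))
```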
For polymers and annealed metals, the dependence of durability on load can be calculated by expression (4) from [10], where P* is a dimensional adjustable value without a physical meaning, Pa, and the value n = n(U0, k, T). We can see that 1) all three formulas (2, 3, 4) contain a dependence on the activation energy, which implies the use of the Gibbs distribution, and differ in the way they evaluate the effect of stress changes on the sample; 2) dependences (3, 4) have a similar form, but they differ in principle in the physical interpretation of the parameters they contain. Let us determine the relationship between the time tau and some parameter that can be registered and measured. Since the largest defects (microcracks) emit electromagnetic pulses (EMP), the number of pulses that can be registered can be considered as the value N. In [3], the authors formulated a kinetic model (5) for the accumulation of microcracks, including the rate of crack formation, the Bailey destruction irreversibility condition, and the concentration fracture criterion (1), where Nmax is the maximum number of cracks that accumulates in the sample at the moment of its destruction; this model allows us to describe the accumulation of structural damage when the effective stresses and temperatures depend arbitrarily on time. To determine the dependence of the number of pulses N on the stress value, we integrate (5) under the condition of a linear increase in the stress; in the case of dependence (2) we obtain expression (6), and in the case of dependence (3) we obtain expression (7). Between 1990 and 2000, a team of scientists from Kuzbass State Technical University carried out experiments on the controlled destruction of composite and rock samples with simultaneous measurement of the EMP counts. In those experiments, phenoplast, textolite, limestone, hornstone, and quartz diorite samples were investigated. Plots of the dependence of the accumulated number of pulses on the mechanical stress are shown in Figures 1-10. Let us test the above formulas (6, 7) on these data sets. The unknown parameters alpha and beta in dependences (6, 7) are found using the least squares method, a numerical implementation of which, based on evolutionary algorithms, is contained in the NLPSolver add-in of the free open-source OO Calc table processor. The accuracy of the data fitting is diagnosed by the determination index

R^2 = 1 - sum_k (N_k - N_k^T)^2 / sum_k (N_k - N_a)^2,

where N_k are the experimental data, N_k^T are the theoretical values (computed from (6, 7) above and from (11) below), and N_a is the average of the experimental data. The parameter values alpha, beta and the determination index R^2 were calculated for the given ten samples. The results of the calculations are given in Table 1. Let us draw some conclusions based on the data in Table 1. 1) The values of parameter beta, calculated from the two different formulas (6, 7), coincide in order of magnitude and, moreover, fall into a common interval. 2) The value of parameter alpha, calculated by formulas (6, 7), is negative for five rock samples and one composite sample. All of these values are physically meaningless. 3) The value of E, which in [9] had the meaning of the Young elastic modulus, is a dimensional "fitting" parameter with an unclear physical meaning in this case. New modification of S.N. Zhurkov's formula. Let us modify formula (3); the dependence of the number of pulses (cracks) on the stress is then converted to the simpler form (11). Let us calculate the parameters alpha, beta, E using the NLPSolver add-on and present them, together with the kinetic constants of destruction, in Table 2.
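As an illustration of the fitting procedure described above (least squares with the determination index), the sketch below fits a pulse-count model N(sigma) to synthetic data and reports R^2. The exponential form used here is only a stand-in, since the exact dependences (6), (7), and (11) are not reproduced in the extracted text; the actual formula should be substituted when applying this to the measured EMP data.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse_count_model(sigma, alpha, beta):
    # Placeholder form; replace with the actual dependence (6), (7) or (11).
    return alpha * (np.exp(beta * sigma) - 1.0)

def fit_and_score(sigma, n_observed, p0=(1.0, 1.0)):
    """Least-squares fit of the pulse-count model plus the determination index R^2."""
    params, _ = curve_fit(pulse_count_model, sigma, n_observed, p0=p0, maxfev=20000)
    n_theory = pulse_count_model(sigma, *params)
    ss_res = np.sum((n_observed - n_theory) ** 2)
    ss_tot = np.sum((n_observed - np.mean(n_observed)) ** 2)
    return params, 1.0 - ss_res / ss_tot

# Synthetic data standing in for the accumulated EMP counts versus relative stress.
rng = np.random.default_rng(0)
sigma = np.linspace(0.1, 2.0, 20)
n_obs = 5.0 * (np.exp(1.3 * sigma) - 1.0) + rng.normal(0.0, 2.0, sigma.size)
(alpha, beta), r2 = fit_and_score(sigma, n_obs)
print(f"alpha={alpha:.2f}, beta={beta:.2f}, R^2={r2:.3f}")
```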
We can see that 1) alpha takes a positive value for all samples; 2) beta again falls into the same interval; and 3) R^2 is larger than in Table 1. Some possible applications. Let us introduce two possible applications of the formulas obtained. The first application is the evaluation of the frost resistance of a material under cyclic freezing and the calculation of the number of cycles to full destruction [16]. Let us assume that when the samples are frozen and thawed, the temperature and the internal microstresses vary linearly. The number of cycles to full destruction N_D of the samples can then be calculated from the corresponding equation, where N_1 is the number of microcracks formed in the first cycle and N_max is the maximum number of microcracks before full destruction. The value N_1 can be calculated from (5, 11) [16]. The second application is the calculation of an apparent density of electric currents in geoelectrics and geomechanics [17]. It is known that the rock burst source generates changes in mechanical stresses and hence a quasi-stationary electric field. Let us assume that the source is located within the rock mass deep below the earth's surface. We can calculate the 3D current density j_3D, A·m^-3, in it by the formula from [17], where L is the linear size of the formed microcracks, m, and dN/dt is calculated from (5).
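A minimal sketch of the frost-resistance application follows. The relation N_D ~ N_max / N_1 is an assumption made here only for illustration (the paper's equation for N_D is not recoverable from the extracted text); it corresponds to microcracks accumulating at a roughly constant rate of N_1 per freeze-thaw cycle, and all numbers are invented.

```python
def cycles_to_destruction(n_first_cycle, n_max):
    """Number of freeze-thaw cycles to full destruction, assuming roughly N_1 new
    microcracks accumulate per cycle so that N_D ~ N_max / N_1 (illustrative only)."""
    return n_max / n_first_cycle

# Illustrative values: N_1 would come from (5, 11) for one temperature cycle,
# N_max from the concentration criterion (1).
print(cycles_to_destruction(n_first_cycle=2.0e4, n_max=1.0e6))  # -> 50.0
```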
Characterization of immune responses to two and three doses of the adenoviral vectored vaccine ChAdOx1 nCov-19 and the whole virion inactivated vaccine BBV152 in a mix-and-match study in India Graphical abstract Introduction A number of mRNA, adenovirus-vectored and inactivated SARS-Cov-2 vaccines were found to provide protection against severe SARS-CoV-2 illness caused by the Wuhan strain [1][2][3][4].However, as immunity waned and the virus mutated, breakthrough infections were seen, particularly with the Delta and Omicron variants [5][6][7][8][9][10] and the World Health Organization (WHO) recommended the administration of booster doses [11,12].While most countries used homologous boosters, a few studies indicated that heterologous boosters might be more effective [13][14][15][16].For instance, the highest antibody and T-cell responses were elicited in ChAdOx1 and Ad26.COV2-S vaccine recipients when they were boosted with an mRNA vaccine [13,14].Similarly, superior humoral responses against the ancestral Wuhan strain and variants of concern (VOCs) were seen when individuals vaccinated with the inactivated Coro-naVac vaccine were boosted with a heterologous mRNA or viralvectored vaccine than with the homologous vaccine [15,16]. In India, two vaccines were mainly used for primary immunization-the adenoviral vectored vaccine ChAdOx1 nCoV-19 (Covishield TM , Serum Institute of India, henceforth described as ChAd) which encodes the SARS-CoV-2 Spike glycoprotein given as two doses 4-12 weeks apart, and the inactivated viral vaccine BBV152 (Covaxin Ò , Bharat Biotech, henceforth described as BBV152) adjuvanted with Alhydroxiquim-II given as two doses 4 weeks apart.Effectiveness studies carried out during the Delta wave indicated that protection from severe illness was 80 % for ChAd and 69 % for BBV152 [17,18].As effectiveness of BBV152 dropped further with the emergence of the Omicron variant [19], and in accordance with the WHO guidelines, the government recommended that a homologous booster dose be given 9 months after the initial vaccination, beginning initially with people over the age of 60 [20]. To evaluate the reported superiority of heterologous boosters, we undertook a phase 4 randomized study to determine the safety and immunogenicity of homologous versus heterologous booster doses in 200 individuals who were already vaccinated with 2 doses of ChAd and 204 individuals vaccinated with 2 doses of BBV152.We reasoned that if both vaccines were effective at generating a recall response, this would allow for greater flexibility in case a specific vaccine was unavailable for the homologous booster dose.The results from the study indicated that while both boosters were safe, the ChAd booster elicited a higher secondary antibody response with greater binding and neutralizing antibodies, especially when BBV152 primary vaccination was followed by ChAdbooster [21].As recall responses depend on the stimulation of memory cells generated by antigen encounter, peripheral blood mononuclear cells (PBMCs) from a subset of individuals in the study were used to measure antigen-binding B cells and functional T cell responses before and after the booster dose.We now report our findings on the cellular arm of the immune response, the trajectory of the boosted antibody response at 6 months and an extended evaluation of neutralizing antibody responses to include more variants of concern and Omicron sub-lineages. 
Participants This study was conducted on a subset of participants in our phase 4 randomized trial (21).In brief, participants who were already vaccinated with two primary doses of ChAd (ChAd recipients) or BBV152 (BBV152 recipients) were recruited and randomized to receive either ChAd or BBV152 booster, 12-36 weeks after their second dose.As described previously (21), ChAd recipients were enrolled from 30th August 2021 to 30th September 2021, whereas BBV152 recipients were recruited from 7th September 2021 to 19th January 2022 (Fig. S7).As the Omicron wave at the study site (Vellore, Tamil Nadu) started towards the end of December 2021, an approximate 2 weeks after the emergence of the Omicron wave i.e., 5th January 2022, was used to estimate the proportion of participants for whom day 0 and day 28 samples were collected during the Omicron wave.Thus, in this nested study, all the ChAd recipients completed their day 0 and day 28 visit before the Omicron wave (Table 1).Among those vaccinated with the BBV152 for the primary series, 31 % (11/36) had completed day 0 and day 28 prior to 5th January 2022, these included 4 individuals boosted with BBV152 and 7 boosted with ChAd.Around 38 % (14/36) had completed their day 0 before the Omicron wave while day 28 sampling was during the Omicron wave for 6 individuals in the BBV152 boosted arm and 8 in the ChAd boosted arm.The remaining 31 % (11/36) had both day 0 and day 28 visits during the Omicron wave of which 7 were boosted with BBV152 and 4 with ChAd (Table 2). Samples Cellular (B and T cell responses) and humoral responses to SARS-Cov-2 were measured at 12-36 weeks after 2 primary doses (pre-booster, day 0) and 28 days after the booster dose (day 28) on a subset of samples (where a minimum of 20 million PBMCs was available) from our earlier antibody-profiling study [21].Thus, we used PBMCs of 36 individuals who had received 2 doses of BBV152 and 37 individuals who had received 2 doses of ChAd, (a subset of 204 and 200, respectively, of the parent study) for cellular B-and T-cell assays.There were four groups depending on the primary and booster doses they received: Group 1 -(ChAd-ChAd, two primary doses of ChAd followed by ChAd booster, n = 16) Group 2 -(ChAd-BBV152, two primary doses of ChAd followed by BBV152 booster, n = 21) Group 3 -(BBV152-ChAd, two primary doses of BBV152 followed by ChAd booster, n = 19) Group 4 -(BBV152-BBV152, two primary doses of BBV152 followed by BBV152 booster, n = 17) (a subset of 99, 101, 102 and 102 individuals, respectively, in the parent study).In addition, we measured the humoral responses at day180 after booster in all participants from the parent study.Thus, after loss to follow up, the numbers in each arm at day180 were: ChAd-ChAd: 95; ChAd-BBV152: 96; BBV152-ChAd: 93 and BBV152-BBV152: 94.The study was approved by the Institutional Review Board and Ethics Committee of the Christian Medical College, Vellore. Collection of PBMCs and plasma Whole blood was spun at 200Âg for 10 min to separate plasma from cells.Plasma was then centrifuged at 1000Âg for 10 min to remove platelets and the supernatant was stored at À80 °C till needed for antibody assays.PBMCs were isolated on a Ficoll-Paque density gradient medium (Histopaque-1077, Sigma), washed with 1X PBS, and resuspended in 90 % FBS/10 % DMSO (D2650, Sigma Aldrich USA).Cells were stored in liquid nitrogen in aliquots of 5-10 million per vial and thawed as needed for experiments. 
Enumeration of RBD-binding B cells: RBD-binding memory B cell frequencies were determined before booster (day 0) and day 28 after the booster using the Miltenyi RBD B cell memory kit (130-128-032) according to the manufacturer's instructions.Briefly, biotinylated-RBD was pre-incubated for 15 min at room temperature (RT) with streptavidin-Phycoerythrin (PE) and streptavidin-PE-Vio 770.This was then added to 10 million thawed PBMCs along with CD38 BV711(BD) and a kit containing a cocktail of 7aminoactinomycin D (7-AAD), anti-CD19 APCVio770, CD27 VioB-right Fluorescein isothiocyanate (Vio Bright FITC), IgM Allophycocyanin (APC), IgA VioGreen, IgG VioBlue in Phosphate Buffer Saline (PBS) containing 0.5 % Bovine Serum Albumin (BSA) and 2 mM Ethylenediaminetetraacetic acid (EDTA) (for 30 min at 4 °C.A minimum of 4 x10 6 cells were acquired, and memory B cell frequencies were expressed as the frequency of RBD + cells within the memory B cell pool (CD19+ CD38À CD27+ RBD+).As with T cells, PBMCs from a volunteer was run with each batch and gates were set using this internal control sample.All flow cytometry data were analysed using FlowJo, LLC (version 10.8.1) and the gating strategies for T and B cells are shown in Fig. S1. Activation of SARS-Cov-2 specific T cells Cryopreserved PBMCs were thawed, washed, counted and rested for 4 h in glutamine-containing RPMI-1640 medium (72400047, Gibco -Thermo Fisher) supplemented with 10 % FBS and 1 % penicillin/streptomycin (complete medium) at 2x10 6 /ml in 6-well plates.They were then harvested and replated at 1.5 Â 10 6 /ml and pre-incubated with 1 lg/ml of anti-CD40 (130-094-133, Miltenyi Biotec, Germany) for 15 min at room temperature (RT) before adding 1 lg/ml each of peptide pools spanning Membrane, Spike and Nucleocapsid regions of the Wuhan strain (Miltenyi) and 5 lg/ml of protein transport inhibitor cocktail (00-4980-93, eBiosciences).No peptides were added in negative control wells.For the positive control, 1x10 6 cells were stimulated with 5 lg/ml PHA (L9379, Sigma Aldrich USA) without anti-CD40 pre-incubation.Stimulation was for 16 h at 37 °C in a 5 % CO 2 incubator.T cell cross-reactivity to the Delta variant was tested by stimulating PMBCs, as above, with 1 lg/ml each of Peptivator mutant Spike corresponding to the mutated region of the Spike protein in Delta and reference Spike pools that has corresponding region in the ancestral Wuhan strain (DS and DR peptides, Miltenyi).All cultures were done in U bottom plates in a final volume of 200 ll.PBMCs from a volunteer who showed good CD4 and CD8 responses were stored in aliquots and run with each assay to control for batch effects (Table S1).SARS-CoV-2 peptide pools are described in Table S2. 
Enumeration of SARS-Cov-2 specific T cells For enumeration of memory T cells, cultured PBMCs were washed with 1X PBS, and stained with Live/Dead Fixable Viability stain 780 at 4 °C for 30 min.They were incubated with a cocktail containing anti-CD3, CD4, CD8, CD19, CD14 and CD56 in staining buffer containing 2 % FBS for 30 min at 4 °C to stain surface markers.Cells were then fixed and permeabilized using Cytofix/Cytoperm (554714, BD biosciences) buffer for 20 min at 4 °C and followed by treatment with a cocktail containing anti-IL2, IFNg, TNFa, CD69, CD154 and CD137 in 1X Perm/Wash and Brilliant stain Buffer plus.All above reagents were from BD Biosciences (Table S3).Cells were run on a FACS Symphony (A3) or FACS ARIA III and 5-7 Â 10 5 cells were acquired.Results are expressed as frequencies of Activation Induced Marker+ (AIM+) and IL2+/IFNg+/ TNFa+ cells on gated CD4 (CD19-CD14-CD56-CD4+) and CD8 cells (CD19-CD14-CD56-CD8+) after subtracting frequencies of AIM/cytokine positive cells in matched unstimulated controls.AIM+ cells were identified as CD69+ CD154+ (for CD4) and CD69+ CD137+ (for CD8).Negative values obtained after subtracting the background were changed to zero.In some figures, the frequency of cells positive for any one cytokine is shown. Statistical analysis Data were analyzed by Mann-Whitney U test for unpairedsample comparisons and Wilcoxon signed Rank test for pairedsample comparisons.Data with (multiple comparisons were analyzed using Kruskal-Wallis with post-hoc Dunn's correction.All statistical analyses were performed using GraphPad Prism (version 9.4.1). CD4 T cell responses, measured as frequency of AIM+ cells or cytokine+ cells, following stimulation with peptide pools comprising a 1:1:1 mixture of Spike, Membrane and Nucleocapsid peptide pools, were similar in both ChAd and BBV152 vaccinees (Fig. 2).In the CD8 compartment, TNFa+ CD8 cell frequencies were higher after two doses of BBV152 with median (IQR) values of 0.069 % (0.015-0.18) for BBV152 and 0.04 % (0-0.1) for ChAd (p = 0.04) (Fig. 2).There was no correlation between anti-spike antibody amounts and CD4 cytokine responses in ChAd-vaccinees, and only weak correlation (r = 0.48) in BBV152 vaccinees (Fig. S2) indicating that the higher primary antibody response to ChAd is not dependent on better CD4 priming.Thus, ChAd generates a higher humoral response whereas BBV152 vaccination generates higher memory B cells and CD8 cells with cytotoxic potential.When T cell cross-reactivity to delta variant was assessed, the CD4 T cell response (both cytokine and AIM+) to the reference peptide pool DR was low, as expected, (median % 0.01 in ChAd vs 0.01 in BBV152) and the response generated to the mutant peptide pool DS was comparable between both ChAd and BBV152 primary vaccination (median % of AIM+ CD4 cells is 0.015 vs 0.026 and for cyto-kine+ CD4 cells is 0.02 vs 0.01 in ChAd vs BBV152 respectively).The results are shown in Fig. 7. 
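The post-processing described above (subtraction of the matched unstimulated control with negative values set to zero, followed by non-parametric tests) can be sketched as follows. The column names and frequencies are hypothetical and serve only to illustrate the analysis steps; they are not data from the study.

```python
import numpy as np
import pandas as pd
from scipy import stats

def background_subtract(stimulated, unstimulated):
    """Antigen-specific frequency = stimulated minus matched unstimulated control,
    with negative values set to zero, as described in the Methods."""
    return np.clip(np.asarray(stimulated) - np.asarray(unstimulated), 0.0, None)

# Hypothetical per-participant frequencies (% of CD4), for illustration only.
df = pd.DataFrame({
    "group":  ["ChAd"] * 4 + ["BBV152"] * 4,
    "stim":   [0.12, 0.30, 0.08, 0.22, 0.25, 0.40, 0.15, 0.33],
    "unstim": [0.05, 0.10, 0.09, 0.06, 0.07, 0.12, 0.05, 0.08],
})
df["aim_specific"] = background_subtract(df["stim"], df["unstim"])

# Unpaired comparison between vaccine groups (Mann-Whitney U test).
chad = df.loc[df["group"] == "ChAd", "aim_specific"]
bbv = df.loc[df["group"] == "BBV152", "aim_specific"]
print(stats.mannwhitneyu(chad, bbv, alternative="two-sided"))

# Paired pre-/post-booster comparisons would use stats.wilcoxon, and comparisons
# across the four booster arms stats.kruskal followed by a post-hoc Dunn test.
```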
Humoral immune responses to homologous or heterologous booster doses (day 28 post booster) To see the effect of a third vaccine dose on recall responses, anti-spike IgG amounts and neutralization potential were evaluated 28 days after booster as a part of the larger cohort analysis [21].In the subset of samples analyzed in this study, anti-spike antibodies were higher after ChAd booster: median (IQR) of 122,727 (90,389-236,724) in the Group 1 (ChAd-ChAd) and 398,451 (230,557-5089,95) in Group 3 (BBV152-ChAd), in comparison to the BBV152 booster: with the median (IQR) of 49,215 (23,289) in the Group 4 (BBV152-BBV152) and 47,519 (17,714) in the Group 2 (ChAd-BBV152) (Fig. 3A).The highest fold-increase of 64 was seen when BBV152 was followed by ChAd booster (Group 3).These results go in hand with the larger study data [21].Thus, memory B cells generated in response to the 2 doses of inactivated vaccine BBV152 (Fig. 1B) are recalled poorly with a third dose of the same vaccine, but efficiently with the heterologous ChAd booster; the correlation plots are shown in Fig. 3C. We further assessed for the neutralization potential of antibodies against Omicron and its sub-lineages and the results for the whole cohort are shown in Fig. 5, Fig. S6 and Table S7.Higher inhibition of all Omicron sub-lineages was observed in the Group 3 (BBV152-ChAd), while lesser cross-neutralization of Omicron and its sub-lineages was seen in the other three groups and is in keeping with the previous reports [22,23] [21].In general, anti-Spike IgG amounts correlated with surrogate neutralization potential.The results for Wuhan, Alpha, Delta, Gamma and Omicron strains are shown in Fig. 6 and a similar pattern was seen for various Omicron sub-lineages (data not shown). Cellular immune responses to homologous or heterologous booster doses (day 28 post booster) Similar to antibody amounts, RBD-binding memory B cells also showed higher frequencies in the two vaccine arms that were boosted with ChAd than those that were boosted with BBV152 (Fig. 3B).We also compared memory B cell frequencies before and after the booster dose.We found that the RBD-binding memory B cell frequencies increased marginally in both groups that were boosted with ChAd, with median frequencies going up from 0.41 to 0.51 % in the Group 3 (BBV152-ChAd), (p = 0.0035), and from 0.35 to 0.49 in the Group 1(ChAd-ChAd), (p = 0.0021) (Fig. 3).On the other hand, no increase in memory B cell frequencies was seen in either arm boosted with BBV152.When Ig isotypes were examined, no major shift in the proportion of any isotype was seen in individuals vaccinated with ChAd and boosted with either ChAd or BBV152.However, in individuals vaccinated with BBV152, the proportion of IgM-memory B cells decreased while the proportion of IgG-memory B cells increased after both homologous and heterologous boosters.Thus, IgM memory cells generated by BBV152 vaccination (Fig. 1) can be stimulated to undergo isotype switching and contribute to the maintenance of memory B cells, especially when boosting is with a heterologous vaccine.SARS-CoV-2 specific CD4 and CD8 T cell responses were also measured 28 days after the booster dose by estimating the frequency of AIM+ and cytokine+ cells in cultured cells.The data are shown in Fig. 4 as median (IQR) frequencies.While CD4 responses were similar in the four arms, CD8 responses were higher in the Group 1(ChAd-ChAd), with AIM+ cells being more abundant in this arm than in the Group 2 (ChAd-BBV152) and the Group 4 (BBV152-BBV152) (Fig. 
4). The frequencies of stimulated T cells before and after booster are shown in Figs. S4-S5 and Tables S5, S6. Effect of Omicron wave on immune responses to primary and at day 28 following different booster regimes As mentioned earlier, among the participants vaccinated with two doses of BBV152 as the primary series, around 31 % (11/36) had their day 0 sampling and 69 % (which includes 12/19 in Group 3 and 13/17 in Group 4) had their day 28 sampling after the start of the Omicron wave. Therefore, to assess the effect of the Omicron wave on immune responses to the primary regimes (ChAd vs BBV152), data were analyzed after excluding 11 individuals from the BBV152 recipients whose day 0 samples were collected after the start of the Omicron wave. Even after this exclusion, the results for the primary regimes remained unaffected (data not shown). However, at day 28 post booster, a similar analysis excluding the samples collected during the Omicron wave was not done, as the exclusion resulted in a further reduction in the number of samples in each arm (7 samples remaining in Group 3 and 4 samples in Group 4). Instead, their anti-nucleocapsid IgG levels (a surrogate of SARS-CoV-2 infection) were compared with those of the other two groups (Groups 1 and 2), for which day 28 sampling was completed prior to the Omicron wave (Fig. S8). The anti-nucleocapsid IgG level in Group 3 (BBV152-ChAd) (median; IQR of 7,166 AU/ml; 1,933-16,424) was not significantly elevated compared to the groups where sampling was prior to the Omicron wave (median; IQR in Group 1 (ChAd-ChAd): 823.4 AU/ml; 419.5-4,760; in Group 2 (ChAd-BBV152): 2,656 AU/ml; 214.2-19,991). This suggests that the Omicron wave may not have had a major impact on the immune responses in Group 3 (BBV152-ChAd). However, those vaccinated with the BBV152-BBV152 regime showed significantly higher anti-nucleocapsid IgG concentrations, with a median (IQR) of 63,865 (23,131-214,108) AU/ml, in comparison to the groups which completed their day 28 visit before the start of the Omicron wave (Group 1 and Group 2; medians given above). But, as BBV152 is a whole virion inactivated vaccine which itself can mount responses against the nucleocapsid, we cannot distinguish between the humoral responses induced by vaccination and infection. Durability of the boosted humoral response at day 180 post booster The data from our previous study [21], and from the subset of samples analyzed in this study, showed that BBV152 primary vaccination followed by a ChAd booster (Group 3) generated the highest anti-spike IgG and % ACE-2 inhibition levels at day 28 post booster. Thus, in order to assess their persistence over time, we measured anti-spike IgG and % ACE-2 inhibition levels 180 days after the booster dose in all the participants. We found that a marked decay occurred in this group, from 286,685 (164,549-421,602) on day 28 to 141,451 (65,714-305,962) (p < 0.0001) on day 180. In Groups 1 and 2, IgG increased, going up from 106,203 (62,394-179,838) to 180,164 (93,670-408,323) (p < 0.0001) in Group 1 (ChAd-ChAd) and from 36,077 (19,395) to 187,796 (76,079-365,507) (p < 0.0001) in Group 2 (ChAd-BBV152). The lowest concentration of 78,581 (35,261) was seen at day 180 in Group 4 (BBV152-BBV152), going up marginally from 52,274 (26,987-107,148) on day 28 (p = 0.0172). The results are shown in Fig. 8.
Interestingly, the neutralization potential of antibodies at this time was quite different from what was seen on day 28.In Group 1 (ChAd-ChAd) and 2 (ChAd-BBV152), percent ACE-2 inhibition of all strains tested increased, while in Group 3 (BBV152-ChAd) and Group 4 (BBV152-BBV152) there was a decline (Figs. 9 and 10, S6, Table S8) and the lowest inhibition seen, once again, in Group 4 (BBV152-BBV152).It is possible that the increase in the spike IgG amounts and the increased percent ACE-2 inhibition in Group 1 (ChAd-ChAd) and Group 2 (ChAd-BBV152), between day 28 and day180 was because of the Omicron wave which occurred just before the day180 visit (Fig. S7).We used anti-nucleocapsid IgG as a surrogate to capture natural infection with SARS CoV 2 during the study period.On stratified analysis based on the increase or decrease of anti-nucleocapsid IgG levels from day 28 to day 180, we observed a considerable decay in the anti-spike IgG in those who were not infected in these groups and a substantial increase in those who were infected (Table S4). Discussion When this study was initiated, the adenoviral vectored vaccine ChAd and the whole virion inactivated vaccine BBV152 were the two most commonly used vaccines used for primary immunization against COVID-19 in India.While homologous and heterologous booster doses were being considered in other parts of the world, and since nothing was known about the efficacy of either homologous or heterologous boosting with ChAd and BBV152, we evaluated the humoral and cellular responses to these primary vaccination regimens and to a 3rd homologous or heterologous booster with these two vaccines in an adult population. We found that serum antibodies were higher in ChAd recipients but that RBD-binding memory B cell frequencies, specifically IgM memory B cells, were higher in BBV152 recipients.Thus, the B cell response to primary regimen appears to be skewed towards greater terminal differentiation after ChAd and somewhat greater memory generation after BBV152.The recall antibody response, however, was highest when primary BBV152 vaccination was followed with a ChAd booster (Group 3), and the lowest recall response was seen when primary BBV152 vaccination was followed with a homologous BBV152 booster (Group 4).Similarly, when the neutralization potential of antibodies was tested four weeks after the booster doses by ACE-2 inhibition of the Omicron sub-lineages, the highest inhibition was seen when BBV152 vaccination was followed by a ChAd booster (Group 3).These data are in keeping with other recent studies indicating the superiority of anti-spike antibody responses when vaccination with an inactivated vaccine is followed by an mRNA or vectored vaccine booster [15,16,24,25].We have taken these observations forward by looking at the correlation between the frequency of primary vaccinegenerated memory cells and recall antibody amounts and find that poor recall of memory rather than poor memory generation per se, is responsible for the lower boosted response to inactivated vaccine.It is unclear why IgM memory cells generated to BBV152 vaccine are not recalled efficiently with a BBV152 booster, but a recent study [26] may provide a clue.The authors report that antigen capture by follicular dendritic cells (FDCs) requires that they escape degradation by proteases which are abundant in subcapsular sinuses and extrafollicular areas of lymph nodes, but not in follicles.It is therefore possible that the vectored vaccine is better able to reach FDCs for prolonged 
antigen presentation, secondary germinal center formation, and response maturation. Functional CD4 AIM+ and cytokine+ cell responses after in vitro stimulation of PBMCs with peptide pools were equivalent in individuals vaccinated with ChAd and BBV152. However, TNFa-producing CD8 cells were present at higher frequencies after BBV152 vaccination. CD8 cells are known to recognize the nucleocapsid protein [27], which is present in BBV152 but not in ChAd, and the higher CD8 response in BBV152 vaccinees may relate to our culture conditions, which contain peptide pools covering the Spike, Membrane, and Nucleocapsid regions and therefore stimulate NP-specific cells in addition to Spike-specific cells in this group. Background non-specific T cell responses were high in most individuals, and this led to relatively lower SARS-CoV-2-specific T cell responses in our cohort than have been reported elsewhere; this could be due to low-level systemic inflammation in our population caused by high environmental bacterial loads and increased translocation of intestinal bacteria to systemic organs [28]. High background CD8 responses in our population may also be due to NP-specific CD8 cells, which are present at high frequencies in the naïve repertoire [29] and are known to cross-react with some seasonal coronaviruses [30]. Surprisingly, none of the boosting protocols enhanced CD8 responses above those seen with primary vaccination, and in the CD4 compartment, marginal increases were seen in the BBV152-boosted arms. This differs from other reports [13,14,16] but is in keeping with a recent study which found that boosters failed to enhance the T cell response to prior vaccination [31]. When we assessed the humoral responses at day 180 post booster, a significant decay of anti-Spike IgG was seen in Group 3 (BBV152-ChAd) between day 28 and day 180 post-booster, and the drop was associated with lower ACE-2 inhibition across the board. The lowest antibody concentrations and poorest ACE-2 inhibition were seen at both time points in Group 4 (BBV152-BBV152), indicating that this protocol probably generates relatively short-lived plasma cells. Interestingly, no such decay in the humoral response was seen in the two arms that had received the ChAd vaccine. Antibodies increased in these two arms during this period, and so did cross-neutralization potential against variants of concern, including Omicron and its sub-lineages. It has been reported earlier that the AstraZeneca COVID-19 vaccine and the mRNA vaccines (Pfizer-BioNTech BNT162b2 or Moderna mRNA-1273) elicit high-titer antibodies with potent and broad cross-neutralization capability in individuals with prior mild/moderate infection [32,33]. Our results indicate that such response maturation, with selection of cross-reactive antibodies, also occurs following vaccination and boosting with ChAd. However, as the Omicron wave hit India in January 2022, it is also possible that subclinical exposure to the Omicron variant influenced the boosted response of the ChAd-vaccinated group during the 6-month period that overlapped with the Omicron wave. Some support for this comes from our observation of a modest increase in anti-Nucleocapsid antibodies from day 28 to day 180 in Group 1 (ChAd-ChAd) and Group 2 (ChAd-BBV152) (Fig.
8C). Even at day 28 post booster, the spread of booster dose administration in the BBV152 recipients into the Omicron wave could also be partly responsible for the higher secondary antibody amounts, associated with higher cross-recognition of the Omicron strain, in Group 3 (BBV152-ChAd) as compared to the other groups. However, as the anti-nucleocapsid IgG levels were not significantly elevated in this group compared to the ChAd recipients, who completed their day 28 visit well before the start of the Omicron wave in India, the impact of the Omicron wave on the immune responses in Group 3 may not be significant. Our study is the only one to date that has carried out a systematic comparison of humoral and cellular responses following two and three doses of these COVID-19 vaccines, particularly the adenovirus-vectored vaccine that has been used extensively in countries of the developing world. To the best of our knowledge, it is also the only study that has looked at the durability of the boosted antibody response to these two vaccines. Of the four booster regimes tested, the best combination for achieving high antibody titers and good inhibition of variants of concern, including Omicron sub-lineages, was BBV152-ChAd (Group 3), followed by ChAd-ChAd (Group 1), when tested 4 weeks after the booster. However, when tested at 6 months, all the vaccinated arms except BBV152-BBV152 (Group 4) showed high binding antibody levels with good neutralization potential. No increase in T cell responsiveness to booster doses was seen, but RBD-binding memory B cells increased marginally in both arms that were boosted with ChAd. We have not estimated the frequencies of follicular helper T cells, effector/central memory T cells, or plasma cells, and the inclusion of such analyses may have provided a better mechanistic understanding of our findings. Nevertheless, the study indicates that successful vaccination against SARS-CoV-2 and known variants is possible with vaccines currently available in countries like India and that widespread and timely surveillance for variants emerging here and elsewhere will inform the choice of a better booster vaccine and the timing of boosters in the future. Author contributions … to the serological investigations performed in the study. A.A.J., J.V.L.X., O.S.N., R.R., J.S.D.C. contributed to the project administration. A.C., R.M. drafted the report. A.G. finalized the report. All authors reviewed and approved the final report. Role of funding source This study is funded by the Azim Premji Foundation and the Bill and Melinda Gates Foundation (INV-034599). The funders had no role in study design, data collection, analysis, and interpretation of the data. Fig. 1. SARS-CoV-2-specific B cell response following two primary doses of ChAd and BBV152 vaccination. A: Anti-spike IgG amounts. B: Frequency of total RBD-binding memory B cells. C: Frequencies of IgM+, IgG+ and IgA+ cells within the RBD-binding memory B cell pool, compared between ChAd and BBV152 vaccinees 12-36 weeks after 2 doses (day 0). Data are shown as median and interquartile range and all values are background subtracted. Any p-value < 0.05 is represented as *. n = 35 for ChAd, n = 36 for BBV152. Fig. 2.
T cell response to Wuhan peptide pools in ChAd and BBV152 recipients. Median frequencies of CD4 cells (left column) and CD8 cells (right column) expressing AIM (A), any one of the three cytokines IFNg/TNFa/IL2 (B), and the individual cytokines IFNg (C), TNFa (D) and IL2 (E), compared between ChAd and BBV152 recipients 12-36 weeks after 2 doses (day 0). Data are shown as median and interquartile range and all values are background subtracted. Any p-value < 0.05 is represented as *. n = 35 for ChAd, n = 36 for BBV152. Fig. 8. Persistence of antibodies up to six months after the booster dose. Comparison of anti-spike IgG amounts between day 28 (d28) and day 180 (d180) after booster across all four arms (A). Anti-Spike IgG concentrations in the 4 arms on d180 after booster (B) and anti-nucleocapsid IgG concentrations between d28 and d180 after booster across the four arms (C). Data are shown as median and interquartile range. Any p-value < 0.05 is represented as *. n = 95 for Group 1 (ChAd-ChAd), n = 96 for Group 2 (ChAd-BBV152), n = 93 for Group 3 (BBV152-ChAd) and n = 94 for Group 4 (BBV152-BBV152). Table 1 Demographic description of the participants in the study. Table 2 Proportion of day 0 and day 28 samples collected before and during the Omicron wave amongst the BBV152-vaccinated individuals.
Training Discrete Energy-Based Models with Energy Discrepancy Training energy-based models (EBMs) on discrete spaces is challenging because sampling over such spaces can be difficult. We propose to train discrete EBMs with energy discrepancy (ED), a novel type of contrastive loss functional which only requires the evaluation of the energy function at data points and their perturbed counterparts, thus not relying on sampling strategies like Markov chain Monte Carlo (MCMC). Energy discrepancy offers theoretical guarantees for a broad class of perturbation processes, of which we investigate three types: perturbations based on Bernoulli noise, on deterministic transforms, and on neighbourhood structures. We demonstrate their relative performance on lattice Ising models, binary synthetic data, and discrete image data sets. Introduction Building large-scale probabilistic models for discrete data is a critical challenge in machine learning because of its broad applicability to inference and generation tasks on images, text, or graphs. Energy-based models (EBMs) are a class of particularly flexible models p_ebm ∝ exp(−U), where the modelling of the energy function U through a neural network can be tailored to the data set of interest. However, EBMs are notoriously difficult to train due to the intractability of their normalisation. The most popular paradigm for the training of EBMs is the contrastive divergence (CD) algorithm (Hinton, 2002), which performs approximate maximum likelihood estimation by using short-run Markov chain Monte Carlo (MCMC) to approximate intractable expectations with respect to p_ebm. The success of CD has led to rich research on sampling from discrete distributions to enable fast and accurate estimation of the EBM (Zanella, 2020; Grathwohl et al., 2021; Zhang et al., 2022b; Sun et al., 2022a,b, 2023; Emami et al., 2023). However, training EBMs with CD remains challenging: firstly, discrete probabilistic models often exhibit a large number of spurious modes which are difficult to explore even for the most advanced sampling algorithms; secondly, CD lacks theoretical guarantees due to short-run MCMC (Carreira-Perpinan & Hinton, 2005) and oftentimes leads to malformed energy landscapes (Nijkamp et al., 2019). We propose the usage of a new type of loss function called Energy Discrepancy (ED) (Schröder et al., 2023) for the training of energy-based models on discrete spaces. The definition of ED only requires the evaluation of the EBM on positive and contrasting, negative samples. Unlike CD, energy discrepancy does not require sampling from the model during training, thus allowing for fast training with theoretical guarantees. We demonstrate the effectiveness of ED by training Ising models, estimating discrete densities, and modelling discrete images in high dimensions (see Figure 1 for an illustration). Energy Discrepancies Energy discrepancies are based on the idea that if information is processed through a channel Q, then information will be lost. Mathematically, this is expressed through the data processing inequality KL(Q p_data || Q p_ebm) ≤ KL(p_data || p_ebm). Consequently, the difference of the two KL divergences forms a valid loss for density estimation (Lyu, 2011). Retaining only terms that depend on the energy function U results in the energy discrepancy (Schröder et al., 2023): Definition 1 (Energy Discrepancy). Let p_data be a positive density on a measure space (X, dx) and let q(y|x) be a conditional probability density.
Define the contrastive potential induced by q as

U_q(y) := −log ∫_X q(y|x) exp(−U(x)) dx.

We define the energy discrepancy between p_data and U induced by q as

ED_q(p_data, U) := E_{p_data(x)}[U(x)] − E_{p_data(x)} E_{q(y|x)}[U_q(y)].

The validity of this loss functional is given by the following non-parametric estimation result, previously stated in Schröder et al. (2023): Theorem 1. Let p_data be a positive probability density on (X, dx). Assume that for all x ∼ p_data and y ∼ q(y|x), Var(x|y) > 0. Then, the energy discrepancy ED_q is functionally convex in U and has, up to additive constants, a unique global minimiser U* = argmin ED_q(p_data, U). Furthermore, this minimiser is the Gibbs potential for the data distribution, i.e. p_data ∝ exp(−U*). We give the proof of Theorem 1 in Appendix A.1. The perturbation q can be chosen quite generally, as long as it can be guaranteed that computing y comes at a loss of information, which mathematically is expressed through the variance of recovering x from y ∼ q(y|x) being positive. In the next section, we propose some practical choices for q. Training Discrete Energy-Based Models with Energy Discrepancy The perturbation process q needs to be chosen under the following considerations: 1) the contrastive potential U_q(y) has a numerically tractable approximation; 2) the negative samples obtained through q are informative for training the EBM when only finite amounts of data are available. We propose three categories for constructing perturbative processes. Bernoulli Perturbation. Each entry of the data point x ∈ {0, 1}^d is flipped independently with a fixed probability, i.e. y = (x + ξ) mod 2 with ξ_i ∼ Bernoulli(ε). Due to the symmetry of q, we can then write the contrastive potential as

U_q(y) = −log E_ξ[ exp(−U((y + ξ) mod 2)) ].

The expectation on the right-hand side can now be approximated by sampling M Bernoulli random variables ξ_j and taking the remainder of (y + ξ_j) modulo 2. We denote this method as ED-Bern. Deterministic Transformation. The perturbation q can also be defined through a deterministic information-losing map g : X → Y, where the space Y may or may not be equal to X depending on the choice of g. The contrastive potential can be expressed in terms of the preimage of g, i.e.

U_q(y) = −log Σ_{x' ∈ g^{-1}(y)} exp(−U(x')) = −log E_{x' ∼ U(g^{-1}(y))}[ exp(−U(x')) ] − c, with c = log |{g^{-1}(y)}|.

Again, the contrastive potential can be approximated by sampling M instances from the uniform distribution over the set {x' : g(x') = y}. In our numerical experiments, we focus on the mean-pooling transform g_pool, whose preimages are block-wise permutations. For details, see Appendix C.2. We denote this method as ED-Pool. Neighbourhood-based Transformation. Finally, inspired by concrete score matching (Meng et al., 2022), we may define energy discrepancies based on neighbourhood maps x → N(x) ∈ X^K which assign to each point x ∈ X a set of K neighbours. We define the forward perturbation q(y|x) by selecting neighbours y ∼ U(N(x)) uniformly at random. Conversely, the contrastive potential can be expressed in terms of the inverse neighbourhood y → N^{-1}(y) ∈ X^K, i.e. the set of points that have y as their neighbour. We then obtain for the contrastive potential

U_q(y) = −log (1/K) Σ_{x' ∈ N^{-1}(y)} exp(−U(x')).

In practice, we choose the grid neighbourhood (Appendix C.3) and denote this method by ED-Grid. Stabilising Training. The above schemes permit the approximation of the contrastive potential from M samples, which are generated by first sampling y ∼ q(y|x), after which we compute M approximate recoveries x_j^−. The full loss can then be constructed for each data point x^+ ∼ p_data by calculating log Σ_{j=1}^M exp(U(x^+) − U(x_j^−)) − log(M) using the numerically stabilised logsumexp function. In practice, however, we find that this estimator for energy discrepancy is biased due to the logarithm and can exhibit high variance.
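A minimal sketch of the ED-Bern construction just described is given below: the data are perturbed once with Bernoulli noise, the perturbed point is re-perturbed M times to obtain approximate recoveries, and the per-batch estimate is computed with logsumexp. The energy network, the noise level epsilon, and M are placeholders; this is an illustration of the scheme, not the authors' reference implementation.

```python
import math
import torch

def ed_bern_negatives(x, epsilon=0.1, M=32):
    """x: (B, d) float tensor with entries in {0, 1}. Perturb once to obtain
    y = (x + xi) mod 2, then re-perturb y M times to obtain recoveries of shape (B, M, d)."""
    B, d = x.shape
    xi = torch.bernoulli(torch.full((B, d), epsilon, device=x.device))
    y = torch.remainder(x + xi, 2.0)
    xi_neg = torch.bernoulli(torch.full((B, M, d), epsilon, device=x.device))
    return torch.remainder(y.unsqueeze(1) + xi_neg, 2.0)

def ed_bern_estimate(energy, x, epsilon=0.1, M=32):
    """Per-batch estimate (1/B) sum_i [logsumexp_j (U(x_i^+) - U(x_ij^-)) - log M]."""
    x_neg = ed_bern_negatives(x, epsilon, M)                      # (B, M, d)
    u_pos = energy(x)                                             # (B,)
    u_neg = energy(x_neg.reshape(-1, x.shape[-1])).reshape(x.shape[0], M)
    diff = u_pos.unsqueeze(1) - u_neg                             # (B, M)
    return (torch.logsumexp(diff, dim=1) - math.log(M)).mean()
```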
To stabilise training, we introduce an offset w for the logarithm, which introduces a deterministic lower bound for the loss. This yields the energy discrepancy loss function

L_{q,M,w}(U) := (1/N) Σ_{i=1}^N [ log( w + Σ_{j=1}^M exp(U(x_i^+) − U(x_{i,j}^−)) ) − log(M) ],

with x_i^+ ∼ p_data. In Appendix C.5 we prove that this approximation is consistent for any fixed w: Theorem 2. For every ε > 0 there exist N, M ∈ N such that |L_{q,M,w}(U) − ED_q(p_data, U)| < ε a.s. Experiments Training Ising Models. We evaluate the proposed methods on the lattice Ising model, which has the form p(x) ∝ exp(x^T J x) for spin configurations x ∈ {−1, 1}^{D^2}, where J = σ A_D with σ ∈ R and A_D being the adjacency matrix of a D×D grid. We generate training data through Gibbs sampling and use the generated data to fit a symmetric matrix J via energy discrepancy. In Figure 2, we consider D = 10 × 10 grids with σ = 0.2 and illustrate the learned matrix J using a heatmap. It can be seen that the variants of energy discrepancy can identify the pattern of the ground truth, confirming the effectiveness of our methods. We defer experimental details and quantitative results comparing with baselines to Appendix E.1. Discrete Density Estimation. In this experiment, we follow the experimental setting of Dai et al. (2020) and Zhang et al. (2022a), which aims to model discrete densities over 32-dimensional binary data that are discretisations of continuous densities on the plane (see Figure 4). Specifically, we convert each planar data point x̃ ∈ R^2 to a binary data point x ∈ {0, 1}^32 via Gray code (Gray, 1953). Consequently, the models face the challenge of modelling data in a discrete space, which is particularly difficult due to the non-linear transformation from x̃ to x. We compare our methods to three baselines: PCD (Tieleman, 2008), ALOE+ (Dai et al., 2020), and EB-GFN (Zhang et al., 2022a). The experimental details are given in Appendix E.2. For qualitative evaluation, we visualise the energy landscapes learned by our methods in Figure 3. It shows that energy discrepancy is able to faithfully model multi-modal distributions and accurately learn the sharp edges present in the data support. For further qualitative comparisons, we refer to the energy landscapes of baseline methods presented in Figure C.2 of Zhang et al. (2022a). Moreover, we quantitatively evaluate the different methods in Table 1 by reporting the negative log-likelihood (NLL) and the exponential Hamming MMD (Gretton et al., 2012). Perhaps surprisingly, we find that energy discrepancy outperforms the baselines in most settings, despite not requiring MCMC simulation like PCD or training an additional variational network like ALOE and EB-GFN. A possible explanation for this are biases introduced by short-run MCMC sampling in the case of PCD or non-converged variational proposals in ALOE. By definition, ED transforms the data distribution as well as the energy function, which corrects for such biases. Discrete Image Modelling. Here, we evaluate our methods in discrete high-dimensional spaces; the results are reported in Table 2. We see that energy discrepancy yields comparable performance to the baselines, while ED-Pool is unable to capture the data distribution. We emphasise that energy discrepancy only requires M (here, M = 32) evaluations of the energy function per data point in parallel. This is notably fewer than contrastive divergence, which requires simulating multiple MCMC steps without parallelisation. We also visualise the generated samples in Figure 11, which showcase the diversity and high quality of the images generated by ED-Bern and ED-Grid. However, we observed that ED-Pool suffers from mode collapse. Conclusion and Outlook In this paper we demonstrate how energy discrepancy can be used for efficient and competitive training of energy-based models on discrete data without MCMC. The loss can be defined based on a large class of perturbative processes, of which we introduce three types: noise-based, deterministic-transform-based, and neighbourhood-based. Our results show that the choice of perturbation matters and motivates further research on effective choices depending on the data structure of interest. We observe empirically that, similarly to other contrastive losses, energy discrepancy shows limitations when the ambient dimension of X is significantly larger than the intrinsic dimension of the data. In these cases, training is aided significantly by a base distribution that models the lower-dimensional space populated by data. For this reason, the adoption of ED on new data sets or different data structures may require adjustments to the methodology, such as learning appropriate base distributions and finding more informative perturbative transforms. For future work, we are interested in how this work extends to highly structured data such as graphs or text. These settings may require a deeper understanding of how the perturbation influences the performance of ED and what is gained from gradient information in CD (Zhang et al., 2022b; Grathwohl et al., 2021). A Proofs and Derivations A.1 Proof of the Non-Parametric Estimation Theorem 1 In this subsection we give a formal proof of the uniqueness of the minima of ED_q(p_data, U) as a functional in the energy function U. We first restate the theorem as stated in the paper: Theorem 1. Let p_data be a positive probability density on (X, dx). Assume that for all x ∼ p_data and y ∼ q(y|x), Var(x|y) > 0. Then, the energy discrepancy ED_q is functionally convex in U and has, up to additive constants, a unique global minimiser U* = argmin ED_q(p_data, U). Furthermore, this minimiser is the Gibbs potential for the data distribution, i.e. p_data ∝ exp(−U*). We test energy discrepancy against the first- and second-order optimality conditions, i.e. we verify that the first functional derivative of ED vanishes at U* and that the second functional derivative is positive definite. For uniqueness and well-definedness, we constrain the optimisation domain to the set

G := { U : X → R such that exp(−U) ∈ L^1(X, dx), U ∈ L^1(p_data), and min_{x∈X} U(x) = 0 }

and require that there exists a U* ∈ G such that exp(−U*) ∝ p_data. We now start with the following lemmata and then complete the proof of Theorem 1 in Corollary 1. Lemma 1. Let h ∈ G be arbitrary. The first variation of ED_q is given by

d/dε ED_q(p_data, U + εh) |_{ε=0} = E_{p_data(x)}[h(x)] − E_{p_data(x) q(y|x)}[ E_{p_U(z|y)}[h(z)] ],

where p_U(z|y) = q(y|z) exp(−U(z)) / Σ_{z'∈X} q(y|z') exp(−U(z')). Proof. We define the short-hand notation U_ε := U + εh. The energy discrepancy at U_ε reads

ED_q(p_data, U_ε) = E_{p_data(x)}[U(x) + εh(x)] + E_{p_data(x) q(y|x)}[ log Σ_{z∈X} q(y|z) exp(−U(z) − εh(z)) ].

For the first functional derivative, we only need to calculate

d/dε log Σ_{z∈X} q(y|z) exp(−U(z) − εh(z)) = −E_{p_{U_ε}(z|y)}[h(z)].

Plugging this expression into ED_q(p_data, U_ε) and setting ε = 0 yields the first variation of ED_q. Lemma 2. The second variation of ED_q is given by

d^2/dε^2 ED_q(p_data, U + εh) |_{ε=0} = E_{p_data(x) q(y|x)}[ Var_{p_U(z|y)}(h(z)) ] ≥ 0.

Proof. For the second-order term, we have, based on equation (4) and the quotient rule for derivatives,

d/dε E_{p_{U_ε}(z|y)}[h(z)] = −( E_{p_{U_ε}(z|y)}[h(z)^2] − E_{p_{U_ε}(z|y)}[h(z)]^2 ) = −Var_{p_{U_ε}(z|y)}(h(z)).

We obtain the desired result by interchanging the outer expectations with the derivatives in ε.
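For reference, the following sketch collects the stabilised loss L_{q,M,w} described in Section 3 and an energy function for the lattice Ising experiment. The placement of the offset w inside the logarithm follows the description above (the result is identical up to the constant log M); the Ising form U(x) = -x^T J x, so that p(x) is proportional to exp(x^T J x), is assumed from the related work cited in the experiments rather than spelled out in the extracted text.

```python
import math
import torch

def ed_loss_stabilised(u_pos, u_neg, w=1.0):
    """Stabilised loss: mean_i [ log( w + sum_j exp(U(x_i^+) - U(x_ij^-)) ) - log M ].
    u_pos: (B,) energies of data points; u_neg: (B, M) energies of negative samples.
    The offset w > 0 bounds each term below by log(w) - log(M)."""
    B, M = u_neg.shape
    diff = u_pos.unsqueeze(1) - u_neg                              # (B, M)
    offset = torch.full((B, 1), math.log(w), device=diff.device)   # contributes exp(log w) = w
    return (torch.logsumexp(torch.cat([diff, offset], dim=1), dim=1) - math.log(M)).mean()

def lattice_ising_energy(x, J):
    """Energy U(x) = -x^T J x for spins x in {-1, +1}^(D*D), so that p(x) ~ exp(x^T J x);
    J = sigma * A_D with A_D the grid adjacency matrix (form assumed from related work)."""
    return -torch.einsum("bi,ij,bj->b", x, J, x)
```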
Conclusion and Outlook In this paper we demonstrate how energy discrepancy can be used for efficient and competitive training of energy-based models on discrete data without MCMC. The loss can be defined based on a large class of perturbative processes, of which we introduce three types: noise, deterministic transform, and neighbourhood-based transform. Our results show that the choice of perturbation matters and motivate further research on effective choices depending on the data structure of interest. We observe empirically that, similarly to other contrastive losses, energy discrepancy shows limitations when the ambient dimension of X is significantly larger than the intrinsic dimension of the data. In these cases, training is aided significantly by a base distribution that models the lower-dimensional space populated by data. For this reason, the adoption of ED on new data sets or different data structures may require adjustments to the methodology such as learning appropriate base distributions and finding more informative perturbative transforms. For future work, we are interested in how this work extends to highly structured data such as graphs or text. These settings may require a deeper understanding of how the perturbation influences the performance of ED and what is gained from gradient information in CD (Zhang et al., 2022b; Grathwohl et al., 2021). References Lazaro-Gredilla, M., Dedieu, A., and George, D. Perturb-and-max-product: Sampling and learning in discrete energy-based models. Advances in Neural Information Processing Systems, 34:928-940, 2021. Lyu, S. Unifying non-maximum likelihood learning objectives with minimum KL contraction. A Proofs and Derivations A.1 Proof of the Non-Parametric Estimation Theorem 1 In this subsection we give a formal proof for the uniqueness of minima of ED_q(p_data, U) as a functional in the energy function U. We first reiterate the theorem as stated in the paper: Theorem 1. Let p_data be a positive probability density on (X, dx). Assume that for all x ∼ p_data and y ∼ q(y|x), Var(x|y) > 0. Then, the energy discrepancy ED_q is functionally convex in U and has, up to additive constants, a unique global minimiser U* = argmin ED_q(p_data, U). Furthermore, this minimiser is the Gibbs potential for the data distribution, i.e. p_data ∝ exp(−U*). We test energy discrepancy on the first and second order optimality conditions, i.e. we test that the first functional derivative of ED vanishes in U* and that the second functional derivative is positive definite. For uniqueness and well-definedness, we constrain the optimisation domain to the following set: G := {U : X → R such that exp(−U) ∈ L^1(X, dx), U ∈ L^1(p_data), and min_{x∈X} U(x) = 0}, and require that there exists a U* ∈ G such that exp(−U*) ∝ p_data. We now start with the following lemmata and then complete the proof of Theorem 1 in Corollary 1. Lemma 1. Let h ∈ G be arbitrary. The first variation of ED_q is given in terms of the conditional density p_U(z|y) = q(y|z) exp(−U(z)) / Σ_{z′∈X} q(y|z′) exp(−U(z′)). Proof. We define the short-hand notation U_ε := U + εh and expand the energy discrepancy at U_ε. For the first functional derivative, we only need to calculate the derivative d/dε of the logarithmic term. Plugging this expression into ED_q(p_data, U_ε) and setting ε = 0 yields the first variation of ED_q. Lemma 2. The second variation of ED_q is given by an expected conditional variance. Proof. For the second order term, we differentiate once more, based on equation 4 and the quotient rule for derivatives. We obtain the desired result by interchanging the outer expectations with the derivatives in ε.
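In explicit form, the first and second variations in Lemma 1 and Lemma 2 can be written as follows. This is a reconstruction based on the conditional density p_U(z|y) given in Lemma 1 and on the variance argument used in the proof of Corollary 1 below, so it should be read as an assumed form rather than the original display.

```latex
% Assumed form of Lemma 1 (first variation):
\frac{d}{d\varepsilon}\,\mathrm{ED}_q(p_{\mathrm{data}}, U + \varepsilon h)\Big|_{\varepsilon=0}
 = \mathbb{E}_{p_{\mathrm{data}}(x)}[h(x)]
 - \mathbb{E}_{p_{\mathrm{data}}(x)\, q(y\mid x)}\!\left[\,\mathbb{E}_{p_U(z\mid y)}[h(z)]\,\right]

% Assumed form of Lemma 2 (second variation):
\frac{d^2}{d\varepsilon^2}\,\mathrm{ED}_q(p_{\mathrm{data}}, U + \varepsilon h)\Big|_{\varepsilon=0}
 = \mathbb{E}_{p_{\mathrm{data}}(x)\, q(y\mid x)}\!\left[\,\mathrm{Var}_{p_U(z\mid y)}\!\big(h(z)\big)\,\right] \;\ge\; 0
```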
Furthermore, U * is the unique global minimiser of ED q (p data , ·) in G. Proof. By definition, the variance is non-negative, i.e. for every h ∈ G: Consequently, the energy discrepancy is convex and an extremal point of ED q (p data , ·) is a global minimiser. We are left to show that the minimiser is obtained at U * and unique. First of all, we have for U * : By applying the outer expectations we obtain where we used that the marginal distributions x∈X p data (x)q(y|x) cancel out and the conditional probability density integrates to one. This implies for all h ∈ G. We now show that Assume that the second variation was zero. Since the perturbed data distribution x∈X p data (x)q(y|x) is positive, the second variation at U * is zero if and only if the conditional variance Var p data (z|y) [h(z)] = 0. Since U * +εh ∈ G, the function h can not be constant. By definition of the conditional variance, h(z) must then be a deterministic function of y ∼ x∈X q(y|x)p data (x). Since h was arbitrary, there exists a measurable map g such that z = g(y) and Var p data (z|y) [z] = 0 which is a contradiction to our assumptions. Consequently, U * is the unique global minimiser of ED q which completes the statement in Theorem 1. B Connections to other Methods In this section, we follow Schröder et al. (2023). B.1 Connections of Energy Discrepancy with Contrastive Divergence The contrastive divergence update can be derived from an energy discrepancy when, for E θ fixed, q satisfies the detailed balance relation q(y|x) exp(−E θ (x)) = q(x|y) exp(−E θ (y)) . To see this, we calculate the contrastive potential induced by q: Consequently, the energy discrepancy induced by q is given by Updating θ based on a sample approximation of this loss leads to the contrastive divergence update It is important to notice that the distribution q depends on E θ and needs to adjusted in each step of the algorithm. For fixed q, ED q (p data , E θ ) satisfies Theorem 1. This means that each step of contrastive divergence optimises a loss with minimiser E * θ = − log p data + c. However, the loss function changes in each step of contrastive divergence. The connection also highlights the importance Metropolis-Hastings adjustment to ensure that the implied q distribution satisfies the detailed balance relation. B.2 Derivation of Energy Discrepancy from KL Contractions A Kullback-Leibler contraction is the divergence function KL(p data p ebm )−KL(Qp data Qp ebm ) (Lyu, 2011) for the convolution operator Qp(y) = x ∈X q(y|x )p(x ). The linearity of the convolution operator retains the normalisation of the measure, i.e. for the energy-based distribution p ebm we have The KL divergences then become with U q := − log Q exp(−U (x)) KL(p data p ebm ) = E p data (x) [log p data (x)] + E p data (x) [U (x)] + log Z U KL(Qp data Qp ebm ) = E Qp data (y) [log Qp data (y)] + E Qp data (y) [U q (y)] + log Z U Since the normalisation cancels when subtracting the two terms we find KL(p data p ebm ) − KL(Qp data Qp ebm ) = ED q (p data , U ) + c where c is a constant that contains the U -independent entropies of p data and Qp data . C Sample Approximations of Energy Discrepancies In this section, we discuss practical implementations of the mean-pooling transform as an information destroying deterministic process and the grid-neighbourhood as a neighbourhood-based transformation. 
C.1 General Strategy As a general strategy, the contrastive potential has to be written as an expectation over an appropriate to be determined distribution p neg,q,y that depends on the chosen perturbation process and on the point where the contrastive potential is evaluated, i.e. which allows the evaluation of the contrastive potential via sampling from p neg,q,y . The energy discrepancy can then be written as by using properties of the logarithm and exponential and the fact that U (x) does not depend on the expectations taken in y and x . The loss can then be approximated via ancestral sampling. We first sample a batch x i + ∼ p data , subsequently sample its perturbed counter part y i ∼ q(·|x i + ), and finally sample M negative samples x i,j − ∼ p neg,q,y i . Sometimes, the perturbed sample y i is never explicitely computed in the process. As described in Equation (2), the approximation is always stabilised through tunable hyper-parameter w which finally yields the loss function The justification for the stabilisation is two-fold. Firstly, the logarithm makes the Monte-Carlo approximation of the contrastive potential biased due to Jensens inequality. The bias is negative, given to leading order by the variance of the approximation, and depends on the energy function U . Thus, the optimiser may start to optimise for a high bias and high variance estimator of the contrastive potential rather than learning the data distribution. While this issue can be alleviated by significantly large choices for M , it is much more practical to introduce a deterministic lower bound to the loss-functional through the stabilisation w, which prevents the bias and logarithm from diverging. Secondly, the effect of the stabilisation goes to zero as M increases. Thus, the asymptotic limit for M and N large is retained through the stabilisation. For more details and analogous arguments in the continuous case, see Schröder et al. (2023). C.2 Mean Pooling Transform We describe the mean-pooling transform on the example of image data which takes values in the space {0, 1} h×w . We fix a window size s and reshape each data-point into blocks of size s × s, i.e. The mean pooling transform g pool computes the average over each blockx •,•,i,j for i = 1, 2, . . . , h/s and j = 1, 2, . . . , w/s. The corresponding preimage of the mean pooling transform is given by the set of points which are identical to x up to block-wise permutation, i.e. g −1 (g pool (x)) = {x ∈ X : there exist π i,j ∈ S s×s s.t.x l,k,i,j =x πi,j (l,k),i,j for all l, k, i, j} where S s×s denotes the permutation group for matrices of size s × s. In practice, the mean-pooled data point has to never be computed, only the block wise permutations of the data point are required. Consequently, we obtain negative samples through x i,j − ∼ U(g −1 (g pool (x i ))), i.e. via block wise permutation of the entries of each data point x i . Strictly speaking, this transformation violates the assumptions of Theorem 1 for data points that only consist of blocks that average to 1 or 0. Since this is only the case for a small set of the state space, we assume this violation to be negligible. C.3 Grid Neighborhood The grid neighbourhood for x ∈ {0, 1} d is constructed as where e k is a vector of zeros with a one in the k-th entry. This neighbourhood structure is symmetric, i.e. N −1 grid (y) = N grid (y). 
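As an illustration of the block-wise permutation described in Appendix C.2 above, the following sketch produces ED-Pool negative samples by permuting the entries inside every s × s window. The image shape and variable names are assumptions made for the purpose of the example, not details taken from the authors' implementation.

```python
import torch

def ed_pool_negatives(x, M=32, s=2):
    """Negative samples for ED-Pool via block-wise permutation (sketch).

    x : binary images of shape (B, H, W), with H and W divisible by the window size s.
    Returns a tensor of shape (B, M, H, W). Every negative has the same s x s block
    averages as x (hence the same mean-pooled image); the mean-pooled image itself
    never has to be computed.
    """
    B, H, W = x.shape
    # split the image into (H/s) x (W/s) blocks of size s x s and flatten each block
    blocks = x.reshape(B, H // s, s, W // s, s).permute(0, 1, 3, 2, 4).reshape(B, H // s, W // s, s * s)
    blocks = blocks.unsqueeze(1).expand(B, M, H // s, W // s, s * s)
    # independent random permutation inside each block (argsort of uniform keys)
    perm = torch.rand(B, M, H // s, W // s, s * s, device=x.device).argsort(dim=-1)
    shuffled = torch.gather(blocks, -1, perm)
    # reassemble the permuted blocks into images
    out = shuffled.reshape(B, M, H // s, W // s, s, s).permute(0, 1, 2, 4, 3, 5).reshape(B, M, H, W)
    return out
```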
Consequently, the negative samples are created by sampling from Notice that each negative sample is the second neighbour of the positive sample, and with a small chance the positive sample itself. C.4 Directed Neighbourhood Structures More generally, the neighbourhood structure may form a non-symmetric directed graph for which the neighbourhood maps N −1 and N don't coincide. In this case, an additional weighting-term is introduced. We denote the number of neighbours of x as K x = |N (x)| and the number of elements of which y is a neighbour as K y = |N −1 (y)|. The forward transition density is given by the uniform distribution, i.e. We then have where we introduced the weighting term ω yx = K y /K x . C.5 Consistency of our Approximation The following proof is similar to Schröder et al. (2023). We first restate the consistency result: Theorem 2. For every ε > 0 there exist N, M ∈ N such that |L q,M,w (U )−ED q (p data , U )| < ε a.s.. Proof. For N data points x i + ∼ p data and perturbed points y i ∼ q(·|x i + ) denote the M corresponding negative samples by x i,j − ∼ p neg,q,y i . Notice that the distribution of the negative samples depends on y i . Using the triangle inequality, we can upper bound the difference |ED q (p data , U ) − L q,M,w (U )| by upper bounding the following two terms, individually: The conditioning expresses that the expectation is only taken in x i,j − ∼ p neg,q,y i while keeping the values of the random variables x i + and y i fixed. The first term can be bounded by a sequence ε N a.s. − − → 0 due to the normal strong law of large numbers. For the second term one needs to consider that the distribution p neg,q,y i depends on the random variable y i . For this reason, we notice that x i,j − are conditionally indepedent given x i + , y i and employ a conditional version of the strong law of large numbers (Majerek et al., 2005, Theorem 4.2) to obtain Next, we have that the deterministic sequence w/M → 0. Thus, adding the stabilisation w/M does not change the limit in M . Furthermore, since the logarithm is continuous, the limit also holds after applying the logarithm. Finally, the estimate translates to the sum by another application of the triangle inequality: For each i = 1, 2, . . . , N there exists a sequence ε i,M a.s. Hence, for each ε > 0 there exists an N ∈ N and an M (N ) ∈ N such that |ED q (p data , U ) − L q,M (N ),w (U )| < ε almost surely. D Related Work Contrastive loss functions Our work is based on an unpublished work on energy discrepancies in the continuous case (Schröder et al., 2023). The motivation for such constructed loss functions lies in the data processing inequality. A similar loss has been suggested before as KL contraction divergence (Lyu, 2011), however, only for its theoretical properties. Interestingly, the structure of the stabilised energy discrepancy loss shares similarities with other contrastive losses such as Ceylan & Gutmann ( Contrastive divergence and Sampling. Discrete training methods for energy-based models largely rely on contrastive divergence methods, thus motivating a lot of work on discrete sampling and proposal methods. Improvements of the standard Gibbs method were proposed by Zanella (2020) through locally informed proposals. The method was extended to include gradient information (Grathwohl et al., 2021) to drastically reduce the computational complexity of flipping bits of binary valued data and to flipping bits in several places (Sun et al., 2022b;Emami et al., 2023;Sun et al., 2022a). 
Finally, discrete versions of Langevin sampling have been introduced based on this idea (Zhang et al., 2022b; Rhodes & Gutmann, 2022; Sun et al., 2023). Consequently, most current implementations of contrastive divergence use multiple steps of a gradient-based discrete sampler. Alternatively, energy-based models can be trained using generative flow networks, which learn a Markov chain to construct data by optimising a given reward function. The Markov chain can be used to obtain samples for contrastive divergence without MCMC from the EBM (Zhang et al., 2022a). Other training methods for discrete EBMs. There also exist some MCMC-free approaches for training discrete EBMs. Our work is most similar to concrete score matching (Meng et al., 2022), which uses neighbourhood structures to define a replacement of the continuous score function. Another sampling-free approach for training discrete EBMs is ratio matching (Hyvärinen, 2007; Lyu, 2012). However, it has been found that for ratio matching, too, gradient information drastically improves the performance (Liu et al., 2023). Moreover, Dai et al. (2020) proposed to apply variational approaches to train discrete EBMs instead of MCMC. Eikema et al. (2022) replaced the widely used Gibbs algorithms with quasi-rejection sampling to trade off the efficiency and accuracy of the sampling procedure. The perturb-and-map approach (Papandreou & Yuille, 2011) has also recently been utilised to sample and learn in discrete EBMs (Lazaro-Gredilla et al., 2021). E.1 Training Ising Models Experimental Details. Following Zhang et al. (2022a), all models are trained with an l1 regularization with a coefficient in {10, 5, 1, 0.1, 0.01} to encourage sparsity. The other settings are basically the same as in Section F.2 of Grathwohl et al. (2021). We report the best result for each setting using the same hyperparameter searching protocol for all methods. Quantitative Results. We consider D = 10 × 10 grids with σ = 0.1, 0.2, . . . , 0.5 and D = 9 × 9 grids with σ = −0.1, −0.2. The methods are evaluated by computing the negative log-RMSE between the estimated J_φ and the true matrix J. As shown in Table 3, our methods demonstrate comparable results to the baselines and, in certain settings, even outperform Gibbs and GWG, indicating that energy discrepancy is able to discover the underlying structure within the data. E.2 Discrete Density Estimation Experimental Details. This experiment keeps a consistent setting with Dai et al. (2020). We first generate 2D floating-point samples from a continuous distribution p̃ which lacks a closed form but can be easily sampled. Then, each sample x̃ := [x̃_1, x̃_2] ∈ R^2 is converted to a discrete data point x ∈ {0, 1}^32 using Gray code. To be specific, given x̃ ∼ p̃, we quantise both x̃_1 and x̃_2 into 16-bit binary representations via Gray code (Gray, 1953), and concatenate them together to obtain a 32-bit vector x. As a result, the probability mass function in the discrete space is p(x) ∝ p̃([GrayToFloat(x_1:16), GrayToFloat(x_17:32)]). It is noteworthy that learning on this discrete space presents challenges due to the highly non-linear nature of the Gray code transformation. The energy function is parameterised by a 4-layer MLP with 256 hidden dimensions and Swish (Ramachandran et al., 2017) activation. We train the EBM for 10^5 steps and adopt an Adam optimiser with a learning rate of 0.002 and a batch size of 128 to update the parameters. For the energy discrepancy, we choose w = 1, M = 32 for all variants, ε = 0.1 in ED-Bern, and the window size is 32 × 1 in ED-Pool.
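For concreteness, the Gray-code binarisation described above can be sketched as follows. The code assumes the planar samples have been rescaled to [0, 1) before quantisation; this rescaling, the bit order, and the function names are assumptions rather than details stated in the text.

```python
import numpy as np

def float_to_gray_bits(u, n_bits=16):
    """Quantise values in [0, 1) to n_bits levels and return their Gray-code bits."""
    u = np.asarray(u)
    level = np.clip((u * (2 ** n_bits)).astype(np.int64), 0, 2 ** n_bits - 1)
    gray = level ^ (level >> 1)  # binary-reflected Gray code
    bits = (gray[..., None] >> np.arange(n_bits - 1, -1, -1)) & 1
    return bits.astype(np.float32)

def gray_bits_to_float(bits):
    """Inverse map (GrayToFloat): Gray-code bits back to a value in [0, 1)."""
    n_bits = bits.shape[-1]
    gray = (bits.astype(np.int64) * (2 ** np.arange(n_bits - 1, -1, -1))).sum(axis=-1)
    level = gray.copy()
    shift = 1
    while shift < n_bits:  # invert g = b ^ (b >> 1)
        level ^= level >> shift
        shift *= 2
    return (level + 0.5) / (2 ** n_bits)

# a planar sample (x1, x2) becomes a 32-dimensional binary vector:
# x = np.concatenate([float_to_gray_bits(x1), float_to_gray_bits(x2)], axis=-1)
```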
After training, we quantitatively evaluate all methods using the negative log-likelihood (NLL) and the maximum mean discrepancy (MMD). To be specific, the NLL metric is computed based on 4,000 samples drawn from the data distribution, and the normalisation constant is estimated using importance sampling with 1,000,000 samples drawn from a variational Bernoulli distribution with p = 0.5. For the MMD metric, we follow the setting in Zhang et al. (2022a), which adopts the exponential Hamming kernel with 0.1 bandwidth. Moreover, the reported performances are averaged over 10 repeated estimations, each with 4,000 samples, which are drawn from the learned energy function via Gibbs sampling. Qualitative Results. We qualitatively visualise the learned energy functions of our proposed approaches in Figure 3. To provide further insight into the oracle energy landscape, we also plot the ground-truth samples in Figure 4. The results clearly demonstrate that energy discrepancy effectively fits the data distribution, validating the efficacy of our methods. The Effect of ε in Bernoulli Perturbation. Perhaps surprisingly, we find that the proposed energy discrepancy loss with Bernoulli perturbation is very robust to the noise scalar ε. In Figure 6, we visualise the learned energy landscapes with different ε. The results demonstrate that ED-Bern is able to learn faithful energy functions even with extreme values of ε, such as ε ∈ {0.999, 0.001}. This highlights the robustness and effectiveness of our approach. In Figure 5, we further show that, with ε ∈ {0.9999, 0.0001}, ED-Bern can still learn a faithful energy landscape using a large value of M. However, when ε ∈ {1, 0}, ED-Bern fails to work. It is noteworthy that the choice of ε is highly dependent on the specific structure of the dataset. While ED-Bern exhibits robustness to different values of ε on the synthetic data, we have observed that a large value of ε (ε ≥ 0.1) is not effective for discrete image modelling. The Effect of Window Size in Deterministic Transformation. To investigate the effect of the window size in ED-Pool, we conduct experiments in Figure 7 with different window sizes. The results indicate that employing a small window size (e.g., 2 × 1) does not provide sufficient information for energy discrepancy to effectively learn the underlying data structure. Furthermore, our empirical findings suggest that solely increasing the value of M is not a viable solution to address this issue. Again, the choice of the window size should depend on the underlying data structure. In discrete image modelling, we find that even with a small window size (i.e., 4 × 4), energy discrepancy yields an energy with low values on the data support but rapidly diverging values outside of it. Therefore, it fails to learn a faithful energy landscape. Qualitatively Understanding the Effect of w and M. The hyperparameters w and M play a crucial role in the estimation of energy discrepancy. Increasing M can reduce the variance of the Monte Carlo estimation of the contrastive potential in (1), while a proper value of w can improve the stabilisation of training. Here, we evaluate the effect of w and M on the variants of energy discrepancy in Figures 8 to 10. Based on empirical observations, we find that when w = 0 and M is small (e.g., M ≤ 32 for ED-Bern and M ≤ 64 for ED-Pool and ED-Grid), energy discrepancy demonstrates rapid divergence and fails to converge.
Additionally, we find that increasing M can address this issue to some extent and that introducing a non-zero value for w can significantly stabilise the convergence, even with M = 1. Moreover, larger w tends to produce flatter estimated energy landscapes, which also aligns with the findings in the continuous setting of energy discrepancy (Schröder et al., 2023). E.3 Discrete Image Modelling Experimental Details. In this experiment, we parametrise the energy function using a ResNet (He et al., 2016) following the settings in Grathwohl et al. (2021); Zhang et al. (2022b), where the network has 8 residual blocks with 64 feature maps. Each residual block has 2 convolutional layers and uses the Swish activation function (Ramachandran et al., 2017). We choose M = 32, w = 1 for all variants of energy discrepancy, ε = 0.001 for ED-Bern, and a window size of 2 × 2 for ED-Pool. Note that here we choose a relatively small ε and window size, since we empirically find that the loss of energy discrepancy converges to a constant rapidly with a larger ε and window size, which cannot provide meaningful gradient information to update the parameters. All models are trained with the Adam optimiser with a learning rate of 0.0001 and a batch size of 100 for 50,000 iterations. We perform model evaluation every 5,000 iterations by conducting Annealed Importance Sampling (AIS) with a discrete Langevin sampler for 10,000 steps. The reported results are obtained from the model that achieves the best performance on the validation set. After training, we finally report the negative log-likelihood by running 300,000 iterations of AIS. Qualitative Results. We show the generated images in Figure 11, which are the samples in the final step of AIS. We see that our methods can generate realistic images on the Omniglot dataset but mediocre images on Caltech Silhouettes. We hypothesise that improving the design of the affinity structure in the neighbourhood-based transformation can lead to better results. On both the static and dynamic MNIST datasets, ED-Bern and ED-Grid generate diverse and high-quality images. However, ED-Pool experiences mode collapse, resulting in limited variation in the generated samples.
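As an illustration of the AIS-based evaluation, the following sketch estimates the log-normaliser of a trained discrete EBM by annealing from a Bernoulli(0.5) base distribution to exp(-U(x)). It is only schematic: the base distribution, the linear schedule, and the single-site Metropolis transition used here are assumptions made for the example, whereas the evaluation described above uses a discrete Langevin sampler for the transitions.

```python
import math
import torch
import torch.nn.functional as F

def ais_log_z(energy_fn, dim, n_chains=100, n_steps=1000, device="cpu"):
    """Annealed importance sampling estimate of log Z for a discrete EBM (sketch)."""
    x = torch.bernoulli(torch.full((n_chains, dim), 0.5, device=device))
    log_w = torch.zeros(n_chains, device=device)
    log_base = -dim * math.log(2.0)  # log-probability of any state under Bernoulli(0.5)

    def log_f(state, beta):
        # unnormalised log-density of the intermediate distribution at temperature beta
        return (1.0 - beta) * log_base + beta * (-energy_fn(state))

    betas = torch.linspace(0.0, 1.0, n_steps + 1)
    for k in range(1, n_steps + 1):
        beta_prev, beta_k = betas[k - 1].item(), betas[k].item()
        # accumulate importance weights between consecutive intermediate targets
        log_w += log_f(x, beta_k) - log_f(x, beta_prev)
        # one Metropolis single-site flip leaving the k-th intermediate target invariant
        idx = torch.randint(0, dim, (n_chains,), device=device)
        flip = F.one_hot(idx, dim).to(x.dtype)
        proposal = (x + flip) % 2
        log_accept = log_f(proposal, beta_k) - log_f(x, beta_k)
        accept = (torch.rand(n_chains, device=device).log() < log_accept).to(x.dtype).unsqueeze(1)
        x = accept * proposal + (1.0 - accept) * x
    # since the Bernoulli(0.5) base is normalised, log Z is the log-mean-exp of the weights
    return torch.logsumexp(log_w, dim=0) - math.log(n_chains)
```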
2023-07-18T01:01:02.810Z
2023-07-14T00:00:00.000
{ "year": 2023, "sha1": "12fa330353124ae1b46755bf63b73966af9abc0e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "12fa330353124ae1b46755bf63b73966af9abc0e", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Mathematics", "Computer Science" ] }
237778031
pes2o/s2orc
v3-fos-license
Legal Analysis Towards Mining in North Sulawesi Post the Implementation of Law Number 3 Year 2020 The issuance of Law No. 3 of 2020 as an amendment to Law No. 4 of 2009 concerning Mineral and Coal Mining, has changed the supervisory mechanism for the implementation of mineral and coal mining in the regions. This study aimed at analyzing the legal mining exploration in the regions associated with the principle of decentralization based on regional autonomy. The method used in this study is empirical juridical, and the results obtained illustrate that the issuance of the new mineral and coal law will cause a minimum role for local governments, both provincial, district and/or city in monitoring mining in the regions, especially in terms of mining exploration. It is because the monitoring system for mining exploration becomes centralized and effects on reducing regional revenue potential. INTRODUCTION In Indonesia, the management of the economy is arranged based on the provisions of article 33 of the 1945 Constitution. In paragraph 2 of article 33, it emphasizes on the production branches which are important and control the livelihood of the people are controlled by the state. In addition, among the production branches that have a significant influence on state revenue are the mining and energy sectors. They have long been the prima donna of the state in gaining revenue to support the development process. Mining management has experienced ups and downs in line with government policies. In the New Order era, it was considered as very centered on the management carried out by the central government, thus it ignored the interests of the area where the exploration of natural resources was carried out. Further, after the 1998 reform, this policy changed with the issuance of Law No. 4 of 2009 concerning Mineral and Coal Mining which gave the authority to local governments to regulate their own mining processes in the regions. This provision was issued, in line with the provision of autonomy for local governments to run local government through decentralization in accordance with Law no. 32 of 2004 which was subsequently amended by the provisions of Law no. 23 of 2014. It provides broad authority for local governments to manage their regional interests, including the mining sector. Indonesia is as a potential country with strong economic grows as the largest in the Southeast Asian region. It has put Indonesia in a good position in anticipating the rapid development of the world economy. Breuer (2018) emphasized that the Gross Domestic Product which is in the range of USD 1 trillion has placed Indonesia as a country in the sixteenth world economic power, which allows Indonesia to highlight its role in global economic policies both in the ASEAN and the G20 forum. Moreover, Al Syahrin (2018) added that the geopolitical and geostrategic position of the Indonesian state which is in the equatorial region and is located between the Australian and the Asian continent has facilitated economic flows. In addition, the position of the Indonesia which is flanked by two oceans, the Pacific and the Indian Ocean, has made Indonesia as a connection of the Asian region both in the south, east and southeast of the Asian continent. In the last decade, Indonesia government have adapted their economic policies to the relevant economy principles theory especially in terms of economic growth. 
The dependence on the export commodities which became the mainstay has been reduced in value through government policies by increasing the implementation of the manufacturing industry. Besides, economic growth is absolutely necessary in the context of managing a country. In Indonesia, it has become one of the bases for assessing the success of government, including other interrelated aspects such as political, democratic and socio-cultural aspects in society. To increase the economic growth, one of the policies is that the government conducted the opening of investment for strategic sectors, both in the form of foreign and domestic investment. They are carried out either directly or indirectly through the formation of the Investment Coordinating Board in Presidential Decree number: 183 of 1998 as well as the formation of Law number 25 of 2007 concerning Investment plus Presidential Decree number 27 of 2009 which regulates One Stop Integrated Services. They have completed the government efforts in improving the flow of investment licensing in various economic sectors in order to facilitate the direct and indirect investment business that were carried out. In the mineral and coal mining sector, the government has issued law number 3 of 2020 which was approved by the parliament on May 12, 2020. It regulates amendments to law number 4 of 2009 concerning mineral and coal mining, then referred to as the Minerba (mineral and coal mining) Law. The main objective of it is to re-regulation of provisions in accelerating and facilitating the implementation of investment in this sector. This was then offset by the issuance of Law No. 11 of 2020 which concerns on Job Creation. It supports the policy of facilitating investment of several clusters licensing. A'la and Supriyadi (2020) emphasized the urgency of the job creation law as one of the solutions in the investment sector. It is because Indonesia's position as both law and welfare state. Furthermore, as the executor of government authorities based on the principle of autonomy, and to increase revenue for the regions, local governments are required to open access to various sectors which support investment to enter in their regions, including the tourism, trade and mining sector. In North Sulawesi, there are approximately 46 Mining Business Permits referred to as IUPs plus 6 clean and clear Contract of Work companies which are still operating in the area. There are 1045 business units that operate by absorbing a large workforce (BPS, 2015). They have assisted local governments in increasing revenue from profit sharing funds transferred by the central government obtained from exploration funds and royalties that have been given by the IUP holder to the government. With the new provisions, it will change the mechanism that has been running and supervised by the local government. Unfortunately, there are several provisions that are considered problematic in the Minerba Law including: 1). Violation of the principle of decentralization in regional autonomy which was the mandate of the 1998 reform. It is returning back the authority for mineral and coal mining to the central government. It reflects on the revocation of articles 4, 7, and 8 of Law Number 4 of 2009 which regulates the control of mineral and coal mining operations; 2). The abolition of reporting obligations to IUP providers/government in terms of exploration and business feasibility studies as a result of the abolition of the provisions of article 43 of the law. 
It effects on the weak government supervision of mining implementation and decreases potential state revenues from the mining sector; 3). Elimination of royalties for every mineral that is mined, which gives negative effects on state revenues from the mining sector as the consequence of the abolition of article 45 of Law Number 4 of 2009. In principle, the withdrawal of the authority of an institution is a common thing, as long as it is carried out in accordance with applicable laws and regulations. In this case, the local government's authority in terms of managing mineral and coal resources in the region is obtained through the attribution authority granted by the Law through the Mineral and Coal Mining Law, as well as the regional government law in accordance with the rules because it was made by the parliament. In this regard, Miriam Budiarjo in (Budiardjo, 2008, p. 64) underlines that those who have the authority have the right to issue orders and make rules and are entitled to obtain compliance with these rules. However, the implementation of the new provisions should not contrast with the spirit of regional autonomy which is the reference in state management through the current reforms that have been rolled out since 1998. The phenomenon above caused the spirit of regional government decreased and regional income reduced. Principally, the delegation of authority of a big country is necessary to maximize the role of regional government in carrying out programs that support economic level of society. It can be achieved when regional government delegate representatives to the central government in particular to the mining sector. In the provisions of article 1 number 23 of law number 30 of 2014 concerning government administration, it is stated that delegation is the delegation of authority from higher government agencies and/or officials to lower government agencies and/or officials with full responsibility and accountability to the recipient of the delegation. Based on these provisions, there will be efficiency in terms of mining management in the regions with the spirit of regional autonomy. Previous studies that discuss mining in relation to regional autonomy can be found in Haris (2015) which focuses more on the issue of discretion given by local governments in terms of granting mining permits in areas that are highlighted from the aspect of government administration. Similarly, Isnaeni (2018) indicates a change in the authority of decentralization in the mining sector from the district level to the provincial level based on the amendment of law number 32 of 2004 concerning regional government to law number 23 of 2014. Furthermore, Senduk (2016) highlights the existence of district/city local governments in terms of mineral and coal mining in relation to the implementation of good governance. This study highlights the enactment of the latest law number 3 of 2020 concerning mineral and coal mining (minerba) which revokes the authority of local governments both provincial and district/city in terms of monitoring mining management in the regions. This study is important to be conducted in order to observe the legal consequences caused by the abolition of articles 4, 7, and 8 of minerba law to the newest one. 
The effects can be seen through the decreasing of local government's role in monitoring deviated behaviors performed by mining operators in the region, decreasing potential regional revenue from exploration fees which are omitted in the new mineral and coal law, and the imposition of a 0% royalty rate for special IUPs in accordance with the provisions of the Job Creation Law. This study aims at analyzing those three problems from a legal point of view with an emphasis on the principles of regional autonomy and efforts to supervise the implementation of the mining process, and efforts to maximize the potential of regional income for the welfare of the society in the region. RESEARCH METHODS This study employs juridis empirical method to explore the research problems emphasizing on the implementation of constitution and legal material as a primary sources. While for the secondary sources, books, journals and other legal materials are used. The data is analyzed by elaborating the theories with the empirical data abaout mining in regional government of North Sulawesi. The data are obtained from regional energy and mineral resources department of North Sulawesi, central bureau of statistics of North Sulawesi, and the ministry of finance, which were taken over a period of two months from December 2020 to January 2021. The data were then analyzed using a qualitative approach and concluded in descriptive form. RESULTS AND DISCUSSION In the frame of a modern state, Indonesia needs principles that philosophically govern the economy of a country. In the theory of economic law, there are principles held by a democratic country to carry out its economic policies towards the welfare of the people. The principles of economic growth, social balance, and sustainability have become absolute necessity in managing the economy of a country. Vaut (2013) views the perspective of social democracy, the three principles of economic management in the form of economic growth, social balance, and sustainability, are absolutely necessary and enforced simultaneously in a state economic policy. It is because the implementation of the three principles can stimulate quality and maintained economic growth that is oriented towards the welfare. In addition, Muhlizi (2017) emphasized that increasing Indonesia's economic development is one of the processes carried out continuously in order to achieve the welfare and prosperity of the people. As an integral part of national development, this goal is reflected in improving the implementation of the state's economy which is accompanied by improving the quality of life of its citizens as stated in article 33 of the 1945 Constitution. In this regard, to improve the welfare of citizens in terms of the national economy, the state has been given a delegation of authority to local governments through the regional autonomy mechanism which became one of the spirits of reform in 1998 by issuing a regulation on regional autonomy in the form of a law number 23 of 2014 concerning Amendments to Law Number 32 of 2004 concerning Regional Government, as well as Law Number 33 of 2004 concerning Financial Balance between the Central Government and Regional Governments. In that regulation, the authority and power of the central government which is delegated to the regional government includes the power to prepare and manage the Regional Revenue and Expenditure Budget as referred as APBD including: 1). Regional fiscal management through taxes and levies collection, 2). 
Transfer funds Management, and 3) other legitimate revenues management as a source of Regional Original Income or PAD On the other hand, the government through its policies has issued regulations in the field of state finance. Karianga (2017) emphasizes the ability of regions to manage and maximize local revenue sources (income) by increasing natural resource potential is one of the important indicators to be the success of regional autonomy. The existence of central to regional transfer funds in the APBD is considered as a supplement. Thus, capabilities in the field of Human Resources (HR) need to improve to complete the perfection of regional financial management. Meanwhile, according to (Habibi, 2016) there are aspects that affect the implementation of decentralization in regional autonomy, such as managerial aspects, organizational Human Resources, bureaucratic culture, and local political aspect. These two capabilities must be managed optimally in terms of increasing regional income in the context of regional development. One of the sectors managed by local government in order to maximize regional income is the mining sector. It has become one of the prima donnas of local government as an effort to improve people's welfare because of the large strategic effect that the mining sector has on the potential for regional income. It then encourages economic growth in the region. In line with this, the government has issued a new Mining Law which replaces the old one. Mining in North Sulawesi North Sulawesi is one of the six provinces that is located on the island of Sulawesi, its position in the northern part, and in front of the Pacific Ocean and directly adjacent to the Philippines and other countries in the Pacific region. Such location has become a strategic gateway for businesses related to the outside world, including trade, mining, tourism, etc. The structure of districts and cities covering the North Sulawesi region can be seen in table 1. Furthermore, North Sulawesi with an area of 1,527,216 hectares, and a mining area covering 517,825 hectares which is 33%. This has made this sector very productive for exploration and exploitation. Thus, it helps the process of increasing the income and welfare of the people of North Sulawesi in the future (Adm, 2013). Mining in North Sulawesi has become one of the strategic sectors in fulfilling the increase in regional income. This is considering the geographical location of the province of North Sulawesi which contains a lot of useful minerals and encourages the government's efforts to improve the economic. The mineral reserves in this province can be seen in table 2. Number District / City Capital City Total of Subdistrict Table 3 explains that the mineral and coal mining sector in North Sulawesi received 36% of the budget allocation of the total 108 billion revenue-sharing allocations transferred by the central government to the province. The availability of these funds is very helpful to improve the welfare of the people of North Sulawesi from the Regional Revenue and Expenditure Budget (APBD) which can lift the economic life of the community where the mine is located. Also, it is associated with the labor sector, regional infrastructure development, and optimizing the SMEs around the mining exploration area. 
Isnaeni (2018) elucidates that the state's attributive authority to natural wealth is based on the provisions of article 33 section 3 of the 1945 Constitution of the Republic of Indonesia and article 2 section 2 of the main agrarian law which is delegated to the central government as a state administration organization. Then, it can be delegated to local governments and customary law communities in accordance with statutory provisions. This kind of delegation is called decentralization. Total of Dorp Decentralization, based on the provisions of article 1 number 8 of law number 23 of 2014 as amended to law number 9 of 2015 concerning Regional Government is the transfer of government affairs by the central government to autonomous regions. In addition, Elvalina (2016) argues that the implementation of the regional autonomy policy in principle concerns the transfer of power/authority along with resources from the central government to the regions. This authority is comprehensive unless otherwise stipulated by the law. The authority that is not given to local governments, as stipulated in Article 10 of Law Number 23 of 2014 concerning Regional Governments. This includes Foreign policy, defense, security, judiciary, national monetary and fiscal affairs, and religion. In the historical development of natural wealth management, especially the empowerment of mineral and coal mining after the reform, the government has issued regulations related to mining including law number 4 (2015) argues that permission is an approval from the authorities based on laws or government regulations in certain circumstances. In Article 1 point 7 of Law number 4 of 2009 a mining business permit, referred to as an IUP, is a permit to carry out a mining business which is all of the stages of activities in the context of mineral or coal management and exploitation includes general investigation, exploration, feasibility studies, construction, mining, management and or refining or development and or utilization, transportation and sales, as well as postproduction activities of mining. The abolition of the authority of local governments in terms of issuing mining permits will slow down the process of improvement of community welfare through a decentralized system since it aims at accelerating the realization of people's welfare. The convenience of bureaucratic services in the regions that require fast and comprehensive services without waiting for justification from the central government will ultimately have a positive impact on regional and national economic performance in general. Simandjuntak (2016) elucidates that offering autonomy to regions is intended to facilitate community welfare. It is by improving services and empowering the community so that they increase competitiveness based on the principles of justice, democracy and equity within the framework of the Unitary State of the Republic of Indonesia. Improving economic performance through a decentralization system with the ease of investing, especially in the mineral and coal mining sector, will create a multiplier effect on other sectors such as absorption of labor, increasing remuneration for production factors in the form of land rent, interest, and wages that can lead to other interrelated industries that ultimately provide positive values for both regional economic development and increasing the economic level of local communities. 
Moreover, it lets business actors focus their efforts on the regions and will deal directly with local governments, so that regional economic performance improved. License for decentralization is one of the economic pillars and an important part of the government's policy that is in line with the need to regulate investment. Licensing is always related to supervision towards its object. Wijoyo in Lestari and Djanggih (2019) states that the purpose of the licensing to control community activities by influencing the community to follow the established method to achieve certain goals. Supervision on the implementation of mineral and coal mining includes mining license givers, mining actors, and mining activities. of 2014 which was amended by law number 9 of 2019 concerning regional government. Supervision in Mineral and Coal Mining Supervision on mineral and coal mining operations in the regions probably cause a tug between local and central government's policies. Firdaus in (Firdaus et al., 2016, p. 2) emphasizes that decentralization has caused tug-of-war between local governments, especially between provincial and district/city governments. In addition, there is often a conflict of interest between the provincial government and district/city governments in terms of the exploration of regional natural resources. This is partly because the provisions concerning the authority to issue Mining Business Permits (IUP), hereinafter referred to as IUPs, have been amended several times. Law number 4 of 2009 gives the authority to issue community mining permits for mineral and metal communities, coal, non-metallic minerals and rocks in community's mining areas to regency governments. Furthermore, in Law Number 23 of 2014 the authority was renewed by giving the authority to the provincial government. Principally, this supervision has been regulated in the Minister of Energy and Mineral Resources Regulation number 26 of 2018. In this provision, it is explained that the authority to supervise mining in the regions is still carried out by the Governor based on the provisions of Article 44 paragraph 2. The regulation includes issuance of mining business permits and the implementation of guidance and supervision its holders. Furthermore, in the provisions of article 45 paragraph 1, it is explained that the authority of the governor to implement an effective mining engineering principles which are carried out by mining inspectors. The principles include a). evaluation of periodic reports and special reports; b) Periodic inspection or at any time if necessary; c). Assessment of the successful implementation of the program. These three principles are supervised inspectors by means of inspection, investigation, and testing. In the latest development, with the issuance of the new Minerba Law, the authority is then withdrawn by the central government which causes the potential accumulation of duties and responsibilities of the central government in terms of supervising the implementation of mineral and coal mining. However, it results inefficient and less productive mining supervision in the regions due to the lack amount of human resources at the central level to carry out supervision. 
Furthermore, with the abolition of the old provisions that are Article 43 of the Minerba Law, it has triggered opportunities for sporadic exploration practices which have bad impacts on environmental aspects because there is not a supervision from the local government regarding the results obtained and minerals are excavated and lifted by the holders. In fact, the provisions of article 142 number (1) contains Governors and regents/mayors are required to report mining businesses in their respective territories each at least once every 6 months to the minister. Then, the provisions of Article 143 of the Minerba Law explains that the Regent/Mayor conducts guidance and supervision of people's mining businesses. These two provisions exactly give rights to the government to supervise the mining and prevent the unexpected results during the mining process. The causes of the abolition of the old provisions of minerba are that the bloom of corruption cases that were revealed relating to the granting of mining permits by local governments which indicated corrupt practices. Arifin and Irsan (2019) state that licensing in Indonesian still has a dilemma in the form of abuse of authority by regional heads along with bureaucratic ranks such as nepotism granting mining permits, transferring land functions, etc. In addition to this problem, Redi (2016) verified that there were 77% of the people who experienced an increase in welfare due to mining practices without permits in addition to 22% who experienced stagnant income and 2% decreased, all of which were carried out in smallholder mining. Those phenomenon above are regarded as the cause of the abolition of authority of local governments, both district/city governments and provincial governments and returning their authority to the central government through the new mineral and coal law. However, the abolition effects on juridical implications for the implementation of mineral and coal mining exploration in the regions. Senduk (2016) states that there are three juridical implications of the absence of local government authority in the issuance of Mining Business Permits. First, there is an additional burden on the regional government, in this case the district/city government in overcoming the adverse impacts of mineral and coal mining exploration in their area. Second, mineral and coal mining exploration has become passive, and is not in line with the principle of decentralization according to the principle of autonomy. Third, the occurrence of difficulties for local governments to take precautions in terms of supervising the implementation of mining exploration in the region. It gave the lack of authority for region. Another implication of this abolition was the position of the local government very weak in terms of supervising the implementation of mineral and coal mining in their area. As a result, if problems occur in the implementation of the mining, the local government will lose its legitimacy to resolve the mining problem. And then, it effects on the completion process that taking a long time because we have to wait for the authority from the central government to solve it the problem occurred. In addition, regional taxes and levies still include in the authority of region, to determine, collect and use regional taxes on non-metallic minerals and rocks. Thus, it is potential to trigger a conflict of norms between the center and region. 
Further, the local government loses its authority in terms of fostering and supervising the implementation of mining processes in the regions
2021-09-01T15:05:06.815Z
2021-06-30T00:00:00.000
{ "year": 2021, "sha1": "9cfc033cde54bd1908a770bc6d2d0c439ad6550b", "oa_license": "CCBYNCSA", "oa_url": "http://journal.iain-manado.ac.id/index.php/JIS/article/download/1406/975", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "aea33921c8999c62f37f7e399697e7f570d311cb", "s2fieldsofstudy": [ "Law" ], "extfieldsofstudy": [ "Business" ] }
8039004
pes2o/s2orc
v3-fos-license
Relation of exaggerated cytokine responses of CF airway epithelial cells to PAO1 adherence In many model systems, cystic fibrosis (CF) phenotype airway epithelial cells in culture respond to P. aeruginosa with greater interleukin (IL)-8 and IL-6 secretion than matched controls. In order to test whether this excess inflammatory response results from the reported increased adherence of P. aeruginosa to the CF cells, we compared the inflammatory response of matched pairs of CF and non CF airway epithelial cell lines to the binding of GFP-PAO1, a strain of pseudomonas labeled with green fluorescent protein. There was no clear relation between GFP-PAO1 binding and cytokine production in response to PAO1. Treatment with exogenous aGM1 resulted in greater GFP-PAO1 binding to the normal phenotype compared to CF phenotype cells, but cytokine production remained greater from the CF cell lines. When cells were treated with neuraminidase, PAO1 adherence was equalized between CF and nonCF phenotype cell lines, but IL-8 production in response to inflammatory stimuli was still greater in CF phenotype cells. The polarized cell lines 16HBEo-Sense (normal phenotype) and Antisense (CF phenotype) cells were used to test the effect of disrupting tight junctions, which allows access of PAO1 to basolateral binding sites in both cell lines. IL-8 production increased from CF, but not normal, cells. These data indicate that increased bacterial binding to CF phenotype cells cannot by itself account for excess cytokine production in CF airway epithelial cells, encourage investigation of alternative hypotheses, and signal caution for therapeutic strategies proposed for CF that include disruption of tight junctions in the face of pseudomonas infection. Background Chronic infection of the lung with Pseudomonas aeruginosa and the inflammatory response it stimulates cause much of the morbidity and nearly all the mortality in CF patients. Since the inflammatory response can be reduced pharmacologically in CF patients without allowing infection to increase and with benefit to the patient [1], and since infants and young children with CF have interleukin-8 (IL-8) and neutrophil count in BAL fluid significantly in excess of that observed for non-CF children with comparable bacterial burden [2,3], many investigators have concluded that the inflammatory response is excessive and deleterious in the CF lung [reviewed in [4]]. Though the cellular origin of the excessive inflammatory response in CF is not fully established, in vivo mouse CFTR complementation data suggest that the airway epithelium plays a substantive role in driving excess inflammation [5]. In many, but not all, model systems, CF airway epithelial cells respond to P. aeruginosa or its products with increased IL-8 and/or IL-6 production compared to non-CF cells [4,[6][7][8][9][10][11]. In addition, in some, but not all, model systems binding of P. aeruginosa to CF airway epithelial cells is in excess of its binding to non-CF cells [12][13][14][15][16]. Taken together, these data have been interpreted to mean that the excess cytokine responses in CF epithelium are due to increased stimulus applied at the cell surface by elevated bacterial adherence in the CF phenotype cells [15]. Our prior studies, in two separate cell model systems, have shown that there is increase in available asialoGM1 (aGM1), which binds to P. aeruginosa pilin and flagellin and serves as a major ligand for this organism, on the CF member of the cell pair [17][18][19]. 
In these same cell pairs, there is an increased response of IL-8, IL-6, and granulocyte macrophage colony stimulating factor (GM-CSF) to a laboratory strain of P. aeruginosa, PAO1, in the CF member of the pair [6]. However, these studies did not directly address the relationship between PAO1 binding and the cytokine response. In order to test the hypothesis that the cytokine response of CF phenotype airway epithelial cells to PAO1 can be attributed solely to increased pseudomonas adherence, we took several approaches. First, we determined whether cytokine responses and P. aeruginosa adherence changed in parallel with increasing amounts of added PAO1. Second, we manipulated the cells to alter surface receptor access to P. aeruginosa. We incubated CF and non-CF cells with exogenous aGM1 to increase the binding sites for P. aeruginosa, and treated them with neuraminidase to add or expose more desialylated binding sites. We then compared cytokine production and binding of PAO1 in the altered cell preparations. In the cell lines that form tight junctions, we increased access to native P. aeruginosa binding sites on the basolateral surface by disrupting tight junctions [20], then tested the ability of the treated cells to respond to PAO1. Our results indicate that excess cytokine responses in CF airway epithelial cells do not correlate well with adherence of P. aeruginosa, and suggest that the excess cytokine response cannot result solely from the increased adherence of P. aeruginosa. Cell lines pCEP and pCEP-R cell lines The development and maintenance of this matched pair of human tracheal epithelial cells derived from SV40 transformed human tracheal epithelial cells (9HTEo-, kindly provided by Dieter Gruenert, University of Calif, San Francisco) have been described previously and these methods were followed here [6,18,21]. 16HBE-14o-AS and S cell lines The development and maintenance of these cell lines have been described previously and these were the methods used [6,22]. Bacteria The laboratory isolate PAO1 and its GFP derivative strain were kindly provided by Alice Prince, Columbia University, NY, and were grown as previously described [6]. PAO1 Binding Assay Green fluorescent protein (GFP)-PAO1 at 10 9 CFU were incubated with cell monolayers of pCEP or pCEP-R cells for 1 hr. Cells were washed with Hanks buffered salt solution (HBSS), lysed, and GFP fluorescence quantitated by fluorimeter. Serial dilutions of GFP-PAO1 were used to assess the change in GFP-PAO1 binding over a range of concentrations. Stimulation of cytokine production by P. aeruginosa These studies were performed as previously described [6]. Briefly, 9/HTEocells, pCEP and pCEP-R, were plated at a density of 1 × 10 6 cells per well on vitrogen-coated 24-well plates, and the sense and antisense clones of 16HBEocells were plated at density of 1 × 10 6 cells per 12 mm Millicell HA filter. Eighteen to 24 hr before the experiment, cells were switched to serum-free media, because PAO1 is serum-sensitive. Washed bacterial aliquots (0.5 ml/well) were incubated for 60 min with the confluent monolayers of epithelial cells at 37°C. Non-treated control wells were processed similarly with HBSS alone. For polarized 16HBE-14o-cells on filters, PAO1 and other treatments were applied to the apical surface only. As a positive control, cells were stimulated for 1 hr with IL-1β (100 ng/ml) and tumor necrosis factor (TNF)-α (100 ng/ml), (Sigma, St. Louis, MO). 
Cell monolayers were washed 3 times in Hanks Buffered Salt Solution (HBSS), then incubated for 24 hr in 0.5 ml serum-free cell culture medium containing 100 µg/ml gentamicin. Media were collected and analyzed for IL-8 and IL-6 by enzyme linked immunoadsorbant assay (ELISA), and normalized to the protein concentration of the lysed cells. Glycophospholipid addition and fluorescence microscopy 250 µg (or 5 µl of 10 mg/ml stock in dimethylsulfoxide (DMSO)) monosialoganglioside (GM1) or gangliotetraosyl ceramide (aGM1) (Matreya, Inc), was added in 0.195 ml of serum-free media for 1 hr with gentle rocking to pCEP and pCEP-RF cells. Cells were then washed twice with HBSS, and PAO1 was applied as above. Immunofluorescence was performed by incubating the cells with a 1:1000 dilution of rabbit polyclonal anti-aGM1 (Wako Pure Chemical Industries Ltd, Osaka, Japan) in phosphate buffered saline (PBS) with 0.1% bovine serum albumin (BSA), for 1 hr at 37°C, followed by two washes with PBS, and fixation with 4% paraformaldehyde (PFA) for 1 hr, and washed with PBS. Monolayers were then incubated with FITC-conjugated goat anti-rabbit antibody (Jackson Immunoresearch Laboratories Inc.) diluted 1:100 PBS with 0.1% BSA for 1 hr at room temperature, washed with PBS and fixed again with 4% PFA for 20 minutes. Cells were mounted under coverslip with Fluoromount G antifade (Southern Biotechnology Associates, Inc, Birmingham, AL) and visualized by fluorescence microsopy using a fluorescein filter set. FITC-Peanut Agglutinin (PNA, which binds to aGM1), or FITC-Maakia Amurensis lectin (MAL I, which recognizes sialic acid in α2,3 linkages to GlcNAC), at 100 µg in 300 ml PBS, was incubated with cells for 30 minutes after fixation in 4% PFA, washed with PBS and fixed in methanol for 10 minutes. Cells were mounted under coverslip and visualized by epifluorescent microscopy with a Zeiss 100 Axiovert, 40X water immersion objective, NA 0.75, and FITC filter set. Fluorescent-conjugated lectins were purchased from Vector Laboratories, Burlingame, CA. Treatments to disrupt tight junctions The integrity of junctional complexes was diminished in two ways: first, by calcium chelation by incubating 16HBEo-monolayers with 30 mM EGTA in PBS buffer, for 60 min, or second, by overnight incubation with 250 µg of a monoclonal mouse E-cadherin antibody (Zymed Laboratories, San Francisco, CA) in 0.5 ml serum-free media. Transepithelial Resistance Transepithelial resistance (TER) of cell monolayers grown on transwell filters was measured with a Millicell-ERS resistance system (Millipore, Bedford, MA) meter and STX-2 Electrodes (World Precision Instruments, Inc). Electrodes were equilibrated in cell culture media at room temperature, and measurements made with one electrode placed inside the insert and the other outside in the basolateral media. Baseline resistance of filters alone was determined. The TER of the polarized monolayers on filters was determined prior to treatments, immediately following treatment, and then at the final 24 hr time point. Cytotoxicity Assays To quantify cytotoxity of treatments, the concentration of lactate dehydrogenase (LDH) released from cells into the medium was measured using materials purchased from Sigma Chemical Co. (St. Louis, MO) at the same time point as was used for measuring cytokines. Statistics Results are expressed as mean ± standard error of the mean (SEM). 
All experiments reported were repeated on at least three separate occasions, and each individual cytokine experiment was performed in triplicate wells, except as specified in the legends of Figures 4 and 6. To combine multiple experiments of the 9HTEo- cell lines, the secreted cytokine concentration (pg/mg protein) of pCEP-R cells stimulated with 10⁹ CFU of PAO1 at 24 hr was set to 100% for each experiment, and other concentrations are expressed relative to this value. Most analysis was performed by t-test, some by ANOVA, using SigmaPlot software (SPSS, Inc., Chicago, IL). Results were considered significant when p ≤ 0.05.

Binding of GFP-PAO1 to the cell lines
Our prior data indicate that for both the 16HBEo- AS and S cell pair and the 9HTEo- pCEP and pCEP-R cell pair, IL-8 and IL-6 production increased with addition of increasing amounts of PAO1 over the range of 10⁷ to 10⁹ organisms [6]. Figure 1 illustrates the changes in GFP-PAO1 binding with increasing concentrations of bacteria. For the 16HBEo- cells, GFP-PAO1 binding also increased with added PAO1 from 10⁷ to 10⁹ CFU/mL, but for the 9HTEo- cells, binding increased from 10⁶ to 10⁸ CFU/mL and did not increase further with 10⁹ CFU/mL, even though the cytokine responses did. Binding of GFP-PAO1 was similar in untreated 16HBEo- sense (S) and antisense (AS) cell lines at all concentrations, and in untreated 9HTEo- pCEP and pCEP-R cell lines at all concentrations (Figure 1). Therefore, the previously reported increase in available aGM1 in the CF member of the pairs, confirmed below, was not necessarily associated with increased GFP-PAO1 binding, and increased cytokine production was not invariably associated with increased binding of GFP-PAO1.

Figure 1. Binding of GFP-PAO1 to airway epithelial cells. GFP-PAO1 was added to cultured cells for one hour at 37°C, washed, and the cultures lysed and fluorescence determined (expressed in arbitrary units). A and B, 9HTEo- cells; C, 16HBEo- cells. For the 9HTEo- cells, binding appears to saturate at about 10⁸ organisms/well (A), but for the 16HBEo- cells, binding increases with increasing dose of bacteria over the range tested (C). The 9HTEo- cells change GFP-PAO1 binding with addition of aGM1 or GM1, or with neuraminidase treatment (B) (*, significantly different from no treatment, p < 0.05), but the 16HBEo- cells do not (C).

Providing additional P. aeruginosa binding sites by addition of asialoGM1
Others report that exogenous aGM1 is incorporated into the cell membrane and provides additional binding sites for P. aeruginosa [23]. We therefore incubated our cell lines with exogenous aGM1 and measured cell-associated aGM1, GFP-PAO1 binding, and cytokine responses. Incubation of the 9HTEo- cell lines with aGM1 resulted in increased cell-associated aGM1, as demonstrated both by specific antibody binding and by binding of PNA, a lectin which recognizes aGM1 (Figure 2). There was no change in LDH release (Table 1). Prior to treatment, as reported previously [19], the 9HTEo- pCEP-R cells displayed more aGM1 than the 9HTEo- pCEP cells (Figure 2A vs E for aGM1 and C vs G for PNA), but following treatment, the two cell lines had similar aGM1 antibody fluorescence and PNA fluorescence (Figure 2, B vs F and D vs H). Prior to treatment, binding of GFP-PAO1 to the two cell types is equivalent (Figure 1, Table 1). After aGM1 incubation, both cell lines showed increased GFP-PAO1 binding, but more so in the non-CF than in the CF phenotype cells (Table 1). Untreated CF phenotype cells had increased IL-8 and IL-6 production in response to PAO1 compared to normal, as previously reported [6]. Following incubation, although aGM1 and PAO1 binding increased in the normal cells, cytokine production did not, but IL-8 production by the CF phenotype cells showed a statistically significant increase (Figure 3). As a control, the cells were loaded with GM1, which is less efficient in binding PAO1. Following GM1 preincubation, despite the increase in GFP-PAO1 binding (Table 1), there was a significant decrease in production of both IL-8 and IL-6 by the CF phenotype cell line (Figure 3), possibly because more PAO1 was bound at sites that do not initiate an inflammatory signal. No changes in cytokine response to TNF-α/IL-1β occurred following incubation with aGM1 or GM1 (data not shown). In polarized epithelial cell lines (16HBEo-), the addition of aGM1 or GM1 did not increase GFP-PAO1 binding (Table 2, Figure 1), nor did it alter the proinflammatory cytokine response to P. aeruginosa or TNF-α/IL-1β (data not shown).

Providing additional P. aeruginosa binding sites by enzymatic removal of sialic acid
A change in MAL I staining is apparent in cells that have been treated with C. perfringens neuraminidase (Figure 4B, panels B and E), but the change in MAL I fluorescence after S. typhimurium neuraminidase is less clear (Figure 4B, panels E and F). Clostridium perfringens neuraminidase treatment significantly increased GFP-PAO1 binding on the non-CF cell line (Table 1), but the cytokine responses to PAO1 or TNF-α/IL-1β did not increase in the non-CF cells (Figure 4). There was no significant increase in GFP-PAO1 binding in the CF phenotype cells, even though they showed increased IL-8 and IL-6 responses following treatment with the broad-spectrum neuraminidase of C. perfringens. Following treatment with the more specific S. typhimurium enzyme, only IL-8 was increased (Figure 5). C. perfringens neuraminidase treatment did not alter either the IL-8 response or PAO1 binding in the polarized cell lines; however, the IL-6 response of the CF phenotype line was reduced (data not shown).

Exposure of basolateral receptors to P. aeruginosa
We expected that disrupting the tight junctions in the monolayer would permit PAO1, applied to the apical surface, to access basolateral receptors that were not available when the monolayer was intact [23], and thereby would increase the cytokine response to PAO1. The tight junctions in both the Sense (control) and Antisense-treated (CF phenotype) 16HBEo- cell lines were disrupted by treatment with EGTA or antibodies to E-cadherin, as shown by the decrease in transepithelial resistance following these treatments (Table 2). Incubation of the filters without disrupting agents for the time course of the experiment did not alter transepithelial resistance. When incubation with the disrupting agents was combined with P. aeruginosa exposure, the transepithelial resistance fell even further, to approximate that of the filters alone. The IL-8 and IL-6 responses to PAO1 or to no stimulation, with or without preincubation with aGM1 or GM1, were measured. At baseline, the non-CF phenotype cell lines show both a greater transepithelial resistance and a greater amount of lactate dehydrogenase in the medium than the CF phenotype cell lines (pCEP-R and 16HBEo- AntiSense) (Tables 1 and 2). Apoptosis is reported to be reduced in CF versus non-CF cell lines [24,25], which may account for the lesser release of LDH.
However, none of the treatments that alter PA receptor availability further disrupted the integrity of cellular membranes or increased LDH release (Table 1). Although the disruptive treatments had similar effects on resistance in CF and nonCF phenotype cells, the cytokine response to PAO1 increased with disruption of tight junctions only in the CF phenotype cells. There was no increase in cytokine production following TNF-α/IL-1β stimulation: in fact in one sample a small decrease was seen ( Figure 6). In order to test whether the EGTA treatment, in and of itself, altered cytokine production by airway epithelial cell lines, we treated non-polarized 9HTEocell lines with EGTA in the same manner as it was applied to the 16HBEo-cells. There was a slight but statistically significant decrease in IL-6 in response to PAO1 produc-Lectin binding to 9HTEo-cell pairs following treatment with neuraminidase The studies reported here were designed to test the hypothesis that increased binding sites for PAO1 result in increased stimulus and increased cytokine production in Neuraminidase treatment alters cytokine responses in 9HTEo-cell lines Figure 5 Neuraminidase treatment alters cytokine responses in 9HTEo-cell lines. IL-8 (A) and IL-6 (B) responses to 10 9 CFU PAO1 or TNF-α/IL-1β are shown. For 9HTEo-pCEP cells, only IL-8 secretion increased, and only following treatment with C. perfringens neuraminidase (C.p.), not with the enzyme from S. typhimurium. However, 9HTEo-pCEP-R cells showed increased IL-8 response to PAO1 following treatment with either enzyme and IL-6 response to C.p. neuraminidase. Three separate experiments were performed, each with triplicate wells. (*, different from untreated samples, p < 0.05). * response to PAO1 in airway epithelial cells (Figure 7). The hypothesis was not supported. Surprisingly, although aGM1 was increased on the CF phenotype cells studied here under basal conditions, GFP-PAO1 binding was not, so the increased cytokine responses of CF phenotype cells to PAO1 in the basal state [6] cannot be attributed solely to increased PAO1 adherence. Moreover, increasing the binding of PAO1 to non-polarized normal airway epithelial cell lines (9HTEo-pCEP), either by adding aGM1 or by cleaving sialic acid at the cell surface, does not change the cytokine responses to PAO1. CF phenotype cells (9HTEo-pCEP-R) still respond to PAO1 with greater cytokine release than their matched normal counterparts, despite significantly less PAO1 adherence than normal IL-8 Normalized Treatments that disrupt tight junctions increase the PAO1-stimulated IL-8 response, but not the TNF-α/IL-1β stimulated response of CF-phenotype cells Figure 6 Treatments that disrupt tight junctions increase the PAO1-stimulated IL-8 response, but not the TNF-α/IL-1β stimulated response of CF-phenotype cells. 16HBEo-Sense (open bars) and Antisense (black bars) monolayers on filters were pretreated for 60 minutes with 30 mM EGTA prior to 1 hr. stimulation with 10 9 CFU PAO1/(EGTA, n = 5 independent experiments, each with triplicate wells), or an overnight incubation with 250 µg monoclonal antibody to E-Cadherin (ECAD, n = 3 independent experiments, each with triplicate wells), and the IL-8 (A, C) and IL-6 (B, D) response measured 24 H later by ELISA. The IL-8 response to PAO1 was significantly (*) increased in the 16HBE-Antisense cells following pretreatment with E-Cadherin antibody (p = 0.034) or EGTA (p < 0.001). The 16HBEo-AS cells produced significantly more IL-8 than their sense congeners (p < 0.05). 
There was a significant (*) reduction in the IL-6 (p = 0.05) and IL-8 (p = 0.041) response to TNF-α/IL-1β after overnight incubation with the E-cadherin antibody (n = 1 experiment of triplicate wells). There was a significant increase in IL-8 in response to PAO1 prior to treatment in the CF phenotype cells compared to normal (p = 0.001).

It is likely that there are multiple ligands for PAO1 on airway epithelial cells. Two that have been identified are aGM1 and CFTR itself [18,24], and it is likely that GM1 is a weak binding site as well. Thus, it is possible that GFP-PAO1 adheres more to increased aGM1 binding sites on the CF cells (which apparently signal for inflammatory mediators) but may adhere less at other sites, perhaps at CFTR itself, making it appear that adherence has little relation to cytokine response when in fact only a subset of pseudomonas receptors is responsible for the increased response. Nevertheless, attempts to increase aGM1 directly did not produce the expected changes in the cytokine responses of non-CF cell lines, but did enhance the responses of the CF cell lines. Adding exogenous aGM1 effectively equalized surface aGM1 in both normal and CF cell lines, as measured by antibodies to aGM1 or PNA lectin binding, and actually increased GFP-PAO1 binding to the non-CF relative to the CF cell line. Were P. aeruginosa binding to aGM1 the principal determinant of the pro-inflammatory cytokine response, one would expect that the response of the normal cell lines under these conditions would equal or exceed that of the CF phenotype cell lines. However, the CF cell lines still produced more IL-8 in response to PAO1. Although one could argue that increasing aGM1 in this manner might produce binding sites that are not connected to the signaling machinery, others have shown that exogenous aGM1 incorporates into cellular membranes, increases the binding of P. aeruginosa, and augments biological responses to P. aeruginosa, including cytotoxicity, internalization and the apoptotic response [23]. Moreover, the CF cell line treated in this manner did augment its cytokine response to PAO1 (but not to another stimulus, TNF-α/IL-1β, eliminating the possibility of a generalized increase in cytokine production). In contrast, GM1 preincubation, which also resulted in increased PAO1 binding, did not increase the IL-8 response to PAO1: in this case, the increase in binding of P. aeruginosa in the GM1-incubated cells is probably not coupled to a proinflammatory signaling cascade. Association of PAO1 with a non-signaling GM1 ligand could block access to aGM1 receptors, actually reducing the response.

Figure 7. Cartoon comparing CF and non-CF epithelial cell responses to P. aeruginosa and illustrating two hypotheses to explain the increased cytokine response from CF airway epithelial cells. Bacterial adherence to the cell stimulates an intracellular signaling cascade. CF cells produce more IL-8 and IL-6 than non-CF cells. In the first hypothesis, increased bacterial adherence to the CF cell leads to increased signal, with consequent increase in IL-8 and IL-6 secretion. In the second hypothesis, the CF cell responds to each binding event with amplification of the signal compared to non-CF cells, and increased IL-8 and IL-6 secretion.
IL Another attempt to alter access to aGM1 binding sites also did not reveal association between binding and inflammatory response. Treating the cells with the broad spectrum neuraminidase from C. perfringens resulted in significantly increased lectin binding sites and IL-8 response to PAO1 (but not to TNF-α/IL-1β) in both CF phenotype and non-CF phenotype 9HTEo-cells. However, the excess cytokine response of the CF cells was preserved following neuraminidase treatment, despite equalizing apparent binding of PAO1. Neuraminidase from S. typhimurium, which preferentially removes sialic acid in the α2,3 linkage, produced a significant increase in IL-8 production in response to PAO1 only in the CF phenotype cells. We made no attempt to assess with lectin binding or specific antibody the nature of the basolateral binding sites revealed in polarized cultures by disruption of tight junctions. Others have shown that allowing access to basolateral receptors greatly increases P. aeruginosa binding, cytotoxicity, internalization, and apoptosis independent of CFTR [23,26,27]. Nevertheless, opening tight junctions did not enhance PAO1-stimulated cytokine production in the non-CF cell line, whereas it did in the CF congener. Conclusion The data presented here indicate that the increased cytokine responses in CF airway epithelial cells to P. aeruginosa cannot be attributed solely to increased adherence of the organism. There are several implications of this finding. First, these data focus attention on alternative hypotheses to explain the increased inflammatory response of the CF airway epithelial cell (Figure 7). Our data make the hypothesis that increased pseudomonas binding entirely accounts for the increased inflammatory response of CF epithelium [15] much less likely. Alternatively, there may be increased amplification of the signal from the bacterium in CF cells to account for the increased response. Considerable attention has been paid to the excess activation of NF-κB in CF epithelial cells. Some investigators find that there is activation of this transcription factor even in the unstimulated state in CF epithelial cells, and others find that it is activated to excess only under conditions of stimulation. This pivotal transcription factor could account for a panoply of abnormalities, including the excess cytokine production documented here, but also increased release of MMP-9 and reduced apoptosis of CF airway epithelial cells. Others have proposed that in CF there is failure of anti-inflammatory control mechanisms such as IL-10, NO, or transcription factors that compete with NF-κB for helicases, or there may be subtle abnormalities in both the pro-and antiinflammatory arms of the cascade [reviewed in [4]]. A second caveat raised by our data is that disrupting tight junctions in the CF epithelium can markedly increase inflammatory responses to P. aeruginosa, even if the bacteria are applied only briefly and the cells are given opportunity to recover. Moreover, the combination of disrupting agents and P. aeruginosa produced complete loss of the electrophysiologic barrier in a manner that EDTA or anti-E-cadherin did not. These observations signal caution for therapeutic strategies that propose to access the basolateral surface of CF airway epithelial cells by disrupting the tight junctions in vivo [28,29], especially in patients already infected with P. aeruginosa.
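For readers who want to make the normalization described in the Statistics section concrete (cytokine secretion expressed as pg per mg of cell protein, then scaled so that the PAO1-stimulated pCEP-R response of each experiment equals 100%), the following is a minimal sketch in Python. The ELISA readings, protein concentrations, the reference value of 2000 pg/mg, and the use of Welch's t-test are illustrative assumptions, not the authors' data or exact procedure.

import numpy as np
from scipy.stats import ttest_ind

def normalized_response(cytokine_pg_ml, protein_mg_ml, reference_pg_per_mg):
    """pg of cytokine per mg of cell protein, expressed as % of a reference condition."""
    pg_per_mg = np.asarray(cytokine_pg_ml, dtype=float) / np.asarray(protein_mg_ml, dtype=float)
    return 100.0 * pg_per_mg / reference_pg_per_mg

# Illustrative triplicate wells from one experiment (IL-8 ELISA in pg/ml; cell protein in mg/ml).
# The reference of 2000 pg/mg stands in for the mean PAO1-stimulated pCEP-R value of that
# experiment, which is the quantity set to 100% in the text.
pcep_r_pao1 = normalized_response([820, 910, 870], [0.42, 0.45, 0.44], reference_pg_per_mg=2000)
pcep_pao1   = normalized_response([310, 280, 350], [0.47, 0.44, 0.46], reference_pg_per_mg=2000)

stat, p = ttest_ind(pcep_r_pao1, pcep_pao1, equal_var=False)   # Welch's t-test
print(f"pCEP-R {np.mean(pcep_r_pao1):.0f}% vs pCEP {np.mean(pcep_pao1):.0f}%, p = {p:.3f}")

Scaling each experiment to its own pCEP-R reference keeps day-to-day variation in absolute secretion from dominating when results from separate experiments are pooled.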
2014-10-01T00:00:00.000Z
2005-07-11T00:00:00.000
{ "year": 2005, "sha1": "27702490d80e2974a10ac6e7987bc104aafb8b6a", "oa_license": "CCBY", "oa_url": "https://respiratory-research.biomedcentral.com/track/pdf/10.1186/1465-9921-6-69", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fc21b7f681870a1afe6b85231a9b7488ba24a286", "s2fieldsofstudy": [ "Biology", "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
267533678
pes2o/s2orc
v3-fos-license
Dietary Knowledge and Eating Habits among Patients with Type 2 Diabetes in Lebanon Little is known about the dietary knowledge (DK) and eating habits (EHs) of patients with type 2 diabetes (T2D) in Lebanon. Therefore, the aim of this study was to assess the DK and EH of the population with T2D and determine their associated factors. A cross-sectional survey enrolling 351 patients with T2D was carried out, using the snowball sampling technique. The survey used the UK Diabetes and Diet Questionnaire and the Dietary Knowledge questionnaire to assess participants' EH including the frequency of consumption of certain foods and their knowledge of food groups and food choices. While a higher DK index indicated better knowledge, a higher EH index indicated less healthy EH. Independent sample T-test and Mann–Whitney test were used for dichotomous variables, and ANOVA and Kruskal–Wallis tests were used for polytomous variables. Correlation analysis tested the association between two continuous variables. Two multiple linear regression models were used to identify factors associated with DK and EH. Overall, 67% of participants had good or adequate DK, and around 25% and 75% of them had healthy and less healthy EH, respectively. Better knowledge was significantly related to occupation, BMI, presence of comorbidities, and HbA1c testing during the last 3 months. Higher family income, physical activity, family history of diabetes, receiving help in medication administration from family or friends, and higher DK level were factors associated with healthier EH. Nutrition education and awareness campaigns aimed at patients and their families are needed to empower patients with adequate DK and skills to facilitate the adoption of healthy EH. Introduction Diabetes is considered one of the global health emergencies of the 21 st century, afecting over 537 million adults worldwide in 2021, and its prevalence is estimated to rise to 783 million by 2045 [1].In 2019, the world's highest diabetes prevalence was reported in the Middle East and North Africa (MENA) region at 12.2% [2].In Lebanon, a country in the Eastern Mediterranean basin, the prevalence of type 2 diabetes (T2D) is estimated to be 7.95% among adults [3] and 15% among adults in Beirut alone [4], with an incidence of 17.2 per 1000 person-years which stands on the high side for the MENA region [5].Poor HbA1c and blood pressure control, unhealthy lipid profles, and physical inactivity are common among patients with T2D in Lebanon [6], and diabetes complications afect 22% of them [3]. 
With this alarming increase, and in order to decrease complications and diabetes-related mortality, this health problem requires a comprehensive management plan where patients should be educated to make informed decisions about diet, exercise, weight, and medications, as recommended by the American Diabetes Association (ADA) [7].In fact, patient education has been proven to improve knowledge, dietary behaviors, and health outcomes in patients with T2D, including HbA1c and mortality reductions, lower weight and body fat, and improved quality of life [8].Also, knowledge of diabetes was found to be associated with compliance with treatment and a decrease in complications [9].In Jordan [10] and Nigeria [11], studies reported poor dietary knowledge (DK) levels in the population with T2D, thus hindering diabetes outcomes.In Lebanon, studies showed that patients' knowledge and practice scores related to diabetes' self-management were unsatisfactory [12].However, a diabetes education program helped improve the glycemic levels, dietary habits, body anthropometrics, and lipid profle of Lebanese patients with T2D [13]. In addition to genetics, the economic development and urbanization, the nutrition transition, and the subsequent divergence from the Mediterranean diet toward a more westernized diet are all important factors underlying the increase in diabetes prevalence in the MENA region generally and in Lebanon specifcally [14].Individuals' eating habits (EHs) are now characterized by increased consumption of refned carbohydrates, added sugar, fats, and processed and animal source foods along with reduced fruit and vegetable intake and physical inactivity [15,16]. Given the increasing prevalence of T2D, the poor EH, and the defcient knowledge in terms of diabetes management among patients with diabetes in Lebanon, and since studies investigating DK and current dietary habits in the population in Lebanon are still lacking, the main objectives of our study were to assess the DK and the EH of patients with T2D in Lebanon and to identify the factors associated with both variables. Study Design and Population . Tis is a cross-sectional study conducted between February and June 2021, assessing the DK and EH of patients with T2D in Lebanon. Eligible participants were patients with self-reported T2D (based on the answer to the question Are you diagnosed with T2D?), aged between 18 and 85 years, and currently living in Lebanon.Patients with type 1 diabetes (T1D) were excluded. Sampling. Snowball sampling was used to identify potential study participants.Te online questionnaire was sent to relatives, friends, colleagues, dietitians, and diabetes organizations in Lebanon (Chronic Care Center, DiaLeb, and Lebanese Diabetes Society) who were asked to spread the questionnaire to potential participants. Survey Instrument. 
The survey instrument was a self-administered online questionnaire of four sections. The first section collected data about sociodemographic characteristics, lifestyle behaviors, and the personal and family medical histories of the participants. The second section gathered information related to T2D, such as its duration, blood glucose and glycosylated hemoglobin type A1c (HbA1c) testing, and the presence of complications. The UK Diabetes and Diet Questionnaire [17] was used in the third section to assess the EH, such as the consumption frequency of different types of common food groups over the last month. Some of the questions were slightly adapted to the national and cultural Lebanese context by adding some examples of Lebanese foods. The last section used the Dietary Knowledge questionnaire from Sami et al. [18] to assess patients' DK about food groups, types, and choices.

The questionnaire was developed in English, and a translation to Arabic was performed by a certified translator to ensure the participation of subjects from different educational backgrounds. Back translation was performed to ensure linguistic and cross-cultural validity of the study. Both the English and Arabic versions were evaluated for consistency in meaning by bilingual experts. The reliability of individual questions within the DK and EH questionnaires was examined, and the Cronbach alpha value was 0.6 for the correlation of questions within both questionnaires. Each participant received a full disclosure of the nature and purpose of the study, was reassured of the confidentiality of the data, and was given the opportunity to ask questions. Online written informed consent was obtained from all subjects.

Statistical Analysis. DK and EH indices were calculated. For the DK index, each question had one correct answer and each correct answer was given one point, whereas wrong or "I do not know" answers were given zero points. For each participant, correct answers were summed to obtain an index ranging between 0 and 21. The higher the index, the greater the knowledge level. The DK index was also converted into a percentage and classified into three levels of knowledge: poor DK (<50%), good DK (between 50% and 75%), or adequate DK (>75%) [18]. For the EH index, single-select multiple choice questions were included. Each question had a set of six choices (A, B, C, D, E, and F): A and B answers reflected a healthy dietary choice, C and D reflected a less healthy dietary choice, and E and F reflected an unhealthy dietary choice. The responses were coded as 5 points for F, 4 for E, 3 for D, 2 for C, 1 for B, and 0 for A [19]. The EH index was calculated by summing all the answers. It ranged between 0 and 115 and was classified into three levels according to Bloom's cutoff points: healthy EH (0 to 38), less healthy EH (39 to 77), and unhealthy EH (78 to 115). The lower the index is, the healthier the EHs are.
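The two indices described above reduce to simple arithmetic. The sketch below is a minimal Python illustration assuming the responses sit in a pandas DataFrame with one row per participant; the column names, the answer key, and the assumption of 23 eating-habit items (implied by the 0 to 115 range with 5 points per item) are illustrative and not taken from the questionnaires themselves.

import pandas as pd

# Illustrative column layout: dk_q1..dk_q21 hold each participant's answer to a knowledge item,
# eh_q1..eh_q23 hold a letter A-F for each eating-habit item.
ANSWER_KEY = {f"dk_q{i}": "correct_option" for i in range(1, 22)}  # placeholder key, not the real one
EH_POINTS = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4, "F": 5}

def dk_index(row: pd.Series) -> int:
    # One point per correct answer; wrong or "I do not know" answers score zero.
    return sum(int(row[q] == correct) for q, correct in ANSWER_KEY.items())

def dk_level(index: int, n_items: int = 21) -> str:
    pct = 100 * index / n_items
    if pct < 50:
        return "poor"
    return "good" if pct <= 75 else "adequate"

def eh_index(row: pd.Series) -> int:
    # Sum of coded responses; 0 (healthiest) to 115 (least healthy).
    return sum(EH_POINTS[row[f"eh_q{i}"]] for i in range(1, 24))

def eh_level(index: int) -> str:
    # Bloom's cut-off points as described in the text.
    if index <= 38:
        return "healthy"
    return "less healthy" if index <= 77 else "unhealthy"

df = pd.read_csv("responses.csv")          # hypothetical data file
df["dk_index"] = df.apply(dk_index, axis=1)
df["dk_level"] = df["dk_index"].map(dk_level)
df["eh_index"] = df.apply(eh_index, axis=1)
df["eh_level"] = df["eh_index"].map(eh_level)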
To describe participants' characteristics, frequencies and percentages were used for categorical variables, and means and standard deviations were used for continuous variables.Two separate multiple linear regression models were used to identify factors associated with DK and EH.Variables that showed a p value <0.2 in the bivariate analysis were included in the regression models [20,21].Te histogram and the Q-Q plots of both the dependent variables (DK and EH indices) and the residuals of the linear regressions were checked to verify the normality of the distribution.p < 0.05 was considered statistically signifcant.Statistical analyses were conducted using the Statistical Package for Social Sciences (SPSS) software, Version 25. Sociodemographic and Lifestyle Characteristics. Te study included 351 participants with a mean age of 59.8 years.Most of the participants were Lebanese, recruited from all the Lebanese territories, particularly Mount Lebanon.More than half of the sample was married (64.7%), and about one-third achieved a university degree level.Approximately 18% of the participants in the survey had a BMI within the normal range.Table 1 provides a summary of the sociodemographic and lifestyle characteristics of the participants. Medical and Diabetes-Related Characteristics. Most participants reported having a family history of diabetes (84.6%), and less than half (39.3%) reported having at least one diabetes complication.A large proportion of participants did not monitor their blood glucose levels (45.0%) or test their HbA1c levels during the last three months (34.5%).Regarding the dietary management of T2D, less than 25% of the participants followed a diabetic-friendly diet.Table 2 presents a description of participants' medical and diabetesrelated characteristics, including monitoring, pharmacological, and dietary management of diabetes. Dietary Knowledge. More than half of the sample acknowledged that the diabetic diet is healthy for most people (58.4%); however, only few were able to defne a wellbalanced diet (41%).Although a large proportion was able to recognize the efect of unsweetened fruit juice on blood glucose (66.1%) and the relationship between excessive sugar consumption and diabetes (88.6%), not many participants knew that hard candies can be used to treat low blood glucose levels (27.6%) and that HbA1c is strongly related to the quality of the diet (38.5%).Moreover, a large percentage (58.4%)wrongly believed that artifcial sweeteners have the highest amount of sugar (58.4%) and that foods labeled "sugar-free" can be eaten freely (39.9%). Participants had good knowledge of carbohydrates' food sources, including breads, cereals, rice, pasta (94%), and baked potatoes (64.7%), as well as of the high fber content of whole grain foods (88.6%) and the high glycemic index of dextrose (60.7%).Respondents also reported correct answers regarding chicken and meat being among foods with the highest amounts of proteins (57.8%), and fsh being a complete source of protein (68.4%).Overall, the average DK index was 12.03 over 21, refecting a good average level of knowledge, with 33% of the participants having poor DK, 52% having good DK, and only 15% of them having adequate DK (Table 3). Eating Habits. 
More than 30% of the respondents reported typically consuming at least three regular meals a day (33.6%) and having breakfast within two hours of waking up, 5 to 6 times per week (30.5%).While 57.5% of them ate one to two portions of bread per day and 49% ate a serving of pasta or rice per week, the majority had never consumed high-fber bread (53.8%) and high-fber pasta or rice (70.4%).Savory foods (35%), sweet pastries (43.3%), savory pastries (41.3%), desserts (41.9%), and high fat/sugar snacks (37.9%) were consumed once per week or less often, whereas fast foods (48.1%), sweets (43%), and sugary drinks (48.1%) were never consumed by most of the participants.In addition, almost 40% of patients never consumed fatty fsh (37.2%).Also, nearly half of them consumed vegetables (49.6%) and fruits (44.4%) 5 to 6 times per week. Overall, most of the participants (75%) had less healthy EH, compared to only 25% adopting a healthy diet, with the average of the EH index being 43.81 over 115 (Table 3). Discussion To our knowledge, this study is the frst to explore the DK and EH of patients with T2D in Lebanon.Around 67% (n � 235) of the participants had good or adequate DK.Better knowledge was signifcantly related to occupation (p � 0.029), BMI (p < 0.001), presence of comorbidities (p � 0.004), and HbA1c testing (0.006) (Table 4).Around 25% and 75% of the participants reported healthy and less healthy EH, respectively.Higher family income (p � 0.015), physical activity (p � 0.020), family history of diabetes (p � 0.001), and higher DK level (p < 0.001) were factors signifcantly associated with healthier EH among patients with T2D in this study (Table 5). Global Health, Epidemiology and Genomics In terms of DK, our fndings were similar to those of studies conducted in Sudan [22] and Iran [23] where more than half of the participants with T2D had good DK levels.A diferent pattern of results was observed in the Kingdom of Saudi Arabia (KSA) [18], Jordan [10], and Nigeria [11] where subjects demonstrated poor DK.Te diference in knowledge levels among these populations with T2D may be attributable to the diferent knowledge assessment tools, study designs, and sociocultural diferences between these populations.One of the interpretations of high knowledge levels among our participants is that a large proportion of those with good to adequate DK (42.5%) were university degree holders. Among our participants, university degree holders account for 42.5% of those with good knowledge, which may contribute to the high knowledge levels. 
Te average BMI among T2D patients in this study was 28.7 kg/m 2 , which was concordant with previous reports where most T2D patients in Lebanon had a BMI between 25 and 29 or above [5,12].Te association between higher BMI and better DK found here was in accordance with studies in the KSA showing that overweight and obese individuals had 4 Global Health, Epidemiology and Genomics better diabetes knowledge than their normal-weight peers [24,25].Our study also showed that the presence of comorbidities and regular HbA1c testing were associated with better DK.It is expected that patients of higher weights, those with concomitant diseases, or those who regularly test their HbA1c levels can become increasingly concerned about their health, which incites them to deepen their knowledge and understanding of their medical conditions, enabling them to develop healthy habits.Moreover, the association between being retired and higher DK knowledge levels may be attributed to time availability, increased self-care, and desire for wellness, as well as the access to resources and social engagement that accompany retirement [26], leading retirees to deepen their knowledge about proper nutrition and healthier EH. Concerning EH, our study showed similar results to prior research on EH conducted in Ethiopia [27] and the UAE [28], which found that T2D patients' diet was of poor quality.Tis result is likely to be related to the signifcant changes in EH toward western eating patterns.In fact, a recent review of studies showed a trend of decreasing adherence to the Mediterranean diet in Lebanon over time [29].Our fndings also showed that of those who have less healthy EH, 19% were in debt, owing money that needs to be repaid, and around 60% were meeting routine expenses only.Terefore, another possible explanation of poor dietary quality, especially among low-income earners, is severe food insecurity, with more than 75% of the Lebanese currently living below the poverty line, and around 34% being food insecure [30].In regard to the factors associated with EH, our results build on existing evidence [31,32] showing that lifestyle behaviors, including alcohol consumption and physical inactivity, are associated with less healthy EH.On the other hand, having a family history of diabetes was more likely to lead to improved EH, which accords with other studies' fndings showing that patients with a family history of diabetes were more likely to engage in healthy EH [33,34].Another interesting result is the association between checking nutritional labels and less healthy EH, which demonstrates, similarly to another study [35], that the use of food labels does not necessarily improve dietary quality or EH.Misusing and misunderstanding food labels were shown to be barriers to information and to making healthy eating decisions [36].Even though the DK level was satisfactory for most participants and DK was a predictor of EH, nearly 75% of our patients had less healthy EH.Tis result suggests that barriers to knowledge application and patient adherence to dietary guidelines are present and that better strategies are needed to help patients achieve their dietary and diabetes goals. 
6 Global Health, Epidemiology and Genomics According to our results, family members play a considerable role in diabetes care, medication administration, and dietary intake.Terefore, interactive and individualized sessions involving physicians and dietitians, with the family present, can teach and motivate patients and their families and provide them with tips for real-world application of diet recommendations, thereby improving diabetes outcomes.Moreover, awareness campaigns aimed at patients and their families can help promote awareness in a creative and informative way and improve knowledge of the diabetic diet. Limitations. Several limitations of the study should be noted, including the use of snowball sampling, which could have resulted in reducing the representativeness of our study.However, this was partly compensated by including patients with T2D from the diferent Lebanese regions.While a Cronbach alpha >0.6 is desirable in terms of internal consistency, the study provided transparency regarding the specifc items included in the questionnaires.Te selected parameters were clinically relevant and captured meaningful aspects of DK and EH. In relation to the validity of the research instrument, the DK questionnaire developed by Sami et al. was validated for assessing patients' knowledge about carbohydrates, lipids, proteins, food types, and food choices among patients with T2D in KSA and showed good internal consistency reliability [18].Similarly, the EH questionnaire used in this study was also a valid and reliable tool for assessing the EH of patients with T2D in the UK [19].While we recognize the distinctiveness of the Lebanese dietary patterns, it is noteworthy that general questions collecting information about common food types and groups may contribute to the questionnaires' applicability in the Lebanese context.Future studies should prioritize the validation of the questionnaires specifcally within the Lebanese population. Global Health, Epidemiology and Genomics To our knowledge, our study was the frst to provide some valuable insights into the DK levels and EH of patients with T2D in Lebanon.In this survey, and as it is frequently found, females were more willing to participate than men; however, this does not likely impact the results of the study since there were no signifcant associations between gender and DK or EH.Even though most participants were aged 50 years and older, we believe that a web-based survey is still feasible in this population, which is in agreement with what has been shown in a previous study [37], especially in Lebanon, where more than 89% of the total population has access to the Internet [38].On the other hand, with participants having access to the Internet and receiving help from friends and family members, the data collected might not accurately refect the level of respondents' DK.In addition, given the observational nature of our study, it is susceptible to information bias arising from underreporting of foods with a negative health image and overreporting of healthy EH and foods with a positive healthy image. Conclusions An appropriate diabetic diet for managing and controlling carbohydrate intake is considered one of the cornerstones of blood glucose control and overall health management in subjects with T2D.Te results of the present study suggest that nutrition education reinforcement is needed, not only to empower patients with T2D with knowledge and skills to make the right food choices but also to facilitate the adoption of healthy EH. 
Nutrition education sessions, proper educational tools, and awareness campaigns, led by a multidisciplinary team of healthcare professionals, can teach patients and their families how to manage the disease, reduce its symptoms, and prevent complications through proper dietary management.As awareness spreads, more individuals in the community with diabetes begin to seek answers and take action, which can have far-reaching benefts beyond patients with T2D. Table 1 : Sociodemographic and lifestyle characteristics of patients with T2D included in the study (n � 351).Summary statistics are expressed as mean and standard deviation for continuous variables and as frequency and percentage for categorical variables. Table 2 : Medical and diabetes-related characteristics of patients with T2D included in the study (n � 351). Table 3 : Distribution of dietary knowledge and eating habit indices of patients with T2D included in the study (n � 351). Table 4 : Multiple linear regression for the factors associated with dietary knowledge index (n � 351).p value <0.05: there is a linear relationship between independent variable and DK index adjusting for the efects of other variables.Te analysis included age, nationality, residence (Beirut/outside Beirut), education level, occupation, family income, BMI, presence of other chronic diseases, alcohol consumption, smoking, duration of diabetes, daily blood glucose monitoring, HbA1c testing, lipid lowering medications use, responsible for diabetes care (patient himself or family member/physician), help in dietary intake, checking nutritional composition of foods, and social media and physician as sources of dietary information.R 2 � 45.6%: 45.6% of the DK index is predicted by independent variables.Te bold values in the table indicate p values that are less than 0.05, signifying statistical signifcance. Table 5 : Multiple linear regression for the factors associated with eating habit index (n � 351).Eating habit index Unstandardized β † Standardized β Lower 95% CI of β Upper 95% CI of β p value p value <0.05: there is a linear relationship between independent variable and DK index adjusting for the efects of other variables.Te analysis included age, gender, nationality, residence (Beirut/outside Beirut), education level, occupation, family income, BMI, presence of other chronic diseases, presence of diabetes complications, family history of diabetes, physical activity, alcohol consumption, smoking, HbA1c testing, responsible for diabetes care (patient himself or family member/physician), help in medication administration, checking nutritional composition of foods, social media, physician, and dietitian as sources of dietary information, and DK score.Being physically active refers to engaging in more than one hour of physical activity per week.R 2 � 41.7%: 41.7% of the EH index is predicted by independent variables.Te bold values in the table indicate p values that are less than 0.05, signifying statistical signifcance.
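As a rough illustration of the model-building described in the table notes (a bivariate screen retaining variables with p < 0.2, followed by a multiple linear regression on the retained set), the sketch below uses the statsmodels formula interface. The data file and predictor names are hypothetical, and a univariable regression stands in for the mixture of t-tests, Mann-Whitney, ANOVA, Kruskal-Wallis, and correlation tests that the study actually used for the bivariate step.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("t2d_survey.csv")  # hypothetical analysis dataset

outcome = "dk_index"
candidates = ["age", "C(occupation)", "bmi", "C(comorbidity)", "C(hba1c_tested)",
              "C(family_income)", "C(education)"]          # illustrative predictors only

# Bivariate screen: keep any predictor whose univariable model has an overall p < 0.2.
retained = []
for term in candidates:
    fit = smf.ols(f"{outcome} ~ {term}", data=df).fit()
    if fit.f_pvalue < 0.2:
        retained.append(term)

# Multivariable model on the retained predictors ("1" keeps the formula valid if nothing is retained).
formula = f"{outcome} ~ " + (" + ".join(retained) if retained else "1")
model = smf.ols(formula, data=df).fit()
print(model.summary())                       # coefficients, 95% CIs, p-values
print(f"R-squared: {model.rsquared:.3f}")    # analogous to the 45.6% reported for the DK model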
2024-02-08T16:12:38.979Z
2024-02-06T00:00:00.000
{ "year": 2024, "sha1": "f0951990d5aaccdea1ce2389026a0c1a0af6342d", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/gheg/2024/3623555.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a3a9def98f113e02c88dce4c501fe4bf9a009b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251326042
pes2o/s2orc
v3-fos-license
Adeno-associated virus-mediated expression of activated factor V (FVa) for hemophilia phenotypic correction Adeno-associated virus (AAV) gene therapy has been successfully applied in hemophilia patients excluding patients with inhibitors. During the coagulation pathway, activated factor V (FVa) functions downstream as a cofactor of activated factor X (FXa) to amplify thrombin generation. We hypothesize that the expression of FVa via gene therapy can improve hemostasis of both factor IX and FVIII deficiencies, regardless of clotting factor inhibitor. A human FVa (hFVa) expression cassette was constructed, and AAV8 vectors encoding hFVa (AAV8/TTR-hFVa) were intravenously administrated into mice with hemophilia A and B with or without FVIII inhibitors. Hemostasis, including hFVa level, activated partial thromboplastin time (aPTT), tail clip, and the saphenous vein bleeding assay (SVBA), was evaluated. In hemophilia B mice, a dose of 4 × 1013 vg/kg AAV8/TTR-hFVa vectors achieved a complete phenotypic correction over 28 weeks. In hemophilia A mice, hemostasis improvement was also achieved, regardless of FVIII inhibitor development. In vivo hemostasis efficacy was confirmed by tail clip and SVBA. Interestingly, while minimal shortening of aPTT was observed at a lower dose of AAV8 vectors, hemostasis improvement was still achieved via in vivo bleeding assays. Collectively, FVa-based AAV gene therapy shows promise for hemostasis correction in hemophilia, regardless of inhibitor development and no potential risk for thrombosis. Introduction Hemophilia is an inherited bleeding disorder caused by a deficiency of the functional clotting factor FVIII (hemophilia A) or FIX (hemophilia B). Current treatment by protein replacement therapy is constrained by the short half-life of the clotting factors, requiring repeated infusions at relatively large doses. The development of inhibitors (alloantibodies to clotting factors) remains the single most important obstacle for managing hemophilia with protein replacement therapy. These alloantibodies render replacement therapy ineffective, with an increase in the risk of serious bleeding, progressive arthropathy, and treatment-related costs. Approximately 30-40% (1) of patients with hemophilia A and 5% of patients with hemophilia B can develop inhibitors after protein replacement therapy (2)(3)(4). In patients with inhibitors, the administration of clotting factors is ineffective, and poor control of hemorrhagic episodes mostly increases orthopedic complications. Treatment with immune tolerance induction (ITI) therapy is successful in about 70% of hemophilia A and 30% of hemophilia B (5, 6). In addition to ITI therapy, the bypass product FVIIa has also been applied in hemophilic patients with inhibitors (7, 8). Other novel options, including emicizumab, a bispecific mAb that bridges FIX and FX in the clotting cascade, has been a major addition for hemophilia A with or without inhibitors with some potential safety concerns (9)(10)(11). Gene therapy could ultimately provide a cure and avoid the need for repeated clotting factor infusions. Among the gene therapy vectors, adeno-associated virus (AAV) vectors have been used successfully in numerous preclinical applications. AAV is a single-stranded DNA virus, while AAV vectors have been shown to induce a long-term and stable therapeutic gene expression over 10 to 15 years in dogs, primates, and human (12)(13)(14). AAV can infect both dividing and non-dividing cells. 
Although there is some evidence of AAV integration events in rodent and canine models as summarized in recent discussion of AAV Integration Roundtable meeting initiated by the American Society of Gene & Cell Therapy (15), the risk of integration has putatively been recognized as low. Due to extensive preclinical studies, over 150 clinical trials with AAV vectors are ongoing, and great success has been achieved in patients with diseases such as inherited blindness and hemophilia. In multiple clinical trials with hemophilia B patients, after systemic administration of AAV vectors into the liver, a stable therapeutic level of FIX activity (16, 17) and a supraphysiological level have been achieved (18). In patients with hemophilia A, liver targeting with a high dose of AAV vectors encoding human FVIII also supports FVIII expression in the blood from 4% to over 100% (16,19,20). However, this therapy is currently restricted to patients who are negative for FVIII or FIX inhibitors. Clotting factor V (FV) is synthesized as a single-chain protein in the liver, which is composed of A1, A2, B, A3, C1, and C2 domains, in order. FV is a pro-cofactor and in this state does not have procoagulant activity. It is activated by thrombin via limited proteolysis to release the B domain, and the interaction of the HC and the LC generates the procoagulant heterodimer FVa (21). As a cofactor, FVa interacts with FXa to form a prothrombinase complex, which is essential for the rapid generation of thrombin (22). Indeed, FVa is able to enhance the rate of thrombin generation by approximately 10,000-fold (23). Therefore, treatment with FVa can theoretically bypass any upstream factors involved in the coagulation cascade, and most importantly, FVa function still remains under the regulation of activated protein C. Activated protein C is one of the principal physiological inhibitors of coagulation and degrades FVa, ameliorating the concern of thrombosis. In this study, we explored the therapeutic potential of FVa delivered by AAV vectors in hemophilia mice with or without pre-existing FVIII inhibitors. The efficacy and safety with AAV vectors encoding human FVa as a bypass product were achieved in hemophilia mouse models. HEK293 cells and western blot HEK293 cell lines were incubated at 37 • C in 5% CO 2 in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal bovine serum (FBS) and penicillin-streptomycin. The cell supernatant 72 h after transfection was subjected to sodium dodecyl sulfatepolyacrylamide gel electrophoresis (SDS-PAGE), followed by transferring to a polyvinylidene difluoride (PVDF) membrane, and then primary antibodies of anti-hFVa (Santa Cruz Biotechnology, Dallas, TX, United States, and Haematologic Technologies, Essex Junction, VT, United States) and conjugated secondary antibody (Abcam, Cambridge, MA, United States) were added for protein detection. Construction of adeno-associated virus vectors and adeno-associated virus vector production All human FV DNA sequences (Figures 1, 2) were synthesized by GenScript (Piscataway, NJ, United States) and driven by the chicken beta actin (CBA) promoter (Figure 1) or the liver-specific transthyretin (TTR) promoter (500 bp including an MVM intron) and fused with BGH polyA (230 bp), as described previously (24). These AAV FVa Frontiers in Medicine 02 frontiersin.org FVa was functional in hemophilia mice after hydrodynamic injection of hFVa plasmids. 
(A) Constructs of FVa. "hFV BDD": complete deletion of the B domain; "hFVa": complete deletion of the B domain and a furin cleavage site linker between the FV heavy chain (HC) and the light chain (LC); "hFVBD SQ": complete deletion of the B domain with an SQ sequence inserted between the HC and the LC. (B) aPTT change after hydrodynamic injection. In both FIX−/− and FVIII−/− mice, the three plasmid constructs were tested, and FVIII/FIX-specific aPTT was measured 48 h after hydrodynamic injection. FIX−/− and FVIII−/− mice and their wild-type (WT) controls injected with normal saline were used as the controls.

The AAV FVa constructs (Figure 2) were cloned by either deleting the B domain (hFV BDD) or linking the heavy chain [HC, with signal peptide (SP), 737 aa-2,211 bp] with the light chain (LC, 651 aa-1,953 bp) using a furin cleavage motif (RKRRKR) (hFVa) or an SQ sequence (SFRNPDNIAAWYLRRKR) (hFVBD SQ) (25). Both the full-length FV and BDD-hFVa plasmids driven by the CBA promoter (Figure 1A) were transfected into HEK293 cells, with the supernatant subjected to thrombin (Roche, Mannheim, Germany) cleavage. The supernatant from hFVa plasmid transfection was also tested in an activated protein C (Sigma, St. Louis, MO, United States) inhibition assay based on the FVIII-specific aPTT assay. All vectors were produced following the triple transfection protocol and titered at the Virus Vector Core Facility at the University of North Carolina at Chapel Hill, as described previously (26).

Induction and quantification of anti-FVIII inhibitor
To remodel the pre-existing FVIII inhibitor development in hemophilia A mice, FVIII−/− mice were treated with rhFVIII (Advate, Baxter, Westlake Village, CA, United States) at a dose of 100 IU/kg, once a week for 4 weeks. The titer of anti-human FVIII inhibitor was measured by using the Bethesda assay, as previously described, using a STart 4 Coagulation Analyzer (Diagnostica Stago, Parsippany, NJ, United States) (29).

Hydrodynamic injection
Purified plasmids (100 µg/mouse) in 2 ml of phosphate-buffered saline were injected hydrodynamically into the lateral tail vein of mice within 5-8 s, as described previously (30). Plasma samples were harvested 48 h later for aPTT analysis. At least three mice were included for each treatment group.

FIX- or FVIII-specific activated partial thromboplastin time and quantification of hFVa expression by enzyme-linked immunosorbent assay (ELISA)
Given that FV/FVa acts in the final common pathway of blood coagulation and that both hemophilia A and B have a normal PT, we adopted the FVIII/FIX-specific aPTT as the parameter to demonstrate hemostasis improvement after gene therapy, along with the tail clip and the saphenous vein bleeding assay (SVBA). FIX- or FVIII-specific aPTT tests were used to monitor the hemostasis improvement on the STart 4 coagulation analyzer by incubating 50 µl mouse plasma samples diluted in Owren-Koller buffer (Diagnostica Stago), 50 µl hFVIII- or FIX-deficient plasma (George King Bio-Medical, Overland Park, KS, United States), and 50 µl aPTT reagents for 3 min and then adding 50 µl 0.025 M calcium chloride. Pooled plasma from wild-type mice was used as the control. For the quantification of hFVa expression in mouse plasma, we employed an ELISA method with coating and detection antibodies (Affinity Biologicals, Ancaster, ON, Canada). Activated human Factor V (Haematologic Technologies, Essex Junction, VT, United States) was used to construct the standard curve.
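Interpolating unknown hFVa concentrations from the ELISA standard curve described above amounts to fitting the standards and inverting the fit for each sample. The sketch below is a minimal version that fits optical density against log10 concentration with a straight line; a four-parameter logistic fit is more common in practice, and the standard concentrations, optical densities, and dilution factor shown here are illustrative only.

import numpy as np

# Illustrative standard curve: known hFVa concentrations (ng/ml) and their optical densities.
std_conc = np.array([1000, 500, 250, 125, 62.5, 31.25])
std_od   = np.array([2.10, 1.55, 1.02, 0.63, 0.36, 0.21])

# Fit OD as a linear function of log10(concentration).
slope, intercept = np.polyfit(np.log10(std_conc), std_od, deg=1)

def od_to_conc(od: float, dilution: float = 1.0) -> float:
    """Invert the fit and correct for the sample dilution."""
    log_conc = (od - intercept) / slope
    return (10 ** log_conc) * dilution

# Example: a plasma sample read at OD 0.80 after a 1:40 dilution.
print(f"{od_to_conc(0.80, dilution=40):.0f} ng/ml")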
Tail clip and the saphenous vein bleeding assay The tail clip was performed as described (31) (N = 44 FVIII −/− mice treated with AAV8.FVa). In brief, 3 mm of the distal tail was transected, and the proximal tail was placed in a pre-warmed and pre-weighed tube. After 40 min of the tail clip or upon death, blood loss per gram body weight was calculated. Saphenous vein bleeding assay was performed on the right saphenous vein by piercing it with a 23-G needle (N = 23 FVIII −/− mice treated with AAV8.FVa). Blood was gently wicked away until hemostasis occurred. The clot was then removed to restart bleeding, and the blood was again wicked away until hemostasis occurred again. The number of clots that occurred over the course of 30 min was recorded (32). Rotational thromboelastometry (ROTEM) and thrombin/antithrombin III complex detection For fibrinolysis analysis, ROTEM was employed, which can provide global information on the dynamics of clot development, stabilization, and dissolution, which reflect in vivo hemostasis. Whole blood was collected from the inferior vena cava at killing in FIX −/− mice treated with AAV8.FVa and placed in 3.2% sodium citrate mixed at a ratio of 9:1. Then, 300 µl of the resulting mixture was coagulated with 20 µl of 0.2?M CaCl 2 , and finally, 3 µl of tissue plasminogen activator (DiaPharma, Louisville, KY, United States) was added to anticoagulated whole blood to a final concentration of 1 ng/ml in a pre-warmed rotational thromboelastometer. Thrombin-antithrombin (TAT) evaluation was carried out (N = 21 FIX −/− mice treated with AAV8.FVa) from plasma collected as a terminal puncture of the inferior vena cava using an Enzygnost TAT micro ELISA system (Siemens Healthcare Diagnostics, Tarrytown, NY, United States). Statistical analysis Results of a one-way, non-parametric analysis were analyzed using GraphPad Prism 6.0 software (GraphPad Software, La Jolla, CA, United States). An adjusted P-value < 0.05 was considered a statistically significant difference. Result Detection of factor V after transfection of factor V plasmid in HEK 293 cells To test whether the FVa transgene construct can be properly processed intracellularly, the plasmid encoding human FVa (hFVa), driven by a CBA promoter (Figure 1A), was transfected into HEK 293 cells, as displayed in Figure 1B. The heavy chain (HC, 110 kd) was detected with the antibody GMA-044, which specifically recognizes the hFV HC. As shown in Figure 1C, FV cleavage was seen by the addition of thrombin to full-length FV plasmid-transfected supernatant (light chain detected, weak band 10 min later at lane 1, and a clear band 30 min later at lane 2, while no band without thrombin at lane 3). The aPTT was significantly prolonged after the addition of activated protein C to the supernatant from hFVa plasmid transfection (data not shown), implicating that hFVa was subjected to inhibition by activated protein C. Hydrodynamic injection of factor V transgene plasmids leads to activated partial thromboplastin time improvement in both hemophilia A and B mice Next, we investigated whether the engineered sequence with SQ activation peptide sequence (33) could increase the expression efficiency. Given the efficiency and convenience Complete phenotypic correction after administration of AAV8/hFVa in FIX -/mice. 4 × 10 13 vg/kg body weight of AAV8/hFVa was administrated into FIX -/mice via the tail vein (N = 5, male, at ages 8-10 weeks). Blood was harvested for aPTT assay and hFVa quantification by ELISA. 
(A) aPTT assay after AAV8/hFVa administration, and dashed line represents normal WT mice. (B) hFVa concentration in blood was quantitated by ELISA. Data are presented as an average with standard errors. ** Compared with pre-treatment, P < 0.01 (two-way ANOVA with multiple comparisons and unpaired t-tests). of hydrodynamic delivery of genes of interest, we employed hydrodynamic injection to assess the efficacy of different FV constructs in the liver. As shown in Figure 2, compared to the construct with only B domain deleted FV versus FV constructs with the B domain deletion and replacement by the SQ linker or furin cleavage, the FVa plasmid with the furin cleavage domain showed a slight tendency to induce a better hemostasis improvement with a shorter aPTT in both hemophilia A and B mice. These results indicated that FVa protein can be properly processed and formed by the furin cleavage intracellularly. Therefore, "hFVa" was chosen as the candidate for AAV vector package and further in vivo study. Hemostasis improvement after factor V gene therapy in FIX −/− mice To study whether hFVa induces a phenotypic correction in a hemophilia setting, we constructed AAV8 vectors encoding Frontiers in Medicine 05 frontiersin.org hFVa driven by the TTR promoter (AAV8/TTR-hFVa). A dose of 4 × 10 13 vg/kg of AAV8/TTR-hFVa was administrated into FIX −/− mice via the tail vein (N = 4-5 mice/time point). Plasma was collected. aPTT was analyzed during a 28-week follow-up after AAV8/hFVa vector injection. The quantity of expressed hFVa was also measured by ELISA. As shown in Figure 3A, a complete phenotypic correction with a normal aPTT was achieved when compared to the wildtype mice (about 72 s of aPTT), indicating that AAV8/hFVa gene therapy improved hemostasis in hemophilia B mice. Due to the lack of sensitive and reliable assays to differentiate the human factor V and FVa, we adopted an ELISA method using the anti-human FV antibody as the capture and detection antibodies, and hFVa as the standard. With the baseline level of FV in FIX −/− mice ( Figure 3B) at around 1,000 ng/ml, the level of FVa after AAV8/hFVa treatment showed a steady increase during the follow-up period from week 1 to week 20, which was consistent with the result from the aPTT analysis. Phenotypic correction in FVIII −/− mice treated with low doses of AAV8/TTR-hFVa vectors Next, we studied whether the administration of AAV8/hFVa was also able to induce hemostasis improvement in FVIII −/− mice. A dose of AAV8/TTR-hFVa vectors (8 × 10 12 vg/kg and 1.6 × 10 13 vg/kg) was given. After 12-20 weeks of follow-up, a slight reduction of aPTT was observed ( Table 1). At the end of experiments, all mice were used for tail vein transection to evaluate the hemostatic improvement in AAV8/FVa treatment in vivo. As displayed in Table 1, the survival from tail clip was significantly improved in hemophilia A mice treated with AAV8/FVa vectors when compared to that in naive hemophilia mice (95 vs. 61%); only two mice died among the 44 treated mice. In contrast to the results from aPTT, the blood loss from tail clip was significantly decreased in mice treated with AAV8/hFVa vectors (8.7 ± 7.2 g/kg body weight vs. 34.5 ± 12.9 g/kg body weight in naive FVIII −/− , P < 0.01), approximately close to the findings with wild-type mice (4.9 ± 4.4 g/kg). In untreated hemophilia A mice, considerable blood loss was found (34.5 ± 12.9 g/kg). 
When we further divided mice into low-dose (8 × 10 12 vg/kg) and highdose (1.6 × 10 13 vg/kg) groups, there was no statistical difference between the two dose groups for both aPTT and blood loss from tail clip. In the studies, even with no obvious improvement in hemostasis from the aPTT assays, the in vivo bleeding via tail clip was significantly improved in hemophilia A mice after the administration of AAV8/hFVa vectors at low doses. No over-correction of hemostasis with a high dose of AAV8/hFVa vectors in hemophilia A mice Next, we studied the effect of AAV8/hFVa at different doses on hemostasis improvement in hemophilia A mice. To explore the potential minimal dose for hemostasis improvement and whether there was any dose-dependent potential hepatic toxicity, five different doses were used, from 4 × 10 12 vg/kg to 2.8 × 10 14 vg/kg. Other than aPTT analysis, SVBA was also performed 8-12 weeks after gene therapy treatment to evaluate the in vivo phenotypic correction. None of the mice showed any potential liver toxicity reflected by liver enzymes (data not shown). As displayed in Figure 4 and Table 2, four trends were drawn: (1) aPTT can be shortened in an approximately dose-response pattern. (2) In vivo bleeding assay with SVBA did not show dose-dependent hemostasis improvement. The disruption number was 18.7 ± 4.4 for the wild-type mice and 4.0 ± 1.9 for the untreated hemophilia A mice. The disruption number was over 10 for any mice treated with AAV8/hFVa vectors, regardless of the doses. There was no significant difference in the disruption number between any groups of mice receiving various doses of AAV8/hFVa vectors (P < 0.05). Pooled data from mice treated with five different doses of AAV8/hFVa vectors showed no difference in the disruption number compared to the WT mice (P > 0.05), consistent with data from the blood loss assay from tail clip ( Table 1). (3) It is also worth noting that the aPTT was barely improved in mice with the lowest dose (4 × 10 12 vg/kg) of vectors, but the in vivo improvement in hemostasis via SBVA remained significant (disruption number of 13.7 ± 3.8). (4) When the highest dose of AAV8/hFVa (2.8 × 10 14 vg/kg) was analyzed, the aPTTs were slightly shortened when compared to the dose at 8 × 10 13 vg/kg (80.5-87 s vs. 86-102 s), but no significance was reached. This result indicated that over-correction of hemostasis was not induced even when extra high doses of AAV8/hFVa are used. Factor V gene therapy improved hemostasis in FVIII −/− mice with pre-existing anti-FVIII inhibitor In the aforementioned experiments, we have demonstrated that administration of AAV8/hFVa induced hemostasis improvement in both hemophilia A and hemophilia B mice without inhibitors. Next, we studied whether the phenotypic correction in hemophilia mice without inhibitors from the application of AAV/hFVa could also apply to hemophilia mice with inhibitors. FVIII inhibitors were first induced by administration of protein in hemophilia A mice. All the mice developed high titers of anti-FVIII inhibitor (from 8.7 to 42.3 BU/ml as shown in Figure 5C). At 1 week after the final FVIII inhibitor titer was quantified, AAV8/hFVa vectors were administrated at a dose of 8 × 10 13 vg/kg. Hemostasis correction was assessed by measuring aPTT and the levels of FVa by ELISA from plasma collected at week 1 and week 4 post-AAV8/hFVa injection. 
As shown in Figure 5A, hemophilia A mice with FVIII inhibitors showed continuous shortening of aPTT clotting time after the administration of AAV8/hFVa vectors, similar to aPTT in mice without FVIII inhibitors. The shortening of clotting time coincided with the elevation of hFVa, as shown in Figure 5B. At week 8 post-AAV8/hFVa vector administration, the mice were subjected to the tail clip bleeding challenge. As shown in Figure 5C, all seven mice treated with AAV/FVa gene therapy survived the tail clip, and the blood loss was significantly lower than that in naive hemophilia A mice. There was no difference of blood loss in mice with or without pre-existing anti-FVIII inhibitors after treatment with AAV8/hFVa vectors. Overall, there were no statistical difference between mice with or without pre-existing FVIII inhibitor, implicating that FVa-based gene therapy can improve hemostasis in FVIII −/− mice, regardless of pre-existing anti-FVIII inhibitor. No thrombotic risk was detected in hemophilic B mice treated with AAV8/hFVa As no mice showed liver enzyme elevation, including the highest dose (2.8 × 10 14 /kg)-treated mice in the report (data Administration of AAV8/hFVa vectors improved hemostasis in hemophilia A mice with inhibitors. (A) FVIII -/mice were first induced to develop anti-FVIII inhibitor by repeated FVIII protein infusion ("AAV8/hFVa + inhibitor," N = 7). Hemophilia A without FVIII inhibitor was utilized as the controls ("AAV8/hFVa control," N = 4). AAV8/hFVa vectors were administrated via the tail vein. Plasma was collected for aPTT assay at different time points. ** Compared with pre-treatment, P < 0.01 (two-way ANOVA with multiple comparisons and unpaired t-tests). (B) Plasma from the same mice ("A") was measured for hFVa by ELISA. N = 6 for "AAV8/hFVa + inhibitor"; N = 4 for "AAV8/hFVa control." ** Compared with pre-treatment, P < 0.01 (two-way ANOVA with multiple comparisons and unpaired t-tests). (C) Eight weeks post-AAV8/hFV injection, FVIII -/mice were subjected to the tail clip bleeding challenge. The blood loss was recorded. M1-M7 represent anti-FVIII inhibitor titer (BU/ml) of individual mouse. All data are presented as averages with standard errors. not shown), the concern of thrombosis due to inappropriate activation of coagulation from continuous secretion of FVa after AAV gene delivery should be addressed, given that FVa participates in the common pathway of the coagulation cascade. Then two parameters were used to evaluate the risk of thrombosis following FVa-based gene therapy: TAT assay and fibrinolysis analysis on ROTEM. FIX −/− mice were administrated AAV8/hFVa at the following doses: 2 × 10 12 /kg, 7 × 10 13 /kg, and 2 × 10 14 vg/kg. At 16-20 weeks after gene therapy, plasma TAT complexes were measured. As displayed in Figure 6, no elevation of TAT levels was observed in mice treated with AAV8/hFVa vectors even at the highest dose of 2 × 10 14 vg/kg when compared with ageand gender-matched WT and untreated hemophilia B mice. Evaluate the risk of thrombosis after AAV8/hFVa treatment by detection of the TAT complexes. TAT complexes were measured via ELISA from platelet-poor citrated plasma collected as a terminal puncture of the inferior vena cava at 20 weeks after AAV8/hFVa vector administration when the FIX -/-mice were killed. Age-matched WT and untreated hemophilia B mice served as the controls. To further assess whether the normal fibrinolysis pathway was affected by persistent production of FVa, a ROTEM assay was conducted. 
As shown in Figure 7, the clot was dissolved completely in WT mice. By contrast, clot formation was barely developed in the FIX −/− mice. However, the clot was formed in the FIX −/− mice treated with AAV8/hFVa vector (at week 20) and then fully dissolved, a pattern similar to that in the WT mice. The result implicates that continuous expression of FVa does not lead to clot resistance with no thrombotic risk. Discussion Adeno-associated virus vectors encoding the bypass molecule FVIIa have been used in hemophilia animal models with inhibitors, yet only partial improvement in hemostasis was achieved (34,35). To overcome the issues of inhibitors and poor efficacy in a single-dose format, FVa-based gene therapy may represent a potentially safe and effective single-dose drug for treatment in hemophilia patients with or without inhibitors. In this study, we found that FVa protein can be properly processed and formed by furin cleavage intracellularly. In hemophilia B mice, complete phenotypic correction was achieved over 28 weeks of follow-up within a normal activated partial thromboplastin time (aPTT). In hemophilia A mice with or without pre-existing FVIII inhibitors, hemostasis correction, represented by aPTT, was achieved. The phenotypic correction was also supported by tail clip and SVBA, even without improvement from the aPTT assay at low doses of AAV8/hFVa vectors. In comparison to WT and untreated hemophilia mice, gene therapy with AAV8/TTR-hFVa did not induce increased risk of thrombosis based on normal TAT levels and fibrinolysis on ROTEM. These results imply that continuous Evaluate the risk of thrombosis after AAV8/hFVa treatment by fibrinolysis on ROTEM. Fibrinolysis analysis using whole blood collected from the inferior vena cava at the end of experiments was performed on ROTEM for mice ( Figure 6). expression of FVa via AAV gene delivery would not lead to uncontrolled clot formation. Using the AAV vector to deliver FVa for long-term expression allowed us to assess the risk of thrombotic formation related to FVa treatment. Based on the data from TAT detection and fibrinolysis analysis, no thrombotic events were observed in mice treated with AAV8/hFVa. The lack of thrombotic risk may be attributable to tight regulation of FVa by normal physiological mechanisms. Once circulating FVa blood levels are sufficient to complex with FXa for sufficient thrombin generation and phenotypic correction after administration of AAV vectors, the thrombin will activate protein C for rapid degradation of extra FVa to maintain a certain amount of FVa for thrombin production. Given the unique features of AAV, including broad tropism, low immunogenicity, non-pathogenicity, and rare integration, AAV-mediated gene therapy has emerged as the proven platform for the treatment of multiple diseases. The efficacy and long-term expression of the transgene (FVIII/FIX) has been achieved in ongoing clinical trials for both hemophilia A and B (36). In this report, a relatively large dose of AAV8/hFVa vector was required to normalize the hemostasis correction in both hemophilia A and B mice. The size of FVa cDNA is approximately 4,200 bp. Due to size limitations for encapsulation in the AAV vector (37,38), it is not possible to use a strong liver promoter of significant size to increase FVa expression. Future work should explore shorter strong liverspecific promoters, genetic optimization of FVa DNA sequences, or transduction enhancement through use of a customized capsids with stronger liver tropism. 
These enhancements would further reduce the potential for high-dose AAV vector-induced liver toxicity (39). In this study, the FVa expression cassette (the TTR promoter with an MVM intron, 500 bp; hFVa, 4,182 bp; and BGH polyA, 230 bp) is about 4,900 bp long. The total length of the AAV FVa construct, including the two ITRs, is less than 5.2 kb, which is the maximum packaging limit for AAV vector production (38). Indeed, we performed alkaline gel electrophoresis and found that the intact genome was packaged (data not shown). A larger AAV construct has the potential to result in packaging of truncated AAV genomes or in production of more empty virions. Several approaches have been explored to deliver large transgenes with AAV vectors, including split vectors that rely on splicing introns or inteins (14, 15, 34, 35, 37, 39-44). It is interesting to note that phenotypic correction was still achieved when tail clip/SVBA was performed in hemophilia mice even without improved aPTT after gene therapy. It is unclear why the data from aPTT analysis cannot predict the hemostasis correction in vivo after AAV/hFVa gene therapy in hemophilia mice. It is possible that the secreted FVa may play a different role under conditions of active hemorrhage (tail clip/saphenous vein bleeding in this study) than without active bleeding. Under normal conditions without active bleeding, the "extra" FVa may be instantly inactivated after secretion, while under the condition of active bleeding, the circulating FVa quickly interacts with the trace amount of FXa to initiate the downstream coagulation cascade. Whether potential platelet expression of FVa at the lower vector doses contributes to the in vivo hemostasis improvement warrants further investigation. Due to the inability of aPTT/PT to predict hemostasis improvement in vivo after FVa gene therapy, other measurements, for example, the annual bleeding rate in a large animal model, should be considered in future clinical studies (11). Several decades of innovation have produced novel, nontraditional molecular medicines that have greatly improved the clinical management of hemophilia. Some of these treatments include bispecific antibodies bridging FIX and FX to bypass the FVIII requirement, antibodies against tissue factor pathway inhibitor (TFPI), and antithrombin-specific small interfering RNA (siRNA) targeting natural anticoagulant pathways to "rebalance" hemostasis (45). However, the longer half-lives of the antibody products, and the fact that these therapies act outside the physiological regulation of coagulation, have led to reports of thrombosis or associated events. Gene therapy, especially with AAV vectors, still holds the promise of a one-time infusion cure for hemophilia. Nonetheless, thrombotic events and concerns about overexpression of clotting factors have been reported or raised in ongoing clinical trials of both hemophilia A (NCT04370054) and hemophilia B (NCT03369444). Both FV and factor X (FX), as well as their activated forms FVa and FXa, are involved in the common pathway of the coagulation cascade. In addition to FVa, activated factor X (FXa) has also been considered a potential target for hemophilia therapy (46,47). An ongoing phase 1b clinical trial of FXa has been carried out in patients with intracerebral hemorrhage and has confirmed its safety profile. FXa is rapidly inactivated by circulating protease inhibitors, resulting in a short half-life (<1-2 min).
Furthermore, FXa can activate a range of procoagulant clotting factors, possibly leading to pathological activation of coagulation and is closely related to vascular disorders. Therefore, we have chosen its cofactor, FVa, as the candidate to treat hemophilia via a gene delivery approach. Although treatment with FVa has been proposed as a protein therapy in animal models in hemophilia, due to its very short half-life, the thrombosis risk with FVa therapy has not been fully evaluated. Administration of AAV8/hFVa leads to circulating FVa blood levels sufficient to complex with FXa produced in response to injury, resulting in adequate thrombin production and phenotypic correction. Thrombin then activates protein C for rapid degradation of the remaining circulating FVa, halting the clotting process. Constant endogenous liver production of FVa rebuilds circulating levels available to complex again with FXa upon the next injury. This theory is supported by our dose-response studies. Increasing the vector dose 70-fold dramatically shortened the aPTT, but the in vivo bleeding phenotype was corrected without the same magnitude of change. In combination with TAT data and fibrinolysis analysis via ROTEM, all of these results support the safety profile with AAV vector-mediated delivery of FVa in hemophilia with a minimal concern of thrombotic risk. The short half-life of hFVa restricts its application in clinical trials. Recently, FVa mutants by engineering FVa APC cleavage sites have been proposed as a novel approach to bypass inhibitors (48-50) with a good safety profile including risk of thrombosis in the lung and immunogenicity in mice (50). However, these FVa mutants may not be good candidates for gene therapy due to evidence that mutated FVa is out of regulation of normal physiological hemostasis mechanisms and potential thrombosis risk. Gene delivery of wtFVa with AAV vectors has several advantages over the replacement treatment with mutated FVa protein: (1) AAV vectors have been successfully applied in patients with hemophilia A and B and proven to be safe. (2) Only one infusion is required since long-term transgene expression has been observed in preclinical animal models and human clinical trials. (3) There is no contamination from the processes required for protein production and purification. (4) There is no need for an extra step to cleave FV using thrombin to generate FVa. (5) The wtFVa will be directly formed after expression from AAV gene therapy. (6) Its function will be closely regulated by normal physiological mechanisms, which is considered to minimize the potential risk of thrombosis from high expression of FVa, if any, as evidenced in our study. Therefore, the AAV/FVa technology raises the promise for broader application for hemophilia management in patients with inhibitors or without inhibitors. However, a relatively higher dose of AAV doses is required to achieve hemostasis correction. Further optimization of the expression cassette, for example, choosing stronger promoters and transgene optimization, may be necessary for translational success. Given the similar molecular structure between FV/FVa and FVIII, the potential immunological risk of FV/FVa per se may also require further characterization. In conclusion, FVa-based AAV gene therapy showed high promise for hemostasis correction in hemophilia, regardless of clotting factor inhibitor development and without risk of thrombosis. 
The approach has the potential to achieve the goal of "one drug for multiple clotting factor deficiencies" in patients with hemophilia. Data availability statement The original contributions presented in this study are included in the article/supplementary material, further inquiries can be directed to the corresponding authors. Ethics statement The animal study was reviewed and approved by the University of North Carolina at Chapel Hill.
Measuring Interaction-based Secondary Task Load: A Large-Scale Approach using Real-World Driving Data Center touchscreens are the main HMI (Human-Machine Interface) between the driver and the vehicle. They are becoming larger and increasingly complex and replace functions that could previously be controlled using haptic interfaces. To ensure that touchscreen HMIs can be operated safely, they are subject to strict regulations and elaborate test protocols. These methods and user trials require fully functional prototypes and are expensive and time-consuming. It is therefore desirable to estimate the workload of specific interfaces or interaction sequences as early as possible in the development process. To address this problem, we envision a model-based approach that, based on the combination of user interactions and UI elements, can predict the secondary task load of the driver when interacting with the center screen. In this work, we present our current status, preliminary results, and our vision for a model-based system built upon large-scale natural driving data. INTRODUCTION According to the National Highway Traffic Safety Administration (NHTSA) [15], 3,142 people were killed and 424,000 people were injured in crashes where drivers were distracted from the main driving task. Although the number of driver monitoring systems has increased steadily in recent years, a decrease in the number of accidents due to distracted driving has not been observed. While smartphone use while driving plays a major role, there are concerns that the increase in complexity, capability, and size of touchscreen-based HMIs will additionally increase the driver's cognitive workload. To prevent automotive HMIs from being too distracting, they undergo expensive and time-consuming empirical testing before they can be integrated into production line vehicles. While such measures are essential and will remain necessary, early feedback on potentially distracting usage patterns can be valuable for UX experts to design systems that are safe to use. A system that can predict the secondary task load based on the anticipated UI interactions and their properties, such as the sequence in which they occur or their position on the display, can help UX experts to detect potentially distracting designs at an early stage and to develop appropriate alternatives. To enable such predictions, we envision a system that is based on driving and interaction data, automatically collected from a large number of production line vehicles. Having access to such large-scale data makes it possible to generate insights that go beyond the detail of current, mostly qualitative or relatively small-scale naturalistic driving studies [5]. Additionally, as soon as a software update is deployed to the fleet, the changes that were made can directly be assessed. In this work, we first investigate how the engagement of drivers with the touch-based HMI can be measured using driving parameters and UI interaction data. We further elaborate on specific UI features and behavior that might lead to increased driver distraction.
RELATED WORK Driver Distraction Measurement is a well-studied field of research that will remain relevant even in the approaching age of automated driving, since in certain driving environments drivers will still need to take over control of the car for extended periods. Therefore, multiple approaches exist that try to predict and model driver distraction. A large group of such approaches is based on physiological data [1,8,19,20] or data retrieved from eye-tracking systems [7,16,17]. Whereas many of these approaches provide promising results and have proven their capability to effectively detect distracted driving, multiple factors prevent their large-scale usage. The main drawback of approaches based on physiological data is that sensors either need to be attached to the body of the driver or additional measurement units need to be installed in the car. This makes it nearly impossible to apply such methods outside of experimental environments or naturalistic driving studies. For approaches based on eye tracking, the costs of highly accurate eye-tracking systems are still a limiting factor for widespread deployment in production vehicles, and due to the highly sensitive nature of the data, most Original Equipment Manufacturers (OEMs) are reluctant to store video or gaze data. In contrast, methods that are based on driving data [4, 10, 12-14, 18], such as steering wheel angle, speed deviations, or vehicle accelerations, are more suitable for large-scale use cases since the data is already available in all modern cars and no additional instrumentation is necessary. Already in 1999, Nakayama et al. [14] introduced the so-called Steering Entropy metric and were able to show clear correlations between an increase in driver workload and steering behavior. Additionally, Markkula and Engström [13] introduced the Steering Wheel Reversal Rate (SWRR) and compared it to other steering angle metrics concerning their sensitivity to the effect of secondary task workload on lateral control performance. APPROACH To model the effect that specific user interactions or usage patterns on the touchscreen HMI have on secondary task load, a large amount of data is needed. This is due to the many UI elements and the even larger number of potential combinations in modern HMIs, as well as the diverse range of driving situations for which the effect might be different. Collecting this large amount of data in the form of naturalistic driving studies is time-consuming and expensive. Therefore, in this work, we utilize data collected from production line vehicles. However, this results in limitations, as in contrast to laboratory studies, strict data protection regulations must be met. For this reason, it is not possible to collect personalized data. Additionally, due to the many different participants and the uncontrolled driving environment, the data quality differs significantly from that of simulator studies or controlled naturalistic driving studies. This makes detailed data analysis and preprocessing necessary. In this work in progress, we present our data collection and processing approach and preliminary results. Data Collection and Processing The data used in this work is collected via a telematics framework that allows live Over-The-Air (OTA) data transfer from the car to the backend where the data is processed. The framework is available in the new generation of production vehicles, and no additional instrumentation is needed. Detailed descriptions of the telematics architecture, processing framework, and data collection are provided by Ebel et al. [6].
In the first processing step, the interaction sequences are extracted. The event sequence data consists of timestamped events containing the name of the interactive UI element that was triggered by the user and the type of gesture that was detected. We consider an interaction sequence to be a sequence of interactions where the time interval between two consecutive interactions is less than 10 s. In the second step, the interaction data is enriched with the driving data shown in Table 1. We only consider sequences in which the driver assistance systems were not active. For the remaining sequences, we also consider the driving data immediately preceding the first and immediately following the last interaction of a sequence. This is based on the assumption that the anticipated interaction with the HMI, before the actual gesture on the touchscreen is made, already influences the drivers' driving behavior. The same applies to the driving behavior shortly after the last interaction, as drivers tend to wait for visual confirmation and then reach back to the steering wheel. Considering the findings made by Large et al. [11], Green et al. [9], and Pettitt and Burnett [17], we choose a buffer duration of 2 s. Applying the introduced preprocessing steps, 29,055 interaction sequences are extracted. To compare the driving behavior during interaction sequences with the driving behavior during no-interaction sequences, we sampled the same number of driving data snippets from sequences where no interactions were made. However, compared to a controlled experiment, we can only define the no-interaction sequences such that during those periods no interactions were made with the in-vehicle HMI; we cannot control for distractions happening outside of the head unit, for example, due to phone usage or passengers. To make a valid comparison between interaction sequences and no-interaction sequences, we apply stratified sampling, such that the distribution of sequence lengths is equal in both groups. After sequence extraction, aggregated statistics for each sequence as well as driver distraction metrics, namely SWRR (1, 2, and 5 degrees, according to Markkula and Engström [13]) and Steering Entropy (SE) (according to Nakayama et al. [14], with adjustments proposed by Boer et al. [2] to avoid extremely high entropies based on outliers), are calculated (a minimal computational sketch of these metrics is provided below). Since no personalized data is available, it is not possible to calculate a personalized baseline parameter α for the SE metric. We therefore averaged over all no-interaction sequences to obtain an average α. Preliminary Results Similar to Markkula and Engström [13], we compare the steering wheel metrics based on their standardized effect size [3] and two different driving conditions. We differentiate between straight driving and curved driving since previous work [2,13,14] found that the SE and SWRR metrics are highly sensitive with regard to road curvature. As shown in Fig. 1, one can observe the difference in effect size between straight and curved driving, meaning that interaction and no-interaction sequences can be better separated for straight driving. The SWRR metrics are even more affected than the steering entropy. In general, the standardized effect sizes are smaller than reported by Markkula and Engström [13] (e.g., 0.34 for the data in this work compared to 0.8). Multiple reasons may cause this difference. The first and probably most important difference is the driving context in which the data was collected.
Whereas the dataset at hand comprises multiple different driving scenarios, drivers, and interactions, the data used by Markkula and Engström was collected in a field study in which 48 participants drove the same sequence on a motorway performing the same two tasks (half of the participants performed a visual arrow task, the other half a cognitive one). Additionally, we did not yet tune the metrics to increase sensitivity. In Fig. 2, a comparison between interaction and no-interaction sequences for straight driving is presented. We report the SE and the 2° SWRR since they have been found to be most sensitive. Even though the driving environment is highly uncontrolled and multiple confounding factors can influence the driving behavior, one can clearly observe the anticipated differences between interaction and no-interaction sequences for the SWRR and for the SE. One can see that the SE is larger for smaller vehicle speeds. Whereas the 2° and 5° SWRR measures show the same effect, this trend is not observable in the 1° SWRR (not displayed). A comparison of the 1° SWRR with the results from the field experiment conducted by Engström et al. [7] leads to similar results in the absolute values and in the difference between interaction sequences and no-interaction sequences. Research Agenda The preliminary results show that, even in the highly uncontrolled setting of real-world data, both metrics studied are suitable to measure secondary task load induced by head unit touch interactions. In particular, the SE provided promising results in terms of sensitivity. As a first next step, we plan to adjust the metrics to increase sensitivity. Then, building on the current state of work, we plan to evaluate the correlation between certain UI elements, interactions, or interaction patterns and the driver distraction metrics. Proving this correlation is an important step toward a predictive model of secondary task load. To then draw conclusions about the workload induced by specific interactions with certain UI elements, we plan to perform a feature importance analysis. First, we will use a machine learning-based approach to predict steering entropy based on the proportion of different interactions (e.g., list tab or map drag) and additional metadata like the number of interactions and the interaction density. The insights generated via a feature importance analysis can then serve as first feedback on interactions and interaction patterns that highly influence secondary task load. The overarching goal is to develop an evaluation tool that supports UX experts in assessing designs before user studies are conducted. DISCUSSION In this work in progress, we present our planned approach to develop a model-based method leveraging real-world data to predict secondary task load induced by interactions with a touchscreen HMI. The predictions should be based on the type of interaction and the respective type of UI element used and can serve as an early-stage estimate of driver distraction far before the first experiments are conducted. Therefore, UX experts get early feedback on their designs, which supports them in designing non-distracting interfaces that increase road safety and in saving costs, since re-designs due to necessary changes in later studies can be avoided.
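To make the description of the two steering-based distraction metrics above more concrete, the sketch below shows how a Steering Wheel Reversal Rate and a Nakayama-style Steering Entropy could be computed from a uniformly sampled steering-angle signal. This is only a simplified illustration, not the implementation used in this work: the function names, the reversal-counting details, and the nine-bin entropy discretization are assumptions based on the cited definitions [13, 14], and any filtering or resampling of the raw signal is omitted.

```python
import numpy as np


def _prediction_errors(angle_deg):
    """Second-order extrapolation errors of the steering signal (cf. Nakayama et al. [14])."""
    a = np.asarray(angle_deg, dtype=float)
    predicted = a[2:-1] + (a[2:-1] - a[1:-2]) + 0.5 * ((a[2:-1] - a[1:-2]) - (a[1:-2] - a[:-3]))
    return a[3:] - predicted


def baseline_alpha(baseline_angle_deg):
    """90th percentile of absolute baseline prediction errors (the alpha parameter)."""
    return np.percentile(np.abs(_prediction_errors(baseline_angle_deg)), 90)


def steering_entropy(angle_deg, alpha_deg):
    """Entropy (base-9 logarithm) of prediction errors binned into nine alpha-scaled bins."""
    errors = _prediction_errors(angle_deg)
    edges = alpha_deg * np.array([-5.0, -2.5, -1.0, -0.5, 0.5, 1.0, 2.5, 5.0])
    counts = np.bincount(np.digitize(errors, edges), minlength=9)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p) / np.log(9)))


def steering_wheel_reversal_rate(angle_deg, dt_s, gap_deg=2.0):
    """Simplified SWRR: direction changes of the steering angle larger than `gap_deg`
    (e.g. 1, 2, or 5 degrees), normalized to reversals per minute."""
    reversals = 0
    extremum = angle_deg[0]
    direction = 0  # +1 while the angle is increasing, -1 while decreasing, 0 unknown
    for a in angle_deg[1:]:
        if direction >= 0 and a <= extremum - gap_deg:
            reversals += direction == 1   # count only genuine direction changes
            direction, extremum = -1, a
        elif direction <= 0 and a >= extremum + gap_deg:
            reversals += direction == -1
            direction, extremum = 1, a
        elif (direction >= 0 and a > extremum) or (direction <= 0 and a < extremum):
            extremum = a  # keep tracking the running extremum in the current direction
    return reversals / (len(angle_deg) * dt_s / 60.0)
```

One way to approximate the averaged α described above would be to compute `baseline_alpha` for each no-interaction sequence and average the results before passing it to `steering_entropy`; in practice, all signals would have to share the same sampling rate and be filtered consistently.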
Micro- and nano-porosity of the active Alpine Fault zone, New Zealand . Porosity reduction in rocks from a fault core can cause elevated pore fluid pressures and consequently influence the recurrence time of earthquakes. We investigated the porosity distribution in the New Zealand’s Alpine Fault core in samples recovered during the first phase of the Deep Fault Drilling Project (DFDP-1B) by using two-dimensional nanoscale and three-dimensional microscale imaging. Synchrotron X-ray microtomography-derived analyses of open pore spaces show total microscale porosities in the range of 0.1 %–0.24 %. These pores have mainly non-spherical, elongated, flat shapes and show subtle bipolar orientation. Scanning and transmission electron microscopy reveal the sam-ples’ microstructural organization, where nanoscale pores ornament grain boundaries of the gouge material, especially clay minerals. Our data imply that (i) the porosity of the fault core is very small and not connected; (ii) the distribution of clay minerals controls the shape and orientation of the associated pores; (iii) porosity was reduced due to pressure solution processes; and (iv) mineral precipitation in fluid-filled pores can affect the mechanical behavior of the Alpine Fault by decreasing the already critically low total porosity of the fault core, causing elevated pore fluid pressures and/or intro-ducing weak mineral phases, and thus lowering the overall fault frictional strength. We conclude that the current state of very low porosity in the Alpine Fault core is likely to play a key role in the initiation of the next fault rupture. Introduction Fault mechanics, fault structure, and fluid flow properties of damaged fault rocks are intimately related (e.g., Gratier and Gueydan, 2007;Faulkner et al., 2010). Fault rupture is associated with intense brittle fracturing that enhances porosity, and thus permeability, and therefore also possible rates and directions of fluid propagation within fault zones (e.g., Girault et al., 2018). Conversely, post-seismic recovery mechanisms (gouge compaction and pressure solution processes) result in reductions in porosity, permeability, and fluid flow (Renard et al. 2000;Faulkner et al., 2010;Sutherland et al., 2012). These processes may cause elevated pore fluid pressures within fault cores and trigger frictional failure (e.g., Sibson, 1990;Gratier et al., 2003;Zhu et al., 2020). Therefore, the state of porosity within rocks from fault cores can play a key role in fault slip. The Alpine Fault of New Zealand is late in its seismic cycle (Cochran et al., 2017), so studying it allows us to investigate pre-earthquake conditions that may influence earthquake nucleation and rupture processes. Recently, drilling operations were undertaken in this fault zone to investigate the in situ conditions (Sutherland et al. 2012. Slug tests in the DFDP-1B borehole (Sutherland et al., 2012) and laboratory permeability measurements of core samples (Carpenter et al., 2014) indicate permeability decreases by 6 orders of magnitude with increasing proximity to the fault. Furthermore, Sutherland et al. (2012) documented a 0.53 MPa fluid pressure difference across the principal slip zone (PSZ) of the fault, which suggests that the fault core has significantly lower permeability than the surrounding cataclasite units. It is therefore interpreted as acting as a fault seal that limits fluid circulation within its hanging wall (Sutherland et al., 2012). 
Permeability variations like this are closely associated with the porosity evolution of fault cores and thus are likely to affect the fault strength and seismic properties (Sibson, 1990;Renard et al., 2000;Gratier and Gueydan, 2007). In this study, we investigate the porosity distribution in rocks from the Alpine Fault core and consider the potential effects of this porosity on fault strength. We have measured open pore spaces in these rocks from X-ray computed tomography (XCT) datasets and examined pore morphology by implementing quantitative shape analyses. Lithological and microstructural characteristics of these samples were performed by using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Geological setting New Zealand's Alpine Fault (Fig. 1a) is a major active crustal-scale structure that ruptures in a large earthquake every 291 ± 23 years, the last one of which occurred in 1717 (Cochran et al., 2017). The fault is the main constituent of the oblique transform boundary between the Australian Plate and the Pacific Plate, accommodating around 75 % of the relative plate motion. Ongoing dextral strike-slip at 27 ± 5 mm yr −1 along the fault has resulted in a total strike separation of ∼ 480 km over the last 25 Myr Cooper, 1995, 2001;Norris and Toy, 2014). In Neogene times, a dip-slip component added to the fault motion has resulted in more than 20 km of vertical uplift of the hanging wall Cooper, 1995, 2001;Norris and Toy, 2014). Consequently, rocks comprising the hanging wall of the fault have been exposed in various outcrops, where they can be studied in detail. The amphibolite facies Alpine Schist is the metamorphic protolith of a ∼ 1 km thick mylonite zone, which has been exhumed from depth and now structurally overlies an up to 50 m thick zone of brittlely deformed cataclasites and gouges (e.g., Cooper, 1995, 2001;Norris and Toy, 2014). These rocks have been investigated in outcrops and from samples collected in three boreholes during the two phases of the Deep Fault Drilling Project (DFDP-1A, DFDP-1B, and DFDP-2B; Fig. 1a) along the Alpine Fault (Sutherland et al., 2012;Toy et al., 2015Toy et al., , 2017. Most of the brittle shear displacement along the fault has been accommodated within the fault core, which includes PSZ gouges and cataclasite-series rocks . Both in surface outcrops and drill core samples, the Alpine Fault manifests itself as a thin (5 to 20 cm thick) gouge zone with a predominantly random fabric of clay-rich material Schuck et al., 2020). This cohesive but uncemented layer has a grain size significantly finer than the surrounding cataclasite units, which shows that the material was reworked only within this layer, most probably as a result of ultra-comminution due to multiple shear events under brittle conditions Toy et al., 2015). The local presence of authigenic smectite clays (Schleicher et al., 2015) and calcite and/or chlorite mineralization within sealed fractures and in the gouge matrix indicate that mineral reactions are restricted to an alteration zone within the fault core (Sutherland et al., 2012;Schuck et al., 2020). The Alpine Fault core has been interpreted as having formed during a cyclical history of mineralization, shear, and fragmentation . In addition, in the DFDP-1B borehole (Fig. 1b, Sutherland et al., 2012) fault gouges occur at two distinct depths: 128.1 m (PSZ-1) and 143.85 m (PSZ-2); this shows that the slip was not localized within a single gouge layer . 
X-ray computed tomography (XCT) We imaged the samples using X-ray absorption tomography, where the signal intensity depends on how electron density and bulk density attenuate a monochromatic X-ray along its path through the material (e.g., Fusseis et al., 2014). We acquired the X-ray microtomography data for this study at the 2-BM beamline of the Advanced Photon Source, Argonne National Laboratories, USA, in December 2012. The noncylindrical samples of ∼ 7 mm height and ∼ 4 mm diameter were mostly drilled parallel to the foliation, mounted on a rotary stage, and imaged with a beam energy of 22.5 keV. A charge-couple device camera collected images at 0.25 • rotation steps over 180 • . A sample-detector distance of 70 mm yielded a field of view of 2.81 mm. The voxel size (i.e., spatial sampling) was 1.3 µm, and the spatial resolution ranged from 2 to 3 times the voxel size. We have reconstructed the datasets with a filtered back-projection parallel beam reconstruction into 32 bit gray level volumes consisting of 2048 × 2048 × 2048 voxels using X-TRACT (Gureyev et al., 2011). Analyses of XCT datasets Data analyses and image processing were performed using the commercial software Avizo 9.1™ (Fig. 2). Initially, the datasets were rescaled to 8 bit grayscale volumes for enhanced computer performance. In addition, small volumes of interest were cropped from the whole volume before a nonlocal means filter was applied to reduce noise (Buades et al., 2005). For each voxel, this filter compares the value of this voxel with all neighboring voxels in a given search window. A similarity between the neighbors determines a correction applied to each voxel (e.g., Thomson et al., 2018). On the filtered grayscale images, pores were identified as disconnected materials of the darkest grayscale range (Figs. 2a and S1 in the Supplement). The corresponding grayscale values were thresholded, and the datasets were converted into binary form. This step is called segmentation. Several segmentation techniques exist, from thresholding at a given grayscale value (e.g., Ianossov et al., 2009;Andrew et al., 2013) to deep-learning algorithms (Ma et al., 2020). It is up to the user to choose the segmentation technique that is most appropriate to analyze a given dataset. To our knowledge, no single segmentation technique can be generalized and universally used independently of the nature of the samples. In the present study, we have chosen a simple segmentation technique by applying a threshold to the grayscale images to separate the void space from the solid. This technique has been used in many studies in the last 2 decades to characterize porosity in rocks, including some very recent studies in rock physics (Macente et al., 2019;Renard et al., 2019). The segmented porosity volume depends strongly on the choice of the threshold and some studies have demonstrated that the final porosity estimated by different segmentation methods can vary by 20 % (Andrä et al., 2013). However, when the level of noise in the data is low, the differences in porosities estimated by different segmentation techniques are negligible (Andrew, 2018). Our data were acquired at a synchrotron where the parallel beam and high photon flux ensured a low level of the noise in the images. In addition, application of a non-local-means filter applied to our data reduced the noise level. 
For these reasons, we consider that it was robust to apply a simple thresholding technique to this dataset but acknowledge that the porosity values we estimate could differ by < 20 % from the "true" porosity of the rock (see Andrä et al., 2013; Hapca et al., 2013). However, our segmentation procedure also captured cracks within a sample, which are likely to result from depressurization during core recovery (Figs. 2b and S1 in the Supplement). To omit the cracks, we utilized the morphological operation "connected components" available in the software Avizo 9.1, which allows volumes larger than the selected number of connected voxels to be excluded from the binary label images. To each sample we applied upper limits of 20 (43.94 µm³), 50 (109.85 µm³), 100 (219.7 µm³), and 200 (439.4 µm³) face-connected voxels. Total porosity estimates based on these operations are presented as percentages of the sample volume in Table S1 in the Supplement. Unfortunately, this methodology results in either loss of larger pores or inclusion of small cracks depending on the implemented limit of connected components, and thus the calculated porosities include significant bias. Therefore, the operation "connected components" was used only for visualization purposes, and clusters of 200 face-connected voxels were created to show the 3D volumes of segmented pore spaces (Fig. 2c). Instead, the volumes and shape characteristics of segmented materials (including cracks, i.e., without any data limitation) were exported from the Avizo software in numerical format, and volume distributions within a sample were plotted on a logarithmic scale (Fig. 3). Data up to a specific volume size were fit to a polynomial curve, and then the curve was extrapolated to the x-axis intercept, which is the expected maximum pore size (Fig. 3). For each sample the total porosity was then estimated by integrating the curve, which excludes all volumes on the right side of the curve. Total porosities are presented as a percentage of the whole sample volume (Fig. 3). The implemented equations are given in the Supplement. Pore shapes were analyzed on bivariate histograms plotted by using the numerical pore characteristics previously extracted from the Avizo software. Only pore volumes between 21.97 µm³ (10 voxels) and 878.8 µm³ (400 voxels) were included to avoid bias in the data due to an insufficient voxel count and the presence of cracks, respectively. Individual pores in our dataset are separated (Fig. 2c). The covariance matrix of each pore was calculated, and the three eigenvalues of this covariance matrix were extracted. These three values correspond to the three main orthogonal directions in each pore (i.e., the longest, medium, and shortest axes), and we use them as proxies to describe pore geometry. Thus, their amplitudes provide information on the spatial extension of a given pore and its shape. The ratio between the medium and largest eigenvalues of each pore defines its elongation (Fig. 4), the ratio between the smallest and the largest eigenvalues defines its sphericity (Fig. 5), and the ratio of the smallest and the medium eigenvalues defines its flatness (Fig. 6). The angles θ and ϕ that describe the orientation of the longest eigenvalue (i.e., axis) of each pore with respect to the global orthogonal axes system of the 3D scan were calculated.
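The segmentation and pore-shape workflow described above was carried out in Avizo; the snippet below is only a schematic NumPy/SciPy re-implementation intended to make the quantities explicit (threshold segmentation, face-connected labeling, per-pore volume, eigenvalue-based elongation, sphericity, and flatness, and the orientation of the longest pore axis). The grayscale threshold, the axis convention assumed for trend and plunge, and all function names are illustrative assumptions rather than values or code from this study.

```python
import numpy as np
from scipy import ndimage

VOXEL_UM = 1.3  # voxel edge length in micrometres, as in the acquisition described above


def segment_pores(filtered_volume, threshold):
    """Simple threshold segmentation: voxels darker than `threshold` are treated as pore space."""
    return filtered_volume < threshold


def total_porosity(pore_mask):
    """Segmented pore voxels as a percentage of the imaged volume (raw, crack-inclusive fraction)."""
    return 100.0 * pore_mask.sum() / pore_mask.size


def pore_shape_table(pore_mask, voxel_um=VOXEL_UM, min_vox=10, max_vox=400):
    """Per-pore volume, shape ratios, and long-axis orientation from face-connected components."""
    labels, n_labels = ndimage.label(pore_mask)          # default structure = face (6-) connectivity
    rows = []
    for lab, slc in enumerate(ndimage.find_objects(labels), start=1):
        local = labels[slc] == lab
        n_vox = int(local.sum())
        if not (min_vox <= n_vox <= max_vox):            # the 10-400 voxel window used in the text
            continue
        offset = np.array([s.start for s in slc], dtype=float)
        coords = (np.argwhere(local) + offset) * voxel_um
        evals, evecs = np.linalg.eigh(np.cov(coords.T))  # ascending: smallest, medium, largest
        small, medium, large = evals
        if medium <= 0 or large <= 0:                     # skip numerically degenerate pores
            continue
        # Orientation of the longest axis, assuming axes ordered (x = east, y = north, z = up).
        v = evecs[:, 2]
        if v[2] > 0:
            v = -v                                        # force the axis into the lower hemisphere
        rows.append(
            {
                "volume_um3": n_vox * voxel_um**3,
                "elongation": medium / large,             # medium / largest eigenvalue
                "sphericity": small / large,              # smallest / largest eigenvalue
                "flatness": small / medium,               # smallest / medium eigenvalue
                "trend_deg": float(np.degrees(np.arctan2(v[0], v[1])) % 360.0),
                "plunge_deg": float(np.degrees(np.arcsin(-v[2]))),
            }
        )
    return rows
```

Note that the porosities reported in this study were not taken from the raw voxel fraction but from integrating a polynomial fit to the pore-volume distribution, as described above; `total_porosity` only illustrates the raw segmented fraction, while restricting `pore_shape_table` output to volumes between roughly 22 and 879 µm³ mirrors the voxel window used for the bivariate histograms.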
These angles were translated into trend and plunge and then plotted on a lower-hemisphere equal-area stereographic projection with a probability density contour to display the distribution of pore unit orientations (Fig. 7). Scanning electron microscopy (SEM) SEM images were collected on Zeiss Sigma-FF-SEM at the University of Otago's Centre for Electron Microscopy. The SEM was operated at a working distance of 8.5 mm, an accelerating voltage of 10 keV, and a 120 µm aperture with a dwell time of 100 µs. EDS maps were created by using the Aztec Software (https://www.oxford-instruments.com/products/ microanalysis/energy-dispersive-x-ray-systems-eds-edx/ eds-for-sem/eds-software-aztec, last access: 30 November 2020). Transmission electron microscopy (TEM) TEM images were collected on a FEI Tecnai G2 F20 X-Twin transmission electron microscope, located at the German Research Centre for Geosciences (GFZ), Potsdam, Germany (Fig. 9). The instrument is equipped with a field-emission gun (FEG) electron source and a high-angle annular darkfield (HAADF) detector. Images were collected from samples placed on a Gatan double-tilt holder at an accelerating XCT-derived characteristics of porosity All samples contain low total porosities, ranging from 0.1 % to 0.24 % (Fig. 3). If different segmentation techniques were applied, a variability in the range that Andrew (2018) demonstrated would be reasonable (from nearly 0 % to 20 %) and would correspond to porosities between 0.08 % and 0.29 % in our samples. It can be noted that the lower-cataclasite sample (DFDP-1B 69_2.57) has twice as much pore space (Fig. 3d) as any of the other samples. The characterized pore volume distributions range over almost 3 orders of magnitude for all samples (Fig. 3). Furthermore, the expected maximum pore volume was estimated to be largest in the PSZ-2 sample (DFDP-1B 69_2.54), reaching 862 µm 3 (Fig. 3c). In all samples, shape analyses of pores with volumes between 21.97 µm 3 (10 voxels) and 878.8 µm 3 (400 voxels) demonstrate predominantly elongated (Fig. 4), non-spherical (Fig. 5), and flat pore shapes (Fig. 6). This is particularly pronounced for the smaller pore volumes. The number of elon-gated pores per sample increases in the upper foliated cataclasites ( Fig. 4a and b) with increasing proximity to PSZ-2, where most elongated pores occur (Fig. 4c). Conversely, the lower-cataclasite sample demonstrates proportionally fewer elongated pores within the sample (Fig. 4d). The degree of sphericity is uniform for all samples, and pores appear as mainly non-spherical (Fig. 5). A few isolated spherical pores are manifested only by small pore volumes (Fig. 5). A trend of increasing the number of flat pores is observed with increasing sample depth (Fig. 6), and most flat pores are detected in the lower cataclasite (Fig. 6d). Microstructural characteristics of porosity To demonstrate the microstructural arrangement of the cataclasites, we show representative SEM images from sample DFDP-1B 69_248 (Fig. 8), previously described as a "lower foliated cataclasite" by Toy et al., 2015. SEM images presented here reveal rounded to subrounded crystalline clasts up to 100 µm in diameter (Fig. 8a, b), which con- Figure 5. Bivariate histograms showing sphericity vs. pore volume (µm 3 ) and number of pores for each sample. The arrow indicates the direction of increasing sphericity. Here, the sphericity is defined as the ratio between the smallest and the largest eigenvalues (i.e., axis) of each pore. 
sist of ∼ 50 % plagioclase, ∼ 40 % K-feldspar, and ∼ 10 % quartz and are elongated at angles of 0-30 • to the foliation. The surrounding matrix material is composed of finer grains (< 30 µm in diameter) of white micas, chlorite, K-feldspar, calcite, and Ti oxide (Fig. 8c). Numerous quartz clasts contain microfractures, filled by calcite and/or chlorite. TEM characterization of the gouge material from PSZ-2 (sample DFDP-1B 69_2.54) reveals that the Alpine Fault gouges are composed of angular quartz and/or feldspar fragments (∼ 200 nm in size), wrapped by smaller phyllosilicates (< 100 nm long). This random fabric is ornamented by nanoscale pores (< 50 nm), distributed along all grain and phase boundaries but especially abundant within or around clay minerals (Fig. 9a). The gouge material also demonstrates phyllosilicate-rich areas, defined by an increase in the clay/clast ratio. In these zones, fine (< 100 nm long) and coarser (few µm long) clay grains coexist and are aligned in wavy fabric that surrounds sporadic protolith fragments (Fig. 9b). Pore spaces are again distributed along the boundaries of the constituent mineral grains, but some of them are larger (∼ 0.5 µm) with thin ellipsoidal or elongated shapes (Fig. 9b, c). These pores are commonly associated with inter-clay-layer porosity. Large size pores are also observed along quartz-feldspar phase boundaries. These latter pores are associated with multiple grains and occasionally disrupt the boundaries and, thus, were labeled as fracture porosity (Fig. 9d). Characteristics of porosity within the Alpine Fault core Porosity analyses of samples from or in close proximity to the two PSZs encountered in the DFDP-1B drill core reveal total pore volumes between 0.1 % and 0.24 % (Fig. 3). These values are significantly lower than the porosity estimates from other active faults in the world, such as 0.2 to 5.7 % total porosity in the core of the Nojima Fault, Japan (Surma et al., 2003) and 0 % to 18 % in the San Andreas Fault core (Blackburn et al., 2009). The Alpine Fault core contains total pore space volumes comparable only with the lower porosities in these previous studies. It should be noted that the smallest pore spaces captured in the XCT datasets are 1.3 µm in size due to acquisition constraints, whereas nanoscale porosity was identified on the TEM images. Therefore, the estimated total porosities from XCT data represent only minimum values of the open pore spaces in the Alpine Fault core. TEM images presented here mainly focus on nanoscale materials (Fig. 9a, c, d) but were also used to describe the distribution of micro-porosity in these rocks (Fig. 9b). The pores visible on grain and phase boundaries in Fig. 9b have similar sizes to the pores segmented on XCT images (> 1.3 µm in diameter); thus we conclude that this is the typical habit of both nano-and micro-pores within the Alpine Fault core (Fig. 9). In addition, both quantitative micro-porosity shape analyses (Figs. 4,5,and 6) and nano-pores identified on TEM images (Fig. 9) reveal that a significant population of pores is predominantly non-spherical with elongated, flat shapes. We attribute this observation to the tendency of these pores to ornament clay minerals where pores are distributed and elongated along their (001) planes (Fig. 9b, c, and d). Foliation in the upper cataclasites is defined by clay-sized phyllosilicates that become more abundant with proximity to the PSZ , where a weak clay fabric is developed (Schleicher et al., 2015). 
This gradual enrichment in clay minerals coincides with the subtle development of bipolar distributions of pore orientations with increasing sample depth (Fig. 7). This observation and the fact that pores are mainly distributed along grain boundaries of clays (Fig. 9) suggest that the distribution of clay minerals also controls pore orientations within the Alpine Fault core. Previously, the phyllosilicate foliation in the Alpine Fault cataclasites has been used to define shear direction . Thus, we speculate that pore orientations in these rocks are also systematically related to the kinematic framework of the shear zone. If these pores represent remnants of fluid channels, their spatial orientation is likely to reflect the fluid flow directions during deformation. To address this possibil-ity more data for systematic analyses of pore orientations are needed. Porosity reduction within the Alpine Fault core The comparatively lower porosity estimates of the Alpine Fault core compared to other active faults (e.g., the Nojima Fault, Surma et al., 2003, andthe San Andreas Fault, Blackburn et al., 2009) could be attributed to the fact that the Alpine Fault is late in its c. 300-year seismic cycle and the last seismic event occurred in 1717 (Cochran et al., 2017). Thus, we propose that the fault has almost completely sealed. Porosity of fault cores is believed to evolve during the seismic cycle, since fault rupture can cause porosities to increase up to 10 % (Marone et al., 1990), and subsequent healing mechanisms (such as mechanical compaction of the fault gouge and/or elimination of pore spaces within the fault core due to pressure solution processes) cause porosity to decrease over time (Sibson, 1990;Renard et al., 2000;Faulkner et al., 2010). SEM data presented here show that fine-grained chlorite and muscovite grains formed as a cement in the cataclastic matrix (Fig. 8c). Our TEM data reveal the abundance of newly precipitated authigenic clays, wrapped around coarser clay minerals (Fig. 9b). Furthermore, delicate clay minerals form fringe structures (Fig. 9a) and strain shadows (Fig. 9c) around larger quartz-feldspar grains. These microstructural observations demonstrate that pressure solution processes operated within these rocks . Evidence for pressure solution processes has been previously documented in all units comprising the Alpine Fault core . Abundant precipitation of alteration minerals (Sutherland et al., 2012), calcite-filled intragran- ular and cross-cutting veins , and the occurrence of newly formed smectite clays (Schleicher et al., 2015) indicate extensive fluid-rock reactions. In addition, anastomosing networks of opaque minerals (such as graphite; Kirilova et al., 2017), which define foliation in the upper cataclasites , have been interpreted as being concentrated by pressure solution processes during aseismic creep Gratier et al., 2011). The petrological characteristics of the Alpine Fault core lithologies indicate that solution transfer was likely the dominant mechanism for pore closure within these rocks. Porosity estimates presented here are so low that presumably negligible variations in between samples can represent significant gradients in porosity. For example, the increase in total porosity in sample DFDP-1B 69-2.57 with only 0.14 % manifests itself as twice as many open pore spaces in comparison to the rest of the analyzed samples (Fig. 3). In addition, this is the only footwall sample analyzed here and, as already mentioned in Sect. 
3.1, does not contain any gouge material. Post-rupture porosity reduction is known to operate 3 to 4 times faster within fine-grained fault gouges than in coarser-grained cataclasites (Walder and Nur, 1984;Sleep and Blanpied, 1992;Renard et al., 2000), which may explain the porosity differences demonstrated above. Furthermore, previous studies documented less carbonate and phyllosilicate filling of cracks in the Alpine Fault footwall cataclasites than in the hanging wall cataclasites (Sutherland et al., 2012;Toy et al., 2015), suggesting more reactive fluids are present and isolated within the hanging wall of the Alpine Fault. Thus, more intense dissolution-precipitation processes took place in the fault's hanging wall, which very likely resulted in more efficient porosity reduction, as demonstrated by our porosity estimates (Fig. 3). Effects of porosity on the Alpine Fault strength Very low-porosity estimates are presented here (Fig. 3). Very low permeabilities of 10 −18 m 2 were also measured experimentally in clay-rich cataclasites and gouges from the Alpine Fault zone (Carpenter et al., 2014). In addition, the documented difference in total porosities between the hanging wall and footwall samples (Fig. 3) may be interpreted as reflecting different intensities of pressure solution processes and thus compartmentalization of percolating fluids. Our porosity data show a spatial trend similar to the permeability measurements of Carpenter et al. (2014). This observation yields increased confidence in the interpretation of Carpenter et al. (2014) of a permeability gradient with distance from the PSZ, which itself acts as a hydraulic seal (Sutherland, et al., 2012). The existence of such a barrier to flow is characteristic of faults undergoing creep and locked faults (Rice, 1992;Labaume et al., 1997;Wiersberg and Erzinger, 2008). However, much higher permeabilities in the surrounding damaged rocks (Carpenter et al., 2014) allow fast propagation of fluids within them and can cause localization of high fluid pressures on one side or the other of a hydraulic seal (Sibson, 1990). Such fluid pressures can enhance gouge compaction and pressure solution processes, which will eventually introduce zones of weakness and thus may trigger fault slip (Faulkner et al., 2010). Previous studies and the observations presented here show that fluids were present in the Alpine Fault rocks. Fluid-filled pores represent a favorable environment for mineral precipitation, which can affect the fault strength in two ways: (i) a very small decrease in these critically low total porosities due to mineral precipitation would cause fluid pressurization, which is a well-known fault-weakening mechanism described by Byerlee (1990) and Sibson (1990); however, this pressure increase could be slightly offset by the inclusion of fluids into new hydrous minerals; (ii) deposition of frictionally weak phases (such as clay minerals and graphite), especially if they decorate grain contacts and/or form interlinked weak layers, would lower the overall frictional strength (Rutter et al., 1976;Niemeijer et al., 2010). Precipitated authigenic clay minerals were identified in our TEM data (Fig. 9) and also documented by previous studies (Schleicher et al., 2015). 
As well as having low frictional strengths (Moore and Lockner, 2004), clay minerals may also contribute to the formation of an impermeable seal if they form an aligned fabric, which can enhance the likelihood of fluid pressurization in the fault rocks (Rice, 1992;Faulkner et al., 2010). In addition, graphite, which was previously documented in these rocks , may effectively weaken the fault due to mechanical smearing (Rutter et al., 2013) and/or localized precipitation within strained areas (Upton and Craw, 2008). Such graphite precipitation within shear surfaces was previously documented by Kirilova et al. (2017). In summary, the presence of trapped fluids in the lowporosity rocks of the Alpine Fault core possibly controls the mechanical behavior of the fault and could be responsible for future rupture initiation due to fluid pressurization and/or precipitation of weak mineral phases. This hypothesis is further supported by an experimental study showing that the DFDP-1 gouges are frictionally strong in the absence of elevated fluid pressure (Boulton et al., 2014). Conclusions Analyses of XCT datasets and TEM images of borehole samples from the core of the Alpine Fault reveal micro-and nanoscale pores, distributed along grain boundaries of the constituent mineral phases, especially clay minerals. The tendency of these pores to ornament clays defines their predominantly non-spherical, elongated, flat shapes and the bipolar distribution of pore orientations. The documented extremely low total porosities (in the range 0.1 %-0.24 %) in these rocks suggest effective porosity reduction and fault healing. Microstructural observations presented here and documented in previous studies indicate that pressure solution processes were the dominant healing mechanism and that fluids were present in these rocks. Therefore, fluid-filled pores may be places where elevated pore fluid pressures develop, due to further mineral precipitation that decreases the already critically low total porosities. Alternatively, these pores may also facilitate the deposition of weak mineral phases (such as clay minerals and graphite) that may very effectively weaken the fault. We conclude that the current state of the fault core porosity is possibly a controlling factor in the mechanical behavior of the Alpine Fault and will likely play a key role in the initiation of the next fault rupture. Data availability. Avizo screenshots, total porosity estimates, Matlab script, and numerical data of pore volumes can be found in the Supplement. Author contributions. MK reconstructed, processed, and analyzed the XCT datasets presented here, interpreted the TEM data, and prepared the paper. Most of this work was performed during MK's PhD under the academic guidance of VT. VT and KG collected the XCT data with technical support by XX. FR and KS contributed with valuable discussion about XCT data analyses and edited the paper. RW enabled TEM data acquisition and provided his expertise on TEM data interpretation. RM collected and analyzed the presented SEM data. The final version of this paper benefits from collective intellectual input.
2020-06-18T16:06:42.326Z
2020-05-29T00:00:00.000
{ "year": 2020, "sha1": "8185a92f7cfb19f0785df51f32328cfc532dc61d", "oa_license": "CCBY", "oa_url": "https://se.copernicus.org/articles/11/2425/2020/se-11-2425-2020.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "09a98c69fe79d029684c80d605e60095bba1cf39", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
237369570
pes2o/s2orc
v3-fos-license
A Study of Natural Rosmarinus Corrosion Inhibitor for Zinc In HCl Solution In this study, the effect of important variables on the corrosion rate of Zinc metal was studied with free corrosion, weight loss, and polarization techniques. The test system was designed to measure corrosion potential, corrosion rate, limited current density, and the polarization technique. The experiment used a 0.1M HCl solution as its medium. Temperatures (20,30, 40, 50, and 60) C and rosemary inhibitor concentrations (1 and 5) g/L were used to study the efficacy of the zinc corrosion process. The results showed that the corrosion rate increased with increasing temperature but decreased with increasing inhibitor concentration in acid solution. The maximum inhibition efficiency in Weight Loss Experiments observed at 5g/L of rosemary and 20 °C is 49.07%. The corrosion potential became more negative with increasing temperature and became nobler (less negative) with increasing inhibitor concentration. It has been shown that rosemary is good as a green inhibitor in acid solution. Introduction Corrosion is a metal's destructive attack on its environment caused by a chemical or electrochemical reaction. Metal corrosion wastes not just the metal, but also the energy, water, and human work that went into creating and fabricating the metal structures in the first place [1][2][3]. Zinc is one of the most often used metals, coming in fourth place behind iron, aluminum, and copper in terms of production and consumption worldwide. Zinc has a variety of applications, including galvanizing, die casting, and electronics. In high-energy-density batteries (such as Ni/Zn, Ag/Zn, and Zn/air), it is the preferred anode material. Acids have a high proclivity for attacking zinc. Figure (1) indicates that in acidic conditions, no surface oxides of zinc are stable, whereas zinc corrosion products are generated and are more stable under neutral or slightly alkaline circumstances. Hence, for scale removal and cleaning of zinc surfaces with acidic solutions, it becomes necessary to use inhibitors [4][5][6]. The use of inhibitors is one of the most practical methods by which to protect zinc from corrosion, particularly in acidic mediums [7]. Aloe vera extract [8], Citrullus vulgaris peel [9], mansoa alliacea plant extract [10], fenugreek [11], and natural onion juice [12] have all been reported as green corrosion inhibitors for Zn in various environments. Figure 1. Potential-pH equilibrium Pourbaix diagram for the zinc-water system at 25 o C Plant extracts contain tannins, alkaloids, flavonoids, polyphenols, saponins, glycosides, anthraquinones, amino acids, proteins, and other heterocyclic compounds, among other phytochemical elements. These phytochemicals have been suggested as possible corrosion inhibitors [13].The effectiveness of inhibition is achieved through one or more of the following mechanisms: preferential adsorption on anodic or cathodic sites and stopping the reaction, or the formation of a protective barrier film on the surface. The inhibitors are classed as anodic, cathodic, or mix-type based on their inhibition mechanism. Natural inhibitors have thus become a viable alternative to long-term technological advancement [14][15][16][17]. Rosmarinus officinalis l. (Rosemary) is an attractive evergreen shrub with pine needle-like leaves that grows wild in most Mediterranean regions. Rosemary essential oil is a widely used aromatic and medicinal plant with sterilizing, insecticidal, and anti-inflammatory properties. 
Perfume, bath liquid, cosmetics, shampoo, air freshener, ant repellant, and other everyday chemicals all include it. When it comes to spices, rosemary has a long history in the food industry [18][19][20]. Antioxidant, anti-inflammatory, antibacterial, anticancer, and antiandrogenic properties have been documented for the plant. Carnosol, carnosic acid, rosmanol, rosmadial, epirosmanol, rosmadiphenol, and rosmarinic acid are all phenolic compounds having inhibitory characteristics found in rosemary [21,22].The corrosion inhibitory activity of Rosemary extract as a green inhibitor has previously been investigated in acid solutions on steel [23-30] and nickel [31].To our knowledge, no research has been published on the inhibitory effects of rosemary extract on acid corrosion of zinc in hydrochloric acid solution.The effects of rosemary on zinc corrosion in a 0.1M HCl solution, as well as an open circuit and electrochemical polarization, were investigated in this study. It also takes into account the temperature. Experimental work 2.1. Specimen Preparation The working electrode is a sheet of zinc specimen (99.9% purity) with dimensions of 5 cm x 2 cm x 0.05 cm, which was used in the experiments as a cathode. The Zinc specimen was carefully and lightly polished with grit silicon carbide paper before each experiment, rinsed with water, cleaned in 3% HCl for 5 minutes, washed in tap water, dried with Gauze, and then dried in an electrical oven at about 110 o C for 10 minutes [32]. Inhibitor preparation After drying the rosemary, it was ground with an electric grinder. This powder was weighed to the required weight (1,5 grams) before being immersed in one liter of water for 24 hours and filtered using a special filter paper. (1) and (2) below, corrosion inhibition efficiency (IE%) and the corrosion rate (gmd) were computed based on the measured corrosion current density [4]: Typically, the inhibitor of the corrosion remains valued in terms of inhibition competence, and is given through the relationship [33]: Electrochemical Measurements Anywhere CR is the corrosion rate and the subscripts o and I mention toward the absence and presence of the inhibitor. Free Corrosion Experiments: Electrochemical measurements were performed with two electrodes: a zinc specimen as the working electrode and a saturated calomel electrode (SCE) as the reference electrode. After preparing the sample with previous steps and immersing it in a 0.1M HCl acid solution for 2 hours and using a voltmeter, the corrosion potential was determined at different temperatures and inhibitor concentrations. Electrochemical Polarization Experiments to determine Ecorr corrosion current and corrosion potential using the Tafel method. Under different conditions, anode and cathode polarization curves were induced using three electrodes: zinc, SCE, and a graphite rod as an auxiliary electrode. The circuit was connected, and once all electrical connections were completed, the circuit was turned on, with a constant current source set to 10 volts. Record the reading of the voltage and the cathode current of the voltmeter and ammeter respectively, and cathodic curve can be plotted and then the anodic curve of polarization by substituting the anodic and cathodic connections. The corrosion parameters such as corrosion current (Icorr), corrosion potential (Ecorr), anodic Tafel slope (ba), and cathodic Tafel slope (bc) were calculated from the conducted polarization curves using the program Origin (data analysis software). 
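The extraction of Tafel parameters from the recorded polarization curves can be reproduced outside Origin with a simple linear fit on the Tafel regions. The sketch below is only an illustration, not the procedure the authors used: the potential windows bracketing the linear anodic and cathodic branches, and the synthetic curve parameters, are hypothetical. It fits E versus log10|i| on each branch and reads Ecorr and Icorr from the intersection of the two fitted lines.

```python
import numpy as np

def tafel_fit(E, i, anodic_window, cathodic_window):
    """Fit Tafel lines E = intercept + slope*log10|i| on chosen potential windows.

    E, i      : numpy arrays of potential (V) and signed current density (A/cm^2).
    *_window  : (E_low, E_high) tuples bracketing the linear Tafel regions
                (analyst-chosen; hypothetical values in the example below).
    Returns (Ecorr, Icorr, ba, bc) with Tafel slopes in V/decade.
    """
    logi = np.log10(np.abs(i) + 1e-15)

    def line(window):
        m = (E >= window[0]) & (E <= window[1])
        slope, intercept = np.polyfit(logi[m], E[m], 1)
        return slope, intercept

    ba, a_int = line(anodic_window)     # anodic branch
    bc, c_int = line(cathodic_window)   # cathodic branch

    # Intersection of the two Tafel lines gives Ecorr and log10(Icorr).
    log_icorr = (c_int - a_int) / (ba - bc)
    return a_int + ba * log_icorr, 10.0**log_icorr, ba, bc

# Synthetic polarization curve (hypothetical parameters) to exercise the fit:
Ecorr_true, icorr_true, ba_true, bc_true = -1.00, 1e-4, 0.060, -0.120
E = np.linspace(-1.25, -0.75, 200)
i = icorr_true * (10**((E - Ecorr_true) / ba_true)
                  - 10**((E - Ecorr_true) / bc_true))
print(tafel_fit(E, i, anodic_window=(-0.90, -0.80), cathodic_window=(-1.20, -1.10)))
```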
The corrosion inhibition efficiency was calculated using Equation (4): Anywhere I is corrosion current density and the subscripts o and I mention toward the absence and presence of the inhibitor. Discussion and Results of Weight Loss The weight loss of zinc in a 0.1M HCl solution in the absence and presence of different rosemary concentrations and temperatures. The inhibitory efficiency, the rate of corrosion (gmd), and the values of dissolution current density (id) were calculated and given in Table (1). It can be shown in table (1) and Figure (2) that when the inhibitor concentration is increased, the corrosion rate is reduced. Adsorption of inhibitor results in the formation of bulky precipitates. Rosemary performs as an effective zinc corrosion inhibitor in hydrochloric acid solution, according to the findings. The efficiency of inhibition improves as the inhibitor concentration increases. His results show that increasing the extract concentration increases the number of inhibitor molecules adsorbed onto the zinc surface and decreases the surface area available for a direct acid attack. The presence of organic components in rosemary extract is thought to be responsible for its inhibitory impact. Rosemary contains a variety of chemicals. Discussion and Results of Free Corrosion From the figures (3, 4, 5) the potential behavior of zinc corrosion over time in HCl solution is shown under different conditions, and it can be seen that the potential becomes more negative with time. This behavior was due to the depletion of Oxygen due to its high drop on the surface of the metal. When the inhibitor concentration increased the potential became less negative. Figures 6, 7, and 8 show the polarization curves for zinc in acid solution in the absence and presence of various Rosemary concentrations at various temperatures. The anodic and cathodic polarization curves shift to lower current densities when the inhibiter is added, according to the study. The polarization parameters; anodic Tafel slope (ba), cathodic Tafel slope (bc), corrosion potential (Ecorr), corrosion current (Icorr), and inhibition efficiency (IE%) are presented in table (2). The data in Table (2) show that adding rosemary extract reduces the corrosion current density and that the corrosion potential shifts slightly to less negative values as the concentration of the added inhibitor is increased, and a large number of corrosion pits form on the surface of the zinc. This could be attributed to the adsorption of the inhibitor molecules on the metal's surface. It's also clear that as inhibitor concentrations rise, the IE% of rosemary extracts rise. The rosemary extracts had the highest inhibitory efficiency of 57.55. Instead, the general shape of the polarization curves is unaffected by temperature changes. These results suggest that rosemary has an inhibitory effect on zinc corrosion in an acidic medium, and that it works as a mixed inhibitor. FTIR Examination FTIR spectra in the diversity of 4000-500 on behalf of the rare Rosmarinus remain exposed in Figure ( Conclusion As obtained results have shown, rosemary (Rosmarinus) as the green inhibitor can be applied in order to reduce the corrosion rate of zinc in 0.1M HCl solution and it works as a mixed inhibitor. The inhibition efficiency was found to increase by increasing inhibitor concentrations and decrease with increasing temperature. 
In addition, the corrosion potential became less negative (nobler) with increasing rosemary inhibitor concentration and more negative with increasing temperature.
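As a compact illustration of the two efficiency definitions used above (Equation 2 from corrosion rates and Equation 4 from corrosion current densities), the following sketch computes a corrosion rate in gmd from a weight-loss measurement and the inhibition efficiency with and without inhibitor. The specimen exposure area and the numerical inputs are hypothetical placeholders, not values from the reported tables.

```python
def corrosion_rate_gmd(weight_loss_g, area_m2, time_days):
    """Corrosion rate in g per m^2 per day (gmd) from a weight-loss test."""
    return weight_loss_g / (area_m2 * time_days)

def inhibition_efficiency(rate_blank, rate_inhibited):
    """IE% = (rate_o - rate_i) / rate_o * 100; usable with gmd corrosion rates
    (Eq. 2) or with corrosion current densities Icorr (Eq. 4)."""
    return 100.0 * (rate_blank - rate_inhibited) / rate_blank

# Hypothetical weight-loss example for a 5 cm x 2 cm zinc coupon (both faces):
area = 2 * (0.05 * 0.02)                          # exposed area in m^2
cr_blank = corrosion_rate_gmd(0.0120, area, 1.0)  # without inhibitor
cr_inhib = corrosion_rate_gmd(0.0061, area, 1.0)  # with 5 g/L rosemary extract
print(f"IE from weight loss: {inhibition_efficiency(cr_blank, cr_inhib):.1f} %")

# The same relation applied to Tafel-derived corrosion current densities (Eq. 4):
print(f"IE from Icorr: {inhibition_efficiency(1.2e-4, 5.1e-5):.1f} %")
```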
2021-09-01T20:09:12.297Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "6f9f13bcaf575a5bddc2f081fcc11a89cf78fc2c", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1973/1/012126", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "6f9f13bcaf575a5bddc2f081fcc11a89cf78fc2c", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
252736461
pes2o/s2orc
v3-fos-license
Lung ultrasound as a screening tool for SARS‐CoV‐2 infection in surgical patients Abstract Purpose To evaluate the diagnostic performance of lung ultrasound (LUS) in screening for SARS‐CoV‐2 infection in patients requiring surgery. Methods Patients underwent a LUS protocol that included a scoring system for screening COVID‐19 pneumonia as well as RT‐PCR test for SARS‐CoV‐2. The receiver operator characteristic (ROC) curve was determined for the relationship between LUS score and PCR test results for COVID‐19. The optimal threshold for the best discrimination between non‐COVID‐19 patients and COVID‐19 patients was calculated. Results Among 203 patients enrolled (mean age 48 years; 82 males), 8.3% were COVID‐19‐positive; 4.9% were diagnosed via the initial RT‐PCR test. Of the patients diagnosed with SARS‐CoV‐2, 64.7% required in‐hospital management and 17.6% died. The most common ultrasound findings were B lines (19.7%) and a thickened pleura (19.2%). The AUC of the ROC curve of the relationship of LUS score with a cutoff value >8 versus RT‐PCR test for the assessment of SARS‐CoV‐2 pneumonia was 0.75 (95% CI 0.61–0.89; sensitivity 52.9%; specificity 91%; LR (+) 6.15, LR (−) 0.51). Conclusion The LUS score in surgical patients is not a useful tool for screening patients with potential COVID‐19 infection. LUS score shows a high specificity with a cut‐off value of 8. | INTRODUCTION By the end of 2019, an outbreak of SARS-CoV-2 infection rapidly spread from China to the rest of the world. 1 The clinical manifestations of SARS-CoV-2 ranged from asymptomatic to severe pneumonia accompanied by organ function damage. [2][3][4] The available data to date suggest that at least one-third of SARS-CoV-2 infections are asymptomatic. 5 Nevertheless, asymptomatic patients can be a source of transmission. Li et al. 6 Identifying asymptomatic patients with SARS-CoV-2 infection is important for taking appropriate measures to protect health care workers and other patients against nosocomial infections. Furthermore, surgical patients with asymptomatic or symptomatic SARS-CoV-2 infection have an increased risk of perioperative morbidity and mortality. [7][8][9] These findings are particularly relevant for patients requiring urgent surgery, because such procedures cannot be delayed. Given the considerable perioperative morbidity and mortality associated with operating on COVID-19 patients and the risk of nosocomial transmission, it is recommended that surgical patients undergo appropriate screening prior to surgery, without unnecessarily delays. 10 Preoperative COVID-19-positive testing rates range between 0.74% and 0.86%. 7,11 A diagnostic work-up that included clinical evaluation, history of exposure to SARS-CoV-2, and testing with reverse transcription-PCR (RT-PCR) is recommended. Computed tomography (CT) has been proposed for characterizing pulmonary involvement in COVID-19 patients, due to its ability to detect lung changes related to SARS-CoV-2 infection. 10,12 However, CT has the disadvantage of additional costs of screening, the burden of ionizing radiation, and low costeffectiveness in asymptomatic patients. Indeed, studies have shown limited added value of chest CT in preoperative screening. [13][14][15] Interest has emerged in the use of lung ultrasound (LUS) as an alternative first-line imaging modality for screening patients. Soldati et al. 
13 proposed a protocol for standardization of the use of LUS in COVID-19 patients, using landmarks on chest anatomic lines and a scoring system that allows clinicians to record the highest score obtained in each area. The potential role of LUS in characterizing lung involvement in COVID19 is still debated. While in some studies LUS has been a useful tool for the early detection of SARS-CoV-2 infection 14-16 some recent studies found it is not a reliable imaging tool in ruling out covid19 pneumonia in patient presenting to the emergency department. 17 The purpose of this study was (1) to evaluate the diagnostic performance of LUS in screening for SARS-CoV-2 infection in patients requiring urgent surgery; and (2) to identify the cutoff value of the LUS score for COVID-19 pneumonia that discriminates patients with SARS-CoV-2 infection. | Study design and human subjects This prospective study was designed for reporting the diagnostic accuracy of LUS for diagnosing SARS-CoV-2 infection during the screening process of patients requiring emergency surgery, following the STARD guideline. 18 The study was carried out in an academic hospital in Cali, Colombia, equivalent to a Level I trauma center. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Research Ethics Committee of our institution. | Data source Between May 15, 2020, and November 17, 2020, all adult patients were enrolled who were <72 hours from admission and who required urgent surgery for any cause. We excluded patients <18 years old, prisoners, patients with chronic lung disease, heart or renal failure, patients who were intubated on admission, and patients who declined to participate. Demographics, clinical parameters, laboratory values, PCR-testing for SARS-CoV-2, imaging features of LUS, and outcome variables were collected. Data were collected using a database that allows real-time data entry. The RT-PCR testing results and outcomes were later registered by the researchers. A 15-day follow-up was completed in patients with negative RT-PCR results. | Data collection In our hospital during a SARS-CoV-2 outbreak, collection of LUS score and nasopharyngeal swabs for RT-PCR assays were included in the screening process of the routine work-up for patients who required any surgical procedure. All attending physicians (emergency medicine and surgeons) had experience in point-of-care ultrasound as a standard diagnostic tool; they were also trained to perform LUS. | PCR testing for SARS-CoV-2 Clinical specimens for COVID-19 diagnostic testing were obtained from nasopharyngeal swabs and processed by RT-PCR assay. In cases of positive results, patients undergo their scheduled surgery and are thereafter admitted to the COVID-19 Care Unit. In cases of negative SARS-CoV-2 test results, but that are highly suspected to be infected due to the presence of symptoms (fever, cough, dyspnea, diarrhea, dry cough, ageusia or anosmia) or previous contact with positive cases within the last 14 days, patients were re-tested for SARS-CoV-2 at For each of the 14 regions, a score ranging from 0 to 3 was assigned, according to the following findings: the pleural line is continuous and regular (0 pts); the pleural line is indented (1 pt); the pleural line is broken (2 pts); the presence of dense and largely extended white lung with or without larger consolidations (3 pts). At the end of the procedure, the sonographer recorded the highest score obtained for each area. 
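The scoring rule just described maps each finding to 0-3 points and keeps the worst (highest) score observed in each of the 14 regions. A minimal sketch of that bookkeeping follows; the finding labels paraphrase the protocol and the example observations are hypothetical.

```python
# Score assigned to each LUS finding, per the 0-3 rule described above.
FINDING_SCORE = {
    "regular_pleural_line": 0,
    "indented_pleural_line": 1,
    "broken_pleural_line": 2,
    "white_lung_or_consolidation": 3,
}

def region_scores(observations, n_regions=14):
    """Highest score recorded per scanned region.

    observations : iterable of (region_index, finding) pairs collected during
                   the exam; region_index runs from 0 to n_regions - 1.
    """
    scores = [0] * n_regions
    for region, finding in observations:
        scores[region] = max(scores[region], FINDING_SCORE[finding])
    return scores

# Hypothetical exam: two findings in region 3, one in region 7.
obs = [(3, "indented_pleural_line"), (3, "broken_pleural_line"),
       (7, "white_lung_or_consolidation")]
per_region = region_scores(obs)
```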
The total LUS score was calculated by summing the scores of the 14 zones (range of possible scores: 0-36). | Statistical analysis Statistical analyses were performed using Stata 15.1 ® (College Station, TX). Categorical variables were presented as frequencies and percentages. The normality of continuous variables was examined by the Shapiro-Wilk test. Afterward, they were presented as mean and standard deviation or median and IQR, according to the normality of the data. Operative characteristics were also calculated. The ultrasounds were interpreted as negative when no suspicious signs were identified or when the sum of the scores was zero. Conversely, the ultrasounds were interpreted as positive when any suspicious sign was found or when the score was ≥1. Regarding SARS-CoV-2 infection, each patient was classified as positive if any RT-PCR test results were positive and classified as negative if the results were negative (Figure 1). After the classification, operative characteristics with their respective 95% CI were calculated for every ultrasonographic sign. The severity of the ultrasonographic findings was computed by summing the severity of the identified sign in each of the 14 prespecified points, thereby generating the Lung Ultrasound Severity Score (LUS). An ROC curve was constructed to evaluate the discriminative ability and the best cutoff for the obtained score. | Sample size The operative characteristics of ultrasound for diagnosing SARS-CoV-2 infection were unknown when the study was planned. However, sensitivity and specificity were reported by Testa and coworkers for the early diagnosis of H1N1 pneumonia (94% and 85%, respectively). 19 Buderer's method was used to calculate the sample size. 20 With a 95% CI and an error margin of 10%, samples of 433 and 217 are appropriate for a prevalence of COVID-19 of 5% and 10%, respectively. 21 For every 100 included patients, the prevalence was evaluated to adjust the sample size. | Ethical considerations This investigation was approved by the investigation committee and the ethics committee on May 12 and 14 of 2020 respectively (record #041, act #160-2020). Given that the pulmonary ultrasound and the RT-PCR assay were incorporated into usual care, the requirement to obtain informed consent was waived. | Patient features A total of 292 patient candidates for urgent surgery were evaluated with LUS; of these patients, 89 were excluded, the majority of whom because they did not undergone surgery (Figure 1). The remaining 203 patients were enrolled, whose median age was 48 years and 121 (59.6%) of whom were women and 82 (40.4%) of whom were men. The clinical characteristics of the patients are summarized in Table 1. The most frequent surgical group was trauma and emergency surgery (39.9%), followed by orthopedic surgery (14.8%) and gynecology (3.4%; Table 1). The LUS score constructed from the sum of the findings in | DISCUSSION During the study period, Colombia faced its second COVID-19 wave. The Valle del Cauca was one of the most affected regions in Colombia, with an incidence rate of 77 per 100 000 individuals. 22 Hospitals in our city were at maximum capacity due to SARS-CoV-2 cases, with 8.3% of infected cases requiring in-hospital management. 22 Patients in our study were treated by different surgical subspecialists, with the majority being treated by specialists in trauma surgery and orthopedic surgery, as reported by other authors. 23,24 In a study by Lei et al. 
9 the median age of positive COVID-19 patients who underwent elective surgery was 55 years (IQR 43-63) and, as in our study, the majority of patients were women. In our study, 8.4% of surgical patients tested positive for SARS-CoV-2, 17% of whom did not report any respiratory symptoms. The asymptomatic proportion of patients is estimated to range between 18% and 57% in other cohorts. 4,25,26 In our study, 17.5% of positive cases died due to respiratory complications of COVID-19. Di Martino et al. 27 reported a similar percentage of positive cases (7%) in ambulatory surgery patients, but only a 1.4% mortality rate attributed to disease progression. This discrepancy may have arisen from the fact that their patients were not emergency surgery cases. The mortality rate in the study by Li 6 in thoracic surgery patients with SARS-CoV-2 was 30.8%, which is significantly higher than previously reported rates. During the pandemic, LUS became a frequent tool to identify disease severity and to facilitate screening of potentially infected patients, as described by other authors. 28,29 LUS was also used in 2009 for rapid point-of-care triage and management of patients during the H1N1 influenza virus outbreak. 30 For the detection of pneumonia, LUS can achieve a specificity of 75%-94% and a sensitivity of 85%-95%, 16,[31][32][33] and it is superior to X-ray and comparable to thoracic CT. 34 peripheral or subpleural consolidations. 16,40,41 The most frequent LUS findings in our study were B lines, followed by pleural abnormalities and, less frequently, subpleural consolidations; these pathological features have been previously described. 16,[42][43][44] Our results differ from those reported by Tung-Chen et al. 45 Some limitations need to be accounted for in this study. This was a single-center experience, which may lack inter-rater reliability. Multicenter studies are needed for a more precise sensitivity and specificity of LUS in surgery patients. Finally, the total sample size of the study as well as the SARS-CoV-2 positive subset of patients were small. LUS is an important tool in emergency settings-it is radiation free, time-saving, and has a broad availability. In a high prevalence setting of SARS-CoV-2 infection, a LUSS > 8 showed a high specificity, and B lines and subpleural consolidations had the best performance at identifying patients with pneumonia. However, LUSS cannot be recommended as a screening tool in surgical emergency patients, due to the low sensitivity of LUS.
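The operating characteristics reported for the LUS score cutoff of >8, and the sample-size planning described in the Methods, both follow from standard formulas. The sketch below implements them; the 2x2 counts are only an approximate back-calculation from the quoted percentages (17 RT-PCR-positive and 186 negative patients), not the study's raw data, and the planning inputs assume the stated 94% anticipated sensitivity, 10% margin of error, and a 95% CI (Z = 1.96).

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens,
            "specificity": spec,
            "LR+": sens / (1.0 - spec),
            "LR-": (1.0 - sens) / spec}

def buderer_n(sensitivity, prevalence, margin=0.10, z=1.96):
    """Buderer's sample-size formula for estimating sensitivity to +/- margin."""
    return z**2 * sensitivity * (1 - sensitivity) / (margin**2 * prevalence)

# Approximate counts for the LUS score > 8 cutoff, back-calculated from the
# reported 52.9 % sensitivity and 91 % specificity (hypothetical rounding).
print(diagnostic_metrics(tp=9, fp=17, tn=169, fn=8))

# Planning values from the Methods: anticipated sensitivity 94 %, 10 % margin.
print(round(buderer_n(0.94, prevalence=0.05)), round(buderer_n(0.94, prevalence=0.10)))
# -> 433 and 217 patients for assumed COVID-19 prevalences of 5 % and 10 %.
```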
2022-10-07T06:17:42.319Z
2022-10-06T00:00:00.000
{ "year": 2022, "sha1": "8e3ae6547e3fea6a84b6c875a22afaa038c5c9d8", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ScienceParsePlus", "pdf_hash": "87d320919e883c5be386854b7448fb8333f9858a", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
15190448
pes2o/s2orc
v3-fos-license
Bullet Fragment of the Lumbar Spine: The Decision Is More Important Than the Incision Study Design Case report. Objective Treatment of gunshot wounds to the spine is a topic of continued discussion and controversy. The following case study provides a description of a patient with a gunshot wound to the lumbar spine with a retained bullet in the intrathecal space. Methods Immediately after gunshot injury, a patient developed lumbar and radicular pain, as well as neurologic deficits. He was taken for surgery to remove the retained bullet. Results Following surgery, pain and neurologic function improved. The operative techniques and the postoperative clinical management are discussed in this report. Conclusion In our opinion, it was necessary to remove the bullet to avoid migration and possible worsening of neurologic function. However, surgical intervention is not appropriate in every case, and ultimately decisions should be based on patient presentation, symptomology, and imaging. Introduction Despite an increased incidence of studies analyzing bullet fragments lodged in the spinal canal, there is no clear consensus regarding the treatment of such injuries. In cases of spinal fracture secondary to a gunshot wound, the evidence suggests the fractures tend to be stable and not necessitate surgical intervention. 1 However, some researchers advocate that surgery is indicated due to the possibility of migration of the intrathecal fragment, particularly when in close proximity to the conus. 2 When surgery is elected, careful attention to the surrounding structures with regards to the location of the fragment is required, as well as a thorough analysis of the preexisting neurologic compromise and duration from the time of injury. In this report, we discuss a patient with a 4-week-old gunshot wound and retained bullet in the lumbar spine, including the technical nuances to consider for the procedure and subsequent care. Case Report A 30-year-old man presented to an outside emergency department with a penetrating gunshot wound to the abdomen. Upon examination, the patient had one gunshot wound to the left upper quadrant. A computed tomography (CT) scan of the abdomen verified the bullet had fractured the left L2 pedicle before becoming lodged in the spinal canal between L2 and L3. Neurologically, his only deficit was left lower extremity quadriceps weakness, 4/5 strength. He was taken for an urgent laparotomy for exploration of the peritoneum, with findings of a hemoperitoneum, six small bowel enterotomies, a left perinephric hematoma, a grade 2 renal laceration, as well as a laceration of the left psoas muscle. Postoperatively, the patient had urinary retention, which resolved within 2 days. The patient was directed to our institution 4 weeks after the injury for specialized neurosurgical care. The left quadriceps weakness was unchanged from the time of injury, but Keywords ► intrathecal foreign object ► bullet ► gunshot wound Abstract Study Design Case report. Objective Treatment of gunshot wounds to the spine is a topic of continued discussion and controversy. The following case study provides a description of a patient with a gunshot wound to the lumbar spine with a retained bullet in the intrathecal space. Methods Immediately after gunshot injury, a patient developed lumbar and radicular pain, as well as neurologic deficits. He was taken for surgery to remove the retained bullet. Results Following surgery, pain and neurologic function improved. 
The operative techniques and the postoperative clinical management are discussed in this report. Conclusion In our opinion, it was necessary to remove the bullet to avoid migration and possible worsening of neurologic function. However, surgical intervention is not appropriate in every case, and ultimately decisions should be based on patient presentation, symptomology, and imaging. the patient had developed subsequent paresthesia in the left foot, as well as bilateral lower extremity radiculopathy radiating down the lateral thighs and posterior calves (pain score of 8 to 9 on a scale of 0 to 10). A CT myelogram of the lumbar spine confirmed the unchanged location of the bullet in the spinal canal with high-grade blockage of the cerebrospinal fluid (CSF) above the level of the bullet (►Fig. 1). Operative Procedure The patient was taken to the operative theater for a lumbar laminectomy of L2-L3 and removal of the bullet. Although a fusion was not expected to be necessary, the required instrumentation was available if instability was noted intraoperatively. After induction of general anesthesia, the patient was placed in a prone position on a Jackson table. A standard L2-L3 laminectomy was performed, and a palpable, hard mass was discerned in the dural sac. Prior to opening the dura, a thorough inspection of the thecal sac was completed to evaluate the exact location of the bullet (i.e., intradural, extradural) and to assess if there was a sealed durotomy from the injury or whether a remaining open, leaking portion of the sac required further intervention and repair. Once concluded that the bullet was intrathecal, an 11-blade knife was used to open the dura at the midline. There was no release of CSF with the incision. A 9-mm bullet was found with its head facing the incision, completely encased in scar tissue, blocking the flow of CSF (►Fig. 2). It was deduced that the high amount of heat produced by the bullet entering the dural sac split the nerve roots and encased the bullet. A slow dissection was performed with caution to avoid any damage to the nerve roots. Microscopic magnification with extraction of 1 to 2 mm of dissection at a time was utilized for the release of the bullet from the scar tissue and dura. Excessive retraction was avoided to reduce risk of undue stress on the nerve roots. Ultimately, the bullet was successfully released from the surrounding tissue (►Fig. 3). Throughout the dissection and with wound closure, no CSF was appreciated. Multiple Valsalva maneuvers were administered to a pressure of 40 cmH 2 O with still no noticeable leak. The dura was closed in a watertight fashion with a 6-0 Prolene (Ethicon, Somerville, New Jersey, United States). A lumbar drain was inserted above the level of the laminectomy as a precaution to avoid an unrecognized fistula leaking at a later time. Postoperative Course Upon awaking from anesthesia, the patient had worsening of his left lower extremity weakness (3þ/5), but notable improvement in his lumbar and radicular symptoms (pain score of 4 on a scale of 0 to 10). The lumbar drain was kept in place for 72 hours with intermittent drainage of 10 to 15 mL of CSF hourly. It was clamped on the evening of postoperative day 3, and the patient was ambulated multiple times without evidence of a CSF leak (i.e., drainage from the wound, nausea/ vomiting, or headache). The drain was discontinued without complication on postoperative day 4. At discharge, his left lower extremity weakness was improved from his baseline at admission (5À/5). 
Discussion The management of an intradural bullet continues to remain controversial. [3][4][5] Proponents of the conservative theory support a nonsurgical approach with cautious measures involving pain management and rehabilitation, 6 although others recommend surgical intervention with the anticipation of a more rapid improvement in neurologic symptoms. 7 At the root level of the spine, it is our opinion that in several case instances, the removal of the foreign object will lead to better outcomes and carries a higher potential for regeneration of the axons of the injured nerve roots. 4,5 A thorough understanding of the regional anatomy is crucial when determining the appropriateness of surgery versus conservative measures; decisions should be made on a case-by-case basis. For the case presented, we felt the removal of the bullet from the canal was necessary to improve this patient's quality of life, as evidenced by decreased pain, as well as to avoid future complications related to the possible migration of the bullet. Although continued back pain secondary to arachnoiditis is possible, given the patient's young age, it was our opinion that preserving his neurologic function was of utmost importance. Ultimately, a careful dissection technique and detailed postoperative care are imperative to success with each case. Conclusion In our opinion, it is necessary to remove an intradural bullet or larger fragments to avoid migration and possible worsening of neurologic function. However, surgical intervention is not appropriate in every case, and ultimately decisions should be based on patient presentation, symptomology, and imaging. Editorial Perspective Sadly, the subject of gunshot wounds to the spine and the preferred form of management remain a relevant but commonly overlooked topic, in large part due to the variability of injury mechanisms (high versus low velocity; hollow tip versus penetrating missiles or shrapnel; metal composition and presence of wadding; concurrent concussive trauma or cavitation) and patient factors (region of spine hit; type of neural structures encountered; bacterial wound contamination; concurrent vascular/abdominal trauma, overall injury load, health of patient, among others), which make any attempt at generalization or formal protocol-driven study difficult. As a review of the references in the article by Moisi et al and the commentary by Schroeder show, many of the key articles date back to the1980s and 1990s. The question of surgical removal of bullets and other projectiles from the spinal column and decompression of neural elements as well as the question of when to perform a surgical reconstruction with an instrumented fusion remain a prime example of anecdotal medicine. As confirmed by Schroeder in his commentary, a few agreements have emerged over time: • Apply Advanced Trauma Life Support principles when approaching penetrating spine trauma. • Penetrating injuries to the cervical spine are probably best approached with a multispecialty concept under inclusion of interventional angiography. 1 • Penetrating spinal column trauma with concurrent viscous or esophageal contamination does not require surgical debridement of the spine to prevent infection; a course of appropriately selected intravenous antibiotics over a period of up to 2 weeks can suffice. 2 • Structural instability of the spinal column as a result of civilian-type low-velocity injures is rare in the thoracolumbar spine. 
• Patients with complete thoracic-level spinal cord injury have a nearly absent rate of recovery. 3 • Routine decompression of the spinal canal to clear it from smaller bony and metal fragments is not necessary. • Steroid use for spinal cord injury has not been shown to be beneficial. 4 • Long-term toxic effect of lead, copper, and other materials may emanate from bullet casings that are exposed to cerebrospinal fluid, disks, or joints ("plumbism"). 5 • Magnetic resonance imaging of a patient with retained bullet fragments requires clarification of the metal composition-alloy and copper is deemed safe, steel is not. 6 Beyond these agreements, many issues remain unresolved; the exact indications for surgical decompression and the role of reconstructive surgery in ballistic and penetrating trauma are prime examples of this lack of clarity. Based on empirical insights, one of the leading concerns in earlier publications about decompressing a spinal canal with penetrating trauma is not a real issue anymore; cerebrospinal fluid leaks are not major concerns in the more recent literature. Advances in dural reconstruction techniques may be one cause, but the exact reason has not been studied yet. Perhaps the evolving importance of databases and registries will help provide better insights toward guiding our specialty toward a more concise treatment algorithm. 7 The large numbers of armed conflicts and violent crime around the world necessitate our increased attention to this acutely relevant topic. 8
2016-05-12T22:15:10.714Z
2015-12-01T00:00:00.000
{ "year": 2015, "sha1": "978f248d516f4dfdc2657bcf33293777e7f2a1ef", "oa_license": "CCBYNCND", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1055/s-0035-1566231", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4717542f76d49534ab384d1115116272239a2878", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269999379
pes2o/s2orc
v3-fos-license
Comprehensive Numerical Evaluation of CO2 Huff-n-Puff in an Offshore Tight Oil Reservoir: A Case Study in Bohai Bay Area, China Many reports have presented that in tight formation, the flow mechanism differs from a conventional reservoir, such as molecular diffusion, Pre-Darcy flow behavior, and stress sensitivity. However, for CO2 Huff-n-Puff development, it is a challenge to synthetically research these mechanisms. Considering the above flow mechanisms and offshore engineering background, the development plan optimization becomes a key issue. In this paper, a self-developed simulator that satisfies research needs is introduced. Then, based on experimental results, the simulation is launched to analyze the effects of CO2 diffusion, Huff-n-Puff period, and permeability heterogeneity. The results indicate that molecular diffusion makes a positive contribution to the oil recovery factor. Additionally, for offshore reservoirs, limited to the development cost and CO2 facilities corrosion, when the total Huff-n-Puff time is constant, the ratio of 0.5–1.0 between the Huff period and the Puff period in every cycle performs better. Finally, the greater heterogeneity in permeability is much more favorable for the CO2 Huff-n-Puff because of more intensive transport processes in formation. These different scenarios can increase the understanding of the CO2 Huff-n-Puff in tight oil offshore reservoirs. INTRODUCTION In recent years, China's onshore tight oil reservoirs characterized by ultralow porosity (<10%) and low permeability (<0.1mD) have been effectively developed. 1With this basis, for offshore unconventional reservoirs, the economic development has been received increasing attention. 2,3ince the oil recovery factor in tight oil reservoir is typically below 10% in the primary production, 4 CO 2 injection has been considered as an enhanced oil recovery method to improve the performance in tight oil reservoirs.Usually, the waterflooding method is more economical; however, compared with gas injection, water injection is not suitable due to weak mobility, poor injectivity, and high viscosity.CO 2 has been regarded as a suitable injectant for enhanced oil recovery 5 because of the possibility to develop a miscible zone in formation and some of the benefits in CO 2 injection, such as reduction of oil viscosity and interfacial tension, improvements of relative permeability, and oil swelling. Currently, CO 2 Huff-n-Puff has been widely considered as an effective technique for tight oil reservoir development.Several simulation efforts have been done to examine deeply the mechanisms of CO 2 Huff-n-Puff.Based on the laboratory experiment of gas injection into fractured cores done by G.R. Darvish et al., 6 Z. Alkhansa and S. Zare Ghorbae 7 simulated these experimental characteristics by making a compositional model with a radical grid, which has a diameter of 46 mm and length of 60 cm, to evaluate the significance of molecular diffusion.The authors concluded that molecular diffusion was the dominant recovery mechanism since molecular diffusion contributed to about 70% of the oil recovery factor.Yu et al. 
8 built a conceptual reservoir model with four hydraulic fractures to explore the moderate value of the CO 2 diffusion coefficient in the CO 2 Huff-n-Puff process in the tight rock.The analyzed results showed that the CO 2 diffusion mechanism with the value of the CO 2 diffusion coefficient in the range of 0.0001 cm 2 /s to 0.01 cm 2 /s was more pronounced than the convention mechanism for the reservoir with lower permeability.Tang et al. 9 also selected the same value of CO 2 molecular diffusion coefficient proposed by Yu et al. 8 However, the confined fluid phase behavior calculation depending on capillary pressure in nanopores was not studied in their work.Nojabaei et al. 10 calculated the effect of capillarity in phase behavior in a compositionally extended black-oil model to improve the accuracy of phase-equilibrium calculation in nanopores.Due to the effect of capillary pressure on phase behavior, the oil recovery increased in the simulation.Zhang et al. 11 used a numerical reservoir simulation to evaluate the well performance of the CO 2 Huff-n-Puff process by including the mechanisms of CO 2 molecular diffusion and nanopore confinement.The authors concluded that because of the lower MMP of the CO 2 and reservoir fluid under the influence of the effect of capillary pressure, it is easier to reach miscibility between CO 2 and the fluid.Meanwhile, the CO 2 molecular diffusion and nanopore confinement play a significant role in improving the oil recovery factor.In a tight reservoir, the mechanism of stress-dependent deformation is critical to the final oil recovery factor.Combined with the experimental data of the permeability/stress relationship with tight rock in the middle Bakken formation, Y. Cho and E. Ozkan 12 presented appropriate correlations between pressure decline and the reduction of porosity and permeability.On account of the correlations shown by Kim et al., 13 applied three exponential correlations to calculate the stressdependent fracture conductivity for discussing the effect of stress-dependent deformation on the final oil recovery factor.This study concluded that since the effect of stress-dependent deformation impaired the fracture conductivity, the oil recovery factor was reduced by 2.9%−5.0%.The authors also pointed out that this effect should be correctly implemented in the numerical simulation model.A rock compaction table for Bakken shale was established by Nojabaei et al. 14 Based on the above work, Yan et al. 15 built a numerical stress-dependent tight formation with the permeability of 0.002 mD and the porosity of 6% to simulate the CO 2 injection process and performed the two-phase flash calculation with capillary pressure varied with interfacial tension and pore size distribution in CO 2 injection in nanopores.The authors discovered that in the period of pressure depletion, the effect of rock compaction decreased the formation permeability, offsetting the effect of oil mobility increase caused by capillary pressure.In a previous study, 16−18 researchers recognized the non-Darcy flow phenomenon in low permeability systems and believed that the main flow regime is the nonlinear flow in the tight or shale reservoirs.With the help of a self-designed microflux measuring instrument, Wang et al. 
19 carried out an experimental study on the flow characteristics at low velocity in low permeability formation.The authors concluded that in low permeability rock fluid flow is nonlinear at low velocity, and the essence of nonlinear percolation is the obvious boundary layer at low velocity.By fitting some of highly accurate available experimental data, Wang and Sheng 20 introduced a low velocity non-Darcy model with the corresponding parameter correlations for vertical wells and horizontal wells with multifractures in shale or tight reservoirs.The authors concluded that the well performance of vertical wells is more sensitive to the effect of non-Darcy than multifractured horizontal wells.In summary, many scholars have dedicated numerous studies to a single mechanism on tight oil development, such as gas diffusion, stress sensitivity, Pre-Darcy flow, and oil−gas phase calculation with high capillary pressure.However, few studies have been reported in the comprehensive effect on offshore reservoirs.The ignorance may cause a lower development performance of offshore reservoirs. To bridge this gap, a compositional simulator is developed to evaluate the CO 2 Huff-n-Puff performance.Specifically, the mechanisms of CO 2 molecular diffusion, the capillarity in phase behavior, stress-dependent deformation, and low-velocity non-Darcy flow could be comprehensively considered.Furthermore, based on the development background of offshore reservoirs, a series of sensitivity studies are performed to investigate the effects of the CO 2 diffusion coefficient, Huff-n-Puff period, and reservoir heterogeneity. MATERIALS AND METHODS 2.1.Assumptions.In this work, the mathematical simulation model is achieved under the following assumptions: (1) the stiffness matrix of the rock skeleton is isotropic, (2) gravity variance and all kinds of chemical reactions are neglected, and (3) the variation of the reservoir temperature is not considered. Mathematical Formulations. 2.2.1. Mass Conservation Equations.The governing equations are described by the mass conservation equation, in which component i in phase l has been used. The left-hand side of the equation is the accumulated term of component i at each simulation step.The first term on the righthand side is the flux and diffusion of component i in phase α.Q i is the sink or source term, which is the mass rate of component i exiting or entering phase α. ϕ is the porosity.ρ α , S α , and v α are mass density, saturation, and velocity of phase α, respectively.C iα and D i are the mass fraction and the diffusion coefficient of component i in phase α. 9,21 N c is the number of components, and N α is the number of phases.The S α and C iα are constrained by the following equations: Pre-Darcy Flow in the Oil Phase. In eq 1, for the oil phase, the velocity is calculated by the pre-Darcy flow model in this paper, 22 which is shown in eq 4. The flow model could characterize the dynamic effect of the oil mobility on the nonlinear flow degree, which performs better calculation accuracy compared with the classical Darcy flow model.i k j j j j j y where k is the rock permeability; k ro is the oil relative permeability; ∇ p is the pressure gradient; λ is the non-Darcy parameter; and a and b are coefficients which can be confirmed by matching experimental data. Phase Equilibrium Calculation. 
In tight formations, the rock pore size is much smaller, leading to larger capillary pressure between the oil−gas phase interface.Therefore, in the phase equilibrium calculation, the effect of capillary pressure on fluid properties is considered. 23To implement the vapor−liquid flash calculation with capillary pressure, in this paper the fugacity equation is applied as follows: 15 where f i L is the component fugacity in liquid phase; f i V is the component fugacity in gas phase; and p L and p V are the oil phase pressure and gas phase pressure, respectively.x i is the component molar fraction in liquid and y i is the component molar fraction in liquid; i L is the dimensionless fugacity coefficient in liquid phase; and i V is the dimensionless fugacity coefficient in gas phase.Oil-gas capillary pressure p c is calculated in eq 9 as follows: and where go is the interface tension (IFT) between the gas and oil phase.L and V are densities of the bulk liquid and vapor phases, respectively.Li and Vi are the parachors of component i in oil and gas fluid, which can be found from the work of Reid. 24 is the scaling exponent.In this paper, = 4.0 is adopted. Stress Sensitivity Effect. During the oil production from a tight porous matrix, the effect of stress sensitivity is substantial because of the apparent decrease of pore pressure.The relation between dynamic pressure and rock permeability is described as 12 where k i and p i are the permeability and pressure in the initial condition, and k p and p are the dynamic permeability and pressure in the reservoir.Therefore, the PEBI grid system was applied in this work to more carefully describe the near areas of fractures and the horizontal wellbore.Actually, by linking vertices of PEBI grid surfaces, 2.5D PEBI grids with dimensions of 400 m × 150 m × 3 m corresponding to length, width, and height, respectively, are widely applied in reservoir simulations.The 2.5D PEBI grids for the different cases are generated for this work (Figure 1).No flow boundary condition is required in this model.One horizontal well with a lateral length of 80 m and 4 hydraulic fractures is incorporated in the model.The half-length of the hydraulic fracture is 50 m.To keep computation accuracy and exactly capture the fluid transport behavior from the tight matrix to fracture, the local grid refinement measure of arranging PEBI grids is taken, which is denser around wells and looser far away from wells, as shown in Figure 2. 3.1.2.Permeability Heterogeneity.To consider the heterogeneity in the tight matrix, we applied the geostatistical method to realize the stochastic permeability.Based on the mean and variances of observed static data, which represent the anisotropy of the model in different dimensions, this approach establishes a relationship between data in space for the better presentation of the natural variability of permeability.In this model, the spherical variogram with the nugget value of 0.00018 is used to generate heterogeneity and discontinuity in the reservoir, as shown in Figure 3.The average permeability of the matrix in this model remains 0.01 mD unchanged, which is based on the studied offshore oil block. Based on eq 10, an actual fracture with a width of 0.001524 m and a permeability of 20000 mD is considered as a 2 m wide fracture with a permeability of 15.24 mD.Other parameters related to the model are summarized in Table 1. Fluid Properties. 
In this work, the fluid sample is taken from the offshore reservoir A in the Bohai Bay Area, China.The 2 and the constant composition expansion (CCE) experimental results are shown in Figure 5.The oil viscosity and density at the initial formation conditions are 0.3291 mPa•s and 0.8205 g/cm 3 , respectively.The bubble point pressure is 28.509MPa.Based on experimental datum fitting, the phase envelope of the studied fluid is shown in Figure 6. The oil−gas relative permeability and oil−water relative permeability are measured by the JBN method in the hightemperature and high-pressure relative permeability experiment system (Figure 7).The curves at the reservoir condition are presented in Figure 8. Simulator Validation. To validate the developed simulator, a case with the PEBI grid is modeled, in which the basic parameters are listed in Table 1.The above fluid properties of the reservoir A are used.In this case, one CO 2 Huff-n-Puff cycle is designed after the depletion process.Two groups of simulations are performed with this simulator and the commercial simulator.The developed simulator results are indicated with solid lines and the commercial simulator results with data points.From the results in Figure 9, it can be observed that the oil production rates of the two simulators are basically the same.The simulator is compiled by C++, and the finite difference method is used for numerical discrete calculation.Also, the other detailed validation information has been published in the previous research. 26 Effect of CO 2 Diffusion Coefficient. The diffusion effect is an important consideration in the gas injection development of unconventional reservoirs.The diffusion intensity is characterized by the diffusion coefficient.Since the diffusion transfer condition is high-temperature and highpressure, the diffusion coefficient could not be directly identified by experimental test.Meanwhile, the pressure from convective mass transfer also has an impact on the diffusion behavior.Therefore, numerical simulation is an effective way to research the diffusion coefficient.Based on the effective simulation achievement of the CO 2 diffusion coefficient, 0.001 cm 2 /s, 8 in this work the range of CO 2 diffusion coefficient is set as 0.0001− 0.01 cm 2 /s to study the diffusion effect. Figure 10 and Figure 11 show the comparison of the CO 2 mole fraction distribution in the CO 2 Huff-n-Puff process with and without considering CO 2 diffusion.Because of the low matrix permeability, the injected CO 2 is mainly contained nearby the hydraulic fractures.However, when considering the CO 2 diffusion mechanism in the simulation model, CO 2 can diffuse into a tight matrix and mix with the liquid phase to improve oil mobility, resulting in incremental produced oil. The CO 2 diffusion coefficient has been explored as a major critical parameter for the CO 2 Huff-n-Puff process.The result of effects of the CO 2 diffusion coefficient in the CO 2 Huff-n-Puff process is shown in Figure 12.As shown in Figure 12, the final oil recovery factors are 13.8%, 12.6%, and 12.2% for the CO 2 diffusion coefficients of 0.01 cm 2 /s, 0.001 cm 2 /s, and 0.0001 cm 2 /s, respectively, illustrating that the CO 2 diffusion mechanism acts as a pivotal part in the CO 2 Huff-n-Puff process. Effect of the Huff-n-Puff Period. Although CO 2 is an efficient injection medium, for offshore reservoirs, there are two disadvantages in CO 2 injection operation: (1) CO 2 source shortage and (2) facilities corrosion. 
27 Hence, further investigation of the CO 2 Huff-n-Puff period is necessary to increase the CO 2 injection efficiency. In the CO 2 Huff-n-Puff process, when the injection period (or the production period) in one cycle is fixed at a moderate value, such as 50 days, choosing a production period (or injection period) that is either too short or too long is not optimal for the final oil recovery. Meanwhile, the exact length of each CO 2 Huff-n-Puff period depends on the number of cycles, because the total Huff-n-Puff time of the CO 2 injection process is kept unchanged. Therefore, the interdependence among three variables (the injection period and the production period in every cycle, and the number of cycles) is the key factor governing the CO 2 Huff-n-Puff process. In order to discuss the relationship between the injection and production periods clearly and conveniently, the variable R t , defined as the ratio of the production period to the injection period in every cycle, is used in this work. Table 3 and Table 4 show the detailed schedules of the number of cycles and the CO 2 Huff-n-Puff periods for these 56 cases. The primary production period and the total simulation time remain unchanged at 365 days and 1100 days, respectively. Moreover, the CO 2 soaking time is fixed at 5 days. For example, Figure 13 illustrates the detailed time schedule based on 4 Huff-n-Puff cycles in the CO 2 -EOR process. Figure 14, which consolidates the data from Table 3 and Table 4, shows a 3D scatter plot of the final CO 2 Huff-n-Puff oil recovery as a function of the ratio of the production period to the injection period in every cycle and the number of cycles. A global maximum of the final oil recovery, 22.7%, is located at the upper-right corner of the (number of cycles, R t ) domain, corresponding to an R t value of 0.5 and 16 Huff-n-Puff cycles. In this work, the "final global maximum" is defined only over the studied ranges of the input variables.
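The schedule bookkeeping behind Tables 3 and 4 can be reproduced directly from the stated constraints: 365 days of primary depletion, an 1100-day total simulation time (hence 735 days of CO 2 Huff-n-Puff), a 5-day soak in every cycle, and the ratio R t of the production period to the injection period. The Python sketch below derives the per-cycle injection and production lengths for any (R t , number of cycles) pair; the exact entries of the paper's tables may be rounded differently, so this illustrates the scheduling logic rather than reproducing those tables.

def hnp_schedule(r_t, n_cycles, total_days=1100, primary_days=365, soak_days=5):
    """Per-cycle injection/production lengths for a fixed-total Huff-n-Puff plan.

    Each cycle is (injection + soak + production) and r_t = production/injection.
    """
    eor_days = total_days - primary_days              # 735 days of CO2 Huff-n-Puff
    per_cycle = eor_days / n_cycles
    t_inj = (per_cycle - soak_days) / (1.0 + r_t)
    t_prod = r_t * t_inj
    return t_inj, t_prod

for r_t in (0.2, 0.5, 1.0, 2.0, 3.0):
    for n in (4, 8, 16):
        t_inj, t_prod = hnp_schedule(r_t, n)
        print(f"R_t={r_t:3.1f}, cycles={n:2d}: inject {t_inj:5.1f} d, "
              f"soak 5 d, produce {t_prod:5.1f} d per cycle")

Under these constraints, the 16-cycle case with R t = 1.0 works out to roughly 20.5 days of injection and 20.5 days of production in every cycle.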
Figure 15 illustrates the nonlinear dependence of R t and the final oil recovery factor.First, it is clearly observed that no matter how many cycles are applied in the CO 2 Huff-n-Puff process, the R t value of 0.5 or 1.0 always leads to the highest final oil recovery.This is because the injection time and production time in every CO 2 Huff-n-Puff cycle have a competing relationship.Under the condition of the same number of cycles, as R t decreases from 3.0 and 1.0, the length of the CO 2 injection period in every cycle is increasing.Therefore, more CO 2 is injected to swell the oil and reduce the interfacial tension, resulting in the increase of final oil recovery.While in the areas with R t of 0.2 and 0.5, the longer injection period leads to the decease of the injection efficiency and CO 2 utilization (i.e., injected CO 2 amount per barrel of oil produced 28 ).Second, no matter how many cycles are used, based on the same R t value, the total injection and production time is the same in the CO 2 -EOR process; meanwhile the results indicate that the more CO 2 Huff-n-Puff cycles result in a higher oil recovery factor.On one hand, more Huff-n-Puff cycles adequately complement the formation energy effectively to take advantage of the high initial production rate.Because of the low permeability of tight formation, the oil production rate of a reopened well rises to a maximum value and declines dramatically.On the other hand, since the injected CO 2 is mainly contained near the hydraulic fractures and horizontal well, a too long injection period in fewer cycles excessively consumes the injected pressure gradient to make CO 2 inefficiently diffuse into a tight matrix.More Huff-n-Puff cycles provide the injected CO 2 with more opportunities to improve the mobility of tight oil.Third, it can be seen clearly that a clear additional oil recovery factor in the 16-cycle operation is 0.21% or 0.17%, compared to the oil recovery of the 18-cycle operation when the value of R t is 0.5 or 1.0.This is because more numbers of cycles lead to shorter lengths of CO 2 injection and oil production time in every cycle.The timing of Huff-n-Puff is too tight to adequately produce oil in the horizontal well.Specifically, when the well holds the acceptable oil production rate, it has to change the production situation to inject CO 2 .In addition, compared to the 16-cycle case with R t of 1.0, the longer total time of injection in the 16-cycle case with R t of 0.5 leads to higher CO 2 utilization, which results in more capital consumption.Hence, based on the consideration with the balance between capacity and final oil recovery, the economically moderate ratio of the production period to injection period is 1.0, and the optimum number of cycles of the operation should be 16 in this work. Effect of Reservoir Heterogeneity. A geostatistical approach that generates permeability of different heterogeneity levels is applied to evaluate the tight formation heterogeneity effect.Based on the mean and variances of observed static data, which represent the anisotropy of the simulation model in different dimensions, the stochastic calculation method builds the relationship between data in space.For the description of the natural uncertain property of the reservoir, this approach can better guarantee the quality.In order to achieve heterogeneity, the spherical variogram type is set up in the geological model of the base case.Specifically, the larger nugget value leads to more discontinuity and more heterogeneity. 
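The geostatistical construction just described can be sketched with a small unconditional Gaussian simulation in Python: assemble a spherical covariance with a prescribed nugget on a coarse grid, draw a correlated Gaussian field by Cholesky factorization, and map it to a log-normal permeability field rescaled to the 0.01 mD mean. The grid resolution, correlation length, sill, and log-normal spread below are placeholders (the nugget is set near the base-case value quoted earlier), so the realization is only indicative of the workflow behind Figure 3 and Figure 16.

import numpy as np

def spherical_cov(h, sill=1.0, nugget=0.0002, length=60.0):
    """Spherical covariance: C(0) = sill; C(h) = (sill - nugget) * sph(h/length)."""
    c = np.where(
        h < length,
        (sill - nugget) * (1.0 - 1.5 * h / length + 0.5 * (h / length) ** 3),
        0.0,
    )
    return np.where(h == 0.0, sill, c)

# Coarse 40 x 15 grid over the 400 m x 150 m model area (placeholder resolution)
nx, ny, dx, dy = 40, 15, 10.0, 10.0
xs, ys = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dy, indexing="ij")
pts = np.column_stack([xs.ravel(), ys.ravel()])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

cov = spherical_cov(dist) + 1e-8 * np.eye(len(pts))   # small jitter for Cholesky
L = np.linalg.cholesky(cov)
rng = np.random.default_rng(0)
gauss = (L @ rng.standard_normal(len(pts))).reshape(nx, ny)

sigma_logk = 0.8                                       # assumed log-normal spread
perm = np.exp(sigma_logk * gauss)
perm *= 0.01 / perm.mean()                             # rescale to the 0.01 mD mean
print(f"mean = {perm.mean():.4f} mD, min = {perm.min():.4f}, max = {perm.max():.4f}")

Increasing the nugget in this construction adds uncorrelated variability between neighboring cells, which is the discontinuity effect exploited in the heterogeneity comparison above.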
In this paper, Figure 16 illustrates respectively the minimum heterogeneity (base case in Section 3.1.2),the medium heterogeneity, and the maximum heterogeneity of permeability in three cases caused by three different nugget values, 0.00012, 0.21, and 0.53.The average permeability of these three cases keeps the same unchanged value as the base case of 0.01 mD.Based on the discussion of the relationship of injection and production period and the optimized number of cycles, the detailed time schedule is the same as the 16-cycle case with the R t value of 1.0 in the previous section. The comparison of the oil recovery factor of 3 cases with different heterogeneity levels is shown in Figure 17.As Figure 17 shows, the final oil recovery factors at 1100 days of tight oil production are 21.63%, 21.96%, and 22.72% for the minimum heterogeneity, medium heterogeneity, and maximum heterogeneity, respectively.It can be clearly seen that the more heterogeneous permeability results in the higher oil recovery factor; meanwhile, the result illustrates that compared to the process of primary production, the heterogeneous effect in tight formation is more favorable for the CO 2 Huff-n-Puff operation.The main reasons include that in the geological model of more heterogeneous permeability, lower permeability portions are more dispersed in the model area, resulting in higher and more dispersed residual oil saturation.In addition, the more dispersed lower permeability portion is more convenient for the interlacing of higher permeability and lower permeability areas, leading to the improvement in the mobility of tight oil near the horizontal well.Moreover, the more intense interlacing of different permeability portions is more conducive to the diffusion of injected CO 2 in lower permeability areas.This again demonstrates that the transport processes in the tight geological formations depend closely on the structure of heterogeneity. 29 CONCLUSIONS Based on the developed compositional simulator with multiple physical mechanisms, a numerical model for the offshore A tight oil reservoir is built to study the CO 2 Huff-n-Puff scenario.A series of cases with CO 2 diffusion coefficient, Huff-n-Puff periods, and reservoir heterogeneity are studied.The following conclusions can be drawn from this work: (1) CO 2 diffusion has a positive effect on tight oil recovery in convective-diffusion mass transfer.(2) When the total time of the CO 2 Huff-n-Puff process is constant, the CO 2 Huff-n-Puff scenario with the ratio of production period to injection period in every cycle, R t of 0.5 or 1.0, performs better than the same operations with R t of any other values.(3) An increase in the number of Huff-n-Puff cycles leads to a higher final oil recovery factor; meanwhile, limited by the cost of development and the utilization of injected CO 2 , for the 735-day CO 2 -EOR process in the offshore reservoir A, the optimal R t value and the number of Huffn-Puff cycles are 1.0 and 16, respectively.(4) The more heterogeneity in permeability is much more favorable for the CO 2 Huff-n-Puff process because of more intensive transport processes in tight heterogeneous formation. Figure 1 . Figure 1.2.5D PEBI grid for the different cases. Figure 2 . Figure 2. PEBI grid in the case with one horizontal well. Figure 3 . Figure 3. Reservoir heterogeneity in the tight matrix with a PEBI grid. 1 . 
k p and p are updated in every time step and Newton step of the evolution. ε is the rock sensitivity coefficient, obtained by fitting experimental data. p D and k D are user-defined dimensionless parameters.

3.1. Model Grid. 3.1.1. PEBI Grid of the Model. Gridding is a basic part of a numerical reservoir simulation. Structured Cartesian grids and corner-point grids are widely used in simulation studies of tight oil reservoirs. However, compared with these grids, unstructured PEBI grids have unique advantages. 25

Figure 5. CO 2 -injection relative volume at different pressure levels.
Figure 6. P−T phase envelope dynamics with different CO 2 mole fractions.
Figure 7. HTHP (high temperature and high pressure) relative permeability experiment system.
Figure 8. Relative permeability curves for the studied reservoir. (a) Oil−water relative permeability curve. (b) Oil−gas relative permeability curve.
Figure 9. Simulator validation result for the oil rate calculation.
Figure 10. CO 2 gas mole fraction distribution without considering CO 2 diffusion.
Figure 11. CO 2 gas mole fraction distribution considering CO 2 diffusion.
Figure 12. Effect of the CO 2 diffusion coefficient on the comparison of oil recovery.
Figure 13. Detailed time schedule based on the 4 Huff-n-Puff cycles with different R t values (the red bar represents CO 2 soaking).
Figure 14. 3D scatter plot of CO 2 Huff-n-Puff oil recovery factors for the 56 combinations of R t and number of cycles.
Figure 15. 2D line plot of final Huff-n-Puff oil recovery.
Table 1. Basic Reservoir and Fracture Properties for the Simulation Study.
Table 2. Mole Fractions of Components of Offshore Reservoir A.
Table 3. Detailed Schedule of the Injection Period in 56 Cases (Primary Depletion = 365 days).
Table 4. Detailed Schedule of the Production Period in 56 Cases (Primary Depletion = 365 days).
2024-05-25T15:03:20.451Z
2024-05-23T00:00:00.000
{ "year": 2024, "sha1": "ad8317779c34c21f44b61d3c7acae366123eb0c3", "oa_license": "CCBYNCND", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.4c01907", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e36803b0c6ea916a88847887ed2149cabac26f79", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
103918623
pes2o/s2orc
v3-fos-license
Screening Effects of Methanol Extracts of The Diplotaxis tenuifolia and Reseda lutea on The Enzymatic Antioxidant Defense Systems and Aldose Reductase Activity Objectives: The aim of the study was to investigate the effects of methanol extracts from the flowers and leaves of Diplotaxis tenuifolia and Reseda lutea on the activity of AR, CAT, GST, and GPx. Materials and Methods: Total phenolic and flavonoid contents of the plant samples were evaluated using Folin-Ciocalteu reagent and aluminum chloride colorimetric methods. Also, the effects of extracts on CAT, GST, GPx, and AR enzyme activities were investigated using kinetic assays. Results: The highest phenolic and flavonoid contents were detected in the methanol extract of D. tenuifolia leaves with 144.49±0.29 mg gallic acid equivalent/L and 250.485±0.002 quercetin equivalent/L, respectively. The best activity profile for GST and GPx were observed in the extract of leaves belonging to D. tenuifolia with IC50 values of 121±0.05 and 140±0.001 ng/mL, respectively. According to the results, methanol extracts from leaves of R. lutea and D. tenuifolia showed no significant activity potential on AR. Moreover, none of the studied extracts demonstrated any reasonable CAT activation potential. Conclusion: The results indicated that leaves of D. tenuifolia had good effect on the antioxidant enzymatic defense system, which it makes it a good constituent of the daily diet. INTRODUCTION Reactive oxygen species (ROS) is a term used to describe a number of reactive molecules and free radicals derived from molecular oxygen, which are generated by all aerobic species. These molecules are generated as by-products during the mitochondrial electron transport of aerobic respiration or by oxidoreductase enzymes and metal catalyzed oxidation. In normal physiologic conditions, a number of defense mechanisms have evolved to provide a balance between the production and removal of ROS, but alterations of the balance between ROS production and the capacity to detoxify reactive intermediates lead to oxidative stress. It has been caused to a wide variety of states, processess and metabolic diseases such as heart disease, severe neural disorders such as Alzheimer's and Parkinson's, and some cancers. 1,2 Under oxidative stress, an organism has a variety of defense mechanisms to prevent or neutralize negative ROS effects. These are mainly based on enzymes such as catalase (CAT), superoxide dismutase (SOD), glutathione peroxidase (GPx), glutathione reductase (GR), and glutathione-S-transferase (GST) or non-enzymatic components such as vitamin E, vitamin C, glutathione, and flavonoids. 3 GST is one of the phase II enzymes and plays a critical role in the detoxification and metabolism of many xenobiotic compounds. 4 GPx has an important role as a catalyst in the reduction of hydro peroxides, including hydrogen peroxides (H 2 O 2 ), by using GSH. GPx also functions to protect the cell from oxidative damage. Several studies related dysfunctional GPx with cancer. 5 CAT is a very important enzyme of living organisms, which catalyzes the decomposition of H 2 O 2 to water and oxygen. Aldose reductase (AR) is a nicotinamide adenine dinucleotide phosphate (NADPH)-dependent enzyme and it has been implicated in the formation of cancer and diabetic complications such as retinopathy, neuropathy, nephropathy, and cardiovascular disorders. 6 Plants synthesize a vast range of organic compounds that are traditionally classified as primary and secondary metabolites. 
Primary metabolites are compounds that have essential roles associated with photosynthesis, respiration, growth, and development. Other phytochemicals that accumulate in high concentrations in some species are known as secondary metabolites, which possess antioxidant activity. Antioxidant compounds found in different parts of plants involve phenolics, flavonoids, alkaloids, glycosides, tocopherols, carotenoids, and ascorbic acid. These are structurally diverse and many are distributed among a very limited number of species within the plant kingdom. 7 Secondary metabolite compounds have played an important role in treating and preventing human diseases. They are important sources for new drugs and are also suitable lead compounds for further modification during drug development. 4 Diplotaxis tenuifolia (L.) DC., commonly known as 'wild rocket', belongs to the Brassicaceae family. It was originally found as a crop in Mediterranean and Middle Eastern countries and became popular largely due its pungent aromas and tastes. 8 In Turkish folk medicine, D. tenuifolia is known as "Yabani Roka" and wildly distributed in North and West parts of Turkey. Phytochemical studies show that the aerial parts of D. tenuifolia contain significantly high concentration of flavonoids, tannins, glucosinolates, sterols, and vitamin C. 9 The genus Reseda is one of the herbs in the Resedaceae family. In Turkey, this genus is represented by 15 species including Reseda lutea L. and Reseda luteola L. It is known as yellow mignonette or wild mignonette and has economic importance. It is widely used in the carpet and rug industry as a source of natural dye due to its high luteolin content. In addition to its staining properties, luteolin has attracted great scientific interest because of its pharmacologic activities. Luteolin displays numerous anti-inflammatory effects at micromolar concentrations, which cannot be completely explained by its antioxidant capacities. In addition, phytochemical analysis of aerial parts of R. lutea has shown the presence of flavonoid, anthocyanin, and glucosides. 10 The aim of the present study was to evaluate the total amount of the phenolic and flavonoid contents of methanol extract obtained from the flowers and leaves of D. tenuifolia and R. lutea and to determine their effects on the activity of AR, CAT, GST, and GPx. These enzymes play critical roles in the antioxidant defense system. Plant materials Plant samples of D. tenuifolia and R. lutea were harvested in July 2010 from Ankara, Turkey, and were authenticated by Prof. Dr. Fatmagül Geven, in the Department of Biology, Ankara University. The plant specimens with their localities and the necessary field records were recorded and numerated as voucher specimen numbers. The voucher numbers of D. tenuifolia and R. lutea were FG-2010-10 and FG-2010-13, respectively. They were deposited in the herbarium department at Ankara University. Extraction of plant Different parts of fresh plant samples (flowers and leaves) were washed with tap water and dried at room temperature before analysis. For methanol extraction, 2 g of dried samples were weighed and ground into a fine powder with liquid nitrogen, then mixed with 20 mL methyl alcohol at room temperature in 160 rpm for 24 h. The obtained extract was filtered over Whatman No. 1 paper and the filtrate was collected. Methanol was then removed using a rotary evaporator at 40°C to obtain a dry extract. 
The obtained product was dissolved in DMSO and kept in the dark (4°C) to be prevent oxidative damage until analysis. 11 Total phenolics determination The total phenolic content of the plant extracts was determined using the method of Slinkard and Singleton. 12 Each plant extract solution (0.1 mL) was mixed with 2 mL of a 2% (w/v) sodium carbonate solution and vortexed strongly. After 5 min, 0.1 mL of 50% Folin-Ciocalteu's reagent (w/v) was added and vortexed, then incubated for 1 hr at room temperature. Afterwards, the absorbance of each mixture was measured at 750 nm using an ultraviolet (UV) spectrophotometer (HP 8453 A, USA). Results were evaluated using 50, 100, 200 and 400 mg/L of GA as a standard curve and recorded as milligrams (mg) GA equivalent/L of extract. Total flavonoid determination The total concentration of flavonoids in the extracts was determined using aluminum chloride colorimetry, which was previously described 13 ; 0.1 mL of each plant extract was separately mixed with 0.15 mL of 95% ethanol, 0.01 mL of 10% aluminum chloride, 0.01 mL of 1 M sodium acetate, and 0.25 mL of DMSO. The mixture was incubated at room temperature for 30 min and the absorbance of the reaction was measured at 415 nm with the UV spectrophotometer (HP 8453 A, USA). A standard curve was calculated by preparing quercetin solutions at different concentrations for 25, 50, 100, 150, and 200 mg/L. The total flavonoid content of the extract was expressed as milligrams (mg) quercetin equivalent/L of extract. Isolation of cytosol from bovine liver Bovine liver was obtained from a slaughterhouse in Kazan, Ankara, Turkey. The liver samples were homogenized in 10 mM potassium phosphate buffer (pH 7.0), containing 0.15 M KCl, 1.0 mM EDTA, and 1.0 mM of DTT, using a glass Teflon homogenizer and then centrifuged at 10.000 g for 20 min. The supernatant was filtered through cheesecloth and the filtrate was centrifuged at 30.000 g for 60 min. The collected supernatants were filtered again and the resultant filtrate was considered as cytosol. 14 The prepared homogenates, containing 46.41 mg protein/mL, were kept in ultra-low freezer (-80°C) for future use. The total protein content was determined using the Lowry method. 15 Isolation of aldose reductase from bovine liver Bovine liver was obtained from a slaughterhouse in Kazan, Ankara, Turkey. The liver samples were cut into small pieces and washed with 1.0 mM EDTA. It was then weighed and homogenized with threefold 1.0 mM EDTA 50 μM PMSF and centrifuged at +4°C, 10.000 rpm for 30 min. To obtain a 40% saturation, 22.6 g ammonium sulfate was added to every 100 mL supernatant solution and mixed for 5 min on a magnetic stirrer and then centrifuged at +4°C, 10.000 rpm for 25 min. To obtain 50% and 75% saturations, the previous method was repeated adding 5.8 g and 15.9 g of ammonium sulfate to the 100 mL supernatant solution, respectively. The obtained pellets were dissolved with 50 mM sodium chloride and kept in a deepfreeze at -80°C. 16 Assay of glutathione-S-transferase GSTs activity was determined against the substrate 1-chloro-2, 4-dinitrobenzene (CDNB), by monitoring thioether formation at 340 nm. 17 Briefly described, the assay mixture containing plant extracts solution (final concentration in the range of 7-476 ng/mL), 200 mM potassium phosphate buffer (pH 6.5) with 50 mM CDNB and 3.2 mM GSH, and bovine liver cytosolic fractions (0.782 mg protein/mL) was prepared and used as the enzyme source to measure GST activity. 
GSH-CDNB conjugate formation was followed in a 250-μL total volume assay using a multimode microplate reader (Specra Max M2e, USA) at 340 nm for 240 seconds. The initial rates of enzymatic reactions were determined as nanomoles of the conjugation product of GSH and reported as nmol/min/mg protein. Assay of aldose reductase AR activity was determined against the substrate, DL-Glyceraldehyde, by monitoring the oxidation of NADPH to NADP + at 340 nm. 18 In brief, the assay mixture consisting of plant extract (5 μL) solution (final concentration in the range of 7-476 ng/mL), AR (4.54 mg/mL) Li 2 SO 4 (320 mM-400 mM), NADPH (9×10 -5 M) KP buffer (50 mM, pH 6.2), DL-GA (6×10 -4 M) was prepared and used as the enzyme source to measure AR activity. NADP + oxidation was followed in 0.25 mL total volume assay using a multimode microplate reader at 340 nm for 4 min. The initial rates of enzymatic reactions were determined and reported as nmol/min/mg protein. Assay of glutathione peroxidase GPx activity was measured using a previously reported method. 19,20 Also, GPx activity was measured against the substrate, tertiary butyl hydro-peroxide (t-BuOOH), and the decrease in NADPH was monitored at 340 nm. GPx activity changes were measured using purified GPx (37.5×10 -3 U/mL) and plant extracts (7-476 ng/mL) or control (DMSO alone), with 2.0 mM GSH, 0.25 mM NADPH, GSH-reductase (GR, 0.5 unit/mL) and 0.3 mM t-BuOOH, in 50 mM Tris-HCl (pH=8.0). The reaction was initiated by adding GPx and the change in absorbance was recorded at 340 nm for 5 min using a multimode microplate reader. Assay of catalase CAT inhibition was determined by monitoring a red quinoneimine dye remaining H 2 O 2 . 21,22 The assay was miniaturized for microplate application and contained plant extraction solutions with a final concentration in the range of 7-476 ng/mL, 50 mM phosphate buffer (pH 7.0), 20 U/mL purified bovine liver CAT, and 0.0961 mM H 2 O 2 . The reaction was stopped using NaN 3 and incubated at room temperature for 5 min, followed by incubation with chromogen at room temperature for 40 min and then the absorbance was read at 520 nm. The enzyme activity was calculated with respect to the H 2 O 2 remnant, which was determined using a calibration curve constructed in the range of 9.61-307.6 μM H 2 O 2 . Data analysis The data analysis was performed using the Graphpad Prism 6.0 software. The activity of extracts against enzyme targets was calculated as 50% inhibitory concentration (IC 50 ) values obtained from dose-response curves. The enzyme calibration and the dose-response curve construction were accomplished using 2-3 independent experiments, each in duplicate or triplicate using a multimode microplate reader, in 96-well microplates. RESULTS Each extract was prepared by dissolving 2 g of dry samples in 20 mL of methanol. The extraction yields for D. tenuifolia leaf samples was 13.02%, and 10.15% and 6.02% for R. lutea flower and leaf samples, respectively ( Table 1). The total phenolic contents of extracts were determined by using Folin-Ciocalteu's method. Additionally, the total amount of flavonoids in extracts were determined using aluminum chloride colorimetry. According to the results, the methanol extract of D. tenuifolia leaves has a high amount of total phenolic and flavonoid contents. The results of total phenolic and flavonoid contents of the methanol extracts of the plant samples are listed in Table 1. 
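The two routine calculations behind Tables 1 and 2, converting absorbance readings to gallic acid or quercetin equivalents through a linear standard curve and extracting IC 50 values from dose-response data, can be sketched in Python as follows. The absorbance values and activity percentages below are invented placeholders used only to show the arithmetic; the actual standards were 50-400 mg/L gallic acid and 25-200 mg/L quercetin, and the extracts were tested at 7-476 ng/mL. A four-parameter logistic curve is assumed for the dose-response fit, which is a conventional choice but not necessarily the exact model applied in GraphPad Prism here.

import numpy as np
from scipy.optimize import curve_fit

# --- Standard curve: absorbance at 750 nm vs gallic acid concentration (mg/L) ---
ga_std = np.array([50.0, 100.0, 200.0, 400.0])            # standards used in the assay
abs_std = np.array([0.11, 0.21, 0.43, 0.84])               # placeholder absorbances
slope, intercept = np.polyfit(ga_std, abs_std, 1)

sample_abs = 0.31                                          # placeholder extract reading
gae = (sample_abs - intercept) / slope
print(f"Total phenolics ~ {gae:.1f} mg GAE/L of extract")

# --- IC50 from a dose-response curve (four-parameter logistic, assumed model) ---
def four_pl(c, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

conc = np.array([7.0, 15.0, 30.0, 60.0, 120.0, 240.0, 476.0])    # ng/mL
activity = np.array([95.0, 90.0, 78.0, 60.0, 42.0, 28.0, 18.0])   # % of control, placeholder

params, _ = curve_fit(four_pl, conc, activity, p0=[10.0, 100.0, 100.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {params[2]:.0f} ng/mL (Hill slope {params[3]:.2f})")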
The activation percent profile of GST, GPx, CAT, and AR enzymes and IC 50 values of the methanol extracts of plant samples are presented in Table 2. GST activity was determined against the substrate, CDNB, by monitoring the thioether formation at 340 nm. In order to calculate the percentage of GST activity and IC 50 values, the utilized final concentration of plant extracts in the assay was taken between 7-476 ng/mL. According to the results, which are presented in Table 2, the best activity effect was exhibited in the crude methanol extract of D. tenuifolia leaves with IC 50 value of 121±0.05 ng/mL. The activity of GPx was determined as the amount of enzyme that converted 1 μM of NADPH per min in 1 mL which is expressed as U/mg of total protein. The final concentration of plant extracts within concentration range of 7-476 ng/mL were used in the assay to calculate the percentage of GPx activity and IC 50 values. The best activity profile for GPx was observed in the extract of leaves belonging to D. tenuifolia with an IC 50 value of 140±0.001 ng/mL. AR activity was determined using the substrate DL-Glyceraldehyde, by monitoring the oxidation of NADPH to NADP + at 340 nm. The methanol extracts from leaves of R. lutea and D. tenuifolia showed no significant activity with AR (Table 2). In addition, none of the studied extracts showed reasonable CAT activity potential. DISCUSSION The aim of the present study was to evaluate the total amount of the phenolic and flavonoid contents of methanol extract obtained from the flowers and leaves of D. tenuifolia and R. lutea. Furthermore, it was aimed to determine the effects of the extract on the activity of AR, CAT, GST and GPx. Phenolic compounds have at least one or more aromatic rings with one or more hydroxyl groups attached. 23 Many phenolic compounds and flavonoids have been reported to have potential for antioxidant, anticancer, anti-atherosclerotic, antibacterial, antiviral, and antiinflammatory activities. 24 Flavonoids are phenolic compounds found throughout the plant kingdom. They have been shown to possess a variety of biologic activities in organisms. Many flavonoids possess antitumor, anti-proliferation, cell cycle arrest, induction of apoptosis and differentiation, inhibition of angiogenesis, antioxidant and reversal of multidrug resistance activities. [25][26][27] Different studies have shown that plant extracts with high polyphenol contents are known as a good source of antioxidant activity. [28][29][30] In this study, for the first time, it was shown that the methanol extract from leaves of D. tenuifolia contains a high amount of total phenolic and flavonoid compounds. The results indicated that the methanol extract from the leaves of D. tenuifolia had a significant effect on GST and GPx activities. Therefore, it can be said that the leaves of D. tenuifolia have a good effect on the antioxidant enzymatic defense system. However, it is found that the leaf extracts of D. tenuifolia had no effect on AR and CAT activities. It was also demonstrated that the methanol extract from leaves of R. lutea contained more phenolic and flavonoid contents than its flower samples. However, the flower extract of R. lutea showed good effects on GPx activity than the leaf extract and the opposite of this situation was seen in the GST results. In a previous study, D. tenuifolia was analyzed for active compounds and antitumor actions on colorectal cancer cells. The results showed that D. 
tenuifolia was a good source of carotenoids, phenolics, and glucosinolate compounds. It also has antitumor activities on colorectal cancer. 31 Marrelli et al. 32 evaluated thirteen hydro alcoholic extracts of edible plants from Southern Italy for their in vitro antioxidant and antiproliferative activity on breast cancer MCF-7, hepatic cancer HepG2, and colorectal cancer LoVo. They showed that the lowest antioxidant activity was exhibited by D. tenuifolia (DT) extract. In addition, the authors reported that D. tenuifolia extract was able to induce an inhibitory activity of cell proliferation of more than 40%. In another study, the polyphenol content and biologic activities of the main component of D. simplex extract was investigated. The analyzed extracts showed that flower extracts exhibited a potent in vitro antioxidant capacity using oxygen radical absorbance capacity and displayed a strong anti-inflammatory activity and inhibited nitric oxide release. The findings suggested that the Diplotaxis flower was a valuable source of antioxidants and anti-inflammatory agents. 33 Durazzo et al. 34 However, no pharmacologic studies have been performed with R. lutea extracts to date, but Reseda species have been reported to possess various pharmacologic properties such as anti-inflammatory, antioxidant, antibacterial, and antimicrobial effects. For the first time, Benmerache et al. 36 isolated six flavonoids from the aerial parts of R. phyteuma. They also found that the butanolic extract exhibited good antioxidant and antimicrobial activities. R. luteola L. has been used as a dye due to its high luteolin content since ancient times. Woelfl et al. 37 determined anti-proliferative and apoptosis-inducing effects of the R. luteola extract RF-40. They found that it contained 40% flavonoids, primarily luteolin, luteolin-7-O-glucoside, and apigenin. Further, it was observed that the isolated flavonoids dose-dependently inhibited cell proliferation and induced apoptotic oligonucleosomes in PHA-stimulated peripheral blood mononuclear cells. Moreover, they showed that Reseda extract was an interesting raw material dyeing purposes and for further pharmacologic investigation. In another study, Berrehal et al. 38 investigated the methanolic and n-butanolic extracts of R. duriaeana and R. villosa for their antioxidant activity. The authors indicated that the methanolic and n-butanolic extracts of R. duriaeana exhibited better antioxidant activity than the respective extracts of R. villosa. This may be explained by the presence of more quercetin derivatives in R. duriaeana. From a consideration of ethnobotanical information, seeds of 45 Scottish plant species were obtained from authentic seed suppliers. The n-hexane, dichloromethane (DCM), and methanol (MeOH) extracts were assessed for free radical scavenging activity in a DPPH assay. The results showed that the methanol extract of R. lutea seeds exhibited moderate levels of free radical scavenging activity. Also, the n-hexane extract was much less active than the MeOH and DCM extracts. 39 Tawaha et al. 40 determined the relative levels of antioxidant activity and the total phenolic content of aqueous and methanolic extracts of a total of 51 Jordanian plant species. They indicated that the aqueous and methanolic extracts of R. lutea had remarkably high total phenolic contents and showed good levels of antioxidant activity. CONCLUSION In conclusion, the biologic potential of D. tenuifolia and R. 
lutea on the antioxidant defense system such as GST, GPx, CAT, and AR were considered in this research. It was shown that the methanol extract of D. tenuifolia leaves had a high amount of phenolic and flavonoid compounds. Also, it is indicated that it has good activity potential on GPx and GST. These results might be related to the high content of phenolics and flavonoids found in the species. This work highlights the importance of D. tenuifolia as a part of the daily diet.
2019-04-09T13:09:04.839Z
2018-04-01T00:00:00.000
{ "year": 2018, "sha1": "9a7ec9b3728c288706b20d515881c0123a631ddc", "oa_license": null, "oa_url": "https://doi.org/10.4274/tjps.82473", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fdbddd052b9b3b5b75b8a18c8340c8147f49cd76", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
246315595
pes2o/s2orc
v3-fos-license
Wavelength-Tunable Vortex Beam Emitter Based on Silicon Micro-Ring with PN Depletion Diode Herein we propose a design of a wavelength-tunable integrated vortex beam emitter based on the silicon-on-insulator platform. The emitter is implemented using a PN-depletion diode inside a microring resonator with the emitting hole grating that was used to produce a vortex beam. The resonance wavelengths can be shifted due to the refractive index change associated with the free plasma dispersion effect. Obtained numerical modeling results confirm the efficiency of the proposed approach, providing a resonance wavelength shift while maintaining the required topological charge of the emitted vortex beam. It is known that optical vortices got a lot of attention due to extensive telecommunication and biochemical applications, but also, they have revealed some beneficial use cases in sensors. Flexibility in spectral tuning demonstrated by the proposed device can significantly improve the accuracy of sensors based on fiber Bragg gratings. Moreover, we demonstrate that the proposed device can provide a displacement of the resonance by the value of the free spectral range of the ring resonator, which means the possibility to implement an ultra-fast orbital angular momentum (de)multiplexing or modulation. Introduction Since the unique properties of optical beams carrying orbital angular momentum (OAM), also referred to as optical vortices, have been discovered in [1], the request for further research and development in this field has been growing steadily. This was not unreasonable, as applications of the vortex beams turned out to be interesting in a wide variety of areas. One of their most known use cases is trapping and moving particles with the optical tweezers and spanners [2,3]. Optical tweezers empowered with OAM have demonstrated the manipulation of particles with multiple degrees of freedom, as well as the simultaneous trapping of multiple particles [4,5]. Optical beams carrying OAM, e.g., Bessel beams [6], along with other kinds of structured light [7] have also found their application in such a remarkable topic as quantum communications [8], specifically in higher-dimensional quantum key distribution [9], entanglement swapping [10], and multidimensional entanglement [11]. Another major field for vortex beams is optical communications where OAM is usually considered as an additional degree of freedom for multiplexing. The exponentially growing demand for network traffic [12,13] resulted in the fiber-optic lines utilizing time, wavelength, and polarization division multiplexing, which have almost reached the Shannon limit [13,14]. Therefore, the next step on the way to increase the throughput of fiber transmission lines was the space division multiplexing (SDM) [15][16][17]. SDM technology is based on the use of a degree of freedom determined by the transverse distribution of the electromagnetic (EM) field, that corresponds to multiplexing of spatially separated optical fields in multicore fibers (MCF) or using several linear polarized (LP) modes in few-mode fibers (FMF) [18]. SDM concept can be applied in both fiber-optic [17][18][19][20] and atmospheric [21][22][23] optical communication lines. A common property of optical modes carrying OAM is the presence of a multiplier e i ϕ , where is the azimuthal mode index. These modes are the eigenfunctions of the angular momentum operator and carry the OAM proportional to [1]. 
As the OAM modes represent a basis of orthogonal functions which can be divided spatially by its order , it is convenient to use them in the SDM approach [19,24]. Moreover, OAM multiplexing can be combined with other multiplexing technologies and multilevel modulation formats for increasing throughput to the Tbit level [20,25]. Finally yet importantly, beams carrying OAM are widely used in sensing. In biochemistry, optical vortices were used to detect the molecules of amino acids, nucleotides, and sugars [26]. Diffraction limit [27] and super resolution [28] imaging was reached with focusing of vortex beams. Another example is a temperature sensor consisting of a fiber Bragg grating (FBG), an optical fiber path used to eliminate errors, and a Gaussian beam, interfering with the OAM beam transmitted through the Bragg grating [29]. The principle of operation of this temperature sensor lies in a combination of the thermo-optical effect and the effect of thermal expansion, which appear in the Bragg grating when the temperature changes and leads to a shift in the central wavelength of the reflected spectrum. In turn, the phase difference between the Gaussian beam and the vortex beam leads to the rotation of their interference diagram. The temperature measurement step corresponds to the rotation of the radiation pattern. A similar method can be used to make highly accurate measurements of microstrains caused by pressure and displacement. It is also important to note the recent successes in plasmonic vortex studies: nanometrology approaches [30], generation of the high order plasmonic vortices [31], and OAM-SPR (surface plasmon resonance) based refractive index sensing [32] are making a breakthrough. In this paper, we propose and numerically verify a novel scheme of real-time OAM order switch for radiated optical vortex beam using a pn-depletion diode integrated into the ring waveguide. The most common solution that can be used to excite optical beams with a helical phase front, and which we employed in the proposed design, is a micro-ring resonator [33]. Usually, this is a ring-shaped waveguide with grating elements for the light beam emission, and a bus waveguide located at a small gap, which couples an input beam into the resonator. Such µm-scale structure was first demonstrated in [34]. Such devices are capable to emit vector optical vortices with definite and quantized OAM states. It is possible due to the ring (or disk) resonators supporting whispering gallery modes (WGM), which carry high orders of OAM. The grating provides a periodic modulation of an effective refractive index, and its working principle is analogous to the operation principle of grating couplers in straight waveguides. The light wave is scattered by the grating elements, and as a result, a part of the radiated power is deflected in the direction of the constructive interference. Because the waveguide has a ring shape and supports WGM, according to the Huygens principle the wavefront of the emitted light should point to the azimuthal direction ϕ and be helical. To our knowledge, there has been one system demonstrated based on a single ring resonator that realizes real-time OAM switching. In [35], authors developed an approach to a fast electrically-controlled vortex order switch and demonstrated a scheme with the switching time of down to 20 µs. 
In this scheme, heaters are used to change the refractive index of the waveguide, and as a result, the effective WGM index changes, which causes the restructuring of the emitted OAM mode. In our case, an inversely-biased pn-junction offers more energy-efficient and faster switching [36] compared to the existing schemes. Therefore, we believe, that this scheme can be used also as a fast (GHz-scale) electro-optical OAM modulator. The paper is organized as follows: in Section 2 we describe the working principle of the proposed scheme; in Section 3 the device modeling results are presented; and in Section 4 the methodology and the results of analysis of the emitted beams propagation are described. Principle of Operation A schematic view of the proposed device is depicted in Figure 1. The device consists of a straight input waveguide and a ring resonator with a light-emitting grating (etched holes on top of the ring waveguide) and a pn-depletion diode, imprinted over the part of the ring. The diode cross-section in detail is shown in Figure 2 and its dimensional quantities listed in the Table 1. The main principle of vortex beam generation is similar to the presented in [37], where the order of the radiated OAM carrying beam satisfies the following condition: where p is the WGM order in the ring, q is the number of grating elements in the ring resonator, and g is the diffraction order, which is an integer and can be calculated as [37]: where R is the ring resonator radius, n e f f is the effective index of the ring waveguide, and λ is the operating wavelength. To switch the emitted vortex order and the emitter resonance wavelength, it is necessary to change the effective index of the ring resonator. There are two main methods to modify the effective index: to use thermo-optical effect and electro-optical effect (mainly plasma dispersion effect). In the context of optical switches and modulators, the first effect is usually used in cases when GHz response frequencies are not required [36]. The main disadvantages of the thermo-optical effect for modulation and switching are milli-to microsecond response and mW order power consumption [38]. In contrast, the electro-optical effect is characterized by nanosecond to sub-nanosecond response and in most cases much lower, or at least comparable to the thermo-optical case, power consumption [36]. On the other hand, realizations of electro-optical effects impose higher losses and crosstalk [36], however, the advantage of quick response usually overrides these problems. In our device, we propose to use an inversely-biased pn-depletion diode, integrated into the ring resonator, providing to change the refractive index of the ring waveguide due to the plasma dispersion effect [39]. To model the device, we used modified coefficients for the Soref and Bennet model, where a change in absorption coefficient and refractive index for wavelength 1.55 µm (C-band) can be expressed as [40]: where ∆N e and ∆N h are the changes in number of electrons and holes in the active region, respectively. The resonant wavelength λ res of the ring can be calculated as [41]: where L is the circumference of the ring resonator. The proposed device can be fabricated using a standard silicon-on-insulator (SOI) platform, or other platforms supporting doping, where the waveguides with pn-junction can be implemented. For our simulations, we considered a generic SOI platform with the Si layer thickness of 220 nm. 
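The operating principle summarized above can be put into numbers with a short Python sketch. It uses the textbook ring-resonance condition lambda_res = n_eff * L / m, the usual grating phase-matching relation l = p - g*q for the emitted topological charge, and the original Soref-Bennett coefficients for the 1550 nm band as a stand-in for the modified set cited in the text; the waveguide indices, carrier swing, doped fraction, and grating count are placeholders rather than the device parameters of this work.

import numpy as np

# --- Free-carrier plasma dispersion near 1550 nm (classic Soref-Bennett fit,
#     used here as a stand-in for the modified coefficients cited in the text) ---
def delta_n_si(dNe, dNh):
    """Refractive-index change for electron/hole density changes in cm^-3."""
    return -8.8e-22 * dNe - 8.5e-18 * dNh ** 0.8

# --- Ring resonator basics ---
R = 5.5e-6                      # ring radius, m (small-ring case)
L = 2.0 * np.pi * R             # circumference
n_eff, n_g = 2.45, 4.2          # assumed effective/group indices of a 220 nm SOI strip
lam = 1.55e-6

m = round(n_eff * L / lam)      # azimuthal (WGM) order p of the nearest resonance
lam_res = n_eff * L / m         # resonance wavelength
fsr = lam_res ** 2 / (n_g * L)  # free spectral range

# Resonance shift when carriers are depleted over a fraction f_pn of the ring;
# the material index change is used as a rough proxy for the effective-index change.
f_pn = 0.5
dn = delta_n_si(5e17, 5e17)     # example carrier-density swing, cm^-3
dlam = lam_res * (dn * f_pn) / n_g
print(f"p = {m}, lambda_res = {lam_res*1e9:.2f} nm, FSR = {fsr*1e9:.2f} nm, "
      f"shift = {dlam*1e12:.1f} pm")

# Emitted OAM order for q grating elements (usual phase-matching relation)
q, g = 60, 1                    # placeholder grating count and diffraction order
print(f"Emitted topological charge l = p - g*q = {m - g * q}")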
The designed waveguide structures in this platform can be typically fabricated using 193 nm deep ultraviolet photolithography, or electron-beam lithography (EBL). Simulation Results For the numerical modeling and simulation of the device, we used the Ansys Lumerical software. The first step was calculating the carrier number in the cross-section of the ring waveguide. This distribution was simulated in Lumerical Device and further exported into a mat-file for application in the following modeling steps. The obtained carrier number distributions are presented in Figure 3. In the next step, we used the obtained carrier distributions to account for the effective index change due to the free plasma dispersion effect in Lumerical FDTD. To implement this, Lumerical's silicon material model with the modified coefficients from Equation (4) have been applied. Next, we calculated the resonance characteristics of the microring emitter when voltages of 0.5 V and −5 V are applied between the inner and outer parts of the ring waveguide. Finally, we investigated the emitted field distributions (after spreading 4 µm from the device) at the resonance wavelengths of interest (near 1550 nm). As our simulations show, for the ring resonators with radii of less than 25 µm there is no possibility to obtain the resonance shift equal or greater than the device free spectral range (FSR). The main reason is that the FSR value of the small rings is biggish, and the length of the doped region is too short to obtain the corresponding phase shift. This limitation does not allow us to realize the OAM order modulation for such small rings, but as shown in Figure 4 we can utilize pn-diode to adjust the resonance characteristics of the OAM emitter. Figure 5 shows that the vortex order does not change within one FSR. This effect confirms that we have only functionality of the resonance adjustment for small rings. In the case of larger rings (especially for the rings with radii of greater than 25 µm), the FSR value becomes small enough to realize the change in the OAM state. As can be seen from Figure 6, the resonances shift over the one FSR value. Through this effect, the OAM order modulation of the optical signal can be obtained, as can be seen in Figure 7. In addition, note that the resonant curves have smaller peaks due to the lower coupling in the case of the larger ring. The weakening of coupling can be explained by the increased complexity and hence the heterogeneity of the emitter, containing about 300 grating elements, which becomes difficult for optimizing due to doping. Nevertheless, we have shown that the proposed scheme generally is capable to implement the OAM order switching. Analysis of the Emitted Field Propagation To ensure that the resulting beams retain their vortex structure as they propagate through free space (for example, before injecting them into the fiber), we performed calculations using some obtained field distributions from the emitter. The near-field distributions for the vortices of 3rd (field 1) and 7th (field 2) orders are shown in Figure 8. As it can be seen, field 1 is a radially polarized field with the 3rd order vortex phase component. Field 2 is a hybrid-polarized (superposition of radial and azimuthal polarization) field with a 7th order vortex phase component. There is comparably high intensity in the E-field component |E z | 2 due to the diffraction in the near field [42][43][44]. 
Primarily, we calculated the electromagnetic field in the lens focal plane (focal length f = 1.3 mm, numerical aperture of the lens N A = 0.01) by using the transverse electric field components of the incident beam and the vector propagation operator [45][46][47]: where sin(θ max ) corresponds to the lens numerical aperture, polarization vectors are defined on the transverse electric field components of the incident beam E 0x (θ, φ) and E 0y (θ, φ) applying the following equations: The calculation results, corresponding to the field in the far zone, are shown in Figure 9. It can be seen that the beam structure has changed, becoming close to the radially polarized Bessel beam of the third order, but the phase and polarization states of the beam are preserved. Similarly, a hybrid-polarized seventh-order Bessel beam was formed in the far-field. and in far field: where t = {x, y, z} and m is integer. Using the coefficients from Equations (10) and (11), the OAM value for each field component can be calculated by the following formula [49]: where 2N + 1 is the number of calculated decomposition coefficients (we used N = 15). As follows from the results shown in Figure 9, the value of OAM (12) is practically preserved in the far-field for all the field components of the vector vortex beams. In this case, the OAM value for the longitudinal component differs by one from the OAM values of the transverse components, which is in full agreement with the theory [44][45][46]. Discussion In this paper, we proposed the novel design of the microring-based vortex beam emitter. First, the proposed scheme provides a possibility to realize the adjustment of the spectral properties of optical vortex emitter and, correspondingly, reduce the dependence of the emitter resonances on the fabrication errors. Also, for the structures with larger radii (starting from approximately 25 µm), it becomes possible to realize the OAM switching in the wavelength domain, which can be used for electro-optical OAM modulation of the signal. As mentioned above, the proposed scheme is expected to provide much faster resonance adjustment or OAM order switching with much lower power consumption compared to the schemes based on thermo-optical effect following from the operating characteristics of the inversely biased pn-junction. Moreover, the proposed integrated scheme is much smaller than the conventional discrete optics for generating optical vortices in free space. Generally, our device can be useful for future data transmission systems with spatial division (de)multiplexing, for OAM encoding, or in sensing systems. It is worth noting, that tunable vortex mode emitter can be especially useful in applications where the beam's OAM order and fine-tuning of its spectral characteristics are of decisive importance. For example, FBG temperature sensors, which are based on the interaction of OAM radiation with the transmission medium, are influenced by the measured external parameters. Theoretically, our proposed device can be used for spin [50] and lateral motion [51] detecting schemes. These methods are based on the light-matter interaction, which couples OAM and mechanical momentum. Also, it would be interesting to use such an emitter in an OAM-controlled hybrid plasmonic circuit for optical logic operations [52] and in other photonic circuits for information processing. 
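The OAM bookkeeping of eq (12) can be reproduced with a few lines of NumPy: sample a field component on a circle of fixed radius, project it onto the azimuthal harmonics exp(i*m*phi) for |m| <= N (N = 15, as in the text), and take the |c_m|^2-weighted average of m. The synthetic fields below, a dominant 7th-order vortex with a weak 6th-order admixture and a longitudinal component shifted by one unit, are placeholders standing in for the exported near- and far-field distributions.

import numpy as np

def oam_expectation(E_phi_samples, N=15):
    """OAM from the azimuthal spectrum of a field sampled on a circle.

    c_m is the projection onto exp(i*m*phi); the OAM value is
    sum(m * |c_m|^2) / sum(|c_m|^2) over |m| <= N.
    """
    n = len(E_phi_samples)
    phi = 2.0 * np.pi * np.arange(n) / n
    ms = np.arange(-N, N + 1)
    c = np.array([np.mean(E_phi_samples * np.exp(-1j * m * phi)) for m in ms])
    w = np.abs(c) ** 2
    return np.sum(ms * w) / np.sum(w)

# Synthetic transverse component: dominant l = 7 vortex with a weak l = 6 admixture
phi = 2.0 * np.pi * np.arange(512) / 512
E = np.exp(1j * 7 * phi) + 0.1 * np.exp(1j * 6 * phi)
print(f"OAM of transverse component ~ {oam_expectation(E):.3f}")

# Longitudinal component of such a vector vortex carries an OAM differing by one
# from the transverse components (here shifted by +1 as an example)
Ez = np.exp(1j * 8 * phi)
print(f"OAM of longitudinal component ~ {oam_expectation(Ez):.3f}")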
For further development of the proposed device, the metal mirror placed in the buried oxide [53] can be applied to suppress the cylindrical vector Bessel modes of higher orders, which are formed sideways, and therefore improve the emission efficiency. There is also a possibility to further optimize the coupling between the bus waveguide and the ring resonator to increase the depth of the resonances and raise the emitted vortex beam power. Conclusions In summary, we showed that by integrating a pn-diode in the ring waveguide of the OAM emitter, the carrier number can be controlled enabling a refractive index modulation, that allows rapidly changing the transmission spectrum of the emitter. This effect allows adjusting the emitter resonance to the desired wavelength while maintaining the required OAM in the case of rings with radii smaller than 25 µm. For the larger ring radii, it is possible to implement the OAM (de)multiplexing or modulation. Modeling results have shown that our device in case of the small ring radius (5.5 µm) emits an optical vortex with the topological charge = −6 at the wavelength of 1549 nm and the voltage level of 0.5 V, and with the same azimuthal order at 1552 nm and −5 V. This provides a resonance wavelength shift while maintaining the required topological charge of the emitted vortex beam. In the case of the larger ring (with the radius of 26.5 µm) the voltage change provides the change in the topological charge of the emitted vortex beam (−7 and −6 at the wavelength of 1548 nm for the voltage levels of 0.5 V and −5 V, respectively). In a detailed study of the available beam distributions, we also ensured that the radiated beams maintain their topological charges while propagating in free space. We believe that the presented results will be useful both for the further development of OAM-powered devices and for moving towards full-fledged photonic integration.
2022-01-28T16:24:45.171Z
2022-01-25T00:00:00.000
{ "year": 2022, "sha1": "5c812fd831490081e68e133e936b196399416901", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/22/3/929/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e058350ebed9f043dad4c7192462960a478d8dec", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
119586026
pes2o/s2orc
v3-fos-license
Ramanujan type congruences for the Klingen-Eisenstein series In the case of Siegel modular forms of degree $n$, we prove that, for almost all prime ideals $\frak{p}$ in any ring of algebraic integers, mod $\frak{p}^m$ cusp forms are congruent to true cusp forms of the same weight. As an application of this property, we give congruences for the Klingen-Eisenstein series and cusp forms, which can be regarded as a generalization of Ramanujan's congruence. We will conclude by giving numerical examples. Introduction Kurokawa [9] found some examples of congruence relations on eigenvalues between the Klingen-Eisenstein series and Hecke eigen cusp forms, in the case of Siegel modular forms of degree 2. Mizumoto [12] and Katsurada-Mizumoto [6] showed some congruence properties of this kind for more general cases. In this paper, we prove congruences on Fourier coefficients between the Klingen-Eisenstein series and cusp forms, in the case of Siegel modular forms of degree n. We remark that congruences on Fourier coefficients are stronger properties than congruences on eigenvalues of eigen forms. In order to show these congruences, we determine all mod p m cusp forms which are congruent to true cusp forms, where "mod p m cusp forms" are Siegel modular forms of degree n whose Fourier coefficients of rank r with 0 ≤ r ≤ n − 1 vanish modulo p m (see Definition 3.1). Namely, we can explain our main results as follows: (1) In the case of Siegel modular forms of degree n, for almost all prime ideals p in any ring of algebraic integers, mod p m cusp forms are congruent to true cusp forms of the same weight (Theorem 3.2). (2) We take a prime ideal p such that a constant multiple of the Klingen-Eisenstein series α[f ] n r attached to a Hecke eigen cusp form f is a mod p m cusp form. Then there exists a cusp form F such that α[f ] n r ≡ F mod p m (Corollary 3.4). The congruences we prove can be regarded as a generalization of Ramanujan's congruence which asserts that where σ m (n) is the n-th Fourier coefficient of the Eisenstein series of weight 12 (i.e., the sum of m-th powers of the divisors of n) and τ (n) is the n-th Fourier coefficient of Ramanujan's ∆ function. In the case of degree 2 and of f = 1 for the situation (2), we already proved these congruences in [7]. Notation First we confirm the notation. For the elementally facts, we refer to Klingen [8]. Let Γ n = Sp n (Z) be the Siegel modular group of degree n and H n the Siegel upper-half space of degree n. We denote by M k (Γ n ) the C-vector space of all Siegel modular forms of weight k for Γ n , and S k (Γ n ) is the subspace of cusp forms. Any f (Z) in M k (Γ n ) has a Fourier expansion of the form where T runs over all elements of Λ n , and For a subring R of C, let M k (Γ n ) R ⊂ M k (Γ n ) denote the R-module of all modular forms whose Fourier coefficients lie in R. Let k a positive even integer with k > n + r + 1 and f ∈ S k (Γ r ) a Hecke eigen form. Then the Klingen-Eisenstein series attached to f is defined by . Let K f be the number field generated over Q by the eigenvalues of the Hecke operators over Q on f . Then it is known that [f ] n r ∈ M k (Γ n ) K f by [10,11,15]. Theorem 3.2. For a finite set S n (K) of prime ideals in K depends on n, we have the following: Let k > 2n and p be a prime ideal of O with p ∈ S n (K). Let f ∈ M k (Γ n ) Op be a mod p m cusp form. In other words, we assume that f ∈ M k (Γ n ) Op satisfies Φ(f ) ≡ 0 mod p m . Then there exists g ∈ S k (Γ n ) Op such that f ≡ g mod p m . Remark 3.3. 
Since there does not exist non-cusp form of odd weight, the statement for the case where k is odd in Theorem 3.2 is trivial. We will see how to determine the exceptional set S n (K) in the later section (Definition 3.9). As an application of this theorem, we obtain congruences between the Klingen-Eisenstein series and cusp forms: Let v p be the normalized additive valuation with respect to p. We define two values v p (f ) and v Then we have Corollary 3.4. Let k > 2n be even and f ∈ S k (Γ r ) K f (n > r) a Hecke eigen form. For the Klingen- (2) For a prime l and 1 ≤ i ≤ n, we define Hecke operators T (l) and T i (l 2 ) by For an eigen form F and a Hecke operator T , we denote by λ(T, F ) the Hecke eigenvalue of T . By Deligne-Serre lifting lemma ([3] Lemma 6.11), we can take an eigen form r is the ordinary Siegel-Eisenstein series. In particular, if n = 2, this was proved by [7]. Using the integrality theorem obtained by Mizumoto [14], we can give conditions on p to find congruences for the Klingen-Eisenstein series and cusp forms as in Corollary 3.4. We shall introduce an example: To apply his theorem, we assume that and (f, f ) is the Petersson norm of f . For the precise definitions of these numbers, see [14]. This property tells us all possible primes appearing in denominators of all Fourier coefficients of [f ] n r , since the property (2.1). For example, we consider a simple case where r = n − 1. We choose p satisfying Then α[f ] n r is a mod p m cusp form for any α ∈ p m . Applying Theorem 3.2, we can find F ∈ S k (Γ n ) Op such that α[f ] n r ≡ F mod p m . Remark that it may become α[f ] n r ≡ F ≡ 0 mod p m for this choice of p, compared with Corollary 3.4. Proof of the theorem In order to define S n (K) and to prove the theorem, we start with introducing some basic properties. The finite generation of k M k (Γ n ) Z is known by Faltings-Chai [4]. Namely, we always assume that k M k (Γ n ) Z (p) = Z (p) [f 1 , · · · , f s ]/C for any prime p and hence also that k M k (Γ n ) Op = O p [f 1 , · · · , f s ]/C for any prime ideal p. Let M be a natural number. We take the minimum of integers α i ∈ Z ≥0 such that, the weight of f α i i is strictly greater than M. Then the graded algebra M <k M k (Γ n ) Op is generated over O p by the following finitely many monomials; Proof. First, we remark that any g ∈ M k (Γ n ) Op can be written by a liner combination of monomials of the form f a 1 1 · · · f as s . Hence we may consider only the case g = f a 1 1 · · · f as s . Let k 0 := α 1 k 1 + · · · + α s k s . If 2k 0 ≥ k > M, then the assertion is trivial. Hence, we assume that k > 2k 0 . Now we consider a i = α i q i + r i (0 ≤ r i < α i ). Then there exists j 0 such that q j 0 ≥ 1 because of k > 2k 0 . In this case, we may consider the following decomposition; · · · f αsqs s . Then, both h 1 and h 2 are written by the monomials of (3.1) and (3.2). This completes the proof. Proof. By Shimura [17], we have M k (Γ n ) K = M k (Γ n ) Q ⊗ Q K. Since C is faithfully flat over K, the surjectivity of Φ K is equivalent to that of Φ : M k (Γ n ) C → M k (Γ n−1 ) C . The surjectivity of Φ was proved by Klingen [8]. Therefore, we obtain the assertion of the lemma. In order to prove the theorem, it suffices to consider the case where the weight is even (see Remark 3.3). From Lemma 3.7, we may assume that 2n<k∈2Z We are now in a position to define the set S n (K) and to prove Theorem 3.2. Definition 3.9. 
Let S n (K) be the set of all prime ideals p in O such that, there exists i which satisfies that for all is a finite set depends on n not depends on generators of 2n<k∈2Z M k (Γ n−1 ) Op (Remark 3.10 in Subsection 3.3). Proof of Theorem 3.2. We choose a polynomial In fact, we may choose γ as γ := a(T 0 ; Φ(f )) for some This completes the proof of Theorem 3.2. Remark on S n (K) Remark 3.10. For each prime ideal p, it does not depend on the choice of generators of 2n<k∈2Z M k (Γ n−1 ) Op whether p belongs to the exceptional set S n (K) or not. Namely, we get the following property: Proof. For each 1 ≤ j ≤ t, we can write as f ′ j = P (f 1 , · · · , f s ) for some polynomial Remark 3.11. (1) We have S n (K) ⊂ {p | p ∩ Z ∈ S n (Q)}. Hence, to obtain the congruences as in Corollary 3.4, it suffices to except the prime ideals above p with (p) ∈ S n (Q). (1) Let p ∈ S n (K). If we assume that k M k (Γ n ) Z (p) = Z (p) [f 1 , · · · , f s ]/C, then k M k (Γ n ) Op = O p [f 1 , · · · , f s ]/C by Lemma 3.6. Since p ∈ S n (K), there exists i with 1 ≤ i ≤ s such that for all This contradicts for p ∩ Z = (p) ∈ S n (Q). Numerical examples We give some numerical examples of Corollary 3.4 for the case of degree 2. For simplicity, we put E k := E k . Let ∆ ∈ S 12 (Γ 1 ) be Ramanujan's delta function. We write simply (m, r, n) for n r 2 r 2 m ∈ Λ 2 . In the following construction of examples, we apply Sturm type theorem obtained by [2]. In order to prove a congruence between two modular forms of even weight k of degree 2 by using the theorem in [2], it suffices to check the congruences for Fourier coefficients for The reason is that all Fourier coefficients corresponding to (n, r, m), (m, r, n), (n, −r, m), (m, −r, n) are the same in the case of even weight. Then we have [f 20 ] 2 1 ≡ F 20 mod 71 2 . In fact, we can confirm this by the following table and an application of Sturm type theorem:
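The prototype of all of these congruences, Ramanujan's τ(n) ≡ σ_11(n) (mod 691) recalled in the Introduction, can also be checked numerically from the q-expansion of ∆ = q ∏_{m≥1} (1 − q^m)^24. The following is only a minimal computational sketch; the truncation bound N is an arbitrary choice and is not related to the Sturm-type bound used for the degree-2 examples above.

```python
# Minimal numerical check of Ramanujan's congruence tau(n) == sigma_11(n) (mod 691).
# tau(n) is read off from the q-expansion of Delta = q * prod_{m>=1} (1 - q^m)^24.

N = 50  # number of coefficients to check (arbitrary truncation)

# q-expansion of prod_{m=1}^{N} (1 - q^m)^24, truncated at degree N.
poly = [0] * (N + 1)
poly[0] = 1
for m in range(1, N + 1):
    for _ in range(24):
        new = poly[:]
        for i in range(N + 1 - m):
            new[i + m] -= poly[i]   # multiply by (1 - q^m)
        poly = new

def tau(n):
    # Delta = q * poly, so tau(n) is the coefficient of q^(n-1) in poly.
    return poly[n - 1]

def sigma11(n):
    return sum(d ** 11 for d in range(1, n + 1) if n % d == 0)

assert all((tau(n) - sigma11(n)) % 691 == 0 for n in range(1, N + 1))
print("tau(n) == sigma_11(n) (mod 691) verified for n <=", N)
```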
Further studies of isolated photon production with a jet in deep inelastic scattering at HERA Isolated photons with high transverse energy have been studied in deep inelastic $ep$ scattering with the ZEUS detector at HERA, using an integrated luminosity of $326\,$ pb$^{-1}$ in the range of exchanged-photon virtuality $10 - 350$ GeV$^2$. Outgoing isolated photons with transverse energy $4<E_T^\gamma<15$ GeV and pseudorapidity $-0.7<\eta^\gamma<0.9$ were measured with accompanying jets having transverse energy and pseudorapidity $2.5<E_T^{jet}<35$ GeV and $-1.5<\eta^{jet}<1.8$, respectively. Differential cross sections are presented for the following variables: the fraction of the incoming photon energy and momentum that is transferred to the outgoing photon and the leading jet; the fraction of the incoming proton energy transferred to the photon and leading jet; the differences in azimuthal angle and pseudorapidity between the outgoing photon and the leading jet and between the outgoing photon and the scattered electron. Comparisons are made with theoretical predictions: a leading-logarithm Monte Carlo simulation, a next-to-leading-order QCD prediction, and a prediction using the $k_T$-factorisation approach. Introduction The isolated high-energy photons that are emitted in high-energy collisions involving hadrons are predominantly unaffected by parton hadronisation. Their production probes the underlying partonic process and can provide information on the structure of the proton. Processes of this type have been studied in a number of fixed-target and hadron-collider experiments [1]. The production of isolated photons in photoproduction, where the incoming photon is quasi-real, was previously studied at HERA by the ZEUS and H1 collaborations [2][3][4]. Deep inelastic neutral current (NC) ep scattering (DIS), in which the exchanged photon has virtuality Q 2 > 1 GeV 2 , has also been measured in a variety of Q 2 ranges [5][6][7]. The analysis presented here extends an earlier ZEUS measurement of isolated photons and jets in DIS [8]. Figure 1 shows leading-order diagrams for high-energy photon production in DIS. Such "prompt" photons are emitted either by the incoming or outgoing quark or by the incoming or outgoing lepton. In the first case, the photons are classified as "QQ" photons, and the hadronic process has two hard scales: the virtuality Q 2 of the incident exchanged photon and the square of the transverse momentum of the prompt photon. In the second case, the photons are denoted as "LL" and are emitted from the incoming or outgoing lepton. The present analysis requires the observation of a scattered electron, a high-energy outgoing photon and a hadronic jet. Processes in which the final state consists solely of a hard outgoing electron and a hard outgoing photon are thereby excluded. By requiring the outgoing photon to be isolated, a further class of processes in which the photon is produced within a jet is suppressed. In the previous ZEUS publication on this topic [8], kinematic distributions of the outgoing photon and the jet were studied. Using the same data set, the analysis is extended here by measuring variables that involve two of the outgoing photon, the jet and the scattered electron. Results from a leading-logarithm parton-shower Monte Carlo [9] are compared to the measurements. Comparison is also made with two theoretical models: one at next-toleading order (NLO) in QCD [10,11], and one based on a k T -factorisation approach [12]. 
Experimental set-up The data sample used for the measurement corresponds to an integrated luminosity of 326 ± 6 pb −1 and was taken with the ZEUS detector in the years 2004-2007. During this 1 period, HERA ran with an electron/positron beam energy of 27.5 GeV and a proton beam energy of 920 GeV; 138 ± 2 pb −1 of e + p data and 188 ± 3 pb −1 of e − p data 1 were used in the present analysis. A detailed description of the ZEUS detector can be found elsewhere [13]. Charged particles were recorded in the central tracking detector (CTD) [14] and a silicon microvertex detector [15] which operated in a magnetic field of 1.43 T provided by a thin superconducting solenoid. The high-resolution uranium-scintillator calorimeter (CAL) [16] consisted of three parts: the forward (FCAL), the barrel (BCAL) and the rear (RCAL) calorimeters. The BCAL covered the pseudorapidity range −0.74 to 1.01 as seen from the nominal interaction point 2 . The FCAL and RCAL extended the range to −3.5 to 4.0. The smallest subdivision of the CAL is called a cell. The barrel electromagnetic calorimeter (BEMC) cells had a pointing geometry aimed at the nominal interaction point, with a cross section approximately 5 × 20 cm 2 , with the finer granularity in the Z-direction. This fine granularity allows the use of shower-shape distributions to distinguish isolated photons from the products of neutral meson decays such as π 0 → γγ. The luminosity was measured using the Bethe-Heitler reaction ep → eγp by a luminosity detector which consisted of two independent systems: a lead-scintillator calorimeter [17] and a magnetic spectrometer [18]. Event selection and reconstruction The ZEUS experiment operated a three-level trigger system [13,19,20]. At the first level, events were selected if they had an energy deposit in the CAL consistent with an isolated electron. At the second level, a requirement on the energy and longitudinal momentum of the event was used to select NC DIS events. At the third level, the full event was reconstructed and tighter requirements for a DIS electron were made. Offline selections, similar to those of the earlier ZEUS analysis [8], were then applied. Outgoing electrons were selected with polar angle θ e > 140 • in order to provide a good measurement in the RCAL, kinematically separated from the selected outgoing photons. Their 1 Hereafter, "electron" refers to both electrons and positrons unless otherwise stated. 2 The ZEUS coordinate system is a right-handed Cartesian system, with the Z axis pointing in the nominal proton beam direction, referred to as the "forward direction", and the X axis pointing towards the centre of HERA. The coordinate origin is at the centre of the central tracking detector. The pseudorapidity is defined as η = − ln tan θ 2 , where the polar angle, θ, is measured with respect to the Z axis. The azimuthal angle, φ, is measured with respect to the X axis. impact point (X,Y ) on the surface of the RCAL was required to lie outside a rectangular region ±14.8 cm in X and [−14.6, +12.5] cm in Y , to give a well understood acceptance. The outgoing electrons were identified using a neural network [21], and the energy of the outgoing electron, E e , corrected for apparatus effects, was required to be larger than 10 GeV. The kinematic variable Q 2 was reconstructed as Q 2 = −(k − k ) 2 , where k (k ) is the fourmomentum of the incoming (outgoing) electron. The kinematic region 10 < Q 2 < 350 GeV 2 was selected. 
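As a small worked illustration of the reconstruction quantities just defined, the sketch below evaluates the pseudorapidity η = −ln tan(θ/2) and Q² = −(k − k′)² from incoming and outgoing electron four-momenta. Four-vectors are taken in the order (E, pX, pY, pZ) with the ZEUS convention that the proton beam defines +Z; the numerical example is hypothetical and only meant to show that a scattered electron at θ > 140° with E > 10 GeV falls in the selected Q² range.

```python
import math

def pseudorapidity(theta):
    """eta = -ln tan(theta/2), theta being the polar angle with respect to the +Z (proton) axis."""
    return -math.log(math.tan(theta / 2.0))

def minkowski_sq(p):
    """Invariant square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    E, px, py, pz = p
    return E * E - (px * px + py * py + pz * pz)

def q_squared(k_in, k_out):
    """Q^2 = -(k - k')^2 for the incoming and outgoing electron four-momenta."""
    diff = tuple(a - b for a, b in zip(k_in, k_out))
    return -minkowski_sq(diff)

# Hypothetical example: 27.5 GeV beam electron (along -Z), scattered to 12 GeV at 155 degrees.
theta = math.radians(155.0)
k_in = (27.5, 0.0, 0.0, -27.5)
k_out = (12.0, 12.0 * math.sin(theta), 0.0, 12.0 * math.cos(theta))
print(pseudorapidity(theta), q_squared(k_in, k_out))  # ~ -1.5, ~62 GeV^2
```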
A requirement that the event vertex position, Z vtx , should be within the range |Z vtx | < 40 cm reduces the background from non-ep collisions. A further requirement for a well-contained DIS event, energy of the i-th CAL cell, θ i is its polar angle and the sum runs over all cells [22]. Photon candidates were identified as energy-flow objects (EFOs) 3 without an associated track, for which at least 90% of the reconstructed energy was deposited in the BEMC. The calibration of the energies of the photon and scattered electron was taken from an earlier ZEUS analysis and used deeply virtual Compton scattering events [24]. The reconstructed transverse energy of the photon candidate, E γ T , was required to lie within the range 4 4 < E γ T < 15 GeV and the pseudorapidity, η γ , had to satisfy −0.7 < η γ < 0.9. Jets were reconstructed with the k T clustering algorithm [25] in the E scheme in the longitudinally invariant inclusive mode [26] with the R parameter set to 1.0. Since all EFOs of the event were used except for the electron signal, one of the jets found by this procedure corresponds to or includes the photon candidate. At least one accompanying jet was required with transverse energy E jet T > 2.5 GeV and pseudorapidity, η jet , in the range −1.5 < η jet < 1.8; if more than one jet was found, that with the highest E jet T was used. Photons radiated from final-state electrons were suppressed by requiring that ∆R > 0.2, where ∆R = (∆φ) 2 + (∆η) 2 is the distance to the nearest reconstructed track with momentum greater than 250 MeV in the η − φ plane. Isolation from hadronic activity was imposed by requiring that the photon candidate possessed at least 90% of the total energy of the jet-like object of which it formed a part. This also reduced the background of photon candidates arising from neutral meson decay. Approximately 6000 events were selected at this stage; this sample was dominated by background events in which one or more neutral mesons such as π 0 and η, decaying to photons, produced a photon candidate in the BEMC. Variables studied In the previous ZEUS publication [8], distributions of photon and jet variables were studied. In the present analysis, variables that depend on two of the three measured outgoing physical objects were studied, namely the high-p T photon, the leading jet and the scattered electron. They were defined as follows: • x meas γ is a measure of the fraction of the exchanged-photon energy and longitudinal momentum that is given to the outgoing photon and the jet: where E γ and E jet denote the energies of the outgoing photon and the jet, respectively, p γ Z and p jet Z denote the corresponding longitudinal momenta, E e = 27.5 GeV, and the Jacquet-Blondel variable y JB is given by summing over all energy-flow objects in the event except the scattered electron, each object being treated as equivalent to a massless particle. This variable is sensitive to higher-order processes that generate additional particles in the event; • x obs p estimates the fraction of the proton energy transferred to the outgoing photon and jet: where E p = 920 GeV. This variable is sensitive to the partonic structure of the proton; • ∆φ is the azimuthal angle between the jet and the outgoing photon: where φ jet and φ γ denote the azimuthal angles of the jet and photon, respectively. 
This variable is sensitive to the presence of higher-order gluon radiation from the outgoing quark, which generates a contribution to the non-collinearity between the photon and the leading jet; • ∆η is the difference in pseudorapidity between the jet and the outgoing photon: ∆η = η jet − η γ , where η jet and η γ denote the pseudorapidity of the jet and the photon, 4 respectively. This variable is sensitive to the dynamical properties of the scattering process; • ∆φ e,γ is the azimuthal angle between the scattered electron and the outgoing photon: where φ e denotes the azimuthal angle of the electron; this and the following variable are sensitive to higher-order processes and to whether the process is LL or QQ; • ∆η e,γ is the difference in pseudorapidity between the scattered electron and the photon: ∆η e,γ = η e − η γ , where η e denotes the pseudorapidity of the electron. A similar ZEUS analysis has been previously performed for photoproduction [24], studying all the present variables except those associated with the scattered electron. Event simulation Monte Carlo (MC) event samples were generated to evaluate the detector acceptance and to provide signal and background distributions. The program Pythia 6.416 [9] was used to simulate prompt-photon emission for the study of the event-reconstruction efficiency. In Pythia, this process is simulated as a DIS process with additional photon radiation from the quark line to account for QQ photons. Radiation from the lepton is not simulated. The LL photons that were radiated into the detector and were isolated from the outgoing electron were simulated using the generator Djangoh 6 [27], an interface to the MC program Heracles 4.6.6 [28]; higher-order QCD effects were included using the colour dipole model of Ariadne 4.12 [29]. Hadronisation of the partonic final state was in each case performed by Jetset 7.4 [30] using the Lund string model [31]. Interference between the LL and QQ terms was neglected. The main background to the QQ and LL photons came from photonic decays of neutral mesons produced in general DIS processes. This background was simulated using Djangoh 6, within the same framework as the LL events. This provided a realistic spectrum of single and multiple mesons with well modelled kinematic distributions. The generated MC events were passed through ZEUS detector and trigger simulation programs based on Geant 3.21 [32]. They were then reconstructed and analysed by the same programs as the data. 5 Theoretical calculations The Pythia predictions and the predictions of two parton-level models were compared to the results of the present analysis. The NLO QCD calculation of Aurenche, Fontannaz and Guillet (AFG) [10], was performed in the MS scheme. Uncertainties on the QCD scale at this order contribute a normalisation uncertainty of typically ±8%. This calculation was performed in the centre-of-mass frame and transformed into the laboratory frame, which introduces uncertainties on the cross sections in some regions of the parameter space due to non-perturbative effects [11]. The AFG predictions were calculated with a cut of 2.5 GeV on the photon transverse momentum in the centre-of-mass frame, and do not include an LL contribution, which was evaluated using the Djangoh-Heracles simulation and added separately to the AFG calculation for comparison with the data. The uncertainties on the AFG predictions shown in the present paper represent the QCD scale uncertainties. 
A calculation by Baranov, Lipatov and Zotov (BLZ) [12] used updated parameters for the present paper. It is based on the k T -factorisation method. This approach uses unintegrated parton densities and takes into account both QQ and LL photons, neglecting the small interference contribution. The final result is obtained as the convolution of the off-shell scattering matrix element with the unintegrated quark distribution in the proton. In the k T -factorisation theory, some part of the final-state jets can originate not only from the hard subprocess but also from the parton evolution cascade in the initial state. The quoted uncertainties on the BLZ predictions represent the QCD scale uncertainties. In the previous ZEUS analysis of prompt photons in DIS, the measured variables were associated with the entire event, with the outgoing photon, and with jets. Comparisons were made to an earlier NLO QCD theory [33][34][35] and to BLZ. Both theories described the shapes of the single-particle cross sections well, but failed to reproduce the normalisation of the data. A later version of the original AFG calculation agreed well with the results [36], and has been used in the present study. The predictions of AFG and BLZ were calculated at the parton level and incorporated kinematic and isolation criteria corresponding to the data. Corrections to the hadron level were made using Pythia to determine the ratio of the hadron-level cross sections to those at the parton level for each variable in each bin. The Pythia events were weighted at the parton level to represent the shapes of the AFG and BLZ distributions in x meas γ in order to calculate the hadronisation corrections for all the other measured variables. The corrections for AFG and BLZ were similar to within 10%. This procedure was also applied separately to the AFG predictions for the different Q 2 ranges. For the BLZ x meas γ distribution, 98% of the parton-level cross section is in the (0.9, 1.0) bin; consequently, for this variable a transfer matrix from the parton to the hadron level was calculated using Pythia. The same procedure was used for the AFG x meas γ distribution. The relevant transfer matrices for the other variables gave similar results to the reweighting procedure. Extraction of the photon signal The event sample selected according to the criteria described in Section 3 was dominated by background from neutral meson decays; thus the photon signal was extracted statistically following the approach used in previous ZEUS analyses [2,5,6]. The photon signal was evaluated making use of the width of the BEMC energy-cluster corresponding to the photon candidate. This was calculated as the variable where Z i is the Z position of the centre of the i-th cell, Z cluster is the centroid of the EFO cluster, w cell is the width of the cell in the Z direction, and E i is the energy recorded in the cell. The sum runs over all BEMC cells in the EFO. The distributions of δZ for the full data set and the fitted MC are shown in Fig. 2. The δZ distribution exhibits a double-peaked structure with the first peak at ≈ 0.1, associated with the photon signal, and a second peak at ≈ 0.5, dominated by the π 0 → γγ background. The contribution of isolated-photon events was determined for each bin in each measured variable by a χ 2 fit to the δZ distribution in the range 0.05 < δZ < 0.8, using the LL and QQ signal and background MC distributions as described in Section 5. The mean value of χ 2 /n.d.f was 1.2. 
Compared to the earlier ZEUS publication [8], improvements have been made in the modelling of the shapes of the δZ distributions of the QQ and LL contributions, using a comparison between the shapes associated with the scattered electron in MC simulation of DIS and in real data. By treating the LL and QQ photons separately, account is taken of the effect of their differing kinematic distributions on the acceptance, and the effect of their differing (η, E T ) distributions on the shape of the photon signal. In performing the fit, the theoretically well determined LL contribution was kept constant at its MC-predicted value and the other components were varied. Of the 6149 events selected, 2451 ± 102 correspond to the extracted signal, including 526 LL photons. The fitted scale factor applied to the QQ contribution in Fig. 2 was 1.6, consistent with the earlier ZEUS analysis. For a given observable Y , the production cross section was determined for each bin using where N (γ QQ ) is the number of QQ photons extracted from the fit, ∆Y is the bin width, L is the total integrated luminosity, σ MC LL is the predicted cross section for LL photons from Djangoh-Heracles and A QQ is the acceptance correction for QQ photons. The value of A QQ was calculated, using the Pythia MC, from the ratio of the number of events generated to those reconstructed in a given bin; it lies in the range 0.91-2.28. To improve the representation of the data, and hence the accuracy of the acceptance corrections, the MC predictions were reweighted. This was done using parameterised functions of Q 2 and of η γ , and also bin-by-bin as a function of photon energy; the three reweighting factors were applied multiplicatively. Their net effect on the acceptances was small. Systematic uncertainties The sources of systematic uncertainty on the measured cross sections are as in the previous paper [8]. The principal sources of uncertainty were evaluated as follows: • the energy scale of the photon candidate was varied by ±2%. The mean change of the cross section was ±6%; • the energy scale of the jets was varied by ±1.5% for jets with E jet T > 10 GeV, ±2.5% for jets with E jet T in the range [6,10] GeV and ±4% for jets with E jet T < 6 GeV. The uncertainty was typically ±7%; • the energy scale of the scattered electron was varied by ±2%. The overall average effect on the cross sections was less than ±1%. Systematic uncertainties related to the MC generators were evaluated as follows: • the dependence on the modelling of the hadronic background by means of Djangoh-Heracles was investigated by varying the upper limit for the δZ fit in the range [0.6, 1.0], giving variations that were typically ±5%; • uncertainties in the acceptance due to the Pythia model were accounted for by taking half of the change attributable to the reweighting described in Section 7 as a systematic uncertainty; for most bins the effect was approximately 1%. Other sources of systematic uncertainty were found to be negligible and were ignored [6,37]: these included variations on the cuts on ∆R, the track momentum, E − p Z , Z vtx and the electromagnetic fraction of the photon shower, and a variation of 5% on the LL fraction. The systematic uncertainties were symmetrised by taking the mean of the positive and negative uncertainty values and were combined in quadrature. The common uncertainty of 1.8% on the luminosity measurement is not included in the tables and figures. 
Results Differential cross sections for the production of an isolated photon in DIS with an additional jet have been measured in the laboratory frame in the kinematic region defined by 4 < E γ T < 15 GeV, −0.7 < η γ < 0.9, E jet T > 2.5 GeV and −1.5 < η jet < 1.8. The DIS electron was constrained to be in the angular range θ e > 140 • , with energy greater than 10 GeV and 10 < Q 2 < 350 GeV 2 , where Q 2 was determined from the electron scattering angle. The jets were formed according to the k T -clustering algorithm with the R parameter set to 1.0. Photon isolation was imposed such that at least 90% of the energy of the jet-like object containing the photon belonged to the photon. The differential cross sections for the full Q 2 range as functions of x meas γ , x obs p , ∆φ, ∆η, ∆φ e,γ and ∆η e,γ are shown in Fig. 3 and are given in Tables 1-6, which also list the values of the LL contributions and the hadronisation corrections. The cross section decreases with increasing x obs p , having a peak around 0.01, and rises at high values of x meas γ , ∆φ and ∆φ e,γ . The predictions for the sum of the expected LL contribution from Djangoh-Heracles and a factor of 1.6 times the expected QQ contribution from Pythia agree well with the measurements. The success of the Pythia calculation can be attributed to its use of a leading-logarithm approach to gluon emission to augment its LO parton-scattering calculation. The differential cross sections for the separate ranges 10 < Q 2 < 30 GeV 2 and 30 < Q 2 < 350 GeV 2 are shown in Figs. 4 and 5. In both these ranges, a good description of the data is given by the combination of the LL and Pythia MCs. The LL contribution is small in the lower Q 2 region, as was already seen in Fig. 3(a) of the earlier ZEUS publication [8]. In the higher Q 2 range, the LL component contributes significantly, as can be seen in the x obs p , ∆φ, ∆η, and ∆η e,γ distributions where it is dominant at high values of these variables. This reflects the changes with Q 2 in the structure of the contributing processes. The increased importance of the LL component at higher Q 2 is also reflected in the x meas γ distribution. Figure 6 presents the x meas γ and x obs p cross sections on a logarithmic scale. The data in the low-x meas γ region are satisfactorily described by Pythia without the need for further higher-order processes. Comparisons of the data with the AFG and BLZ predictions are presented for the entire Q 2 range in Fig. 7. The updated BLZ predictions describe the shape of most of the distributions reasonably well, but there is an overestimation of about 20% in the overall cross section, and the extremely peaked prediction for the x meas γ distribution is not in agreement with the data. The AFG predictions describe all the distributions well and also agree in the overall normalisation. Comparisons of the data with the AFG model in the two separate Q 2 ranges are shown in Figs. 8-9. In the higher Q 2 range, the description by AFG is excellent. In the lower range, the only deviation observable is in the ∆η distribution, where the data show a tendency towards higher values than the theory. This might be related to the cut of 2.5 GeV on the transverse photon momentum applied in the AFG calculation [10]. Summary The production of isolated photons accompanied by jets has been measured in deep inelastic scattering with the ZEUS detector at HERA, using an integrated luminosity of 326 pb −1 . 
Expanding on earlier ZEUS results [8], which studied single-particle distributions, differential cross sections have been evaluated as functions of pairs of measured variables in combination. The kinematic region in the laboratory frame was defined by 4 < E γ T < 15 GeV, −0.7 < η γ < 0.9, E jet T > 2.5 GeV and −1.5 < η jet < 1.8. The DIS electron was constrained to be in the angular range θ e > 140 • , with energy greater than 10 GeV and 10 < Q 2 < 350 GeV 2 , where Q 2 was determined from the electron scattering angle. The jets were formed according to the k T -clustering algorithm with the R parameter set to 1.0. Photon isolation was imposed such that at least 90% of the energy of the jet-like object containing the photon belonged to the photon. Differential cross sections are presented for the following variables: the fraction of the incoming photon energy and momentum that is transferred to the outgoing photon and the leading jet; the fraction of the incoming proton energy transferred to the photon and leading jet; the differences in azimuthal angle and pseudorapidity between the outgoing photon and the leading jet and between the outgoing photon and the scattered electron. The Pythia prediction for the quark-radiated photon component plus the Djangoh-Heracles calculation for the lepton-radiated component describes all the distributions well if the Pythia prediction is scaled up by a factor of 1.6. This is also true if the data are divided into ranges above and below a value of Q 2 = 30 GeV 2 . Predictions from two theoretical models were also compared to the data. The BLZ model gives a fair description of the data but does not give a good description of the overall normalisation or the shape of some of the distributions. The AFG model gives an excellent description of the normalisation and almost all the distributions, both for the entire data set and for the separate Q 2 ranges. x meas had. cor. . The quoted systematic uncertainty includes all the components added in quadrature. The calculated LL contribution which was added to the Pythia and AFG calculations is also listed, and the hadronisation correction calculated for the AFG predictions. Differences between cross sections in the first section and the sum of the corresponding values in the second and third sections are of statistical origin.
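As a concrete illustration of how the two-object variables of Section 4 can be evaluated from reconstructed kinematics, the following sketch computes ∆φ, ∆η, ∆φ_e,γ, ∆η_e,γ, x_γ^meas and x_p^obs. The explicit expressions used here, x_γ^meas = (E^γ − p_Z^γ + E^jet − p_Z^jet)/(2 y_JB E_e) and x_p^obs = (E^γ + p_Z^γ + E^jet + p_Z^jet)/(2 E_p), as well as the wrapping of azimuthal differences into [0, π], are assumptions reconstructed from the verbal definitions in Section 4 rather than formulas quoted from this analysis; the beam energies are those of Section 2.

```python
import math

E_e, E_p = 27.5, 920.0  # electron and proton beam energies in GeV (Section 2)

def delta_phi(phi1, phi2):
    """Azimuthal separation wrapped into [0, pi] (wrapping convention assumed)."""
    d = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - d if d > math.pi else d

def two_object_variables(photon, jet, electron, efos):
    """photon/jet/electron: dicts with keys E, pz, eta, phi.
    efos: list of (E, pz) for all energy-flow objects except the scattered electron."""
    y_jb = sum(E - pz for E, pz in efos) / (2.0 * E_e)  # Jacquet-Blondel y
    return {
        "x_gamma_meas": (photon["E"] - photon["pz"] + jet["E"] - jet["pz"]) / (2.0 * y_jb * E_e),
        "x_p_obs": (photon["E"] + photon["pz"] + jet["E"] + jet["pz"]) / (2.0 * E_p),
        "dphi_jet_gamma": delta_phi(jet["phi"], photon["phi"]),
        "deta_jet_gamma": jet["eta"] - photon["eta"],
        "dphi_e_gamma": delta_phi(electron["phi"], photon["phi"]),
        "deta_e_gamma": electron["eta"] - photon["eta"],
    }
```

In practice the photon, jet and electron entries would be filled from the EFO-based reconstruction and jet finding described in Section 3.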
New Strategies and Methods to Study Interactions between Tobacco Mosaic Virus Coat Protein and Its Inhibitors Studies of the targets of anti-viral compounds are hot topics in the field of pesticide research. Various efficient anti-TMV (Tobacco Mosaic Virus) compounds, such as Ningnanmycin (NNM), Antofine (ATF), Dufulin (DFL) and Bingqingxiao (BQX) are available. However, the mechanisms of the action of these compounds on targets remain unclear. To further study the mechanism of the action of the anti-TMV inhibitors, the TMV coat protein (TMV CP) was expressed and self-assembled into four-layer aggregate disks in vitro, which could be reassembled into infectious virus particles with TMV RNA. The interactions between the anti-TMV compounds and the TMV CP disk were analyzed by size exclusion chromatography, isothermal titration calorimetry and native-polyacrylamide gel electrophoresis methods. The results revealed that assembly of the four-layer aggregate disk was inhibited by NNM; it changed the four-layer aggregate disk into trimers, and affected the regular assembly of TMV CP and TMV RNA. The four-layer aggregate disk of TMV CP was little inhibited by ATF, DFL and BQX. Our results provide original data, as well as new strategies and methods, for research on the mechanism of action of anti-viral drugs. Introduction Tobacco mosaic virus (TMV) is responsible for devastating diseases in major agricultural crops, including vegetables and tobacco, which often lead to high frequency of occurrence together with serious damage and enormous economic loss [1,2]. Based on these important and difficult challenges, researchers have exerted intensive efforts in discovering novel anti-TMV lead structures for plants, optimizing lead compounds to obtain commercially registered anti-TMV agent products and investigating new molecular targets. The abovementioned agents are divided into three types. The first type includesinactivating agents, which can destroy the morphology of virus particles, such as Antofine (ATF) [3,4]. The second type includes curative agents, which can inhibit virus replication and proliferation, such as Ningnanmycin (NNM) [5,6]. The third type includes immune and protective agents, which can induce resistance to plant diseases, such as Dufulin (DFL) [7] and Bingqingxiao (BQX) [8]. NNM is a microbial pesticide with active ingredients including cytosine nucleoside compounds isolated from Streptomyces nourseivar. xichangensis. Some reports have indicated that NNM is a good anti-viral drug [3], Han et al. reported that NNM inhibited the polymerization of TMV CP in vitro, in addition to multiple other activities affecting expression of various host genes that may contribute to systemic resistance to TMV in tobacco [9]. ATF is extracted from Strobilanthe cusia, and has strong interactions with TMV initial RNA, leading to assembly blocking of virus particles [4]. DFL is a plant anti-viral agent with a high activity against TMV and an immune activator in plants [7]. BQX has preventive and curative effects, which could change the profile of protein expression after TMV infection [8,9]. TMV CP include disk and helix formations [10]. The disk forms appeared in sodium phosphate buffer without TMV RNA; the helix forms appeared sodium phosphate buffer with TMV RNA. The TMV CP disk forms play an important role in forming TMV particles with both the TMV CP helix forms and TMV RNA [10]. Thus, TMV CP disk forms are potential targets of anti-viral compounds. 
To further study the mechanisms of action of anti-TMV inhibitors, we obtained the four-layer aggregate disk forms of TMV CP as a target of the anti-TMV drugs in vitro. The four-layer aggregate disk of TMV CP can form rods, and can assemble into infectious virus particles with TMV RNA in 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. After adding the anti-TMV drugs, the TMV CP disk could be disassembled into trimers by NNM, and could be disassembled into dimers by ATF, but not by BQX and DFL. After analysis using size exclusion chromatography (SEC) and native-polyacrylamide gel electrophoresis (native-PAGE) between NNM and the TMV CP disk, we found that NNM inhibited the assembly of TMV CP; it changed the disk of TMV CP into trimers. The isothermal titration calorimetry (ITC) results revealed that the hydrogen-bonding networks of TMV CP disk had interactions with NNM. TMV CP Disk Formation Confirmed by SEC and Native-PAGE To obtain TMV CP disk formation, the TMV CP gene was subcloned in a prokaryotic expression system. The TMV CP fused to a 6-His-tag produced in a prokaryotic expression system was purified using the 6-His-tag, which was then cleaved from the TMV CP using thrombin. The fresh TMV CP protein was observed primarily as tetramers (~4 subunits,~70 KDa) using SEC in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2).The disk (~34 subunits,~595 KDa) was formed when the tetramers were incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for more than 12 h ( Figure 1A). TMV CP tetramers were incubated at 295 K for 24 h, and TMV CP disk were detected by native-PAGE. The TMV CP disk were confirmed by SEC ( Figure 1B). This process of obtaining TMV CP disk is easier than traditional methods [11][12][13]. 3 kDa, lanes 1-7, purified TMV CP at 12 mg/mL (lane 1) or 6.8 mg/mL (lanes 2-7) was incubated in 10 mM sodium phosphate (pH 7.2) at 295 K for 0.5 h (lanes 1-3), 5 h (lanes 4, 5), or 24 h (lanes 6, 7); (B) 0.5 mM (6.8 mg/mL) TMV CP disk (~34 subunits,~595 kDa) were confirmed by SEC, the peak time of TMV CP disk is 18.3 min. Reconstructed TMV Observed by Transmission Electron Microscopy (TEM) As described in the Experimental Section, the activation and function of TMV CP were confirmed using TEM. TMV CP refolding and further self-assembly was carried out at a protein concentration of 6.8 mg/mL. When TMV CP protein concentration was 6.8 mg/mL at 295 K for 24 h, disk and rod were observed by TEM (Figure 2A), and reconstituted TMV was obtained in the solutions when 2 mg/mL TMV RNA was added and incubated at 295 K for 24 h ( Figure 2B). The freshly purified TMV CP oligomers self-assembled into TMV CP disk in the appropriate solutions for reconstruction into newly infective viruses ( Figure 2C,D). Therefore, the freshly purified TMV CP may be regarded as the target of anti-TMV compounds. Interactions between Anti-TMV Drugs and TMV CP Disks Using ITC ITC experiments were performed under the following conditions: 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2 to explore the energetic association of the compounds BQX, DFL, ATF and NNM with the TMV CP disk. The raw data of the heat change over time (top) and the plots of the integrated, corrected molar heats versus the ligand-to-protein ratios (bottom) are shown in Figure 3. 
The results showed that NNM and ATF had a micromole affinity for the TMV CP disk: Analysis by ITC revealed that one TMV CP disk interacted with 4100 to 4632 NNM molecules, and NNM bound to TMV CP disk with a dissociation constant (K d ) of 3.3 µM ( Figure 5D). The titration data indicated an apparent negative enthalpy value ( G «´7.5) when NNM bound to TMV CP disk (Table 3), which indicated that the NNM-TMV CP disk complex was stable; one TMV CP disk interacted with 39 to 40 ATF molecules, and ATF bound to TMV CP disk with a dissociation constant (K d ) of 38.8 µM ( Figure 5C). The titration data also indicated an apparent negative enthalpy value ( G «´5.6) when ATF bound to TMV CP disk, which indicated that the ATF-TMV CP disk complex was stable, but less sensitive than NNM. However, the affinities between other anti-TMV drugs (BXQ and DFL) and TMV CP disk were less sensitive ( Figure 5A,B) than NNM and ATF and were within the 400-13,900 µM range. Interactions between Anti-TMV Drugs and TMV CP Studied by Native-PAGE Native-PAGE was carried out in the presence of 0.5 mM TMV CP disk and 5 mM DFL containing 2.5% DMSO, BQX containing 2.5% DMSO, AFL and NNM separately. The results showed that DFL and BQX could not destroy the TMV CP disk, whereas NNM could change TMV CP disk into trimers and ATF could change the TMV CP disk into dimers ( Figure 4). Interactions between Anti-TMV Drugs and TMV CP Studied by SEC In the SEC experiments, TMV CP disk were mixed with 5 mM DFL (containing 2.5% DMSO), 5 mM BQX (containing 2.5% DMSO), 5 mM NNM and 5 mM ATF separately and incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 1 h. The TMV CP disks were not disassembled into oligomers by DFL and 5 mM BQX (both containing 2.5% DMSO); however, TMV CP disk were disassembled into trimers by NNM and disassembled into dimers by ATF ( Figure 5). Figure 5. Interactions between the TMV CP disk and the anti-TMV drugs, as analyzed by SEC: (A) 0.5 mM (6.8 mg/mL) TMV CP disk were mixed with 5 mM DFL (containing 2.5% DMSO), and then incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 1 h; (B) 0.5 mM (6.8 mg/mL) TMV CP disk were mixed with 5 mM BQX (containing 2.5% DMSO) and then incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 1 h; (C) 0.5 mM (6.8 mg/mL) TMV CP disk were mixed with 5 mM NNM and then incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 1 h; and (D) 0.5 mM (6.8 mg/mL) TMV CP disk were mixed with 5 mM ATF and then incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 1 h; (E) the peak time of TMV CP disk is 18.3 min; (F) the peak time of TMV CP dimers is 51 min; (G,H) are protein markers for size-exclusion chromatography, the peak time of ferritin is 22 min, and the peak time of BSA is 38 min. The concentrations of NNM solution were adjusted for further investigation of the interactions between TMV CP disk and NNM. When the ratio of TMV CP disk to NNM was 1:5, few TMV CP disks were disassembled into trimers; when the ratio was 1:10, most TMV CP disks were disassembled into trimers ( Figure 6). The results imply that NNM could destroy the interlayer hydrogen-bonding networks in the four-layer aggregate of TMV CP disk. 
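The dissociation constants and binding free energies quoted above can be cross-checked with the standard relation ∆G = RT ln K_d. The sketch below assumes the reported ∆G values are in kcal/mol and uses the ITC temperature of 291 K given in the Methods; under these assumptions, K_d = 3.3 µM corresponds to ∆G ≈ −7.3 kcal/mol and K_d = 38.8 µM to ∆G ≈ −5.9 kcal/mol, in reasonable agreement with the quoted values of about −7.5 and −5.6.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol*K)
T = 291.0     # ITC temperature from the Methods (K)

def delta_g_kcal(kd_molar):
    """Binding free energy from the dissociation constant: dG = R*T*ln(Kd)."""
    return R * T * math.log(kd_molar)

for name, kd in [("NNM", 3.3e-6), ("ATF", 38.8e-6)]:
    print(f"{name}: Kd = {kd * 1e6:.1f} uM -> dG ~ {delta_g_kcal(kd):.1f} kcal/mol")
# Approximate output: NNM -> -7.3 kcal/mol, ATF -> -5.9 kcal/mol
```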
In Vivo Assays of Anti-TMV Drugs and Reconstituted TMV Virus Based on the mechanical inoculation methods of reconstituted TMV virus with anti-TMV drugs, NNM was verified to have a very good curative activity against TMV (60.6% in 500 µg/mL and 30.1% in 100 µg/mL) and ATF was verified that it has curative activity against TMV (61.1% in 500 µg/mL and 27.6% in 100 µg/mL), better than BQX and DFL. Preparation of Compound Samples NNM was kindly supplied by Chen Jiaren of the Chengdu Biology of Chinese Academy of Sciences; ATF by Wang Qingmin of the Research Institute of Elemento-Organic Chemistry; DFL and BQX were designed and synthesized in our laboratory. Primer Name Primers Primer 1 2). After thrombin digestion of 6-His-tags at 277 K overnight, the TMV CP was further purified by SEC using a Superdex 200 column (GE Healthcare, Little Chalfont, Buckinghamshire, UK, 120 mL) in a buffer containing 10 mM sodium phosphate buffer and 100 mM sodium chloride solution at pH 7.2. The protein was then concentrated to 1 to 10 mg/mL for the biochemistry trials using Amicon Ultra centrifugal filter units (Millipore, Darmstadt, Germany, UFC501096) with a 10-kDa molecular weight cutoff. The target proteins were briefly stored at 277 K. Formation of TMV Disk The purified proteins were incubated in 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2) at 295 K for 24 h to obtain the four-layer aggregate disk [16][17][18][19][20]. The disk forms were confirmed by SEC and native-PAGE. Reconstituted Virus of TMV CP One milliliter of purified TMV CP disk (6.8 mg/mL) solution (10 mM sodium phosphate and 100 mM sodium chloride solution, pH 7.2) was mixed with 0.2 mL of purified TMV RNA (2 mg/mL) and incubated at 295 K for 24 h. The suspensions were centrifuged at 5000 rpm for 1 min, and then the reconstituted virus was obtained [13,21,22]. TEM Imaging TMV RNA and the self-assembled TMV CP disk were incubated as mentioned previously. 20 µL of the mixed solution was deposited onto a 300-mesh Formvar-carbon-coated copper grid for 2 min, followed by rinsing with ddH 2 O. The grid was then stained with 20 µL of 2% aqueous solution of tungstophosphoric acid (Sinopharm, Beijing, China) for 90 s as a negative stain [23][24][25][26][27][28]. Images were obtained at the Zunyi Medical University Electron Microscope Lab using a Hitachi H-7650 transmission electron microscope with 80 kV accelerating voltage. Interactions of Anti-Viral Drugs and TMV CP Given that TMV CP disks are regarded as the target of small-molecule compounds [6], we studied the interactions between TMV CP disk and anti-TMV compounds 0.5 mM of the small-molecule compounds NNM, ATF, DFL and BQX were respectively added to 0.5 mM (6.8 mg/mL) TMV CP disk (Table 2), and then incubated for 30 min. The solution interaction was then investigated by SEC, 17% native-PAGE, and ITC methods. Interactions of Anti-Viral Drugs and TMV CP Given that TMV CP disks are regarded as the target of small-molecule compounds [6], we studied the interactions between TMV CP disk and anti-TMV compounds 0.5 mM of the small-molecule compounds NNM, ATF, DFL and BQX were respectively added to 0.5 mM (6.8 mg/mL) TMV CP disk (Table 2), and then incubated for 30 min. The solution interaction was then investigated by SEC, 17% native-PAGE, and ITC methods. Native-PAGE Native-PAGE was performed on ice with the TMV CP samples that were equilibrated overnight in a buffer containing 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. 
20 μL of the sample was treated with 20 μL of 2× loading buffer, including 12.5% 0.5 M Tris-HCl (v/v) at pH 6.8, 0.5% bromophenolblue (w/v), and 30% glycerin (v/v). Subsequently, 8 μL of the samples and 4 μL of the protein marker were loaded on a native-PAGE gel (4% stacking and 17% separating gel). Electrophoresis was run at 1× native-PAGE buffer (Tris-Gly, pH 8.8) at 273 K for 1 h [29]. After native-PAGE electrophoresis, the lane was stained with Coomassie blue [30,31] to locate the protein, and then destained with methanol and glacial acetic acid. ITC ITC binding experiments were performed using an ITC 200 Micro Calorimeter at 291 K. All proteins were dialyzed against a buffer for 3 day prior to forming the TMV CP disk; the buffer contained 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2 or 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2, 2.5% DMSO. Aliquots of the anti-TMV compounds at Interactions of Anti-Viral Drugs and TMV CP Given that TMV CP disks are regarded as the target of small-molecule compounds [6], we studied the interactions between TMV CP disk and anti-TMV compounds 0.5 mM of the small-molecule compounds NNM, ATF, DFL and BQX were respectively added to 0.5 mM (6.8 mg/mL) TMV CP disk (Table 2), and then incubated for 30 min. The solution interaction was then investigated by SEC, 17% native-PAGE, and ITC methods. Native-PAGE Native-PAGE was performed on ice with the TMV CP samples that were equilibrated overnight in a buffer containing 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. 20 μL of the sample was treated with 20 μL of 2× loading buffer, including 12.5% 0.5 M Tris-HCl (v/v) at pH 6.8, 0.5% bromophenolblue (w/v), and 30% glycerin (v/v). Subsequently, 8 μL of the samples and 4 μL of the protein marker were loaded on a native-PAGE gel (4% stacking and 17% separating gel). Electrophoresis was run at 1× native-PAGE buffer (Tris-Gly, pH 8.8) at 273 K for 1 h [29]. After native-PAGE electrophoresis, the lane was stained with Coomassie blue [30,31] to locate the protein, and then destained with methanol and glacial acetic acid. ITC ITC binding experiments were performed using an ITC 200 Micro Calorimeter at 291 K. All proteins were dialyzed against a buffer for 3 day prior to forming the TMV CP disk; the buffer contained 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2 or 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2, 2.5% DMSO. Aliquots of the anti-TMV compounds at Interactions of Anti-Viral Drugs and TMV CP Given that TMV CP disks are regarded as the target of small-molecule compounds [6], we studied the interactions between TMV CP disk and anti-TMV compounds 0.5 mM of the small-molecule compounds NNM, ATF, DFL and BQX were respectively added to 0.5 mM (6.8 mg/mL) TMV CP disk (Table 2), and then incubated for 30 min. The solution interaction was then investigated by SEC, 17% native-PAGE, and ITC methods. Native-PAGE Native-PAGE was performed on ice with the TMV CP samples that were equilibrated overnight in a buffer containing 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. 20 μL of the sample was treated with 20 μL of 2× loading buffer, including 12.5% 0.5 M Tris-HCl (v/v) at pH 6.8, 0.5% bromophenolblue (w/v), and 30% glycerin (v/v). Subsequently, 8 μL of the samples and 4 μL of the protein marker were loaded on a native-PAGE gel (4% stacking and 17% separating gel). Electrophoresis was run at 1× native-PAGE buffer (Tris-Gly, pH 8.8) at 273 K for 1 h [29]. 
After native-PAGE electrophoresis, the lane was stained with Coomassie blue [30,31] to locate the protein, and then destained with methanol and glacial acetic acid. ITC ITC binding experiments were performed using an ITC 200 Micro Calorimeter at 291 K. All proteins were dialyzed against a buffer for 3 day prior to forming the TMV CP disk; the buffer contained 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2 or 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2, 2.5% DMSO. Aliquots of the anti-TMV compounds at Interactions of Anti-Viral Drugs and TMV CP Given that TMV CP disks are regarded as the target of small-molecule compounds [6], we studied the interactions between TMV CP disk and anti-TMV compounds 0.5 mM of the small-molecule compounds NNM, ATF, DFL and BQX were respectively added to 0.5 mM (6.8 mg/mL) TMV CP disk (Table 2), and then incubated for 30 min. The solution interaction was then investigated by SEC, 17% native-PAGE, and ITC methods. Native-PAGE Native-PAGE was performed on ice with the TMV CP samples that were equilibrated overnight in a buffer containing 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. 20 μL of the sample was treated with 20 μL of 2× loading buffer, including 12.5% 0.5 M Tris-HCl (v/v) at pH 6.8, 0.5% bromophenolblue (w/v), and 30% glycerin (v/v). Subsequently, 8 μL of the samples and 4 μL of the protein marker were loaded on a native-PAGE gel (4% stacking and 17% separating gel). Electrophoresis was run at 1× native-PAGE buffer (Tris-Gly, pH 8.8) at 273 K for 1 h [29]. After native-PAGE electrophoresis, the lane was stained with Coomassie blue [30,31] to locate the protein, and then destained with methanol and glacial acetic acid. ITC ITC binding experiments were performed using an ITC 200 Micro Calorimeter at 291 K. All proteins were dialyzed against a buffer for 3 day prior to forming the TMV CP disk; the buffer contained 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2 or 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2, 2.5% DMSO. Aliquots of the anti-TMV compounds at 0.5 5 a Pure product. Native-PAGE Native-PAGE was performed on ice with the TMV CP samples that were equilibrated overnight in a buffer containing 10 mM sodium phosphate and 100 mM sodium chloride at pH 7.2. 20 µL of the sample was treated with 20 µL of 2ˆloading buffer, including 12.5% 0.5 M Tris-HCl (v/v) at pH 6.8, 0.5% bromophenolblue (w/v), and 30% glycerin (v/v). Subsequently, 8 µL of the samples and 4 µL of the protein marker were loaded on a native-PAGE gel (4% stacking and 17% separating gel). Electrophoresis was run at 1ˆnative-PAGE buffer (Tris-Gly, pH 8.8) at 273 K for 1 h [29]. After native-PAGE electrophoresis, the lane was stained with Coomassie blue [30,31] to locate the protein, and then destained with methanol and glacial acetic acid. ITC ITC binding experiments were performed using an ITC 200 Micro Calorimeter at 291 K. All proteins were dialyzed against a buffer for 3 day prior to forming the TMV CP disk; the buffer contained 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2 or 10 mM sodium phosphate, 100 mM sodium chloride at pH 7.2, 2.5% DMSO. Aliquots of the anti-TMV compounds at 5 mM (syringe) were injected into TMV CP solutions at 0.5 mM (cell). Data were processed using Origin software, and binding isotherms were calculated based on a one-site binding model. A single titration was conducted for TMV CP. 
The K d error values were based on the sum of square deviations between the nonlinear regression curve and the experimental data (Table 3) [32,33]. The experiment was performed by titrating 10 mM compounds into 0.5 mM TMV CP disk. The ITC data were fitted to a one-set-of-sites model, errors from the fitting were shown. In Vivo Assays We performed the efficacy of BQX, DFL, ATF and NNM on reconstituted TMV virus infection by N. glutinosa by mechanical inoculation. The leaves on N. glutinosa of the same ages were selected. The reconstituted TMV with 58.8 µg/mL concentration was dipped and inoculated on the whole leaves. Then the leaves were washed with water and dried. The compound solution (100 and 500 µg/mL) was smeared on one side and the buffer was smeared on the other side for control. The local lesion numbers were then recorded 3-4 days after inoculation. For each compound, three repetitions were conducted to ensure the reliability of the results. The in vivo assays were showed in (Table 4) as follows. c The experiment was inoculated by means of half leaf, Inhibition rate (%) = (av local numbers of control (not treated with compounds)´av local numbers of treatment with compounds)/av local numbers of control (not treated with compounds); d In order to avoid errors, when we used means of half leaf, the treatment half and the opposing half were used alternatively, the buffer is 10 mM sodium phosphate and 100 mM sodium chloride solution (pH 7.2), errors were shown. Conclusions TMV CP disk were used as targets of anti-viral compounds. In our study, TMV CP disk were disassembled into trimers when the NNM solution concentrations increased; and TMV CP disk were partly disassembled into trimers or dimers when ATF was added. By contrast, the disk showed little disassembly into trimers or dimers with the addition of DFL and BQX. The results demonstrated that there were interactions between NNM and TMV CP disk, and that interactions between the inter-subunits and layers of the TMV CP disk were disrupted by NNM. We speculate NNM replaced the binding sites in the disk and disrupted the interactions between the inter-subunits and layers of the TMV CP disk. To the best of our knowledge, this study is the first to clearly show the inhibition of the TMV assembly through a small-molecule-protein interaction. This approach provides original data, as well as alternative strategies and methods, which can be used in anti-viral drug mechanism research.
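The inhibition rate used in the half-leaf assay above is a simple ratio of lesion counts; the sketch below implements the stated formula, with the lesion counts in the example being hypothetical values used only for illustration.

```python
def inhibition_rate(control_lesions, treated_lesions):
    """Inhibition rate (%) = (control - treated) / control * 100, per the half-leaf formula."""
    return (control_lesions - treated_lesions) / control_lesions * 100.0

# Hypothetical example counts (not data from this study):
print(f"{inhibition_rate(36.0, 14.2):.1f}%")
```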
CAR T cells remain during long-term cancer remission A number of chimeric antigen receptor (CAR) T cell-based therapies are now approved by the FDA for the treatment of cancer. A study published in Nature found that CAR T cells are still present in two patients who remain cancer-free over a decade after they received CAR T cell therapy as part of a clinical trial. Chronic lymphocytic leukaemia (CLL) is a cancer that arises in B cells, the cells within the immune system that produce antibodies, and is one of the most common types of leukaemia in adults. Chimeric antigen receptor (CAR) T cell therapy, in which a patient's T cells are isolated and genetically engineered to attack the patient's tumour cells, has been tested as a treatment in patients with CLL. A subset of CLL patients treated with CAR T cells targeting the B cell antigen CD19 (CTL019 cells) as part of a phase I trial in 2010 exhibited complete and durable responses, with some still in remission over 10 years later. To further understand the biology of such sustained responses to CAR T cell therapy, Melenhorst et al. studied two of the patients who displayed complete responses to CTL019 cells in 2010 and remain in remission 1 . The authors investigated whether CTL019 cells remained detectable in the patients. Peak numbers of CTL019 cells occurred in the patients 3 and 31 days after infusion. However, 10 and 9 years post-infusion, CTL019 cells remained, representing 0.8 and 0.1% of all T cells analysed from the two patients. By sequencing the T cell receptor on the CAR T cells, the authors showed that the specific populations of CTL019 cells present in the patients changed initially but then stabilised at different time points in the two patients. Two major subtypes of T cells are CD4+ (helper) T cells and CD8+ (cytotoxic) T cells, which express either the CD4 or CD8 glycoproteins. Initially, the CTL019 cells lacked either glycoprotein or were CD8+; later, a small number of CD4+ cell populations became dominant. Further analysis of these CD4+ T cells suggested that they remained functionally active and, unusually, had some of the characteristics of cytotoxic T cells. In summary, these results show that there have been two major phases of response in the patients, with an initial response phase followed by a long-term remission phase. The sustained remission seen in these two patients could be a consequence of the cytotoxic activity of persistent CTL019 cells. Studies such as this one further our mechanistic understanding of how these treatments work long-term and elicit the remarkable clinical outcomes seen in some patients. Katharine Barnes ✉ ✉ email: k.barnes@nature.com Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/ licenses/by/4.0/.
2022-03-14T15:15:45.722Z
2022-03-11T00:00:00.000
{ "year": 2022, "sha1": "c2e885ea7f5c01f5e47a62b516dd0212dfe85cbe", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s43856-022-00092-w.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ee89fedc1616b422c961e145a0654fd7b0dec558", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
247119213
pes2o/s2orc
v3-fos-license
Dexmedetomidine Alleviates Lung Oxidative Stress Injury Induced by Ischemia-Reperfusion in Diabetic Rats via the Nrf2-Sulfiredoxin1 Pathway Oxidative stress injury (OSI) is an important pathological process in lung ischemia-reperfusion injury (LIRI), and diabetes mellitus (DM) can exacerbate this injury. Dexmedetomidine protects against LIRI by reducing OSI. However, the effect of dexmedetomidine on LIRI under diabetic conditions remains unclear. Therefore, this study is aimed at exploring the effects and mechanisms of dexmedetomidine on OSI induced by LIRI in diabetic rats. Rats were randomly divided into control+sham (CS), DM+sham (DS), control+ischemia-reperfusion (CIR), DM+ischemia-reperfusion (DIR), and DM+ischemia-reperfusion+dexmedetomidine (DIRD) groups (n = 6). In the CS and DS groups, the nondiabetic and diabetic rats underwent thoracotomy only without LIRI. In the CIR, DIR, and DIRD groups, LIRI was induced through left hilum occlusion for 60 min, followed by reperfusion for 120 min in nondiabetic and diabetic rats, and rats in the DIRD group were administered dexmedetomidine (3, 5, and 10 μg/kg). Compared with those in the CS group, the OSI, lung compliance, apoptosis, and oxygenation indices deteriorated in the DS group (P < 0.05), and these indices were further aggravated in the CIR and DIR groups (P < 0.05), being the worst in the DIR group (P < 0.05). Compared to those of the DIR group, the OSI, lung compliance (15.8 ± 2.4 vs. 11.6 ± 1.7 ml/kg), apoptosis (22.5 ± 2.6 vs. 51.8 ± 5.7), oxygenation (381 ± 58 vs. 308 ± 78 mmHg), and caspase-3 and caspase-9 protein expression indices were attenuated, and Nrf2 and sulfiredoxin1 protein expression was increased in the DIRD group (P < 0.05). And the lung injury, oxygenation, OSI, and Nrf2 and sulfiredoxin1 protein expression changed in a concentration-dependent manner. In conclusion, dexmedetomidine alleviated lung OSI and improved lung function in a diabetic rat LIRI model through the Nrf2-sulfiredoxin1 pathway. Introduction Many people have diabetes mellitus (DM) worldwide. With an increased occurrence in both aging individuals and young adults (<40 years old), DM is an independent risk factor for morbidity and mortality after lung ischemia-reperfusion (I/R) injury, especially in those with type 2 DM [1]. DM is associated with many severe complications, and the lung is one of the target organs [2]. In the context of diabetes, increased attention should be paid to preventing lung injury during lung surgery. Lung I/R injury can occur in many clinical contexts, such as lung transplantation, shock, cardiopulmonary bypass, and single-lung ventilation, which can contribute to severe organ failure and increase mortality in patients [3,4]. Oxidative stress injury can lead to the intracellular generation of reactive oxygen species (ROS), which are important factors that contribute to lung I/R injury [5]. Additionally, previous studies have shown that sustained hyperglycemia produces excessive ROS, which contribute to diabetic lung injury in rats and humans [6][7][8][9]. Thus, the inhibition of oxidative stress injury is essential for alleviating diabetic lung I/R injury. Dexmedetomidine (DEX), a highly selective α 2 -adrenergic agonist with sedative, anxiolytic, analgesic, and sympatholytic inhibitory characteristics, has been widely applied in the clinic [10]. 
Although DEX has been shown to have protective effects on lung I/R injury by decreasing oxidative stress injury, the specific mechanism by which DEX affects oxidative stress injury remains unclear [11,12]. A previous study showed that nuclear factor erythroid 2-related factor (Nrf2) and its downstream protein sulfiredoxin1 participate in oxidative stress injury [13]. DEX pretreatment can activate the Nrf2 pathway in many organs in I/R models, such as the liver, brain, and intestines [14][15][16]. However, whether the effects of DEX on diabetic lung I/R injury are related to Nrf2-sulfiredoxin1-induced antioxidative effects is still unknown. In this study, we focused on the Nrf2-sulfiredoxin1 pathway and sought to explain its contribution to the oxidative stress injury induced by lung I/R. Therefore, this study is aimed at examining the effects of DEX on lung injury induced by I/R in diabetic rats and exploring the potential role of the Nrf2-sulfiredoxin1 pathway. We present this article in accordance with the ARRIVE checklist. Materials and Methods 2.1. Animals. Adult, pathogen-free male Sprague Dawley rats weighing 200-220 g were purchased from the Experiment Center of the Affiliated Hospital of Qingdao University. The animals were housed in a temperature-controlled room with ad libitum access to food and water and a 12-12 h light-dark cycle before the experiment. Animal health and behavior were monitored every day. The Animal Care and Welfare Committee approved all experiments and procedures in this study (No. AHQU-MAL20180913). 2.2. DM Model. The DM rat model was established by the administration of a high-fat diet (15% lard, 5% sesame oil, 20% sucrose, 2.5% cholesterol, and 57.5% normal chow) for 4 weeks followed by a low-dose intraperitoneal injection of streptozotocin [17]. The rats fed a standard diet were used as nondiabetic controls. 2.3. Lung I/R Injury Model. The rats were anesthetized by an intraperitoneal injection of sodium pentobarbital (60 mg/kg). After anesthesia, the rats were intubated with a tracheal tube under a laryngoscope. The tracheal tube was connected to a small animal ventilator for mechanical ventilation with a tidal volume of 8 ml/kg. The respiratory rate was adjusted to maintain an arterial carbon dioxide tension (PaCO2) of 35-45 mmHg. The femoral artery and vein were cannulated for blood pressure monitoring and drug administration, respectively. After a left lateral thoracotomy, the left lung hilum was clamped 5 min after the administration of heparin (50 IU/animal) with a noninvasive microvascular clip at the end of expiration. The tidal volume was reduced to 6 ml/kg during clamping. After a 60 min ischemic period, the microvascular clip was removed for 120 min of reperfusion. Then, the tidal volume was adjusted to 8 ml/kg. (Table 1 legend: T0-T3 represent baseline, 60 min after ischemia, and 60 min and 120 min after reperfusion, respectively. PaO2/FiO2: partial pressure of arterial oxygen (PaO2)/fraction of inspired oxygen (FiO2); BE: base excess; PaCO2: arterial carbon dioxide tension; CS: control+sham; DS: diabetes mellitus+sham; CIR: control+ischemia-reperfusion; DIR: diabetes mellitus+ischemia-reperfusion; DIRD: diabetes mellitus+ischemia-reperfusion+dexmedetomidine. In the DIRD group, 3 μg/kg of DEX was used. *P < 0.05 vs. the CS group; #P < 0.05 vs. the DS group; △P < 0.05 vs. the CIR group; §P < 0.05 vs. the DIR group.)
During the experiment, the body temperature was maintained between 37.5°C and 38.5°C with a heating blanket. At the end of the experiment, the rats were euthanized by exsanguination under anesthesia. Groups. The rats were randomly divided into 5 groups (n = 6): control+sham (CS) group, DM+sham (DS) group, control+I/R (CIR) group, DM+I/R (DIR) group, and DM +I/R+DEX (DIRD) group. In order to further explore the effects of DEX, 3 μg/kg, 5 μg/kg, and 10 μg/kg were used. In the CS and DS groups, the nondiabetic and diabetic rats underwent thoracotomy and were ventilated with 40% O 2 without ischemia and reperfusion. In the CIR and DIR groups, lung I/R injury was established in the nondiabetic and diabetic rats, which were then ventilated with 40% O 2 . In the DIRD group, lung I/R injury was established in diabetic rats, and DEX (No. 191005BP, Jiangsu Hengrui Pharmaceutical Co., Ltd., Lianyungang, China) was administered through the femoral vein for 10 min before reperfusion. The rats in the other groups were administered the same volume of normal saline (Figure 1). Blood Gas Analysis. Arterial blood gas analysis was performed by a blood gas analyzer (Rapid Lab 248, Bayer, Medfield, MA, USA) at baseline (3 min after ventilation), 60 min after ischemia, and 60 and 120 min after reperfusion, and the data were recorded as T0-T3. At the end of the experiment, blood from the left pulmonary vein was also collected for blood gas analysis. Measurement of Lung Static Compliance. After the euthanasia of the rats, the median sternotomies were immediately performed. The lungs were isolated with a tracheal tube, and the right lung hilum was ligated. Then, the tracheal tube was connected to an apparatus to measure the left lung static pressure-volume (P-V) curve to assess static lung compliance. Airway pressure was increased to 30 cm H 2 O before being decreased to 0 cm H 2 O in stepwise intervals of 5 cm. After 1 min of stabilization, the lung volume was recorded through gas compression [18]. This parameter was examined by a researcher who was blinded to the study conditions. Measurement of Oxidative Stress Injury Parameters. After the euthanasia of the rats, one part of the lower lobe of the left lung was homogenized with cold normal saline Histopathological Examination and Scoring. After the euthanasia of the rats, the middle lobe of the left lung was fixed in paraformaldehyde, embedded in paraffin, and cut into 6 μm thick sections for hematoxylin-eosin (H&E) staining. The lung injury score (LIS) [19] was evaluated by histopathology based on the following criteria: (1) neutrophil infiltration, (2) airway epithelial cell damage, (3) interstitial edema, (4) hyaline membrane formation, and (5) hemorrhage. Each criterion had five scores as follows: normal = 0, minimal change = 1, mild change = 2, moderate change = 3, and severe change = 4. All sections were evaluated via light microscopy by a pathologist who was blinded to this study. 2.9. Apoptosis Measurement by Immunohistochemistry. The tissues, middle lobe of the left lung, embedded in paraffin were used for terminal deoxynucleotidyl transferase dUTP nick end-labeling (TUNEL) staining to assess alveolar epithelial cell apoptosis (Zhongshan Golden Bridge Biotechnology, Beijing, China). The number of positive cells per 100 cells in five random fields from the same TUNEL-stained section was counted and recorded as the apoptotic index (AI) [20]. 
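As a small illustration of the semiquantitative scoring just described (not the authors' code), the following sketch computes per-criterion lung injury scores and the TUNEL apoptotic index from per-field grades and counts; all field values are invented examples.

```python
# Illustrative sketch (not the authors' code) of the scoring described above:
# LIS criteria graded 0 (normal) to 4 (severe) per field, and the TUNEL apoptotic
# index (AI) as positive cells per 100 cells averaged over five random fields.
from statistics import mean

LIS_CRITERIA = ("neutrophil_infiltration", "epithelial_damage",
                "interstitial_edema", "hyaline_membrane", "hemorrhage")

def lung_injury_scores(field_scores):
    """field_scores: dict criterion -> list of per-field grades on the 0-4 scale."""
    summary = {}
    for criterion in LIS_CRITERIA:
        grades = field_scores[criterion]
        assert all(0 <= g <= 4 for g in grades), "grades must be on the 0-4 scale"
        summary[criterion] = round(mean(grades), 1)
    return summary

def apoptotic_index(positive_per_100_cells):
    """AI: TUNEL-positive cells per 100 cells, averaged over five random fields."""
    return mean(positive_per_100_cells)

# Hypothetical animal from an ischemia-reperfusion group:
print(lung_injury_scores({c: [3, 4, 4, 3, 4] for c in LIS_CRITERIA}))
print(apoptotic_index([48, 55, 50, 53, 53]))   # 51.8, same order as the DIR-group AI
```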
The protein expression levels of caspase-3 and caspase-9 in lung tissues were measured using immunohistochemical staining (Zhongshan Golden Bridge Biotechnology, Beijing, China). The number of positive cells per section was counted in five random fields in each specimen to calculate the immunohistochemical score (IHS), which was determined by multiplying the quantity score (an estimation of the percentage of immunoreactive cells: no staining was scored as 0; 1%-10% of cells was scored as 1; 11%-50% was scored as 2; 51%-80% was scored as 3; and 81%-100% was scored as 4) by the staining intensity score (an estimation of the staining intensity: 0 = negative, 1 = weak, 2 = moderate, and 3 = strong). All sections were examined by a pathologist using a single-blind method [21]. Results 3.1. Experiment-Related Data. All rats were operated on successfully in this study. After the microvascular clip was removed, the chest was closed. Thus, the total time the chest was open was 62:3 ± 1:5 min in the CS group, 62:5 ± 1:0 min in the DS group, 63:5 ± 1:0 min in the CIR group, 63:7 ± 1:0 min in the DIR group, and 63:7 ± 1:6 min in the 3 μg DIRD group, 65:2 ± 1:5 min in the 5 μg DIRD group, and 63:0 ± 2:5 min in the 10 μg DIRD group. There were no significant differences among the groups. Additionally, the baseline data including weight (214 ± 4 g in the CS group, 214 ± 5 g in the DS group, 210 ± 8 g in the CIR group, 209 ± 5 g in the DIR group, and 211 ± 6 g in the 3 μg DIRD group, 211 ± 5 g in the 5 μg DIRD group, and 209 ± 7 g in the 10 μg DIRD group) and blood gas analysis in all groups showed no significant differences (Table 1). 7 BioMed Research International oxygenation index in the CIR, DIR, and DIRD groups was further decreased, and the oxygenation index in the DIR group was lower than that in the CIR group (P < 0:05). Compared with that in the DIR group, the oxygenation index in the DIRD group was increased significantly (P < 0:05). The oxygenation index in all groups at the T3 time point exhibited a similar trend as that at T2, as did the base excess and pH values (Table 1). Additionally, there was also a similar trend in the analysis of blood from the left pulmonary vein as the related indices shown above, which provided concentrationdependent effects (Table 2 and Figure 2). DEX Maintained Lung Tissue Structure. The histological changes showed minimal lung injury in the CS group, mild lung injury in the DS group, and severe lung injury in the CIR and DIR groups. In contrast, the histological changes showed moderate damage in the DIRD group. The LISs paralleled the histological changes. Therefore, the neutrophil infiltration LIS in the DS group (0.5 (0 to 2)) was higher than that in the CS group (0 (0 to 1)), and the neutrophil infiltration LIS in the CIR group (3 (1 to 3)) was higher than that in the CS and DS groups (P < 0:05). The neutrophil infiltration LIS in the DIR group (4 (3 to 4)) was higher than that in the CIR group, and the neutrophil infiltration LIS in the DIRD group (1.5 (1 to 2)) was lower than that in the DIR group (P < 0:05). The LISs of the other criteria also exhibited similar trends (Figure 4). Additionally, the lung injury score improved in a concentration-dependent manner ( Figure 5). DEX Decreased Lung Oxidative Stress Injury. Compared with that in the CS and DS groups, the MDA level was significantly increased in the CIR, DIR, and DIRD groups, and the MDA level in the DS group was higher than that in the CS group (P < 0:05). 
Compared with that in the CIR and DIR groups, the MDA level in the DIRD group was significantly decreased, and the MDA level in the DIR group was higher than that in the CIR group (P < 0:05). The levels of 8-OHdG and iNOS showed the same trend as those of MDA (P < 0:05). Additionally, the activities of GSH-PX, SOD, and T-AOC showed the opposite trend to those of MDA (P < 0:05) (Table 3). Additionally, the oxidative stress injury changed in a concentration-dependent manner ( Figure 6). DEX Decreased Apoptosis in Lung Tissue. Compared with that in the CS and DS groups, the number of TUNEL-positive cells in the CIR, DIR, and DIRD groups increased significantly, and the number of TUNEL-positive cells in the DS group was higher than that in the CS group (P < 0:05). Compared with that in the CIR and DIR groups, the number of TUNEL-positive cells in the DIRD group was significantly decreased, and the number in the DIR group was higher than that in the CIR group (P < 0:05). Therefore, compared with that in the CS (6:3 ± 2:5) and DS groups (14:8 ± 2:6), the AI in the CIR group (38:8 ± 6:9), the DIR group (51:8 ± 5:7), and the DIRD group (22:5 ± 2:6) was significantly increased (P < 0:05). Compared with that in the CIR and DIR groups, the AI in the DIRD group was significantly decreased, and the AI in the DIR group was higher than that in the CIR group (P < 0:05) (Figure 7). Additionally, a similar trend was observed in the IHS of caspase-3 and caspase-9 (Figures 8 and 9). 3.7. DEX Activated the Nrf2-Sulfiredoxin1 Pathway. Compared with those in the CS and DS groups, the Nrf2 and sulfiredoxin1 protein expression levels in the CIR and DIR groups were increased significantly (P < 0:05), and the Nrf2 and sulfiredoxin1 protein expression levels in the DIRD group were significantly increased compared with those in the DIR and CIR groups (P < 0:05) ( Figure 10). Additionally, the Nrf2 and sulfiredoxin1 pro-tein expression levels increased in a concentrationdependent manner (Figure 11). Discussion In this study, DM exacerbated lung I/R injury in rats, as demonstrated by decreases in oxygenation, lung compliance, GSH-PX, SOD, and T-AOC; severe tissue structure damage; and increases in 8-OHdG, iNOS, MDA, and apoptosis. The administration of DEX before reperfusion alleviated lung I/R injury, maintained oxygenation and lung compliance, and decreased apoptosis by increasing antioxidant capacity and reducing oxidative stress injury. Additionally, DEX 11 BioMed Research International activated the Nrf2-sulfiredoxin1 pathway, which was associated with the protective effects of DEX on diabetic lung I/R injury. Oxidative stress injury is the initial mechanism of I/R injury [22]. MDA is one of the most important products of membrane lipid peroxidation, 8-OHdG is the most commonly used biomarker of DNA oxidative damage, and iNOS induces a large amount of NO and causes oxidative stress injury when stimulated. SOD is an antioxidative metal enzyme, T-AOC is an indicator of the total antioxidative level, and GSH-PX is an important peroxidase that protects the cell membranes from peroxide damage. This study evaluated oxidative stress through these indices. Oxidative stress injury could lead to the intracellular generation of ROS, which directly leads to tissue injury and apoptosis [23]. The resultant oxidant stress has been implicated in the subsequent development of the inflammatory response, which increases apoptosis and decreases lung compliance and oxygenation [24]. 
In this study, severe oxidative stress injury was found, and worsened tissue structure, compliance, oxygenation function, and apoptosis were also observed after reperfusion, which was consistent with the findings of previous studies [25][26][27]. Figure 11: Protein expression of Nrf2 and sulfiredoxin1 by western blotting in different concentrations of DEX groups (n = 4). The expression of Nrf2 and sulfiredoxin1 in lung grafts was measured by western blotting after 120 min of reperfusion. (a) Representative bands of sulfiredoxin1, Nrf2, and β-actin; (b) expression of sulfiredoxin1 and Nrf2 normalized to β-actin. Nrf2: nuclear factor erythroid 2-related factor. † P < 0:05 vs. the DIR group; ‡ P < 0:05 vs. the 3 μg DIRD group; ┼ P < 0:05 vs. the 5 μg DIRD group. 12 BioMed Research International DM, characterized by persistent blood hyperglycemia, was related to approximately 1.5 million deaths in 2012 [2]; moreover, DM is an independent risk factor for morbidity and mortality after lung I/R injury [1]. We hypothesized that DM was associated with oxidative stress injury. First, hyperglycemia can induce the overproduction of superoxide in many kinds of tissue injuries [28]. Second, sustained hyperglycemia produces excessive ROS, resulting in damage to DNA, lipids, and proteins [6,29]. In this study, we also found that DM worsened lung I/R injury and induced excessive oxidative stress injury, which was also demonstrated by other studies [4,30]. Therefore, reducing oxidative stress injury is important for the treatment of diabetic lung I/R injury. DEX, a second-generation, highly selective α 2 -adrenergic receptor agonist, is used as a preoperative sedative and general anesthesia adjuvant in the clinic. In 2017, Fu et al. [31] showed that DEX decreased lipopolysaccharide-induced acute lung injury by regulating the levels of ROS and lipid peroxides. In 2018, Zhou et al. [32] reported that DEX could attenuate MDA levels and improve SOD levels to decrease oxidative stress injury in a rat ex vivo lung I/R model. In 2019, Liang et al. [11] also demonstrated that DEX decreased lung I/R injury by decreasing oxidative stress injury. Considering the antioxidative effects of DEX, we hypothesized that DEX could also exert a beneficial effect in diabetes models. Thus, this study applied DEX in a model of DM. The results showed that DEX reduced oxidative stress injury, decreased apoptosis, maintained cell structure stability, and improved lung function. These effects were concentration dependent. This finding further confirmed our initial hypothesis. Although the protective effects of DEX on lung I/R injury have been recognized, the mechanisms remain unclear. In 2017, Wu et al. [33] found that sulfiredoxin1 could prevent cerebral I/R-induced oxidative stress injury. In 2018, Zhang et al. [13] demonstrated that when Nrf2 expression decreased, its downstream protein, sulfiredoxin1, was decreased in an oxygen-glucose deprivation/reoxygenation model in primary neurons, and oxidative stress injury was exacerbated. Thus, we hypothesized that the activation of the Nrf2-sulfiredoxin1 pathway may decrease oxidative stress injury induced by lung I/R and attenuate lung I/R injury. In this study, DEX activated the protein expression of Nrf2 and sulfiredoxin1 and decreased the oxidative stress injury induced by I/R. And with the increased concentration of DEX, the more protein expressions of Nrf2 and sulfire-doxin1 were activated and the less oxidative stress injury was displayed. 
Through this study, we demonstrated the importance of the Nrf2-sulfiredoxin1 pathway in the protective effects of DEX on diabetic lung I/R injury. We know that activation of Nrf2 protects against oxidative stress injury induced by ischemia-reperfusion injury. Previous studies [14,16,34] found that DEX restores the decline of Nrf2 activity induced by I/R injury or inflammation to the level of the control group. In their studies, the Nrf2 expression decreased in the "injury" group and DEX restores the decline of Nrf2 activity to the level of the control group, which was not exactly the same as our current research. The reasons existed. First, in the normal tissues without injury, the Nrf2 was expressed relatively low in a certain degree, while the Nrf2 expression increased in the injured tissues. This may mean a kind of self-regulation after tissue injury, and self-regulation may have a certain limit. After DEX was used, Nrf2 was further activated. So the activation of Nrf2 was much more enhanced in the DEX group than in the control group. Second, it cannot be ruled out that it was related to the conditions of the rats, the establishment of the model, the degree of tissue damage, etc. The researches of Chen et al. [15] and ours [35] provided similar results with this study. Of course, these all need to be further verified and confirmed in our future research. There were some limitations to the present study. First, the DEX used in nondiabetic rats was not observed. In this study, we mainly observed the effect of DEX under diabetic condition, and there have been many articles that have confirmed the protective effects of DEX on I/R injury in the lungs of normal rats [11,12,36]. In the future, we will explore the effects of DEX at different concentrations in the normal and diabetic I/R injury groups to clarify whether the effect of DEX is simply to reduce I/R injury or whether it is specifically effective for I/R injury enhanced by diabetes. Second, the time-dependent effects of DEX on oxidative stress injury and protein expression were not measured, so the extent of the effect of DEX is unknown. Third, this DM model was established via a straightforward procedure to simulate the clinical symptoms of diabetic patients, but these symptoms could not be fully simulated. Then, HO-1 is a widely studied downstream target of Nrf2, and the relationship between sulfredoxin1 and HO-1 will be explored in the further, which will be more conducive to the research of the Nrf2 pathway. Finally, the Nrf2-knockout model will further demonstrate the mechanism by which DEX affects lung I/R injury in diabetic rats. Conclusions Rat lung I/R can result in oxidative stress injury, apoptosis, and worsened lung function, and DM can exacerbate these injuries. DEX treatment can alleviate diabetic lung I/R injury by decreasing oxidative stress injury, probably via the Nrf2-sulfiredoxin1 pathway.
2022-02-26T16:20:30.516Z
2022-02-24T00:00:00.000
{ "year": 2022, "sha1": "c6b8ccba6415469826ac3d809c314e690bebfc15", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/bmri/2022/5584733.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "99d9a53e111855b319d38e459448071e360abc20", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
251928946
pes2o/s2orc
v3-fos-license
On the Use of Water and Methanol with Zeolites for Heat Transfer Reducing carbon dioxide emissions has become a must in society, making it crucial to find alternatives to supply the energy demand. Adsorption-based cooling and heating technologies are receiving attention for thermal energy storage applications. In this paper, we study the adsorption of polar working fluids in hydrophobic and hydrophilic zeolites by means of experimental quasi-equilibrated temperature-programmed desorption and adsorption combined with Monte Carlo simulations. We measured and computed water and methanol adsorption isobars in high-silica HS-FAU, NaY, and NaX zeolites. We use the experimental adsorption isobars to develop a set of parameters to model the interaction between methanol and the zeolite and cations. Once we have the adsorption of these polar molecules, we use a mathematical model based on the adsorption potential theory of Dubinin–Polanyi to assess the performance of the adsorbate-working fluids for heat storage applications. We found that molecular simulations are an excellent tool for investigating energy storage applications since we can reproduce, complement, and extend experimental observations. Our results highlight the importance of controlling the hydrophilic/hydrophobic nature of the zeolites by changing the Al content to maximize the working conditions of the heat storage device. Section S1. Structural model of zeolites. All zeolites were generated following the same procedure regardless the Si/Al ratio. The unit cell of the pure silica FAU contains 192 Si atoms. We substitute some Si atoms by Al atoms to reproduce the experimental chemical composition (Si/Al ratio of 100, 2.61, and 1.06, HS-FAU, NaY, and NaX, respectively). HS-FAU, NaY, and NaX contain 2, 56, and 88 Al atoms, respectively. We generated the structures following the methodology described in previous works [1,2]. We started from the crystallographic positions of the pure silica zeolite from the International Zeolite Association (IZA) database [3] to construct the aluminosilicates. For each structure, we created a set of 50 configurations by randomly substituting some silicon atoms by aluminum atoms within the constraint of Löwenstein's rule and selected the most energetically favorable configuration. Then, we compensated the net negative charge of the adsorbents by placing sodium extra-framework cations in the most probable crystallographic positions reported in the literature. A detailed description of these extra-framework cations is given in references [4][5][6]. Once we added the extra-framework cations to their preferential location, we optimized the structures with energy minimization simulations using Baker's [7] method and a full-flexible core-shell potential. [8,9] Section S2. Parameterization of methanol-zeolite interactions. Interactions parameters between the molecules of water and the HS-FAU and NaX zeolites were developed in our previous work [10] using experimental adsorption isobars as reference data. In this work, we also computed the adsorption isobar of water in NaY, showing that the water-zeolite interactions are transferable in the whole range of Si/Al substitutions. Here we followed a similar procedure to obtain the methanolzeolite interactions. Starting from the cross-term Lennard-Jones parameters for each pseudo atom of the methanol-zeolite pairs, we iteratively modify the ε and σ parameters, creating a matrix of values smaller and larger than the initials. 
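The random Si→Al substitution step described in Section S1 can be sketched as follows; this is an illustrative snippet rather than the authors' workflow, and the toy connectivity map stands in for the real FAU framework topology that would be built from the IZA crystallographic structure.

```python
# Illustrative snippet: random Si -> Al substitution under Loewenstein's rule
# (no two Al atoms on adjacent T-sites). Repeating this with different seeds and
# keeping the energetically most favourable configuration mirrors the selection
# step described above.
import random

def random_al_substitution(t_site_neighbors, n_al, max_tries=10_000, seed=0):
    """t_site_neighbors: dict T-site index -> set of neighbouring T-site indices.
    Returns a set of n_al sites to convert to Al, none of which are adjacent."""
    rng = random.Random(seed)
    sites = list(t_site_neighbors)
    for _ in range(max_tries):
        chosen = set()
        for site in rng.sample(sites, len(sites)):   # shuffled pass over all T-sites
            if len(chosen) == n_al:
                break
            if t_site_neighbors[site].isdisjoint(chosen):
                chosen.add(site)
        if len(chosen) == n_al:
            return chosen
    raise RuntimeError("could not place the requested Al atoms under Loewenstein's rule")

# Toy example: 8 T-sites on a ring, place 3 Al atoms.
ring = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}
print(sorted(random_al_substitution(ring, 3)))
```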
The partial charges for the adsorbates and zeolites are kept fixed and given in Table S1. For each set of parameters, we computed five values of an adsorption isobar from the low to the high coverage regime. We first compare with experimental data for NaX to narrow the search of adequate parameters to reproduce the adsorption in the zeolite with the highest content of extra-framework cations. Then, we compare with the measured data for HS-FAU and finally for NaY. We repeated the process until we found reasonable agreement between experiments and simulations using the same set of Lennard-Jones parameters regardless of the Si/Al ratio. The optimal values are provided in Table S2 and the validation against experimental values is shown in Figure 2 of the manuscript. (Table S1 footnotes: a, reported in reference [11]; b, reported in reference [12]; c, reported in reference [13].) Table S2. Lennard-Jones parameters to describe the interactions between the zeolite and the water and methanol molecules. Section S3. Thermodynamical and mathematical model. We used the mathematical model based on the Dubinin-Polanyi theory [14,15] to obtain the energy storage properties of the zeolite-fluid working pairs. We first convert the adsorption isobars into their corresponding characteristic curves. The characteristic curve relates the volumetric uptake W (the volume of fluid adsorbed in the micropores [ml/g]) and the adsorption potential A [kJ/mol]. The adsorption potential is the molar free energy of adsorption with opposite sign (A = -ΔG_ads): A = R T ln[p_sat(T)/p] (1) and W = q(T)/ρ_a(T) (2), where p_sat(T) is the temperature-dependent vapour saturation pressure of the working fluid, q(T) is the loading of adsorbed fluid per mass of adsorbent [g/g], and ρ_a(T) is the density of the fluid confined within the micropores [g/ml]. We use the Peng-Robinson equation of state to calculate the saturation pressure of each fluid. [16] We obtained the loading of fluid from QE-TPDA experiments and GCMC simulations. We used the model of Hauer to obtain the density of the confined fluids within the micropores. [17,18] This model gives a linear relationship between the density of a fluid confined within the pores of an adsorbent and the operational temperature: ρ_a(T) = ρ_0 [1 - α_0 (T - T_0)] (3), where ρ_0 is the free liquid density at the reference temperature T_0 (283.15 K for water [15] and 298 K for methanol [19]) and α_0 is the free liquid thermal expansion coefficient of each working fluid at the reference temperature and 100 MPa [18,19] (3.871 × 10⁻⁴ K⁻¹ for water and 8.026 × 10⁻⁴ K⁻¹ for methanol). One of the properties of interest for an adsorption-based heat storage application is the thermochemical storage density, or simply storage density (SD). For a given pressure, the SD can be obtained by integrating the loading dependence of the specific adsorption enthalpy between the loadings reached at two selected adsorption and desorption temperatures (Figure S1): SD = ∫ Δh_ads dq, evaluated from q(T_des) to q(T_ads) (4), where the relation between loading and temperature (the adsorption isobar) can be obtained from the characteristic curve and Δh_ads is the specific adsorption enthalpy. The specific adsorption enthalpy, also referred to as the differential adsorption enthalpy, isosteric adsorption enthalpy, differential heat of adsorption, or isosteric heat of adsorption, is the amount of heat released or required during adsorption/desorption cycles. It should be noted that, in this context, Δh_ads is a positive value, even though the enthalpy change related to adsorption processes is defined as a negative value. Figure S1. Schematics of storage density calculation (SD) from the integration of the specific adsorption enthalpy over adsorption-desorption cycles.
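To make the conversion concrete, the following minimal sketch (assuming the reconstructed relations A = RT ln[p_sat(T)/p] and W = q/ρ_a(T) with the Hauer density model quoted above) turns one adsorption isobar into a characteristic curve; the saturation pressures and loadings in the example are approximate or hypothetical, not values from the paper.

```python
# Minimal sketch (not the authors' code): convert an adsorption isobar q(T) at
# pressure p into the characteristic curve W(A). p_sat(T) must be supplied
# (the paper uses a Peng-Robinson EOS); approximate values are used here.
import numpy as np

R = 8.314  # J mol^-1 K^-1

def hauer_density(T, rho_ref, alpha_ref, T_ref):
    """Adsorbed-phase density, rho_a(T) = rho_ref * (1 - alpha_ref * (T - T_ref)), g/ml."""
    return rho_ref * (1.0 - alpha_ref * (T - T_ref))

def characteristic_curve(T, q, p, p_sat, rho_ref, alpha_ref, T_ref):
    """T [K], q [g/g], p and p_sat [Pa] -> adsorption potential A [kJ/mol], uptake W [ml/g]."""
    T, q, p_sat = np.asarray(T), np.asarray(q), np.asarray(p_sat)
    A = R * T * np.log(p_sat / p) / 1000.0
    W = q / hauer_density(T, rho_ref, alpha_ref, T_ref)
    return A, W

# Water example (reference density and expansion coefficient quoted in the text;
# the isobar itself is hypothetical, p_sat values are approximate):
T     = [300.0, 350.0, 400.0, 450.0]
q     = [0.30, 0.24, 0.12, 0.03]
p_sat = [3.5e3, 4.2e4, 2.5e5, 9.3e5]
A, W  = characteristic_curve(T, q, p=1.2e3, p_sat=p_sat,
                             rho_ref=0.9997, alpha_ref=3.871e-4, T_ref=283.15)
print(np.round(A, 1), np.round(W, 3))
```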
The Dubinin-Polanyi theory also allows determining the specific adsorption enthalpy Δh_ads, which is defined as [19][20][21][22][23][24]: Δh_ads = ΔH_vap + A - TΔS (5), where ΔH_vap is the enthalpy of vaporization, A is the adsorption potential, and ΔS is the differential entropy variation. As mentioned above, Δh_ads is a positive quantity, as are the three terms in eq. 5. The enthalpy of vaporization is the energy change to transform a substance from the liquid phase to the gas phase, which is a positive value. This term is also found in the literature as the heat of condensation, usually when the adsorption enthalpy is referred to as the heat of adsorption. It should be noted that, in eq. 5, the enthalpy of vaporization (or heat of condensation), which depends on the temperature, is taken as a positive magnitude. The adsorption potential A is a positive quantity for pressure values below the vapour saturation pressure, and the entropy change for an adsorption process is negative (so that -TΔS is positive). The two latter terms in eq. 5 account for the total adsorption enthalpy change during adsorption, which can be related to the adsorption potential (the molar free energy with opposite sign, ΔG = -A) through the Gibbs relation ΔH = ΔG + TΔS, so that A - TΔS = -ΔH (6). It should be noted that the enthalpy is a magnitude that does not depend on the entropy change. As can be inferred from eq. 6, the entropic term in eq. 5 and its negative sign are introduced to cancel the entropic contribution of the molar Gibbs free energy. Finally, the entropy variation [25] is related to the slope of the characteristic curve as ΔS = α_a W (dA/dW) (7), where α_a is the thermal expansion coefficient of the fluid in the adsorbed phase, obtained from the density model. In summary, the mathematical model based on the Dubinin-Polanyi theory allows obtaining the storage densities of adsorbent-fluid working pairs from just an adsorption isotherm or isobar and some physicochemical properties of the fluids. These properties are the enthalpy of vaporization, bulk liquid density, thermal expansion coefficient, and saturation pressure. Another advantage of the characteristic curve is that it can be inverted to obtain the adsorption isobars or isotherms at different conditions. In addition to the adsorption isobars, we also computed the adsorption isotherms to check the validity of the Dubinin-Polanyi theory. Figure S2 shows the adsorption isotherms of both working fluids in the three zeolites.
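A compact numerical sketch of this storage-density workflow, using eqs. 5-7 as reconstructed above, is given below; the arrays are hypothetical placeholders meant only to show how Δh_ads and SD follow from a characteristic curve sampled along one isobar, not to reproduce the paper's results.

```python
# Sketch (under the reconstructed eqs. 5-7): dS = alpha_a * W * dA/dW,
# dh = dH_vap + A - T*dS, SD = integral of dh over the loading swing.
import numpy as np

def specific_adsorption_enthalpy(A_kJ, W_ml_g, T_K, dHvap_kJ, alpha_a):
    dA_dW = np.gradient(A_kJ, W_ml_g)        # slope of the characteristic curve
    dS = alpha_a * W_ml_g * dA_dW            # kJ mol^-1 K^-1, negative on adsorption
    return dHvap_kJ + A_kJ - T_K * dS        # kJ/mol, positive

def storage_density(q_g_g, dh_kJ_mol, molar_mass_g_mol):
    """Integrate dh (per gram of adsorbate) over loading q [g/g] -> kJ per g adsorbent."""
    return np.trapz(dh_kJ_mol / molar_mass_g_mol, q_g_g)

# Hypothetical water/zeolite isobar sampled between the desorption and adsorption states
# (loading increasing as temperature falls); constant liquid density used for brevity.
q  = np.array([0.05, 0.10, 0.15, 0.20, 0.25])        # g/g
W  = q / 0.997                                        # ml/g
A  = np.array([18.0, 14.0, 10.0, 6.0, 3.0])           # kJ/mol
T  = np.array([430.0, 400.0, 370.0, 340.0, 310.0])    # K
dh = specific_adsorption_enthalpy(A, W, T, dHvap_kJ=40.7, alpha_a=3.9e-4)
print(f"SD ~ {storage_density(q, dh, 18.015):.2f} kJ per g of zeolite")
```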
2022-08-31T01:16:27.976Z
2022-08-30T00:00:00.000
{ "year": 2023, "sha1": "2e8a737a3eefdb337bd94eb00d46a1e51b9e1694", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1021/acssuschemeng.2c05369", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "85d45644ba862b33bfbf6372b2e1432f9e765c9d", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
270417023
pes2o/s2orc
v3-fos-license
Comparative effectiveness of treatments on time to remission in atopic dermatitis: real-world insights Introduction: It remains unclear which therapy contributes to atopic dermatitis (AD) remission and to what extent. We aimed to clarify which therapy contributes to the treatment of AD by investigating the time-to-remission and remission hazard ratios for each therapy using real-world data. Methods: This retrospective cohort study included 110 patients diagnosed with AD after their fi rst visit to the Department of Dermatology at Fukuoka University Hospital between 2016 and 2022. The patients were categorized into six treatment groups: 1) topical treatment alone or topical treatment plus 2) ultraviolet light, 3) oral steroids, 4) oral cyclosporine, 5) dupilumab, and 6) oral Janus kinase inhibitors (JAKi). The topical therapy alone group served as the control, and the hazard ratios for remission (Investigator ’ s Global Assessment [IGA] 0/1) were calculated. Results: Forty patients achieved remission, while 70 did not (IGA ≥ 2) with the fi rst treatment regimen. A multivariate Cox proportional hazards analysis adjusted for age, sex, and severity at the fi rst visit (IGA) revealed that the hazard ratios for remission were 4.2 (95% con fi dence interval (C.I.): 1.28 – 13.83, p = 0.018) for the oral cyclosporine group, 5.05 (95% C.I.: 1.96 – 13, p = 0.001) for the dupilumab group, and 67.56 (95% C.I.: 12.28 – 371.68, p < .0001) for the oral JAKi group. The median time to remission was 3 months for JAKi, cyclosporine, and steroid was shorter than 6 months for dupilumab. No serious adverse events were observed. Conclusion: Oral therapy with small molecules requires a shorter duration to achieve remission. However, long-term safety and recurrence are important indicators. Introduction Atopic dermatitis (AD) is an inflammatory skin disease characterized by chronic eczema and pruritus, often associated with allergic diseases, such as asthma and allergic rhinitis, and is known to reduce patients' quality of life [1,2].An average of 7.3% of the population is affected by AD; however, there are considerable differences in its prevalence between countries [3][4][5][6].In Japan, the prevalence of AD in patients between 4 months and 30 years old is approximately 10% [4,7,8].Severe cases of AD are also characterized by the development of erythroderma, which affects work capacity and quality of work life and decreases labor productivity [9,10]. 
Recently, various new therapeutic agents have been introduced for the treatment of AD.In Japan, cyclosporine, an oral calcineurin inhibitor, was first used in 2008 for refractory AD of moderate or high severity; however, no new therapeutic agents have emerged in the following 9 years [6].In 2017, dupilumab, an anti-interleukin (IL)-4 receptor antibody [11,12], became available for the treatment of AD, followed by the Janus kinase (JAK) inhibitors, baricitinib in 2020 [13] and upadacitinib [14,15] and abrocitinib [16] in 2021.Nemolizumab [17], an anti-IL-31 receptor antibody, was introduced in 2022, and anti-IL-13 and anti-OX40 antibodies will become available in the near future [18][19][20].However, the abundance of new drugs makes it difficult for clinicians to decide which one to choose because there is yet to be sufficient real-world evidence regarding the effectiveness of these new systemic therapies.To clarify the effectiveness of these newly available therapeutic options, we investigated the proportion of patients and the time to achieve remission after the first systemic intervention. Study design This retrospective cohort study included patients younger than 65 years of age with AD who visited the Department of Dermatology, Fukuoka University Hospital, between 2016 and 2022.AD was diagnosed according to the Japanese Dermatological Association guidelines.The exclusion criteria were as follows: i) patients who had been followed up for less than 12 weeks at our hospital; ii) patients with mild symptoms (Investigator's Global Assessment [IGA] value <2); iii) patients receiving two or more systemic therapies simultaneously (e.g., combined use of dupilumab and oral cyclosporine); and iv) patients treated in clinical trials (Figure 1). Treatment and comparison groups The treatment initiated at the first visit for patients with AD included in this study was categorized into six groups, as follows: 1) topical treatment alone or topical treatments plus 2) ultraviolet light, 3) oral steroids, 4) oral cyclosporine, 5) dupilumab, and 6) oral JAK inhibitors.All six groups received topical treatment.For comparative analyses, group 1 served as the control. Endpoints Patients who experienced "remission" were those who initially had an IGA of 2 or higher and subsequently achieved an IGA of 0 or 1 (Figure 1).The endpoints of the study were the number of patients who achieved remission and the time needed to achieve remission with the first systemic treatment at our facility. Follow-up methods In this study, only the treatment initiated at the first visit was evaluated.If remission was achieved with the first treatment, the observation period was terminated, and the time (months) was recorded.However, if remission was not achieved, the observation period was terminated at 1) the time of switching to another treatment, 2) the time of quitting the hospital visit, or 3) the end of the observation period (31 December 2022) (Figure 2).Data on age, sex, severity of illness, treatment, duration of treatment, and serious adverse events were extracted from the medical records and tabulated. Statistical analysis Patients' background characteristics were compared between two groups (men versus [vs.] women, remission vs. 
failure) using the Mann-Whitney U test or unpaired t-test with Welch's correction or Fisher exact test.Categorical data among the IGA groups, namely, IGA-2, IGA-3, and IGA-4, were analyzed using the chi-square test.The cumulative unsuccessful rate (IGA ≥2) of each treatment group was estimated using the Kaplan-Meier method.Differences in remission across the treatment groups were evaluated using Cox proportional hazards models, and hazard ratios, 95% confidence intervals (C.I.), and p-values are reported.In the multivariate analysis, age, sex, and the first IGA value were included in the Cox proportional hazards models.Statistical Analysis System (SAS) version 9.4 and GraphPad Prism version 5 were used to perform the statistical analysis.The significance level was set at p < 0.05. Ethics The study protocol was approved by the Institutional Review Board of the Fukuoka University School of Medicine (approval number: U21-08-004).This study complied with the Declaration of Helsinki and the Medical Ethics Guidelines for Research Involving Human Subjects. Patient demographic characteristics A total of 178 first-time patients with AD were enrolled in the registry during the study period, and 110 patients (76 men and 34 women) were included in the analysis (Figure 1).The demographic characteristics of the patients are summarized in Table 1.There was no difference in the median age of men/ women at their first visit (29.5 years for men and 31 years for women; p = 0.866, Mann-Whitney U test).The median severity at initial diagnosis (IGA) was slightly higher in men than in women (3.5 vs. 3.0), but the difference was not significant (p = 0.344, Mann-Whitney U test). Discussion Several novel and effective treatments have recently become available for patients with moderate-to-severe AD; however, few studies have compared and evaluated real-world data.Therefore, determining the best treatment for patients is challenging. 
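As a hedged illustration of the survival analysis described above (not the authors' code), the sketch below uses the lifelines package on a made-up patient table to produce a Kaplan-Meier estimate of time to remission and a Cox proportional hazards model adjusted for age, sex, and baseline IGA; all column names, values, and the treatment indicator are hypothetical.

```python
# Illustrative time-to-remission analysis with lifelines on invented data.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.DataFrame({
    "months":    [3, 6, 12, 4, 9, 3, 18, 6],   # follow-up time to remission or censoring
    "remission": [1, 1, 0, 1, 0, 1, 0, 1],     # 1 = IGA 0/1 reached, 0 = censored
    "age":       [29, 35, 22, 41, 30, 27, 52, 33],
    "male":      [1, 0, 1, 1, 0, 1, 0, 1],
    "iga0":      [3, 4, 3, 2, 4, 3, 3, 4],     # severity (IGA) at the first visit
    "dupilumab": [1, 1, 0, 0, 1, 0, 0, 1],     # example treatment indicator
})

km = KaplanMeierFitter()
km.fit(df["months"], event_observed=df["remission"], label="all patients")
print(km.median_survival_time_)                # median time to remission (months)

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="remission")
cph.print_summary()                            # hazard ratios = exp(coef), with 95% C.I.
```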
We compared six individual treatment groups for AD to obtain evidence from our retrospective cohort.Possible factors of bias that could affect the effectiveness, such as age [21], sex, and initial severity, were adjusted.A Cox proportional hazards analysis of five systemic therapies with the topical-alone group as the control showed that the hazard ratio for remission increased for all systemic therapies (Table 2).In particular, JAK inhibitors were associated with the highest hazard ratio for remission at 67.56 (p < .0001),as all five patients in this group achieved remission.However, a very limited number of patients used it as the first treatment.Dupilumab was highly effective in 15 of 25 patients who achieved remission and was associated with a multivariate hazard ratio for remission of 5.05 (p = 0.001) (Table 2).However, the median time to remission for the dupilumab group was 6 months, longer than that for the JAK inhibitors, cyclosporine and steroid (3 months) groups.This observation suggests that dermatologists should inform patients that steady treatment with dupilumab will help achieve remission; however, it takes longer than treatment with small molecules.This advice may help the patients maintain their motivation for treatment.Many patients with AD are referred to our outpatient center because they are refractory to treatment at their primary dermatology clinic.We supported and educated these patients upon admission and instructed them on regular topical treatments [22].Although regular topical treatment remains essential, our results illustrate that topical treatment alone is not sufficient to improve the condition, at least in certain populations.Therefore, the appropriate choice of systemic therapy is important. Only five patients were treated with ultraviolet light therapy, and only one achieved remission.The hazard ratio for remission increased to 3.83, but this difference was not significant (p = 0.26).Oral steroids significantly increased the hazard ratio to 4.78 (p = 0.038); however, long-term management of AD with oral steroids is generally not recommended because of various side effects.Hence, oral steroids should be limited to a short period of treatment [4,6,[23][24][25].Dupilumab blocks IL-4 and IL-13 signaling and has been used for patients with an inadequate response to conventional therapy for both remission induction and maintenance [26][27][28].It can cause [5,11,29] allergic conjunctivitis as an adverse reaction, but this is usually mild [12,30,31].Newly developed JAK inhibitors, such as baricitinib, upadacitinib, and abrocitinib, are also used for moderate-tosevere or refractory AD [13,14,32].We found that these JAK inhibitors induced AD remission in a shorter period than dupilumab (Table 2).The efficacy of JAK inhibitors is dosedependent, and higher doses can provide better remission rates in refractory patients [33,34].However, JAK inhibitors often cause skin infections, such as acne and herpes viruses' reactivation [13,15,16,35].A study of 112 Japanese moderate to severe AD patients (aged 12 years and older) reported that when treated with upadacitinib, herpes zoster (HZ) was more likely to develop in patients who had a history of HZ than those without a history of HZ.This result may be a class effect of JAK inhibitors, and attention to patients, especially with a history of HZ, may urge them earlier visits for treatment [35].Furthermore, neutropenia, anemia, liver dysfunction, and renal dysfunction may rarely occur; therefore, regular blood tests are 
needed [33,34,36,37]. The safety of JAK inhibitors may depend on the disease and age, but major adverse cardiovascular events (MACE), serious infections, malignancy, and thrombosis can occur in patients with rheumatoid arthritis (RA) [38,39]. Although patients with RA are usually older than those with AD and have immunosuppressive conditions, the safety of JAK inhibitors in patients with AD should also be monitored in the long term. Long-term administration of cyclosporine has been reported to increase the risk of renal damage, malignancy [40,41], and MACE [42]. We found that JAK inhibitors and cyclosporine act faster; however, the risks of long-term administration should also be considered when choosing a therapeutic modality. In addition, our study did not assess the duration of remission or the frequency of relapse. The Japanese Dermatological Association guidelines for AD do not recommend using JAK inhibitors for maintenance [6]. The overall usefulness of a treatment should be determined on the basis of both long-term efficacy and safety; therefore, further studies are warranted. This study had some limitations. Firstly, it was conducted at a single institution, and the number of patients with AD was small. In particular, the number of patients who received JAK inhibitors, ultraviolet light therapy, or oral steroids as first-line treatment was limited. Secondly, the doses of cyclosporine and oral steroids were usually tapered, which may have affected our results. Finally, this study only assessed the time to remission with the initial treatment. AD waxes and wanes; therefore, a separate study of how long remission lasts and how often patients experience a relapse is needed. Further studies are also required to determine the most beneficial treatments for these patients. We are planning to define recurrence in real-world clinical practice and assess recurrence at our facility in a future study. Additionally, we will perform a multicenter prospective study to further validate the results of systemic therapy in real-world clinical practice. In conclusion, modern systemic therapies can induce remission of AD within relatively short durations; however, biologics and small molecules exhibit different characteristics. Safety and recurrence should also be considered in the long term. FIGURE 2. Follow-up methods for the study. TABLE 1. Patient demographic data. TABLE 2. Time to remission and hazard ratios following treatment. (Table 2 footnotes: a, only patients who achieved remission are included; the follow-up time of patients who did not achieve remission is not included. b, C.I.: confidence interval.)
2024-06-13T15:22:11.835Z
2024-06-10T00:00:00.000
{ "year": 2024, "sha1": "f37f1f76dc2068cb0d96582767ed58167186c535", "oa_license": "CCBY", "oa_url": "https://www.frontierspartnerships.org/articles/10.3389/jcia.2024.12974/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "24d1ce94a61d6440b23009fd51e8a0070917d644", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
366138
pes2o/s2orc
v3-fos-license
Intratumoral infusion of fluid: estimation of hydraulic conductivity and implications for the delivery of therapeutic agents. We have developed a new technique to measure in vivo tumour tissue fluid transport parameters (hydraulic conductivity and compliance) that influence the systemic and intratumoral delivery of therapeutic agents. An infusion needle approximating a point source was constructed to produce a radially symmetrical fluid source in the centre of human tumours in immunodeficient mice. At constant flow, the pressure gradient generated in the tumour by the infusion of fluid (Evans blue-albumin in saline) was measured as a function of the radial position with micropipettes connected to a servo-null system. To evaluate whether the fluid infused was reabsorbed by blood vessels, infusions were also performed after circulatory arrest. In the colon adenocarcinoma LS174T with a spherically symmetrical distribution of Evans blue-albumin, the median hydraulic conductivity in vivo and after circulatory arrest at a flow rate of 0.1 microl min(-1) was, respectively, 1.7x10(-7) and 2.3x10(-7) cm2 mmHg(-1) s. Compliance estimates were 35 microl mmHg(-1) in vivo, and 100 microl mmHg(-1) after circulatory arrest. In the sarcoma HSTS 26T, hydraulic conductivity and compliance were not calculated because of the asymmetric distribution of the fluid infused. The technique will be helpful in identifying strategies to improve the intratumoral and systemic delivery of gene targeting vectors and other therapeutic agents. sient state of infusion, because C is involved in the process. To uncouple K from C, the measurements for K must be performed in steady-state conditions (Swabb et al, 1974;Ford et al, 1991;Tokita and Tanaka, 1991). The goal of the present study was, therefore, to develop a new technique to estimate K in solid tumours, in vivo, at steady state and with a good spatial resolution. To this end, fluid (Evans blue-albumin in saline) was infused at low flow rates into the centre of a tumour with a special needle approximating a point source. The pressure increased gradually during fluid infusion and reached a plateau, indicating the attainment of steady state. At steady state, the pressure was measured at known distances from the source with a micropipette connected to a servo-null device. K and C were estimated in tumours with a spherically symmetrical distribution of the fluid infused as determined by the distribution of Evans blue albumin. In the sarcoma HSTS 26T, the distribution of Evans blue was asymmetric. K and C were estimated in LS 1 74T following the confirmation of a spherically symmetrical fluid distribution. C was estimated from the time constant required for the equilibrium of the infusion pressure. Animals and tumours Eightto ten-week-old athymic NCr/Sed-nu/nu mice from the Edward L Steele Laboratory animal facility were used. The mice *Supported by an NCI Outstanding Investigator Award (R35 CA5659 1) to RKJ and by fellowships from Homans Legat and the Norwegian Research Council to CB. The first two authors (YB and CB) contributed equally to this work. The sharp tip of a 22-gauge needle was cut and glued to one end of the steel wire. The other end of the steel wire was placed in the 22-gauge cylinder. A fixed opening of approximately 0.7 mm was produced between the tip and cylinder by bending the wire perpendicular to the main axis of the needle. 
PE 50 tubing was used to stabilize the opening between the tip and cylinder, and for connection with the infusion pump and pressure transducer. B Schematic diagram of the experimental set-up. The point-source needle (1) was placed in the centre of the tumour (radius R) and fluid infused at a constant flow rate. Steady-state radial pressure profiles were measured with high spatial resolution using a glass micropipette (2) in a region 0.3-2.0 mm from the edge of the infusion needle (3) were fed sterilized rodent food and water ad libitum. Two human tumours, the colon adenocarcinoma LS 1 74T and the sarcoma HSTS 26T, were implanted subcutaneously in the leg of each mouse. K was estimated when the tumours had reached a volume between 200 and 500 mm3. Infusion needle To infuse fluid into the centre of a tumour, an infusion needle was constructed to produce a spherically symmetrical fluid source ( Figure 1A). A very thin stainless-steel wire (35 mm long) was fixed to the sharp portion (2 mm in length) of a 22-gauge needle with epoxy glue. The other end of the wire was introduced in the lumen of a 22-gauge cylinder. A fixed opening of approximately 0.7 mm in length between the sharp portion and the 22-gauge cylinder was produced by bending the wire perpendicular to the main axis of the needle. To fix the opening between the sharp portion and the cylinder, PE-50 tubing was attached to the cylinder. The needle and the PE 50 tubing were glued to a Plexiglas arm mounted on a graded micromanipulator, which was used for controlling the depth of needle insertion. To measure the pressure of infusion, the needle was connected to the side port of the dome of a pressure transducer (Statham 23b, Spectramed, Oxnard, CA, USA) by the PE-50 tubing, and another side port on the dome was connected by PE 50 tubing to a l-ml syringe controlled by a constant flow infusion pump (Pump 22, Harvard Apparatus, South Natick, MA, USA). The compliance of the pressure transducer infusion pump setup was 0.04 ,ul mmHg-'. Pressure measurements The pressure was measured with micropipettes and a servo-null device as previously described previously (Boucher et al, 1990;Boucher and Jain, 1992). To prepare the micropipettes, thick-wall capillary tubing (0.86 mm outer diameter, 0.38 mm inner diameter) was pulled with a micropipette puller. Micropipettes with a tip diameter between 2 and 4 gm were filled by capillarity with a 1 M sodium chloride solution prepared from filtered, distilled, deionized water. The micropipettes and the servo-null device were used to measure the baseline interstitial fluid pressure (IFP) profiles in the tumour as well as the pressure profiles generated by the low flow rate infusions. A graded micromanipulator was used to insert micropipettes at known depths from the tumour surface. The pressure was measured for periods of 25-50 s, once a patent fluid communication between the tissue and micropipette was established. The following criteria were used to validate the measurements: (a) the fluid communication between the micropipette and the tissue was confirmed electrically, (b) the pressure remained stable after varying the feedback gain of the system and (c) the zero pressure in saline at the tumour surface did not change during the measurements. Experimental procedure During fluid infusion into the tumour centre, there is a transient redistribution of fluid. This phenomenon is controlled by K and the C of the tissue. 
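To make the transient-versus-steady-state point concrete, the following toy lumped-parameter sketch (an illustration added here, not a model from the paper) treats the infused region as a compliance C charged by the pump flow Q and drained through an effective Darcy conductance G, which scales with K and the geometry: the plateau pressure depends only on Q and G, while the approach to that plateau has a time constant C/G.

```python
# Toy lumped model (assumption for illustration only): dP/dt = (Q - G*(P - P0)) / C,
# giving P(t) = P0 + (Q/G)*(1 - exp(-t / (C/G))). Parameter values are hypothetical.
import numpy as np

def cavity_pressure(t_s, Q, G, C, P0=0.0):
    tau = C / G                                   # seconds
    return P0 + (Q / G) * (1.0 - np.exp(-t_s / tau))

t = np.linspace(0.0, 3000.0, 7)                   # s
Q = 0.1 / 60.0 * 1e-3                             # 0.1 ul/min expressed in cm^3/s
G, C = 1.0e-7, 5.0e-5                             # hypothetical conductance and compliance
print(np.round(cavity_pressure(t, Q, G, C, P0=8.0), 2))   # mmHg; plateau = P0 + Q/G
```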
At steady state, the steepness of the pressure profile around the infusion site is controlled by K only. Therefore, it is possible to evaluate K by measuring the pressure profile generated by the intratumoral infusion. The mice were anaesthetized with ketamine/xylazine (90/ 10 mg kg-'). During all procedures, mice were placed on a temperature-regulated heating pad and the body temperature was maintained between 36°C and 37°C. To minimize the deformation of the tumour during the insertion of the infusion needle, a small skin incision was made. The insertion depth of the needle was controlled by the moving arm of a graded micromanipulator. Before the infusion, the baseline IFP in the tumour was measured with micropipettes. To estimate K in LS 174T, 5% albumin and 0.25% Evans blue in saline (0.9%) were infused at a rate of 0.10 or 0.14 gl min-' with the infusion pump. Because of the low compliance of HSTS 26T, infusions were made at a flow rate of 0.05 Rtl min-' in that tumour. The steady-state pressure profile induced by the infusion needle was measured with micropipettes ( Figure 1B). With a reference mark on the infusion needle, the micropipette was inserted close to the infusion source at an angle of 450 with a graded micromanipulator under stereomicroscopic Figure 2 Distribution of Evans-blue albumin in HSTS 26T and LS174T tumours after infusions at flow rates of 0.05 and 0.14 RImin-' respectively. A Asymmetric distribution of Evans-blue in HSTS 26T. The infusion site is indicated by the arrowhead. The Evans blue has accumulated in two different regions, close to the infusion site and at the tumour surface. A faint Evans blue streak (arrow) can be seen between the two principal regions of Evans blue accumulation (magnification 9.0x). B Symmetrical distribution of Evans bluealbumin in the centre of a LS174T tumour. The infusion site (arrowhead) was located in the centre of the Evans blue accumulation (magnification 9.0x) 40. from a least squares non-linear regression to the experimental data. Note that the time constants of the increase and the decay in pressure were similar (approximately 10 min) guidance. The error in the radial position was ± 20 jm, as determined by the tip of the micropipette touching the tumour surface before and after the measurements. The pressure was measured within a distance of 0.3-2.0 mm from the edge of the infusion cavity. Measurements were not obtained closer to the infusion cavity, because the drop in pressure is very steep in that region. An error in the radial position close to the source would, therefore, lead to a relatively large error in the pressure profile. Furthermore, the region close to the source is characterized by flow irregularities. To evaluate whether tumour blood vessels reabsorbed the fluid infused, K was estimated after circulatory arrest by sacrificing the animals. Distribution of the Evans blue-albumin complex The assumption of a spherical distribution of the infusion solution was evaluated following the completion of the measurements. The tumour was cut in the centre, and 2-3 mm anterior and posterior to the fluid source. Measurement of water content To evaluate whether the infusion volume significantly modified the water content in the tumour, we compared the water content of tumours infused in vivo and tumours without infusion. After the infusion of 8 ,ul of saline, a 3-mm-thick slice from the infusion region was obtained from the centre of five tumours, and cut to obtain a piece of tissue of 3x3x3 mm. 
A similar piece of tissue was also obtained from the centre of five tumours that were not infused. The wet weight (Tw) was measured immediately after cutting. The dry weight (Td) was measured 24 and 48 h after drying at 50°C. The tumour water content was calculated from the wet and dry weights.

Histology

At the end of the infusion, the animals were sacrificed. The leg with the tumour was dissected from the animal, being careful not to move the needle in the tumour. The tumours with the needle were placed in fixative solution (formaldehyde 3.5%, methanol 1.5%) for 2-3 days. To examine the infused area, the tumour was cut in half and two tissue slices (2 mm thick) were then obtained from each side of the central cut. The tissue was processed for histology and embedded in the plastic resin JB4. Tissue sections (1-2 µm thick) were obtained and stained with toluidine blue.

Data analysis

K was estimated by applying Darcy's law for flow through a porous medium (Baxter and Jain, 1989),

u = -K ∇P,   (1)

where u is the fluid velocity and ∇P is the pressure gradient. Based on the experimental data, the baseline IFP was considered constant throughout the tumour. The radial steady-state pressure profile during infusion was fitted for estimation of K using this Darcy's law model,

P(r) = P0 + (Q / 4πK)(1/r - 1/R),   (2)

where P(r) is the pressure at radial position r, P0 is the baseline IFP, Q is the constant flow rate and R is the tumour radius. A regional distribution of K was obtained from contiguous points in the induced pressure gradients, using the differential form of the theoretical profile. The mean radial position was taken as (r1 + r2)/2. Only pressure measurements which were 1 mmHg above the baseline plateau pressure were included in the differential analysis. The values of K obtained were considered as average tissue K-values and were normalized (viscosity of 5% albumin in isotonic saline/viscosity of isotonic saline) at 20°C for comparison with literature values (Levick, 1987). Furthermore, we estimated tissue compressibility from the time constant of the pressure transients (Basser, 1992). The time constant was obtained from a least squares non-linear regression of a monoexponential function to the experimental data,

P(a, t) = P(a)0 ± ΔP(a)[1 - exp(-t/τ)],   (4)

where P(a, t) is the pressure in the infusion cavity, P(a)0 is the initial pressure in the infusion cavity before a step change in flow rate, ΔP(a) is the difference in steady-state pressure and τ is the characteristic time constant. From this time constant, the tissue compressibility φ was calculated following Basser (1992), with a denoting the radius of the infusion cavity. C was obtained from the product of φ and the tumour volume.

[Figure 4 caption: Typical radial pressure profiles measured during infusion at 0.1 µl min⁻¹ into a LS174T xenograft. Two sequential measurements of pressure were made at each spot. The radius of the infusion cavity (a) was 0.35 mm. The dotted line represents the average interstitial fluid pressure measured before infusion. Errors in IFP and radial position were determined from the measurement of zero pressure on the surface and from the surface coordinates, respectively. The solid line represents the least squares non-linear regression of the theoretical profile of Equation (2) to the experimental data collected during infusion. The value of K obtained in this tumour was 2.3 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹. Note that the pressure measured in the infusion cavity was not included in the determination of K.]

Statistical analysis

The data are given as the median and the range. Significant differences between two experimental groups were analysed with the Mann-Whitney U-test. The relationships between parameters were tested with a Spearman correlation.

RESULTS

To estimate K, we verified the assumption that spherically symmetrical fluid flow occurred. After infusions, the tumours were cut to evaluate the distribution of the Evans blue-albumin complex. In HSTS 26T, the distribution of Evans blue was asymmetric. The non-uniform distribution was observed in the region of infusion or as a significant accumulation of Evans blue separated from the infusion site (Figure 2A). Histological examination of tumour slices revealed that in some tumours the accumulation of Evans blue at some distance from the infusion site was associated with necrotic regions. In other HSTS 26T tumours, the distribution of Evans blue was associated with viable tumour tissue, and it was impossible to characterize the causes of the asymmetric distribution of the Evans blue-albumin complex. In contrast, in LS174T tumours at flow rates of 0.1 or 0.14 µl min⁻¹ the dye occupied, after approximately 90 min, a circular region of 2.5-4.0 mm in the centre of most tumours, thus confirming the assumption that spherically symmetrical fluid flow occurred (Figure 2B). The main mode of transport was, thus, bulk flow. The diffusion coefficient (3 × 10⁻⁷ cm² s⁻¹) of albumin in LS174T tumours (D Berk and RK Jain, unpublished data) cannot explain the volume of penetration of the Evans blue-albumin complex. Based on a length scale approximation (√(4Dt), where D is the diffusion coefficient and t is time), albumin in the LS174T tumour could penetrate by diffusion approximately 0.8 mm in 90 min. The data collected in HSTS 26T were not included in the analysis for K and C because of the asymmetric fluid distribution.

[Figure 5 caption, fragment: ...The mean radial position was taken as (r1 + r2)/2. Note that, at this flow rate, neither in vivo (n = 8) nor after circulatory arrest (n = 7) do the data suggest any radial dependence of K.]

At a flow rate of 0.1 µl min⁻¹, by both types of analysis, the median K-values were less in vivo compared with circulatory arrest; however, the differences were not significant (Table 1). If K was increased by the infusion (hydration of the tissue), the effect could be more pronounced close to the source, where the pressure was higher. A linear regression of K (estimated by differential analysis) vs distance from the source showed no significant differences in K-values in vivo (R² = 0.09; P > 0.1) or after circulatory arrest (R² = 0.11; P > 0.8) at a flow rate of 0.10 µl min⁻¹ (Figure 5). The median water content in the central regions of infused tumours was 83.4%, and 83.3% in tumours that were not infused; the difference was not significant (P > 0.3). Histological examination of the infusion area also suggested that the intratumoral infusion at the low flow rates used in this study did not alter the organization of the tissue. No pockets of fluid were found in the immediate periphery of the hole left by the needle. The width of the space between tumour cells was comparable in the proximity and at some distance from the infusion cavity. At a flow rate of 0.10 µl min⁻¹, K was apparently not affected by hydration. However, K increased by 80% at a flow rate of 0.14 µl min⁻¹ compared with 0.10 µl min⁻¹ (Table 1).
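The profile-fitting step described in the Data analysis section can be illustrated with a short numerical sketch. The following Python code is not the original analysis: it generates synthetic pressure readings from the point-source profile of Equation (2), using assumed values for the flow rate, tumour radius, baseline IFP and measurement noise, and then recovers K both by a least squares fit of the complete profile and by the pairwise differential estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative values only, not measured data.
Q = 0.10 * 1e-3 / 60.0   # 0.10 ul/min converted to cm^3/s
R = 0.4                  # cm, assumed tumour radius
P0 = 14.0                # mmHg, baseline interstitial fluid pressure
K_true = 1.7e-7          # cm^2 mmHg^-1 s^-1, order of magnitude reported for LS174T

def profile(r, K):
    """Steady-state pressure around a point source in a porous medium (Equation (2))."""
    return P0 + Q / (4.0 * np.pi * K) * (1.0 / r - 1.0 / R)

# Synthetic 'measurements' 0.4-2.0 mm from the infusion cavity, with noise.
rng = np.random.default_rng(1)
r_data = np.array([0.04, 0.06, 0.08, 0.11, 0.15, 0.20])   # cm
p_data = profile(r_data, K_true) + rng.normal(0.0, 0.3, r_data.size)

# Least squares non-linear regression of the complete profile.
K_fit, _ = curve_fit(profile, r_data, p_data, p0=[1.0e-7])
print(f"K from full-profile fit: {K_fit[0]:.2e} cm^2 mmHg^-1 s^-1")

# 'Differential' analysis: K from pairs of contiguous points,
# using P(r1) - P(r2) = Q/(4*pi*K) * (1/r1 - 1/r2).
dP = p_data[:-1] - p_data[1:]
dx = 1.0 / r_data[:-1] - 1.0 / r_data[1:]
print(f"median differential K: {np.median(Q * dx / (4.0 * np.pi * dP)):.2e}")
```

In this synthetic setting both estimates recover the assumed order of magnitude of K, mirroring the agreement between the two types of analysis reported above.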
In previous studies, we have shown that the baseline IFP throughout a tumour is quasi-uniform, except for a sharp pressure drop in the tumour periphery (Boucher et al, 1990; Boucher and Jain, 1992). In the LS174T tumour, the IFP was also uniform in the centre and dropped close to the surface. The median tumour IFP in vivo was 14.0 mmHg (range 7-23.5). The placement of the infusion needle in the tumour did not modify the steady-state IFP profiles. Figure 3 shows typical changes in the infusion needle pressure in a tumour during infusion in vivo, at a rate of 0.10 µl min⁻¹. The infusion pressure reached steady state within 25-60 min, with a time constant of 7-18 min. The pressure profile induced by the infusion was measured at steady state (Figure 4). At 0.3-0.5 mm from the infusion source, the pressure measured with micropipettes was 3-12 mmHg higher than the baseline IFP in the tumour before the infusion. The pressure dropped to the baseline IFP value in the tumours within a radius of 1-2 mm from the source. In LS174T, at a flow rate of 0.1 µl min⁻¹, median K in vivo was 1.7 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹, both by differential analysis and by least squares non-linear regression of the theoretical profile to the measured pressure profile. After circulatory arrest, median K by differential analysis was 2.8 × 10⁻⁷ and from a fit to the complete profile 2.3 × 10⁻⁷ (Table 1). No significant differences were found by estimating K by differential analysis or by fitting the complete profile (Table 1). Because of the high hydraulic permeability of the tumour vasculature (Sevick and Jain, 1991), we expected that the fluid infused would be reabsorbed in part by the blood vessels, and thus the estimate of K in vivo could be an overestimate. To evaluate whether fluid was reabsorbed, K was measured after circulatory arrest.

DISCUSSION

K has been measured with in vitro and in vivo techniques. Two approaches are used to measure K in vitro: the measurement of fluid extrusion from tissue under compression, or the application of a pressure head across a tissue slice of known thickness. With in vitro techniques, the influence of compression and hydration on the measurements of K has to be considered (Levick, 1987). K in vivo is obtained from the measurement of fluid velocity resulting from a natural or an applied pressure gradient (Guyton et al, 1966; Swabb et al, 1974; Levick, 1979; DiResta et al, 1993). Most in vivo measurements of K are limited by the poor definition of geometric dimensions and by difficulties in separating K from C. The present technique measures with micropipettes the radial pressure profile generated by a constant-flow infusion. The resolution provided by micropipettes permits precise determination of the distance between the infusion source and the tip of the micropipette. Measurements of K in vivo or after circulatory arrest are made at steady state; the contribution of C is, thus, negligible. K can be measured by differential analysis at different radial positions from the infusion needle or by fitting the complete pressure profile. Because the technique is dependent on the spherically symmetrical distribution of the fluid infused, heterogeneity in fluid flow limits the determination of K (e.g. the HSTS 26T tumour). Potentially, K could be estimated from the pressure in the infusion needle. This estimation would be less accurate because it is not possible to determine precisely the size of the cavity (needle radius + tissue displacement).
A small error in the estimation of this dimension would lead to a large error in the K estimation. A potential limitation of the present and also of previous in vivo techniques for estimating K is the possibility that the fluid infused could be reabsorbed by blood or lymphatic vessels (Guyton et al, 1966; Levick, 1980). We addressed the issue of reabsorption by estimating K in vivo and after circulatory arrest. Mellander (1960) demonstrated that fluid reabsorption in the hind limb microcirculation ceased completely within 1 min of circulatory arrest. Recent data suggest that this is also the case in tumours (Netti et al, 1995). In the present study, K-values were similar in vivo and after circulatory arrest, thus suggesting that reabsorption of the fluid infused is minimal or zero in the LS174T tumour. A possible reason why reabsorption was not significant in vivo could be local properties in the region of the infusion. The infusions were done in the tumour centre. Generally, vascular density and blood flow in experimental tumours are reduced in the centre compared with peripheral regions (Thompson et al, 1987; Jirtle, 1988; Tozer et al, 1990). However, in the region surrounding the infusion needle, blood vessels were observed on histological slides. In some cases, the blood vessels appeared congested with red blood cells, suggesting a stagnant flow. If blood perfusion was stopped in the vicinity of the needle, reabsorption would be minimal and most of the fluid would be transported by bulk flow through the interstitial matrix. The fact that K-values were similar in vivo and after circulatory arrest in LS174T tumours demonstrates that K can be measured in two different types of preparations without being modified. However, this cannot be a general rule; it is possible that in other tumour types K estimates could be significantly different in vivo and after circulatory arrest. If K is estimated in vivo, it should also be measured after circulatory arrest to determine whether reabsorption is significant. Several studies have shown that K is modified by tissue hydration (Guyton et al, 1966; Fatt, 1968; Zawieja et al, 1992). At a flow rate of 0.10 µl min⁻¹, we did not detect any influence of the infusion on K. No significant differences in water content could be found between infused and non-infused tumour tissue. We speculated that hydration and K could be higher closer to the infusion needle. However, K estimated by differential analysis did not change with distance from the source. At a constant flow of 0.14 µl min⁻¹, K was 80% higher than at a flow of 0.10 µl min⁻¹. This increase might be due to reabsorption or hydration. Reabsorption was probably not playing a major role, because the pressures induced at infusion rates of 0.10 and 0.14 µl min⁻¹ were similar. In normal tissues, K-values span four orders of magnitude. High K-values have been measured in lung tissue and the vitreous body, and lower K-values have been found for cartilage, corneal stroma and subcutaneous tissue (Levick, 1987; Lai Fook et al, 1989). Swabb et al (1974) reported the first measurements of K for tumour tissue: K in vitro for a slice of rat hepatoma (0.3 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹) was fivefold higher than in normal subcutaneous tissue. Our in vivo values of K (1.7 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹) for the colon adenocarcinoma LS174T are almost sixfold higher than K for slices of rat hepatoma.
In another study with the same tumour (LS174T), we found that K in vitro (2.4 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹) measured with a flow chamber was comparable to the present in vivo values (C Znati, Y Boucher and RK Jain, unpublished data). Swabb et al (1974) also estimated K in vivo by measuring the unsteady flow from micropore chambers embedded in a subcutaneous tumour, and reported values that were tenfold lower than their in vitro values. Because of the uncertainty in the calculations to obtain K, it is possible that their in vivo values were not accurate, as acknowledged by Swabb et al (1974). In a recent study, DiResta et al (1993) calculated from IFP gradients and bulk flow measurements a K-value of 59 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹ in a human neuroblastoma transplanted into immunodeficient animals.

Estimation of C and implication for estimation of Lp

By estimating K in vivo at steady state with the present technology, it would be possible to estimate other fluid transport parameters. The transient evolution of the infusion pressure (Figure 3) is controlled by the product of C and K (Ford et al, 1991; Basser, 1992). From the estimate of the time constant of this evolution and K, it is possible to estimate compressibility and C (Table 1). Because these measurements of compressibility and C are highly dependent on the assumptions and the formulation used to calculate them, they have to be considered as first-order approximations. A better estimate could be provided by a proper mathematical model that would describe more accurately the transient phenomena. The steepness of the baseline IFP profiles in tumours can be defined by the parameter α² (Jain and Baxter, 1988; Baxter and Jain, 1989), α² = R²(Lp/K)(S/V), where R is the tumour radius, Lp the vascular hydraulic conductivity and S/V the surface area per unit tissue volume for transcapillary exchange. By knowing K and obtaining α² from measurements of peripheral IFP profiles, Lp could also be estimated. The accuracy of the Lp determination will be dependent on careful estimates of S/V and K.

Implications for systemic and intratumoral delivery of therapeutic agents

The tissue hydraulic conductivity (K) is a key determinant of the systemic and intratumoral delivery of therapeutic agents. A relatively low K can contribute to the elevated IFP which has been associated with the poor accumulation of macromolecules (e.g. monoclonal antibodies) in tumours (Sands, 1992; Jain, 1993). We previously demonstrated that the IFP profiles in experimental tumours were uniform throughout the tumour and dropped steeply in the periphery (Boucher et al, 1990). In a mathematical model for fluid transport in solid tumours, the ratio Lp/K was identified as a determinant of the steepness of the IFP profiles (Jain and Baxter, 1988; Baxter and Jain, 1989). In a subsequent study, we found that the superficial microvascular pressure was similar to the central IFP, whereas in the tumour periphery the microvascular pressure was significantly higher than the IFP (Boucher and Jain, 1992). The IFP distribution in tumours suggests that fluid filtration is negligible in the centre and high in the periphery. Because extravasation of macromolecules and filtration of fluids are potentially coupled, this could explain the poor accumulation of macromolecules in the central areas of tumours (Jain and Baxter, 1988; Baxter and Jain, 1989). A large increase in K could reduce the IFP in the centre of tumours and, thus, increase the filtration of fluids and the extravasation of macromolecules.
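The relation α² = R²(Lp/K)(S/V) can be rearranged to estimate Lp once K is known. The short snippet below is only a worked illustration of that rearrangement: the values of α², R and S/V are hypothetical placeholders, not quantities measured in this study.

```python
# Rearranging alpha^2 = R^2 * (Lp / K) * (S / V)  =>  Lp = alpha^2 * K / (R^2 * (S/V))
K = 1.7e-7         # cm^2 mmHg^-1 s^-1, in vivo estimate for LS174T reported above
alpha_sq = 100.0   # dimensionless; hypothetical value from peripheral IFP profiles
R = 0.4            # cm; hypothetical tumour radius
S_over_V = 200.0   # cm^-1; hypothetical vascular surface area per unit tissue volume

Lp = alpha_sq * K / (R ** 2 * S_over_V)
print(f"Lp ~ {Lp:.1e} cm mmHg^-1 s^-1")
```

As the text notes, the usefulness of such an estimate depends entirely on how carefully S/V and K are determined.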
To improve the penetration of monoclonal antibodies, gene vectors and other therapeutic agents in normal or tumour tissues, interstitial infusion methods have been developed (Morrison et al, 1994; Order et al, 1994). The present technique is able to characterize key determinants (K, compliance, pressure gradients and reabsorption) that will influence the success of intratumoral infusions. If K is elevated, a uniform distribution of the infused drug throughout the tumour would be expected. However, if K is very low, the enhancement provided by intratumoral infusion may be less. As K decreases, the volume of tissue penetrated by fluid will reduce to a small region around the infusion source. Heterogeneity in K or in compliance could result in the asymmetric distribution of therapeutic agents or other molecules that are infused into the tumour. Significant differences in the distribution of Evans blue-albumin were found between HSTS 26T and LS174T tumours. In general, in LS174T tumours the distribution of Evans blue-albumin was uniform, whereas in HSTS 26T tumours the distribution was asymmetric. The asymmetric distribution of Evans blue-albumin in HSTS 26T tumours was observed in viable and necrotic regions. Large deposits of Evans blue were associated with necrotic regions at a distance from the infusion site, thus suggesting that necrotic areas could represent preferential pathways (higher K) and sinks for the accumulation of therapeutic agents (Figure 2A). The non-uniform distribution of therapeutic agents could significantly limit the success of intratumoral infusions. To increase K, enzymatic degradation of the interstitial matrix could be used. Degradation of the matrix with hyaluronidase increased K by ten- to 20-fold in muscle fascia (Day, 1952), and by a factor of 24 in the lung (Lai Fook et al, 1989). In a preliminary study, we measured K in vivo in two tumours following the intratumoral infusion (0.1 µl min⁻¹) of hyaluronidase. By least squares non-linear regression of the measured pressure profile, the values were 7.7 × 10⁻⁷ and 11.0 × 10⁻⁷ cm² mmHg⁻¹ s⁻¹. These two values are greater than the maximum values of K in the control group (Table 1). Further studies are needed to evaluate the effect of enzyme digestion on K and on the IFP profiles in solid tumours. In conclusion, we have developed a new technique to estimate K in vivo and after circulatory arrest in tumours with a spherically symmetrical distribution of the fluid infused. The precise spatial resolution of the micropipette technique provides a significant advantage over other techniques for estimating K in vivo. Most importantly, the technique can be used to measure and manipulate fluid transport parameters in tumours to improve the delivery of therapeutic agents.
Optimal control of a probabilistic dynamic for epidemic spreading in arbitrary complex networks

This paper presents a discrete-time probabilistic dynamic for simulating contact-based epidemic spreading based on a discrete-time Markov chain process; in particular, the attention is addressed to the susceptible-infectious-removed (SIR) model, and the phase diagram of this model is presented. Then, this report presents the set of equations that represent the optimal control strategies, by means of Pontryagin's maximum principle, in two different cases, a vaccination policy and a combined vaccination-hospitalization policy, and shows a numerical simulation, with the standard forward-backward sweep procedure, for these equations.

I. INTRODUCTION

Modeling how diseases spread among individuals was introduced by Kermack and McKendrick [18,19,25] in 1927; they introduced a model known as the susceptible-infectious-removed (SIR) epidemic model, and they supposed that individuals of a population could be divided into three non-intersecting classes: susceptible individuals, who are healthy but can contract the disease; infectious individuals, who have contracted the disease and can transmit it; and removed individuals, who have recovered and cannot contract the disease anymore. If the epidemic is assumed to be Markovian and to proceed via infection of nearest neighbors, the SIR model is represented by the classical Kermack-McKendrick relations. This kind of problem is called the general epidemic process (GEP) [10]; it is a stochastic multiparticle process that describes a vast number of phenomena in nature; for example, a simple stochastic and space-dependent model on a regular lattice could describe fairy rings, chemical solitary waves [12,15], percolation [14], forest fires, and contact-based epidemic spreading. The study of complex networks has provided better tools to study the GEP in the complicated topologies that describe biological systems: for example, scale-free networks can represent sexual contacts, the Internet, and many other social, technological, and biological networks [2,7,28]. One of the most important goals of these studies [29] is to find an optimal feedback control strategy to minimize the epidemic, and optimality in this case means balancing some control costs against performance. For example, the optimal solution could provide the best vaccination or hospitalization strategy to minimize the cost of an epidemic [6]. Most of the studies on this problem use the mean-field (MF) approximation [7,29], and these kinds of studies have evaluated the macroscopic features of the system, like critical behavior and phase transitions [21,37]; however, MF is not designed to provide information about individual nodes. To obtain information at the individual (microscopic) level of description, it is necessary to use Monte Carlo (MC) simulations. The main problem of MC models is the computational effort needed to evaluate the expectation values of microscopic quantities. Another point against the Monte Carlo approach is that the optimal solution must be found through a random search algorithm, and the computational effort could be cumbersome due to the dimension of the space of possible configurations of a complex network.
In this paper, the microscopic Markov-chain approach (MMCA) [9] for the SIR model is presented; the vaccinated class is then added to the model, and the optimal control system for this model is studied in two different cases: the first case considers only a vaccination-of-the-susceptible policy, and the second case considers two possible control strategies, the vaccination of the susceptible and the hospitalization of the infected. Some numerical examples are also presented.

II. THE MODEL

Consider the dynamics of an SIR epidemic process over a complex network composed of N nodes; an example of the compartment diagram that represents the SIR model is shown in Figure 1 (top). The topology of the contacts of the network is completely determined by its adjacency matrix A [7]. Without loss of generality, A is assumed to be symmetric, and its elements a_{i,j} are assumed to take only two possible values: 0 and 1. This hypothesis is not necessary, and the general case, where the strength of the link is considered, can be treated in the same manner as in this work. Following the construction of Johnson [17] for a stochastic dynamic, a vector space V_k, k = 1, . . ., N, is associated with every node; by defining a basis, a non-natural isomorphism V_k ≅ R^{d_k} is built, where d_k is the number of possible configurations of the k-th node, and d_k = d is assumed ∀k. Then, a vector that represents its state is attached to each node. For example, in the SIR model, a three-component state vector is obtained for every node, whose components A_k(t_i), for A = S, I, R, are treated as the probability that the k-th node is in configuration A at time t_i. This hypothesis implies that every v_k(t_i) is normalized by means of the 1-norm. In addition, the system is represented by a state Λ(t_i) ∈ ⊗_{k=1}^{N} V_k. α is defined as the probability per unit of time that an infectious individual spreads the disease to one of its neighbors, and δ as the probability per unit of time that an infected node is removed. Then, the probability of node k not being infected by any neighbor is given by Equation (4), and the discrete-time equations that describe the dynamics are given by Equations (5). This kind of formalism is called MMCA in [9], since it is equivalent to associating a discrete-time Markov chain with every node of the network; see, for example, Figure 1 (bottom) for the microscopic Markov chain of the SIR model. The solution is easily obtained if initial conditions are given, and in [9] it is proven that this formalism can outperform heterogeneous MF (HMF) and MC approaches. Equations (5) can always be solved if an initial condition is given, and it is possible to accommodate initial conditions with some degree of uncertainty, which is common when dealing with real data. It is known that the SIR model has two different phases (equilibrium states) [10,11,14,21] for t_fin → ∞: one where all the susceptible become infected or removed, and another where some susceptible manage to survive without contracting the disease.

III. SINGLE CONTROL

This section shows the construction of an optimal control system for the discrete-time process, as presented in [5,22,32]. In addition, there is a policy maker who is supposed to be able to determine when it is necessary to vaccinate a node. To simulate this procedure, the vaccinated class was added to the system, and the model in this case is called the susceptible-infectious-removed-vaccinated (SIRV) model [25]; now, the possible microstates of a node are d = 4.
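For illustration, the MMCA dynamics described in Section II can be written in a few lines of code. The sketch below assumes the standard MMCA form of the SIR update (the non-infection probability q_k(t_i) = Π_j [1 − α a_{k,j} I_j(t_i)] and the corresponding updates of S_k, I_k and R_k); it is an illustration of the formalism, not the authors' implementation, and the small ring network used as an example is arbitrary.

```python
import numpy as np

def mmca_sir(A, S0, I0, R0, alpha, delta, steps):
    """Discrete-time microscopic Markov-chain (MMCA) SIR dynamics on a network.

    A           : (N, N) adjacency matrix (0/1, symmetric)
    S0, I0, R0  : initial per-node probabilities (length N, summing to 1 per node)
    alpha       : per-contact infection probability per unit time
    delta       : removal probability per unit time
    """
    S, I, R = map(np.array, (S0, I0, R0))
    history = [(S.copy(), I.copy(), R.copy())]
    for _ in range(steps):
        # Probability that node k is NOT infected by any neighbour (standard MMCA form).
        q = np.prod(1.0 - alpha * A * I[None, :], axis=1)
        S_new = S * q
        I_new = (1.0 - delta) * I + S * (1.0 - q)
        R_new = R + delta * I
        S, I, R = S_new, I_new, R_new
        history.append((S.copy(), I.copy(), R.copy()))
    return history

# Tiny example: a 5-node ring with one initially infected node.
N = 5
A = np.zeros((N, N))
for k in range(N):
    A[k, (k + 1) % N] = A[(k + 1) % N, k] = 1.0
S0, I0, R0 = np.ones(N), np.zeros(N), np.zeros(N)
S0[0], I0[0] = 0.0, 1.0
traj = mmca_sir(A, S0, I0, R0, alpha=0.2, delta=0.1, steps=100)
print("final mean removed probability:", round(float(traj[-1][2].mean()), 3))
```

Each step costs one pass over the edges of the network, which is why this approach is cheap compared with averaging many Monte Carlo realizations.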
The set of admissible controls (the possible actions of the policy maker) is given by the vaccination probabilities ω_k(t_i), where ω_k(t_i) is the probability at time i that the k-th susceptible node becomes vaccinated; here, some degree of failure of the therapy is assumed: a vaccinated node can become infectious with probability γ < α, which represents a non-efficacious vaccination [25] or the development of resistance to the disease [33]. A corresponding non-infection probability is defined for a vaccinated node k, where V_k(t_i) is the probability that the k-th node is vaccinated at time t_i. Then, the dynamics of the probabilities is driven by 4N finite-difference time equations; the last equation is obtained from the normalization condition. The compartment diagram for this model is shown in Figure 2 (left), while Figure 2 (right) shows the microscopic Markov chain associated with every node of the complex network. To proceed further, 3N auxiliary functions must be defined. In addition, the cost must be defined: its first term represents a payoff given by the final state, while the second term represents a cumulative cost. Without loss of generality, the cost to be maximized is given by Equation (11). This cost simulates a situation in which there is a cost associated with vaccination and a cost associated with infectious individuals, which can represent, for example, the loss of work days. This definition fixes the behavior of the control, and it could be defined in a different manner to change the behavior of the policy maker; see, for example, [6]. However, the only requirement is that the cost is quadratic in the control (i.e., ω_k(t_i)), and the same procedure of this work could be used to consider more realistic costs. Instead of the finite-horizon optimal control chosen here, an infinite-horizon control could be obtained by adding a time discount to the cost. However, the first term of (11) gives a cost that increases with time due to the intrinsic dynamics of the system, so it is not necessary to use a time-discount function. In addition, if the cost depends on R_k(t_i), then it is not possible to determine R_k(t_i) from the normalization condition. Then, to build an optimal control system, a Hamiltonian must be defined (Equation (12)), where λ_{k,A}(t_i), A = S, I, V, are the adjoint functions. First, the transversality conditions (Equations (14)) are imposed. Then, from the definition (12), the adjoint functions for t = t_fin are obtained, and recurrence relations for the adjoint functions follow by solving the previous equations, in terms of the functions h_{k,j}(t_i) and hv_{k,j}(t_i) given in Equations (19) and (20). Eventually, by applying the extension of Pontryagin's maximum principle to discrete systems [30], the characterization of the optimal control is obtained by imposing ∂H_i/∂ω_k(t_i) = 0. Now, a more general case of a contact epidemic is considered, supposing that there is a natural time decay of the protection given by the vaccination: there is a probability θ per unit of time that a vaccinated node becomes susceptible. In addition, there is the possibility that the policy maker cures an infected node, for example, through the hospitalization of a person with an infectious disease; this action is represented by a transition from infected to vaccinated.
Thus, the probability per unit of time of this event is given by the new set of controls τ_k(t_i); there are now 2N controls, and the set of possible actions of the policy maker is defined accordingly. The compartment diagram for this system is presented in Figure 3 (top). Given the initial conditions, the discrete-time dynamics of this system is completely specified by the 4N equations (24). Again, these equations completely specify the time dynamics of the probabilities if initial (probabilistic) conditions are given. In Figure 3 (bottom), the microscopic Markov chain associated with Equations (24) is presented. In this case, the cost that is maximized in the following sections is given by Equation (25); it includes a term that can mimic hospitalization and drug-treatment costs. As in the previous section, a Hamiltonian can be defined to obtain the recurrence relations for the adjoint functions (Equations (26)), where the functions h_{j,k}(t_i) and hv_{j,k}(t_i) are defined in Equations (19) and (20). The transversality conditions are the same as previously given (see Equations (14)). Again, by imposing the optimality condition on the Hamiltonian, the 2N equations that characterize the optimal control are obtained (Equations (30) and (32)). The optimality system consists of the state Equations (24) with initial conditions, the adjoint Equations (26) with the final-time (transversality) conditions, and the characterizations of the optimal control (30) and (32).

V. NUMERICAL EXAMPLES

To show some examples, a random complex network with 30 nodes is first chosen such that the adjacency matrix is symmetric, a_{i,j} = a_{j,i}, ∀ i, j = 1, . . ., 30. In addition, every node must have at least one connection with another node; the reference complex network is shown in Figure 4. Then, the initial condition is fixed by choosing a node at random and setting I_k(t_0) = 1. The last choice concerns the constants, which were set to α = 0.2 and δ = 0.1 for the SIR model. The choice of these constants and of the network was made to ensure that the system has critical behavior; in other words, for t_fin → ∞, R_k(t_fin) = 1 ∀ k = 1, . . ., 30, which is the so-called susceptible-free equilibrium. Figure 6 shows the phase diagram of the SIR model on the reference network. This behavior can also be seen in Figure 5; these plots present the overall probability P_A(t_i) of a node being susceptible, infected, or removed, with A = R, I, S or V. To obtain the optimal control, the standard forward-backward sweep method is applied, and the procedure is the following:
• Step 1. Create a guess for all controls.
• Step 2. Solve the dynamics of the system forward with the given controls and initial conditions.
• Step 3. Solve the adjoint equations backward in time.
• Step 4. Evaluate the new optimality conditions for the controls.
• Step 5. Update the controls with a weighted average between the old controls and the controls evaluated in Step 4.
• Step 6. Check convergence; if reached, stop; otherwise, go back to Step 2.
If l is the index that represents the iteration number of this procedure, a convergence criterion is defined for the case with one control and, analogously, a stop criterion for the case with double control; in both cases, ρ = 10⁻⁴. This criterion means that the process is stopped only when the maximum distance between the states and the controls of two consecutive iterations is less than ρ.
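The six steps above can be organized as a generic loop. The following Python skeleton is a sketch of the procedure only: `forward_dynamics`, `backward_adjoints` and `optimal_control` are placeholders standing for the state equations, the adjoint recurrences and the maximum-principle characterization of the specific model (e.g., Equations (24), (26), (30) and (32) for the double-control case), and must be supplied by the user.

```python
import numpy as np

def forward_backward_sweep(forward_dynamics, backward_adjoints, optimal_control,
                           x0, T, n_controls, rho=1e-4, mix=0.5, max_iter=500):
    """Generic forward-backward sweep for a discrete-time optimal control problem.

    forward_dynamics(x0, u) -> array of states x[0..T]
    backward_adjoints(x, u) -> array of adjoints, solved backwards from the
                               transversality conditions
    optimal_control(x, lam) -> control suggested by the maximum-principle
                               characterization, already clipped to [0, 1]
    """
    u = np.zeros((T, n_controls))               # Step 1: initial guess for all controls
    x = forward_dynamics(x0, u)
    for _ in range(max_iter):
        x_new = forward_dynamics(x0, u)         # Step 2: solve state equations forward
        lam = backward_adjoints(x_new, u)       # Step 3: solve adjoint equations backward
        u_star = optimal_control(x_new, lam)    # Step 4: evaluate optimality conditions
        u_new = mix * u + (1.0 - mix) * u_star  # Step 5: weighted update of the controls
        # Step 6: stop when states and controls change by less than rho between iterations
        if max(np.max(np.abs(x_new - x)), np.max(np.abs(u_new - u))) < rho:
            x, u = x_new, u_new
            break
        x, u = x_new, u_new
    return x, u
```

The weighted update in Step 5 (the `mix` parameter) helps stabilize the iteration when the control characterization reacts strongly to small changes in the adjoints.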
Figure 7 shows the optimal solution of the SIRV problem with one control for different choices of the values of the constants in Equation (11). In contrast, Figure 8 shows the optimal solution of the SIRV problem with two controls for different choices of the values of the constants in Equation (11). The optimal solution could indicate a probabilistic vaccination of a node, since 0 ≤ ω_k(t_i) ≤ 1. This fact is not a problem if a node represents a metapopulation, but if a node represents a single individual, a partial vaccination is not interpretable (and maybe ethically problematic). Attempts to solve this problem include using a binary control, but in that case the hypothesis of a piecewise-continuous cost fails, and the optimal solution could not be obtained. Finally, to quantify the effects of the controls on the system, the incidence is presented: we compare the time evolution of the incidence in the three models taken into account, with results shown in Figure 9. In this figure it is possible to notice the effect of the optimal control in reducing the incidence, but also that, in the case of double control, the optimal solution for C_1 = C_2 = C_3 is to leave a very small probability that a node is infected, so the infectious disease becomes endemic in that complex network. The average probability of a node being infected by another node after 100 time steps is very low, ∼ 7·10⁻³, and this might not hold if the constants in the cost are estimated in another way; however, this behavior is due to the fact that, in the double-control model considered, a vaccinated individual can become susceptible again, a feature that gives "supplies" to the infected population as in the susceptible-infected-susceptible (SIS) model. Another fundamental contribution to this behavior is that the cost is quadratic in the controls and the controls are bounded between 0 and 1, so distributing the action over more steps is "cheaper" than performing the same action in fewer steps. Another quantity common in epidemiology is the force of infection, defined in this case as F(t_i) = α Σ_{k=1}^{N} I_k(t_i); again, the effect of the controls in reducing the "size" of the infection is clear, and again the double-control model shows an endemicity of the disease.

VI. CONCLUSION

Summarizing, the discrete-time equations for the dynamics of the probability of individual nodes in the SIR model have been derived. In addition, this method outperforms HMF and MC simulation [9], since the proposed method requires small computational effort and can accommodate probabilistic (with some degree of uncertainty) initial conditions that are common in epidemiology studies [4,26].

[Figure legend: the red dashed line represents P_S(t_i), the green dotted line represents P_I(t_i), the purple dot-dashed line represents P_V(t_i), and the blue solid line represents P_R(t_i).]
In addition, the framework was generalized to the SIRV model, and the optimal control was then studied in two different cases, considering only a vaccination strategy (transition from susceptible to vaccinated) and a double control (transition from susceptible to vaccinated and transition from infectious to vaccinated). This study presents the set of equations needed to obtain the optimal control strategy given a cost, a network, and an initial probability distribution of infectious, susceptible, removed, and vaccinated individuals. This kind of simulation could give insight into how to balance costs (e.g., drug production), epidemic constants (failure probability and duration of the vaccination), and the distribution of health care in order to eradicate (or minimize) an endemic disease. Further work is required to overcome the problem of a probabilistic treatment. This method will also be applied to studying an infectious plant disease. Since in this case the topology of the complex network is known and fixed in time, there is an interesting possibility of remote supervision [3,8,26] of the disease; the only concern in this case is that the SIRV dynamics might not be appropriate [20] for simulating a plant disease, so a tailored dynamic could be required.
HADES: A Multi-Agent Platform to Reduce Congestion Anchoring Based on Temporal Coordination of Vessel Arrivals—Application to the Multi-Client Liquid Bulk Terminal in the Port of Cartagena (Spain)

Featured Application: HADES, a software tool for the optimal operation of liquid bulk terminal management, is based on an optimization model for the allocation of liquid bulk berths to reduce congestion and vessel waiting time. By means of a user-friendly interface, all the agents involved in the process share the data relevant to optimal quay management, supervised by port authorities.

Abstract: Ports are key factors in international trade, and new port terminals are quite costly and time consuming to build. Therefore, it is necessary to optimize existing infrastructure to achieve sustainability in logistics. This problem is more complex in multi-client port terminals, where quay infrastructure is shared among terminal operators who often have conflicting interests. Moreover, the berth allocation problem in liquid bulk terminals implies demanding restrictions due to the reduced flexibility in berth allocation for these types of goods. In this context, this paper presents HADES, a multi-agent platform, and the experience of its pilot use in the Port of Cartagena. HADES is a software platform where agents involved in vessel arrivals share meaningful but limited information. This is done to alleviate potential congestion in multi-client liquid bulk terminals, promoting a consensus where overall congestion anchoring is reduced. A study is presented using a mixed integer linear program (MILP) optimization model to analyze the maximum theoretical reduction in congestion anchoring, depending on the flexibility of vessel arrival time changes. Results show that 6 h of flexibility is enough to reduce congestion anchoring by half, and 24 h reduces it to negligible values. This confirms the utility of HADES, which is also briefly described.

Introduction

Maritime traffic is the main type of transport for goods, accounting for over 11 billion tons in 2018 [1], and it is directly related to a country's economic activity: higher volumes of traffic yield greater activity and vice versa. In this context, port congestion implies a loss of time and money for all the actors in the logistic chain and therefore undermines the competitive position of ports and the ecosystem of companies in port communities [2]. Port infrastructure is costly and time consuming, and currently, bearing sustainability in mind, construction of new port terminals is probably not the best solution. It is important to keep in mind that in liquid bulk terminals, especially when managing IMDG products (international maritime dangerous goods), berth allocation is highly constrained, typically not because of the berthing infrastructure but because of the loading arms, which are highly specific to the goods to be loaded/unloaded (e.g., phenol, methanol, gasoline, jet fuel, LNG), since these goods have particular chemical characteristics or temperature requirements. This limited flexibility in allocation results in the appearance of congestion limitations when these terminals reach around 30-40% occupancy. From the perspective of port management, optimizing quay operations to reduce congestion and delays in the loading and unloading of vessels is addressed by the so-called berth allocation problem (BAP).
The objective of berth allocation is to serve a set of vessels on a set of piers for a given period in order to optimize parameters like total vessel operation time, fuel consumption, and emissions. The improvement of berth usage reduces not only the port costs of loading and unloading vessels but their downtime costs. Thus, economic growth for ports and their activity, client loyalty, and attracting new clients are expected, all of which indirectly influence society. The study of the berth allocation problem has been extensively developed for container terminals. However, liquid bulk terminals have received little attention in recent years, mainly due to the uncertainties and more demanding restrictions in the operation of these types of goods. In liquid bulk terminals, the berth where a vessel must stay is often determined by the goods to be loaded or unloaded since specific pipes and facilities are required. For this reason, there are few or no possibilities of changing operations from congested quays to other, less congested terminals or quays, which is a common approach to reducing congestion in container-based operations. From a commercial point of view, there are a few software tools to manage liquid bulk berths. The existing ones are all costly and have been developed for container terminals, sometimes even for specific terminals, such as Posidonia Operation [3], Dropboard [4], and Marine Enterprise Suite [5]. In addition, all these tools share opacity in terms of the optimization or heuristic models used for berth assignment, and they allow for little improvement from the user interface. Furthermore, maritime logistic chains involve many agents and companies with different interests. In multi-client port terminals, the quay infrastructure is shared among terminal operators with conflicting interests, although all clients have the common goal of reducing the anchoring time caused by vessel congestion and increasing operational capacity. In liquid bulk terminals, vessel arrival times can frequently be macroscopically determined by terminal owners or petrochemical companies far in advance due to the terms of trading contracts. Thus, fostering temporal coordination among terminal operators to produce schedules that avoid simultaneous vessel arrivals at the same terminal can be a solution to congestion anchoring. Currently, company/terminal operators work individually using their own equipment, and there is no communication among them, even though they share the same resources offered by the port authority. This paper presents the multi-agent platform HADES, developed in collaboration with the Cartagena Port Authority, in Spain. HADES has two objectives: (i) to foster coordination among the different port agents and terminals and (ii) to solve the BAP problem in multiclient liquid bulk terminals. HADES adds to the literature about BAP problems by focusing on multi-client liquid bulk terminals, making coordination among the different port agents easier to achieve and providing improvements to the value chain associated with trade, which results in more efficient, sustainable, and competitive logistic chains. Specifically, HADES is a software tool for the optimal operation of liquid bulk terminal management developed by E-lighthouse. This tool is based on an optimization model for the allocation of liquid bulk berths to reduce congestion and vessel waiting time. Based on information technologies, HADES facilitates coordination among the different port agents. 
It allows the three main agents of the logistic-port community: terminal operators/owners, shipping agents, and port authorities to communicate. By means of a user-friendly interface, all the agents involved in the process share the data relevant to optimal quay management, supervised by port authorities. This project has been promoted by the Cartagena Port Authority. Since 2016, an increase in delays at numerous liquid bulk berths (for petroleum, petroleum products, oil, and chemical products, among others) in the Escombreras Basin has been detected. These delays have increased the number of vessels having to anchor as well as their anchoring time due to congestion. In fact, an analysis of the multi-client E010 dock, with data from January 2010 to August 2020, estimates that anchoring congestion was the reason for an annual average anchoring time of 6.3 h per call (over an average call time of 45.28 h), with an estimated cost of 500,000 EUR per year in ship freight. The document is organized as follows. In Section 2 a literature review of temporal coordination procedures and BAP models is described. Section 3 presents the HADES tool. Section 4 provides the algorithm applied to solve the berth allocation problem, considering the restrictions related to liquid bulk. The real application to the liquid bulk terminal in the Port of Cartagena (Spain) is shown in Section 5. Finally, in Section 6, conclusions are drawn, together with a discussion about the future research lines of HADES. Temporal Coordination Procedures Maritime logistic chains involve many agents and companies with different types of organizations. In multi-client port terminals, understood as terminals where the quay infrastructure is shared by several terminal operators that have placed loading arms in the same quay infrastructure, conflicting interests appear. Each terminal operator seeks a profit and a return on their investment, although all of them have common interests, such as reducing the anchoring time of their vessels caused by congestion and increasing their operational capacity. Port authorities pursue both private and public goals like contributing to regional economic growth and enhancing sustainability. The effects and implications of cooperation among all the different port agents in container terminals have been studied extensively in recent years. Specifically, the Port of Rotterdam and the Port of Barcelona have been analyzed to understand how the quality of hinterland access (trucks, railways, and barges) is important for seaport competitiveness [6][7][8][9][10][11]. Van der Horst and de Langen [7] identified a set of coordination problems among the actors involved in a port's hinterland chain and propose different coordination arrangements. The four coordination mechanisms are the introduction of incentives, the creation of an interfirm alliance, changing scope, and creating collective action. Incentives influence the behavior of actors. For example, bonuses or penalties could be established for companies that follow (or do not follow) the operational rules of a terminal operator. Interfirm alliances imply more responsibility and common arrangements among companies than incentives. They can include subcontracting, standards for quality and services, or formalized procedures. Possible coordination measures to change the scope of an organization could be vertical integration or the introduction of a new market. 
The last category enhances collective instead of individual action; for instance, branch associations or the development of information technology systems for a sector of the port industry or for the whole of it. The multi-agent platform described in this manuscript is based on the last category because it seeks collective action for operational improvement in liquid bulk terminals. From a commercial standpoint, there are a few software tools to coordinate different port agents and manage liquid bulk berths. The existing ones (see Table 1) are costly, have been developed for container terminals, and are mostly designed for specific terminals. All these tools share opacity in terms of the optimization or heuristic models used for berth assignment and allow for little improvement from the user interface. Table 1 summarizes the current products available for port operations. Most of them can be applied to liquid bulk terminals, except PortChain-Motor optimization [12], which focuses only on container terminals. Shipping companies are the main users of port operation software, although the relationships with terminal operators and port authorities are also considered in some software products (Suite Posidonia-Prodevelop-Spain [3], PortChain-Motor optimization-Denmark [12], Marine Enterprise Suite-Cirrus Logistics-United Kingdom [5]). The proposed multi-agent platform HADES considers the coordination among shipping companies, terminal operators, and port authorities in terminals dedicated to liquid bulk. Therefore, HADES provides a new perspective on the temporal coordination problem in liquid bulk terminals, considering the different port agents involved.

[Table 1, partial rows: a platform that combines public, third-party data and AI forecasts to generate accurate information for the shipping company on the planning of a call - Yes - Shipping companies - No; QronoPort-Antwerp [15]: collaborative platform to reduce waiting times with data-assisted planning and predictive models - Yes - Terminal operators - No; Navi-Port-Wartsila Finland [16]: middleware for dynamic exchange of information between ship and port for "Just In Time Arrival".]

The Berth Allocation Problem (BAP)

From the point of view of port management, one of the problems to address is to reduce congestion and delays in vessel loading and unloading as much as possible. This is the Berth Allocation Problem (BAP). The objective of berth allocation is to service a set of vessels on a set of piers for a given period. The objectives addressed most frequently in the literature are (i) minimizing total vessel operating and waiting time; (ii) minimizing early or late departures with respect to scheduled times; and (iii) minimizing fuel consumption and emissions. There are several spatial and temporal constraints involved in BAP problems, leading to a multitude of formulations. Time restrictions are related to the vessel arrival process, the start of the service, and vessel handling times. Spatial restrictions are based on the design of the docks and their use (shared or not). According to Bierwirth and Meisel [17], a vessel's arrival process can be considered static or dynamic. In static arrivals, all ships are already in port. In dynamic arrivals, only a number of the scheduled ships are in port and the rest are assigned arrival times. These arrival times can be considered deterministic, with fixed values, or stochastic, in which a distribution of arrival times can be given to reflect the uncertainty of arrivals.
Spatial restrictions limit the feasible docking positions of vessels according to a preestablished division of the quay into alignments. Based on berth design, the BAP can be classified as discrete, continuous, or hybrid [17]. In the discrete case, the dock is divided into a set of sections and only one ship can be serviced per section at any given time [18][19][20]. In the continuous case, there is no dock division and a ship can occupy any arbitrary position along the dock [21][22][23]. This leads to better utilization of dock space; however, it is computationally more complicated. In the hybrid case, the dock is divided into a set of sections but a vessel can occupy more than one section at a time, and several vessels are also allowed to share the same alignment at the same time [24,25]. The study of the berth allocation problem in liquid bulk terminals (liquid bulk-BAP) has received little attention in recent years, mainly due to the uncertainties and more demanding restrictions, such as specialized pipelines or conveyers, involved in these types of operations. Table 2 shows recent papers about liquid bulk-BAP problems. For each reference, the type of algorithms used, the purpose of their objective function, and the characteristics considered, based on vessel arrival times and spatial constraints, are indicated. Some authors [26][27][28] proposed a mixed integer programming (MIP) model to minimize total vessel service time, considering a dynamic vessel arrival time and hybrid [26,27] or discrete spatial constraints [28]. Moreover, [29] study and solve the problem of recovering a baseline vessel berthing schedule in a port in real time as disruptions occur. The uncertainty of vessel arrival and handling times is modeled on probability distributions derived from past data. Various metaheuristic models are used to solve bulk-BAP problems. These algorithms allow several problems to be worked on together, such as bulk-BAP and yard assignment problems [27]. Other algorithms minimize waiting time, ship operating time after berthing, and ship priority deviation based on decision support systems [30,31]. Recently, machine learning techniques have been applied to minimize the cost associated with vessel handling operations [32]. In this case of study, the BAP is based on dynamic vessel arrivals since this arrival time is a variable decision and discrete from spatial restrictions. This is because liquid bulk quays only have one possible mooring position, determined by product handling systems. The BAP at this level aims to optimize the delays and waiting times for liquid bulk carriers and maximize the port's turnaround. All the cited papers related to bulk-BAP are focused on the mathematical problem. However, this paper presents a platform where the mathematical model is integrated. The characteristic of being multi-agent, that is, the platform uses data not only provided by ships but also from terminal operators/owners and port authorities, is a distinctive feature. The HADES Framework The HADES system is a proof of concept of a coordination system among terminal operators, shipping agents, and port authorities that optimizes berth occupancy with the aim of increasing port resource efficiency. 
HADES is based on an optimization model that (i) recommends the allocation of liquid bulk berths to arriving vessels and (ii) suggests beneficial time shifts to vessel arrival times within realizable time windows that, if voluntarily applied by the involved actors, would positively influence overall efficiency. HADES was developed by E-lighthouse Network Solutions, a start-up at the Technical University of Cartagena focused on mathematical optimization in different industrial sectors. HADES has a web interface for its users (https://hades.apc.es (accessed on 30 March 2021). User name and password are required). This prototype has been in operation since July 2020 in the Port of Cartagena (Spain), specifically, at the E010 and E011 multiclient alignments, both allocated to liquid bulk goods. In this section, the context, purpose, and general guidelines of the HADES system are provided. The Port of Cartagena The Port of Cartagena constitutes two separate and independent docks: the Cartagena basin and the Escombreras basin. The distance between the two basins is 1.5 miles by sea and 5 km by road (Figure 1). The liquid bulk terminal is in the Escombreras dock. It has 13 quays, of which 8 are single client; that is, they are only operated by one terminal operator, and 5 are multi-client, operated by more than one terminal operator-that is, the quay infrastructure is shared by several terminal operators that have placed loading arms in the same quay infrastructure (Figure 2). The Motives Driving the Development of HADES The division between the single-and multi-client liquid bulk docks and the importance of these goods in the port have driven the creation of HADES. Although congestion had been increasing since 2016, in 2019, efficiency problems in the use of some of its liquid bulk terminals were detected in the Port of Cartagena. A temporal occupation analysis and the practical experience of operation managers discovered unevenness in vessel arrivals, with weeks of high occupancy that produced anchoring delays for the vessels involved, followed by underutilized periods where resources were idle. An internal analysis showed that there was little to no possibility of offloading vessels requesting operations in congested terminals to other terminals. That is, the large majority of the vessels arriving to the congested quays could not be served at any other quay due to the specific loading/unloading resources they needed. This was behind the idea of trying to influence the decision of arrival time by terminal users to smooth out the irregularity of vessel arrivals, stimulating a more uniform distribution of these arrivals, which would inherently result in fewer anchoring delays. The process started with a number of meetings where relevant actors were formally queried about two aspects: 1. Their time margin flexibility. For a system like HADES to be workable, given that it recommends optimized shifts in vessel arrival times to terminal users, it is necessary to understand whether such time flexibility really exists among the Port of Cartagena users. 2. Their potential interest and acceptance of a system that provides such benefits, at the cost of sharing some of their vessel arrival information. The answer was positive, with a one-day time margin flexibility informally announced and the explicit (and logical) constraint that this margin would be different for different operations. This feedback triggered the development of the HADES system. 
HADES Mission and Guidelines
One of the objectives of HADES is to transform the berth programming methodology in multi-client terminals from linear to circular (Figure 4). In the current (linear) methodology, different agents communicate their arrival and berth planning to the port authority. That information is static and there is little opportunity to coordinate, leaving terminal operators with limited access to the information. As already mentioned, such uncoordinated arrivals often produce days of congestion (with vessels suffering delays) followed by days of underutilization with unoccupied fronts. To address this situation, the mission of HADES is to promote, assist, and monitor time coordination at berths in multi-client terminals, which prevents congestion anchoring situations before they occur. Although applicable to any type of operation, it is estimated that this methodology will have special impact on liquid bulk terminals, whose operations cannot usually be diverted to other fronts. The result is a dynamic circular method, where users can adjust their arrivals in light of forecasts and estimated occupation. The circular methodology of HADES encourages a more harmonious use of resources, promoting coordination among terminal operators, shipping agents, and port managers. HADES is conceived as a multi-agent platform, where the port authority is the owner and the terminal operators share limited but relevant information. HADES:
• Requires terminal operators to record expected future operations at target docks, including expected arrivals, required resources, and operation durations.
• Permits these operators and the port manager to visualize the forecasted quay occupation based on the previous information, and the operators to see whether their projected operations would be simultaneous and, therefore, in conflict with operations recorded by other terminal operators. The aim is to promote consensual coordination in arrivals to reduce dead periods.
• Periodically provides recommendations of vessel arrival time shifts to the terminal operators to reduce overall anchoring times, which may be accepted or not. Such recommendations are the output of the HADES time coordination algorithm, which jointly attempts to exploit the time flexibility in vessel arrivals and (if any) flexibility in quay allocation. The MILP optimization model prototype behind the joint allocation process is described in Section 4.
• Provides meaningful statistical information to the port authority on the efficient use of the port's resources and forecasted occupation to identify possible bottlenecks and improvements.
• Includes automatic methods to monitor the fair use of the system, identifying inefficient situations and bottlenecks. Monitoring and auditing fairness in accessing shared resources and automatically identifying misuse helps encourage proper behavior in multi-client systems. For instance, HADES is evolving to adopt a predictive system of vessel arrival times using automatic identification system (AIS) data to verify the accuracy of the arrival estimates provided by users. To do so, it also evaluates the previous average accuracy of the operation durations estimated by HADES users.
The HADES proof of concept, in operation in the Port of Cartagena since July 2020, is integrated into the port management system (PMS) of the Cartagena port in an application known as INTEGRA2 [34]. This application is used by most of the Spanish port authorities, which would facilitate the adoption of a similar system by other Spanish ports.
INTEGRA2 includes a database that records multiple aspects of port operations, including a record of past vessel arrivals and resource use. HADES automatically reads from the INTEGRA2 system and includes an alternate database, which stores the estimated future vessel arrivals and resource occupation provided by port terminal users, together with other information from the application business logic.
Joint Berth and Time Coordination Optimization Model
This section describes the MILP at the heart of the HADES system prototype, which can jointly optimize (i) quay allocation decisions and (ii) limited variations in the arrival time of a vessel, with the intention of later recommending these variations to the terminal operator. The optimization target considered in this description is to minimize average anchoring delays. However, it is important to note that, in practice, anchoring stays can exist for different reasons. One of these reasons could be to intentionally delay the loading or unloading of goods that have a fluctuating price (a typical situation with liquid bulk like petrol or gas) to increase profits. In contrast, this model is focused on reducing so-called congestion anchoring stays, that is, undesired anchoring stays that a vessel is forced to make because the berth(s) appropriate for the vessel are occupied and the vessel must wait until a berth becomes available.
Input Parameters
Let Q denote the set of quays under the control of this optimization model. Let C denote the set of vessel calls in the port during the time horizon over which we perform the optimization. Each call c ∈ C is characterized by the vessel v(c) making the call and a sequence of so-called stays S(c). A stay s ∈ S(c) represents the berthing of the vessel v(c) at a particular quay q(s) to perform a particular set of operations (e.g., loading/unloading of goods). Stays must be conducted in a particular order; the optimization model is not allowed to interchange them. We denote as s_first(c) and s_last(c) the first and last stays in the call c, respectively. Each stay s has a set of eligible quays where the involved operations can occur. This set is denoted as Q(s). Typically, in liquid bulk operations, the Q(s) sets are composed of a small number of options involving expensive and specific resources that are required for the loading/unloading operations and are not replicated in multiple quays. The estimated duration of the operations associated with stay s, if performed at quay q, is denoted as d(s,q). Note that this time may be different among the different eligible quays, e.g., when the loading of liquid goods is made via pipes, and pipes with different throughputs exist at different quays. For each call c ∈ C, t_min(c) and t_max(c) denote the earliest and latest vessel arrival times to the port in the call. This defines the time flexibility window within which the model can optimize the vessel arrival time. Additionally, there are pre-stay time constraints if there are specific limitations, like a particular operation that must be initiated no later than a given time. To accommodate this, t_min(s) and t_max(s) indicate the earliest and latest starting times of stay s. Note that a stay s ∈ S(c) with earliest starting time t_min(s) always starts after the call earliest starting time t_min(c) plus the duration of the previous stays in the same call, if any.
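To make this notation concrete, the following is a minimal, illustrative sketch of how these inputs could be represented in Python. The class and field names are shorthand introduced here for the paper's symbols (C, S(c), Q(s), d(s,q), t_min, t_max); they are not part of HADES itself.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Stay:
    stay_id: str
    durations: Dict[str, float]     # d(s,q) in hours; the keys are the eligible quays Q(s)
    t_min: float = 0.0              # earliest starting time of the stay
    t_max: float = float("inf")     # latest starting time of the stay

@dataclass
class Call:
    call_id: str
    vessel: str                     # v(c)
    stays: List[Stay]               # S(c), in the order in which they must occur
    t_min: float = 0.0              # earliest arrival time to the port
    t_max: float = 0.0              # latest arrival time (flexibility window)

# Example: a single call whose only stay can berth only at quay E010.
example = Call("c1", vessel="tanker-1",
               stays=[Stay("s1", {"E010": 10.0})],
               t_min=0.0, t_max=24.0)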
Finally, P is the set of potential allocation conflicts that can exist between two particular stays of different calls (s1 ∈ S(c1), s2 ∈ S(c2)) at a particular quay q. Note that, for a potential conflict to exist: (1) the two stays must have q among their eligible quays; and (2) the two time intervals [t_min(s1), t_max(s1) + d(s1,q)] and [t_min(s2), t_max(s2) + d(s2,q)] must overlap. These intervals are the earliest and latest times at which the quay can be occupied by each stay. If they do not overlap, no time decision can cause these two stays to conflict at that quay. Note that the set P, with all of the potential conflicts (s1, s2, q), can be computed in advance from the input data.
Decision Variables
The decision variables of the problem are the following:
• o(p) ∈ {0,1}, for all p ∈ P: in each potential conflict (s1,s2,q), o(p) takes the value of 1 if both s1 and s2 are assigned quay q, and thus a conflict is really possible and time overlapping is forbidden, and 0 otherwise.
• x1(p) ∈ {0,1}, for all p ∈ P: for each potential conflict (s1,s2,q), if s1 and s2 are assigned the conflicting quay q, this decision variable takes the value of 1 if s1 is scheduled to occur before s2 and 0 otherwise. If the quay is not shared, its value is undetermined and unimportant.
• x2(p) ∈ {0,1}, for all p ∈ P: for each potential conflict (s1,s2,q), if s1 and s2 are assigned the conflicting quay q, this decision variable takes the value of 1 if s2 is scheduled to occur before s1 and 0 otherwise. If the quay is not shared, its value is undetermined and unimportant.
Objective Function
The objective function (1) seeks to minimize the duration of calls, that is, the time between a vessel's arrival time at port (the call start, a(c)) and the start time of the last stay of the call, a(s_last(c)):

min Σ_{c∈C} [ a(s_last(c)) − a(c) ]    (1)

Constraints
Below, the problem constraints are enumerated together with their descriptions.

a(s_first(c)) ≥ a(c), ∀ c ∈ C    (2)

Constraint (2) reflects that the first stay of the call cannot start before the arrival of the ship at the port. Constraint (3) states that the order of the stays in a call has to be respected, so that a stay cannot start before the operations of the previous one end; here, prev(s) denotes the previous stay in the same call. Constraint (4) indicates that each stay is assigned one and only one quay among those eligible. Constraint (5) sets the value of x1 for each potential conflict p = (s1,s2,q) when both s1 and s2 are assigned the same quay q (and thus o(p) = 1): if s1 is before s2 and does not overlap it (and thus t(s2) − (t(s1) + d(s1,q)) ≥ 0), then x1 is forced to take the value of 1, and it is forced to take the value of 0 otherwise. For this to happen, the parameter M should be a sufficiently large constant, greater than any difference between time events in the system. Constraint (6) repeats the same behavior for the case of x2, setting its value to 1 if and only if s2 occurs before s1 and does not overlap it timewise. Constraint (7) states that either s1 is earlier and does not overlap s2, or s2 is earlier and does not overlap s1; one of the two options must occur. If s1 and s2 are not assigned the same quay, the variable o(p) = 0 and Constraints (6) and (7) do not restrict the values of x1 and x2, which are free to take values of 0 or 1. In that case, the variables would take either the values x1(p) = 0, x2(p) = 1, or the opposite, to satisfy (7).
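The following is a compact, self-contained sketch of the model described above, written with the open-source PuLP package on a hypothetical two-call instance. Two caveats: the quay-assignment binaries (called z here) are an assumed notation, since the original list of decision variables is incomplete, and the non-overlap constraints (5)-(7) are written in a simplified big-M disjunctive form rather than the paper's exact equations. Real use would also restrict the conflict set P to stays of different calls with overlapping feasible intervals.

from itertools import combinations
import pulp

# Hypothetical instance: two single-stay calls competing for the same quay.
stays = {"s1": {"E010": 10.0}, "s2": {"E010": 8.0}}   # stay -> {eligible quay: d(s,q)}
calls = {"c1": ["s1"], "c2": ["s2"]}                   # call -> ordered list of stays
t_min = {"c1": 0.0, "c2": 4.0}                         # earliest arrivals
t_max = {"c1": 24.0, "c2": 28.0}                       # latest arrivals
M = 1000.0                                             # big-M, larger than any time gap

prob = pulp.LpProblem("hades_joint_allocation", pulp.LpMinimize)
a_call = {c: pulp.LpVariable(f"a_{c}", t_min[c], t_max[c]) for c in calls}   # a(c)
a_stay = {s: pulp.LpVariable(f"a_{s}", 0) for s in stays}                    # a(s)
z = {(s, q): pulp.LpVariable(f"z_{s}_{q}", cat=pulp.LpBinary)                # quay choice
     for s in stays for q in stays[s]}

# Objective (1): total call duration, i.e. start of the last stay minus arrival.
prob += pulp.lpSum(a_stay[cs[-1]] - a_call[c] for c, cs in calls.items())

for c, cs in calls.items():
    prob += a_stay[cs[0]] >= a_call[c]                                       # (2)
    for prev, nxt in zip(cs, cs[1:]):                                        # (3)
        prob += a_stay[nxt] >= a_stay[prev] + pulp.lpSum(
            stays[prev][q] * z[(prev, q)] for q in stays[prev])
for s in stays:                                                              # (4)
    prob += pulp.lpSum(z[(s, q)] for q in stays[s]) == 1

# Potential conflicts: same eligible quay; forbid time overlap when shared, (5)-(7).
for s1, s2 in combinations(stays, 2):
    for q in set(stays[s1]) & set(stays[s2]):
        o = pulp.LpVariable(f"o_{s1}_{s2}_{q}", cat=pulp.LpBinary)
        x1 = pulp.LpVariable(f"x1_{s1}_{s2}_{q}", cat=pulp.LpBinary)
        x2 = pulp.LpVariable(f"x2_{s1}_{s2}_{q}", cat=pulp.LpBinary)
        prob += o >= z[(s1, q)] + z[(s2, q)] - 1          # conflict active if quay shared
        prob += a_stay[s2] >= a_stay[s1] + stays[s1][q] - M * (1 - x1) - M * (1 - o)
        prob += a_stay[s1] >= a_stay[s2] + stays[s2][q] - M * (1 - x2) - M * (1 - o)
        prob += x1 + x2 >= o                              # one ordering must hold (7)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v.name: v.varValue for v in prob.variables()})

On this toy instance the solver shifts the second arrival within its flexibility window so that both stays are served back to back with zero waiting, which is exactly the behavior the time-coordination recommendations aim for.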
Time Normalization
To avoid numerical problems during the execution of the optimization solver, all the time variables t(c) and t(s) are converted into so-called normalized times when their values are introduced into the solver. In particular, D1 indicates the initial date of the earliest call in the dataset to be used in the optimization and D2 the ending date of the last stay among all the calls in the dataset. Then, each date d is converted into a normalized time t as follows:

t = 100 (d − D1) / (D2 − D1)    (11)

By doing so, each date that is an input parameter to the problem (a(c), a(s)) is converted into a number between 0 and 100, and durations are also normalized accordingly. After the optimization is run, the obtained a(s) and a(c) values, which are normalized dates, are converted back into regular dates by reversing Equation (11).
Application to the Liquid Bulk Terminal in the Port of Cartagena (Spain) and Its Evaluation
This section describes a use case application of the optimization model explained in the previous section, which served as a feasibility study for HADES. The study has two parts:
• Estimation of the maximum theoretical anchoring delay reduction attainable by terminal operators, and its monetary value, if the HADES system were used with the current port traffic.
• Estimation of the maximum theoretical additional traffic (and thus income) that the Port of Cartagena could serve using the same infrastructure, without degrading the current service levels (i.e., without increasing anchoring congestion delays), thanks to using HADES.
Dataset Description
The results of this study are based on the data collected in the INTEGRA2 database for the 10 years between January 2011 and August 2020, describing the actual operations conducted in the port. The study focuses on the berths of the E010 dock, at which 1000 calls occurred during the period noted. E010 is a multi-client dock used for loading/unloading liquid bulk that, in general, cannot be handled at other docks in the port. Therefore, the simultaneous arrival of ships at this front is an unavoidable cause of congestion in Cartagena, since only one of them can use the berth while the rest must wait. From the initial 1000 calls, 960 are considered in the analysis because these calls do not involve other docks. The other 40 calls are assumed to be nonoptimizable, i.e., their vessel arrival times cannot be modified. Note that, by restricting the view to a single dock, the simulated HADES allocation is not allowed to exploit the distribution of vessels to different quays. This is because there are no historical records of the existing flexibility of arrivals during the last 10 years and, in general, such flexibility is infrequent in our use case. Therefore, only the benefits that arrival time optimization could bring about will be observed. The E010 dock has an average occupation of 30%, with an average call time of 45.28 h, of which an average of 20 h corresponds to anchoring and the rest (25.26 h) to actual E010 occupation. Of these 20 h of anchoring, an average of 6.3 h is caused by congestion, with an estimated cost of 500,000 dollars per year in ship freight. To determine congestion anchoring, it is assumed that a vessel delay was not caused by congestion anchoring provided that its anchoring started when the E010 dock was idle.
Theoretical Anchoring Delay Reductions
The first analysis estimates the improvements that could be obtained by a system that uses optimization techniques to assist and monitor a consensus among terminal operators in E010 to make small adjustments to vessel arrival dates.
The target is to measure how much the original congestion anchoring (6.3 h) could have been reduced if the terminal operators had had a system like HADES to recommend adjustments. The time margin flexibility considered is the same for all the vessels. Three tests are conducted, in which each vessel can advance or delay its arrival by at most 6, 12, or 24 h with respect to its recorded arrival time. These time margins are consistent with feasible operations according to the terminal operators consulted at the HADES meetings. Moreover, it is assumed that the HADES system would have accurate information about future vessel arrivals, which inherently assumes fair and precise estimations and announcements by HADES users. The results of the tests are shown in Table 3. The first row of the table refers to the average call time after subtracting the anchoring time not caused by congestion (13.79 h on average), which is considered nonoptimizable; i.e., we assume that HADES is not able to improve such time in any form, a conservative approach. The results show that congestion anchoring time can be reduced by 50% if the per-vessel arrival time flexibility (H) is plus/minus 6 h, by 75% for a flexibility of H = 12 h, and that it is practically eliminated with a flexibility of one day. Assuming a freight charge of 20,000 dollars per day, these reductions would yield annual economic savings for the terminal clients of the E010 dock of between 220,000 dollars (H = 6 h) and 480,000 dollars (H = 24 h).
Theoretical Terminal Occupancy Increases without Service Degradation
A second study was carried out with the aim of estimating the increase in occupancy (and therefore income) that the Port of Cartagena could obtain from the E010 dock without increasing the average congestion anchoring delay above the current 6.3 h. In order to simulate traffic arriving at dock E010 with loads higher or lower than those stored in the database, a scaling process of arrival times is used. Let U denote the original average occupation of dock E010 during the observation period (January 2010 to August 2020), with U ≈ 30%. A realistic traffic arrival trace for a different average occupation U' is artificially created, which can then feed the same tests as the ones described in the previous section. The arrival date of the first vessel in the observation period is denoted as t0. Then, given a call c, t(c) indicates the original relative arrival date of the vessel at port, measured as the time between its original date and t0. In order to obtain a scaled version of the traffic, the original relative arrival at time t(c) is exchanged for another arrival at time t(c)/x, keeping the same duration of the operations at port. According to this process, a trace where the port is occupied at 60% (double occupation) would result in a trace where the arrivals are concentrated in 5 years, instead of the original 10 years of the dataset. After scaling the traffic arrivals for different simulated occupations U' of dock E010, the optimization procedure described in the previous section is applied to each of them, with time flexibilities H of 6, 12, and 24 h.
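The scaling of relative arrival times just described can be sketched as follows; it assumes the compression factor x is the ratio between the target and the original occupation, which is consistent with the doubling example above.

def scale_trace(relative_arrivals_hours, u_original, u_target):
    # t(c) -> t(c)/x with x = U'/U; operation durations are left unchanged.
    x = u_target / u_original
    return [t / x for t in relative_arrivals_hours]

original = [0.0, 48.0, 120.0, 200.0]                 # hours after the first arrival t0
print(scale_trace(original, u_original=0.30, u_target=0.60))
# doubling the occupation halves the relative arrival times: [0.0, 24.0, 60.0, 100.0]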
The results are plotted in Figure 5. The x-axis is the simulated occupation of E010, while the y-axis shows the normalized congestion delay observed. "Normalized" here means that the average anchoring delay is divided by the average occupation of E010 in each stay. Figure 5 clearly illustrates the benefits that the traffic smoothing effect produced by HADES could provide. In any case, the average congestion delay increases with dock occupation. This is a well-known effect of queueing systems. The interesting aspect here is the quantification of how the arrival coordination controlled by HADES produces curves that lie below the original service curve and improve further as greater flexibility becomes available. This means that service delays will be lower for the same dock occupation, or that more vessels can be served (and thus port income increased) for the same target congestion delay. Under this latter reading, the results indicate that exploiting adjustments limited to 6, 12, or 24 h in the calls would allow the occupation of front E010 to be doubled (going from 30% to levels of 55% to 60%) while maintaining the same level of service (around 6 h of average congestion delay).
Conclusions
In this paper, the HADES system is presented. HADES is the first version of a web-based multi-client coordination system to be used by terminal operators, shipping agents, and port authorities that optimizes berth occupancy. The strategy of HADES is to incentivize voluntary time coordination among vessel arrivals to multi-client terminals in order to reduce congestion delays. This is of special interest in liquid bulk terminals, which often do not have the flexibility to allocate arriving vessels to more than one position and quay, in contrast to container-based terminals. The paper describes the reasons for the creation of HADES, motivated by the needs of the Port of Cartagena (Spain) and, arguably, of other liquid bulk ports, and its main design guidelines. Then, the results of a feasibility study to assess the potential benefits of HADES and similar systems are presented when applied to a particular terminal (E010) in the Port of Cartagena. This study is based on an optimization model that is able to jointly optimize both quay allocation and vessel arrival time coordination. This joint optimization is the main objective of HADES, but it is used in this study to assess the potential theoretical benefits that time coordination alone can bring, considering different realistic time flexibility windows (6, 12, and 24 h). The results show that time coordination under the flexibility margins quoted as feasible by port terminals can bring significant benefits, reducing congestion delays by between 50% and 90%. Alternatively, such coordination could be used to almost double port occupation (from 30% to 55%) while providing the same level of service. These results show the good performance of HADES and validate the interest in using temporal stay coordination as a means of reducing anchoring times caused by congestion. The direct consequence is an increase in port profitability, as ports can increase revenues by hosting a greater number of calls while maintaining the same infrastructure and the same level of service. Both issues have an impact on the economic benefit of the port, its activity, client loyalty, and the attraction of new clients, which will have an indirect impact on society. The studies reported in this paper motivate future research based on the performance, user behavior, and feedback of the HADES proof of concept, which has been in place since July 2020, once enough data have been collected.
For example, a complete system applied to the whole port would generate greater savings, provided that more degrees of freedom are exploited, such as interaction with more docks, dock flexibility for operations that can be performed at more than one dock, potential simultaneous occupation, and calls with several stays at different docks with possible flexibility in their order. The influence of changes in port loading/unloading rates due to pumping system improvements or failures could also be considered in the temporal coordination. Additionally, HADES is able to naturally accommodate the existence of different service level agreements between terminal operators and shipping companies; these agreements and other constraints result in different announced time-flexibility windows, which are an input to the HADES optimization. In future work, artificial intelligence and machine learning techniques could be incorporated into the HADES platform to extend the multi-agent platform to other terminals and improve decision making.
2021-05-04T22:06:28.687Z
2021-03-31T00:00:00.000
{ "year": 2021, "sha1": "ff0e5441c92d3d01dc7060c648e75a2e84ef7195", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/11/7/3109/pdf?version=1617938758", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "fc2c7fd0edff40f43538f96cbef12b25e7118419", "s2fieldsofstudy": [ "Engineering", "Environmental Science", "Business" ], "extfieldsofstudy": [ "Computer Science" ] }
28524553
pes2o/s2orc
v3-fos-license
Atypical presentation of mature cystic teratoma ("floating balls") Radiol Bras. 2017 Mai/Jun;50(3):199–208
aorta (aortomesenteric compression). In rare cases, the LRV is retroaortic. In such cases, compression occurring between the aorta and the spine is known as posterior NCS. The nutcracker phenomenon corresponds to these findings without clinical correlation. The prevalence of NCS is unknown, although it is known that it occurs predominantly in healthy, thin individuals between 20 and 40 years of age and in women. Clinically, hematuria is the most common finding, followed by pain on the left side, dyspareunia, dysmenorrhea, dysuria, varicoceles, and pelvic varices. In exceptionally rare cases, anatomic variations in the pancreas compress nearby vessels, including the LRV. Renal vein thrombosis (RVT) is common in nephrotic syndrome and in severely hypotensive neonates. Other causes: traumas, surgery, infections, neoplasias, vasculitis, venous compressions, contraceptives and myeloproliferative diseases. It's infrequent in healthy adults, predominantly unilaterally. The clinical presentation of RVT is much like that of NCS, with the added features of an acute increase in renal volume, late atrophy, and progressive deterioration of renal function, as well as the complication of pulmonary thromboembolism in up to 50% of cases. The pathophysiology of thromboses encompasses Virchow's triad: endothelial lesions, stasis, and hypercoagulability. Generally, thrombotic events involve at least two factors, although one may be sufficient. One of the principal methods employed in the diagnosis of NCS is Doppler ultrasound, which is noninvasive and can be used in determining venous caliber and flow, the latter being suggestive of NCS when it exceeds 100 cm/s, with a sensitivity and specificity of 78% and 100%, respectively, for the diagnosis. It shows high sensitivity in the investigation of RVT. Ultrasound, however, is operator-dependent and may not detect small thromboses. For the diagnosis of NCS and RVT, angiography has a sensitivity of 66.7–100% and a specificity of 55.6–100%. It is able to evaluate the aortomesenteric angle (compression); possible compression and dilation of the LRV; filling defects; endoluminal blood clots; and signs of chronic thrombosis, such as thickening of the vessel walls and calcifications. However, it uses radiation and potentially nephrotoxic contrast agents. Retrograde venography is the gold standard examination in NCS and RVT; it shows pressure gradients greater than 3 mmHg in the LRV, in addition to the filling defects that represent thrombi. However, it is invasive, potentially triggering thrombosis, and uses intravenous iodine. The therapeutic options are conservative treatment, reimplantation/transposition of the LRV, the use of an external or internal stent, renal autotransplantation, gonadocaval bypass, and nephrectomy. If RVT occurs, anticoagulation and thrombolysis can also be employed.
Dear Editor, A 43-year-old female patient with no known diseases sought medical attention complaining of increased abdominal volume. The patient underwent ultrasound and subsequent magnetic resonance imaging (MRI) of the pelvis (Figure 1), which showed an expansile cystic lesion, with heterogeneous content, measuring 16.0 × 16.0 × 10.0 cm and containing numerous oval formations of various sizes. The lesion was hyperechoic on ultrasound and mobile upon a change in patient position. The oval formations showed intermediate signal intensity on T1- and T2-weighted MRI scans, with no evidence of signal loss in fat-saturated sequences or signal drop on an out-of-phase T1-weighted gradient-echo sequence. These imaging findings, although uncommon, are pathognomonic of mature cystic teratoma (MCT). The patient underwent surgery, and the diagnosis was confirmed by histopathological analysis of the surgical specimen. Also known as a dermoid cyst, MCT is the most common benign ovarian tumor, accounting for 10-25% of cases in adult patients and 50% of those in pediatric patients (1)(2)(3) . MCTs are typically asymptomatic and slow-growing (1,3) . They are usually seen in women of reproductive age and are rarely diagnosed before puberty. Its growth ceases at menopause (4)(5)(6)(7) . An MCT typically contains well-differentiated tissues of the three germ layers (1,5) : the ectoderm (derived from the skin and neural tissues); the mesoderm (osteomuscular and adipose tissues); and the endoderm (ciliated and mucinous epithelium). The diversity of tissues in teratomas results in a wide variety of characteristics in imaging studies. In most cases, pelvic tumors do not present imaging features that are considered diagnostic (8)(9)(10)(11)(12) . However, MCTs often present typical imaging features, which facilitate the diagnosis. Among such features, one of the most common is that of a fatty tumor (3) . In such cases, the most common ultrasound finding is that of a cystic mass with an echogenic tubercle (a Rokitansky nodule), presenting posterior acoustic shadowing secondary to calcifications, strands of hair, or foci of fat (3,5,7) . Characteristic findings on computed tomography include areas of fat attenuation, with or without foci of calcification.
On MRI, the fat seen within the lesion produces a hyperintense signal on T1-weighted images and signal loss in fat-saturated sequences (3,5,7). In rare cases, the presentation of MCT is atypical, which can be a diagnostic challenge for radiologists (2,6). Multiple small floating spheres within a large cyst, as observed in the case presented here, is one of those rare presentations, known as the "floating ball" presentation (4,6). Histologically, the spheres are composed of keratin, fibrin, hemosiderin, sebaceous debris, hair, and fat, in variable proportions (2,6,13). Although the mechanism of formation of these spheres has yet to be clarified, it is speculated that it involves aggregation of sebaceous material around a nidus (2,4,14). The mobility of the spheres is due to their low density relative to the other content of the cyst (2,4,6). A finding of multiple floating spheres within a single large cyst has not been reported for other types of tumors and is therefore considered pathognomonic of MCT (2,4,6,(14)(15)(16)).
Figure 1 (partial caption): out-of-phase T1-weighted gradient-echo MRI sequence (C); in-phase T1-weighted gradient-echo MRI sequence (D). The expansile cystic lesion with heterogeneous content contains numerous oval formations that were hyperechoic on ultrasound and showed intermediate signal intensity in the T1- and T2-weighted sequences, with no evidence of signal loss in the out-of-phase T1-weighted gradient-echo sequence.
Dear Editor, A 75-year-old woman presented with a 3-week history of intermittent hemoptysis related to a history of recurrent episodes of pneumonia. Chest computed tomography (CT) showed cylindrical bronchiectasis in the lingula, and bronchoscopy showed clots in the left bronchial tree. Bronchial arteriography was requested and revealed a shunt (Figure 1A) between the left bronchial artery and the left pulmonary artery. During manual-injection digital subtraction angiography, enhancement and stagnation of the contrast media were observed in a false lumen of the descending thoracic aorta (Figures 1B and 1C), consistent with iatrogenic aortic dissection. The iatrogenic aortic dissection extended to the left bronchial artery, leading to obstruction of blood flow to the shunt. However, there were no signs of hemodynamic instability, and the patient therefore received conservative therapy with clinical and radiological monitoring. A second CT scan, obtained 7 days later, showed that the iatrogenic aortic dissection was stable
http://dx.doi.org/10.1590/0100-3984.2015.0155
2017-08-30T17:50:46.409Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "d1e998e98fab352148ef631dd6cb0d2665d03c02", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/rb/v50n3/0100-3984-rb-50-03-0206.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d1e998e98fab352148ef631dd6cb0d2665d03c02", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269755254
pes2o/s2orc
v3-fos-license
Transplant Trial Watch
To keep the transplantation community informed about recently published level 1 evidence in organ transplantation, ESOT and the Centre for Evidence in Transplantation have developed the Transplant Trial Watch. The Transplant Trial Watch is a monthly overview of 10 new randomised controlled trials (RCTs) and systematic reviews. This page of Transplant International offers commentaries on methodological issues and clinical implications on two articles of particular interest from the CET Transplant Trial Watch monthly selection. For all high quality evidence in solid organ transplantation, visit the Transplant Library: www.transplantlibrary.com.
RANDOMISED CONTROLLED TRIAL 1
A prospective controlled, randomized clinical trial of kidney transplant recipients developed personalized tacrolimus dosing using model-based Bayesian Prediction. The authors demonstrate that a significantly higher proportion of patients in the study arm achieved the therapeutic target, with lower interpatient variability, shorter time to target trough concentrations, and fewer dose modifications. Whilst no differences in clinical outcomes were seen, there was a trend towards lower incidence and shorter duration of DGF in the study group. These results are very promising and appear to demonstrate the benefit of personalised dosing using the Bayesian model. The population in this study is from a single centre, and predominantly male and Caucasian. Future studies should confirm these findings in populations with a greater mix of ethnicity, and confirm the potential clinical benefit in a larger sample.
Data Analysis: Per protocol analysis.
Allocation Concealment: No.
Funding Source: Non-industry funded.
Aims: The aim of this study was to compare the effect of an immune monitoring-guided approach versus the current standard for tailoring the duration of antiviral prophylaxis to measure cytomegalovirus (CMV)-specific immunity in solid-organ transplant recipients.
Interventions: Participants were randomised to receive a duration of antiviral prophylaxis according to immune-guided monitoring or a fixed duration (control).
Participants: 193 kidney and liver transplant recipients, either CMV-seronegative with seropositive donors or CMV-seropositive receiving antithymocyte globulins.
Outcomes: The two primary endpoints were the proportion of patients with clinically significant CMV infection and the reduction in days of prophylaxis. The secondary endpoints were the incidence of all CMV events including untreated CMV replication, high-level CMV-DNAemia, patient survival, graft survival, and incidence of acute rejection.
CET Conclusion
This multicentre trial enrolled kidney and liver transplant recipients receiving organs from CMV-positive donors, and randomised them to either fixed-duration prophylaxis or prophylaxis guided by immune monitoring. In the study group, CMV ELISpot was used for monitoring, and prophylaxis was stopped if the result was positive (indicating immune reactivity). The study failed to confirm non-inferiority of the immune monitoring strategy, although the overall rates of CMV infection were similar, with earlier CMV infection seen in the study group. However, the duration of prophylaxis was shorter in the study arm. The failure to demonstrate non-inferiority is due to a lack of statistical power; in reality, the infection rates were very similar between groups. The study also fails to stratify randomisation by recipient serostatus, leading to an imbalance between the two arms of the study. This is important, as the risk of CMV infection is likely different between the two subgroups. Despite these limitations, it does appear that immune monitoring-guided prophylaxis is a reasonable strategy, resulting in a shorter duration of prophylaxis and a relatively low risk of clinically relevant CMV disease.
Data Analysis: Per protocol analysis.
Allocation Concealment: Yes.
Funding Source: Industry & non-industry funded.
CLINICAL IMPACT SUMMARY
This report is from a very interesting study in both liver and kidney transplantation that could be practice changing. Monitoring for an immune response to CMV was used as a comparator to standard-duration CMV prophylaxis with valganciclovir. In the intervention arm of the study, prophylaxis was stopped if the immune monitoring showed a significant response (CMV ELISpot). The primary outcome was clinically significant CMV infection, which may be represented by symptomatic disease or asymptomatic viraemia that required treatment. The study was designed on a non-inferiority basis and was statistically powered as such. Approximately 31% of patients had clinically significant CMV infection, which was higher than expected. This meant that the immune monitoring approach was not shown to be statistically non-inferior, despite similar event rates in the study and control arms. The duration of antiviral prophylaxis was, however, significantly shorter with immune monitoring, by about 26 days on average. The safety of the immune monitoring approach was consistent whether the recipient was CMV positive or negative. The incidence of CMV disease was very low in both groups (0 versus 2 events). As the risk of any CMV infection was higher than expected in both arms, the 95% CI for the risk difference was wide and therefore significant inferiority could not be ruled out. Despite the limitations of the study, it seems that the immune monitoring strategy is safe and can result in a much earlier opportunity to stop CMV prophylaxis. A cost-benefit analysis would have been interesting to see but is not formally provided in this paper.
RANDOMISED CONTROLLED TRIAL 2
Immune monitoring-guided vs. fixed duration of antiviral prophylaxis against cytomegalovirus in solid-organ transplant recipients. A Multicenter, Randomized Clinical Trial. By Manuel, O., et al. Clinical Infectious Diseases 2023 [record in progress].
2024-01-24T16:25:55.249Z
2024-01-22T00:00:00.000
{ "year": 2024, "sha1": "cb4952d63f31600959f8450faf4a6a3adfd6f582", "oa_license": "CCBY", "oa_url": "https://www.frontierspartnerships.org/articles/10.3389/ti.2024.12597/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c7242f73a44e6a702f81037a6eb850cff0a0553c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119241711
pes2o/s2orc
v3-fos-license
Reweighting QCD simulations with dynamical overlap fermions
I apply a recently developed algorithm for reweighting simulations of lattice QCD from one quark mass to another to simulations performed with overlap fermions in the epsilon regime. I test it by computing the condensate from distributions of the low-lying eigenvalues of the Dirac operator. Results seem favorable. Introducing the operator Ω, we can write w as an average over a set of complex random vectors ξ. With the usual definition of the link gauge variable U and pure gauge action S_g, expectation values of operators evaluated at mass m_2 are given by a reweighted average over the mass-m_1 ensemble. If we imagine that our simulation at mass m_1 consists of a stream of pairs of variables {U_i, ξ_i}, then the expectation value is a weighted average over that stream; that is, we reweight each configuration by a factor w_i. This was so far all quite general. Now we assume that we are doing simulations with overlap fermions. For any number of flavors, all calculations can be performed using the squared Hermitian Dirac operator H(m)^2, constructed from the massless squared overlap Hermitian Dirac operator H(0)^2. The quantity h is the kernel operator h = γ_5(d − R_0) (in terms of a kernel Dirac operator d) and ε(h) is the matrix sign function. We assume that we have recorded a set of eigenfunctions of H(0) (and their associated eigenvalues), H(0)|k⟩ = λ_k|k⟩. The spectrum of H(0)^2 consists of a set of zero eigenvalue chiral modes and a set of degenerate (paired) nonzero eigenvalue eigenmodes of opposite parity. The nonzero mode contribution to w in Eq. 3 can be computed using random vectors ξ which are chiral, with chirality in the sector without zero modes. Each flavor of dynamical fermion has its own chiral random vector. Now we come to the question of practicality: reweighting will fail if the weight of each configuration deviates widely from the mean, because then only the (presumably small) number of configurations carrying a large weight will contribute to averages. It can also fail if the estimator (Eq. 7) has a large variance, for then one will need to average the same underlying gauge configuration over many estimators. Can schemes be devised so that the weights w_i do not fluctuate too much from configuration to configuration? Presumably what will work will depend on the simulation, the reweighted quark mass, and the simulation volume. The phase space of possible choices is large. It is always a good thing to replace as much of the stochastic estimator of the determinant as possible with an exact result. Introducing the Hermitian projector onto low eigenmodes of H(0)^2 (call it P, with complement P̄ = 1 − P), we compute PΩ exactly from eigenvalues, and we only need to make a stochastic estimator for the high eigenmode part of the weight w_i (reweighting a configuration with N_0 zero modes from mass m_1 to m_2, and considering a single flavor). To complete the equation set, c_12 = m_1^2 − (s_1/s_2) m_2^2, and y = P̄ξ, because the random vector can only live in the space of P̄ΩP̄. Projection of low modes plus the use of a random vector in the chirality space without zero modes improves the effective condition number of H(m_2)^{-2}. One interesting place in parameter space is the epsilon regime. Here the quarks are so light that the pion "fills the box": if the volume is V = L^4, then m_π L << 1 (with all other mass scales M large, ML > 1) defines the epsilon regime.
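Before turning to the numerical experiments, the reweighted average that these weights feed into can be sketched in a few lines. The observable values and weights below are random placeholders rather than the determinant-ratio estimates discussed above, and the effective-sample-size diagnostic at the end is a common, though not paper-specific, way to monitor the weight fluctuations that can make reweighting fail.

import numpy as np

rng = np.random.default_rng(0)
observable = rng.normal(size=100)            # O_i measured on each configuration
log_w = rng.normal(scale=0.3, size=100)      # log of the per-configuration weights w_i

w = np.exp(log_w - log_w.max())              # normalize to avoid overflow
reweighted_mean = np.sum(w * observable) / np.sum(w)   # <O>_2 = sum_i w_i O_i / sum_i w_i
print(reweighted_mean)

n_eff = np.sum(w) ** 2 / np.sum(w ** 2)      # effective number of contributing configurations
print(n_eff)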
Let's perform some experiments there: I have several sample data sets with N_f = 2 flavors of overlap fermions on 12^4 simulation volumes at a nominal lattice spacing of 0.14 fm. I will use quark masses am_q = 0.03 (nominally about 43 MeV) and am_q = 0.01. They were generated using the hybrid Monte Carlo algorithm, with the reflection/refraction algorithm devised in Ref. [5]. They used the differentiable hypercubic smeared link of Ref. [6] and one or two additional heavy pseudo-fermion fields as suggested by Hasenbusch [7]. The integration is done with multiple time scales [8]. Details of the actions are described in Refs. [9,10,11,12,13]. All in all, these are very conventional overlap fermions. I typically compute the lowest 12 eigenvectors and eigenvalues of H(0)^2; these eigenvalues run up to about λ ∼ 0.04. The first test checks what can be gained by removing eigenmodes. We take a set of 22 lattices from our stream of am_q = 0.03 simulations and reweight them to a set of target masses: am_q = 0.01 and 0.035. In this test we averaged the stochastic part of the weight over six pairs of two chiral pseudofermions. Fig. 1 shows a comparison of the resulting weights, either not keeping any eigenmodes or removing the lowest 12 eigenmodes from the stochastic estimator. The error bars show the variation in weight over the ensemble of pseudofermion noise vectors used for each configuration. It is clear that the choice of removing eigenmodes is the superior one from the point of view of suppressing the variance of the estimator. The low eigenvalues do not capture the entire reweighting factor. Fig. 2 shows the weights from just the low eigenvalues (divided by the average reweighting factor from the true weights). Their (incorrectly normalized) values appear to track the full weight.
II. EIGENMODE DISTRIBUTIONS IN THE EPSILON REGIME
By themselves, pictures of the fluctuating weights give no indication of how well an actual reweighted calculation will perform. A test is needed. For a little physics example I select the problem of determining the condensate from the distribution of low-lying eigenvalues of the massless Dirac operator in the epsilon regime in sectors of fixed topology. These distributions are given by Random Matrix Theory (RMT) [14,15,16,17,18]. Overlap fermions are optimal for this project (as for any epsilon regime simulation) due to the control they give over lattice topology. I am aware of three previous measurements of Σ_L from eigenvalues with N_f = 2 flavors of dynamical overlap fermions. Two of them, Refs. [12] and [13], were not really in the epsilon regime; in the second paper, the bare quark mass is am_q = 0.03, corresponding to a pion mass in lattice units of am_π = 0.324 (so m_π L ∼ 3.9). The JLQCD collaboration, Ref. [19], has a true epsilon regime calculation of Σ. I am also aware of two recent studies which use dynamical fermions which are not exactly chiral, but which are said to have highly improved chiral symmetry: Refs. [20] and [21]. (The latter simulation used 2+1 flavors.) All of these papers produce similar and unsurprising values for the condensate, Σ ∼ (250 MeV)^3. An unpleasant feature of the epsilon regime is that finite volume corrections are power law, not exponential. The effect is to replace the value of the condensate extracted from the RMT fit, Σ, by Σ_L = ρ_Σ Σ, where the correction factor ρ_Σ involves Δ(0), the contribution to the tadpole graph (propagator at zero separation) from finite-volume image terms.
In the epsilon regime, Δ(0) = −β_0/√V, and β_0 depends on the geometry [22] (it is 0.1405 for hypercubes). I carry two data sets into, or closer to, the epsilon regime. The first data set is the set of am_q = 0.03 configurations from Ref. The data were analyzed with a conventional bootstrap analysis. In the bootstrap, the weight of a configuration was the number of times it was selected for the bootstrap, times the (normalized) weight factor from the determinant ratio. Results for Σ_L V from a fit to the lowest eigenvalue distribution in one topological sector are shown in Fig. 3. Of course, all the reweighted points at different quark masses are highly correlated; they came from the same data sets. It appears that reweighting into the epsilon regime was successful, while trying to go to larger quark masses (am_q = 0.01 to 0.035, for example) was less so. It is probably no surprise that reweighting a small change in mass works better than reweighting a big change. Readers might recall that, to complete a calculation of Σ_L, one needs a separate determination of a lattice spacing and a lattice-to-continuum matching factor Z_S. Z_S was determined for this action in Ref. [13]: Z_S^{MS-bar}(2 GeV) = 0.76(3). Of course, the lattice spacing varies as the bare parameters of the simulation change. However, this variation is small in the epsilon regime, simply because the absolute change in the quark mass is small. For this data set, r_0/a = 3.71(5) at am_q = 0.03 and 3.77(7) at am_q = 0.01. Thus the three unreweighted values of r_0^3 Σ_L from this study are 0.326(30) (am_q = 0.01, ν = 0), 0.347(37) (am_q = 0.03, ν = 0), and 0.294(20) (am_q = 0.03, |ν| = 1); with r_0 ∼ 0.5 fm, Σ_L ∼ (260–270 MeV)^3.
III. CONCLUSIONS
Reweighting dynamical overlap fermion data sets into the epsilon regime worked better than I expected. Groups doing simulations with overlap fermions might well be advised to investigate it as a technique. All the ingredients will probably already be in hand. The main reason for reweighting nonchiral actions, namely, that one wants to avoid exceptional configurations, obviously does not apply to overlap fermions. However, overlap fermion simulations are so expensive that running at many parameter values is daunting. Any methodology which allows one to recycle old configurations is worth exploiting.
2008-10-03T16:12:04.000Z
2008-10-03T00:00:00.000
{ "year": 2008, "sha1": "8488835a9b55d8ceeafe334a4eedc9089f4fc164", "oa_license": null, "oa_url": "http://arxiv.org/pdf/0810.0676", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "8488835a9b55d8ceeafe334a4eedc9089f4fc164", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
231802302
pes2o/s2orc
v3-fos-license
Active Boundary Loss for Semantic Segmentation This paper proposes a novel active boundary loss for semantic segmentation. It can progressively encourage the alignment between predicted boundaries and ground-truth boundaries during end-to-end training, which is not explicitly enforced in commonly used cross-entropy loss. Based on the predicted boundaries detected from the segmentation results using current network parameters, we formulate the boundary alignment problem as a differentiable direction vector prediction problem to guide the movement of predicted boundaries in each iteration. Our loss is model-agnostic and can be plugged in to the training of segmentation networks to improve the boundary details. Experimental results show that training with the active boundary loss can effectively improve the boundary F-score and mean Intersection-over-Union on challenging image and video object segmentation datasets. Introduction Semantic segmentation is a fine-grained, pixel-wise classification task that assigns each pixel a semantic class label to facilitate high-level image analysis and processing. Recently, the accuracy of semantic segmentation has been substantially improved with the introduction of fully convolutional networks (FCNs) (Long et al. 2015;Minaee et al. 2021). FCNs leverage convolutional layers and downsampling operations to achieve a large receptive field. Although these operations can encode context information surrounding a pixel, they tend to propagate feature information throughout the image, leading to undesirable feature smoothing across object boundaries. Thus, the segmentation results might be blurred and lack fine object boundary details. To address this issue, boundary-aware information flow control and multi-task training methods have been proposed to improve the discriminative power of features belonging to different objects (Bertasius et al. 2016;Takikawa et al. 2019;Zhu et al. 2019). Alternatively, the segmentation errors at boundaries can be remedied by learning the correspondence between a boundary pixel and its corresponding interior pixel (Yuan et al. 2020b). Despite the empirical success of boundary-aware methods in improving the segmentation accuracy, there still exist a significant amount of segmentation errors at object boundaries, especially for small and thin objects. The mutual dependence between semantic segmentation and boundary detection should be further studied to improve the quality of segmentation results. In this paper, we propose a novel active boundary loss (ABL) to progressively encourage the alignment between predicted boundaries (PDBs) and ground-truth boundaries (GTBs) during end-to-end training, in which the PDBs are semantic boundaries detected in the segmentation results of the current network. To facilitate end-to-end training, the loss is formulated as a differentiable direction vector prediction problem. Specifically, for a pixel on the PDBs, we first determine a direction pointing to the closest GTB pixel, and then move the PDB at this pixel towards the direction in a probabilistic manner. Moreover, we also propose to detach the gradient flow to suppress possible conflicts. Overall, the behavior of ABL is dynamic because the PDBs are changing with the updated network parameters during training. It can be viewed as a variant of classical active contour methods (Kass et al. 
1988), since our method first determines the direction vectors in accordance with the PDBs in the current iteration and lets the PDBs move along the direction vectors to reach the GTBs. Unlike the cross-entropy loss, which only supervises pixel-level classification accuracy, ABL supervises the relationship between PDB and GTB pixels. It embeds boundary information such that the network can pay attention to boundary pixels to improve the segmentation results. Moreover, the Intersection-over-Union (IoU) loss pays more attention to the overall regions of semantic classes but does not focus on boundary matching. Thus, ABL can provide complementary information during the training of the network. As a result, it can be combined with other loss terms to further improve semantic boundary quality. In our work, we let the ABL work with the most commonly used cross-entropy loss and the lovász-softmax loss (Berman et al. 2018), a surrogate IoU loss, to significantly improve the boundary details in image segmentation. The lovász-softmax loss is introduced to regularize the training so that the ABL can be used even when the PDBs might be noisy and far from the GTBs. The advantage of ABL is that it is model-agnostic and can be plugged into the training of image segmentation networks to improve the boundary details. As illustrated in Fig. 1, it is beneficial for preserving the boundaries of thin objects that contain a small number of interior pixels. We tested the ABL with state-of-the-art image segmentation networks, including the CNN-based networks DeepLabV3 (Chen et al. 2017a) and the OCR network (Yuan et al. 2020a), and the Transformer-based Swin Transformer (Liu et al. 2021). We have also tested the ABL with STM (Oh et al. 2019), a video object segmentation (VOS) network, to show that our loss can be applied to improve VOS results as well. The forward inference stage of these networks remains the same during testing. The experimental results show that training with the ABL can effectively improve the boundary F-score and mean Intersection-over-Union (mIoU) on challenging segmentation datasets.
Related Work
FCN-based semantic segmentation. FCNs (Long et al. 2015) for semantic segmentation frequently utilize an encoder-decoder structure to generate pixel-wise labelling results for high-resolution images. Successor methods (Ronneberger et al. 2015; Ding et al. 2018; Minaee et al. 2021) are dedicated to a better fusion of multi-scale features to enhance the accuracy of localization and handle small objects. FCN-based methods have also been widely used in VOS, including propagation-based methods (Hu et al. 2017; Oh et al. 2019; Voigtlaender et al. 2019) and detection-based methods (Caelles et al. 2017; Li et al. 2017; Shin Yoon et al. 2017). The key challenge is how to leverage temporal coherence and learn discriminative features of target objects to handle occlusion, appearance change, and fast motion. Since our loss is model-agnostic, it can also be applied to VOS for the purpose of boundary refinement.
Boundary-aware semantic segmentation. One way to exploit boundary information in deep learning-based semantic segmentation is through multi-task training, in which additional branches are often inserted to detect semantic boundaries (Chen et al. 2020; Gong et al. 2018; Ruan et al. 2019; Su et al. 2019; Xu et al. 2018a; Takikawa et al. 2019; Zhu et al. 2021). A key challenge in these methods is how to efficiently fuse features from a boundary detection branch to improve semantic segmentation.
There are also works focusing on the control of information flow through boundaries (Bertasius et al. 2016; Ke et al. 2018; Bertasius et al. 2017; Ding et al. 2019; Chen et al. 2016). These methods usually learn pairwise pixel-level affinity to maintain the feature difference for pixels near semantic boundaries, while simultaneously enhancing the similarity of features for interior pixels. Assuming that boundaries can be correlated through a homography transformation, Borse et al. (2021) proposed a frozen inverse transformation network as a boundary-aware loss for boundary distance measurement. The boundary details of the segmentation results can also be improved in post-refinement. DenseCRF (Krähenbühl et al. 2011) is often used to refine the segmentation results around boundaries. Segfix (Yuan et al. 2020b) trains a separate network to predict the correspondence between boundary and interior pixels. Thus, labels of interior pixels can be transferred to boundary pixels. Although these methods can efficiently refine most boundaries, they fail to model the relationship of pixels inside thin objects that contain a small number of interior pixels, which may downgrade the quality of slender object boundaries, as shown in Fig. 1. In contrast, the ABL encourages the alignment of PDBs and GTBs. Our experiment shows that it can handle such boundaries well. The uniqueness of our ABL is that it allows propagating the GTB information with a distance transform for regulating the network behavior at the PDBs, while the network structure can remain the same. As a loss, ABL can save effort in network design. Kervadec et al. (2019) proposed Boundary Loss (BL) for image segmentation, which is most related to our work. However, this loss is designed for unbalanced binary segmentation and is actually a regional IoU loss. In our implementation, the ABL is coupled with an IoU loss in (Berman et al. 2018) to further refine the boundary details.
Active Boundary Loss
The ABL continuously monitors the changes in the PDBs in the segmentation results to determine plausible moving directions. Its computation is divided into two phases. First, for each pixel i on the PDBs, we determine its next candidate boundary pixel j closer to the GTBs in accordance with the relative location between the PDBs and GTBs. Second, we use the KL divergence as logits to encourage the increase in KL divergence between the class probability distributions of i and j. Meanwhile, this process reduces the KL divergence between i and the rest of its neighboring pixels. In this way, the PDBs can be gradually pushed towards the GTBs. Unfortunately, candidate boundary pixel conflicts might occur, severely degrading the performance of the ABL. Thus, we carefully reduce the conflicts through gradient flow control in the computation of ABL, which is crucial to its success. The overall pipeline of ABL is illustrated in Fig. 2, which shows, for a PDB pixel, the local distance map (the numbers indicate the closest distance to the GTBs) and the local probability map (the class probability distributions of the pixel and its neighbors). Each phase, and how the conflicts are suppressed, is detailed as follows. Hereafter, we use A_i to denote the value stored at pixel i of a map A.
Phase I. This phase starts with detecting the PDBs using the class probability map P ∈ R^{C×H×W} output by the current network, where C denotes the number of semantic classes and the image resolution is H × W.
Specifically, we compute a boundary map B through the computation of KL divergence to indicate the locations of PDBs. For a pixel i in B, its value B_i is computed as follows:

B_i = 1 if max_{j ∈ N_2(i)} KL(P_i || P_j) > ε, and B_i = 0 otherwise, (1)

where 1 indicates the existence of PDBs, ε is the detection threshold, P_i is the C-dimensional vector extracted from the probability map at pixel i, and N_2(i) indicates the 2-neighborhood of pixel i. Specifically, the offsets of pixels in N_2(i) to pixel i are {{1, 0}, {0, 1}}. Since it is difficult to define a perfect fixed threshold to detect PDBs, we choose an adaptive threshold to ensure that the number of boundary pixels in B is less than 1% of the total pixels of the input image, where 1% is a ratio to approximate the number of boundary pixels in an image. Empirically, we observe that setting ε in this adaptive way can largely avoid the emergence of excessive misleading pixels in B far from the GTBs, especially in the early training period. Controlling the boundary pixel number also helps to save the computational cost of ABL. Subsequently, for a pixel i on the PDBs, its next candidate boundary pixel j is selected as its neighboring pixel with the smallest distance value computed by the distance transform of the GTBs. The GTBs are also determined using Eq. 1, but the KL divergence is replaced by checking whether the ground-truth class labels are equal between pixel i and j ∈ N_2(i). To represent the coordinate of pixel j in the computation of ABL, we convert it into an offset to pixel i and then encode it as a one-hot vector. Specifically, we compute a target direction map D^g ∈ {0, 1}^{8×H×W}, where the one-hot vector for a pixel i stored at D^g_i is 8D, because we use the 8-neighborhood in this operation. The formula to compute D^g_i can be written as:

D^g_i = Φ( argmin_{j ∈ {0, ..., 7}} M_{i+Δ_j} ), (2)

where Δ_j denotes the offset from pixel i to its j-th neighbor in the 8-neighborhood, and M is the result of the distance transform of the GTBs. The pixel i + Δ_j with the smallest distance is selected as the next candidate boundary pixel. The function Φ converts the index j into a one-hot vector. For instance, if j = 1, Φ(j) should be {0, 1, 0, 0, 0, 0, 0, 0}, which is similar to the direction representation used in Segfix. In implementation, we dilate B with 1 pixel and perform this operation for all the pixels in the dilated B to accelerate the movement of the PDBs, since more pixels are covered. Phase II. The 8D vector D^g_i computed in Eq. 2 is set to be the target distribution in the cross-entropy loss. We aim to increase the KL divergence between the class probability distributions of i and j, and simultaneously reduce the KL divergence between i and the rest of its neighboring pixels. An 8D vector using the KL divergences between pixel i and its neighboring pixels as logits, denoted by D^p_i, is then computed as follows:

D^p_i = softmax( [ KL(P_i || P_{i+Δ_0}), ..., KL(P_i || P_{i+Δ_7}) ] ), (3)

where KL indicates the function to compute the KL divergence using P_i and P_{i+Δ_k}. For those pixels on the PDBs, the ABL is computed as the weighted cross-entropy loss:

L_ABL = (1 / N_b) Σ_{i=1}^{N_b} Λ(M_i) · CE(D^p_i, D^g_i). (4)

The weight function Λ is computed as

Λ(M_i) = min(M_i, θ) / θ, (5)

where N_b is the number of pixels on the PDBs and θ is a hyper-parameter set to 20. The closest distance to the GTBs at pixel i is used as a weight to penalize its deviation from the GTBs. If M_i is 0, indicating that the pixel is already on the GTBs, this pixel will be discarded in the ABL.
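To make the two phases concrete, below is a minimal PyTorch-style sketch of Phase I: how the boundary map B (Eq. 1) and the target direction map D^g (Eq. 2) could be computed from the class probability map P and the distance transform M of the GTBs. This is an illustrative reconstruction rather than the authors' released implementation: the function names (kl_to_neighbor, detect_pdb, target_directions), the ordering of the 8-neighborhood offsets, the wrap-around border handling of torch.roll, and the top-k realization of the adaptive 1% threshold are all our assumptions.

import torch

# Offsets of the 8-neighborhood, indexed j = 0..7 (the ordering is an assumption).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def kl_to_neighbor(P, dy, dx, eps=1e-8):
    """KL(P_i || P_{i+delta}) for every pixel i, where delta = (dy, dx).
    P holds per-pixel class probabilities with shape (C, H, W)."""
    P_nb = torch.roll(P, shifts=(-dy, -dx), dims=(1, 2))  # neighbor distribution at each pixel
    return (P * (P.clamp_min(eps).log() - P_nb.clamp_min(eps).log())).sum(dim=0)

def detect_pdb(P, ratio=0.01):
    """Eq. 1: mark pixel i as a PDB pixel when the KL divergence to its
    2-neighborhood (offsets {1, 0} and {0, 1}) exceeds an adaptive threshold
    chosen so that at most `ratio` of all pixels are selected."""
    score = torch.maximum(kl_to_neighbor(P, 1, 0), kl_to_neighbor(P, 0, 1))
    k = max(1, int(ratio * score.numel()))
    eps = torch.topk(score.flatten(), k).values[-1]  # adaptive threshold epsilon
    return score > eps                               # boolean boundary map B

def target_directions(M):
    """Eq. 2: for every pixel i, the index j of the 8-neighbor with the smallest
    distance to the GTBs; M is the distance transform of the GTBs, shape (H, W).
    Returning class indices is equivalent to the one-hot encoding Phi(j) when the
    result is later used as a cross-entropy target."""
    neighbor_dist = torch.stack(
        [torch.roll(M, shifts=(-dy, -dx), dims=(0, 1)) for dy, dx in OFFSETS], dim=0)
    return neighbor_dist.argmin(dim=0)  # (H, W) map of direction indices, i.e. D^g

In the full method, detect_pdb would be applied to the network's softmax output at every training iteration, the resulting map B would be dilated by one pixel as described above, and target_directions(M) would supply the per-pixel targets consumed in Phase II.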
Conflict suppression. Determining pixels on the PDBs using KL divergence might lead to the conflict case, as shown in Fig. 3 (in the figure, ↑ denotes increase and ↓ denotes decrease; the KL divergence between V_1 and V_2 is required to increase for V_1 but to decrease for V_2, resulting in contradictory gradients for V_1 and V_2). In this case, pixels V_1 and V_2 are deemed to be on a PDB (indicated by the red curve) because the KL divergence values computed for (V_1, W_1) and (V_2, V_3) are larger than the threshold. However, the GTB (indicated by the green curve) leads to the conflict when computing the ABL for V_1 and V_2 because the GTB is to the right of V_1 and V_2. Thus, for pixel V_1, we need to increase KL(P_{V_1}, P_{V_2}) because V_2 is the closest to the GTB and it should be the next candidate pixel in the neighborhood of V_1. In contrast, for pixel V_2, we need to decrease KL(P_{V_2}, P_{V_1}) because pixel V_3 is the next candidate boundary pixel for V_2 rather than V_1. Thus, the gradients of the ABL computed for P_{V_1} and P_{V_2} might contradict each other. While it might be possible to design a global search algorithm to remove such conflicts, it would significantly slow down the training. Thus, we choose to suppress the conflicts through the easy-to-implement detaching operation in PyTorch. Specifically, through the detaching operation, the gradient of the ABL is computed only for the pixels on the PDBs, but not for their neighboring pixels. This process indicates that, for a 3 × 3 patch, we focus on the adjustment of the class probability distribution of the pixels on the PDBs only so as to move the PDBs towards the GTBs. As a result, the conflicting gradient flow from KL(P_{V_2}, P_{V_1}) to P_{V_1} is blocked in this case, and vice versa. Empirically, we found that the mIoU drops around 3% without the detaching operation. Furthermore, we use label smoothing (Szegedy et al. 2016) to regularize the ABL by setting the largest probability of the one-hot target probability distribution to 0.8 and the rest to 0.2/7 (the parameters 0.8 and 0.2/7 are determined through experiments). This process can avoid overconfident decisions in network parameter updating, especially when there exist several pixels with the same distance value in the neighborhood of pixels on the PDBs. The detaching operation is also beneficial in this case to avoid conflicts in the gradient flow. Training Loss The training loss L_t we mainly use to train a semantic segmentation network consists of three terms:

L_t = L_CE + L_IoU + w_a · L_ABL, (6)

where L_CE is the most commonly used cross-entropy (CE) loss, which focuses on the per-pixel classification. The lovász-softmax loss, namely IoU, and our ABL are two loss terms that are added to improve the boundary details, and w_a is a weight. The lovász-softmax loss is expressed as follows (Berman et al. 2018):

L_IoU = (1 / |C|) Σ_{c ∈ C} Δ̄_{J_c}(m(c)), (7)

where |C| is the number of classes, and m(c) is the vector of prediction errors for class c ∈ C. Δ̄_{J_c} indicates the Lovász extension of the Jaccard loss Δ_{J_c}. The reason for introducing the lovász-softmax loss is twofold: 1) This loss tends to prevent small objects from being ignored in segmentation such that the ABL can be used to improve their boundary details, since the ABL relies on the existence of predicted boundaries as the beginning step of its computation. 2) It can balance with the noisy predicted boundary pixels, especially in the early training period. The improvement of ABL over CE plus IoU is verified in the Experiments section.
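Continuing the sketch above, the snippet below shows how Phase II (Eqs. 3-5), the detaching-based conflict suppression, the label smoothing with the 0.8 and 0.2/7 split, and the total loss of Eq. 6 might fit together. It reuses OFFSETS, detect_pdb and target_directions from the previous sketch and assumes an external lovasz_softmax implementation in the spirit of Berman et al. (2018); none of this is the authors' code, and the tensor shapes and function signatures are our assumptions.

import torch
import torch.nn.functional as F

THETA = 20.0  # distance cap used by the weight function Lambda in Eq. 5

def active_boundary_loss(P, M, pdb_mask, directions, theta=THETA, eps=1e-8):
    """Eqs. 3-5 for the pixels selected by pdb_mask.
    P: (C, H, W) class probabilities; M: (H, W) distance transform of the GTBs;
    directions: (H, W) target neighbor indices from Phase I."""
    # KL(P_i || P_{i+delta_k}) for the eight neighbors; the neighbor distributions are
    # detached so that gradients only flow into the PDB pixel itself (conflict suppression).
    kls = []
    for dy, dx in OFFSETS:
        P_nb = torch.roll(P, shifts=(-dy, -dx), dims=(1, 2)).detach()
        kls.append((P * (P.clamp_min(eps).log() - P_nb.clamp_min(eps).log())).sum(dim=0))
    logits = torch.stack(kls, dim=0)              # (8, H, W): the logits of Eq. 3

    keep = pdb_mask & (M > 0)                     # pixels already on the GTBs are discarded
    logits = logits[:, keep].t()                  # (N_b, 8)
    target = directions[keep]                     # (N_b,)

    # Label-smoothed one-hot target D^g: 0.8 on the chosen direction, 0.2/7 elsewhere.
    smoothed = torch.full_like(logits, 0.2 / 7.0)
    smoothed.scatter_(1, target.unsqueeze(1), 0.8)
    ce = -(smoothed * F.log_softmax(logits, dim=1)).sum(dim=1)   # cross-entropy of Eq. 4

    weight = torch.clamp(M[keep], max=theta) / theta             # Lambda of Eq. 5
    return (weight * ce).mean()

def total_loss(net_logits, labels, gt_distance, w_a=1.0):
    """Eq. 6: L_t = L_CE + L_IoU + w_a * L_ABL for a single image."""
    P = net_logits.softmax(dim=0)
    pdb_mask = detect_pdb(P)
    directions = target_directions(gt_distance)
    return (F.cross_entropy(net_logits.unsqueeze(0), labels.unsqueeze(0))
            + lovasz_softmax(P.unsqueeze(0), labels.unsqueeze(0))  # assumed external implementation
            + w_a * active_boundary_loss(P, gt_distance, pdb_mask, directions))

The dilation of B and the per-dataset values of w_a used in the experiments are omitted here for brevity.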
Experiments We implemented the ABL on a GPU server (2 Intel Xeon Gold 6148 CPUs, 512GB memory) with 4 Nvidia Tesla V100 GPUs. In this section, we report ablation studies, quantitative and qualitative results obtained from the evaluation of the ABL in image segmentation experiments, and a test of fine-tuning a VOS network. Baselines. We use the OCR network (Yuan et al. 2020a), DeepLabV3 (Chen et al. 2017a), and the SwinTransformer (Liu et al. 2021) as the baseline models for the task of semantic image segmentation. To verify that our ABL can be applied to the task of video object segmentation, we use STM (Oh et al. 2019) as the baseline, since its pre-trained model is publicly available. Dataset. We evaluate our loss mainly on the image segmentation datasets Cityscapes (Cordts et al. 2016) and ADE20K (Zhou et al. 2017). These two datasets provide densely annotated images that are important for the training of our method to align semantic boundaries. The Cityscapes dataset contains high-quality dense annotations of 5000 images with 19 object classes, and ADE20K is a more challenging dataset with 150 object classes. There are 20210/2000/3000 images for the training/validation/testing set in ADE20K, respectively. Following the training protocol of Yuan et al. (2020a), we use random crop, scaling (from 0.5 to 2), left-right flipping and brightness jittering between −10 and 10 degrees in data augmentation. In multi-scale inference, we apply scales {0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0} and {0.5, 0.75, 1.0, 1.25, 1.5, 1.75} as well as their mirrors. Training parameters. We use stochastic gradient descent as the optimizer and utilize a "poly" learning rate policy similar to Chen et al. (2017b) in the training. Hence, the initial learning rate is multiplied by (1 − iter/max_iter)^power with power = 0.9. Sync Batch Normalization (Zhang et al. 2018) is used in all our experiments to improve stability. The detailed training and testing settings for ADE20K and Cityscapes are as follows: • ADE20K: initial learning rate = 0.02, weight decay = 0.0001, crop size = 520 × 520, batch size = 16, and 150k training iterations, which are the same as the settings in Yuan et al. (2020a). Evaluation Metrics. Three metrics, i.e. pixel accuracy (pixAcc), mean Intersection-over-Union (mIoU), and boundary F-score (Perazzi et al. 2016a; Yuan et al. 2020b), are used to demonstrate the performance of the ABL. The first two metrics are used to evaluate the pixel-level and region-level accuracy of a segmentation result, respectively. Boundary F-scores are used to measure the quality of boundary alignment and are computed within the area of the dilated GTBs. The dilation parameters are set to 1, 3, 5 pixels in our implementation. To better preserve boundary details in the evaluation, we do not use a resize operation in testing. Combination of loss terms. To ease the description of the ablation study, we denote different combinations of loss terms used in the training as follows: CE = cross-entropy; CE+IoU = cross-entropy + lovász-softmax; CE+IABL = cross-entropy + lovász-softmax + ABL. w_a is set to 1.0 for the ADE20K dataset but 1.5 for the Cityscapes dataset, since the training images' resolution is much larger for Cityscapes.
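As a small aside, the "poly" learning-rate policy quoted in the Training parameters paragraph above reduces to a one-line schedule. This is a generic sketch of that standard policy rather than code from the paper; base_lr, cur_iter, max_iter and power simply name the quantities mentioned in the text.

def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' policy: the initial learning rate is multiplied by (1 - iter/max_iter)^power."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# With the ADE20K settings above (initial LR 0.02, 150k iterations), the learning
# rate at the halfway point is poly_lr(0.02, 75_000, 150_000) ≈ 0.0107.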
In addition, we rely on the KL divergence of the class probability distributions of adjacent pixels, which can be viewed as the pair-wise term used in a conditional random field (Lafferty et al. 2001). Hence, it is necessary to verify how simply enforcing the KL divergence loss at each edge of an image works in image segmentation, i.e. enforcing the loss for each edge between a pair of adjacent pixels, not only at semantic boundaries. To this end, we define a full KL-divergence (FKL) loss over all image edges according to the ground-truth labels, where e denotes an image edge that connects a pair of pixels e_i and e_j, N_e is the total number of edges in an image, and (G_{e_i} ≠ G_{e_j}) returns 1 if the ground-truth label of pixel e_i is not equal to the label of e_j, and 0 otherwise. If the FKL loss is used with the cross-entropy and lovász-softmax losses in the training, we denote this combination as CE+IFKL. Ablation Studies Loss terms. We first test the influence of loss terms on the Cityscapes validation dataset by re-training the DeepLabV3 network and show the results in Tab. 1. Since the gradient of ABL is not useful when the PDBs are far from the GTBs, adding ABL at the beginning of the training does not improve network performance. Thus, we start to add ABL at the last 20% of epochs to verify its effect, but only obtain a 0.1% improvement in mIoU. Then, we re-train the network with CE+IoU and CE+IABL. It shows that adding ABL to CE+IoU in training can increase the mIoU by 0.3%, and the combination of the IoU loss and ABL, i.e. CE+IABL, contributes a 1% improvement in mIoU in this study. Although the ABL does not contribute most to mIoU in this case, we do see an obvious improvement of boundary alignment in qualitative comparisons. In Tab. 2, we test the contribution of each loss term on the ADE20K dataset by re-training the OCR network. While the mIoU and pixel accuracy can both be improved after adding the IoU loss and ABL, CE+IABL contributes most of the improvement to mIoU, by around 0.65% over CE+IoU in the single-scale inference, which verifies the contribution of the proposed ABL in this experiment. We argue that the ABL can contribute more to a dataset with a large number of semantic classes and hence more GTBs. For instance, ADE20K has 150 classes, while Cityscapes only has 19 classes. More GTBs give the ABL more space to adjust the network's behavior. Detaching operation. We verify the effectiveness of the detaching operation in Tab. 1. Significant drops of pixel accuracy and mIoU can be observed when training without the aforementioned detaching operation to suppress conflicts. Hence, it is important to control the gradient flow when there exist contradictory targets for the KL divergence between two neighboring pixels. FKL loss. In Tabs. 2 and 7, it can be seen that the combination of the IoU loss and FKL, denoted by IFKL in the 3rd row, can also improve the pixel accuracy and mIoU quantitatively. However, CE+IFKL does not perform as well as CE+IABL. We speculate that this is because FKL treats every pixel equally, while the ABL pays more attention to the pixels on the PDBs. Such a design allows the network to adjust its behavior in a progressive way, avoiding over-confident decisions when updating the network parameters. Boundary pixel number threshold. We evaluate the influence of different thresholds on the Cityscapes validation set with the FCN [backbone: HRNetV2-W18s]. The results are as follows: mIoU 75.59% using the 1% threshold, 75.46% using 2%, and 75.41% using 0.5%. This empirically verifies that our choice of the 1% threshold is reasonable. The degree of ABL's dependence on the IoU loss. To evaluate ABL's contribution further, we design an IoU weight decay experiment, which linearly decreases the weight of the IoU loss from 1 to 0 during training but increases the weight of ABL from 0 to 1. It achieves mIoU 75.65% on the Cityscapes validation set with the FCN [backbone: HRNetV2-W18s], comparable to the mIoU 75.59% trained with CE+IoU+ABL without weight decay. It can be seen that the decreased IoU weight does not lead to a downgrade of segmentation performance. Moreover, we do observe that ABL can refine the semantic boundaries of thin structures and complex boundaries, as shown in Tab. 6 and Figs. 4-6.
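The IoU weight-decay experiment described above amounts to a linear trade-off between the two boundary-related loss weights over the course of training. The sketch below is our paraphrase of that schedule; the text only states that the IoU weight goes from 1 to 0 while the ABL weight goes from 0 to 1, so the per-iteration linear interpolation is an assumption.

def iou_abl_weights(cur_iter, max_iter):
    """Linearly decrease the lovász-softmax (IoU) weight from 1 to 0 while
    increasing the ABL weight from 0 to 1 over the whole training run."""
    t = min(max(cur_iter / max_iter, 0.0), 1.0)
    return 1.0 - t, t  # (w_iou, w_abl)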
Quantitative Evaluation Results on ADE20K and Cityscapes validation sets. In Tabs. 3 and 4, we show that training with IABL along with the cross-entropy loss can improve the pixel accuracy and mIoU of state-of-the-art image segmentation networks. As for the ADE20K dataset, training the OCR network (Yuan et al. 2020a) with the additional IABL improves the mIoU and pixel accuracy by 1.22% and 0.23% over that trained with cross-entropy only on the validation set. Comparison with Segfix. We compare our method with Segfix (Yuan et al. 2020b) on the Cityscapes validation set by using the mIoU and boundary F-score metrics, since both methods focus on improving boundary details in semantic segmentation. In Tab. 5, our method achieves comparable performance when using DeepLabV3 as the segmentation network, but improves the mIoU over Segfix by 0.3% when using the OCR network. In Tab. 6, we show the class-wise boundary F-scores of Segfix and our method. The scores are computed using the GTB dilation parameters 1, 3, and 5 pixels. While Segfix outperforms in the case of 1 pixel, our method achieves a higher score in the cases of 3 and 5 pixels. Segfix is an elegant boundary refinement solution that propagates the interior labels to class boundaries. However, the propagation operation might downgrade the segmentation performance for thin objects that contain a small number of interior pixels. In contrast, the ABL is an end-to-end training loss that encourages the alignment of PDBs and GTBs, which achieves better mIoU and boundary F-scores, even for thin structures. Taking the class of traffic light as an example (Tab. 6, 9th column), our method achieves a consistent improvement of the boundary F-score over Segfix in all parameter settings, which shows that our method can handle boundaries of thin objects well. Since Segfix is a post-processing method, it can also be used to improve the segmentation results of networks trained with our loss. Comparison with Boundary Loss. We follow the experimental setting of (Kervadec et al. 2019) for a fair comparison. We also use the same learning rate 0.001, batch size 8, training epochs 200, and loss conjunction method: Loss = α * GDL + (1 − α) * ABL, where α linearly decreases from 1 to 0.01. In Tab. 8, we show that training with ABL + GDL achieves a higher Dice similarity coefficient (DSC) and a smaller Hausdorff distance (HD) than Boundary Loss (BL) + GDL. Moreover, we extend BL to a multiple-class loss and make a further comparison on the Cityscapes validation set. In Tabs. 1 and 2, IoU+ABL achieves a higher mIoU than IoU+BL. The motivation of BL is to minimize the distance between GTBs and PDBs. With the geo-cuts optimization techniques (Boykov et al. 2006), this problem is converted to minimizing a regional integral. This behavior will weaken the influence of pixels near the GTBs, since the distance weights there are much smaller, and the ratio of these pixels is small compared to the image size. In contrast, ABL focuses on PDB pixels, which can achieve better alignment. BL needs to work with a region-based IoU loss, GDL, to avoid making the network collapse quickly into empty foreground classification results. Similarly, we use the ABL and IoU loss together. VOS results. We fine-tune the state-of-the-art VOS network STM (Oh et al. 2019) with our loss to verify that our method can also be applied to VOS.
Specifically, the STM is fine-tuned for 1k iterations with batch size 4 on both the DAVIS-2016 (Perazzi et al. 2016b) and YouTube-VOS (Xu et al. 2018b) training data. The learning rate is set to 5e−8, and the weight w_a is set to 5.0. In Tab. 7, it can be seen that fine-tuning with the additional IABL can improve the region similarity metric J-mean and the contour accuracy F-mean by around 0.7% and 1%, respectively, when testing on the DAVIS-2016 validation set. Similar to image segmentation, training with CE+IABL can improve over the CE+IoU loss, which also verifies ABL's contribution in VOS. However, adding the FKL loss does not show superior performance, as shown in the 5th column of Tab. 7. Fig. 4 illustrates the progressive refinement of boundary details when using IABL as the additional training loss. This result is obtained when training DeepLabV3 on the Cityscapes dataset. It can be seen that the PDBs (red lines) of the traffic light and other objects are pushed toward the GTBs (blue lines). In Figs. 1 and 5, we show how adding loss terms influences the quality of semantic boundaries. The results show that the proposed ABL can greatly improve the semantic boundary details. Fig. 6 illustrates the improved boundary details when fine-tuning STM with the additional IABL. It also shows that fine-tuning with CE+IABL can further improve the boundary details over CE+IoU, such as the tail of the motorcycle. Conclusion In this work, we proposed an active boundary loss to be used in the end-to-end training of segmentation networks. Its advantage is that it allows the propagation of the ground-truth boundary information using a distance transform so as to regulate the network behavior at predicted boundaries. We have demonstrated that integrating the ABL into the network training can substantially improve the boundary details in semantic segmentation. In the future, it would be interesting to investigate how to reduce conflicts in our loss to further control the network behavior around boundaries efficiently. In addition, we plan to explore how to design a boundary-aware loss to improve the boundary details in the task of depth prediction.
2021-02-05T02:15:37.867Z
2021-02-04T00:00:00.000
{ "year": 2021, "sha1": "c35ae7f62bd662eaf09834d962366202225d10dd", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "c35ae7f62bd662eaf09834d962366202225d10dd", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
144217228
pes2o/s2orc
v3-fos-license
The Use of Metacognitive Knowledge in Essay Writing among High School Students This paper reports part of a larger project aimed at evaluating the effectiveness of metacognitive strategies on students' performance in essay writing. The aspects of metacognitive strategies considered in this study include the use of declarative knowledge, conditional knowledge, and procedural knowledge. The focus of this paper is on the use of metacognitive strategies during the writing activity. Before the intervention process, the participants were given a task to write an essay, after which they were asked to do a metacognitive reflection. Data were analyzed using a content analysis procedure. The respondents consisted of 18 secondary school students from a poor urban community. The data revealed that ten participants did not use declarative knowledge, namely: i) they did not make an outline of the essay before writing, ii) they did not identify keywords that represent the requirements of the question, and iii) they did not know how to expand ideas. These results indicate that students did not possess enough declarative knowledge about writing. The study also found that 11 respondents did not use declarative knowledge in the attempt to expand ideas. In terms of conditional knowledge, nine of the respondents still could not identify when and why certain strategies should be used. In terms of procedural knowledge, the study showed that none of the participants showed the use of the important steps needed in writing a good essay. In conclusion, this study provides evidence of the need for an intervention or teaching modules to help improve students' writing skills. Introduction Studies of the learning process have found that students are more able to learn complex skills when they can think "metacognitively," that is, when they think about their own thinking and performance so they can consciously monitor and change it. Baradaran and Sarfarazi (2011) stated that the use of teaching principles based on cognitive and metacognitive strategies as scaffolding, through contextualizing, modeling, discussion, contingency, and construction in the Zone of Proximal Development (ZPD), can solve the problems in teaching writing skills in English as a Foreign Language at the Islamic Azad University of Mashhad. The result shows that the scaffolding technique can improve students' performance in writing through generating ideas, structuring essays, drafting, writing and editing. Tufekci and Sapar (2011) also noted that a constructive method improves students' ability to produce creative writing as well as helps to improve their communication skills, knowledge of grammar and vocabulary, and awareness of the relationship between culture and language. This method also enhances students' motivation to learn a foreign language. Therefore, approaches based on cognitive and metacognitive strategies are able to produce students who can generate ideas critically and analytically in their writing.
Literature shows that there are weaknesses in students' writing skills.National Assessment of Educational Progress (NAEP; Salahu-Din, Persky, & Miller, 2008), only 33% of eighth-grade and 24% of 12th-grade students perform at or above the proficient level in writing (defined as solid academic performance).Student who score below this level are classified as obtaining only partial mastery of the literacy skill needed at their respective grade.If partial mastery is interpreted as performing below grade level, then 67% of eight-grade and 76% of 12 th-grade students can be considered as writing below grade level.Malaysian Examination Board Report (2010) also revealed that the number of students with moderately in essay writing is more than a high performance (excellent) and low.Many weaknesses that need to be improved to achieve the level of mastery of the writing at the level of honors and awards.Khir and Marzukhi (2009) reported that most students cannot write a good, accurate, and meets the requirements of the questions in a timely manner.Students do not have extensive knowledge about the topic of the essay is written, and the students are not able to display the contents of a brilliant, cannot describe the content in a clear and precise, and no evidence or examples of appropriate and clear to every argument put forth.Essay writing presented by students is not growing.Students write one or two sentences about the topic statement, but did not elaborate title by associating it with the current issues.In fact, the introduction paragraph does not have a strong association with the paragraph content.The same is done by the student in writing of the contents of paragraph essays in which most students cannot write a paragraph to fill more than 100 words and only use 30 to 40 words only. In the case of writing activity, it is hypothesized that students who are weak in writing skills are related to their thinking skills.Writing activity involves high order thinking skills.It is one of the most difficult skills in language proficiency.Review of past studies revealed that students did not have thinking and learning management skills (Shahlan, 2012).In addition, there is a lack of metacognitive knowledge namely declarative knowledge, conditional knowledge and procedural knowledge in writing tasks (Saemah, 2010).It is not known how students capitalized this kind of thinking skills in their writing activity.Therefore, the objective of this paper to discusses, analyze and to determine how the students used metacognitive knowledge includes declarative knowledge, procedural, and conditional when producing essays writing among secondary students in Malaysia. Metacognitive Knowledge In this study, metacognitive strategy refers to the use of metacognitive knowledge namely declarative knowledge, procedural knowledge and conditional knowledge in essay writing.Flavell (1976Flavell ( , 1978Flavell ( , 1979) ) described metacognitive knowledge as consisting of knowledge or one's belief in basic knowledge about the factors that influence cognitive process.He divides knowledge into three categories, namely knowledge about your own self or individuals (declarative knowledge), knowledge of the tasks or activities (procedural knowledge) and knowledge of learning strategies (conditional knowledge). 
Declarative knowledge (facts and information) is "knowledge about" or "knowledge concerning".Some researchers argue that all declarative knowledge is stored or disclosed in statements and joint statements in memory (Anderson, 1985).Declarative knowledge includes facts, beliefs, opinions, generalizations, theories, hypotheses and attitudes towards something, someone and yourself (Paris, Lipson, & Wixson, 1983).Miechenbaum and Biemiller (1998) and Forgaty (1994) Procedural knowledge refers to knowledge about the 'how' to conduct cognitive activities (Anderson, 1990;Hunt et al., 1989;Paris et al., 1983).Meanwhile, A. Reber and E. Reber (2001) state procedural knowledge is the knowledge that helps us control the relevant factors when evaluating a particular phenomenon (specific steps taken in solving a particular task or activity).An example of a procedure in writing an essay is the use of metacognitive strategies.Its use can be recognized when the situation arises; (i) how can I use the information properly?(ii) how can I present this information?(iii) what are the steps I need to use in completing the task?Therefore, knowledge of important procedures in carrying out cognitive activities is expected to improve writing proficiency among students thereby improving achievement in writing (Anderson, 1990;Hunt et al., 1989;Paris et al. ,1983). Conditional knowledge refers to the question of when and why a certain strategy or procedure was used (Woolfolk, 2008).In this study, conditional knowledge is a description of the context and appropriate situation with the application or procedure in relation to the writing technique or strategies.As a conclusion, this study focuses on the use of metacognitive elements (declarative knowledge, procedural knowledge and conditional knowledge) in helping students produce good essays and expert.Metacognitive teaching and learning strategies have been developed based on the theory of metacognition through metacognitive reflection among students. The implementation of these activities can be explicit using the plan technique, drafting introduction technique, and expand the topic sentences, review, edit and conclusion.Metacognitive knowledge and cognitive regulation complement each other during the writing process of teaching and learning.This is because declarative knowledge is important when making the acquisition, transfer and settlement of essay writing assignment whereas procedural knowledge is important when performing routine or tasks (Gagne, 1985). 
During the writing process, students will receive two types of knowledge about the language they are learning (Faerch & Käsper, 1983).The first is the declarative knowledge that is implicit or implied and involves internalization or absorption of language rules, such as definitions of words, aspects of grammar and spelling.Secondly, which is the procedural knowledge, is generally used implicitly or explicitly (Carr, 2010).These strategy and procedures are used to process the information and language skills (e.g.writing and reading).In an effort to continue the learning process of writing skills, it starts from declarative knowledge to procedural knowledge until the performance of these skills become more automatic.The expand topic sentences technique uses declarative knowledge to answer questions on when and how they are used to expand the contents of which are listed (Shahlan, 2012).Therefore, the declarative knowledge, procedural and conditional knowledge about writing technique and strategies are important metacognitive elements as it help students learn how to learn. Methodology The research design is a survey which aims to identify the extent of metacognitive knowledge is used in the production of an essay writing.The participants comprised of 18 students from one secondary schools in Malaysian Education system (age between 14-16 years old) from one poor urban community and have low academic achievement.Respondents were assigned to all student (18 students) who participate in academic tutoring classes under MyKasih programme.This is because previous studies showed that the use of metacognitive knowledges can improve the performance suitable for use on low and higher achievements among students, especially in writing activity (Shahlan & Saemah 2012).They were given a task to write an essay and after that they were asked to do metacognitive reflection explaining step by step how they come up with the essay during the writing process.Data were analyzed using content analysis procedure. Declarative Knowledge Based on the respondents' metacognitive reflection, the respondents were able to master the declarative knowledge by understanding the task given to them.This is shown from the respondents' responses below; I read the instructions because I can understand the questions and then I will start writing the essay because I already know what is required in the task.(respondent 1) I will read the questions first because I want to know how to write this essay.(respondent 2) First, I will read the title of the question in order to understand the title…(respondent 4) Try to understand the title given because we need to understand the title that was given before writing an effective essay (respondent 5) Try to understand the title given and think of the content (respondent 7) By repeatedly reading the questions has also assisted me in searching for relevant information (respondent 10) The result of the analysis of the findings on documents and metacognitive reflective journal showed that 10 respondents did not use declarative knowledge in writing the outline of the essay, did not focus on the key words and thus were unable to elaborate the important points.This has caused difficulties in the students to obtain ideas and write the appropriate introduction.It shows the respondents of this research have not been mastered declarative knowledge.These are the responses given by them. After that, I will think for a while on how to start writing the topic sentence.(respondent 5). 
After that, I will start thinking of some the points because I want to complete writing the essay. After that I will think for a while to find ideas to start writing the essay (respondent 15) For a moment I will think of how I came about choosing those points for the content in order to expand the essay.(respondent 14) Trying to find ideas, searching for relevant materials.(respondent 8) Based on the metacognitive reflection and the document analysis, 3 respondents succeeded in providing good main points.The reason is that they were able to relate their experiences and their previous knowledge.Therefore, declarative knowledge can be applied when someone wants to know or understand the given question.This is shown in the responses below; I find the main points by reminiscing the past or the mistakes that I have made.(respondent 5) After that, I get ideas from my experience and it has helped me to write.(respondent 7) In order to find the first main point, I will recall my experience or put myself in the situation.(respondent 9) Based on the metacognitive reflective analysis and the students' answers, several factors have been identified; Outlining the Main Points Results show 10 respondents did not stress the importance of writing the outline as the approach in writing the essay.Consequently, they did not elaborate the main points and they only wrote whatever came across their minds without any systematic planning.However, those who wrote the outline gave these responses; Write the points that I have brainstormed so that I will not forget and outline the points.(respondent 11) Write the outline to ease the process of writing the essay.(respondent 12) I will rephrase the title when writing the introduction.(respondent 13) After that, I will write an outline to ensure that I follow the correct format of essay writing (respondent 15) After that, I will try to write an essay on a different sheet of paper first (respondent 6) Students Were Merely Listing Important Points In this research, 14 respondents were not only incompetent in the use of declarative knowledge in elaborating the ideas but also the inability to use the discourse markers effectively in their essays.This is highlighted in the responses below; The things that I was supposed to do….obtain high marks help my parents in times of hardship and try not to offend them by hurting their feelings.(respondent 8) I help by parents with the house chores like, cleaning the house, sweeping the floor and hanging out the clothes (respondent 3).This sector can be developed with good management practices when communicating with the clients.This will in turn satisfy tourists who are travelling in Malaysia (respondent 14) Only 4 respondents were able to develop the salient points effectively as shown in their responses below: The government can help in developing the tourism sector by advertising the places of interests in our country.Thus, the society who are really keen to visit these places…(respondent 13) …to maintain the cleanliness of the public toilets and public places.This is because, cleanliness in public places…(respondent 10) …by running many promotions about the interesting places in Malaysia.This can be implemented through advertisements…(respondent 11) Conditional Knowledge The use of conditional knowledge encompasses the aspects of when and why the students are not using the strategy.The findings also showed that all the 18 respondents are not competent inconditionalskills which refer to when and why certain steps should be used.The 
respondents can only perform the writing task based on their basic knowledge of essay writing which include introduction, the content of the body and the conclusion as based on the following statements; After I have written the introduction, I immediately wrote the first main point.However, after writing several lines I found the essay that I have written was out of topic or off topic.Consequently, I wrote a new introduction (respondent 10) I start thinking of the first main point related to the title or question without drafting anything (respondent 13). Then, I understood the meaning and tried to write grammatically correct sentences to strengthen my essay.(respondent 15) Subsequently, I will think for a while and plan on how to write the sentences…(respondent 5) Procedural Knowledge Based on the answers given by the respondents at the beginning of the evaluation, it was found that the respondents have not mastered procedural knowledge.Some of the introductions that were witten did not have the characteristics of effective sentences that will impress the reader namely to focus on the requirements of the question, to state the consequences, to provide examples, to explain the stepsand to express the importance of the points stated in the essay.In other words FCESInemomic technique (Focus, Consequence, Example, Steps and Importance) are not used in the essay.There are some respondents who did not writethe introductory paragraph and instead continue to write the main points only.Some examples of the paragraph writing are shown below; Essay Question: Parents are people who should be loved.What are some ways you should do to show them that you truly love them? Parents are people who should be loved.(F).Without them, we will not exist in this world (I) (ST3/4).Some of the things that I should be doing are helping my parents.I should help them to wash the dishes and do the laundry (S) (ST2/1). Parents are people who are loved by many (F).We really need them irrespective whether it rains or shines (I) (ST2/4). Parents are very important to the daily lives of children when their children need it (I) (ST3/1). The things should I should do are to help my parents.The things that I do are such as washing the dishes, hanging out the clothes, washing the clothes and taking care of my siblings (E) (ST2/2). In this decade, parents are people who are very important in our lives (F).In addition to showing our love to them, we can also show other ways to portray our love to them (E) (ST3/3). Only one respondent successfully used the techniques of writing by writing the introduction with the emphasis on the characteristics of good introduction in writing. Parents are noble people (F).They have been taking care of us and shower us with love (E).Their dedication and sacrifice on our behalf can never be fully repaid (I).However there are many ways for us to prove our love to them (emphasis to the subject) (ST2/5). Based on several answers of the respondents, we can compare the suggested answers using FCESInemonic technique for good introduction writing essay (Shahlan 2012). 
Parents are people whom we truly love in a family (focus).Without our parents, we would not be in this world (consequence).This is because they give birth to us, shower us with all their love and shape our future (example).We should strive to provide our unconditional love to our parents (steps).Therefore, the practice of the love towards our parents should be instilled in each individual's life so that life will become more meaningful (importance). Most of the respondents were able to list the ideas or main points, but they could not elaborate the ideas properly.They lack the use of appropriate discourse markers.This can be shown given in the example of the second paragraph (first main point). Things that I should do to my parents are by getting excellent exam result, helping my parents when in trouble and not hurt their feelings (ST3/4).I help my parents to clean the house, sweep the floor and hang out the clothes to dry (ST2/1).I help my parents to clean the house every day.I sweep the rubbish, mop the house and clean the windows.I spend my free time by helping them to clean the house (ST2/2).Some of the things that I do...one of it is to learn diligently regardless in school or at home so that my our parents effort will not go to waste in raising us to be a useful person (SlT3/1).Some of the things that I ought to do for my parents are to make them happy because they want their children to live comfortabl...they always say work hard at first, and enjoy the fruits of labour.They want to live comfortably during their old age (ST3/2). Things I should do to help parents by helping is when mother is cooking I will help her to cook mother.During mother's day, I will give a gift in honor of mother's day (ST2/4). By cleaning the house while they are busy working or earning a living, when we clean the house while they are not at home, it will reduce their burden to do house work when they return home.Indirectly we are able to show our appreciation to them (ST3/3). Among them are shaking hands and kissing our parents before going to sleep each day.In this way they will be reminded that they always being loved.This act will also strengthen ties between families (S2/5). Based on analysis of the respondents' documents, none of the respondents was able to master the ways on how to develop the content of the essay from the main points.Overall, the content of the paragraph writing shows glaring grammatical errors in both words and sentences.Even though the content is relevant, the explanation was unclear as most of the respondents were not able to elaborate on the salient points for content of the essay.The development of the content was also less interesting, inappropriate paragraph and not a single discourse marker was used in writing the paragraph.For example …, conclusion ..., etc. 
Discussion Results of the study provided evidence on the lack of metacognitive skills namely declarative knowledge, procedural knowledge and conditional knowledge that contribute to student achievement on the essay wriring.It is therefore appropriate for language educators to focus on these elements during the process of teaching and learning.With the use of metacognitive teaching strategies, the elements of metacognitive knowledge are emphasized.The focus of the activities carried out in this strategy is to plan, draft, revise or edit the essay writing process.The implementation of these activities can be explicit using the plan technique, drafting introduction technique, and expand the topic sentences, review, edit and conclusion.Metacognitive knowledge and cognitive regulation complement each other during the writing process of teaching and learning.This is because declarative knowledge is important when making the acquisition, transfer and settlement of essay writing assignment whereas procedural knowledge is important when performing routine or tasks (Gagne 1985). During the writing process, students will receive two types of knowledge about the language they are learning (Faerch & Käsper, 1983).The first is the declarative knowledge that is implicit or implied and involves internalization or absorption of language rules, such as definitions of words, aspects of grammar and spelling.Secondly, which is the procedural knowledge, is generally used implicitly or explicitly (Carr, 2010).These strategy and procedures are used to process the information and language skills (e.g.writing and reading).In an effort to continue the learning process of writing skills, it starts from declarative knowledge to procedural knowledge until the performance of these skills become more automatic.The expand topic sentences technique uses declarative knowledge to answer questions on when and how they are used to expand the contents of which are listed (Shahlan, 2012) According to Baker and Brown (1984), metacognitive skills allow students to control the development of what he or she has learned and try to understand something.Thus, metacognitive awareness relates to the process of thinking and learning strategies, while procedural knowledge is directly involved with the monitoring, directing learning and thinking.All these processes translate in the metacognitive strategies such as focusing on learning, organizing and planning the learning and assessing whether the learning is successful or otherwise.Therefore, coordination between metacognitive awareness and thinking processes in accordance with the views of Piaget (1973) which states that metacognition can be developed when the child enters the formal operational level and capable of thinking to a more abstract thinking.Through the practice of metacognitive reflection, students will remember and reflect on the learning process that occurs. 
Metacognitive reflection helps students to be aware of the learning process through which it passes (Beyer, 1988).In this study, among the activities involved in the process of metacognitive reflection is to evaluate the achievement of learning outcomes and learning content (the content of a subject) and the process of learning how to learn (Saemah et al., 2010).Metacognitive reflection practices through metacognitive strategies technique encourage students to make self-reflection.Personal reflection allows the students to identify the advantages and disadvantages of a writing assignment and follow-up action plan to improve the quality of learning.Through self-reflection activities, students can familiarize themselves with self-questioning.Practices of self-reflection, students tend to form a frame, reconstruct the frame, and a new action plan on an ongoing basis as recommended by Schon (1983).Techniques used in the metacognitive strategy against the students are able to make students more aware of metacognitive awareness when, how, and why the technique is used.As a result, the depicting elements of the explanation metacognitive strategies influence and contribute to the increase in mean score essays. Conclusion Declarative knowledge, procedural and conditional knowledge about writing technique and strategies are important metacognitive elements as it help students learn how to learn.Consequently, it contributes to students' performance in essay writing.However, the study found, students still lack of these skills.It is hypothesized that using the metacognitive strategy in writing will enhance students writing skills.Therefore, it is suggested that language educators give attention to the importance of these elements during the teaching and learning writing skills.
2018-12-05T16:22:56.516Z
2014-12-21T00:00:00.000
{ "year": 2014, "sha1": "1ffe2786b876516fb231494d66049381d5220f77", "oa_license": "CCBY", "oa_url": "https://ccsenet.org/journal/index.php/ies/article/download/43619/23824", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1ffe2786b876516fb231494d66049381d5220f77", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
213781608
pes2o/s2orc
v3-fos-license
RELATIONSHIPS OF EFFECTIVE FAMILY COMMUNICATION AGAINST YOUTH SEX BEHAVIOR Dya Sustrami, Setiadi, Diyah Arini, Astrida Budiarti and Varinta Putri Pratanti. Medical Institute Of Hangtuah University Surabaya. ...................................................................................................................... Manuscript Info Abstract ......................... ........................................................................ Manuscript History Received: 03 October 2019 Final Accepted: 05 November 2019 Published: December 2019 Teenagers will experience psychological and biological changes generally. The changes that occur in teenagers provide a strong impetus to do things that they find interesting in it. Most of teenagers were explore sexual information through their environment because they assume that it could be free to ask their friends without any restrictions, meanwhile they assume that parents often forbid them to ask questions or talk about matters that relate to sex on the grounds that sexuality is a taboo subject to be discussed with children (Lou & Chen's, 2009) in (Fauzy, 2014a). Juvenile sexual behavior among teenagers today is very worrying. Many juvenile behaviors violate the norms in society. If their parents are set them free without controlling and are not accompanying them in obtaining sexual information, it can cause a bad impact on juvenile sexual behavior as a result many incidents of abortion, HIV and AIDS Collen et al, (1999) in (Suwarni, 2009). One method that can be done is by having good communication between parents and children, controlling children's friend environment , and how the education they get at school. However, the lack of communication and attention between parents and their children caused juvenile are more likely to choose to look for information outside the home, fill the spare time with negative things and engage in sexual behavior. This is supported by Hutchinson (2003), in (Lenciauskiene and Zaborskis, 2008) show that the quality of parent and children relationships is a social interaction that affects early sexual behavior among juvenile. Juvenile who actively communicate with their parents ISSN: 2320-5407 Int. J. Adv. Res. 7 (12), 309-314 310 have lower rates of free sex compared to passive teenager in communicating with their parents. According to Cleveland et al, (2003) in (Ellis et al, 2013) it shows that teens who have a close relationship with their parents are less likely to be involved in risky teen dating relationships. According to Rubin et al, (2006) in (Ellis et al, 2013) Peers are the main source of influence on adolescent attitudes and behavior in the context of dating relationships. According to WHO (2010) in (Wulandari, 2013) teenager's growth and development is divided into three stages; early teenager aged 11-14 years, mid-teenager aged 14-17 years and late teenager aged 17-20 years. The number of Indonesian teenagers in 2010 was 237.6 million people, 26.67% that will affect the development of social, economic and demographic aspects both now and in the future (BKKBN, 2011). Surabaya is the capital of East Java Province which has been quite advanced, moreover it is coupled with the increasing number of nightclubs, which represent that East Java, especially Surabaya is the metropolis provincial capital. Educational Hotline survey results that 44% of high school students assume that sex when dating is a natural thing. The police found that most of molestation victims and human trafficking were students. 
Police data shows that human trafficking cases during 2012 reached to 20 cases (Source: Jawa Pos, January 1 st , 2012) in (Susanti, 2013). Professor of the University of Dr Soetomo (Unitomo) Surabaya who is also the Secretary of the East Java AIDS Commission Drs. Otto Bambang Wahyudi said, East Java was first ranked in Indonesia regarding cases of HIV / AIDS sufferers, there were 18,008 cases that were discovered by the East Java AIDS Commission during 2017 and the largest number of people affected by HIV / AIDS are in Surabaya which is reached to 7,000 people. Patient age is productive age, between (Scott and Rickard, 2008) interactions of parents and friends have an influence on adolescents, they get information about sex in a willingness not to engage in sexual behavior. If the child has not be a teenager yet and unfortunately they get more specific information about sexual behavior from their friends it can cause a risky behavior. According to Kinsman et al, (1998) in (Scott and Rickard, 2008) friends have the influence of creating normality needs in individuals which cause sexual behavior to begin in order to meet this standard of normality. Friends influence at its peak is during grade 11 and grade 12 (Dilorio et al, 1999), Treboux and Busch (1995) in (Scott and Rickard, 2008). The role of parents is to educate children to avoid sexual behavior because parents are the child's first environment to get an education. Teenagers who involved in high-risk behavior often realize that their behavior is risky but do not believe that they are personally at risk (Van Der Pligt, 1996) in (Scott and Rickard, 2008). Parents have an important role in reducing sexual behavior in teenagers. In this case parents must provide information to children about the dangers of free sex and ensure they are in a healthy environment (Koss, 2011). According to Agha & Rossem (2004) in (Fauzy, 2014a) shows that prevention in free sex behavior in teenagers can also be done in the school environment conducted by the school. The provision of sex education from an early age makes the teenager be more careful and take care of himself in his behavior. If this is not done, there is a risk of reproductive health problems such as pregnancy outside marriage, abortion, and sexually transmitted infections. Based on this, the researcher is interested in taking the title of "Relationships of Effective Family Communication Against Youth Sex Behavior in 11 th Grade Student of Barunawati High School Surabaya". Materials And Methods:- This type of research is observational analytic. The type of design used is cross sectional. This research was conducted at Barunawati High School in Surabaya and was carried out on 12 April 2018 -April 16, 2018. The population in this study were 11 th Grade Student at Barunawati High School Surabaya, which totaled 214 respondents. In this study, questionnaire data collection instruments to measure the risk factors for rheumatoid arthritis and data collection to determine the occurrence of rheumatoid arthritis in the elderly with questionnaire sheets. Results:- Statistical test results using the Chi-square test obtained significance value ρ = 0.035 with a degree of significance (ρ <0.05) it can be concluded that H1 is accepted, which means there is a relationship between effective family communication against teenage sex behavior of 11 th Grade Student in Barunawati High School, Surabaya. 
According to (Sustrami, 2012), the more experience a person has, the better the way they communicate and attitudes will also affect the communication process can be effective or not. From the analysis of the answers to the questionnaire questions effective family communication of 140 respondents 108 (77.1%) experienced ineffective communication. First the respondent answered question no. 5 which reads "I always communicate with parents when facing problems with a boyfriend?" This question is part of the favorable question for items / parameters of ineffective communication. Discussion This is in line with (Noegroho, 2014), communication between parents and teenagers can be interpreted as a conversation between parents (can be father and / or mother) with teenager who occur in the family and the main purpose of family communication is to maintain interaction between one member with other family members so that effective communication is created. Communication can be ineffective if adolescents feel their relationship with parents is lacking in good communication and they increasingly feel they don't get attention in facing problems faced especially around physical and psychological development. So that teens are lazy to ask questions to communicate with their parents. The question number 6 is "Have your parents ever provided information that related to sexuality?" This question is part of the favorable question for items / parameters of ineffective communication. This is supported by (Noegroho, 2014) the family has an important role in the development of the child's personality because in the family the first time children get experience and education about the dangers of sexual behavior, so it is necessary to instill a strong self-foundation in children for example by providing sex education, information about the dangers of premarital sexual behavior in order to minimize the occurrence of adolescent sexual behavior. Communication that exists between parents of children if parents and children do not have good and open communication without feeling taboo when talking about sexual behavior to teenagers. Effective communication between parents and teenager has been identified as the main strategy in promoting responsible sexual behavior and minimal risky sexual experiences in teenager (Burgess 2005) in (Gistina, 2017). According to Sarwono (2007) in (Syaputri, 2014), the role of parents is very large in providing choices of answers to the behavior and questions asked by children. Wise parents will provide more than one answer and alternative so that the teenager can think further and choose the best, while parents who are unable to provide explanations wisely and be rigid will make children lazy to ask questions and exchange opinions with parents so that teens will look for information outside the home. Youth sex behavior of 11 th grade Student of Barunawati High School, Surabaya:- Factors that can influence youth sexual behavior are the relationship of teenager's parents, friend negative pressure is a significant influence, both directly and indirectly on youth sexual behavior. Most teenagers say that they cannot talk freely with their parents about sexual matters, so teenagers talk about sexual matters with peers which affect teenage sexual behavior. 
According to Sarwono (2011), cited in (Najib, 2016), teenager premarital sexual behavior is influenced by several factors: 1) Biological: biological changes that occur during puberty and hormonal activation that can trigger sexual behavior; 2) Parents' influence: the lack of open communication between parents and teenagers about sexual problems can strengthen the emergence of deviant sexual behavior; 3) Friends' influence: peer influence makes teens tend to adopt peer norms rather than the existing social norms. According to the research of Seotjiningsih (2006), cited in (Sapto, 2011), the factors that influence adolescent sexual behavior are the adolescent-parent relationship, negative peer pressure, and exposure to pornographic media, which have a significant effect, both directly and indirectly, on teenage sexual behavior. According to Marbiyati (2016), cited in (Sustrami, 2017), and in accordance with Notoatmojo's theory, knowledge is a very important domain in the formation of individual behavior; behavior based on knowledge, awareness, and a positive attitude will last a long time. According to Kinnaird (2011) and Sarwono (2010), cited in (Syaputri, 2014), the factors that influence sexual behavior in teenagers are internal and external factors, including contact with information sources and family characteristics such as the level of family education as a social support for providing information to the child. According to Sarwono (2007), cited in (Syaputri, 2014), sex education is a way of teaching or education that can help teenagers deal with life problems that originate from the sexual drive. Thus sexual education is intended to explain all matters relating to sex and the dangers of sexual behavior, so that teenagers will better understand the risks of engaging in deviant behavior. From the analysis of the answers to the questionnaire on youth sexual behavior, of 140 respondents, 129 (92.1%) are at risk of youth sexual behavior, based on criteria such as knowledge, attitudes and actions having values that indicate risk. The respondents answered question number 3, which is "Masturbation can cause impotence in men (a condition in which a man's genitals are unable to get an erection)?" This question is part of the unfavorable questions for the items/parameters of knowledge about sexual behavior. Sarwono (2011), cited in (Najib, 2016), states that premarital sexual behavior is all behavior driven by sexual desire carried out by two people, a man and a woman, outside of legal marriage. Youth sexual behavior is activity between the opposite sexes that arises because of sexual urges, or activity to obtain pleasure from the sexual organs through various behaviors, without the existence of marriage ties. Second, respondents answered the attitude question no. 4, which reads "I will obey whatever my boyfriend wants so as not to break off the dating or engagement relationship?" This question is part of the parameter questions about attitudes toward sexual behavior. Third, respondents answered the action questions: question no. 1, which reads "Holding hands with a girlfriend is a normal thing?"; question number 4, which reads "I sometimes kiss my boyfriend?"; question number 7, which reads "I masturbate when sexual desire arises?"; and question number 12, which reads "Having intercourse without inserting the genitals with a partner is done for fear of pregnancy?"
Question No. 13 reads "I have a body relationship with a boyfriend/fiance because we are sure to get married?" The questions above are part of the items/parameters of actions regarding sexual behavior. According to Catur (2017), the younger a person is when they begin dating, the greater the potential for sexual intercourse and the greater the resulting increase in sexually transmitted infections. According to the BKKBN (2014), cited in (Catur, 2017), dating behavior up to the kissing stage has the potential to lead to sexual relations; especially after a wet kiss or more than that, the chance of having sexual intercourse is 26 times greater than for those who do not. According to Green and Kreuter (2000), cited in (Catur, 2017), a person's behavior is influenced by three factors: predisposing factors (knowledge, attitude, gender and age), reinforcing factors (friends and friends' roles), and enabling factors (infrastructure, affordability of facilities and mass media).

Relationship of Effective Family Communication against Youth Sex Behavior in Class XI at Barunawati High School, Surabaya:-
Statistical testing using the Chi-square test in the SPSS 16 program obtained a significance value of p = 0.035 with a degree of significance (p < 0.05), so it can be concluded that H1 is accepted, which means there is a relationship between effective family communication and the sexual behavior of 11th grade students at Barunawati High School Surabaya. According to Sarwono (2010), cited in Syaputri (2014), sexual behavior is all behavior driven by sexual desire, whether done alone, with the opposite sex or with the same sex, outside the bonds of marriage according to religion. The active role of parents in addressing youth sexual relations is carried out through communication between parents (father and/or mother) and the teenager regarding the topic of adolescent reproductive health (Noegroho, 2014), because good communication between parents and children can reduce teenage sex problems. According to Achdiat (1997), cited in (Noegroho, 2014), family communication is an organization that uses words, gestures, voice intonation and actions to create hopes, challenges and interconnections. The main purpose of family communication is interaction between parents and children so that communication runs effectively, does not have a negative impact on children, and lets children receive attention and appreciation from their parents, so that effective communication is created. According to Catur (2017), dating in teenagers can lead to negative dating behavior because they learn about dating behavior from a young age. This is inseparable from the role of parents in providing information about the dangers of teenage sex, because the younger a person is when getting to know such relationships, the greater the risk of sexual relations and of an increase in sexually transmitted infections. This study is in line with (Qomarasai, 2015), which showed a relationship between family roles and sexual behavior that was statistically significant (p < 0.001). Teenagers who have weak family communication (score < mean) are 0.09 times more likely to engage in sexual behavior than adolescents who have strong family communication.
Conclusion:-
Based on the results of the research conducted at Barunawati High School in Surabaya with 214 respondents, the effective family communication of 11th grade students' families at Barunawati High School Surabaya was largely ineffective, and the youth sexual behavior of 11th grade students at Barunawati High School Surabaya was mostly at risk. Based on these results, health workers are expected to promote adolescent reproductive health through home visits and school guidance through the UKS program, and to collaborate with health centers in counseling about adolescent reproductive health and the dangers of youth sexual behavior. Furthermore, adolescents are expected to increase their knowledge of reproductive health, communicate openly with parents or family, and use information media to access positive information. They should also be more selective in choosing friends, so that friendships provide a positive influence and they do not fall into deviant sexual behavior.
Characteristics of victims of alleged child sexual abuse referred to a child guidance clinic of a children's hospital
(Index words: child sexual abuse, psychological consequences, offender characteristics)

Child sexual abuse (CSA) is a major public health problem affecting all cultures and social classes. The estimated global prevalence of CSA is 11.8% [1]. Retrospective studies in Sri Lanka have shown the prevalence of sexual abuse among adolescents to be 21.9% [2]. A retrospective descriptive study was carried out of all children referred through courts or the Judicial Medical Officer to a Child Guidance Clinic at Lady Ridgeway Hospital, from 2010-2014, due to alleged CSA. Psychological consequences were assessed by a Consultant Psychiatrist and diagnosis was made according to the International Classification of Diseases, 10th edition. Approval was obtained from the Ethics Review Committee of the Lady Ridgeway Hospital, Colombo. Data obtained from case records were suitably altered to maintain confidentiality. Thirty-five children presented with alleged CSA during 2010-2015, with the highest number referred in 2013 (Table 1). The majority (57.1%) were females. The commonest age group was 12-14 years (9/35). Sixty percent were from Colombo District. In 9 (25%), the parents were separated. In 5 (14%), the mother was abroad. In all cases, the perpetrator was male. The majority, 29 (83%), were known to the child. In most (29/35), a single perpetrator was involved. In 23 (66%), abuse occurred on several occasions by the same person. Threats or violence were used in 17 (49%) and rewards were given in 7 (20%). The commonest form of sexual abuse was non-penetrative contact, 17 (49%). Co-existing forms of abuse were present in 13 (37%). Psychological consequences were present in 24 (68%), with post-traumatic stress disorder being the commonest, 7 (20%). Globally, CSA is commoner in females [1], which is compatible with our findings. Similarly, the highest rates of CSA have been reported among adolescents [3]. A mother living abroad has been shown to be a risk factor, which is supported by the present results [2]. Worldwide, the majority of abusers are male and known to victims [3]. In the current study, all perpetrators were male and 83% were known to the child. However, a previous Sri Lankan study showed most perpetrators to be strangers [4]. Data from Australia has shown that the commonest place of abuse was the offender's home [5]. However, in our study, most abuse occurred at the child's home (43%).
The literature shows that perpetrators gain the compliance of children by "grooming behaviour" rather than by threats [5]. In contrast, in the current study, threats or violence had been used in 17 (48.6%), with rewards being given in only 20%. Previous community studies of adolescent boys in Sri Lanka showed oral and intra-crural sex to be the commonest forms, with 10.7% being penetrative sex [2]. The percentage of anal penetrative sex in boys was higher (17.1%) in the present study, possibly due to differences in sampling. Previous literature has reported that different forms of child abuse and neglect frequently co-exist [3]; this was supported by our results. In previous studies, psychological consequences have been reported in up to two-thirds of victims, which is consistent with our results [3]. Previous data show that post-traumatic stress disorder (PTSD) was present in 48% [3]. In our study, the rate of PTSD was lower (20%). The most likely reason is that our data included findings at the first visit following abuse, and PTSD may develop up to six months after the initial abuse. The sample was derived from referrals to a child guidance clinic and may not represent all children subjected to CSA, as only some make complaints and are referred for assessment; this is a limitation of our study. Since the majority of perpetrators were known to victims, public education programmes should aim at recognition of the danger that exists at home in addition to the danger from strangers. Since CSA frequently coexists with other forms of abuse, clinicians should be vigilant about this. All children should be screened for psychological problems following abuse in order to minimise adverse outcomes, as the majority show psychological consequences.

Conflicts of interest
There are no conflicts of interest.
A Thermoelectric-Heat-Pump Employed Active Control Strategy for the Dynamic Cooling Ability Distribution of Liquid Cooling System for the Space Station's Main Power-Cell-Arrays

A proper operating temperature range and an acceptable temperature uniformity are essential for the efficient and safe operation of the Li-ion battery array, which is an important power source of space stations. The single-phase fluid loop is one of the effective approaches for the thermal management of the battery. However, once the structure of the cold plate (CP) is determined, it is difficult to adjust the cooling ability of different locations of the CP dynamically, which may lead to a large temperature difference across the battery array attached to different locations of the CP. This paper presents a micro-channel CP integrated with a thermoelectric heat pump (THP) in order to achieve dynamic adjustment of the cooling ability of different locations of the CP. The THP functions to balance the heat transfer within the CP: it transports heat from the high-temperature region to the low-temperature region as the THP current is regulated, so that a better temperature uniformity of the CP can be achieved. A lumped-parameter model for the proposed system is established to examine the effects of the thermal load and electric current on the dynamic thermal characteristics. In addition, three different thermal control algorithms (basic PID, fuzzy-PID, and BP-PID) are explored to examine the CP's temperature uniformity performance by adapting the electric current of the THP. The results demonstrate that the temperature difference of the focused CP can be decreased by 1.8 K with the assistance of the THP. The proposed fuzzy-PID controller and BP-PID controller present much better performances than the basic PID controller in terms of overshoot, response time, and steady-state error. Such an innovative arrangement will enhance the CP's dynamic cooling ability distribution effectively, and thus further improve the temperature uniformity and operating reliability of the Li-ion space battery array.

Introduction
The Li-ion battery has been proven to be a promising candidate to substitute other energy storage batteries as the power source in space stations [1][2][3] owing to its merits of high power density, high single voltage, long cycling life, environmental friendliness, large operating temperature range and so on [4][5][6]. The operation performance of the Li-ion battery array greatly depends on its operating temperature and temperature uniformity. An improper operating temperature may contribute to a reduction in the charging efficiency and service life of the batteries [7,8]. Further, an uneven temperature distribution of the single cells may potentially decrease the pack capacity and cause serious safety problems [9]. Accordingly, the battery thermal management system has become an essential approach to enhance the performance of the batteries effectively. The battery thermal management system has been investigated actively, and different cooling technologies, namely air cooling, liquid cooling, heat pipe cooling, and phase change material cooling, have been adopted.
The air cooling approach [10,11] is classified into natural convection and forced air convection, and the latter is widely researched because of its high convective heat transfer coefficient. Heat pipe cooling [12,13] achieves heat transfer from the heat source to the cooling end to lower the battery array's temperature and is largely used in electrical devices for its high effective conductivity. As to the phase change material cooling method, the latent heat of the batteries is stored in the phase change material as the phase changes over a small temperature range, and thereby the temperature rise inside the battery can be reduced [14,15]. When confronting more complicated configurations, especially in the case of a large-size, high-rate discharging battery array [16], liquid cooling thermal management can manifest better performance than the other methods. Furthermore, due to its advantages of strong heat dissipation ability, gravity immunity, structural simplicity, technological maturity, etc., the single-phase fluid loop is regarded as the most promising active approach for Li-ion battery thermal management in space stations. The conventional strategy of the single-phase fluid loop for the space batteries [17] is shown in Figure 1. It comprises the cold plate (CP), radiator, pump, reservoir and three-way valve. The onboard vacuum-packaged battery array is cascaded into a single-phase fluid cooling cycle system via a CP. The CP is applied to absorb the heat generated inside the batteries, and the coolant flowing through it transports the heat to the radiator, by which the heat is dissipated to outer space. To balance the flow through the bypass line and the main fluidic line linking to the radiator, a three-way valve is used, adjusting the valve opening factor between 0% and 100%, through which the coolant temperature can be regulated. The reservoir collects the coolant flowing from the bypass line and the main line for the next cooling cycle, and acts as a coolant supplement for the fluid loop when necessary. Undertaking a variety of tasks, including flow management, heat transfer and energy conversion, the CP is considered to be the key component of the battery liquid cooling system on account of its compactness and its ability to separate the battery and the fluid [18][19][20]. Much attention has been devoted to the performance improvement of the CP. Yamada et al. [21] developed a honeycomb-cord CP that can obtain a remarkable mass decrease without lowering heat removal capability. Wang et al. [22] designed a silica-plate-based cooling system for prismatic lithium-ion batteries during the fast charging-discharging process and in specific operating conditions. Jarrett et al. [23] proposed a serpentine-channel CP for the battery liquid cooling system and assessed the effect of the geometry of the channels. The numerical results indicate that with the optimum design, both the pressure drop and the average temperature can be decreased, but at the expense of temperature uniformity. Wang et al. [24] presented an advanced single-phase actively-pumped fluid loop using a distributed thermal control strategy applied in a spacecraft thermal control system, which included a self-driven CP and a paraffin-actuated thermal control valve. This self-driven control system not only simplified the structure of the conventional mechanically pumped fluid loop effectively, but also improved the operating economy significantly.
However, few studies have focused on the dynamic adjustment of the cooling ability of the CP's different locations, which benefits the improvement of the overall space battery's temperature uniformity. As the structure of the CP is fixed, it is difficult to adjust the cooling capacity of the different locations of the CP dynamically, which may cause overheating or undercooling areas. Accordingly, this can directly lead to a large temperature difference inside the cells, which is unfavorable for the reliability and capacity utilization of the battery array. As shown in Figure 2a, the temperature of the cells in the middle of the array is much higher than that of the cells on the edge of the CP, which can result in severe temperature unevenness of the power-cell-array and, eventually, performance degradation or operating failures. Therefore, it is urgent to investigate how to improve the temperature uniformity of the power-cell-array. To achieve the dynamic cooling ability regulation of different locations of the CP, a thermoelectric heat pump (THP) and a high-heat-conductivity cover plate are employed in this study. Due to their small size, fast cooling speed, absence of environmental pollution, and control simplicity, THPs have been extensively used in temperature management systems [25][26][27].
Traditionally, the cold side of the THP is attached to the cooling object and the hot side to the heat sink, which means that the cold-side temperature is usually the only variable that can be controlled, while the heat transferred to the hot side is dissipated to the external environment [28][29][30][31]. Nevertheless, different from conventional THP applications, the THP in Figure 2b functions as an effective heat transfer medium through which heat is transferred from the high-temperature zone to the cover plate first. Then the accumulated heat in the cover plate is transferred to the low-temperature zone of the CP through heat conduction. Through the function of the THP, the cooling ability of the different locations of the CP can be adjusted dynamically and the temperature uniformity of the power-cell-array can be enhanced notably.
In this paper, an active temperature uniformity control strategy for a novel combined CP-THP system (CCTS) is proposed to improve the temperature uniformity of the CP and, finally, of the Li-ion space battery array. A dynamic heat transfer model for the CCTS based on the lumped-parameter method is established. Simulation analyses are conducted to investigate the impacts of the heat load and the THP's electric current on the thermal characteristics of the CP. Three different controllers, namely the basic PID controller, fuzzy-PID controller, and BP-PID controller, are designed to manipulate the CP's temperature difference by adjusting the electric current passing through the THP. With this THP-based active control strategy, the CP's maximum temperature difference can be decreased and the temperature uniformity can be improved evidently, which implies that the temperature unevenness of the power-cell-array can be alleviated effectively as well. For further verification, relevant experiments have been conducted and the experimental work will be published in our subsequent paper.

General Idea of CCTS
Usually, due to the non-uniformity of the flow pattern in the parallel channels, a clearly non-uniform temperature distribution occurs in the CP, which is unfavorable for the temperature uniformity of the space batteries. To alleviate this issue, an adaptive CP module based on the THP is put forward, as described in Figure 3. The adaptive CP module comprises an aluminum CP, a THP embedded into the lower surface of the CP, and a cover plate installed at the bottom of the CP. Detailed geometric characteristics of the components of the CP module are provided in Table 1. There are two kinds of channels in the CP: (1) multi-fin channels arranged symmetrically and (2) a central channel linking to the outlet. In operation, the working fluid flows from the inlet into the multi-fin channels on both sides. These two flows then converge into the central channel and are discharged out of the CP as a whole through the outlet. Specifically, the cover plate is made of carbon fiber because of its extremely thin thickness, light weight and excellent thermal conductivity, which is of great importance for a system operating in space. Additionally, to guarantee optimal contact among components, the cover plate with eight bolts is capable of sealing the CP perfectly during actual operation. The configuration of the CCTS is shown exploded in Figure 4. As described in Figure 2a, the temperature of the batteries in the middle of the array is much higher than that of the batteries on the edge of the array, because the batteries in the middle manifest poor heat dissipation, which can also give rise to a high temperature in the middle of the CP.
Besides, according to the unique channel distribution of the CP presented in Figure 3, theoretically there exists a high-temperature area corresponding to the central channel in the middle and a low-temperature area in the multi-fin channels. The above two factors both result in temperature non-uniformity of the CP, which will finally harm the temperature uniformity of the batteries. To address this problem, a THP is embedded between the bottom of the CP and the top surface of the cover plate, as also shown in Figure 4, to transfer heat from the high-temperature zone to the low-temperature zone of the CP via the carbon fiber plate. Specifically, the cold side of the THP is attached to the CP's bottom surface while the hot side is attached to the top surface of the cover plate. In theory, there should be a temperature difference between the cold side and the hot side of the THP due to the Peltier effect, which means that there may exist a temperature difference between the CP and the cover plate. However, owing to the extremely high thermal conductivity and thin thickness of the cover plate, as well as the large contact area between the CP and the cover plate, this temperature difference can be reduced to a negligible level. Considering mainly the effect of heat conduction, we define Q as the thermal load that the battery cells charge into the CP. For temperature acquisition in real time, three temperature sensors are placed on the bottom of the CP, denoted in Figure 4 as Tm1, Tm2 and Tm3, respectively. To be precise, Tm1 and Tm2 are the temperatures of the multi-fin channels on both sides, and Tm3 is the temperature of the outlet liquid-collecting channel in the middle. Theoretically, Tm1 is identical to Tm2 and both are less than Tm3. Of special note is that, since the cold side of the THP is tightly attached to the middle area of the CP's lower surface, Tm3 is considered to be equal to the cold-side temperature of the THP. The process of counterbalancing the heat is realized by manipulating the electric current that passes through the THP. In operation, the electric current input to the THP is adjusted according to the temperature difference among Tm1, Tm2 and Tm3. Therefore, an intelligent controller, as well as a THP driving unit, are designed for the simulation.

Mathematical Modeling
In this section, the working mechanism of the CCTS is described from the point of view of mathematics and thermodynamics, including the lumped-parameter model and the control model, which may contribute to a better understanding of the function and dynamic characteristics of the system for the control of its thermal performance.
Lumped-Parameter Model
As shown in Figure 5, a lumped-parameter-based dynamic heat transfer model for the CCTS is proposed to investigate the thermal performance of the CP in detail. In accordance with the CP's channel layout, the multi-fin channels on both sides are treated as two lumped-parameter nodes, CP1 and CP2 respectively, and the central channel is treated as the third lumped-parameter node, CP3. Besides, the hot side of the THP together with the cover plate is similarly considered as the fourth lumped-parameter node, HT, which will be fully described later in this section. For simplicity, some assumptions are made as follows: (1) the heat transferred from the battery cells to the CP is dominated by heat conduction, denoted as the thermal load Q; (2) the thermal resistance between the CP and the carbon fiber plate is negligible, considering the extremely thin thickness and excellent thermal conductivity of the cover plate, as well as the large contact area between these two components; (3) the heat exchange between the nodes CP1 and CP2 is negligible because of the small contact area between them; (4) the thermal resistance between the node CP3 and the cold side of the THP is considered to be zero given their optimal contact; (5) the initial temperature difference between the hot side and the cold side of the THP is 1 K. According to Assumption (2) and Assumption (3), the temperature dynamics of the nodes CP1, CP2 and CP3, which are based on the energy conservation principle, can be simplified and expressed by Equations (1)-(3), where m1, m2 and m3 are the masses of the nodes CP1, CP2 and CP3, respectively; Tm1, Tm2, Tm3 and Th are the temperatures of CP1, CP2, CP3 and the hot side of the THP, respectively; Ti1, Ti2 and Ti3 are the inlet water temperatures for CP1, CP2 and CP3, respectively; Q is the thermal load of the battery cells; ccp and cf are the specific heats of the aluminum CP and of the water working fluid; R13 and R23 are the thermal resistances between CP1 and CP3 and between CP2 and CP3; R1t and R2t are the thermal resistances between CP1 and the hot side of the THP and between CP2 and the hot side of the THP; G1, G2 and G3 are the mass flow rates through CP1, CP2 and CP3, respectively; and η is the heat exchange efficiency.
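Since Equations (1)-(3) and (9) themselves are not reproduced in this text, the following Python sketch only illustrates a plausible forward-Euler update for the four lumped nodes, built from the terms the paragraph describes: a share of the battery load, coolant pick-up scaled by the heat exchange efficiency η, conduction through R13/R23 and R1t/R2t, and the THP terms Qc and Qh. The split of Q among the nodes, the sign conventions, and the hot-side node properties mh and ch are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def step_cp_nodes(T, p, dt):
    """One forward-Euler step of a plausible lumped energy balance for the CP nodes.

    T = [Tm1, Tm2, Tm3, Th] in K. The load shares q1/q2/q3 (summing to Q), the
    hot-side node mass/heat capacity mh/ch, and the sign conventions below are
    assumptions; Qc and Qh would come from the THP relations at the present
    current, and Ti3 from the mixed CP1/CP2 outlet (Equation (7))."""
    Tm1, Tm2, Tm3, Th = T
    # Node CP1: load share, coolant pick-up, conduction to CP3, conduction from HT.
    dTm1 = (p["q1"]
            + p["eta"] * p["G1"] * p["cf"] * (p["Ti1"] - Tm1)
            - (Tm1 - Tm3) / p["R13"]
            + (Th - Tm1) / p["R1t"]) / (p["m1"] * p["ccp"])
    # Node CP2: symmetric with CP1.
    dTm2 = (p["q2"]
            + p["eta"] * p["G2"] * p["cf"] * (p["Ti2"] - Tm2)
            - (Tm2 - Tm3) / p["R23"]
            + (Th - Tm2) / p["R2t"]) / (p["m2"] * p["ccp"])
    # Node CP3: load share, mixed coolant, conduction from CP1/CP2, minus THP cooling Qc.
    dTm3 = (p["q3"]
            + p["eta"] * p["G3"] * p["cf"] * (p["Ti3"] - Tm3)
            + (Tm1 - Tm3) / p["R13"] + (Tm2 - Tm3) / p["R23"]
            - p["Qc"]) / (p["m3"] * p["ccp"])
    # Node HT: THP hot-side heat Qh in, conduction to CP1/CP2 (via the cover plate) out.
    dTh = (p["Qh"]
           - (Th - Tm1) / p["R1t"]
           - (Th - Tm2) / p["R2t"]) / (p["mh"] * p["ch"])
    return np.asarray(T) + dt * np.array([dTm1, dTm2, dTm3, dTh])
```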
It is noted that, due to the symmetry of the two multi-fin channels, all the parameters of CP1 and CP2 are theoretically considered identical with each other. Additionally, G3 is the total mass flow rate coming into the CP, which is the sum of G1 and G2. Equations (4) and (5) calculate the outlet water temperatures from the nodes CP1 and CP2, respectively. Equation (6) presents the temperature dynamic equation of the mixing working fluids from the nodes CP1 and CP2. By solving Equations (1)-(6), we can obtain the temperature of the mixed water from the nodes CP1 and CP2 (Tfo), which is actually the inlet water temperature of the node CP3 (Ti3), as expressed by Equation (7). Besides, the coolant outlet temperature from the CP is given by Equation (8). The temperature dynamics of the node HT are represented by Equation (9). Therein, the heat flux Qh charged into the THP hot side can be estimated by Equation (10), and the cooling capacity Qc at the cold side of the THP, which appears in the last term of Equation (3), is given by Equation (11), where Tc is the cold-side temperature of the THP, regarded as equal to that of the node CP3 owing to Assumption (4). The power consumption of the THP is expressed by Equation (12). Given the cooling capacity Qc in Equation (11) and the power supply P in Equation (12), the performance of the THP can be evaluated by the coefficient of performance (COP) defined in Equation (13). A high COP means less power consumed by the THP, which is of great significance in simulating the CCTS for the operation of Li-ion batteries in space stations. Here, αt, R, and Kt are the Seebeck coefficient, electrical resistance, and thermal conductivity of the THP, respectively. Given a constant ΔTt, which is defined in Equation (14), the most suitable current corresponding to the peak value of the COP can be attained by Equation (15), with the maximum COP given by Equation (16), where M can be acquired by Equation (17).

Control Model
The CCTS is a typical nonlinear system for which it is difficult to reduce the nonlinear constitutive equations to simple linear models while maintaining the accuracy of the response of the system. To overcome such difficulties, three different control strategies, namely basic PID control, fuzzy-PID control, and BP-PID control, are proposed in this paper. The PID control serves as a baseline for the comparative study with the other two intelligent control strategies. The temperature difference of the CP (ΔTcp), obtained by Equation (18), is selected as the target variable, based on which the control variable, the electric current passing through the THP, is adjusted directly, and ΔTcp is further manipulated by changing the heat flux Qh and cooling capacity Qc of the THP.

Basic PID Controller
The block diagram of the CCTS with the basic PID controller is shown in Figure 6. As the input of the basic PID controller, the control error eT is the difference between the local temperature difference ΔTcp and the desired temperature difference ΔTr, described in Equation (19). In this paper, the incremental PID control method is adopted for the basic PID controller, which can be expressed by Equation (20), where u(t) is the incremental output of the controller at the sampling time t; Kp, Ki and Kd are the proportional, integral and differential coefficients, respectively; and e(t) and e(t−1) are the deviation values at the sampling times t and t−1, respectively.
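The THP relations referenced as Equations (10)-(17) are likewise not reproduced here. The sketch below uses the standard single-stage thermoelectric-module formulas that those equations most plausibly correspond to, so it should be read as a hedged reconstruction rather than the authors' exact expressions.

```python
import math

def thp_performance(I, Tc, Th, alpha_t, R, Kt):
    """Standard single-stage thermoelectric relations (a sketch of the roles of
    Equations (10)-(13)): Seebeck pumping, Joule heating split, back conduction."""
    dTt = Th - Tc                                        # Equation (14)
    Qc = alpha_t * I * Tc - 0.5 * I**2 * R - Kt * dTt    # cold-side cooling capacity
    Qh = alpha_t * I * Th + 0.5 * I**2 * R - Kt * dTt    # hot-side heat rejection
    P = alpha_t * I * dTt + I**2 * R                     # electrical power, P = Qh - Qc
    cop = Qc / P if P > 0 else float("nan")
    return Qc, Qh, P, cop

def optimal_current_and_cop(Tc, Th, alpha_t, R, Kt):
    """Textbook optimum-COP conditions matching the roles of Equations (15)-(17),
    assumed forms only."""
    dTt = Th - Tc
    Tmean = 0.5 * (Tc + Th)
    Z = alpha_t**2 / (R * Kt)                            # figure of merit
    M = math.sqrt(1.0 + Z * Tmean)                       # Equation (17), assumed
    I_opt = alpha_t * dTt / (R * (M - 1.0))              # current at maximum COP
    cop_max = (Tc / dTt) * (M - Th / Tc) / (M + 1.0)
    return I_opt, cop_max
```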
Figure 6. Block diagram of the basic PID control system.

Fuzzy-PID Controller
Figure 7a shows the outline structure of the fuzzy-PID control system. On the basis of the conventional PID controller, the fuzzy-PID controller adopts the error eT and the error change rate ec as inputs, and the parameters Kp, Ki and Kd as outputs. Figure 7b shows the detailed structure of the fuzzy-PID controller, consisting of a fuzzifier, an inference engine, a defuzzifier, a fuzzy rule-base, and a PID controller. The inputs to the fuzzifier are the error en and its change rate ecn, normalized by the factors ke and kec. Similarly, the outputs of the defuzzifier up, ui and ud (scaled by the factors kp, ki and kd) are the normalized increments of the controller parameters Kp, Ki and Kd. The relationships between the parameters (Kp, Ki, Kd) and the inputs (eT, ec) can be expressed by Equation (21), where Kp0, Ki0 and Kd0 represent the initial values of Kp, Ki and Kd, respectively, and ΔKp, ΔKi and ΔKd are the increments of Kp, Ki and Kd, respectively.
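A minimal Sugeno-style sketch of the gain scheduling just described is given below: the normalized error and error rate are fuzzified with seven triangular membership functions, all 7 × 7 rules are fired, and the defuzzified increments are scaled and added to the initial gains as in Equation (21). The rule tables, membership shapes and scaling factors are placeholders, since Table A1, Figure A1 and the paper's factors ke, kec, kp, ki and kd are described in the following paragraph but not reproduced numerically here.

```python
import numpy as np

# Seven triangular membership functions (NB..PB) on the normalized universe [-1, 1].
PEAKS = np.linspace(-1.0, 1.0, 7)
STEP = PEAKS[1] - PEAKS[0]

def memberships(x):
    x = np.clip(x, -1.0, 1.0)
    return np.maximum(0.0, 1.0 - np.abs(x - PEAKS) / STEP)  # unit-height triangles

# Hypothetical singleton rule tables for the normalized increments u_p, u_i, u_d
# (simple monotone surfaces standing in for the paper's Table A1).
E, CE = np.meshgrid(PEAKS, PEAKS, indexing="ij")
RULES_UP = np.clip(0.6 * E + 0.4 * CE, -1, 1)
RULES_UI = np.clip(0.8 * E, -1, 1)
RULES_UD = np.clip(0.5 * CE, -1, 1)

def fuzzy_increments(e_n, ec_n):
    """Fire all 7x7 rules and defuzzify by weighted average."""
    w = np.outer(memberships(e_n), memberships(ec_n))        # rule firing strengths
    s = w.sum()
    if s == 0.0:
        return 0.0, 0.0, 0.0
    return ((w * RULES_UP).sum() / s,
            (w * RULES_UI).sum() / s,
            (w * RULES_UD).sum() / s)

def scheduled_gains(e_T, ec, gains0, ke=0.5, kec=0.1, kp=2.0, ki=0.05, kd=0.5):
    """Equation (21): K = K0 + scaled fuzzy increment (scale factors are assumed)."""
    up, ui, ud = fuzzy_increments(ke * e_T, kec * ec)
    Kp0, Ki0, Kd0 = gains0
    return Kp0 + kp * up, Ki0 + ki * ui, Kd0 + kd * ud
```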
The input and output variables of the fuzzy-PID controller are characterized by fuzzy sets, linguistic values and associated analytical ranks, which are listed in Table 2. Each fuzzy set (or its linguistic value) is defined by the membership functions shown in Figure A1. The controller output is determined from linguistic rules of the following form: if en is Ei and ecn is CEj, then up is Ul(i,j), ui is Um(i,j), and ud is Un(i,j). Here Ei, CEj, Ul(i,j), Um(i,j) and Un(i,j) are the fuzzy values of en, ecn, up, ui and ud, and the subscript variables i, j, l(i,j), m(i,j) and n(i,j) denote the analytical ranks associated with these linguistic values, listed in Table A1. For a two-input system (en and ecn, each with seven fuzzy values), a fully populated rule base has 7 × 7 = 49 input rule combinations, derived with the aid of simulations.

BP-PID Controller
As Figure 8 illustrates, the back propagation PID (BP-PID) controller consists of two parts: the conventional PID controller and the back propagation neural network (BP-NN). The former carries out a direct closed-loop control of the THP with the online-adjusted Kp, Ki and Kd obtained from the latter. A three-layer BP-NN with four input neuron nodes, five hidden nodes and three output nodes is set up, as shown in Figure 9. Based on the inputs of the input layer, rin(k), yout(k) and error(k), the network adjusts the PID controller parameters Kp, Ki and Kd, which represent the outputs of the output layer, according to the BP algorithm. Since the input layer acts as an identity mapping, its outputs are equal to its inputs, where k is the number of iterations and the superscript (1) refers to the input layer.
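The following class sketches the 4-5-3 BP network of Figure 9 in Python. The fourth (bias) input, the tanh hidden activation and the simplified plant-gradient treatment are assumptions; the paper's exact update rule is given by Equations (22)-(32), which are described in the next paragraph but not reproduced in this text.

```python
import numpy as np

class BPPIDSketch:
    """Sketch of a 4-5-3 BP network that outputs (Kp, Ki, Kd) through a
    non-negative sigmoid output layer, updated with learning rate and momentum."""

    def __init__(self, lr=0.7, momentum=0.03, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.uniform(-1, 1, (5, 4))   # input -> hidden weights, random in [-1, 1]
        self.w2 = rng.uniform(-1, 1, (3, 5))   # hidden -> output weights
        self.dw1 = np.zeros_like(self.w1)
        self.dw2 = np.zeros_like(self.w2)
        self.lr, self.momentum = lr, momentum

    def forward(self, rin, yout, error):
        self.x = np.array([rin, yout, error, 1.0])           # bias input is an assumption
        self.h = np.tanh(self.w1 @ self.x)                   # hidden activations
        self.o = 1.0 / (1.0 + np.exp(-(self.w2 @ self.h)))   # non-negative outputs
        return self.o                                        # (Kp, Ki, Kd)

    def backward(self, error, du_dk, dy_du_sign=1.0):
        # Output-layer local gradient via the chain rule through the plant,
        # with dy/du replaced by its sign (a common BP-PID simplification).
        delta_o = error * dy_du_sign * du_dk * self.o * (1 - self.o)
        delta_h = (1 - self.h**2) * (self.w2.T @ delta_o)
        self.dw2 = self.lr * np.outer(delta_o, self.h) + self.momentum * self.dw2
        self.dw1 = self.lr * np.outer(delta_h, self.x) + self.momentum * self.dw1
        self.w2 += self.dw2
        self.w1 += self.dw1
```

For an incremental PID of the form of Equation (20), the partial derivatives of the control signal with respect to (Kp, Ki, Kd) are (e(t)−e(t−1), e(t), e(t)−2e(t−1)+e(t−2)), which would be passed as du_dk.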
The inputs, activation function and outputs of the hidden layer can be expressed by Equations (23)-(25), where the superscript (2) represents the hidden layer, ωjm is the connection weight from the input layer to the hidden layer defined in Equation (26), α is the learning rate, and δm(2) is the local gradient of the hidden layer, given by Equation (27). Similarly, the inputs, activation function and outputs of the output layer can be written as Equations (28)-(30), where the superscript (3) represents the output layer, ωmn is the connection weight from the hidden layer to the output layer, deduced by Equation (31), δn is the local gradient of the output layer, expressed by Equation (32), and dn(k) is the desired output of the network. As previously mentioned, the nodes of the output layer correspond to the three adjustable parameters kp, ki and kd. Since these parameters cannot be negative, the activation function of the output layer is the non-negative sigmoid function, as Equation (29) describes. The learning rate is 0.7 and the inertia coefficient is 0.03 for the simulation. The initial values of the connection weight coefficients are distributed randomly on the interval from −1 to 1, as described in detail in Table A2.

Solution Procedure and Simulation Condition Arrangement
As shown in Figure 10, the CP is divided into three domains corresponding to the three lumped-parameter nodes of the focused CP established in Section 2.2.1 (CP1, CP2 and CP3). To be more specific, the three parts account for 25%, 50% and 25% of the total area of the CP. The initial temperatures of CP1, CP2 and CP3 are presented in Figure 10 as well: 297.85 K for CP1, 297.85 K for CP2 and 300.85 K for CP3, respectively. Notice that the initial temperature difference of the CP (ΔTcp) is 3 K. The initial temperatures of the THP are also preset, in line with the partition setting of the CP: the cold-side temperature Tc is set as 300.85 K, which is identical to Tm3 according to Assumption (4), and the hot-side temperature Th is set as 301.85 K according to Assumption (5).
Related parameter determinations of the CCTS and the initial state of the system are summarized in Table 3, where the thermal load of the batteries (Qi) and the electric current of the THP (I) are the critical input parameters for the simulation. These two parameters are the primary variables that will be changed and controlled throughout the simulation process. As the baseline for the transient and control-effect analyses, the initial Qi is 630 W and the initial I is 0 A. Note that, in order to take the CP's heat loss into account, we set the heat exchange efficiency of the CP to 0.9. Matlab R2017a was used as the simulation software. The flowchart of the simulation procedure is shown in Figure 11 and can be described in detail as follows: (1) the program is initialized in terms of the physical and working-condition parameters listed in Table 3; (2) the simulation time and the calculation step are preset; (3) the initial temperatures of the CP and THP, Tm1, Tm2, Tm3, Th and Tc, are given; (4) various input disturbances tabulated in Table 4 are applied to the CCTS, from which the new value of ΔTcp can be derived according to Equations (1)-(8) and Equation (22); (5) during the closed-loop control, the control error eT is first obtained from Equation (23); eT is then fed into the different controllers to calculate the control parameters Kp, Ki and Kd separately by Equations (24)-(36); afterwards the electric current I is adjusted; with the adjusted I, the Qh and Qc brought to the CP are computed using Equations (9)-(11), leading to a new ΔTcp; (6) after the above procedures, it is judged whether the simulation time is over; if the simulation time has not reached the set value and the control objective has not converged, the relevant control algorithm is modified to adapt to the varying input disturbances and the simulation cycle is updated with the new control parameters. The focus of this study is to validate that the established CCTS can enhance the CP's temperature uniformity effectively. Two simulation conditions are arranged and listed in Table 4 accordingly.
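The closed-loop procedure of Figure 11 can be skeletonized as below. The incremental PID update follows the form of Equation (20); the definition of ΔTcp, the current limits and the gain values are assumptions, and model_step stands for any implementation of the lumped-parameter model (for example the Euler sketch given earlier).

```python
import numpy as np

def incremental_pid(Kp, Ki, Kd, e, e1, e2):
    """Incremental PID in the form of Equation (20): returns the change in the control signal."""
    return Kp * (e - e1) + Ki * e + Kd * (e - 2 * e1 + e2)

def run_closed_loop(model_step, current_limits=(0.0, 15.0), dT_ref=0.0,
                    dt=0.1, t_end=2000.0, gains=(2.0, 0.05, 0.5)):
    """Skeleton of the Figure 11 loop (illustrative numbers; not the paper's settings)."""
    Kp, Ki, Kd = gains
    T = np.array([297.85, 297.85, 300.85, 301.85])   # initial node temperatures (K)
    I = 0.0                                           # initial THP current (A)
    e1 = e2 = 0.0
    history = []
    for t in np.arange(0.0, t_end, dt):
        dT_cp = T[2] - min(T[0], T[1])                # assumed form of Equation (18)
        e = dT_cp - dT_ref                            # control error e_T
        I += incremental_pid(Kp, Ki, Kd, e, e1, e2)   # adjust the THP current
        I = float(np.clip(I, *current_limits))        # respect the 0-15 A range
        T = model_step(T, I, dt)                      # advance the lumped model
        e2, e1 = e1, e
        history.append((t, dT_cp, I))
    return history
```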
Specifically, a variety of step disturbances in the input Qi take place in the first simulation condition, demonstrating the influence of the heat load on the temperature difference of the CP. Step disturbances in the THP input current I occur in the second simulation condition, to analyze the effect of the THP electric current on the CP's temperature difference. Additionally, to investigate the disturbance-rejection performance of the designed controllers elaborated in Section 2.2.2, three cases (step disturbance, external disturbance and periodic disturbance) in Qi are organized and listed in Table 5. By evaluating the overshoot, settling time and steady-state error together, we can evaluate the control performance of these three controllers when confronting different system disturbances. Determinations of the control parameters used in the simulation are summarized in Table A3.

Open-Loop Dynamic Characteristics
The purpose of this section is to investigate the effects of the heat load Qi and the electric current I of the THP on the thermal characteristics of the CCTS. Due to the symmetry of the multi-fin channels, as stated in Section 2.1, Tm2 changes identically with Tm1 under a given condition and both have the same temperature; therefore, only Tm1 is discussed. The initial value of ΔTcp (Tm3−Tm2) is 3 K without the operation of the THP. On this basis, all the step-disturbances in Qi and I take place at 50 s. Figure 12 demonstrates the characteristics of the CP under various step disturbances in the thermal load Qi, corresponding to the first simulation condition in Table 4. Note that the heat load is 630 W for the first 50 s, followed by a variety of step-disturbances in increments of 10% centred on the initial value. In Figure 12a, taking the +30% step-disturbance as an example, ΔTcp rises rapidly from 3 K when the step-disturbance occurs and then ultimately settles to 3.9 K, as represented by the blue curve. For the other positive step-disturbance cases, ΔTcp presents the same trend as well. In contrast, the trend is the opposite for the negative step-disturbance cases. The CP's stable temperature difference (ΔTcp_sta) and the response time τr at different Qi are plotted in Figure 12b. It is obvious that ΔTcp_sta rises linearly at a rate of 4.76 × 10−3 K/W, with a slight increase in τr from 800 s to 1100 s, as Qi increases from the −30% step-disturbance to the +30% step-disturbance. The above observations suggest that a larger heat load applied to the CP leads to a more severe temperature non-uniformity and a longer settling time. In addition, the temperature difference of the CP is proportional to the heat load.

Electric Current Disturbance
Figure 13 presents the effect of the THP electric current on the temperature changes of the CP. Figure 13a shows three typical step-disturbances of the current (4 A, 8 A and 12 A) and their effect upon ΔTcp. It is obvious that ΔTcp in these three cases goes down rapidly to a minimum at about 120 s, followed by a slow rise, and then finally attains a new level of equilibrium. The new ΔTcp is lower than the initial one of 3 K.
Figure 13 presents the effect of the THP electric current on the temperature changes of the CP. Figure 13a shows the effect of three typical step disturbances of the current (4 A, 8 A and 12 A) on ΔTcp. In all three cases ΔTcp drops rapidly to a minimum at about 120 s, rises slowly, and then reaches a new equilibrium. The new ΔTcp is lower than the initial value of 3 K.

Figure 13b illustrates the transient temperature curves of Tm1 and Tm3 under the 12 A step disturbance. Both Tm1 and Tm3 increase, because the power consumed by the THP is converted into waste heat that is ultimately applied to the CP. Specifically, Tm1 rises sharply after the 12 A step disturbance at 50 s and settles at a new steady-state value of 300.65 K, whereas Tm3 first declines slightly and then climbs to a new stable value of 302.12 K because of the heat Qc removed from node CP3. Consequently, ΔTcp first reaches a minimum and then settles at a new stable level, which explains the valleys in Figure 13a.

When the current step disturbance is varied from 1 A to 15 A in increments of 1 A, ΔTcp_sta varies from 2.79 K to 1.28 K (Figure 13c), suggesting an approximately negative proportional relationship between ΔTcp_sta and I, while the steady-state time τr varies only slightly, between 700 s and 1200 s, with an overall increasing trend. Meanwhile, the minimum ΔTcp in Figure 13d declines linearly from 2.67 K to 0.8 K as the THP current climbs from 1 A to 15 A. Furthermore, the time at which the minimum ΔTcp is reached increases quickly from 75 s at first, and the rate of increase slows as the time approaches 125 s.

These observations demonstrate that the THP has a significant effect on the temperature uniformity of the CP, which is the key finding and innovation of this study. The temperature difference can be reduced by 1.8 K under the maximum current step disturbance of 15 A. However, the higher the current, the more power the system consumes, as expressed in Equation (12). This means more waste heat is generated, which raises the overall temperature of the CP. Such a side effect is unfavorable for system operation and should be minimized. Therefore, Section 3.3 investigates the optimum operating condition of the THP in terms of a small temperature difference of the CP and an optimum COP of the THP.

Optimum Operating Conditions of THP

This section aims to determine the optimal range of the control variable I, considering both the COP, which should be high so that the power consumption stays small and the excursion of the overall CP temperature shown in Figure 13b is suppressed, and ΔTcp, which should be small and is crucial for the CCTS. Before the simulation results are discussed, theoretical calculations are carried out for comparison. According to Equations (14) and (15) and the THP parameters listed in Table 3, the calculated maximal COP εmax is 0.978, and the calculated optimal current for this maximal COP, Iεmax, is 0.996 A, on the condition that ΔTt remains constant at 1 K in the simulation according to Assumption (5).

Simulation results, plotted as ΔTcp and COP versus I in the range 0-15 A, are presented in Figure 14. ΔTcp declines gradually with increasing electric current; the minimum temperature difference is 1.2 K at the maximum current of 15 A. Additionally, the COP increases rapidly over a small range of current (0-1 A) and reaches its peak (1.2) at 1 A. After that, the COP decreases gradually to its lowest value of about 0.4 at 15 A. This simulation result agrees well with the theoretical optimal values (Iεmax = 0.996 A and εmax = 0.978).
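As a rough cross-check using only the endpoints quoted for Figure 13c, and assuming (as the text only approximately states) that the relationship is exactly linear, simple interpolation indicates the steady-state temperature difference attainable at 12 A. The arithmetic below is illustrative and not part of the paper's analysis.

```python
# Linear interpolation between the endpoints quoted for Figure 13c
# (ΔTcp_sta = 2.79 K at 1 A and 1.28 K at 15 A). Exact linearity is assumed.
I1, d1 = 1.0, 2.79
I2, d2 = 15.0, 1.28
slope = (d2 - d1) / (I2 - I1)                 # ≈ -0.108 K per ampere
dT_at_12A = d1 + slope * (12.0 - I1)
print(round(slope, 3), round(dT_at_12A, 2))   # ≈ -0.108, ≈ 1.60 K
# Close to the ~1.5 K temperature difference quoted for operation at 12 A below.
```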
These observations suggest that a higher current (no more than 15 A) results in a smaller temperature difference, while a lower current yields a higher COP within the 1-15 A range. In this paper, we conservatively take 0.5 as the minimum acceptable COP in practical operation. Therefore, the THP current is recommended to be controlled below 12 A, at which the minimum temperature difference is 1.5 K. On this basis, three strategies are proposed in Section 4 to control the electric current while keeping the COP above a given level (0.5 in this paper) and improving the temperature uniformity of the CP.

Closed-Loop Control Effect Analyses

In this section, the CCTS equipped with each of the three controllers (basic PID, fuzzy-PID and BP-PID) responds to a variety of disturbances in the thermal load in closed-loop simulation. The results of the three control strategies are compared in terms of overshoot, settling time and steady-state error. Note that the desired temperature difference ΔTr in the closed loop is set to 1.5 K in this study, because operation at 12 A (Section 3.3) reduces ΔTcp by 1.5 K, from 3 K to 1.5 K. For simplicity, the dynamic temperature difference responses under the basic PID, fuzzy-PID and BP-PID controllers are denoted ΔTcp_PID, ΔTcp_F and ΔTcp_BP, respectively.

Step Disturbance Response

The dynamic temperature difference responses to a 10% step increase in the input heat load Qi of the CCTS are plotted in Figure 15. As mentioned in Section 3.1, without closed-loop control ΔTcp climbs and settles at a new steady-state value of 3.3 K (0.3 K above the initial 3 K) under a +10% step disturbance in Qi at 50 s. As shown in Figure 15b, all three controllers achieve the objective of 1.5 K; however, there are clear differences in the control effects of the three strategies.
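Before the quantitative comparison, a minimal discrete form of the baseline controller against which the fuzzy-PID and BP-PID variants are compared is sketched below. The gains and the sign convention of the error are assumptions made here for illustration, not the tuned values of Table A3, and the on-line adaptation of Kp, Ki and Kd performed by the fuzzy and BP neural-network variants is not reproduced.

```python
# Minimal discrete PID controller in the role of the basic PID baseline above.
# The error is taken here as (measured ΔTcp - set point ΔTr); the output is
# interpreted as the THP current command and clamped to the 0-12 A range.
class BasicPID:
    def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=12.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(u, self.u_min), self.u_max)   # saturate to 0-12 A

# Example call; the gains are illustrative only.
pid = BasicPID(kp=6.0, ki=0.2, kd=0.5, dt=0.1)
current_command = pid.update(error=3.0 - 1.5)   # ΔTcp = 3 K, set point 1.5 K
print(current_command)
```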
The closed-loop overshoots γ, settling times τ and steady-state errors δ of the simulated transients are summarized in Table 6. The overshoot, settling time and steady-state error of the basic PID controller are 9.4% (0.14 K), 68 s and 0.426% (0.006 K), respectively, and serve as the reference for comparison. The fuzzy-PID and BP-PID controllers clearly offer much better dynamic temperature performance than this reference. The overshoot of ΔTcp_F is reduced to 39.1% of that of ΔTcp_PID, and its settling time is shortened by 52 s; the overshoot of ΔTcp_BP is reduced to 52.4% of that of ΔTcp_PID, and its settling time is shortened by 58 s. In addition, the steady-state errors of both ΔTcp_F and ΔTcp_BP are small enough to be neglected according to the values in Table 6. Overall, the BP-PID controller is better from the perspective of response rate and system stability, whereas the fuzzy-PID controller performs better from the perspective of maximum overshoot.
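The three figures of merit used throughout this comparison can be computed directly from a simulated response trace. The helper below is a generic sketch and not the paper's code; the 2% settling band and the particular definitions of overshoot and settling time are assumptions, since the tolerances used for Table 6 are not stated here.

```python
import math

def step_metrics(t, y, setpoint, settle_band=0.02):
    """Overshoot [% of set point], settling time [units of t] and
    steady-state error [units of y] for a step-response trace."""
    y_final = y[-1]
    # index of the first sample at or beyond the set point
    crossed = next((i for i, yi in enumerate(y)
                    if (y[0] - setpoint) * (yi - setpoint) <= 0), len(y) - 1)
    overshoot = max(abs(yi - setpoint) for yi in y[crossed:]) / setpoint * 100.0
    steady_state_error = abs(setpoint - y_final)
    band = settle_band * setpoint
    settling_time = max((ti for ti, yi in zip(t, y)
                         if abs(yi - setpoint) > band), default=0.0)
    return overshoot, settling_time, steady_state_error

# Tiny synthetic trace (made up, not taken from Figure 15): starts at 3 K and
# settles toward the 1.5 K set point with a decaying oscillation.
t = [i * 0.5 for i in range(400)]
y = [1.506 + 1.494 * math.exp(-ti / 15.0) * math.cos(ti / 10.0) for ti in t]
print(step_metrics(t, y, setpoint=1.5))
```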
External Disturbance Response

The transient responses of the temperature difference to an unfavorable external disturbance in Qi are depicted in Figure 16. As shown in Figure 16a, a sharp wave pulse disturbance and a half-sinusoid pulse disturbance occur in the heat load from the battery cells within 900-1000 s; such disturbances may be attributed to abrupt changes in the discharge rate of the Li-ion batteries [32][33][34]. The peak of the sharp wave pulse is 700 W at 900 s and the valley of the half-sinusoid pulse is 600 W at 960 s. As shown in Figure 16b, once the disturbance occurs at 900 s, ΔTcp_PID first exhibits a fluctuation (maximum overshoot of 0.017 K at 909 s) in response to the sharp pulse and then a larger oscillation (maximum overshoot of 0.055 K at 995 s) in response to the half-sinusoid pulse, followed by a slow return during 1000-1300 s. ΔTcp_F and ΔTcp_BP, by contrast, display only a tiny fluctuation (maximum overshoots of 0.012 K at 907 s and 0.008 K at 905 s, respectively) in response to the sharp pulse, with shorter recovery times (20 s and 18 s, respectively), and show no significant response to the half-sinusoid pulse. The corresponding parameters are listed in Table 7. ΔTcp_F exhibits the smallest maximum overshoot, 3.81% (0.06 K), while ΔTcp_BP provides the fastest settling time of 10 s compared with the other two. With regard to steady-state performance, the steady-state errors of ΔTcp_F and ΔTcp_BP are both negligibly small and can be ignored. Therefore, it can be concluded that the fuzzy-PID and BP-PID controllers respond to unexpected external disturbances with better dynamic temperature performance than the basic PID controller.
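To make the disturbance profile of Figure 16a concrete, the snippet below constructs one plausible version of it. Only the 630 W baseline, the 700 W peak at 900 s, the 600 W valley at 960 s and the 900-1000 s window come from the text; the pulse widths and exact shapes are assumptions.

```python
import math

def external_disturbance(t, q_base=630.0):
    """One plausible rendering of the Figure 16a heat-load profile [W].
    Pulse widths and shapes are assumed for illustration."""
    if 895.0 <= t <= 905.0:          # sharp pulse peaking at 700 W at 900 s
        return q_base + (700.0 - q_base) * (1.0 - abs(t - 900.0) / 5.0)
    if 940.0 <= t <= 980.0:          # half-sinusoid dipping to 600 W at 960 s
        return q_base + (600.0 - q_base) * math.sin(math.pi * (t - 940.0) / 40.0)
    return q_base

for t in (890, 900, 960, 1000):
    print(t, round(external_disturbance(t), 1))
# 890 -> 630.0, 900 -> 700.0, 960 -> 600.0, 1000 -> 630.0
```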
Figure 17 presents the dynamic temperature difference responses to a periodic disturbance in Qi. As revealed in Figure 17a, the periodic disturbance in Qi is simulated as a periodic variation about a central constant heat load. It can be seen from Figure 17b that ΔTcp_PID fluctuates within 4% of its steady-state value (absolute overshoot of 0.06 K) with a settling time of 84 s, whereas ΔTcp_F fluctuates within 0.53% of its steady-state value (absolute overshoot of 0.008 K) with a settling time of 16 s. Further, ΔTcp_BP reaches equilibrium after 10 s with a fluctuation of only 0.4% of its steady-state value (absolute overshoot of 0.006 K). The maximum overshoots of the three controllers are 9.31% (at 53 s), 3.6% (at 27 s) and 4.7% (at 17 s), respectively, and the settling times are 84 s, 16 s and 10 s, respectively; all of these parameters are listed in Table 8 for comparison. These observations show that, when experiencing a periodic disturbance in the heat load, both the fuzzy-PID and BP-PID controllers retain fast response and strong self-adaptability, as expected.

Conclusions

For the purpose of actively adjusting the cooling ability of different locations of the cold plate (CP), which is a key component of the single-phase fluid loop, a combined CP-THP system (CCTS) comprising a micro-channel CP integrated with a thermoelectric heat pump (THP) is proposed for thermal management of the Li-ion space battery array, with the aim of improving the temperature uniformity of the space batteries.
The adoption of the THP, which is intended to balance the internal heat transfer within the CP by regulating the THP electric current, is the main innovation of this paper. A dynamic model for evaluating the CP's thermal characteristics is established theoretically, and three control strategies are developed to achieve effective and robust control of the temperature difference within the CP under various disturbances. Numerical analyses of both open-loop and closed-loop simulations under different working conditions have been presented. The primary conclusions are summarized as follows.

(1) The maximum temperature difference of the CP is strongly influenced by the heat load. Increasing the heat load from the batteries aggravates the CP's temperature unevenness and lengthens the settling time.

(2) The THP can dynamically adjust the cooling ability of different locations of the CP and significantly improve the temperature uniformity of the CP under active control strategies. Compared with the traditional module without the THP, the temperature difference of the CP was decreased by 1.8 K at the maximum THP electric current (15 A). The higher the electric current, the better the temperature uniformity of the CP; however, a high electric current produces additional waste heat, which increases the overall temperature of the CP. Considering the trade-off between temperature difference and power consumption, the electric current should be controlled below 12 A.

(3) Under step, external and periodic disturbances, both the fuzzy-PID and BP-PID controllers achieve excellent control ability with sufficiently fast response and strong stability, and are attractive alternatives to the basic PID controller.
Specifically, the fuzzy-PID controller excels at decreasing overshoot, while the BP-PID controller reduces the response time and the steady-state error. In conclusion, this THP-based temperature uniformity control system can readjust the cooling ability of different locations of the CP, which is highly beneficial for improving the temperature uniformity of the space battery array. It is expected that this approach will have promising application prospects for other thermal control systems in which extremely strict temperature uniformity requirements must be satisfied.
Procedural Sedation Using Two Different Proportions of Ketamine-Propofol Combination in Short Gynecological Procedures: A Randomized Controlled Trial Background: Procedural sedation with a combination of propofol and ketamine for short-duration surgeries is a convenient technique of anesthesia as it has a faster recovery avoiding the side effects of general anesthesia. The aim of this study was to compare the sedative and analgesic effects of two different proportions of ketamine and propofol combination in patients undergoing short gynecological procedures. Methods: A randomized double-blind study was conducted in 140 patients posted for elective gynecological procedures with a duration equal to or less than 30 minutes. After premedication of all participants, sedation was induced with bolus administration (0.1 mL/kg) of the study drugs to achieve desired Ramsay sedation score (RSS) of 6, followed by infusion at 0.3 mL/kg/h (Group A, ketamine:propofol in the ratio of 1:4 and Group B, ketamine:propofol in the ratio of 1:2). The adequacy of sedation, volume of drug to induce the patient, time to achieve desired RSS, time for first bolus dose, the total volume of the drugs, hemodynamic variables, awakening time, and side effects were observed. Results: The incidence of movement of lower extremities was found to be significantly lower in the higher concentration ketamine group (Group B, P - 0.028). The volume of a drug for induction and the duration to reach RSS of 6 were significantly lower in Group B with P-values of 0.002 and <0.001, respectively. Hemodynamic variables, awakening time, and side effects were not statistically significant between the two groups. Conclusion: Ketamine-propofol combination in the ratio 1:2 provides better sedation and analgesia with no increased side-effects compared to ketamine-propofol in the ratio 1:4 for short outpatient gynecological procedures. Introduction Short gynecological procedures are mostly done as outpatient day-case procedures where the patients are discharged on the same day of admission after the intended procedure. Procedural sedation is a convenient technique of anesthesia for these procedures which provides adequate anesthetic depth and hemodynamic stability with early recovery and minimum adverse effects in the recovery period. Various drugs have been tried to achieve the goals of day-case procedures done under sedation. Since no single drug can provide all the requirements of procedural sedation, different drugs are used in varying combinations to provide balanced anesthesia, that is, amnesia, hypnosis, and analgesia [1]. Ketaminepropofol combination has been used in varying proportions for procedural sedation in a variety of procedures both in outpatient and emergency department settings with good results. The opposing hemodynamic and respiratory effects of each drug may enhance the use of this combination thereby increasing both safety and efficacy and allowing a reduction in the dose of propofol required to achieve sedation and decrease the need for supplemental opioid analgesics [2,3]. The combination of these two drugs has been used in many clinical situations, with better hemodynamic stability, minimal respiratory depression, better analgesia, and recovery than each agent alone [4][5][6][7][8][9][10][11][12][13]. 
The effectiveness of the two agents, ketamine and propofol in combination mixed in a single syringe has demonstrated efficacy in operating and ambulatory settings in varying proportions with varying results but the ideal proportion has not been established yet. We conducted this study using 1:2 and 1:4 ratios of ketamine propofol combination in short gynecological procedures for providing procedural sedation. The primary objective of the study was to compare the adequacy of sedation and analgesia provided by two different ratios of ketamine propofol combination in patients undergoing short gynecological outpatient procedures. The secondary objectives were to compare the hemodynamic variables, airway intervention if any, time for awakening, and the incidence of side effects between the two groups. We hypothesized that ketofol in the ratio of 1:2 would provide better analgesia and sedation compared to ketofol in the ratio of 1:4. Materials And Methods After the approval of the Institutional Ethics Committee (#KIMS/KIIT/IEC/156/2018) and written informed consent, this randomized double-blind study was conducted between September 2019 and January 2021, in 140 female patients in the age group of 18-60 years, belonging to American Society of Anesthesiologists (ASA) physical status I or II, undergoing elective short gynecological daycare procedures lasting less than 30 minutes. Patients with a history of allergy to study drugs, obstructive sleep apnea, and behavioral problems were excluded from the study. The trial was registered with CTRI with registration number CTRI/2019/08/020808. Patients were asked to fast as per the standard Nil Per Oral guidelines. Patients were randomly assigned to one of the two groups using computer-based randomization. An intravenous catheter was secured on the dorsum of the non-dominant hand in the preoperative waiting room and premedication of intravenous Glycopyrrolate 0.2 mg and Midazolam 1 mg were given to all patients 10 minutes prior to induction. After shifting the patients to the operating room, standard ASA monitors were attached which included 5 lead electrocardiograms, pulse oximeter, and non-invasive blood pressure. Oxygen was delivered to all patients by a face mask at 6 L/min. Sedation was induced by bolus intravenous administration of 0.1 mL/kg of the study drug. Group A: 1:4 ratio of ketamine-propofol combination (1 mL of 50 mg/mL Ketamine added to 20 mL of 1% propofol and 4 mL of 5% Dextrose to make a total volume of 25 mL) and Group B: 1:2 ratio of the ketaminepropofol combination (2 mL of 50 mg/mL Ketamine added to 20 mL of 1% Propofol and 3 mL of 5% Dextrose to make a total volume of 25 mL) ( Figure 1). FIGURE 1: Consort flow chart After sedation was induced with bolus administration of the study drugs, it was maintained with an infusion of the study drug at 0.3 mL/kg/hr. The drugs used to induce and maintain anesthesia were prepared in the same syringes by an anesthetist not involved in the study. Ramsay Sedation Score (RSS) of 6 was considered satisfactory and the surgeon was allowed to proceed with the surgery. If the patient did not achieve the desired RSS, a 2 mL bolus of the study drug was administered. The parameters like systolic blood pressure, diastolic blood pressure, mean arterial blood pressure, heart rate, respiratory rate, oxygen saturation, and depth of sedation were assessed at baseline (before injecting the study drug), and every two minutes till the end of the procedure. 
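As a worked illustration of the primary comparison reported above (this recomputation is not part of the published analysis), the 2 × 2 table implied by the counts 43/70 versus 30/70 reproduces the quoted P value when analyzed with Pearson's Chi-square test without continuity correction:

```python
# Recomputing the primary comparison (movement of the lower extremities,
# Group A 43/70 vs Group B 30/70). Illustrative check only; requires scipy.
from scipy.stats import chi2_contingency

table = [[43, 70 - 43],   # Group A: moved, did not move
         [30, 70 - 30]]   # Group B: moved, did not move
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), round(p, 3))   # ≈ 4.84, p ≈ 0.028, matching Table 2
```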
The observations were recorded by an independent researcher who was blinded to the study group. End-tidal carbon dioxide (EtCO 2 ) was monitored continuously by a side stream sampling line inserted into the facial mask. If apnea occurred, as assessed clinically or by capnography trace, or if the peripheral oxygen saturation (SPO 2 ) was ≤ 96%, a jaw thrust maneuver was performed by the anesthetist. If effective ventilation was not achieved after the initial maneuver, bag-mask ventilation was performed. If there was an incidence of movement in the lower extremities during the procedure, a 2 mL bolus of the study drug was administered. Induction, maintenance, and delivery of bolus doses were done using a single syringe pump. After the procedure, the patient was transferred to the Post Anesthesia Care Unit (PACU) and monitored until they met discharge criteria assessed by the Modified Aldrete Score of ≥ 9. The primary objective of the study was to compare the adequacy of sedation in both groups. This was assessed by the incidence of movement in the lower extremities. The secondary objectives were to compare the volume of drug required for induction, duration to reach RSS of 6, time taken for administration of first bolus dose, a total number of bolus doses administered, the total volume of drug used, hemodynamic variables, time for awakening (defined as the time from the discontinuation of infusion at the end of surgery till the patient responds to verbal commands), airway intervention if any, postoperative nausea and vomiting (PONV), recovery agitation and recall of intraoperative events. PONV if any was treated with Ondansetron 4 mg IV. Oh et al. [14] observed the prevalence of movement in lower extremities was 32.5% and 10% in 1:3 ketofol group and 2:3 ketofol combination, respectively. Based on this at a 5% level of significance and 90% power, the sample size was calculated as 67 in each group. To adjust for any dropouts, 70 patients were recruited in each group. Statistical analysis was performed using SPSS® version 20.0 (SPSS, Chicago, IL, USA). For continuous variables, the data were presented as mean ± SD using Student's t-test, and the categorical variables were presented as frequency and percentage. The Chi-square test or Fisher exact test was used to check the association between the two different groups and a P-value of ≤0.05 was considered to be statistically significant. Results A total of 140 patients were enrolled in the study, 70 in each group. The baseline characteristics such as age, weight, height, and body mass index (BMI) were similar in the two groups ( Table 1). The data are represented as mean± standard deviation (SD) and analyzed using unpaired student's t-test. A P-value of ≤0.05 is considered statistically significant. Baseline characteristics Group The incidence of movement in lower extremities that correlates with the number of patients receiving bolus doses was significantly lower in Group B (30,42.9%) compared to group A (43, 61.4%) with a P-value of 0.028. The number of bolus doses and the time for administration of the first bolus dose were not statistically different between the two groups. The time taken to reach an RSS of 6 was significantly lower (P<0.001)) in Group B compared to Group A. Also, the volume of drugs for induction was significantly lower in Group B compared to Group A with a P-value of 0.002. The total volume of drug used, total duration of the procedure, and awakening time were not statistically different between the two groups ( Table 2). 
There was no statistically significant difference in the blood pressure readings, heart rate, and respiratory rate (RR) in the two groups ( Figure 2). TABLE 2: Primary and secondary objectives The data are represented as a number (percent) or as mean± standard deviation and analyzed using Chi-square test or unpaired student's t-test as appropriate. A P-value of ≤0.05 is considered statistically significant. FIGURE 2: Comparison of mean arterial pressure and heart rate at twominute intervals The incidence of PONV in Group A and group B was 17.1% and 14.3%, respectively, and this difference was statistically insignificant (P=0.642). Recovery agitation was seen in only one of the patients in Group B whereas none of the patients in Group A had recovery agitation (P=1.000). No patient in either group had a recall of intra-operative events ( Table 3). TABLE 3: Incidence of side-effects Data presented as number (percent) and analyzed using the Chi-square test. A P-value of ≤0.05 is considered statistically significant. Discussion Procedural sedation for short surgical procedures is most commonly carried out with ketamine or propofol in addition to opioids and benzodiazepines. Ketamine commonly produces emergence delirium and vomiting along with an increase in heart rate and blood pressure in the routine induction dose. Propofol at induction dose can result in a severe fall in blood pressure and does not have any analgesic properties. The aim of anesthesia in short outpatient gynecological procedures is to reduce the patient's anxiety, ensure adequate sedation and analgesia during the procedure and facilitate early recovery with minimal side effects for an early discharge. The combination of propofol and ketamine produces more stable hemodynamic conditions than ketamine or propofol used individually. The combination of ketamine and propofol in different proportions is being used for procedural sedation because of the increased analgesic effect of ketamine and reduction of the side effects of propofol. In the present study, we compared the sedative and analgesic effects, hemodynamics and respiratory changes, the requirement of amount of anesthetic solutions, recovery times, and complications of two different ratios of ketamine propofol combination in 140 patients undergoing daycare gynecological procedures. In this study, we found the incidence of movement in lower extremities was significantly lower in the 1:2 ketofol group compared to 1:4 ketofol group. A similar result was reported in a study conducted by Oh et al. [14] with an aim to reduce patient movement in loop electrosurgical excision procedure. They found that the incidence of adduction motion in lower extremities was significantly lower in patients receiving higher ketofol concentrations. Similar results were found in a trial studying different doses of ketamine with propofol in patients undergoing breast biopsy procedures [15]. The incidence of movement correlated to the number of patients needing bolus doses of the study drug as it was administered when the patient responded to surgical stimulus. In our study, the number of patients requiring bolus doses was significantly higher in group A compared to Group B. This was in concurrence with studies conducted by other authors [15][16][17]. This was due to the higher concentration of ketamine in group B providing better analgesia. The volume of drug used during induction to achieve desired RSS of 6 was significantly lower in Group B compared to Group A. 
Using a lower dose of propofol in the ketofol mixture helps avoid the problems with excess propofol use like hemodynamic instability and the need for airway intervention. This is practically beneficial to both patients and clinicians. The time required for the patients to reach an RSS of 6 after the induction dose was found to be significantly higher in group A compared to group B in our study. In the study conducted by Badrinath et al. [15], they found no difference in the time required to achieve the desired Observer Assessment of Alertness score. This could be possibly due to the different concentrations of ketamine in the ketofol groups studied. Our study showed that the total number of bolus doses given when there was a response to the surgical stimulus was insignificant in both groups. A similar observation was made in studies conducted by other authors [15,18]. However, in a study by Oh et al. [14], a statistically significant difference was observed. This could be due to the lower concentration of ketamine in the ketofol mixture needing more boluses. The total volume of drugs used was not statistically different in our study although it was observed to be lower in the higher ketamine group (Group B). Similarly, in the study concluded by Miner et al. [19] the total sedative bolus dose requirement was higher in the lower ketamine concentration group. Our study did not find any statistically significant difference in the hemodynamic parameters measured throughout the procedure which aligned with findings from previous studies [15][16][17]. The need for airway intervention in our study was insignificant between the two groups, which were similar to a few studies conducted earlier [16,18]. However, in the studies conducted by other authors [14,17], there was a statistically significant difference in the need for airway intervention in the groups receiving higher ketamine in the ketofol mixture. The difference in the findings could be due to the deep level of sedation with a higher dose of ketamine in the ketofol mixture which led to impaired breathing and increased need for airway support. The awakening time in both groups was statistically and clinically insignificant in our study. Similar results were found in a study conducted by Miner et al. [19]. The recovery time in our study was significantly longer in the group with higher ketamine concentration. This was also demonstrated in studies conducted by other authors [16,17,19]. We noticed recovery agitation in one patient in the higher ketamine group and none in the lower ketamine group. It was transient and the patient did not require any restraining or use of opioids or benzodiazepines. Similar results with no difference in the incidence of recovery agitation were found in other studies [14,16]. However, studies conducted by other authors [17,19] found a higher incidence of recovery agitation in the group receiving 1:1 ketofol compared to the other groups with lower ketamine concentrations. This could be due to the higher concentration of ketamine in 1:1 ketofol group. The incidence of PONV in PACU in our study was found to be statistically insignificant. Similar results were found in studies conducted by previous authors [15][16][17][18]. No patients in either group experienced recall of intraoperative events. Similar results were documented by other authors [14,15]. The limitations of our study were that the depth of sedation was assessed by RSS. 
The use of bispectral index monitoring could have been done for a better assessment of depth of anesthesia during procedural sedation. In addition, a specific scoring system to measure the analgesic component of the patients undergoing procedures was lacking in our study. Future studies can be carried out taking these into consideration. Conclusions Procedural sedation for short gynecological procedures can be safely and effectively carried out using a 1:2 ratio of ketamine propofol combination. Based on the findings of our study, we state that the use of ketamine-propofol combination in the ratio 1:2 provides better sedation and analgesia with fewer patients needing additional boluses compared to ketamine-propofol in the ratio 1:4 for short outpatient gynecological procedures. Additional Information Disclosures Human subjects: Consent was obtained or waived by all participants in this study. Institutional Ethics Committee, Kalinga Institute of Medical Sciences issued approval #KIMS/KIIT/IEC/156/2018. This prospective double blind randomized trial was approved by the Institutional Review Board and cleared by the Institutional Ethics Committee after which the trial was registered with CTRI prospectively. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
The Emerging Key Role of the mGluR1-PKCγ Signaling Pathway in the Pathogenesis of Spinocerebellar Ataxias: A Neurodevelopmental Viewpoint Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominantly inherited progressive disorders with degeneration and dysfunction of the cerebellum. Although different subtypes of SCAs are classified according to the disease-associated causative genes, the clinical syndrome of the ataxia is shared, pointing towards a possible convergent pathogenic pathway among SCAs. In this review, we summarize the role of SCA-associated gene function during cerebellar Purkinje cell development and discuss the relationship between SCA pathogenesis and neurodevelopment. We will summarize recent studies on molecules involved in SCA pathogenesis and will focus on the mGluR1-PKCγ signaling pathway evaluating the possibility that this might be a common pathway which contributes to these diseases. Introduction Spinocerebellar ataxias (SCAs) are a heterogeneous group of autosomal dominantly inherited progressive disorders with degeneration and dysfunction of the cerebellum [1][2][3]. By now, more than 40 genetically distinct subtypes of SCA have been identified. The genetic background of SCAs can be classified into two groups: Group I representing repeat expansion SCAs, such as SCA1 and SCA2 which are caused by dynamic repeat expansion mutations, typically polyglutamine repeat expansions, and Group II representing conventional mutation SCAs (non-repeat expansion SCAs), which are caused by nonsense mutations, missense mutations, deletions or insertions, such as SCA5 or SCA14. In general, signs and symptoms can develop in patients with SCA at any time from childhood to late adulthood, but adult-onset is the most common [1][2][3]. Previous studies have shown that in Group II conventional mutation SCAs the age of onset is earlier than in SCAs due to polyglutamine repeat expansions [1,2]. The clinical features of all SCAs include progressive loss of balance and coordination, accompanied by slurred speech. The mobility and communicative skills of individuals with SCA are reduced and many SCAs lead to premature death. As the name suggests, pathological changes of SCAs occur primarily in the nervous system and the cerebellum is the principal target. Purkinje cell degeneration, which leads to cerebellar atrophy, occurs in most SCAs [1][2][3]. The pathways causing degeneration or loss of Purkinje cells are complex. Dysregulation of gene expression is an acknowledged characteristic of several SCAs and has been proposed to trigger the pathogenesis of SCAs [4,5]. Several proteins which can cause SCAs when they contain mutations are recognized as key factors associated with the regulation of gene expression and contribute to gene regulation on the transcriptional level. In some representative types of SCAs caused by polyglutamines, such as SCA1 and SCA7, the causative proteins are ataxins which interact with transcriptional regulators and indirectly affect gene expression especially for developmental processes [6][7][8][9]. Some SCA causing genes are directly responsible for the regulation of gene expression, e.g., TBP in the case of SCA17. The TATA-Box binding protein (TBP) is a general transcription factor and mutants of TBP with CAG repeat expansion result in SCA17 [10,11]. 
Hence, these studies suggest that mutated proteins of SCAs can affect gene expression directly or indirectly by changing the activity of signaling proteins resulting in a dysregulation of transcription. Some studies have focused on the analysis of the gene expression on the global transcriptional level by the use of animal or cellular models and aimed to identify genes with an altered expression in SCAs [12][13][14]. In this review article, we will specifically focus on the molecules linking Purkinje cell development to SCAs and elaborate the function of these molecules that are causing SCA and at the same time play a role in the development of Purkinje cells. Furthermore, we will review recently identified key molecules dysregulated in different SCAs and discuss how they participate in the emerging shared pathogenic pathways in the SCAs, in particular the mGluR1-PKCγ signaling pathway. Increasing Evidence Linking Purkinje Cell Dendritic Development to SCAs Purkinje cells are cells which have a large and highly branched dendritic tree. Many molecules participating in the regulation of different stages of Purkinje cell development have been identified, including dendritic growth, differentiation, and maintenance. Examples for these molecules are Beta-III spectrin, PKCγ, TRPC3 and mGluR1. The genes (SPTBN2, PRKCG, TRPC3 and GRM1) encoding these molecules have also been identified as causative genes of SCAs and are probably associated with pathogenesis of SCAs. Here we will briefly review the current information about these molecules with respect to Purkinje cell dendritic development and their involvement in SCA pathology. SCA5 SCA5 is caused by gene mutations of SPTBN2 encoding Beta-III spectrin protein [15][16][17]. The mutant of Beta-III spectrin protein was strongly expressed in Purkinje cells by immunofluorescence staining in a SCA5 mouse model with a phenotype of progressive cerebellar degeneration [18]. Beta-III spectrin has been identified to play a critical role in the organization of the dendritic tree and the development of dendritic spines of Purkinje cells. Beta-III spectrin knockout mice have defects of the ordered dendritic arborization, particularly a loss of monoplanar organization, a decreased dendritic diameter, a reduction of the density of dendritic spines and a reduced number of synapses in Purkinje cells. Purkinje cells from Beta-III spectrin knockout mice also show strongly reduced dendritic areas compared to wildtype cells in dissociated cultures [17]. SCA6 The alteration of calcium channel conductance is believed to be an important factor in the pathology of SCAs. SCA6, which is caused by a CAG repeat expansion in the CACNA1A gene, encoding the voltage-gated calcium channel subunit alpha 1A, belongs to the Group I type of SCA [19], but there are also a number of SCA6 patients with point mutations of this gene [20,21]. Overlapping phenotypes have been reported for SCA6 caused by missense mutations, episodic ataxia type 2 (EA2) and familial hemiplegic migraine (FHM). The causative gene of EA2 and FHM is also CACNA1A, previously called CACNL1A4 [22]. A mutation of the CACNA1A gene, the causative gene of SCA6, was also reported in the tottering mouse. In this mouse model, a reduction of dendritic development was observed [23]. The CACNA1A gene can functionally encode two proteins: the α1A subunit of P/Q-type voltage gated calcium channel and α1ACT, a transcription factor, encoded by a second cistron in the CACNA1A gene. 
Recent studies have found that α1ACT is involved in the regulation of genes associated with the function of Purkinje cell development [24]. In an autopsy study in early-onset SCA6, Purkinje cells were found to have reduced dendritic mass and spines, as well as reduced dendritic branching complexity [25,26]. Recent studies have shown that patients with CACNA1A mutations exhibit atrophy of the cerebellum during development, which is a recognizable neurodevelopmental disorder [27]. SCA14 Mutations of PKCγ cause SCA14 and in a mouse model of SCA14 expression of mutated PKCγ leads to dendritic abnormalities of Purkinje cells [28][29][30]. Increased PKCγ activity was identified as a negative regulator for Purkinje cell dendritic development [31]. Although many isoforms of PKC are expressed in the cerebellum, PKCγ is specifically and strongly expressed in Purkinje cells [32][33][34]. PKCγ expression is relatively low at birth and increases in the postnatal period [35,36]. PKCγ can be activated by binding of diacylglycerol (DAG) and Ca 2+ [37]. In rat organotypic slice cultures, Purkinje cells were shown to grow increased dendritic trees with more ramified dendritic branches after treatment with a general PKC inhibitor, GF109203X [38]. Treatment of an inhibitor specific for conventional PKC isoforms, Gö6976 also promoted extensive branching of Purkinje cells [39]. In slice cultures from PKCγ deficient mice, Purkinje cells were shown to have expanded dendritic trees with an increased number of dendritic branching points compared to wildtype mice [39]. When Purkinje cells are treated with phorbol-12-myristate-13-acetate (PMA), an activator of PKC, a marked reduction of the dendritic trees was shown in either organotypic slice cultures or dissociated cerebellar cultures [28,30,38,39]. SCA15/16/29 Similar to SCA6 involving calcium channels, many genes of Group II SCAs caused by point mutations have been found to be involved in the regulation of the calcium equilibrium. SCA15 [40], also known as SCA16 [41], is caused by a heterozygous mutation in the ITPR1 gene and by deletions involving the ITPR1 gene. The ITPR1 gene encodes inositol 1,4,5-trisphosphate (IP3) receptor type 1, mediating intracellular calcium release. The ITPR1 protein had a decreased expression in lymphocytes of the patient, suggesting a loss of function or a haploinsufficiency of ITPR1 protein [42,43]. SCA29 is also related to the ITPR1 gene but has different single-nucleotide variants in ITPR1. The clinical features of SCA29 differ significantly from SCA15. SCA29 has non-progressive features and occurs congenitally, whereas SCA15 is a progressive cerebellar ataxia and occurs in adulthood [44,45]. The SCA29 missense mutations of ITPR1 protein are localized in the functional domain that refers to coupling or regulatory events and phosphorylation sites to influence the regulation of ITPR1 protein signaling [45]. Abnormal dendritic development of Purkinje cells is reported for cultured cerebellar cells from ITPR1 knockout mice [46,47]. SCA41 The mutation Arg762His of TRPC3 protein has recently been identified to result in SCA41 [48,49]. To date, no SCA41 disease-associated mouse model has been reported, however, the moonwalker (Mwk) mouse mutant is caused by a different Trpc3 protein point mutation T635A and exhibits profound impairment of growth and differentiation of Purkinje cell dendrites [50]. The TRPC3 protein Arg762His mutation was shown to have a similar phenotype to the mouse Mwk Trpc3 mutation in cellular experiments [48]. 
In addition, disrupted regulation of TRPC3 has been reported in other SCAs. TRPC3 protein downregulation has been reported in Purkinje cells of a SCA1 mouse model before the onset of Purkinje cell degeneration [51]. Failure of TRPC3 protein phosphorylation by mutant PKCγ from SCA14 has been shown in cellular assays [52]. TRPC3 protein is mainly distributed in the cerebellar Purkinje cell layer during the stage of dendritic growth and functions in both dendritic growth and survival of Purkinje cells [50]. TRPC3 protein is highly expressed in the soma and dendrites of Purkinje cells during postnatal development of the cerebellum [50,53], and the high level of TRPC3 protein expression continues to the period of adulthood, suggesting that TRPC3 protein may also regulate the growth and refinement of dendritic trees of Purkinje cells in the cerebellar cortex during adulthood [50]. SCA44 SCA44 is caused by mutations of mGluR1 and missense mGluR1 mutants can result in increased receptor activity [54]. Activation of mGluR1 signaling has been reported in the moonwalker mouse model [50]. mGluR1, a subtype of the group I mGluRs, is most strongly expressed in Purkinje cells starting during the embryonic period [55][56][57][58]. Although no marked abnormality in cerebellar anatomy of mGluR1 deficient mice was found, the Purkinje cell dendritic arbors were found to be smaller and have a reduced complexity of Purkinje dendritic branches [59]. In dissociated cultures of rat cerebellar Purkinje cells and granule cells, pharmacological inhibition of mGluR1 by application of the subtype-selective antagonist of group I metabotropic glutamate receptors 7-(hydroxyimino)cyclopropa[b]chromen-1a-carboxylate ethyl ester (CPCCOEt) reduced the number of surviving Purkinje cells and the size of their dendritic arbors. These findings have been confirmed by rat in vivo experiments via local injections of LY367385, a highly selective and competitive mGluR1a receptor antagonist, mGluR1 antisense oligonucleotides, or systemic administration of CPCCOEt [60]. In contrast, in cerebellar organotypic slice cultures derived from P8 mouse pups and maintained for 12 days, pharmacological inhibition of mGluR1 by (RS)-alpha-methyl-4-carboxyphenylglycine (MCPG), a competitive metabotropic glutamate receptor antagonist, had an only minor negative effect on Purkinje cell dendritic morphology [61]. These studies indicate that mGluR1 signaling is essential for Purkinje cell growth and survival particularly at earlier developmental stages. At later developmental stages, mGluR1 signaling has an important regulatory function in the period of rapid dendritic expansion in cerebellar slice cultures. When mGluR1 signaling was induced by treatment with Dihydroxyphenylglycine (DHPG), a group I mGluR activator, a severe reduction of Purkinje cell dendritic growth was found [62]. SCA Pathogenesis Is Linked to Purkinje Cell Dendritic Development The pathogenesis of SCA is still unknown and gain-of-function, dominant-negativefunction and loss-of-function mechanisms have been proposed. However, none of them can adequately explain the pathology. For example, in SCA14, PKCγ knockout mice show relative normal cerebellar development and function making a simple loss-of-function explanation unlikely [63]. Indeed, inhibition of PKCγ activity by pharmacological inhibitors in cerebellar organotypic slice culture may even promote Purkinje cell dendritic development [39]. 
The toxic gain-of-function hypothesis has been proposed because aggregation of PKCγ was observed to exert a toxic effect on cells in some studies. It has been reported that PKCγ(H101Y) transgenic mice exhibit an ataxic phenotype with altered morphology and loss of Purkinje cells, and that the PKCγ(H101Y) protein leads to a decrease in the overall activity of the PKC enzyme [64]. In addition to PKCγ mutations that decrease kinase activity, several PKCγ mutants with increased kinase activity have also been found [52]. The PKC activator PMA leads to a reduction in dendritic growth of Purkinje cells, suggesting that increased kinase activity may explain some of these cases. In the PKCγ(S361G) transgenic mouse model of SCA14, Purkinje cells exhibited severe morphological abnormalities that resembled the inhibition of dendritic growth induced by PMA treatment. These results support the concept that increased kinase activity of PKCγ is involved in the pathogenesis of SCA14 [28,30]. More recently, the PKCγ(R76X) mutation has been reported, which produces a short peptide containing the pseudosubstrate domain that serves as a pan-PKC inhibitor. The PKCγ(R76X) peptide is thought to act in a dominant-negative manner by inhibiting PKC activity and causing the death of Purkinje cells [65]. A similar situation occurs in other types of SCAs, such as SCA44: the functional consequences of different mutations in the same disease-causing protein may vary and have different effects on the dendritic development of cerebellar Purkinje cells. Based on the above findings, in this manuscript we explore a possible link between molecules regulating Purkinje cell dendritic development and the pathogenesis of SCAs. This idea is supported by molecules such as the RORα protein, which is not encoded by a causative gene of SCAs but plays an important role during disease progression. The expression of the RORα protein is downregulated in Purkinje cells from the SCA1 mouse model, and changes in Purkinje cell development mediated by RORα determine the disease severity of SCA1 [12,66]. These results suggest that mutations in, or dysregulation of, the molecules that play a role in Purkinje cell development may contribute to SCAs and other cerebellar diseases. Dysregulated Gene Expression in SCAs Normal cellular growth, development and differentiation depend on accurate gene expression. Dysregulation of gene expression is an acknowledged characteristic of several SCAs and has been proposed to trigger the pathogenesis of SCAs [4,5]. Several mutant SCA-causing proteins are recognized as key factors in the regulation of gene expression and contribute to gene regulation at the transcriptional level. In some representative polyglutamine SCAs, such as SCA1 and SCA7, the causative proteins are ataxins, which interact with transcriptional regulators. Ataxin-1 can directly interact with Capicua, the mammalian homolog of Drosophila Capicua, and modulate its repressor activity in mammalian cells. However, for Ataxin-1 with a polyglutamine expansion, both the binding to Capicua and the repressor activity of Capicua are altered. Capicua is a key transcriptional regulator and plays an important role in gene expression, especially during developmental processes [6]. In SCA7, the Ataxin-7 protein plays a role in gene expression through its interaction with the co-activator SAGA complex, a highly conserved complex involved in gene expression [7,8].
SAGA is influenced by mutant Ataxin-7 and assembles into a dysfunctional, negatively charged SAGA complex, which is proposed to be involved in the pathogenesis of expanded polyglutamine diseases [9]. Some SCA-causative genes are directly responsible for the regulation of gene expression. The TATA-box binding protein (TBP) is a general transcription factor, and a CAG repeat expansion in the gene encoding TBP results in SCA17 [10]. TBP contains polyglutamine regions, and these regions play a role in the ability of TBP to promote or inhibit transcription by interaction with regulators or by binding to different promoter areas. Mutations of TBP reduce its binding affinity for the relevant DNA regions [11]. After transcription, the first step of gene expression, the subsequent step of protein synthesis involves essential enzymes called aminoacyl-tRNA synthetases. Importantly, mutation of genes encoding aminoacyl-tRNA synthetases can also cause cerebellar ataxia in a mouse model [67]. These studies suggest that mutated proteins of SCAs can affect gene expression directly or indirectly by changing the activity of signaling proteins, resulting in a dysregulation of transcription. Many studies, therefore, have focused on the analysis of gene expression at the global transcriptional level using animal or cellular models, aiming to identify genes with strongly altered expression. These genes are expected to pinpoint the pathways associated with SCAs. Abnormalities of mGluR1 Signaling Associated with Pathogenesis of SCAs Based on the global transcriptional data, several dysregulated molecules have been identified in different mouse models with a cerebellar ataxic phenotype, pointing to potentially important pathways associated with the pathogenesis of SCAs and cerebellar ataxia. For example, staggerer mice exhibit a characteristic severe cerebellar ataxia due to an underdeveloped cerebellar cortex and unaligned Purkinje cells [68]. Overlap-based analysis of microarray data from the SCA1 mouse model [12] and the staggerer mouse [13] identified mGluR1 as a molecule dysregulated in both models. This molecule and its associated signaling may therefore provide a better understanding of the disease mechanisms of SCAs and cerebellar ataxia. A variety of mouse models have been created for molecules of the mGluR1 signaling complex, including mGluR1, Gαq, PLC, PKCγ, ITPR1 and TRPC3 [43,[69][70][71][72][73][74][75][76][77]. Mutations in some of these molecules are known to cause SCAs or other disorders of the cerebellum. For example, PKCγ mutants cause SCA14 [28][29][30], and PKCγ is downstream of mGluR1 signaling. SCA15 is caused by mutations of ITPR1 [40], another downstream molecule of mGluR1. In SCA1, mGluR1, ITPR1, PKCγ and Homer3 have been found to be downregulated at the transcriptional level [12,51,78], and for mGluR1 this reduction of expression has been confirmed at the protein level [79]. Disruption of mGluR1 has been reported in the mouse models of SCA3 [80] and SCA5 [18]. Mutations within GRM1, coding for mGluR1, are relatively rare. Recently, SCA44 was reported: heterozygous dominant mutations in the GRM1 gene produce a typical SCA phenotype, although with differing characteristics, possibly due to different functional changes caused by the different mutants [54].
The function of an mGluR1 truncation mutation was tested in cellular experiments, and this mutation resulted in decreased receptor activity and decreased downstream target phosphorylation, suggesting that this loss-of-function mutation interferes with downstream signaling of mGluR1 [54]. These findings demonstrate an important role of mGluR1 signaling in Purkinje cells and show the relationship between altered mGluR1 signaling and SCA pathogenesis. Evidence for enhanced mGluR1 signaling is also present in different types of SCA. For example, elevated calcium is reported in Purkinje cells of the SCA2 mouse model caused by ATXN2 Q58 [81], and mutant ataxin-2 can interact with IP3R [82]. Our previous study reported increased IP3R1 expression (encoded by ITPR1, the causative gene of SCA15 and SCA29) in an SCA14 mouse model [83]. The two other SCA44-causing mGluR1 missense mutations showed increased receptor activity compared to wild-type mGluR1, with increased ligand sensitivity and ligand-independent activation. This is related to the fact that both missense mutations are located close to the region involved in mGluR1 activation, resulting in excessive mGluR1 signaling through positive feedback with increased intracellular calcium levels [54]. The moonwalker (Mwk) mouse model, with severe ataxia and abnormal Purkinje cell development, is caused by a point mutation of Trpc3 (the causative gene of SCA41). This mutant TRPC3 can activate the cation channel, which acts downstream of mGluR1 signaling [50,84]. These studies suggest that an increased activity of the mGluR1 pathway might also be associated with pathogenesis in some types of SCAs. The Role of Recently Identified Dysregulated Molecules Researchers have used the strategy of overlapping microarray data from different types of SCAs. Recently, these mouse models have been used to identify key molecules that could help to uncover potentially shared molecular mechanisms related to the pathogenesis of SCAs [14,[85][86][87]. RGS8 RGS8 is dysregulated in SCA1, SCA2, SCA7 and SCA14, indicating a role in the pathogenesis of SCAs [14,85,86,88]. RGS8 has been reported to be strongly expressed in rat cerebellar Purkinje cells, and it appears to be enriched in the brainstem and nucleus accumbens [89][90][91][92]. RGS8 mRNA selectively interacts with ATXN2, and mutant ATXN2 reduces RGS8 expression in the SCA2 mouse model [86]. In a mouse model of SCA14 with increased PKCγ activity, RGS8 function has been studied in more detail. Increased RGS8 expression could partially counteract the negative effect of activated mGluR1 signaling during Purkinje cell development [85]. Since RGS8 belongs to the R4 subfamily, its function is directly linked to the Gq protein. RGS8 inhibits M1 muscarinic acetylcholine receptor-Gq-mediated signaling in Xenopus oocytes [93] and has a strong inhibitory function for Gαq- and Gαi/o-dependent receptor activity [94]. However, RGS8 has also been demonstrated to function via direct interaction with the relevant receptor. RGS8 is able to interact with the third intracellular loop of melanin-concentrating hormone (MCH) receptor 1 (MCHR1) and inhibits the calcium mobilization induced by melanin-concentrating hormone [94]. Increased RGS8 expression may be related to the antidepressant-like behavior of RGS8 transgenic mice through inhibition of MCHR1 signaling in the hippocampal CA1 region [95].
Although the expression of RGS8 protein has been reported in the hippocampal CA1 region, RGS8 knockout mice have normal brain development and no major abnormalities in other organs [95,96]. RGS8 protein is also expressed in the testis, but RGS8 knockout mice are viable and fertile [96]. Electroconvulsive seizures in rats caused an increase in RGS8 mRNA expression in the prefrontal cortex [97], suggesting a potential role for RGS8 in seizures. INPP5A The INPP5A protein has been identified as a common molecule dysregulated in SCA1, SCA2, SCA7, SCA14 and SCA17 [14,[85][86][87]. INPP5A is an enzyme of the inositol polyphosphate 5-phosphatase family. In cellular signaling, INPP5A inactivates IP3 to terminate downstream signaling [98][99][100]. The absence of Inpp5a protein leads to progressive degeneration of Purkinje cells. In SCA17 knock-in mice, reduced Inpp5a expression was reported, which was associated with increased IP3 levels. Importantly, overexpression of the Inpp5a gene reduces IP3 levels in the cerebellum and rescues Purkinje cell degeneration in SCA17 mice [87]. Similarly, overexpression of Inpp5a alleviates Purkinje cell degeneration in SCA2 mice [101]. STK17B Downregulated STK17B mRNA is found in SCA1, SCA7 and SCA41 mouse models. In a recent study of STK17B gene function in Purkinje cells, STK17B signaling was identified as a downstream effector of PKCγ. A reduction of STK17B protein was confirmed specifically in Purkinje cells from SCA14 mouse models [14,102,103]. STK17B, also known as DAP kinase-related apoptotic kinase 2 (DRAK2), is located on chromosome 2 (2q32.3) and was first isolated from human placenta and liver cDNA libraries [104]. It belongs to the death-associated protein (DAP) family of serine/threonine kinases. The STK17B gene is related to the STK17A gene, also known as DRAK1, and the two may constitute a novel sub-family, which was originally thought to function in inducing apoptosis [104]. The STK17B protein structure includes an N-terminal autophosphorylation region and a C-terminal region with a nuclear localization signal. Another putative nuclear localization signal has been reported in the kinase domain [105]. STK17B has been relatively well studied in immunology, as it is expressed in the immune system, and the findings from the immune system may provide further insight for future studies on the function of STK17B in the brain. STK17B has been associated with calcium mobilization and homeostasis [105,106]. As the exact role of STK17B in the brain is currently unclear, further studies are needed to better understand its function in the nervous system. STK17B protein is prominently expressed in the brain, including the olfactory lobe, pituitary, suprachiasmatic nuclei, ventricular zone and cerebellum [107]. Importantly, increased phosphorylation of STK17B protein has negative effects on cerebellar Purkinje cell dendritic development. This negative effect can be partially rescued by Cpd16, a newly designed STK17B inhibitor [103]. These findings provide new perspectives for the treatment of SCAs, and the inhibitor could be a potential drug candidate. Taken together, these studies add new molecules associated with altered calcium signaling and a compound that can pharmacologically manipulate mGluR1 signaling (Figure 1), which is thought to be an important factor in the pathology of SCAs.
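The overlap strategy used to identify these molecules is conceptually simple: lists of differentially expressed genes from independent ataxic mouse models are intersected, and the shared entries become candidate pathway members. A minimal sketch of this screening step is shown below; the gene sets are illustrative placeholders, not the published SCA or staggerer data sets.

```python
# Illustrative sketch of an overlap-based screen: intersect differentially
# expressed gene (DEG) lists from several ataxic mouse models to find shared
# candidates.  The gene sets below are hypothetical placeholders.
from collections import Counter

deg_lists = {
    "SCA1":      {"Grm1", "Itpr1", "Prkcg", "Homer3", "Rgs8", "Inpp5a", "Calb1"},
    "SCA2":      {"Rgs8", "Inpp5a", "Camk4", "Calb1", "Pcp2"},
    "SCA7":      {"Rgs8", "Inpp5a", "Stk17b", "Grm1"},
    "staggerer": {"Grm1", "Rora", "Rgs8", "Slc1a6"},
}

# Genes dysregulated in every model (strict intersection)
shared_all = set.intersection(*deg_lists.values())

# Genes dysregulated in at least three of the four models (looser criterion)
counts = Counter(g for genes in deg_lists.values() for g in genes)
shared_most = {g for g, n in counts.items() if n >= 3}

print("in all models:      ", sorted(shared_all))
print("in >= 3 of 4 models:", sorted(shared_most))
```

In practice such a screen would start from normalized expression matrices with fold-change and significance thresholds rather than fixed gene sets, but the intersection step itself is as simple as shown.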
Shared mGluR1-PKCγ Signaling Pathway in SCAs Since the different SCAs share the same cerebellar phenotypes, it is reasonable to assume that a common underlying pathogenic signaling pathway relates to the shared pathogenic features in the cerebellum, e.g., Purkinje cell degeneration. Reviewing recent studies of SCA mouse models, we find that mGluR1-PKCγ signaling is a common pathway that is dysregulated early in the onset of SCAs and is associated with Purkinje cell dendritic development. Dysregulated expression of common molecules is related not only to Purkinje cell dendritic development but also to the pathology of the disease. This indicates that similar signaling events occur in the early stage of disease. For the Group I SCAs, e.g., SCA1 and SCA2, most of the causative genes are involved in transcription [1]; a change of mGluR1-PKCγ signaling should therefore be an indirect response during the early stage of the disease. For the Group II SCAs, the causative genes point directly to the shared signaling pathway or its relevant downstream components, such as the mGluR1, PKCγ, ITPR1, and TRPC3 proteins [70,71,[73][74][75]77]. Elucidating these shared pathways will help us to possibly modulate and monitor the pathogenesis of different SCAs (Figure 2). Although the alteration of mGluR1-PKCγ signaling is thought to affect dendritic development of Purkinje cells and to occur much earlier than disease onset, dendritic development of Purkinje cells may not be the direct cause of the adult ataxia phenotype. The cause of ataxia is thought to be related to changes in the firing pattern of mature Purkinje cells. For example, deletion of the SCA13-associated gene KCNC3 affects the firing frequency of Purkinje cells and increases the excitability of Purkinje cell dendrites [108]. Since mouse models of ataxia share alterations of the Purkinje cell output signals, and since proper function of mGluR1 receptors and mGluR1 signaling has been found to be involved in the prevention of ataxia, a link between the mGluR1 signaling pathway and ataxias is suggested. Further studies have shown that mutations of genes related to mGluR1 or to related signaling molecules cause a failure of climbing fiber maturation during the establishment of Purkinje cell innervation [109]. However, the suggested mGluR1 signaling pathway is certainly not able to explain the mechanism of all types of SCA. In a mouse model of SCA27, there was no significant change in the mGluR1 response, but the AMPA-mediated currents were impaired [110], suggesting that the mGluR1 pathway is not unique and that there are other potential signaling pathways that may cause SCAs.

Recently, adult-stage RNA profiling was presented in a study using the SCA2 mouse model, with the expectation of finding signaling pathways important for Purkinje cell degeneration. Inpp5a and RGS8 were also identified as dysregulated molecules in this adult mouse study. In addition, there are many molecules associated with calcium signaling, such as Camk2a and Camk4, which have been described as downstream candidate factors of mGluR1 signaling in synaptic function [111,112]. Although these molecules have not been identified in the developmental phase and few studies have reported functions of these molecules in the dendritic development of Purkinje cells, it is worthwhile to investigate them to gain a deeper understanding of SCA pathogenesis.

Serological testing using antibodies could help neurologists to diagnose cerebellar ataxias in clinical practice. Recent studies have shown that selective antibodies against molecules of the mGluR1 pathway are often present in patients with autoimmune cerebellar ataxia. Importantly, many of these antigens are also associated with the pathogenesis of spinocerebellar ataxias [113][114][115]. This clinical evidence suggests that the mGluR1 signaling pathway may be a common pathophysiological mechanism not only in SCAs but also in other conditions with signs of cerebellar ataxia.

Conclusions and Outlook The evidence for an association between Purkinje cell development and SCAs reveals an important role of molecules involved in Purkinje cell development and function in the pathogenesis of SCAs. The molecules involved in mGluR1-PKCγ signaling are all strongly expressed in Purkinje cells [116,117]. Overlapping cerebellar pathogenic symptoms and abnormal Purkinje cell dendritic growth in SCAs accompany dysregulation of these common genes, suggesting the existence of shared cellular pathways linking multiple forms of SCAs [14,85]. In this review, we provide evidence that suggests that mGluR1-PKCγ signaling might be an important pathway shared by several different types of SCAs. In Table 1 we list those subtypes of SCA for which abnormalities of signaling or changes of expression of important components of the mGluR1-PKCγ signaling pathway have been reported. Interestingly, this signaling pathway is associated with the recently identified common genes STK17B, RGS8 and INPP5A [14,[85][86][87]103]. Since there is currently no efficient treatment strategy for SCAs, it would be important to design new therapeutic strategies that either suppress the progression of SCAs or at least alleviate the symptoms of the disease. STK17B has been identified as a downstream mediator of mGluR1-PKCγ signaling, and Cpd16, a new inhibitor of STK17B, has been reported to partly rescue the phenotype of Purkinje cell abnormality [103,118]. It might be promising to study the utility of Cpd16 in SCA patients in the future. RGS8 is reported as a key molecule with dysregulated transcription in different SCA mouse models and acts at the level of the Gq protein, upstream in the mGluR1-PKCγ cascade [85]. Strategies for designing drugs targeting RGS family members have been applied in cancer treatment [119,120]. Such drugs would also be promising candidates for targeting RGS8 in order to modulate mGluR1 signaling and improve SCA-associated disease symptoms in the future. Information was obtained from the cited references and from the Online Mendelian Inheritance in Man (OMIM) entries of the SCAs.
In summary, despite the variety of mouse models available to investigate the pathology of SCAs, it is still unknown why different SCAs produce common deficits in the cerebellum. However, in many mouse models, abnormalities in genes involved in cerebellar development have been identified. To better address this question, identifying dysregulated molecules that point to common signaling pathways has proven to be an efficient approach. Future studies will need to combine patient samples and mouse models across diverse forms of SCAs in order to better define the common pathological mechanisms underlying them. Conflicts of Interest: The authors declare no competing interest associated with the manuscript.
2022-08-18T15:13:03.942Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "79380ac80e141ce6f4d43804f5dc39574790c2f2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/16/9169/pdf?version=1660575266", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "905b27daacf631572e8f1d9314f898cd00b3e7e9", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
11308822
pes2o/s2orc
v3-fos-license
Acoustical properties of air-saturated porous material with periodically distributed dead-end pores a)
A theoretical and numerical study of the sound propagation in air-saturated porous media with straight main pores bearing lateral cavities (dead-ends) is presented. The lateral cavities are located at “nodes” periodically spaced along each main pore. The effect of periodicity in the distribution of the lateral cavities is studied, and the low frequency limit valid for closely spaced dead-ends is considered separately. It is shown that the absorption coefficient and transmission loss are influenced by the viscous and thermal losses in the main pores as well as by their perforation rate. The presence of long or short dead-ends significantly alters the acoustical properties of the material and can significantly increase the absorption at low frequencies (a few hundred hertz). These effects depend strongly on the geometry (diameter and length) of the dead-ends, on their number per node, and on the periodicity along the propagation axis. They are primarily due to the low sound speed in the main pores and to thermal losses in the dead-end pores. The model predictions are compared with experimental results. Possible designs of materials of a few cm thickness displaying enhanced low frequency absorption at a few hundred hertz are proposed. © 2015 Acoustical Society of America. [http://dx.doi.org/10.1121/1.4916712]
I. INTRODUCTION
Air-saturated porous materials are most efficient for noise reduction applications if the characteristic sizes of the pores or of the interparticle spaces are on the order of the viscous and thermal boundary layer thicknesses. At audible frequencies, the order of magnitude of the characteristic sizes ranges from a few hundred micrometers to a few millimeters. The pores should also be interconnected and open to the surroundings. The models developed over the years are able to predict accurately the acoustic behavior of highly porous absorbing materials such as, for instance, reticulated polyurethane foams or fibrous materials (Ref. 1). It was shown more recently that these models are not accurate enough to properly describe the acoustic properties of other materials that can contain partially opened or dead-end pores. Dead-end pores are closed at one end, so that fluid flow does not take place in all the pores of the medium. A model capable of accounting for this feature was recently developed and used to successfully describe the acoustical properties of materials with lower porosity such as metallic foams and those with surface dead-end pores (Ref. 2). It was found that the presence of dead-ends had the effect of increasing the absorption coefficient at frequencies controlled by the average length of the dead-ends. This motivates the present study. Structured materials with well-controlled microgeometry including dead-end pores can be designed and fabricated by making use of recent technologies such as precision machining or three-dimensional (3D) printing. The designed material slab could contain, for example, circular perforations. Some of the perforations should go through the thickness of the layer while others should end inside it to create dead-end pores.
The present contribution is concerned with the theoretical and numerical study of a structured perforated material containing periodically spaced dead-end pores. Waves propagating in periodic structures are known as “Bloch waves.” Examples of such structures are ducts with periodically distributed lateral cavities or resonators (see Refs. 3-5, for example). The periodicity introduces frequency stop bands, i.e., frequency intervals where no propagating waves are supported by the structure. Most studies deal with the situation where the structure period is on the order of the wavelength to observe the stop bands (example, sonic crystals). The distances between the perforations and dead-ends considered in the present study are about 1 cm or less. Therefore the wavelengths on the order of the period correspond to frequencies above 10 kHz. However, the stop bands due to resonances of the lateral dead-ends are also predicted at low frequencies, typically a few hundred hertz, much lower than the frequencies corresponding to the period. This constitutes the central originality of the present contribution. The dead-end pores considered here are simple closed cavities. However, the model can account for more complex geometries including Helmholtz resonators. The aim of this work is to extend the model for the acoustical properties of porous materials with dead-end porosity developed earlier (Ref. 2) to account for periodicity in the spatial distribution of dead-ends within the thickness of the material. The model presented here provides a simple tool for optimizing the material inner structure to achieve the desired acoustical properties.
a) Portions of this work were presented in “Sound propagation in narrow tubes with periodically spaced lateral cavities” by O. Umnova, P. Leclaire, T. Dupont, and R. Panneton, Symposium on the Acoustics of Poroelastic Materials, Stockholm, Sweden, December 2014.
b) Author to whom correspondence should be addressed. Electronic mail: Philippe.Leclaire@u-bourgogne.fr
The paper is organized as follows: In Sec. II, a dispersion relationship for waves propagating in the channel (called main pore in the following) with periodically distributed side branches (Ref. 3) is recalled and extended to account for multiple side branches (called dead-end pores in the following) at one node. The transfer matrix method (TMM) is then developed to predict absorption and transmission characteristics of the finite thickness material slab with dead-end porosity. In Sec. III, the low frequency limit of the model is investigated, when the distance between neighboring dead-end pores is small compared to the wavelength of sound in the main pore. Simple expressions for the dynamic density and compressibility are derived. The limitations of the model are established by comparing its predictions with the transfer matrix approach developed in Sec. II. In Sec. IV, the effect of the dead-ends on the behavior of a single main pore is investigated and the limitations of the low frequency approximation are discussed by comparing its predictions with those of the TMM. In Sec. V, the model is validated by comparing its predictions for the absorption coefficient of the material slab with FEM simulations. Experimental results on samples obtained from 3D printing are also presented and compared with the model. In Sec. VI, possible designs of perforated materials with lateral dead-ends featuring improved absorption at low frequencies are suggested. Their absorption properties are simulated using the models developed. The main findings are summarized in the final section.
II. SOUND PROPAGATION IN THE MATERIAL WITH PERIODICALLY DISTRIBUTED DEAD-END PORES - FULL ANALYTICAL TMM MODEL
In the previous study (Ref. 2), no interactions between the dead-ends were taken into account, either for the metallic foams with randomly distributed dead-ends or for the structured material with surface dead-ends (Fig. 9 in Ref. 2). However, a periodic arrangement with interactions is possible when the dead-end pores are opened into the main pores as shown in Fig. 1. In this case, the interaction between the dead-end and the connected pores occurs in the bulk of the material slab. Only the straight perforations going through the thickness of the material layer are visible on the surface. When the dead-ends are distributed periodically along the length of the main pores, two distinctive cases can be identified in the material behavior. If the wavelength of sound traveling through the main pores is comparable to the distance between the dead-ends, stop and pass bands may appear. However, in the small pores on the order of the viscous and thermal boundary layer thicknesses, these effects will be severely affected by the strong viscous and thermal losses. In the case where the separation distance between the dead-ends is much less than the wavelength, the effective properties of the porous material (i.e., its effective density and compressibility) are modified by their presence. The validity of the plane wave approximation is assumed throughout the paper, i.e., the radii of all pores are assumed small compared to the wavelength of sound. Following Bradley (Ref. 3), a pore with cross-sectional area A_mp (the subscript “mp” stands for “main pore”) with periodically distributed identical side branches with cross-sectional area A_de (the subscript “de” stands for “dead-end”) and length d is considered. There are N dead-ends per period h. A configuration with N = 2 is shown in Fig. 1.
It is assumed that Re(k_mp √A_mp) ≪ 1 and Re(k_de √A_de) ≪ 1 so that the wave inside the pores is plane. Here, k_mp and k_de are the wavenumbers in the main pore and in the dead-ends. The period h can be comparable to the wavelength. In this case, the wavenumber q of Bloch waves (i.e., waves that propagate through a periodic structure) is defined by the following dispersion equation, which is equivalent to Eq. (27) in Ref. 3,

cos(qh) = cos(k_mp h) + iX sin(k_mp h),   (1)

where X is defined by Eq. (2), in which the number of dead-end pores per node N appears when applying the pressure and volume velocity continuity at the entrance of the junction [Ref. 3 (Appendix); Ref. 6 (p. 290)]. Zs_de is the normalized surface impedance of the dead-end; in the case of a simple dead-end pore it is given by Eqs. (3) and (4). Contrary to Ref. 3, the difference between the characteristic impedance of air in the main pore and in the dead-end pore is accounted for in Eqs. (3) and (4). This difference may arise due to the difference in shape or in cross-sectional area of these pores if viscous and thermal losses are present. Side branches of a different nature (Helmholtz resonators, for instance) can easily be accommodated by using an appropriate surface impedance instead of Eq. (3). Here Z_mp and Z_de are the characteristic acoustic impedances of air inside the main pore and in the dead-end pores. A time dependence of the form exp(−iωt) is assumed. It is easy to generalize Eq. (2) to the case of N non-identical dead-end pores per period; in this case, the characteristics of the individual dead-ends are denoted by the superscript (k). If we define y = exp(ik_mp h), then the following matrix relates forward and backward propagating Bloch waves on the right and on the left of the period of size h along the thickness. If n periods are considered, then forward and backward propagating Bloch waves on the right and on the left of this arrangement are related by the matrix M. The equations for the pressure reflection r_n and transmission t_n coefficients of n periods in an open channel (main pore of Fig. 1 with infinite length, so that no reflection occurs outside the dead-end arrangement area) then give Eqs. (10a) and (10b); here the fact that det M = 1 (a product of matrices bearing the same property) was used. If the reflection coefficient r'_n from a rigidly backed structure containing n unit cells is to be calculated (main pore of Fig. 1 with a hard back after the last dead-end), it is given by Eq. (11), where P is the amplitude of the incident and reflected waves at the rigid surface. Eliminating P from Eq. (11) results in Eq. (12). To model the sound interaction with a porous material containing straight pores (of surface perforation rate φ) with dead-ends, each pore is associated with an air channel of cross-sectional area A given by Eq. (13), as illustrated in Fig. 2. For the plane wave approximation to be valid, a further smallness condition on the channel must be satisfied, where k = ω/c is the wavenumber in air and c is the sound speed in air. The amplitudes of the forward and backward traveling waves in the hypothetical channels and at the entrance to the main pores, p_r± and p_l±, are then related by a matrix T, where φ' = φ(z_0/Z_mp) and z_0 is the characteristic acoustic impedance of air. This means that the reflection and transmission coefficients of an open-ended porous material slab, R_n and T_n, and the reflection coefficient R'_n of a hard-backed porous slab can be calculated using equations similar to those derived for a single pore, Eqs. (10a), (10b), and (12).
FIG. 2. Modeling transmission and reflection through the material surface. Each pore is associated with an air channel of cross-sectional area A given by Eq. (13).
The former can be calculated using the elements M'_ij of a matrix M', and the latter likewise; this matrix M' is given by the product of M and T. The absorption coefficient of a hard-backed slab is then calculated from the reflection coefficient R'_n.
III. LOW FREQUENCY APPROXIMATION
Now it is assumed that the distance h between the dead-ends is much less than the wavelength of sound in the main pore, i.e., Re(k_mp h) ≪ 1. In this case, the configuration with dead-end pores can be replaced by the main pore filled with a fluid described by the effective wavenumber q and the effective impedance z. To derive the expressions for q and z, a simple self-consistent model similar to a coherent potential approximation (CPA) (Ref. 7) is used. In this method the configuration shown in Fig. 1 is replaced by a pore filled with a fluid with still unknown effective properties. Then the following “gedankenexperiment” is performed: if a unit cell of the original periodic arrangement is inserted into this pore, it will not disturb the properties of an effective fluid representing exactly the same periodically arranged unit cells as the inserted one. This implies that if a wave travels through the pore filled with the effective fluid, its reflection coefficient from the inserted cell will be 0 and the transmission coefficient will be equal to exp(iqh). In addition, the implicit assumption is made that the sample is of infinite length or, at least, sufficiently long to include many wavelengths. The period insertion is illustrated in Fig. 3. Assuming no reflections at x = −h/2, the boundary conditions for pressure and particle velocity at this location are written in terms of the amplitudes a_± of the forward and backward waves propagating between x = −h/2 and x = 0. All quantities are normalized to the amplitude of the incident wave on the cell from the effective medium. At x = 0, the wave amplitudes are modified due to the presence of the dead-end pores. Generalizing the transfer matrix derived in Ref. 3 to the case of N identical dead-ends, the amplitudes b_± of the waves propagating between x = 0 and x = h/2 can be related to a_±. Finally, the boundary conditions at x = h/2 are written with the transmission coefficient equal to exp(iqh). Combining Eqs. (20) and (21) provides the ratio a_−/a_+ as a function of z, Z_mp, and k_mp h. Combining Eqs. (22) and (23) provides the ratio b_−/b_+ as a function of a_−/a_+. The ratio b_−/b_+ is then substituted into the combined Eqs. (24) and (25) to provide Eqs. (26) and (27). At low frequencies, in a first order expansion over the small parameter k_mp h, cos(k_mp h) is approximated by 1 and sin(k_mp h) is approximated by k_mp h, and the expression of Eq. (28) for the characteristic acoustic impedance is obtained. For the wavenumber q, the low frequency asymptotic behavior can be determined by an expansion to the second order of Bradley's dispersion Eq. (1). Alternatively, an expansion of the dispersion Eq. (27) can be considered. Because e^{iqh} contains cos(qh), the expansion should be done to the second order to achieve the same precision. The method proposed here consists in determining first the real part cos(qh) and the imaginary part sin(qh) of the exponential.
Upon inserting Eq. (26) in Eq. (27), it can easily be shown that cos(qh) and sin(qh) are obtained separately and, consequently, since e^{iqh} = cos(qh) + i sin(qh), the split into Eqs. (30) and (31) is the only solution. These results show that the present approach (“gedankenexperiment”) leads to a dispersion relation [Eq. (27) or (29)] that is equivalent to Eq. (1), and in addition, Eq. (26) provides an expression of the equivalent characteristic impedance z as a function of k_mp. At low frequencies, the wavenumber q of the effective medium can be considered small and is obtained with the help of an expansion to the second order of Eq. (30) or to the first order of Eq. (31), which only involves sin(qh). The same result, Eq. (32), is obtained in both cases. It is now possible to obtain expressions for the effective density ρ_e = zq/ω and for the effective compressibility C_e = q/(zω) of the fluid in the pore with dead-ends [Eqs. (33) and (34)], where ρ_mp = Z_mp k_mp/ω and C_mp = k_mp/(ω Z_mp) are the effective density and compressibility of the fluid in the main pore and C_de = k_de/(ω Z_de) is the compressibility of the fluid in the dead-end pores. It follows from Eq. (33) that the presence of the dead-end pores does not affect the effective density of the fluid in the main pore at low frequencies. However, it could significantly modify its effective compressibility. Now, Eqs. (28) and (32) are conveniently rewritten as Eqs. (35) and (36). The characteristic impedance z_m of the material with perforation rate φ can be calculated from Eq. (35), and the wavenumber is defined by Eq. (36). The dependence of ρ_mp, ρ_de and C_mp, C_de on frequency and on the radius of both types of pores can be described by classical theories of wave propagation in cylindrical tubes (see Ref. 1, Chap. 4 for a review and description of these theories). The cylindrical pores can also be described using general models of wave propagation in porous media such as the Attenborough model (Ref. 8) or the Johnson, Koplik, Dashen model (Ref. 9) with macroscopic parameters corresponding to a cylindrical pore structure. These models were generalized by Champoux and Allard (Ref. 10) to account for thermal effects. To make our results easy to generalize to other pore geometries, these general models of porous materials, namely the models by Johnson et al. and by Champoux and Allard (synthesized in the “JCA model”), are used to describe sound propagation in both main and dead-end pores. The corresponding expressions are used for the effective density and compressibility of the fluid in the main and dead-end pores (subscripts “de” and “mp” are omitted in these two equations), where N_pr is the Prandtl number, σ the airflow resistivity, Λ the viscous characteristic length, α_∞ the tortuosity, ρ_0 the air density, η the dynamic viscosity, Λ' the thermal characteristic length and k'_0 the thermal permeability, which is a parameter defined in the model by Lafarge et al. (Ref. 11). Here σ and k'_0 are parameters of a single pore and not of the bulk material. Different pore geometries can be accounted for by choosing different sets of parameters in the JCA model. For a uniform cylinder of circular cross section, Λ and Λ' are equal to the pore radius. In the calculations presented in Sec. IV, the main pore and the dead-ends are supposed to be straight and cylindrical, and so the data displayed in Table I are used. If the slab is hard backed and its thickness is L, then its surface impedance is calculated as

z_s = i z_m cotan(qL),   (40)

and the absorption coefficient follows from z_s.
IV. SINGLE MAIN PORE WITH LATERAL DEAD-ENDS: MODEL PREDICTIONS AND LIMITATIONS OF THE LOW FREQUENCY APPROXIMATION
In this section, the comparisons between the full analytical TMM model accounting for periodicity in the arrangement of the dead-ends and the low frequency approximation are presented. The limitations of the latter are identified.
A. Cylindrical pore with long lateral dead-ends
First, a single cylindrical pore with lateral dead-ends is considered to study the limitations of the low frequency approximation. Identical dead-end pores with length d = 3 cm are assumed distributed along the main pore with a period h = 1 cm. The radius of the main pore is a_mp = 3 mm and the radius of the dead-end pores is a_de = 1 mm, and N = 8 lateral dead-ends per period are considered. First, the real and imaginary parts of the wavenumber q defined by Eq. (1) are calculated and compared to those predicted by the low frequency approximation [Eq. (32)]. The frequency range is chosen so that Re(k_mp)a_mp ≤ 0.5 to justify the use of a plane wave approximation. Two resonances of the dead-ends [Re(k_de)d = π/2 and Re(k_de)d = 3π/2] are observed at frequencies 2709 Hz and 8200 Hz. These resonances are well below the Bragg frequency (17 241 Hz), which is outside the range where the plane wave approximation is valid. The low frequency model [Eq. (32)] accurately predicts the frequency of the first resonance, while overestimating both real and imaginary parts of the wavenumber at the resonance due to strong dispersion. As for the second resonance, the low frequency model slightly overestimates its frequency (within 2% error) and lacks accuracy around it. Figure 5 compares the low frequency model predictions for the absorption coefficient of a single hard-backed pore of two different lengths. Two lengths of the main pore (L = 2 cm and L = 5 cm) are considered. The first length corresponds to two elementary cells, while the second corresponds to five. In both cases, resonances of the dead-ends correspond to the maxima in the absorption coefficient dependence on frequency. However, the behavior around the resonances of the dead-ends is distorted by the quarter-wavelength resonances of the hard-backed pore. These resonances happen roughly when Re(q)L = π/2. Due to the strongly dispersive behavior of the resonance modes, as shown in Fig. 4, multiple quarter-wavelength resonances of the structure occur in the frequency range considered. So for L = 2 cm [Fig. 5(a)], Re(q)L = π/2 at 1820, 4900, and 8360 Hz. The low frequency model predicts reasonably accurately the absorption coefficient behavior around the first resonance of the dead-ends, while it becomes inaccurate and is not able to resolve the interaction between the resonances of the dead-ends and those of the hard-backed structure at higher frequencies. The absorption coefficient of the cylindrical main pore without dead-ends is shown in both Figs. 5(a) and 5(b) to illustrate the significant increase in absorption due to the presence of dead-ends.
B. Main pores with short lateral dead-ends
It follows from the expression for the effective compressibility [Eq. (34)] that if the length of the dead-ends is much shorter than the wavelength, Re(k_de d) ≪ 1, the effective compressibility is approximated by Eq. (42), where V_de = N A_de d and V_mp = A_mp h are the volumes of the dead-ends and of the main pore portion per period h of the structure. This means that a significant decrease in the phase velocity of sound through the pore could be achieved if the total volume of dead-ends per period significantly exceeds the volume of the main pore portion. As in the previous calculations, the radius of the main pore is a_mp = 3 mm and the radius of the dead-end pores is a_de = 1 mm. Again, N = 8 lateral dead-ends per period are considered. In the frequency range below 9 kHz, no resonances of the dead-ends are observed. The lowest resonance frequency, corresponding to Re(k_de)d = π/2, is 22 000 Hz. The lowest Bragg frequency (for h = 1 cm) is 17 241 Hz, which is also outside the frequency range considered. For this value of the period, the disagreement between the low frequency and the full analytical TMM model predictions for the sound speed becomes noticeable at around 4000 Hz, which corresponds to |k_mp|h ≈ 0.7. For a shorter period h = 3 mm, the low frequency model predictions remain accurate in the whole frequency range. Absorption coefficient predictions for a hard-backed pore with length L = 5 cm are shown in Fig. 6(b). The absorption coefficient of the structure with dead-ends significantly exceeds that of the cylindrical main pore, especially when the dead-ends are closely spaced (h = 3 mm) and consequently a significant reduction in sound speed is achieved. The absorption coefficient dependence in this case shows multiple peaks due to quarter-wavelength resonances of the structure.
V. MODEL VALIDATION
A. Comparison between different approaches
Comparisons among the present analytical model, the transfer matrix approach (TMM), and virtual measurements obtained with 3D acoustical FEM simulations using COMSOL software have been performed. A three-microphone method (Ref. 12) is used to obtain the virtual FEM measurements. In the COMSOL FEM model, parabolic tetrahedral elements were used to mesh the different domains of the tube, and an effective fluid of density and bulk modulus given by the JCA model (Refs. 9 and 10) fills the pores. In the FEM simulations (Fig. 7), several aspects have been considered to optimize the precision and computation time: a sufficient number of elements per domain to be meshed has been chosen in order to ensure a good precision while keeping the computation time minimum; an adaptive mesh has been used with an increased number of elements in the vicinity of geometrical discontinuities or in smaller domains (with respect to other domains); and sufficiently smooth variations were considered in the mesh element sizes in the vicinity of geometrical discontinuities and domain size variations. In addition, it was made sure that a sufficient spatial sampling was considered with respect to the domain discretized. A criterion of one-tenth of the minimum wavelength, corresponding to a maximum frequency of 5 kHz, was used to choose the minimum size of the meshing elements. In consideration of these requirements, typical values of 20 000 elements per meshed domain were used. The convergence was also verified by varying the meshing parameters and by making sure the chosen convergence criterion was met in each case.
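Before turning to the FEM results, it is worth noting that the analytical relations of Secs. II-IV are straightforward to script. The snippet below evaluates the Bloch dispersion relation, Eq. (1), for the single main pore with long lateral dead-ends of Sec. IV A. For brevity the visco-thermal losses are ignored here (the paper evaluates k and Z in each pore with the JCA model), and because the body of Eq. (2) is not reproduced above, the branch loading is reconstructed from a standard shunt-loaded transmission-line cell with Zs_de = i cot(k_de d); both simplifications are assumptions of this sketch rather than statements of the paper.

```python
# Minimal sketch of the Bloch dispersion relation for a main pore with N
# periodically spaced, hard-backed lateral dead-ends (lossless air, exp(-i*w*t)).
# The shunt-loading form below is a reconstruction, not the paper's Eq. (2).
import cmath
import math

c0, rho0 = 343.0, 1.2           # sound speed (m/s) and density (kg/m^3) of air
a_mp, a_de = 3e-3, 1e-3         # pore radii (m), as in Sec. IV A
A_mp, A_de = math.pi * a_mp**2, math.pi * a_de**2
N, h, d = 8, 1e-2, 3e-2         # dead-ends per node, period (m), dead-end length (m)

def bloch_wavenumber(freq):
    """Complex Bloch wavenumber q at one frequency (no visco-thermal losses)."""
    omega = 2.0 * math.pi * freq
    k = omega / c0                      # same lossless wavenumber in both pores
    Zc = rho0 * c0                      # characteristic impedance of air
    Zs_de = 1j / math.tan(k * d)        # hard-backed branch, exp(-i*omega*t) convention
    Y = N * A_de / (Zc * Zs_de)         # acoustic admittance of the N dead-ends
    Z0 = Zc / A_mp                      # acoustic impedance of the main pore
    cos_qh = cmath.cos(k * h) - 0.5j * Y * Z0 * cmath.sin(k * h)
    return cmath.acos(cos_qh) / h

for f in (500.0, 1000.0, 2000.0, 2700.0):
    q = bloch_wavenumber(f)
    k = 2.0 * math.pi * f / c0
    print(f"{f:6.0f} Hz   Re(q)/k = {q.real / k:5.2f}   |Im(q)|h = {abs(q.imag) * h:5.2f}")
```

Without losses the branch loading diverges at the quarter-wave resonance of the dead-ends, near c0/(4d) ≈ 2.9 kHz, and a stop band opens; with the JCA losses used in the paper the resonance moves to the quoted 2709 Hz and the divergence is smoothed. Well below resonance the slowdown tends to √(1 + V_de/V_mp) ≈ 1.9, consistent with the low frequency model.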
At low frequency, the FEM simulation confirms the predictions of the analytical TMM model and the low frequency approximation (see Fig. 8).The small discrepancy between TMM and FEM observed above 3500 Hz might be due to the sound radiation end effects at the junctions between main pores and dead-ends.Another possible reason could be the breaking of the validity of the plane wave approximation that requires the distances between the main pores to be much less than the wavelength to discard sound diffusion effects. B. Comparison with experimental results for a sample obtained from 3D printing Experimental results on 3D printed materials with deadend pores (MP50) studied by Dupont et al. 13 were compared with the model.This sample was built using 3D printing technology.The sample shown in Fig. 9 has four types of pores.The pore characteristics are listed in Table II.The overall perforation rate of the sample is / ¼ 23.4%. For this sample, the TMM model described in Sec.II has been modified to account for the three types of dead-end pores and for pores without dead-ends.Equation ( 16) has been used to calculate pressure reflection coefficient in the channel associated with each pore as shown in Fig. 2. A uniform distribution of pores at the material surface was assumed.Due to this, the overall perforation rate of the sample was used to calculate the surface area A of the channels [see Eq. ( 13)].After that, the pressure was averaged across the surface of the sample.The comparison between the measurements and the model predictions for the absorption coefficient is shown in Fig. 10(a). The experimental curve was obtained by averaging three sets of results obtained from measurements at different times on three identically designed samples in repeatability experiments.The simulation accounts for the end correction of the main pores, which corresponds to a tortuosity correction because the stream lines at the entry face and exit face of the sample are not straight, especially for low perforation rates. 14The predicted absorption peak is due to the presence of dead-ends.The predicted resonance is broader than the observed one.It is thought that this is due to the fact that the dead-ends in the fabricated sample are slightly thinner than expected in the material design.The 3D printing process uses a powder the particles of which are glued together with the help of a liquid binder.Successive layer are bound together to form the 3D structure.The structure is then heated so that the binder is evaporated, and the particles of the powder sintered.This process leaves a microporosity.To remove the influence of this microporosity, the sample including the dead-end pores has been covered by a varnish that may have reduced the pore diameter.Measuring with precision the actual diameter of the dead-end pores on the fabricated 3D sample is currently a difficult task.The error in the 3D printing process is estimated to be 0.01 mm for a sample of a few centimeter thickness.A simulation using a smaller diameter for the dead-end pores shows that the absorption peak is narrowed as expected.This provides an indirect confirmation that the pores are thinner than expected.Because the reduction in pores diameter is difficult to measure with precision, only the results with the nominal desired diameter are shown in Fig. 10. 
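For reference, the low frequency effective-medium route to such an absorption curve can also be scripted compactly. The sketch below uses an illustrative geometry (not the MP50 sample) and the standard JCA/Champoux-Allard forms for a circular pore with the exp(−iωt) convention; the tan(k_de d) form of the effective compressibility and the division of the impedance by the perforation rate are reconstructions consistent with the short dead-end limit of Eq. (42), since the bodies of Eqs. (34)-(37) are not reproduced above, and the final normal-incidence absorption formula is the usual one.

```python
# Sketch of the low-frequency effective-medium absorption of a hard-backed slab:
# per-pore JCA density/compressibility, dead-end-modified compressibility, then
# z_s = i z_m cot(qL) and alpha = 1 - |(z_s - z0)/(z_s + z0)|^2.
# Geometry and the C_e / z_m forms marked below are assumptions of this sketch.
import numpy as np

rho0, eta, Npr, gamma, P0, c0 = 1.213, 1.81e-5, 0.71, 1.4, 101325.0, 343.0

def jca_pore(omega, a):
    """JCA effective density and compressibility of air in a circular pore of radius a."""
    sigma, lam = 8.0 * eta / a**2, a          # per-pore flow resistivity; both lengths = a
    rho = rho0 * (1.0 + 1j * sigma / (omega * rho0)
                  * np.sqrt(1.0 - 4j * eta * rho0 * omega / (sigma * lam)**2))
    K = gamma * P0 / (gamma - (gamma - 1.0)
                      / (1.0 + 8j * eta / (lam**2 * Npr * omega * rho0)
                         * np.sqrt(1.0 - 1j * rho0 * omega * Npr * lam**2 / (16.0 * eta))))
    return rho, 1.0 / K

# Illustrative slab: thickness L, perforation rate phi, main pores of radius a_mp
# with N dead-ends (radius a_de, length d) every period h along each pore.
L, phi = 0.05, 0.10
a_mp, a_de, N, h, d = 3e-3, 1e-3, 8, 3e-3, 3e-3
A_mp, A_de = np.pi * a_mp**2, np.pi * a_de**2

f = np.linspace(100.0, 3000.0, 400)
omega = 2.0 * np.pi * f
rho_mp, C_mp = jca_pore(omega, a_mp)
rho_de, C_de = jca_pore(omega, a_de)
k_de = omega * np.sqrt(rho_de * C_de)
Z_de = np.sqrt(rho_de / C_de)

# Assumed form of C_e; it reduces to Eq. (42) when k_de*d << 1.
C_e = C_mp + (N * A_de / (A_mp * h)) * np.tan(k_de * d) / (omega * Z_de)
q = omega * np.sqrt(rho_mp * C_e)     # effective density unchanged, Eq. (33)
z_m = np.sqrt(rho_mp / C_e) / phi     # assumed perforate scaling of the impedance
z_s = 1j * z_m / np.tan(q * L)        # hard-backed surface impedance, Eq. (40)
alpha = 1.0 - np.abs((z_s - rho0 * c0) / (z_s + rho0 * c0))**2

i = int(np.argmax(alpha))
print(f"peak absorption {alpha[i]:.2f} near {f[i]:.0f} Hz")
```

Replacing the illustrative geometry by the pore populations of a real sample (and summing the channel responses as done for the MP50 sample) would reproduce the type of comparison shown in Fig. 10.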
The discrepancy above 3000 Hz might be due to microporosity of the sample created in the 3D printing process.However, the low frequency match is fairly good.For the transmission loss, it is noticed that the presence of the deadends removes the anti resonance (dip) predicted when the sample thickness (the main pore length) is half the wavelength.This can be explained by the fact that the dead-ends render the effective fluid more compressible and slow down the wave so as to shift the anti resonance to lower frequencies.In addition, the distribution in dead-end lengths will have the effect of reducing the quality factor of the anti resonance and of making the transmission loss smoother.The theoretical and experimental results in Fig. 10 seem to confirm these remarks. VI. POSSIBLE DESIGNS AND NUMERICAL SIMULATIONS In this section, materials designs involving periodically spaced dead-ends in the thickness are proposed.Equation (42) provides a tool in the first steps toward the design of high performance materials at low frequencies using periodic dead-end pores.The underlying idea is to increase the compressibility at low frequency C e of the equivalent fluid, which can be rewritten At constant pore radii, the compressibility can be increased by increasing the number of dead-ends per node N and by reducing the period h.This last condition is compatible with small thickness requirements in the material design.C e can be increased by increasing the length d.However, d must remain much smaller than the wavelength for Eq. ( 43) to remain valid. Examples of possible designs are proposed, with square perforation design (see Fig. 11) with four dead-ends per node and one with eight dead-ends per node (Fig. 12).The length, sizes of the main pores and dead-ends, the perforation rate can be varied to obtain best performance, especially at low frequencies.The perforation rate can be adjusted with the help of additional perforations without dead-ends.In the following simulations, the frequency range of calculations is chosen so that Re½k ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi A mp =p/ p 0:5 to justify the use of plane wave models. In addition to the sizes of the main pores and of the dead-ends, the criteria for the design are that the material FIG. 9. A porous sample with deadends (after sealing the circumference) used in the measurements.The sample diameter is 44.4 mm, and its thickness is L ¼ 30 mm. should contain as many dead-ends per nodes as possible while the perforation rate corresponding to the main pores should be chosen optimal.The number of nodes is also important and this parameter indirectly dictates the possible material thickness.Despite the low perforation rate, both materials are efficient absorbers of low frequency sound. Future more refined optimization work could include a de and a mp , i.e., the pore radii as additional design parameters.However, because a de and a mp are related to the viscous length and thermal characteristic lengths of the dead-ends and of the main pores, respectively, they also act in C de and in C mp and using them as additional tuning parameters is not as straight forward as for the other parameters.Their influence could be studied in an advanced optimization scheme only. VII. 
VII. CONCLUSION

A model for the wave propagation in straight pores with lateral, periodically spaced dead-ends is proposed in this work. A low frequency limit of the model is used to derive effective properties of the porous material with this microstructure. The model predicts the possibility of strong low frequency sound absorption achieved by thin (only a few centimeters) material slabs.

A significant decrease in sound speed in the main pore is predicted at low frequencies. In turn, this low sound speed is responsible for the increase in absorption coefficient well below the frequency predicted by the sole resonance of the dead-ends. The decrease in sound speed may be achieved not only by increasing the length of the dead-ends but also by increasing their number per node or by decreasing the spacing between them [Eq. (43)]. The decrease in sound speed results from the changes in effective compressibility of the fluid in the pores due to the presence of dead-ends, while the effective density is not affected [Eqs. (33) and (34)]. This suggests that the mentioned increase in absorption coefficient at low frequencies is the result of thermal exchanges between the fluids filling the main pores and the dead-ends.

The predicted absorption coefficient and transmission loss are compared to the full transfer matrix model and to FEM COMSOL simulations. Experimental results on 3D printed materials are also used to validate the model. Low frequency absorption peaks were observed for a fairly thin sample, in accordance with the model predictions.

The model reported in this study provides a simple and efficient tool that can be used in the design of thin low frequency porous absorbers with low surface perforation rate.

FIG. 1. Main pore (cross-sectional area A_mp) with periodically arranged dead-end pores; N = 2 identical dead-end pores with cross-sectional area A_de and length d per period h. The dead-ends are located at "nodes."

FIG. 3. (Color online) A pore filled with effective fluid and a single unit cell inserted in it. The arrow shows a propagating pressure wave.

FIG. 4. Comparisons of the real (a) and imaginary (b) parts of the normalized wavenumber in a main pore with long lateral dead-ends predicted by the analytical TMM model and the low frequency approximation. Dashed line, analytical TMM model (1); solid line, low frequency approximation (32); N = 8 identical lateral dead-ends of length d = … cm per period h = 1 cm, dead-end radius a_de = 1 mm, and main pore radius a_mp = 3 mm.

FIG. 6. Normalized sound speed (a) and absorption coefficient (b) of a single main pore with short lateral dead-ends as a function of frequency. Predictions for periods h = 3 mm and h = 1 cm are shown. Legend as in Fig. 5. Parameters as in Fig. 4 except that the dead-end length is d = 3 mm. The absorption coefficient is calculated for a hard-backed pore with length L = 5 cm.

FIG. 10. Experimental results (curve averaged over three repeatability measurements) on (a) the absorption coefficient and (b) the transmission loss for the MP50 sample (Fig. 9), and comparison between experimental results (plain line) and TMM predictions (dashed line). The dashed-dotted curve shows model predictions for the material without dead-ends.

TABLE II. Pore characteristics of the four types of pores of the sample presented in Fig. 9.
2018-04-03T00:22:29.547Z
2015-04-27T00:00:00.000
{ "year": 2015, "sha1": "c4802a2fe93f0d1c7985532c20ab7afa0c583a3e", "oa_license": "CCBY", "oa_url": "https://hal.archives-ouvertes.fr/hal-01323687/file/LASA.pdf", "oa_status": "GREEN", "pdf_src": "ScienceParseMerged", "pdf_hash": "c4802a2fe93f0d1c7985532c20ab7afa0c583a3e", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
259198946
pes2o/s2orc
v3-fos-license
Identifying gaps in health literacy research through parental participation

Introduction: Involving patients and the public in the design, conduct and dissemination of research has gained momentum in recent years. While methods to prioritize research on treatment uncertainties have been successfully applied for various disease entities, patient and public involvement has not been prominent in prioritizing research in health literacy (HL). This study aimed to set up a participatory process for identifying HL research gaps from a parental perspective in two use cases: early childhood allergy prevention (ECAP) and COVID-19 in children with allergies (COVICAL). Methods: To prepare and empower parents, we developed and provided preparatory webinars, introductory materials, i.e., factsheets and a brochure, and a scientific podcast with seven episodes. Recruitment was carried out by our cooperation partner, the German Allergy and Asthma Association e. V., via local day care centres and paediatricians, as well as via snowballing. The identification of research gaps took place within five workshops with n = 55 participants: four face-to-face workshops across Germany and one online workshop. Research ideas and needs were reviewed for overlap and redundancy and compared to the existing state of research. Results: More than 150 initial research ideas and needs were collected, which after review were reduced to a total of 37 ECAP, 33 COVICAL and 7 generic HL research questions. These were particularly related to the ease of finding and presenting good quality health information, the information environment, health communication, professional education, and HL testing. Conclusions: Involving parents in the formulation of HL research priorities proved to be challenging but feasible. Research ideas often reflect wishes directed at health professionals and the health system, i.e., organizational and systemic HL. An e-Delphi process will follow to elicit the TOP 10 research priorities in each use case. This project will help to plan patient/parent-centred HL research in ECAP and COVICAL.

Keywords: participatory health research; health literacy research; research gaps; patient and public involvement; co-researcher; early childhood allergy prevention; COVID-19 in children with allergies

Introduction

Participatory approaches reflect and focus on the actual needs, knowledge and interests of patients, parents and citizens in general (Jilani et al., 2020). This may reduce "research waste" by focusing research on patient-relevant issues through involvement (Buhr & Tannen …).

Since both use cases entail uncertainty, further research into HL in relation to ECAP and COVICAL is warranted. Fostering HL concerning ECAP and COVICAL is an important public health concern, as, e.g., low parental HL is linked to poorer health outcomes in (young) children and lowers effectiveness in preventing disease in children (Buhr & Tannen …).

This study aimed to set up a participatory process to identify HL research gaps from a parent's perspective in the fields of ECAP and COVICAL.
(1) conflicts of interest in national and selected international ECAP guidelines, (2) living systematic reviews on ECAP- and COVID-19-related HL, (3) how health professionals translate available evidence into practice, (4) the degree to which health information on the internet meets parents' needs, (5) factors influencing new parents' HL and ECAP behaviours, (6) measurement of ECAP- and COVID-19-related HL.

Representatives of each of the HELICAP work packages, of HELICAP's coordinating centre, and a patient representative from the German Allergy and Asthma Association (DAAB) formed a Task Force (TF) guiding the study process. The twelve TF members have different scientific backgrounds (i.e., medicine, sociology, (health) educational sciences, cultural sciences, psychology, public health) and career levels (for details see appendix).

We started with a preparatory phase that included the development of introductory information, followed by interactive workshops to identify research gaps related to HL.

Our study was guided by the James Lind Alliance's Priority Setting Partnership framework (James Lind Alliance, 2021). This framework provides the fundamental basis of the planning, implementation, and evaluation of the participatory process. However, due to the scope and context of the study, we adapted the methodology to a framework for prioritization in the field of HL research. To support the integrity of our findings, we used both the REPRISE reporting framework (Tong et al., 2019) …

Preparatory phase

To prepare and empower parents, we developed and provided webinars, introductory materials, i.e., factsheets and a brochure, and a scientific podcast.

- Webinars: We designed webinars based on each of the six HELICAP work packages, scheduled for 1.5 hours. The webinars followed a common structure: 1) briefly introducing the HELICAP research fields and participatory research; 2) explaining the topic with regard to its meaning for and relevance to parents; and 3) a discussion with the participants to give room for questions, ideas and comments. Each of the webinars was conducted twice to provide alternative time slots. In total, we conducted 12 webinars during February-May 2022.

- Factsheets: Factsheets with a mix of textual and visual information summarised the content of the webinar, contained room for participants' ideas, notes, or questions, and offered references for further reading.

- Brochure: Based on the factsheets and the discussions at the webinars, we created a 16-page brochure as a single written plain-language summary. The brochure includes questions and insights that emerged during the webinar discussions with participants and aimed at informing participants of the workshops to support their preparation.
- Podcasts: As both the DAAB and parents who participated in the webinars repeatedly emphasized the importance of communicating the research project via a freely available audio format, we created a scientific podcast with seven episodes (again based on the six webinars, plus a general introductory episode). Each episode lasts about 20-35 minutes and is moderated by the DAAB representative.

All material is in German and publicly available via the HELICAP website (HELICAP, 2022a, 2022b, 2022c); the podcasts are also available via an audio streaming service (Spotify, 2022).

Interactive workshops to elicit research gaps

We conducted five interactive workshops in October and November 2022 (four on-site in Regensburg, Hannover, Freiburg and Magdeburg, and one online workshop) to identify which topics our target groups were missing that could help them make good health-related decisions for themselves and their children regarding ECAP and COVICAL.

To facilitate participation in the workshops, to provide low-threshold access and to create a feel-good atmosphere, we chose central locations used for family activities. The participants …

We organized the workshops as moderated focus group discussions with four parts: introduction, main activity, summary, and conclusion (cf. Table 1). The group discussions were recorded, and the moderators took minutes in a supportive manner. The main activity covered three major topics:

a) "Accessing Health Information": reflecting on one's own behaviour when searching online for child health-related information.

During the main activity, we followed a procedure that allowed all participants to elaborate on each of the three topics. At the beginning, participants formed small groups of three to five persons. The small groups rotated through the three topics, with 25 minutes to deliberate on each, supported by a moderator. In the online workshop, we reduced the time to work on each topic to 15 minutes.

The units were designed as follows. First, participants were given a short task:
a) to search the internet on the topic of COVID-19 in children with allergies,
b) to look at or try out different HL tests (e.g., HLS-EU-Q16, Berlin Numeracy Test, CHC-Szenario 1, S-TOFHLA, and the HELICAP questionnaire),
c) to read a text which addresses the evidence shift in allergy prevention.
Second, participants reflected on their experiences performing this task, started to discuss research needs, and made notes. Finally, the moderator summarized the collected ideas on cards and presented them on a board to ensure completeness.

… is challenging for parents.
We therefore extended the identification of research gaps to the collection of uncertainties, questions, and needs (in the following referred to as "ideas for research"). After removing duplicates, we sorted the ideas derived in the five workshops and translated the findings into potential research questions.

The discussions during the different sessions were not always strictly focused, despite the efforts of the moderators. There was some overlap in the topics, and new topics were raised by the parents. After a review of the material, it was therefore necessary to expand the original three topics of the units (accessing health information, measuring HL, understanding health information) to a total of five categories, into which the ideas were inductively divided: "Health Information", "Information Environment", "Health Communication", "Professional Education", and "Health Literacy Testing".

To achieve a uniform level of abstraction and complexity, the members of the TF reformulated the individual ideas into scientific research questions in a two-stage process and reviewed each individual aspect for overlap and redundancy. In addition, each research question was assigned a unique identification number (ID). Wherever applicable, generally phrased ideas were turned into research questions with reference to ECAP and/or COVICAL. We focused on research questions for which subject-matter expertise exists in the HELICAP research group. Data processing was done using MAXQDA and Microsoft Excel.

Participants subsequently received via email an initial results overview, which is available on the HELICAP homepage (HELICAP, 2022d), and information about the further course of the study.

What are the advantages of a target group-specific "peer to peer" mediation of health information? What are the benefits of a peer-to-peer exchange of health information? What are the advantages of a targeted "peer to peer" exchange on ECAP? What are the advantages of a targeted "peer to peer" exchange on COVICAL?

After reviewing the material, the following five categories could be extracted:

Health Information: Participants expressed their desire that access to reliable health information (on the internet) needs to be much easier, for example by linking trustworthy websites or a barrier-free positioning of prevention topics. A comprehensible, target group-specific and multimedia presentation of health information was also described as desirable.
286 ID27/28 "Support with research on the internet", ID8 "Participatory development of 287 brochures -citizens&apos; council for the production of health information", ID5 288 "Seal, certificate, emblem for good websites -"approved" by reputable source -289 support in assessing the seriousness of websites (also via radio, television etc.), 290 ranking the quality of information" 291 Information Environment: The parents clearly expressed the wish for a better exchange with 292 other parents, e.g., on prevention issues. Paediatricians offer information about institutional 293 contact points but cannot establish contact with other parents. 294 . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted June 20, 2023. ; HL testing. Parents wanted research to explore if testing for individual HL improves person-320 oriented and needs-based counselling. They wanted to know if self-testing for HL helps to 321 better navigate the internet for health resources. Participants expressed concern about 322 possible stigmatisation by a health professional if patients achieve a low HL level. They also 323 reasoned about possible / optimal conditions to conduct HL-Tests. 324 ID35: "Self-assessment: How good am I in the subject?", ID38 "Test causes 325 stigmatization and unequal treatment by doctors/health professionals" and ID46 326 "Preliminary consultations with a trained assistant better than an anonymous test 327 situation!" 328 329 The parental ideas, needs and questions resulted in a total of 45 research questions. Most of 330 those research questions address both ECAP and COVICAL (n=32). Five research 331 questions address ECAP only, one addresses COVICAL only. Seven questions do not 332 explicitly refer to either ECAP or COVICAL, but are directed to more general issues, such as 333 "How can HL be integrated best into the curriculum of schools?". This means we collected a 334 total of 37 research questions for ECAP, 33 for COVICAL and 7 generic HL questions. 335 Table 4 shows the finally derived research questions collected in the participatory process 336 for the two use cases. 337 . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted June 20, 2023. CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. Recruiting parents to the participatory process to identify HL research gaps on ECAP, and 362 COVICAL proved to be challenging in our study. 363 . CC-BY-NC 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) The copyright holder for this preprint this version posted June 20, 2023. ; One reason was that HL was not readily understood by our target group. Parents were not 364 familiar with neither the expression nor the concept of HL and HL research. 
This became apparent during the workshops but also in the preparatory webinars, despite our efforts with a variety of different preparatory materials and a short thematic introduction at the beginning of the workshops, and despite a relatively high average level of education (54.3% of participants with a high school diploma).

The specific topics, determined by the focus of our research group, might have further hampered the recruitment of parents: often, participatory approaches identifying research gaps are carried out with people directly affected by a condition, disease or health problem. … The topic appears not to be salient enough in the population to be of major concern. Often parents become aware of childhood allergies only when the problem occurs. That is reflected in the composition of our sample, where 41.3% of the parents had children with manifest allergies, sometimes highly allergic. Those parents expressed some kind of "remorse" ("if I would have known that earlier"), reported on their very difficult journey to find relevant and trustworthy information and health care providers, and were motivated to participate in the workshops because they wanted to "help other parents in future". Under that assumption, it should have been easier to recruit parents for COVICAL. However, this was not the case, partly due to the fact that at the time of recruitment (2022) the question of how to deal with COVID-19 in children with allergies/asthma was no longer that important. A third reason might be that our participatory approach is not linked to a specific region, institution, or community and does not focus on solving the problem by developing an intervention. In contrast to the Australian Optimizing Health Literacy and Access (OPHELIA) process (Ophelia, 2022), our study is not designed to put HL …

Ethics approval and consent to participate: Participation in the entire study was voluntary and could be discontinued at any time without giving reasons. The study was reviewed and positively approved by the Ethics Committee of the Otto von Guericke University Magdeburg.

Availability of data and materials: All data generated or analysed during this study are included in this published article and its supplementary information files.
2023-06-21T01:35:00.597Z
2023-06-20T00:00:00.000
{ "year": 2023, "sha1": "b39fefc920e2ffb07949be6eb92b890756342396", "oa_license": "CCBYNC", "oa_url": "https://www.medrxiv.org/content/medrxiv/early/2023/06/20/2023.06.15.23291427.full.pdf", "oa_status": "GREEN", "pdf_src": "MedRxiv", "pdf_hash": "b39fefc920e2ffb07949be6eb92b890756342396", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
257797332
pes2o/s2orc
v3-fos-license
Herpes Zoster: A Case Report of a Rare Ramification Leading to Secondary Infection

Herpes zoster (HZ, shingles) is caused by reactivation of the varicella-zoster virus. It develops years later in older patients who had a varicella-zoster infection in childhood. The reactivated virus typically localizes its symptoms to a particular dermatome. If left untreated, it can lead to dental complications, such as osteonecrosis, tooth exfoliation, periodontitis, calcified and devitalized pulps, periapical lesions, and root resorption, in addition to developmental irregularities, such as abnormally short roots and missing teeth. Here, we present the case of a 61-year-old male who developed a rare bacterial superinfection following an HZ infection. Our report aims to make clinicians aware of the various potential complications that can develop after an HZ infection.

Introduction

Herpes zoster (HZ) is caused by reactivation of the latent varicella-zoster virus, which remains dormant in the cranial or sensory ganglia after the primary infection. Reactivation, in association with immunosuppression or mechanical or psychological stress, leads to a secondary infection, which manifests as a vesicular rash and radicular pain in the affected dermatomal area [1,2]. It affects 20-30% of the population and produces severe prodromal symptoms such as eye discomfort, ocular abnormalities, or skin rash in specific dermatomes [2]. HZ can affect any of the three branches of the trigeminal nerve. Involvement of the mandibular and maxillary branches without involvement of the ophthalmic branch is relatively rare and accounts for only 1.7% of HZ cases [3]. Oral manifestations of HZ appear when the second and third divisions of the trigeminal nerve are affected [3]. These are often self-limiting infections; if the patient exhibits significant pain, antiviral therapy is administered both systemically and topically, depending on the intensity of the pain [4]. The most common post-zoster complications include ocular complications, facial palsy, postherpetic neuralgia (PHN), bacterial superinfections, osteonecrosis, periodontitis, exfoliation of teeth, calcified and devitalized pulps, periapical lesions, and root resorption [3,4]. Here, we report a case of actinomycotic osteomyelitis of the maxilla, a very rare bacterial superinfection that occurred after an orofacial HZ infection. To our knowledge, this is the first reported case of actinomycotic osteomyelitis of the maxilla following an HZ infection.

Case Presentation

A 61-year-old man reported to the Department of Oral Medicine and Radiology with the chief complaint of multiple ulcers and swelling on the left side of the face for the past week. History revealed that he had no comorbidities and had undergone extraction of the upper left third molar (tooth 28), after which he developed rigors, fever, and vesicles, which later ruptured into ulcers on the left side of his face. The patient also reported a history of numbness and swelling associated with the ulcers. He consulted a physician and was prescribed antiviral drugs, steroids, and antihistamines. However, he had no symptomatic relief with the prescribed medications and reported an increase in the severity of the disease, associated with burning pain, when he visited the department.
Intraoral examination revealed multiple ulcers with erosions and irregular borders surrounded by erythema, limited to the left side and involving the retromolar region, buccal mucosa, hard palate, and commissure of the lip (Figure 2). There was no evidence of bleeding. Paresthesia was elicited. Based on the patient's history and clinical appearance, an HZ infection was provisionally diagnosed. During the investigation, a salivary polymerase chain reaction (PCR) test revealed the presence of the varicella-zoster virus, and blood investigation results were within normal limits. Based on the confirmatory investigations, the patient was treated for an HZ infection with valacyclovir tablets (1 g), methylcobalamin (750 µg), and pregabalin (75 mg) once daily, along with mupirocin 2% ointment and acyclovir 1% ointment thrice daily for one week. A follow-up revealed completely healed ulcers with persistent burning pain. The patient was asked to continue methylcobalamin (750 µg) and pregabalin (75 mg) once daily for one month, as the case was suspected to be progressing to PHN. On his next visit after one week, the patient was completely asymptomatic with healed lesions (Figure 3). However, he reported an additional complaint of bleeding and pus discharge from the gums with mobile teeth from 21 to 24 (Figure 4). Further history revealed nasal regurgitation on intake of oral fluids. A swab was taken for microbial culture, which revealed a few gram-positive cocci along with pus cells in association with Enterococcus faecalis. On further investigation, a fungal culture was negative, and an antibiotic sensitivity test revealed resistance to cephalexin and ciprofloxacin. An intraoral periapical radiograph in relation to 21, 22, 23, 24 revealed an ill-defined periapical radiolucency with bone loss (Figure 5). An occlusal radiograph revealed a mixed radiopaque and radiolucent area superior to the periapical radiolucency with an altered trabecular pattern (Figure 5). The patient was further subjected to computed tomography of the paranasal sinuses with three-dimensional reconstruction, which revealed bony erosion involving the left alveolar process and the hard palate in the premolar region, resulting in fistula formation between the oral cavity and the maxillary sinus, suggestive of a left oro-antral fistula (Figure 6).

Figure 6c: Coronal view revealing fistula formation between the oral cavity and the maxillary sinus.

An incisional biopsy performed following the extraction of 22, 23, 24 revealed peripheral stratified squamous epithelium and a connective tissue stroma with mixed inflammatory cell infiltration, predominantly lymphocytes, plasma cells, and neutrophils, surrounding numerous areas of bacterial colonies with basophilic radiating filaments, along with evidence of necrotic bone (Figure 7), consistent with actinomycotic osteomyelitis. Therefore, the final diagnosis in this case was actinomycotic osteomyelitis, a rare complication following an HZ infection. The patient was administered ceftriaxone 1 g intravenously for one week. On subsequent visits, the patient was symptomatically better, with no nasal regurgitation (Figure 8).

Discussion

Maxillary and mandibular alveolar bone necrosis caused by an HZ infection is uncommon. Spontaneous tooth exfoliation and post-herpetic alveolar necrosis have been documented in 51 cases as of 2021 [5][6][7][8][9][10][11][12]. Gupta et al. reported 46 similar instances in a recent literature review, with a mean age of 52 years [5].
According to Gholami et al., bone necrosis often manifested as a serious infection between nine and 150 days following the onset of shingles [6]. The average interval was 30 days. In the present case, the patient presented with edematous gingiva and pus discharge two weeks after the shingles infection. To ensure the patient's overall well-being, regular and prolonged follow-up over several months may be recommended. Gholami et al. reported the case of a 53-year-old woman with mandibular osteonecrosis 28 days after a shingles infection, with microscopic features of osteomyelitis and intertrabecular spaces filled by necrotic tissue and bacterial colonies [6]. In this paper, we report a case of actinomycotic osteomyelitis showing the classical feature of actinomycosis, the Splendore-Hoeppli reaction with radiating filaments (Figure 7). Histologically, the bacilli are bordered by eosinophilic amorphous material with a club-shaped configuration, known as the Splendore-Hoeppli reaction, seen in infective conditions such as actinomycosis, aspergillosis, blastomycosis, botryomycosis, and candidiasis [13]. To our knowledge, this is the first reported case of actinomycotic osteomyelitis following a shingles infection.

The anaerobic gram-positive bacterium Actinomyces israelii often causes actinomycosis, an uncommon kind of saprophytic bacterial illness. Factors including disruption of the oral mucosa and/or systemic factors such as diabetes mellitus or other immunocompromised conditions, poor dental hygiene, a tooth infection, or trauma increase the chances of an actinomycotic infection. The mandibular area is more commonly affected than the maxillary region, which has a more extensive vascular supply [14]. Although our patient did not have any history of trauma, the actinomycotic infection could have become established due to poor oral hygiene. In the review of the English-language literature reported by Fazili et al., a significant number of patients with poor oral hygiene and a history of alcoholism were identified among a total of 32 cases of actinomycosis infection [15].

The other hypothesis is that the antiviral drug could lead to immunopathological death of antigen-presenting cells, macrophages, follicular dendritic cells, and some CD4+ T cells, resulting in general immunosuppression and reducing the immune system's ability to respond to outside antigens. In addition, it also speeds up the exhaustion and deletion of CD8+ T-cell responses that are specific to the lymphocytic choriomeningitis virus [16,17]. Therefore, our patient, who was originally prescribed an antiviral medication to treat HZ, may have developed maxillary actinomycotic osteomyelitis as a result of the immune system-inhibiting effects of the medication. A case of actinomycotic osteomyelitis following COVID-19 infection has been reported, involving invasive actinomycosis at the unusual location of the maxilla. That case also featured a decreased lymphocyte count following the COVID-19 infection; severe immunosuppression led to actinomycosis, and further local actinomycotic spread was facilitated by the presence of necrotic tissue [18].

Combining surgical and pharmacological therapy is a widely recognized therapeutic strategy, despite ongoing debate. Surgery is performed only after long-term, strict antibiotic treatment has failed. As the infection may reappear after a period of inactivity, long-term follow-up is important [14,16,18].
In our case, the patient's symptoms were relieved with intravenous ceftriaxone 1 g for one week, and the patient was advised to undergo regular follow-ups.

Conclusions

We conclude that actinomycotic osteomyelitis may occur as a complication of HZ. Maxillary osteomyelitis is a relatively uncommon illness, and in this instance actinomyces was the etiological agent, making it even rarer. Actinomycotic infection should be ruled out in patients who have had HZ because it may be the only, or a significant, reason for their recurrent or chronic oral infections. To ensure patients' well-being, actinomycosis as a potential complication of HZ must be kept in mind during diagnosis.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2023-03-29T15:35:24.601Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "1b0e532bbafdeec8936fa5b6a11ed31393ef72c9", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/142774/20230327-12671-15i58fh.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1c1ebeed970864b7324398fc58faba3a41cb4659", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
109938625
pes2o/s2orc
v3-fos-license
Concerns of young protesters are justified

In March 2019, German-speaking scientists and scholars calling themselves Scientists for Future published a statement in support of the youth protesters in Germany, Austria, and Switzerland (Fridays for Future, Klimastreik/Climate Strike), verifying the scientific evidence that the youth protesters refer to. In this article, they provide the full text of the statement, including the list of supporting facts (in both English and German), as well as an analysis of the results and impacts of the statement. Furthermore, they reflect on the challenges for scientists and scholars who feel a dual responsibility: on the one hand, to remain independent and politically neutral, and, on the other hand, to inform and warn societies of the dangers that lie ahead.

At present, many young people are demonstrating persistently for climate protection and the preservation of our natural resources. As scientists and scholars, and based on robust scientific evidence, we declare: these concerns are justified and supported by the best available science. The current measures for protecting the climate, biodiversity, and forest, marine, and soil resources are far from sufficient.

The Paris Agreement of 2015 (UNFCCC 2015) obliges countries under international law to keep global warming well below 2°C. In addition, all countries have promised efforts to limit global warming to 1.5°C. It is critical to immediately begin reducing net CO2 emissions and to eliminate them to zero worldwide between 2040 and 2050 at the latest (IPCC 2018). A more rapid reduction would increase the probability of not exceeding the 1.5°C limit. The use of coal should be nearly ended by 2030, while the burning of oil and natural gas should be reduced simultaneously until all fossil fuels have been replaced by climate-neutral energy sources. Considering global climate justice, Europe must achieve this transition more quickly (IPCC 2018, Global Carbon Project 2018).

While the need for participation and discussion remains, action must be taken now. Discussion and action are not mutually exclusive. Many social and technological innovations already exist which can maintain quality of life and improve human well-being without destroying our natural resources (e.g., Klima-Allianz Deutschland 2018, WBGU 2011). In all German-speaking countries, neither the necessary scale nor speed of change are being achieved in the restructuring of the energy, food, agriculture, resource, and mobility sectors. Germany will fail to meet the climate protection targets it has set itself for 2020 (UBA 2019), and the achievement of the goals of the German Sustainability Strategy for 2030 is at high risk (German Council for Sustainable Development 2018, SRU 2018). Moreover, there is still a lack of an effective climate protection law. Austria has set itself goals in its climate and energy strategy that do not in any way do justice to the Paris Agreement (CCCA 2018, Wegener Center für Klima und Globalen Wandel 2018, Schleicher and Kirchengast 2019), and even for this purpose, neither the necessary measures nor the financial means are provided (CCCA 2018). At the same time, soil degradation and surface coverage per person and year in Austria are the highest in Europe (UBA 2018). Switzerland has reduced its greenhouse gas emissions only slightly since 1990; at the same time, emissions caused abroad have increased considerably (BAFU 2018). In the first parliamentary debate on the total revision of the CO2 Act, the lower house proposed to abolish domestic reduction targets and to offset Swiss emissions abroad. In effect, the law has failed for the time being (Schweizer Parlament 2018).

BOX: Some important facts

1. The global mean temperature has already risen by 1°C (relative to 1850 to 1900) (IPCC 2013, 2018). Half of the rise has occurred during the last 30 years (NASA 2018, IPCC 2014).
2. The years 2015, 2016, 2017, and 2018 were, globally, the warmest years in the modern record (NASA 2019).
3. The temperature rise is almost entirely due to human-made greenhouse gas emissions (U.S. Global Change Research Program 2017, IPCC 2013).
4. Already the current temperature rise increases the probability of extreme weather conditions in several regions of the globe, such as strong precipitation and heatwaves, leading to elevated rates of regional droughts, floods and forest fires (e.g., IPCC 2012, 2013, 2018, National Academies of Sciences, Engineering, and Medicine 2016).
5. Global warming is a risk factor for human health (Watts et al. 2015, 2018). Besides the above-mentioned direct consequences, its indirect consequences include the lack of food security and the spread of pathogens and disease carriers.
6. If humanity fails to limit global warming to 1.5°C, as envisaged by the Paris Agreement, additional severe consequences must be expected for humanity and nature at large in many parts of the world (IPCC 2018).
7. In order to restrict warming to the 1.5°C limit with high probability, net emissions of greenhouse gases (in particular CO2) must be swiftly reduced and must, at the global level, reach zero within the next 20 to 30 years (IPCC 2013, 2018).
8. Instead, CO2 emissions continue to rise. Given the policy proposals currently on the table, global warming is likely to cross 3°C by the end of the century and will increase afterwards due to continued emissions and positive feedback dynamics (Climate Action Tracker 2018).
9. Based on current emissions, the remaining CO2 budget left for reaching the 1.5°C goal will last for about ten years. For the 2°C goal, the budget is likely to last for about 25 to 30 years (MCC 2018, IPCC 2018).
10. Afterwards, humanity lives on a "CO2 overdraft loan": any emitted greenhouse gases have to be removed later from the atmosphere with tremendous effort (e.g., Rogelj et al. 2018, Gasser et al. 2015). Today's young people are already supposed to pay off this loan. If this fails, the following generations will suffer from the severe consequences of global warming.
11. Rising temperatures increase the probability of crossing climatic tipping points in the Earth system dynamics, i.e., positive feedback loops will become more likely (Schellnhuber et al. 2016, Steffen et al. 2016, 2018). This would result in a situation where returning to the current temperature regime would become unrealistic for future generations.
12. Oceans are currently absorbing around 90 percent of the additional heat (IPCC 2013). They have furthermore absorbed about 30 percent of the CO2 emitted so far. Consequences are rising sea levels, melting of sea ice, acidification and dissolved-oxygen depletion in the oceans. Meeting the goals set by the Paris Agreement is essential to protect humanity and nature, and to mitigate the loss of marine biodiversity and ecosystems, specifically the currently endangered coral communities (IPCC 2018).
13. The human basis of life is threatened in several areas by the crossing of "planetary boundaries". As of 2015, two boundaries are exceeded with a degree of uncertainty (climate and land use change) and two further are critically exceeded: the destruction of genetic variability (biodiversity) and the phosphorus and nitrogen biogeochemical cycles (Steffen et al. 2015).
14. We presently face the largest mass-extinction event since the era of the dinosaurs (Barnosky et al. 2011). Global extinction rates are 100 to 1000 times faster as compared to before humanity exerted its influence (Ceballos et al. 2015, Pimm et al. 2014). The past 500 years saw the extinction of more than 300 land-dwelling vertebrate species (Dirzo et al. 2014); the abundance of investigated vertebrate species has dropped on average by around 60 percent from 1970 to 2014 (WWF 2018).
15. Causes of biodiversity loss are, on the one hand, habitat destruction by agriculture and deforestation, as well as land consumption by settlements and roads. On the other hand, invasive species play a role, as well as depletion due to over-collection, overfishing and overhunting (Hoffmann et al. 2010).
16. Global warming adds to this: with undiminished CO2 emissions, half of the plant and animal species of the Amazon basin or the Galapagos islands, for example, can be expected to have vanished by 2100 (Warren et al. 2018). Similarly, global warming is the major threat to the survival of coral reefs (Hughes et al. 2017, 2018, IPCC 2018).
17. The loss of agricultural areas and soil fertility, as well as the irreversible destruction of biodiversity and ecosystems, threaten the basis of life and limit the options of current and future generations (IPBES 2018a, 2018b, Secretariat of the CBD 2014, Willett et al. 2019, IAASTD 2009a, 2009b).
18. Insufficient protection of soil, ocean, fresh-water resources and biodiversity acts as a risk multiplier in the face of global warming (Johnstone and Mazo 2011). It increases the risk that water shortage and famine in many countries will trigger or aggravate social and military conflicts and contribute to the migration of larger human populations (Levy et al. 2017, World Bank Group 2018, Solow 2013).
19. A sustainable diet with reduced meat, fish and milk consumption, as well as a reorientation of agricultural methods towards resource-saving food production, are necessary for the protection of land and marine ecosystems and the stabilisation of climate change (Springmann et al. 2018).
20. Meat production provides less than one fifth of the calories used worldwide on more than four fifths of the agricultural area (Poore and Nemecek 2018) and emits a significant proportion of greenhouse gases (FAO 2013). Since the agricultural area includes permanent pastures and meadows as well as croplands, and most of the former cannot be converted to cropland, another comparison is also illustrative: more than one third of the global cereal harvest is currently used as animal feed (FAO 2017).
21. A transition to increased direct consumption of plant-based foods will reduce both the need for cropland and the level of greenhouse gas emissions while providing additional health benefits (Springmann et al. 2016).
22. Direct government subsidies for fossil-based industries amount to more than 100 billion U.S. dollars per year (Jakob et al. 2015). Taking social and environmental costs (in particular health costs, but also air and water pollution) into account, global post-tax subsidies for fossil fuels are significantly higher. According to experts of the International Monetary Fund (IMF), they amount to about five trillion U.S. dollars per year, that is, 6.5 percent of global gross domestic product (2014) (Coady et al. 2017).
23. According to the polluter pays principle, the cost of climate damages should be attributed to the burning of fossil fuels. One possible approach is the introduction of CO2 prices. As long as a sufficient supply of low-cost renewable energies is not achieved, the resulting financial burden will need to be distributed in a socially responsible way. Examples are direct transfers or tax reductions for particularly affected households or lump-sum payments for citizens (Klenert et al. 2018).
24. Based on already established sustainable energy technologies, a strong reduction in costs and an increase in production capacities is possible. This would, in turn, render a change from burning fossil fuels to an energy system fully based on renewable energy financially feasible and create new economic possibilities (Nykvist and Nilsson 2015, Creutzig et al. 2017, Jacobson et al. 2018, Teske et al. 2018, Breyer et al. 2018, Löffler et al. 2017, Pursiheimo et al. 2019).

The young people rightly demand that our society should prioritize sustainability and especially climate action without further hesitation. Without far-reaching and consistent change, their future is in danger. This change means, among other things: we will introduce renewable energy sources with new courage and the necessary speed; we will consistently implement energy-saving measures; and we will fundamentally change our patterns of nutrition, mobility and consumption.

Politicians in particular have a responsibility to create the necessary framework conditions in a timely manner. In particular, climate-friendly and sustainable action must become simple and cost-effective, while climate-damaging action must become unattractive and expensive, for example, through effective CO2 pricing (e.g., EFI 2019), elimination of subsidies for climate-damaging actions and products, efficiency regulations and social innovations. A socially balanced distribution of the costs and benefits of change is essential.

The enormous mobilisation of the Fridays for Future/Climate Strike movement shows that young people have understood the situation. As scientists and scholars, we strongly support their demand for rapid and forceful action. As people who are familiar with scientific work and deeply concerned about the current developments, we consider it our social responsibility to point out the consequences of inadequate action (see also Ripple et al. 2017). Only if we act quickly and consistently can we limit global warming, halt the mass extinction of animal and plant species, preserve the natural basis for life and create a future worth living for present and future generations. This is exactly what the young people of Fridays for Future/Climate Strike are calling for. They deserve our respect and full support.
2016, 2018 (Nykvist and Nilsson 2015, Creutzig et al. 2017, Jacobson et al. 2018, Teske et al. 2018, Breyer et al. 2018, Löffler et al. 2017, Pursiheimo et al. 2019). Process and Results 2 Since 2018, several youth movements, such as Fridays for Future in Germany and Austria or Climate Strike in Switzerland call for immediate and decisive climate and sustainability action. They are adamant that their demands are firmly based on the results of scientific studies. In the spring of 2019, several participants in this movement were being defamed. Faced with children and young adults who began to politically fight for their right for a sustainable, peaceful future, many media outlets and politicians did not engage with the substance of the demands. Rather, they preferred to question the forms of protests and the competence of the young people (von Lucke 2019). Following the lead of a Belgian initiative (Vicca et al. 2019), a small group of German speaking scientists and scholars decided to pro-actively analyse the assumptions and demands of the young protesters, and to counter false and conspiracy-theory-based interventions. The question, whether this action might strengthen or weaken the youth movement, was initially controversially debated within the team. However, the plans were discussed with members of the movement and they welcomed them. A time plan was developed aimed at not unduly diluting media attention away from the youth movement. The statement and the associated selection of facts was prepared within four weeks by scientists from Germany, Austria, and Switzerland, with a broad diversity of backgrounds and at various stages of their careers. It was then circulated by email for two weeks and released at several press conferences on March 12, 2019 (with members of the youth movement as guests). The open signing period ended ten days after the press conference, at which point it had gathered over 26,800 signatures from scientists. 3 All who signed did this on their personal behalf and not on that of their affiliated institutions. Every signee was required to indicate the level of present or past direct involvement in science or scholarship, particularly, whether having scientifically published or not. We verified email addresses, checked the data for systematic errors and falsifications, and scrutinized a sample of almost 500 signees in detail. 4 A large fraction of the signees had published scientific or scholarly works (71.1 percent) and a further 24.8 percent are currently actively working in science and academia (e.g., PhD candidates). Of all those who signed, four percent belong to the categories Degree at the level of a Master without publications or Citizen Scientists without publications. 63 percent of all signees have a doctorate or professorship. As intended, signees come from a broad diversity of scientific and scholarly disciplines. We consider it necessary to form an al liance that goes far beyond specialists in climate and biodiversity science, sustainability, social science, or engineering. We will not achieve a sustainable future without, for example, including aspects of political participation, education, gender, and justice issues (including climate justice). We need the diverse gifts, experiences, and insights of all disciplines to solve the unprecedented problems that humanity is facing. The statement is not a petition to government and politics. Like all scientific or scholarly publications, it addresses the public. 
In open, democratic societies, all citizens are entitled to sufficient knowledge so that they have the opportunity to participate competently in the discussion of public affairs and guide and control the professionalized exercise of power. Politics has acknowledged the contribution of Scientists for Future to the political debate. On March 15, 2019 the German Bundestag held a session on The Federal Government's attitude to the climate strikes of the Fridays for Future movement and the Scientists for Future petition (Deutscher Bundestag 2019). In the weeks afterwards, members from Scientists for Future were invited for talks by several parties on the federal and local level. In Austria, the initiators of the Fridays for Future movement entered into discussions with the Federal President, the Federal Minister for Sustainability and Tourism and the Federal Minister for Education, Science and Research. They called on the Federal President to convene a committee of political decision-makers and scientists. 5 In Switzerland, several cantons have declared a climate emergency and the Swiss Freisinnig-demokratische Partei (FDP) has announced a turnaround in climate policy (Neuhaus 2019). Around the same time, several independently organized statements or letters in support of the youth movement were published in other countries. The organizers of these statements or letters then formed an alliance to also publish a joint, international statement (Hagedorn et al. 2019) on April 12, 2019. > FORUM 2 The following text was not part of the original statement signed by over 26,800 scientists and scholars. 3 The distribution was 21,679 from Germany, 2,773 from Switzerland, and 2,222 from Austria, plus 129 from other German-speaking regions. 4 About three percent of all signatures were rejected. Only a very small fraction involved wilful falsifications. The majority of rejections were situations where the provided information was insufficient to judge whether the signees were indeed scientists or scholars. 5 www.bundespraesident.at/aktuelles/detail/news/fridays As an unfunded, small, grassroots, volunteer group without institutional support, Scientists for Future had very limited means. The fact that it gained so much recognition in such a short time span and that so many people volunteered in spreading the word to our colleagues indicates that their statement resonates strongly with many scientists and scholars. The statement and the demands of Fridays for Future On April 8, 2019, the German Fridays for Future movement released a catalogue of demands for climate action (Fridays for Future 2019). This catalogue was prepared over several months by a working group of the youth movement. Scientists for Future had provided reviews of draft versions of the demand catalogue. Differences between the demands and our statement exist. For example, to limit global warming to 1.5°C above pre-industrial levels, we conclude in accordance with IPCC (2018) that net zero emissions will have to be reached globally between 2040 and 2050 at the latest, whereas Fridays for Future demands net zero to be reached in Germany by 2035. This is not a contradiction, since we consider it scholarly justified to include aspects of climate justice, that is, that different countries face different challenges and responsibilities. Strong, industrialized countries have more capabilities to be leaders and innovators in the transformation process. At the same time, these countries have a higher responsibility based on their historic emissions. 
Thus, industrialized countries should not only bear a greater contribution of the costs (Kartha et al. 2018), they also need to act faster to allow poorer and less developed countries to follow without undue risks to their economy and development. In our statement, we discussed the global carbon budgets of IPCC (2018) and their implications. Fridays for Future derived a demand for Germany which includes aspects of climate justice. Reflection In the increasingly complex and interwoven relation between humans and the earth system, scientists and scholars play a critical role in knowledge production and application and are called upon to actively feed their knowledge into the public arenas of opinion-forming (Jahn 2013). The relations between the expert knowledge sphere, the public sphere, and the sphere of political decision-making are complex, for good reasons (Heidenreich 2018). When reflecting on our own understanding of the relationship between experts and the political processes, we assert that the mode of interaction must depend 1. on the extent of risks that humans are exposed to as a consequence of a decision, and 2. on the time available for corrective action. To consider an example: experts consulted about a transportation issue may conclude that it would be best to build a bridge of a specified quality at a certain place. The political process may come to a wide variety of conclusions: build no bridge at all, build it elsewhere, or build a cheaper bridge with higher maintenance costs and a shorter lifespan. Such decisions justify a critical expert publication, but little more. However, when it is decided to build a bridge that is liable to break in unpredictable ways or that emits poisonous substances endangering the livelihood of local communities, a different role for scientists and scholars is called for. This difference is not about questioning the precedence of the democratic process, but about fulfilling an obligation to society through pro-active dissemination of knowledge. Just like medical experts have an ethical duty to warn of an impending epidemic, we consider it our ethical obligation to raise our voices to warn about the dangers of climate change, pollution and biosphere degradation. The findings of earth system sciences over the past decades have clearly shown that climate change, degradation of the Earth's biosphere, and environmental pollution are caused by human societies and are approaching or, in some cases, have already transgressed thresholds that many consider dangerous or associated with high risks (e.g., IPCC 2014, Ceballos et al. 2015). These conclusions have led to many international agreements on biodiversity, climate, and sustainability, such as the Aichi Targets to halt and reverse biodiversity loss, the Paris Agreement of the UN Framework Convention on Climate Change, and the UN 2030 Agenda with its Sustainable Development Goals (SDGs). Unfortunately, many scientific, social, and economic parameters indicate that these agreements are currently not sufficiently translated into political action and economic and societal practice. In order to counteract the risks of crossing thresholds and irreversible tipping points in the Earth system (Steffen et al. 2015), societies urgently need to transform fundamentally towards sustainable practices. With respect to the topic of time, we understand that it is important to take the time to understand the consequences of political decisions (Heidenreich 2018).
However, for the preservation of the natural foundations of life, acting without sufficient speed has serious consequences. Humanity cannot simply press a "pause button" in the ongoing accumulation of greenhouse gases in the atmosphere, the degradation of ecosystems and soils, or the drastic reduction of species populations leading to extinctions. For example, a frequently encountered assumption is that the magnitude of climate change would depend primarily on the volume of emissions produced today. In reality, however, it depends on the overall history of these emissions, that is, on the total accumulated emissions over time. Consequently, postponement of action will not just delay a solution but effectively result in stronger adverse climate forcing. Humanity has been aware of the problem of global warming for more than half a century. Revelle et al. (1965) warned in an official US government report of rising sea levels due to CO2 emissions and recommended "economic incentives to discourage pollution" in which "special taxes would be levied against polluters" (Revelle et al. 1965). But while humanity is divided about the available options and best ways forward, it is, in practice, taking what we consider the dangerous decision to continue business as usual. We carry on limiting our response to debates and largely symbolic actions (or, in the case of scholars, communicating the need for societal change mostly in scholarly journals and communities). Just as non-communication is communication (Watzlawick et al. 1967), a non-decision is a decision. We perceive our statement here as a scientifically substantiated warning about the probable consequences of this concrete decision, not as a statement of impatience with political processes in general. We fully endorse our political constitution. We do, however, believe that our democratic system can and needs to undertake evolutionary changes to its collective time management (Heidenreich 2018), both in terms of agility and in terms of re-adjusting the balance between short-term and long-term benefits and costs. Scientists for Future is an unfunded volunteer expert group, not a political campaigning group. As scientists and scholars, we are committed to distinguishing between political conviction and scientific and scholarly results. We are aware that this is not easy because research is not immune to political influence. Researchers may work for publicly funded academic institutions, governments, corporations, companies or NGOs. They work in frameworks deciding where to invest resources, which money to accept, and which fraction of research to highlight in their science communication. Furthermore, everyone has personal ethical and political convictions. The conclusion should not be to separate research from society, but to embrace a framework of responsible research that includes societal and ethical reflections (Helming et al. 2016). Our professional ethos does not limit us to speak only when asked. We believe that while our societies must become more scientific, our scientists must become more socially (and politically) aware (Jahn et al. 2015). The motivation to review the assumptions of the youth movement is based on ethical considerations and is, in consequence, political. We do, however, hold ourselves accountable to the scientific process of review and transparency. Accordingly, our statement is not the result of prior opinions, but of a painstaking review process that ensured that the statement is scientifically well founded.
Producing and disseminating knowledge is part of shaping societies. Whether we speak or remain silent, we are part of the political debate. Remaining neutral and silent about our established state of knowledge on global environmental change would be a violation of our professional responsibilities towards our societies. We thank Adam Wilkins, Rob Stevenson, Adina Arth, Jens Jetzkowitz and Johannes Fischer for their support in improving this text.
2019-04-13T13:02:47.400Z
2019-04-11T00:00:00.000
{ "year": 2019, "sha1": "02d4df1dfcd9fde63dec9231003db551f4a70471", "oa_license": "CCBY", "oa_url": "https://refubium.fu-berlin.de/bitstream/fub188/25010/1/Hagedorn_Concerns_2019.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "51bfbd1d6019ec369391f8ad5e60431f315ba8fc", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Medicine", "Economics" ] }
248380540
pes2o/s2orc
v3-fos-license
Keratinocyte Response to Infection with Sporothrix schenckii Sporotrichosis is a subacute, or chronic mycosis caused by traumatic inoculation of material contaminated with the fungus Sporothrix schenckii which is part of the Sporothrix spp. complex. The infection is limited to the skin, although its progression to more severe systemic or disseminated forms remains possible. Skin is the tissue that comes into contact with Sporothrix first, and the role of various cell lines has been described with regard to infection control. However, there is little information on the response of keratinocytes. In this study, we used the human keratinocyte cell line (HaCaT) and evaluated different aspects of infection from modifications in the cytoskeleton to the expression of molecules of the innate response during infection with conidia and yeast cells of Sporothrix schenckii. We found that during infection with both phases of the fungus, alterations of the actin cytoskeleton, formation of membrane protuberances, and loss of stress fibers were induced. We also observed an overexpression of the surface receptors MR, TLR6, CR3 and TLR2. Cytokine analysis showed that both phases of the fungus induced the production of elevated levels of the chemokines MCP-1 and IL-8, and proinflammatory cytokines IFN-α, IFN-γ and IL-6. In contrast, TNF-α production was significant only with conidial infection. In late post-infection, cytokine production was observed with immunoregulatory activity, IL-10, and growth factors, G-CSF and GM-CSF. In conclusion, infection of keratinocytes with conidia and yeast cells of Sporothrix schenckii induces an inflammatory response and rearrangements of the cytoskeleton. Introduction The Sporotrichosis is a mycosis that involves epidermis, dermis and subcutanous tissue and is caused by thermally dimorphic fungal species of the complex Sporothrix spp. One of its etiological agents is the fungus Sporothrix schenckii [1,2]. This fungus is found in mycelial form in the environment (saprophytic or infectious phase) at a temperature of 20-30 • C and enters the skin by traumatic inoculation of propagules that infect the skin and subcutaneous tissue, developing into yeast form (parasitic phase) at 37 • C [3,4]. Conversion of the mycelial form to yeast form is necessary for the infection that causes the most common clinical forms, lymphocutaneous and fixed cutaneous, to be established [5]. 20-30% of the cases have a fixed presentation, which is characterized by a nodular lesion called sporotrichoid chancre. The lymphangitic form, on the other hand, presents with of each phase and adjusting them in a sterile saline solution to reach 300 × 10 6 CFU/mL via the nephelometric method. Cell Infection Monolayers of keratinocytes were prepared in 12-well plates at a concentration of 400 × 10 3 cells per well. They were infected for 2 h with a suspension of conidia or yeasts with a 1:1 multiplicity of infection (MOI). After 2 h of infection, the monolayers were washed with 1X PBS, and samples were taken at 2, 4, 6, 8, 10, and 12 h following infection. Uninfected cells were used as negative controls for all cases. To prevent extracellular growth of Sporothrix, the infected monolayers were treated with 1 µg/mL amphotericin B (Thermo Fisher Scientific). Cytotoxicity Analysis Cell infection kinetics were analyzed, and cell culture supernatants were recovered at each post-infection time. 
The cytotoxicity percentage of the keratinocytes was determined following the protocol of the CytoTox 96 ® Non-Radioactive Cytotoxicity Assay (Promega, Madison, WI, USA) at each post-infection time, and uninfected cells were used as a negative control. Analysis of Changes in the Cytoskeleton of Infected Keratinocytes Keratinocyte monolayers were prepared as described above, but on sterile coverslips placed on 12-well plates, and infected with the conidia or yeast suspensions in a manner similar to that described above. Changes in actin distribution were analyzed at 2, 6, and 10 h post-infection. At each time, cells were fixed with 4% paraformaldehyde for 30 min, washed with 1X PBS, and stained with 80 ng Rhodamine (TRITC) Phalloidin (Sigma-Aldrich, Steinheim, Germany) for 20 min. Excess Phalloidin was removed through five washes with 1X PBS. Subsequently, the samples were mounted on slides using Vectashield-DAPI (Vector Laboratories, Inc., Burlingame, CA, USA). Opsonization of Conidia and Yeast Cells of S. schenckii Suspensions of 30 × 10 6 conidia or yeasts were opsonized with 1 mL of fresh human serum for 30 min at 37 • C. Excess serum was removed by washing with 1X PBS, and the opsonized conidia and yeasts suspensions were adjusted to 300 x10 6 CFU/mL in F-12 medium. Analysis of Surface Receptor Expression in Infected Keratinocytes To determine the expression of Toll-like receptors (TLR2, TLR4 and TLR6), Mannose receptor (MR) and Complement receptor 3 (CR3), the keratinocyte monolayers, were infected with conidia and yeasts as described above. As a positive control, the cells were stimulated for 24 h with PMA (phorbol myristate acetate; 1 µg/10 6 cells), and uninfected cells were used as a negative control. The preparations were fixed with 4% paraformaldehyde (Sigma-Aldrich) for 30 min and washed three times with 1X PBS. All samples were incubated for 30 min at room temperature with a 3% BSA solution (Sigma Aldrich). Then, 50 µL of each monoclonal antibody diluted at a 1:200 was added and allowed to react overnight at 4 • C. For each receptor tested, the following antibodies were used: mouse anti-hTLR2 IgG (Santa Cruz Biotechnology, Inc., Dallas, TX, USA), mouse anti-hTLR4 IgG (eBioscience, Santa Clara, CA, USA), goat anti-hTLR6 IgG (Santa Cruz Biotechnology, Inc.), mouse IgG-anti-hCR3 (ABCAM, Cambridge, UK). Cells were then washed with 1X PBS and incubated for 90 min at 37 • C with the mouse anti-IgG secondary antibody developed in goat TRITC-labeled (Sigma Aldrich), goat anti-IgG developed in donkey FITC labeled (Santa Cruz Biotechnology, Inc.) respectively. For determining the mannose receptor, anti-mannose IgG-FITC antibody (Santa Cruz Biotechnology, Inc.) was used. Finally, the preparations were washed with 1X PBS, and mounted on slides using Vectashield DAPI (Vector Laboratories, Inc.). Fluorescence signals were observed in a confocal scanning system (LSM5 Pascal, Zeiss, Jena, Germany). The fluorescence intensity analysis was performed with the LSM5 program (version 4.0.0.241, Confocal Zeiss, Ostfildern, Germany). For this purpose, at least 50 cells were counted per field, and the data are shown as the mean of the fluorescence intensity (MFI). Statistical Analysis In all the determinations, the data were represented as the mean ± standard deviation (SD). The determinations were performed in triplicate, except the cytokine analysis, which was performed in duplicate. 
Data were analyzed with the two-way ANOVA test, followed by a post-hoc Tukey test, with the statistical program GraphPad Prism version 8.0 (GraphPad Software, San Diego, CA, USA). A value of p < 0.05 was considered to be statistically significant. S. schenckii Yeast Cells Induce a Cytotoxic Effect S. schenckii undergoes a morphological transition in response to temperature, and this adaptation is important for the establishment of infection. An efficient transition from conidia to yeast has an impact on its virulence [22]. To establish a possible difference in cytotoxic capacity between conidia and S. schenckii yeasts, keratinocytes were infected with both phases of the S. schenckii fungus at a 1:1 MOI, and cytotoxicity was determined by the LDH release assay as described above. At 2 and 4 h post-infection, no significant differences were observed in the percentage of cytotoxicity induced by conidia and S. schenckii yeasts, compared with the control group of uninfected cells. At 6, 10, and 12 h post-infection, an increase in the percentage of dead cells was observed when they were infected with yeasts of the fungus ( Figure 1). Infection with conidia only showed a significant increase in cytotoxicity at 10 h post-infection. Sporothrix schenckii Induces Changes in the Actin Cytoskeleton in Human Keratinocytes Once it was established that yeasts induced a higher percentage of cell death compared to conidia, cytoskeletal rearrangements were analyzed in cells infected with both phases of the fungus. Actin filaments were stained with rhodamine phalloidin, and the cell nucleus with DAPI, and then analyzed via confocal microscopy. As shown in Figure 2, the keratinocytes that were not infected showed a homogeneous distribution of the actin filaments, without cellular prolongations and with a longitudinal organization. In contrast, keratinocytes infected with conidia of S. schenckii showed morphological changes starting 2 h post-infection. The cells presented a loss in the longitudinal distribution of the filaments and focal points of actin were observed throughout the cytoplasm.
These changes were maintained until 10 h post-infection. Moreover, the yeast-infected keratinocytes showed the formation of membrane protuberances, as well as actin focal points, starting 2 h post-infection, suggesting a reorganization of the cytoskeleton. These alterations were observed until 10 h post-infection. Closer observation of infected cells at first showed that abundant actin focal points and membrane projections had formed at post-infection observation times. In addition, yeast-like structures were observed at the ends of the membrane projections ( Figure 3). In contrast, no changes in the actin cytoskeleton were observed in the control group, in which stress fibers were observed.
Figure 2. Cell infection kinetics were performed with conidia and yeast cells of S. schenckii in keratinocytes at a 1:1 MOI for 10 h. Actin filaments were stained with rhodamine phalloidin (red), and the cell nuclei with DAPI (blue). White arrows indicate the formation of membrane protuberances. Images at 60×.
Overexpression of TLR2, TLR6, MR and CR3 Receptors by Keratinocytes Infected with Conidia and Yeast Cells of Sporothrix schenckii The keratinocytes express different PRRs on their cell surface [15][16][17][18][19][20], so we decided to analyze the expression of various cell receptors during infection with conidia and yeast cells of S. schenckii. Keratinocytes infected with S. schenckii conidia ( Figure 4A) showed an overexpression of the MR, TLR2, CR3, and TLR6 receptors starting 2 h post-infection for a maximum of 10 h, compared to the control group. At the same time, there was a discrete overexpression of TLR4. The results were confirmed by the mean fluorescence intensity analysis ( Figure 4B), and the MFI for TLR2 had a significant value at 2, 6 and 10 h post-infection. In the case of TLR4, the MFI did not show a significant difference compared to the control group of uninfected cells, while in contrast, the cells treated with PMA showed a significant expression of TLR4. TLR6 expression reached significant MFI values at all times evaluated, with a maximum of 32 at 10 h. The MR and CR3 receptors showed significant differences at each of the times with a maximum MFI of 29 and 21, respectively, at 10 h post-infection. After a 24-h stimulation with PMA, the cells showed overexpression of TLR2, TLR4, TLR6, MR, and CR3. S. schenckii yeast infection kinetics ( Figure 5A) found an increase in the production of TLR6, MR, CR3, and TLR2 receptors starting 2 h post-infection, with a maximum expression at 10 h, compared to the control group of uninfected cells. The TLR4 expression was very discrete at the same post-infection times.
The mean fluorescence intensity analysis ( Figure 5B) confirmed the observations, finding a maximum overexpression of TLR2 at 10 h post-infection with an MFI of 15; while the maximum MFI for TLR6 was 37, the MR had a maximum value of 52, and the CR3 had a maximum value of 31. In the case of TLR4, no significant elevation of MFI was found with respect to the control group, except for the MFI of cells treated with PMA. Infection of Keratinocytes with Sporothrix schenckii Induces the Production of Proinflammatory Cytokines, Chemokines and Growth Factors To determine whether infection by S. schenckii induces cytokine production, infection kinetics were performed with Sporothrix schenckii conidia as described above. A total of thirty elements were determined, including cytokines, chemokines, and growth factors to be evaluated with the LUMINEX system, in the keratinocyte culture supernatants at each post-infection time. All the elements analyzed were described in the methodology section, and only the molecules whose production increased during the infection are presented. The chemokines evaluated were RANTES, MCP-1, IL-8, and IP10. The keratinocytes infected with Sporothrix schenckii conidia showed a significant increase in the production of MCP-1 and IL-8. Production of MCP-1 and IL-8 started at 8 h, and reached their maximum concentration at 12 h, attaining levels of 390 pg/mL and 270 pg/mL, respectively. The keratinocytes produced RANTES significantly at 12 h post-infection (10 pg/mL). The production of the IP-10 chemokine was discrete and late at 12 h, reaching a maximum level of 1.5 pg/mL that was not statistically significant ( Figure 6A). On the other hand, during infection with conidia, production of two growth factors, granulocyte colony-stimulating factor (G-CSF) and granulocyte-macrophage colony-stimulating factor (GM-CSF), was observed. Maximum levels of G-CSF and GM-CSF were 61 pg/mL and 12 pg/mL, respectively ( Figure 6C). The proinflammatory cytokines produced by the infected keratinocytes were IL-6 with 5.0 pg/mL, TNF-α with 2.5 pg/mL, IFN-γ with 6.4 pg/mL, and IFN-α with 7.0 pg/mL. These cytokines reached their maximum levels at 12 h post-infection ( Figure 6B). Interestingly, conidial infection also stimulated the production of the anti-inflammatory cytokine IL-10 with a significant and elevated level of 27 pg/mL at 12 h ( Figure 6B). In summary, the highest levels of soluble mediators produced by keratinocytes infected with S.
schenckii conidia were the chemokines MCP-1 and IL-8, followed by G-CSF, IL-10, GM-CSF, IFN-α, IFN-γ, IL-6, and TNF-α, in decreasing order ( Figure 6). With regard to the production of mediators produced by keratinocytes infected with Sporothrix schenckii yeast cells, the chemokines analyzed were RANTES, MCP-1, IL-8, and IP10. The MCP-1 and IL-8 chemokines were significantly elevated and were produced in high amounts at late observation times, reaching a maximum concentration of 480 pg/mL for MCP-1, and 407 pg/mL for IL-8, at a post-infection time of 12 h. However, no important or significant increase was observed for RANTES, which reached a maximum concentration of 10 pg/mL towards the end of the infection kinetics. The IP-10 chemokine, on the other hand, was produced late, reaching very low levels; i.e., a concentration of just 2 pg/mL at 12 h post-infection ( Figure 7A). Data are presented as the mean ± standard deviation (SD) of two independent experiments. * p < 0.01, ** p < 0.001 and *** p < 0.0001. During infection with yeasts, late production of G-CSF and GM-CSF was observed with a concentration of 62 pg/mL and 11 pg/mL, respectively. Likewise, the keratinocytes infected with yeasts produced proinflammatory cytokines. Those that significantly increased their levels included IL-6 with 7.5 pg/mL, IFN-α with 7 pg/mL, and IFN-γ with 6 pg/mL. All of these cytokines reached their maximum level at 12 h post-infection. On the other hand, TNF-α had a concentration of 1.9 pg/mL at 12 h post-infection, without there being a significant increase. Furthermore, infected keratinocytes also produced elevated levels of the anti-inflammatory cytokine IL-10, reaching a concentration of 28.8 pg/mL at 12 h. As mentioned above, thirty molecules produced by keratinocytes infected with both phases of the S. schenckii fungus were evaluated, including chemokines, cytokines, and growth factors. Figure 8 is a heat map showing the production of these molecules at different times, following infection with conidia and yeasts of the fungus. It was observed that when infected with both phases of the fungus, keratinocytes significantly produced the chemokines IL-8 and MCP-1. G-CSF, IL-10, and GM-CSF were also produced to a lesser degree, and the cytokines IL-6, IFN-γ, IFN-α at a much lower concentration. In contrast, TNF-α production was significant only with conidial infection.
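The statistical treatment described in the Methods (two-way ANOVA followed by Tukey's post-hoc test, performed in GraphPad Prism) can also be reproduced with open tools. The sketch below is a minimal, hypothetical example using Python's statsmodels; the long-format table, column names, and file name are illustrative assumptions and are not part of the original study.

```python
# Minimal sketch (assumed data layout): two-way ANOVA of cytokine levels with
# factors "infection group" and "post-infection time", followed by Tukey HSD.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format table: one row per replicate, with columns
#   group in {"control", "conidia", "yeast"}, time_h in {2,4,6,8,10,12}, value in pg/mL
df = pd.read_csv("cytokine_levels.csv")

# Two-way ANOVA with interaction (type II sums of squares)
model = ols("value ~ C(group) * C(time_h)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey post-hoc comparison of the infection groups at a single time point
subset = df[df["time_h"] == 12]
print(pairwise_tukeyhsd(endog=subset["value"], groups=subset["group"], alpha=0.05).summary())
```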
Histologically, the lesions are pyogenic and granulomatous, showing infiltration of neutrophils, scarce eosinophils, mononuclear phago- Discussion Sporotrichosis is a mycosis that affects epidermis, dermis, and subcutaneous tissue, and it is caused by a thermally dimorphic fungal species of the complex Sporothrix spp. One of its etiological agents is the fungus Sporothrix schenckii [1]. A transition from mycelium to yeast is required for infection to settle in tissue, which successfully occurs among the species of the clinical clade [22]. Histologically, the lesions are pyogenic and granulomatous, showing infiltration of neutrophils, scarce eosinophils, mononuclear phagocytes, lymphocytes, and plasma cells. In addition, asteroid bodies and yeast cells are observed [23]. As the largest organ of the human body, the skin not only functions as a physical barrier, but also provides defense against these microorganisms [24]. Keratinocytes are among the epidermal cell lines that maintain the integrity of this barrier and tissue homeostasis [25]. In addition, when coming into contact with a microorganism, they participate by mediating antimicrobial responses, promoting a pro-inflammatory environment, and producing antimicrobial peptides in infections with actinomycetes, viruses and fungi [26][27][28]. However, information is limited on the involvement of keratinocytes in the pathogenesis of fungi such as S. schenckii. In this study, we evaluated the keratinocyte response during infection with conidia (infective phase) and yeast cells (parasitic phase) of S. schenckii. Previous studies have shown differences in the virulence levels of the Sporothrix spp. complex, attributed to the thermotolerance and production of melanin that confers resistance to antifungals such as amphotericin B and terbinafine [29,30]. Our results showed that keratinocytes are susceptible to infection with conidia and yeast cells of S. schenckii, causing a certain degree of cell death (Figure 1). To date, there are no in vitro cytotoxicity studies of Sporothrix spp. The most frequently reported virulence studies are in murine models, where it has been observed that Sporothrix schenckii shows different levels of virulence, and that this depends on the amount of inoculum administered [31][32][33]. Our study was conducted with a multiplicity of infection (MOI) of 1:1, and despite the low microbial load, the percentage of viable cells was affected, albeit at a low proportion. Another factor to consider is the origin of the clinical isolate. It has been reported that in the murine model, isolates from patients with disseminated sporotrichosis lead to a more severe disease compared to isolates from patients with lymphocutaneous sporotrichosis [34]. The clinical isolate we used in this study was from a patient with disseminated sporotrichosis; thus, the cytotoxic effect observed could be attributed to its clinical origin. On the other hand, the infection of the host cell by the pathogenic fungus begins with its adhesion to the cell surface, and this interaction is fundamental for the pathogenesis of mycoses [35]. Previous studies have shown that opportunistic fungi such as C. glabrata adhere to the cell surface of human osteoblasts and induce the polymerization of actin filaments, which cause the formation of membrane projections that trap yeasts that get internalized by cells [36]. It has also been described that S. 
schenckii interacts with epithelial cells, leading to their morphological alteration and the loss of rearrangement of the microtubular network [37]. However, the role of the actin cytoskeleton in the internalization of S. schenckii in keratinocytes remains unknown. We found that infection with conidia and yeast cells of S. schenckii induced changes in the cell morphology of keratinocytes ( Figure 2). These changes consisted of the reorganization in the polymerization of actin filaments, the formation of cellular projections, and the loss of stress fibers (Figure 3). These results are consistent with those produced by other pathogenic fungi such as Malassezia pachydermatis, Aspergillus fumigatus, Paracoccidioides brasiliensis, and Cryptococcus neoformans that induce rearrangement of the actin cytoskeleton in non-phagocytic cells [38][39][40][41]. The rearrangement of the actin filaments in keratinocytes and the formation of membrane protrusions could suggest the internalization of conidia and yeast cells of S. schenckii as a possible mechanism to infect the host cell. The changes observed are similar to those that M. tuberculosis induces upon being internalized in lung epithelial cells, through a mechanism of macropinocytosis [42]. The possibility that S. schenckii triggers the macropinocytosis mechanism to be internalized by keratinocytes is a fact that must be explored. Keratinocytes contribute to the inflammatory process by producing mediators of the innate immune system during infectious processes [21]. The expression of these effector molecules begins by recognizing pathogen-associated molecular patterns (PAMPs) through the different pattern-recognition receptors (PRRs) [21]. Some components of the fungal cell wall, such as chitin, mannans and β-glucans are recognized by the PRRs [43,44]. The cell wall of the S. schenckii conidia is composed of rhamnose and mannose, whereas the yeasts have rhamnomanian peptides. The polysaccharide moiety of these rhamnomanian peptides is composed of D-mannose, L-rhamnose, and galactose polysaccharides [45,46], and it is unknown whether the PRRs can recognize them in keratinocytes. The expression of PRRs by keratinocytes during S. schenckii infection is also not known in detail. Our results showed that keratinocytes infected with conidia overexpress MR, TLR6, CR3 and TLR2 (Figure 4). Similarly, in infection by yeast cells, MR, TLR6, CR3 and TLR2 receptors are overexpressed ( Figure 5). High MR expression and low TLR4 expression were observed in both infections. Mannose receptor overexpression plays an important role in antifungal response [47]. This receptor recognizes mannosyl-fucosyl ligands and glycoconjugates present in fungi and is part of the C-type lectin receptors (CLRs) [47,48]. It is expressed in dendritic cells, macrophages, and also in human keratinocytes [48,49]. Therefore, overexpression of this receptor in keratinocytes during S. schenckii conidia and yeast infection could indicate that it actively participates in pathogen recognition and infection control, as has been described in Candida albicans infection [49]. On the other hand, CR3 is known to be involved in antifungal response, and is essential for the phagocytosis of particles opsonized by the iC3b complement fragment [50]. Studies have shown that Histoplasma capsulatum enters macrophages through this receptor [51]. 
Furthermore, previous research on THP-1 macrophages showed that they are capable of phagocytizing opsonized yeasts through CR3, and opsonized and non-opsonized conidia of S. schenckii through the MR [11]. Our results showed a high expression of CR3 during infection with both phases of the fungus. Most likely, this receptor is involved in the internalization of opsonized conidia and yeasts, although its role in the internalization of these pathogens has been described in phagocytic cells [11]. Similarly, the signaling pathways that the MR and CR3 could trigger in keratinocytes that lead to cell activation resulting in an antifungal state by the keratinocyte and/or the production of cytokines and chemokines are unknown. Research has shown that Pneumocystis stimulates nuclear NF-kB translocation through MR activation in alveolar macrophages, and the activation of this pathway is dependent on the multiplicity of infection [52]. In contrast, blocking the MR suppressed NF-kB expression in mast cells infected with Bordetella pertussis [53]. Similarly, activation of the NF-kB signaling pathway is also mediated by TLRs. The involvement of TLR2 and TLR4 in the interaction with conidia and yeast cells of Sporothrix schenckii in keratinocytes and their consequent activation of NF-kB has been demonstrated [54]. This activation triggers an inflammatory response mediated by IL-6 and IL-8 [54]. In contrast, our results showed a non-significant expression of TLR4, and an overexpression of TLR2 and TLR6. Both receptors can form heterodimers that recognize zymosan, i.e., mannan particles containing β (1,3)-glucan found in fungal cell walls [55]. The TLR2 and TLR6 heterodimer could recognize zymosan in the cell wall of S. schenckii. In our study, the recognition of S. shenckii conidia and yeasts by keratinocytes activated signaling pathways that resulted in the production of the proinflammatory cytokines IL-6, TNF-α and IFN-α, IFN-γ; anti-inflammatory cytokine IL-10; chemokines IL-8, MCP-1 and growth factors GM-CSF, G-CSF (Figures 6-8). TLR2 expression is involved in the production of IL-6 and IL-8 that promote the generation of an inflammatory state [54]. Additionally, during infection with both phases of the fungus, we observed not only elevated levels of IL-8, but also high levels of the MCP-1 chemokine. The production of MCP-1 can correlate with the cellular infiltrate of macrophages, neutrophils and eosinophils, observed in S. schenckii lesions, due to its chemoattractant function [56]. The results also showed low TNF-α production during infection with S. schenckii conidia, and no production during infection with yeasts. Synthesis of this cytokine has been shown to be through the MR via the P38 MAPK signaling pathway in human epithelial cells [57]. In addition, MR and TLR4 have been reported to be involved in the production of TNF-α and IL-6 [12]. Thus, low TLR4 expression could influence low TNF-α production during yeast infection. The production of IFN-γ and IFN-α was significant during infection with both phases of the fungus. IFN-γ showed the most sustained production, and the synthesis of both types of interferons has been reported in inflammatory processes in keratinocytes [58,59]. Although the production of IFN-γ has been described to be mainly limited to immune response cells such as T cells, macrophages or NK cells [60], it is interesting that in the model of infection by both phases of the fungus, keratinocytes preferentially produce it over IFN-α. 
As such, the implications of this situation should be studied in greater detail. IFN-γ has an immunomodulatory function and is synthesized through the interferon regulatory factor (IRF) [60]. It is not known whether IRF is involved in the production of this cytokine by keratinocytes. A relevant data in this model of infection is the production of IL-10 by keratinocytes in late post-infection times with yeasts or conidia. IL-10 is a primary cytokine in the modulation of inflammatory responses, in addition to regulating the growth of various cell lines including keratinocytes [61]. In human peripheral blood mononuclear cells, the production of IL-10 has been evaluated during infection with conidia and yeast cells of S. schenckii [12]. Production of this cytokine has been reported through the activation of the Dectin-1, TLR2 and MR receptors [12]. The presence of IL-10 from infected keratinocytes may contribute to an environment that favors infection by interfering with proper cellular immune response, although it can also contribute to containing the damage produced by hyperinflammation [62]. Further in vivo studies are necessary to establish the role of IL-10 in sporotrichosis. Like the other mediators analyzed, a significant and late production of two growth factors was observed: GM-CSF and G-CSF. GM-CSF stimulates the differentiation and proliferation of macrophages, eosinophils, and granulocytes. It also induces the migration and proliferation of keratinocytes by stimulating the healing process [63]. G-CSF is an important factor in the proliferation and differentiation of neutrophils [64]. To date, there is no report on the production of these growth factors in the infection of keratinocytes by Sporothix spp. However, the production of GM-CSF and G-CSF has been reported in keratinocytes treated with dinitrochlorobenzene, and they have been attributed a proinflammatory effect [65], which could be contributing to the pathogenesis of Sporotrichosis. Conclusions Overall, this study shows the responsiveness of keratinocytes during early infection with conidia and yeast cells of S. schenckii. It begins with recognition by keratinocyte receptors including MR, CR3, TLR 6, and TLR2 for both phases of the fungus. After recognition, the keratinocyte undergoes changes in its cytoskeleton that induces the formation of membrane protrusions that can facilitate the internalization of conidia or yeasts. The infection promotes the production of cytokines, creating a pro-inflammatory, and, above all, chemotactic environment with very high production of MCP-1 and IL-8. This chemotactic environment will be responsible for the recruitment of other cell lines at the infection site. Whereas in late post-infection times keratinocytes produce IL-10, which could mediate an anti-inflammatory response and therefore aid in the survival of the pathogen, it could also eventually contribute to a protective effect for the host by decreasing hyperinflammation. At the same time, keratinocytes produce growth factors that could help repair damaged tissue during infection, or in the same way, contribute to the pro-inflammatory environment characteristic of the disease. Figure 9 depicts the hypothetical model of the findings of this study. Data Availability Statement: The data presented in this study are available on request. 
Acknowledgments: A.P.R. (Araceli Paredes-Rojas) is a PhD student in the Immunology Program of the Escuela Nacional de Ciencias Biológicas from Instituto Politécnico Nacional, and was a fellow of CONACyT and the BEIFI system of the Instituto Politécnico Nacional. JLH and JICS are members of the SNI (Sistema Nacional de Investigadores, CONACyT-México); JICS, LECR and APR are professors with a PRODEP profile. JLH is a COFAA and EDI Fellow at the Instituto Politécnico Nacional. The graphical abstract and Figure 9 were created with BioRender.com. Conflicts of Interest: The authors have no conflict of interest to declare.
2022-04-26T15:16:14.300Z
2022-04-23T00:00:00.000
{ "year": 2022, "sha1": "526e1d1db056eb1bc345ed7dac223367689257d0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2309-608X/8/5/437/pdf?version=1650707401", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6901e55b4de84f61bb89049439b1f26ca0e89786", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
260915724
pes2o/s2orc
v3-fos-license
Identification of an Additional Metal-Binding Site in Human Dipeptidyl Peptidase III Dipeptidyl peptidase III (DPP III, EC 3.4.14.4) is a monozinc metalloexopeptidase that hydrolyzes dipeptides from the N-terminus of peptides consisting of three or more amino acids. Recently, DPP III has attracted great interest from scientists, and numerous studies have been conducted showing that it is involved in the regulation of various physiological processes. Since it is the only metalloenzyme among the dipeptidyl peptidases, we considered it important to study the process of binding and exchange of physiologically relevant metal dications in DPP III. Using fluorimetry, we measured the Kd values for the binding of Zn2+, Cu2+, and Co2+ to the catalytic site, and using isothermal titration calorimetry (ITC), we measured the Kd values for the binding of these metals to an additional binding site. The structure of the catalytic metal’s binding site is known from previous studies, and in this work, the affinities for this site were calculated for Zn2+, Cu2+, Co2+, and Mn2+ using the QM approach. The structures of the additional binding sites for the Zn2+ and Cu2+ were also identified, and MD simulations showed that two Cu2+ ions bound to the catalytic and inhibitory sites exchanged less frequently than the Zn2+ ions bound to these sites. Introduction Dipeptidyl peptidase III (DPP III, EC 3.4.14.4) is a monozinc metalloexopeptidase that hydrolyzes dipeptides from the N-terminus of its substrates, consisting of three or more amino acids [1].Because of its affinity for some bioactive peptides, such as angiotensins and opioid peptides, it has recently attracted the attention of several research groups.As early as 2016, angiotensin-(1-7) was shown to be hydrolyzed by DPP III in renal epithelial cells [2], while Pang et al. revealed a link between DPP III and the renin-angiotensin system (RAS) and, thus the potential use of DPP III in the treatment of hypertension [3].Recently, Komeno et al. [4] demonstrated the cardio-and reno-protective effects of dipeptidyl peptidase III in diabetic mice.They found that the beneficial role of DPP III is mediated, at least in part, by the cleavage of a cytotoxic peptide, Peptide 2, which was increased in diabetic mice compared with normal mice.In addition, DPP III has recently been proposed as a biomarker for cardiac shock [5].In its interaction with Keap1, unrelated to its peptidase activity, DPP III is involved in human cancer development, the oxidative-stress response, and neuron protection [6][7][8]. Because of the demonstrated importance of DPP III in the regulation of various physiological processes and because it is the only metalloenzyme among the dipeptidyl peptidases, we considered it important to better understand the process of binding and exchange of metal dications, which are abundant in the human body, in DPP III, and their influence on its structure and function.Abramić et al. [9,10] showed that elevated concentrations of zinc ions (10-30 µM) inhibited rat and human DPP III activity, and Hirose et al. [11] demonstrated the restoration of rat DPP III activity through the addition of either Zn 2+ , Cu 2+ , Ni 2+ , or Co 2+ to the apoenzyme. 
There is only one zinc ion in the active site of the crystallographically determined structures of DPP III (PDB IDs of structures of the following: human-3FVY, 3T6B, 3T6J, 5EGY, 5E2Q, 5E33, 5E3A, 5E3C, 5EHH [12,13]; yeast-3CSK [14], fungal-5YFB, 5YFC, 5YFD [15]; bacterial DPP III-5NA6, 6NA7, 5NA8, 5ZUM, 6EOM [16][17][18]), but previous studies [19][20][21] clearly indicated the possibility of another metal-ion binding, which inhibits the enzymatic activity of DPP III.The binding of another metal ion in the so-called inhibitory metal-binding site, which is directly adjacent to the catalytically active site of the enzyme, has been observed in the crystallographic structures of three zinc-dependent enzymes, in which, as in DPP III, the catalytic zinc is coordinated with two histidines and the carboxyl groups of the amino acids Glu or Asp: carboxypeptidase A, thermolysin, and LpxC (the PDB codes of the corresponding structures are 1CPX, 1LND, and 1P42) [22][23][24].Young and Siemann showed that in anthrax lethal factor (LF), metal ions are exchanged in such a way that the binding of the metal to the inhibitory binding site precedes the release of catalytic zinc [20].Using the same procedure that Young and Siemann used to determine the potential binding site of the inhibitory Zn 2+ ion in LF, we can determine the binding site of the inhibitory metal in DPP III.We have shown computationally that human DPP III (hDPP III) can take up a second zinc ion that binds immediately next to the catalytically important ion and displaces the zinc in the active site, while the zinc that originally occupied the active site leaves the enzyme [19]. By combining several experimental methods (HR-ICP-MS, ITC, stopped-flow, and fluorescence measurements), we studied the stoichiometry and the thermodynamic and kinetic parameters of the binding of various divalent metal ions, Zn 2+ , Cu 2+ , Mn 2+ , and Co 2+ to purified recombinant human DPP III.In addition, we investigated the binding of the metal ions computationally.Using QM calculations, we determined the thermodynamic parameters for the binding of Zn 2+ , Cu 2+ , Mn 2+ , and Co 2+ to hDPP III, and using MD simulations, we determined the main binding modes of two Cu ions to hDPP III. The aim of the proposed research is to increase knowledge about the modes of binding and relative affinity of several physiologically relevant transition metals to dipeptidyl peptidase III and their influence on enzyme activity. Results Several experimental methods were used to determine the stoichiometry and affinity of metal ions binding to DPP III. Metals in Excess Inactivate DPP III in Stopped-Flow Experiments The flow curves of the hydrolysis of Arg-Arg-2NA in the presence of different metal ions were determined using stopped-flow instruments by incubating the metal solution with apo hDPP III.Since we failed to obtain an activation or inhibition curve with the apoprotein, we used the native protein (not treated with chelators to remove metal ions). 
The native hDPP III sample was incubated with a solution of zinc, copper, cobalt, and manganese nitrates. Activation was observed up to a metal:protein ratio of 1:1, after which gradual inactivation occurred. Similar activity curves were obtained for all the metals, with some slight differences in the areas where the ratio of metal to protein was less than 1:1 (see inset in Figure 1). The only metal that showed a peak in activity at equimolar concentrations was Zn 2+ . For the other metals, Cu 2+ , Co 2+ , and Mn 2+ , the activity was generally stable up to a ratio of 1:1, after which inhibition occurred. From the results of the increase in activity, it appeared that the native protein was not saturated with zinc, so we performed measurements using inductively coupled plasma mass spectrometry (ICP-MS).
Figure 1. Relative enzyme activity as a function of metal concentration. Using a stopped-flow setup with two syringes, the native protein (10 nM) was incubated with metal solutions (Zn 2+ , Cu 2+ , Co 2+ , and Mn 2+ ) in Tris-HCl buffer (40 mM, pH 7.5) and rapidly mixed with the substrate (final concentration of 200 µM), and the progress of the reaction was monitored at 332 nm. Inset: Close-up view of data points at equimolar metal and protein concentrations.
Dissociation Constants of the Catalytic Binding Site The Kd value for the enzyme was determined by measuring its activity in 20 mM Tris-HCl buffer pH 7.4 with a 1000-fold molar excess of Zn 2+ , Cu 2+ , and Co 2+ ions (10 µM) compared with the concentration of hDPP III (10 nM) and an excess of DPA as a chelator (Figure 2). The Kd value (which was identical to the concentrations of the free ions Zn 2+ , Cu 2+ , and Co 2+ required to reach half of the maximum activity of the enzyme) was determined by non-linear regression and corrected for metal-ion-buffer binding, with final values of 6.7 × 10 −11 M for the Zn 2+ , 2.8 × 10 −12 M for the Cu 2+ , and 3.2 × 10 −9 M for the Co 2+ . Strong inactivation with excess zinc was also observed. Since no data were available for the binding constants of the manganese and DPA complexes, we could not measure the dissociation constant for the manganese.
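The fluorimetric Kd determination above amounts to fitting the fractional activity against the free metal concentration set by the metal-DPA equilibrium. The following sketch illustrates one way to perform such a fit; the single-site model A = Amax·[M]free/(Kd + [M]free), the DPA stability constant, and the titration values are illustrative assumptions, not the constants and data used in the study.

```python
# Sketch (illustrative values only): estimate Kd from enzyme activity measured
# in a chelator-buffered metal titration, assuming a single-site binding model.
import numpy as np
from scipy.optimize import brentq, curve_fit

K_ML = 1e8        # assumed effective 1:1 metal-DPA association constant (M^-1), placeholder
DPA_TOT = 20e-6   # assumed total chelator concentration (M), placeholder

def free_metal(m_tot):
    """Solve the mass balance M_tot = [M] + K*[M]*DPA_tot/(1 + K*[M]) for free [M]."""
    f = lambda m: m + K_ML * m * DPA_TOT / (1.0 + K_ML * m) - m_tot
    return brentq(f, 0.0, m_tot)

def activity(m_tot, kd, a_max):
    """Fractional activity as a function of total metal, via the free metal concentration."""
    m_free = np.array([free_metal(m) for m in np.atleast_1d(m_tot)])
    return a_max * m_free / (kd + m_free)

# Hypothetical titration: total metal added (M) vs relative enzyme activity
m_total = np.array([1, 2, 5, 10, 15, 20]) * 1e-6
rel_act = np.array([0.05, 0.12, 0.35, 0.62, 0.83, 0.95])

popt, _ = curve_fit(activity, m_total, rel_act, p0=[1e-9, 1.0],
                    bounds=([1e-14, 0.1], [1e-6, 2.0]))
print(f"fitted Kd = {popt[0]:.2e} M, A_max = {popt[1]:.2f}")
```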
Additional Metal-Binding Site Confirmed Using ICP-MS Measurements were performed in 25 mM ammonium acetate, pH 7.4, at a mass concentration of protein of 0.1 to 0.2 mg mL −1 . The holoproteins were prepared with the addition of six moles of metal ions to one mole of protein, and with subsequent washing with the buffer to remove the unbound metal ions (see Section 4). The results are shown in Table 1. The native protein was not saturated with zinc metal, as we previously observed in the stopped-flow experiments. Our results show that the native protein was, in fact, very similar to the apoprotein. The highest metal-to-protein ratio measured was 2:1, for copper. Only one mole of Zn 2+ was bound to one mole of protein. The results confirm that the hDPP III protein has a high affinity for Cu 2+ and Zn 2+ , leading to integer values for the metal-to-protein content, while Co 2+ and Mn 2+ do not remain tightly bound; values below 1 were observed, specifically about 0.3 for cobalt and 0.02-0.03 for manganese. The results were reproducible with different batches of protein. These results confirm the possibility of an additional metal-ion-binding site in hDPP III. Subsequently, ITC measurements were performed to describe the thermodynamics of the binding of these metals. Dissociation Constants of the Additional Metal-Binding Site We used isothermal titration calorimetry (ITC) to quantify the metal-protein interactions. This method should provide a description of all the thermodynamic parameters of the interaction, but careful experimentation and data processing are required. All the titrations of hDPP III with the metal ions were tested as reverse titrations (protein in metal ion). The main results are shown in Figure 3. The overall results indicated two binding sites (Table 2). For all the metals except cobalt, the data indicated at least two binding sites, based either on the stoichiometry (n) or on the shape of the curve. The affinity of the active site for Zn 2+ and Cu 2+ was, theoretically, too large to be measured by this method. However, for Co 2+ , we measured the binding of 0.63 metal ions per protein molecule. This binding was endothermic (Figure 3c), and the K d (corrected for the interaction of the metal ion with the buffer, and the buffer protonation) was 13 nM (Table 3), as expected for the binding of the Co 2+ to the active protein site. The data from the reverse titration were in agreement with the direct titration (Tables S1 and S2). For manganese, the data were fitted to two sets of sites, but with a total metal content per protein of less than 1. For the first (active) binding site, the apparent K d value was similar to that for cobalt, but for the second (additional) binding site, it was significantly higher. The binding to the first site was exothermic, while to the second site, it was endothermic. For zinc, we could not obtain the first baseline, with the data suggesting two binding sites, similar to manganese. In these cases, we eliminated all the points that were potentially associated with another binding event and analyzed the data using "one set of sites" fitting. The stoichiometry was 1.5 zinc ions per protein molecule for direct titration and 2.5 for reverse titration. Results could only be obtained for the binding to the second site, as the first site bound the Zn 2+ with much larger affinity. For all of these metals, the data from the direct and reverse titrations were in general agreement.
Mol.Sci.2023, 24, x FOR PEER REVIEW 5 of 24 titrations of hDPP III with the metal ions were tested as reverse titrations (protein in metal ion).The main results are shown in Figure 3.The overall results indicated two binding sites (Table 2).For all the metals except cobalt, the data indicated at least two binding sites, based either on the stoichiometry (n) or on the shape of the curve.The affinity of the active site for Zn 2+ and Cu 2+ was, theoretically, too large to be measured by this method.However, for Co 2+ , we measured the binding of 0.63 metal ions per protein molecule.This binding was endothermic (Figure 3c), and the Kd (corrected for the interaction of the metal ion with the buffer, and the buffer protonation) was 13 nM (Table 3), as expected for the binding of the Co 2+ to the active protein site.The data from the reverse titration were in agreement with the direct titration (Tables S1 and S2).For manganese, the data were fitted to two sets of sites, but with a total metal content per protein of less than 1.For the first (active) binding site, the apparent Kd value was similar to that for cobalt, but for the second (additional) binding site, it was significantly higher.The binding to the first site was exothermic, while to the second site, it was endothermic.For zinc, we could not obtain the first baseline, with the data suggesting two binding sites, similar to manganese.In these cases, we eliminated all the points that were potentially associated with another binding event and analyzed the data using "one set of sites" fitting.The stoichiometry was 1.5 zinc ions per protein molecule for direct titration and 2.5 for reverse titration.Results could only be obtained for the binding to the second site, as the first site bound the Zn 2+ with much larger affinity.For all of these metals, the data from the direct and reverse titrations were in general agreement.For copper, we did not obtain satisfactory results.The direct and reverse titrations of protein and Cu 2+ did not match exactly, which can be observed immediately on the signature plot (Figure S1).Both binding processes were exothermic, but the second had a significant entropic effect.Although both titrations indicated that we measured binding to the same-additional-binding site (based on the overall stoichiometry n > 1), the reverse titration showed an additional binding event.We suspect that there may have been other interactions in the reaction (e.g., Cu 2+ -buffer, or Cu 2+ -His-tag) that interfered with the binding of the metal to the hDPP III, so we cannot consider the data obtained to have been accurate.The amount of His-tag remaining in the protein sample was determined using immunoassays; the data are given in the supplementary material (Figures S2 and S3).The measurements were repeated in the 50 mM MOPS-NaOH buffer at pH 7.4 for Zn 2+ and Cu 2+ (see Table S2).Again, we could not use the data obtained with the copper ions because the direct and reverse titrations did not match.However, the apparent parameters measured in the direct titration in both buffers agreed, with the largest difference observed in the stoichiometry (n = 1.2 in the sodium cacodylate buffer and n = 0.8 in MOPS-NaOH). 
Mutants were made to test the predictions about the amino acid residues forming the additional binding site. We tested the E508D variant and the E316A H568Y double mutant. Overall, we found no significant differences in the binding of the zinc ions to the wild type and variants of hDPP III, either in the stoichiometry or in the apparent Kd values (Table S3). However, the apparent Kd value increased significantly for the copper ions (Table S4). From these data, we can surmise that we indeed measured the binding of metal ions to a protein-binding site in the direct titration experiments. Therefore, we interpreted the data from the direct titrations as showing the binding of Cu2+ to the additional metal-binding site (data in Table 2). In the reverse titration with Cu2+, we probably measured an unknown, competing interaction.

The apparent Kd values (Table 2) were modified using the known constants for metal-buffer interactions. The Kd values thus determined are given in Table 3. The cobalt ions bound only to the active site, and the Kd value from the ITC data indicated an affinity close to nanomolar, which was consistent with the data from the fluorimetric assay. The Kd values for the binding of the zinc and copper to the additional binding site were in the order of 10−7 M and 10−8 M, respectively. These affinities were four orders of magnitude lower than the fluorimetrically measured affinities for the catalytic site.

We performed QM calculations on simplified models of the hDPP III metal center to determine whether other divalent cations, such as Cu2+, Co2+, and Mn2+, can replace native zinc and how the presence of water molecules and the amino acid residues of the second coordination sphere affect this process. Since higher concentrations of these metal ions have been shown to inhibit the enzymatic activity of hDPP III, we also studied the binding of Zn2+ and Cu2+ in both the catalytic site and an additional binding site next to the catalytic site, when the same type of metal was bound in both sites. Selected distances in the hDPP III structures (model 1, model 2, model 3, and model 4) with metal ions (Zn, Cu, Co, and Mn) before and after the energy optimization are given in Tables S5–S8.

To determine the competition between the cognate Zn2+ and the other biogenic metal species, such as Cu2+, Co2+, and Mn2+, for the active metal-binding site in hDPP III, the relative Gibbs free energies were calculated according to Equation (1), where M1 represents the cognate Zn2+, M2 is either Cu2+, Co2+, or Mn2+, and P is the protein (see Table 4). The enzyme active site itself was represented by three models with different levels of complexity (see Section 4).

Table 4. The relative Gibbs free energies (compared to Zn2+) for non-native metals for the binding site of hDPP III. Calculations were performed according to Equation (1), using the three models with different levels of complexity (see above) to approximate the active protein site. (A) All calculations were performed with ε = 4. (B) The energy optimization in vacuum followed by a single-point energy calculation with ε = 78.

The relative Gibbs free energy for the binding of Cu to the inhibitory binding site when the active site was occupied was calculated as 28.68 kcal mol−1, according to the following equation, where M1 represents the cognate Zn2+, M2 is either Cu2+, Co2+, or Mn2+, and P is the protein.
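Because the equations themselves are not reproduced above, the following LaTeX fragment sketches one plausible form of the metal-exchange free energy consistent with the verbal description (M1 the cognate Zn2+, M2 the competing cation, P the protein). It is an illustrative assumption, not the authors' exact Equation (1).

```latex
% Illustrative metal-exchange free energy (assumed form, not the authors' exact Eq. (1)):
% [Zn--P] + [M2(H2O)6]^{2+}  ->  [M2--P] + [Zn(H2O)6]^{2+}
\Delta\Delta G_{\mathrm{M2/Zn}} =
  \bigl( G_{[\mathrm{M2\text{--}P}]} + G_{[\mathrm{Zn(H_2O)_6}]^{2+}} \bigr)
  - \bigl( G_{[\mathrm{Zn\text{--}P}]} + G_{[\mathrm{M2(H_2O)_6}]^{2+}} \bigr)
```

On this convention, a positive value would mean that the competing cation binds the site less favorably than the cognate zinc.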
Molecular-Dynamics Simulations

In our previous work [19], we determined the main binding modes of the second zinc ion to hDPP III and traced the exchange of the ion in the additional, inhibitory, binding site with the catalytic ion. In this work, we investigated the binding of Cu ions to hDPP III. To clarify the relative stability of various di-copper hDPP III structures, MD simulations of the optimized structures of the solvated di-metal protein (see Table S9 for some changes in geometry that occur during optimization) were performed (for details, see Section 4.2.2). In total, about 4 µs of MD simulations were performed with three different initial structures of di-copper hDPP III (Table S10; for the definitions of the initial structures, see Section 4.2.2 and Figure S10). The structure of the protein backbone remained stable during the MD simulations of all the structures (Figure S4), which was not true for the initial positions of the metal ions. The largest fluctuations of the Cu1 ion (Cu1 denotes the copper ion in the active center, and Cu2 denotes the second copper ion) were determined for the structures in which the copper ions were bound in mode 1 (SM1 structure) (Figure S5). In one of three simulated replicas of this structure, SM1-1, Cu1 and Cu2 exchanged positions (Figure 4; this was the only simulation in which the exchange of Cu ions occurred), and in one (SM1-3), Cu1 left the catalytic site (see Figure S5). The expulsion of the Cu from the active site of the protein resulted in a decrease in the binding affinity of this copper ion for the enzyme, as approximated by the LIE energies (Table S11, P-Cu1 energy). The exchange of Cu ions occurred during the first 15 ns of the MD simulations (Figure 4). During the equilibration, Y318 and H568 left the coordination sphere of Cu2, and E316 ligated to both Cu1 and Cu2 (Figure 5), where they remained during the first 10 ns of the MD simulations (Figure 6). Over the next 5 ns, Cu2 moved away from E316, while E508, which coordinated both metal ions throughout the simulation (Figure 6), rotated about 180° around the Cβ-Cγ bond (the dihedral Cα-Cβ-Cγ-Cδ changed from about 80° to −80°, Figure S6), and Cu1 and Cu2 exchanged positions (Figure 7).

During the simulation of the third replica of the DPP III structure with the Cu ions bound in mode 1 (SM1-2 simulation), Cu1 constantly remained close to its initial position, and H455 and E508 coordinated it throughout the simulation, but H450 rotated about 100° around the Cβ-Cγ bond after about 70 ns (the dihedral Cα-Cβ-Cγ-Cδ changed from about −73° to 25°), and it was not coordinated with Cu1 during the following 250 ns. However, after about 330 ns of the MD simulation, it returned to its initial position and remained there until the end of the simulation (Figures 8 and S5).

In the simulations of the structures with copper ions bound in mode 1′ and mode 2 (SM1′ and SM2 structures, respectively), Cu1 remained close to its initial position throughout the simulations, ligated to the amino acid residues H450, H455, and E508 (Figure S7), and either to the hydroxide ion (in the replicas of the SM1′ structure) or to a water molecule (in the replicas of the SM2 structure).

The fluctuations in Cu2 were largest in the SM2 structure. In all three replicas of this structure, it moved toward the lower domain in the direction of the entrance of the interdomain cleft, and it was accommodated between E316 and E329 (Figures 9, S8 and S9). According to the LIE P-Cu1/Cu2 energies, this is the most favorable way of binding the second copper ion (Table S11). In the simulations of the SM2 replicas, the copper ions were mostly tetra-coordinated and, occasionally, penta-coordinated (it should be noted that the interaction of a metal ion with the carboxyl group of Glu is either monodentate, m, or bidentate, b). Cu1 was coordinated by H450, H455, E508m, and one water molecule (occasionally, two water molecules), while Cu2 was coordinated by E316m, E329b, and one water molecule in the SM2-1 and SM2-2 replicas, and by E316m, E329m, and two water molecules in the SM2-3 replica.

In the simulations of both replicas of the SM1′ structure, both copper ions remained near their original positions, as described above: Cu1 was ligated with H450, H455, E508m, and the hydroxide, and Cu2 was mostly ligated with E316b, E508m, and hydroxide in the SM1′-2 replica, and with E316m, hydroxide, and two water molecules in the SM1′-1 replica (Figures 10 and S9).

Discussion

The binding site of the catalytic Zn ion in DPP III is very similar to those in thermolysin (TML), carboxypeptidase A (CPA), and anthrax lethal factor, i.e., in all of these, the zinc ion is coordinated by two histidines and a glutamate. In addition, all of these proteins can bind other divalent ions, such as Cu, Co, and Mn, excess metal ions inhibit their enzyme activity, and an inhibitory binding site has been identified [20,22,23]. It has also been shown that the binding of zinc and other metals in excess can lead to the inhibition of DPP III in humans, rats, and microorganisms [9-11]. All of these findings indicate that metal ions can bind not only at the catalytic site of DPP III, but also at an additional, so-called inhibitory binding site [19-21].
In this work, we focused on determining whether an additional metal-binding site is present in hDPP III.

In the stopped-flow experiments, we showed that an excess of any of the tested metals over an equimolar metal-to-protein ratio leads to enzyme inhibition (Figure 1). The highest activity for all the metals was measured before or at the time point when the ratio of metal ions to protein molecules reached 1:1. The further addition of metal ions resulted in a similar decrease in activity for all the metal ions. We noted a slight difference between the activation of the protein by zinc and by the other metals: for Zn, the activity increased until the ratio of Zn to protein reached 1:1, and then decreased. For the other metal ions, the activity was mostly constant (maximal) up to the metal-to-protein ratio of 1:1, with Cu2+ being an exception, reaching a peak at a Cu-to-protein ratio of 0.1.

The Kd values for the binding of the metal ions in the active site were measured fluorimetrically, as previously described [25-27]. Our results are in good agreement with previously published reports [11,28], with the greatest affinity being for Cu2+, which is picomolar and an order of magnitude higher than that for Zn2+, and three orders of magnitude higher than that for Co2+, which is nanomolar.

The ICP-MS experiments gave clear and reproducible results. The method required the use of an ammonium-acetate buffer. When exposed to six molar equivalents of metal ions, the protein molecules retained 1.1 ions of zinc, 2.0 ions of copper, 0.3 ions of cobalt, and no manganese after the washing step. The presence of two copper ions per protein molecule indicated the presence of a secondary metal-binding site. Interestingly, only one zinc ion was bound. This seems to indicate either a preference of the secondary site for copper or a kinetically more labile binding of zinc in the inhibitory site (leading to an exchange with the ion bound to the active site, which has already been suggested by molecular simulations [19]). After the washing step, no manganese remained bound to the enzyme, suggesting that manganese may bind only weakly. Surprisingly, cobalt was bound to only a fraction of the protein molecules (30%). Previously, cobalt was shown to activate DPP III [9,11], presumably by binding in the catalytic metal-binding site. Interestingly, when the proteins were supplemented with manganese and cobalt, we were able to detect a considerable amount of zinc in the samples (between 20% and 50%). However, the source of this zinc could not be determined because all the samples were treated with the same procedure and no additional zinc was found in the apo and native protein samples. The simplest explanation for this finding could be that minute amounts of the chelators DPA and EDTA remained in the treated samples and were saturated by the addition of excess metals, allowing the minimal amounts of zinc ions present in the solutions to bind to the protein [29].
Another problem we encountered was the difference in reactivity between the native and apo proteins. The enzyme activity was greatly reduced in the apoprotein compared to the native protein. However, it could not be restored by the reconstitution of the holoprotein. We believe that, in order to successfully and completely remove metal ions, we overtreated the protein, rendering it mostly inactive. On the other hand, the native protein responded in a predictable and reproducible manner to the changes in metal-ion concentration. In the ICP-MS experiments, no differences were observed between the native and apo proteins. Our results indicate that the native protein is de facto an apoprotein, since the metal-ion content of the protein was very low (up to 10%, Table 1), and the stopped-flow experiments showed that the enzyme activity reached its maximum at equimolar concentrations of the metal ion. This suggests that recombinant protein production in E. coli, at least according to the protocol we used, produces a protein that does not contain sufficient amounts of metal-ion cofactors.

The results of the ICP-MS seemed to confirm our suspicion that there is an additional metal-binding site in hDPP III molecules. Since this method could only provide the stoichiometric relationship, we performed calorimetric experiments to obtain thermodynamic data on protein-metal complex formation. The ITC measurements revealed that Co2+ binds to only one binding site on hDPP III, presumably the active site. We were able to determine the Kd value and compare it with the fluorimetrically determined value, which resulted in a very good agreement between the two methods. The stoichiometric values for Zn2+ and Cu2+ (n > 1) indicated the presence of at least two metal-ion-binding sites per protein molecule, which was further supported by the weak but measurable binding of Mn2+ to two binding sites (with non-integer stoichiometric values). The binding of Zn2+ was reproducible in direct and reverse titration, with the largest difference in the stoichiometry, measured as n = 1.5 in the direct and n = 2.5 in the reverse titration. The partial exothermic-to-endothermic transition in the ITC curves also suggests the presence of at least two separate binding sites [30], with the low-affinity binding site directly measured [31]. The results of the immunoassays (see Supplement) suggest that the residual His-tag of the TEV protease, by binding two metal ions [32], may have increased the measured metal-ion content in the reverse titration by up to 20%, theoretically giving 2.4 metal ions per protein molecule, which is very close to the measured value in the reverse titration. The interaction we measured was used to determine the Kd value, which was in the order of 10−7 M. A similar, submicromolar affinity has already been measured for the lower-affinity binding site of zinc in β-lactamases [33,34].
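To illustrate the kind of buffer correction referred to here (and described as Equations (3) and (4) in the Methods), the sketch below applies a simple 1:1 competition factor to an apparent ITC Kd. The association constant, buffer concentration, and apparent Kd are placeholders, and the actual published equations may differ in detail.

```python
# Minimal sketch: correct an apparent ITC Kd for competition by the buffer,
# assuming a single 1:1 metal-buffer complex with association constant K_MB.
# All numbers are placeholders; the study's Equations (3)-(4) may differ in detail.

def corrected_kd(kd_apparent, k_mb_assoc, buffer_conc):
    """Kd_true = Kd_app / (1 + K_MB * [buffer]) for simple 1:1 competition."""
    alpha = 1.0 + k_mb_assoc * buffer_conc
    return kd_apparent / alpha

kd_app = 5.0e-6        # M, hypothetical apparent Kd from the ITC fit
k_mb = 1.0e2           # M^-1, hypothetical metal-buffer association constant
buffer = 0.05          # M (50 mM buffer)

print(f"Corrected Kd ~ {corrected_kd(kd_app, k_mb, buffer):.2e} M")
```

Because the buffer competes for the free metal, the apparent dissociation constant overestimates the true one, and the correction scales it down by the competition factor.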
For Cu2+, the reverse titration did not confirm the direct titration, possibly due to interfering reactions with the buffer or the His-tag. Therefore, we did not consider these data as reliable. However, the results obtained for the binding of Cu2+ to mutants E508D and E316A H568Y (Table S4) and the change in Kd suggest that the interaction measured by direct titration occurred with the protein molecule, in the vicinity of the catalytic-ion-binding site. We speculated that, in the direct titration, we were measuring the binding of Cu2+ to the same additional site as Zn2+. This speculation led us to a Kd value of 10−8 M for this binding interaction, showing a slight preference for Cu2+ over Zn2+ for the additional binding site.

In the crystal structures of the metal-inhibited proteins TML and CPA, the second Zn ion is located near the active Zn ion, with the hydroxide ion bridging these two ions. In TML, the inhibitory Zn ion is additionally coordinated by His, Glu, and Tyr residues, whereas in CPA, it is bound to only one protein residue, Glu. In our previous study [19], we found that the inhibitory Zn ion binds preferentially to E508 (which bridges two metal ions), E316, and, occasionally, to H568 or Y318. In the simulations, we found that the zinc in the catalytic region moved toward the entrance of the interdomain gap in the presence of the second zinc ion at the inhibitory binding site. At the same time, the metal ion from the inhibitory binding site moved to the catalytic center.

In 2016, Lo et al. showed that the replacement of the cognate Zn ions with Cu ions in the anthrax lethal factor (LF) increased the enzymatic activity. The fluorimetric measurements and QM calculations performed in this work showed that Cu binds with higher affinity to DPP III than Zn, and the enzymatic activity of Cu-DPP III is lower than that of Zn-DPP III. The latter observation is consistent with the results of the ITC measurements (through which we measured the binding of Zn and Cu at the additional, inhibitory binding site) and the results of the MD simulations, which predicted the lower mobility of the Cu bound in the inhibitory binding site compared with the Zn. It appears that zinc ions bound to the catalytic site and the inhibitory site are exchanged more frequently than copper ions; the zinc ions ejected from the active site rapidly leave the protein after exchange, while copper ions remain nearby.

The preference of DPP III for zinc is well established, as measured in native and recombinant proteins [35,36]. In general, our data suggest that the binding of transition metals to the hDPP III protein follows the universal Irving-Williams series of complex stabilities [37], Mn2+ < Co2+ << Cu2+ > Zn2+. The correct metallation is achieved through the cellular regulation of available amounts of metal ions [38].
In addition to the additional binding site suggested by the equivalence of the active sites of TML, CPA, and DPP III, which is located between the lower and upper domains of DPP III, the MD simulations showed that the Cu ion can bind with high affinity to the lower domain of DPP III, where it is bound to E316, E329, and a water molecule. The calorimetric measurements revealed different thermodynamic signature plots for the binding of the metal ions to DPP III. While the binding of copper ions is enthalpically driven, the binding of Zn and Co ions is entropically driven. On the other hand, the binding of the manganese ion in the active site is enthalpically controlled, while the binding to the additional site is entropically controlled. In general, different contributions to the Gibbs energy in enthalpy and entropy correspond to different binding modes [39]. There are several potential reasons for these differences, one of which is the difference in affinity of the metal ions for the buffer molecules, with Cu2+ shown to interact more intensely with the buffer components than other metal ions [40]. The other potential reason for this difference is the high affinity of Cu2+ for the binding site in the lower domain of DPP III, which seems to make the protein more rigid than the binding of the second metal near the catalytic site. The residues E316, Y318, E329, E508, and H568, all comprising the additional metal-binding site(s), are conserved among DPP IIIs in eukaryotes and prokaryotes [17], suggesting a possible common mechanism of enzyme-activity regulation in this protease family.

General

The human dipeptidyl-peptidase III (hDPP III) was expressed in E. coli strain BL21-CodonPlus(DE3)-RIL+ and purified by Ni-NTA affinity chromatography and fast protein liquid chromatography (FPLC). Protein concentrations were determined using a microvolume spectrometer, BioDrop (Biochrom, Cambridge, UK), by measuring protein A280 (absorbance at 280 nm), adjusted by the mass-extinction coefficient. All solutions were prepared using mQ ultrapure water. Further processing is described in detail in the following sections.

Bacterial Transformation and Protein Expression

For the purpose of protein expression, E. coli strain BL21-CodonPlus (DE3)-RIL+ (Stratagene, San Diego, CA, USA) was transformed with pET28MHL plasmids containing the gene for hDPP III with a removable His-tag (the original plasmid was a kind gift from Karl Gruber). Transformants were selected on kanamycin plates and grown in overnight bacterial cultures in liquid LB medium supplemented with kanamycin, at a final mass concentration of 100 µg mL−1, at 37 °C and 250 rpm. These cultures were used to inoculate an expression culture of 500 mL in medium of the same composition and under the same conditions. Expression cultures were grown to an optical density at 600 nm (OD600) of ~0.6. After cooling to 18 °C for 30 min, protein expression was induced by the addition of IPTG at a final concentration of 0.25 mM. Expression was continued for 20 h at 18 °C. The bacterial cells were pelleted by centrifugation at 5500 rpm for 20 min and stored at −20 °C until purification.
Site-Directed Mutagenesis

Variants with altered amino acid sequences were prepared using the QuikChange II XL Site-Directed Mutagenesis Kit (Agilent), following the instructions of the manufacturer. The sequences of the mutagenic primers are listed here, with altered bases in lowercase and altered codons underlined. The mutants' full-length gene sequences were determined at Macrogen Europe. Proteins were expressed and purified using the same protocol as the wild type.

Immunoassays/Western Blot

We performed Western-blot assays of the protein samples used for the stopped-flow, ICP-MS, and ITC measurements to confirm that His-tag removal from hDPP III had been performed successfully. One to two micrograms of protein were loaded onto a 12% gel, and SDS-PAGE and transfer to a PVDF membrane were performed according to Laemmli and Towbin [41,42], using Bio-Rad Mini-Protean Tetra Cell and Mini Trans-Blot systems (Bio-Rad, Hercules, CA, USA). The membrane was stained using amido black and blocked using blocking buffer (5% (w/v) powdered milk in Tris-HCl-buffered saline solution with 0.1% (v/v) Tween 20). For detection, we used a 1:5000 dilution of mouse Profinia anti-His (Bio-Rad 620-0203) as the primary antibody and a 1:20,000 dilution of goat anti-mouse IgG (H+L) HRP conjugate (Proteintech SA00001-1, Proteintech, Rosemont, IL, USA) as the secondary antibody. Antibodies were diluted in blocking buffer. The chemiluminescence was produced using Amersham ECL Prime Western Blotting Detection Reagent (Cytiva, Marlborough, MA, USA) and detected on an Alliance Q9 mini (Uvitec, Cambridge, UK), with exposure times from 30 s to 5 min. To quantify the detected bands, we used an internal calibration curve with hDPP III and TEV protease, and analyzed the blots using ImageJ (Bethesda, MD, USA).

Protein Purification

Bacterial pellets were resuspended at 4 °C in lysis buffer (50 mM Tris-HCl, 300 mM NaCl, pH 8.0) with 1 mg mL−1 lysozyme and sonicated to break up the bacterial cells. Lysates were centrifuged at 14,500× g to separate soluble proteins from cell-residue precipitates. The soluble fraction above the precipitate was filtered through a 0.45-µm-pore filter before application to a Ni-NTA column. The volume of the column was 6 mL for about 50 mL of bacterial cell lysate. The lysate was applied to the column (equilibrated in lysis buffer) at a flow rate of 0.5 mL min−1, followed by washing at a flow rate of 1.0 mL min−1 with wash buffer (lysis buffer with the addition of 20 mM imidazole), and the proteins were finally eluted in buffer with 300 mM imidazole (50 mM Tris-HCl, 300 mM NaCl, 300 mM imidazole, pH 8.0). Protein concentration was determined with the BioDrop instrument by measuring protein absorbance at 280 nm. The His-tag used for affinity chromatography was removed by TEV protease. Further purification was performed by gel-filtration chromatography on an FPLC ÄKTA protein-chromatography system (Pharmacia, NJ, USA), using a Superdex S200 16/60 column. The collected protein fractions were analyzed by SDS-PAGE. The gels were stained with Coomassie Brilliant Blue R-250. Protein aliquots were stored at −80 °C.
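As a small illustration of the internal-calibration idea used for band quantification in the immunoassay subsection above, the sketch below fits the integrated intensities of standards with known loaded amounts to a straight line and back-calculates an unknown band. All numbers are placeholders, and this is not the authors' ImageJ workflow.

```python
# Minimal sketch: quantify a band from a densitometry calibration curve.
# Intensities and loaded amounts are illustrative placeholders only.
import numpy as np

loaded_ng = np.array([5.0, 10.0, 25.0, 50.0, 100.0])       # known standards (ng)
intensity = np.array([1200, 2300, 5900, 11800, 23500])     # integrated band signal

slope, intercept = np.polyfit(loaded_ng, intensity, 1)      # linear calibration

unknown_intensity = 4100.0
estimated_ng = (unknown_intensity - intercept) / slope
print(f"Estimated amount in unknown band: {estimated_ng:.1f} ng")
```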
Preparation of Apo hDPP III

Metal-free apoprotein was prepared from wild-type hDPP III. The wt hDPP III was dialyzed in dialysis vials or dialysis tubing (10 kDa MWCO) for 24 h at room temperature in pH 7.4 buffer containing 25 mM ammonium acetate, 10 mM ethylenediaminetetraacetic acid (EDTA), and 1 mM dipicolinic acid (DPA). The removal of excess EDTA and DPA was performed through sequential washing with 25 mM ammonium acetate pH 7.4, and the protein sample was concentrated by filtration on Amicon Ultra 15 (30 kDa MWCO) columns (Merck).

Preparation of Holoprotein

Apoprotein (approximately 6 µM protein) was incubated for 1 h with 36 µM solutions of the metal nitrates of Zn2+, Cu2+, Co2+, and Mn2+ in 25 mM ammonium acetate buffer, pH 7.4, after which the excess metal was immediately removed by washing with the same ammonium acetate buffer on an Amicon Ultra 15 (30 K) column, using centrifugation. Final protein-sample concentrations were determined as described above and diluted to a mass concentration of 0.05 mg mL−1 with the washing buffer. The exchange of zinc and other metals was monitored by fluorescence spectrophotometry on a stopped-flow instrument (SX20 stopped-flow spectrometer, Applied Photophysics, Beverly, MA, USA) using two syringes. One syringe was filled with WT protein at a concentration of 10 nM and metal-nitrate solution, prepared from a stock solution with a concentration of 15 mM and diluted in 40 mM Tris-HCl buffer, pH 7.5. The second syringe was filled with substrate (200 µM Arg-Arg-2NA). The reference reaction contained 200 nM protein and 200 µM substrate. All measurements were performed at room temperature at a single wavelength, 332 nm. Pro-Data Viewer analysis software v4.2.5, supplied by the manufacturer, was used for data analysis. The dissociation constants of the protein with Zn, Cu, and Co dications were estimated by assessing the activity of the enzyme with metal nitrates in 20 mM Tris-HCl buffer, pH 7.4, with DPA serving as the chelator. Experiments were performed on a Cary Eclipse Fluorescence Spectrophotometer (Agilent Technology, Santa Clara, CA, USA), which measures the release of β-naphthylamine upon cleavage of the synthetic substrate Arg-Arg-2NA. All experiments were performed under the same conditions, at room temperature, at an excitation wavelength of 332 nm and an emission wavelength of 420 nm, for 60 s of reaction time. The experiment was performed with aliquots of 10 nM hDPP III enzyme, 0.8 mM substrate Arg-Arg-2NA, 10 mM DPA chelator, and standard metal-nitrate solutions.

4.1.11. Analysis of Metal-Ion Content by ICP-MS

Following the described protocol for the preparation of apo and holoproteins, the metal content was detected using the triple-quadrupole Agilent 8800 (Agilent Technologies, Tokyo, Japan) ICP-MS instrument. Prior to analysis, samples were diluted 4-fold with an alkaline diluent solution containing 0.7 mM NH3, 0.01 mM EDTA, 0.07% (v/v) TX-100, and 3 µg L−1 of internal standards (Ge, Rh, Tb, Lu, and Ir) (SCP Science, Baie D'Urfé, QC, Canada). Matrix-matched calibration was used for the quantification of Zn, Cu, Co, and Mn concentrations (multielement calibration curve made from single-element standards from SCP Science). A calibration curve for S was prepared separately by diluting working S standards (Inorganic Ventures, Christiansburg, VA, USA) with the diluent solution.
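To show how a metal-to-protein molar ratio of the kind reported in Table 1 can be derived from ICP-MS readings, the sketch below converts measured metal and sulfur mass concentrations into molar amounts, using sulfur as a proxy for the protein via its number of sulfur-containing residues. The concentrations and the sulfur count per hDPP III molecule are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: metal-to-protein molar ratio from ICP-MS metal and sulfur readings.
# All numeric inputs below are hypothetical placeholders.

ATOMIC_MASS = {"Zn": 65.38, "Cu": 63.55, "Co": 58.93, "Mn": 54.94, "S": 32.06}  # g/mol

def metal_per_protein(metal_ug_per_l, metal, sulfur_ug_per_l, s_atoms_per_protein):
    """Moles of metal per mole of protein, with protein quantified via sulfur."""
    metal_molar = metal_ug_per_l / ATOMIC_MASS[metal]            # proportional to mol/L
    protein_molar = (sulfur_ug_per_l / ATOMIC_MASS["S"]) / s_atoms_per_protein
    return metal_molar / protein_molar

# Hypothetical readings for a copper-loaded sample:
ratio = metal_per_protein(metal_ug_per_l=25.0, metal="Cu",
                          sulfur_ug_per_l=65.0, s_atoms_per_protein=10)
print(f"Cu per protein molecule: {ratio:.2f}")
```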
The accuracies of the measurements were checked using commercially available reference materials: ClinChek® Serum Controls (Level I and II) (Recipe, Munich, Germany) and Seronorm™ Serum (Level I and II) (Sero AS, Billingstad, Norway), prepared by a 20-fold dilution of the reconstituted freeze-dried reference material with the diluent solution. The analyzed elements in the reference biological samples were within ±9% of the certified values.

4.1.12. Isothermal Titration Calorimetry (ITC)

Isothermal titration calorimetry (ITC) experiments were performed on a Malvern PEAQ-ITC microcalorimeter (MicroCal, Inc., Northampton, MA, USA). Experiments were performed in 50 mM sodium cacodylate, pH 7.4, and 50 mM MOPS-NaOH buffer, pH 7.4, at 25 °C. All standard solutions of metal ions were nitrates. The protein was dialyzed, and the metal salts were dissolved, in the same buffer solution, which was also used for further dilutions. For direct titration, the protein solution (20-40 µM) was in the cell (200 µL) and the metal solution (200-400 µM) was in the syringe (40 µL). For reverse titrations, the protein solution was concentrated using Amicon concentration devices (120-500 µM) and loaded in the syringe, while the metal solution (10-60 µM) was in the cell. All experiments were performed under the same conditions: temperature 25 °C, reference power 30.0 µW, high feedback, stirring speed 500 rpm, spacing 150 s, and an initial delay of 60 s to allow equilibration. Experiments to correct for the heat of dilution (buffer-buffer, peptide-buffer, buffer-protein) were performed for all experiments. During analyses, all control experiments were subtracted from every binding experiment. The MicroCal PEAQ-ITC analysis software v1.30, supplied by the manufacturer, was used for data analysis. "One set of sites" and "two sets of sites" fitting models were used to find the best fit for the experimental data. All parameters are presented as the average value and standard deviation of at least two, and mostly three or four, measurements. Apparent Kd values were corrected for metal-buffer interactions [43] using Equations (3) and (4). The K_MB values were obtained from previous works [43-45], and those not measured at pH 7.4 were recalculated using buffer pKa values, as described previously [44].

Quantum Mechanical Calculations (QM)

The 3D structures of 14 different complexes of varying composition, representing models of the active site of the enzyme hDPP III with different metal ions (Zn2+, Cu2+, Co2+, and Mn2+), were optimized by density functional theory (DFT) calculations in combination with the density-based solvation model (SMD) [46], as implemented in the program Gaussian 09. To allow comparison, all structures were also optimized in vacuum.
Model preparation

The closed structure of the ligand-free hDPP III (PDB id 5EGY) served as a template. In terms of complexity, four different models were investigated: three mononuclear binding sites and one binuclear binding site. The simplest model contained only the metal ion, one water molecule, and the side chains of the amino acid residues of the first coordination sphere. The more complex model also contained the amino acid residues of the second coordination sphere, and the most complex model additionally contained two further water molecules (see Figure 11). In the experimental structure, the zinc ion is coordinated by H450, H455, E508, and a water molecule. E507 and E512 belong to the second coordination shell of the metal ion and stabilize H450 and H455 from the first coordination shell. Thus, the simplest model (model 1, Figure 11a) involves a metal ion in the active site coordinated by amino acids H450, H455, and E508, as well as a water molecule. The other two models include one metal in the active site; amino acids H450, H455, E507, E508, and E512; and one (model 2, Figure 11b) or three (model 3, Figure 11c) water molecules. The binuclear binding site (model 4, Figure 11d) was constructed from the QM part of the optimized QM/MM structure (Figure S1) in our previous work [19] and included the amino acid residues coordinating the catalytic metal ion (H450, H455, and E508) and the inhibitory metal ion (E508, H568, Y318, and E316), as well as E451. The carboxyl group of E508 bridged the metal ions.

Details of QM calculations

The geometry of the constructed models was optimized using the DFT method in combination with the density-based solvation model (SMD) with a dielectric constant (ε) of 4, which was used to simulate the protein environment of the metal ions. The use of DFT calculations has been shown to reliably reproduce the geometries of biological systems, as well as the thermodynamic data associated with their transformation [47-49]. The calculations were carried out using the Gaussian 09 suite of programs [50], employing the unrestricted B3LYP functional. B3LYP uses the non-local correlation functional expressed by Lee, Yang, and Parr [51] and a hybrid three-parameter exchange functional devised by Becke [52]. All calculations were performed with a double-ζ basis set, 6-31G(3d,p). In this way, the electronic energies, Eel, of the optimized systems were obtained. According to previous studies, the B3LYP/6-31G(3d,p) level of theory represents a good balance between accuracy and computational resources for obtaining the necessary structural and thermodynamic parameters for systems representing biological systems with metal dications, such as Zn2+, Cu2+, Co2+, and Mn2+, and for metal-ion hydration [48,53-57]. Each metal complex was optimized in the gas phase, with methyl groups capping the models of the constrained amino acid residues.
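For orientation, the gas-phase Gibbs energies used in the frequency-calculation and exchange-energy steps described in the next paragraph are typically assembled as shown below; this is a generic sketch, not the authors' exact Equations (5)-(7).

```latex
% Generic assembly of the Gibbs energy of each optimized complex
% (illustrative; the paper's Equations (5)-(7) may differ in detail):
G = E_{\mathrm{el}} + E_{\mathrm{th}} - T\,S, \qquad T = 298.15\ \mathrm{K},\ p = 1\ \mathrm{atm}
```

The exchange Gibbs energy is then obtained as the difference of such G values between the metal bound in the model site and the metal in the solvent.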
Frequency calculations were performed at the same B3LYP/6-31G(3d,p) level of theory in order to confirm that each minimized structure represented a true local minimum on the potential-energy surface of the respective metal complex. No imaginary frequencies were found for any of the structures studied.

The differences in Eel, Eth, and S between the metal in the active enzyme site and in the solvent, entering Equation (5), were used to evaluate the metal-exchange Gibbs energy in the gas phase, ∆G, at T = 298.15 K and 1 atm. The relative affinity of different metals towards the DPP III active site was calculated using Equation (6) and, for systems in vacuum, Equation (7), where M1 is the cognate Zn2+ and M2 is either Cu2+, Co2+, or Mn2+.

Molecular-Dynamics Simulations (MD)

DPP III Structures with Di-Copper Sites and System Preparations. By analogy with the di-zinc structures of DPP III, we considered three modes of binding of Cu ions to DPP III. In all of them, Cu1 is located at the position of the catalytic metal ion and is coordinated to H450, H455, and E508. In the DPP III structure with metal ions bound in so-called mode 1 (structure SM1), Cu2 is coordinated to Y318, H568, and E508, which bridges Cu1 and Cu2, while in mode 2 (structure SM2), Cu2 is coordinated to E316 instead of Y318. In mode 1′ (structure SM1′), the Cu ions are coordinated with the same amino acid residues as in mode 1, but Cu1 and Cu2 are bridged by OH− in addition to E508 (Figure S10).

System Preparations

For molecular modeling, the protonation of the charged residues and histidines was adjusted to a pH of about 7.5, as expected under physiological conditions. Thus, the arginine and lysine residues were positively charged in our models, whereas the glutamate and aspartate residues were negatively charged, with the exception of E451, which was neutral according to our previous results on di-zinc DPP III. The histidines were neutral, and the position of the hydrogen atoms on the imidazole ring was chosen according to their ability to form hydrogen bonds with neighboring amino acid residues or to coordinate a metal ion. The protein was parameterized using the ff19SB force field [58] and the standard nonbonded parameters for Cu2+ [59] available within the AMBER suite of programs. The system was solvated using a truncated octahedron of TIP3P water molecules [60]. The distance of the molecular surface from the box was at least 11 Å. Na+ ions were added to achieve electroneutrality. All MD simulations were performed using the AMBER20 suite of programs [61].

Classical MD simulations

Prior to the productive MD simulations, the systems were optimized in three cycles with different constraints. In the first cycle (1500 minimization steps), aimed at relaxing the solvent molecules, the protein and zinc ion were constrained by a harmonic potential with a force constant of 32 kcal mol−1 Å−1. In the second cycle (3500 minimization steps), only the protein backbone was constrained, with a force constant of 12 kcal mol−1 Å−1, while the entire system was minimized in the third cycle (2500 minimization steps) without additional constraints. The systems were heated from 0 to 300 K in three steps, from 0-100 K, from 100-200 K, and from 200-300 K, during 50 ps. This was followed by a 3 ns density equilibration at 300 K. A time step of 0.5 fs was used for the heating simulations and 1 fs for the equilibration simulations.
In the productive MD simulations, we used the SHAKE algorithm [62] and a time step of 2 fs. During heating, the NVT ensemble was used, while the equilibration and production MDs were performed with the NPT ensemble, with a cutoff value of 11 Å. During the simulations, the temperature was controlled using the Langevin thermostat [63], with a time interval between temperature rescaling of 0.5 ps during heating and density equilibration and of 1 ps during the MD simulations. Pressure was controlled using the Berendsen barostat [64] with a relaxation time of 1.0 ps. A total of 4 µs of productive classical MD simulations were performed for the various di-copper DPP III structures.

Data analysis

Calculations of geometry parameters (RMSD, radius of gyration, RMSF, and metal-ion coordination) and the analysis of linear interaction energies (LIE) were performed using the cpptraj module [65] of the AmberTools20 program package. Figures were generated using PyMOL (PyMOL Molecular Graphics System, version 1.5.0.4, Schrödinger LLC, New York, NY, USA).

Conclusions

In this work, we demonstrated the binding of Zn2+ and Cu2+ metal ions to an additional metal-binding site of the hDPP III molecule. Under the conditions used for the ICP-MS experiments, two Cu2+ ions remained bound to the protein. Using ITC, the binding of Zn2+ to the additional binding site was confirmed in direct and reverse titrations, and the affinity of the interaction was quantified as a Kd value of the order of 10−7 M. The QM and molecular-mechanics calculations also revealed the existence of an additional, high-affinity binding site for copper and zinc ions near the catalytic-metal-ion-binding site. On the other hand, both the experimental (fluorescence) and the computational method (QM calculations) showed a higher affinity of the copper ion than the zinc ion for the active binding site. Furthermore, the MD simulations showed that Cu and Zn ions can exchange their positions at the catalytic and additional binding sites, suggesting that when two zinc ions bind, one of them leaves the protein more frequently than when two Cu ions bind to hDPP III. Thus, the additional (inhibitory) binding site was biochemically confirmed by experimental and computational methods; however, the physiological relevance of our results might be questioned. Overall, we conclude that we collected sufficient data to support our hypothesis that DPP III has an additional metal-ion-binding site, whose affinity for zinc is four orders of magnitude lower than that of the active site. The location of this additional site is similar to that identified in other distantly related proteases, suggesting a common mechanism of regulation of enzyme activity by excess metal ions.
Figure 1. Relative enzyme activity as a function of metal concentration. Using a stopped-flow setup with two syringes, the native protein (10 nM) was incubated with metal solutions (Zn2+, Cu2+, Co2+, and Mn2+) in Tris-HCl buffer (40 mM, pH 7.5) and rapidly mixed with the substrate (final concentration of 200 µM), and the progress of the reaction was monitored at 332 nm. Inset: Close-up view of data points at equimolar metal and protein concentrations.

Figure 2. Determination of the dissociation constant of native DPP III. DPP III (10 nM) in 20 mM Tris-HCl buffer pH 7.4 was exposed to metals (Zn2+, Cu2+, and Co2+, 10 µM). An excess of DPA was used to achieve the free metal-ion concentrations indicated in the figure.

Figure 4. Exchange of Cu1 (red) and Cu2 (black) positions during the simulation of the SM1-1 replica of the SM1 structure (hDPP III with Cu ions bound in mode 1), shown according to their distance from H450 (a) and H455 (b).

Figure 5. Coordination of Cu1 (light violet) with Cu2 (orange) in the SM1 structure: initial, represented as white sticks, and after the equilibration (blue sticks).

Figure 7. Coordination of Cu1 (light purple) and Cu2 (orange) with protein residues in the structure obtained after 3 ns (a) and 20 ns (b) of MD simulations of the SM1 structure at room temperature, replica SM1-1 (the water molecules are not shown, for clarity).

Figure 8. Coordination of Cu1 and Cu2 (both metal ions are represented as orange spheres) in the final structure obtained after 500 ns of MD simulations at room temperature, replica SM1-2.

Figure 11. The models used in this study. Models of the catalytic-metal-ion-binding site in hDPP III are shown in (a-c), and the model of hDPP III with two bound metal ions is shown in (d). The smallest model 1 (a) comprises only the first coordination sphere of the metal ion (the metal ion coordinated with three amino acids and one water molecule). The models shown in (b,c) also include the amino acids from the second coordination sphere, i.e., five amino acids in total, and they differ only in the number of water molecules: model 2 (b) has one and model 3 (c) has three water molecules. In all structures (a-c), the position of the metal ion M2+ (M represents Zn2+, Cu2+, Co2+, or Mn2+) is indicated by a sphere, and the amino acids and water molecules are shown as sticks. In the bimetallic model 4 (d), the catalytically active metal ion is indicated by MA and the inhibitory ion by MI. Distances are in Å.

Table 1. Number of metal ions relative to protein molecules (N) measured by inductively coupled plasma mass spectrometry using hDPP III holoenzymes. The results are shown as the average and standard deviation of two measurements.

Table 3. Dissociation constants for the binding of metal ions to DPP III, determined using ITC; apparent values were corrected for the interaction of metal ions with the buffer.
Level of Knowledge on Stroke and Associated Factors: A Cross-Sectional Study at Primary Health Care Centers in Morocco

Background: Stroke is increasingly becoming a major cause of disability and mortality. However, it can be prevented by raising awareness about risk factors and early health care management of patients. Objective: The aim of this study is to assess the level of knowledge on stroke, its risk factors, and warning signs in the population attending urban primary health care centers in the city of Agadir, Morocco. Methods: This is a multicentric cross-sectional study with a descriptive and analytical purpose. The study was conducted at five urban primary health care centers in Agadir in central-west Morocco. All persons over the age of 18 years who consulted the health centers and who agreed to fill in the questionnaire were recruited, except for the foreign population and health workers. An interview questionnaire was used to assess the level of knowledge on stroke. Findings: A total of 469 participants were involved in the study. The median knowledge score was 8 (interquartile range 4-13). High blood pressure (55.7%) and depression and stress (48.8%) were the most well-known risk factors. Sudden weakness of the face, arms or legs (37.3%) was the main warning sign cited by the participants. Multivariate analysis revealed that illiteracy (OR 1.92; 95% CI: 1.08-3.44), primary education (OR 3.43; 95% CI: 1.63-7.21), rural residence (OR 1.67; 95% CI: 1.07-2.59), no history of stroke among respondents (OR 16.41; 95% CI: 4.37-61.59), and no history of stroke among relatives, acquaintances, or neighbors (OR 4.42; 95% CI: 2.81-6.96) were independently associated with a lower level of knowledge of stroke (Table 4). Conclusions: The low level of knowledge on stroke among this Moroccan population indicates the importance of implementing stroke education initiatives in the community. More specifically, proximity education and awareness programs ought to be considered to anchor preventive lifestyle behaviors along with appropriate and urgent actions regarding the warning signs of stroke.

Previous studies have confirmed the persistence of a low level of knowledge among the general public about stroke, and more specifically about its risk factors and warning signs [10-13]. In Morocco, no previous study has been published exploring the level of knowledge of the Moroccan population about stroke. For this reason, the present investigation represents a first attempt in Morocco to assess the level of knowledge about stroke, as well as the factors associated with it, among people attending health centers belonging to the network of primary health care centers in Agadir in central-western Morocco.

Design and study area

This study involved a cross-sectional survey with a descriptive and analytical aim, conducted in five urban primary health care centers in Agadir, in the Souss Massa region in the center-west of Morocco. Agadir Ida-Outanane province is located in central-western Morocco. It covers an area of 2297 km2, with a total population of 600,599 inhabitants [14].

Inclusion and Exclusion Criteria

Participants aged 18 and over (patients, patients' companions, and visitors) attending the urban primary health care centers selected for the study to benefit from preventive or curative care were included. The foreign population (non-Moroccan) and health workers were excluded.
Sample and recruitment of study participants

The sample size was calculated based on a 5.0% margin of error, a 95% confidence interval (CI) for a total population of 600,599 inhabitants in the province of Agadir Ida-Outanane [14], and an anticipated population proportion of stroke knowledge deficiency of 50%. The calculation was carried out on the website of the sample size calculator OpenEpi [15]. The minimal sample size required for the study was 385 persons. With an assumed response rate of 75%, a sample size of around 469 participants was included. The sample (n = 469) was distributed over the five urban primary health care centers based on the percentage of the population served by each center relative to the total population served by the five urban primary health care centers selected for the study [16]. The sample selected for each urban primary health care center is presented in Table 1. In each urban primary health care center, respondents were chosen at random and asked for their approval to participate in the study.

Instrument and Data Collection

A face-to-face questionnaire survey was used for data collection, with the first part including sections reserved for socio-demographic characteristics (age, sex, marital status, level of education, spoken languages, place of residence, socioeconomic level [family income]), professional occupation (according to the classification of the High Commission for Planning of the Kingdom of Morocco), health insurance, body mass index, regular physical exercise, medical history and associated comorbidities (high blood pressure, diabetes, hypercholesterolemia, cardiac disease, history of stroke in the respondent or immediate family, and history of stroke in relatives, acquaintances, or neighbors), and toxic habits (smoking, alcohol consumption). In addition, a second part includes questions exploring general knowledge about stroke, its risk factors, and the warning signs of a stroke. Patients were asked to identify risk factors and warning signs. For this survey, the risk factors of stroke were derived from the list established through the INTERSTROKE study [17]. Therefore, high blood pressure, diabetes, smoking, hypercholesterolemia, sedentary lifestyle, obesity, cardiac disease, unhealthy diet, oral contraceptive use, excessive alcohol consumption, previous stroke, and family history of stroke were the selected risk factors of stroke. The warning signs were shown to participants in a list format and were derived from Schneider et al.'s US survey [18]. These included sudden numbness or weakness in the face, arm, or leg; sudden confusion, trouble speaking or understanding others; sudden poor vision in one or both eyes; sudden dizziness, difficulty walking, or loss of balance; and sudden headache with no known cause. Twenty-two questions were used to assess the respondents' level of knowledge on stroke. The first component focused on generalities about stroke (4 questions), a second related specifically to risk factors for stroke (13 questions), and a third concentrated on warning signs of stroke (5 questions). One point was awarded for each correct answer given, and zero for any other answer. The sum of all points obtained was converted into a knowledge score of up to 22 points.
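As a rough check of the figures quoted at the start of this section, the sketch below reproduces a standard sample-size formula (Cochran's formula with a finite-population correction) for a 50% anticipated proportion, 5% margin of error, and 95% confidence, and then allocates a total sample across centers proportionally. The OpenEpi output may differ slightly in rounding, and the per-center weights are hypothetical placeholders.

```python
# Minimal sketch of the sample-size calculation described above (not the OpenEpi tool itself).
import math

def sample_size(population, p=0.5, margin=0.05, z=1.96):
    """Cochran's formula with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

n_min = sample_size(600_599)            # minimal required sample
print(f"Minimal sample size: {n_min}")  # close to the 385 reported in the text

# Proportional allocation of the final sample across the five centers
# (weights are hypothetical population shares per center, not study data).
total = 469
weights = [0.28, 0.24, 0.20, 0.16, 0.12]
allocation = [round(total * w) for w in weights]
print(allocation)
```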
Two groups were generated using the K-means clustering method: a group with a high level of knowledge (n = 205 persons) and an average knowledge score of 15, and another group with a low level of knowledge (n = 264 persons) and an average knowledge score of 4. Data management and statistical analysis The qualitative variables were presented as frequency and percentages, with mean ± standard deviation (SD) or median (interquartile range, IQR) for quantitative variables. The Chi-square test (χ 2 ) or Fisher's exact test, were performed according to their particular application conditions, to look for differences in proportions of categorical variables between two groups (group of respondents with a low level of knowledge on stroke and those with a high level of knowledge on stroke). Furthermore, univariate and multivariate logistic regression analyses were conducted to identify factors associated with the low level of stroke knowledge in the study population. All independent variables with a p-value <0.25 in the univariate analysis were taken into account in the multivariate logistic regression analysis. P val ues <0.05 were considered to indicate statistical significance. Data management and statistical analysis was done using the SPSS for Windows software package (ver. 13.0; SPSS Inc., Chicago, IL, USA). Ethics approval and consent to participate The study has been approved by the ethics committee for biomedical research of the Mohammed V Faculty of Medicine and Pharmacy in Rabat (N/R: Folder Number 18/20), and informed consent was obtained from each subject. Sociodemographic and clinical characteristics of the study sample A total of 469 participants were surveyed in the study. The population consisted of 190 men (40.5%) and 279 women (59.5%) with an M/F ratio of 0.68. The average age was 38.86 ± 17.01 years with extremes of (18-87) years. The median age was 35 years with an IQR of (23-51). High blood pressure was reported in 143 persons or 30.5% of the study population, diabetes in 126 or 26.9%, dyslipidemia in 40 or 8.5%, cardiopathy in 38 or 8.1%. A history of stroke was reported in 21 respondents, or 4.5%. A history of stroke was found in immediate family in 129 (27.5%). Two hundred and seventy-six persons (n = 276), or 58.8%, had a history of stroke among relatives, acquaintances, or neighbors ( Table 2). General knowledge on stroke, risk factors and warning signs of stroke Concerning study participants' knowledge regarding generalities on stroke, 78.3% of respondents reported that stroke is a preventable disease, 78.7% indicated that stroke is a curable disease, and 94.5% reported stroke as a pathology requiring urgent managerial actions. Furthermore, approximately 86.6% considered stroke a disabling disease. Regarding the population's knowledge of stroke risk factors, high blood pressure was the most reported risk factor for stroke among the respondents at 55.7% followed by depression and stress at 48.8%, previous history of stroke with 37.1%, and smoking at 36.5%. For warning signs, sudden numbness or weakness in face, arm or leg was mentioned by 37.3%. Similarly, sudden dizziness, difficulty walking or losses of balance, or coordination problems were mentioned by 34.5% of the surveyed population ( Table 3). Level of knowledge on stroke among the study population The average knowledge score was 8.87 ± 5.76. The median knowledge score was 8 (IQR 4-13). 
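The grouping and regression steps described above can be sketched as follows in Python with scikit-learn and statsmodels. The knowledge scores and the covariates used here are synthetic placeholders, since the study data are not reproduced in this text; only the workflow (K-means split into two knowledge groups, then a logistic model reporting odds ratios with 95% confidence intervals) mirrors the methods described.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Synthetic stand-in for the 469 knowledge scores (0-22 points).
    scores = rng.integers(0, 23, size=469).reshape(-1, 1)

    # Split respondents into two groups by K-means, as in the study.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    low_cluster = int(np.argmin([scores[labels == k].mean() for k in (0, 1)]))
    low_knowledge = (labels == low_cluster).astype(int)
    print("low-knowledge group size:", low_knowledge.sum())

    # Multivariate logistic regression on hypothetical binary covariates,
    # reporting odds ratios with 95% confidence intervals.
    df = pd.DataFrame({
        "low_knowledge": low_knowledge,
        "illiterate": rng.integers(0, 2, 469),              # placeholder covariates
        "rural_residence": rng.integers(0, 2, 469),
        "no_personal_stroke_history": rng.integers(0, 2, 469),
    })
    X = sm.add_constant(df.drop(columns="low_knowledge").astype(float))
    fit = sm.Logit(df["low_knowledge"], X).fit(disp=False)
    ci = np.exp(fit.conf_int())
    ors = pd.DataFrame({"OR": np.exp(fit.params), "CI_low": ci[0], "CI_high": ci[1]})
    print(ors.drop(index="const"))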
For the socio-demographic variables, there was a significant difference between the low-level and high-level knowledge groups according to age (p = 0.0085), level of education (p < 0.001), spoken languages (p = 0.003), place of residence (p = 0.003), health insurance (p < 0.001) and professional occupation (p = 0.047). Concerning clinical characteristics, a significant difference was found between the low-level and high-level knowledge groups based on obesity or overweight (p = 0.027), cardiac disease as an associated comorbidity (p = 0.048), regular physical exercise (p = 0.001), history of stroke among the respondents (p < 0.001), history of stroke in the immediate family (p < 0.001) and history of stroke among relatives, acquaintances, or neighbors (p < 0.001). As for toxic habits, a significant difference between the two groups was reported only in relation to alcoholism (p = 0.020). There was no significant difference between the group with a low level of knowledge on stroke and the group with a high level of knowledge based on the other associated comorbidities and toxic habits in the population surveyed (diabetes, high blood pressure, dyslipidemia, smoking; p > 0.05) (Table 2). Factors associated with low-level stroke knowledge among the study population In the univariate logistic regression analysis, variables with a p-value <0.25 were retained for the multivariate model. After introducing the following variables into the multivariate regression model: age, sex, education level, place of residence, socioeconomic level, health insurance, professional occupation, obesity or overweight, hypercholesterolemia as an associated comorbidity, cardiac disease as an associated comorbidity, smoking, alcoholism, regular physical exercise, no history of stroke among the respondents, history of stroke in the immediate family, and no history of stroke among relatives, acquaintances, or neighbors, the following factors were significantly associated with a lower level of knowledge on stroke: illiteracy, primary education, rural residence, no personal history of stroke, and no history of stroke among relatives, acquaintances, or neighbors (Table 5). Discussion In this study, more than three-quarters of the population were aware of the preventable and urgent nature of stroke. These results are similar to those found in previous studies [19,20]. Also, the majority of respondents mentioned that stroke is a disabling disease, which is consistent with the results of a study of Arab-Muslim Israelis highlighting that stroke is always associated with physical burden, disability, and dependence [11]. As for respondents' knowledge of risk factors for stroke, this study found that high blood pressure, depression and stress were the most well-known risk factors, each cited by close to 50% of respondents. This is similar to the results of a wide range of studies conducted in several countries [11-13, 19, 21-25]. A remarkable lack of knowledge of the risk factors for stroke, especially the most well-known and classic ones, was detected in our context: two-thirds of respondents did not recognize diabetes or hypercholesterolemia as risk factors for stroke, and almost half of the population did not recognize high blood pressure as a risk factor. These results could be explained by the limited and insufficient access of the Moroccan population to services related to the diagnosis, treatment, and control of noncommunicable diseases provided in primary health care centers.
Additionally, a significant segment of the population uses unconventional and traditional medicine, which would limit their chances to be educated about and raise awareness of risk factors [26]. Moreover, the majority of the surveyed participants showed an unsatisfying level of awareness regarding warning signs of a stroke. This result could be explained in macroscopic context, by the lack of mass education and awareness campaigns for the benefit of the general public. A few campaigns are occasionally organized on World Stroke Day, usually in cities where a university hospital is based, in which the acronym FAST (F: Face, A: Arm, S: Speech, T: Time) is adapted in dialectal Arabic language for use in the awareness campaign educational materials. Additionally, this low level of warning sign recognition could be linked at the microscopic level to the lack of individualized awareness sessions at the first signs suggestive of stroke, which would benefit the Moroccan population and, more specifically, people at cardiovascular risk in the context of medical consultations. This lack of knowledge of the warning signs of stroke potentially impacts on the early use of specialized hospital centers for possible management of stroke patients. This finding was missed in a recent Moroccan systematic review study [6]. To address this concern, the High Authority of Health in France recommended that the treating physician inform patients at risk (vascular history, high blood pressure, diabetes, arteriopathy of the lower limbs, and so on), as well as their entourage, about the main signs of stroke to contribute to rapid access to neurovascular units [27]. Overall, this study revealed there is clearly a poor level of knowledge in the population surveyed about stroke. This is identical to the findings in several countries around the world [10][11][12]28]. However, other investigations have shown a good level of knowledge about this disabling disease [19,29]. In this regard, the variability in the level of knowledge of the population regarding stroke in studies is the expression of a phenomenon whose determinants are multiple. The present study revealed in the multivariate logistic regression analysis that illiteracy, primary school, rural residential, no history of stroke among the respondent and no history of stroke among relatives, acquaintances, or neighbors were independently associated with a lower level of knowledge about stroke. The low level of education has been associated in the Moroccan context with a poor level of knowledge, the Souss Massa region illiteracy rate (33.1%) being slightly higher than the 2018 national rate (32.2%) reported by the High Commission for Planning of Morocco [30]. This is consistent with the results of a range of studies in which low education level has been the factor most associated with a low level of knowledge in the population surveyed about stroke [19,[31][32][33]. Similarly, other investigations have confirmed an association between a higher level of education and a good state of knowledge [12,[34][35][36]. As for place of residence and its relationship to the level of awareness of the surveyed population, this could be explained by access to healthy lifestyle advice for the population living in urban areas, unlike that of rural areas, confirmed recently by the results of the national survey on common risk factors for non-communicable diseases [26]. 
On the other hand, there was a study conducted in Mexico, which suggested that due to the increased frequency of awareness and information campaigns in rural areas and due to the consolidated "physician-patient" relationship in rural primary health care centers that more preventive education on common cardiovascular disorders, such as stroke, may be found [37]. Moreover, as a result of a first stroke, the risk of a new incident increases considerably. These recurrent strokes account for 25-30% of all strokes as a result of the failure of secondary prevention, and they are probably more disabling and more likely to be fatal than initial strokes [38,39]. Since the state of knowledge among stroke survivors is of crucial importance in the secondary prevention of recurrent strokes, it has been demonstrated, in present investigation, that a personal history of stroke is a protective factor against a low level of knowledge. This result is similar to that found in several investigations [36,[40][41][42], while other studies have shown the persistence of a low level of knowledge in patients surviving after a stroke [43][44][45][46]. Similarly, a case-control study has found that the level of knowledge in patients after a stroke or transient ischemic accident was low compared to randomly select healthy individuals [47]. This could be explained by the individualized information and awareness sessions conducted in the hospital setting by health professionals involved in the management of stroke patients, which generates an accumulation of knowledge related to the disease throughout the care pathways. Presumably, it could be the consequence of anxiety about the risk of having another stroke, which develops a curiosity in patients to know additional details concerning the disease, especially those for whom the unexpected occurrence of the stroke induces an anxious anticipatory state [48]. The no history of stroke among relatives, acquaintances, or neighbors is found to be a risk factor for a low level of knowledge. This result is probably due to the consolidated interpersonal and social relations with patients in the Moroccan community during visits. Bolstering this result is a French study which has demonstrated the importance of interpersonal contact in the dissemination of medical information and, more specifically, information about stroke [49]. In another study, a parent was shown to be the primary source of knowledge. To this end, the education of a single person within a family could play a crucial role in raising public awareness of stroke [10]. This study has several limitations. The location of the study constitutes the first constraint, which has focused exclusively on people attending urban primary health care centers despite the recruitment of rural residents with a percentage close to 50%. Another limitation is related to the cross-sectional nature of the study, which reflects only the current level of knowledge of the population surveyed and does not take into account changes over time. Additionally, the adoption of questions about risk factors and warning signs in the list format may result in an overestimation of the current knowledge of the surveyed population. Conclusion This study showed important lack of knowledge about risk factors and warning signs of stroke in this sample of the Moroccan population. 
There is a need to adopt the community-based approach focused on the delegation of education and awareness tasks to experts' patients, stroke survivors or patients' caregivers, such as community health workers (relays). This is to implement proximity prevention programs characterized by flexibility at the temporospatial level to meet the specificities and real needs of communities in terms of education and awareness, to replace the human and logistical constraints associated with the implementation of education and awareness campaigns of the general Moroccan public. Such a poor disease knowledge is strongly correlated to the low educational level. Thus, this indicator calls for further development of sociological studies in order to strengthen the therapeutic protocols taking into account the social status of patients, their cultural context, their ability to verbalize, their perception of the disease, and of the medical language. Data Accessibility Statement All data generated or analyzed during this study are included in this published article.
2020-07-30T02:03:50.458Z
2020-07-23T00:00:00.000
{ "year": 2020, "sha1": "a1999f1d47be6fbdb02f43d707b36b3840768177", "oa_license": "CCBY", "oa_url": "http://www.annalsofglobalhealth.org/articles/10.5334/aogh.2885/galley/3003/download/", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a7e2a4ed00ce1ccfcfc7c58fe52bd0244b0b74b0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
164208731
pes2o/s2orc
v3-fos-license
Erigeron annuus (L.) Pers. Extract Inhibits Reactive Oxygen Species (ROS) Production and Fat Accumulation in 3T3-L1 Cells by Activating an AMP-Dependent Kinase Signaling Pathway Obesity is one of the major public health problems in the world because it is implicated in metabolic syndromes, such as type 2 diabetes, hypertension, and cardiovascular diseases. The objective of this study was to investigate whether Erigeron annuus (L.) Pers. (EAP) extract suppresses reactive oxygen species (ROS) production and fat accumulation in 3T3-L1 cells by activating an AMP-dependent kinase (AMPK) signaling pathway. Our results showed that EAP water extract significantly inhibits ROS production, adipogenesis, and lipogenesis during differentiation of 3T3-L1 preadipocytes. In addition, EAP decreased mRNA and protein levels of proliferator-activated receptor γ (PPARγ) and CCAAT/enhancer-binding protein alpha (C/EBPα). Moreover, EAP suppressed mRNA expressions of fatty acid synthase (FAS), lipoprotein lipase (LPL), adipocyte protein 2 (aP2) in a dose-dependent manner. Whereas, EAP upregulated adiponectin expression, phosphorylation levels of AMPK and carnitine palmitoyltransferase 1 (CPT-1) protein level during differentiation of 3T3-L1 preadipocytes. These results suggest that EAP water extract can exert ROS-linked anti-obesity effect through the mechanism that might involve inhibition of ROS production, adipogenesis and lipogenesis via an activating AMPK signaling pathway. Introduction Obesity is a major public health problem around the globe because it is implicated in metabolic syndromes, including type 2 diabetes, and cardiovascular diseases. In 2005, the World Health Organization (WHO) reported that 1.6 billion adults are overweight and 0.4 billion are obese among adults worldwide [1]. High-calories diet and deskbound lifestyle are the most dominant factors contributing to obesity [2]. Due to the rapid increase of obesity-related diseases, cellular and molecular mechanisms underlying fat metabolism need to be clarified. Obesity is affected by both the number and size of adipose tissue that is accelerated by adipogenesis and lipogenesis progression [3,4]. Regulating adipogenesis, therefore, provides a promising therapeutic approach for preventing obesity. Moreover, recent reports have explored the mechanism study of adipocyte life cycles. such as adipogenesis, lipogenesis, and lipolysis using 3T3-L1 cells [5,6]. Cell Culture and Differentiation 3T3-L1 cells were purchased from the ATCC. Cells were seeded into 24-well plates and were cultured in DMEM containing 10% BCS and 1% P/S at 37 • C with 5% CO 2 . Two days after complete confluence was reached (D0), cells were cultured in MDI differentiation medium (DMEM containing 0.5 mM/L IBMX, 1 µM/L dexamethasone, and 10 µg/mL insulin), 1% PS, and 10% FBS for three days (D3). To evaluate the effects of EAP extract on adipocyte differentiation of 3T3-L1 preadipocytes, cells were cultured in MDI in the presence of various concentrations of EAP extract. Cells were then maintained in regular medium containing 1% PS, 10% FBS, and 10 µg/mL insulin (D5). After five days of induction, the medium was changed to DMEM containing 1% PS and 10% FBS (D7). On day seven, cells were harvested for further experiment. Cell Viability Assay Cell viability of 3T3-L1 preadipocytes and adipocytes was assessed using WST-1 assay. 3T3-L1 preadipocyte and adipocytes cells were incubated with EAP extracts (0-300 µg/mL) for 24 h and seven days, respectively. 
WST-1 was added to the cultured medium, followed by incubation for 2 h. Cell viability was measured in absorbance at a wavelength of 570 nm. NBT Assay and Oil Red O (ORO) Staining The effect of the EAP on ROS production was determined by NBT assay during the differentiation of 3T3-L1 cells. NBT is reduced by ROS to a dark-blue, insoluble form of NBT called formazan. On day seven after induction, the cells were incubated for 90 min in PBS containing 0.2% NBT. Formazan was dissolved in 50% acetic acid, and the absorbance was determined at 570 nm. The effect of the EAP on fat accumulation in 3T3-L1 cells was evaluated by ORO staining. 3T3-L1 cells were fixed with 4% formaldehyde in PBS for 1 h at room temperature and washed twice with 60% isopropanol. After performing ORO staining for 1 h at room temperature, cells were washed with water to remove the excess stain. Stained cells were allowed to air dry and ORO stained cells were eluted with DMSO for quantitative analysis. The absorbance was measured at a wavelength of 490 nm on a spectrophotometer. Real-Time Polymerase Chain Reaction (RT-PCR) Total RNA was isolated from cells after seven days of maturation using high pure RNA isolation kit (Roche Applied Science, Penzberg, Germany). NanoDrop (NanoDrop 2000c, Thermo Scientific, Waltham, MA, USA) was used to quantify total RNA. Then 1 µg of total RNA was converted into cDNA using a cDNA synthesis kit (Roche Applied Science, Penzberg, Germany). Real-time quantitation was performed using Light Cycler 480 (Roche Diagnostics, Manneim, Germany). PCR reaction mix contained Light Cycler 480 SYBR Green I Master (Roche, Germany). The real-time PCR conditions were as follows: 95 • C for 10 min followed by forty-five cycles at 95 • C for 15 s, 60 • C for 5 s, 72 • C for 15 s. All experiments were performed three or more times. Expression levels of target genes were normalized against glyceraldehyde-3-phosphate dehydrogenase (GAPDH) or β-actin as internal controls. Primers used in the experiment are shown in Table 1. Analysis of Protein Level Cells were harvested using a cell scraper and lysed to obtain whole cell lysate. Cells lysates were clarified by centrifugation at 12,000 g for 30 min. Protein concentrations were measured with bicinchoninic acid (BCA) protein assay kit (Pierce Biotechnology, Waltham, MA, USA). Then 20 µg of the protein extract was mixed with 2 × sample buffer, heated at 95 • C for 5 min, separated by 10% SDS-PAGE, and transferred to PVDF membrane at 100 V for 90 min. Membranes were blocked with 1 × TBST comprising 5% skim milk at room temperature for 1 h, incubated with primary antibody overnight at 4 • C, washed five times with 1 × TBST (10 min each wash), incubated with secondary antibody at room temperature for 1 h, and washed five times with 1 × TBST (10 min each wash). PPARγ, C/EBPα, SREBP-1c, phospho-AMPK (p-AMPK), total-AMPK, and phospho-ACC (p-ACC), CPT1 antibodies were purchased from Cell Signaling Technology (Beverly, MA, USA). The target protein was detected with Luminata ™ Forte Western HRP substrate (Millipore, Tokyo, Japan). The density of a specific band was analyzed using image J software (NIH, Bethesda, MD). Statistical Analysis Experimental results are presented as mean ± standard deviation (SD) of three experiments. All results were statistically analyzed by Duncan's multivariate analysis. Difference between averages was considered statistically significant when the p-value was less than 0.05. 
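Target-gene expression in this study is reported as fold change relative to control after normalization to GAPDH or β-actin. One standard way to obtain such relative values from raw Ct readings is the 2^-ΔΔCt (Livak) method; the sketch below is an assumption about how such fold changes can be computed, not a description of the authors' exact pipeline, and the Ct values are invented for illustration.

    def fold_change_ddct(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
        """Relative expression by the 2^-ddCt (Livak) method, normalized to a reference gene."""
        d_ct_treated = ct_target_treated - ct_ref_treated     # normalize to GAPDH in treated cells
        d_ct_control = ct_target_control - ct_ref_control     # normalize to GAPDH in control cells
        dd_ct = d_ct_treated - d_ct_control
        return 2 ** (-dd_ct)

    # Invented example Ct values: PPARgamma vs GAPDH in EAP-treated vs MDI-only adipocytes.
    print(fold_change_ddct(ct_target_treated=26.4, ct_ref_treated=18.1,
                           ct_target_control=24.9, ct_ref_control=18.0))
    # ~0.38, i.e. target expression roughly 2.6-fold lower than in the control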
EAP Water Extract Inhibits Lipid Accumulation and ROS Production in 3T3-L1 Adipocytes

To observe the effects of EAP extracts on adipocyte differentiation, 3T3-L1 cells were treated with MDI in the presence or absence of various EAP extracts (water, 30%-ethanol, 50%-ethanol, 70%-ethanol, and 100%-ethanol) at a concentration of 100 µg/mL. Among the various EAP extracts, the water extract had the greatest inhibitory effect on adipogenesis (Figure 1). It was therefore analyzed further for its effect on adipocyte differentiation and the molecular mechanisms involved. To evaluate the effect of EAP water extract on the viability of preadipocytes and adipocytes, cultured 3T3-L1 cells were treated with various concentrations of EAP water extract for 24 h or seven days, and cell viability was assessed by WST-1 assay. Treatment with EAP water extract did not significantly affect cell viability (Figure 2), so the extract was used at up to 200 µg/mL in subsequent experiments. We further determined the effect of EAP water extract on lipid accumulation and ROS production in 3T3-L1 adipocytes by ORO staining and NBT assay. EAP water extract dose-dependently inhibited lipid accumulation during adipogenesis (Figure 3). Moreover, the production of dark-blue formazan, which represents ROS production, was decreased in adipocytes treated with EAP water extract compared with MDI-treated cells (Figure 3).

Effect of EAP Water Extract on Adipogenic and Lipogenic Gene Expressions

We evaluated the effect of EAP water extract on mRNA and protein expression of adipogenic target genes during adipogenesis of 3T3-L1 cells using real-time RT-PCR and western blot. Our results revealed that EAP water extract at a concentration of 200 µg/mL significantly downregulated the mRNA expression of PPARγ and C/EBPα (Figure 4). Moreover, EAP dose-dependently suppressed protein levels of PPARγ, C/EBPα and SREBP-1c in 3T3-L1 cells, in parallel with the mRNA expression of the PPARγ and C/EBPα genes. Garcinia cambogia at a concentration of 100 µg/mL, used as the positive control, also decreased protein levels of PPARγ, C/EBPα and SREBP-1c (Figure 5). These results indicate that EAP water extract exerts an anti-adipogenic effect by blocking PPARγ and C/EBPα expression, which might have implications for anti-obesity effects. To confirm whether this inhibitory effect of EAP water extract on lipid accumulation is mediated through inhibition of adipogenesis and lipogenesis involving PPARγ, C/EBPα, and SREBP-1c, we further examined FAS, LPL, aP2 and adiponectin gene expression by RT-PCR. As shown in Figure 6, EAP water extract markedly suppressed mRNA expression of fatty acid synthase (FAS), LPL, and adipocyte protein 2 (aP2) in a dose-dependent manner, whereas it dose-dependently increased adiponectin gene expression. These results indicate that EAP water extract inhibits lipid accumulation and ROS production in 3T3-L1 adipocytes via inhibition of adipogenesis and lipogenesis. (Figure 6 caption: Effects of EAP on mRNA expression of FAS, LPL, aP2, ACO, ACC and adiponectin genes in 3T3-L1 adipocytes. Cells were treated with EAP (50, 100, and 200 µg/mL) for seven days during differentiation; at day seven, mRNA levels were determined by real-time PCR and expressed as fold change relative to the control after normalization to GAPDH. Values are mean ± standard deviation; differences between means were considered statistically significant at p < 0.05.)

EAP Water Extract Enhances Phosphorylation of AMPK and Its Downstream Substrate ACC

To elucidate how EAP water extract inhibited lipid accumulation and ROS production during adipocyte differentiation, phosphorylated AMPK and ACC protein levels were measured. We examined whether EAP (200 µg/mL) controlled adipocyte differentiation and energy metabolism via the AMPK pathway in 3T3-L1 cells. The results show that EAP enhances phosphorylation of AMPK and ACC in MDI-induced adipocyte differentiation. In addition, EAP upregulated the CPT1 protein level (Figure 7). Garcinia cambogia also upregulated phosphorylation of AMPK and ACC as well as CPT1 protein levels in 3T3-L1 cells. These results indicate that EAP water extract may inhibit adipogenesis and lipogenesis via activation of the AMPK signaling pathway. (Figure 7 caption, in part: band densities of p-AMPK and p-ACC were quantified and normalized against total AMPK and ACC, respectively; values are mean ± standard deviation, with p < 0.05 considered statistically significant.)

Discussion

Erigeron annuus (L.) Pers. (EAP) is a naturalized plant belonging to the daisy family. EAP has been used to treat a variety of diseases in Korea, Japan, and China, including bronchitis, cough, and convulsions [17,18]. However, the anti-obesity activity of EAP has not previously been reported. This study evaluated the effect of EAP water extract on lipid accumulation and ROS production in 3T3-L1 cells, with preadipocytes treated with EAP water extract at varying concentrations. To clarify the mechanisms involved, we investigated the effects of EAP on the expression of adipogenic and lipogenic target genes as well as on the AMPK signaling pathway. Our data demonstrated that EAP water extract attenuated lipid accumulation by up to 67.4% compared with MDI-treated control cells (Figure 3). Treatment with EAP water extract also dose-dependently inhibited mRNA and protein expression of PPARγ and C/EBPα, in parallel with the reduced lipid accumulation in adipocytes (Figure 5). During differentiation of preadipocytes, adipogenesis is activated through the action of transcription factors such as PPARγ, C/EBPα, and SREBP-1c [25]. SREBP-1c is one of the earliest lipogenic genes, alongside LPL and FAS [26,27]. In the present study, EAP water extract inhibited adipogenesis and lipogenesis by downregulating these adipogenic and lipogenic markers. Recent studies have shown that C/EBPα, C/EBPβ, and C/EBPδ can also regulate adipocyte gene expression: in the early stage of adipocyte differentiation, C/EBPβ and C/EBPδ activation increases the expression of C/EBPα, PPARγ, and other lipogenic agents [28,29]. We found that mRNA expression levels of FAS, LPL, and aP2 in 3T3-L1 cells after EAP treatment were significantly downregulated compared with those in DW-treated cells (Figure 6). LPL is linked to ACC and to the hydrolysis of plasma triglycerides that contributes to fatty acid synthesis [30]. FAS is a lipogenic enzyme catalyzing the last step of fatty acid synthesis [31]. aP2 can stimulate adipogenesis and is controlled by C/EBPα and PPARγ at the transcriptional level [32]; the aP2 gene is also central to pathways linking obesity to insulin resistance. These results indicate that EAP water extract can prevent the MDI-induced expression of genes linked to adipogenesis and lipogenesis. The major regulator of energy metabolism is AMPK, and phosphorylated AMPK plays a role in the regulation of adipocyte differentiation [33]. AMPK has therefore become an attractive target for the treatment of obesity, and its activation by natural compounds is a possible route to inhibiting lipogenesis in 3T3-L1 cells [34].
For this reason, AMPK modulation has been predicted to be a key to controlling obesity based on scientific research. We elucidated that EAP water extract could inhibit adipogenesis and phosphorylation level of AMPK and substrate ACC. AMPK can lead to fatty acid β-oxidation through inactivation of ACC and aggregate CPT1 expression gene in adipose tissue [16]. We found that EAP (200 µg/mL) could control the differentiation of adipocytes and vitality metabolism via the AMPK pathway in 3T3-L1 cells. EAP could enhance AMPK phosphorylation and ACC through MDI-induced adipocytes differentiation. EAP also upregulated CPT1 gene expression (Figure 7). These results suggest that EAP water extract inhibits lipid accumulation via activating AMPK signaling pathway. In conclusion, our data demonstrate that EAP water extract inhibits lipid accumulation and ROS production in 3T3-L1 cells. Moreover, EAP water extract is capable of inhibiting adipocytes differentiation via downregulation of expression levels of C/EBPα and PPARγ. Furthermore, EAP water extract promotes AMPK phosphorylation and its downstream ACC. Phosphorylated AMPK also increases the expression of the fatty acid oxidase CPT1 gene. Based on these findings, EAP water extract may be useful as an effective natural product to prevent obesity and obesity-related metabolic syndrome. Future identifying studies on bioactive compounds in EAP water extract and in vivo tests using animal HFD-induced obesity models are needed to examine whether EAP water extract could be used as a therapeutic agent and developed as an anti-obesity material, such as a functional food.
2019-05-24T04:48:09.910Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "af29e85240ce38495dfa3137c0a82bbff9969946", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/antiox8050139", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "af29e85240ce38495dfa3137c0a82bbff9969946", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
192539600
pes2o/s2orc
v3-fos-license
Time Delay Recurrent Neural Network for Speech Recognition In Automatic Speech Recognition(ASR), Time Delay Neural Network (TDNN) has been proven to be an efficient network structure for its strong ability in context modeling. In addition, as a feed-forward neural architecture, it is faster to train TDNN, compared with recurrent neural networks such as Long Short-Term Memory (LSTM). However, different from recurrent neural networks, the context in TDNN is carefully designed and is limited. Although stacking Long Short-Term Memory (LSTM) together with TDNN in order to extend the context information have been proven to be useful, it is too complex and is hard to train. In this paper, we focus on directly extending the context modeling capability of TDNNs by adding recurrent connections. Several new network architectures were investigated. The results on the Switchboard show that the best model significantly outperforms the base line TDNN system and is comparable with TDNN-LSTM architecture. In addition, the training process is much simpler than that of TDNN-LSTM. Introduction Intelligent virtual assistants such as Siri, Alexa and Cortana are becoming smarter and more capable. Many people become more reliant on then since they make our lives easier. One of the key components in these intelligent products is speech recognition, converting speech into text automatically by computers. Nowadays, neural networks have been applied in almost all commercial speech recognition systems to achieve state-of-art recognition accuracy. Speech is a signal with long temporal contexts. Thus it is very important for the acoustic model to capture the long-term temporal dependencies of speech. Many efforts have been spent in improving the temporal modeling capability of acoustic models. At the feature level, feature representations such as TRAPs [1], wavelet based multi-scale spectrotemporal representations [2] and deep scattering spectra [3] have been proposed to improve the context modeling capability of the system. These features can be spliced and fed into a feed-forward neural network in order to model wider temporal contexts. Model-based approach, which is the focus of this paper, can also be utilized to address this problem. Recurrent neural networks (RNNs) have cycle connections in the hidden layers [4]. Ideally, history information will be kept in the recurrent hidden nodes and theoretically unlimited contexts information can be utilized. Unfortunately, the gradient vanishing problem substantially deteriorate the performance of RNNs. This is because the gradient vanishing or exploding problem limit the capability of RNNs to model the long range context dependencies to 5-10 discrete time steps between [5]. Variants of RNNs such as long short-term memory (LSTM, [4], [5], [6], [7], [8]) have been successfully applied to speech recognition to achieve state-of-the-art recognition accuracy. But LSTM needs much more time to train, compared with other feed-forward networks. Time Delay Neural Network (TDNN) uses a feed-forward architecture, and has been proven to be powerful in handling the context information of speech signal [9]. The long range context information of speech signal is utilized through a carefully designed hierarchical structure. In a TDNN architecture, the first layer process input from narrow contexts of the speech signal. The deeper layers will process input by slicing the output of the hidden activations from the previous layer in order to learn wider temporal relationships. 
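Because every layer splices activations from the previous layer at a few fixed offsets, the total input context seen at the top of the network is simply the sum of the per-layer offsets. The small Python helper below makes that arithmetic explicit; the per-layer offsets shown are hypothetical (the paper's actual configuration is given in its Table 1, which is not reproduced in this text) and are chosen only to reach the symmetric ±16-frame input context used for the baseline later in the paper.

    def total_context(splice_config):
        """Accumulate left/right input context over stacked TDNN splicing layers."""
        left = sum(-min(offsets) for offsets in splice_config)
        right = sum(max(offsets) for offsets in splice_config)
        return left, right

    # Hypothetical per-layer splice offsets (frames relative to the current step).
    config = [(-2, -1, 0, 1, 2),   # input splicing
              (-1, 1),
              (-3, 3),
              (-3, 3),
              (-7, 7)]
    print(total_context(config))   # (16, 16): a symmetric +/-16 frame input context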
However, splicing continuous windows of frames in traditional TDNN structure leads to overlap and redundancy. To improve efficiency, sub-sampling is proposed [9]. Subsampling is a method allowing gaps between feature frames at each layer [9]. It can help decrease the number of parameters and increase the computation efficiency. The success of TDNN shows that the most important information related to the recognition of current frame lies in a relatively narrow context. On the other hand, efforts to combine LSTM and TDNN have been done and improvement is observed [6], [8]. Especially in [8], the authors conducted a lot of experiments to evaluate different stacking structures of TDNN-LSTM network. The improvement by using TDNN-LSTM indicates the necessity of utilizing longer context information. In this paper, we focus on other ways to extend the context modeling capability of TDNN. Because of the complexity of LSTM, we prefer not to use it in our structures. Mainly two methods are investigated: 1) instead of using LSTM, a RNN layer is used in a TDNN-RNN network. 2) direct recurrent connections are added and the new network is called Time-Delay Recurrent Network (TDRNN). Besides, the following issues are investigated in this paper:  How to combine TDNN and RNN?  The number of layers of RNN in TDNN model.  Locations to add the recurrent connections in a TDNN model.  Exploration of more complicate recurrent structure. The following of this paper is organized as follows. In section 2 we describe the proposed structure in details, followed by the experimental setup in Section 3. In section 4 we present the experimental results and then the conclusions are given in section 5. Finally we will have some discussion about the redundancy. Time Delay Neural Network In a time delay neural network, the temporal context is modeled by using a hierarchical architecture. Each layer in a TDNN operates at a different temporal resolution. The outputs of the activation from previous hidden layer are spliced as the input of the current layer. Therefore, the current layer operates at a much wider context, compared with the previous layer. As we go to higher layers of the network, increasingly wide context is seen by the network. Similar to Convolutional Neural Networks (CNNs [9]), the transforms in the same layer of a TDNN are tied across time in order to reduce the number of parameters and make the transformation invariant to time shift of the input [9]. TDNNs are seen as a precursor to the CNNs. [9] proposed a method to subsample the TDNN network. The splicing configuration {-1,1} means that we splice the input at current time step minus 1 and the current time step plus 1 (i.e. the current frame is dropped). Sub-sampling reduces the dimension of the input and thus the model size. Figure 1 shows a TDNN with sub-sampling. The overall input contexts of TDNNs are limited, for example, asymmetric context windows of up to 16 frames in past and 9 frames in the future are investigated in [9]. The success of TDNNs indicates that the most valuable information for the recognition of the current frame lies in a relatively narrow context. This is true even when recurrent models are used. Truncated Back Propagation Through Time is widely used in LSTM training to limit the context [8], [10], [11]. In an unidirectional LSTM [5], [8], [11], the left context length is usually set to 20. Adding RNN layer in a TDNN strucutre then we get a TDNN-RNN structure. It is similar to TDNN-LSTM structure but is more efficient. 
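A sub-sampled splice such as {-d, +d} (for example the {-1, 1} configuration above), followed by a shared affine transform, corresponds to a 1-D convolution over time with kernel size 2 and dilation 2d, up to a shift in time indexing. The PyTorch sketch below illustrates this correspondence; it is an illustrative re-implementation, not the Kaldi nnet3 code used for the experiments.

    import torch
    import torch.nn as nn

    class SplicedTDNNLayer(nn.Module):
        """Splices frames at {-d, +d} (current frame dropped) and applies a shared affine + ReLU.
        Equivalent to a 1-D convolution with kernel size 2 and dilation 2*d."""
        def __init__(self, in_dim, out_dim, d):
            super().__init__()
            self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=2, dilation=2 * d)
            self.relu = nn.ReLU()

        def forward(self, x):              # x: (batch, feat_dim, time)
            return self.relu(self.conv(x))

    x = torch.randn(8, 40, 150)            # 40-dim MFCC-like features, 150 frames
    layer = SplicedTDNNLayer(40, 1024, d=1)    # the {-1, 1} splicing configuration
    print(layer(x).shape)                  # torch.Size([8, 1024, 148]): two context frames consumed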
TDNN-RNN The improvement by using TDNN-LSTM [6] indicates the necessity of utilizing longer context information. Due to the complexity of LSTM, it takes much more time to train the TDNN-LSTM model. We believe that the TDNN architecture has captured the most valuable context information. There is no need to add another very complicate component. Instead of TDNN-LSTM, we will explore the effectiveness of the TDNN-RNN structure as show in Figure 2. In this architecture, we add another RNN layer in the middle of a TDNN. The added RNN component might be able to utilize additional context to further improve the recognition accuracy. TDRNN Another architecture that we explored in this paper is the one with direct recurrent connection in the TDNN layer. Figure 3 shows an example of this new type of architecture which we call Time Delay Recurrent Neural Network (TDRNN). Same as the TDNN, the recurrent layer is tied across different time steps. As the most important contexts have been modeled in TDNN, adding limited additional context may be enough for the TDRNN to achieve the best performance. Finally, we empirically found that it is better to add another transform to the recurrent connection as shown in Figure 4. The output of the TDRNN layer in figure 4 is fruther handled by one fullconnect neural network, and then serves as the input of next time step. We call this structure as deep recurrent edge, and the optimal number of full-connect neural network should be investigated. Experimental setup All of the models are evaluated on the 300 hours Switchboard conversational telephone speech task [12] and the Nnet3 recipe in Kaldi toolkit [13] is used to build our experimental systems. The feature frame alignment is performed using GMM-HMM baseline recognition system as described in [9] 40dimension Mel-frequency cepstral coefficients(MFCCs) without cepstral truncation are used as input. The input features are spliced by concatenating the "{-2,1,0,1,2}" frames. Feature adaptation is utilized by appending 100-dimension iVector with the MFCC input. Finally, the resulting 300dimension feature is transformed by a 300-dimension linear discriminant analysis (LDA) and is used as the model input. Data augmentation technique is adopted to generate three copies of the training data with speed perturbation rates of 0.9,1.0 and 1.1. We uses a symmetric context configuration as shown in Table 1 for our baseline TDNN model, and the complete baseline TDNN structure is shown is figure 1. Using this configuration, the left and right contexts of the input signal are all 16. Cross entropy training criteria is used to trained all the models reported in this paper. The number of hidden nodes for the baseline TDNN is set to 1024. We then go on to compare TDRNN and the baseline TDNN. The context length of recurrent layer at this step is limited to the context length of TDNN baseline (i.e. 16). The context length at this step is much shorter than the one in TDNN-LSTM model mentioned above, so is more efficient. To improve the training speed, the Classical Block Momentum(CBM) Blockwise Model-Update Filtering (BMUF) algorithm [14] is applied with 16 parallel jobs and the block momentum factor of 0.9. Experimental Results We present results on the Switchboard subset (labeled as swbd) and the complete Hub5'00 evaluation set (labeled as hub5). A develop set (labeled as dev) with 4000 sentences is selected randomly from the full training data set. Language model is built from Fisher transcripts as described in [9]. 
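The TDRNN layer introduced above adds a recurrent connection directly to a TDNN splicing layer, and the "deep recurrent edge" passes the fed-back activation through one or more fully connected transforms before it reaches the next time step. The following PyTorch sketch is one reading of that description (layer sizes, the ReLU nonlinearity, and the depth-1 recurrent edge are assumptions), not the authors' Kaldi implementation.

    import torch
    import torch.nn as nn

    class TDRNNLayer(nn.Module):
        """TDNN layer with a direct recurrent connection whose feedback passes through
        one fully connected transform (a depth-1 'deep recurrent edge')."""
        def __init__(self, in_dim, hid_dim, d=1):
            super().__init__()
            self.d = d                                     # splice offsets {-d, +d}
            self.ff = nn.Linear(2 * in_dim, hid_dim)       # transform of the spliced input
            self.rec = nn.Linear(hid_dim, hid_dim)         # recurrent connection
            self.edge = nn.Linear(hid_dim, hid_dim)        # transform on the recurrent edge
            self.act = nn.ReLU()

        def forward(self, x):                              # x: (batch, time, feat_dim)
            B, T, _ = x.shape
            r = x.new_zeros(B, self.rec.out_features)      # recurrent state
            outs = []
            for t in range(self.d, T - self.d):
                spliced = torch.cat([x[:, t - self.d], x[:, t + self.d]], dim=-1)
                h = self.act(self.ff(spliced) + self.rec(r))
                r = self.act(self.edge(h))                 # deep recurrent edge, depth 1
                outs.append(h)
            return torch.stack(outs, dim=1)

    y = TDRNNLayer(in_dim=40, hid_dim=1024)(torch.randn(8, 150, 40))
    print(y.shape)                                         # torch.Size([8, 148, 1024])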
Experimental results are reported in word error rate (WER). Results of TDNN-LSTM and TDNN-RNN models are presented in table 3. We also investigated RNN layers with different depth. From the table we can see that deep1-TDNN-RNN model performs comparable to TDNN-LSTM models. The training of the TDNN-RNN model is much faster than the TDNN-LSTM model and TDNN-RNN has fewer parameters. Results of step two is shown in table 4 and table 5. In table 4 we present the performance of TDRNN and RNN with different configurations. The table consist of six TDRNN structure with one TDRNN layer, seven TDNN-RNN structure with one RNN layer and four TDRNN structure with multiple TDRNN layer. Comparing one-layer TDRNN and one-layer TDNN-RNN, we found that the TDRNN performs a little better than the TDNN-RNN. By replacing RNN with TDRNN, we decrease the number of neural network layer in the model and improvement is obtained. On the other hand, we discover that experiment with recurrent layers near input and output performs bad, like TDRNN(6),TDNN-RNN(1) and TDNN-RNN(7). This phenomenon may be due to the mismatch of dimensions between recurrent layers and input/output. As for the results of multiple TDRNN layer, we find that adding more TDRNN layers show little impact to WER, or even a little worse, and one recurrent layer should be suitable. The result is different from the TDNN-LSTM experiment presented in [8], and the latter one have three LSTM layer. Table 5 investigates about the information processing power of recurrent structure by adding fullconnected neural network (NN) layer on the recurrent edge. We define the number of NN layer on the recurrent edge as depth so deep1-TDRNN(2) is a structure with TDRNN on layer 2 and depth 1. The results show that TDRNN with depth 1 perform well, and adding depth couldn't decrease the WER. Besides, TDRNN still outperform TDNN-RNN. In order to compare the training efficiency, we also draw the likelihood-training time curve, as shown in figure 5. From the figure we can see the training time of TDRNN models with different layers. The training time statistics is done roughly in same computer cluster, but the relative efficient is comparable. Experiment in step one is included in the figure with dotted line. Comparison of likelihood between experiment in step one and step two is meaningless, because the training procedure are different. But we can see a significant improvement on training convergence time in step two. The result shows that TDRNN can be trained mush faster than TDNN-LSTM. Figure 5. Likelihood change for several models in step one and step two, all of the experience is perform with same computer cluster. The training time statistics is done roughly, but the relative efficient is comparable Conclusion & Discussion In this paper, we investigate adding recurrent connections directly on the TDNN structure. The new model is called Time Delay Recurrent Neural Network. TDRNN performs a little better than TDNN-RNN and also has lesser layer. We try to improve the modeling ability for TDRNN layer and we find that TDRNN with one neural network layer on the recurrent edge (i.e. deep1-TDRNN) perform best. It obtains about 6% relative improvement, compared with the TDNN baseline. We didn't apply the complex training process for LSTM in our experiment, instead we use the TDNN training process to train the TDRNN model, so the model is very efficient. 
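The relative improvement quoted in the conclusion is computed against the TDNN baseline in the usual way; the WER values below are placeholders, since the result tables themselves are not reproduced in this text.

    def relative_improvement(baseline_wer, new_wer):
        """Relative WER reduction, in percent, with respect to a baseline system."""
        return 100.0 * (baseline_wer - new_wer) / baseline_wer

    # Placeholder WERs: a deep1-TDRNN scoring ~6% relative better than the TDNN baseline.
    print(round(relative_improvement(baseline_wer=16.0, new_wer=15.0), 1))   # 6.2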
On the other hand, we reach this improvement without considering the extra context for RNN, instead we keep the context length same as TDNN baseline so the context length is reduced. The TDRNN model has the potential to outperform TDNN-LSTM model. At this part we discuss about the redundancy of TDNN-LSTM model. Many experiment have shown that LSTM is good at long term information modeling. However, the successful application of context truncation when computing gradient BackPropagation Through Time (BPTT) also indicates that the context that are far from current frame show little impact to WER. TDNN is also good at longterm modeling, but the context is finite. The best combination of TDNN and recurrent models should not only success the efficient context modeling ability of TDNN, but also develop the infinite context modeling ability of recurrent model. However, combination of TDNN and LSTM may have the redundancy because the complexity of LSTM, and that's why we investigate the deep TDRNN. In the future, we would like to make a global evaluation on these combination.
2019-06-14T13:20:44.818Z
2019-05-01T00:00:00.000
{ "year": 2019, "sha1": "b1a0ddcf3141d2aed69eaf12b83c863504314566", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1229/1/012078/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "12586d34c834728cce2d6d28e5381ac0e2f69ddf", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
248286906
pes2o/s2orc
v3-fos-license
NEW AMS CHRONOLOGY FOR THE EARLY BRONZE III/IV TRANSITION AT KHIRBAT ISKANDAR, JORDAN ABSTRACT We present the first Bayesian 14C modeling based on AMS ages from stratified sediments representing continuous occupation across the Early Bronze III/IV interface in the Southern Levant. This new high-precision modeling incorporates 12 calibrated AMS ages from Khirbat Iskandar Area C using OxCal 4.4.4 and the IntCal 20 calibration curve to specify the EB III/IV transition at or slightly before 2500 cal BCE. Our results contribute to the continuing emergence of a high chronology for the Levantine Early Bronze Age, which shifts the end of EB III 200–300 years earlier than the traditional time frame and increases the length of EB IV to about 500 years. Data from Khirbat Iskandar also help direct greater attention to the importance of sedentary communities through EB IV, in contrast to the traditional emphasis on non-sedentary pastoral encampments and cemeteries. Modeling of AMS data from Khirbat Iskandar bolsters the ongoing revision of Early Bronze Age Levantine chronology and its growing interpretive independence from Egyptian history and contributes particularly to re-examination of the EB III/IV nexus in the Southern Levant. INTRODUCTION Early Bronze Age Levantine society experienced a particularly dramatic transformation through the mid-and latter portions of the third millennium BCE. Following the construction of fortified settlements in Early Bronze II and III, southern Levantine society witnessed the pervasive abandonment of these communities across the region by the end of Early Bronze III and during Early Bronze IV (known alternatively as the Intermediate Bronze Age or Intermediate EB-MB) (D'Andrea 2014a(D'Andrea , 2020de Miroschedji 2014;Prag 2014;Greenberg 2019;Richard 2020). The prevailing archaeological narrative has emphasized a drastic shift from EB II-III agrarian town life to EB IV mobile pastoralism, influenced especially by Dever's model of EB IV seasonal transhumance between lowland winter encampments and upland summer settlements and cemeteries (Dever 1980(Dever , 1995(Dever , 2014. Traditionally, the timing and explanation of Levantine EB III town abandonment was attributed to the collapse of the Egyptian Old Kingdom, which was followed by fragmented political authority during the First Intermediate Period between about 2300/ *Corresponding author. Email: sfalcon1@uncc.edu 2200 and 2000 BCE (see discussion in Sharon 2014). A reevaluation of EB IV society has stemmed from several related lines of investigation during the past three decades. The importance of EB IV sedentary agrarian communities has been illuminated through the reevaluation of regional survey data (e.g., Palumbo 1991Palumbo , 2008 and excavations at a growing number of sedentary settlements (Figure 1). These excavations document EB IV villages at Tell Abu en-Ni'aj (Falconer and Fall 2019), Tell Um Hammad (Helms 1986;Figure 1 Map of the Southern Levant showing EB III and EB IV archaeological sites discussed in text. Kennedy 2015), Khirbet al-Batrawy (Nigro 2006), Tell Iktanu (Prag 2011), Sha'ar Ha-Golan (Eisenberg 2012), Kfar Vradim (Covello-Paran 2020), and Horbat Qishron (Smithline 2002), including walled settlements at Dhahret Umm el-Marar (Falconer and Fall 2019), Khirbet Um al-Ghozlan (Fraser 2017) and Khirbet el-Meiyiteh (Bar et al. 2013). 
Evidence elsewhere shows that EB III settlement was followed by EB IV reoccupation at Hazor (Bechar 2017), Beth Shean (Mazar 2006), Jericho (Nigro 2003), Bab edh-Dhra' (Rast and Schaub 2003), possibly Megiddo (Adams 2017) and, most notably for this study, at Khirbat Iskandar in central Jordan (Richard 2020). Khirbat Iskandar has been a key site for the ongoing re-evaluation of EB IV settlement, based on its stratified evidence from EB III and IV, including the apparent re-use of EB III fortifications (Richard and Long 2007a, 2007b; Richard et al. 2018; Richard 2020) and possible construction of fortifications in EB IV (Richard 2016; D'Andrea et al. 2020). Over roughly the last two decades, Early Bronze Age chronology has been revised drastically through site-specific and regional Bayesian radiocarbon modeling (Bronk Ramsey 2009a). This comprehensive revision shifts the constituent Early Bronze Age subperiods earlier, thereby disarticulating them from their previous historically-based conventions. Highlights of this high chronology include a start date for EB I well before 3500 cal BCE, a compressed one-century time span for EB II, and an earlier transition from EB II to III (2900 cal BCE) (Table 1) (e.g., Bruins and van der Plicht 2001; Golani and Segal 2002; Bourke et al. 2009; Regev et al. 2012a, 2014, 2017; cf. Nigro et al. 2019). These changes set the stage for the elucidation of the EB III/IV interface, which is being shifted 200-300 years earlier, based especially on analytically robust Bayesian models for the end of EB III occupation at Numeira, Khirbet Yarmouk/Tel Yarmuth, Tell el-Mutesellim/Megiddo, Khirbet Kerak/Beth Yerah and Tell es-Safi/Gath (Regev et al. 2012b, 2019; Shai et al. 2014) and for the founding of the EB IV village at Tell Abu en-Ni'aj (Falconer and Fall 2019; Fall et al. 2021). A major challenge of EB III/IV chronology building lies in the paucity of sites that offer both stratified occupations spanning the EB III/IV transition and AMS datasets suitable for chronological modeling. At the time of their influential revision of Early Bronze Age chronology, Regev et al. noted that "no sites currently exist where both EB III and EB IV/IBA have been 14C dated" (2012a: 559). This study addresses this challenge by presenting Bayesian modeling of a suite of 12 calibrated AMS ages from stratified EB III and EB IV levels in Area C at Khirbat Iskandar, Jordan. Our modeling solidifies the EB III/IV transition date as a key component of the emerging Early Bronze Age chronology, which disarticulates Levantine and Egyptian chronologies (e.g., Kutschera et al. 2012), and opens the door for independent assessment of Levantine EB III/IV settlement and societal dynamics. The 2.7 ha tell of Khirbat Iskandar, Jordan lies in the lower reaches of the Wadi Wala, which drains west to the Dead Sea (Figure 2). Two initial trenches on the northeastern edge of the site were excavated in 1955 by Parr (1960). Subsequently, 15 seasons of field work, research and restoration between 1981 and 2019 have been directed by Richard and co-directors Long (since 1994) and D'Andrea (since 2015). Excavations in 32 5 × 5 m squares distributed in three locations on the tell (Areas A, B, and C) reveal stratified evidence of a permanently settled Early Bronze Age fortified community. The most recent excavations in 2019 investigated Areas B and C.
The Area C excavations produced evidence from EB IV (Phases 1-3, from earliest to latest), which was stratified immediately above deposition from EB III (in four phases labeled Pre-Phase 1D, Pre-Phase 1C, Pre-Phase 1B, and Pre-Phase 1A, from earliest to latest) (D'Andrea et al. 2020). Pre-Phase 1D includes a burned layer that appears to correlate with a destruction layer overlying the latest EB III occupation identified thus far in Area B. This stratigraphic correlation situates the Pre-Phase 1 deposits in Area C in the latter portion of EB III. The Phase 1 ceramics include transitional EB III-IV vessel forms that place this phase very early in EB IV (Long 2010: 37; Richard 2010: 69-111, 272-273; D'Andrea 2014a: 133, 2016: 545, 2019: 66, 2020), while the Phase 2 and 3 vessels incorporate attributes found typically at EB IV sites along the southern Jordan Rift. The earlier phases reveal domestic areas that include broad- and long-room houses with associated domestic features. Most importantly for this study, the 2019 excavations show that the Area C stratigraphic sequence spans the EB III/IV interface at Khirbat Iskandar. Thus, chronometric evidence from the stratified occupational evidence in Area C at Khirbat Iskandar is particularly well-positioned to provide a high-precision determination of the timing of the EB III/IV transition based on modeling of AMS ages from a sequence of samples from contiguous EB III and EB IV strata.
MATERIALS AND METHODS
The context for all materials excavated at Khirbat Iskandar is identified according to Area (A, B or C), Square (numbered according to the grid of squares in each Area), Pail (numbered according to soil layers) and Locus (numbered in reference to three-dimensional features). During the 2019 Khirbat Iskandar excavations, archaeological sediments from contexts such as floors, ovens, pits, and mudbrick layers were processed by water flotation to recover plant macroremains. This study incorporates 10 new AMS ages along with two AMS ages from charcoal samples excavated previously from Area C, Square 2 (Long 2010: 43; Holdorf 2010: 267), which were analyzed at the University of Arizona Accelerator Mass Spectrometry Laboratory (AA-50178) and the University of Tübingen (lab number unreported). We submitted 10 seed samples recovered in the 2019 excavations through flotation of sediment from Area C, Squares 6 and 8 for AMS 14C analysis at the University of Georgia. These samples were pretreated using the standard laboratory methods of the Center for Applied Isotope Studies at the University of Georgia. Seeds were inspected under microscope and manually cleaned to remove superficial contaminants, followed by acid/alkali/acid (AAA) pretreatment as follows. Subsamples were treated in 1N HCl at 80°C, decanted, and rinsed with MilliQ water, then treated with 0.1 M NaOH at room temperature and rinsed to neutral with MilliQ water. The samples were treated with HCl a second time at 80°C for 15 min, rinsed repeatedly with MilliQ water, and dried at 105°C. Approximately 2-3-mg subsamples were encapsulated in tin, and the elemental concentrations (%C and %N) and stable isotope ratios (δ13C and δ15N) were measured using an elemental analyzer isotope ratio mass spectrometer (EA-IRMS). Values are expressed as δ13C with respect to VPDB and δ15N with respect to AIR.
Each 2-3-mg subsample of pretreated material was combusted at 900°C in an evacuated and sealed quartz tube in the presence of CuO to produce CO2. The CO2 samples were cryogenically purified from the other reaction products and catalytically converted to graphite using the method of Vogel et al. (1984). Graphite 14C/13C ratios were measured using the CAIS 0.5 MeV AMS. Sample ratios were compared to the ratio measured from the Oxalic Acid I standard (NBS SRM 4990), and the results are presented as percent Modern Carbon (pMC). The quoted uncalibrated date is given in radiocarbon years before 1950 (years BP), using the 14C half-life of 5568 years. The error is quoted as one standard deviation and reflects both statistical and experimental errors. The dates have been corrected for isotope fractionation using the δ13C value. Model agreement was evaluated with OxCal's agreement indices (Bronk Ramsey 2009b: fig. A5). Accordingly, values of Amodel > 60 are used to identify statistically robust Bayesian models, and calibrated ages with A ≤ 60 would be treated as statistical outliers and would not be incorporated in our Bayesian modeling (Bronk Ramsey 2009b).
RESULTS
Khirbat Iskandar provides a sequence of 12 AMS 14C ages from six stratigraphic phases designated as EB III or EB IV on the basis of Area C stratigraphy and ceramic chronology (Table 2). These ages are reported in radiocarbon years BP (Before Present, with the present defined as 1950 CE) following international convention (Stuiver and Polach 1977). We modeled these ages in a chronological sequence of six stratigraphic phases: Pre-Phase 1D, 1B, and 1A (EB III strata), and Phases 1, 2, and 3 (EB IV strata), from earliest to latest, respectively. Modeled ages are presented in Table 3. Our optimal Bayesian model for Khirbat Iskandar (Figure 3) places the EB III/IV boundary at or slightly before 2500 cal BCE. We also modeled the calibrated seed and charcoal ages from Khirbat Iskandar as two contiguous phases, in which the earlier OxCal phase includes the five ages from the EB III strata (Pre-Phase 1D, 1B, and 1A), and the later OxCal phase includes the seven ages from the EB IV strata (Phases 1-3). Ages within each OxCal phase were ordered stratigraphically. This two-phase model once again estimates the EB III/IV boundary at or slightly before 2500 cal BCE (Table 4). As a further consideration, we assessed the modeling influences of the two charcoal ages from Phases 2 and 3, which model well with the other five EB IV dates in the two-phase model noted above. The individual calibration distributions for these two samples provide no indication of an "old wood" effect of inbuilt age (Dee and Bronk Ramsey 2014). All four models produce strikingly consistent determinations of the EB III/IV transition at Khirbat Iskandar at or slightly before 2500 cal BCE (see Table 4).
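The phase models just described are specified in OxCal's Chronological Query Language (CQL) as a sequence of phases separated by boundaries. Purely to illustrate that structure — the phase names echo Area C, but the laboratory codes, BP ages and errors below are placeholders, not the Khirbat Iskandar determinations listed in Table 2 — a short script can assemble such a model:

```python
def oxcal_sequence(name, phases):
    """Build an OxCal CQL model: a Sequence of Phases bracketed by Boundary
    statements, each Phase holding uncalibrated determinations as R_Date entries.
    `phases` is a list of (phase_name, [(lab_code, age_bp, error), ...]),
    ordered from earliest to latest."""
    lines = [f'Sequence("{name}")', "{", ' Boundary("start");']
    for phase_name, dates in phases:
        lines.append(f' Phase("{phase_name}")')
        lines.append(" {")
        for lab_code, age_bp, error in dates:
            lines.append(f'  R_Date("{lab_code}", {age_bp}, {error});')
        lines.append(" };")
        lines.append(f' Boundary("after {phase_name}");')
    lines.append("};")
    return "\n".join(lines)

# placeholder input: one EB III phase followed by one EB IV phase
model = oxcal_sequence("Khirbat Iskandar Area C (illustrative)", [
    ("Pre-Phase 1A (EB III)", [("LAB-0001", 4050, 25), ("LAB-0002", 4020, 25)]),
    ("Phase 1 (EB IV)",       [("LAB-0003", 3990, 25)]),
])
print(model)
```

The generated text is pasted into OxCal, which calibrates the determinations against the chosen curve and reports the modeled Boundary ranges together with the agreement indices (Amodel, A) referred to above.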
DISCUSSION
Bayesian modeling of the Pre-Phase 1 seed ages from Khirbat Iskandar correlates the lower strata in Area C with the tail end of Early Bronze III occupations at the best radiocarbon dated sites across the Levant. For example, seven seed ages from Tell es-Safi/Gath, in the Hebron region, model the end of its EB III occupation between 2680 and 2580 cal BCE (Shai et al. 2014), while a model of 17 AMS ages from Tel Yarmuth/Khirbet Yarmuk, in the foothills of the Shephelah, ends its EB III occupation about 2500 cal BCE (Regev et al. 2012b). Models that include further AMS ages from Numeira, Bab edh-Dhra', Tell es-Sakan, Hebron and Khirbet Kerak/Beth Yerah (Rast and Schaub 1980; Weinstein 1984, 2003; Regev et al. 2012a, 2019) also terminate EB III at each of these sites about 2500 cal BCE (Regev et al. 2012a). This substantial body of data represents a growing consensus that "… EB III ended at the latest ∼2450, perhaps before 2500 BC" (Regev et al. 2012b: 505, emphasis original). Our modeling positions the Pre-Phase 1 ages from Khirbat Iskandar Area C at the very end of EB III, prior to 2500 cal BCE, in chronological accordance with this consensus derived from this wide variety of AMS-dated Levantine sites. For comparative purposes, well-dated evidence for the beginning of EB IV in the Southern Levant features the seven-phase modeled sequence of 25 seed ages for Tell Abu en-Ni'aj, but is otherwise limited to three AMS seed ages from Khirbet el-'Alya (Bar et al. 2013; Lev et al. 2020), the two earliest of four charcoal dates from Ein-Ziq (Avner and Carmi 2001; see also Dunseth et al. 2016) and the earliest one of three charcoal ages from Nahal Refaim (Segal and Carmi 1996). Another noteworthy site, Bab edh-Dhra', has five ages that correlate with Phases 2 and 1 at Tell Abu en-Ni'aj, late in EB IV after about 2350 cal BCE. In contrast, our modeling incorporates seven ages from a single area at Khirbat Iskandar that fit squarely within the modeled intervals for Phases 7 and 6, very early in EB IV at the outset of the Tell Abu en-Ni'aj sequence (Fall et al. 2021). These ages parallel the calibrated distributions for the early EB IV dates from Khirbet el-'Alya, Ein-Ziq and Nahal Refaim. Thus, Bayesian modeling of AMS ages from Khirbat Iskandar Phases 1-3 places the beginning of EB IV occupation in Area C at 2500 cal BCE, in keeping with the best-dated evidence for early EB IV occupations elsewhere in the Southern Levant. Among the sites with deposition from both EB III and EB IV, chronological gaps have been inferred between these periods at Beth Shean/Tell el-Hosn based on ceramic chronology (Mazar 2012: 28) and at Hazor/Tell el-Waqqas and Jericho/Tell es-Sultan based on stratigraphy (Nigro 2003: 131, 138; Lev et al. 2021). Modeling of AMS ages from Hazor estimates the end of its EB III occupation by 2580 cal BCE, followed by "many decades of abandonment" prior to resettlement in EB IV (Lev et al. 2021). Six AMS ages support a model of subsequent EB IV occupation beginning after 2400 cal BCE and ending by 2200 cal BCE, based on 1σ boundary ranges (Lev et al. 2021: fig. 13). At Jericho, EB IV ages derive from contexts excavated by Kenyon (1981: 167, 214). The modeled 1σ distributions for these ages lie between about 2400 and 2250 cal BCE (Nigro et al. 2019: fig. 12), correlate with mid-EB IV Phases 4-1 at Tell Abu en-Ni'aj, and accord with a beginning date for EB IV about 2500 cal BCE, as attested increasingly at other Levantine sites. In overview, the majority of comparative radiocarbon-dated evidence from Early Bronze Age sites in the Southern Levant stems from settlements with occupations that either end in late EB III or begin in early EB IV. Bayesian modeling of growing AMS datasets has narrowed the time frame for the EB III-IV transition to the mid-third millennium BCE. Khirbat Iskandar now offers a unique stratigraphically continuous sequence of 12 AMS ages from a single excavation area that spans the end of EB III and the beginning of EB IV, and provides a focused model of the EB III/IV transition about 2500 cal BCE. As Regev et al. point out, fixing this interface "at ca.
2500 cal BC forces reconsideration of the synchronism between Egypt and the Southern Levant that has far reaching implications for the history and chronology of both regions" (2014: 260). In particular, it "disconnect[s] the end of the Early Bronze III period from the end of the Egyptian Old Kingdom" (Höflmayer et al. 2014: 540), and no longer supports a direct correlation of Egyptian political dissolution and Levantine town abandonment (see Regev et al. 2014).
CONCLUSIONS
Bayesian modeling of 10 new AMS seed ages plus two previous AMS charcoal dates from Khirbat Iskandar provides a new stratigraphically-based calculation of the EB III/IV transition in the Southern Levant. Our modeling is the first to be based on samples from a continuous stratigraphic sequence in a single excavation area from the end of EB III to the beginning of EB IV. We model the EB III/IV transition at about or slightly before 2500 cal BCE, which is 200-300 years earlier than assumed traditionally. These data contribute strategically to the continuing corroboration of a high chronology for the Early Bronze Age in the Southern Levant. The evidence from Khirbat Iskandar also emphasizes the importance of sedentary settlements in EB IV society, which hold the greatest promise of providing stratified seed samples for continued AMS dating and rigorous chronological modeling of this enigmatic period in Levantine prehistory. On a larger scale, further chronological revision will contribute to the ongoing "audit of the synchronisms between Egyptian and Levantine chronologies" (Regev et al. 2014: 261) and a correspondingly revised interpretation of Levantine social dynamics in the third millennium BCE.
2022-04-21T15:24:35.329Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "c647fc5a3ece473c23bd0539171c85ae4e654e5e", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/5E496395EDAA8E08D985A9ED5A9FD456/S0033822222000224a.pdf/div-class-title-new-ams-chronology-for-the-early-bronze-iii-iv-transition-at-khirbat-iskandar-jordan-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "f8b39acec83a6560dcf90653430abe7793febc1b", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [] }
2513342
pes2o/s2orc
v3-fos-license
Epidermal Cyst of Parotid Gland: A Rarity and a Diagnostic Dilemma Epidermal cysts are common skin lesions but they occur very rarely in the oral cavity, especially in the salivary glands. Very few cases have been reported in the literature and, here, we present one such rare case of epidermal cyst in the right parotid gland in a 62-year-old female patient. Introduction Epidermal/epidermoid cysts are common lesions occurring in the skin [1]. Only 1.6% occur in the oral cavity and are rare [2]. However, primary epidermal cysts of salivary glands appear to be very rare and literature search for the past 25 years revealed only very few cases in parotid gland [3] and some cases in submandibular gland [1,4,5]. The epidermal cyst is a benign cyst and develops out of ectodermal tissue. The several synonyms are epidermal cyst, epidermal inclusion cyst, infundibular cysts, and keratin cysts [6]. The diagnosis of an epidermal cyst in the parotid gland becomes very essential and it is a very rare entity and it could be easily mistaken for a salivary gland abscess, neoplasm, and other cysts [7]. Therefore, an excisional biopsy is necessary for a prompt diagnosis and confirmation. Case History A 62-year-old female patient presented to our outpatient department with a complaint of swelling on the right side of the face in front of the ear for two years. The swelling was insidious in onset and gradually progressed to reach the present size. There was no history of pain, fever, difficulty in swallowing, or any discharge from the swelling. There were no other swellings present anywhere else in the body. There was also no history of trauma or any previous surgeries reported in the facial region. On examination, there was a localized ovoid swelling in the right preauricular region. The swelling was 6×8 cm in size and extended around 2 cm below the lobule of the right ear. There was no lifting of the ear lobe and the colour over the swelling was of normal skin colour with no surface discharge (Figures 1 and 2). On palpation, the swelling was soft in consistency, nontender, and nonpulsatile and was movable below the skin. Intraorally, there was no swelling present and multiple teeth were missing and mobility in tooth numbers 45, 46, and 47 was present ( Figure 3). Ultrasound was carried out and it showed hyperechoic cystic lesion in the right parotid region measuring 4.2 × 6.1 cm. There was no vascularity in the lesion and no evidence of calculi in the duct or glands. So a benign parotic cystic salivary gland lesion was given as a diagnosis. Patient underwent surgical intervention and superficial parotidectomy was carried out. The cyst was removed in toto and gross examination revealed a globular mass measuring 4.5 × 6 cm in size and cut surface yields a pultaceous material ( Figure 4). Sections were made and histopathological examination revealed stratified squamous epithelium with an intraluminal laminated keratinized material confirming the diagnosis of epidermal cyst in the right parotid gland (Figures 5 and 6). Post operatively the healing was uneventful Discussion Epidermal cysts are common skin lesions that consist of epithelial lined cavities which are filled with viscous or semisolid epithelial degradation products [8]. Epidermal cysts of the oral cavity are a very rare entity and only 1.6-6.9% of all epidermal cysts are thought to be located in the oral cavity [9]. 
Epidermal cysts usually occur secondary to obstruction, while dermoid cysts arise from developmental epithelial remnants or are secondary to traumatic implantation of epithelial fragments [10]. Epidermal cyst of the parotid gland is a very rare benign cystic lesion and is seen in young to middle-aged adults [6]. The exact histogenesis of salivary epidermal cyst is uncertain, but it may have arisen from developmental branchial pouch analogue epithelium, which can occur in salivary gland [11], or could be due to obstruction of a salivary duct within the substance of the gland, leading to an epithelial-lined cavity filled with viscous semisolid epithelial degradation product [3], as seen in our case. The cysts clinically are painless swellings without any attachment to the overlying skin or involvement of the facial nerve [6]. If the cyst stays for a longer time, it might get infected, forming sinuses or fistulas [3]. The different causes of swelling in the parotid region may include branchial cleft cyst, which is "congenital", or may be "acquired" due to inflammation, obstruction, neoplasm, calculi and trauma [6]. Also, if it occurs in the submandibular region, it can be mistaken for salivary gland abscess, neoplasm, tuberculous lymphadenitis, metastatic node, or any cyst [1,12]. The diagnosis can be proven by various investigations like FNAC, ultrasound, and CT [2,13]. The diagnosis of the cystic lesion is challenging due to difficulty in determining the benign or malignant processes. Malignant lesions are frequently suspected when there is a rapid enlargement with associated lymphadenopathy or facial nerve paralysis [6,14]. The treatment is surgical excision of the cyst. Care should be taken not to rupture the cyst, which can lead to postoperative inflammation, and also to preserve the vital structures during surgery [3]. Histopathological examination of the cyst is required for confirmation of diagnosis. Histologically, epidermal cyst has a stratified squamous epithelial lining and is usually filled with cheesy material or keratin. But a dermoid or epidermoid cyst contains skin adnexa or other epidermal structures like sebaceous gland or hair follicle. Implantation dermoid is not derived from epidermal appendages and may contain a foreign body [9], even though it appears very similar to an epidermoid cyst. Recurrence is very rare.
Conclusion
Epidermal cysts of parotid gland origin are extremely rare and a diagnostic challenge, but still, epidermal cysts should be considered as a differential diagnosis in cases of painless, long-standing enlargement of the parotid gland which is soft in consistency.
2018-04-03T02:35:07.773Z
2015-01-06T00:00:00.000
{ "year": 2015, "sha1": "aee409f4c6c03a51c1190a6ecd3dad23e6c567ca", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/crid/2015/856170.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aee409f4c6c03a51c1190a6ecd3dad23e6c567ca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221802436
pes2o/s2orc
v3-fos-license
Numerical Methods to Compute Stresses and Displacements from Cellular Forces: Application to the Contraction of Tissue We consider a mathematical model for wound contraction, which is based on solving a momentum balance under the assumptions of isotropy, homogeneity, Hooke's Law, infinitesimal strain theory and point forces exerted by cells. However, point forces, described by Dirac Delta distributions lead to a singular solution, which in many cases may cause trouble to finite element methods due to a low degree of regularity. Hence, we consider several alternatives to address point forces, that is, whether to treat the region covered by the cells that exert forces as part of the computational domain or as 'holes' in the computational domain. The formalisms develop into the immersed boundary approach and the 'hole approach', respectively. Consistency between these approaches is verified in a theoretical setting, but also confirmed computationally. However, the 'hole approach' is much more expensive and complicated for its need of mesh adaptation in the case of migrating cells while it increases the numerical accuracy, which makes it hard to adapt to the multi-cell model. Therefore, for multiple cells, we consider the polygon that is used to approximate the boundary of cells that exert contractile forces. It is found that a low degree of polygons, in particular triangular or square shaped cell boundaries, already give acceptable results in engineering precision, so that it is suitable for the situation with a large amount of cells in the computational domain. Introduction Wound healing is a complicated, but crucial biological mechanism. In this manuscript, we consider wound healing after skin injury. Since severe (burn) injuries involve a considerable loss of soft tissue, secondary healing takes place. It involves the formation of a blood clot, in case of a cutaneous wound, the regeneration of collagen (extracellular matrix), and revascularisation (which is the re-establishment of a small blood vessel network); see [1] for a biological overview. One of the side effects of secondary healing that follows after a serious skin trauma, is skin contraction. Skin contraction takes place as a result of mechanical, pulling forces that are exerted by the cells (i.e. mainly fibroblasts and myofibroblasts) that are responsible for the regeneration of collagen [2]. Contractions can result in a significant, temporary, or even permanent decrease of area or volume of the damaged tissue. Reductions by 5-10 % of the original wound area have been observed in human skin and in mammalian skin of rodents, even larger reductions have been observed. Such a reduction of skin area or volume leaves residual stresses and strains in the newly repaired skin, as well as in its direct surroundings. This may cause discomfort or even painful sensations to the patient and in extreme cases, contractions may lead to dysfunctionalities of joints. If a contraction is so extreme that the patient develops a disability, then the contraction is referred to as a contracture. For many of the biological mechanisms that take place during wound healing, mathematical models have been developed. The current manuscript focusses on the formation of a contraction post wounding. Fibroblasts enter the wound site as a result of chemotaxis due to the TGFbeta gradient. Next to the regeneration of collagen, fibroblasts also exert pulling forces to their immediate environment [3]. 
In some cases, due to being triggered by the high concentration of TGF-beta, fibroblasts differentiate to myofibroblasts, which are known to exert even larger forces than fibroblasts. These larger pulling forces result into the contraction of the tissue around the injury towards the wound centre [4][5][6]. In the literature, several attempts to model the contraction phenomenon can be found [7][8][9][10][11]. The current manuscript focusses on hybrid models for simulating wound contraction in a small scale, where we consider cells as individual entities. We will consider point forces for modelling the balance of momentum, respectively. The modelling framework will entail Dirac Delta functions (distributions), where these pulse-like forces will lead to singularities of the solution in terms of a lower (local) degree of regularity, even such that the solution no longer falls within the finite-element space in which one looks for the solution. Some of the issues have been treated in [12], [13] and [14], regarding well-posedness and finite-element solutions. The treatment of momentum using point forces that we consider in the current paper was developed in [15], [7] and [8]. The quest of several alternative methods is motivated by finding ways to improve accuracy, and by the need of efficiency to simulate the mechanical processes occurring in the skin after a serious (burn) trauma. There are different approaches that treat point forces on the boundary of a cell. One may include the region covered by the cell as part of computational domain. This idea develops into the immersed boundary approach. On the contrary, the 'hole approach', is based on excluding the cell from the computational domain and treat the cell forces as a boundary condition. In this paper, we will focus on the balance of momentum where inertia is neglected and where we assume Hooke's Law to be satisfied. Further, we will use the infinitesimal strain approach. To the best of our knowledge, this paper is the first study that assesses the relation between the 'hole approach' and the immerse boundary approach both analytically and computationally. The paper is structured as follows. In Section 2, we will discuss the singularity problem occurring in the solution of partial differential equations. Section 3 investigates the 'hole approach' as an alternative to the immersed boundary method, and consistency between these approaches is verified. For a large number of cells in the computational domain, various polygonal approximations of the cell boundary are discussed. In Section 4, we compare the immersed boundary approach to the 'hole approach' and show the results from the polygonal cell approach using various polygonal degrees. Finally some conclusions are presented. Boundary Value Problems with Point Source From the definition of the Dirac Delta function, it immediately follows that there is a singularity in the solution to the partial differential equations(PDEs) in some cases. This singularity causes that the solution is irregular and even unbounded if the dimensionality exceeds one. If the PDEs are solved in an infinite domain with Dirac Delta distributions, the solution is known as Green's function. Inspired by this, hereby, we use the Green's function as an intermediate to determine whether there is a singular solution in a given finite domain. In the following contents, we will investigate the solutions in Laplacian equation and elasticity equation respectively. Proof. 
Considering Laplacian equation with Dirac Delta function in an infinite region the solution to which is known as the Green's function iŝ Denote v = u −û and then u is extracted as u = v +û. Combining Eq (2.1) and Eq (2.2), a new boundary value problem is derived: The weak form of (BV P 1 ) is for all φ ∈ H 1 (Ω). Note that the solution of v is classic, which is a sufficient condition that v is in H 1 space. However, the Green's function is not lying in H 1 , since 0∈Ω ∇û 2 dΩ → ∞ regardless of the dimensions d > 1. Since u =û + v, andû / ∈ H 1 (Ω), it immediately follows that u / ∈ H 1 (Ω). To simplify the equation with E = 1 here, the equations above can be combined to Laplacian equation in one dimension: which contains a solution in H 1 (Ω). For dimensions above one, unfortunately, we have found the Green's function in three dimensions in [16]. Therefore, the theorem only states the situation in three dimensions. Given an open bounded domain 0 ∈ Ω ⊂ R 3 , and the boundary value problem below: where the strain tensor and stress tensor are defined as respectively. Then there does not exist a solution u ∈ H 1 (Ω) such that u can solve (BV P 3 ). Proof. From [16], the Green's function in three dimensions is where µ and ν is the second Lamé parameter and the Poisson ratio, and i, j present different coordinates. Further, δ ij represents the Kronecker Delta function. The displacement vector of each coordinate can be expressed bŷ (2.10) Thus, similarly as before, letting v = u −û, then the problem becomes in Ω, σ(v) · n + κv = −(σ(n ·û) + κû), on ∂Ω. ∂û i (x) ∂x j 2 is infinite over the domain Ω containing the original point. Here, we will calculate the integral of ∂û x (x) ∂x 2 as an example: Then we rewrite the equation with spherical coordinates as Integrating with respect to r and noting that the inferior of the integral is 0, then where K i (φ, θ), i = 1, 2 is the expression of φ and θ. For other derivative parts, they end up with the same situation in Eq (2.12), that is, for every part of integral 0∈Ω ∇û dΩ, the integral does not exist. Hence, it can be concluded that the Green's function in isotropic open bounded domain is not in H 1 (Ω), which leads to the consequence that the solution to (BV P 3 ), expressed by u = v +û, is not in H 1 (Ω) either. Remark 2.2. Theorems 1 and 2 can also be proved for the case of homogeneous Dirichlet boundary conditions. Mathematical Models of Point Forces in Wound Healing 3.1 The Immersed boundary method in R 2 The (myo)fibroblasts exert pulling forces on their immediate surroundings in the extracellular matrix. These forces are directed towards the cell centre and they cause local displacements and deformation of the extracellular matrix. The combination of all these forces cause a net contraction of the tissue around the region, where the fibroblasts are actively exerting forces. The fibroblasts, which are responsible for the regeneration of collagen, enter the wound area after serious trauma due to chemotaxis. Since after restoration of the collagen, the fibroblasts die as a result of apoptosis (programmed cell death), the forces that they exert on their environment disappear. If the deformations are relatively large, then residual stresses remain and permanent displacements remain. Therefore, we consider two types of forces: temporary forces (f t ) and plastic forces (f p ). Here, we will only treat the temporary forces and the way we treat them has been formalized by [15], [7] and [8]. 
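As a brief numerical aside on the regularity argument of Section 2 (not part of the original derivation): for the 3-D Laplace Green's function û(x) = 1/(4π|x|), the Dirichlet energy over a spherical shell ε < |x| < R equals (1/ε − 1/R)/(4π), which is unbounded as ε → 0. The minimal sketch below confirms that growth, and hence why û fails to belong to H¹ near the source; the radii are arbitrary illustration values.

```python
import numpy as np

def dirichlet_energy_shell(eps, R=1.0, n=400_000):
    """Midpoint-rule approximation of the integral of |grad u|^2 over the
    spherical shell eps < |x| < R for u(x) = 1/(4*pi*|x|).  Radially,
    |grad u| = 1/(4*pi*r**2) and dV = 4*pi*r**2 dr, so the integrand is 1/(4*pi*r**2)."""
    edges = np.linspace(eps, R, n + 1)
    r = 0.5 * (edges[:-1] + edges[1:])          # midpoints of the radial cells
    dr = np.diff(edges)
    return float(np.sum(dr / (4.0 * np.pi * r**2)))

for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    exact = (1.0 / eps - 1.0) / (4.0 * np.pi)   # analytic value (1/eps - 1/R)/(4*pi)
    print(f"eps = {eps:7.0e}   numeric = {dirichlet_energy_shell(eps):10.2f}   exact = {exact:10.2f}")
```

The same kind of radial estimate underlies the elasticity case, where the gradient of the Green's function also decays like 1/|x|².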
For the temporary force of cell i, the cell boundary Γ i is divided into line segments in the two-dimensional case. We assume that an inward directed force is exerted at the midpoint of every line segment in the normal direction to the line segment. The total force is a linear combination of every force at every segment. Hence, at time t, the total temporary force is expressed by where T N (t) is the number of cells at time t, N i S is the number of line segments of cell i, P (x) is the magnitude of the pulling force exerted at point x per length, n(x) is the unit inward pointing normal vector (towards the cell centre) at position x, x i j (t) is the midpoint on line segment j of cell i at time t and ∆Γ i,j N is the length of line segment j. Here, x i (t) is a point on the cell boundary of cell i at time t. The equation for conservation of momentum over the computational domain Ω is given by: In the above equation inertia has been neglected. We treat the computational domain as a continuous linear isotropic elastic domain. Therefore, we use Hooke's Law: where E is the Young's modulus of the domain, ν is Poisson's ratio and is the infinitesimal strain tensor, that is, The above PDE provides a good approximation if the displacements are relatively small. Further, we define the inner product of two second-order n × n tensors (matrices) A and B as follows: where a ij and b ij are the entries of A and B, respectively. On the outer boundary ∂Ω, we use the following Robin boundary condition where κ is a positive constant representing a spring force constant between the domain of computation and its far away surroundings, and u denotes the displacement vector. Note that if κ → ∞, then u → 0 which represents a fixed boundary, and κ → 0 represents a free boundary in the sense that no external force is exerted on the boundary. For the case of only one cell i in the computational domain, we need to solve the following boundary value problem: in Ω, σ · n + κu = 0, on ∂Ω. (3.5) Let V (Ω) be a completion of the Hilbert space H 1 (Ω) with smooth functions [14], then the corresponding weak form of Eq (3.5) on Ω is The 'Hole Approach' in R 2 Since the force is actually applied on a continuous curve, rather than working on the complete computational domain, we remove the region occupied by the cell. It leaves the computational domain with a hole that is occupied by the cell. Then the force on the cell boundary is modelled by a boundary condition on the boundary of the hole (cell). Therewith, we have boundary conditions on the external boundary, as well as a force boundary condition on the boundary of the cell. The boundary value problem we are working on becomes where n(x) is the unit normal vector pointing out of Ω\Ω C , Ω is the complete computational domain including the cell and extracellular regions, Ω C is the region occupied by the cell, and ∂Ω C is the boundary of the cell. The corresponding weak form for Eq (3.6) is Note that to this problem, it can be proved by combining Lax-Milgram's lemma with Korn's Inequality that a unique solution in H 1 (Ω) exists. In the analysis to come, we assume that the cell stays at the same position and keeps the same shape, hence we have x(t) = x. Proposition 3.1. Let u H and u I , respectively, be solutions to the 'hole approach' (see Equation (3.6)), and to the immersed boundary approach (see Equation (3.5)). Let ∂Ω C denote the line over which internal forces are exerted, and let ∂Ω be the outer boundary of Ω. Then as ∆Γ −→ 0, Proof. 
To prove that the above equation holds true, we integrate the PDE of both approaches over the computational domain Ω. For the immersed boundary approach, we get then after applying Gauss Theorem in the LHS and simplifying the RHS, we obtain By substituting the Robin's boundary condition and sending N i S → ∞, i.e. ∆Γ i,j → 0, the equation becomes Subsequently, we do the same thing for the 'hole approach'. Then, we get − Ω ∇ · σdΩ = 0, and we apply Gauss Theorem: Using the boundary conditions, we get which is exactly the same as Eq (3.7). Hence we proved that Hence, the two different approaches are consistent in the sense of global conservation of momentum and therefore the results from both approaches should be comparable. The only difference between the two approaches is that the 'hole approach' does not consider the stiffness of the cell, since the cell is treated as a hole in the domain. The immersed boundary method contains the internal stiffness of the cell. Therewith, if the cell stiffness is sent to zero, the two formalisms should deliver the same results. Hereby, we are going to prove this transition mathematically and we will see that numerical computations indeed confirm this behaviour. Before we state and prove a proposition that asserts the transition, we introduce the following energy norm: , where κ is a positive constant. Note that the energy norm is a proper norm according to the definition of norm in [17]. Proposition 3.2. Numerical approximations based on simplicial, continuous finite-element basis functions, to the weak forms of the immersed boundary approach in Equation (3.5) and the 'hole approach' in Equation (3.6), yield the same results upon using the following stiffness for the immersed boundary approach where E is a constant, Ω C is the cell region, Ω\Ω C is the extracellular region and Ω C is surrounded by Ω. Proof. Due to the symmetry of the tensor (φ), ∀φ, it follows that Hence, rewriting the weak form of the immersed boundary approach taking N i S → ∞, i.e. ∆Γ i,j → 0, (W F I ) becomes Substituting Eq (3.8) into the above weak form, implies that Hence, the weak form for the adjusted immersed boundary approach, denoted by (W F I ) is given by: Recalling the weak form of the 'hole approach': We are aware that due to the singularity caused by Dirac Delta distributions in the immersed boundary approach, the solution is no longer in H 1 (Ω). Therefore, following the procedure of discretizing the continuous function space in [12], we approximate the solution by the finite element space V h (Ω) ⊂ H 1 (Ω), such that the solution of (W F I ) can be found in this subset that consists of simplex-based basis functions that are continuous. Subsequently, (W F I ) is given by Applying the same discretizing procedure on the weak form of the 'hole approach', we derive the updated weak form as follows: Note that the above weak forms are identical. Next we demonstrate that the solutions are necessarily the same (hence not determined up to a function or a constant). Since we want to prove the consistency of these two approaches, we rewrite u h in ( Since φ h is a test function, which we can choose freely, such that the provided integrals make sense; Since the energy norm is a proper norm, it can be concluded that v h = 0, in Ω. Hence, we have proved u h I = u h H in Ω. In Proposition 3.2, we have proved the convergence between the finite element solutions to the adjusted immersed boundary approach and the 'hole approach'. 
Next to it, we are going to prove the convergence between the finite element solution to the adjusted immersed boundary approach and the (exact) solution to the 'hole approach'. Polygonal Cell Approach If we consider a domain in which many cells are moving and exerting forces, then the aforementioned two approaches will be very expensive from a computational point of view. Therefore, we will simplify the cell boundary to a low-order polygon, such as to a triangle or square. Furthermore, if the cell size is smaller than the mesh size, then we cannot break the cell boundary into finite segments by the mesh for both approaches. Inspired by finite boundary segments which actually build up a polygon, we can simulate the circular cell by different kinds of polygons. Eq (3.5) is still used as the basis for the computation of the forces that are exerted by the cells. However, we study the use of just a few boundary segments per cell in such a way that the total force exerted by the cell is the same regardless the order of the polygon. The cells exert forces on their immediate environment and hence all the points of the computational domain will be displaced. The displacement vector will induce a contraction of the near cell region. This contraction is quantified by the area of the near-cell region. According to [18], for each nodal point, the new position is where X stands for the initial position and x(t) is the position at time t. Defining the gradient matrix of displacement J = ∇ X u, the matrix notation can be worked out as dx = ∂x ∂X dX = (I + ∇ X u)dX = (I + J )dX, (3.10) where ∂x ∂X is the Jacobian matrix. The volume can be calculated by: dx = det(I + J )dX, (3.11) that is, theoretically where Ω 0 is the initial domain. However, to compute the area in Eq (3.12) numerically, we need to apply quadratures like Newton-Côtes quadrature or Gaussian quadrature, which increase the computation expense if we want to track the area at each iteration. Thus, to improve the computational efficiency, another possibility to compute the area of Ω is based on connecting all the nodal points on the boundary to build up a polygon. Then this polygonal area is an approximation of the deformed area since the displacement of each nodal point is available. To calculate the polygon area, one can use shoelace method derived by [19] in 1769. Suppose we have a polygon with n vertices, then the area is calculated by where (x i , y i ), i = 1, . . . , n is the coordinate of vertex i and (x n+1 , y n+1 ) = (x 1 , y 1 ). Note that the vertices should be sorted in counter clockwise or clockwise direction. To have a better insight of how these different computational approaches affect the cell and the near-cell region, we calculate the reduction of the area with respect to the initial area. If we denote the area after deformation by A Ω and the original area is A 0 Ω , then the ratio is calculated by (3.14) 4 Numerical Results The Immersed Boundary Approach and The 'Hole Approach' We use the finite element method to analyse the performance of the immersed boundary approach and 'hole approach'. Since we are interested in the behaviour of the solution in the vicinity of the positions where point forces are exerted, we introduce a subdomain Ω w near the locations where the point sources are exerted. This near-by subdomain, as well as the entire computational domain and the circular line where the forces are exerted are shown in Figure 4.1. 
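Before turning to the comparisons, the polygon-area bookkeeping described in the polygonal cell approach above is only a few lines of code. The sketch below is an illustration under our own naming, not the authors' implementation: it applies the shoelace formula to the boundary nodes, before and after adding the nodal displacements, and reports the area of the deformed region relative to the initial one, in the spirit of the ratio defined above.

```python
import numpy as np

def shoelace_area(pts):
    """Shoelace formula for a simple polygon whose vertices pts (shape (n, 2))
    are ordered clockwise or counter-clockwise."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def relative_area(boundary_X, boundary_u):
    """Area of the deformed region relative to the initial one:
    boundary nodes move from X to X + u(X)."""
    return shoelace_area(boundary_X + boundary_u) / shoelace_area(boundary_X)

# toy example: a unit square whose boundary nodes are pulled 5% towards the origin
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
u = -0.05 * X
print(relative_area(X, u))    # 0.9025, i.e. a 9.75% reduction of area
```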
The meshes for the two approaches are the same, except for the use of a 'hole' in the hole-approach. The circular curve where the forces are applied models a cell boundary, with its inner region modelling a myofibroblast that exerts forces on its direct environment. The values of the parameters used in this simulation have been listed in Table 4.1. Note that all these parameter values are only for testing the sensitivity of the approaches. We compare the results from the immersed boundary approach to the results from the 'hole approach'. Figure 4.2 displays the initial cell in blue and the nearby region, which is included in the red square, as well as its deformations in black curves. It can be seen that there is a large difference between the results from the two approaches. In particular, the magnitude of the displacement from the 'hole approach' is more than 13 times as large as the displacement from the immersed boundary approach. This discrepancy is caused by the interaction with the region inside the circular cell, which is incorporated in the immersed boundary approach and not in the 'hole approach'. Therefore, we adjust the stiffness of the region inside the circular cell according to Eq (4.1), in which γ is a small positive constant. In what follows, the adjusted immersed boundary approach uses γ = 10⁻⁵ unless stated otherwise. Then we redo the simulations and plot the results in Figure 4.3.
(Figure 4.3 caption: the adjusted immersed boundary approach, Eq (4.1), and the 'hole approach', Eq (3.6), computed on the same mesh structure apart from the hole and with the same parameter values of Table 4.1; the black line shows the deformed cell and Ω_w, the other coloured lines the original configuration.)
The results of area and total strain energy in the subdomain Ω_w have been documented in Table 4.2, and as a result of the use of Eq (3.8), it can be seen that the 'hole approach' and the adjusted immersed boundary approach are consistent, since the area reductions are less than a percent. Further, it can be observed that the order of accuracy of the 'hole approach' is slightly better, whereas the adjusted immersed boundary approach is about a factor of four more economical from a computational efficiency point of view. Due to the multiple possible choices of γ, the value of γ determines the accuracy and convergence of the adjusted immersed boundary approach. In this manuscript, to investigate the effect of γ, it varies from 10⁻⁶ to 10⁻³ with steps of a factor of 10. In Table 4.3, besides the area reduction, the convergence rate of the L²-norm of the solution and the total strain energy in Ω_w are shown. It can be concluded that the value of γ does have a modest impact in the current range, and the influences on the various categories are distinct. In other words, for the area reduction, it is verified that the smaller the value of γ, the closer the result is to the one of the 'hole approach'. Nevertheless, a 'bell-shaped' behaviour appears for the convergence rate of the L²-norm of u, although the differences are not strikingly large. Further, we observed that, from the perspective of the strain energy in Ω_w, the larger γ is, the better the convergence rate.
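For completeness, the convergence rates reported in Table 4.3 are of the usual 'observed order' type, estimated from errors on successively refined meshes. The sketch below shows a generic way to compute such a rate; the mesh sizes and errors are made-up illustration values, not results of this study.

```python
import numpy as np

def observed_order(h, err):
    """Least-squares slope of log(err) versus log(h): the observed order p in
    err ~ C * h**p, estimated from a sequence of mesh sizes and errors."""
    slope, _intercept = np.polyfit(np.log(np.asarray(h)), np.log(np.asarray(err)), 1)
    return slope

h   = [0.4, 0.2, 0.1, 0.05]              # hypothetical mesh sizes
err = [3.1e-2, 8.2e-3, 2.1e-3, 5.4e-4]   # hypothetical L2 errors of the displacement
print(f"observed order of convergence: {observed_order(h, err):.2f}")  # about 2 for this toy data
```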
Polygonal Cell Approach
In the applications that we study, we are interested in multiple cells that are migrating through the computational domain. In typical situations, the cell size is much smaller than the domain size, and the cell size could even be smaller than the element size from the discretization. Therefore, it is expensive from a computational point of view to divide the cell boundary into many mesh points and line segments in these applications. Hence, we are interested in the numerical accuracy if each cell is approximated by a simple polygon like a triangle or square instead of a high-order polygon. In the presence of multiple small cells, we will study the impact of the polygonal order on the numerical results. The values of the input parameters are given in Table 4.4. In the multi-cell simulations, we locate the cells according to a Poisson point process with rate parameter λ, where we choose λ = 15 from [20]. The cell radius has been scaled down to 0.1 of the radius in the previous calculations. The computational domain and the near-cell region are the same as in the earlier simulations. In order to visualize the deformation of the cell and the subdomain Ω_w, we set the magnitudes of the forces exerted by the cells to 10. In the simulations, we use the immersed boundary method with low-order polygonal approximations of the circular cells. We investigate the performance in terms of the numerical solution with respect to the degree of the polygons. An example of a simulation is shown in Figure 4.4, where multiple cells are shown as circles, and the contraction of the region is shown. The cell size is smaller than the mesh size, so we applied the polygonal cell approach here to investigate the area reduction of the region. The quantities that we investigate are the area reduction due to the pulling forces exerted by the cells and the computation time. In all the calculations where we vary the degree of the polygonal approximation of the cells, we use the same number of cells and the same positions of the centres of the cells. Upon increasing the degree of the polygon, one gradually converges to a circle. In the current computations, we use a maximum number of eight nodes on the cells, that is, we use octagons as the highest polygonal order. The smallest order of polygonal approximation is the triangular shape. We selected the polygons such that the area of each cell, as well as its centre, is the same in all simulation runs. Figure 4.5 displays the computation time and relative reduction of area as a function of polygonal degree with multiple cells. A lower order of polygonal approximation has the advantage that the computation time can be reduced due to a lower number of function evaluations for the point forces. In the computations, it turned out that the use of triangles gave a reduction of computation time of roughly fifty percent with respect to the octagonal representation of the cell boundaries, according to the histogram in Figure 4.5. The dashed line in Figure 4.5 shows that a triangle or square representation of the circles already reproduces the results of the octagonal representation very well, since the fluctuations are tiny. In summary, due to the efficient computation time and the good reproduction of the octagonal results in terms of area reduction, we recommend approximating the cell boundary by a triangle or square if a large number of small cells is used.
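To make the polygonal approximation concrete before the concluding remarks, the sketch below (an illustration under our own naming, not the authors' code) replaces a circular cell of radius r by a regular N-gon and returns the quantities that enter the point-force sum of the immersed boundary method: the segment midpoints, the inward unit normals and the segment lengths. The force magnitude is rescaled so that the total applied force stays the same for every polygonal order, as stated in the study.

```python
import numpy as np

def polygonal_cell_forces(center, radius, n_sides, P):
    """Approximate a circular cell boundary by a regular n-gon.  For each edge,
    return the midpoint, the inward unit normal (towards the cell centre), the
    edge length and the force vector P_eff * n * dGamma applied at the midpoint.
    P is the force per unit length of the circular boundary; P_eff rescales it so
    that the total force magnitude is independent of the polygonal order."""
    theta = 2.0 * np.pi * np.arange(n_sides) / n_sides
    verts = center + radius * np.c_[np.cos(theta), np.sin(theta)]
    nxt = np.roll(verts, -1, axis=0)
    midpoints = 0.5 * (verts + nxt)
    lengths = np.linalg.norm(nxt - verts, axis=1)
    normals = center - midpoints
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # unit, pointing inward
    P_eff = P * (2.0 * np.pi * radius) / lengths.sum()
    forces = P_eff * normals * lengths[:, None]
    return midpoints, normals, lengths, forces

mid, nrm, dG, f = polygonal_cell_forces(np.array([0.0, 0.0]), 0.1, 3, P=1.0)
print(f.sum(axis=0))   # net force is (numerically) zero by symmetry, for any n_sides
```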
Discussion and Conclusions
In this paper, we mainly discussed different approaches to solve linear elasticity problems with point forces that are exerted on cell boundaries. In order to simulate wound contraction, it is crucially important to solve the equations for the balance of momentum. The body forces are determined by (myo)fibroblasts that exert forces on their immediate extracellular environment. Since we model the forces by the use of point forces, which means that the solution is not in the H¹ Sobolev space for dimensions exceeding one, we analysed the relation between the immersed boundary approach and the 'hole approach', and it has been computationally illustrated that the transition from the immersed boundary approach to the 'hole approach' has a continuous nature with respect to the elasticity in the cellular region. We proved that the finite-element approximations of the two approaches are the same if the stiffness in the cell is neglected. For large numbers of (migrating) cells, it becomes very beneficial to reduce the polygonal order of the representation of the cell boundary. The results indicate that an approximation of a cell boundary by a triangle or square is already sufficiently accurate, and the triangular representation is the least time-consuming. Furthermore, computing the area of the subdomain by connecting all the boundary vertices into a polygon and applying the shoelace method is the most efficient procedure.
2020-09-21T01:01:08.812Z
2020-09-18T00:00:00.000
{ "year": 2020, "sha1": "824d600e2fbe477ad5ad59982d46e0514bbe36a8", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.cam.2021.113892", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "824d600e2fbe477ad5ad59982d46e0514bbe36a8", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Mathematics", "Computer Science", "Physics" ] }
234619899
pes2o/s2orc
v3-fos-license
Wave Model for the Design of Sustainable Coastal Infrastructures at an Industrial Site in Tuban, East Java
This study focuses on the use of a computational model in the design of a breakwater structure, which aims to determine the propagation pattern of the long-term ocean waves, in order to understand their propagation from the deep waters, and to determine the distribution of their energy around a proposed breakwater construction site. The method used is computational simulation of the wave model using the 2D Boussinesq Wave (BW) Module of MIKE21 software. The simulation used an incoming wave 4.6 m high, which corresponds to the 100 years return-period value. The results show that the existing breakwater layout can protect the harbour by reducing the incoming waveheight by up to 75%. At the proposed design condition, the propagation pattern of the incoming wave slightly differs from the existing condition. The presence of the slopes on both sides of the channel changes the wave direction outwards due to shoaling effects, and consequently, a larger concentration of wave energy occurs at some parts of the proposed breakwater design. Results from the model are useful for the design of the new breakwater structures, which are designed according to the predicted wave energy distribution.
Background
An industrial facility in Tuban, East Java, is facing problems with the shoaling of its navigation channel and the damaged breakwater which protects its harbour and ports. Along with the planned revitalisation of the PT Trans-Pacific Petrochemical Indonesia (TPPI) ports and navigation channel, a new breakwater structure is to be developed to strengthen an existing and aging breakwater consisting of a cellular cofferdam, as illustrated in Figure 1. It is desirable to understand the distribution of wave forces acting on the components of the new structure, so that an efficient and safe design of the new structure can be achieved. The presence of the proposed deeper navigation channel is also important to investigate, with respect to its effect on the incoming wave behaviour and the impact on the harbour and the ports in the area [1,2,3]. As one of the considerations for the design of the new TPPI port rubble mound breakwater, the magnitude of the incident wave must be determined, especially in extreme conditions [4]. In this case, wave propagation from the deep sea towards the coast, especially the TPPI and surrounding areas, must be modeled. The wave propagation model must be able to show the phenomena of shoaling, refraction of the direction of propagation due to variations in bathymetry, diffraction by natural barriers or structures (buildings), and reflection by coastlines or structures (buildings). Numerical models are powerful tools to assist in estimating the wave transformation and deformation in engineering practice [5]. Mathematical or numerical modeling of ocean waves has been widely used in the design of coastal structures [6,7]. The Boussinesq model, in particular, was used to characterize the long wave agitation in ports [8]. This study aims to model the long-term (100 years) return-period ocean waves, to understand their propagation from the deep waters to the coasts, and to determine the distribution of wave energy in terms of significant wave heights around the planned breakwater construction site, which is of rubble mound type [9].
Location of study: TPPI industrial site, Tuban, East Java
The location of the study is situated around PT.
TPPI Tuban, Tuban district, in the northern part of East Java (6.77°S, 111.95°E). The area comprises a foreland (Tanjung Awar-awar), with sandy beaches at some parts, and is characterised by the presence of the TPPI industrial site. The dominant current in the area flows to and from the west-northwest, at an average speed of 0.15 m/s. The dominant wind direction throughout the year is from the east and the west [10]. The seabed of Tuban waters consists mainly of clay, varying from very soft to soft and becoming firm at depth [11]. This study is mainly based on field data and results of numerical modeling. A field survey was conducted at the location in January 2019 [12], while the modeling activities were carried out at the Computing Test Laboratory, Port Infrastructure Technology and Coastal Dynamics (BTIPD)-BPPT in Yogyakarta [13]. The domain (area) of the study is shown in the following figure. MIKE 21 BW is capable of reproducing the combined effects of all important wave phenomena of interest in port, harbour and coastal engineering, which include shoaling, refraction, diffraction, wave breaking, bottom friction, moving shoreline, and partial reflection and transmission. A major application area of MIKE 21 BW is the determination and assessment of wave dynamics in ports and harbours and in coastal areas. Applications related to the 2D MIKE21 BW module include determination of wave disturbance caused by wind-waves [14] and propagation of long waves into a port [8]. Misra (2011) demonstrated the use of MIKE21 BW to investigate wave interaction with deep-draft navigation channels [2].
Work Stages
Methods of the study include field data collection and analysis, such as sea water level, bathymetry and coastline measurements. Wind and wave data are gathered from other sources. Computational simulation of the wave model is carried out using the 2D Boussinesq Wave (BW) Module of MIKE21 software from DHI (Danish Hydraulic Institute). The existing condition and one layout of the proposed breakwater design were tested, each of which was simulated and analysed against a 4.6 m high incoming wave from the Northwest, which corresponds to the 100 years return-period value. The study stages are summarised as:
- secondary data collection of regional wind and wave data
- secondary data collection of bathymetry
- digitization of coastline data from Google Earth
- field survey: bathymetry and tide data of the TPPI area
- sea wave data analysis and determination of wave heights and wave periods for return periods of 100 years
- simulation using the Boussinesq Wave (BW) module of MIKE21 software from DHI
- analysis of results
Governing Equations
The wave propagation in the MIKE21 Boussinesq Wave (BW) model is described by the Boussinesq-type equations documented in [14], in which the two Boussinesq dispersion terms are defined by Eqs (4) and (5); the subscripts x, y and t denote partial differentiation with respect to space and time, respectively.
2.3. Model setup and scenarios
The setup and scenarios to model the incoming waves at extreme conditions (100 years return period) were designed to ensure that the model can show the phenomena of shoaling, refraction in the direction of propagation due to variations in bathymetry, diffraction by natural barriers or structures (buildings), and reflection by coastlines or structures [14]. The following points describe the model specifications and modeling scenarios (see Figure 3): a. Model domain is a rectangle measuring 4.2 km x 3.2 km. b.
2 model geometries were tested: current conditions (existing) and developed (design) conditions. The existing geometry includes the current condition of the coastline, the current structure of existing breakwater, and the current condition of bathymetry of the port and navigation channel. The developed (design) condition is the present condition with the addition of a new rubble mound breakwater and dredged navigation channel to -13 m. c. The model domain is discretized (divided) into a computational cell (grid) matrix of 905x675 elements, each sized at 5 meter. d. Bathymetry data from GEBCO and field surveys are interpolated in the computational domain. e. The input wave is an incoming 100 years return period wave coming from the Northwest or 315 degrees, which is 4.6 m high and 8.7 sec period [10] as shown in table 1. f. The water level is assumed to be at HWL (high water level) conditions, which is +0.96 meters from MSL (mean sea level) according to survey results by BTIPDP [12]. g. The incoming wave is assumed to occur in the western monsoon (early January), where the dominant direction of the wave is from the Northwest [10]. h. Sponge layer (damping layer) is applied to the outer boundaries (open boundaries) of the model domain, while the porosity layer is defined on the coastline and coastal structures such as breakwater and docks. i. Data extraction and analysis is carried out at several points (locations) so that the wave height at the return-period condition can be determined, as well as the amount of wave energy reduction by the existing coastal structure and the proposed (designed) structures. Figure 4 below shows the variation of water surface elevation at one moment in time, for existing configuration under an incoming extreme wave of 4.6m, which corresponds to the height of the wave that occurs approximately once in 100 years. The wave propagation patterns for existing conditions are shown in the Figure, where the color scale shows the water level. The water level variation in figure 4 shows the propagation pattern of the incoming wave. The presence of refraction (changes in direction) due to variations in bathymetry, as well as reflection by breakwater structures and other structures at the TPPI port can be seen clearly from the results of the modeling. At some locations, especially on the outer side of the breakwater structure, superposition between incoming and reflected waves produces waves that reinforce or reduce each other, depending on the phase difference between the incident wave and the reflected wave. 3.2.Developed (design) conditions The propagation patterns of the 100-yearly wave for the proposed design conditions are shown in Figure 6. The color scale shows the water level. The results show that there is refraction due to variations in bathymetry on both sides of the navigation channel where the is persistent slope starting from the harbour towards the open waters. It also shows the reflection by the breakwater structure and other structures in the TPPI port. In the gap (passageway) between the existing and the new breakwater structures, the wave height gradually decreases. The presence of a sediment control structure also dampens the waves approaching the sea-water intake (SWI) pool. Due to the changes in wave propagation patterns in the proposed (design) condition, especially near the navigation channel, the outline of the toe of the breakwater was modified as described in the picture [9]. 
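The transformation effects listed in the setup above (shoaling and refraction over the side slopes of the dredged channel) can be illustrated, to first order, outside the Boussinesq model. The sketch below is not part of the study and does not reproduce MIKE21 BW; it only applies linear (monochromatic) wave theory, solving the dispersion relation and combining shoaling and Snell's-law refraction coefficients for the 4.6 m, 8.7 s design wave. The 15 m reference depth, the intermediate depths, and the 20° incidence angle are assumed purely for illustration.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def wavenumber(T, h, tol=1e-10, max_iter=100):
    """Solve the linear dispersion relation (2*pi/T)^2 = g*k*tanh(k*h) for k by Newton iteration."""
    omega = 2.0 * np.pi / T
    k = omega ** 2 / G  # deep-water first guess
    for _ in range(max_iter):
        f = G * k * np.tanh(k * h) - omega ** 2
        dfdk = G * (np.tanh(k * h) + k * h / np.cosh(k * h) ** 2)
        k_new = k - f / dfdk
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

def group_velocity(T, h):
    """Group velocity Cg = n * C with n = 0.5 * (1 + 2kh / sinh(2kh))."""
    k = wavenumber(T, h)
    c = 2.0 * np.pi / (T * k)
    n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
    return n * c

def transformed_height(H0, T, h0, h, theta0_deg=0.0):
    """Shoaled and refracted wave height over straight, parallel depth contours:
    Ks = sqrt(Cg0/Cg), Kr = sqrt(cos(theta0)/cos(theta)) with Snell's law sin(theta)/C = const."""
    cg0, cg = group_velocity(T, h0), group_velocity(T, h)
    k0, k = wavenumber(T, h0), wavenumber(T, h)
    c0, c = 2 * np.pi / (T * k0), 2 * np.pi / (T * k)
    theta0 = np.radians(theta0_deg)
    theta = np.arcsin(np.clip(np.sin(theta0) * c / c0, -1.0, 1.0))
    Ks = np.sqrt(cg0 / cg)
    Kr = np.sqrt(np.cos(theta0) / np.cos(theta))
    return H0 * Ks * Kr

# Example: the 100-year design wave (H = 4.6 m, T = 8.7 s) tracked from an assumed
# 15 m reference depth up towards the -13 m dredged channel and shallower water.
for depth in (15.0, 13.0, 10.0, 7.0, 5.0):
    H = transformed_height(4.6, 8.7, 15.0, depth, theta0_deg=20.0)
    print(f"h = {depth:5.1f} m -> H ~ {H:4.2f} m")
```

In the actual simulation these effects, together with diffraction and reflection by the structures, are resolved by the Boussinesq model on the 5 m grid rather than by these closed-form coefficients.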
Figure 9 shows the detailed wave height (Hs) distribution around the head of the proposed rubble mound breakwater. The waves strike the front part of the new rubble mound breakwater structure with different wave heights, as observed at several points. The peak Hs value occurs at the tip, at around 4.9 m. The wave energy, as represented by the wave height, varies along the sides of the breakwater, and thus the minimum strength needed at different parts of the breakwater also varies accordingly. This must be taken into consideration when selecting the appropriate armor unit for a safe and cost-effective breakwater. In other words, the weights and sizes of the armor units can be varied for efficiency. Figure 9. Distribution of significant wave height (Hs) at the head of breakwater. Conclusions It has been shown that the Boussinesq Wave (BW) module of MIKE 21 is capable of providing a good estimate of the propagation pattern and wave energy distribution, which is useful in the design of a coastal structure such as a breakwater. The non-linear BW model has taken into account phenomena such as refraction, diffraction and reflection due to interaction of the waves with the coastline and coastal structures. The distribution of significant wave height (Hs) for the 100-year return-period wave is obtained at the harbor, ports, and around the proposed coastal structure. For an incoming wave 4.6 m high, the existing breakwater can reduce the wave height in the harbor to about 10-15% of the original value. However, the results for the new (proposed) design show that slightly bigger waves penetrate the harbor due to wave energy concentration along the navigation channel, which agrees with the results from previous studies. Wave height around the proposed breakwater varies, with the maximum value of about 5 m occurring at the front part. The sea side of the mid-section is subject to 2.7 to 3 m waves, while on the harbour side the wave height can be as low as 0.5 m. This variation of wave heights around the structure can be used as a guideline in determining the required strength at different parts of the structure. Therefore, modeling of ocean waves has proved to be an essential stage in the design of coastal structures such as breakwaters, in order to achieve an efficient design.
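The observation that armor weights and sizes can be varied around the structure can be made concrete with the Hudson formula, a standard rubble-mound armor sizing relation that is not taken from this paper. In the sketch below, the stability coefficient KD, the rock and water densities, and the structure slope (cot α = 2) are assumptions; only the Hs values echo the distribution reported above.

```python
def hudson_armor_weight(H, KD=2.0, rho_rock=2650.0, rho_water=1025.0, cot_alpha=2.0):
    """Hudson formula: W = rho_rock * g * H^3 / (KD * (Sr - 1)^3 * cot(alpha)), returned in kN.
    H  : design wave height at the structure (m)
    KD : stability coefficient (depends on armor type and breaking conditions)."""
    g = 9.81
    Sr = rho_rock / rho_water
    W = rho_rock * g * H ** 3 / (KD * (Sr - 1.0) ** 3 * cot_alpha)  # N
    return W / 1000.0  # kN

# Illustrative Hs values taken from the reported distribution around the proposed breakwater.
for label, Hs in [("head / front part", 4.9), ("sea-side mid-section", 3.0), ("harbour side", 0.5)]:
    print(f"{label:22s}: Hs = {Hs:3.1f} m -> required armor weight ~ {hudson_armor_weight(Hs):8.1f} kN")
```

Because the required weight scales with the cube of the local wave height, even modest differences in Hs between the head, the sea-side trunk and the harbour side translate into very different armor classes, which is the efficiency argument made in the conclusions.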
2020-10-28T18:33:38.696Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "f41bd41f05c8fbeab35dddc204d3dc23808a04ef", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/1625/1/012049/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "c25037a2728dd0aaea4a14189971188d4b10baef", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Physics", "Engineering" ] }
181564873
pes2o/s2orc
v3-fos-license
Parametric evaluation of shear strength parameters on the stability of cut slope: a case study from Mahabaleshwar road section, India The stability of rock cut slopes depends upon the type of material and the discontinuity attributes and geometry present at any location. Although gravity remains the constant important factor dictating slope failure, other parameters, such as the shear strength and the available shear stress along the slope, also decide the stability of slopes to a great extent. The strength of the material comes from the internal bonding between the mineral grains, the contact between the particles and the ability of the material to respond to the stress conditions. Variation of these material attributes changes the cohesion and angle of internal friction, which constitute the most important properties defining the strength of any material. Rock resists shear stress by these two internal mechanisms. Numerical simulation by the Finite Element Method technique is attempted for assessing the stability of the cut slope. An attempt has been made in this study to document the behavior of the strength of the material in terms of slope stability through a parametric study of cohesion and internal friction. This study was carried out to understand how the factor of safety changes with reference to changes in cut slope height and in the cohesion and internal friction of the discontinuities, which constitute the shear strength of the discontinuities. The study is based on Finite Element Modeling (FEM). From the study it is found that the factor of safety has a strongly proportional relation with cohesion and internal friction but an inversely proportional relation with the height of the cut slope. INTRODUCTION The study area is selected along the national highway NH-72 near Mahabaleshwar, Maharashtra, India. Several unstable slopes along the road section between Poladpur and Mahabaleshwar, which fail mostly in the rainy season, are the study sites for this study. The area comprises basaltic rock of the Deccan Trap with three sets of joints. Rock slopes with very thin to no soil and almost no vegetation were observed along the study area. The stability of slopes in this study is assessed by a parametric study of cohesion and internal friction for rock types of moderate strength and slightly weathered condition. Therefore, generalized values of both parameters are assumed to depict the natural conditions for the mentioned rock types. The area receives heavy rainfall during the monsoon, ranging between 900-1100 mm/year. Numerical simulation of the Mahabaleshwar road cut slope is analyzed in this study, where frequent small-scale landslides occur mainly in the rainy season. The NH-72 is an important link between Poladpur and Mahabaleshwar, where thousands of commuters travel every day. The blocking of roads due to failed slopes causes huge distress. To tackle this problem, a numerical modeling tool, the finite element modeling (FEM) approach, is utilized in this study to evaluate the stability of slopes under varying conditions of the two internal properties of the material. FEM is considered for the present purpose of study because the material considered is devoid of any discontinuities and anisotropy resulting from mineral composition, and the stability of a jointed rock mass depends upon the cohesion and internal friction of the joints of the rock mass (Panthee et al. 2016). Frequent field visits were conducted throughout the study to collect samples and to properly define the slope attributes.
A set of geotechnical tests (shear box and tribometer), including determination of cohesion and internal friction on properly cut samples, was also conducted in the laboratory to gauge the prevailing condition of the rocks. JRC values were collected from the field. METHODOLOGY Stability of slopes has been a great area of concern these days. From building homes in mountainous regions to road cut slopes along highways, it is a cause of concern for people and property. Rainfall results in the saturation of the material and disrupts its coherence. Cohesion is the measure of the internal bonding of the rock material, while internal friction is the result of inter-particle contact. Rock resists shear stress by two internal mechanisms, cohesion and internal friction. One of the main parameters defining the stability of slopes is the shear strength of the material concerned, which also depends upon the confining stresses. Cohesion and internal friction are related to the shear strength of the material through Coulomb's law, which states that τf = c + σn tanφ, where τf is the shear stress along the shear plane at failure, c is the cohesion, σn is the normal stress acting on the shear plane, and φ is the friction angle of the shear plane. Accordingly, three types of geological material may exist. These are: 1. c = 0 (materials exhibiting no cohesion, such as dry sand) 2. c and φ materials (soils and rocks with both cohesion and internal friction) 3. φ = 0 (materials exhibiting no internal friction) Numerical Modeling Numerical methods such as the Finite Element Method (FEM) have now been successfully applied to slope stability analysis over the years (Singh et al. 2013a, b and Kainthola et al. 2013). It is now regarded as the best alternative to traditional limit equilibrium methods, possibly because a smaller number of a priori assumptions is required for the analysis. The FEM is the most widely used numerical method for rock mechanics problems in engineering geology because of its flexibility in treating material heterogeneity, complex boundary conditions, in situ stresses and gravity (Jing 2003). The material considered in the present study is anisotropic basalt, with discontinuities, for the sake of making simulations easier and understanding the role of cohesion and internal friction in the stability of slopes. The slope geometry is constructed to approximate the actual slope by collecting data and samples through frequent field visits. Upon the definition and plotting of the geometry and assignment of the material properties obtained from the laboratory tests, the geometry has to be further divided into finite elements to perform the finite element calculations, which is taken care of by the FEM tool. A general formula used to calculate the FOS for planar sliding, as the ratio of available shear strength to the mobilised shear stress, is FOS = (cA + W cosψ tanφ) / (W sinψ), where c (MPa) is the cohesion, φ the angle of internal friction, ψ the dip of the sliding surface, A the area of the sliding surface and W the weight of the block lying above the sliding surface. The strength of the rock mass is determined by the combined strength of the rock and the presence of discontinuities in it. Depending on the size of the weakness zone, the problem can be treated as a continuous or a discontinuous problem. At large scales, a fault or weakness zone can be treated as a joint and can be analyzed as a discontinuity. The problem in this study is assumed to follow continuous behavior because the discontinuities are neglected.
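As a minimal illustration, the planar-sliding FOS expression above can be evaluated directly. The block geometry and strength values in the sketch below are hypothetical and are not the slope parameters used in the FEM analysis of the paper.

```python
import math

def planar_fos(c_kpa, phi_deg, psi_deg, area_m2, weight_kn):
    """Factor of safety for planar sliding (dry case):
    FOS = (c*A + W*cos(psi)*tan(phi)) / (W*sin(psi))
    with c in kPa, A in m^2 and W in kN so that both terms are in kN."""
    psi = math.radians(psi_deg)
    phi = math.radians(phi_deg)
    resisting = c_kpa * area_m2 + weight_kn * math.cos(psi) * math.tan(phi)
    driving = weight_kn * math.sin(psi)
    return resisting / driving

# Hypothetical block resting on a 40-degree sliding surface
print(planar_fos(c_kpa=50.0, phi_deg=30.0, psi_deg=40.0, area_m2=100.0, weight_kn=8000.0))
```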
Analysis of problems in geotechnical engineering requires the specification of a set of initial stresses. This is done by loading the rock body of its own weight caused by gravity. This represents the equilibrium state of the undisturbed rock body. There are two generally used methods in the FEM software (Plaxis). The first set uses the K0 procedure while the second set uses the gravity loading procedure. For the current scenario, gravity loading procedure is applied instead of K0 procedure because of its ability in modeling non-horizontal stress conditions. The slope has been modeled using shear strength reduction (SSR) technique which is commonly used in various rock engineering environments. The discontinuities were incorporated into the model based on field conditions. The meshing used in the model is graded 3-node triangle, which is further refined near the slopes. Gravity loading The initial stresses corresponding to the initial phase are assumed to be zero. These are then set up but applying the self-weight of the material concerned in the first calculation phase. In the second phase, FOS is calculated by incorporating the results obtained from the tests. Stability of slopes is then tested for a range of values of cohesion and internal friction. A correlation between slope height, internal friction and cohesion is made in the study to better understand the role of these parameters on the stability of slopes. Fig. 2: Relation between FOS and cohesion for varying Phi as per gravity loading Internal friction is caused by contact between particles and is defined by the internal friction angle, φ. By better understanding the rock's internal properties, one can aim to reduce the stability problems that may occur. Cohesion and angle of internal friction are directly proportional to the factor of safety while the bench height is inversely proportional. It can be seen from the two sets of plots that factor of safety increases with an increase in angle of internal friction (Fig. 1) and cohesion (Fig. 2). Further correlation shows that the factor of safety increases with increase in the angle of internal friction for a constant slope height (Fig. 3) and decreases with increase in slope height for a particular angle of internal friction (Fig. 4). Singh et al. (2013c) have also conducted similar study and suggested that shear strength parameters along with height of the slope are very important parameters for overall stability of slopes and that as the height increases and shear strength parameters decrease, global safety factor also decreases. Results from this study have also been compiled in tabular format for these data sets (Table 1). Kainthola et al. (2015) and Singh et al. (2013a, b and c) have carried out stability of cut slope in almost similar terrain and similar rock type. Their research also indicated the cohesion and internal friction are key parameters for stability. The output of present research is also shown the similar result. Singh et al. (2015) have studied failed slope in similar condition and point out the shear strength reduction up to the failure level was the finding of the failure cause. The present study also shown that safety factor decreases with decreasing the shear strength value. Similarly, Panthee et al. 
(2016) studied the stability of tunnel failures with reference to joint persistence, cohesion and internal friction for different rock types and found that the persistence, cohesion and internal friction of joints are highly influential parameters for stability, which supports the present study. CONCLUSIONS The stability pattern of the road cut slope along NH-72 is analyzed in this study. Due to heavy rainfall, the two main internal strength parameters of the material, viz. cohesion and internal friction, lose their coherence and the strength of the material decreases. To model this complex behavior under anisotropic conditions, a parametric study was conducted for different possible ranges of these parameters. A numerical tool, the FEM approach, was adopted in this study to reach a reasonable correlation between the shear strength parameters and slope height. The results clearly illustrate that the factor of safety increases with an increase in both shear strength parameters, as expected. On increasing the slope height, while keeping the angle of internal friction and cohesion constant, the factor of safety decreases, rendering the slope unstable. From the study it is found that the factor of safety has a strong relation with the cohesion and internal friction of the discontinuities, and that the factor of safety decreases as the height of the cut slope increases. The rain, during the monsoon season, decreases the cohesion and internal friction of the discontinuities, and failures result.
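The trends summarised in the conclusions can be reproduced qualitatively with a simple limit-equilibrium sweep rather than the FEM/SSR analysis used in the paper. The sketch below assumes a planar sliding block whose area and weight scale with the slope height; the unit weight, sliding-plane dip and parameter ranges are illustrative assumptions, not the study's input data.

```python
import math

def planar_fos(c_kpa, phi_deg, psi_deg, height_m, unit_weight_kn_m3=27.0):
    """FOS per metre thickness for a planar block whose size scales with slope height.
    Assumed geometry: sliding area A ~ H / sin(psi); block weight W ~ 0.5 * gamma * H^2 / tan(psi)."""
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    area = height_m / math.sin(psi)                                   # m^2 per metre thickness
    weight = 0.5 * unit_weight_kn_m3 * height_m ** 2 / math.tan(psi)  # kN per metre thickness
    return (c_kpa * area + weight * math.cos(psi) * math.tan(phi)) / (weight * math.sin(psi))

# Sweep cohesion and friction angle for a fixed 20 m high cut with a 50-degree sliding plane:
# FOS rises with both strength parameters.
for c in (25, 50, 75, 100):
    for phi in (20, 30, 40):
        print(f"c={c:3d} kPa, phi={phi:2d} deg, H=20 m -> FOS={planar_fos(c, phi, 50, 20):4.2f}")

# Increasing the slope height for fixed c and phi lowers the FOS.
for H in (10, 20, 30, 40):
    print(f"H={H:2d} m -> FOS={planar_fos(50, 30, 50, H):4.2f}")
```

Because the cohesive resistance grows linearly with height while the driving weight grows with its square, this simplified wedge reproduces the inverse relation between factor of safety and cut-slope height reported in the study.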
2019-06-07T22:36:33.145Z
2016-12-31T00:00:00.000
{ "year": 2016, "sha1": "8e27795ad18e0ce80281c951be8faf5a0f0a7a88", "oa_license": null, "oa_url": "https://www.nepjol.info/index.php/JNGS/article/download/24094/20388", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a523fff990cb31825c64be564465f3228e433257", "s2fieldsofstudy": [ "Engineering", "Geology", "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
259633427
pes2o/s2orc
v3-fos-license
WORKING CAPITAL EFFICIENCY, LIQUIDITY, AND SOLVENCY ON PROFITABILITY OF INDONESIAN STATE-PRIVATE BANK : Banks are the largest service companies in Indonesia, based on data from the Financial Services Authority, which greatly influences market fluctuations in the services sector. The components used to determine the level of strength and capability of banking institutions include financial ratios related to profitability. This study aims to analyze the efficiency of working capital, liquidity, and solvency on the profitability of Indonesian BUMN (state-private enterprise) banks. This study was conducted using a quantitative approach. This study employed data from all financial reports published on the website of the Indonesia Stock Exchange and the website of the research object banks. While the sample used was limited to 8 years of financial reports, from 2014 to 2021, the collected data was tested with multiple regression using SPSS v. 25. This study found that working capital efficiency and liquidity did not affect profitability, while solvency did. Furthermore, when analyzed simultaneously, operating capital efficiency, liquidity, and solvency influence profitability. INTRODUCTION State-private banks are service and financial industries that can profitably manage their finances. However, the ability to improve performance as measured by the level of profitability is not only influenced by internal factors but also by external factors. Internal factors; (1) the bank's ability which is supported by sufficient capital,. Thepany's capital must be adequate to support short-term and long-term operational costs, and short-term operational costs mean that operating costs are carried out within under one year, while long-term operational costs are financing for operational costs over a period of more than one year. (Fatimah, 2014) ; (2) the ability of a bank to manage its performance. Financial ratios, including working capital efficiency, liquidity, solvency, and profitability, can measure financial performance. Working capital efficiency is a policy that focuses on a company's ability to make a profit (Fatimah, 2014) ; (Kumara & Saputra, 2014). Working capital management is needed by a company because it can help companies improve their performance (WI Sari et al., 2021). The main target of working capital management is to guarantee the company's ongoing operations and affect the profitability level. Companies that are able to maintain working capital, the company can be sustainable in running their business (Saerang, 2014). Another thing that can affect profitability is the level of liquidity. Liquidity is the ability of a company to pay all of its obligations, and the liquidity ratio is used as a ratio to see a company's ability to use its current assets (Sadiah & Priyadi, 2015;Tabe, 2022). When a company cannot monitor its liquidity level, one day, it will experience problems in asset management. The instrument used to measure the level of liquidity is to compare current assets with current liabilities ( KAN Sari & Sudjarni, 2015). Thus, the total assets must exceed the current liabilities because current assets can be relied upon to pay short-term debts or under one-year-old (WI Sari et al., 2021). Therefore, liquidity analysis is needed by both large companies and small companies, where this ratio can predict profitability. In addition to the liquidity ratio, the solvency ratio also allows for predicting profitability. 
Solvency is the company's ability to pay its obligations using its assets. When a company has a greater ratio than its assets, it is likely that the company will experience high risk (Luthfiana, 2018). It means that the company that has high debt means that the company has a cost burden by paying interest, so the company needs help increasing its profitability. Apart from the internal factors that have been described that can increase profitability, there are also external factors, including investor confidence, customer trust, product innovation, etc. Investor confidence in the company is very necessary because the source of capital must always be maintained sustainably. Of the several factors, this study is only focused on internal factors that can increase profitability. Several factors have been described above as factors that can affect profitability. Previous researchers have widely discussed these factors. However, they still need to be carried out in banking industry companies that focus on state banks, where state banks have different funding sources, namely, the government is involved as the owner of capital or shares. , the argument is a novelty in this study. Based on this, this study answers several things (a) does the efficiency of working capital have an effect on profitability; (b) whether liquidity has an effect on profitability; (c) whether solvency has an effect on profitability. Grand Theory The relationship between several variables related to company performance, the relevant basic theory is signaling theory. The signaling theory that was first discovered by Spence in 1972 explained that in the labor market, there is always asymmetric information, so Spence created a signaling criterion to be able to strengthen decision-making in recruiting workers in companies. Wolk's (2001) Signal theory explains how a condition occurs within the company internally so that it will provide information to outsiders as a basis for decision-making. Working Capital Efficiency Concept Definition Working capital is an investment in the form of current assets, which can be cash and accounts in the current assets group (Sutopo & Fajria, 2015). Working capital is also interpreted as an asset that is invested and experiences turnover so that it changes form to another form for company operations (Aslina, 2021). Working capital is the overall value in the current assets group, which is positioned as an item that can experience changes quickly. (Kristanto et al., 2020) , while efficiency in using working capital is the ability of a company to utilize working capital to increase company value (Munandar et al., 2019). The definition can be concluded that working capital is the value invested in the current assets group used for company operations, where these current assets have the ability to change form in a short period of time. Sources of working capital come from internal and external companies. The company's internal source of working capital, namely working capital formed from the company itself in the form of (1) profit that is not distributed to shareholders, this profit is profit from the previous year and the current year, which is not distributed in the form of dividends; (2) depreciation. This depreciation comes from asset depreciation which depends on the use of the depreciation method. 
Working capital from external sources, meanwhile, can be in the form of (1) capital from outside the company, which is temporary and has debt status; (2) company owner capital that does not have a maturity period (Saragih, 2018). Working capital efficiency can be measured by comparing total sales with working capital (Zul Safar, 2020): Working Capital Turnover = Sales / Working Capital × 100%. Definition of the concept of liquidity Liquidity is the company's ability to pay off its short-term debt with current assets when billed (KAN Sari & Sudjarni, 2015). A liquid company is one that can operate properly. A good company condition can be seen from its liquidity level (Lestari et al., 2017). Liquidity can be measured by comparing total current assets to total current liabilities, as in the research of (Natalia & Jonnardi, 2022). Definition of Solvency Concept Solvency is the ratio used to measure the extent to which a company's assets are financed by debt, or the ratio used to measure the company's ability to bear its debt burden with its assets (Susilawati, 2012). This ratio shows how well a company can meet its long-term debt (Luthfiana, 2018). The solvency ratio considers the assets intended to cover and pay fixed charges, and it shows the proportion of debt used in investment financing (Nuryanto et al., 2014). The solvency ratio can also measure a company's long-term liquidity, focusing on the right-hand side of the balance sheet (Awaloedin et al., 2020). Solvency can be measured by comparing total debt with equity, as in the research of (Nuryanto et al., 2014): Debt to Equity Ratio = Total Debt / Total Equity × 100%. Profitability Concept Definition Profitability is the ability of a company to earn profit in one period (Hermuningsih, 2012). Company profit is significant because it is an assessment and measurement of company performance, which indicates that if the profit earned by the company is high, the company can be said to be in a good position. Conversely, a company with low profits is considered to be performing poorly (Hermuningsih, 2008). The profitability ratio is an analysis of the financial statements, namely the balance sheet and the income statement. The balance sheet describes the total assets, total debt, and total capital and shows the company's financial position in a certain period. At the same time, the profit and loss report is a financial report that describes the total income, total operational costs, and net income for a period. Profitability can be measured by comparing net income to total assets, as in the research conducted by (Luthfiana, 2018): Return on Assets = Net Income / Total Assets × 100%. Relationship between variables and hypothesis development Effect of Working Capital Efficiency on Profitability Working capital efficiency is proxied by Working Capital Turnover. When the company's working capital turnover increases, the funds used for operational activities can be covered by sales proceeds. However, if sales decrease, various unforeseen costs arise, affecting profitability (Wijaya & Isnani, 2019). As the research by Riyanto et al. (2019) reveals, the efficiency of working capital influences profitability. Signal theory also explains that if capital turnover is fast, it can provide a signal that profitability is also higher. The formulation of the first hypothesis is as follows: H1 = Efficiency of Working Capital influences profitability Effect of Liquidity on Profitability Liquidity is proxied by the current ratio. The higher the current ratio, the more likely the company is to be able to pay off its current debts (Tabe, 2017).
As research (Hadiningrat et al., 2017) states that liquidity influences profitability, the signal given to outsiders is that companies with high current ratios have high profitability. The formulation of the second hypothesis is as follows: H2 = Liquidity influences profitability Effect of Solvency on Profitability Solvency is proxied by the debt-to-equity ratio. The debt-to-equity ratio compares the company's total liabilities to its total equity. The higher the ratio, the higher the company's solvency. As with the research (Chandra et al., 2021) showing that the Debt to Equity Ratio contributes to profitability, signal theory applies in that when a company has sufficient equity, it can manage its solvency. The formulation of the third hypothesis is: H3 = Solvency influences profitability RESEARCH METHODS Research Approach This study uses a quantitative approach, which emphasizes testing theories through measurements of each variable used and analysis of the data with statistical procedures. Population and Sample The population of this study is the financial statements of state-private banks consisting of BRI, BNI, Bank Mandiri, and BTN. Meanwhile, the sample is limited to the financial statements of the last eight years, from 2014 to 2021. Data collection technique Data collection was carried out through financial reports downloaded from the IDX website (www.idx.co.id) and the websites of the banks studied. The data focused on the financial ratios that serve as the measurements of each variable used, namely the working capital, liquidity, solvency, and profitability ratios. Data analysis method Data analysis was carried out to test the hypotheses. Hypothesis testing is done by multiple regression with the help of Statistical Product and Service Solutions (SPSS) version 25, and hypothesis testing is carried out after the classical assumption tests to ensure that the regression equation gives an unbiased estimate. RESULTS AND DISCUSSION Research result Normality Test The target of this test is to see whether the data distribution is normal. The test gives a significance value of 0.200, greater than 0.05, so the data are normally distributed. Linearity Test The target of this test is to see whether or not each independent variable is linear with the dependent variable. The F test gives F = 4.156 with a significance value of 0.015, less than 0.05, so the Efficiency of Working Capital, Liquidity, and Solvency influence profitability simultaneously. Working Capital Efficiency Against Profitability In the working capital efficiency t test, t = 0.650 with a significance value of 0.521, greater than 0.05, so Working Capital Efficiency does not affect profitability. This means that increases and decreases in the working capital efficiency of state-private banks have no bearing on profitability. Why? First, even when working capital efficiency has increased, it is challenging to increase profitability. As research conducted by (Marantika, 2012) revealed, if a company maintains a large working capital, the level of liquidity can be maintained, but the opportunity to earn profits decreases and ultimately there is no impact on profitability. Therefore, when connected with signal theory, it can be stated that the theory has not been borne out in assessing the efficiency of working capital against profitability.
Second, according to the frequency distribution calculations of the data from 2014-2021, the scales 201-300 and 301-400 each show an average of 25%. Even the scale of 501 and above shows an average of 50%. The scale refers to sales divided by (current assets - current liabilities), meaning that the ratio reaches 5:1. Liquidity Against Profitability In the liquidity t test, t = 0.401 with a significance value of 0.691, greater than 0.05, so liquidity has no effect on profitability. It means that increases and decreases in liquidity do not affect profitability. Why? First, the management of current assets by state-private banks is not optimal, so some assets are not productive. With the existence of non-productive assets, the profits obtained by the bank can be reduced. Hadiningrat et al. (2017) revealed that using assets that are not optimal means that companies do not get maximum profits. The implication for signal theory is that the claim that the higher the liquidity, the greater the guarantee of profitability does not apply. Second, the size of the company's liquidity level is independent of the profitability of state-private banks. So if the company experiences an increase in liquidity by one unit, the profitability that the company will receive will remain the same. Solvency Against Profitability In the solvency t-test, t = -3.273 with a significance value of 0.003, smaller than 0.05, so solvency has an influence on profitability. It means that when solvency increases, profitability also increases. Conversely, when solvency decreases, profitability also decreases. Several arguments support this finding. Banks depend on loan funds to meet their funding needs. In general, state-private banks use funds from external sources or loans, and then the rest comes from savings funds, so that the size of the company's debt greatly affects the profitability obtained by the bank. Thus, to increase the profitability of the company, it is necessary to continue to increase the amount of its debt, because solvency has an impact on the profitability of state-private banks. In their research, Chandra et al. (2021) state that using high debt can increase solvency, but companies also need to manage the risk of paying higher interest expenses. Therefore, the application of signal theory is that the greater the source of funding, the more strongly the company's profitability is tied to it. The simultaneous test of the Efficiency of Working Capital, Liquidity, and Solvency on Profitability is carried out to answer the question of their joint effect. It can be seen that the value of F = 4.156 with a significance level value of 0.015 < 0.05. It means that working capital efficiency, liquidity, and solvency simultaneously influence profitability. CLOSING The conclusion of this study shows that the efficiency of working capital has no effect on profitability because of a dilemma: if a company maintains a large working capital, the level of liquidity can be maintained, but the opportunity to earn profits decreases, and ultimately there is no impact on profitability. Liquidity also has no effect on profitability because there are indications that the management of current assets by state-private banks is not optimal, and assets that have not yet been used remain idle.
Meanwhile, solvency significantly influences profitability because state-private banks depend on loan funds to fulfill their capital sources, in line with the function of banks, which generally collect funds from third parties to be channeled back to the public. This research implies that state-private banks should manage their capital more efficiently so that they can achieve the desired profits, be capable of increasing profitability significantly, and reduce idle assets so that those assets do not become a burden that reduces company profits. Furthermore, companies must maintain a level of solvency, because it is proven that increasing the portion of funding sourced through debt can have an impact on increasing profitability. This research can be developed by adding other independent variables, for example by looking at the soundness of a bank in terms of its capital, and by extending the research period.
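As a rough sketch of the analysis pipeline described in the methods, the ratio measurements and the multiple regression could be reproduced in Python as below. The study itself used SPSS version 25; the file name, column names and panel layout (4 banks × 8 years) here are assumptions for illustration only.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: 4 state-owned banks x 8 years (2014-2021) of statement items.
df = pd.read_csv("bank_financials.csv")  # assumed columns: bank, year, sales, current_assets,
                                         # current_liabilities, total_debt, total_equity,
                                         # net_income, total_assets

# Ratio construction following the measurements described in the paper
df["wct"] = df["sales"] / (df["current_assets"] - df["current_liabilities"]) * 100  # working capital turnover
df["cr"] = df["current_assets"] / df["current_liabilities"] * 100                   # current ratio (liquidity)
df["der"] = df["total_debt"] / df["total_equity"] * 100                             # debt-to-equity (solvency)
df["roa"] = df["net_income"] / df["total_assets"] * 100                             # return on assets (profitability)

# Multiple linear regression: ROA on WCT, CR and DER (H1-H3), mirroring the SPSS analysis
X = sm.add_constant(df[["wct", "cr", "der"]])
model = sm.OLS(df["roa"], X).fit()
print(model.summary())   # t statistics and p-values correspond to the partial (per-hypothesis) tests
print(model.f_pvalue)    # p-value of the simultaneous F test
```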
2023-07-11T17:57:01.196Z
2023-06-06T00:00:00.000
{ "year": 2023, "sha1": "4803fc7ee534c922bdaf106ba4305ab9a0952a03", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.24252/assets.v13i1.31923", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6121a215b6de6792515cb30fcb3dc8dfc74c0a39", "s2fieldsofstudy": [ "Economics", "Business" ], "extfieldsofstudy": [] }
236556970
pes2o/s2orc
v3-fos-license
Celastrol Exerts Neuroprotective Effect via Directly Binding to Hmgb1 Protein in Cerebral Ischemic Reperfusion Dandan Liu China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Piao Luo China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Liwei Gu China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Qian Zhang China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Peng Gao China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Yongping Zhu China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Xiao Chen China Pharmaceutical University Junzhe Zhang China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Nan Ma China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica Jigang Wang (jgwang@icmm.ac.cn) China Academy of Chinese Medical Sciences Institute of Chinese Materia Medica https://orcid.org/0000-0002-0575-0105 Background Tripterygium wilfordii Hook. f. (TwHF)-based prescriptions have been widely used in the treatment of autoimmune diseases, tumors and other disorders in China for centuries [1,2]. Out of the many bioactive constituents isolated from Tripterygium wilfordii, celastrol has attracted close attention for more than 70 years for its possible medicinal properties [3]. Celastrol exhibits a diversity of pharmacological effects in a wide range of disorders, such as cancer, diabetes, obesity [4], and neurodegenerative diseases [3]. An abundance of existing research has demonstrated neuroprotective effects of celastrol in neurodegenerative diseases through antioxidant effects and attenuation of neuro-inflammation [5]. Besides, celastrol markedly relieved acute ischemic stroke-induced injury by promoting microglia/macrophage M2 polarization [6], reducing the expression of p-JNK, p-c-Jun and NF-κB [7], and inhibiting the high mobility group box 1 (HMGB1)/NF-κB signaling pathway to exert anti-inflammatory and antioxidant actions in transient global cerebral ischemic rats [8]. However, there are few studies on whether celastrol has a neuroprotective effect in cerebral ischemia-reperfusion (I/R) injury, or on its specific protein binding targets. Neuro-inflammatory processes have been implicated in the pathophysiology of multiple stages of cerebral I/R injury, and targeting neuro-inflammation has always been an attractive treatment in stroke [9]. HMGB1 has been intensively studied in inflammation-related diseases over several periods and is currently one of the crucial pro-inflammatory alarmins of stroke. The non-histone DNA binding protein HMGB1 is primarily located in the cell nucleus and exhibits different biological functions according to its cellular location, binding receptors and redox states. HMGB1 is shifted to the cytoplasm and extracellular space by activated immune cells or passively released by necrotic or damaged cells, with dynamic redox states due to distinct posttranslational modifications [10], and activates the inflammatory immune reaction [11]. Outside of the cell, HMGB1 serves as a damage-associated molecular pattern (DAMP) or alarmin to mediate inflammation through receptors including the receptor for advanced glycation end products (RAGE) and toll-like receptors 2 and 4 (TLR2, TLR4) [12]. HMGB1 comprises three domains: the A box and B box (positively charged) domains and the carboxyl-terminal (negatively charged) acidic tail. The three cysteines located at positions 23, 45 (A box) and 106 (B box) mainly determine the redox state and physiological functions of HMGB1.
The fully reduced HMGB1 only possesses chemotaxis by binding to CXCL12, and stimulates immune cell in ltration through the CXCR4 receptor in a collaborative way. An intramolecular disul de bond of HMGB1 in cysteines C23 and C45 with critical C106 in a reduced state has pro-in ammatory activity like cytokines via the TLR4/MD-2 complex, induces nuclear NF-κB translocation and produces tumor necrosis factor (TNF) in macrophages. Meanwhile, chemotactic and cytokine activities disappeared after all the cysteines oxidized (sulfonyl HMGB1) [13,14]. Therefore, disul de HMGB1 isoform is a biomarker of in ammation and blocking extracellular disul de HMGB1 isoform maybe a potential direction for the treatment in ammation and immune diseases, including stroke. Quantitative chemical proteomics technology based on small molecule compound probe and chemical labeling has been widespread used for seeking targets and elucidating the mechanism of natural and traditional medicines [15], including Artemisinin [16], andrographolide [17], curcumin [18], aspirin [19]. With the help of activity-based celastrol probe (cel-p), tandem mass tags (TMT) labeling, liquid chromatography-tandem mass spectrometry (LC-MS/MS) and cellular thermal shift assay (CETSA), we elucidated the neuroprotective mechanism and targets of celastrol on stroke I/R injury, and uncovered that celastrol directly bond with HMGB1 to inactivate its cytokine activity and targeted HSP70 and NF-κB to exert anti-in ammatory activity. Animals All experiments were carried out for the sake of minimizing the number and suffering of animals. All The Sprague-Dawley rats within 12h of birth (Vital River Laboratories, Beijing, China) were used for primary cortical neurons isolation. Male Sprague-Dawley rats (260-280g, Vital River Laboratories, Beijing, China) used for middle cerebral artery occlusion (MCAO) experiment were housed in standard breeding environment without restriction to diet and drinking. Click chemistry, pull down and LC-MS/MS reagents: NaVc; CuSO4; TAMRA Azide and Biotin-azide were obtained from Sigma, USA. High capacity neutravidin agarose resin; sequencing grade modi ed Trypsin; TMT 10 plex reagent set; TEAB and Pierce™ Quantitative Fluorometric Peptide Assay Kit were purchased from Thermo Fisher Scienti c, USA. Oasis HLB Extraction Cartridge was obtained from Waters. THPTA was obtained from Click Chemistry Tools. Rat primary cortical neurons isolation and RAW 264.7 cell culture The neonatal Sprague-Dawley rats within 12h of birth were used for primary cortical neurons isolation as previously established with minor revise [20]. Brie y, the cortex of newborn rats was sterile separated in pre-cooling (4°C) DMEM/F-12 (1:1). The minced cortex tissue was digested with 0.2mg/ml DNase and 2mg/ml papain, and inactivated by adding 10% volume FBS. The cell suspension was washed twice with DMEM/F-12 (1:1) and re-suspended in DMEM/F-12 (1:1) containing 10% FBS and 1 × PS. The cells suspension passed through 300 mesh sieves were seeded on L-polylysine pre-coated ori ce plate or dish and incubated at 37°C incubator with 5% (v/v) CO 2 . 4-6h later, the DMEM/F-12 (1:1) was replaced with complete Neurobasal-A medium replenished with 1× PS, 1× Glutamax-I and 1× B27. The culture medium was change half every 2-3 days, and all the experiments were carried out on the seventh day unless otherwise stated. RAW 264.7 cell was cultured in DMEM containing 10% FBS, 1× PS and maintained in a cell incubator. 
Cells were passage every 2-3 days and TNF-α was testing in cell within 20 passages. Oxygen glucose deprivation (OGD) insult The transient OGD model was constructed to simulate cerebral I/R injury in cultured primary neurons as previously described [21]. Brie y, the Neurobasal-A medium was displaced with deoxygenated, sugar free DMEM, and the cells were incubated in a hypoxia chamber (STEMCELL Technologies, Canada ) lled with 95% N 2 + 5% CO 2 for 4 h in 37°C incubator and returned to normal culture condition according to the experimental requirement. In contrast, control cells were cultured in normal culture conditions. After reaching the established time, cell viability evaluation was determined by CCK8 assay or cells were collected for other experiments. For CCK8 assay, absorbance was measured using a multimode plate reader (PerkinElmer, USA) at 450 nm. Proteome reactivity pro les of primary neuron treated with cel-p Fluorescence labeling pro ling of cel-p binding proteins was conducted in living primary neuron with or without celastrol competitor and OGD model refer to previous operation [22]. Similarly, increasing concentrations of cel-p (0-1.6 μM) or cel-p (0.8μM) + competitor (celastrol 2 ×, 4 ×, 6 ×, 8 ×) were added into the 6 well plates with or without OGD interfere and incubated for 4 h in cell incubator. Then supernatant of cell lysate were collected and BCA method was used for protein concentration quanti cation. The click chemistry reaction was conduct with NaVc (100 mM stock solution, nal concentration 1 mM), THPTA (100 mM stock solution, nal concentration 100 μM), CuSO4 (100 mM stock solution, nal concentration 1 mM) and TAMRA Azide (5 mM stock solution, nal concentration 50 μM) in equal amounts (100 μg) of extracted protein for 2h at room temperature. The protein was precipitated with 1 ml pre-cooling (-20°C) acetone, and re-dissolved with 30 μl 1× SDS loading buffer. 15 μl of sample was separated with 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) gel, and the labeling pro les were visualized with in-gel uorescence scanning in laser scanner (Azure Sapphire RGBNIR, USA) and then the gel was stained with CBB. Cellular imaging of cel-p Cellular imaging experiment was conduct with uorescence microscopy as described previously to verify the utility of cel-p for imaging of potential cellular targets [23]. To track the cellular distribution of cel-p, living primary neurons were incubated with 0.8 μM cel-p for 0-6 h. The cells were xed with 4% paraformaldehyde solution for 10 min and 0.2% Triton-X 100 permeated 15 min. Click chemistry reaction was carry out (regents and concentration referenced 2.3.3) for 2h and washed thrice to remove excessive agents. The nuclei were stained with Hoechst for 10min. The images were obtained with confocal uorescence microscopy (Leica TCS SP8 SR, Germany). Pull down, TMT labeling and targets validation Pull-down, TMT labeling and LC-MS/MS experiments were carried out to identify the interacting cellular targets of celastrol and primary neuron according to previous description with certain modi cation [24]. Primary neuron cells cultured in 100 mm dishes were divided into the following groups and continue incubated for 4 h : Control (less than 1% DMSO), cel-p (4 μM), and celastrol 8 × (cel-p 4 μM + celastrol 32 μM Target engagement assay of celastrol with HMGB1 was performed the same with previous article with minor modify [25]. The primary neurons in 100 mm dishes were collected with PBS containing protease inhibitor. 
The cells were subjected to freezing and thawing cycles in liquid nitrogen and repeated mechanical crushing to obtain cell lysate supernatant by centrifugation. Then equivalent supernatant (1 mg) was treated with either DMSO or celastrol (20 μM) 1h at room temperature with gently shaking. The treated supernatant was divided into ten equal parts and heated according to designated temperatures. The cooled samples were centrifugation again to obtain supernatant and conducted Western blot analysis. Western blot analysis Supernatants of neuron lysate or rat cerebral cortex tissues lysate were obtained with RIPA lysate and protease inhibitor. For the cytoplasm and nuclear protein extraction, the nuclear-cytosol extraction kit was used on the basis of instructions. Fully reduced and disul de bond HMGB1 isoforms were detected in rat primary cortex neuron and condensed culture supernatant with non-reducing PAGE gel, and samples were collected avoid from reducing agent (β-mercaptoethanol or DTT). After OGD and celastrol treatments as detailed above, the culture supernatant was collected at 0, 4, 8, 12, 24, 48 h and centrifuged to discard cell debris. Then the supernatant was concentrated 20 folds with Amicon Ultra-4 50kDa and Amicon Ultra-4 10kDa. Protein content was identi ed using the BCA assay, and the denatured sample was separated with 10%, 12% or 15% SDS-PAGE gel. The bands were visualized with enzyme-linked chemiluminescence in the detection system (Azure C400, USA). Immuno uorescence staining For animal Immuno uorescence, after series dewaxing and dehydration, rat cerebral cortex para n slice were incubated in antigen retrieval for 10 min in 95°C and permeabilized 15 min in 0.2% Triton X-100. The slices were blocked 1h in 5%BSA, incubated with primary antibodies against HMGB1, HSP70, or NF-κB at 4 °C overnight and 2 h with secondary uorescence antibodies (goat anti-rabbit, 1:500; goat anti-mouse, 1:500, Abcam) avoiding from light. After 10 min of Hoechst staining, the slices were photographed with laser scanning confocal microscope. For cell Immuno uorescence, the treated cells were washed with PBS, xed in 4% paraformaldehyde solution and permeabilized in 0.2% Triton-X 100. The rest procedures were the same with animal Immuno uorescence. Expression and puri cation of HMGB1 A box and B box Recombinant human HMGB1 box A (residues 1-89) and box B (residues 90-175) were cloned in a modi ed pET-24d vector (Novagen, Madison, WI) expressing a protein with an N-terminal 6-His tag. The E. coli BL21 was transformed with pET24d-HMGB1 A box and pET24d-HMGB1 B box, cultured in LB medium containing 50g/ml kanamycin at 37℃ to an absorbance of 0.8 at 600 nm, and expression was induced with 0.4 mM Isopropyl-D-1-thiogalactopyranoside (IPTG) for 12h at 16℃ before being harvested by centrifugation. Cell pellets were suspended in lysis buffer (20 mM For activity assay of celastrol and HMGB1 protein, recombinant human HMGB1 protein and celastrol combined HMGB1 activity were measured by stimulating TNF-α in RAW 264.7 cells for 24h. For con rming whether celastrol blocked the binding of receptors TLR4 and RAGE to B box, the semi in vivo precipitation of B box complex experiment was conduct as previously with minor revise [26]. 100 μg B box proteins were rst reacted with or without equivalent amount (equimolar with B box, 10 -2 μmol) and ve folds amount of celastrol (5 ×10 -2 μmol) in Ni-beads column for 1 h. 
Then 500 μg lysate of primary neurons was add into the Ni-beads column and reacted 2 h at 4a, and eluted to precipitate B box complex. The denatured complex sample was subjected to immunoblot analysis with antibodies against TLR4 and RAGE. The same B box with equivalent amount of celastrol elution did not incubate with cell lysate was used for TNF-α analysis in RAW 264.7 cells to research whether celastrol reduced the ability of B box inducd TNF-α secretion. Induction of MCAO and neurological defect assessment in rats By inserting an lament to occlude the right middle cerebral artery (MCA) of male Sprague-Dawley rats (250-280g), we established cerebral I/R model in vivo as described previously [27]. Brie y, the rats fasted overnight were anesthetized with continuous supply of 3% iso urane + 95% oxygen mixture. A 4-0 mono lament was inserting into the internal carotid artery through a tiny incision in the external carotid artery to block the cerebral blood ow of MCA. 90 min later, the lament was withdrawn to resume blood stream providing. The Sham group did the same steps without inserting the lament. The neurological de cits of rats were assessed at 24, 48, 72 h after reperfusion respectively by an experimenter who was blinded to the group information. Zea-Longa ve-point scores was used to assess neurologic de cits according to previous description [27]. The animals without symptoms of neurological impairment or dying after surgery were rejected. 72h after reperfusion, infarct volume was determined by 2, 3, 5-triphenyltetrazolium chloride (TTC) stating and calculated with Image J software as previously [27]. Nissl-stained cells in the rat cerebral cortex were observed at 200 × magni cations with a light microscope to assess Nissl body damage. Statistical analysis Data were presented as the mean ± SEM. Raw data were statistically analyzed with Graph Pad Prism 5.0. The density of Westren blot bands was quanti ed using Image J software. The data were analyzed using one-way ANOVA. Fisher's least-signi cant difference post hoc test was used to test the differences between two groups. P value less than 0.05 was considered statistically signi cant. Results Celastrol and cel-p showed similar neuroprotective effect in vitro Cel-p was synthesized with an alkynyl handle to the carboxyl terminal of celastrol (Fig. 1A). We rst detected the cytotoxicity of celastrol and cel-p in living primary neuron. As shown in Fig. 1B, the IC 50 results suggested that cel-p closely mimicked the original compound in biological activity (celastrol IC 50 = 2.2 μM, cel-p IC 50 = 2.0 μM). For cell viability assay, primary neurons were incubated in 96 well plates for 7 d and then experienced OGD model to evaluate the neuroprotective effect of celastrol and cel-p. As shown in Fig. 1C, celastrol exhibited obvious neuroprotective effect on the OGD model of primary neurons. The optimal dose of celastrol was 0.1-0.8 μM, and the optimal administration time was 48 h after OGD model. Cel-p showed similar neuroprotective effect with celastrol (Fig. 1D). Therefore, celastrol remained biological activity after introducing biorthogonal reaction groups and cel-p could be used to instead the celastrol for subsequent research. Celastrol signi cantly decreased pathological changes of MCAO rats Previous researches indicated that celastrol reduced neurological de cit, brain water content and infarct volume in rat permanent cerebral ischemic model [6,7]. 
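The IC50 values reported from the CCK8 viability assay can be obtained by fitting a dose-response curve. The sketch below shows one common approach, a four-parameter logistic fit with SciPy; it is not the authors' stated procedure, and the concentration-viability readings are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_param_logistic(conc, top, bottom, ic50, hill):
    """Viability (%) as a function of drug concentration (uM)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical CCK8 readings for primary neurons treated with celastrol (uM -> % viability)
conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8])
viability = np.array([99.0, 97.0, 95.0, 90.0, 80.0, 62.0, 38.0, 18.0, 8.0])

popt, _ = curve_fit(four_param_logistic, conc, viability,
                    p0=[100.0, 0.0, 2.0, 1.0], maxfev=10000)
top, bottom, ic50, hill = popt
print(f"Estimated IC50 ~ {ic50:.2f} uM (Hill slope {hill:.2f})")
```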
Therefore, we mainly evaluated whether celastrol exhibited neuroprotective effect for rat cerebral I/R injury. At 72 h after MCAO model, celastrol (1mg/kg, i.p.) signi cantly decreased the infarct volume ( Fig. 2A, B), improved behavior indexes (Fig. 2C) and reduced cortex pathological changes of Nissl staining (Fig. 2D, E) of MCAO rats compared with Model group. The neuroprotective effect of celastrol was similar to positive control edaravone injection (6mg/kg, i.v.). Cel-p possessed high bioconjugation e ciency We evaluated the labeling pro les of cel-p in living primary neurons. As shown in Fig. 3A, cel-p showed strong labeling e ciency and concentration dependent labeled living primary neurons protein and produced obvious visible bands at as low as 0.8 μM probe concentration in 4 h of incubation time. As shown in Fig. 3B, the labeling pro les of cel-p (0.8μM) became weaker in the presence of competitor celastrol (1.6-6.4μM), suggesting that cel-p bond similar intracellular targets with celastrol. Cellular imaging experiments with click chemistry reaction were performed to study the cellular location of cel-p in living cells. As exhibited in Fig. 3C, cel-p mainly localized in the cell cytoplasm within 2 h and gradually entered into the cell nucleus, indicating that the cel-p was able to label cytoplasm and nuclear proteins after 4h of labeling. These data demonstrated that cel-p possessed high bioconjugation e ciency under in situ condition and was a suitable substitute of celastrol for subsequent chemical proteomics procedures. Celastrol directly targeted HMBG1 and did not affect the expression of HMGB1 Next, we proceeded to identify cellular targets of celastrol by quantitative chemical proteomics technology. The protein with enrichment ratio R (cel-p/cel-p+8× cel) > 1.5, p < 0.05 was set as signi cant hits, and the protein information labeled by cel-p under this standard was analyzed. The identi ed protein hits were systematically analyzed and displayed by corresponding volcano plots after cel-p (4 μM) with or without celastrol 8 × (32 μM) treatment for 5h. A total of 1405 proteins were identi ed by cel-p target recognition experiment, of which 120 were highly reliable. A complete list of identi ed proteins was provided in Table S1. On the basis of these criteria, HMGB1 was identi ed as a one of the direct binding proteins of celastrol, which has fairly high credibility and is currently one of the crucial pro-in ammatory alarmin of stroke (Fig. 4A). Primary neurons pull down, Western blot and Immuno uorescence assays veri ed that celastrol 8× could completely compete the binding of cel-p to HMGB1 protein, further demonstrated that HMGB1 was a direct binding protein of celastrol ( Fig. 4B-D). The CETSA results also proved the direct binding of celastrol and HMGB1 protein, so as to decrease the protein degradation with increasing temperature compared with Control group (Fig. 4E-F). As shown in Fig. 4C-D, the HMGB1 protein in celastrol 8 × (3.2 μM) group was almost invisible. In order to con rm whether high concentration of celastrol or cel-p affected the expression of HMGB1 in primary neurons, we treated normal living primary neuron with high concentration of celastrol (20 μM and 40 μM) for 5 h to evaluate its effect on HMGB1 expression with Western blot. In OGD model, we also examined the effect of cel-p (4μM) or cel-p + celastrol 8 × on HMGB1 expression in neuron cells lysate or living neuron cells. As shown in Fig. 
4G, high concentrations of celastrol or cel-p did not decrease the expression of HMGB1. We speculated that the apparent loss of signal at high celastrol concentrations occurred because celastrol occupied the binding sites of HMGB1 and prevented the HMGB1 protein from binding the HMGB1 antibody, rather than because HMGB1 was degraded by high concentrations of celastrol or cel-p. In addition, we confirmed that celastrol had no effect on HMGB1 expression in normal cells within a certain dose and time range (Fig. 4H-K). Unexpectedly, we did not observe a time-dependent increase in the secretion of HMGB1 in the OGD model. On the contrary, the expression of HMGB1 in the Model and M + cel groups both decreased in a time-dependent manner (Fig. 4L-N). In conclusion, celastrol directly targeted HMGB1 and did not affect the expression of HMGB1. Celastrol exerted its neuroprotective effect through HSP70 and NF-κB p65 As mentioned above, we did not find that celastrol affected the expression of HMGB1 in primary neurons with or without OGD insult. In previous studies, celastrol induced the HSP70 response and suppressed NF-κB activation to inhibit inflammatory responses and regulate the innate immune response [28-30]. Therefore, we established the OGD model in vitro and the MCAO model in vivo to mimic cerebral I/R injury and tested whether celastrol affected the distribution of HMGB1 between the cytoplasm and nucleus and the expression of the HSP70 and NF-κB p65 proteins. We found that the expression of HMGB1 in the cytoplasm of the Model and M + cel groups significantly increased, while the expression of HMGB1 in the nucleus obviously decreased (Fig. 5A-F). In summary, overall HMGB1 expression was barely affected by celastrol, and celastrol hardly affected the distribution of HMGB1 between the cytoplasm and nucleus 48 h after OGD injury. In contrast, celastrol significantly increased both the overall and nuclear expression of HSP70 and decreased the overall and nuclear expression of NF-κB p65, which was consistent with previous studies [28-30] (Fig. 5A-F). The immunofluorescence results were in line with the Western blot results in vitro (Fig. 5J). Similar Western blot and immunofluorescence results were also observed in MCAO rats (Fig. 5H-N). Compared with the Model group, celastrol (1 mg/kg) remarkably increased the expression of HSP70, downregulated the expression of NF-κB, and had no effect on the expression of HMGB1 in the rat MCAO model (Fig. 5H-N). Celastrol did not affect the secretion and redox state of HMGB1 HMGB1 in the concentrated supernatant of the primary neuron OGD Model group increased gradually in a time-dependent manner and reached a peak at 24 h (Fig. 6A). The amount of secreted HMGB1 in the Model and M + cel (0.8 μM) groups at 48 h was almost the same (Fig. 6B), which indicated that celastrol did not affect the secretion of HMGB1 after OGD injury. HMGB1 in the concentrated supernatant was actively secreted in response to OGD injury at 48 h. Celastrol hardly affected the redox state of HMGB1: the disulfide-bond form of HMGB1 showed faster mobility in a non-denaturing PAGE gel, and celastrol did not affect its mobility (Fig. 6C). In addition, the disulfide-bond form, but not the fully reduced form, of HMGB1 was the predominant form in the Model and M + cel groups in primary neurons injured by OGD compared with the Control group (Fig. 6D). According to the present results, celastrol affected neither the secretion and expression of the HMGB1 protein nor its redox state. As shown in Fig. 6E, HMGB1 includes two DNA-binding domains (the A and B boxes) and an acidic C-terminal tail.
Three cysteine residues (Cys23, Cys45 and Cys106) in the A and B boxes mainly determine the redox state of HMGB1. The disulfide bond between Cys23 and Cys45, together with reduced Cys106, is indispensable for the binding of HMGB1 to TLR4 and for its cytokine-inducing activity [10]. Hence, we focused on whether celastrol bound HMGB1 and thereby weakened its cytokine activity. Celastrol directly bound to HMGB1 and to the HMGB1 A box and B box Celastrol exhibited a strong, concentration-dependent ability to bind recombinant human HMGB1 protein (Fig. 7A-B). Celastrol almost completely competed away the binding of IAA-yne to HMGB1 or to DTT-reduced HMGB1 (Fig. 7C-D), which indicated that celastrol could occupy Cys106 of disulfide HMGB1 or Cys23, 45 and 106 of reduced HMGB1. This result was consistent with previous studies proposing that celastrol exerts cellular effects by forming covalent adducts with cysteine residues of proteins [31][32][33]. The binding of celastrol to the HMGB1 protein was stronger than that of GA and metformin, which are recognized HMGB1 inhibitors that bind to the A/B box and the C-terminal acidic tail of HMGB1, respectively (Fig. 7E-F). Celastrol also almost completely blocked the binding of IAA-yne to the cysteines of the recombinant human HMGB1 A and B boxes. The binding of cel-p to the A and B boxes could not be blocked by IAA (Fig. 7G), which indicated that celastrol could bind to sites of HMGB1 other than cysteines 23, 45 and 106. Celastrol remarkably blocked the cytokine activity of HMGB1 and the B box Celastrol did not affect the expression and redox state of HMGB1 after OGD injury. In addition, previous research indicated that only the disulfide-bond form of HMGB1 possesses cytokine activity [10]. Therefore, we mainly explored whether celastrol influenced the increase in TNF-α induced by the disulfide-bond form of HMGB1 in RAW 264.7 cells. According to the manufacturer's description, the EC50 of HMGB1 for stimulating TNF-α production in RAW 264.7 cells was 0.7855-0.8342 μg/ml, so we chose 0.8 μg/ml HMGB1 to induce the secretion of TNF-α. As shown in Fig. 8A, 0.8 μg/ml HMGB1 obviously increased TNF-α secretion compared with the control group, and 0.1 μM or 0.05 μM celastrol markedly decreased TNF-α secretion in RAW 264.7 cells. Recombinant human HMGB1 B box at 0.02 μg/ml and 0.2 μg/ml obviously increased TNF-α secretion compared with the control group, and 1× celastrol (10^-6 μmol and 10^-5 μmol) markedly decreased TNF-α secretion in RAW 264.7 cells (Fig. 8B). According to the published literature, TLR4 is the only receptor through which HMGB1 produces cytokines, by binding to cysteine 106 of the B box [34]. We then used recombinant human B box and the B box-celastrol complex to verify that 1× and 5× celastrol obviously blocked the binding of the receptors TLR4 and RAGE to the B box (Fig. 8C). In conclusion, celastrol remarkably blocked the cytokine activity of HMGB1 and the B box by directly binding to them and thereby blocking their combination with inflammatory receptors.
Discussion Our studies demonstrated that: (1) celastrol exhibited a neuroprotective effect against ischemic stroke in vitro and in vivo; (2) celastrol directly bound HMGB1; (3) celastrol did not affect the expression of HMGB1, but increased HSP70 and decreased NF-κB expression to exert an anti-inflammatory effect in vitro and in vivo; (4) celastrol bound cysteine 106 of disulfide-bond HMGB1 or cysteines 23, 45 and 106 of fully reduced HMGB1; (5) celastrol reduced the overproduction of TNF-α induced by disulfide-bond HMGB1 and the B box; and (6) celastrol exerted an anti-inflammatory effect by binding to the B box to block the combination of TLR4 and RAGE with the HMGB1 B box. Taken together, our findings suggested that the neuroprotective action of celastrol in ischemic stroke is due to its inhibition of neuroinflammation, through upregulating HSP70, decreasing NF-κB expression and directly binding the HMGB1 protein. The specific experimental process is shown in Fig. 9. With the maturation of mass spectrometry-based proteomics technology, stable isotope labeling has been widely used in research on disease biomarkers and drug targets through quantitative measurement of relative or absolute protein amounts in healthy versus disease states [35]. TMT and iTRAQ are commercially available and widely applied isobaric tags, as they allow multiplexing of up to 10 samples with high-resolution instruments and are applicable to a range of sample types [36]. TMT isobaric labeling allows simultaneous identification and quantification of the components of complex protein mixtures, with a key workflow of sample denaturation, digestion, isobaric tagging of tryptic peptides, fractionation, mass spectrometric analysis and data processing [37,38]. First, we verified that celastrol retained its biological activity after the introduction of the bioorthogonal reaction group. Celastrol exhibited a neuroprotective effect against ischemic stroke in vitro and in vivo. With the aid of cel-p, TMT labeling and LC-MS/MS technologies, we identified 1405 proteins in the cel-p target recognition experiment, of which 120 were highly reliable. HMGB1 was identified as a direct binding protein of celastrol with fairly high credibility (Fig. 4). CETSA is a label-free biophysical technique for studies of target engagement in cells and tissues, based on the fact that ligand binding affects protein stability, and for cellular studies of protein redox modulation [39]. Generally speaking, many proteins unfold after heating and precipitate rapidly in cells, and a drug that binds a protein can stabilize it and reduce its degradation with increasing temperature compared with untreated cells [40]. CETSA with immunoassay-based detection (such as Western blot, proximity ligation assays or mass spectrometry) is a popular technique to validate ligand binding of drugs to proteins in lysates, cells and tissues; it is based on measuring the changes in protein melting curves after different heating steps and quantifying the amount of remaining soluble protein [40,41]. We investigated whether celastrol bound to and stabilized HMGB1 in cell lysate samples subjected to temperatures from 37 to 82 °C. The results showed that, compared with the DMSO-treated control group, celastrol treatment significantly stabilized HMGB1 and decreased its degradation with increasing temperature (Fig. 4E-F). HMGB1 is highly expressed in the nucleus of multiple cell types, and the redox state of intracellular and extracellular HMGB1 is dynamic, mainly related to the 23, 45 and 106 cysteines.
The disulfide-bond form of HMGB1 (with C106 in the thiol state and C23 and C45 forming a disulfide bond) is required for the TLR4/MD-2 interaction that induces TNF release and NF-κB activation [10]. HMGB1 can be released by passive or active secretion via multiple pathways. Passive release of HMGB1 occurs rapidly during primary necrosis in the fully reduced or disulfide forms, whereas nuclear retention and passive release during apoptosis and secondary necrosis involve mainly the fully oxidized form (sulfonyl HMGB1). Active secretion happens at a late stage of pyroptosis, with posttranslational modification and mainly in the disulfide form [12]. Previous studies showed that celastrol significantly suppressed the HMGB1/NF-κB pathway to alleviate inflammatory pain and exhibit a neuroprotective effect in transient global cerebral I/R, and inhibited HMGB1 expression to decrease myocardial I/R injury [8,42,43]. In contrast to previous studies, a peak of HMGB1 expression was not detected in the Model group, and celastrol did not affect the expression of HMGB1. These results may be related to the type of cells we selected, because mammalian neurons are terminally differentiated, postmitotic cells, and isolated rat cortical primary neurons have little ability to divide and proliferate in vitro in the absence of an inducer [44]. The Western blot results from the neuron culture medium supernatant showed that primary neurons suffering OGD injury mainly secreted the disulfide-bond form of HMGB1 actively, and the formation of the disulfide bond could hardly be prevented by celastrol (Fig. 6). Plasma HMGB1 rapidly increases and acts as a pro-inflammatory cytokine to activate microglia, aggravate excitotoxicity-induced neuronal death and worsen brain injury during the acute damaging phase of an ischemic insult [45,46]. Early HMGB1 translocation and release occur mostly in injured neurons, and HMGB1 acts as a proinflammatory cytokine by interacting with the receptors RAGE, TLR2 and TLR4 [47]. High levels of HMGB1 in the serum and cerebrospinal fluid (CSF) are related to the severity of ischemic brain damage in animals. In addition, HMGB1 in the serum of lipopolysaccharide (LPS)-administered MCAO animals was upregulated and was mainly of the disulfide-bond type [48]. Blockade of HMGB1 with antagonists, including GA [49], the HMGB1 A box and anti-HMGB1 monoclonal antibodies [50], has been verified as an effective treatment strategy in animal stroke models. Previous studies demonstrated that peripheral disulfide HMGB1 produced more obvious pro-nociceptive activity than all-thiol HMGB1 by activating TLR4 rather than RAGE [51]. Here, we confirmed that celastrol did not affect the secretion, redox state or expression of HMGB1 in either normal or OGD-insulted neurons. Apart from binding the cysteines, celastrol also occupied other sites of HMGB1. Considering that only the disulfide-bond form of HMGB1 has cytokine properties, we focused on the effect of celastrol on the disulfide-bond form of HMGB1. Cysteine 106 is the main binding site through which the TLR4 receptor engages HMGB1 to exert cytokine activity. In our research, celastrol directly bound to the HMGB1 A and B boxes and blocked the binding of the HMGB1 B box to its receptors TLR4 and RAGE, resulting in a loss of inflammatory activity. Celastrol disrupted the TNF-α-inducing capacity of HMGB1 and the B box in RAW 264.7 cells. Therefore, celastrol directly bound HMGB1 to make it lose inflammatory activity, rather than reducing its secretion or changing its redox state. In addition, celastrol exerted an anti-inflammatory effect in cerebral ischemic injury by targeting HSP70 and NF-κB.
Although celastrol and its numerous derivatives exhibit potential therapeutic effects against various diseases, none of them has been approved for clinical use because of their toxic effects, low solubility and narrow therapeutic dose range [52]. Therefore, how to overcome the toxicity of celastrol and improve its efficacy is the next focus. In addition, a growing body of evidence supports the idea that inflammation plays different roles at different stages of stroke [53]. HMGB1 shows different activities according to its redox modifications and may play a more complex role in ischemic stroke that remains to be explored. In addition to cytokine activity, HMGB1 also exerts beneficial effects on axonal regeneration, endothelial activation, angiogenesis, and neurovascular repair and remodeling [11,54]. Therefore, whether to promote or inhibit HMGB1 at different stages of stroke must be considered carefully. Conclusion In summary, we performed a proteome-wide investigation of the direct cellular protein binding targets of celastrol in primary neurons and identified 120 targets with fairly high credibility through a quantitative chemical proteomics approach. The present study demonstrated the neuroprotective effect of celastrol against cerebral I/R injury through targeting HSP70 and NF-κB and disrupting the cytokine activity of disulfide-bond HMGB1 in the rat primary cortical neuron OGD model and the adult rat MCAO model, which may provide a potential therapeutic direction for ischemic stroke therapy. By directly binding to the B box, celastrol blocked the binding of the TLR4 and RAGE receptors to the B box to exhibit anti-inflammatory activity. To the best of our knowledge, this is the first study to evaluate the direct binding of celastrol to the HMGB1 protein. We hope that the data and findings from the present study can provide useful guidance for the clinical use of celastrol in the future. Supplementary Files This is a list of supplementary files associated with this preprint. SupplementaryTableS1.xlsx
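As a small illustration of the hit-calling rule stated in the Results (proteins retained when the enrichment ratio R (cel-p/cel-p + 8× cel) exceeds 1.5 with p < 0.05), a table such as Supplementary Table S1 could be filtered as in the Python sketch below. The column names and example rows are hypothetical placeholders, not data from this study.

```python
# Sketch of the stated hit-calling criterion: enrichment ratio R > 1.5 and p < 0.05.
# Column names and example rows are hypothetical, not taken from Table S1.
import numpy as np
import pandas as pd

quant = pd.DataFrame({
    "protein": ["HMGB1", "Protein_X", "Protein_Y"],
    "ratio_R": [2.4, 1.2, 1.8],        # cel-p vs. cel-p + 8x celastrol competition
    "p_value": [0.003, 0.20, 0.04],
})

hits = quant[(quant["ratio_R"] > 1.5) & (quant["p_value"] < 0.05)]
print(hits)  # proteins treated as direct-binding candidates, e.g. HMGB1 in this toy table

# Volcano-plot coordinates of the kind used to display the identified hits
quant["log2_R"] = np.log2(quant["ratio_R"])
quant["neg_log10_p"] = -np.log10(quant["p_value"])
```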
2021-08-02T00:06:55.787Z
2021-04-16T00:00:00.000
{ "year": 2021, "sha1": "a89ba5c2cfaa6fca9816e810e6ad5eaa93e55eac", "oa_license": "CCBY", "oa_url": "https://jneuroinflammation.biomedcentral.com/counter/pdf/10.1186/s12974-021-02216-w", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "2977debff8615a749ff6efa1cb94332745a6cea3", "s2fieldsofstudy": [ "Medicine", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
237253927
pes2o/s2orc
v3-fos-license
An optimal feature enriched region of interest (ROI) extraction for periocular biometric system With the onset of the COVID-19 pandemic, wearing a face mask became essential, and the face occlusion created by masks deteriorated the performance of face biometric systems. In this situation, the use of the periocular region (the region around the eye) as a biometric trait for authentication is gaining attention, since it is the most visible region when masks are used. One important issue in periocular biometrics is the identification of an optimal size periocular ROI which contains enough features for authentication. State-of-the-art ROI extraction algorithms use a fixed-size rectangular ROI calculated from reference points such as the center of the iris or the center of the eye, without considering the shape of the periocular region of an individual. This paper proposes a novel approach to extract optimum size periocular ROIs of two different shapes (polygon and rectangular) by using five reference points (inner and outer canthus points, two end points and the midpoint of the eyebrow) in order to accommodate the complete shape of the periocular region of an individual. The performance analysis on the UBIPr database using CNN models validated that both the proposed ROIs contain enough information to identify a person wearing a face mask. Introduction Uniquely identifying individuals on a day-to-day basis in this era of pandemic is a challenge in the biometric world. The primary reason is the necessity to use face masks to protect against the spread of the coronavirus. Fingerprint biometric systems are also not considered a safe option due to the fear of coronavirus spread [9]. One of the solutions may be to use eye or iris biometrics, but these need a lot of user cooperation. Under this situation, the periocular region as a biometric trait is gaining popularity [1] since it is contactless and its performance is least affected by the face mask. The term periocular refers to the periphery of the eyes, which contains the eye, eyebrow and pre-eye orbital region, as shown in Fig. 1. Unlike other biometric traits such as iris, face and fingerprints, the area of the periocular region is not defined in the literature. The performance of a periocular biometric system strongly depends on the area of the periocular region. A large periocular region may provide high accuracy but can take more execution time, whereas a small periocular region may provide comparable recognition accuracy with less execution time. This is because a large ROI obviously contains a greater number of features than a small ROI. But the question is: are all of the features extracted from a large ROI actually worthwhile for improving the recognition accuracy of the system? Or should one try to extract an optimum size ROI that provides enough features while maintaining the recognition accuracy of the system? The requirement for a small ROI is also important in the COVID-19 pandemic, since a large region of the face is generally covered by the face mask, as shown in Fig. 2. Motivated by the above facts, this research proposes two novel methods to extract two different optimum sized periocular regions of interest which include the critical components of the periocular region. The main contributions are: 1. Proposed novel methods to extract ROIs of two different shapes, 1) polygon and 2) rectangular, from periocular region images, which contain sufficient features for recognition when subjects are wearing face masks. 2.
Demonstrated that the polygon shaped and rectangular shaped ROIs extracted using the proposed methods are capable of obtaining better recognition accuracy than the state-of-the-art rectangular ROIs. The proposed methods can be implemented to create a highly robust contactless biometric authentication system suitable for this pandemic situation. Literature review In the effort to mitigate the spread of the coronavirus, face masks are playing a major role. However, the use of a face mask covers most of the face area, which can be a major hurdle for face recognition in person authentication. Damer et al. [7] studied the effect of wearing face masks on face recognition systems and found a significant drop in recognition accuracy when the subject is wearing a face mask. Fingerprint authentication can be a solution, but it requires touching the surface of the scanner, which may increase the possibility of contamination and the spread of infectious diseases [18]. Hence, in this critical scenario of COVID-19, this solution is not acceptable when every automated system needs to work in a contactless manner. Iris or sclera matching to identify an individual also cannot be considered a good solution because it requires a lot of user cooperation. In this scenario, the use of the periocular region as a biometric trait for person authentication is a good solution. The reasons are: 1) a periocular region based biometric system works in a contactless manner; 2) it requires very low user cooperation; 3) since the area and shape of the periocular region are not defined, the system can consider any shape of ROI which has enough features for person authentication. The pioneering work examining the adequacy of the periocular region as a compelling biometric trait was performed by Park et al. [19]. Subsequently, several works were published to prove the utility of the periocular region as a supporting feature to the iris [4], its usefulness in soft biometric classification [6] and in smartphone authentication [25]. In spite of the popularity of the periocular region as a reliable and contactless biometric trait, the primary challenge for the research community is its undefined size, and this problem becomes more complex when most of the face area is covered with a face mask. In the literature, various strategies and reference points have been considered by researchers to segment the region of interest from periocular images for matching. Park et al. [21] considered the iris center as the reference point and extracted a rectangular ROI with dimensions (6 × Riris) × (4 × Riris), where Riris denotes the radius of the iris, whereas Ahmed et al. [2] extracted a rectangular ROI with dimensions (4 × Diris) × (3 × Diris), where Diris denotes the diameter of the iris, with the iris center as the reference point. The center of the iris performs well as a reference point, but these methods are not applicable when the eyes are partially or fully closed, the face of the subject is tilted, or the gaze angle is not frontal. In order to handle the problem of gaze angle, Mahalingam et al. [15] used the eye center instead of the iris center as the reference point. This method worked well but, again, was not applicable to subjects with a tilted head/face or with partially open eyes. To solve this problem, some investigators proposed using the eye corners as reference points [6,11]. The reason is that the eye corners, also known as canthus points, are least affected by partially open eyes or a tilted face. In another approach found in the literature, proposed by Dong and Woodard [8], Le et al.
[13] and Nguyen et al. [16], instead of using any reference points, critical components of the periocular region, such as the shape of the eyebrow itself, are considered as the region of interest for matching. Proenca et al. [22] proposed a unique method to extract the ROI by considering the center of mass of the cornea as the reference point and claimed that their method is least sensitive to the gaze angle of the eyes. Most of the approaches in the literature provide a fixed-size rectangular region of interest. Considering this fact, Bakshi et al. [3] proposed a human anthropometry based method and implemented an approach to extract a dynamic region of interest. They considered the width of the eyebrow, the width of the face, the height of the face, the area of the face and the distance between the eyebrow and the eye center of an individual to create and extract a rectangular region of interest. Here the area of the ROI can vary based on the above parameters. Considering the popularity of deep learning concepts, Proenca and Neves [23] implemented a CNN model and found that components inside the ocular globe (such as the iris and sclera) do not play a critical role in improving the performance of a periocular system; instead, they may cause degradation in performance. To support this disruptive hypothesis, they provided facts such as 1) the effect of corneal reflections on the iris and sclera, 2) components in the ocular globe are subject to motion because of body or head movement, and 3) partial occlusion of the iris and sclera because of unpredictable movements of the eyelashes or eyelids. On the contrary, Zhao and Kumar [27] implemented an attention mechanism-based CNN model to focus on some of the important components of the periocular region, such as the eye shape and eyebrow. The key assumption of their work was that there may be some critical components in the periocular region which require more attention and may provide more discriminative features at the time of matching. After a rigorous study of the various state-of-the-art methods implemented in the domain of periocular biometrics, it is found that none of the researchers analyzed the efficiency of a periocular biometric system when a large part of the face is occluded by a face mask, or what solution can be considered to improve the performance of the system in this scenario. Considering the above facts and the current pandemic situation, this research proposes two algorithms to extract optimal feature enriched regions of two different shapes (polygon and rectangle) from the visible periocular area. This research also considered the importance of critical components such as the eyebrow, eye shape, eye socket and canthus points and included them in the proposed polygon and rectangular shaped ROIs. Database used For the evaluation of the proposed work, raw input images from the publicly available UBIPr periocular database created by Padole and Proenca [20] are used. This database contains a total of 10252 images represented in RGB color space in .bmp format. Images were captured using a Canon EOS 5D digital camera under highly controlled lab conditions and setups, with distance variation from four to eight meters in steps of one meter, different illumination, frontal, 30 and -30 degree pose variation, and occlusion variability. To create the image metadata, the researchers manually annotated the images for the iris center, canthus points, and the inner, outer and mid points of the eyebrow. Annotations also included information about gaze angle, gender, pigmentation level, eye closure and presence of glasses. Sample images from the UBIPr database are shown in Fig. 3.
Proposed ROI extraction algorithms The core objective of this research is to identify an optimal size periocular ROI for authentication when the subject is wearing a face mask. From the literature review, it is found that most of the existing ROI extraction algorithms [2,14,19,21] extract rectangular ROIs and use multiplication factors, which need to be calculated empirically, to determine the length and breadth of the rectangular ROI. Thus, the objective was to find a ROI extraction method that does not use any multiplication factors and that adapts dynamically to each person's periocular region. For this, some experiments were performed by extracting different ROIs from images with masks. It was found that the feature enriched region around the eye should include four critical features of the periocular region, i.e., the canthus points, eye socket, eyebrow shape and eye shape. Considering the above, this research proposes ROIs of two different shapes, rectangular and polygon, covering all four critical features in the region that remains visible when most of the face area is occluded by a face mask. Example images illustrating both ROIs are shown in Fig. 4. Rectangular shaped ROI extraction and matching The objective was to identify a rectangular shaped ROI which includes the four critical features of the periocular region, i.e., the canthus points, eye socket, eyebrow shape and eye shape. To find the height of the rectangular ROI, the distance between the midpoint of the line connecting the canthus points and the midpoint of the eyebrow is calculated and marked as d in Fig. 5. Based on the symmetry of the eye shape, it was decided to take a distance d above and a distance d/2 below when defining the height of the ROI. The rectangular ROI extraction algorithm uses the end points and midpoint of the eyebrow and the canthus points as reference points and is given in Algorithm 1 (a rough code sketch of both ROI constructions is given further below). Fig. 5 illustrates the rectangular ROI extraction method. An example image after extracting the rectangular ROI from the UBIPr database is shown in Fig. 6. Polygon shaped ROI extraction and matching This research also proposes a polygon shaped ROI which is smaller than the rectangular ROI yet includes the four critical features of the periocular region, i.e., the canthus points, eye socket, eyebrow shape and eye shape. This polygon shaped ROI may be useful in situations where the face is highly occluded because of a mask or because of specific hair styles (such as a pixie cut), as shown in Fig. 7. The height of the proposed polygon ROI is the same as that of the rectangular ROI, the width of the upper side of the polygon is the distance between the eyebrow end points, and the lower side is the distance between the canthus points. The construction of the polygon shaped ROI is shown in Fig. 8. The polygon ROI extraction algorithm also uses the end points and midpoint of the eyebrow and the canthus points as reference points and is given in Algorithm 2. An example image after masking the region outside the polygon ROI from the UBIPr database is shown in Fig. 9. Methodology used The complete methodology used for ROI extraction, feature extraction from the ROI and classification is shown in Fig. 10 and Fig. 11. In the proposed methodology, deep CNN models are used for feature extraction and classification, as described in Sect. 3.5 and Sect. 3.6. The input to the CNN model is the image corresponding to the extracted ROI. For the rectangular ROI, no further processing is required, but the polygon ROI needs to be converted into an image; the method is described in Sect. 3.3.1.
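The sketch below illustrates, in Python, one plausible reading of the two ROI constructions just described, given the five annotated reference points. Only what the text states is taken as given (d is the vertical distance between the canthus midline and the eyebrow midpoint, the polygon's upper side joins the eyebrow end points and its lower side joins the canthus points); the horizontal extent of the rectangle and the exact reference line for the d and d/2 offsets are assumptions, and the helper names are illustrative rather than the authors' Algorithms 1 and 2.

```python
# Illustrative sketch of the rectangular and polygon ROI constructions from the five
# reference points. Offsets beyond what the text specifies are assumptions.
import numpy as np

def midpoint(p, q):
    return (np.asarray(p, dtype=float) + np.asarray(q, dtype=float)) / 2.0

def rectangular_roi(inner_canthus, outer_canthus, brow_inner, brow_outer, brow_mid):
    """Return (x0, y0, x1, y1) in image coordinates (y grows downward)."""
    canthus_mid = midpoint(inner_canthus, outer_canthus)
    d = abs(canthus_mid[1] - brow_mid[1])        # canthus midline to eyebrow midpoint
    top = canthus_mid[1] - d                     # distance d above the canthus line (assumed reference)
    bottom = canthus_mid[1] + d / 2.0            # distance d/2 below the canthus line (assumed reference)
    xs = [brow_inner[0], brow_outer[0], inner_canthus[0], outer_canthus[0]]
    return int(min(xs)), int(top), int(max(xs)), int(bottom)   # width assumption: span all reference points

def polygon_roi(inner_canthus, outer_canthus, brow_inner, brow_outer):
    """Quadrilateral ROI: upper side between eyebrow end points, lower side between canthus points."""
    return np.array([brow_inner, brow_outer, outer_canthus, inner_canthus], dtype=np.int32)

# Example with made-up annotation coordinates given as (x, y)
rect = rectangular_roi((210, 160), (110, 165), (230, 110), (100, 115), (165, 100))
poly = polygon_roi((210, 160), (110, 165), (230, 110), (100, 115))
print(rect, poly.shape)
```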
Extraction of RGB pixel values from polygon region and creation of input image to feed them into the CNN In order to convert the polygon shaped ROI into an image suitable for input to the CNN, a row-wise pixel scan (raster scan) is performed on the images with the masked polygon ROI region. The masking process simply sets all three channel values (Red, Green and Blue) of every pixel outside the polygon ROI to zero. Hence, to extract the pixels of the polygon ROI, it is enough to extract only those pixels for which the value of any of the three channels (Red, Green and Blue) is not zero. After a complete scan of the image, the obtained pixels are stored in a 3-dimensional matrix. This 3D matrix is converted to a .jpeg image. Proposed CNN architecture for feature extraction and classification Considering the popularity of non-handcrafted features, and to analyse the performance of the proposed polygon and rectangular shaped ROIs, a deep CNN model has been designed. The proposed model contains three residual connections and five convolutional layers. Each convolutional layer is followed by one ReLU layer. These are followed by a fully connected layer, a SoftMax layer and finally the classification layer, as shown in Fig. 12 (a PyTorch-style code sketch of this architecture is given further below). Here, the convolutional layers are used to extract features by applying filters to the images, the ReLU layers map all negative values to zero, and the residual connections are used for optimal gradient flow. The output of the whole process is fed into the fully connected layer. The fully connected layer with the SoftMax and classification layers is used for classification and to predict the best class label for the input test images. The description of the five convolutional layers is as follows: 1. The first two convolutional layers (Conv 1 and Conv 2) contain 16 filters of size 5 × 5 with a stride of two pixels. 2. The remaining three convolutional layers (Conv 3, Conv 4 and Conv 5) contain 16 filters of size 3 × 3 with a stride of two pixels. VGG19 CNN model and transfer learning An off-the-shelf pretrained deep CNN model (VGG19) with a transfer learning approach is also implemented to analyse the effectiveness of both the rectangular and polygon shaped ROIs. The transfer learning approach aims to transfer knowledge (features, weights, etc.) from previously learned tasks to a newer task when the training samples for the newer task are insufficient to train a robust model [10]. In image classification, transfer learning is used to create a bridge between the source feature space and the target feature space using a translator in order to transfer the learning from source to target. In this study, we have implemented transfer learning on the pretrained deep CNN VGG19 model by freezing the initial ten layers of the pretrained model. Freezing means that, during the backward pass, the weights of these layers are not updated. Experiments All experiments were carried out using MATLAB r2018b on a system with an Intel Core i7-8750H processor @ 2.2 GHz, 8 GB DDR4 RAM and the Windows 10 operating system. The performance of both the ROIs (polygon and rectangle) extracted using the proposed algorithms was analyzed with the two different CNN classifier models described in Sect. 3.4 and Sect. 3.5. After evaluating many different sets of hyperparameters, such as different learning rates (0.0001, 0.0003, 0.0005 and 0.0008), minibatch sizes (2, 4, 8, 16, 32) and epochs (from 2 to 10), the final parameter specification used to train the proposed CNN model and the VGG19 model is shown in Table 1.
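A PyTorch-style sketch of the five-convolution network described above is given here (the study itself was implemented in MATLAB). The filter counts, kernel sizes and strides follow the text; the placement and form of the three residual connections, the input resolution, and the use of 1×1 projections to match the downsampled feature maps are assumptions, since the text does not spell them out.

```python
# Minimal sketch of the described five-convolution CNN with three residual connections.
# Conv1-2: 16 filters, 5x5, stride 2; Conv3-5: 16 filters, 3x3, stride 2; ReLU after each
# conv; FC + softmax for classification. Residual-connection placement is an assumption.
import torch
import torch.nn as nn

class PeriocularCNN(nn.Module):
    def __init__(self, num_classes: int, in_channels: int = 3):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=5, stride=2, padding=2)
        self.conv3 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv4 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.conv5 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 stride-2 projections so the skip paths match the downsampled feature maps
        # (assumption: the paper does not detail how its residual connections are formed).
        self.skip2 = nn.Conv2d(16, 16, kernel_size=1, stride=2)
        self.skip3 = nn.Conv2d(16, 16, kernel_size=1, stride=2)
        self.skip4 = nn.Conv2d(16, 16, kernel_size=1, stride=2)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x) + self.skip2(x))   # residual connection 1
        x = self.relu(self.conv3(x) + self.skip3(x))   # residual connection 2
        x = self.relu(self.conv4(x) + self.skip4(x))   # residual connection 3
        x = self.relu(self.conv5(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)                              # softmax applied below / by the loss

model = PeriocularCNN(num_classes=344)                 # 344 subjects in the UBIPr database
logits = model(torch.randn(2, 3, 128, 128))            # example forward pass (input size assumed)
probs = torch.softmax(logits, dim=1)
```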
Three different experiments were carried out to evaluate the performance of both the ROIs (polygon and rectangle) extracted using the proposed algorithms. Experiment 1: Complete UBIPr database In this experiment, the complete UBIPr dataset is utilized and divided into training, validation and test sets. The partition is based on the Pareto principle, which suggests dividing the dataset in an 80:20 ratio. 80% of the images from all 344 subjects are selected randomly and kept as the training and validation set, and the remaining 20% are kept as the testing set. From the training and validation set, 80% of the images are again randomly selected to train the model, and the remaining 20% are used to validate the model. Details of the number of images used for training, validation and testing are shown in Table 2. Experiment 2: Images with pose variation In this experiment, to analyse the effectiveness of both the extracted ROIs, a non-ideal scenario in which the images suffer from pose variation (30 and -30 degrees) was used for testing. Both models were trained using frontal images (0 degree pose variation) only. Details of the number of images used for training and testing of both models are shown in Table 3. Both the proposed CNN and the VGG19 architecture were used for testing the performance of the ROIs in recognition. Experiment 3: Subjects are wearing glasses In this experiment, one more non-ideal scenario, in which the testing dataset contains only images of subjects wearing glasses, was used to analyse the performance of both the extracted ROIs. Details of the number of images used for training and testing of both models are shown in Table 4. Both the proposed CNN and the VGG19 architecture were used for testing the performance of the ROIs in recognition. Result and discussion The key innovation of this study is the identification of an optimum size periocular region of interest for biometric authentication when the subject is wearing a face mask. A thorough analysis was performed to analyze the efficiency of the proposed polygon and rectangular shaped periocular ROIs from different perspectives, such as the performance of both ROIs in two non-ideal scenarios (when images suffer from pose variation and when subjects are wearing glasses), the training time taken by both models, and the size of the ROIs. Recognition accuracy To analyse the performance of both the polygon and rectangular shaped ROIs using the two different CNN architectures (the proposed CNN and VGG19), this study implemented a closed-set identification scenario and used Rank 1 recognition accuracy as the performance metric. The Rank 1 accuracy was calculated for both the polygon and rectangular ROIs extracted from the UBIPr database using both the proposed CNN and the VGG19 model in Experiment 1. Training time The total training time taken by the proposed CNN architecture and the pretrained CNN model for both the polygon and rectangular shaped ROIs on the complete UBIPr database is shown in Table 8. It is observed that the training time taken by the proposed CNN is much less than that of VGG19, and the training time taken by both CNN models with the polygon ROI is less than with the rectangular ROI, even though the difference is marginal. This may be due to the smaller number of features in the polygon ROI compared with the rectangular ROI. Size of ROI A comparative analysis of the size of the polygon and rectangular ROIs, in terms of the number of pixels they contain, is done with reference to a selected image and is shown in Table 9.
It is observed that the number of pixels contained in the polygon ROI is around 39% less than in the rectangular ROI, while it still has all the necessary features to obtain acceptable recognition accuracy. Comparison with pre-existing work in literature The proposed approach is also compared with state-of-the-art works on the complete UBIPr database. The comparison results clearly show an improvement in recognition accuracy, as shown in Table 10. Based on the experiments on the UBIPr database using both the polygon and rectangular shaped ROIs, it is observed that both ROIs performed well with the proposed five-layer CNN model and the VGG19 model. Moreover, Experiment 2 and Experiment 3 show the effectiveness of the proposed ROIs in non-ideal scenarios, when the test dataset contains images with pose variation and when subjects are wearing glasses. The proposed rectangular ROI region is 18% to 20% smaller in area than the rectangular ROI used in our previous work [12], but still performs better. This shows that the ROIs extracted using the proposed algorithms are feature enriched small regions around both the left and right eye. Illustration of generalization capability of proposed algorithms to extract polygon and rectangular ROI To examine the generalizability of the proposed ROI extraction algorithms, we applied both algorithms to different images randomly chosen from the publicly available MaskedFace-Net dataset [5] as well as to images captured using a mobile phone camera. Some of the images illustrating both the polygon and rectangular ROIs are shown in Fig. 16. Here, the first three images were randomly chosen from the MaskedFace-Net dataset, and the last three images were captured in real time using an iPhone XR with a 12-megapixel camera and an f/1.8 aperture in a controlled environment. From Fig. 16, it is observed that the proposed ROIs lie within the visible region when subjects are wearing different types of face masks and contain all the required critical features within that small ROI. Conclusion and future work Human beings are presently confronting the COVID-19 global pandemic, which has shaken the whole world. Along with many worries, this situation gives us an opportunity to think out of the box and develop strategies and tactics to mitigate its effects. Indeed, digital technology in every domain needs to be upgraded. This paper deals with the problem of occlusion caused by face masks in biometric systems. As a solution, this research proposes algorithms to extract ROIs of different shapes (polygon and rectangular) from the visible periocular region when subjects are wearing masks. The proposed algorithms use five reference points (inner canthus point, outer canthus point, and the end points and midpoint of the eyebrow) in order to include the complete shape of the periocular region of the individual. The eyebrow end points ensure the inclusion of the complete eyebrow shape, and the canthus points ensure the inclusion of the eye shape of the individual in the proposed ROIs. The proposed rectangular ROI shows a marginal improvement in recognition accuracy compared with the polygon shaped ROI due to its larger area and thus more features. However, the polygon ROI may be useful when the subject's face is highly occluded due to a face mask or hair. This paper also proposes a simple five convolutional layer CNN model with residual connections for evaluating the performance of the proposed ROIs.
The pretrained VGG19 CNN model is also used for evaluation, and it is found that the training time taken by the proposed CNN model is much less than that of VGG19, yet it gives comparable recognition accuracy. The performance of the proposed method is also compared with the state-of-the-art rectangular ROI based methods, and the experimental results provide very strong support for the proposed ROIs, which are unique of their type; to the best of our knowledge, nothing similar has been proposed in the area of periocular biometrics for subject identification when half of the nose area is covered with a face mask. In future work, we aim to reduce the number of reference points required to extract the optimal size periocular ROIs in order to reduce the complexity of the ROI extraction algorithms.
2021-08-22T05:32:49.480Z
2021-08-20T00:00:00.000
{ "year": 2021, "sha1": "a097a8a1e316a386489d6fb07406221217a0a533", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/s11042-021-11402-0.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "a097a8a1e316a386489d6fb07406221217a0a533", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
213194636
pes2o/s2orc
v3-fos-license
Isolated Penile Edema After Diagnostic Paracentesis Diagnostic paracentesis is a routinely practiced, typically safe procedure performed in the emergency department. Genital swelling post-paracentesis is a rare complication with few documented case reports. We report a case of isolated penile edema after a diagnostic paracentesis performed in the emergency department. The patient is a 63-year-old male who came to the emergency department with a two-day history of isolated penile swelling after undergoing a diagnostic paracentesis in the emergency department as part of his workup during a recent hospital admission. On exam, the paracentesis site was noticeably low, beneath the inguinal ligament on the right side. His genital exam showed a circumcised penis with significant soft tissue swelling that involved the entire penile shaft sparing the glans and scrotum. There was no penile tenderness on palpation or urethral discharge. The testicles and scrotum revealed no signs of edema or tenderness, hernias, or abnormal lie. Of note, the patient reported that he had a less severe episode of penile swelling approximately one year ago after a paracentesis in a similarly low site, which resolved spontaneously. The features and timing of this presentation, added to the patient’s previous episode over a year ago, pointed to this being a sequela of the paracentesis he had undergone during his last hospital stay. After evaluation and consultation with the urology service, he was discharged home with expectant management and outpatient follow-up. His symptoms resolved spontaneously after one week. To our knowledge, there have been no published reports of isolated penile edema after a diagnostic paracentesis. This case could be used when teaching the proper technique for performing a paracentesis and its potential complications. Introduction Patients who present to the emergency department (ED) with new-onset ascites, patients with worsening ascites in the setting of liver disease, or those with clinical deterioration may require a diagnostic or therapeutic paracentesis [1,2]. Paracentesis is seen as a typically safe procedure that can be performed under ultrasound guidance to relieve abdominal distension and provide a specimen for analysis and culture [3][4][5]. Of the rare complications post-paracentesis, the best chronicled are ascitic fluid leak (most common), bleeding, bowel perforation, and infection [5,6]. Genital swelling post-paracentesis is a rare complication with few documented case reports. We performed a literature review using PubMed, Google Scholar, and Scopus, which showed that there has never been a reported case on isolated penile edema post-paracentesis. Our search keywords included paracentesis, genital edema, scrotal edema, and penile edema. Our literature search yielded only three case reports of swollen genitalia after a paracentesis dating back to the 1970s, with two of them describing post-paracentesis scrotal edema and one case of labial edema [7][8][9][10]. Here we report a case of isolated penile edema after a diagnostic paracentesis performed in the ED. Written consent was obtained from the patient for the writing and use of images in this case report. Case Presentation This is a case of a 63-year-old male who presented to the ED with a two-day history of painless penile shaft swelling. He had a past medical history of liver cirrhosis, hepatitis C, and alcohol abuse. 
The patient stated that the swelling began one day prior to presentation and that it was not improved or worsened by any specific factors, was painless, and was not associated with other symptoms such as difficulty urinating, dysuria, testicular pain or swelling, or nausea. He also denied penile or groin trauma, or being sexually active. On physical exam, the patient was well appearing and in no apparent distress. His abdomen was slightly distended with pronounced abdominal veins but soft and without tenderness to palpation. His right inguinal area had a piece of gauze taped to his skin, and upon removal, we noticed a puncture wound which was clean and dry and was noted to be a few centimeters medial to the patient's right anterior inferior iliac spine in the inguinal region and appeared to be below the inguinal ligament ( Figure 1). FIGURE 1: The paracentesis site from three days prior (circle) The site was clean and dry and did not exhibit any swelling, tenderness, erythema, or warmth. His genital exam showed a circumcised penis with significant soft tissue swelling that involved the entire penile shaft sparing the glans and scrotum ( Figure 2). There was no penile tenderness on palpation or penile discharge. The testicles and scrotum revealed no signs of edema or tenderness, hernias, or abnormal lie. FIGURE 2: Notable edema of the penile shaft without scrotal swelling During our interview, we discovered that three days prior to the presentation, the patient visited the same ED for evaluation of an episode of upper gastrointestinal bleeding (UGIB) in a setting of decompensated cirrhosis for which he was evaluated and subsequently admitted to the hospital. During his ED evaluation, he underwent a diagnostic paracentesis that ruled out spontaneous bacterial peritonitis. Documentation about the procedure did not specify if ultrasound guidance was used. He was admitted for the management of decompensated cirrhosis and UGIB. Of note, the patient reports that he went to the ED with a less severe episode of penile swelling, which occurred approximately one year ago after a paracentesis, which resolved spontaneously without extra care. The features (i.e., site of paracentesis) and timing of this presentation, added to the patient's previous episode over a year ago, pointed to this being a sequela of the paracentesis he had undergone during his last hospital stay rather than an acute infectious process such as cellulitis or spontaneous bacterial peritonitis. While in the ED, the patient was evaluated by the urology service. After their evaluation, we agreed that the swelling was likely a complication due to the paracentesis and they only recommended that the patient wear tight underwear for scrotal support. Urine analysis performed in the ED was negative for urinary tract infection and was sent out for culture which did not show bacterial growth. The patient was discharged home with return parameters and reassurance that the swelling would resolve over the next couple of days. During a subsequent telephone encounter, the patient reported that the swelling fully resolved about one week after our evaluation. Discussion Paracentesis is a routine procedure performed at the patients' bedside to provide symptomatic relief and assist in diagnostic efforts. With improvements in imaging modalities (i.e., ultrasound guidance), procedures like paracentesis have become more ubiquitous with relatively low complications [4,5]. 
Even though ultrasound guidance is not always necessary, it reduces the incidence of complications and injury to nearby structures compared to paracentesis based solely on physical exam [6]. Operator's comfort and experience with the use of ultrasound play a major role in the outcomes of the procedure. In this case report, we explore a rare complication of paracentesis, isolated penile edema, which could be associated with a negative psychological impact due to genital disfigurement and slow recovery. We believe that this episode of penile edema is directly related to the location of the paracentesis entry point. Currently, there is no consensus on the best location for needle entry during an abdominal paracentesis, and the ideal location may differ between patients [6]. However, in general the bilateral lower quadrants of the abdomen are considered safer than the midline due to the increased thickness of the abdominal wall at the midline and increased risk of hematoma. One study considers the left lower quadrant ideal as it is not as thick as the infraumbilical midline and does not risk perforating a distended cecum (i.e., post-lactulose administration) in the right lower quadrant [11]. Others recommend paracentesis site in the relatively avascular areas beneath the umbilicus on either side laterally with caution to avoid the bilateral epigastric vessels, previous scarring, or engorged veins [6,12]. There is no literature supporting paracentesis below the inguinal ligament. This patient's penile swelling can be attributed to two different mechanisms: low puncture site and hypoalbuminemia (average albumin during visit 3.0 g/dL (normal range 3.2-4.9 g/dL)) . Examining the patient's paracentesis site showed that the needlestick was relatively low in the right lower quadrant just below the inguinal ligament. We hypothesize that the low needlestick led to a post-procedure leak which dissected the abdominal wall fascia and extended down to the penis causing swelling. The penis is surrounded by superficial dartos fascia and deep Buck's fascia, which surround the corporal bodies and are important for erectile function. The superficial dartos fascia is continuous with the scrotum and perineum Colles' fascia and the abdominal wall Scarpa's fascia [13]. Theoretically, a low paracentesis stick could lead to fluid tracking along the abdominal Scarpa's fascia and track down to the penile dartos fascia causing separation of soft tissue and swelling. The second contributing factor in this patient is his low protein state secondary to chronic liver disease. The patient suffers from alcoholic liver cirrhosis leading to hypoalbuminemia and low oncotic pressure, which in turn leads to fluid accumulation in the extravascular space and dependent areas such as the penis and scrotum if proper paracentesis technique is not observed. Interestingly, this was the patient's second episode with penile complications postparacentesis. The reproducibility of this patient's complication further demonstrates the anatomical relationship between the fascial layers of the abdomen and genitalia and the risk of such complications after paracentesis. Penile swelling in this setting is a self-limiting phenomenon, which resolved with conservative management. Even though this disfigurement is not as life-threatening as some of the other complications after a paracentesis, it can be distressing to patients and negatively affect their quality of life. 
We hope that this case report will provide an example of this uncommon complication to help providers identify it and manage it. Conclusions This is the first written case report of a patient experiencing isolated penile swelling after a diagnostic paracentesis performed in the ED. We hope that this case report can be a valuable example of a potential rare complication post paracentesis. This case could be used when teaching the proper technique for performing a paracentesis and its potential complications. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2020-03-21T09:10:59.627Z
2020-03-01T00:00:00.000
{ "year": 2020, "sha1": "e43e69cbc74655a6733371ec03c8852ae377d989", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/28963-isolated-penile-edema-after-diagnostic-paracentesis.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2fc49e22342d4eca6f10f81bd62f9d52561e1950", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253286620
pes2o/s2orc
v3-fos-license
The Meaning of Suicidal Behaviour for Portuguese Nursing Students Background: Nursing perspectives on suicidal behaviors may influence the quality of assistance and suicide prevention. This phenomenon is scarcely investigated among nursing students. Aims: The aim of this study is to understand the meanings of suicidal behavior for Portuguese undergraduate students. Methods: This qualitative study utilized Grounded Theory and Symbolic Interactionism. We collected data in Portugal in 2017–2018 with 13 undergraduate students. Results: Students compared suicidal behavior to "A complex and close haze" and considered it "A neglected phenomenon". Suicidal behavior was predominantly perceived as emotional distress that requires assistance. The students compared the person and society to "The car and the road: behavior influenced by communication and interaction" and valorized the social dimensions and repercussions of suicidal behavior. Limitations: The lack of triangulation in the data and the sampling restricted to nursing students of a single institution are considered limitations of this study. Conclusions: This study can contribute to the development of academic education strategies and psychosocial support for nursing students. Introduction Suicide is a global concern that must be given priority in the public health and policy agenda. It is estimated that over 700 thousand people die by suicide every year. In Portugal, in 2019, the suicide mortality rate was 11.5 per 100,000 inhabitants, a higher rate than other types of violent deaths. These numbers are probably underestimated, as suicide is underreported in Portugal for several reasons, and the country is one of the countries with the highest number of violent deaths from undetermined causes in the European Union [1,2]. Suicidal behavior is defined as a set of thoughts and actions linked to the desire to cause one's own death. It can be observed along a continuum that includes self-destructive thoughts, threats, gestures, and attempts to die by suicide [1,3]. In the literature, there are different theoretical models that seek to explain suicidal behavior and the complex interaction between the biopsychosocial factors associated with this phenomenon. The most accepted models have in common a multifactorial perspective of suicidal behavior (associated with emotional, social, and physiological factors), the search to understand the complex mechanisms related to this behavior, and the transition from suicidal thoughts to suicidal actions [4][5][6]. The risk factors for suicidal behavior include previous suicide attempts, mental disorders, harmful use of psychoactive substances, stressful events, hopelessness, unbearable emotional suffering, feelings of failure, imprisonment, loneliness, lack of social support, accessibility to lethal means, exposure to suicide, violence, feelings of worthlessness, impulsiveness, and aggression, among others [1,5,6]. Although it is argued that suicide can be prevented, the topic is hugely complex, still stigmatized, and poorly understood. These barriers ultimately increase the pain of those who suffer from suicidal thoughts and behaviors as well as prevent their families from seeking and obtaining effective and qualified assistance [1]. Qualitative studies carried out in Portugal reveal that suicide is mainly portrayed as an enemy, a malefic, pathological, mysterious, and threatening entity that causes high mortality in the Portuguese population and is associated with a public moral duty of prevention [7,8].
Preventive interventions must be comprehensive and involve multiple sectors of society [9]. Nursing professionals can play a crucial role in suicide prevention and provide significant insight into the prognosis of people at risk of suicide [10][11][12][13]. Suicidal behavior is considered an enigmatic phenomenon, and assistance to people with suicidal behavior is considered by nursing professionals as a critical and challenging moment, which evokes varied feelings and requires knowledge, skills, and emotional control [14][15][16]. Research reveals that nurses and nursing students often feel emotionally affected by suicidal behavior and have difficulties understanding, empathizing, and verbally interacting with those who attempted suicide [14,[17][18][19]. When nursing professionals manifest judgments that lack empathy and do not feel prepared or supported for care, they may play a limited role in care (restricted to physical demands) [14]. Qualitative studies carried out in different countries show that the meanings attributed to suicidal behavior may converge with, or be dissonant from, scientific advances in the understanding of suicide and its prevention. The meanings attributed to suicide are dynamically reconstructed, and it is common to use personal beliefs to support people with suicidal behavior [19,20]. A study carried out with Brazilian professionals revealed that they considered suicide an instigating, unacceptable, and intolerable behavior [14]. A study carried out in Ghana suggests that suicidal behavior is seen as reprehensible and objectionable, and these perceptions may favor moralistic attitudes and prescriptive approaches [21]. A Belgian study carried out on psychiatry specialist nurses identified different approaches, which may be focused on verifying and controlling the risk of suicide or on understanding and connecting with the person [12]. In some contexts, suicidal behavior was understood as a health condition that requires care and interpersonal connection [13]. Studies show a predominance of negative attitudes toward suicidal patients and that these attitudes seem to be associated with a lack of appropriate training for health workers [22][23][24]. On the other hand, adequate training is associated with favorable changes in attitudes and competencies in assisting suicidal patients [25,26]. These findings stress the importance of qualified academic training for health professionals. Studies indicate that skills and knowledge for suicide prevention have been insufficiently addressed in the academic environment. These gaps may favor the continuity of some negative beliefs and behaviors identified in society, such as judgments, discriminatory attitudes, lack of understanding, and the search for blame, admiration, or condemnation [18,27,28]. The representations of suicidal behavior among health students may impact peer support during the academic path and help-seeking behaviors, and can also interfere with the quality of care provided to people with suicidal behavior [18]. Nursing care may be influenced by the nurse's beliefs about how nursing professionals should proceed when working with suicidal patients. These perceptions have been shown to be associated with the training nurses received and with their skills in suicide risk assessment and care planning [19,21,29,30]. However, the available evidence on what may shape nursing professionals' perceptions of suicidal behavior is still scarce [31,32].
There seems to be a limited understanding of nursing students' personal and professional experiences with suicidal behavior and the influence this may have on their learning [16,28]. Additionally, little is known regarding the educational content on suicide in undergraduate nursing curricula internationally [33], and the currently available studies on these issues are predominantly quantitative and restricted to suicide-related attitudes or specific components of professionals' experiences, limiting a broader understanding of the representations of suicide among prospective nurses. In addition, there is a lack of studies conducted on nursing students. Knowledge of the meaning of suicide from the perspective of nursing students could shed light on the needs, potentials, facilitators, and limitations of academic training on suicidology, as well as the assessment of experiences and educational strategies related to suicide. Therefore, the aim of the present study is to investigate the meaning of suicidal behavior from the perspective of Portuguese nursing students. Materials and Methods The present study was designed to answer the following guiding question: What is the meaning of suicidal behavior from the perspective of Portuguese nursing students? The qualitative approach was adopted to meet the objective of this study as it is suitable for apprehending and interpreting representations, meanings, motives, beliefs, and values to obtain in-depth knowledge of different dimensions of social phenomena [34]. The study was presented in accordance with the Consolidated Criteria for Reporting Qualitative Research (COREQ), thereby ensuring the comprehensive and explicit reporting of the study [35]. The main strategies used to promote rigor in this research were: transparency in the description of the method; returning to the participants for validation when the accuracy/interpretation of interview transcripts was in question; rigorously and critically following Grounded Theory (GT) procedures; promoting regular discussions about both the reflexivity and credibility of the preliminary interpretations with a group of researchers; and an external audit of the research process [34,36,37]. In this study, we employed Symbolic Interactionism (SI) as a theoretical framework. SI provides the theoretical basis for our understanding of how meaning is developed through interaction. These theoretical assumptions guided the researcher's perspective during the analysis of the phenomenon investigated. SI considers that behavior (observable external actions and internal experiences) is guided by the individual's definitions of reality and that these definitions are derived from social interactions in which active individuals exert mutual influence [38]. Applying SI to the present investigation, suicidal behavior can be viewed as continually defined and redefined by nursing students through a dynamic and interactive interpretative process. We employed the version of GT proposed by Strauss and Corbin (2008) as the methodological framework of this study. We chose to use GT since it allows for the construction of a substantive theory with an emphasis placed on the social and psychological processes involved thereof (including meanings, perceptions, and how individuals continually reinterpret and react to the phenomenon). In this study, GT was used to determine methodological procedures such as theoretical sampling, memo writing, the constant comparative method, coding and categorizing, and theory generation [34]. 
SI is considered a component of the theoretical underpinnings of GT methodology [34] and the qualitative health research literature has reinforced the solidified conceptual linkage between SI and GT [39]. The study was conducted at the Nursing School of Coimbra, in Coimbra, Portugal. This higher education institution offers a degree course in nursing. Undergraduate students in nursing from the 5th semester onwards were eligible to participate in the study. The decision to approach the students in the final semesters was justified by the fact that they are most likely to have had contact with patients admitted to the hospital for having attempted suicide. Theoretical sampling was adopted to guide participant inclusion in the study according to their potential to describe experiences or their possible contributions to better understanding the investigated phenomenon. Initially, we used purposive sampling when inviting the first participant of the study, and further data were collected on the basis of theoretical sampling, which aims to maximize the opportunities to explore and compare events, concepts, characteristics, situations, experiences, and definitions, thus ensuring the constitution and refinement of study categories [34]. As the data were collected and analyzed, subsequent decisions about the methodological sampling of participants and the type of data collected were guided by the emerging theory. In this study, the interruption of data collection and the addition of new participants were determined by the theoretical saturation, which occurred when the objective of the study was reached, the categories of the study were developed, coherent, and articulated, and data became repetitive and added no relevant information for the understanding of the studied phenomenon. Additionally, for ethical reasons related to suicide prevention [40], we included in the study the number of students considered necessary and sufficient to achieve the proposed objective. During the theoretical sampling process, we invited 16 students to participate in the study. Three refused to participate due to a lack of availability. Thus, the research was developed with 13 students. We collected data from November 2017 to February 2018 through individual, audiorecorded, open, semi-directed, or semi-structured interviews. The initial interview with the participants was guided by the following question: "Can you describe the meaning of suicidal behavior?" Other questions were subsequently added to clarify our analysis of their meaning of suicidal behavior. We continuously modified the interview process according to the analysis of the data obtained. A structured questionnaire was also employed to obtain demographic information (age, gender), and data relating to the participants' academic backgrounds (semester of the undergraduate program, attending discipline on mental health, class, scientific events, courses or lectures on suicide prevention, contact with suicidal patients, and reading the literature on the subject). The interviews were conducted in a private room at a time previously arranged with the participants according to their availability. All the interviews were conducted by a trained researcher (first author) who had no previous relationship with the participants, did not belong to the staff of the institution, and was not involved in educational activities. Each participant attended one or two interviews, approximately 40 min in duration. 
The transcribed interviews were analyzed through open, axial, and selective coding, in accordance with GT. We employed a constant comparative process for the identification of patterns and variations in the data, asking questions and sampling based on evolving theoretical concepts. Hypotheses about the emerging concepts and their relationships were developed and tested through the use of constant comparative data analysis [34]. The participants in the study, as indicated by GT, validated the results. During open coding, the data were broken into discrete parts, closely examined, and compared in terms of the similarities and differences exhibited between described events, situations, or relevant participant characteristics. Through axial coding, the categories were related to their subcategories, and the properties and dimensions of categories were refined to form precise explanations thereof. In selective coding, the categories were clearly integrated and refined; the theoretical scheme was reviewed to verify its internal consistency. Memos and diagrams were composed to support the development of the research. The research was conducted entirely in Portuguese and subsequently, the final report was translated into English. The research began after receiving approval from the Research Ethics Committee of the Nursing School of Coimbra (ethical approval number: 377_12-2016). We initially obtained a list of nursing students enrolled in the institution and invited them to confidentially participate in the study. Eligible participants were asked to take part in a study investigating the meaning of suicidal behavior. They were informed about the development and purposes of the study, and all participants provided written informed consent prior to their participation. We informed potential participants that their anonymity would be preserved, that they were free to refuse to participate in the study, would not be paid to participate, and that they could withdraw from their participation at any time without consequence. Considering the potential distress this could have upon the participants, the Informed Consent included a statement in which the researchers offered their support in the event of such an outcome. Although support was offered, none of the participants required it. The participants were also informed that in the educational institution there was a professional who could meet their emotional demands, if necessary. The work complied with all standards and recommendations concerning research involving humans. Results In total, 13 students participated in the study. Most participants were 22 years old (53.8%), women (92.3%), in the seventh semester of the undergraduate program (53.8%), had already attended a discipline on mental health (92.3%) and a course on suicide prevention (92.3%), stated they had been in contact with patients at risk of suicide (69.2%), had read literature on the subject (69.2%) and had not participated in scientific events, courses or lectures on the subject (76.9%). Qualitative data analysis resulted in the following categories: "A complex and close 'haze'", "The car and the road: behavior influenced by communication and interaction" and "A neglected phenomenon". These categories and their respective descriptions are presented below. A Complex and Close "Haze" Participants described suicidal behavior as a complex and close phenomenon, compared to a haze in terms of the abstraction, boundaries, characteristics, and magnitude of this behavior. 
The haze was a metaphor for the psychological state of suicidality, which includes overwhelming and permanent suffering, hopelessness, loss of self-orientation, and a dysfunctional way to handle a desperate situation. "It's like a foggy day, where you can't see the blue of the sky. And the person stops believing the sky exists, that the blue in the sky is real." (P5, 2018). The respondents considered suicidal behavior complex, intriguing, multi-causal, with imprecise boundaries, and marked by uniqueness, misunderstandings, and controversies. Students' views on suicidal behavior seemed obscure or had not been thought about or discussed before. Suicidal behavior was also considered common in society, something that can occur at any stage in life, associated with regular, everyday pressures and hardships, and present in the social circle of many respondents.

"The Car and the Road": An Interactional and Communicative Phenomenon

Suicidal behavior was considered essentially interactional and communicative. It was not seen by the students as an isolated or individual event. Social dimensions were strongly present in representations of the risk factors, protective factors, and effects or consequences of suicide. A metaphor of a car (representing the individual) and a road (representing the social interactions at both a macro level, i.e., society, and a micro level, i.e., personal relationships) was used to describe the social dimensions of well-being or suicidal behavior. "It's like I [person with suicidal behavior] was a car and the people who support me were the road. I am the one who is in movement, but if the road goes away, I fall." (P5, 2018). "We do not live alone in the world; we have the people around us, who affect us in several ways, including this one." (P3, 2017). The statements revealed the belief that suicidal behavior and communication or interaction could exert a mutual and heavy influence in terms of prevention and support (information, suicide risk assessment, support, treatment, and respect), neglect (non-recognition of risk, passivity) or pro-suicidal effects (inadequate communication, increased suffering and risk factors, contagion). The students believed that the person who suffers from suicidal thinking and behaviors, intentionally or not, could be sending messages, "a cry for help" and "a cry for attention", or trying to provoke feelings or influence others. "One person can make a cry for attention just to get support or as a cry for help". "When she is thinking about suicide, she is thinking: 'I'm going to relieve the burden, the weight that I mean for other people. If people get rid of me, they will be happier. I won't be missed here.'" (P5, 2018). The consequences of suicidal behavior were also deeply linked to social dimensions that include the feelings, reactions, judgments, and impact on the history of lives. Death by suicide can have a profound effect on the people who are close to the deceased. Respondents who knew someone close who died by suicide claimed suicide leaves "permanent marks" and "affects mainly the people who remain". "When there is a suicide, people usually analyze the social context involving the person." (P7, 2018). "I feel that suicide doesn't happen to the person who dies, it happens to the other people (...) he or she stops feeling anything, while great suffering begins for the bereaved people." (P1, 2017).

A Neglected Phenomenon

Suicidal behavior was considered a difficult phenomenon to identify and prevent effectively in the present moment.
According to the participants, these difficulties were related to the characteristics of the psychological states of suicidality and to society's responses to this phenomenon. Thus, reactive actions regarding suicidal behavior (responses after the behavior has manifested) prevailed in society over proactive actions (early prevention, anticipating actions with initiative). The difficulties with identifying a suicidal state and, therefore, preventing it were the covert, non-specific, or not evident symptoms, the recurrence of crises with unforeseeable consequences, the lack of hope and of assistance to seek help, and the possibility of an abrupt suicide attempt. "Suicide is a process that begins with negative thoughts, and it can go unnoticed by people who are not very close or attentive [...] and it may be recurrent [...] and when we're at that stage we don't want anyone's help." (P5, 2018). "[...] It is difficult to distinguish things and we cannot always see clearly if a person is suffering. Some think there are fewer better days, because fewer good days exist and we neglect them, but sometimes I think it is very hard to see." (P3, 2017). The limited proactivity of society in preventing suicide could also be related to ignorance, lack of preparation and difficulties approaching the subject, stigma, discrimination, and condemnatory attitudes towards suicidal behavior. Participants also mentioned the generalization and trivializing of other people's suffering, the conventional and impersonal way of supporting them, a limited bond with the person at risk, and personal attitudes or beliefs. "Just yesterday I had a meeting with a lecturer and with my colleagues. There was this colleague of mine, and we started talking about it [suicide], but he didn't feel comfortable so we chose to remain silent because no one knew what to say." (P11, 2018). "I think that the prevailing concept, in general, is, for example: 'He's just sad', 'You have to be strong to get through this'. I think it's something along those lines. [...] they don't pay attention, they neglect it, because they know other people have been there and overcame the situation." (P12, 2018). According to the respondents, suicidal behavior was a taboo that is being attenuated or deconstructed in society. However, participants said that the demystification process is still superficial, insufficient to promote changes, and that the discussion about suicide is still restricted to certain groups. The social responses to suicidal behavior were associated with geographic locations (with distinct cultural aspects), level of education, access to knowledge about mental health, social isolation, and personality factors (especially openness and flexibility). "There's the social stigma. What they say in the villages and out there is that the person who commits suicide is weak [...] my mother is from a village and she would say psychologists are for crazy people [...]. In the village, people are less literate, they rarely go to the city and their only means of communication are the TV and the radio, they are very isolated inside themselves." (P5, 2018).

Discussion

The current study aimed to investigate the meaning of suicidal behavior from the perspective of Portuguese nursing students. Suicidal behavior was considered complex and diverse, intriguing, and multifactorial.
It was represented as a neglected risk due to its characteristics (involving hidden emotional states and behaviors considered non-specific, abrupt, or recurrent) and social issues that impair suicide prevention (negligence, non-recognition of risk, inappropriate communication, and lack of proactivity). Suicidal behavior was considered essentially interactional and communicative. Individual, relational and social dimensions were present in risk and protection factors, in prevention, and in the functions, intentionality, and consequences of suicidal behavior. The most accepted theoretical models regarding suicidal behavior have a multifactorial perspective and reinforce the relevance of individual, interactional and communicative dimensions [4][5][6]. In this study, suicidal behavior was perceived by the participants as a complex but common and neglected phenomenon. Previous research suggests that suicide is frequent in Portuguese society although notoriously underreported [7,8]. The silence around suicide in Portugal can limit the legitimacy, visibility, social relevance, and political discussion on the subject, which favors the perpetuation of inequities, stigma, and ignorance [7]. In this study, the students had difficulties describing suicidal behavior, which they themselves stated was still something obscure. At the same time, they also know they will be working with this uncommon phenomenon whose understanding is limited. In the literature, suicide is often described as a complex phenomenon with different connotations, hardly comparable to other potentially fatal conditions, and overshadowed by the lack of a clear and publicly stated definition [14,19,21]. Previous studies revealed that many health professionals, students, and a large proportion of the general population believe that people suffering from suicidal thinking and behavior are responsible for and aware of the consequences of their suicidal actions. In addition, suicidal behavior is often associated with stigmatizing and negative attitudes and misunderstanding, and is considered a transgression or reprehensible act [14,18,21,23,30]. In our sample, differently from other studies, the legitimation of emotional distress as a condition requiring care prevailed over the perception of suicidal behavior as a transgression of health care. Although different perspectives coexisted, it is possible to identify a pattern in the perception of suicidal behavior as a dysfunctional way of dealing with despair and suffering, and the students showed interest, empathy, and efforts to understand what someone experiences when they feel suicidal. In another study with nursing students, suicidal behavior was considered an act that could be communicative when associated with manipulation or a request for help [18]. In the current study, suicidal behavior was essentially interactive and communicative. The relational and social aspects were considered in the evaluation of risk and protective factors, prevention, functions, intentionality, and the consequences of suicidal behavior. These perceptions may be influenced by academic training, but this result may also reveal individual experiences related to social support and mental health during the undergraduate period. Adequately understanding the social dimension of suicidal behavior and mental health issues is important for nursing care planning and social support, and to avoid misunderstandings, blame or self-blame related to suicidal behavior in others.
Appropriate pedagogy and student support services must be considered for nursing undergraduate students, as the emotional intensity of dealing with suicide prevention is a focal point in preparing and supporting prospective nursing professionals [16,23,28]. A review of qualitative studies indicates that it is necessary to improve nurses' relational skills and monitor the emotional impact related to suicide in order to promote more qualified care [15]. According to the nursing students, the quality of social interactions could be related to prevention, neglect, or pro-suicidal effects. They vehemently criticized the predominance of reactive rather than proactive (preventive) attitudes related to suicidal behavior. Nursing care for people with suicidal behavior demands interaction, engagement, and connection with the person, understanding them, and establishing a therapeutic bond. This bond needs to have authenticity, trust, reflexivity, proximity, empathic assessment, and the construction of collaborative care capable of favoring openness, building a support network, and better coping with crises [11,13]. Regarding the reactions to suicide, studies with students and health professionals point out that suicide can provoke reactions of emotional distress, concerns, doubts, fear, frustration, and shock [14,[17][18][19], but it can also favor learning and proactive actions related to better management of suicide risk in clinical practice and a personal determination to prevent suicide [27]. Studies indicate that suicide risk can be assessed by nursing students based on personal judgments and beliefs [19,20]. In addition, in different countries, health professionals also report difficulties in assessing suicide risk and developing appropriate interventions [12,14,21]. In our study, the students attributed the difficulties of recognition and prevention to the characteristics of the psychological states of suicidality (symptoms, recurrence, and unforeseeable consequences) and to society's responses to this phenomenon (unpreparedness to support, stigma, discrimination, condemnatory attitudes, generalization and trivializing of suffering, fragile or superficial social relationships). The use of standardized patient sessions, case studies, simulations, blending theory, role play, sharing of personal experiences of suicide, and reflexive approaches in training has shown promising results in the improvement of professional competences [16,26,28,33]. The students believed that the taboo surrounding suicide is in a process of deconstruction, but they also pointed out that the ongoing demystification is slow because people avoid talking about suicide. They also stated that the younger generation was more understanding than the previous ones. According to the Directorate General of Health in Portugal, stigma and taboos related to suicide are still important problems in Portugal [2,8]. However, the literature indicates that open communication about suicide is fundamental for the evaluation of risk and protective factors, and for the development of preventive strategies [1,11,13]. Studies have suggested that the taboo and the avoidance of talking about suicide may also be prominent in the academic environment [18,27,28], and there are contexts in which nursing students controversially criticize taboos related to suicidal behavior yet reinforce them through judgments, discriminatory attitudes, and avoidance of talking about this subject (including in clinical practice) [18,27,28].
In our study, the undergraduate students also pointed out that the social stigma towards suicide and mental health issues is associated with geographic locations, level of education, and social isolation. These issues could be addressed in future studies in Portugal. The limitations of this study involve a lack of triangulation in the data, the small sample size, and the sampling restricted to nursing students of a single educational institution within a restricted geographic territory. However, investigations that use GT can be adapted and expanded with the emergence of new data or new sample groups. To our knowledge, this is the first study to address the perceived meanings of suicidal behavior amongst Portuguese nursing students.

Conclusions

Suicidal behavior was predominantly represented as emotional distress that requires assistance, and it was considered diverse, intriguing, and multifactorial. It was considered essentially interactional and communicative. Individual, relational, and social dimensions were present in risk and protection factors, in prevention, and in the functions, intentionality, and consequences of suicidal behavior. Suicidal behavior was perceived as a neglected risk due to its characteristics (involving hidden emotional states and behaviors considered non-specific, abrupt, or recurrent) and social issues that impair suicide prevention (negligence, non-recognition of risk, inappropriate communication, and lack of proactivity). It is relevant to investigate the elements that can harm prevention (taboos, stigma, ignorance, unpreparedness, generalization, and trivialization of the suffering of others) among other health professional students. Additionally, it is important to promote deep and open discussion about suicide and to investigate the cultural issues and other characteristics (geographic locations, level of education, and social isolation) apparently related to stigma and taboos regarding suicide in Portugal.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are not available due to ethical restrictions. Conflicts of Interest: The authors declare no conflict of interest.
Study on signal priority implement technology of tram system

Signal intersection delay is the key factor that affects the speed of trams and, consequently, the development of tram systems. Signal priority implementation technology for tram systems has been researched based on characteristics such as the right of way, vehicle speed, vehicle length, and acceleration and deceleration behavior. In this article, a method for implementing signal priority at different intersection types and sectional forms is presented. © 2013 The Authors. Published by Elsevier B.V. Selection and/or peer-review under responsibility of Chinese Overseas Transportation Association (COTA).

Background

In recent years, the modern tram has become the focus of many large cities. But in the city center area, trams are affected by general road traffic, especially intersection signals. As a result, the speed advantage of trams is greatly constrained. The operating speed is generally about 20 km/h, with no significant advantage over conventional buses. Nowadays, intersection signal delay has become the main constraint on tram speed and further confines the development of tram systems. Therefore, research on signal priority technology has great significance for the development of tram systems.

Key Components of Tram Signal Priority System

The tram signal priority system consists of three parts, which are shown in Figure 1. (1) Vehicle intelligent terminal (VIT): a VIT is installed in every tram and records the information of that tram. (2) Road side unit (RSU): an RSU is installed at the roadside, 100 meters away from the stop line. The RSU reads the on-board (VIT) information when the tram passes. (3) Signal priority request unit (SPRU): the SPRU is installed inside the traffic signal controller. It receives the request signal from the RSU and transforms it into information that can be recognized by the traffic signal controller.

Signal Priority Operation Principle

At time t, the tram arrives at the RSU, which is 80 to 100 meters away from the stop line. The RSU then reads the information from the VIT installed in the tram and sends it to the SPRU. The run time Δt during which the tram moves from the RSU to the stop line can be estimated from the result of a traction calculation. Thus the tram is expected to arrive at the stop line at time t + Δt. The SPRU adjusts the control signal according to the signal operation condition at times t and t + Δt. The tram gains priority through the adjusted signal timing.

Intersection classification

It is necessary to undertake corresponding research for different types of intersections, such as cross-intersections, T-intersections, etc. Different tram section types should also be taken into consideration, including the central-layout pattern, both-sides-layout pattern, and one-side-layout pattern. Furthermore, the traffic signal design is influenced by different traffic flows and features, including motorized vehicles, non-motorized vehicles and pedestrians. All the factors listed above should be considered in tram signal priority research. Intersections are divided into 5 types based on intersection type and tram section type. These 5 types are: central straight pattern, side straight pattern, side to central pattern, central turning pattern, and side right-in and right-out pattern. The specific intersection features and signal priority schemes are researched in the sections below.

Central Straight Pattern

In the central straight pattern, the tram lane was set in the middle lane.
Trams traveled through the intersection in a straight line. The sectional form is shown in Figure 2. The traffic signal control adopted a four-phase control mode, which is shown in Figure 3. In phase A, straight passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase B, left-turn passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase C, straight passage on the north-south approaches was allowed and the other traffic directions were forbidden. In phase D, left-turn passage on the north-south approaches was allowed and the other traffic directions were forbidden. A tram special phase X was designed in which only the tram can cross the intersection and all other vehicle and pedestrian movements were forbidden. Different control schemes were applied based on the tram's arrival time. (1) If the tram was expected to arrive at the stop line at the end of phase A, phase A would be extended so that the tram can cross. When the tram passed through the intersection, phase B would be implemented. The phase sequence was not changed and remained in the normal order A-B-C-D. (2) If the tram was expected to arrive at the stop line at the end of phase D, phase D would be truncated so that phase A returned earlier and the tram can cross. When the tram passed through the intersection, phase B would be implemented. The phase sequence was not changed and remained in the normal order A-B-C-D. (3) If the tram was expected to arrive at the stop line at any other time, the tram special phase X would be inserted. The phase sequence was changed to A-B-X-C-D or A-B-C-X-D.

Side Straight Pattern

The traffic signal control adopted a three-phase or four-phase control mode, which is shown in Figure 5. In phase A, straight passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase B, left-turn passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase C, straight and left-turn passage on the north-south approaches was allowed and the other traffic directions were forbidden. A tram special phase X was designed in which only the tram can cross the intersection and all other vehicle and pedestrian movements were forbidden. Different control schemes were applied based on the tram's arrival time. (1) If the tram was expected to arrive at the stop line at the end of phase A, phase A would be extended so that the tram can cross. When the tram passed through the intersection, phase B would be implemented. The phase sequence was not changed and remained in the normal order A-B-C. (2) If the tram was expected to arrive at the stop line at the end of phase C, phase C would be truncated so that phase A returned earlier and the tram can cross. When the tram passed through the intersection, phase B would be implemented. The phase sequence was not changed and remained in the normal order A-B-C. (3) If the tram was expected to arrive at the stop line at any other time, the tram special phase X would be inserted. The phase sequence was changed to A-B-X-C.

Side to Central Pattern

In the side to central pattern, the position of the tram lane changed from the roadside to the middle lane. Trams traveled through the intersection in a turning line. The sectional form is shown in Figure 6. The traffic signal control adopted a two-phase control mode, which is shown in Figure 7. In phase A, straight passage on the north-south approaches and left-turn passage on the north approach were allowed and the other traffic directions were forbidden.
In phase B, left-turn passage on the east approach was allowed and the other traffic directions were forbidden. A tram special phase X was designed in which only the tram can cross the intersection and all other vehicle and pedestrian movements were forbidden. Different control schemes were applied based on the tram's arrival time. (1) If the tram was expected to arrive at the stop line at the beginning of phase A, the tram special phase X would be inserted after the minimum green time of phase A. Afterwards, the remaining time of phase A would be resumed. The phase sequence was changed to A-X-A(remaining)-B. (2) If the tram was expected to arrive at the stop line at the end of phase A, the tram special phase X would be inserted after the minimum green time of phase A. When the tram passed through the intersection, phase B would be implemented. The phase sequence was changed to A-X-B. (3) If the tram was expected to arrive at the stop line at the beginning of phase B, the tram special phase X would be inserted after the minimum green time of phase B. Afterwards, the remaining time of phase B would be resumed. The phase sequence was changed to B-X-B(remaining)-A. (4) If the tram was expected to arrive at the stop line at the end of phase B, the tram special phase X would be inserted after the minimum green time of phase B. When the tram passed through the intersection, phase A would be implemented. The phase sequence was changed to B-X-A.

Central Turning Pattern

The traffic signal control adopted a three-phase or four-phase control mode, which is shown in Figure 9. In phase A, straight passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase B, left-turn passage on the east-west approaches was allowed and the other traffic directions were forbidden. In phase C, straight and left-turn passage on the north-south approaches was allowed and the other traffic directions were forbidden. A tram special phase X was designed in which only the tram can cross the intersection and all other vehicle and pedestrian movements were forbidden. Different control schemes were applied based on the tram's arrival time. (1) If the tram was expected to arrive at the stop line during phase A, the tram special phase X would be inserted between phase C and phase A. When the tram passed through the intersection, phase A would be implemented. The phase sequence was changed to C-X-A-B-C. (2) If the tram was expected to arrive at the stop line during phase B, the tram special phase X would be inserted between phase A and phase B. When the tram passed through the intersection, phase B would be implemented. The phase sequence was changed to A-X-B-C. (3) If the tram was expected to arrive at the stop line at the beginning of phase C, the tram special phase X would be inserted between phase B and phase C. When the tram passed through the intersection, phase C would be implemented. The phase sequence was changed to B-X-C-A. (4) If the tram was expected to arrive at the stop line in the middle of phase C, the tram special phase X would be inserted after the minimum green time of phase C. Afterwards, the remaining time of phase C would be resumed. The phase sequence was changed to C-X-C(remaining)-A. (5) If the tram was expected to arrive at the stop line at the end of phase C, phase C would be truncated and the tram special phase X would be inserted. When the tram passed through the intersection, phase A would be implemented. The phase sequence was changed to C-X-A.
Fig. 9. Traffic Signal Phase of Central Turning Pattern.

Side Right-in and Right-out Pattern

In the side right-in and right-out pattern, the tram lane was set at the side of the road.
Trams traveled through the intersection in a straight line. The intersection is a T-intersection, and vehicles can only turn right from and to the south approach. The sectional form is shown in Figure 10. A tram special phase X was designed in which only the tram can cross the intersection and all other vehicle and pedestrian movements were forbidden. Different control schemes were applied based on the tram's arrival time. (1) If no tram was detected to arrive, right-turn passage from and to the south approach was always allowed. (2) If a tram was detected to arrive, the tram special phase X would be inserted. When the tram passed through the intersection, phase A would be resumed. The phase sequence was changed to A-X-A.
Fig. 11. Traffic Signal Phase of Side Right-in and Right-out Pattern.

Conclusions

The key components of a tram signal priority system have been researched, and a signal priority operation principle is presented based on the expected arrival time of the tram. Intersections are divided into 5 types based on intersection type and tram section type. According to these, effective strategies and approaches for tram signal priority are proposed. Adjustment schemes for traffic signal timing have also been designed based on the estimated arrival time of the tram. The research results can be used in tram operation schemes. Travel time for trams can be reduced through these measures, which will lead to an increased transit quality of service.
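Across all five patterns, the choice between extending a phase, truncating a phase, or inserting the tram-only phase X is driven by the estimated arrival time t + Δt relative to the phase that will be running at that moment. The sketch below illustrates that decision logic for the central straight pattern only; it is a hypothetical reconstruction, not code from the study, and the constant-speed run-time estimate, the 10-second extension limit, and the function and field names are all illustrative assumptions (the paper relies on a traction calculation and on controller-specific timings).

```python
# Hypothetical sketch of arrival-time-based priority for the central straight
# pattern (phases A-B-C-D, trams crossing with the east-west straight phase A).
# Thresholds and names are illustrative; they are not taken from the paper.

from dataclasses import dataclass

@dataclass
class PhaseState:
    current_phase: str        # phase running when the request is processed
    time_to_phase_end: float  # seconds until that phase would normally end

def estimate_run_time(distance_m: float, speed_mps: float) -> float:
    """Crude stand-in for the paper's traction calculation: assumes the tram
    roughly keeps its detected speed from the RSU to the stop line."""
    return distance_m / max(speed_mps, 0.1)

def choose_scheme(arrival_in_s: float, state: PhaseState,
                  max_extension_s: float = 10.0) -> str:
    """Pick the signal adjustment for the central straight pattern."""
    if state.current_phase == "A":
        if arrival_in_s <= state.time_to_phase_end:
            return "no change: tram arrives while phase A is still green"
        if arrival_in_s <= state.time_to_phase_end + max_extension_s:
            return "extend phase A; sequence stays A-B-C-D"
    if state.current_phase == "D":
        return "truncate phase D so phase A returns earlier; sequence stays A-B-C-D"
    return "insert tram special phase X (A-B-X-C-D or A-B-C-X-D)"

# Example: tram detected 90 m from the stop line at 10 m/s while phase D is running
dt = estimate_run_time(90.0, 10.0)
print(round(dt, 1), "s to stop line ->", choose_scheme(dt, PhaseState("D", 6.0)))
```

In a real controller, the SPRU would also have to respect minimum green times, pedestrian clearance intervals, and limits on how often priority may be granted before extending, truncating, or inserting a phase.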
Development and validation of an index to assess hospital quality management systems

Objective The aim of this study was to develop and validate an index to assess the implementation of quality management systems (QMSs) in European countries. Design Questionnaire development was facilitated through expert opinion, literature review and earlier empirical research. A cross-sectional online survey utilizing the questionnaire was undertaken between May 2011 and February 2012. We used psychometric methods to explore the factor structure, reliability and validity of the instrument. Setting and participants As part of the Deepening our Understanding of Quality improvement in Europe (DUQuE) project, we invited a random sample of 188 hospitals in 7 countries. The quality managers of these hospitals were the main respondents. Main Outcome Measure The extent of implementation of QMSs. Results Factor analysis yielded nine scales, which were combined to build the Quality Management Systems Index. Cronbach's reliability coefficients were satisfactory (ranging from 0.72 to 0.82) for eight scales and low for one scale (0.48). Corrected item-total correlations provided adequate evidence of factor homogeneity. Inter-scale correlations showed that every factor was related, but also distinct, and added to the index. Construct validity testing showed that the index was related to recent measures of quality. Participating hospitals attained a mean value of 19.7 (standard deviation of 4.7) on the index, which theoretically ranged from 0 to 27. Conclusion Assessing QMSs across Europe has the potential to help policy-makers and other stakeholders to compare hospitals and focus on the most important areas for improvement.

Introduction

In a recent review of instruments assessing the implementation of quality management systems (QMSs) in hospitals, the authors conclude that hospital managers and purchasers would benefit from a measure to assess the implementation of QMSs in Europe. The results of the review show that there is currently no well-established measure that has also been used to assess the link between quality management at hospital level, quality management activities at departmental level and patient outcomes [1]. In the context of the European cross-border directive and the Council Recommendation on patient safety, it is even more important to measure and compare the implementation of QMSs across countries to gain insight into the existing prerequisites for safe patient care and possible gaps in quality management within or between countries. The definition of a QMS used in this article is 'a set of interacting activities, methods and procedures used to monitor, control and improve the quality of care' [2]. In the recent review, 18 studies assessing the implementation of QMSs were described. Only nine of these studies reported methodological criteria in sufficient detail and were rated as good [1]. Only two of them have been used in several European countries, e.g. the European Research Network on Quality Management in Health Care (ENQuaL) questionnaire for the evaluation of quality management in hospitals [3,4], and the Methods of Assessing Response to Quality Improvement Strategies (MARQuIS) questionnaire and classification model for quality improvement systems [5]. Despite their good evaluation, both instruments have important limitations.
The ENQuaL questionnaire was developed in 1995 and does not cover more recent quality management topics, such as 'use of indicator data' and 'learning from adverse events' [2]. The MARQuIS questionnaire is very long (113 items) and focusses mainly on leadership (36 items), policy, planning and documents (20 items), quality strategies (laboratory) (20 items) and structure (19 items), but less on the evaluation of care processes by indicator data. The latter is an important step in the quality improvement cycle [5]. The objective of this study was to develop and validate an up-to-date and more concise survey instrument to assess hospital QMSs in European countries, and to compute an index, the Quality Management Systems Index (QMSI), representing its developmental stage. Specifically, we report on the structure, reliability, validity and descriptive statistics of the QMSI and its scales.

Conceptual considerations

A broad range of activities can be used by an organization to maintain and improve the quality of care it delivers. These activities might change over time because of new evidence, changing expectations of the public or new (national) regulations regarding accountability. When developing a multi-item measurement instrument, we need to know the underlying relationship between the items (quality activities) and the construct to be measured, e.g. the QMS. Like the earlier developed ENQuaL and MARQuIS questionnaires, the new instrument has partly been based on the nine enabler and result themes of the existing theoretical framework of the European Foundation for Quality Management (EFQM) model [6], but some of the questionnaire items had to be changed to represent actual developments in quality management practice.

Development of the instrument

The questionnaire was a web-based multi-item and multidimensional instrument to assess the development of QMSs in hospitals. The aim of the questionnaire was to focus on the managerial aspects of quality management, such as policy documents, formal protocols, analyzing performance and evaluating results, and not on leadership, professional and patient involvement or organizational culture, as these are different theoretical concepts within the Deepening our Understanding of Quality improvement in Europe (DUQuE) framework (Fig. 1) and are assumed to influence the implementation of QMSs. The literature review revealed that earlier studies have distinguished six domains of quality management, e.g. procedures and process management, human resource management, leadership commitment, analysis and monitoring, structures and responsibilities, and patient involvement. Most of the instruments do not cover all the domains. If they do, they have a large number (179) of items [1].
Figure 1 Conceptual model of DUQuE.
Several steps were applied to develop a more concise instrument that still covers the most important domains of QMS presented in the literature. To select items from earlier questionnaires (ENQuaL and MARQuIS) [2,5] or develop new items for the DUQuE questionnaire, we first used the expert opinion of other DUQuE project members (n = 10). They considered the most relevant and possibly most influential activities for the improvement of patient-related outcomes. The experts have a long history in healthcare, especially in quality management, in the various countries. For a concise instrument, only the most relevant activities are important.
Second, items related to the more managerial focal areas of the theoretical framework of the EFQM model were selected ( policy documents, human resources, processes and feedback of results such as patient and professional experience, comparison of clinical and societal performance). The wording of the items and the answer categories were compared with accreditation manuals and the review on existing instruments. In the end, most questionnaire items for the new instrument came from the ENQuaL and MARQuIS questionnaire, but because of our focus on managerial aspects of quality management, not all focal areas of these instruments were selected. Finally, the answer categories of all items were standardized with a focus on the extent of implementation and four answer categories. The questionnaire was first developed in English and was translated into seven languages using a forward-backward translation process for validation. Respondents could rate each item on a four-point-Likert-type scale, with answer categories ranging from 'Not available' to 'Fully implemented' and from 'Disagree' to 'Agree'. The content validity of the final questionnaire used in the DUQuE project was approved and judged completely by the 10 experts from different quality research areas involved in the project who were not involved in the Quality Management Systems Index (QMSI) development. The questionnaire consisted of 56 items divided over 5 dimensions: quality policy (10 items), quality resources (9 items), performance management (7 items), evidence-based medicine (13 items) and internal quality methods (17 items). Setting and participants The study took place within the context of the DUQuE project that ran from 2009 to 2013 [7,8] in 7 European countries: France, Poland, Turkey, Portugal, Spain, Germany and Czech Republic. These countries represent the diversity of Europe (e.g. countries from the East/West, North/South, regional/national healthcare system, system in transition/longer established system). In each country, 30 hospitals were randomly recruited if they had >130 beds and were treating patients with acute myocardial infarction, hip fracture and stroke and handled child deliveries. The conditions were chosen for their high financial volume, high prevalence of the condition and percentage of measureable complications, and the different types of patients and specialists they were covering [7,8]. Of the 210 approached hospitals, 188 were able to participate (89.5% response rate). The DUQuE QMSI questionnaire was administered online to the quality managers of the 188 participating hospitals (response N = 183; 97%). A quality manager of a hospital was defined as the person who is responsible for the coordination of quality improvement activities. He/she should have a good overview of all activities toward quality improvement (questionnaire instruction). The quality manager was allowed to ask other people in the hospital if he/she was not sure about the right answer, but only one questionnaire per hospital was expected to be filled in. The instruction also said that it was not necessary for a hospital to have all activities mentioned in the questionnaire and that it was expected that hospitals would be in different phases of implementation for different activities. Ethical approval was obtained by the project coordinator at the Bioethics Committee of the Health Department of the Government of Catalonia (Spain). Data collection. 
Respondents who participated in the DUQuE project were invited by a letter and personally by the country coordinator. Questionnaires were completed anonymously and directly entered in the online data platform. The data were collected between May 2011 and February 2012. All participants were sent passwords to access the web-based questionnaire and sent reminders. Statistical analyses. We began by describing the hospitals and quality managers that provided responses to the main questionnaire used to develop the index. Next, we used psychometric methods to investigate the structure, reliability and validity of the QMSI instrument. We assumed that our ordinal data approximated interval data and conducted exploratory factor analysis and confirmatory factor analysis, reliability coefficient, item-total scale correlation and inter-scale correlation analyses [9][10][11]. These were done separately for each of the theoretical themes. We explored the factor structure of the questionnaire using split-file principal component analysis with oblique rotation and an extraction criterion of eigenvalues of >1 while requiring three or more item loadings. Items were grouped under the factor or scale where they displayed the highest factor loading. Only items that had loadings of at least 0.3 were assigned to a factor [10]. Confirmatory factor analysis was then used on the second half of the sample to determine whether the data supported the final factor structure. A root mean square residual of <0.05 and a non-normed fit index of >0.9 indicated good fit of the scale structure to the data. We then performed reliability analysis using Cronbach's alpha where a value of 0.70 or greater indicated acceptable internal consistency reliability of each scale [12,13]. We also examined the homogeneity of each scale using item-total correlations corrected for item overlap. Item-total correlation coefficients of 0.4 or greater provided adequate evidence of scale homogeneity. Finally, we assessed the degree of redundancy between scales by estimating inter-scale correlations using Pearson's correlation coefficients, where a correlation coefficient of <0.7 indicated non-redundancy [11,14]. Once we had a final factor structure, we computed the score for each of the scales by taking the mean of items used to build the scale. We used appropriate multiple-imputation techniques to handle missing data for hospitals with missing data for four or fewer scales used to build the final QMSI [15]. The scores of the extracted scales of our analysis were then summed in order to construct the final QMSI. We subtracted the number of factors or scales from this sum in order to bring the lower bound of the scale down to zero. In order to validate our instrument, we further examined correlations with two other measures of quality management based on on-site visits by external auditors. These other constructs were the Quality Management Compliance Index (QMCI) and the Clinical Quality Improvement Index (CQII) [16]. The QMCI measures the compliance of healthcare professionals, managers or others responsible in the hospital with quality management strategies. The CQII measures the implementation of clinical quality strategies by healthcare professionals. Both measures are based on on-site visits of external auditors and are described separately in this supplement [16]. We used Pearson's correlation coefficients to assess the relationship between QMSI, QMCI and CQII, deeming coefficients between 0.20 and 0.80 as acceptable [10,14,17]. 
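To make the index construction concrete, the sketch below shows one way the computations described above (per-scale means of items scored 1 to 4, Cronbach's alpha per scale, and the summed index rebased to start at zero so that nine scales yield a 0 to 27 range) could be reproduced. It is an illustrative reconstruction rather than the study's analysis code; the pandas/NumPy tooling, the toy data and the scale and item names are assumptions.

```python
# Illustrative sketch (not the study's actual code) of the QMSI computation:
# each scale is the mean of its items (scored 1-4), the scale scores are
# summed, and the number of scales is subtracted so the index starts at 0.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of one scale (rows = hospitals, columns = items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def qmsi(responses: pd.DataFrame, scales: dict) -> pd.Series:
    """Sum of per-scale item means minus the number of scales (lower bound 0)."""
    scale_means = pd.DataFrame(
        {name: responses[cols].mean(axis=1) for name, cols in scales.items()}
    )
    return scale_means.sum(axis=1) - len(scales)

# Toy example with two of the nine scales; column names are hypothetical.
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.integers(1, 5, size=(6, 5)),  # 6 hospitals, 5 items scored 1-4
    columns=["policy_1", "policy_2", "policy_3", "board_1", "board_2"],
)
scales = {
    "quality_policy_documents": ["policy_1", "policy_2", "policy_3"],
    "quality_monitoring_by_board": ["board_1", "board_2"],
}
print(cronbach_alpha(data[scales["quality_policy_documents"]]))
print(qmsi(data, scales))
```

With all nine scales, the same call would produce the 0 to 27 index reported in the Results, and per-scale alphas and item-total correlations can be inspected before the scales are summed.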
If the QMSI measures the implementation of QMS in hospitals, it is expected that there would be a positive non-collinear relationship between the QMSI and the two more independent measures of quality management: QMCI and CQII. Because only some parts of the content of the three instruments overlap, the coefficients will not be very high. Participants A total of 188 hospitals participated in the DUQuE project. Quality managers of all the hospitals responded to the questionnaire, but five quality managers provided not enough data to calculate the nine scales and the QMSI. Background characteristics of the participating hospitals and the quality managers who filled in the questionnaire are given in Table 1. Table 2 gives an overview of factor loadings, Cronbach's alphas and corrected item-total correlations for each of the nine scales retained from factor analysis that were used to build the QMS index. These nine scales were quality policy documents (three items), quality monitoring by the board (five items), training of professionals (nine items), formal protocols for infection control (five items), formal protocols for medication and patient handling (four items), analyzing performance of care processes (eight items), analyzing performance of professionals (three items), analyzing feedback and patient experiences (three items) and evaluating results (six items). We eliminated 10 of the original 56 items in the questionnaire due to low factor loadings. As seen in Table 2, factor loadings ranged from 0.34 ('benchmarking') to 0.89 ('professional training in quality improvement methods'), with most items achieving acceptable factor loadings (> 0.40). Confirmatory analysis supported this final structure (not reported here). Structure, reliability and validity Cronbach's alphas for internal consistency reliability were satisfactory for all scales (Cronbach's alpha = 0.72-0.87) except 'analyzing feedback & patient experiences' (Cronbach's alpha = 0.48). Based on the theoretical importance of feedback of patient experiences and benchmarking, we decided to keep this scale in the QMSI. The item-total scale correlations were acceptable within the range of 0.20 to 0.80. The correlation coefficients for items in the scale 'feedback of patient experiences and benchmarking' were consistently lower than those for the other scales. As shown in Table 3, the inter-scale correlation ranged from 0.11 (between 'feedback of patient experience' and 'formal protocols for infection controls') to 0.70 (between 'evaluating results' and 'analyzing performance of care processes'). For all scales, each inter-scale correlation was below the pre-specified 0.70 threshold and deemed acceptable. The validity of the QMSI was further explored by analyzing its correlations with two other measures of quality management, namely the QMCI and the CQII. Correlation coefficients were within the acceptable range of 0.20 to 0.80 (Table 4). Descriptive statistics for the QMSI and its scales Descriptive statistics of items used to build the scales and the index are provided in Table 5. All items of the questionnaire were on a Likert-type response scale from 1 to 4. The average score on the individual items was ∼3, with a lower average score for items related to the analysis of the performance of professionals. Some floor and ceiling effects were found, where a high proportion of the respondents had a score at the lower or upper end of the answer categories, e.g. especially for patient complaint analysis and monitoring patient opinions. 
More than 80-90% of the hospitals had implemented these activities. The overall QMSI ranged from 0 to 27 points based on nine scales. The mean score of participating hospitals was 19.7 points (SD 4.7). Discussion We set out to develop and validate an index (QMSI) to measure QMSs in European hospitals. We found the 46-item QMSI to be reliable and valid for the assessment of QMSs in European hospitals. The answers to the 46 items could be summarized in an index to express the extent of implementation of quality management activities, such as quality policies, methods for continuous improvement and procedures for patient complaint handling or staff education. The QMSI was found to be useful for differentiating between hospitals on nine separate scales and on the index as a whole. The nine scales of the QMSI represent the managerial aspects of quality management and leave room for the investigation of associations of quality management with leadership, patient and professional involvement and organizational culture. These latter concepts are assumed to influence the extent of implementation of quality management in hospitals. Comparison with earlier studies The newly developed DUQuE instrument has good psychometric properties, consists of up-to-date questionnaire items, can be used in various European countries and is not too time-consuming for respondents (46 items; 9 dimensions). Earlier developed instruments have between 17 and 179 items, and 3-13 dimensions [1]. The DUQuE instrument for QMS covers four of the six domains found in the literature. Intentionally, the QMSI does not cover the domains leadership and patient involvement, because, in the DUQuE framework, these are influencing factors for the implementation of QMS and not part of the managerial aspects of quality management itself. The clear sampling frame, with randomly selected hospitals across EU countries, gives this study higher external validity than existing research on QMS. In line with previous research, it seems that there is no individual focal area that accounts for the entire variance associated with the implementation of QMS. Quality management is a combination of policy, monitoring of quality improvement by the board, professional development, monitoring of the performance of processes and knowing relevant patient-related outcomes. Limitations of the study This study has some limitations. The QMSI is based on the perception of the quality manager of the hospital. Although the questionnaire data were self-reported, on-site visits indicated that they were reasonably reliable. Despite the random selection of hospitals, selection bias among participating hospitals cannot be ruled out. In some countries in particular, the number of participating hospitals was smaller than initially planned. Furthermore, the final study sample was too small to carry out a cross-cultural validation. Therefore, further data collection and analysis will be needed before we can recommend the instrument for official use in cross-country comparisons. A positive point is that hospital and country coordinators did not report problems with the understandability or applicability of the questionnaire. Implications for research, policy and practice As patients and purchasers expect the best possible quality of care, healthcare providers have to prove that they constantly work on quality improvement and safer healthcare.
Our study has developed an efficient instrument to measure the implementation of quality management strategies across nine focal areas. We also kept some items with ceiling effects, which can still support policy-makers at EU level in stimulating the development of QMS in less-developed countries. Areas with floor effects, like monitoring physician performance, are recognized as important for the years to come and for further spread in European hospitals. The instrument and resulting index could be used for future comparative studies on quality management and for baseline assessment by hospitals or purchasers. A questionnaire is less time-consuming than a site visit and seems to give a quite reliable overall picture of the development and implementation of a QMS. Earlier research has shown that this kind of instrument is useful for monitoring the implementation of QMSs over time [18]. More importantly, it can be used to test the assumption that enforcing certain quality management policies and strategies will lead to the desired effects, possibly linking management strategies to quality and safety outcomes. Forthcoming DUQuE work will lend additional validity to the QMSI through the investigation of its relationships with other constructs and outcomes such as hospital external assessment, quality orientation of hospital boards, social capital, organizational culture, safety culture, clinical indicators and patient-reported experience measures. Conclusion The newly developed and validated index (and hence instrument) of the implementation of QMSs presents an important tool for measuring, monitoring and, potentially, improving quality management in European hospitals. The QMSI is part of a broader group of instruments developed in the European research project 'Deepening our Understanding of Quality improvement in Europe' (DUQuE).
2017-04-02T14:08:52.055Z
2014-03-11T00:00:00.000
{ "year": 2014, "sha1": "0bab976d013e0298b4f2ab0deecefc419da8de6a", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/intqhc/article-pdf/26/suppl_1/16/5189392/mzu021.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "1bb50d84c6d03d2d3737cefc4ced4744216d35f1", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
270666625
pes2o/s2orc
v3-fos-license
Resource demands in telco data centers Telecommunication (telco) cloud services have emerged as crucial components in the modern digital landscape, offering extensive capabilities for data management, connectivity, and service provision. However, research on telco clouds lacks comprehensive data on the characteristics of production workloads, which is fundamental for designing effective resource management systems, such as workload schedulers and power management mechanisms. To this end, this paper addresses a substantial gap in telco cloud research by creating a comprehensive dataset that encapsulates crucial information regarding the pattern demands of applications within telco data centers. In addition, the proposed dataset contributes to the field by enabling strategic network configuration, optimizing data center sizing, facilitating proactive decision-making for data center operations, but its applicability extends beyond these cases. These examples illustrate the practical value of the dataset in enhancing efficiency, reducing operational costs, and ensuring optimal performance within telecommunication data centers. Background & Summary The rapid expansion of telecommunication (telco) data centers (DCs) is intrinsically tied to the evolution of Network Function Virtualization (NFV) 1 .NFV constitutes a cloud computing paradigm enabling the creation of network functions in the form of virtual machines and containers (i.e., virtual network functions -VNFs), and it has evolved within a broader technological ecosystem that includes the virtualization of the mobile core 2 .As such, the scope of NFV has expanded beyond traditional network functions (e.g., firewalls, and load balancers) and now includes additional applications many of which are data-intensive 3 .For these applications, it would be more efficient to run some VNFs at the edge of the network, i.e., closer to data sources.Through this lens, telco networks undergo a drastic transformation process including the introduction of telco edge clouds which, in addition, are meant to host cloud-native applications.Telco edge DCs bring network and processing capabilities closer to end users than typical centralized DCs, which may cause transmission delays owing to large distances.This makes it possible to respond in almost real-time, which is essential for applications like Internet of Things and 5G.In addition to optimizing latency, NFV and edge computing enhance the general scalability and agility of telecommunication networks.In other words, programs may now adapt their capacity on the fly to meet changing demand.Established scaling strategies provided by Virtual Infrastructure Managers (VIMs), like Kubernetes 4 , enable this flexibility. In the competitive landscape of telco services, communication service providers (CSPs) face pressure to deliver compelling features and services while effectively managing DC energy and costs 5 .CSPs accommodate a diverse array of workloads originating from both external clients and internal services, all sharing a common infrastructure.Maintaining optimal performance, availability, and reliability under these conditions necessitates sophisticated yet practical and scalable resource management strategies 6,7 . 
However, existing research in resource management within telco DCs lacks a comprehensive understanding of the fundamental characteristics of these workloads 3,8 and is mainly focused on general DC network workloads 9,10 or on static traffic scenarios 11 .Key aspects such as the lifespan (time from creation to termination) or consumption patterns of resources by production applications remain largely unexplored in prior studies.The absence of detailed data and analyses regarding workload patterns and resource utilization hampers the development of effective resource management systems tailored to the specific demands of telco DCs.The primary objective of the proposed dataset is to replicate real-life demand patterns observed in operational telco DCs.This goal is driven by the inherent challenges of private telco datasets, making open-sourcing or sharing them under Non-disclosure agreements (NDAs) difficult.To illustrate the importance of this dataset and its intended users, consider situations where telecommunication providers and data center operators could gain advantages from a comprehensive collection of simulated yet realistic telco workloads.For instance, telecom service providers could utilize this dataset to enhance resource management efficiency by optimizing server allocation, predicting future demands, and minimizing operational costs.DC operators, on the other hand, can leverage the dataset to fine-tune their infrastructure, ensuring seamless operations while maintaining energy efficiency.The dataset is produced by deriving input traffic load patterns from real-life data obtained from an operational subscriber-driven telco service and then simulating these workloads in an AWS environment with Kubernetes deployments.This approach ensures that the dataset not only captures the intricacies of actual telco workloads but also offers practical insights for improving resource management strategies. Methods Workload assumptions.In telco edge DCs workloads often exhibit periodic patterns, mainly stemming from the subscriber-driven nature of telco services.Cloud-native applications in such environments demonstrate the ability to dynamically scale their resources in response to the fluctuating load.The two prevailing scaling practices in modern virtualized infrastructures are horizontal and vertical scaling.Horizontal scaling enables the creation of new and the termination of existing application instances, whereas vertical scaling enables the dynamic allocation of resources to existing instances.Throughout this work, the horizontal scaling paradigm was adopted, since it is admittedly more common in Kubernetes environments. Initially, access was granted to traffic load metrics from a subscriber-driven service deployed within the production environment of a network operator.This service is a VNF operating at the data plane of the 5G mobile core.Notably, load metrics exhibit substantial fluctuations, with lower loads observed during early morning hours and significant peaks during late evening hours, as shown in Fig. 1. 
Within this context, a service deployment entails multiple service instances, each capable of handling a specific fraction of the overall load. For example, we can assume that each instance is intended to manage up to 10% of the total load. Further, the service orchestrator often performs automated tasks related to load threshold setting and reactive scaling to carry out the practice of horizontal scaling, which is defined as raising or lowering the number of application instances in response to fluctuating load. This implies homogeneity among service instances, ensuring a predictable performance for each instance. Consequently, all instances have fixed resource requirements, typically encompassing CPU cores, memory, and DPDK-enabled interfaces. Since such services make up the majority of telco infrastructures, similar patterns are used to simulate several subscriber-driven services. These might include components of the 4G Evolved Packet Core, various elements within the 5G mobile core, and additional edge services pertaining to Industry 4.0. Crucially, the resource consumption pattern of any application is dependent on a pattern of instances, which is in turn driven by a pattern of total application load. Load patterns. The set of applications A, each with resource requirements drawn from R = {CPU cores, memory, GPU}, is considered and used to produce the input traffic workloads in the telco DC. With the assumption that the load is not totally random, load patterns can be decomposed into three categories: diurnal, staggered, and fixed, each carrying a distinct variance level. • Diurnal Demand Patterns. These patterns are defined by their adherence to a 24-hour cycle, reflecting the daily levels of user activity. They are marked by significant variability. For example, this pattern can be seen in deployments implementing VoIP and Mobile Core services, which typically experience lower activity during late night and early morning hours. • Staggered Demand Patterns. These patterns are effectively inverted diurnal patterns, with demand peaking during the off-peak hours of user-facing services. An example of a deployment that follows a staggered demand pattern is a daily backup process, which is typically run during off-peak hours to ensure resource availability and minimize service disruption. • Fixed Demand Patterns. In contrast to the variable nature of diurnal and staggered patterns, fixed demand patterns exhibit a consistent resource demand level over time, as it is assumed that they are not influenced by exogenous variables such as the user traffic. Fixed demand patterns cater to critical services that require continuous operation, such as database clusters that cannot be shut down. Additionally, fixed demand patterns may also be found in conservative and inflexible deployments, which are meant to handle worst-case loads. The predictability of fixed patterns translates to a stable base load for DC operations.
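As a rough illustration of the three categories, the sketch below generates normalized synthetic load traces. The sinusoidal shape, the base load of 0.2 and the assumed evening peak around 21:00 are modelling assumptions for illustration only, not values taken from the operational trace or from the released dataset.

    import numpy as np

    def load_pattern(kind: str, hours: np.ndarray, base: float = 0.2) -> np.ndarray:
        """Return a normalized load trace in (0, 1] for a 24-hour-periodic pattern.

        kind: 'diurnal' (assumed evening peak), 'staggered' (inverted diurnal),
              or 'fixed' (constant demand).
        """
        # Assumed sinusoid peaking around 21:00 for diurnal, user-driven traffic.
        diurnal = base + (1.0 - base) * 0.5 * (1 + np.cos(2 * np.pi * (hours - 21) / 24))
        if kind == "diurnal":
            return diurnal
        if kind == "staggered":
            # Mirror of the diurnal curve within the same (base, 1] range.
            return 1.0 + base - diurnal
        if kind == "fixed":
            return np.full_like(hours, 1.0, dtype=float)
        raise ValueError(f"unknown pattern kind: {kind}")

    hours = np.arange(0, 48, 0.5)                 # two days at 30-minute resolution
    mobile_core = load_pattern("diurnal", hours)  # e.g., a subscriber-driven VNF
    backup_job = load_pattern("staggered", hours) # e.g., a nightly backup process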
Workload Deployment. As evidenced by recent research [12][13][14], telco operators embrace the agility, scalability, and container orchestration capabilities offered by Kubernetes. The paradigm of cloud-native telco infrastructures was adopted, and a dataset tailored to Kubernetes-based VIMs was devised. To this end, the term node was used to refer to Kubernetes compute nodes, and the term deployment was used to refer to applications. Compute resources are consumed by replica pods, since each deployment is realized via a set of replica pods whose size varies across time in response to volatile loads, as per the horizontal scaling practice in Kubernetes. Additionally, a set of nodes that are logically organized into a single group is referred to as a node cluster or simply a cluster. Herein, the deployments (applications) that were utilized to create synthetic resource demand are discussed. Importantly, it should be emphasized that the number of replica pods of each deployment varies based on loads observed in real-life telco deployments. Six distinct Kubernetes deployments are implemented, each defined by specific resource demands per replica pod, a maximum pod count, and a load pattern type, detailed in Table 1. For instance, the base-diurnal deployment is realized via a set of replica pods each requesting 1 CPU thread and 1 GB of memory. The load pattern of the base-diurnal is of type diurnal, implying that the load of this deployment is low during early morning hours, progressively increases and peaks at late evening hours, then drops again and repeats this cycle throughout the experiment. As the load l_t(a) of a deployment a varies within its normalized values range (0, 1], the number of the corresponding replica pods is computed via n_t(a) = ⌈l_t(a) ⋅ MaxPods(a)⌉, where MaxPods(a) denotes the maximum pod count of deployment a. This assumes that each replica pod can manage a portion of the overall deployment load, with the spawning and termination of replica pods following the practice of horizontal scaling. For example, in Scenario A, if l_t(CPU-intensive) = 0.6, then the CPU-intensive deployment requires ⌈0.6 ⋅ 25⌉ = 15 replica pods to serve its load, whereas in Scenario B, the corresponding number for the same deployment at the same load level is ⌈0.6 ⋅ 5⌉ = 3. Staggered patterns are effectively inverted diurnal patterns, whereas fixed patterns, as their name implies, do not have any load volatility and they are always realized via Max Pods replica pods. By using different distributions of Max Pods over the various deployments, different evaluation scenarios can be designed, since these affect the overall pattern of aggregated resource demands. For example, in a dynamic scenario (i.e., Scenario A), pods of diurnal deployments yield the biggest fraction of resource demand compared to staggered and fixed, and high variations in overall resource demands are anticipated. Conversely, in a static case (i.e., Scenario B), most demands are attributed to pods of the base-fixed deployment, hence small variations in overall resource demands are expected. For the sake of completeness, note that stress-ng was utilized to create replica pods in Kubernetes. The resource demands of each replica pod are configured based on resource requests and limits specified in the corresponding Kubernetes deployment configuration.
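The ceiling rule above can be applied per timestamp to turn a normalized load trace into a replica-pod count. The sketch below uses only the Max Pods values from the worked example (25 for the CPU-intensive deployment in Scenario A, 5 in Scenario B); parameters for the other deployments would have to be taken from Table 1.

    import math

    def replicas(load: float, max_pods: int) -> int:
        """Number of replica pods for a deployment with normalized load in (0, 1]."""
        if not 0.0 < load <= 1.0:
            raise ValueError("load must lie in (0, 1]")
        return math.ceil(load * max_pods)

    # Worked example from the text: the CPU-intensive deployment at load 0.6.
    print(replicas(0.6, max_pods=25))  # Scenario A -> 15 replica pods
    print(replicas(0.6, max_pods=5))   # Scenario B -> 3 replica pods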
Data Records A dataset of pod resource demands for a period of approximately 20 days (i.e., from 2023-10-13 12:04:00 to 2023-11-02 08:46:00) is available at our Zenodo repository 15. Specifically, data files containing detailed information regarding pod resource requests on a daily basis were exported. Data points are collected at a 30-second granularity. The files are collated, organized, and categorized based on scenarios and pod classifications. The pod file consists of seven columns, presenting details about the CPU, memory, and GPU demands of individual pods at a given timestamp, along with the respective serving node that is mentioned in the Node column. The Scenario column distinguishes rows based on the workload scenario. The study considers three scenarios: Scenario A, where resource demands are highly dynamic, Scenario B, where resource demands are static, and Scenario C, the balanced scenario, where demands are moderately dynamic. In the corresponding data repository, the pod request dataset is stored in a zipped .csv file titled pods_request_workloads, and the rows are sorted based on the timestamp column. There are no missing values, meaning that every pod (UID) corresponds to a Node; one Node can serve multiple pods, but one pod can be served by only one Node at each timestep. Figure 2 represents a visualized example of the data, where for every timestamp multiple pods have specific demands, and Table 2 illustrates a fraction of the dataset. Every pod has constant resource demands which do not change across its lifespan. Technical Validation The dataset is designed to meet specific requirements in the telco domain, and therefore the hosting infrastructure, the number and types of servers, and the workload characteristics are carefully chosen to be typical for such a use case. Amazon Web Services (AWS) is used to emulate a virtualized DC, and the relevant data is collected and stored using Prometheus (i.e., a time-series database) leveraging the OpenTelemetry API. The deployments of the Scenarios mentioned in Table 1 are executed in a Kubernetes cluster consisting of 34 AWS EC2 nodes. Each deployment comes with its own overall traffic pattern (i.e., diurnal traces resemble the real-life traffic trace from the operational telco service, staggered traces are inverse diurnal, whereas fixed traces are static). For the first few days all nodes are active despite the highly volatile resource demand. During this period, replica pods of the various Kubernetes deployments are spawned and terminated through horizontal scaling, and we resort to the default Kubernetes Scheduler to assign pods to nodes. Throughout this initial phase of the experiment, the resource demands of pods are recorded and stored in Prometheus. As such, it becomes apparent that the proposed dataset originates from actual workloads deployed on an operational infrastructure. This holds true both for the initial phase of experimentation and for the subsequent optimization phases. In the following subsection, node and application instance (pod) characteristics are discussed in depth.
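A minimal way to explore the pod request file described in the Data Records section is sketched below with pandas. The exact column names (Timestamp, UID, Node, CPU, Memory, GPU, Scenario) are assumptions based on the description above and should be checked against the header of the released file; the sketch also assumes the archive has already been extracted.

    import pandas as pd

    # Assumed column names; verify against the actual header of the released file.
    df = pd.read_csv("pods_request_workloads.csv", parse_dates=["Timestamp"])

    # Aggregate the requested CPU per scenario at each 30-second timestamp.
    cpu_per_scenario = (
        df.groupby(["Scenario", "Timestamp"])["CPU"]
          .sum()
          .unstack(level="Scenario")
    )

    # Sanity check from the data description: each pod is served by exactly one
    # node at every timestep.
    assert df.groupby(["Timestamp", "UID"])["Node"].nunique().eq(1).all()

    print(cpu_per_scenario.head())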
(Fig. 2 caption: visualized example of the pod request file. At every timestamp there are several pods with specific demands, and every timestamp is grouped into a scenario case. Every scenario presents a varied workload, showcasing the nature of pod requests in telco data centers.) Infrastructure Setup. Herein, a heterogeneous DC S offers the set of resources R = {CPU cores, memory, GPU}, and is comprised of 34 nodes with 572 CPU cores, 2120 GB of memory and 2 GPU units in total. Each node is characterized by a type (e.g., medium, xlarge, 2xlarge, 4xlarge, and 8xlarge) which determines its maximum number of CPU threads, its memory capacity, and its GPU units. Figure 3 provides an overview of the nodes comprising our AWS testbed, along with their specifications. Workload Scenarios. For the workload cases, three distinct scenarios were designed to encompass the range of conditions encountered within telco data centers, as described in Table 1. These scenarios are characterized by their varying proportions of pods, each adhering to different workload patterns (diurnal, staggered, or fixed). In the dynamic scenario, for example, the workload consists of pods exhibiting dynamic fluctuations, with a predominant emphasis on those following the diurnal pattern. In the following, the different scenarios are explained: • Scenario A - Dynamic: In this case, pods of diurnal deployments yield the largest fraction of resource demand compared to staggered and fixed patterns, indicating an anticipation of high variations in overall resource demands. Figure 4 represents a dynamic workload across the days, with high demands during the daytime (≈80%) and decreased demands around the night (≈20%). • Scenario B - Static: This scenario is comprised of resource demands that are characterized by the stability of fixed patterns, which, as their name implies, do not exhibit any load volatility. The majority of resource demands in this scenario originate from the 420 pods of the base-fixed deployment, with the remaining deployments contributing only a limited share of the overall resource demand. Figure 5 depicts a static workload, where the CPU demands are steadily around 84% throughout the day. • Scenario C - Balanced: In the balanced scenario, which involves a harmonious distribution of resources, there are 284 pods of the base-fixed deployment and 70 pods of the base-diurnal deployment. This combination results in a well-distributed and moderate load across the deployments, contributing to a balanced resource demand profile. Figure 6 shows a balanced workload with augmented needs in resources at day (≈85%) and lower, but still high, requirements around the off-peak period (≈60%). Usage Notes The proposed dataset serves as a critical contribution to the telco research domain given the current shortage of openly available data related to telco workload resource demands. In the following, we elaborate on three use cases which can be facilitated by the proposed dataset. Strategic Network Configuration.
In the dynamic landscape of telecommunications operations, the telco workload dataset emerges as a functional tool. This resource provides telco operators with the means to strategically configure their networks, going beyond the basics of optimizing resource management and server allocation. Notably, the dataset's richness in diverse resource types, such as CPU cores and memory, empowers operators to examine cross-resource correlations and gain useful insights. This depth of understanding is pivotal for developing sophisticated prediction models, enabling operators to anticipate spikes in demand and proactively adjust network configurations. Additionally, the dataset proves invaluable for proactive decision-making, allowing operators to spawn additional load balancers or defer maintenance/upgrade tasks in anticipation of workload increases. In essence, the versatility of the dataset stands as a foundation for telco operators, amplifying their capacity to make informed decisions across a spectrum of network-related scenarios. This, in turn, elevates the overall efficiency and resilience of telecom infrastructures. Data Center Sizing. In the realm of infrastructure management, the persistent challenge of underutilized servers during off-peak periods leads to inefficient resource allocation and unnecessary energy consumption within data centers. Leveraging these comprehensive workloads, providers can devise robust methodologies to dynamically determine the required number of servers over time. A viable solution involves workload consolidation: transferring tasks from underutilized servers to those with higher usage and subsequently shutting down the former. This decision-making ability significantly enhances resource management, ensuring computational resources are allocated precisely when needed, thereby optimizing energy usage and overall operational costs. This adaptive server utilization approach minimizes energy waste during low-demand periods while aligning with the demand curve to maintain resource availability. Proactive Data Center Sizing. The dataset's usability extends to a combined application of two steps: forecasting the future resource demands of the data center 16 and, based on the predictions, determining the nodes necessary to efficiently serve the load. This integrated approach allows for the activation of only the essential servers, with the remaining ones being deactivated to minimize energy consumption and reduce operational expenses. In the experiments conducted within the AWS infrastructure detailed in the Infrastructure Setup section, the provided workloads were employed to train machine learning algorithms for forecasting resource demands. Subsequently, decision-making algorithms were implemented to ascertain the optimal number of nodes required in the forthcoming hours, aligning with anticipated pod demands. Figure 7 visually represents the requested and available resources (CPU, memory, GPU) in the infrastructure across time, separated by the different experiments outlined in Table 3. The available resources were determined through the decision-making process, in which the optimal number of servers was selected. Additionally, each scenario included a fallback strategy; if the algorithms deactivated too many servers and pods subsequently could not be accommodated, some servers would be reactivated. Notably, during the Data Collection phase, no algorithms operated and all servers remained active.
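The two-step idea (forecast first, then decide on node count) can be illustrated with a deliberately simple sketch: a naive seasonal forecast and a capacity-based node count with a safety margin over a fleet of identical nodes. The actual machine-learning and decision-making algorithms used in the experiments are not detailed here, so the functions below are an assumed stand-in, not a reproduction of them.

    import numpy as np

    def forecast_next_hours(history: np.ndarray, horizon: int) -> np.ndarray:
        """Naive seasonal forecast: replay the most recent 24-hour window of demand."""
        period = 24
        last_day = history[-period:]
        return np.array([last_day[h % period] for h in range(horizon)])

    def nodes_needed(cpu_demand: float, cores_per_node: float, headroom: float = 0.1) -> int:
        """Smallest number of identical nodes whose capacity covers demand plus headroom."""
        return int(np.ceil(cpu_demand * (1.0 + headroom) / cores_per_node))

    # Hypothetical hourly CPU demand (in cores) for the past two days.
    history = np.concatenate([np.linspace(100, 400, 24), np.linspace(110, 420, 24)])
    predicted = forecast_next_hours(history, horizon=6)
    plan = [nodes_needed(d, cores_per_node=16) for d in predicted]
    print(plan)  # number of active nodes to keep for each of the next 6 hours

A fallback step, as described above, would reactivate nodes whenever pods could not be scheduled on the planned fleet.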
Observing the results in Fig. 7, scenario 2 emerged as the most notable experiment, exhibiting the highest gain by utilizing only the necessary number of servers. Detailed information on node availability is stored in the Zenodo repository 15, in the file named nodes_allocatable, which is also available on GitHub. The file is devoid of any missing values, except for the Status and Condition columns, where NaN values indicate that the node is inactive at that timestamp.
(Figure and table captions: Fig. 1, daily load of a subscriber-driven service running in production (normalized values); Fig. 4, dynamic workload of pods across time, depicting the CPU demands of applications in percentage; Fig. 5, static workload of pods across time, depicting the CPU demands of applications in percentage; Fig. 6, balanced workload of pods across time, depicting the CPU demands of applications in percentage; Fig. 7, the three resource types (CPU, memory, GPU) over time, where orange depicts the aggregated requested demands of pods in percentage and blue the available resources that the infrastructure provides to accommodate the applications; Table 2, sample of the file pods_request_workloads.csv, which stores information about pod requests sorted by time.)
(Table 3 notes: the experiment with the highest cost saving did not use overprediction, meaning that only essential nodes were active, but with cases of long-pending pods that could not be served; the experiment with the least energy and cost saving is the one with fixed demand, where almost all nodes are up and running, since in a static environment it is difficult to save much.)
2024-06-23T06:17:26.452Z
2024-06-21T00:00:00.000
{ "year": 2024, "sha1": "d973ce964153452fbf0099a4f9b90f86da271330", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "aef5ddfa73e6d97b81fe85099aa4cc69a2edee09", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
20698004
pes2o/s2orc
v3-fos-license
Bacterial Community Associated with Healthy and Diseased Pacific White Shrimp (Litopenaeus vannamei) Larvae and Rearing Water across Different Growth Stages Bacterial communities are called another “organ” for aquatic animals and their important influence on the health of host has drawn increasing attention. Thus, it is important to study the relationships between aquatic animals and bacterial communities. Here, bacterial communities associated with Litopenaeus vannamei larvae at different healthy statuses (diseased and healthy) and growth stages (i.e., zoea, mysis, and early postlarvae periods) were examined using 454-pyrosequencing of the 16S rRNA gene. Bacterial communities with significant difference were observed between healthy and diseased rearing water, and several bacterial groups, such as genera Nautella and Kordiimonas could also distinguish healthy and diseased shrimp. Rhodobacteraceae was widely distributed in rearing water at all growth stages but there were several stage-specific groups, indicating that bacterial members in rearing water assembled into distinct communities throughout the larval development. However, Gammaproteobacteria, mainly family Enterobacteriaceae, was the most abundant group (accounting for more than 85%) in shrimp larvae at all growth stages. This study compared bacterial communities associated with healthy and diseased L. vannamei larvae and rearing water, and identified several health- and growth stage-specific bacterial groups, which might be provided as indicators for monitoring the healthy status of shrimp larvae in hatchery. INTRODUCTION The intestine of shrimp and their ambient water are both complex ecosystems that harbor diverse bacterial communities, in which some microorganisms are probiotic while some are pathogenic. Microbial dysbiosis might profoundly impact the development and physiological function of their hosts (Whiteson et al., 2014;Rungrassamee et al., 2016;Xiong et al., 2016). Some studies have declared close correlations between the occurrence of shrimp disease and associated bacterial communities Zhang D. et al., 2014). These accumulated knowledge of the complex bacterial communities in aquaculture has refined our perception of which microbial groups could cause diseases. In fact, growing efforts are made to predict the incidence of shrimp disease and find prevention methods from the bacterial perspective (Xiong et al., , 2015Zhang D. et al., 2014). Xiong et al. (2015) compared the bacterial communities between healthy and diseased shrimps, and found that Bacilli, Flavobacteriales, Acidimicrobiales, and Alteromonadales were more abundant in healthy shrimps, whereas Actinomycetales, Sphingobacteriales, and Vibrionales were dominant in diseased shrimps. It was also demonstrated that some bacterial groups (such as Flavobacteriales and Thiotrichales) could be considered as "health indicators" for predicting shrimp's health status, and some other bacteria (such as Rhodobacterales and Planctomycetales) could be considered as "disease indicators" . Additionally, some studies have demonstrated that bacterial communities in shrimps varied along with growth stages. Huang et al. (2014) found that Comamonadaceae of Betaproteobacteria was prevalent in 14-day-old postlarvae (PL14) and 1-month-old juvenile (J1) shrimps, while Flavobacteriaceae of Bacteroidetes and Vibrionaceae of Gammaproteobacteria were dominant in 2-month (J2) and 3-month-old juveniles (J3), respectively. Rungrassamee et al. 
(2013) found Photobacterium was the major group in PL15 while Vibrio was the dominant group during juvenile stages. Although there were some differences between these two studies, they all found bacterial communities in shrimps shifted along with their development. However, these previous studies mainly focused on shrimp at juvenile or adult stages, the last two stages in the entire development of shrimp (i.e., egg, larvae, postlarvae, juvenile, and adult). Little is known about shrimps at larval stages including nauplius, zoea, and mysis, when shrimps are susceptible to bacterial diseases due to their underdeveloped digestive and immune systems. For example, the zoea 2 syndrome and mysis mold syndrome were prevalent at zoea and mysis stages, respectively, which would result in mass mortalities in shrimp hatchery (Vandenberghe et al., 1999). Thus, it is very necessary to examine whether there are relationships between the health status of shrimp and the associated bacterial community. The bacterial community associated with larval shrimp has been investigated in a few studies, but they were basically conducted using culturedependent (Hameed, 1993;Zheng et al., 2016) or fingerprint methods (Pangastuti et al., 2010;Xue et al., 2015). For example, Xue et al. (2015) found that Flavobacteriaceae was abundant in rearing water from nauplius 6 to zoea 2 and Rhodobacteraceae was the dominant group from zoea 3 to postlarvae using denaturing gradient gel electrophoresis (DGGE) analysis. Our previous study also observed that bacterial communities were changed along with the growth stages of shrimp using culturedependent methods (Zheng et al., 2016). For excavating the stage-specific bacterial groups in different larval stages in depth, pyrosequencing data is urgently needed. The purpose of this study was to describe the total bacterial communities in L. vannamei larvae (i.e., zoea, mysis, and early postlarvae periods) by 454 pyrosequencing, and attempt to identify the healthy and/or diseased indicators for further application. Total 39 samples were collected from a commercial hatchery, including rearing water samples from ponds with healthy shrimps (WH) and that with diseased shrimps (WD), and shrimp samples from ponds with healthy shrimps (SH) and that with diseased shrimps (SD). The distinct bacterial groups between WH and WD, and between SH and SD were identified by various statistical analyses. Finally, only the bacterial communities in healthy rearing water and shrimp along with different developmental stages were analyzed. Rearing of Shrimp Larvae At zoea stage, live microalgae Thalassiosira sp. was used to feed larva for twice daily until they reached zoea 3 stage. After that, brine shrimp (Artemia) was added into ponds until postlarvae stage. Shrimp flakes were used at all stages for six times daily. There were no water exchange, antibiotic or commercial probiotics supplement throughout all stages. Sample Collection Shrimps and rearing water were collected from a commercial marine shrimp hatchery from 10 March to 28 April, 2014 in Hainan, China. Healthy shrimp and rearing water were taken from ponds where shrimp larvae had normal feeding behavior, black intestine and/or no apparent sign of disease by visual inspection. Samples covered all key developmental periods: zoea 1 (Z1), zoea 3 (Z3), mysis 1 (M1), mysis 3 (M3), postlarvae 1 (P1), postlarvae 3 (P3), and postlarvae 6 (P6) (a developmental time line was shown in Figure 1). 
Diseased shrimps and rearing water were obtained from ponds where shrimps presented poor growth, inactivity, lack of appetite, empty digestive tracts and/or low survival rate. Shrimp larvae were collected randomly from each pond. Details of the experimental design for sampling are shown in Supplementary Table S1. (FIGURE 1 | Partial developmental time line of shrimp. Bold type represents the stages chosen to be analyzed in this study.) The surface of shrimp larvae was sprayed with 75% ethanol, and then washed with sterile seawater three times to remove adherent microorganisms. Rearing water was collected with a 250 ml sterilized beaker from four different locations in each pond and then pooled. One liter of pooled rearing water was filtered through a 0.22 µm polycarbonate filter (Millipore). All the samples were stored at −80 °C for 2 months until DNA extraction. DNA Extraction The whole shrimp larvae (Z1: 200 larvae; Z3: 120 larvae; M1: 80 larvae; M3: 50 larvae; P1: 30 larvae; P3: 20 larvae; P6: 15 larvae) were homogenized using a sterilized glass homogenizer without dissecting the intestine, due to their small size. The homogenate was mixed with 900 µl of TE buffer (1 M Tris-HCl, 0.5 M EDTA, pH 8.0) and transferred into 2 ml Eppendorf tubes containing 0.3 g quartz sand. The mixture was vigorously beaten on a FastPrep-24 Homogenization System (MP Biomedicals, Irvine, CA, United States) four times (1 min each time at a speed of 6.0 m/s), followed by centrifugation at 500 × g for 5 min, and the supernatant was transferred into a new Eppendorf tube. The following steps and DNA extraction from rearing water were performed according to Yin et al. (2013) with some modification. Briefly, 6 µl of lysozyme (20 mg/ml) were added into each tube, which was incubated at 37 °C for 30 min, and then 6 µl of proteinase K (10 mg/ml) and 60 µl of 10% (w/v) SDS were added and incubated at 65 °C for 20 min. An equal volume of chloroform-isoamyl alcohol (24:1) was used to extract DNA. The supernatant was precipitated with 0.6-0.7 volume of isopropanol for 2 h and DNA was resuspended in 50 µl TE buffer. PCR Amplification, 454 Pyrosequencing, and Data Analysis PCR primers 341F and 1073R (sequences are shown in Supplementary Table S2) were selected to amplify the V3-V6 region of the 16S rRNA gene. The PCR reaction system (20 µl) contained 1 × FastPfu Buffer, 2.5 µM of dNTPs, 0.1 µM of each primer, 1 U of FastPfu Polymerase and 10 ng of template DNA. PCR was performed in triplicate at 95 °C for 3 min, followed by 27 cycles of 95 °C for 30 s, 55 °C for 30 s, 72 °C for 45 s, and a final extension step of 72 °C for 10 min. The triplicate PCR products were combined and purified using an AxyPrep DNA Gel Extraction Kit (Axygen, Hangzhou, China), and then quantified using a Quant-iT PicoGreen double-stranded DNA assay (Invitrogen, Carlsbad, CA, United States). Amplicons from each reaction mixture were pooled at equimolar ratios and subjected to emulsion PCR to generate amplicon libraries. Sequencing was carried out using a Roche Genome Sequencer FLX Titanium platform at Majorbio Bio-Pharm Technology Co., Ltd., Shanghai, China. The produced DNA sequences were processed with the QIIME toolkit, version 1.9.1 (Caporaso et al., 2010). Specifically, raw reads were quality filtered and trimmed with Usearch 7.1 (http://drive5.com/uparse/). Reads completely matching the barcodes and having at most a single mismatch to the primers were retained. Sequencing adaptor, barcodes and primer sequences were removed.
The sequences were further screened using the following thresholds: 0 ambiguous bases, maximum homopolymer stretches of 10 bp, minimum read length of 200 bp and minimum mean quality score of 20. Sequences closely related to potential contaminants that are ubiquitous in air, soil and the human body, including Bradyrhizobium, Brevundimonas, Burkholderia, Delftia, Erythrobacter, Lactococcus, Legionella, Methylobacterium, Mycobacterium, Neisseria, Novosphingobium, Propionibacterium, Sphingobium, Sphingomonas, Sphingopyxis, Staphylococcus, Stenotrophomonas, and Streptococcus (Nunoura et al., 2015), were removed. Quality-filtered reads were clustered into operational taxonomic units (OTUs) at a 97% similarity level using the UPARSE pipeline (Edgar, 2013). Quantitative PCR Quantitative PCR was performed to quantify the abundance of the 16S rRNA gene in rearing water and shrimp larvae. The 16S rRNA gene universal primer set Eub338F/518R (Supplementary Table S2, Yin et al., 2013) was used for quantifying total bacteria. Each 20 µl quantitative PCR reaction contained the following components: 10 µl of SYBR Green Real-time PCR Master Mix (TaKaRa, Tokyo, Japan), 1 µl of each primer (10 µM), 6 µl of H2O, and 2 µl of template DNA. The quantitative PCR was carried out in triplicate. To determine the relationship between PCR cycle threshold (Ct) value and copy numbers, a standard curve was obtained by amplifying 10-fold serially diluted plasmids (pUCm-T, purchased from Sangon Company, China; the inserted 16S rRNA gene sequence is shown in Supplementary Table S2), and the copy number of the 16S rRNA gene was calculated according to the standard curve. All amplification efficiencies were >99%. (FIGURE 3 | Non-metric multidimensional scaling (NMDS) analysis of all samples based on OTU level. Bray-Curtis similarity metric was used with PRIMER 6. Circles and triangles represent shrimp and water samples, respectively. Blue, purple, and red represent zoea, mysis, and postlarvae stages, respectively; green: rearing water before larvae were released into the pond. Filled and unfilled symbols indicate healthy and diseased samples, respectively.) Statistical Analysis The alpha diversity indices, Chao 1 (Chao and Bunge, 2002) and Shannon estimators (Magurran, 1988), were calculated using Mothur (Schloss et al., 2009). Good's coverage (Good, 1953) was calculated to evaluate the sampling depth. Linear discriminant analysis (LDA) effect size (LEfSe) (Segata et al., 2011) with default parameters (except for the LDA value, which was above 3.0 for rearing water and 2.5 for shrimp larvae) was used to determine bacterial lineages with significant differences (P < 0.05) between healthy and diseased samples at various taxonomic levels. Principal component analysis (PCA) was performed with the Canoco 5 software at the genus level. The level of statistical significance was determined by t-test. Analysis of similarity (ANOSIM) of bacterial communities for different statuses and growth stages of shrimp, and non-metric multidimensional scaling (NMDS) analysis for all samples, were carried out using PRIMER 6 (Clarke and Gorley, 2006) based on the Bray-Curtis similarity. The sequences derived from 454 pyrosequencing were deposited in the National Center for Biotechnology Information (NCBI) Short Read Archive database under accession number SRP080243. Samples and Rearing Environment A total of 39 samples were obtained, including 13 WH, 8 WD, 11 SH, and 7 SD samples (Supplementary Table S1).
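As an aside on the quantitative PCR described above, the usual standard-curve arithmetic for converting a cycle threshold (Ct) into gene copies per reaction, and then into copies per millilitre of filtered water, can be sketched as follows. The slope, intercept, Ct value, template and elution volumes are hypothetical illustration values, not the parameters used in this study.

    def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
        """Copies per reaction from a linear standard curve: Ct = slope*log10(copies) + intercept."""
        return 10 ** ((ct - intercept) / slope)

    def copies_per_ml(ct: float, slope: float, intercept: float,
                      template_ul: float, elution_ul: float, sample_ml: float) -> float:
        """Scale copies per reaction up to copies per ml of the original water sample."""
        per_reaction = copies_from_ct(ct, slope, intercept)
        return per_reaction * (elution_ul / template_ul) / sample_ml

    # Hypothetical standard curve (a slope near -3.32 corresponds to ~100% efficiency).
    slope, intercept = -3.32, 38.0
    print(copies_per_ml(ct=24.5, slope=slope, intercept=intercept,
                        template_ul=2, elution_ul=50, sample_ml=1000))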
The parameters of the rearing system are shown in Table 1. Chao 1 and Shannon indices of water samples were higher than those of shrimp samples (P < 0.05), but showed no significant differences between WH and WD (P > 0.05), and between SH and SD (P > 0.05) (Figure 2). Similarly, there was also no significant difference in the number of OTUs between WH and WD, and between SH and SD (Figure 2). According to the result of quantitative PCR, the bacterial 16S rRNA gene abundance ranged from 1.5 × 10⁶ to 4.7 × 10⁷ copies/ml in water and 2.4 × 10⁷ to 3.1 × 10⁹ copies/g in shrimp larvae. There was a significant difference (P < 0.05) in 16S rRNA gene abundance in the rearing water between the zoea (1.5 × 10⁶-6.9 × 10⁶ copies/ml) and mysis (7.3 × 10⁶-3.0 × 10⁷ copies/ml) stages, but no significant difference (P > 0.05) was observed between the mysis and postlarva (6.0 × 10⁶-4.7 × 10⁷ copies/ml) stages (Supplementary Figure S1). The 16S rRNA gene copy numbers in shrimp increased with growth stages, but with no significant difference (Supplementary Figure S1). Distinct Bacterial Groups between WH and WD, and between SH and SD Bacterial communities were compared among all samples using NMDS analysis at the OTU level. Bacterial communities in rearing water were separated from shrimp (Figure 3). The low similarity (Figure 3) of bacterial communities between WH and WD was confirmed by the result of ANOSIM (r = 0.281, P = 0.003), suggesting that bacterial communities were distinct between healthy and diseased water. Based on the above analyses, we used LEfSe to find the potential discriminating taxa between healthy and diseased water. The results showed that there were 31 bacterial taxa distinguishing WD from WH with LDA values greater than 3.0 (Figure 4A). One class, 4 orders, 4 families, and 13 genera were enriched in WH, including Acidimicrobiia (from class to order levels), Salinisphaerales (from order to genus levels), Order Incertae Sedis (from order to genus levels), Order III, Cytophagaceae (family level) and Coxiellaceae (family level). Moreover, there were many groups only enriched at genus level, including Arenibacter, Cohaesibacter, Marixanthomonas, Meridianimaribacter, NS10 marine group, NS3a marine group, Paracoccus, Roseicyclus, Salinihabitans, Spongiibacter, and Thalassobius (Figures 4A,B). In WD samples, one order, two families and four genera were enriched, including Kordiimonadales (from order to genus levels), Idiomarinaceae (from family to genus levels), Cobetia and Nautella. Although SH and SD could not be separated by NMDS analysis (Figure 3) and no significant difference (r = 0.169, P = 0.055) was observed either, there were still several bacterial taxa that could distinguish these two groups according to LEfSe. One phylum, one class, two orders, three families, and one genus were enriched in SH, including Actinobacteria (from phylum to class levels), Caulobacterales (from order to family levels), Corynebacteriales (order level), Bdellovibrionaceae (family level), and Burkholderiaceae (from family to genus levels), while one order, four families, and four genera were enriched in SD, including Kordiimonadales (from order to genus levels), Family XII, NS7 marine group (family level), NS9 marine group (family level) and genera Exiguobacterium, Pediococcus, and Nautella (Figures 4C,D). Interestingly, the genus Nautella in diseased rearing water and shrimps both showed the largest effect size (LDA value > 4.0) (Figures 4B,D).
The relative abundance of Nautella in WH, WD, SH, and SD was 6.19, 24.68, 0.19, and 3.00%, respectively (Supplementary Figure S2). Four classes (Alphaproteobacteria, Gammaproteobacteria, Flavobacteriia, and Actinobacteria) were shared in WH and WD, but varied in their relative abundance (Figure 5). There were more Flavobacteriia and Cytophagia in WH, while the abundance of Gammaproteobacteria and Alphaproteobacteria increased in WD. Gammaproteobacteria was always the overwhelmingly dominant bacterial group in shrimp, but the relative abundance of Alphaproteobacteria was lower in SH than in SD (Figure 5). Different Bacterial Communities along with Growth Stages Bacterial communities along with growth stages were analyzed to study the changing trend they followed throughout the key developmental stages and to find out the stage-specific groups. PCA at the family level (Figure 6) showed that bacterial groups in the rearing water at different growth stages were clustered separately, which was also confirmed by ANOSIM analysis (P < 0.05) (Table 2; bold type (P < 0.05) indicates a significant difference between two groups, and correlation (r) and significance (P) values are shown). The PC1 axis (24.66%) discriminated the zoea from the postlarvae stage, while the PC2 axis (43.70%) discriminated the mysis from the zoea and postlarvae stages, except for sample M3-2. Rhodobacteraceae was abundant in rearing water at all tested growth stages, whereas its relative abundance displayed a decreasing trend at the mysis and postlarva stages (Figure 7). Meanwhile, some bacterial groups exhibited stage-specific signatures. Specifically, Flavobacteriaceae was abundant at the zoea stage compared with the mysis stage (P < 0.05). Subsequently, its abundance decreased and the BD1-5 clade of Actinobacteria increased at the mysis stage (P < 0.05). At the postlarva stage, Microbacteriaceae (phylum Actinobacteria) increased to become the dominant bacterial group (P < 0.05) (Figure 7). The detailed bacterial community composition of water samples at the genus level is exhibited in a heatmap (Supplementary Figure S3). An unclassified genus of Rhodobacteraceae was predominant in all rearing water samples. Although Nautella was prevalent at the zoea stage, its abundance decreased at the mysis and postlarva periods (Supplementary Figure S3). A total of 25 bacterial groups were found to have significant differences along with the different growth stages using LEfSe. Two orders, two families and three genera were enriched at the zoea stage, including Xanthomonadales (order level), DB1-14 (order level), Cryomorphaceae (family level), Alteromonadaceae (family level), Polaribacter (genus level), Roseicyclus (genus level), and the Roseobacter clade CHAB-I-5 lineage. Two classes, two orders, one family, and two genera were enriched at the mysis stage, including Cytophagia (from class to genus), Sphingobacteriia (from class to order), and Roseibacillus (genus level). One phylum, two classes, three orders, three families, and two genera were enriched at the postlarvae stage, including Actinobacteria (from phylum to genus) and Rickettsiaceae (genus level) (Figure 8). By contrast, little variation of the bacterial community in shrimp was observed along with the growth of shrimp. Enterobacteriaceae of Gammaproteobacteria was the most abundant group, accounting for more than 85% at all growth stages (Figure 7).
Correspondingly, Enterobacter and some unclassified genera of Enterobacteriaceae was the most abundant genera in shrimp at all growth stages, followed by an unclassified genus of Rhodobacteraceae, then genera Ruegeria, Aquimarina, and Vibrio. These results indicated that bacterial community change in the rearing water only have limited influence on that of shrimp larvae. DISCUSSION Bacterial communities in juvenile shrimps have been described extensively (Rungrassamee et al., 2013;Huang et al., 2014;Zhang D. et al., 2014;Xiong et al., 2015), but in larval shrimp it is poorly understood. Here, we compared bacterial communities associated with healthy and diseased L. vannamei larvae and the related rearing water along with shrimp development. The results showed that distinct bacterial communities assembled between healthy and diseased water, indicating that some specific bacterial groups might be applied as indicators for monitoring the health status of shrimp larvae in hatchery. The intestine of aquatic animals and rearing water were reported to be fertile grounds for various microorganisms. Hameed (1993) observed that the total culturable bacterial count of Penaeus indicus larval rearing water ranged from 9.0 × 10 2 to 1.0 × 10 5 cfu/ml. In this study, the bacterial 16S rRNA gene abundance ranged from 1.5 × 10 6 to 4.7 × 10 7 copies/ml in the rearing water. It was reported that the average number of 16S rRNA gene copies in one bacterium is 4.14 (Lee et al., 2009). Thus, there were ∼3.6 × 10 5 to 1.1 × 10 7 bacteria/ml rearing water in our study, which was approximately two orders of magnitude higher than the results based on culture-dependent method (Yasuda and Kitao, 1980;Hameed, 1993;Kennedy et al., 2006). It was also confirmed the idea that most of the bacteria in environment were hard to cultivate. Furthermore, in this study, there were ∼5.8 × 10 6 to 7.6 × 10 8 bacteria/g larvae, higher than the number of Hameed's (1993) results that cultivable bacterial counts ranged from 8.1 × 10 4 to 1.2 × 10 8 cfu/g at larval stage. Comparing the healthy and diseased rearing water samples, we found that more Flavobacteriia and Cytophagia in WH while more Gammaproteobacteria and Alphaproteobacteria in WD ( Figure 5). Xiong et al. (2015) also demonstrated that Flavobacteriia and Gammaproteobacteria was abundant in healthy and diseased shrimp, respectively. In fact, Flavobacteriia was reported to have a specialized ability in degrading complex organic matter and biopolymers such as cellulose and chitin (Kirchman, 2002;Williams et al., 2013), implying that members of this bacterial taxa might have positive effect on improving rearing water quality. It was reported that high abundance of Gammaproteobacteria presented in diseased shrimps was attributed to Vibrio (Rungrassamee et al., 2016). Unexpectedly, Vibrio was rarely detected in diseased water and shrimp in this study, which was consistent with the results of Zhang D. et al. (2014) that also observed low and almost unchanged relative abundance of Vibrio in diseased shrimp. At times there appears to be no close relationship between the emergence of disease and the abundance of Vibrio (Sung et al., 2001). Although bacterial communities at high taxonomic levels have no difference between SH and SD, several distinguished bacterial groups were identified from LEfSe analysis. Specially, the genus Nautella with the largest effect size (LDA value higher than 4.0) was enriched in both diseased rearing water and shrimp (Figure 4). Sakami et al. 
(2014) reported that Nautella is common in rotifer culture tanks, but other studies have found that bacteria in this genus are pathogenic toward the red alga Delisea pulchra (Gardiner et al., 2015) and brine shrimp (Artemia) (Zheng et al., 2016). Therefore, Nautella might serve as a disease indicator for monitoring the health of shrimp. Additional experiments are needed to establish whether there is a close relationship between the health of shrimp and Nautella. Several bacterial groups were enriched in healthy water and shrimp, such as the genus Meridianimaribacter (Figure 4 and Supplementary Figure S2). Members of the genus Meridianimaribacter were frequently found in both healthy water and shrimp in our previous study using a culture-dependent method (Zheng et al., 2016). Meridianimaribacter was also found in the intestinal tract of shrimp after adding probiotics (Luis-Villaseñor et al., 2013), and was reported to be dominant in healthy larviculture water of shrimp (Xue et al., 2015). Possibly, bacteria of Meridianimaribacter have a beneficial effect on the health of their hosts, and we propose that this genus might be considered a probiotic candidate and an indicator of healthy status in shrimp larval aquaculture. The bacterial community of the larval rearing water was primarily dominated by Rhodobacteraceae of Alphaproteobacteria, in agreement with previous studies (Huang et al., 2014; Xue et al., 2015). Rhodobacteraceae may act as a keystone group in the rearing water and may have a potential interaction with shrimp at different growth stages, which needs to be further characterized. Several bacterial groups exhibited different relative abundances at different growth stages. Flavobacteriaceae (class Flavobacteriia), the BD1-5 clade (class) and PeM15 clade (order) of the phylum Actinobacteria, and Microbacteriaceae (phylum Actinobacteria) had relatively high abundances at the zoea, mysis, and postlarva periods, respectively, and might therefore be stage-specific bacterial groups. Several studies have demonstrated that diet influences the bacterial communities of shrimp (Huang et al., 2014; Zhang M. et al., 2014). In this study, shrimp at the zoea stage were fed the microalga Thalassiosira sp. until they reached the zoea 3 stage. After that, the diet was changed to Artemia. This shift of diet from microalgae to Artemia might explain the variation of bacterial composition in the rearing water with growth stage. In contrast, Gammaproteobacteria was the most abundant group in shrimp, in accordance with findings in other shrimp species (Liu et al., 2011; Chaiyapechara et al., 2012; Rungrassamee et al., 2013) and other aquatic animals including fish (Verner-Jeffreys et al., 2003; McIntosh et al., 2008), shellfish (Payne et al., 2007; Meziti et al., 2010) and bivalves (Sandaa et al., 2003; Tanaka et al., 2004). Consistent with the previous study of Chaiyapechara et al. (2012), our results also revealed that the bacterial community in shrimp differed from that in the rearing water. At a finer taxonomic level, we found that Enterobacteriaceae of Gammaproteobacteria was dominant in shrimp, whereas Vibrionaceae or other families were abundant in other studies (Rungrassamee et al., 2013, 2014; Huang et al., 2014). It has been documented that members of Enterobacteriaceae are abundant in the digestive tracts of freshwater and marine fish (Ringø and Birkbeck, 1999; Merrifield et al., 2009; Wang et al., 2014) and healthy pigs (Schierack et al., 2007), but not in shrimp. 
In general, bacteria in this family are frequently attached to fecal matter in the intestine. In other studies, shrimp intestines were dissected and the residue inside was removed before genomic DNA extraction; in our study, however, whole shrimp larvae were used because of their small size. We therefore speculate that fecal matter in the larval intestine contributed to the high abundance of Enterobacteriaceae. In general, our results revealed that the bacterial members of the rearing water assembled into distinct communities along the growth stages but showed little variation in the shrimp. Different diets at different stages might explain the variation of bacterial composition in the rearing water. Further studies are needed to confirm these observations with L. vannamei larvae in other hatcheries or with other shrimp species. Overall, the present study established significant relationships among shrimp larvae of two health statuses (healthy and diseased) and across growth stages, which may provide instructive insights for using specific bacterial groups to indicate health status. In the future, new strategies should be aimed at predicting diseases rather than only focusing on how to treat them. Certainly, we cannot eliminate the possibility that the diseases were caused by other types of organisms, such as fungi and viruses.

ETHICS STATEMENT

This study was carried out in accordance with the recommendations of the Animal Ethics Committee of Shandong Province, China. The protocol was approved by the Animal Ethics Committee of Shandong Province, China.

AUTHOR CONTRIBUTIONS

X-HZ and McY designed the study. YZ did the experiments and wrote the manuscript with the assistance of MnY and JL. YQ and LW helped to analyze the data and revise the manuscript. ZL contributed to collecting samples from the shrimp hatchery. All authors approved the final manuscript.

ACKNOWLEDGMENTS

This work was supported by the Industry, Education and Research project of Tongwei Co., Ltd., China (no. TW2013M002), the National Natural Science Foundation of China (no. 31502171) and the International Science and Technology Cooperation Programme of China (no. 2012DFG31990).
2017-08-15T05:50:41.151Z
2017-07-18T00:00:00.000
{ "year": 2017, "sha1": "24b74660e9724a4c3805d308205d1d22f574239a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2017.01362/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "24b74660e9724a4c3805d308205d1d22f574239a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248239921
pes2o/s2orc
v3-fos-license
Gamma-ray burst data strongly favor the three-parameter fundamental plane (Dainotti) correlation relation over the two-parameter one

E-mail: shulei@phys.ksu.edu; † E-mail: maria.dainotti@nao.ac.jp; ‡ E-mail: ratra@phys.ksu.edu

Gamma-ray bursts (GRBs), observed to redshift $z=9.4$, are potential probes of the largely unexplored $z\sim 2.7-9.4$ part of the early Universe. Thus, finding relevant relations among GRB physical properties is crucial. We find that the Platinum GRB data compilation, with 50 long GRBs (with relatively flat plateaus and no flares) in the redshift range $0.553 \leq z \leq 5.0$, and the LGRB95 data compilation, with 95 long GRBs in $0.297 \leq z \leq 9.4$, as well as the 145 GRB combination of the two, strongly favor the three-dimensional (3D) fundamental plane (Dainotti) correlation relation (between the peak prompt luminosity, the luminosity at the end of the plateau emission, and its rest frame duration) over the two-dimensional one (between the luminosity at the end of the plateau emission and its duration). The 3D Dainotti correlations in the three data sets are standardizable. We find that while LGRB95 data have $\sim50$% larger intrinsic scatter parameter values than the better-quality Platinum data, they provide somewhat tighter constraints on cosmological-model and GRB-correlation parameters, perhaps solely due to the larger number of data points, 95 versus 50. This suggests that when compiling GRB data for the purpose of constraining cosmological parameters, given the quality of current GRB data, intrinsic scatter parameter reduction must be balanced against reduced sample size.

INTRODUCTION

In the standard general-relativistic spatially-flat ΛCDM model (Peebles 1984), dark energy is a time-independent cosmological constant Λ that sources ∼70% of the current cosmological energy budget and the observed currently accelerating cosmological expansion. The predictions of this model are consistent with most current cosmological observations, such as Hubble parameter [H(z)], type Ia supernova (SNIa) apparent magnitude, cosmic microwave background (CMB) anisotropy, and baryon acoustic oscillation (BAO) measurements (see, e.g. Farooq et al. 2017; Scolnic et al. 2018; Planck Collaboration 2020; eBOSS Collaboration 2021). The measurements, however, are not yet decisive enough (see, e.g. Dainotti et al. 2021c, 2022b; Di Valentino et al. 2021b; Perivolaropoulos & Skara 2021; Abdalla et al. 2022) to disallow other cosmological models. Here we also consider dynamical dark energy models as well as models with non-zero spatial curvature. In this paper we study long GRBs, those GRBs with burst duration longer than 2 s. The measured quantities for the GRBs are the redshift z, the characteristic time scale T*_X which marks the end of the plateau emission, the measured X-ray energy flux F_X at T*_X, the measured γ-ray energy flux F_peak in the peak of the prompt emission over a 1 s interval, and the X-ray photon indices of the plateau phase α_plateau and of the prompt emission α_prompt. We make use of the 50 Platinum GRBs, spanning the redshift range 0.553 ≤ z ≤ 5.0, introduced in Dainotti et al. (2020), that we previously studied (Cao et al. 2022d), and a new LGRB95 sample consisting of 95 long GRBs, spanning 0.297 ≤ z ≤ 9.4 and also taken from Dainotti et al. 
(2020), as well as the combined LGRB145 data set of 145 GRBs, to test whether they are better described by the three-dimensional (3D) fundamental plane (Dainotti) correlation between the peak prompt luminosity, the luminosity at the end of the plateau emission, and its rest frame duration (Srinivasaragavan et al. 2020; Dainotti et al. 2016, 2020, 2021a) or by the two-dimensional (2D) Dainotti correlation between the luminosity at the end of the plateau emission and its rest frame duration (Dainotti et al. 2008, 2011b, 2013, 2015b),^2 and to constrain cosmological-model and GRB-correlation parameters. The Platinum sample is a compilation of the higher-quality (lower intrinsic dispersion) GRBs considered in Dainotti et al. (2020), and is tabulated in Table A1.^3 The remaining 95 long GRBs considered in Dainotti et al. (2020) constitute the LGRB95 sample listed in Table A2. LGRB95 data have a ∼50% larger intrinsic scatter parameter, which contains the unknown systematic errors, than the Platinum data. Based on information criteria, we discover that Platinum, LGRB95, and LGRB145 data strongly prefer the 3D Dainotti correlation over the 2D one. Although LGRB95 data have ∼50% larger intrinsic scatter parameter values than Platinum data, they provide consistent but slightly tighter cosmological-model and GRB-correlation parameter constraints than do Platinum data, perhaps solely due to the larger number of data points, 95 versus 50. LGRB145 data provide tighter constraints on GRB-correlation parameters than those from the individual GRB data sets. Our paper is organized as follows. We present the main features of the cosmological models we use in Sec. 2 and describe the data sets we use in Sec. 3. We outline our analysis methods in Sec. 4 and present results in Sec. 5. Our summary and conclusions are in Sec. 6.

COSMOLOGICAL MODELS

We study the two- and three-parameter Dainotti correlations by simultaneously constraining cosmological model parameters and GRB correlation parameters in six spatially flat and non-flat dark energy cosmological models.^4 To do this we need to compute, in each cosmological model, the luminosity distance as a function of redshift z and the cosmological parameters p; the luminosity distance follows from the comoving distance, obtained by integrating c/H(z, p) over redshift, where c is the speed of light and H(z, p) is the Hubble parameter. The expansion rate functions E(z, p) ≡ H(z, p)/H_0, where H_0 is the Hubble constant,^5 are given below for each of the cosmological models we consider.

Footnotes: (1) The latest Lusso et al. (2020) QSO flux compilation assumes a model for the QSO UV-X-ray correlation that is not valid above a much lower redshift, z ∼ 1.5-1.7 (i.e., above these redshifts the assumed QSO UV-X-ray luminosity correlation relation is different in different cosmological models), meaning that these QSOs can be used to determine only much lower-z cosmological constraints (Khadka & Ratra 2021). (2) The 2D and 3D Dainotti correlation relations are discussed in Sec. 4. (3) This is a correction of table A1 of Cao et al. (2022d) that incorrectly accounted for the GRB K-corrections. In Sec. 4 we discuss our improved method of accounting for the spectral evolution of GRBs and both the prompt and afterglow photon indices. These corrections are not appreciable and do not affect any of the qualitative conclusions of Cao et al. (2022d). (4) [...] (2022), Mukherjee & Banerjee (2022), and references therein. (5) Since GRB data are unable to constrain it, in this paper we set H_0 = 70 km s^-1 Mpc^-1.
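As a concrete illustration of the distance computation just described, the sketch below numerically integrates 1/E(z) to get the comoving distance and then the luminosity distance for the simplest of the six models, spatially flat ΛCDM. The parameter values are placeholders, K-corrections are omitted, and the paper itself performs these computations within the class/MontePython machinery:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def E(z, Om0):
    """Dimensionless expansion rate for spatially flat LambdaCDM."""
    return np.sqrt(Om0 * (1.0 + z)**3 + (1.0 - Om0))

def luminosity_distance(z, Om0=0.3, H0=70.0):
    """Luminosity distance in Mpc for spatially flat LambdaCDM."""
    integral, _ = quad(lambda zp: 1.0 / E(zp, Om0), 0.0, z)
    D_C = (C_KM_S / H0) * integral      # comoving distance
    return (1.0 + z) * D_C              # flat case: transverse distance = D_C

# Example: convert a (placeholder) flux into a luminosity via L = 4*pi*D_L^2*F.
z = 2.0
dl_cm = luminosity_distance(z) * 3.0857e24   # Mpc -> cm
flux = 1.0e-12                               # erg cm^-2 s^-1 (placeholder)
L = 4.0 * np.pi * dl_cm**2 * flux
print(f"D_L(z={z}) = {luminosity_distance(z):.1f} Mpc, L = {L:.3e} erg/s")
```

For the non-flat models the transverse comoving distance additionally involves sinh or sin of the curvature term, and for φCDM the dark energy density must be obtained by numerically solving the scalar field equations.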
As in Cao & Ratra (2022), we assume one massive and two massless neutrino species, with the nonrelativistic neutrino physical energy density parameter Ων h 2 = mν /(93.14 eV) = 0.06 eV/(93.14 eV), where h is the Hubble constant in units of 100 km s −1 Mpc −1 . The non-relativistic matter density parameter Ωm0 = (Ων h 2 + Ω b h 2 + Ωch 2 )/h 2 , where the current value of the baryonic matter energy density parameter is set to Ω b = 0.05 6 and the current value of the cold dark matter energy density parameter (Ωc) is constrained as a free cosmological parameter. In the ΛCDM models the expansion rate function where Ω k0 is the spatial curvature energy density parameter and ΩΛ = 1 − Ωm0 − Ω k0 is the cosmological constant dark energy density parameter. In the flat ΛCDM model the constrained cosmological parameter is Ωc (although we display Ωm0 in the plots), whereas in the non-flat ΛCDM model there is one additional cosmological parameter, Ω k0 , to be constrained. In the XCDM parametrizations where ΩX = 1 − Ωm0 − Ω k0 is the current value of the dynamical dark energy density parameter of the X-fluid and wX is the X-fluid equation of state parameter (wX = −1 correspond to ΛCDM models). In the flat XCDM parameterization the constrained cosmological parameters are Ωc and wX, whereas in the non-flat XCDM parametrization Ω k0 is also constrained. In the φCDM models Pavlov et al. 2013) where is the scalar field, φ, dynamical dark energy density parameter that can be determined by numerically solving the Friedmann equation (5) and the equation of motion of the scalar field An inverse power-law scalar field potential energy density is assumed and in these equations, H =ȧ/a is the Hubble parameter, a is the scale factor, an overdot is a time derivative, a prime is a derivative with respect to φ, mp is the 6 Since GRB data are unable to constrain Ω b . 7 For discussions of observational constraints on φCDM see Zhai et al. (2017), Ooba et al. (2018bOoba et al. ( , 2019, Park & Ratra (2018, 2019b, 2020 Planck mass, α is a positive constant (α = 0 correspond to ΛCDM models), and κ is a constant that is determined by the shooting method in the Cosmic Linear Anisotropy Solving System (class) code (Blas et al. 2011). In the flat φCDM model the constrained cosmological parameters are Ωc and α, whereas in the non-flat φCDM model Ω k0 is also constrained. DATA In this paper we analyze three different GRB data sets to study two-parameter or two-dimensional (2D) Dainotti correlation and the three-parameter or 3D fundamental-plane (Dainotti) correlation. These contain only long GRBs, with burst duration longer than 2 s, and are taken from the compilation of Dainotti et al. (2020). For these data sets, the measured quantities for a GRB are the redshift z, the characteristic time scale T * X which marks the end of the plateau emission, the measured X-ray energy flux FX at T * X , the prompt peak γ-ray energy flux F peak over a 1 s interval, and the X-ray photon indices of the plateau phase α plateau and of the prompt emission αprompt. The data sets we use here are summarized next. Platinum sample. This includes 50 long GRBs that have a plateau phase with angle < 41 • , that do not flare during the plateau phase, and that have a plateau phase that lasts longer than 500 s. 
The first criterion follows from the evidence that those with angle > 41° are outliers of the Gaussian distribution; the second criterion eliminates flaring-contaminated cases; and the third criterion eliminates cases where prompt emission might mask the plateau (Willingale et al. 2007). The Platinum GRBs are listed in Table A1 of Appendix A, which is a correction of table A1 of Cao et al. (2022d). This sample spans the redshift range 0.553 ≤ z ≤ 5.0.

LGRB95 sample. This sample includes the remaining 95 long GRBs from the compilation of Dainotti et al. (2020). As discussed below, this GRB data set has a larger intrinsic scatter parameter σ_int than the Platinum GRBs. These GRBs are listed in Table A2 of Appendix A. This sample spans the redshift range 0.297 ≤ z ≤ 9.4.

LGRB145 sample. This sample is a combination of the Platinum sample and the LGRB95 sample and spans the redshift range 0.297 ≤ z ≤ 9.4.

DATA ANALYSIS METHODOLOGY

The 3D fundamental plane (or 3D Dainotti) correlation (Dainotti et al. 2016, 2021a) is a linear relation among the logarithms of the luminosity at the end of the plateau emission, the plateau rest-frame duration, and the peak prompt luminosity. The X-ray source rest-frame luminosity is obtained from F_X with the power-law (PL) plateau K-correction, and the peak prompt luminosity from F_peak with the prompt K-correction; the quantities entering the K-correction expressions are the normalizations at 50 keV, in units of photons cm^-2 s^-1 keV^-1, in the PL and cutoff power-law (CPL) models, respectively, the PL and CPL photon indices α_PL and α_CPL, respectively, and the peak energy E_peak of the νF_ν spectrum in units of keV, where ν is the photon frequency proportional to E and F_ν is the photon energy flux per unit frequency. When ∆χ^2 ≡ χ^2_PL − χ^2_CPL > 6, the CPL model is used to compute the prompt K-correction; otherwise the PL model is used. In these relations, {C_o, a, b} are the GRB correlation parameters to be constrained, T*_X (s) is the time at the end of the plateau emission, and F_X and F_peak are the measured X-ray and γ-ray energy fluxes (erg cm^-2 s^-1) at T*_X and in the peak of the prompt emission over a 1 s interval, respectively. Here we have improved upon the analysis of Cao et al. (2022d), which assumed the GRB K-corrections are the same throughout the burst duration, by also considering the prompt emission photon index. We consider the sliced photon index in the spectrum starting from the time of the beginning of plateau emission to the time of the end of plateau emission. We use the photon counting (PC) mode for the majority of cases and the window timing (WT) mode for only a few cases where we do not have the PC mode. This procedure differs from previous analyses in which photon indices were computed using an average of both WT and PC modes. The 3D fundamental plane (or 3D Dainotti) correlation reduces to the 2D Dainotti correlation when b = 0. Note that the 3D fundamental plane Dainotti relation is a combination of this 2D Dainotti L_X − T*_X correlation and another 2D Dainotti correlation between the peak prompt luminosity and the luminosity at the end of the plateau emission (Dainotti et al. 2011a, 2015a). The natural log of the likelihood function takes the D'Agostini (2005) form, where, in the 3D fundamental plane relation case, N is the number of data points and σ_int is the intrinsic scatter parameter that contains the unknown systematic uncertainty. Note that σ_log L_X = σ_log FK_plateau and σ_log L_peak = σ_log FK_prompt, where log FK_plateau ≡ log F_X + log K_plateau and log FK_prompt ≡ log F_peak + log K_prompt. In the 2D Dainotti correlation case we fix b = 0 in equations (16) and (17). 
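A hedged sketch of the fit described in this section: a plane of the form log L_X = C_o + a log T*_X + b log L_peak together with a D'Agostini (2005)-style log-likelihood containing the intrinsic scatter σ_int. The paper's exact parametrization, pivots, error-propagation terms, and equation numbering may differ, and the toy data below are fabricated for illustration only:

```python
import numpy as np

def log_likelihood(params, logT, logLpeak, logLX, s_logT, s_logLpeak, s_logLX):
    Co, a, b, sigma_int = params
    model = Co + a * logT + b * logLpeak
    # Total variance: intrinsic scatter plus measurement errors propagated
    # through the plane (a common D'Agostini 2005 construction).
    var = sigma_int**2 + s_logLX**2 + (a * s_logT)**2 + (b * s_logLpeak)**2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (logLX - model)**2 / var)

# Toy usage with fabricated numbers (illustration only).
rng = np.random.default_rng(1)
n = 50
logT = rng.uniform(2.5, 5.0, n)          # log10 T*_X [s]
logLpeak = rng.uniform(51.0, 53.5, n)    # log10 L_peak [erg/s]
truth = (15.0, -0.9, 0.7, 0.4)           # Co, a, b, sigma_int (made up)
logLX = (truth[0] + truth[1] * logT + truth[2] * logLpeak
         + rng.normal(0.0, truth[3], n))
errs = np.full(n, 0.1)

print(log_likelihood(truth, logT, logLpeak, logLX, errs, errs, errs))
```

In the actual analysis a likelihood of this kind is sampled with MontePython jointly with the cosmological parameters, since L_X and L_peak depend on D_L(z, p); setting b = 0 recovers the 2D case.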
We avoid the circularity problem by simultaneously constraining cosmological-model and GRB-correlation parameters; if the GRB correlation parameters are independent of the cosmological models used in the analysis, then the GRBs are standardizable (Khadka & Ratra 2020c). The simultaneous fitting technique also allows for the determination of GRB-only cosmological constraints that can be directly compared to (or combined with) constraints determined from other data, unlike cosmological constraints determined from GRBs that have been calibrated using other data (which are then correlated with the data used in the calibration process). We list the flat priors of the free cosmological and GRB correlation parameters in Table 1. Since these GRB data sets cannot constrain Ω_b and H_0, we set Ω_b = 0.05 and H_0 = 70 km s^-1 Mpc^-1 in our analyses. By maximizing the likelihood functions, we obtain the unmarginalized best-fitting values and posterior distributions of all free cosmological-model and GRB-correlation parameters. We use the Markov chain Monte Carlo (MCMC) code MontePython (Audren et al. 2013; Brinckmann & Lesgourgues 2019), which interacts with the class code cosmological model physics. We use the python package getdist (Lewis 2019) to perform our analyses. The definitions of the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the deviance information criterion (DIC) can be found in our previous papers (see, e.g. Cao et al. 2022c,d). ∆AIC, ∆BIC, and ∆DIC are the differences between the AIC, BIC, and DIC values of the other five cosmological models and those of the flat ΛCDM reference model, while ∆AIC', ∆BIC', and ∆DIC' are the differences between the values for the 2D and 3D Dainotti correlations in the same cosmological model. Negative (positive) values of these ∆ICs indicate that the model under investigation fits the data better (worse) than does the reference model. Relative to the model with the minimum IC, ∆IC ∈ (0, 2] is defined to be weak evidence against the model under investigation, ∆IC ∈ (2, 6] is positive evidence against the model under investigation, ∆IC ∈ (6, 10] is strong evidence against the model under investigation, and ∆IC > 10 is very strong evidence against the model under investigation.

[Figure 1. One-dimensional likelihood distributions and 1σ, 2σ, and 3σ two-dimensional likelihood confidence contours for flat ΛCDM from various combinations of data. The zero-acceleration black dashed lines in panels (a) and (b) divide the parameter space into regions associated with currently-accelerating (left) and currently-decelerating (right) cosmological expansion.]

RESULTS

The posterior one-dimensional probability distributions and two-dimensional confidence regions of cosmological-model and GRB-correlation parameters for the six cosmological models are shown in the corresponding figures, and ∆AIC, ∆BIC, and ∆DIC, for all models and data sets, are listed in Table 2. We list the marginalized posterior mean parameter values and uncertainties (±1σ error bars and 1 or 2σ limits), for all models and data sets, in Table 3. In Cao et al. (2022d) we showed that the 3D Dainotti correlation parameters in all six cosmological models, determined from the Platinum data set, were mutually consistent, confirming that the 3D Dainotti correlation Platinum GRBs are standardizable.^8 When we compare the 3D Dainotti correlation parameter results in Table 3 for the six different cosmological models, we see that the 3D Dainotti correlation LGRB95 GRBs are standardizable.^9

Footnote 9: Note that although the highest (non-flat XCDM) and lowest (flat ΛCDM) values of the 1D marginalized b and C_o constraints from LGRB145 data differ by 1.44σ and 1.45σ, respectively, as shown in Fig. 7 their 1σ 2D marginalized contours overlap, so the 3D Dainotti correlation LGRB145 GRBs are also standardizable.
[Figure 3. One-dimensional likelihood distributions and 1σ, 2σ, and 3σ two-dimensional likelihood confidence contours for flat XCDM from various combinations of data. The zero-acceleration black dashed lines divide the parameter space into regions associated with currently-accelerating (either below left or below) and currently-decelerating (either above right or above) cosmological expansion. The magenta dashed lines represent w_X = −1, i.e. flat ΛCDM.]

Similarly, the determined 2D Dainotti correlation parameters, a and C_o, are independent (within the errors) of the cosmological model used in the analysis, for the Platinum, LGRB95, and LGRB145 GRBs. However, unlike the 3D Dainotti correlation cases and the 2D Dainotti Platinum case, for the cosmological models which result in the largest and smallest σ_int values in the 2D Dainotti correlation LGRB95 and LGRB145 cases, although the 1D marginalized σ_int constraints are in > 2σ tension, as shown in panels (a) and (b) of Fig. 8, their 2D marginalized contours are within 2σ. These results indicate that the 2D Dainotti correlation LGRB95 and LGRB145 data need more careful study.

[Figure caption (non-flat XCDM): One-dimensional likelihood distributions and 1σ, 2σ, and 3σ two-dimensional likelihood confidence contours from various combinations of data. The zero-acceleration black dashed lines (as in Cao & Ratra 2022) divide the parameter space into regions associated with currently-accelerating (either below left or below) and currently-decelerating (either above right or above) cosmological expansion. The crimson dash-dot lines represent flat hypersurfaces, with closed spatial hypersurfaces either below or to the left. The magenta dashed lines represent w_X = −1, i.e. non-flat ΛCDM.]

[Figure 5. One-dimensional likelihood distributions and 1σ, 2σ, and 3σ two-dimensional likelihood confidence contours for flat φCDM from various combinations of data. The zero-acceleration black dashed lines divide the parameter space into regions associated with currently-accelerating (below left) and currently-decelerating (above right) cosmological expansion. The α = 0 axes correspond to flat ΛCDM.]

For the 3D Platinum data set, the constraints on the intrinsic scatter parameter σ_int range from a low of 0.365 +0.038 [...] (∼ −0.3/0.5σ) or higher (∼ 0.1σ) than those from Platinum data. The constraints on the intercept C_o range from a low of 12.73 ± 4.69 (flat ΛCDM) to a high of 20.75 +7.14/−7.22 (non-flat XCDM), with a difference of 0.93σ, which are either higher (∼ 0.3/0.8σ) or lower (∼ −0.1σ) than those from Platinum data. Although the constraints on σ_int from LGRB95 data are > 2σ larger than those from Platinum data, which means that LGRB95 data do not fit the 3D Dainotti correlation as well as Platinum data do, the LGRB95 and Platinum 3D Dainotti correlation parameters (a, b, and C_o) are mutually consistent within 1σ, so they both obey the same 3D Dainotti correlation and so can be jointly analyzed. For the joint 3D LGRB145 data set, the constraints on the intrinsic scatter parameter σ_int range from a low of 0.458 +0.030/−0.035 (non-flat XCDM) to a high of 0.480 +0.030/−0.036 (flat φCDM), with a difference of 0.47σ. The constraints on the slope a range from a low of −0.929 ± 0.074 (non-flat XCDM) to a high of −0.872 ± 0.073 (flat ΛCDM), with a difference of 0.55σ. 
The constraints on the slope b range from a low of 0.572±0.094 (non-flat XCDM) to a high of 0.743±0.073 (flat ΛCDM), with a difference of 1.44σ. The constraints on the intercept Co range from a low of 12.71±3.78 (flat ΛCDM) to a high of 21.68 +4.93 −4.91 (non-flat XCDM), with a difference of 1.45σ. Although the constraints of b and Co from LGRB145 data are ∼ 1.4σ away, their 2D contours overlap within 1σ. These results show that for both GRB samples the correlation slope a remains consistent within 1σ with the value of the correlation slope corrected for selection biases, a = (−0.75 ± 0.11, −0.69 ± 0.07) for the Gold and Long GRBs, respectively , highlighting that the physics of the correlation, i.e. that the energy reservoir remains constant, is consistently maintained, independent of the sample and cosmology used (here we do account for the selection biases correction as in Dainotti et al. 2022c). Also, the positive correlation between L peak and LX is maintained at the 1σ level when compared with the intrinsic correlation corrected for selection biases which yield b = (0.7 ± 0.07, 0.64 ± 0.11) for the Long and Gold GRBs, respectively . Again, regardless of the sample and cosmological model used, the underlying physics of the correlation is preserved confirming the reliability of our results. In comparison with the cosmological parameter constraints from Platinum data, LGRB95 data provide slightly tighter constraints. For Ωm0 constraints, LGRB95 data provide higher 1 or 2σ lower limits than most of those from Platinum data with only 1σ lower limits. For Ω k0 constraints, LGRB95 data provide more restrictive and lower posterior mean values than those from Platinum data. Closed hypersurfaces are favoured but except for non-flat φCDM, flatness is more than 1σ away. LGRB95 data provide lower 2σ upper limits of wX than those provided by Platinum data, while they do not constrain α. It is worth noting that LGRB95 data provide slightly more restrictive constraints on both the cosmological-model and the 3D Dainotti correlation parameters, than do Platinum data, likely a consequence of the larger number of data points, 95 versus 50, more than compensating for the larger σint value, ∼ 0.52 − 0.54 versus ∼ 0.37. Based on AIC and BIC, non-flat XCDM is favoured the most by both LGRB95 and LGRB145 data, with the evidence against non-flat XCDM and non-flat ΛCDM being either weak or positive, and with the evidence against the remaining models being either strong or very strong. However, based on DIC, non-flat ΛCDM and non-flat XCDM are the most favoured model by LGRB95 and LGRB145 data, with positive evidence against the remaining models, and with positive evidence against non-flat ΛCDM and either strong or very strong evidence against the remaining models, respectively. From the AIC, BIC, and DIC results we find that the 3D Dainotti correlation is very strongly favoured over the 2D Dainotti correlation by all three of the GRB data sets. Therefore, although some of the cosmological parameter constraints are more restrictive in the 2D Dainotti correlation cases, possibly because there is one less free parameter to constrain in the 2D correlation cases, we do not discuss them in detail. Leaving aside the 2D Dainotti correlation LGRB145 data set, we briefly discuss the results from the 2D correlation Platinum and LGRB95 data. 
Overall, these GRB data used with the 2D Dainotti correlation prefer higher values of Ωm0 and lower values of Ω k0 and wX (non-flat XCDM), whereas they do not provide restrictive constraints on wX in flat XCDM (except for Platinum data) and α in φCDM models. However, the constraints on the Platinum and LGRB95 2D Dainotti correlation parameters a and Co are cosmological model-independent, with the 2D a values being more negative and less restrictive and the 2D Co values being larger and more restrictive than those from the corresponding 3D Dainotti correlation data sets. SUMMARY AND CONCLUSION In addition to 50 Platinum GRBs, we use LGRB95 data that contains 95 long GRBs, as well as the joint 145 GRB data compilation, to study whether the 2D or 3D Dainotti correlation is more favoured by data, as well as to constrain cosmological-model and GRB-correlation parameters, in six flat and non-flat dark energy cosmological models. Based on AIC, BIC, and DIC results, we find that the 3D Dainotti correlation is much more strongly favoured than the 2D one by the GRB data sets we study. We also find that LGRB95 data obey the 3D Dainotti correlation and are standardizable. Platinum and LGRB95 data provide mutually consistent constraints on both cosmological-model and GRB-correlation parameters, and also provide cosmological-model independent 3D Dainotti correlation parameter constraints. Therefore, we can combine Platinum with LGRB95 data to form the LGRB145 data set and use it for similar analyses. We find that while LGRB95 data have ∼ 42−49% larger values of intrinsic scatter parameter σint ∼ 0.524 − 0.543 than σint ∼ 0.365 − 0.369 of Platinum data, they provide somewhat tighter constraints on cosmological-model and GRB-correlation parameters, perhaps mostly due to the larger number of data points, 95 versus 50. We recommend that when compiling GRB data for the purpose of constraining cosmological parameters, given the quality of current GRB data, attention be placed on also expanding the sample size, in addition to attempting to reduce the value of σint of the compilation. 10 LGRB95 data favour higher values of Ωm0 and lower values of Ω k0 than do Platinum data, whereas the joint LGRB145 data favour even higher and lower values of Ωm0 and Ω k0 than both Platinum and LGRB95 data, respectively. All these GRB data do not provide restrictive constraints on wX and α. LGRB145 data also provide tighter constraints on GRB-correlation parameters and the intrinsic scatter parameter. Given the current paucity of GRB data it is therefore necessary to increase the sample size to allow the 3D correlation to have cosmological constraints comparable to those from SNIa data. A detailed study on simulating GRB constraints based on the Platinum sample to determine the number of Platinum-quality GRBs needed to reach constraints similar to those from a number of recent SNIa data sets is presented in Dainotti et al. (2022c). To achieve similar GRB constraints one needs to wait for more GRB data from future missions, and one can use machine learning techniques and lightcurves reconstruction on these larger data sets that can enable smaller scatter (47.5%) on the 2D and 3D correlation parameters. We note that a major restriction on the use of more current GRBs as cosmological tools is the lack of redshift for many GRBs. Only 26% of the total number of GRBs observed by Swift have reliable redshifts. Work on the inference of redshifts is underway (Dainotti et al. 2019) and does not require waiting for a new mission. 
Once reliable redshifts are determined for the GRBs with X-ray plateaus, we anticipate having a Platinum-quality sample twice as large as the current one, and we also anticipate doubling the size of the LGRB95-quality sample. Current GRB data alone cannot provide very restrictive cosmological constraints comparable to those from better-established probes such as CMB, BAO, H(z), or SNIa measurements, but one can do joint analyses of GRB data with these data to get more restrictive cosmological parameter constraints (Xu et al. 2021; Cao et al. 2022c,d; Cao & Ratra 2022). We also look forward to a larger, better-quality compilation of GRB data from the SVOM mission scheduled to be launched in 2023 (Atteia et al. 2022), and possibly the THESEUS mission in 2037. In conjunction with machine learning techniques, these new data should provide significantly more restrictive GRB cosmological parameter constraints that could be comparable with those from SNIa data.

[Table notes: w_X corresponds to flat/non-flat XCDM and α corresponds to flat/non-flat φCDM. (c) This is the 1σ limit; the 2σ limit is set by the prior and not shown here.]
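For reference, a minimal sketch of the information criteria used in the model comparisons above; maxlnL is the maximum log-likelihood, k the number of free parameters, and N the number of data points, while for the DIC the log-likelihood is evaluated over the posterior (MCMC) samples and at the posterior-mean parameters. These are the standard definitions; consult the papers cited above for the exact conventions adopted:

```python
import numpy as np

def aic(maxlnL, k):
    return 2.0 * k - 2.0 * maxlnL

def bic(maxlnL, k, N):
    return k * np.log(N) - 2.0 * maxlnL

def dic(lnL_samples, lnL_at_mean_params):
    D_bar = -2.0 * np.mean(lnL_samples)   # mean deviance over the chain
    D_hat = -2.0 * lnL_at_mean_params     # deviance at the posterior mean
    p_D = D_bar - D_hat                   # effective number of parameters
    return D_hat + 2.0 * p_D              # equivalently D_bar + p_D

# Delta IC relative to the reference model: 0-2 weak, 2-6 positive,
# 6-10 strong, and >10 very strong evidence against the higher-IC model.
```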
2022-04-20T01:15:41.971Z
2022-04-19T00:00:00.000
{ "year": 2022, "sha1": "8116244a9c917173f2f8bdea9476341df0c8e6f3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "8116244a9c917173f2f8bdea9476341df0c8e6f3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
119192681
pes2o/s2orc
v3-fos-license
Intrinsic colors and ages of extremely red elliptical galaxies at high redshift In order to know the formation epoch of the oldest elliptical galaxies as a function of mass and observed redshift, a statistical analysis for 333 extremely red objects (EROs) classified as old galaxies (OGs) at 0.8<z<2.3 is carried out. Once we get M_V and (B-V) at rest for each galaxy, we calculate the average variation of this intrinsic color with redshift and derive the average age through a synthesis model (the code for the calculation of the age has been made publicly available). The average gradient of the (B-V) color at rest of EROs/OGs is 0.07-0.10 Gyr^{-1} for a fixed luminosity. The stars in these extremely red elliptical galaxies were formed when the Universe was ~2 Gyr old on average. We have not found a significant enough dependence on the observed redshift and stellar mass: dt_{formation}/dt_{observed}=-0.46+/-0.32, dt_{formation}/(d log_10 M_*)=-0.81+/-0.98 Gyr. This fits a scenario in which the stellar formation of the objects that we denominate as EROs-OGs is more intense at higher redshifts, at which the stellar populations of the most massive galaxies form earlier than or at the same time as less massive galaxies. Introduction It is usually sustained that galaxies at high redshift are intrinsically bluer than at low redshift (e.g., Dickinson et al. 2003;Rudnick et al. 2006;Labbé et al. 2007), as would expected if their populations were younger and with lower mass/luminosity ratios. However, analysis of this intrinsic color evolution is not free from caveats, due mainly to the difficulty in disentangling selection effects. For instance, Dickinson et al. (2003, Fig. 2) showed a clear lack of red objects between 2 < z < 3 with (m 1700Å − m B ) AB,rest > 3, while there were many of these red objects at z < 2; so they consequently claimed that there is an evolution in color. However some red galaxies were probably missed in that analysis. As a matter of fact, if we take the subsample of old elliptical galaxies without extinction (f old ≥ 0.95) with 2 < z < 3 from Miyazaki et al. (2003) (N = 12 galaxies) we find that all of them have 2.5 < (m 1700Å − m B ) rest < 4.5 with an average of 3.6. Miyazaki et al.'s galaxies are much redder than Dickinson et al.'s, which shows that the Dickinson et al. (2003) sample does not contain the reddest galaxies. On the question of the ages of galaxies and when were they formed, there also some uncertainties. There are many observations at different redshifts of galaxies nearly as old as the Universe, although the density of those galaxies at high redshift is significantly lower (Renzini 2006;Abraham et al. 2007). Early-type massive galaxies with early formation were found at z ∼ 1 (di Serego Alighieri et al. 2006;, z = 1 − 2 (Spinrad et al. 1997;Daddi et al. 2005;Longhetti et al. 2005;Toft et al. 2005;Trujillo et al. 2006), z = 2 − 3 (Daddi et al. 2005;Labbé et al. 2005;Cassata et al. 2008, Kriek et al. 2008 Toft et al. 2005;Chen & Marzke 2004), z = 4 − 5 (Chen & Marzke 2004;Rodighiero et al. 2007) and z > 5 (Wiklind et al. 2008). Most of this information is obtained from photometry, but there are also some spectra massive elliptical galaxies at z = 1.4 − 2.2 , Kriek et al. 2009) revealing them to be old galaxies. Semi-analytical ΛCDM models (De Lucía et al. 
2006) claim that the formation of low mass galaxies is first to give way to mergers of very massive galaxies, which is apparently at odds with observation of these massive galaxies at high redshift, even if they were formed through dry mergers in a downsizing scenario. Ferreras et al. (2009) also found that very massive galaxies do not have significant evolution at z < 1.2. However, Schiavon et al. (2006), who have taken spectra of red galaxies (U − B > 0.25) at z ∼ 0.9, derived their age to be on average 1.2 Gyr, which means that some galaxies were formed at lower redshifts. Arnouts et al. (2007) also show evidence for a major build up of the red sequence between z = 2 and z = 1. In this paper, we pay further attention to the determination of intrinsic color and age variation for different redshifts in a statistical way for two different samples of very red elliptical galaxies in order to constraint their formation epoch. Data We select galaxies classified as Extremely Red Objects (EROs) within the redshift range 0.8 ≤ z ≤ 2.3 and, within this group, those classified as Old Galaxies (OGs) with negligible intrinsic extinction; that is, passively evolving populations of elliptical galaxies. The different methods of selecting OGs are quite consistent with each other (Fang et al. 2009). Galaxies will be selected with available fluxes in the three near infrared filters (JHK), plus at least two filters in the optical for which the flux signal/noise is greater than 3. We use two sources of publicly available data: 1. ECDFS catalog (Taylor et al. 2009a), which gives photometry in ten filters: U, U 38 , B, V, R, I, z', J, H, K s from ISAAC(VLT)+Hubble data. For the selection of EROs-OGs, we adopt the color criterion (i 775 − K) AB > 2.42 (EROs), (J − K) AB < 0.20(i 775 − K) AB + 0.39 (Fang et al. 2009). We do not have the magnitude at i 775 but we get this with the corresponding color correction using the adjacent filters. A total of 276 galaxies. 2. Miyazaki et al. (2003) give photometry in eight filters: B, V, R, i', z', J, H, K s from Subaru/XMM-Newton+UH2.2m. Their sample of EROs-OGs was selected using (R − K) AB > 3.35 (EROs), and within these sources, by means of spectrum fitting, as OGs without extinction with a fraction of old population f old ≤ 0.95. A total of 57 galaxies. Estimating roughly the average intrinsic color/age of EROs/OGs is our aim here. The term "ERO" reflects the observed characteristic color of a galaxy, not its intrinsic properties. For this reason, and because of using magnitude-limited samples, they have different ranges of stellar masses, M/L ratios and intrinsic colors at different redshifts. It is not a homogeneous sample of galaxies with the same intrinsic characteristics at all redshifts; there are biases. Nonetheless, throughout the paper we shall separate the dependence on redshift from the dependence on luminosity/mass. Color (B − V ) at rest We take AB apparent magnitudes, corrected for Galactic extinction (although negligible), for different wavelengths, m AB (λ i ), (i = 1, ..., N f ), for 5 ≤ N f ≤ 10, with the corresponding error bars (in our case in optical and near infrared). As has already been said, we consider only the points with a flux signal/noise above 3. We will use data with available redshifts, most of them photometric. The average systematic error of the photometric redshifts is ∆z (1+z) ∼ −0.025 for ECDFS (Taylor et al. 2009a, §7.3), which is small, so we do not take it into account here; similarly for the Miyazaki sample. 
There is a statistical error for each photometric redshift, but we expect that these uncertainties will nearly cancel in the statistical analysis. With this, we calculate the rest luminosities in two filters. This is done through spectral energy distribution (SED) fitting using templates of galaxies with the software Interrest v2.0 (Taylor et al. 2009a). The calculations for the sample ECDFS have already been carried out by Taylor et al. (2009a). The calculation for Miyazaki et al. (2003) was carried out by us with the Interrest v2.0 software, by changing to the Subaru filters. In this paper, we do the calculations only with a pair of filters at rest (Johnson B and V), applying the correction to convert AB into the Vega calibration: (Frei & Gunn 1994). The (U − B) rest color is not used in this paper because it is more sensitive to redshift uncertainties and uncertainties in the emission-line corrections (Rudnick et al. 2006). Moreover, the (U − B) color depends more strongly than (B − V ) on metallicity for a given age; it is also more sensitive to α-enhancement (Cassisi et al. 2004). There are uncertainties in the rest color, due to the error bars of the apparent magnitudes, the deviation of the assumed shape from the true SED (spectral features which move through the filter bands), errors in the photometric redshifts, etc. In any case, we do not expect important systematic errors, and the statistical errors can be reduced when we calculate the average for bins with a large number of galaxies. In Fig. 1, we give the results of the average colors as a function of the age of the Universe when the galaxy is observed, with H 0 = 73 km/s/Mpc, Ω m = 0.24, Ω Λ = 0.76. Both samples give approximately the same results; thus the selection of OGs among EROs is shown to be quite consistent with both independent methods. There is a significant average gradient in color of 0.124 ± 0.014 and 0.118 ± 0.021 Gyr −1 respectively for samples ECDFS and Miyazaki et al. Note that the galaxy gap in the lower right corner of Fig. 1 is at least partially an artifact of the sample intrinsic color bias as a function of the redshift. This is expected because at the lowest redshift, the (i − K) and (R − K) limits used will pick out only the very oldest and reddest galaxies, whereas, at higher redshifts, younger and bluer galaxies will be included. Note even that at the very lowest redshifts in Fig. 1 the much smaller Miyazaki sample dominates; this is because the ECDFS criterion is a bit more strigent, so it eliminates even the oldest passive galaxies at z ∼ 0.8. One must therefore interpret Fig. 1 as the intrinsic color as a function of redshift of the objects selected as EROs/OGs, not of a general characterization for elliptical galaxies. If we do a bi-linear fit of the colors as a function of two independent variables t obs. and M V,rest , we get: with a 1 = 0.756±0.014, a 2 = 0.084±0.015, a 3 = 0.065±0.013 for ECDFS; a 1 = 0.814±0.023, a 2 = 0.087 ± 0.027, a 3 = 0.048 ± 0.028 for Miyazaki. This bi-linear fitting allows us to separate the evolution from the biases in absolute magnitudes; so the second term (0.084 or 0.087) gives us the average evolution in color for a fixed luminosity. Galaxies are redder for lower observed redshift and for lower luminosity. The first fact indicates older galaxies at lower redshift, and the second fact is probably related to a higher luminosity for younger populations. Other authors, for instance Labbé et al. 
(2007) have found however that the most luminous galaxies have redder colors, with a slope of d(U −V ) dM V = −0.09 ± 0.01 (Labbé et al. 2007). Our guess is that we do not find the same luminosity dependence because we have preselected massive galaxies with the constraint that they be EROs/OGs, and within a different range of redshifts. In Taylor et al. (2009b, Fig . 4), we see that d(u−r) dMr is negative for z < 1.25; however, the trend changes for z > 1.25 and this variation of average color with absolute magnitude is null or even slightly positive. 4. Age estimation for early-type galaxies NOTE: A FORTRAN code to carry out the calculations explained in this section is available at http://www.iac.es/galeria/martinlc/codes.html As has been said, we have selected old massive elliptical galaxies with negligible internal extinction, i.e. without gas and dust. We can therefore connect the color and luminosity in V-rest of the galaxy with the average age of the stellar population and its metallicity. There may be some wrong identification of OGs among our galaxies selected with the color method for the ECDFS sample (Miyazaki et al. 2003 andFang et al. 2009 estimate it to be ∼ 25%), and consequently there may be some case of dusty galaxies among our ECDFS sample, but the statistical comparison with the SED fitting method of the Miyazaki et al. sample in Fig. 1 shows that there are neither significant differences nor systematic effects with redshift. Only perhaps in the range 1.2 < z < 1.6 might there be some small difference, where the starburst contamination might be higher (Fang et al. 2009). In order to estimate the average age corresponding to our galaxies, we use a synthesis model: Vazdekis et al. (1996;hereafter V96); see also Vazdekis (1999). There is an agemetallicity degeneracy, but this can be broken approximately with the use of the massmetallicity correlation. Another way to break the degeneracy would be by using two colors (Li & Han 2007, and references therein), but we have only one reliable color at rest and we do not have rest near-infrared colors as necessary in Li & Han (2007). We must also bear in mind that synthesis models return the mean value of a distribution, and a perfect fit to observational data to infer the age is only correct on average, since the individual cases may present some dispersion with respect to the average (Cerviño & Luridiana 2006). For this reason, and because of the large errors in the color of each galaxy, we do not calculate the age of each galaxy separately but the average age of each bin of galaxies (11 in our case) with the same redshift. For each bin of galaxies, the steps are as follows: 1. We assume zero metallicity and derive the age (t 1 ) of the galaxy which is given by the V96 model for the given (B − V ) rest of the galaxy. 2. Given the age t 1 and the zero metallicity, we derive with the V96 model the stellar mass-to-light ratio in the V filter [(M * /L V ) 1 ]. 3. Since we know the luminosity at rest in the V filter (L V ), we can derive the stellar mass of the galaxy: 4. Given the stellar mass of the galaxy, we estimate the metallicity [F e/H] 1 . We use the correlations of metallicity and α-enhancement given by Thomas et al. (2005). The average relation is: with and uncertainty of δ([F e/H]) ≈ 0.11 including the scatter of the correlations and the variations with the environment (low or high densities) of the galaxies. The correlation of mass (or velocity dispersion) with metallicity is also observed in Yamada et al. (2007). 
There is also a dependence on age, but it is only important for low-mass galaxies with velocity dispersions of less than 100 km/s (Yamada et al. 2007), which is not the case for our galaxies. It is assumed that the relationship of mass and metallicity does not evolve with redshift in a passive evolution (di Serego Alighieri et al. 2006). A metallicity evolution in galaxies of the same mass is not observed, so, provided that mass does not correlate with the age, the mass-metallicity relationship should not change too much at high redshift. As is observed in Fig. 2, the accuracy in the metallicity determination mainly affects the reddest (oldest) galaxies. For the youngest galaxies (< 3 Gyr) the errors in metallicity are not very important, so a possible evolution in the relationship of Eq. (4) at high redshift would not affect the results. 5. We derive again the age t_2 with the color (B − V)_rest and the metallicity [Fe/H]_1. 6. We repeat steps 2-5, which gives an age t_3. Since t_3 ≈ t_2, we do not need to do further iterations and we obtain the convergence of the metallicity, mass and age with only three iterations. If t_3 were significantly different from t_2, we would continue to iterate to get t_4, t_5, ... until convergence is obtained. There is some dependence on the IMF (initial mass function) slope. In Fig. 2, we plot age vs. color for metallicities [Fe/H] = 0 and [Fe/H] = 0.2 and different slopes of the bimodal IMF, as defined in V96, or the Kroupa IMF using the V96 model. Slope 1.3 in the bimodal IMF is the standard value and is nearly coincident with the Kroupa case. Within variations of ±1 of the slope, the variations of the age are fitted from Fig. 2 for null metallicity by Eq. (5). These variations produce some error in the age determination. However, this error is smaller than that produced by the uncertainty in the color or the metallicity. In any case, they are taken into account. The ages of the early-type galaxies are plotted in Fig. 3. The vertical bars include the errors due to uncertainties in the colors, the uncertainty of 0.11 in the mass-metallicity relationship of Eq. (4) and the variations due to the IMF slope change within a range of ±1 given by Eq. (5). There is no zero-point calibration problem except perhaps for the last bin because this affects mainly galaxies older than 5 Gyr (Vazdekis et al. 2001). We must also bear in mind that we are neglecting the systematic errors in the photometric redshifts; were they non-negligible, we would have extra systematic errors in the calculated ages. Figure 3 represents the average age of the given sample among the EROs/OGs, with all the selection effects associated with each redshift. If we separate the dependence on mass from its evolution, subdividing each redshift bin into sub-bins with different luminosity (∆M_V = 1), and we fit over them a bilinear relation weighted with the inverse square of the relative errors, both for average color and average age, we get t_gal. = c_1 + c_2 (t_obs. − 5) + c_3 log_10(M_*) [...] = +0.024 Gyr^-1 for the red sequence at a given mass of 2 × 10^11 M_⊙, equivalent to d(...)/dt_obs. ≈ +0.03 Gyr^-1, a smaller color evolution but for larger masses than our sample.

Discussion on their formation epoch

The average epoch of star formation (the epoch of formation of the first stars might be lower) is t_form. = t_obs. − t_gal., shown in Fig. 4. 
Separating the evolution from the mass dependence, t_form. = d_1 + d_2 (t_obs. − 5) + d_3 log_10(M_*), with d_1 = 1.94 ± 0.51, d_2 = −0.46 ± 0.32, d_3 = −0.81 ± 0.98 on average for the ECDFS+Miyazaki samples. The observed EROs/OGs were all formed within a narrow range of epochs, when the Universe was less than 4 Gyr old (z > 1.7). The present fit is compatible with all EROs/OGs being formed at the same time at, on average, t_form. = 2.0 ± 0.3 Gyr (z ∼ 3 − 4). Again, we remind the reader that the criterion to select EROs/OGs is more restrictive at lower redshift, picking out only very old galaxies, while at higher redshift the range of allowed ages is wider. We might therefore expect that the oldest EROs at high z (low t_obs.) will have an even lower formation age. The age t_form. = 2.0 Gyr is a conservative lower limit representing the average sample; there must be some EROs/OGs formed beforehand. Given that dt_form./dt_obs. = −0.46 ± 0.32 ≪ +1 for a given stellar mass, the galaxies of the present sample were not formed continuously at the same rate, but more intensely at higher redshifts. The stellar populations of the most massive galaxies were not formed much later than those of the less massive ones (d_3 ≤ 0). This agrees with the results mentioned in the introduction that very massive evolved galaxies detected at redshifts 1.5-6 were formed in the very early Universe (Daddi et al. 2005; Chen & Marzke 2004; Rodighiero et al. 2007; Wiklind et al. 2008). This might appear to contradict the result of §3 that galaxies are redder for lower luminosities, but it does not. As said, the mass-to-luminosity ratio does not remain constant, giving higher luminosity for younger objects, so it is not contradictory that older/redder objects correspond to lower luminosities and higher masses. This is in fact observed if we compare Figs. 4 and 5 of Taylor et al. (2009b) for z > 1.25: clearly, a strong dependence on stellar mass does not mean a strong dependence on luminosity.
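A minimal sketch of the iterative procedure of Sec. 4 (color → age → M*/L_V → stellar mass → metallicity → age, iterated to convergence). The synthesis-model lookups and the mass-metallicity coefficients below are toy placeholders, not the V96 tables or the Thomas et al. (2005) relation actually used; the authors' FORTRAN implementation is linked in Sec. 4:

```python
import numpy as np

AGES = np.linspace(0.5, 13.0, 200)   # ages in Gyr covered by the toy "model"

def model_color(age, feh):
    return 0.55 + 0.25 * np.log10(age) + 0.15 * feh     # toy (B-V) vs age, [Fe/H]

def model_ml_v(age, feh):
    return 1.5 * age**0.8 * 10**(0.1 * feh)              # toy M*/L_V lookup

def age_from_color(bv_rest, feh):
    # Invert color(age) at fixed [Fe/H]; assumes color increases monotonically with age.
    return np.interp(bv_rest, model_color(AGES, feh), AGES)

def feh_from_mass(log_mstar):
    return 0.3 * (log_mstar - 11.0)                       # schematic mass-metallicity relation

def estimate_age(bv_rest, L_V, n_iter=3):
    feh = 0.0                                             # step 1: assume [Fe/H] = 0
    age = age_from_color(bv_rest, feh)
    for _ in range(n_iter - 1):                           # steps 2-6: iterate to convergence
        log_mstar = np.log10(model_ml_v(age, feh) * L_V)  # M* = (M*/L_V) * L_V
        feh = feh_from_mass(log_mstar)
        age = age_from_color(bv_rest, feh)
    return age, feh

print(estimate_age(bv_rest=0.85, L_V=1.0e11))             # L_V in solar units (toy)
```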
2009-11-19T15:55:29.000Z
2009-11-19T00:00:00.000
{ "year": 2010, "sha1": "b754824c81ac7557bb81373fe4d680fedfcb5499", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/0004-6256/139/2/540/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "b754824c81ac7557bb81373fe4d680fedfcb5499", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
55353486
pes2o/s2orc
v3-fos-license
The mechanical design of the BARREL section of the detector CALIFA for R 3 B-FAIR In this work we present the mechanical concept proposed for one of the sections of the detector CALIFA of the R3B experiment for FAIR. The use of an alveolar structure made of carbon-fiber composites allows for a light and robust solution to hold the active elements with an extreme mass ratio below 0.7%. The active core is supported by structural elements designed to make a fully operational assembly, taking care of different configurations and functionality. All the design has been developed using intensive calculation based in finite elements models and physical simulations. Introduction The detector CALIFA (CALorimeter for the In Flight detection of gamma-rays and light charged pArticles) is one of the key detectors of the R 3 B experiment [1] at the facility FAIR [2].FAIR (Facility for Anti-proton and Ion Research) is the next step beyond the GSI [3], developing physics programs in four main fields, one of those being Nuclear Structure, Astrophysics and Reactions.This program is driven by the so called NUSTAR collaboration, joining the efforts of some seven hundreds researchers of more than 150 institutes from all over the world. The next generation experimental setup for studies of Reactions with Relativistic Radioactive Beams (R 3 B) is one of the eight experiments of the NUSTAR collaboration, based in the use of the intense secondary nuclear beams provided by FAIR.The R 3 B setup will cover experimental reaction studies with exotic nuclei far off stability with emphasis on nuclear structure and dynamics, as well as Astrophysical aspects and technical applications.The R 3 B collaboration includes more than 50 different institutes, and several hundreds of collaborators. The detector CALIFA (CALorimeter for the In Flight detection of gamma-rays and light charged pArticles) surrounds the R 3 B reaction target and will be used in a wide spectrum of experiments.It will feature a high photon detection efficiency and good energy resolution even for beam energies approaching 1 AGeV, as well as the required calorimetric properties for detection of multiple gammaray cascades, and the detection of protons with good energy resolution up to several hundreds of MeVs.CALIFA consists of two sections, a cylindrical Barrel and a Forward EndCap.To minimize the effect of the Doppler broadening on the energy resolution, the segmentation must be large enough.Therefore the polar and azimuthal apertures of the crystals must vary with position along the detector a e-mail: e.casarejos@uvigo.es to adapt to the geometry, as well as the radial length of the crystals, selected to fit with the range of the most energetic particles.The optimum cost-effective solution for the BARREL is based on almost 2000 CsI(Tl) crystals covering an angular range from 43 to 140 degrees in a compact geometry, with an internal radius of 300 mm that maximizes the calorimetric qualities. The BARREL is split into two symmetric halves, which can operate either separately, or closed together, or even unpaired (shifted) in the beam direction.The functional detector consist in the active core of the CsI(Tl) crystals and the mechanical structure to hold and move the active core, based in a carbon-fibre alveolar structure, a barrel-like cover, and an external moving structure.All the necessary front-end electronics, cabling, gas piping, temperature and slow control, etc. will be included. 
The Technical Design Report (TDR) of the BARREL [4] was approved by the external experts committee of FAIR by the end of 2012. The construction of CALIFA will start in 2013 with the so-called DEMONSTRATOR [5]. This is a structure based on the CALIFA mechanical solutions, covering about 20% of the detector active elements and available for physics experiments in 2014 at GSI. CALIFA is expected to be ready for commissioning in 2016.
CF-composite structure for the core
The overall constraints for the active core made of almost 2000 CsI(Tl) crystals include: i/ a robust and safe structure, ii/ a minimum of structural material, and iii/ a tight definition of the static positioning and orientation of the crystals. The design solution is an alveolar structure made of carbon-fibre composites (CF) to support the crystals. Extensive studies based on GEANT4 simulations and mechanical calculations with finite-element analysis in ANSYS [6] were performed and guided the engineering design to reach an optimum version of the CF-structure. The design optimization was focused, on the one hand, on the optimal segmentation to reduce the Doppler broadening without damaging the calorimetric properties. On the other hand, the R&D worked on the structural materials with the motto 'the less, the better' to minimize the energy losses in the passive material, which is critical for the energy resolution of the protons.
In figure 1 we show the segmentation of the CF-structure in both polar and azimuthal angles. The CF-structure has 16 rings, each made of 32 pieces that minimize the empty spaces. The segmentation was optimized according to a balance between a limited influence on the geometric component of the resolution [7] and a limited number of elements. The self-supported honeycomb structure is mechanically optimized, with a wall thickness below 0.3 mm. The ratio of the mass of the CF-structure to that of the active elements it holds is kept below 0.7%.
Cover structure
A modular cover structure holds the CF-structure and connects with the external structure. In figure 3 (left panel) we show a drawing of the COVER structure, built up with similar modules. This structure closes the active core to make a gas- and light-tight volume. That is mandatory due to the sensitivity and hygroscopicity of CsI(Tl) crystals. Moreover, the response of the crystal and photosensor is temperature dependent. To cope with these requirements the active volume will be filled with nitrogen renewed in a closed loop at controlled temperature. On its outer surface, the COVER holds the pre-amplifier units and distributes the external and internal refrigeration gas piping. In figure 3 (right panel) we show the assembly of the CF-structure, the COVER and the elements on its surface.
External structure
The external structure must support the active core, allowing for the partition of the system into two autonomous and symmetric halves. The two blocks are closed in the nominal configuration of CALIFA. Other configurations are possible: a single half alone, and also both halves set unpaired with a longitudinal shift between them to allow for a clearance of the forward angles of one side.
The EXTERNAL structure is gantry-like, with arms that clip onto the COVER structure, and with additional platforms and rails. The EXTERNAL structure allows for quasi-independent movements in X-Y-Z. The opening of the halves is achieved by sliding the base platforms over rails set at about 15 degrees, which also moves the structure away from the big dipole magnet placed next to it in the future installation. The same platforms slide in the longitudinal direction, with an independent system, to allow the relative shifting between the halves. These movements also help in the fine fitting of the core, in clearing the forward angles, and in the setup of the future Forward EndCap. In figure 4 we show two views of the whole assembly in its nominal position (left), and with the two blocks opened (right).
Figure 1. Left panel: three views of a row of the CALIFA BARREL segmentation. The segmentation in the polar angle determines the geometrical limit of resolution for the Doppler shift correction. Right panel: the filling of the azimuthal angle is done with equal pieces per ring, covering the maximum volume to avoid void spaces.
Figure 2. Left panel: view of the whole set of CF pieces, which form a self-supported honeycomb structure. The wall thickness of the pieces is below 0.3 mm. Right panel: the assembly of the CF structure and the pieces with flaps that hold it at the upper part of the honeycomb walls required detailed mechanical calculations. The figure shows the results of one of those evaluations using a finite-element model.
Figure 3. Left panel: the COVER structure, made of similar modules, makes a rigid and robust structure that supports the active core inside. Right panel: the cover makes a closed volume for the active core and supports on its surface the units of the pre-amplification stage and the temperature-control piping.
Figure 4. The EXTERNAL structure supports the CF-structure with arms that join the COVER and the gantry. Moving platforms and rails allow for opening the two halves of CALIFA, as well as other movements that allow for the fine fitting of the core, a clearance of the forward angles, and help in the setup of the future Forward EndCap.
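As a closing aside on the CF honeycomb described above, the following rough Python estimate shows that a sub-0.3 mm carbon-fibre wall around a CsI(Tl) crystal is compatible with a passive-to-active mass ratio at the level of 0.7%. The crystal dimensions, the wall-sharing assumption and the composite density are illustrative guesses, not values from the design.

# Order-of-magnitude check of the passive/active mass ratio for one alveolus.
rho_cf = 1.6      # g/cm^3, typical carbon-fibre composite (assumed)
rho_csi = 4.51    # g/cm^3, CsI
a, b, length = 3.0, 3.0, 18.0   # cm, assumed crystal cross-section and length
t_wall = 0.03                    # cm, i.e. a wall thickness below 0.3 mm

m_crystal = rho_csi * a * b * length
# In a honeycomb each wall is shared by two neighbouring alveoli, so roughly
# half of the four side walls is charged to each crystal.
m_walls = rho_cf * 0.5 * 2.0 * (a + b) * length * t_wall
ratio = m_walls / (m_walls + m_crystal)
print(f"passive/active mass ratio ~ {100.0 * ratio:.2f} %")   # ~0.7 % with these numbers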
Global Journal of Psychology Research: New Trends and Issues A study on depression among women who have been abused in Jordan Violence against women constitutes a violation of human rights and is one of the most important issues that affect the family, its cohesion and the safety of its members . Most of the studies, conducted on abused women, have indicated that there is a correlation between violence and its effects. This study aimed to identify the level of depression among abused women in Jordan. The sample of the study was deliberately chosen and it included 100 women who were between the ages of 18 and 50 years. To achieve the objectives of the study, Beck ’ s list of depression was used. To ensure the validity of the results, semi-structured interviews were conducted. The results indicated a high level of depression in the study sample of abused women. There were no differences in all dimensions in the depression of abused women following age, social situation and educational level variables. Al-Shawashrah and conducted a study entitled ‘ Rational Thoughts and Their Relationship to Depression among Abused Women in the Triangle Region ’ . The study sample comprised 93 abused women. The study results also showed that there were no statistically significant differences in the level of significance and depression among abused women. The absence of differences was due to profession, type of violence and educational level variables. The results also indicated that the level of irrational thoughts among abused women was average. There are no statistically significant differences at the level of significance ( α = 1010) in the level of irrational thoughts among abused women. The results also showed a positive statistically significant correlation between depression and irrational thoughts among abused women. of Introduction The phenomenon of violence against women is a global phenomenon that is not limited to a particular society or a particular social segment. Rather, it is an issue related to human existence and the relationship between men and women. Its severity and breadth vary from one society to another concerning the progress of awareness of the importance of the role of women in building societies. Women are essential partners for men in the overall development process of any society (Bonomi et al., 2009). Interest in studying violence against women has increased since the 1970s of the 20th century as the problem of violence against women was not known. There were no programmes to protect women or to shelter abused wives and their children. Counselling and treatment programmes did not exist and this interest came in response to the women's liberation movements that expanded all over the world. The United Nations issued Resolution 18 of 1991 to put an end to violence against women. They considered 25 November each year as the International Day for the Elimination of Violence against Women. Thus, it became a turning point in the march of women and their issues. In 1993, women's organisations began to take care of this issue at the International Vienna Conference (Feena) on Human Rights. The issuance of the Universal Declaration on the Elimination of Violence Against Women spread in all countries of the world, including the Arab countries. It shows that women's violence is any violent act that results in physical or psychological harm to women and psychological deprivation of freedom. 
This can occur in public or private life (Karadsheh, 2010;McFarlane, Symes, Maddoux, Gilroy, & Koci, 2014). Abused women are subjected to various forms of violence, including physical, psychological, sexual, verbal, health, social and economic violence (Banat, 2006;Masarogullari & Uzunboylu, 2017). These forms are associated with a wide range of psychological and behavioural disorders that affect the physical and psychological health of women and their performance in roles. The study of Khoei et al. (2018) indicated that abused women see themselves as incompetent, worthless, unloved and useless. They have no right to control their own lives, are uncertain about themselves in their relationships with others and have unrealistic expectations of improvement. Literature review The World Health Organisation (WHO) defined violence as the intentional use or threat of force or physical exploitation against oneself, another person or a group of persons. It leads to injury, death or health difficulties. More specifically, we find that violence is any psychological or physical aggression that leads to consequences that include harm, physical and psychological pain (Radell et al., 2021). Violence against women takes many different forms (Banat, 2006;Ubadah, 2009), the most common of which are as follows: • Physical violence: It is one of the most obvious forms of violence because the abusive person beats the female victim, taking advantage of his physical ability and the woman's weakness. He uses biting, kicking, slapping and instruments that injure, break and leave clear traces on the woman's body. The beating process goes through stages before it occurs. First, an argument between the parties happens which turns into a conflict, then into insults and develops into a beating, leading to serious physical and psychological consequences for women. • Sexual violence: It means coercion to have sexual intercourse, encouragement, forced prostitution or forced viewing of sex. It is the use of abnormal and perverted methods of morality in marital intercourse and sexual methods to humiliate and blame the woman for the husband's impotence. He considers her responsible for it and uses sexual methods to induce a woman to have sex with another man under duress. Thus, the woman rejects it. • Health violence: It means depriving the wife of health conditions and reproductive health, meaning the wife's ability to become pregnant and have children without being exposed to the dangers associated with the convergence of pregnancies. It is preventing her from medical reviews and good nutrition for the pregnant wife. • Vocal violence: It is represented by insulting and embarrassing the wife in front of others, not showing respect and appreciation for her and neglecting her. It is admiring others in her presence, humiliating her, mocking her and shouting at her. It is the most common type of violence that affects the self-concept of women and their sense of inferiority and low self-esteem. • Psychological violence: It is represented by rejection, neglect and humiliation through coercive and hostile practices that fall on women. These practices exemplify questioning the integrity of their minds and intelligence, reducing their abilities, thoughts and performance. This leads to fear, lack of control over events, depression, the unpredictability of the partner's behaviour, stress, despair, anxiety and low self-esteem. 
Its effects are exacerbated to include women's psychological health and their ability to socially adapt to the environment. • Social violence: This type of violence is represented by imposing a social siege on women, preventing them from social contact, exercising their productive roles and limiting their involvement in society. It affects their emotional reliability and social standing. Violence appears in the form of depriving the wife of work or continuing education and preventing her from visiting her family, friends and relatives. It also appears as interfering with her relationships, her choice of making friends, her relationships with neighbours, preventing her from expressing her opinions and interfering with the way she dresses. • Physical or economic violence: This is represented by depriving her of work, forcing her to do a job she does not like and seizing her personal property. Besides, it is represented by miserliness and depriving the wife of money, especially if the wife is working or not. Violence is practiced by depriving the wife of her salary or controlling the way it is spent. Forms of violence may leave a range of psychological effects and risks that affect the mental health of abused women. These effects include fear, lack of control over events, depression, stress, despair, anxiety, low self-esteem and addiction to drugs and alcohol (Davies, 2013). Violence against women is defined as any act of aggression against a woman that causes psychological, sexual or physical harm and suffering. It includes threats or deprivation of liberty compulsorily or arbitrarily, whether in public or private life. Causes of Violence against Women There are many reasons behind the practice of violence against women that motivate some husbands to use violence against their wives, which include but are not limited to the following: • Family and social disputes: Karadsheh (2010) stated that there are different causes of violence against women. These causes exemplify family, social conflicts, economic crises, alcohol and drug abuse, overlapping roles within the family and the low level of psychological flexibility of the aggressor and victim. These causes may lead to violence and psychological disorders among the aggressor. • The prevailing culture: The culture supports men's practice of violence and abuse of women as a form of masculinity, playing a major role in the occurrence of violence against women. This violence is to protect men from being described as the condemned or the weak. • The lack of communication and problem-solving skills is a reason for male violence against women. It is because of the inability to communicate, manage conflict effectively and listen to each other, which leads to solving problems they face either by withdrawing or using violence (Banat, 2006). • Low income: Al-Hiyasat (2016) conducted a study in which the results showed that the multiplicity of pressures faced by the family is one of the causes of violence against women. The family is subjected to financial pressure due to the husband's lack of a job opportunity. The husband, who faces pressure in his work, increases the incidence of violence. Most of the abused women were married to husbands of low income and this is the reason. The main cause of violence against women is a preoccupation with the means of technology and social communication. And then, there is an interference of the husband's family in the abused woman's affairs and her private life. 
• Alcohol abuse: Studies have indicated a relationship between husbands' alcohol abuse and wives' exposure to violence because these husbands consider that the reason may be under the influence of alcohol and do not realise what is happening to them (Wagman et al., 2018). • The media: Films that depict a man's ability to use his power and ability to harm women teach violence to people, and people imitate what they see. Depression Depressive disorder is one of the most widespread mental disorders among individuals. Its symptoms begin in adolescence and range from mild to very severe. Loneliness, sadness, a lack of sleep, participation in social activities and self-confidence are all psychological aspects of depressive disorder. Many researchers agree that depression is nothing but a product of the suffering person's approach to life and many therapeutic methods help reduce depressive symptoms (Gibson-Smith, Bot, Brouwer, Visser, & Penninx, 2018). Radwan (2004) states that there are many symptoms associated with depressive disorders, including: Symptoms of depression. 1-Mood factors are accompanied by feelings of loss of hope, enthusiasm or a decrease in enthusiasm. 2-The factor of self-accusation represents the concept of self-punishment and a sense of guilt. 3-The physical factor includes many physical complaints and sleep disorders. 4-Sad and hopeless mood. 5-Loss of sense of happiness. 6-Motivation disorders. 7-Internal unreliability. 8-Loss of appetite and weight. 9-Sleep disturbances. 10-Physical aches. There are many causes of depression, including the loss or death of a loved one; severe nervous pressure; living with other family members who suffer from depression; during major transitional periods in life, such as divorce or the transition from adolescence to adulthood; troubles and financial problems; poor health; problems in relationships with others; and lifestyle factors such as excessive alcohol or drug abuse (Al-Safasfa & Arabiyat, 2005). Related studies Patel, Weobong, Patel, and Singla (2019) pointed out that there are many consequences of practicing violence against women. For example, women lose their confidence, self-esteem, have isolated feelings and are withdrawn from social life. They are completely dependent on men and have psychological and humiliating feelings -lack of security and frustration -that may lead to thoughts of suicide. Most of the studies, conducted on abused women, have indicated that there is a correlation between violence and its effects because abused women are emotionally affected by the intensity and frequency of violence. The effects exemplify psychological disturbances, low self-esteem, shame, learned helplessness, depression, suicidal tendencies, inability to establish relationships with others and dispersal of thoughts (Ibrahim, 2010;Masoud, 2013;Tukaiev et al., 2019). The danger of violence against women creates a fertile environment for the production of negative thoughts, feelings and the formation of a low self-concept. Women produce negative thoughts directed towards themselves and others and feel helpless and inferior. They cannot cope with stressful events (Jonker et al., 2019). Learned helplessness is one of the most common and complex problems among abused women. It is because of its impact on women's psychological, cognitive and social development. It has a direct impact on psychological resilience, which plays a pivotal role in achieving psychological and social adaptation to them. 
This applies to their low level of psychological resilience (Koirala & Chuemchit, 2020;Lövestad, Löve, Vaez, & Krantz, 2017). Although there is no single definition of domestic violence, all definitions that deal with it agree that it is the abuse of a family member. Violence against women constitutes a violation of human rights and is one of the most important issues that affect the family, its cohesion and the safety of its members. The issue of violence against women falls within family violence, the disclosure of which is still considered by many societies as a violation of the family's privacy and disclosure of its secrets. Violence in the hands of the husband is common and society forces the abused woman to remain silent about the violence she is subjected to. She is blamed if someone outside the family is aware of the incident of violence (Karadsheh, 2010). Several field studies of humanitarian (NGOs) indicated that at least one out of three women is beaten and humiliated daily (Banat, 2006). The World Health Organisation also reported that nearly 80% of the female victims of homicide are killed by their husbands. And 90% of the women, in general, and abused women, in particular, are killed by weapons or sharp objects. Official statistics in Jordan indicate that the number of women exposed to violence is large and constantly increasing (with 163 cases of assault). 74 cases were of sexual assault and 62 cases were of physical assault in 1998. 312 cases of assault were recorded, including 261 cases of sexual assault and 91 cases of physical assault in 2010. There has been an increase in cases of abuse during the past 10 years. Following the statistics of the National Centre for Forensic Medicine, the centre deals with an average of 800 cases of sexual assault against women annually (Ibrahim & Al-Hiyasat, 2016). (2017) conducted a study entitled 'Violence Directed towards the Wife and Its Relationship to Life Satisfaction and Depression among Wives in Gaza'. The sample comprised 214 married women. To achieve the goal of the study, the researchers used the marital violence scale prepared by Sufian Abu Najila. The life satisfaction scale and the depression scale were prepared by Muhammad Ibrahim Eid. The results showed that there were statistically significant differences in the total degree of marital violence, life satisfaction and depression among abused women in Gaza. These differences were due to the weak economic situation and the place of residence variables. They were in favour of the wives residing with the mother-in-law. Also, there was the absence of statistically significant differences in the total degree of marital violence and life satisfaction among abused women at an average to a high degree. There was the absence of statistically significant differences in the total degree of marital violence and its dimensions among abused women. It was attributed to marriage years and the husband or wife's education variables. Al-Shawashrah and Mahmoud (2014) conducted a study entitled 'Rational Thoughts and Their Relationship to Depression among Abused Women in the Triangle Region'. The study sample comprised 93 abused women. The study results also showed that there were no statistically significant differences in the level of significance and depression among abused women. The absence of differences was due to profession, type of violence and educational level variables. The results also indicated that the level of irrational thoughts among abused women was average. 
There are no statistically significant differences at the level of significance (α = 1010) in the level of irrational thoughts among abused women. The results also showed a positive statistically significant correlation between depression and irrational thoughts among abused women. Du Rocher and Cummings (2014) conducted a study to identify the impact of marital conflicts and violence on psychological and emotional security, social adjustment and irrational thoughts. These were among a sample of 222 families in the United States of America. The results of the study indicated that marital conflicts and violence affect the social adjustment of families. Wilson, Feder, and Olaghere (2021) conducted a study to reveal the effect of irrationality and marital violence on women's traits and irrational thoughts. The results showed that abused women who experienced marital problems had little connection with family members and friends. The results showed that the dominant personality trait of abused women tended to be withdrawn and depressed, having irrational thoughts compared to women who were not exposed to violence. Pinnock and Daphne (2000) conducted a study to determine whether psychological factors and women's characteristics are related to violence and frustration or not. The study sample comprised 111 black women in Campbell City, in the United States of America, who were subjected to physical and psychological violence. Their ages ranged from 18 to 53 years. Many of them have completed 12 years of education. The Beck Depression Scale, the Health Response Scale and the Daily Distress Scale were used. The results showed that the women who were subjected to violence were characterised by anxiety, tension and depression. Problem of the study The problem of the study crystallised in the researchers' minds about their work as psychological and social specialists and volunteers in the Family and Child Protection Centres. These centres provide psychological support services and interview many abused women. The former discovered there is an impact of the reality that abused women live on their way of thinking, style, appreciating themselves and the psychological disorders they suffer from. The plans and programmes directed at this group are to be effective and help alleviate the psychological and social burdens resulting from women's exposure to violence. It is necessary to identify the level at which they have these problems. Hence, this study tried to answer the following questions. Thus, the researchers found the need to identify the level of depression among abused women in Jordan. The problem of the current study is determined in an attempt to answer the following questions: 1.What is the level of depression among abused women in Jordan? 2.Are there any statistically significant differences at the level of significance (α = 0.05) in the level of depression among abused women in Jordan? Are these differences attributed to the age, marital status and educational level variables? Purpose of the study The study aims to identify the level of depression among abused women in Jordan and investigate the existence of differences in the level of depression among abused women in Jordan following age, marital status, and educational level. Materials and method The current study used the descriptive approach due to its relevance to the subject and purposes of the study. Participants The study population comprised 100 abused women in Jordan. They were aged from 18 to 50. 
The study sample was chosen intentionally. Study instrument To achieve the objectives of the study, the Beck Depression List was used in its Arabised form by (Hamdi, Abu Hijleh, & Abu Talib, 1988). It was designed to measure depression among abused women. The list originally comprised 21 items and the total score on the list ranged from 1 to 23 degrees. It was noted that the boundary between the normal and the depressed in the original list is a degree 3. The level of depression as a whole is determined as follows: 0-9 = 'no depression'; 10-15 = 'low depression'; 16-30 = 'average depression'; and >30 = high depression. Scale validity Many researchers have found the validity and reliability coefficients of the Beck scale to suit the Jordanian environment. Among these studies, Jaradat (2012) and Al-Daasiseen's (2004) studies studied the psychometric properties of the Beck depression scale, showing the high degree of validity of this scale. Scale reliability The values of the reliability coefficients were extracted for the whole list. The researcher applied the Pearson correlation coefficient to an exploratory sample from outside the study. The sample comprised 30 abused women. The application was repeated on the same sample after a 2-week interval from the first application. The researcher used the test and retest method. The Pearson correlation coefficient was 0.89 and the Cronbach alpha equation was calculated on the sample's internal consistency degrees with a value of 0.91. Interviews To ensure the validity of the results, individual interviews were conducted with 15 abused women in Jordan. The following 5 questions were asked to the abused women: 1) Do you view life pessimistically? 2) Have you experienced a situation of frustration in your life? 3) Do others' perceptions affect you? 4) Are you too sad to bear? 5) Do you expect failure in every job you do? Study procedures After obtaining official approval from the Jordanian Women's Union Centre to conduct the study and collect data, the study's scale adopted the study and verified its validity and reliability by the methods specified for it. The selection of study members from the centres of the Jordanian Women's Union was conducted. Application of the study scale to the study members was next. After data were entered into the statistical programme to be statistically analysed, the results were interpreted and recommendations were made. Analysis The means and standard deviations of the performance of all sample members of the Beck depression scale were extracted. Also, multiple analyses of variance were used to extract the significance of differences in the average feelings of depression following the study variables: age, marital status and education level. Results related to the first question: 'What is the level of depression among abused women in Jordan?' To answer this question, the frequencies and percentages of the level of depression among abused women were extracted, as presented in Table 2. Table 2 shows that the level of depression among the study sample of abused women was high (70%). However, the level of depression among abused women was average (30%). Results related to the second question: 'Are there any statistically significant differences at the level of significance (0.05=α) in the level of depression among abused women?' These differences are due to age, marital status and educational level. 
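A minimal Python sketch of the computations described in this section — the depression bands of the Beck list, an internal-consistency (Cronbach's alpha) estimate, and the nonparametric group comparisons used in the analysis that follows — is given below. The simulated item scores and group data are illustrative placeholders, not the study's data.

import numpy as np
from scipy import stats

def classify_depression(total_score):
    # Bands as described above: 0-9 none, 10-15 low, 16-30 average, above 30 high.
    if total_score <= 9:
        return "no depression"
    if total_score <= 15:
        return "low depression"
    if total_score <= 30:
        return "average depression"
    return "high depression"

def cronbach_alpha(items):
    # items: array of shape (n_respondents, n_items) holding item scores.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1.0) * (1.0 - item_variances / total_variance)

# Simulated 21-item responses for 30 respondents (illustrative data only).
rng = np.random.default_rng(0)
trait = rng.normal(size=(30, 1))
items = np.clip(np.round(1.5 + 1.2 * trait + rng.normal(scale=0.6, size=(30, 21))), 0, 3)
totals = items.sum(axis=1)
print("Cronbach's alpha on simulated data:", round(cronbach_alpha(items), 2))
print("first respondent:", int(totals[0]), "->", classify_depression(totals[0]))

# Nonparametric comparisons of the kind used in the analysis section:
# Kruskal-Wallis across more than two groups, Mann-Whitney U across two groups.
age_groups = [rng.normal(32, 6, 40), rng.normal(33, 6, 35), rng.normal(31, 6, 25)]
h_stat, p_kw = stats.kruskal(*age_groups)
u_stat, p_mw = stats.mannwhitneyu(rng.normal(32, 6, 60), rng.normal(31, 6, 40))
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.3f}")
print(f"Mann-Whitney:   U = {u_stat:.1f}, p = {p_mw:.3f}")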
To answer this question, the arithmetic means and standard deviations were extracted and nonparametric tests were carried out, as shown in Table 3. The study members' age variable To answer this question, the arithmetic means, standard deviations and ranks' average were extracted. To find out to who these differences belong, the Kruskal-Wallis test analysis was adopted for comparison within the different groups following the age variable. Table 3 shows that the value of chi-squared for the total degree of depression scale was 1.816, which is a non-statistically significant value at the significance level of 0.05. This indicated that there were no differences in the total degree of depression following the study members' age variable. Marital status variable To answer this question, the arithmetic means, standard deviations and ranks' average were extracted. To find out to who these differences belong, the Kruskal-Wallis test analysis for comparison was conducted within different groups following marital status. Table 4 showed that the value of chi-squared for the total degree of depression scale was 0.533, which is non-statistically significant at the significance level of 0.05. This indicated that there were no differences in the total degree of depression following the marital status variable. Educational level variable To answer this question, the arithmetic means, ranks' average and the total rank was extracted. Mann-Whitney U test was used to compare the ranks of depression. Table 5 shows that the overall value of the Mann-Whitney U test for the educational deficiency scale was 37.5. Its z-value was -0.813, which is a non-statistically significant value at the significance level of 0.05. This indicated that there were no differences in the total degree of depression following the educational level variable. Semi-structured interviews to verify the validity of the study results To ensure the validity of the results, semi-structured interviews were conducted. This type of interview is the most common because it helps us collect a huge amount of information through which we can find out the problems the individual suffers from. Also, through this interview, we can re-ask the question in another way that helps the person know what the question means. Also, it is suitable for all age groups due to its verbal and non-verbal communication style. The duration of the interviews ranged from 45 to 60 minutes, during which aspects of the study were covered. These interviews were held in a specialised place within the centre that was provided in cooperation with the administrative commission. Confidentiality within the interview and creating an atmosphere of psychological safety and comfort were ways for abused women to freely answer any question asked or emphasised. The data has been unpacked to arrange, classify and analyse it. Thus, it is easy to identify the availability degree of this problem among abused women. The results showed a great agreement between the results of the interviews and the results of the scale that was applied to abused women. The arithmetic average score of the abused women's answers to these questions was 2.31 compared to the arithmetic mean of the depression scale as a whole (2.39), which indicated the validity of the scale. Table 2 shows that the level of depression was high among the sample of abused women. The researcher attributes this result to the effect of violence on depression since violence has many psychological effects. 
It begins with discontent with the person, to whom violence is directed, for not being appreciated or respected; so women resort to crying. The presence of irrational thoughts consequently leads to depression. Arab societies, in which the male is superior to the female, exemplify women's exposure to violence. It has become a prevalent custom as a result of gender discrimination. Discussion Women consider themselves one of the main factors for the existence of violence and persecution. Women accept them and consider tolerance and silence the safest solutions. This increases their depression rate and they are affected by the surrounding conditions, such as social, economic and cultural conditions. Thus, women, who are exposed to violence, remain silent and do not reveal what they are exposed. This may make them develop a depressive disorder. This result is consistent with the result of the study by Du Rocher and Cummings (2014) and Ahmed (2021). The results showed that abused women, who have been exposed to marital problems, have little connection with family members and friends. The results also showed that the dominant personality trait of abused women tends to be withdrawal and depression. It is because they have irrational thoughts compared to women who have not been exposed to violence. The study by Al-Shawasha and Mahmoud (2014) showed that the level of depression was high among women exposed to violence, Accordingly, the results indicated that there were no differences in the total degree of depression following the study members' age, marital status and educational level variables. Also, the results indicated that there were no differences in the total degree of depression following the variable of marital status. The researchers attributed this result to the fact that the relationship between the individual and his or her environment is interactive. During this continuous interaction, the personality of the individual is developed. And his or her behaviours take on a certain character, being modified by the experience he or she is going through. These behaviours result from the interaction of biological formation with environmental factors, especially social ones. Also, the reason for the occurrence of depression is exposure to harsh conditions and intolerance of those conditions. Besides, the psychology of women, who are sensitive to feelings, is affected by painful circumstances and experiences, regardless of their marital status. Thus, they are in constant need of tenderness and a sense of security and acceptance, and this result is consistent with a study by Al-Ibrahim (2010). The absence of differences in the total degree of depression, following the educational level variable, may be attributed to the soul's nature that is affected by different violence forms. The role of the educational level of women may not have an impact on the level of depression among abused women. The incidence of depressive disorder does not know the age and does not affect a group. This result is consistent with the result of Al-Shawashrah and Mahmoud' (2014) study. It showed that there were no statistically significant differences in the level of irrational thoughts among abused women due to the educational level variable. Conclusion The study's significance stems from two aspects: the theoretical importance and the practical importance: 5.1. Theoretical significance 1. 
The current study's significance comes from the importance of abused women, who are a key component of the marital and family system. Acts of violence in general, and violence against women in particular, have consequences in Jordanian society that are reflected on women and on society as a whole. This group is rarely targeted with dedicated studies and counselling programmes, despite the difficult psychological and social situation they are in. This study responds to contemporary educational transformations that pay attention to women and to violence against women, including the relationship between violence and learned helplessness. 2. The current study may support further studies on this topic. 3. It enriches descriptive studies related to abused women, given the scarcity of such studies in the Jordanian environment in particular and the Arab environment in general.
Practical importance
Counsellors and specialists working in the field of mental health can benefit from the current study by developing specialised counselling programmes. These programmes contribute to training abused women on how to face their psychological and social problems. The study may also help human rights and mental health workers identify the nature of abused women's suffering and the services they need, so that abused women are provided with the rehabilitation services required by their needs.
Recommendations
Based on the results of the study, the following are recommended: 1. Working to provide qualified psychological and social specialists for the early detection of the psychological effects and problems that may appear in abused women. 2. Building specialised counselling programmes, based on counselling and psychotherapy theories, concerned with rehabilitating abused women of different ages. 3. Building a variety of rehabilitation programmes to reduce the level of depression among abused women. 4. Conducting further studies that could extend research in the field of abused women, as it is a fertile subject for study that has not yet been given its due in scientific research.
Gurevich-Pitaevskii problem and its development
[Usp. Fiz. Nauk 191 52-87 (2021); Phys.-Uspekhi 64 48-82 (2021)]
We present an introduction to the theory of dispersive shock waves in the framework of the approach proposed by Gurevich and Pitaevskii (Zh. Eksp. Teor. Fiz., 65, 590 (1973) [Sov. Phys. JETP, 38, 291 (1974)]) based on the Whitham theory of modulation of nonlinear waves. We explain how Whitham equations for a periodic solution can be derived for the Korteweg-de Vries equation and outline some elementary methods to solve them. We illustrate this approach with solutions to the main problems discussed by Gurevich and Pitaevskii. We consider a generalization of the theory to systems with weak dissipation and discuss the theory of dispersive shock waves for the Gross-Pitaevskii equation.
INTRODUCTION
Any physical theory grows out of particular observations and attempts to interpret them, solving specific problems and gradually constructing generalizations. But at the same time, studies can be singled out in the development of each theory that served to transform a collection of particular results and vague ideas into a field of science, with its own physical ideas and tools that allow posing and solving problems characteristic of just that field. In the field of nonlinear physics, known under its modern name as the theory of dispersive shock waves (DSWs), this role goes to Gurevich and Pitaevskii's 1973 paper [1]. They formulated a general approach to constructing a theoretical picture of the formation and evolution of such waves based on the Whitham theory [2] of modulation of nonlinear waves, and solved several typical problems that yielded a quantitative description of typical DSW structures. The Gurevich-Pitaevskii problem can therefore be understood both as the general approach to the DSW theory proposed by these authors and as the particular problems that were posed and solved in [1] and have since found numerous applications in explaining various physical observations underlying the subsequent development of the theory. The aim of this paper is to give a sufficiently detailed introduction to that domain of nonlinear studies, concentrating on a detailed presentation of Gurevich and Pitaevskii's work [1] and related studies. But first we discuss the principal stages in the formation of the DSW theory that eventually resulted in the appearance of paper [1].
Dispersive shock waves are not very common in the world around us. Their first observations were apparently associated with the formation of wave-like structures near the tidal wave front when a wave was advancing sufficiently fast into river beds or narrow straits. This effect was called the undular bore and for an extended period of time was studied mainly by a dedicated community of researchers and engineers dealing with river hydrodynamics. Still, some fundamental facts about such bores have been revealed. In particular, the leading swell of water at the bore front was identified with a solitary wave that had first been observed by Scott Russell [3] and then explained by Boussinesq [4], Lord Rayleigh [5], and Korteweg and de Vries [6]. Benjamin and Lighthill [7] attempted to clarify the conditions under which the undular bore can be described as a modulated periodic solution of the Korteweg-de Vries (KdV) equation. It was then assumed that the modulation of a periodic solution, called the 'cnoidal wave' by the authors of [6], was caused by dissipative processes in the wave-like flow of the liquid.
It nevertheless transpired from those early works that explaining the formation of an undular bore requires taking the interplay of dispersion and nonlinearity effects into account for shallow-water waves, assuming an essential role of dissipation effects in explaining the wave modulation and the formation of turbulent bores at sufficiently high amplitudes of the tidal wave. However, the problem of a theoretical description of undular bores did not garner much attention outside the community of experts. For example, in classic books [8,9], where various phenomena related to water waves are described in detail, that problem is not even mentioned. The situation changed due to the development of modern nonlinear physics. Back the early 1960s, it became clear that solitary waves, or 'solitons' if using modern terminology, can propagate in different physical systems, in plasmas in particular [10,11], and the KdV equation has a universal character and finds applications in very diverse physical situations with weak dispersion and small nonlinearity. Soliton solutions of the equations of plasma dynamics, in both their original form and in the KdV approximation without dissipation, propagate with their shape being unchanged. If there is dissipation in the system, then propagation of shock waves becomes possible, such that the transition layer width is proportional to the dissipation level. Therefore, the width of such a layer can reach a magnitude of the order of the characteristic width of the soliton. Competition then occurs between dispersive and dissipative effects, and the transition layer is also formed due to the occurrence of a domain of solitontype nonlinear oscillations. As a result, we arrive at the notion of a shock wave in which the transition from one state of the plasma to another occurs via a stationary wave structure of strong nonlinear oscillations. The wave length in this structure is determined by the balance of dispersion and nonlinearity, and the general width of the shock wave, i.e., the characteristic length at which oscillations are modulated, is inversely proportional to the magnitude of dissipation effects. Such a picture of shock waves was proposed by Sagdeev [12], and it was observed in the evolution of ion-sound pulses in plasmas [13,14]. Gurevich and Pitaevskii took a different path to approach the problem. In the second half of the 1960s and early 1970s, they published (in part jointly with Pariiskaya) a series of papers [15][16][17][18], on the dynamics of rarefied plasmas in the framework of kinetic theory. In this theory, the plasma state is described by a distribution function of ions over positions and velocities, and hot electrons are in thermal equilibrium and are distributed over space in accordance with the Boltzmann distribution, with the potential determined by the Poisson equation, with the charge density equal to the difference between ion and electron charge distributions. Particle collisions are disregarded in this theory, and hence dissipative effects are absent, but it is nevertheless obvious that nonlinear and dispersive effects are entirely present. A characteristic feature of this problem setting compared with that considered above is that the focus is shifted to the non-stationary dynamics, different from the stationary propagation of periodic waves, solitons, or stationary DSWs, in which modulation of an oscillating structure was caused by dissipation. 
In their consecutive treatment of problems starting with a simple self-similar expansion of plasma into a vacuum [15,16] and further on to more complicated dynamics of simple waves [17], where the formation of an infinitely steep front of the distribution function had already been observed, Gurevich and Pitaevskii concluded in [18] that, in the kinetics of rarefied plasmas, the breaking of an analogue of a simple hydrodynamic wave leads to the formation of an evolving oscillation domain with the wavelength of the order of the Debye radius; moreover, if the wave amplitude is small (but not infinitesimally small), then the dynamics of that domain are described by the KdV equation, which, ignoring the dispersion, also leads to breaking solutions. A natural conclusion was that when taking dispersion into account the domain of multivaluedness is to be superseded by an oscillatory domain, with a series of solitons forming on its front in accordance with the balance between nonlinear and dispersive effects, whereas, farther away from the front, the oscillation amplitude decreases, and the solution approaches the dispersionless one. The list of references on the theory of the KdV equation given in [18], contains a reference to Whitham's paper [2]. Such were the preparations to create the DSW theory in [1]: on the one hand, the problem was reduced to the theory of waves satisfying the KdV equation, which made that paper part of the theory of nonlinear waves that was vigorously being developed at the time, and on the other hand, a new problem setup was focused on the question of non-stationary evolution of the wave after its breaking without taking dissipative processes into account. Just that problem was solved in [1] for waves whose evolution is governed by the KdV equation. Subsequently, this theory was extended to numerous other equations and has found diverse applications, ranging from the physics of water waves to nonlinear optics and the dynamics of the Bose-Einstein condensate. This is why paper [1] has many times been cited in both the physical and mathematical literature. In this paper, we present the basic ideas of Gurevich and Pitaevskii's approach to the DSW theory, while staying within methods that are standard for theoretical physics. KORTEWEG-DE VRIES EQUATION As noted in the Introduction, the KdV equation is a universal equation for nonlinear waves, which often arises in the leading approximation in small nonlinearity and weak dispersion. Because Gurevich and Pitaevskii's work that resulted in creating the DSW theory is written in the context of plasma wave physics, we here give a simple derivation of the KdV equation for ion-sound waves in a two-temperature plasma, with the electron temperature T e being much higher than the ion temperature. The thermal motion of ions can then be disregarded and their dynamics can be described by standard hydrodynamic equations, with the separation of ion and electron charges taken into account. We let ρ denote the number of ions per unit volume and M denote their mass, and assume for simplicity that they have a unit charge e and the plasma moves along the x axis with a speed u. As is known (see, e.g., [19]), such a plasma has an intrinsic parameter with the dimension of length, the Debye radius whose ratio to the characteristic wavelength determines the magnitude of dispersive effects (ρ 0 is the equilibrium density in the absence of a wave). For convenience, we discuss the nonlinear and dispersive effects separately. 
Small deviations from equilibrium are described by linear harmonic waves with ρ − ρ 0 , u ∝ exp[i(kx − ωt)], and we easily find their dispersion law as [19] where the choice of sign is determined by the wave propagation direction. Hence, it follows that dispersive effects are small when the wavelength 2π/k is much greater than the Debye radius r D . The first terms of the expansion in the small parameter kr D give where c 0 = T e /M is the speed of ion-sound waves in the long-wavelength limit. Each harmonic with dispersion law (4) satisfies the equation where we still understand u as the speed of the plasma flow. In the linear approximation, any pulse can be represented as a sum of harmonics, and therefore the evolution of any wave propagating in a certain direction is governed by Eqn. (4) the leading approximation in the dispersive effects. Plasma density perturbations ρ are then related to the flow speed u as with the same choice of sign as in (3). If the wavelength is much greater than the Debye radius, then charge separation can be ignored, the electron and ion densities coincide, and their deviation from the equilibrium density ρ 0 is related to the electric potential by Boltzmann's formula ρ = ρ 0 exp(eφ/T e ). Using it to eliminate the potential φ from the dynamic equations leads to a system of hydrodynamic equations [19], which describe the dynamics of an isothermal gas when the pressure p depends on the density ρ as p = (T e /M )ρ. The local speed of sound, determined by the formula c 2 = dp/dρ = T e /M = c 2 0 , coincides with the above speed of long linear waves and is independent of the local density. If we now consider some suitably arbitrary initial localized pulse, then, as is known from basic gas dynamics, it splits after some time into two pulses running in opposite directions. In each such wave, the local change in density δρ on the background of ρ is related to the local change in the flow speed δu as δρ ≈ ±(ρ/c 0 )δu, which follows from (5), whence ρ x = ±(ρ/c 0 )u x ; because the speed of sound is constant, we do not have to take its dependence on density into account in this case. Substituting this expression into (6) gives a nonlinear equation for smooth pulses with the dispersion disregarded: u t ± (c 0 + u)u x = 0. We have thus found two equations, (4) and (7), which separately describe the evolution of ion-sound waves in the case of either low dispersion or small nonlinearity. In both cases, the dispersive or nonlinear correction amounts to the addition of a small term, in the corresponding approximation, to the simplest equation u t ± c 0 u x = 0 for one-dimensional wave propagation. In the leading approximation, therefore, simultaneously taking both corrections into account amounts to combining them into a single equation. Assuming for definiteness that the wave propagates in the positive direction of the x axis, we obtain the KdV equation for ion-sound waves in plasma: To simplify the notation, it is convenient to transform this equation by introducing the dimensionless variables x = (x − c 0 t)/r D , t = c 0 t/(2r D ), and u = 3c 0 u . Substituting them into (8) and omitting the primes on the new variables, we obtain the currently most popular dimensionless form of the KdV equation: The coefficient 6 in front of the nonlinear term is chosen here so as to simplify the formulas in what follows. x u(x) Рис. 1: Evolution of a typical pulse in accordance with Hopf equation (10). 
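The display equations of this derivation appear to have been lost in the present copy. For orientation, the standard textbook relations consistent with the surrounding text read as follows (Gaussian units; this is a reconstruction, not a quotation of the numbered equations of the original):

\[
r_D=\left(\frac{T_e}{4\pi e^{2}\rho_{0}}\right)^{1/2},\qquad
c_{0}=\left(\frac{T_e}{M}\right)^{1/2},\qquad
\omega=\pm\frac{c_{0}k}{\sqrt{1+k^{2}r_{D}^{2}}}\approx\pm c_{0}k\left(1-\tfrac{1}{2}k^{2}r_{D}^{2}\right),
\]

together with the dimensionless form of the KdV equation quoted at the end of the paragraph,

\[
u_{t}+6uu_{x}+u_{xxx}=0 .
\]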
After the instant of breaking, t > t b , the distribution u(x, t) formally becomes a three-valued function of the coordinate x in the domain x − < x < x + . With dispersion ignored, Eq. (9) becomes the Hopf equation which is a dimensionless form of Eq. (7). It readily follows that u is constant along the characteristics x − 6ut = const, which are solutions of the equation dx/dt = 6u. Therefore, if the initial distribution u is described by a function u = u 0 (x) at t = 0 and x = x(u) is the inverse function, then the implicit solution of the Hopf equation is given by which describes the distribution u(x, t) at subsequent times. The most significant feature of these solutions is that the transfer speed of u values increases as u increases and, for typical initial distributions u 0 (x), the solution becomes multi-valued after a certain instant t = t b , as is shown in Fig. 1. Evidently, we have gone outside the applicability domain of the dispersionless approximation: at the instant of breaking t = t b , the derivative of the distribution with respect to x becomes infinitely large at the point x b , and the dispersion term with the third-order derivative in KdV equation (9) is by no means small in the vicinity of x b . As noted in the Introduction, taking dispersion into account suppresses this nonphysical behavior, and in the solution of the full KdV equation the multi-valuedness domain is superseded with an oscillatory domain evolving with time, i.e., a dispersive shock wave. Gurevich and Pitaevskii assumed that this oscillatory domain can be approximately represented as a modulated periodic solution of the KdV equation, which means that the next step in constructing the DSW theory must consist of deriving such periodic solutions which was done by Korteweg and de Vries themselves in [6]. Here, we give the necessary background. As usual, we seek a solution of Eq. (9) as a traveling wave u = u(ξ), ξ = x − V t, where V is the wave propagation speed; we then find that u(ξ) satisfies the ordinary differential equation u ξξξ = V u ξ − 6uu ξ , which, after two elementary integrations, takes the form of the equation where A and B are constants of integration. This equation has real solutions if the polynomial R(u) has three real zeros: ν 1 , ν 2 , and ν 3 with ν 1 ≤ ν 2 ≤ ν 3 . Evidently, the oscillating solution corresponds to the motion of u between two zeros in the interval where R(u) ≤ 0. The constants A, B, and V can be expressed in terms of ν 1 , ν 2 , and ν 3 as It now follows from Eq. (12) that the periodic solution of the KdV equation can be expressed as where the integration constant that is additive with respect to ξ is chosen such that u(ξ) takes the maximum value ν 3 at ξ = 0. Integral (15) can be standardly expressed in terms of elliptic integrals, and their inversion gives the dependence u = u(ξ) in terms of elliptic functions. Omitting the calculations that are routine for nonlinear physics, we get the result where sn is the elliptic sine, and the parameter m is defined as in accordance with the notation in handbook [20]. Using the identity sn 2 z + cn 2 z = 1 allows expressing this solution in terms of the elliptic cosine cn, which is why Korteweg and de Vries called their solution the 'cnoidal wave', similarly to the cosine wave in the linear theory. The properties of such a cnoidal wave are determined by the three zeros, ν 1 , ν 2 , and ν 3 , of the polynomial R(u). In particular, the speed of the wave V and the parameter m are expressed by formulas (14) and (17). 
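Because the explicit cnoidal-wave expression is easy to mistranscribe, the Python sketch below evaluates it in the standard form consistent with the description above, u(xi) = nu3 − (nu3 − nu2) sn²(sqrt((nu3 − nu1)/2) xi, m), with m = (nu3 − nu2)/(nu3 − nu1) and V = 2(nu1 + nu2 + nu3). This reconstruction follows the usual KdV conventions and should be checked against the original formulas (14)-(17).

import numpy as np
from scipy.special import ellipj, ellipk

def cnoidal_wave(x, t, nu1, nu2, nu3):
    # Reconstructed standard form of the periodic KdV solution (see lead-in above).
    m = (nu3 - nu2) / (nu3 - nu1)
    V = 2.0 * (nu1 + nu2 + nu3)
    arg = np.sqrt((nu3 - nu1) / 2.0) * (x - V * t)
    sn, cn, dn, ph = ellipj(arg, m)
    return nu3 - (nu3 - nu2) * sn**2

nu1, nu2, nu3 = 0.0, 0.3, 1.0
m = (nu3 - nu2) / (nu3 - nu1)
# Wavelength in the same reconstructed conventions: L = 2 K(m) * sqrt(2/(nu3 - nu1)).
wavelength = 2.0 * ellipk(m) * np.sqrt(2.0 / (nu3 - nu1))
x = np.linspace(0.0, 2.0 * wavelength, 9)
print("m =", round(m, 3), " wavelength L ~", round(wavelength, 3))
print(np.round(cnoidal_wave(x, 0.0, nu1, nu2, nu3), 3))

In the limit nu2 -> nu1 (m -> 1) the same expression reduces to a sech²-shaped pulse on the background nu1, in line with the soliton limit discussed in the text.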
The wavelength L can be defined as the distance between two neighboring maxima of u(ξ), and it is then expressed through the full elliptic integral of the first kind K(m) as The cnoidal wave amplitude can be defined by the relation Solution (16) passes into a harmonic linear-approximation wave for a small wave amplitude a ≪ ν 2 − ν 1 , when m ≪ 1. The wave number k = 2π/L = √(2(ν 2 − ν 1 )) and the phase velocity V = 2ν 1 + 4ν 2 = 6ν 2 − k 2 of the wave are then related as V = ω/k, which follows from the dispersion law ω = 6ν 2 k − k 3 that corresponds to the linearized KdV equation u t + 6ν 2 u x + u xxx = 0 for a wave propagating along the uniform state with u = ν 2 . In the opposite limit ν 2 → ν 1 and m → 1, the wavelength tends to infinity and sn(z, 1) = tanh(z), and hence solution (16) becomes In this case, the profile u = u(x − V t) has the shape of a solitary wave propagating along the uniform state u = ν 1 . Thus, in the limit m → 1, the periodic wave transforms into solitary pulses, or solitons (21), separated by an infinitely long distance. The fundamental assumption of Gurevich and Pitaevskii's approach to the DSW theory was that at sufficiently large times after the instant of breaking, when the length of the emerging oscillatory domain becomes much greater than the local wavelengths L, the DSW evolution can be represented as a slow variation of the parameters ν 1 , ν 2 , and ν 3 in a modulated cnoidal wave (16). The 'slowness' condition here means that the relative change in the modulation parameters ν 1 , ν 2 , and ν 3 or the equivalent variables is small either at distances of the order of the wavelength L or over a time of the order of one oscillation period. Thus, the problem of constructing the theory of DSWs reduces to deriving equations for the evolution of modulation parameters and to obtaining their solutions in specific physical situations. Fortunately, by that time, equations for the modulation of a cnoidal KdV wave had already been derived by Whitham [2]. Unfortunately, in both [2] and his later book [21], Whitham only gave the final result of the calculations, having omitted all the details. Because these calculations are highly nontrivial, we briefly describe them in Section 4 for completeness, but first, with methodological purposes in mind, we discuss a linear-approximation analogue of Whitham's modulation theory.
MODULATION OF LINEAR WAVES
A well-known result in the theory of modulation of linear waves is that the envelope of a modulated wave packet propagates with the group velocity of the carrier wave. Methods for deriving asymptotic solutions of linear equations have also been developed in much detail to describe such behavior of waves. But we look at problems of this sort from another standpoint, which is very transparent physically and allows an extension to the dynamics of nonlinear waves. As an example, we consider the evolution of a wave described by the linearized KdV equation u t + 6ν 2 u x + u xxx = 0 and having the initial shape of a 'step'. Because the term 6ν 2 u x can easily be eliminated by passing to the reference frame x′ = x − 6ν 2 t, t′ = t, we write the linear KdV equation as and take the initial condition in the form This problem can easily be solved exactly by the Fourier method, and the result can be brought to the form where Ai(z) is the standard notation for the Airy function [20]. As we can see, the wave profile depends only on the self-similar variable z = x/(3t) 1/3 (Fig. 2).
At large x, when z ≫ 1, the wave amplitude decreases exponentially into the 'shadow' domain, and in the opposite limit of large negative x, we can use the known asymptotic form of the Airy function to obtain (−z ≫ 1) The obtained results confirm the general idea that dispersive effects manifest themselves in oscillatory wave structures originating from pulses with sufficiently sharp fronts. But the shape of the resultant wave structure suggests another approach to its description. Both Fig. 2 and formula (25) suggest that, as x → −∞, this wave can be interpreted as a modulated harmonic wave with a variable wave number and variable frequency and amplitude of oscillations. We represent such a wave as where we introduce the wave phase. It is natural to define the wave number k(x, t) and the frequency ω(x, t) as which are locally related by the dispersion law ω = −k 3 that follows from linear KdV equation (22). In other words, wave (26) is locally a harmonic wave that is an exact solution of this equation if modulation is ignored. If we consider a piece of the structure with a fixed wave number k(x, t), it immediately follows from the first formula in (28) that this piece moves along the x axis with the group velocity v g = −3k 2 = dω/dk (29) in accordance with the known property of the group velocity. It is clear that this way of introducing the group velocity into the theory of modulation of linear waves has a general character. We assume that the modulated linear wave is represented as and that this wave is locally harmonic with good accuracy, with local values of the wave number and frequency defined as and related by the dispersion law for harmonic waves In view of (31), the consistency condition for cross derivatives of the phase (θ x ) t = (θ t ) x leads to the equation where V = V (k) is the phase velocity of the wave. Because a unit-length interval along the x axis contains 1/L = k/(2π) waves, Eq. (33) can be interpreted as the conservation law for the number of waves, with k playing the role of the density of waves and ω = kV the flux. Substituting dispersion law (32) into (33), we arrive at the equation which again states that the wave number k propagates at the speed v g (k) = ω′(k) and preserves its value along the characteristic x − v g (k)t = const. Therefore, if changes in the shape of the wave packet are disregarded, a wave packet made of harmonics with the wave numbers close to k = k 0 propagates with the group velocity v g (k 0 ) = ω′(k 0 ). We can now return to the problem of the decay of a step-like profile with initial distribution (23) and use Eq. (34) instead of the exact solution expressed in terms of the Airy function. The key role here is played by the observation that the initial distribution does not contain parameters with the dimension of length, but the original problem has some characteristic value of speed c 0 . Therefore, a solution of Eq. (34) can depend only on the self-similar variable ξ = x/t (in dimensional units, (x − c 0 t)/t). Having used this to find k = k(x/t), we can express the phase θ(x, t) from the equation θ x = k if we recall that the frequency ω = −θ t , which is a function of k, can also depend only on the self-similar variable. For the linear KdV equation, the obtained results immediately reproduce the known relations −3k 2 = x/t, k = θ x = −(−x/(3t)) 1/2 , θ = (2/3)(−x/(3t) 1/3 ) 3/2 . Thus, modulation equation (34) has allowed us to easily find some characteristics of the emergent wave structure.
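The exact solution discussed above is also easy to evaluate directly. Assuming that step (23) is u = 1 for x < 0 and u = 0 for x > 0, a Green's-function calculation gives u(x, t) = ∫_z^∞ Ai(s) ds with z = x/(3t) 1/3 , which is an equivalent rearrangement of formula (24); the numbers printed by the short check below illustrate the 'shadow' region and the slowly decaying oscillations.

import numpy as np
from scipy.integrate import quad
from scipy.special import airy

def u_step(x, t):
    """Solution of u_t + u_xxx = 0 with u(x,0) = 1 for x < 0 and 0 for x > 0 (assumed form of (23))."""
    z = x / (3.0*t)**(1.0/3.0)
    val, _ = quad(lambda s: airy(s)[0], z, np.inf, limit=200)   # airy(s)[0] is Ai(s)
    return val

t = 5.0
print(u_step(30.0, t))     # far into the 'shadow' domain: exponentially small
print(u_step(-5.0, t))     # near the front: order-one oscillations about the step height
print(u_step(-50.0, t))    # deep in the oscillatory region: amplitude decays as |z|**(-3/4)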
To derive the modulation equation for the amplitude a(x, t) of wave (30), it is natural to use the energy conservation law, because expansion of the wave structure with time leads to a redistribution of energy over a progressively larger volume, and in linear systems the energy density is proportional to the amplitude squared. After averaging over the wavelength, the local energy density a 2 (x, t) is transported with the group velocity v g corresponding to the local value of the wave number k, and we can therefore write the energy conservation law as In the case of a linear KdV equation and asymptotic regime (35) of the wave packet evolution, Eq. (36) becomes ta t + xa x = − 1 2 a. This can readily be solved using the standard method of characteristics, with the result a(x, t) = (1/ √ t)f (x/t), where f is an arbitrary function. Assuming that in the problem of the evolution of a step-like shape the amplitude also depends only on the same self-similar variable z = −x/(3t) 1/3 as the wave number k does, it is easy to find that f (x/t) = const · (−x/t) −3/4 , which defines the modulated wave shape up to a constant factor: Thus, we have reproduced the main features of solution (25) without relying on any information on the properties of the Airy function, but rather by just solving modulation equations (34) and (36) of the linear theory. Evidently, the idea of this approach involving the wave number conservation law and other conservation laws with averaged densities and fluxes allows a generalization to nonlinear waves. Exactly that was done by Whitham for modulated cnoidal waves of the KdV equation, and we discuss his theory in Section 4. WHITHAM THEORY We restrict ourselves to describing the general idea of Whitham [2] on averaging conservation laws in the simple case where the evolution of a wave is described by a nonlinear equation for a single variable u, Φ(u, u t , u x , u tt , u tx , u xx , . . .) = 0. We assume that Eq. (37) has traveling-wave solutions when u(x, t) depends on x and t only through the combination u = u(ξ), ξ = x−V t, and for such solutions, Eq. (37) can be reduced to the form where A i is a collection of parameters occurring in deriving (38) from (37). In a periodic traveling wave, the variable u oscillates between two zeros of F (u). We let u 1 (V, A i ) and u 2 (V, A i ), with u 1 < u 2 , denote these zeros, assuming that F is positive in the interval u 1 < u < u 2 . Obviously, the wavelength is and the wave number k and the frequency ω are where we dropped the factor 2π in the definition of the wave number because it is only needed in the nonlinear theory for maintaining correspondence with the lowamplitude limit, and this factor can easily be restored whenever necessary. As a result, the wave number k becomes exactly equal to the density of the number of waves. In a modulated wave u(ξ; V, A i ), the parameters V and A i are slowly varying functions of x and t, changing little over distances of the order of the wavelength L and over a time of the order of 1/ω. This implies that there is an interval ∆, much longer than the wavelength L but much shorter than a certain size l characterizing the wave structure overall: L ∆ l. It is clear that, up to small quantities of the order of ε ∼ ∆/l, averaging over the interval ∆ is equivalent to averaging over the wavelength L. 
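As a concrete illustration of averaging over the wavelength, the mean value of the cnoidal wave of Section 3 over one period can be computed numerically and compared with its closed-form expression through elliptic integrals, ⟨u⟩ = ν 1 + (ν 3 − ν 1 )E(m)/K(m); this expression is quoted here as the standard result for comparison only, while the averages actually needed for the derivation are introduced below.

import numpy as np
from scipy.special import ellipj, ellipk, ellipe

# Period average of the cnoidal wave, computed on a grid spanning exactly one wavelength,
# versus the elliptic-integral expression <u> = nu1 + (nu3 - nu1) E(m)/K(m).
nu1, nu2, nu3 = 0.0, 0.3, 1.0
m = (nu3 - nu2) / (nu3 - nu1)
width = np.sqrt((nu3 - nu1) / 2.0)           # inverse width parameter of the wave
L = 2.0 * ellipk(m) / width                  # one wavelength

xi = np.linspace(0.0, L, 20000, endpoint=False)   # uniform grid over one period
u = nu3 - (nu3 - nu2) * ellipj(width * xi, m)[0]**2

print(u.mean())                                        # numerical period average
print(nu1 + (nu3 - nu1) * ellipe(m) / ellipk(m))       # elliptic-integral expression

In the soliton limit m → 1 the average tends to ν 1 , and in the harmonic limit m → 0 it tends to ν 3 , in agreement with the matching conditions used later.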
Therefore, we average physical quantities over fast oscillations in the wave in accordance with the rule If a conservation law P t + Q x = 0 is known, then, after the averaging, it takes the form where the dependence on x and t is only present in slowly varying modulation parameters V and A i that enter the averaged quantities. We can regard Eqs (42) as differential equations for these parameters, similarly to how we viewed modulation equations in the linear theory. We can now turn to the derivation of the modulation equations for the cnoidal KdV wave. In a weakly modulated wave, the parameters A, B, V or ν 1 , ν 2 , ν 3 become slowly varying functions of x and t, and we wish to find the equations governing the evolution of these parameters. Calculations can be simplified by recalling that one of the modulation equations is already known. Replacing the elliptic function argument in periodic solution (16) with the phase θ that can be defined up to an appropriate numerical factor, we introduce local values of the wave number and frequency via formulas (31), just as in the linear case; they must then satisfy the conservation law for the number of waves in Eq. (33). In a weakly modulated wave, the values of k and ω are given by Eqs. (40) with variable parameters V and A i , and hence variations of these parameters under the evolution of the wave must satisfy the equation As two missing modulation equations, we use the averaged conservation laws: which can be straightforwardly verified by substituting u t from the KdV equation. We first derive the modulation equations for the parameters A, B, and V . Following Whitham, we express the averaged quantities in terms of the function where the integral is taken over a closed contour encompassing the interval ν 2 ≤ u ≤ ν 3 . The wavelength L = 1/k is then expressed through W as We readily calculate the averaged quantities: The second derivatives u ξξ can be eliminated from the conservation laws with the help of the formula u ξξ = B +V u−3u 2 . After simple calculations using the relation kW A = 1 and the averaged values found above, we obtain the averaged conservation laws: Having substituted k = 1/W A and introduced the 'long' derivative D/Dt = ∂/∂t + V ∂/∂x, we obtain the modulation equations the first of which is the conservation law (43) with the wave number expressed as k = 1/W A . Despite the apparent simplicity of the obtained equations, they are not extremely useful in practice. We therefore reexpress them in terms of ν 1 , ν 2 , and ν 3 . From (14), we find the relations between differentials: Hence, Eqs. (49) expressed in the variables ν 1 , ν 2 , and ν 3 take the form where all the derivatives ofW are represented by integrals similar to (45) and (47). As a clue to further transformations, we note that the right-hand sides of Eqs. (50) contain the same factor W A . Therefore, their linear combinations can be found such that the coefficient in front of one of the derivatives vanishes and the other two coefficients become equal. Indeed, we multiply the first equation in (50) by p, the second by q, and the third by r, add them, and choose the parameters p, q, and r such that the coefficient in front of ν 1,x vanishes and the coefficients in front of ν 2,x and ν 3,x become equal: It immediately follows from these conditions that and we can hence set r = −2, to obtain p = ν 1 ν 2 + ν 1 ν 3 − ν 2 ν 3 , q = 2ν 1 , and r = −2. The right-hand side of this linear combination of Eqs. 
(50) then takes the form Hence, it follows that, if in a similar linear combination of the left-hand sides of Eqs. (50) the coefficient in front of Dν 1 /Dt vanishes and the coefficient in front of Dν 2 /Dt and Dν 3 /Dt are equal to each other, then the modulation equations take a very simple 'diagonal' form. With the help of the identity which is easy to verify, we obtain because the integrand is a total derivative of a periodic function, and the first condition is thus satisfied. The coefficients in front of Dν 2 /Dt and Dν 3 /Dt have the respective forms and their difference, being an integral of the derivative of a periodic function over the period, vanishes: Hence, K 2 = K 3 , and the combination ν 2 + ν 3 is a convenient modulation variable for which the modulation equations are dramatically simplified. The emerging coefficient K 2 = K 3 in front of D(ν 2 + ν 3 )/Dt can also be expressed in terms of W A . Indeed, K 2 and K 3 can be represented as But the second terms on the right-hand sides vanish due to identities quite similar to those used above, and the remaining non-vanishing terms can be easily brought to the form The equality K 2 = K 3 then leads to the identity substituting which in any of the equations in (52) gives We now equate the left-hand side of our linear combination to its right-hand side in (51) to obtain the equation Cyclic permutations of ν 1 , ν 2 , and ν 3 give two other Whitham modulation equations: Each of the equations obtained by Whitham involve derivatives of only one of the quantities ν 2 + ν 3 , ν 3 + ν 1 , and ν 1 + ν 2 , which means that the equations have acquired a diagonal form. Therefore, the above transformation is similar to the transition from the standard form of gas-dynamic equations to their diagonal form in terms of different variables, called Riemann invariants (see, e.g., [22]). We therefore define the new modulation variables, the Riemann invariants r 1 ≤ r 2 ≤ r 3 of Whitham modulation equations, as and express the other variables through them. In particular, we find and similar formulas for W A /W A,ν2 and W A /W A,ν3 . Finally, because we can represent Whitham equations as with the characteristic velocities where ∂ i ≡ ∂/∂r i . Because formula (18) for the wavelength becomes substitution of (59) into (58) using the known expression for the derivative of the elliptic integral K(m) (see, e.g., [20]) allows expressing the velocities v i as where E(m) is the full elliptic integral of the second kind. This is just the form of modulation equations for cnoidal KdV waves arrived at by Whitham in [2]. The possibility of transforming a system of three first-order equations to diagonal form is a highly nontrivial fact. Fortunately, Whitham was unaware of a theorem stating that such a transformation is in general impossible in systems of more than two equations (see, e.g., [23]). In [21], Whitham himself refers to the possibility of such a transformation as miraculous. It turned out later that, in this case, such a transformation is made possible by the remarkable mathematical property of 'complete integrability' of the KdV equation, discovered two years later [24]. If a solution r i = r i (x, t), i = 1, 2, 3, of Whitham equations for some specific problem is found, then the DSW profile can be determined by substituting this solution into the periodic solution, which in the new variables (Riemann invariants for the system of Whitham modulation equations) takes the form with wavelength (59). 
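For practical calculations it is convenient to have velocities (60) in a directly computable form. One standard explicit version, which should reduce to (60) with this section's conventions, is v 1 = 2σ 1 − 4(r 2 − r 1 )K/(K − E), v 2 = 2σ 1 − 4(r 2 − r 1 )(1 − m)K/(E − (1 − m)K), v 3 = 2σ 1 + 4(r 3 − r 2 )K/E, where σ 1 = r 1 + r 2 + r 3 . A short script checking the limiting behavior used repeatedly below:

import numpy as np
from scipy.special import ellipk, ellipe

def whitham_velocities(r1, r2, r3):
    """Characteristic velocities of the KdV-Whitham system expressed through K(m) and E(m)."""
    m = (r2 - r1) / (r3 - r1)
    K, E = ellipk(m), ellipe(m)
    s = 2.0 * (r1 + r2 + r3)
    v1 = s - 4.0*(r2 - r1)*K / (K - E)
    v2 = s - 4.0*(r2 - r1)*(1.0 - m)*K / (E - (1.0 - m)*K)
    v3 = s + 4.0*(r3 - r2)*K / E
    return v1, v2, v3

# Harmonic limit (m -> 0): v1 = v2 -> 12 r1 - 6 r3, while v3 -> 6 r3 (the Hopf velocity).
print(whitham_velocities(0.0, 1e-6, 1.0))          # approximately (-6, -6, 6)
# Soliton limit (m -> 1): v2 = v3 -> 2(r1 + 2 r3) = 4, while v1 -> 6 r1 = 0
# (the last limit is approached only logarithmically in 1 - m).
print(whitham_velocities(0.0, 1.0 - 1e-9, 1.0))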
As r 2 → r 3 , with L → ∞, we obtain the soliton limit: and in the small-amplitude limit r 2 −r 1 r 2 , the cnoidal wave becomes harmonic: with the wavelength π/ √ r 3 − r 1 , which coincides with the m → 0 limit of (59), as it should. Whitham equations, even if used alone, allow substantial progress in the description of the DSW formation in specific problems, and investigations of this kind were initiated in Gurevich and Pitaevskii's work [1]. GENERALIZED HODOGRAPH METHOD It was Riemann who made the following observation regarding the equations of gas dynamics. For arbitrary one-dimensional flows with the gas density ρ = ρ(x, t) and the flow velocity u = u(x, t) being functions of the coordinate x and time t, the so-called hodograph transformation making x and t functions of Riemann invariants expressed through ρ and u linearizes the equations for x and t; they then allow solutions in a form quite convenient in applications. Whitham modulation equations (57) are similar in form to the equations of gas dynamics after the transformation to the diagonal form, and it is therefore natural to try to apply a similar method to solve Whitham equations. Such a 'generalized hodograph method' was proposed in a very general form by Tsarev [36] as a strategy to solve hydrodynamictype equations with more than two dependent variables. We give some elementary prolegomena to this method, which were used by Gurevich and collaborators to solve Whitham's equations (57) in the Gurevich-Pitaevskii problem. In the simplest case of Hopf equation (10), which is the dispersionless limit of the KdV equation, it is easy to express solution (11) through the initial distribution of u. We now have three equations (57) of a similar form, and we can seek their solution in a similar form: where the w i (r) are the functions to be determined. Differentiating these relations with respect to r j , we obtain −(∂v i /∂r j )t = ∂w i /∂r j , i = j,, where we can eliminate t using (64) As a result, we see that the functions w i must satisfy the Tsarev equations Therefore, if we find the general solution w i (r) of these equations for the given v i (r), we obtain the general solution (64) of Whitham equations (57), which can then be specified for any particular problem. We can find a way to solve Eqs. (65) if we note that these equation can be represented as compatibility conditions for Whitham's equations (57) and some auxiliary equations, for the evolution of Riemann invariants depending on a fictitious 'time' τ with formal 'velocities' w i (r j ). After simple transformations, the condition ∂ 2 r i /∂τ ∂t = ∂ 2 r i /∂t∂τ then gives the equation w j ∂v i /∂r j + v i ∂w i /∂r j = v j ∂w i /∂r j + w i ∂v i /∂r j , which is equivalent to (65). Regarding w i (r) as an analogue of the Whitham velocities, it is natural to seek the solution w i of Tsarev equations in a form similar to (58), [26] Using the expressions v i = 2σ 1 − 2(∂ i ln L) −1 , σ 1 = r 1 + r 2 + r 3 , we represent Eq. (67) as and after a simple calculation arrive at where ∂ ij = ∂ 2 /∂r i ∂r j . Substituting these expressions into Eqs. (65) yields equations for W : To simplify, we define the polynomial where r is an arbitrary parameter, and use the easily verified identity It follows from (18) that, up to an inessential factor, the wavelength is L = dr/ Q(r), where the integral is taken along a closed contour encircling the interval between two zeros r 1 and r 2 of Q(r). Therefore, integrating Eq. 
(71) along the same contour, we obtain the relation Substituting (58) on the right-hand side of (65), after simple transformations using the established identities, we obtain a system of equations for the potential W : These equations are called the Euler-Poisson equations, and they are the subject of a vast mathematical literature. We here restrict ourselves to the simplest facts that allow us to solve several interesting problems from the Gurevich-Pitaevskii theory for the DSW dynamics. We first note that comparing Eq. (73) with identity (71) implies that expression (74) is a solution of Eqs. (73) dependent on an arbitrary parameter r. We hence immediately conclude that (74) can be considered the generating function of particular solutions W (k) (r 1 , r 2 , r 3 ) given by the coefficients of the expansion of W in inverse powers of r. When these are substituted into (68), we obtain particular solutions (64) of Whitham's equations in implicit form. These simplest solutions now allow describing the behavior of DSWs in several characteristic instances of the Gurevich-Pitaevskii problem, to which we restrict ourselves in this paper.
GUREVICH-PITAEVSKII PROBLEM SETUP
To present the general physical ideas regarding the problem setup within the Gurevich-Pitaevskii approach to the DSW theory, we consider the results of a numerical solution of the KdV equation with the initial distribution given by a 'tabletop' with somewhat rounded edges: In our dimensionless variables, the characteristic dispersion length is equal to unity, and we have therefore chosen the initial tabletop of a sufficiently large width 2l 0 , such that the width of the forming DSW could also grow large, and the applicability condition of the Whitham averaging method would safely hold for t ≫ 1. As can be seen from Fig. 3, as a result of the evolution of an initial distribution close to the one in (75), two structures form on its edges. At the trailing edge, a rarefaction wave forms, which, ignoring the dispersion, would be described by the hydrodynamic (dispersionless) approximation. The leading edge of distribution (75) forms the domain of oscillations, i.e., the DSW, and we must find a suitable way to describe it in the hydrodynamic limit of vanishing dispersion. It is useful to briefly discuss here how a similar problem is solved in the theory of viscous shock waves (see, e.g., [22]). As is known, in media with weak dissipation, the wave breaking shown in Fig. 1 is eliminated due to the formation of a very thin transition domain between two states of the medium flow. Inside this domain, strong irreversible processes occur that are determined, for example, by the viscosity and heat conduction of the gas, but, farther away from this transition domain, the flow rapidly becomes an ideal gas flow, where any irreversible processes can be disregarded. In the limit of vanishing viscosity, heat conduction, and other characteristics of dissipative processes, the thickness of the transition domain in our macroscopic description tends to zero and we can replace it with a discontinuity surface of the hydrodynamic variables, with the flow considered dissipation-free on both sides of the surface. The characteristics of the flow and of the thermodynamic state of the gas must satisfy the conditions of mass, momentum, and energy conservation in the transition across the discontinuity, which determine the law of motion of the discontinuity. In our case of interest, DSWs, we must make a similar transition to the hydrodynamic limit of vanishing dispersion.
Instead of a discontinuity surface, we now have a domain of oscillations with a vanishing wavelength inside it, and the dynamics of this domain are described by Whitham modulation equations, which on 'macroscopic' scales also have the form of hydrodynamic first-order partial differential equations. Similarly to the case of a usual shock wave, we must incorporate a solution of these equations into the solution of the dispersionless Hopf equation, such that the smooth dispersionless solution continuously matches the averaged characteristics of the modulated oscillating solution. It is obvious that, on the soliton edge of a DSW, this implies that the leading soliton must propagate over the background described by a smooth solution at the matching point. The situation is more delicate at the low-amplitude edge, where we should apparently expect matching with the solution of linear modulation equations (33) and (36). But in the limit of vanishing dispersion, the wave amplitude tends to zero at the matching point and Eq. (36) is satisfied in that limit automatically. Still, the conservation law for the number of waves in Eq. (43), which we used in deriving Whitham equations, turns into its linear limit (33) at the matching point. Therefore, the small-amplitude edge of the DSW moves over a smooth background with some group velocity, which in Whitham's modulation theory becomes a hydrodynamic variable characterizing the DSW. Indeed, taking the limit of vanishing dispersion can be formally regarded as a rescaling, i.e., a transition to 'slow' variables X = εx and T = εt, such that the KdV equation becomes u T + 6uu X + ε 2 u XXX = 0, the wavelength acquires the order of magnitude L ∼ ε, and in the limit ε → 0 the last equation passes into the Hopf equation. In that same limit, the parameter ε drops from the expression for the group velocity v g = −3ε 2 k 2 ∼ (ε/L) 2 ∼ 1, and hence the velocity of the small-amplitude DSW edge is determined only by the values of modulation parameters characterizing the DSW envelope. We emphasize that the DSW picture described here, as proposed by Gurevich and Pitaevskii, is substantially different from the earlier proposals by Benjamin-Lighthill and Sagdeev, according to which the DSW had a stationary character and its overall characteristics were determined by the mandatory existence of weak dissipation, which competed with dispersion. We return to that picture of the transition to the stationary DSW with dissipation taken into account in Section 12. We thus assume that the breaking nonlinear solution of the dispersionless Hopf equation, Eq. (11), is modified by dispersion effects, such that, instead of a multi-valuedness domain, the domain x L < x < x R of wave oscillations occurs in the distribution u(x, t), with its evolution governed by Whitham modulation equations. Outside the domain x L < x < x R , the wave can be described by the smooth solution of the Hopf equation in Eq. (11), and inside it, the DSW is described by expression (61) with good accuracy, with the parameters r 1 , r 2 , and r 3 being a solution of Whitham equations (57). This solution must satisfy boundary conditions that ensure matching with the smooth solution. 
To clarify the matching conditions, we note that, at these limit points, the average of u(x, t) over wavelengths, can be expressed as In other words, on the right edge, the value r 1 of the background over which soliton (62) is moving is equal to the value of the dispersionless solution u(x R , t) at that point; on the left edge, the background value r 3 of smallamplitude limit (63) equals the u(x L , t) value of the same dispersionless solution. In accordance with the foregoing assumptions, on the right edge x R (t), the DSW turns into a sequence of solitons, and we have r 2 = r 3 , (m = 1) in that case. On the left edge x L (t), with small amplitude of oscillations, we set r 2 = r 1 , (m = 0). The coincidence of two Riemann invariants leads to the equality of the corresponding Whitham velocities (60) at the DSW edges. We obtain and It then follows that, on the trailing edge x = x L (t), where the wave u(x, t) and its averaged value coincide with the Riemann invariant r 3 , its evolution is determined by the limit of Whitham equation (80) which coincides with Hopf equation (10) for u(x, t) in the dispersionless limit. Similarly, on the leading front x = x R (t), where the averaged value u(x, t) coincides with the Riemann invariant r 1 , its evolution is determined by the same Hopf equation: We can thus conclude that the boundary condition is satisfied at the trailing edge of the DSW, and the condition is satisfied at the leading edge. Here, r L and r R are the values that solution (11) of the Hopf equation, which corresponds to the initial profile r = u 0 (x), takes at the DSW matching points. For the solution of form (64), the DSW endpoints must match solution (11) of the Hopf equation, and boundary conditions (82) and (83) can be represented as If we manage to find a solution of Whitham equations (57) satisfying the stated conditions, then we obtain the functions r 1 , r 2 , and r 3 in the entire domain x L (t) < x < x R (t) and therefore describe the oscillating wave envelope for the entire DSW. Before proceeding to solutions of specific problems, we note that Whitham equations, as follows from their homogeneity, have self-similar solutions of the form where γ is an arbitrary self-similarity exponent and R i (z) is a solution of the system of ordinary differential equations where i.e., v i (R) is expressed through R i by the same formulas that express v i (r) through r i . This remark allows finding useful classes of solutions describing DSWs for some especially chosen initial conditions. EVOLUTION OF THE INITIAL DISCONTINUITY IN THE KORTEWEG-DE VRIES THEORY We begin with the simplest example [1], similar to the problem of the evolution of step-like profile (23) in the theory of the linear KdV equation. To simplify formulas, we use the fact that the KdV equation is invariant under the Galilei transformations Using these transformations, the initial step-like profile of an arbitrary amplitude can be represented as In the dispersionless approximation, we obtain the formal solution of the Hopf equation, According to Gurevich and Pitaevskii, a DSW emerges instead of this domain when taking dispersion into account, with the DSW evolution governed by Whitham's equations. In Whitham's hydrodynamic approximation, initial conditions contain no parameters of the dimension of length, and hence the solution of modulation equations must be self-similar (see (86 with γ = 0), i.e., r i = r i (z), z = x/t, where r i (z) satisfy the differential equations (v i − z) · dr i /dz = 0 (see (87)). 
On the trailing edge z = z L , where the oscillation amplitude tends to zero, we have r 1 = r 2 , and the averaged value u coincides with u = 1 (see (77)); hence, the boundary condition r 1 (z L ) = r 2 (z L ), r 3 (z L ) = 1 must hold. On the leading soliton front z = z R , where r 2 = r 3 and the averaged value u = r 1 vanishes, we have another boundary condition: It is easy to see that we obtain a solution satisfying both boundary conditions if we set Then, m = (r 2 − r 1 )/(r 3 − r 1 ) = r 2 and the last equation in (89) determines the dependence of the self-similar variable z = x/t on r 2 . Taking the limit r 2 → 0, we find the value of the self-similar variable on the trailing edge: which means that the oscillation domain expands into the unperturbed domain of the pulse with the speed s L = v g = −6 equal to the group velocity of linear waves on the constant background u = 1 with the dispersion law ω = 6k − k 3 . Indeed, the group velocity dω/dk = 6 − 3k 2 is v g = −6 for the wavelength equal to L(0) = π in accordance with (59), i.e., for k = 2π/L = 2. On the leading front, we have r 2 → 1 and Eq. (90) implies that and hence this DSW edge moves with the soliton speed s R = V s = 4r 3 = 4. The amplitude of the leading soliton is twice the amplitude of the step-like profile. The dependence of r 2 = m on the variable z′ = 4 − z, |z′| ≪ 1, near the leading front is determined by the equation z′ ≅ 2(1 − m) ln(16/(1 − m)), which gives 1 − m ≅ z′/(2 ln(1/z′)) with logarithmic accuracy. Therefore, the distance between solitons near the leading front (where m → 1) grows only logarithmically. Overall, the dependence of r 2 = m on z is shown in Fig. 4(a). Substituting the values of Riemann invariants into formula (61) gives an expression for u(x, t) in a DSW: with the dependence x(r 2 ) at a fixed instant t determined by Eq. (90). Therefore, the envelope of the maxima is given by the function u max = 1 + r 2 , and the envelope of the minima, by the function u min = 1 − r 2 . In Fig. 4(b), they are shown with dashed lines. As we can see, Whitham's theory is quite good at describing the DSW at a moderate value t = 15, and it can be verified that the accuracy increases as t increases. Whitham's theory correctly predicts the wave number value corresponding to the small-amplitude edge of a DSW.
BREAKING OF THE WAVE WITH A PARABOLIC PROFILE
In Section 7, we considered the simplest Gurevich-Pitaevskii problem of the formation of a DSW from a very particular initial profile, a jump-like discontinuity. Although some interesting problems can be reduced to this idealized case, including the problem of DSW generation in a flow past an obstacle [37,38], it is rather remote from the typical wave breaking patterns. As is known (see, e.g., §101 in [22]), there are two main breaking scenarios for a simple wave. In the first scenario, the wave propagates into a quiescent medium and at the instant of breaking the distribution of the wave perturbation acquires a vertical tangent on the interface with the quiescent medium. In the most typical case, the wave amplitude then vanishes in accordance with a square-root law. In the second, more common, scenario, the breaking occurs as a result of the evolution of the distribution with an inflection point: at the instant of breaking, in the dispersionless approximation, this profile also acquires a vertical tangent at the inflection point, and in typical situations can be represented by a cubic parabola. In this section, we consider the first wave breaking scenario, and in Section 9 turn to the second.
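Before doing so, we note that the self-similar solution of the step problem just described is trivial to tabulate numerically: with r 1 = 0, r 3 = 1, and r 2 = m, each value of m is located at z = v 2 (0, m, 1), and the local envelopes of the oscillations are u max = 1 + r 2 and u min = 1 − r 2 . A sketch using the explicit form of v 2 assumed earlier:

import numpy as np
from scipy.special import ellipk, ellipe

def z_of_m(m):
    """Self-similar coordinate z = x/t of the point where r2 = m (decay of a unit step)."""
    K, E = ellipk(m), ellipe(m)
    # v2(r1 = 0, r2 = m, r3 = 1) written through K(m) and E(m)
    return 2.0*(1.0 + m) - 4.0*m*(1.0 - m)*K / (E - (1.0 - m)*K)

m = np.linspace(1e-6, 1.0 - 1e-9, 2000)
z = z_of_m(m)
print(z[0], z[-1])                 # edges: z_L ~ -6 (harmonic edge) and z_R ~ 4 (soliton edge)

t = 15.0
x = z*t                            # positions of the modulation parameter m at t = 15
u_max, u_min = 1.0 + m, 1.0 - m    # envelopes of the oscillations inside the DSW
# (x, u_max) and (x, u_min) correspond to the dashed envelopes in Fig. 4(b).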
We thus assume that at the instant of breaking t = 0, the pulse amplitude vanishes in accordance with a squareroot law, Using Galilei and scaling transformations, we can bring x| t=0 ∝ −u 2 to this simple dimensionless form. The solution of the Hopf equation with initial condition (95) is (see (11)) showing that this solution has a domain of multivaluedness for 0 < x < 9t 2 after the instant of breaking t > 0. According to the Gurevich-Pitaevskii theory, when dispersion effects are taken into account, this multivaluedness domain is superseded by a DSW that occupies the domain x L ≤ x ≤ x R . On its small-amplitude trailing edge x L , the DSW matches solution (96) (see (84) It hence follows that we must seek solution (64) with the functions w i that are quadratic in the Riemann invariants in the limit m → 0. Velocities of this type with power-law dependences on the Riemann invariants as m → 0 occur in studying generating function (74), and the required quadratic dependence corresponds to the coefficient W (2) (r 1 , r 2 , r 3 ) at r −2 . Thus, we take w i (r) in form (68) with W = W (2) , which, in view of the linearity of the Euler-Poisson equations, can be multiplied by an arbitrary constant factor C: A specific value of C is determined by the condition of matching with a smooth solution on the small-amplitude DSW edge, where r 3 = u L . On the leading soliton edge x R , the averaged amplitude then vanishes, and this condition yields r 1 = 0 and r 2 = r 3 . Hence, we can satisfy the boundary conditions by taking r 1 ≡ 0 and choosing the constant C such that condition (97) holds. Calculating w 3 at m → 0, we obtain w 3 = − 15 2 Cr 2 3 , and it therefore follows from the matching condition that C = 2/15. Finally, we obtain formulas for a solution of Whitham's equations [25,30] x where W = 2r 2 r 3 − 3 2 (r 2 + r 3 ) 2 , σ 1 = r 2 + r 3 . On the small-amplitude edge, these equations reduce to which immediately implies the parametric representation x L = − 1 3 (r L ) 2 , t = 1 9 r L , of the law of motion of this edge, and hence eliminating r L leads to On the soliton edge at r 2 = r 3 , both equations (99) tend to the same limit x R − 4r 3 t = − 8 15 r 2 3 , and the value of r R 3 is determined by the maximum value of x in the DSW domain, whence r R 3 = 15t/4 and This is the law of motion of the leading soliton edge. It follows from the obtained formulas that we have arrived at a self-similar solution of Whitham's equations (see (86)) with γ = 1, where the Riemann invariants are with the self-similarity variable z = x/t 2 . The dependence of the Riemann invariants R i on z is shown BREAKING OF A CUBIC PROFILE As we have noted, typical wave breaking occurs when the initial wave profile has an inflection point and in the dispersionless limit of the solution of the Hopf equation acquires a vertical tangent at some instant. Because this breaking point remains an inflection point, the second derivative of the profile also vanishes at that point. Assuming that the third derivative of the profile does not vanish at that point, and also choosing the origin at the breaking point and the instant of breaking as zero time, we can approximate the profile near the inflection point with a cubic parabola. As a result, we obtain a solution of the dispersionless Hopf equation corresponding to the initial conditionx(u) = −u 3 at t = 0 in the form It is obvious from the foregoing that this is the most typical distribution at the instant of breaking, and we here discuss the evolution of the corresponding DSW. 
The main features of the solution were investigated in [1], and an exact analytic solution was obtained in [39]. To solve the problem, we note that the velocities w i (r) in (68) that correspond to the third term W = W (3) in the expansion of generating function (74) have a cubic dependence on r i at the endpoints with m = 0 and m = 1. Using the formula (see (67)) where and σ i are coefficients of polynomial (70), it is easy to evaluate Multiplying w i by −4/35, we satisfy the boundary conditions of DSW matching on the edges with a smooth dispersionless solution in Eq. (103), and we find a solution of Whitham's equations (57) in the form where the functions w i , i = 1, 2, 3, are defined by Eqs. (104 and 105). The expressions for v i and w i , even if somewhat bulky, can be given in terms of elliptic integrals as functions of the Riemann invariants (explicit formulas are presented below in a self-similar form; see Eqs. (134)-(136)). Therefore, system (107) allows finding r i as functions of x and t. Before passing to the selfsimilar form, we consider characteristic properties of the obtained solution. On the small-amplitude edge, we have r 1 = r 2 (m = 0), and Eq. (107) with i = 3 becomes Similarly, on the soliton edge, we have r 2 = r 3 , and Eq. (107) with i = 1 becomes Therefore, these Riemann invariants match the smooth solution on the DSW edges, as they should: where u is the solution (103) of the Hopf equation. In the neighborhood of the trailing small-amplitude edge, we introduce a local coordinate x , and small deviations r i of the Riemann invariants from their limit values, Expanding Eqs. (107) in powers of r i at a fixed instant t, we obtain where we introduce the temporary notation r 1 ≡ r L 1 and r 3 ≡ r L 3 . Subtracting the second equation from the first, we obtain the relation It hence follows that the coefficients in front of r 1 and r 2 in the first two equations in (113) vanish, and therefore x is a quadratic function of r 1 and r 2 : x ∝ r 1 2 , r 2 2 , r 3 . At the point x L , these two equations give x L −(12r 1 −6r 3 )t = 1 5 (−16r 3 1 +8r 2 1 r 3 +2r 1 r 2 3 +r 3 3 ), (115) and the third equation in (113), as we have already noted, reduces to the solution x L − 6r 3 t = −r 3 3 of the Hopf equation. We can hence find the law of motion of the trailing edge. Subtracting Eq. (108) with x = x L from (115) and dividing the result by (r 1 − r 3 ), we obtain the relation Comparing this with (114), we find the relation between values of Riemann invariants on the trailing edge: It then follows from Eqs. (114) and (108) that and hence the small-amplitude edge moves according to the law The amplitude of oscillations here tends to zero as Near the leading soliton front, we introduce small variables: The expansions of Eqs. (73) with only the leading corrections retained have the form where 1 − m = (r 3 − r 2 )/(r 3 − r 1 ), and we revert to the temporary notation r 1 ≡ r R 1 and r 3 ≡ r R 3 . Subtracting the third equation in (122) from the second, we obtain the relation which together with the leading approximation in Eqs. (122), defines the law of motion of the leading edge. Indeed, the difference between Eqs. (124) gives another relation, which, when compared with (123), yields and therefore the soliton edge moves in accordance with the law The distance between solitons on the leading edge depends on x x 00 as The obtained solution, which can be written in the self-similar form is a solution of Eqs. 
(87) with γ = 1/2: The above relations allow easily finding boundary values of R i . On the trailing small-amplitude edge of the DSW, we have z L = x L /t 3/2 = −12 √ 3 and and on the leading soliton edge, z R = 4 √ 15/9 and The global dependence of R i on z z defined implicitly by the expressions where Thus, system of algebraic equations (134) allows finding the dependence of the invariants R i on z [39]. This dependence is shown in Fig. 6(a), where the dashed line shows the cubic curve z = 6R − R 3 matching the Riemann invariants R 3 and R 1 at the respective points z L and z R . With the dependence of the invariants r i = t 1/2 R i (x/t 3/2 ) on the self-similar variable found, their substitution in (61) gives a description of the DSW forming in the neighborhood of the breaking point due to dispersion effects. This DSW is plotted in Fig. 6(b). The self-similar solution considered here is valid for as long as the smooth part of the solution is described by cubic curve (103) with sufficient accuracy. MOTION OF EDGES OF DISPERSIVE SHOCK WAVES The solutions found in Sections 8 and 9 give an idea of the nature of the DSW evolution at a stage not too distant in time from the wave breaking instant, when the smooth part of the solution remains a monotonic function of the coordinate and is sufficiently close to a parabola or a cubic parabola. But in practice the pulses typically have a finite duration, which raises a question about the DSW shape at the stage when its full length is comparable to or much greater than the initial length of the pulse. The hodograph method outlined in Section 5 allows obtaining a solution to such a problem in the form of a solution to the system of Euler-Poisson equations (73) [25-29, 32, 34, 35]. However, this form of the solution is rather complicated, and even a very detailed quantitative description of the process does not give an intuitively clear picture of the effect. We therefore do not go into the details of that theory and discuss a simpler approach [25,40], which readily yields simple formulas for the principal parameters of the DSW and, in addition, allows a generalization to a rather broad class of other nonlinear wave equations. We first note that 'positive' and 'negative' pulses with the respective initial distributions u 0 (x) > 0 and u 0 (x) < 0 must be distinguished: they exhibit qualitatively different behaviors and must be considered separately. An idea of how they evolve can be gleaned from Fig. 7, where we show the results of a numerical solution of the KdV equation with appropriate initial data. For a positive pulse, breaking occurs on the leading front, and the leading part of the DSW consists of a sequence of solitons (62), moving over the zero background, whereas the trailing small-amplitude edge matches the smooth solution and propagates over an inhomogeneous background. It must be recalled here that, in the case of a localized initial pulse u 0 (x) with a single maximum u m of the distribution at ( Fig. 8(a)), the inverse function consists of two branches, x 1 (u) and x 2 (u) (Fig. 8(b)), and hence the dispersionless solution is given by two formulas (11), one for each branch. At the initial stage of the DSW evolution, its smallamplitude edge matches the solution corresponding to the branch x 1 (u), and at the matching point x L we have On the other hand, at that point the Riemann invariants r 1 , r 2 are equal to zero and r 3 = u ( Fig. 
9(a)), wavelength (59) becomes L = π/ √ u, with the corresponding wave number k = 2 √ u, and the velocity of motion of this point, determined by the group velocity of the linear wave on the background u, is equal to v g = 6u − 3k 2 = −6u. Hence, dx L + 6udt = 0 along the path of the smallamplitude edge, and the compatibility condition between Eq. (137) and the equation leads to the differential equation which can be easily solved with the initial condition t(0) = 0, assuming that the breaking occurs at the zero instant on the interface with the medium 'at rest', where u = 0. We hence obtain and substituting this into (137) gives the law of motion of the small-amplitude edge in parametric form: It is easy to verify that these formulas reproduce law (100) for the parabolic initial profile u 0 (x) = √ −x with a single branch of the inverse function x 1 (u) = −u 2 . For a localized initial pulse, the obtained solution is valid until the instant when the small-amplitude edge reaches the point corresponding to the maximum amplitude u m . After that, we must solve Eq. (139) with the replacement x 1 (u) → x 2 (u) and with the initial condition t(u m ) = t m . As a result, we obtain the law of motion of the smallamplitude edge in parametric form: where u 0 (x) is understood as the full initial profile of the pulse, vanishing at x = 0 and tending to zero as x → −∞. If the initial pulse vanishes on the trailing edge at x = −l ≡ x 2 (0), then, as t → ∞, it is obvious that t ≈ A/(12 √ u), where A = 0 −l dx/ u 0 (x), and the law of motion of the trailing edge takes the asymptotic form The asymptotic form of the law of motion can also be easily found for the leading soliton edge of the DSW. We see from Fig. 9(a) for Riemann invariants that, as t → ∞, the plots of r 2 (x) and r 3 (x) elongate into an extended 'tongue', with r 1 = 0 and r 2 ≈ r 3 ≈ u m near the leading edge. Therefore, the leading edge moves with the soliton velocity V s ≈ 4u m and Turning now to the question of the evolution of a negative initial pulse, we see from Fig. 7(b) that the smooth dispersionless solution is adjacent to the soliton edge of the DSW, which therefore propagates over an inhomogeneous background. On that boundary, the Riemann invariants are r 1 = u and r 2 = r 3 = 0 and ( Fig. 9(b)), and hence the soliton edge velocity is V s = 2u or dx R = 2udt, in accordance with (79) must again be made compatible with the dispersionless solution if the edge borders the ith branch of that solution. Eliminating x R , we obtain a differential equation for t = t(u): where x i (u) is the corresponding branch of the inverse function of the initial distribution (Fig. 10). For the branch i = 1, a solution is sought with the initial condition t(0) = 0, which defines a parametric form of the law of motion of the soliton edge: For example, for a parabolic initial pulse u 0 (x) = − √ x, x > 0, we hence find the law of motion x R = −5t 2 . Solving Eq. (148) with the initial condition for a localized initial pulse with a minimum u = u m at x = x m , we obtain the law of motion Negative solitons are nonexistent for the KdV equation, and therefore a negative pulse cannot decay into a sequence of solitons at asymptotically large times. Instead, it transforms into a nonlinear wave packet whose soliton edge moves at t → ∞ in accordance with the law matching a virtually rectilinear asymptotic dispersionless solution u ≈ x/(6t) for x R < x < 0. 
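The parametric laws of motion of the DSW edges are straightforward to evaluate for any concrete profile. The sketch below assumes the explicit quadrature t(u) = −(1/(12 √ u)) ∫ 0 ^u x 1 ′(v) dv/ √ v for the relation described above (an assumption consistent with the asymptotic formula involving A quoted earlier) and, for the profile u 0 (x) = √ −x with x 1 (u) = −u 2 , recovers the parametric trailing-edge law x L = −(1/3)(r L ) 2 , t = r L /9 of Section 8, i.e., x L = −27t 2 .

import numpy as np
from scipy.integrate import quad

def t_of_u(u, dx1_du):
    """Time at which the small-amplitude edge sits at the point of the smooth solution with value u.
    Assumed quadrature: t(u) = -1/(12 sqrt(u)) * int_0^u x1'(v)/sqrt(v) dv."""
    val, _ = quad(lambda v: dx1_du(v)/np.sqrt(v), 0.0, u)
    return -val/(12.0*np.sqrt(u))

def xL_of_u(u, x1, t):
    """Edge position: the edge lies on the smooth (Hopf) solution x = x1(u) + 6 u t."""
    return x1(u) + 6.0*u*t

# Parabolic breaking profile u0(x) = sqrt(-x): inverse branch x1(u) = -u^2, so x1'(u) = -2u.
x1 = lambda u: -u**2
dx1 = lambda u: -2.0*u

for u in (0.5, 1.0, 2.0):
    t = t_of_u(u, dx1)
    print(u, t, xL_of_u(u, x1, t), -27.0*t**2)   # the last two columns should agree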
Accordingly, the leading soliton amplitude in the DSW decreases with time as Near this edge, solutions of Whitham's equations are self-similar and depend on the variable z = x/t 1/3 . Although this solution can be obtained in analytic form [43,44], the self-similarity domain is relatively small, and we do not discuss this theory here. The solution of Whitham's equations in the entire DSW domain was obtained in [29,44]. In approaching the small-amplitude edge, the DSW evolution again becomes self-similar, with the modulation parameters depending on z = x/t. We can obtain the asymptotic law of motion of the small-amplitude edge by noting that, according to Fig. 9(b), r 1 ≈ r 2 ≈ u m and r 3 = 0 on that edge, and hence from (59) we can find the wave number k = 2π/L ≈ 2 √ −u m . Therefore, at the matching point, the group velocity of the linear wave is v g = −3k 2 = −12|u m |, which determines the asymptotic law of motion of this edge.
THEOREM ON THE NUMBER OF OSCILLATIONS IN DISPERSIVE SHOCK WAVES
An important theorem given in [41] states that, due to the difference between the velocity of the small-amplitude edge v g and the phase velocity of the wave V , the DSW length increases on that edge by (v g − V )dt in a time dt, and therefore the number of wave periods in the domain of oscillations increases with the rate dN/dt, where all the variables are evaluated at the DSW wave number on the small-amplitude edge. The right-hand side of this relation can also be interpreted as the flux ω = kV of the wave number into the DSW domain, Doppler-shifted because the boundary itself moves with the speed v g . Therefore, the total number of oscillations entering the DSW over all of its evolution time is, up to a sign, given by (155) The integrand can be interpreted as a Lagrangian of a classical particle with the momentum k and the Hamiltonian ω, which is associated with the wave packet co-moving with the small-amplitude edge of the DSW. The integral is then equal to the classical action S of such a particle, and the number of oscillations is this action divided by 2π. It is clear that these formulas are of a general nature and their validity is not limited to the KdV equation. For actual calculations, we must know the main characteristics of the DSW at least on its small-amplitude edge. For example, in the case of the KdV equation, it is easy to find that |k(v g − V )| = 2k 3 ; for the evolution of a unit-height step, as shown in Section 7, the wave number on the small-amplitude edge is k = 2. We hence find the number of oscillations formed in the DSW over time t: N = (8/π)t. For the time t = 15, this formula predicts N ≈ 38, whereas counting the oscillations in Fig. 4(b), which shows the results of a numerical solution of the KdV equation, gives approximately N ≈ 39, in good agreement with the theory. However, the agreement with this asymptotic calculation worsens at smaller times. For example, in the case of breaking of a cubic profile, the values of Riemann invariants on the small-amplitude edge, according to formula (116), are r 3 = u and r 1 = r 2 = −u/4, where u is the wave amplitude at the matching point, depending on time as u = √ 12t (see (116)). Substitution into (59) gives the wavelength L = 2π/ √ 5u and the wave number k = √ 5u = √ 10 · 3 1/4 t 1/4 . Hence, for the number of oscillations formed by the instant t, we obtain N = (40 √ 10 · 3 3/4 /(7π)) t 7/4 ≈ 13.1 t 7/4 . For t = 1, the number of oscillations N ≈ 13 is somewhat different from the number of oscillations N ≈ 15–16 discernible in Fig.
6(b), but can still be considered satisfactory for such a short evolution time. As regards a positive pulse of finite duration, it eventually evolves mainly into a sequence of solitons propagating over the zero background u = r 1 = 0. The group velocity of the small-amplitude edge, which is a hydrodynamic variable in Whitham's theory, then has the meaning of the velocity of the interface between the oscillations that turn into solitons as t → ∞ and the linear wave packet. The number of solitons formed from a localized pulse is determined by the initial profile u 0 (x) and can be evaluated as follows. On the low-amplitude edge, k = 2 √ u and k(v g − V ) = −2k 3 = −16u 3/2 . Integration over t from 0 to t m can be replaced using (139) and (140) with integration over u from 0 to u m , and similarly integration from t m to +∞ transforms with the help of (143) into integration over the same interval of u. As a result, we obtain where The double integral that occurs in substituting (158) into (157) can easily be made single-fold by integration by parts, which leads to the formula where, as usual, u 0 (x) is the initial profile of the wave. This formula was first derived in [42] using profound mathematical properties of the KdV equation associated with its complete integrability [24]. In our presentation, it is a simple corollary of the Gurevich-Pitaevskii approach to the DSW theory. THEORY OF DISPERSIVE SHOCK WAVES FOR THE KORTEWEG-DE VRIES EQUATION WITH DISSIPATION In the Introduction, we discussed the development of the DSW concept, starting with Sagdeev's idea that dispersion effects transform the transition layer of a viscous shock wave into a stationary oscillatory structure, and on to Gurevich and Pitaevskii's idea of the formation of non-stationary DSWs as a result of wave breaking, with the evolution of the DSW modulation parameters governed by Whitham's equations. It must be clear, however, that the existence of small dissipation or other perturbing terms in the KdV equation also leads to the evolution of modulation parameters, which means that Whitham's modulation equations must then be modified accordingly. The picture proposed by Sagdeev must then be described by stationary solutions of modified Whitham's equations that take small dissipation effects into account, in addition to dispersion. In this section, we discuss such a modified Whitham's theory and the simplest corollaries. We assume that the perturbed KdV equation has the form where the perturbing term is small, R ∼ ε 1, and depends on both the field u and its spatial derivatives. Generally speaking, two types of perturbation must be distinguished. For one type, Whitham's equations acquire right-hand sides with the old form of Riemann invariants, and perturbations of the other type lead to a nondiagonal form of the averaged equations diagonalizing which, as noted in Section 4, is typically impossible. We discuss only the first case, which includes physically important problems with small dissipation. We again derive perturbed Whitham's equations by averaging the conservation laws. We then take into account that the conservation law for the number of waves, Eq. 
(43), preserves its form, while conservation laws (44) acquire right-hand sides: The averaged equations can be transformed just as we did previously, and instead of (49) we now obtain the equations which differ from the preceding equations only by additional terms depending on the perturbation. Moving to the variables ν 1 , ν 2 , and ν 3 and introducing Riemann invariants (55) for unperturbed Whitham's equations as the modulation parameters, we find the desired Whitham's equations accounting for the perturbation: where v i are Whitham's velocities (60) of the unperturbed equations and σ 1 = r 1 + r 2 + r 3 . In the particular case of Burgers viscosity, the perturbed Whitham equations were derived in [41,45], and for nonlocal viscosity, in [46]. In the general case, they are derived in form (164) in [47,48,49]. To obtain an insight into the role of small dissipation, we turn to the Gurevich-Pitaevskii problem of the decay of an initial discontinuity. We recall from Section 7 that, at the initial stage of the evolution, dissipation is inessential and the DSW expands in a self-similar fashion. But when its length reaches a size ∼ ε −1 , all terms in Whitham's equations (164) become equally significant, and the transition to the stationary regime of propagation is to be expected, with the full size of the DSW determined by the balance of terms with derivatives with respect to coordinates and dissipative corrections. We therefore seek the solution of Whitham's equations (164) with the invariants r i depending only on the variable ξ = x − V t. It is a simple observation that this system reduces to if we take V to be the wave velocity V = 2σ 1 . Because the profile is stationary, this system must have the integral It is easy to verify that σ 1 is indeed an integral, and the other two symmetric functions σ 2 = r 1 r 2 + r 1 r 3 + r 2 r 3 and σ 3 = r 1 r 2 r 3 satisfy the equations We have thus reduced the problem to solving a system of two ordinary differential equations for σ 2 and σ 3 , with r i being the functions of σ 2 and σ 3 to be found from the cubic equation The problem can be simplified even more if R = 0, in which case we have another integral σ 2 = const, and it remains to solve a single differential equation, It is now convenient to return from the symmetric functions to the variables r i and, for example, regard r 1 and r 2 as functions of r 3 , where r 3 = r 3 (ξ). From (165), we then find This system has two integrals: σ 1 = const and σ 2 = const. Therefore, r 1 and r 2 as functions of r 3 are the roots of the quadratic equation Its roots must be ordered as r 1 ≤ r 2 ; the constants σ 1 and σ 2 are determined by the boundary conditions. We let u L denote the limit value of the wave amplitude as x → −∞ and assume that the wave propagates in a medium with u = 0 at x → +∞. On the small-amplitude edge, where m → 0, r 2 → r 1 , we have u L = r 3 = r L 3 and On the soliton edge, r R 1 = 0 and r R 2 = r R 3 , and substituting these into the definition of σ 1 and σ 2 yields the relation between the integrals. Substituting formulas (172) into (173), we obtain an equation for r L 1 , whose solution gives r L 1 = u L /4, and hence on the small-amplitude edge. The integrals take the same values as on the soliton edge, where r 1 = 0 and σ 3 = 0, and hence Eq. (168) has a double root r R 2 = r R 3 = (3/4)u L .
As a result, the amplitude a s = 2r R 3 of the leading soliton and its velocity V s = 4r R 3 , coincident with the shock wave velocity, are Thus, the speed of a stationary DSW is determined only by the magnitude of the discontinuity, in accordance with the general theory of viscous small-amplitude shock waves [22]. Interestingly, not only the speed but also the amplitude of the leading soliton is expressed by universal formulas (175) in terms of the initial discontinuity and is independent of the form of the dissipative term. In the particular case of Burgers-type dissipation, formulas (175) were derived in [50] directly from the perturbation theory without using Whitham's theory. To find a global solution along the entire DSW, we note that, after substituting integrals (174) into (171) and solving this quadratic equation, we obtain r 1 and r 2 as functions of r 3 . Their substitution into expression (59) for m gives an equation whose solution for r 3 allows expressing this Riemann invariant in terms of m, and then r 1 and r 2 can also be represented as functions of m. As a result of these elementary calculations, we obtain The problem is solved when we obtain the dependence of the parameter m on the coordinate ξ; this dependence follows from an equation whose right-hand side can be expressed in terms of m for a perturbation R of a given form. We specify this theory by choosing the perturbation as Burgers friction [41,45]: To actually take the averages, it is convenient to pass to the variable υ = (σ 1 − u)/2 that satisfies the equation This elliptic integral is readily reduced to tabulated ones, and we hence obtain the equation The problem solution has thus been reduced to the quadrature This formula, together with (176), parametrically defines the dependence of the modulation parameters, i.e., the Riemann invariants r i of the system of Whitham's equations, on the coordinate ξ, referenced to the DSW front. An example of such a dependence is shown in Fig. 11(a), and the corresponding DSW profile is shown in Fig. 11(b).

GROSS-PITAEVSKII EQUATION

Besides the KdV equation, which has a universal character, another very important equation, also occurring in very diverse circumstances, is the Gross-Pitaevskii equation, which in particular describes the dynamics of a weakly non-ideal Bose gas at zero temperature [51,52] in the mean field approximation, when the coherent state of the macroscopic Bose gas is described by a classical wave function, similar to the Maxwell field in classical electrodynamics. This theory came to the forefront after the experimental realization of Bose-Einstein condensation of atoms, and the main ideas underlying the theory are available in reviews [53,54]. Here, we restrict ourselves to writing the Gross-Pitaevskii equation for the wave function ψ(r) in the standard notation, iℏ ∂ψ/∂t = −(ℏ 2 /2m)∆ψ + U (r)ψ + g|ψ| 2 ψ, where m is the atom mass, ∆ is the Laplace operator, U (r) is the potential of an external field acting on the atoms, and the parameter g, expressed in terms of the atom-atom scattering length a as g = 4πℏ 2 a/m, characterizes the strength of the interatomic interaction; it is repulsive for g > 0 and attractive for g < 0. We are interested in the first case, where the homogeneous state of the condensate is stable and waves can propagate over it.
We note that the mathematically equivalent equation occurred in describing self-focusing of light beams in non-linear media [55,56], where the role of time is played by the coordinate along the beam and diffraction replaces dispersion, but the papers just cited discussed only the focusing nonlinearity, for which the state with a homogeneous distribution of light intensity is unstable. Another interpretation of Eq. (181) occurs when describing the evolution of the envelope of a wave packet propagating in a medium with low dispersion and weak nonlinearity [57]. In that case, the first term on the right-hand side corresponds to second-order dispersive effects, which, besides the packet motion with the group velocity, take its slow spreading into account, and the last term corresponds to the dependence of the medium response on the wave intensity. This situation occurs rather frequently in physics, from the description of deep-water waves to the theory of propagation of light pulses in non-linear optical fibers. In this context, the resultant equation is often called the nonlinear Schrödinger (NLS) equation, but we here use the physical interpretation due to Gross and Pitaevskii, which allows addressing more transparent representations and notions of gas dynamics. In particular, the condensate density is ρ = |ψ| 2 , and its flow speed is expressed in terms of the gradient of the wave function phase [53,54]. If we represent the wave function as then, substituting this into (181), after simple transformations, leads to the system of equations (with U (r) = 0) The first equation is the standard continuity equation corresponding to the conservation of the number of particles in the condensate, and the second equation has the form of a modified Euler equation for the flow of gas with the equation of state p = gρ 2 /(2m) and with the last term containing higher-order spatial derivatives. It is clear that this term corresponds to dispersive properties of the gas caused by quantum dispersion of atoms. If we consider extremely long waves and ignore this term, we arrive at the expression c s = √(gρ/m) for the speed of sound in the condensate, which depends on the local density ρ. If we turn to linear waves in a homogeneous condensate with a constant density ρ, then a standard calculation gives Bogoliubov's dispersion law [58], ω(k) = k √(c s 2 + ℏ 2 k 2 /(4m 2 )), where, as the wave number k increases, the sound dispersion law ω = c s k passes into the standard dispersion law of quantum particles ε = ℏω = (ℏk) 2 /(2m) when the de Broglie wavelength becomes less than the coherence length. We introduce parameters characterizing the state of the condensate: the length ξ c and the speed c s at the characteristic density ρ 0 , which allows us to define convenient dimensionless variables r → r/( √ 2 ξ c ), t → c s t/( √ 2 ξ c ), and ψ → ψ/ √ ρ 0 . In addition, we restrict ourselves in what follows to only one-dimensional motions of the condensate, and therefore, in the new variables, the Gross-Pitaevskii equation takes the form iψ t + ψ xx /2 − |ψ| 2 ψ = 0 (187), and its 'hydrodynamic' representation (183) becomes Accordingly, for linear waves, the dispersion law in Eq. (185) becomes ω = ±k √(ρ + k 2 /4) (189). It is clear that waves can propagate in both directions of the x axis, and therefore any initial perturbation evolves with time into two wave pulses propagating in opposite directions.
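To make the crossover in this spectrum concrete, the short numerical sketch below evaluates the dimensionless Bogoliubov law ω(k) = k √(ρ + k 2 /4) quoted above and compares it with its long-wave (sound) and short-wave (free-particle) limits. The grid of wave numbers and the unit background density are illustrative choices made here, not values taken from the original exposition.

```python
import numpy as np

def bogoliubov_omega(k, rho=1.0):
    """Dimensionless Bogoliubov dispersion law omega(k) = k*sqrt(rho + k**2/4)."""
    return k * np.sqrt(rho + 0.25 * k**2)

k = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
omega = bogoliubov_omega(k)

sound = np.sqrt(1.0) * k          # long-wave limit: omega ~ c*k with c = sqrt(rho)
weak_disp = k + k**3 / 8.0        # next correction, used later for the KdV reduction
particle = 0.5 * k**2             # short-wave limit: omega ~ k**2/2

for row in zip(k, omega, sound, weak_disp, particle):
    print("k = %5.2f  omega = %8.3f  c*k = %8.3f  k+k^3/8 = %8.3f  k^2/2 = %8.3f" % row)
# The crossover between the two regimes occurs near k ~ 2*sqrt(rho),
# i.e. near the inverse coherence length in these units.
```

The same dispersion law governs how an initial localized perturbation of the condensate splits into two counter-propagating pulses.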
For example, if the initial pulse has a shape describing a hump in the condensate density above a homogeneous background, then the numerical solution of Gross-Pitaevskii equation (187) or the equivalent system (187) exhibits the wave evolution shown in Fig. 12. As we can see, the pulse splits into two with time, and each of them experiences breaking with the formation of a DSW. We therefore have the task to describe the evolution of shock waves satisfying the Gross-Pitaevskii equation. In accordance with the Gurevich-Pitaevskii approach, each DSW borders a smooth solution of the dispersionless equations, and we therefore first discuss this last approximation. In the dispersionless limit, the last term in Euler equation (188) can be dropped, and the system takes the simple hydrodynamic form As is standard in the theory of linear waves, local changes in the density δρ and velocity δu of the flow are related as δρ/ρ ≈ ±δu/c, where the choice of sign corresponds to the wave propagation direction. Therefore, for example, in a wave propagating to the right, the differential relation du = cdρ/ρ = dρ/ √ ρ is satisfied, integrating which shows that, in such a simple wave, the flow velocity u and the density ρ are related as u/2 − √ ρ = const, and a similar relation with the other sign in front of the square root holds for a wave propagating to the left. This argument shows that the so-called Riemann invariants, related to the density and velocity of the flow as are natural variables in the physics of waves. Equations (190), when written in these variables, take a simple diagonal form, where the velocities v ± = u ± c have a clear physical meaning of the signal propagation speed, equal to the sum of and the difference between the flow velocity and the speed of sound propagating downstream or upstream. In our case of the Bose-Einstein condensate, they are especially simply expressed in terms of the Riemann invariants: Simple waves are characterized by the constancy of one of the Riemann invariants. For example, for a wave propagating to the right, the invariant r − = r (0) − = const is constant, the second equation in (192) is then satisfied automatically, and the first equation becomes the Hopf equation, which we already discussed in the case of ion-sound waves in plasma. Obviously, because of the relation between ρ and u, this Hopf equation can also be written for only one of these variables, which would then give a dispersionless approximation for unidirectional propagation of waves in the condensate. Additionally taking dispersion (189) into account in the leading approximation, ω ≈ k + k 3 /8, leads to the KdV equation for nonlinear waves in the limit of a large wavelength and a small amplitude. It is easy to see that the nonlinear and dispersion terms have opposite signs in this equation, and therefore soliton solutions correspond to troughs in the density distribution, and the KdV equation describes 'shallow' solitons on a homogeneous background. Naturally, the DSW theory for KdV is entirely applicable to the description of shock waves in a condensate under the condition of their small amplitude and unidirectional propagation. But for deep solitons and large-amplitude DSWs, development of the Gurevich-Pitaevskii theory is required. With the dispersionless approximation equations conveniently written in form (192), we can now turn to the theory of periodic solutions of the Gross-Pitaevskii equation, whose modulations describe the DSWs. 
If we seek a solution to system (188) in the form of a traveling wave ρ = ρ(ξ), u = u(ξ), ξ = x − V t, then the first equation is readily integrated, and the second, after eliminating the variable u and some transformations, reduces to the equation Evidently, the density ρ oscillates in the range ν 1 ≤ ρ ≤ ν 2 where the polynomial R(ρ) is positive, and a standard calculation similar to the derivation of the cnoidal wave solution of the KdV equation leads to a periodic solution of the Gross-Pitaevskii equation in the form where m = (ν 2 − ν 1 )/(ν 3 − ν 1 ) and the velocity V , unlike the one in the KdV theory, is now an independent parameter. The condensate flow velocity is In the soliton limit, as ν 3 → ν 2 and m → 1, we obtain the solution [59] for a soliton moving over a condensate that has the density ν 2 = ρ 0 and is at rest at infinity. As the depth of the soliton tends to zero, its velocity tends to the speed of sound c 0 = √ ρ 0 , never exceeding it. If the soliton velocity is zero, the density ρ at its center also vanishes; such a soliton is called 'black'. In view of the relation u = φ x , the wave function phase experiences a jump when crossing the domain occupied by the soliton. For the black soliton, with V → +0, this jump is ∆φ = −π. Because the phase is defined up to 2π, this state of the condensate is not different from the state having the velocity V → −0 and the jump ∆φ = π. Due to this property, a dark soliton moving in an inhomogeneous condensate confined by a trap can change the direction of motion at the points where the density at its center vanishes. Formulas (197) can be combined into the expression (199) for the soliton solution of Gross-Pitaevskii equation (187). In the low-amplitude limit ν 2 − ν 1 ≪ ν 3 − ν 1 , m ≪ 1, wave (195) degenerates into a trigonometric one, with the wave number k = 2 √(ν 3 − ν 1 ) and the phase velocity V = ± √ ν 3 , related to each other as V 2 = ν 3 = ν 1 + k 2 /4. The obtained periodic solution depends on four parameters V, ν 1 , ν 2 , ν 3 , and describing the DSWs requires deriving the corresponding modulation equations. Evidently, the conservation law for the number of waves, Eq. (33), extends to nonlinear waves (195) with the corresponding expression for the wave number in terms of the modulation parameters, and it is easy to find three more conservation laws for Gross-Pitaevskii equation (187), whose averages in principle give a full set of modulation equations. But their transformation into the diagonal form by Whitham's direct method turns out to be technically complicated, and these equations were first derived in diagonal form in [60,61] only after the complete integrability of the Gross-Pitaevskii equation was discovered in [62] and relations between the complete integrability and diagonalization of Whitham's equations were revealed in [63]. We do not go into the details of this theory and give Whitham's equations for the Gross-Pitaevskii equation in the final form, especially because they are quite similar to the already familiar Whitham's equations for modulation of periodic KdV waves and can be investigated by similar methods.
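As a quick check of the soliton limit described above, one can verify the dark-soliton solution directly. Assuming the dimensionless form of the Gross-Pitaevskii equation used here, iψ t + ψ xx /2 − |ψ| 2 ψ = 0, and a background density ρ 0 at rest, the dark soliton can be written as ψ = [iV + √(ρ 0 − V 2 ) tanh(√(ρ 0 − V 2 )(x − V t))] e^{−iρ 0 t}; the little script below (with arbitrary illustrative grid parameters) confirms numerically that the residual of the equation on this profile is at the level of the finite-difference error.

```python
import numpy as np

rho0, V = 1.0, 0.4                     # background density and soliton velocity, |V| < sqrt(rho0)
w = np.sqrt(rho0 - V**2)               # inverse half-width; the dip depth is rho0 - V**2

def psi(x, t):
    """Dark-soliton profile over a condensate at rest with density rho0."""
    return (1j * V + w * np.tanh(w * (x - V * t))) * np.exp(-1j * rho0 * t)

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
t, dt = 0.7, 1e-5

p = psi(x, t)
psi_t = (psi(x, t + dt) - psi(x, t - dt)) / (2.0 * dt)
psi_xx = (np.roll(p, -1) - 2.0 * p + np.roll(p, 1)) / dx**2

# Residual of i*psi_t + psi_xx/2 - |psi|**2 * psi; drop the wrapped-around end points.
res = 1j * psi_t + 0.5 * psi_xx - np.abs(p)**2 * p
print("max |residual| =", np.abs(res[2:-2]).max())
print("density at the soliton center =", np.abs(psi(np.array([V * t]), t))[0]**2, "(equals V**2)")
```

For V → 0 the density at the center vanishes, reproducing the 'black' soliton discussed above.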
In the KdV case, the transition from the parameters ν i to the Riemann invariants r i of Whitham's system is effected by very simple formulas (55), but in the case of the Gross-Pitaevskii equation, the parameters V and ν i are related to the Riemann invariants r i , r 1 ≤ r 2 ≤ r 3 ≤ r 4 , by the more complicated expressions It is worth noting that the polynomial R(ν) = (ν − ν 1 )(ν − ν 2 )(ν − ν 3 ) is Ferrari's resolvent for the polynomial Q(r) = (r − r 1 )(r − r 2 )(r − r 3 )(r − r 4 ), allowing the roots of the equation Q(r) = 0 to be expressed in radicals in terms of its coefficients. The polynomial Q(r) and symmetric functions of its roots play an important role in the theory of periodic solutions and their modulation for a wide class of integrable equations. The periodic solution of the Gross-Pitaevskii equation can be expressed in terms of the Riemann invariants. Whitham's modulation equations have the diagonal form where the characteristic velocities are expressed through the wavelength, which is similar to (58). Substituting (205) into (206), we obtain On the soliton edge of a DSW with r 2 = r 3 (m = 1), these expressions become and on the small-amplitude edge with r 3 = r 4 and m = 0, we have Similar formulas can be derived in the limit r 1 = r 2 (m = 0). On the DSW edges, as we can see, one pair of velocities merges into a single expression and the other pair takes the form of expressions (193) for dispersionless velocities if Whitham's Riemann invariants are properly identified with the dispersionless Riemann invariants r ± (see (191)). This allows incorporating the solution of Whitham's equations describing the DSW into a smooth solution of dispersionless equations (192). These dispersionless equations, as well as Whitham's equations, can be solved by the hodograph method. For Whitham's system, the solution has the form where and the function W (r 1 , r 2 , r 3 , r 4 ) is a solution to the system of Euler-Poisson equations (73). In particular, as in the case of the KdV equation, an important class of self-similar solutions is represented by the generating function which depends on an arbitrary parameter r and satisfies Euler-Poisson equation (73). The coefficients of its expansion in inverse powers of r give particular solutions of the Euler-Poisson equation, for which the functions w i (r j ) take the particular form In view of the linearity of the Euler-Poisson equations, any linear combination w i = Σ k A k w i (k) of functions (213) also gives a solution (210). Here, the W (k) are expressed in terms of σ i , the symmetric functions of the roots of the polynomial Q(r) = (r − r 1 )(r − r 2 )(r − r 3 )(r − r 4 ) (that is, in terms of its coefficients). In particular, This elementary treatment suffices for solving the Gurevich-Pitaevskii problem in several characteristic cases.

EVOLUTION OF THE INITIAL DISCONTINUITY IN THE GROSS-PITAEVSKII THEORY

Just as in the case of the KdV theory discussed in Section 7, we begin with the simplest problem of the evolution of the initial discontinuity, with the condensate state having different densities and different flow velocities, ρ L , u L and ρ R , u R , on the respective half-lines x < 0 and x > 0.
The values of Riemann invariants are to be matched in the emerging wave structure, and we therefore specify the condensate state by their values on both sides of the discontinuity: As an example, we consider the evolution of an initial discontinuity in the density distribution with the initial state u L = u R = 0, and assume for definiteness that ρ L > ρ R . The numerical solution of the Gross-Pitaevskii equation for this initial condition gives the wave structure shown with a solid line in Fig. 13(a). As we see, this structure consists of two waves joined by a domain of homogeneous flow (a 'plateau'). Because parameters with the dimension of length are absent in the initial distribution, solutions of both dispersionless equations (192) and Whitham's equations (204) must be self-similar and depend only on the variable z = x/t. Therefore, as can be easily verified, only one of the Riemann invariants can change along these waves. On the left, there is a rarefaction wave, along which the Riemann invariant r + is constant, i.e., √ ρ L = u/2 + √ ρ, where the bar over a variable denotes its value on the plateau. In the solution of Whitham's equations, too, only one of the Riemann invariants r i varies, and we conclude that they can be matched continuously only if the Riemann invariant r 3 varies. The resultant wave structure can be represented by the diagram of the Riemann invariants shown in Fig. 13(b), which schematically shows the dependences of all the invariants on the self-similarity variable z. Because the invariant r 1 is constant along the DSW and matches the invariants r − and r R − on the DSW edges, we obtain one more equation u/2 − √ ρ = − √ ρ R for the parameters of the flow along the plateau. The obtained equations determine the values of the flow parameters on the plateau, √ ρ = ( √ ρ L + √ ρ R )/2 and u = √ ρ L − √ ρ R , which are in excellent agreement with the numerical solution. The above example shows that the shape of the wave structure resulting from the evolution of the initial discontinuity can be determined by joining pairs of Riemann invariant values corresponding to wave edges with lines having a positive slope and corresponding to self-similar solutions of the form v i = z (for the rarefaction wave, the positivity of the slope is obvious from expression (193) for the characteristic dispersionless velocities, and for the DSW it follows from a more detailed investigation of expressions (207)). If there are only two Riemann invariants in the resultant domain, this domain corresponds to a rarefaction wave. If four invariants are defined in that domain, then it corresponds to a DSW. It can be easily verified [64,65] that only six possible diagrams exist, which we present in Fig. 14 together with the corresponding wave structure types (Fig. 14: wave structures formed in the evolution of the initial discontinuity in the theory of the Gross-Pitaevskii equation and the corresponding diagrams of Riemann invariants). In the cases shown in Fig. 14(a,b), one rarefaction wave and one DSW emerge, and these differ only in the wave propagation directions. In the case shown in Fig. 14(c) ('collision of condensates'), two DSWs emerge on different sides of the plateau. In the cases in Fig. 14(d,e), the condensates on different sides of the discontinuity have opposite velocities and, as the condensates recede, a lower-density plateau appears between them; in Fig. 14(e), the initial velocities are so high that this density decreases to zero. Finally, in the case shown in Fig.
14(f), conversely, the head-on motion of the colliding condensates is so fast that, instead of a plateau, as in Fig. 14(c), a nonlinear periodic wave appears between the DSWs, with the m parameter determined by the boundary values: So that just this combination of wave structures is realized, we must verify that the velocities of the rarefaction wave and DSW edges are ordered in a proper manner. This requires exploring the corresponding solutions of hydrodynamic and modulation equations. It readily follows from the obtained relations that The left edge of the rarefaction wave moves to the left with the speed of sound s L − , equal in modulus to √ ρ L , and the speed s L + of the right edge can be found by equating one of the variables in (218) to its value (216) on the plateau, whence In the DSW in Fig. 13, the values of three Riemann invariants are known, and the dependence of r 3 on z = x/t is determined by the self-similar solution of Whitham's equations: Substituting all these values and the functions r r = r 3 (z) into (202) gives the density profile in the DSW, which is shown with a dashed line in Fig. 13(a), in good agreement with the numerical solution. The velocities of the DSW It is easy to verify that, for ρ L > ρ R , the velocities of the rarefaction wave and DSW edges are ordered in accordance with the inequalities s L − < s L + < s R − < s R + , in agreement with the diagram in Fig. 13(b). The soliton amplitude on the border with the plateau is If we fix ρ L and decrease ρ R from its maximum value ρ L , we see that at ρ R = ρ L /9 the soliton depth a s becomes equal to the background density ρ defined on the plateau by expression (216). This means that this soliton becomes black, and the condensate density distribution acquires a 'vacuum point' [64,65]. As ρ R decreases further, the leading soliton amplitude becomes less than the background density, and the vacuum moves inwards the DSW. For the vanishing density ρ R , the amplitude of oscillations in the DSW tends to zero together with soliton amplitude (223), the plateau disappears together with the left rarefaction wave, but the entire DSW domain becomes a rarefaction wave, Eq. (218), corresponding to the expansion of the condensate into the vacuum. This transformation of the DSW depending on the boundary conditions is illustrated in Fig. 15. Other configurations shown in Fig. 14 can be considered similarly. It must only be kept in mind that, in Fig. 14(f), the modulated waves are matched not with the homogeneous flow on the plateau but with a nonmodulated periodic solution with a known value (217) of the m parameter. The theory expounded here was confirmed quantitatively in a dedicated experiment [66], in which an optical pulse had an artificially produced discontinuity in the light intensity distribution and the evolution of the pulse was governed by the NLS equation, equivalent to the Gross-Pitaevskii equation. Figure 16(a), which is borrowed from that paper, shows the intensity profile of the pulse entering the optical fiber, and Figs. 16(b,d) show the pulse profile at the exit. Figures 16(a,b) show the results of measurements, and Figs. 16(c,e), the results of a numerical solution of the NLS equation. The initial pulse has the shape of two table tops with different heights placed next to each other without a gap, such that a discontinuity in intensity occurs in the center. Its evolution is the main subject of interest here, whereas the rarefaction waves emerging on the outer edges of the structure can be ignored. 
As we can see, the wave emerging in the center corresponds to the case in Fig. 14(b), and the velocities of the rarefaction wave and DSW edges agree well with the theoretical values. The problem of the evolution of a discontinuity, despite its simplicity, has found use in more realistic applications, such as DSW formation in a condensate flowing past an obstacle [67,68], which allows explaining the result of the experiments in [69], at least qualitatively. We also note that experiments with the nonlinear evolution of pulses in a more complicated geometry, both in the physics of condensates [70,71] and in nonlinear optics [72], also allow interpretations within that scheme. In Section 15, we illustrate the method with the solution to a simple problem on condensate motion under the action of a steadily moving piston [73].

PISTON PROBLEM

We consider the problem of the flow of a condensate under the action of a piston [73]. We assume that the piston started moving at the instant t = 0 with a constant velocity v p and that, prior to the motion of the piston, the condensate with a constant density ρ 0 was at rest to the right of the piston. It is clear that, as a result of that motion, a wave starts propagating from the piston; if the piston speed is not too high, it is natural to assume that adjacent to it is a homogeneous flow of the condensate with the same speed v p and with some increased density ρ L . Between this homogeneous flow and the condensate at rest far from the piston, there is a DSW, and the values of the Riemann invariants on the left and on the right of it can be expressed as r L ± = v p /2 ± √ ρ L and r R ± = ± √ ρ 0 . The DSW originates instantaneously as the piston starts moving, and hence the solution of Whitham's equations must be self-similar, and the diagram of Riemann invariants must have the form shown in Fig. 17(a). We use the equality r L − = r 1 = r R − to find the density ρ L of the flow adjacent to the piston: √ ρ L = √ ρ 0 + v p /2, i.e., ρ L = ( √ ρ 0 + v p /2) 2 . This, in turn, determines the value of the Riemann invariant r 4 = r L + . Hence, the values of three invariants that are constant along the DSW are known, and the dependence of the invariant r 3 on the self-similarity variable z = x/t is defined implicitly by the equation Using the limit expressions for v 3 in (208) and (209), we find the velocities of the DSW edges as At the location of the deepest soliton adjacent to the homogeneous flow, formulas (195) and (196) give the minimal condensate density and the flow velocity: For a sufficiently low piston speed, v p < 2 √ ρ 0 , the flow velocity u min is negative, and hence the condensate flows into the domain of increased density ρ L > ρ 0 , as expected. For v p = 2 √ ρ 0 , a vacuum point is formed in the DSW, with the velocity of the left DSW edge becoming equal to the piston speed, and hence the homogeneous flow domain adjacent to the piston disappears. For v p > 2 √ ρ 0 , similarly to the case of the collision of condensates with too high velocities (Fig. 14(f)), a domain of a non-modulated periodic solution of the Gross-Pitaevskii equation occurs instead of the plateau, and this wave structure therefore corresponds to the diagram of Riemann invariants shown in Fig. 17(b). In the periodic wave, the Riemann invariants r 1 , r 2 , and r 4 preserve their values (226), and the condition that the wave velocity coincide with the piston speed, V = (r 3 + r 4 )/2 = v p , gives r 3 = v p − √ ρ 0 .
Thus, in the periodic solution domain, r 1 = − √ ρ 0 , r 2 = √ ρ 0 , r 3 = v p − √ ρ 0 , and r 4 = v p + √ ρ 0 , and the condition of matching with the DSW determines the velocity of this DSW edge: The maximum density of the condensate in this structure is The density profile in the DSW can be constructed without difficulty by substituting the Riemann invariants in (202), and the analytic results agree well with numerical computations [73]. The Gurevich-Pitaevskii method thus allows completely solving the problem posed in this section.

UNIFORMLY ACCELERATED PISTON PROBLEM

As in the case of the KdV equation, there are two scenarios for a simple wave breaking: the profile of one of the dispersionless Riemann invariants r ± acquires a vertical tangent either at the interface with the condensate, which is at rest, or at the inflection point. We here consider the first case and assume for definiteness that this profile is produced by a uniformly accelerated moving piston [74], such that, at a time t, the coordinate of the condensate-piston boundary is X(t) = at 2 /2. Prior to the instant of breaking, the condensate flow can be described by dispersionless equations (192) with good accuracy, and we now give their solution in the form that we need. Under the action of the piston, the condensate flow is unidirectional and hence can be described by a simple wave with a constant Riemann invariant r − = − √ ρ 0 ; the solution of the corresponding simple-wave equation must satisfy the boundary condition u(X(t), t) = Ẋ(t), which states that the flow velocity on the boundary with the piston coincides with the piston velocity. Therefore, r + − √ ρ 0 = at, and using the general solution for the condensate flow on the boundary with the piston gives w = at 2 /2 − (3r + /2 − √ ρ 0 /2)t. After eliminating t = (r + − √ ρ 0 )/a, we obtain the general solution for the condensate flow in the form This solution holds in the entire inhomogeneous flow domain until the instant t b = 2 √ ρ 0 /(3a) when the r + (x) profile acquires a vertical tangent at the point x b = 2ρ 0 /(3a) on the boundary with the condensate at rest. After that instant of breaking, a wave structure involving a DSW emerges, with the distribution of Riemann invariants represented by the diagram shown in Fig. 18(a). We therefore have to find a solution of Whitham's equations with the constant Riemann invariants r 1 = − √ ρ 0 and r 2 = √ ρ 0 , a solution satisfying the condition that r 4 match the invariant r + of dispersionless solution (233) as r 3 → r 2 . The right-hand side of (233) contains linear and quadratic terms in r + . As in the KdV problems considered above, it suffices to take a linear combination of the expressions w i (1) and w i (2) that has just that dependence in the limit as r 3 → r 2 . The coefficients of this linear combination are chosen from the condition of matching r 4 with r + , and a straightforward calculation [74] yields a solution in the form (234). These formulas implicitly define the dependences of r 3 and r 4 on x and t, and their substitution in (202) gives the DSW density profile, whose envelope is compared in Fig. 18(b) with the results of a numerical solution of the Gross-Pitaevskii equation. Importantly, formulas (234) allow finding the main DSW parameters analytically.
For example, in the soliton limit r 3 = r 2 , the difference between these formulas on the boundary x = x L (t) gives the time dependence of r 4 in the form r 4 = 5at/4 + √ ρ 0 /6, substituting which in any of formulas (234) leads to the law of motion of the soliton edge of the DSW: x L (t) = 5 36 In the small-amplitude limit r 3 = r 4 , Eqs. (234) reduce to a single equation on the boundary x = x R (t): with the boundary value x R corresponding to the maximum of this function x(r 4 ) at a fixed value of t. This implies the dependence of t on y = r R / √ ρ 0 : substituting which in the limit expression for (234) gives The obtained formulas define the law of motion of the small-amplitude DSW edge in parametric form. At t = t b (y = 1), the coordinates of both edges are equal to the breaking point coordinate x b , in accordance with the fact that in the asymptotic Gurevich-Pitaevskii approach the DSW has a vanishing length at the instant of formation. The derived laws of motion for the DSW edges agree well with numerical solutions of the Gross-Pitaevskii equation [74]. The solution to the breaking problem for a simple wave expanding into a medium at rest and having a power-law profile r + ∝ (−x) 1/n at the instant of breaking can be found similarly for any integer n (see [75]).

MOTION OF EDGES OF 'QUASI-SIMPLE' DISPERSIVE SHOCK WAVES

A characteristic feature of a wave formed in the condensate as a result of the motion of a piston was that it expanded into the depth of the condensate at rest, and therefore in the DSW domain two out of the four Riemann invariants of Whitham's system were constant, and only the other two changed in the course of evolution. This is similar to the KdV case considered in Section 10, where one invariant was constant and two others were variable. In [25], DSWs of this type were called 'quasi-simple'. The law of motion of their edges can again be found in the theory of the Gross-Pitaevskii equation following a strategy similar to that presented in Section 10; in view of this close analogy, we here give only the basic facts of the corresponding theory [40,76]. For definiteness, we consider the breaking of a simple wave for which the invariant r − = u/2 − c = −c 0 is constant, where c = √ ρ is the local speed of sound, which takes the value c 0 = √ ρ 0 in the unperturbed domain of the condensate. We then have r + = u/2 + c = 2c − c 0 and v + = 3c − 2c 0 , and the solution of dispersionless equations (192) can be written as where x(c − c 0 ) is a function inverse to the initial distribution c − c 0 = w(x) at the instant of breaking t = 0. We first assume that the initial pulse is 'positive,' i.e., c − c 0 > 0. This solution borders the soliton edge of the DSW, which moves with the soliton velocity V s = (r 4 + r 2 )/2 = c, where we used the fact that r 2 = −r 1 = c 0 along the quasi-simple DSW and r 4 = r + = 2c − c 0 at the matching point. Therefore, dx L − c dt = 0, and dispersionless solution (238) on the boundary with the DSW for x = x L must be compatible with the equation where x L and t are regarded as functions of the local speed of sound c, which varies on the soliton edge as a result of the DSW evolution. After eliminating x L , we hence obtain the equation, solving which with the initial condition t(0) = 0, together with the equation defines the law of motion of the soliton DSW edge over a monotonic dispersionless profile in parametric form.
If the profile is not monotonic and has a maximum c m = c 0 + z m , then, for t > t m = t(z m ), when the soliton edge borders the branch x 2 (c − c 0 ) of the dispersionless solution, instead of (241) and (242) we easily find the relations where c 0 + c 0 (x) is the initial distribution of the local speed of sound. At asymptotically large times, we hence find In this asymptotic limit, the DSW amplitude becomes much less than the background density ρ 0 , and the Gross-Pitaevskii equation can be approximated for unidirectional wave propagation with the KdV equation; hence, solution (244) coincides with (151) in the corresponding variables. On the low-amplitude edge, in the same asymptotic regime, r 3 ≈ r 4 ≈ r m = 2c m − c 0 and r 2 = −r 1 = c 0 , and therefore formula (205) gives the wavelength In the case of a negative initial pulse with c 0 (x) = c − c 0 < 0, similarly, the small-amplitude edge borders the dispersionless solution (238), with the Riemann invariants of Whitham's system given by r 3 = r 4 = −r 1 = c 0 and r 2 = 2c − c 0 , where c is the local speed of sound on that edge. Therefore, the wavelength is here given by L = π/(2 √(c 0 (c 0 − c))), i.e., k = 4 √(c 0 (c 0 − c)), and this edge moves over the background with the parameters ρ = c 2 , u = 2(c − c 0 ) with the group velocity The compatibility condition of Eq. (238) with the equation leads to the differential equation whose solution gives a parametric law of motion of the right DSW edge. It is easy to rewrite it, with obvious changes, for localized pulses with a single local minimum. In the case of a negative initial pulse, the asymptotic state mainly consists of dark solitons, and it is easy to find the velocity of the deepest soliton on the left DSW edge. We here have r 4 = −r 1 = c 0 and r 2 ≈ r 3 ≈ r m = 2c m − c 0 , whence The number of dark solitons into which the initial negative pulse eventually decays can be found following the same strategy that we used to derive Karpman's formula (159) for the KdV equation. On the small-amplitude edge, we now have k(v g − V ) = k 3 /(4 √(c 2 + k 2 /4)) and k = 4 √(c 0 (c 0 − c)). Substituting these expressions into the general formula (155) and using (248) to replace the integration over t with integration over c, after simple transformations we obtain where c(x) is the initial distribution of the local speed of sound in the wave. The Gross-Pitaevskii equation, just like the KdV equation, is completely integrable, making the inverse scattering transform method [62] applicable to it, which allows finding [77,78] the general expression for the number of solitons originating from the pulse with the given initial distributions of the dispersionless Riemann invariants r ± (x): In our case of the evolution of the pulse in the form of a simple wave, r − (x) = −c 0 and r + (x) = 2c(x) − c 0 , and formula (251) reduces to (250). We must note, however, that both formula (159) for the KdV equation and formula (250) for the Gross-Pitaevskii equation can be represented as where k 0 (x) is the wave number on the small-amplitude edge corresponding to the initial distribution of the parameters of the simple wave. Formula (252) apparently is of a general nature and can also be applied to equations that are not completely integrable [79,80], for which the dependence k 0 (x) is to be found by solving the equation for the conservation of the number of waves along the trajectory of the small-amplitude edge [81,82].
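As a simple illustration of how such an estimate works in practice, the sketch below counts the dark solitons produced by a localized dip in the sound-speed profile, using the small-amplitude-edge wave number k 0 (x) = 4 √(c 0 (c 0 − c(x))) quoted above and assuming that the count takes the 'number of waves' form N ≈ (1/2π) ∫ k 0 (x) dx; both the Gaussian test profile and this assumed form of formula (252) are illustrative choices of the present sketch rather than quotations from the original text.

```python
import numpy as np

c0 = 1.0                                    # background sound speed (background density rho0 = c0**2)

def c_profile(x):
    """Illustrative negative pulse: a Gaussian dip of the local sound speed."""
    return c0 - 0.3 * np.exp(-(x / 5.0)**2)

x = np.linspace(-100.0, 100.0, 200001)
c = c_profile(x)

# Wave number on the small-amplitude edge for a negative pulse (formula quoted above).
k0 = 4.0 * np.sqrt(np.clip(c0 * (c0 - c), 0.0, None))

# Assumed 'conservation of the number of waves' estimate of the soliton count.
N = np.trapz(k0, x) / (2.0 * np.pi)
print("estimated number of dark solitons: %.1f (about %d solitons)" % (N, round(N)))
```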
BREAKING OF A CUBIC PROFILE IN THE GROSS-PITAEVSKII THEORY

In the general case, a wave governed by the Gross-Pitaevskii equation breaks in such a way that the profile of one of the dispersionless Riemann invariants r ± acquires a vertical tangent and can be approximately represented by a cubic curve near the inflection point. We assume for definiteness that the invariant r + undergoes breaking, and it hence varies in the neighborhood of that point very rapidly, which allows assuming the r − invariant to be constant. By an appropriate change of variables, it can be ensured that the condensate flow is described by the formulas up to the instant of breaking. These formulas give a solution of hydrodynamic equations (192). Naturally, it is assumed here that r 0 − < r + in the domain of interest, including the solution branch in (253) with r + < 0. For t > 0, solution (253) becomes multivalued. Taking dispersion into account, i.e., solving the full Gross-Pitaevskii equation, eliminates this multivaluedness by the formation of a DSW. Following the Gurevich-Pitaevskii approach, we solve this problem [74,78] in Whitham's approximation by incorporating the solution of Whitham's equations in dispersionless solution (253) such that the equality r 1 = r 0 − holds and the boundary conditions r 4 (x L (t), t) = r + (x L (t), t) at r 3 = r 2 and r 2 (x R (t), t) = r + (x R (t), t) at r 3 = r 4 are satisfied. Because the right-hand side of the first equation in (253) involves a cubic function of r 3 , we can satisfy all the conditions by taking solution (210) with r 1 = r 0 − and a suitable choice of the functions w i . These formulas implicitly define the dependence of the invariants r 2 , r 3 , and r 4 on x and t. In particular, investigating the limit r 3 → r 2 , we can easily find the law of motion of the soliton edge of the DSW: x L (t) = 1 2 r 0 − t − 1 6 The law of motion of the small-amplitude edge is defined in parametric form, with the time t depending on the parameters r 2 and r 4 as and the parameters themselves related as 21(r_-^0)^2 (4r_4 + r_2) − 10 r_-^0 (20 r_4^2 + 2 r_2 r_4 + r_2^2) + 16 r_4 (8 r_4^2 − r_2 r_4 − r_2^2) + 9 r_2^3 = 0. We see that this particular Gurevich-Pitaevskii problem has also been given a fully analytic solution.

CONCLUSIONS

We have presented the Gurevich-Pitaevskii theory for DSWs in some detail following [1] and other closely related papers. It remains to briefly mention some avenues of further development of this theory. We first note that, simultaneously with the appearance and development of the theory of DSWs, other important events were taking place in nonlinear physics associated with the discovery of the inverse scattering transform method for solutions of nonlinear wave equations [24,62,83]. A fundamental fact of that method is the relation between the so-called completely integrable equations, a class to which the KdV and Gross-Pitaevskii equations belong, and the associated linear spectral problems. For example, associated with the KdV equation is the problem of the spectrum of a quantum particle moving in the potential u(x, t); the relation is such that, in particular, the parameters of the soliton solution are related to the discrete spectrum of that potential. An extension of this method to periodic solutions of the KdV equation [84,85] has shown that the Riemann invariants of Whitham's system coincide with the endpoints of gaps where the motion of the quantum particle is forbidden in the corresponding periodic potential.
This allowed, on the one hand, generalizing the Whitham method to multi-phase solutions [63] and, on the other hand, extending it to other integrable equations. In particular, we have used Whitham's equations for the Gross-Pitaevskii theory, which were found in [60,61] by methods based on the complete integrability of that equation. It turns out as a result that three sets of parameters characterizing the periodic solutions arise naturally in the theory: (1) physical parameters ν i related to the wave amplitude and other quantities that bear a clear physical meaning; (2) the endpoints λ i of the spectral bands of the periodic spectral problem; (3) the Riemann invariants r i of Whitham's modulation system for the considered periodic wave. In the simplest case of the KdV equation, the relations among all these parameters are linear, and this is why Whitham could diagonalize the modulation equations derived for the physical parameters by choosing appropriate linear combinations. In the case of the Gross-Pitaevskii equation, the relation between λ i and r i remains linear, and that is why we were able to avoid invoking λ i in our presentation, but the physical parameters ν i are related to r i (or λ i ) by the more complicated formulas (201). This complication, technical at first glance, becomes fundamentally important when the relation between λ i and r i becomes multi-valued: one solution of Whitham's equations corresponds to two different periodic waves. This situation is characteristic of the so-called not genuinely nonlinear equations, in which the nonlinear terms can vanish for some amplitude of the wave. This was noted in [86] for a higher KdV equation, an element of a hierarchy of equations associated with the same spectral problem, and also in [87] for the modified KdV equation u t ± 6u 2 u x + u xxx = 0, where the coefficient in the nonlinear term has a maximum or a minimum at u = 0, depending on the sign. In the problem of the evolution of a step-like profile, this led to the appearance of more complicated structures than the rarefaction waves and modulated cnoidal waves that we are familiar with from the theory outlined in the foregoing. A classification of such structures evolving from the initial discontinuity in accordance with the Gardner equation u t + 6(u ± αu 2 )u x + u xxx = 0, which occurs in the theory of internal water waves, was given in [88]. In the theory of the modified NLS equation iψ t + (1/2)ψ xx − i(|ψ| 2 ψ) x = 0, which has applications in nonlinear optics and magnetohydrodynamic waves, the use of all three sets of parameters becomes necessary: periodic solutions and Whitham's equations were obtained in [89], and the evolution of the initial discontinuity was analyzed in [90][91][92]. Finally, the most complicated case of this type, a ferromagnet with 'easy plane' anisotropy and the equivalent limit for two-component Gross-Pitaevskii equations, was studied in [93,94]. Besides the development of Whitham's averaging method, the discovery of the complete integrability of the most important equations in nonlinear wave physics has allowed developing other approaches to the theory of DSWs. In particular, it was shown in [96][97][98][99][100][101] that the solution to the Gurevich-Pitaevskii problem in Whitham's approximation can also be obtained as a semiclassical limit of exact multi-soliton solutions of the KdV equation.
Another aspect of a more exact theory of DSWs is that, similarly to how the linear problem solution (25) obtained by the averaging method is an asymptotic form of the Airy function, Whitham's approximation for breaking waves is a semiclassical asymptotic form of some special functions that are 'standard' solutions of the Painlevé nonlinear differential equations (see, e.g., [102][103][104]). Solutions expressed in terms of such special functions are also exact at the small-amplitude edge of the DSW. Another area of investigation is to generalize the Gurevich-Pitaevskii approach to equations that are not completely integrable. Naturally, the Whitham theory considered above for the perturbed KdV equation can be generalized to a rather wide class of equations close to completely integrable ones [48,95]. However, a large number of physically important equations do not fall into that category, and the modulation equations for periodic solutions of such equations do not have Riemann invariants in any approximation. Still, the general Gurevich-Pitaevskii approach is also valid for them, and some important characteristics of DSWs can be calculated with no Riemann invariants defined. The first important statement regarding such systems, made by Gurevich and Meshcherkin [105], was that only a DSW is formed in the breaking of a simple wave, and the constant Riemann invariant of the dispersionless limit transports its value across the DSW, despite the absence of a Whitham Riemann invariant conserved along the DSW. This statement is already sufficient to calculate the parameters of the plateau appearing between two wave structures in the evolution of a discontinuity. The next important step was made in [81,82], where it was noted that, on the border with a simple wave, Whitham's system reduces to an ordinary differential equation whose solution gives a relation between the DSW parameters on that edge. Because one of the modulation equations (the conservation law for the number of waves) is certainly known on the small-amplitude edge, the solution of that equation gives a relation between the wave number and the background amplitude of the wave. On the soliton edge, such an equation is absent in general. But it can be verified that, in the case of the KdV and Gross-Pitaevskii equations, the equation k̃ t + ω̃ x = 0 holds for pulse expansion into a medium at rest with two constant Riemann invariants, with k̃ being the inverse half-width of the soliton and ω̃(k̃) obtained from the linear dispersion law ω(k) by the substitution ω̃(k̃) = −iω(ik̃). According to an old remark by Stokes quoted in a note to §252 in [8], ω̃(k̃) determines the soliton velocity: the tails of the soliton propagate with the same velocity as the soliton itself, and on the tails the linearized equations have the same form as in the small-amplitude harmonic limit. Assuming the validity of the equation k̃ t + ω̃ x = 0 in the general case of the breaking of simple waves expanding into a 'quiescent' homogeneous medium with two constant dispersionless Riemann invariants, we can obtain an ordinary differential equation for the parameters along the soliton edge of the DSW. These two equations are entirely sufficient for finding the parameters of the edges of a DSW that forms in the evolution of a discontinuity governed by a non-integrable equation, as was indeed done in a series of studies [79,82,[106][107][108][109][110][111][112].
Requiring the compatibility of the thus obtained ordinary differential equation with the solution of the dispersionless equations on that boundary allows obtaining the equation of motion for the DSW edge propagating over the general profile of a simple wave [40,76,113]. A new type of DSW can occur when higher-order dispersion effects are taken into account and the soliton velocity is equal to the phase velocity of linear waves, so that the two are in resonance with each other. The general Gurevich-Pitaevskii approach is also applicable in that case [114][115][116][117]. In this paper, we mentioned applications of the Gurevich-Pitaevskii problem to water waves, plasmas, Bose-Einstein condensates, and nonlinear optics. To these, we can add the observations and the theory of DSWs in internal waves in the ocean [118] and the atmosphere [119], and in jets of a liquid in viscous media [120,121]. The Gurevich-Pitaevskii approach to the DSW theory also extends to waves with several spatial variables [123] and finds applications in other areas of physics, including the quantum gravity theory [102]. The reader can find more examples of DSWs, e.g., in review [124] and the references therein. In addition, the creation of the DSW theory was related to the substantial progress in modern mathematical physics, and the reader can glean some aspects of the mathematical theory from reviews [125,126]. To conclude, we can say that in the years that have passed since the appearance of paper [1], the Gurevich-Pitaevskii problem, understood as a general approach to the DSW theory based on Whitham's modulation equations, has become an area of vibrant research in nonlinear physics, with a distinctive problem setting, profound mathematical methods for solving problems, and clear physical ideas that enrich the entire physics of nonlinear waves. I am grateful to L. P. Pitaevskii for discussions of the problems considered in this paper and for his useful remarks.
Whole brain radiation therapy in the management of brain metastasis: results and prognostic factors

Purpose: To evaluate the prognostic factors associated with overall survival in patients with brain metastasis treated with whole brain radiotherapy (WBRT) and to estimate the potential improvement in survival for patients with brain metastases, stratified by the Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA) class.

Patients and methods: From January 1996 to December 2000, 270 medical records of patients with a diagnosis of brain metastasis who received WBRT at the Hospital do Cancer A.C. Camargo, Sao Paulo, during this period were analyzed. Surgery followed by WBRT was used in 15% of patients, and the other 85% of patients received WBRT alone; in this cohort, 134 patients (50%) received the fractionation schedule of 30 Gy in 10 fractions. The most common primary tumor type was breast (33%) followed by lung (29%), and a solitary brain metastasis was present in 38.1% of patients. The prognostic factors evaluated for overall survival were: gender, age, Karnofsky Performance Status (KPS), number of lesions, localization of lesions, primary tumor site, surgery, chemotherapy, absence of extracranial disease, RPA class, and radiation dose and fractionation.

Results: The overall survival (OS) at 1, 2 and 3 years was 25.1%, 10.4% and 4.3%, respectively, and the median survival time was 4.6 months. The median survival time after WBRT according to RPA class was 6.2 months for class I, 4.2 months for class II and 3.0 months for class III (p < 0.0001). In univariate analysis, the significant prognostic factors associated with better survival were: KPS higher than 70 (p < 0.0001), neurosurgery (p < 0.0001) and solitary brain metastasis (p = 0.009). In multivariate analysis, KPS higher than 70 (p < 0.001) and neurosurgery (p = 0.001) remained positively associated with survival.

Conclusion: In this series, patients with a higher performance status, RPA class I, and treated with surgery followed by whole brain radiotherapy had better survival. These data suggest that patients with cancer and a single metastasis to the brain may be treated effectively with surgical resection plus radiotherapy. The different radiotherapy doses and fractionation schedules did not alter survival.

Background

Brain metastases represent an important cause of morbidity and mortality, and are the most common intracranial tumors in adults, occurring in approximately 10% to 30% of adult cancer patients [1]. The risk of developing brain metastases varies according to primary tumor type, with lung cancer accounting for approximately one half of all brain metastases [2]. The prognosis of patients with brain metastases is poor; the median survival time of untreated patients is approximately 1 month [3]. With treatment, the overall median survival time after diagnosis is approximately 4 months [4]. The Radiation Therapy Oncology Group (RTOG) recursive partitioning analysis (RPA) describes three prognostic classes, defined by age, Karnofsky Performance Score (KPS), and disease status [5]. The most widely used treatment for patients with multiple brain metastases is WBRT. The appropriate use of WBRT can provide rapid attenuation of many neurological symptoms, improve quality of life, and is especially beneficial in patients whose brain metastases are surgically inaccessible or when other medical considerations remove surgery from the list of appropriate options [6,7].
The use of adjuvant WBRT after resection or radiosurgery has been proven to be effective in terms of improving local control of brain metastases, and thus the likelihood of neurological death is decreased [8]. The majority of patients who achieve local tumor control die from progression of extracranial disease, whereas the cause of death is most often due to CNS disease in patients with recurrent brain metastases [7,8]. There is currently no consensus on the optimal radiation schedule for patients with brain metastases. Standard treatment regimens include all of the dose ranges evaluated in the early RTOG studies, and the choice depends upon issues such as the severity of CNS symptoms, the extent of systemic disease, and physician preference. In this cohort, we evaluated the prognostic factors and the importance of the RPA classification (RTOG) for survival in patients with a diagnosis of brain metastasis who received WBRT alone or postoperatively.

Materials and methods

The records of 270 patients with brain metastases, who were treated with WBRT at our institution between January 1996 and December 2000, were analyzed retrospectively. At diagnosis of brain metastasis, the following variables were analyzed for survival: age, sex, location of brain metastasis, primary tumor type, extent of disease, initial Karnofsky score, radiotherapy dose and fractionation schedule, surgery, chemotherapy and RPA class, shown in Table 1. Supportive care (oral prednisone) and neurological status were not evaluated. Chemotherapy was administered after WBRT to patients with active systemic disease. Brain metastases were detected by contrast-enhanced cerebral computed tomography (CT) or magnetic resonance imaging (MRI). WBRT was performed in all patients with cobalt-60 gamma rays or with 4 MV photons of a linear accelerator. The whole brain was irradiated by the usual bilateral fields that encompassed the cranium with a 1 cm margin. Individual shielding blocks were fabricated for all patients, when necessary. The total dose was 30-40 Gy, with a median of 35 Gy, in daily fractions of 2.0-3.0 Gy. During the study period, two fractionation schemes were used: conventional fractionation with daily fractions of 2 Gray (Gy), five days per week, to a planned total dose of 40 Gy (n = 102), and hypofractionation with daily fractions of 3 Gy, five days per week, to a planned total dose of 30 Gy (n = 134). Surgical resection was indicated for single brain metastases with a diameter of 3 cm or less, a favorable location and controlled systemic disease. Supportive care (oral prednisone) was introduced at the beginning of treatment or during radiotherapy. The recursive partitioning analysis (RPA) was used to classify the patients with brain metastases: Class I contained all patients with a Karnofsky performance status (KPS) ≥ 70, age < 65 years, a controlled primary tumor and no extracerebral metastases; Class III contained patients with a KPS < 70; and Class II contained all other patients, as shown in Table 1.

Statistical analysis

All patients alive at the time of analysis were censored at the date of last follow-up. The endpoint of the study was overall survival. Survival was calculated from the first day of radiotherapy using the Kaplan-Meier method. Survival curves were compared using the log-rank test. The covariates examined in all cases were: age, sex, location of brain metastasis, primary tumor type, extent of disease, initial Karnofsky score, radiotherapy dose and fractionation schedule, neurosurgery and RPA class.
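As a purely illustrative aside on the classification rule just described, the RPA assignment reduces to a short decision function; the sketch below is a hypothetical helper written here to make the rule concrete, not code used in the study.

```python
def rpa_class(kps, age, primary_controlled, extracerebral_mets):
    """Assign an RTOG RPA class from the criteria described above.

    Class I:   KPS >= 70, age < 65 years, controlled primary tumor, no extracerebral metastases.
    Class III: KPS < 70.
    Class II:  all other patients.
    """
    if kps < 70:
        return 3
    if age < 65 and primary_controlled and not extracerebral_mets:
        return 1
    return 2

# Example: KPS 80 and a controlled primary tumor, but age 65 -> class II.
print(rpa_class(kps=80, age=65, primary_controlled=True, extracerebral_mets=False))  # prints 2
```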
All factors with a P-value ≤ 0.05 at univariate analysis were entered into a multivariate analysis using the proportional hazards model (Cox regression) with a confidence interval of 99%.

Results

The overall survival rate at 1, 2 and 3 years was 24%, 9.4%, and 4.3%, respectively (Figure 1). Three patients were alive at the time of this analysis, with a median survival time of 4.42 years (range, 3.8-5.1). All these patients had a single brain metastasis, high KPS and controlled extracranial disease, and had undergone neurosurgery before WBRT. The median survival time for all the studied patients was 4.6 months (95% CI 3.7-6.4). The RPA class analysis showed a strong relation with survival (p < 0.0001), and the median survival time by RPA class was 6.2 months for class I, 4.2 months for class II and 3.0 months for class III. The significant prognostic factors associated with better survival were: higher KPS (p < 0.0001), neurosurgery (p < 0.0001) and single metastasis (p = 0.009), shown in Table 2 and Figures 2, 3 and 4. In multivariate analysis, the factors positively associated with survival were neurosurgery (p = 0.001, HR = 2.0, 99% CI 1.2-3.3) and KPS higher than 70 (p < 0.001, HR = 1.56, 99% CI 1.19-2.04), shown in Table 3.

Discussion

Brain metastases are the most common form of intracranial tumor, accounting for significantly more than one-half of brain tumors in adults. Because of advances in the diagnosis and management of this condition, most patients receive palliative treatment and the majority do not die from their brain metastases. In this cohort, we evaluated patients with brain metastases, either multiple or solitary lesions, who received WBRT alone or WBRT after surgical resection of the lesion. The goal of postoperative WBRT in patients with a solitary brain metastasis is to destroy microscopic residual cancer cells at the site of resection and at other locations within the brain. Until recently, the value of this approach was derived exclusively from retrospective studies [8,11,12]. Several of these studies found that adjuvant WBRT reduced the recurrence rate, and two studies demonstrated prolonged survival [12,13]. One randomized trial has examined the role of postoperative WBRT in patients with a single metastasis [13]. In this study, patients who received radiation were significantly less likely to fail in the brain (18% vs 70%) and were significantly less likely to die of neurological causes. In our series, patients submitted to resection plus WBRT were significantly less likely to die (p = 0.001), mainly those with a solitary metastasis and higher KPS. The Radiation Therapy Oncology Group (RTOG) has attempted to determine the optimal dose fractionation schedules for patients with brain metastasis in various randomized trials [9][10][11]. All these trials have failed to show any benefit in survival for different doses and fractionation schedules of treatment. In this cohort, neither 40 Gy in 20 fractions nor 30 Gy in 10 fractions was associated with any survival benefit (p = 0.8). According to our data, patients with a good prognosis (RPA class I) who are likely to survive more than six months, such as those with a single metastasis and controlled systemic disease, should be treated with prolonged fractionation to decrease the likelihood of late CNS toxicity.

Figure 4: Overall survival by number of lesions (log-rank).
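As a practical note on the survival analysis described in the Methods (Kaplan-Meier estimates, log-rank comparison of RPA classes, and multivariate Cox regression), a minimal sketch of such a workflow in Python is shown below. The data frame, the column names and the use of the lifelines package are hypothetical illustrations of the method, not the actual study data or analysis code.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Hypothetical data: one row per patient (survival in months, death indicator, covariates).
df = pd.DataFrame({
    "months":   [2.1, 4.6, 7.0, 3.2, 6.2, 1.5, 12.0, 4.0, 9.1, 2.8],
    "death":    [1,   1,   1,   1,   0,   1,   1,    1,   0,   1],
    "rpa":      [3,   2,   1,   3,   1,   3,   1,    2,   1,   2],
    "kps_ge70": [0,   1,   1,   0,   1,   0,   1,    1,   1,   0],
    "surgery":  [0,   0,   1,   0,   1,   0,   1,    0,   1,   0],
})

# Kaplan-Meier median survival within each RPA class.
for rpa, grp in df.groupby("rpa"):
    km = KaplanMeierFitter().fit(grp["months"], grp["death"])
    print("RPA class %d: median survival %.1f months" % (rpa, km.median_survival_time_))

# Log-rank comparison of the RPA survival curves.
print("log-rank p-value:", multivariate_logrank_test(df["months"], df["rpa"], df["death"]).p_value)

# Multivariate Cox proportional-hazards model for the factors significant in univariate analysis.
# A small penalizer keeps the fit stable on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df[["months", "death", "kps_ge70", "surgery"]], duration_col="months", event_col="death")
cph.print_summary()
```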
Figure: survival estimate by RPA class.
The endpoint of this cohort was to evaluate the different prognostic factors related to overall survival and to analyze the importance of the recursive partitioning analysis (RPA) class (RTOG) in patients with brain metastasis. In our data, the prognostic factors associated with better survival were higher KPS (p < 0.0001), solitary metastasis (p = 0.009), resection of the lesion (p = 0.0001) and RPA class I (p = 0.0001); all of these prognostic factors have been reported by other authors [8,14,15,17,18]. The other factors analyzed (age, gender, chemotherapy, dose and fractionation schedule) were not associated with any effect on survival. RPA class in this study showed results similar to the RTOG protocols [5], with median survival times of 6.2 months for class I, 4.2 months for class II and 3.0 months for class III (p = 0.0001). These data demonstrate that the use of RPA class may identify the patients most likely to benefit from treatment and allow new therapies to be evaluated in homogeneous patient groups. In this study, patients with multiple brain metastases who received WBRT had poorer survival than patients with single brain metastases (p = 0.0001). We did not evaluate the use of supportive care (oral prednisone) plus radiotherapy versus supportive care alone, or WBRT alone versus supportive care. However, Horton et al. [19] compared WBRT plus supportive care (oral prednisone) versus supportive care alone. Median survival in the prednisone-alone arm was 10 weeks compared with 14 weeks in the combined arm (p-value not stated). The proportion of patients with an improvement in performance status was similar in the prednisone-alone and the combined WBRT-and-prednisone arms (63% versus 61%, respectively). Data on tumor response, intracranial progression-free duration, quality of life and toxicity were not reported. In our study, no patients received radiosurgery (SRS); however, a large recently published trial (RTOG 95-08) [20] provides compelling evidence for the use of an SRS boost following WBRT in patients with one to three newly diagnosed brain metastases. In RTOG 95-08, SRS after WBRT was validated with level 1 evidence as a standard-of-care option in the management of patients with a single brain metastasis. In another recently published prospective randomized Japanese trial, JROSG 99-1, patients were randomly assigned to SRS alone versus WBRT plus SRS. The actuarial 6-month freedom from new brain metastases was 48% in the SRS-alone arm and 82% in the SRS-plus-WBRT arm (P = 0.003). The actuarial 1-year brain tumor control rate for the lesions treated with SRS was 70% in the SRS-alone arm and 86% in the SRS-plus-WBRT arm (P = 0.019) [21]. Clinical trial-based assessments therefore suggest high rates of intracranial failure and reduced local control when WBRT is omitted or delayed. In conclusion, WBRT continues to be an efficacious treatment in the management of brain metastasis. Patients in RPA class I may be effectively treated with local resection or radiosurgery followed by WBRT, mainly those with a single metastasis, a higher KPS and controlled extracranial disease. Despite the use of WBRT, outcomes are poor, and efforts should be made to incorporate multimodality approaches including surgery, radiosurgery, chemotherapy and radiotherapy sensitizers to improve survival.
Figure: overall survival by neurosurgery (log-rank).
Codon Usages of Genes on Chromosome, and Surprisingly, Genes in Plasmid are Primarily Affected by Strand-specific Mutational Biases in Lawsonia intracellularis
In this study, the factors driving genome-wide patterns of codon usage in the Lawsonia intracellularis genome are determined. For genes on the chromosome of the bacterium, it is found that the most important source of variation results from strand-specific mutational biases. A lesser trend of variation is attributable to genes that are presumed to be horizontally transferred. These putative alien genes are unusually GC-rich relative to the other genes, whereas horizontally transferred genes have been observed to be AT-rich in bacteria with medium and relatively low G + C contents. Hydropathy of the encoded protein and expression level are also found to influence codon usage. Therefore, codon usage in the L. intracellularis chromosome is the result of a complex balance among different mutational and selectional factors. When analyzing genes in the largest plasmid, it is found for the first time that strand-specific mutational biases are responsible for the primary variation of codon usage in a plasmid. Genes, particularly highly expressed genes of this plasmid, are mainly located on the leading strands, and this is supposed to be the effect exerted by replicational-transcriptional selection. These facts suggest that this plasmid adopts a similar mechanism of replication to that of the chromosome in L. intracellularis. Common characters among the 10 bacteria in whose genomes strand-specific mutational biases are the primary source of variation of codon usage are also investigated. For example, it is found that the genes dnaT and fis, which are involved in DNA replication initiation and re-initiation pathways, are absent in all 10 of these bacteria.
Introduction
When sequences of hundreds of microbial protein-coding genes became available in 1980, Grantham et al. 1,2 analyzed the frequencies of the 61 codons in all these genes. Consequently, they found that a surprising consistency of choices exists among genes of the same or similar genome. The 'genome hypothesis' was thereby proposed. 1,2 Soon after that, it was shown that evident intra-genomic variability existed in many microorganisms. 3 This variation was interpreted as the effect of natural selection acting at the level of translation, which resulted in the preferential usage of optimal codons. 4 The interpretation was reinforced by the finding that the preferred codons in highly expressed genes were recognized by the most abundant tRNAs in Escherichia coli 5 as well as in Saccharomyces cerevisiae. 6 The selective advantage of optimal codons seems to lie in maximizing the efficiency of translation, particularly during periods of competitive exponential growth. 7 In bacteria with a slow growth rate, selected codon usage bias may be relatively weak. 7 There may be no such bias in those bacteria for which competitive growth is unimportant. 8 On the other hand, the preferred codons vary among species based on changes in the complement of tRNAs in each bacterium. 9 Besides translational selection, replicational and transcriptional selection may also have influence on the codon usage of a gene.
10,11 Replicational selection is responsible for the higher number of genes on the leading strands, and transcriptional selection appears to be responsible for the enrichment of highly expressed genes on these strands. The effects of mutation may be superimposed on biases generated by natural selection. 12 In most bacteria, there are short chromosome segments of unusual base composition due to the relatively recent import of the region through horizontal transfer. 13,14 Genes located in these regions possess distinct codon usage or nucleotide composition from other genes, for example, in E. coli 15 and in Bacillus subtilis. 16 In a single known example, Mycoplasma genitalium, codon usage variation is continuous and associated very strongly with position on the chromosome, perhaps reflecting change in the spectrum of mutations around the genome. 17 In addition, many bacteria exhibit skewed base composition between the leading and lagging strands of replication, although the magnitude of this skew varies considerably among species. 18 Based on the above narration, the variation of codon usage of genes within a species is due to the combined effect of mutation and selection. 12 Among these factors, the bias from asymmetric replication mechanism received special attention of researchers in the past 10 years. 19 Many researches have been performed to analyze the effect on the codon usage (and/or amino acid composition) exerted by the asymmetric mutation and to investigate the underlying mechanism of the different mutation. 20 -27 The skewed base composition between two replicating strand was first observed in E. coli, M. genitalium, Haemophilus influenzae and B. subtilis. 28,29 Then similar observations were obtained in most of the other bacteria. In 1998, for the first time it was found that the asymmetric replication is the major source of codon usage variation. 10 This observation was obtained in Borrelia burgdorferi genome. 10 The effect of asymmetry was so strong that the codon usages of genes on the two replicating strands were separated, distinct. After that, the separated codon usages between two replicating strands were also observed in Treponema pallidum, 30 Chlamydia trachomatis, 11 Buchnera aphidicola, 31 Blochmannia floridanus, 32 Bartonella henselae, 33 Bartonella quintana, 33 Tropheryma whipplei 34 and Chlamydia muridarum. 35 Lawsonia intracellularis is an obligate intracellular Gram-negative bacterial pathogen. 36 Though primarily recognized in pigs, L. intracellularis is spreading to a wide range of mammals such as horses, and hamsters in North America and elsewhere. The bacterial pathogen invades the intestinal epithelial cells, which causes hyperplasia of the infected cells and leads to the process of disease pathogenesis. The disease has two clinical manifestations: an acute hemorrhagic form often referred as porcine hemorrhagic enteropathy, and a more chronic proliferative form often called porcine intestinal adenomatosis. Genome of L. intracellularis PHE/MN1-00 was determined in 2006, which provided a wonderful opportunity to extract a wealth of information on biochemistry, genetics, evolutionary history and pathogenicity of this organism. Traditionally, codon usage data have been used in a wide variety of areas. 10 It is often desirable to use codon usage information to reduce the redundancy of primers for the PCR. Optimizing the codon usage of a gene could increase its expression level. Codon usage tables have been used to identify those ORFs that may encode proteins. 
Codon usage patterns have also been used to identify ORFs that probably do not code for functional proteins. Because of the importance of this intracellular pathogen and the potential uses of codon usage patterns, the intragenomic variation in codon usage in L. intracellularis PHE/MN1-00 has been investigated through a multivariate analysis method in this study.
The database
The complete genome sequence of L. intracellularis PHE/MN1-00 was downloaded from the GenBank ftp site. One chromosome and three plasmids are contained in the complete genome. In this work, Plasmid 1 and Plasmid 2 are not taken into account because they contain too few genes to be analyzed statistically. Plasmid 3 is the largest plasmid and is also analyzed. The chromosome has 1 457 619 bp and Plasmid 3 has 194 553 bp. A total of 1180 and 104 protein-coding genes are listed in the annotations of the chromosome and the largest plasmid, respectively. No attempt was made to alter the sequences or to remove genes of unknown function. The FASTA-formatted files, which are used as input files for the codonW software, are provided as Supplementary data. Supplementary File 1 contains the DNA sequences of the 1180 genes located on the chromosome of L. intracellularis; the first 607 genes correspond to those located on the leading strands and the last 573 to those on the lagging strands. Supplementary File 2 contains the DNA sequences of the 104 genes in the largest plasmid (Plasmid 3) of L. intracellularis; the first 68 genes correspond to those located on the leading strands and the last 36 to those on the lagging strands.
Statistical analysis
Most analyses were carried out using codonW, 37 which can be freely downloaded from the website (http://sourceforge.net/projects/codonw/). GC3s denotes the frequency of G and C at the third synonymously variable coding position (excluding Met, Trp and termination codons). Nc denotes the 'effective number of codons' used in a gene. 38 When all sense codons are used randomly, Nc takes a value of 61. Lower values of Nc indicate stronger bias, with an extreme value of 20 when only one synonymous codon is used for each amino acid. After calculating Nc and GC3s for each gene, an Nc-GC3s plot can be made, which shows whether there are genes whose codon usage is affected by genome composition pressure and by natural selection or mutation. An expected curve is plotted using the formula Nc = 2 + s + 29/[s^2 + (1 - s)^2], where s is the GC3s value. For each gene, the codon adaptation index (CAI) and hydropathy value (gravy) are also calculated by codonW. Correspondence analysis (COA), as implemented in codonW, was used to determine the major source of variation of codon usage among the genes on the chromosome and the genes in Plasmid 3. As suggested by Perrière and Thioulouse, 39 parallel COA on codon counts and on relative codon frequencies were performed and the results were then compared. In addition, COA was carried out for genes on the chromosome, for genes in the largest plasmid and for the genes located on the leading strands of the chromosome, respectively. Relative synonymous codon usage (RSCU) is defined as the observed frequency of a codon divided by that expected when all codons for that amino acid are used equally. Therefore, RSCU values close to 1.0 indicate a lack of bias for that codon. Compared with simple measurements of codon abundance, RSCU values are normalized and are much more independent of amino acid usage.
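The RSCU definition above maps directly onto a few lines of code. The following is a minimal, illustrative Python sketch (not part of the original analysis, which used codonW); the reduced codon table and toy coding sequence are assumptions for demonstration only.

# Illustrative sketch: RSCU = observed codon count / expected count if all
# synonymous codons for that amino acid were used equally.
from collections import Counter

# Tiny, assumed codon table restricted to two amino acids for demonstration.
SYNONYMS = {
    "Phe": ["TTT", "TTC"],
    "Gly": ["GGT", "GGC", "GGA", "GGG"],
}

def rscu(cds: str) -> dict:
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    counts = Counter(codons)
    values = {}
    for aa, syn in SYNONYMS.items():
        total = sum(counts[c] for c in syn)
        if total == 0:
            continue
        expected = total / len(syn)          # equal usage of each synonym
        for c in syn:
            values[c] = counts[c] / expected
    return values

print(rscu("TTTTTCTTTGGTGGTGGA"))  # toy coding sequence (assumption)

A full analysis would use the complete 59-codon table and one vector of RSCU values per gene, exactly as described for the COA input below.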
Only those codons for which there is a synonymous alternative were used in the analysis. Hence, the three termination codons and the codons that encode methionine and tryptophan are excluded. Consequently, each gene is described by a vector of 59 variables (codons). COA maps all the genes analyzed into this 59-dimensional space and attempts to identify a series of new orthogonal axes accounting for the greatest variation among genes. The first principal axis is chosen to maximize the standard deviation of the derived variable, and the second principal axis is the direction that maximizes the standard deviation among directions uncorrelated with the first, and so forth. For details about this method, refer to Dillon and Goldstein. 40 GC skew [(G - C)/(G + C)] was used to determine the origin and terminus of replication for the chromosome and the plasmid. 28 A non-overlapping sliding window of 1000 bp was employed for the GC skew. For the chromosome, the origin site is assumed to lie between genes LI0775 and LI0776, whereas the terminus lies between genes LI0227 and LI0228. For the largest plasmid, the origin lies before gene LIC020, whereas the terminus lies between genes LIC062 and LIC063. In order to check whether there are significant differences in codon usage between genes on the leading strands and those on the lagging strands, a χ² test was employed. Significance was examined at the 5% level (χ² value of 3.841) and was evaluated for the 59 sense codons for which there is a synonymous alternative.
Global codon usage of genes on the chromosome
It is widely accepted that global codon usage in unicellular species that display extremely biased genomic composition is predominantly shaped by compositional pressure. 41 This viewpoint is confirmed again in the L. intracellularis genome. As can be seen from Table 1, the global codon usage in the 1180 genes on the chromosome of L. intracellularis shows the expected bias toward AT-rich codons. This enrichment is much stronger at the third codon position than at the first two positions. For all of the 18 amino acids (excepting Met and Trp), the frequencies of A- or T-ending codons are much higher than those of G- or C-ending synonyms. As suggested by Wright, 38 a plot of Nc against GC3s can give a useful visual display of the main features of codon usage patterns for a number of genes. If a gene is only subject to G + C-biased mutational pressure, it will lie on the expected GC3s curve. It will lie just below the curve if the gene is under selection (either negative or positive) for codons in C and/or G. In other words, the gene will lie above the curve if it is subject to other kinds of selection and/or other kinds of mutational pressure. Such a plot for the genes on the chromosome is shown in Fig. 1. It can also be seen that one-third of the genes lie above the curve, and the remaining two-thirds lie below it. For genes that lie above the curve, there may exist mutational or selectional pressure that favors A- and/or T-ending codons. On inspection, almost all of these genes are located on the lagging strands and most are likely to be expressed at low levels. It should also be noted that dozens of genes are located far from the majority. These genes have high GC3s and high Nc values and are marked by open triangles in the plot. The origin of these genes will be discussed in a later section.
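As a concrete illustration of the GC-skew calculation used above and in the next section, the following is a minimal Python sketch (an assumption for illustration, not the authors' implementation) that computes (G - C)/(G + C) in non-overlapping 1000-bp windows; sign changes in the resulting series mark the putative origin and terminus of replication.

# Illustrative GC-skew scan: (G - C) / (G + C) per non-overlapping window.
def gc_skew(sequence: str, window: int = 1000) -> list:
    skews = []
    for start in range(0, len(sequence) - window + 1, window):
        chunk = sequence[start:start + window].upper()
        g, c = chunk.count("G"), chunk.count("C")
        skews.append((g - c) / (g + c) if (g + c) > 0 else 0.0)
    return skews

# Toy usage on a made-up sequence; with a real chromosome, polarity switches
# in this series suggest the replication origin and terminus.
print(gc_skew("GGGGCCAATTGGCC" * 200, window=1000))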
3.2 Strand-specific composition bias at three codon positions of genes on the chromosome
GC-skew analysis shows a clear polarity switch at two points, around 284 and 975 kb on the chromosome of L. intracellularis, suggesting that the putative replication terminus and origin sites might be located in these regions. According to the record for this bacterium in the DoriC database, 42 the GC disparity, which is a component of the Z curve, also shows a clear minimum and maximum at these points for this genome. Comparisons with the consensus sequence for the non-perfect DnaA box motif (ttttcaaca) reveal that a non-translatable region between 983 356 and 984 329 bp possesses a cluster of three putative DnaA boxes, thereby confirming the possible location of the functional chromosomal origin between the two genes LI0775 (trkH) and LI0776 (psd). The existence of a dnaA gene around 980 kb further confirms this location as the replication origin. Table 2 shows the frequencies of the nucleotides A, C, G and T and the mean G + T content at the three codon positions of the genes located on the leading and lagging strands of the L. intracellularis chromosome. Strand-specific skews, known to influence codon usage in other bacteria, 18 are found to have the same influence on L. intracellularis. The leading-strand genes show an excess of G over C and of T over A, whereas the case for the lagging strands is the opposite. A t-test shows that the mean G + T content at each codon position of the genes located on the leading strands is significantly different from that on the lagging strands of replication. This indicates that the strand-specific compositional bias has a significant influence on nucleotide selection not only at the third codon position but also at the first and second positions. The inter-strand variation in G + T content is highest at the third and lowest at the second codon position, whereas the intra-strand variation is highest at the second codon position, which is reflected by the highest standard deviations.
3.3 The first trend is associated with strand-specific mutational biases
COA of codon usage was used to study extensively and quantitatively the variation of codon usage among the 1180 genes on the chromosome. Because the use of relative codon frequencies sometimes introduces other biases and often diminishes the quantity of information to analyze, occasionally resulting in interpretation errors, we computed in parallel COA on codon counts and on RSCU and then compared the results in this study. Fig. 2(a) and (b) shows the positions of the genes along the first and second major axes produced by COA on codon counts and on RSCU values, respectively. The closeness of any two genes on each plot reflects the similarity of their codon usages. In the following sections, the factors that drive the variation of synonymous codon usage of genes on the chromosome are discussed. In both Fig. 2(a) and (b), the first axis separates the genes into two clusters with little overlap between them. The following two results could indicate that the two clusters correspond to genes on the leading and lagging strands of replication. (i) The first axis is found to correlate strongly with GC and AT skews, particularly at the third position. At the left of the first axis, genes are characterized by richness in the nucleotides G and T, whereas the opposite holds at the right.
On the other hand, it has been found that there is an excess of G relative to C in the leading strands and of C relative to G in the lagging strands in most bacterial genomes, which is frequently accompanied by an abundance of T over A in the leading strand. 18 (ii) The coordinates of individual genes along the first axis are plotted against the chromosomal locations of the corresponding genes in Fig. 3. Genes on the Watson strand and those on the Crick strand are denoted by black and gray squares, respectively.
(Table 2. Base compositions and mean G + T content at the three codon positions for genes located on the leading and lagging strands of the chromosome.)
It is found that genes on the left side of the Watson strand and those on the right side of the Crick strand have low coordinate values along Axis 1, whereas for the other genes the case is the opposite. In fact, genes on the left side of the Watson strand and those on the right side of the Crick strand correspond exactly to genes on the leading strands, and the others correspond to the lagging strands. Therefore, it is reasonable to say that the two clusters in Fig. 2 correspond to genes on the leading strands and lagging strands, respectively. After marking the genes located on the leading and lagging strands with different symbols in Fig. 2, this speculation is confirmed. A χ² test was performed on the RSCU of genes located on the leading versus lagging strands, and the results are listed in Table 3. As can be seen, 49 of the 59 codons are found to be significantly different between genes on the two strands of replication. The 23 codons used more frequently on the leading strands are G-ending or T-ending, except TTA, ACA, AGA and GCA. Among the 26 codons used more frequently on the lagging strands, 16 are C-ending and eight are A-ending; the exceptions are CTT and ACT. The results of the test confirm that there is a bias toward G and T in the leading strands and toward C and A in the lagging strands of replication. Therefore, it can be concluded that in L. intracellularis the leading and lagging strands of replication display an asymmetry in mutational biases and/or differential correction/repair rates, and, as shown in several other bacteria, 10,11,30-35 this difference is the most important source of codon usage variation. Furthermore, there are more annotated genes on the leading strands than on the lagging strands: the numbers are 607 and 573, respectively. However, these two numbers differ less in L. intracellularis than in B. burgdorferi. 10 Therefore, the effect of replicational selection is weaker than that in B. burgdorferi. Three sets of genes, comprising ribosomal proteins, translation/transcription processing factors and the major chaperone and degradation genes, 43 were chosen as representative of highly expressed genes. For these putative highly expressed genes, the distribution on the two replicating strands is much more skewed: more than 59% of the 61 putative highly expressed genes are transcribed on the leading strands. Thus, the differences between the leading and lagging strands indicate the combined effects of mutation and of selection induced by replication-transcription. Replicational selection, although weak, may be responsible for the higher number of genes located on the leading strands, 10,11 and transcriptional selection appears to be responsible for the enrichment of highly expressed genes on these strands. 10,11
Replicational-transcriptional selection coupled with asymmetric mutational bias is, therefore, the most important cause of intra-chromosome variation of synonymous codon usage in L. intracellularis. As mentioned above, separated codon usages between the two replicating strands have been found in nine bacteria. Among these species, B. burgdorferi shows an extremely strong bias of codon usage. 10 Lobry and Sueoka 22 described a method of graphically displaying the influence of replication bias on leading versus lagging strands. It would be interesting to compare the graphics, namely PR2-plots, obtained for the L. intracellularis chromosome with those of B. burgdorferi. As can be seen from Fig. 4(a) and (b), the strand biases of G/C, which are reflected by the values on the horizontal axis of the plot, are slightly weaker in L. intracellularis than in B. burgdorferi, whereas the strand biases of T/A (reflected by the values on the vertical axis) are much weaker in the former than in the latter. In both figures, the strand-specific biases are strong enough to separate the genes on the two replicating strands.
3.4 The second trend may be associated with horizontal gene transfer
When analyzing the second trend, it is found that there is a strong negative correlation between Axis 2 (COA/RSCU) and GC3s (r = -0.6559). If the analysis is restricted to the 87 genes whose coordinates along Axis 2 are less than -0.2, the correlation is more significant. Marking these genes by open triangles in the Nc-GC3s plot, it is found that they are located far from the others and have higher Nc and GC3s values. This indicates that the codon usage of these genes differs from that of the majority. As is well known, genes that were recently imported through horizontal transfer have unusual codon usages and base compositions. In Pseudomonas aeruginosa, putative alien genes showed higher Nc values than the majority of genes. 12 From the widely used public database HGT-DB, 14 we downloaded the information for all the predicted horizontally transferred genes in L. intracellularis. Among the 10 genes that have the lowest values along Axis 2, nine are predicted to be horizontally transferred genes according to the results in HGT-DB. 14 The exceptional gene is as short as 104 codons. Based on the above analysis, it is reasonable to conclude that most of the 87 genes may have been transferred horizontally. It should be noted that 20 of these genes are located around the replication terminus, which has been shown to be a hot spot of mutation and chromosome recombination. Fig. 5 shows the positions of the 1180 genes along the third and fourth major axes produced by the COA.
Influence of the expression level on the codon usage
In order to investigate whether codon usage patterns are further shaped by factors acting at the level of translation, we conducted COA of codon counts and RSCU values on the genes located on the leading strands of replication, because most of the 61 putative highly expressed genes are located on that strand. Fig. 6(a) and (b) shows the positions of the genes along the first and second major axes produced by such COA on codon counts and RSCU, respectively. As can be seen from the two figures, almost all of the putative highly expressed genes have positive scores along the second axis, whereas genes presumably expressed at low levels are scattered over the whole distribution area. In addition, we calculated the CAI value for each gene located on the lagging strands.
The correlation between the values of Axis 2 and CAI is statistically significant (r = 0.494, P < 0.0001). The above analyses suggest that the second axis is associated with expression level.
COA of RSCU of genes in the largest plasmid
COA of RSCU of the genes in the largest plasmid of L. intracellularis was performed to determine the most important factor that drives synonymous codon usage patterns. The other two plasmids were not analyzed because they contain too few genes to perform multivariate statistical analysis. A plot of the two most important axes from this COA is shown in Fig. 7. The first and second axes account for 12.7% and 8.2% of the total inertia. Based on the putative origin and terminus indicated by the GC-skew analysis in Fig. 8, 68 genes are found to lie on the leading strands and 36 on the lagging strands.
(Figure 8 caption: GC skew calculated using a non-overlapping sliding window of 1000 bp; two clear polarity switches suggest the putative origin and terminus of replication.)
The difference between the numbers of genes on the two replicating strands is greater in the plasmid than on the chromosome, suggesting that replicational selection is stronger in the former than in the latter. After marking the two groups of genes by open and filled circles, it is found that genes on the leading strands lie on the right side of Axis 1, whereas lagging-strand genes lie on the left side. A χ² test was performed on the RSCU of genes located on the leading versus lagging strands of this plasmid. Consequently, 23 of the 59 codons are found to be significantly different between genes on the two strands of replication. Among the 13 codons used more frequently on the leading strands, nine are G-ending or T-ending; the exceptions are TTA, ATA, CCA and GCA. Among the 10 codons used more frequently on the lagging strands, eight are C-ending or A-ending; the exceptions are CTT and TCT. Therefore, strand-specific mutational biases are responsible for the major variation of synonymous codon usage of genes in the plasmid. No known factors are found to correlate with the second axis of this COA. To test whether transcriptional selection exerts an influence on the genes, CAI values were calculated for the 104 genes in the plasmid using ribosomal protein-coding genes as the reference set. Consequently, it is found that all of the seven genes with the highest CAI values are located on the leading strands. This suggests that transcriptional selection does have an influence on the genes of the plasmid. Replicational selection and transcriptional selection are two different kinds of selective pressure. Although the two selective pressures yield similar consequences, they are very distinct. 10 The asymmetric mechanism of replication in the plasmid may also be shown by the following result. The CMR database at TIGR lists the functional categories of the known genes in this plasmid. After comparing the numbers of known genes on the two replicating strands, it is found that 15 of the 22 cell envelope-related genes are located on the lagging strands. On the other hand, 10 of the 11 genes with mobile and extrachromosomal element functions are located on the leading strands.
Underlying mechanisms of replicational-transcriptional mutation and selection
It is widely accepted that codon usage variation within a species is the combined effect of mutational bias and natural selection. 12 In the L. intracellularis chromosome, genes located on the two replicating strands are shown to have distinct codon usages.
On the other hand, genes, particularly those are highly expressed, are mainly located on the leading strands. It is important to investigate the underlying mechanisms of the two effects. According to McInerney 10 and Romero et al., 11 the former is caused by the strand mutational bias and the latter results from replication-transcription selection. Also, some researchers believe that the strand-specific compositional biases are not only the result of strand mutation biases but also the superimposition of differential mutation rate and differential correction/repair rates. 45 Among the theories aimed at explaining strand mutation biases, it seems that the cytosine deamination theory enjoys the most attention. 19 The deamination of cytosine results in the formation of uracil. In normal circumstance in vivo, cytosine is effectively protected against deamination because of the Watson -Crick base paring. But the rate of cytosine deamination increases 140 times when the DNA is single-stranded. 46 If the resulting uracil is not replaced with cytosine, C to T mutation occurs. During the replicating process, the leading strand is much more exposed in the single-stranded state. Therefore, the C to T mutation occurs more frequently in the leading strands than in the lagging strands and then the excesses of G relative to C and T relative to A are formed in the leading strands. According to Furusawa and Doi,47 such fidelity difference between the leading and lagging strands may make it possible to accelerate the evolution of unicellular and multicellular organisms and avoid the extinction of the population. As far as gene orientation biases are concerned, there exist similar explanations. For genes on the leading strands, RNA polymerase, when transcribing them, moves in the same direction as a replication fork would move during replication, whereas opposite for genes on the lagging strands. Transcriptionreplication should be more effective if one organism maintains most of its genes, particularly highly expressed genes on the leading strands. The high efficiency results from the three factors: (i) the same direction reduces the probability of head-on collisions between the polymerases involved in the replicational and transcriptional processes; (ii) transcription may not be aborted by the replication complex; and (iii) the inverse orientation is very disadvantageous because of the possible lack of solution mechanism for head-on collisions. 48 In highly expressed genes, the selective advantage of transposition to the leading strands is more significant than that of lowly expressed genes. Therefore, highly expressed genes are much more likely to overcome random genetic drift, and these genotypes become fixed more easily in the population. Lowly expressed genes do not interfere with replication to such an extent as highly expressed genes, and, also, the interruption of lowly expressed gene transcription is not nearly as deleterious. So, the selective advantage is not so great in lowly expressed genes. Transposition of a lowly expressed gene from a lagging strand to a leading strand may not offer a sufficient selective advantage and therefore may not become fixed so easily in the population. 10 4.2. Codon usages of genes in the plasmid and asymmetric mechanism of replication Usually, bacterial plasmids replicate using a different mechanism than that of the chromosome of their host cell. As an exception, cumulative skew diagrams showed that plasmid and chromosome of B. 
burgdorferi adopted a similar bi-directional replication. 49 Such common replication mechanism was consistent with previous suggestions that Borrelia plasmids were actually mini-chromosomes. 50 In this work, GC-skew analysis in Fig. 8 shows clear polarity switch at two points, around 27 and 115 kb in the largest plasmid of L. intracellularis. This suggests that this plasmid replicates bi-directionally from an internal origin as the chromosome does. Leading strands and lagging strands are hence determined based on the putative origin and terminus. COA shows that genes on the leading and lagging strands have distinct codon usages. The same results are observed in genes on the chromosome, which is supposed to be caused by the strand-specific mutational bias. Similarly, the difference between codon usages of genes on the two replicating strands in the plasmid is very likely to result from the different mutation and/ or repair rates. If this speculation is the real case, then common asymmetric replication would be involved in the chromosome and the largest plasmid of L. intracellularis. Not only both replicate bi-directionally from internal origin, but also have biased mutation/repair rate between the two replicating strands. Replicational-transcriptional selection is also found to exert pressure in the largest plasmid of L. intracellularis. The fact that most (68/104 ¼ 65.4%) genes are located on the leading strands suggests the existence of replicational selection. In addition, all of the seven genes that have the highest CAI values are located on the leading strands. This suggests that transcriptional selection has influence on the genes of the plasmid. All the above facts suggest that this plasmid adopts the similar mechanism of replication as the chromosome in L. intracellularis. Perhaps, the largest plasmid of L. intracellularis is one mini-chromosome, as that in B. burgdorferi. 50 4.3. The 'GC-richness' of the putative alien genes in L. intracellularis Over 10 years ago, the tendency of horizontally transferred genes to be A þ T-rich had been noted in species having intermediate G þ C contents. 51 After that, the same phenomenon was observed for Helicobacter pylori and Streptococcus pneumoniae, 52 which have low G þ C contents. This striking pattern raises questions about the nature and the source of these horizontally transferred genes. Lawrence and Ochman 53 hypothesized that the recently transferred genes were adapted to the genomic context of other distant species. The results of Daubin et al. 52 suggest that either the donor genomes are always more A þ T rich than the acceptor genomes or there is a bias toward the internalization of A þ T-rich exogenous DNA in the genome. However, the putative alien genes in L. intracellularis are found to have higher GC than the other genes. And even, some of the putative alien genes are GCrich. Perhaps, someone will think this is an exception. But we do not think so. In contrast, we suppose that this may be a usual pattern of bacterial genomes with very low GC contents. Genomic GC contents of H. pylori and S. pneumoniae, in which AT-rich alien genes are found, are low but not very low. Their GC contents are still 6% higher than those of L. intracellularis, which is 33%. For bacteria with GC content low as 33%, it is difficult to obtain a donor species that have higher AT contents. Therefore, alien genes will be inclined to GC-richer than the other genes in the acceptor species. 
Common genomic characters of bacteria in which strand-specific mutational biases are strong
Up to now, strand-specific mutational bias has been found to be the most important factor affecting codon usage in the genomes of 10 bacteria: B. aphidicola, B. burgdorferi, B. floridanus, B. henselae, B. quintana, C. muridarum, C. trachomatis, T. pallidum, T. whipplei and L. intracellularis, the last of which is reported in this work. Investigation of the common genomic characters of these bacteria may be useful, and several characters are analyzed in the following sections. First, the chromosomes of the 10 bacteria are all shorter than 2000 kb. According to statistics on fully sequenced genomes, bacteria vary from 160 to more than 10 000 kb in chromosomal length. However, these species are all small bacteria in terms of chromosomal length, although some of them are not endosymbionts. Hence, we hypothesize that a short chromosome is a necessary condition for generating a sufficiently strong strand-specific mutational bias. Perhaps, in bacteria with larger chromosomes, the mutational pressure can hardly prevail over translational selection. Alternatively, among genomes that have undergone reductive evolution, the repair mechanism of replication may be inefficient. Secondly, all of the 10 bacteria have medium or low genomic G + C content. Among these species, B. aphidicola has the lowest G + C content, at 26%, whereas T. pallidum has the highest, at 52%. Perhaps an environment of high G + C content is adverse to the generation of strong strand mutation biases. Future experimental work is needed to clarify the relationship between the replication mechanism and genomic GC content or genome size. Thirdly, the strong mutation bias may be associated with the presence or absence of certain genes involved in chromosome replication. As suggested by Klasson and Andersson, 25 the strong strand-specific mutational bias in endosymbiont genomes coincides with the absence of genes for replication restart pathways. They performed a comparative analysis of 20 γ-proteobacterial genomes and found that endosymbiont bacteria lacking recA and other genes involved in replication restart processes, such as priA, displayed the strongest strand bias. 25 Driven by this viewpoint, we investigated the presence and absence of replication restart-related genes in the 10 genomes in which the strand mutation biases are strong enough to generate distinct codon usages. The analysis involves the genes mutH, priA, topA, dnaT, fis and recA, all of which are associated with replication initiation and re-initiation. 25 Consequently, all of these genes are found to be absent in B. floridanus, and five are absent in B. aphidicola, with mutH as the exception. For the other eight bacteria, priA, topA and recA are present and the other three genes are absent. In short, dnaT and fis are absent in all 10 genomes, whereas both genes exist in E. coli and other γ-proteobacteria, which do not have strong mutation biases. Klasson and Andersson hypothesized that cytosine deaminations accumulate during single-strand exposure at stalled replication forks and that the extent of strand bias may depend on the time spent repairing such lesions. Inefficient restart mechanisms result in replication forks being arrested for longer periods and thereby lead to high DNA strand asymmetry. 25
As a common character of the genomes discussed here, we believe that the absence of replication restart genes is very likely to be observed in other genomes with strong strand mutational bias that may be identified in the future. Finally, the strong mutation bias may be reflected in a strong cumulative excess of G over C plus T over A along the chromosome. In our previously published work, 35 the Z curve was used to compare the chromosome sequences of genomes with and without strand-specific codon usage. The y-component of the Z curve represents the sum of the cumulative excesses of G over C and of T over A. An index, denoted by the symbol k, is defined as the rate of change of the y-component per unit base. In fact, k = (G - C + T - A)/(G + C + T + A), where A, C, G and T denote the total numbers of the corresponding bases appearing in the half of the chromosome running from the replication origin to the terminus. After calculating k values for the 10 bacteria mentioned above, it is found that the values for these species are all larger than 0.035, and some are even larger than 0.1. However, the value of k for E. coli K-12 is less than 0.02. Therefore, the k value, which represents the rate of change of the cumulative excess of keto (G + T) over amino (A + C) bases per unit base, could be a good measure of the magnitude of strand composition bias and even of strand mutation bias.
Conclusion
Complex factors are found to be responsible for the variation of codon usage in the L. intracellularis chromosome. All of these factors can be interpreted within the paradigm 'mutational bias-translational selection'. When analyzing genes in the largest plasmid of this bacterium, it is found for the first time that strand-specific mutational biases are responsible for the primary variation of synonymous codon usage in a plasmid. Genes, particularly highly expressed genes of this plasmid, are mainly located on the leading strands, and this is supposed to be the effect of replicational-transcriptional selection. These facts suggest that this plasmid may adopt a similar replication mechanism to that of the chromosome in L. intracellularis. Finally, common genomic characters are found among L. intracellularis and the other bacteria in whose genomes strand-specific mutational biases are the most important source of variation of codon usage.
Supplementary data: Supplementary data are available online at www.dnaresearch.oxfordjournals.org.
Funding
The present study was supported by the Doctoral Fund of the Ministry of Education of China (20070614011) and the National Natural Science Foundation of China (60801058).
Severe toxicity from checkpoint protein inhibitors: What intensive care physicians need to know?
Checkpoint protein inhibitor antibodies (CPI), including cytotoxic T-lymphocyte-associated antigen 4 inhibitors (ipilimumab, tremelimumab) and the programmed cell death protein 1 pathway/programmed cell death protein 1 ligand inhibitors (pembrolizumab, nivolumab, durvalumab, atezolizumab), have entered routine practice for the treatment of many cancers. They improve the outcome of many cancers, and more patients will be treated with CPI in the future. Although CPI lead to adverse events (AE) less frequently than chemotherapy, their use can require intensive care unit admission in case of severe immune-related adverse events (IrAE). Moreover, some of these events, particularly late events, are poorly documented, so a high level of suspicion should be maintained for patients receiving CPI. Intensivists should be aware of the known complications and of the appropriate management of these AE. Nevertheless, multidisciplinary collaboration remains essential for their diagnosis and management. This review describes the most severe complications related to CPI.
Electronic supplementary material: the online version of this article (10.1186/s13613-019-0487-x) contains supplementary material, which is available to authorized users.
Introduction
Checkpoint protein inhibitor antibodies (CPI), including cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) inhibitors (ipilimumab, tremelimumab) and the programmed cell death protein 1 pathway/programmed cell death protein 1 ligand (PD-1/PDL-1) inhibitors (pembrolizumab, nivolumab, durvalumab, atezolizumab), have entered routine practice for the treatment of many cancers. In contrast to classical chemotherapy, CPIs do not target tumor cells; rather, they enhance the activation of immune cells, particularly T cells (Fig. 1) [1]. They have been associated with better outcomes in a number of solid and hematological malignancies [2]. Moreover, compared with chemotherapy, they seem to be better tolerated, with fewer side effects. These new molecules are mostly prescribed for melanoma and non-small cell lung cancer (NSCLC), but also for other malignancies such as renal cell carcinoma, bladder carcinoma, squamous cell carcinoma of the head and neck, and lymphoma [2]. The list of treatment indications will likely expand over the coming years, including use as first-line therapy. The number of patients treated will increase because of expanded indications and better survival [2][3][4][5]. Moreover, the optimal duration of treatment remains unknown. CPIs are associated with immune-related adverse events (IrAE) that need to be carefully monitored and managed during and after treatment. These drugs can promote infiltration of immune cells into normal tissues, which may lead to immune-mediated disorders. Almost every organ may be affected: skin, bowels, liver, lungs, kidneys, eyes, endocrine tissues and the central nervous system [6]. In up to 20% of cases, severe and even life-threatening AE can occur and lead to intensive care unit (ICU) admission [7,8]. This review focuses on the most severe IrAE that intensivists may encounter.
(Figure 1 caption, panel b: mode of action of CTLA-4i and PD-1/PDL-1i. PD-1/PDL-1i blocks the connection between PD-1 and PDL-1 and prevents the inhibition of T cells; T-cell cytotoxicity then attacks the tumor cells. CTLA-4i blocks the CTLA-4-mediated connection between dendritic cells and T cells, removing the inhibition exerted by dendritic cells on T cells.)
Maintaining a high level of suspicion is a major challenge, as some of these toxicities may only be uncovered later, in patients treated for longer periods of time, with prolonged survival and new indications for these CPI.
Methods
We searched Medline and PubMed for reviews and original articles on CPI for the treatment of solid tumors in adults published in English between 1 . We also searched using individual terms such as 'CTLA4 inhibitors', 'programmed cell death protein 1 pathways', 'hepatitis', 'pneumonitis', 'skin', 'hypophysitis', 'colitis', 'acute kidney failure', 'myocarditis', 'neurological complication', 'encephalopathy' and 'anemia'. Only severe adverse events potentially associated with ICU admission were considered for this review. Most of the articles were descriptive reports or randomized studies with safety outcomes from the last 10 years of the search period. We used only original reports when available.
Statistical analysis
The overall proportion of included studies with each predefined complication was reported as a proportion (95% CI). Publication bias was assessed by visually inspecting the funnel plot, and summary estimates of relative risk and their 95% confidence intervals were calculated using both fixed- and random-effects models. Cochran's χ² test and the I² statistic were used to assess inter-study heterogeneity [9]. The χ² test assesses whether observed differences among results are compatible with chance alone, and I² describes the percentage of the variability in effect estimates that results from heterogeneity rather than from sampling error. An I² above 0.25 was considered to indicate moderate heterogeneity. Statistically significant heterogeneity was considered present at χ² p < 0.10 and I² > 0.5. All effect sizes with p < 0.05 were considered significant. Tests were two-sided. All analyses were carried out with R software, version 3.4.4. The 'meta', 'metasens' and 'metafor' packages were used to produce forest plots and funnel plots.
In a meta-analysis of 21 randomized phase II/III immunotherapy trials (including 11,454 patients, of whom 6528 received a CPI) conducted between 1996 and 2016, the incidence of fatal IrAE was 0.64%, mostly due to ipilimumab-induced colitis [7]. In patients receiving CPI, grade III-IV (Table 1) colitis occurred in 1.5%, grade III-IV aspartate aminotransferase (AST) elevation in 1.5%, grade III-IV rash in 1.1%, grade III-IV pneumonitis in 1.1%, and hypothyroidism was observed in 0.3% of cases. Ipilimumab was associated with a higher risk of grade III-IV colitis than PD-1/PDL-1i [7]. In a recent meta-analysis, PD-1i and PDL-1i seem to be associated with grade III-IV IrAE at similar frequencies [10]. However, the incidence of these IrAE was far lower than the rate of complications from chemotherapy, particularly infections. Grade III-V toxicities were more common with CTLA-4i than with PD-1i (31% vs. 10%) [11]. IrAE leading to death were exceedingly rare with PD-1/PDL-1i (PDL-1i 0.1%, PD-1i 0.3%) and most often secondary to pneumonitis, whereas fatal gastrointestinal (GI) IrAE (diarrhea, colitis, colonic perforation) mostly occurred with CTLA-4i (severe events 31%) [11]. Furthermore, the safety profile of CPI varies among tumor types: melanoma has a higher risk of GI and skin IrAE and lower frequencies of pneumonitis [12,13]. Moreover, combining two CPIs leads to more frequent severe complications, in up to 55% of patients [14][15][16].
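The heterogeneity statistics described in the Statistical analysis section were computed by the authors in R with the packages listed above; as a language-agnostic illustration only (an assumption, not the authors' code), the following minimal Python sketch computes the fixed-effect pooled estimate, Cochran's Q and I² from study-level log relative risks and their variances.

# Illustrative computation of Cochran's Q and I^2 for study-level effect sizes.
import numpy as np

def cochran_q_i2(effects, variances):
    """effects: per-study log relative risks; variances: their sampling variances."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    weights = 1.0 / variances                            # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights) # fixed-effect estimate
    q = np.sum(weights * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0        # variability beyond chance
    return pooled, q, i2

# Toy data: log RRs and variances from hypothetical trials (assumptions only).
print(cochran_q_i2([0.10, 0.35, -0.05, 0.20], [0.02, 0.05, 0.03, 0.04]))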
Also, the incidence of IrAE and of severe IrAE will probably increase in the future, with the increasing number of patients currently treated and the use of combination regimens already tested in several trials [17][18][19].
(Fig. 2 caption: frequency of immune-related adverse events. a Toxicity related to anti-PD-1/PDL-1i; b toxicity related to CTLA-4i. The size of the circles reflects the incidence of toxicity: blue, toxicity of any grade; red, grade III/IV toxicity. None of the circles describes toxicity related to CTLA-4i combined with PD-1/PDL-1i (the incidence with combined treatment is higher than the toxicity of each inhibitor alone). Incidence data from [8] and [47].)
The kinetics of IrAE onset remains difficult to describe, but IrAE seem uncommon before 1 month of treatment [6,13]. Although, in a recent report, severe IrAE appeared early during the treatment course [20] (within 40 days with ipilimumab and anti-PD-1/PDL-1 and within 14.5 days with combination treatment), late complications of CPI may occur, sometimes up to 1 year after the start of PDL-1i, and clinicians must remain aware of possible complications during follow-up [21]. Moreover, IrAE can occur after the CPI has been discontinued [22]. Toxicities associated with PD-1/PDL-1i agents may be slower to resolve than with ipilimumab, and long-term follow-up is therefore advised [23].
Immune-related adverse events (Table 2)
This section describes the most severe IrAE according to the frequency and severity of organ involvement (Figs. 2, 3, 4, Additional file 1: Fig. S1). In some recent studies, high-grade toxicity seems to be associated with high tumoral response rates [24,25].
Gastrointestinal disorders
GI disorders are the most frequent IrAE and occur particularly with CTLA-4i. Occurrence of colitis after PD-1i/PDL-1i has been reported in only a few patients (< 1%) [23,26]. At ICU admission, clinicians must distinguish diarrhea alone from colitis. Diarrhea may lead to ICU admission because of dehydration and electrolyte disturbances. Colitis is associated with abdominal pain and inflammation. Computed tomography (CT) and/or endoscopy showed evidence of colonic inflammation [27]. Endoscopy found histologically confirmed colitis, with erythema and ulcerations, in more than 80% of patients [27]. Colitis was in some cases refractory to steroid treatment and led to colonic perforation [27,28]. In a recent observational study of 21 patients, two patients had refractory colitis lasting for more than 130 days (10 to 12 times the half-life of ipilimumab); those two patients had previously received radiotherapy. In addition, the association of CPI with chemotherapy or other immune therapy may increase the risk of severe colitis [28]. Diarrhea of varying grade occurs frequently in patients treated with CTLA-4i. However, alternative diagnoses should be evaluated at ICU admission. First, an infectious etiology should be excluded, particularly Clostridium difficile (Table 2). The incidence of C. difficile in CPI-treated patients remains unknown, with only a small number of cases described [29,30]. The diagnostic workup must include at least stool culture and screening for C. difficile and cytomegalovirus (PCR and/or colon biopsy). CT and endoscopy should be performed if possible to distinguish
Although GI adverse events related to PD-1i are rare, severe colitis has been described after long-term PD-1i treatment [26].
Lung disorders, pneumonitis/acute respiratory distress syndrome Although pneumonitis remains rare (4% in NSCLC and 3% in melanoma), it can lead to severe acute respiratory distress syndrome (ARDS) (0.8-1% grade 3 or higher toxicity in the studies included) [31][32][33]. Rare cases of severe pneumonitis have been described in phase I trials with PD-1i and PDL-1i [34,35]. CTLA-4i are rarely associated with pneumonitis although some cases series have shown non-severe [36] or severe pneumonitis [37]. Pneumonitis is more frequent during NSCLC treatment than melanoma treatment, particularly when other lung process is present (tobacco use, chronic obstructive pulmonary disease, etc.) or during combined treatment [33]. Pneumonitis should be distinguished from cancer relapse or infection [38]. One case report described a "flare pneumonitis" after tapering corticosteroids without new treatment with PD-1i [39]. More interestingly, in a descriptive study of 43 cases of pneumonitis related to PD-1/PDL-1i, more than half of the patients described other immune toxicity as well [40]. Common symptoms included dyspnea (53%), cough (35%), fever (12%), and chest pain (7%). ARDS occurred in rare cases [40]. PD-1i-related pneumonitis was described in 20 of 170 patients treated with PD-1i. Among them, five patients had severe pneumonitis occurring within 2.6 months after the beginning of treatment. Cough was the most frequent symptom, followed by dyspnea and fever. The most frequent CT findings were ground-glass opacities in all patients, reticular opacities (19/20 patients) and airspace consolidation (12/20 patients), with a common organizing pneumonia pattern in 13 (65%) patients (Fig. 5). Abnormal findings occurred in the lower lobes with a peripheral distribution [33,39,41]. Another CT pattern encountered was non-specific interstitial pneumonia. Unfortunately, none of the studies described the findings of bronchoalveolar lavage (BAL). Other causes of acute respiratory failure (infection including, Pneumocystis jirovecii pneumonia, relapsing cancer etc.) must be excluded. BAL should be performed in those cases. In one study, lung biopsy was performed in 11 patients. The histological findings were cellular interstitial pneumonitis, common organizing pneumonia, or diffuse alveolar damage. In three patients, no lesion was found [40]. Pneumonitis related to ipilimumab is rare but has been reported as sarcoidosis/granulomatosis-like, rarely associated with ARDS [36,37]. Management of patients with suspected grade III-IV pneumonitis should include clinical examination to search for other associated immune toxicities, leading to higher probability of IrAE pneumonia, and CT should be performed to define the lesions. BAL and potentially lung biopsy should be considered ( Table 2). Myocarditis and cardiac insufficiency Myocardial complications remain rare, far below the rate of toxicities related to radiotherapy and chemotherapy. However, cases reports described grade III/IV IrAE, ranging from cardiomyopathy to acute myocarditis and cardiac arrest [42,43]. This rare complication remains one of the most severe consequences and occurs more frequently with the combination of CTLA-4i and PD-1i or PDL-1i [44]. It may occur at the initiation of therapy or after several weeks of treatment. Cardiovascular risk factors (e.g. hypertension and tobacco use) were not always present in cases reports of cardiac toxicity [45,46]. 
Interestingly, in a recent study of eight cases, five patients already had at least one other IrAE when the cardiac side effect occurred [47]. The best management of CPI-related myocarditis remains unknown. Wang et al. proposed an algorithm to detect and treat myocarditis, including pre-treatment troponin and EKG [45]. Other causes of myocardial dysfunction should be ruled out (pulmonary embolism, ischemic myocardial dysfunction) (Table 2). Treatment may require extracorporeal membrane oxygenation, infliximab, or polyvalent intravenous immunoglobulins [44,47,48]. Some cases of pericardial effusion, sometimes with tamponade, have also been reported [49][50][51]. A few rare cases of pericarditis occurred and were treated with steroids. When histological examination was performed, T-cell infiltrates were found, with cardiomyocyte fibrosis in some cases [46].
Neurologic disorders: encephalopathy, Guillain-Barré syndrome, myasthenia, myelitis
Because of the severity of symptoms, neurological toxicity remains one of the most important IrAE, mostly associated with CTLA-4i [52]. Although these complications are common, the proportion of grade III/IV cases remains limited. Neurological complications may appear within 4 months after initiation of treatment, but clinicians should maintain a high level of awareness even when these drugs have only recently been introduced. Furthermore, they are usually prescribed for a long duration, which could lead to delayed toxicity. Some case reports describe permanent disability after neurological toxicity. Central neurotoxicity can take several forms, from headache after ipilimumab induction to chronic encephalopathy or aseptic meningitis [53,54]. Stroke and posterior reversible encephalopathy syndrome after ipilimumab may occur and could lead to ICU admission [52]. Seizures remain rare [54][55][56]. Peripheral neurotoxicity occurs as Guillain-Barré syndrome or neuromyopathy after CTLA-4i or anti-PDL-1i treatment [52,53]. Myasthenic syndromes may occur, mostly with PD-1i treatment [57,58], early after treatment initiation [59] (Table 2). Other etiologies, particularly metastases, should be ruled out with MRI and/or lumbar puncture. In an observational study of 352 patients with melanoma, 10 patients were found to have severe neurological (central and peripheral) complications (including six patients with high-grade complications). Eight of those patients showed a sustained response to steroid therapy and were alive after 8 to 35 months [53]. The high survival rate after neurological CPI toxicity justifies ICU admission, but other etiologies should be promptly ruled out as well.
Endocrine-related adverse events
Endocrine-related AE are irreversible IrAE and lead to continuous substitutive treatment. Failure to diagnose these IrAE can lead to life-threatening complications, particularly hypophysitis and adrenal insufficiency. Higher incidences of endocrine-related AE were found with combination therapy and when high-dose therapy was used [60].
Thyroid dysfunction
Thyroid dysfunction, isolated or associated with hypophysitis, occurred in up to 10% of cases and was severe (grade III/IV) in only 1-2% of patients [43,61]. Thyroid dysfunction (hypothyroidism or hyperthyroidism) could be primary or secondary in origin. Associated hypophysitis should be considered, particularly with CTLA-4i [61].
Although thyrotoxicosis has been described in rare cases, primary hypothyroidism is more frequent than hyperthyroidism and is mostly related to PD-1i or PDL-1i treatment. Hashimoto's disease has been described in rare cases [6]. Other associated endocrinopathies should also be considered [60].
Hypophysitis
First described with CTLA-4i, hypophysitis can also occasionally occur with PD-1i and anti-PDL-1i treatment. Intensivists must be aware of this complication, which can be life-threatening, particularly when acute adrenal insufficiency is the first symptom. Hyponatremia and dehydration may lead to ICU admission. Adrenal insufficiency is much more frequent with immunotherapy than with conventional treatment [53]. Hypophysitis was investigated in 211 patients treated with CTLA-4i for melanoma and developed early in the course of treatment. Hypophysitis occurred in 19 (9%) patients within 4 months of treatment and was symptomatic in 83% of these cases. Associated hypothyroidism occurred in 11 (58%) patients, while brain magnetic resonance imaging revealed abnormal findings in only 12 (63%) patients [62]. Hypophysitis seems to be related to hypersensitivity of hypophysis cells carrying CTLA-4 receptors [63]. Severe diabetes mellitus with ketosis was described in 0.4% of cases with PDL-1i. Diabetes was related to pancreatic disorder or to autoimmune insulin-dependent diabetes (Table 2) [64,65]. Ketoacidosis may require ICU admission [64].
Liver disorders
Liver dysfunction, mostly related to autoimmune-like hepatitis, has been described with CTLA-4i treatment but very rarely with PD-1i or PDL-1i treatment [66,67]. In a case series of 11 patients receiving one to four doses of ipilimumab, the authors described acute panhepatitis with CD8+ T-lymphocyte perivenular infiltrate and endothelialitis. Some of the patients had pre-existing risk factors for chronic liver disease with nonalcoholic steatohepatitis or steatosis-associated characteristics. The rate of grade III PD-1i-related hepatitis was 0.5% during melanoma treatment, and such hepatitis has not been described during lung cancer treatment [4,71].
Pancreatic disorders
In a recent study of 496 patients treated for melanoma, pancreatitis occurred in 9 (1.8%) patients, including seven patients with grade III/IV pancreatitis, within 6 to 20 weeks after treatment initiation [6]. In earlier studies, elevated lipase was reported in less than 1% of patients [4,71]. Some authors recently demonstrated that a high lipase level was not associated with pancreatic disease in most cases and should not automatically lead to treatment cessation [72].
Skin
Skin involvement is frequent: 50% of patients experience rash or pruritus with CTLA-4i and 22% with PD-1i [73]. However, grade III/IV IrAE are very rare, reported in 0 to 4% of patients after ipilimumab treatment [13] and even more rarely with PD-1/PDL-1i treatment [74,75]. Stevens-Johnson syndrome is one of the most severe complications (Table 2).
Kidney disorders
According to clinical trials, acute kidney injury (AKI) is relatively uncommon with anticancer immune CPI compared with other types of IrAE [76]. However, both circulating anti-double-stranded DNA antibodies and glomerular IgG and C3 deposits have been reported in mice treated with CTLA-4i [77]. During ipilimumab monotherapy, elevated creatinine was reported in 1.4% (any grade) and 0.2% (grade III or IV) of patients.
Similarly, during PD-1i monotherapy, elevated creatinine was reported in 1.7% (any grade) and 0.8% (grade III or IV) of patients. However, during combination therapy, the incidence of AKI was higher in clinical trials, with grade III or IV creatinine elevation in 1.7% of patients. The most accurate data were reported in the series by Cortazar et al. and Shirali et al. [78,79]. They reported the clinical and histological features of 13 patients with CPI-related AKI (various cancers, mainly melanoma; various CPIs) who underwent kidney biopsy. The most prevalent pathologic lesion was acute tubulo-interstitial nephritis in 12 patients, including three with granulomatous features, and one case of thrombotic microangiopathy (TMA) (Table 2). The renal prognosis remains good after discontinuing the CPI and, in most cases, prescribing steroids (Table 2). However, persistence of kidney failure after 3 weeks, higher age, and a greater degree of interstitial fibrosis have been associated with a poor prognosis [80]. In some reports, interstitial fibrosis may occur as soon as 10 to 14 days after initiation of treatment.
Hematological syndromes
Rare cases of hemolytic anemia, mediated by IgG or C3 and leading to ICU admission, have been described with nivolumab and ipilimumab [81]. They may respond to corticosteroids, but rituximab may be required in some cases. Resumption of PD-1i/PDL-1i after resolution of anemia was not always associated with a recurrence of anemia [82,83] (Table 2).
Management
Due to the potential reversibility with treatment, CPI-related severe toxicity should lead to ICU admission, at least for a time-limited trial, in case of organ failure or for patients at risk of organ failure. Such a trial may include mechanical ventilation, vasopressors, renal replacement therapy, and even extracorporeal membrane oxygenation in selected patients.
(Figure legend: The patient had been treated for NSCLC with pembrolizumab for 2 months. He developed acute respiratory failure. CT showed the typical organizing pneumonia pattern.)
The recommendations for managing IrAE arise from general clinical consensus, because no prospective trials have been conducted to specifically test whether one management strategy is superior to another. Early recognition and treatment of IrAE are believed to be important in mitigating their severity. For severe (grade III-IV) IrAE, the drug should be discontinued immediately [84]. From a practical standpoint, the management of such patients requires close collaboration between specialists (e.g. nephrologist, hepatologist, infectious diseases specialist), oncologists, and intensivists (Fig. 6). Although steroid treatment should be initiated as soon as possible, other etiologies such as infections or cancer progression must be ruled out. Table 2 summarizes the diagnostic workup before treatment. Most of the differential diagnoses can be ruled out quickly, and the workup should not delay the initiation of steroid treatment in case of severe IrAE. Systemic corticosteroids (oral or IV methylprednisolone) must be initiated at a dose of 1-2 mg/kg/day for 3 days and then reduced to 1 mg/kg/day. The corticosteroid regimen should then follow a gradual tapering over a period of at least 1 month [85]. Whenever IrAE worsen or do not improve sufficiently after 3-5 days despite an adequate steroid dosage, additional immunosuppressive drugs should be considered.
Although none of these treatments has been evaluated, they may include:
• Antitumor necrosis factor alpha (anti-TNF) [86] in case of colitis or pneumonitis, but not hepatitis, because of the risk of hepatotoxicity [8].
• Mycophenolate mofetil (500-1000 mg twice a day) for hepatitis, cardiotoxicity, or pneumonitis [8,45].
• Antithymocyte immunoglobulins for hepatitis, cardiotoxicity, or severe neurotoxicity [8,45].
Hypothyroidism should be managed with thyroid hormone replacement and hyperthyroidism with standard anti-thyroid pharmacotherapy and beta-blockers in symptomatic cases [8]. Long-term treatment with corticosteroids and sometimes anti-TNF drugs may be complicated by severe opportunistic infections such as fungal infection, tuberculosis, or CMV. Therefore, it is recommended to give antibiotic prophylaxis with oral trimethoprim/sulfamethoxazole (400 mg/125 mg 3 times a week) together with steroids and to test patients for tuberculosis before adding any additional immunosuppressive drug (e.g. TNF alpha inhibitors) to corticosteroids [85]. As the pathophysiological mechanism of IrAE involves excessive activation of the immune system, leading to toxic effects potentially targeting any organ and mimicking autoimmune diseases in their clinical presentation, it may also be difficult to differentiate between the side effects of CPI and the development of autoimmune paraneoplastic syndromes. This was emphasized in a recent report of patients without previous autoimmune manifestations who developed autoimmune encephalitis during immunotherapy [55,87]. Interestingly, the same CPI may be reintroduced after IrAE resolution in most cases of grade III IrAE (Table 2). For grade IV IrAE, resumption of CPI may be more questionable. However, restarting CPI must be considered after close collaboration between the oncologist, specialist, and intensivist, while weighing the individual risk/benefit ratio, and the decision should be shared with the patient (Table 2). Interestingly, some studies described a higher cancer response rate for patients who also experienced high-grade IrAE [13]. These results need to be confirmed.
Conclusion
Severe immune-related complications of checkpoint protein inhibitor antibodies remain rare, but the number of patients treated will continue to rise. Although adverse events may occur less frequently after immunotherapy than after chemotherapy, intensivists should be aware of the side effects of this new type of medication, which may require ICU admission. Moreover, some immune-related complications remain unknown and will reveal themselves with the increasing use of these new therapeutic agents.
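Purely as an illustration of the dosing arithmetic described in the Management section above (methylprednisolone at 1-2 mg/kg/day for 3 days, then 1 mg/kg/day, followed by a taper over at least one month), the sketch below turns that regimen into an explicit per-day schedule. The function name, the linear taper shape, and the example body weight are assumptions introduced only for illustration; this is not clinical guidance.

```python
# Illustrative sketch only, NOT clinical guidance. It merely turns the regimen
# described in the Management section (1-2 mg/kg/day of methylprednisolone for
# 3 days, then 1 mg/kg/day, tapered over at least one month) into a per-day list.

def steroid_taper_schedule(weight_kg: float,
                           initial_mg_per_kg: float = 2.0,
                           taper_days: int = 30) -> list[tuple[int, float]]:
    """Return (day, methylprednisolone mg/day) pairs for a hypothetical schedule."""
    schedule = []
    # Days 1-3: 1-2 mg/kg/day (the upper bound is used here as an example)
    for day in range(1, 4):
        schedule.append((day, round(initial_mg_per_kg * weight_kg, 1)))
    # From day 4: 1 mg/kg/day, then a linear taper over at least one month
    start_dose = 1.0 * weight_kg
    for i in range(taper_days):
        dose = start_dose * (1 - i / taper_days)  # hypothetical linear decrease
        schedule.append((4 + i, round(dose, 1)))
    return schedule

# Example with a hypothetical 70 kg patient: print the first days of the schedule
for day, dose in steroid_taper_schedule(70.0)[:6]:
    print(f"day {day}: {dose} mg/day")
```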
2019-02-25T18:09:38.957Z
2019-02-01T00:00:00.000
{ "year": 2019, "sha1": "0f08775834cff481a195a4926a29cf718918c1f2", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13613-019-0487-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0f08775834cff481a195a4926a29cf718918c1f2", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
238743035
pes2o/s2orc
v3-fos-license
Proteomic Study of Low-Birth-Weight Nephropathy in Rats The hyperfiltration theory has been used to explain the mechanism of low birth weight (LBW)-related nephropathy. However, the molecular changes in the kidney proteome have not been defined in this disease, and early biomarkers are lacking. We investigated the molecular pathogenesis of LBW rats obtained by intraperitoneal injection of dexamethasone into pregnant animals. Normal-birth-weight (NBW) rats were used as controls. When the rats were four weeks old, the left kidneys were removed and used for comprehensive label-free proteomic studies. Following uninephrectomy, all rats were fed a high-salt diet until 9 weeks of age. Differences in the molecular composition of the kidney cortex were observed at the early step of LBW nephropathy pathogenesis. Untargeted quantitative proteomics showed that proteins involved in energy metabolism, such as oxidative phosphorylation (OXPHOS), the TCA cycle, and glycolysis, were specifically downregulated in the kidneys of LBW rats at four weeks. No pathological changes were detected at this early stage. Pathway analysis identified NEFL2 (NRF2) and RICTOR as potential upstream regulators. The search for biomarkers identified components of the mitochondrial respiratory chain, namely, ubiquinol-cytochrome c reductase complex subunits (UQCR7/11) and ATP5I/L, two components of mitochondrial F1FO-ATP synthase. These findings were further validated by immunohistology. At later stages of the disease process, the right kidneys revealed an increased frequency of focal segmental glomerulosclerosis lesions, interstitial fibrosis and tubular atrophy. Our findings revealed proteome changes in LBW rat kidneys and revealed a strong downregulation of specific mitochondrial respiratory chain proteins, such as UQCR7. Introduction Infants born at less than 2500 g are designated with a low birth weight (LBW). LBW individuals have an increased risk of heart disease, diabetes mellitus, and kidney disease. In particular, LBW is associated with an increased risk of chronic kidney disease (CKD) [1,2] or end-stage renal disease [3,4] and has become a global concern [5]. LBW is also significantly associated with a decreased number of nephrons in both humans and animals [6][7][8]. The molecular mechanism(s) underlying the deterioration of kidney function in LBW individuals remain unclear and no association with familial factors could be observed in a national registry composed of 1,852,080 individuals [9]. The current paradigm for LBW pathogenesis reflects Brenner's theory (also known as the glomerular hyperfiltration theory), where adaptive mechanisms are activated in response to nephron loss and a subsequent increase in capillary pressure [10]. Such adaptation results in a vicious cycle promoting further progression of chronic kidney disease (CKD). Until now, the developmental origins of health and diseases (DOHaD) in the kidney have been preferentially explained by this mechanism [11,12], although the molecular pathogenetic mechanisms remain unclear [13]. The most characteristic glomerular change in LBW individuals is focal segmental glomerulosclerosis (FSGS) [14]. We previously reported that the pathological findings of human kidney biopsy specimens were similar between patients with mitochondrial DNA (mtDNA) mutations and those with LBW-related nephropathy [15]. Therefore, we hypothesized that mitochondrial defects could participate in LBW-related kidney disease. 
Using untargeted and powerful analytic methods such as proteomics, we recently unraveled the molecular alterations in human podocytes challenged with hyperglycemia and discovered a specific program of metabolic pathway reprogramming regulated by MEF2C and MYF5 [16]. Likewise, a large-scale untargeted quantitative and comprehensive analysis of the molecular alterations in the kidney proteome could provide a dataset of shared interest for understanding the mechanisms underlying the development of LBW-related nephropathy. However, the omics of LBW kidneys remain unknown, and no dataset is available for performing pathway and biomarker analyses. In particular, the bioinformatic study of proteome changes could allow for prediction of the putative transcription factors, kinases or microRNAs involved in the observed changes based on the detailed knowledge of their targets and directionality of the expected changes in target expression. We recently used this predictive strategy to discover the role of various genetic regulators in different disease contexts [16][17][18][19]. Of note, a clearer understanding of the molecular basis of diabetic nephropathy was obtained using large-scale characterization proteomics, and innovative therapeutic approaches can be derived from this knowledge [20,21]. Therefore, similar approaches could be tested for LBW nephropathy. To investigate molecular candidates involved in LBW-related nephropathy, we developed a rat model of human LBW-related nephropathy and performed a comprehensive untargeted proteomic study comparing the kidneys of LBW and NBW rats at a young age, when neither group had any pathological changes. These studies could reveal the early pathological factors underlying LBW-related nephropathy.
Clinical and Histopathological Characteristics of LBW Rats
We generated a rat model of LBW as previously described [22] and summarized in Figure 1A to study early molecular alterations of the kidney. As shown in Table 1, we compared rats in the two different body weight groups: NBW (n = 7) vs. LBW (n = 7). The body weights of LBW rats were consistently and significantly less than those of NBW rats at birth and remained lower at 4 and 9 weeks of age. The kidney weights of the LBW rats were also significantly less than those of NBW rats at 4 weeks of age. Clinical pathology examination revealed that the glomerular numbers per section of the LBW rats were significantly lower than those of NBW rats at 4 weeks of age (Table 1). The product of glomerular number per section and kidney weight should be proportional to the total glomerular number. This product was significantly lower in LBW rats than in NBW rats at 4 weeks of age (Table 1), indicating that the total nephron number in LBW rats is lower than that in NBW rats, as previously reported [6][7][8]. In addition, although the glomerular sizes of LBW rats were significantly smaller than those of the NBW group at 4 weeks of age, the sizes became comparable between LBW and NBW rats at 9 weeks of age (Table 1). These results suggest that the originally small glomeruli of LBW rats enlarged with body growth, likely due to glomerular hyperfiltration driven by intraglomerular hypertension. The molecular mechanisms underpinning the above-described changes in kidney physiology remain unclear and will be investigated in what follows.
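As a minimal sketch of how the group comparison described above could be computed, the snippet below forms the total-glomerular-number proxy (glomeruli per section multiplied by kidney weight) for each animal and compares the groups with an unpaired t-test, as reported for Table 1. The per-animal values and variable names are hypothetical, not the study data.

```python
# Illustrative sketch only: hypothetical per-animal values, not the study data.
# The product (glomeruli per section x kidney weight) serves as a proxy for the
# total glomerular number; the two groups are compared with an unpaired t-test.
import numpy as np
from scipy import stats

glom_per_section = {
    "NBW": np.array([52, 55, 49, 53, 50, 54, 51]),   # hypothetical counts, n = 7
    "LBW": np.array([40, 38, 42, 41, 39, 43, 37]),
}
kidney_weight_g = {
    "NBW": np.array([0.62, 0.60, 0.65, 0.61, 0.63, 0.59, 0.64]),  # hypothetical, g
    "LBW": np.array([0.45, 0.47, 0.44, 0.46, 0.43, 0.48, 0.42]),
}

# Proxy for total glomerular number: glomeruli per section x kidney weight
proxy = {g: glom_per_section[g] * kidney_weight_g[g] for g in ("NBW", "LBW")}

# Unpaired (two-sample) t-test, as used for the group comparisons in Table 1
t_stat, p_value = stats.ttest_ind(proxy["NBW"], proxy["LBW"])
for g in ("NBW", "LBW"):
    print(f"{g}: mean ± SE = {proxy[g].mean():.2f} ± {stats.sem(proxy[g]):.2f}")
print(f"unpaired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```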
There were no pathological abnormalities in either group. Glomerular sizes in LBW rats appeared to be smaller than those in NBW rats. Table 1. Basic characteristics and data from histological analysis of the rats. NBW (n = 7): group of rats born with a normal birth weight, LBW (n = 7): group of rats born with a low birth weight. Data represent mean ± SE. * The unpaired t-test is used for the comparison of two groups. BW, body weight; KW, kidney weight; IFTA, interstitial fibrosis and tubular atrophy. 'na' means not applicable. FSGS Lesions in the Perihilar Area Were Developed in LBW Rats at 9 Weeks Old There were no sclerotic lesions or any other pathological changes in glomeruli and tubules of either the LBW or NBW rats at 4 weeks of age ( Figure 1). However, although glomeruli in 9-week-old NBW rats appeared almost normal (Figure 2a-d), FSGS with a predominance of perihilar lesions of sclerosis was apparently formed in 9-week-old LBW rats (Figure 2e-h). At 9 weeks of age, FSGS lesions were observed in 7.43% of the glomeruli in LBW rats but in only 0.48% of the NBW rats ( Table 1). The incidence of interstitial fibrosis and tubular atrophy (IFTA) was also higher in LBW rats than in NBW rats at 9 weeks of age (Table 1). Furthermore, such histological damage was associated with an increase in serum creatinine levels in LBW rats ( Table 1). The serum creatinine levels of the 9-week-old LBW rats were greater than those of the same age NBW rats, which indicated that the LBW rats also had worse kidney function accompanied by histological damage (Table 1). Early Kidney Proteome Remodeling in LBW Rats To unravel the early changes in LBW rats before pathological changes occurred in the kidney, we performed quantitative proteomic analyses on kidney cortices taken at 4 weeks of age, when the rats in both groups exhibited no pathological changes. A total of 1200 proteins were found in the rat kidney and 437 proteins were detected as differentially expressed (p < 0.05) between LBW and NBW rats at this time point (Table S1A). Further analysis of the proteomic dataset using a multiple comparison method with Benjamini-Hochberg correction allowed us to calculate the adjusted p-value (q value; Table S1B). The threshold for the false discovery rate (FDR) was set at q = 0.01. The dataset is available to the community through the public repository PRIDE using the accession number PXD018948. The volcano plot of the comparative proteomics data (using the q value) is shown in Figure 3A, and the top 10 proteins are listed in Table 2. The volcano plot showed that a large number of proteins were downregulated in the LBW kidney (top left quadrant). Several mitochondrial proteins responsible for energy transduction, such as different F 1 F O -ATP synthase subunits or complex III subunits, were strongly reduced in LBW kidneys (blue dots on the volcano plot), as shown in ( Figure 3B). In contrast, the upregulated proteins revealed an increase in specific regulators of the cell proteome, such as Meprin A subunits α and β in the LBW kidney cortex or activator protein 1 (AP-1), among others. Lastly, several proteins belonging to the functional networks 'cellular assembly and organization' were identified ( Figure S1), suggesting a perturbation of plasma membrane homeostasis in the LBW kidney cortex. Metabolic and Signaling Pathways Altered in LBW Kidney Analysis of the above-described proteomic data was performed using Ingenuity Pathway Analysis (IPA) software version July 2021. 
The LBW rat kidney differential proteome composed of 1183 proteins was compared to the IPA Knowledge database to detect enrichment in specific pathways using the 'core analysis' module. The results revealed that the four main pathways (ranked by log10 p-value with Z > 1 and −logP < 3) altered in the LBW rat kidneys were EIF2 signaling, oxidative phosphorylation (OXPHOS; Z-score −1.9), TCA cycle, and sirtuin signaling (Figure 4A; Table S2). Glycolysis and gluconeogenesis were also reduced, with 48% of the pathway components downregulated in LBW kidney. These findings indicate that the main machinery required for energy transduction was inhibited in LBW kidneys, suggesting a reduction in energy metabolism. Intermediate metabolism pathways involved in catabolism and energy metabolism were also altered in the LBW kidney proteome, as shown for fatty acid oxidation (Figure 4B). A significant modulation of two pathways involved in protein translation was also observed: 'EIF2 signaling' (−log p-value = 19.8) and 'Eif4 and P70S6K signaling' (−log p-value = 11.4). Lastly, the catabolism of branched chain amino acids (BCAAs) was strongly altered in the LBW kidneys (valine degradation, −log p-value = 11.1).
Predicted Genetic Regulators Involved in LBW Kidney Remodeling
Analysis of the proteins with altered content (p < 0.05) in the LBW kidney allowed us to predict putative transcription factors, kinases, proteases, or even microRNAs involved in the observed changes based on the knowledge of the targets associated with specific regulators and the directionality of the expected changes. Such a bioinformatic predictive analysis was performed on the LBW kidney proteome. The results (Table S2) identified a series of regulators potentially involved in LBW kidney proteome reprogramming, indicating that LBW kidney remodeling impacts different signaling pathways. The top two regulators (Z-score and p-value) were rapamycin-insensitive companion of mTOR (RICTOR) and NFE2L2 (NRF2), as a large number of their respective targets were found in the LBW differential proteome dataset (Table 3). These predictive findings unravel the complexity of the early molecular events occurring in LBW kidneys and identify different signaling pathways and regulators potentially involved in the LBW kidney gene expression program and signature.
Mitochondrial Biomarkers of LBW Kidney Remodeling
The above-described proteomic investigation of the molecular changes occurring in LBW kidney cortices revealed the top alterations in mitochondrial proteins (Table 2). To validate these findings, we performed immunohistological analyses on kidney tissues obtained from 4-week-old LBW rats. The top downregulated protein was UQCR7, a subunit of respiratory chain complex III (Table 2, Figure 3B). UQCR7 staining showed a positive signal in glomerular tufts, in vascular smooth muscle cells of arterioles, and along the apical membrane of distal tubular cells in NBW rats (Figure 5a-c,e-g). UQCR7 expression was significantly decreased in LBW rats compared with NBW rats (Figure 5i-k). Furthermore, proteomics revealed that key catalytic components of complex V, the F1FO-ATP synthase responsible for ATP synthesis, were also reduced in the cortex of LBW rats (Table 2, Figure 3B). Accordingly, immunohistology analysis demonstrated that 'e' subunit (Atp5I) expression was markedly suppressed in tubular cells of LBW rats in contrast with NBW rats (Figure 5d,h,l).
These findings indicate that UQCR7 and Atp5I are two mitochondrial proteins indicative of early kidney proteome alteration in LBW rats.
Figure 5. Reduced content of UQCR and Atp5 measured by IHC in LBW kidney. Immunohistochemistry (IHC) analyses of NBW and LBW kidney sections obtained from 4-week-old rats. UQCR7 was expressed in glomerular tufts (blue arrows), in arterioles (mainly by vascular smooth muscle cells) (yellow arrows), and along the apical membrane of distal tubules (red arrows) (a-c). On the other hand, UQCR expression was markedly decreased in LBW rats (e-g). Atp5I was positive in tubular cells in NBW rats (d). Atp5I expression was weakened in LBW rats (h). The numbers of UQCR7-positive tufts per glomerulus (i), semiquantitative scoring of the UQCR7 staining intensity in the glomerular vascular pole (j), the rates of UQCR-positive cells in distal tubular cells on their apical membrane (k), and the percentages of Atp5I-positive tubular cells (l) were statistically analyzed. * p < 0.05 by unpaired t-test. ** p < 0.01 by unpaired t-test.
Discussion
In this study, we investigated the early molecular changes in the kidney proteome in a rat model of LBW-related nephropathy.
We considered the hypothesis of fetal programming proposed by Brenner and colleagues [5,23] and searched for proteomic differences between the kidney cortices from rats with a low vs. normal birth weight. We first generated a rat model of LBW-related nephropathy using glucocorticoid treatment and validated several aspects of the previously described LBW-related kidney disease [6,14]. Several animal models for fetal programming of adult disease have been developed using nutrition, surgery, hypoxia, pharmacology, and stress [24]. The antenatal glucocorticoid treatment used for intrauterine growth retardation was validated in previous studies, showing a consistent reduction in glomerular number and filtration rate, increased apoptosis, and altered plasma sodium concentration [25][26][27]. However, when the rats were 9 weeks old, there was no significant difference in glomerular size between LBW rats and NBW rats. This finding suggests that glomerular hyperfiltration might occur in LBW rats and that the rat model used in our work is suitable, from the pathophysiology and histology standpoints, for investigating the pathogenesis of LBW-related nephropathy. At 4 weeks of age, there were no pathological abnormalities in the kidneys of either LBW rats or NBW rats. Therefore, prospective and untargeted quantitative proteomic analyses were performed using kidney samples from 4-week-old rats to identify intrinsic factors potentially linked with the pathogenesis of LBW-related nephropathy. The proteomic analysis extensively described in our work revealed that the molecular pathways involved in energy transduction, such as oxidative phosphorylation, the TCA cycle, and glycolysis, were specifically downregulated in the kidneys of LBW rats at early time points of the disease. This observation suggests that alteration in energy metabolism is an early and intrinsic determinant of LBW kidney dysfunction. In particular, the content of respiratory chain complex III and complex V subunits was strongly reduced, as verified by immunohistology. More precisely, the expression of UQCR7, UQCR11, Atp5I, and ATP5L in tubular cells was markedly decreased in the kidneys of 4-week-old LBW rats. UQCR7 expression was markedly decreased not only in glomerular tufts but also in vascular smooth muscle cells, particularly in those of the afferent arterioles. As previously reported, the characteristic pathological change in LBW individuals is FSGS with a predominance of perihilar lesions of sclerosis, probably due to disturbance of intraglomerular hemodynamic status [14,15]. Therefore, dysfunction of OXPHOS in afferent arterioles of LBW rat glomeruli could result in autoregulation failure of intraglomerular pressure, which could eventually lead to the development of FSGS lesions. Although analyses using newborn rat kidneys are the most suitable for studying the pathogenesis of DOHaD in the kidney, the cortex volumes of kidneys at birth are not sufficient for molecular investigations. As we used kidneys from 4-week-old rats, when no pathological abnormalities were observed, the results of the proteomic and immunohistological analyses may not represent end-stage changes but could contribute to the cause of nephropathy in adulthood. Our results are in agreement with previous reports that showed tubular dysfunction in LBW individuals [28,29].
The discovery of altered mitochondrial proteostasis in an LBW nephropathy rat model is in agreement with previous studies showing that mitochondria play important roles in renal pathophysiology [15,[30][31][32][33][34][35][36][37]. The pathological similarities between low-birth-weight-related nephropathy and nephropathy associated with mitochondrial cytopathy were further discussed in our previous report [15]. Another finding of our study concerned the catabolism of branched chain amino acids (BCAAs), which was strongly altered in the LBW kidneys (17 proteins). Abnormal BCAA metabolism can result in BCAA depletion in the plasma, but this has not been verified in the blood of LBW patients. However, a prospective study performed on 15 patients with early stages of chronic kidney disease showed a significant decrease in plasma valine and leucine levels compared with controls [38]. BCAAs can also stimulate mitochondrial biogenesis [39], suggesting that the reduction in 97 mitochondrial proteins observed in LBW kidneys could be linked to reduced BCAA metabolism. The survey of the top ten proteomic changes revealed a significant increase in the expression of Meprin A (two subunits) in the kidney cortex of LBW rats. This protein is a protease that cleaves a large number of targets in the kidney, including extracellular matrix (ECM) proteins, modulators of inflammation, and proteins involved in the protein kinase A (PKA) and PKC signaling pathways [40]. Meprins have been implicated in the pathophysiology of diabetic nephropathy (DN), acute kidney injury (AKI), and fibrosis-associated kidney disease. Studies in diabetic mouse models suggested that Meprin A plays a protective role in the kidney [41], raising the need to evaluate in more detail the involvement of Meprin A in LBW nephropathy progression or protection. Computer analysis of the untargeted proteomic dataset generated in our work also predicted the alteration of mTOR signaling in LBW kidneys. Accordingly, a considerable number of studies have shown the involvement of mTOR and RICTOR in nephropathies [42,43]. Mitochondrial biogenesis and mTOR are two actionable pathways that can be targeted with pharmacological drugs, such as bezafibrate and resveratrol for the former [44,45] and rapamycin or everolimus for the latter [46]. In addition, treatments to restore kidney bioenergetics might provide therapeutic benefits. In support of this claim, recent studies using mitochondrial protective drugs showed improved renal function in swine with atherosclerotic renovascular disease, renal ischemia, and atherosclerotic renal artery stenosis [47][48][49]. Moreover, the predictive analysis of the LBW differential proteomic data identified NFE2L2 (NRF2) signaling as the main inhibited pathway. NRF2 not only protects mitochondria as a major antioxidant orchestrator but also regulates mitochondrial biogenesis [50,51]. Therefore, the reduction in respiratory chain protein levels in LBW kidneys could also be explained by the observed inhibition of NRF2. Thus, NRF2 activators might also be interesting candidates for the prevention of LBW-related nephropathy, as considered in other nephropathies [52][53][54]. To conclude, the proteomic and histopathology study of an LBW rat model generated a unique dataset of interest for generating novel hypotheses on the pathogenesis of LBW nephropathy.
We unraveled changes in proteins such as Meprin A, UQCR7/11 and Atp5I/L that occurred early in the disease process, suggesting that these elements are potential players in the molecular determinants of this disorder. However, our findings remain limited to the rat model used. Technical and methodological limitations also exist in our study, as only 1200 proteins were found in the rat kidney proteome. This number is low compared with the reference rat proteome dataset, although a recent analysis found a similar number of distinct proteins (1290) in rat kidney [55]. Therefore, our findings might have considered only the most abundant proteins of the rat kidney. Our study might therefore indicate a new mechanism of DOHaD in kidney disease other than the hyperfiltration theory.
Rats
The rats were treated in accordance with the guidelines of the Committee on Ethical Animal Care and Use of the National Hospital Organization Chiba-Higashi National Hospital. Eight pregnant rats were fed standard chow ad libitum. By intraperitoneally injecting dexamethasone (DEXA) (0.2 mg/kg) into pregnant rats (n = 5) consecutively at 15 and 16 days of gestation, LBW rats were born at high rates, as shown in previous reports [22]. For controls, we injected the same volume of saline (n = 3). Only newborn male rats were selected for this experiment. We first measured the body weights of the normal newborn male rats (n = 22), with a mean ± SD of 6.44 ± 0.47 g. We then selected seven rats weighing less than 5.51 g (below the mean − 2 SD of NBW) from 38 newborn male rats obtained from DEXA-injected mothers. As normal controls, we selected seven rats weighing > 6.4 g. Consequently, the means ± SE of the body weights of the rats used for the subsequent experiments were 5.01 ± 0.11 g and 6.93 ± 0.10 g in the LBW (n = 7) and NBW (n = 7) rat groups, respectively (p < 0.001, Table 1). Newborn rats remained with their dams until they were 4 weeks old, at which point they were removed from the cages and provided standard chow ad libitum. When the rats were 4 weeks old, the left kidneys were removed. These extracted kidneys were separated for histological and proteomic analyses. The kidneys were horizontally cut, and specimens from the center were used for light microscopic analysis. For the light microscopy studies, kidney specimens were fixed in 10% neutral-buffered formalin followed by paraffin embedding and routine staining with Masson's trichrome stain, or periodic acid-methenamine-silver (PAM) with hematoxylin and eosin (HE) (PAM-HE) stain. PAM-HE-stained images of paraffin-embedded kidney sections were imported into a virtual slide system. The glomerular size was automatically calculated after manually outlining the glomerular tuft area on the display. The kidney specimens used for the proteomics studies were obtained only from the cortex area. After separating the cortex from the medulla, the isolated cortex specimens were immediately frozen in liquid nitrogen and stored at −80 °C until further analysis. After uninephrectomy, all rats were fed a high-salt diet consisting of 8% NaCl until they were 9 weeks old. After sacrifice, the right kidneys were removed and separated for histological analyses in the same way as those at 4 weeks of age.
Immunohistochemical Analysis
Primary antibodies were anti-UQCR polyclonal antibody (14793-1-AP; Proteintech) and anti-ATP5I polyclonal antibody (ab122241; Abcam). Antigen retrieval was performed according to the manufacturer's instructions.
Numbers of UQCR7-positive tufts per glomerulus were counted in more than ten glomeruli per kidney section. The relative amounts of UQCR7 staining in the vascular poles of glomeruli were also scored semiquantitatively in more than ten glomeruli per kidney section as follows: 0 = negative staining, 1 = positive in less than 50% of the circumference of the vascular pole, 2 = positive in 50% or more of the circumference, 3 = positive in the whole circumference. In addition, percentages of UQCR7-positive cells in distal tubular cells were calculated by counting more than fifty distal tubular cells per section. Positive rates of Atp5I in tubular cells were also evaluated.
Proteomics
Sample preparation and protein digestion, nLC-MS/MS analysis, and database search and results processing were performed as described in [19].
Label-Free Quantitative Data Analysis
Raw LC-MS/MS data were imported into the Progenesis LC-MS 4.0 software program (Nonlinear Dynamics Ltd., Newcastle, UK). The data processing included the following steps: (i) feature detection, (ii) feature alignment across the samples, (iii) volume integration for two to six charge-state ions, (iv) normalization to the total protein abundance, (v) import of sequence information, (vi) ANOVA at the peptide level and filtering for features with a p-value < 0.05, (vii) calculation of protein abundance (sum of the corresponding peptide volumes), and (viii) ANOVA testing with Benjamini-Hochberg correction for multiple testing at the protein level and filtering for features with an adjusted p-value < 0.05. Of note, only nonconflicting features and unique peptides were considered for calculation at the protein level. Additionally, Progenesis performs an arcsinh correction of the data before the ANOVA calculation. Quantitative data were considered for proteins quantified by a minimum of two peptides. Protein abundance was normalized to the total protein content determined for each sample using in-gel densitometry methods (SDS-PAGE). MS data were also normalized in Progenesis based on the feature median ratio. Furthermore, we selected proteins that showed a statistically significant change of more than 20% in expression levels (adjusted p-value < 0.05) between the two groups (n = 7 in each group). A global analysis of the data was performed using the computer platform Ingenuity Pathway Analysis (IPA; Qiagen). We used the 'core analysis' package to identify relationships, mechanisms, functions, and pathways relevant to the dataset of interest. We also used the 'regulators' package to identify predicted regulators of the proteomic changes.
Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Institutional Review Board of Chiba Hospital (approval number H22-5).
Informed Consent Statement: Not applicable, as the study did not involve humans.
Data Availability Statement: The proteomic dataset is available to the community through the public repository PRIDE using the accession number PXD018948.
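To make the protein-level filtering described in the Label-Free Quantitative Data Analysis section more concrete, the sketch below reproduces its main steps (normalization to total protein abundance, arcsinh transformation, per-protein testing, Benjamini-Hochberg correction, and retention of proteins with an adjusted p-value < 0.05 and a change of more than 20%). The input file and column names are assumptions; the actual analysis was performed in Progenesis LC-MS and IPA, not in Python.

```python
# Hypothetical re-implementation of the protein-level filtering described in
# the Label-Free Quantitative Data Analysis section. The real analysis was
# performed in Progenesis LC-MS; the file name and column names are assumptions.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Assumed layout: one row per protein, one abundance column per animal
df = pd.read_csv("protein_abundances.csv", index_col="protein")
nbw_cols = [f"NBW_{i}" for i in range(1, 8)]
lbw_cols = [f"LBW_{i}" for i in range(1, 8)]
abundance = df[nbw_cols + lbw_cols]

# (iv) normalize each sample to its total protein abundance
abundance = abundance / abundance.sum(axis=0)

# arcsinh-transform the data before testing, as Progenesis does
transformed = np.arcsinh(abundance)

# per-protein two-group comparison (equivalent to one-way ANOVA for two groups),
# followed by Benjamini-Hochberg correction for multiple testing
t_res = stats.ttest_ind(transformed[nbw_cols].to_numpy(),
                        transformed[lbw_cols].to_numpy(), axis=1)
_, q_values, _, _ = multipletests(t_res.pvalue, method="fdr_bh")

# fold change of LBW relative to NBW on the normalized scale
fold_change = abundance[lbw_cols].mean(axis=1) / abundance[nbw_cols].mean(axis=1)

# keep proteins with adjusted p < 0.05 and a change of more than 20%
keep = (q_values < 0.05) & (np.abs(fold_change.to_numpy() - 1) > 0.20)
print(f"{keep.sum()} differentially expressed proteins retained")
```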
2021-10-14T05:33:29.350Z
2021-09-24T00:00:00.000
{ "year": 2021, "sha1": "a82419633af2c12845781c36ae921d97610d6491", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/19/10294/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a82419633af2c12845781c36ae921d97610d6491", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
250590734
pes2o/s2orc
v3-fos-license
Research on the Public Order Reservation System from Surrogacy
Surrogacy technology offers the possibility of having children to families who are otherwise unable to do so. Different countries have different legal regulations on surrogacy, which gives rise to transnational surrogacy, and, at the same time, disputes over transnational surrogacy are also increasing. As is well known, surrogacy is contrary to the public moral order in China, which raises the issue of the public order reservation. Private international law, as an important legal system within international law, has an unshakable position and role in dealing with transnational surrogacy disputes. In transnational surrogacy disputes, public order is mainly reflected in each country's legal regulation of surrogacy, the determination of parentage, and the relevant rules on the ethics of surrogacy, and its role is mainly reflected in the exclusion of the application of foreign law and the refusal to recognize the judgments of foreign courts. In applying the law, countries have tried to find a balance between the recognition of foreign judgments and the protection of the public order of the forum. This article takes a comparative and case-study approach, using the issue of surrogacy as an entry point to explore the application of public order reservations in this particular area.
Introduction
According to a survey by the Hague Conference on Private International Law (HCCH) [1], the birthplaces of surrogate children show fairly broad regional patterns based on differences in countries' levels of economic development and are mainly located in countries such as the United States and Ukraine. The scale of transnational surrogacy has grown rapidly in recent years. According to the "Research Report on the Status of Infertility in China" published by the China Population Association and the National Family Planning Commission in 2016, one in eight couples in China experiences fertility difficulties; the rate has increased from 1-2 percent in the 1970s to 12.5-15 percent today, roughly a tenfold increase in 30 years. Liu Changqiu, an associate researcher who studies life-science law, believes that the demand for surrogate births in China will reach the millions. Cases concerning transnational surrogacy have gradually come into the public eye in recent years through the evolving media. Transnational surrogacy brings legal challenges to the international social system as it raises issues of ethics, morality, and family religious beliefs. On the one hand, transnational surrogacy has become a tool for profiteering for some people, and poor women in some countries and regions earn money through low-cost "womb rentals", which can easily lead to human rights issues. On the other hand, because social and cultural backgrounds differ, the status of surrogate mothers and the applicable legal regulations differ from country to country, and the resulting legal conflicts also challenge traditional private international law. It is against this background that this paper is written.
Literature Review
Cases of transnational surrogacy are relatively more numerous, and more typical, in foreign practice. On the whole, there are more research results abroad, and dozens of monographs and hundreds of papers have studied the subject from different perspectives.
In addition, the reports of the Hague Conference on Private International Law and the relevant judgments of foreign courts have also analyzed and discussed transnational surrogacy in depth. These studies mainly focus on the conflict between surrogacy and ethics, the human rights of surrogate women, the determination of the parentage of surrogate children and the protection of their rights and interests, and the impact of surrogacy on international and domestic law. Surrogacy not only challenges traditional ethics and morality but also involves the protection of the human rights of surrogate mothers. Some scholars argue that surrogacy is a manifestation of the "commodification" of the surrogate mother, who is often exploited because of the inequality between the two parties. This view is mainly reflected in Marcelo De Alcantara's paper Surrogacy in Japan: Legal Implications for Parentage and Citizenship [2], Eric Blyth's Surrogacy Arrangements in Britain: Policy and Practice Issues for Professionals [3], D. Kelly Weisberg's The Birth of Surrogacy in Israel [4], and Kenneth McK. Norrie's "Reproductive Technology - A New Issue of Private International Law" [5]. In addition, surrogacy is also discussed in very recent works, such as Ian Kerridge, Michael Lowe and Cameron Stewart's Ethics and Law for the Health Professions and Martha A. Field's Surrogate Motherhood: The Legal and Human Issues. Kenneth McK. Norrie pointed out that "the paternity determination between surrogate parents and surrogate babies is more applicable to the principle of habitual residence or closest connection." Reproduction argues that transnational surrogacy is contrary to traditional ethics and that this kind of behavior is the exploitation of women [6]. In the article "Fair trade international surrogacy", the scholar Humbyrd argues that the reward earned by surrogate mothers is not commensurate with the hard work of carrying and delivering a child, so they are in a position of being exploited; in reality, the surrogacy industry should be regulated, a reasonable consideration mechanism should be established, and the exploitation of surrogate women should be reduced [7]. In addition, Professor Amrita Pande of Columbia University believes that the essence of surrogacy is the rental of a woman's uterus and the materialization of surrogate women [8], views that are roughly the same as those of Chinese society on surrogacy: Chinese social groups and even the government are trying their best to prevent the black and grey industrial chain in which surrogacy turns women's uteruses into commodities. The second focus is the determination of the parent-child relationship of surrogate children and the protection of their rights and interests. Regarding the identification of the parent-child relationship of surrogate children, foreign scholars focus on the conflict-of-laws rules that should be applied to surrogacy agreements and to the determination of the parent-child relationship, because the laws of various countries conflict markedly on the identification of the legal parents of surrogate children. Unifying the substantive law of various countries to solve the problem is difficult to achieve in a short period of time, whereas the application of conflict-of-laws rules can build a bridge between the laws and judges of various countries.
In terms of the protection of the rights and interests of surrogate children, the British professors Hall and Jolowicz called for international cooperation on the determination of parental rights and the protection of the rights and interests of surrogate children in "Surrogacy and Adoption in Two Jurisdictions". In addition, the protection of the rights and interests of surrogate children is mainly related to the principle of the best interests of the child [9].
Analysis of public order reservation
The reservation of public order is a very important part of private international law and is also known as a "saving clause". The definition of the concept of public order differs across countries. "Public order reservation" is the legal name used in China; in France it is customarily called "public order" (ordre public), in Germany it is customarily called the "reservation clause" (Vorbehaltsklausel), while in common law countries "public policy" is commonly used. The public order reservation is a means or system for excluding the application of foreign law: when the application of a foreign law designated by domestic conflict norms would be contrary to the public order of the forum State, the courts of that State may directly restrict or exclude the application of the foreign law on the ground that it is contrary to the public order of that State. This is the public order reservation. It serves to restrict or exclude the application of foreign law on the basis of the "public order" of the forum country. Although the concept of public order may change with time and place, it is mainly expressed in the principles of a country's political, economic, and legal system, its moral norms, and good customs.
Surrogacy in the context of legality
The US supports cross-border surrogacy and has a regulated legal system in place. In 1973, the U.S. enacted the Uniform Parentage Act, which has been continuously revised; it provides for the protection of the rights of children born out of wedlock and states that, regardless of the marital status of the parents, "all children shall have equal rights with all parents", a provision that implicitly gives surrogate children equal rights with legitimate children. The amendments include provisions on the legality of surrogacy for a fee and the identification of surrogate children, which clarify a position supporting surrogacy and set out clear rules on the qualifications of the parties to surrogacy, the validity of surrogacy contracts, and the determination of surrogate parentage, establishing a legal framework for surrogacy. The Uniform Parentage Act (UPA) is a model law developed by the National Conference of Commissioners on Uniform State Laws (NCCUSL) and is not itself legally enforceable. Many U.S. states have enacted laws regulating surrogacy by reference to the Uniform Parentage Act, and there is no shortage of court decisions supporting surrogacy. Among the 50 U.S. states, California has more comprehensive surrogacy legislation; the 2014 California Family Code provides detailed provisions on surrogacy and parentage determinations, and state courts have been vocal in their support of surrogacy in judicial proceedings.
California has become a world center for surrogacy, with excellent surrogacy agency services, first-rate surrogacy technology, and complete facilities, including sperm banks, egg banks, fertility centers, and agencies. Law firms focusing on surrogacy legal services have emerged, state courts have established departments for resolving surrogacy disputes, and cross-border surrogacy has become industrialized, forming a complete industry chain [10]. Ukraine is the surrogacy capital of Europe and one of the cross-border surrogacy markets used by Chinese citizens. In 2002, the Ukrainian Parliament passed the Family Law, which legalized commercial surrogacy and allowed cross-border surrogacy. The cost of surrogacy in Ukraine is relatively low by world standards, which gives the country a competitive advantage in the international surrogacy market. However, Ukraine lacks detailed legal regulation of surrogacy, which creates a legal "grey area". Poor government regulation of surrogacy has led to uncontrolled growth of the surrogacy market, and there are growing calls to regulate cross-border surrogacy in Ukraine. In Canada, cross-border surrogacy is freely allowed. Canada has enacted the Assisted Human Reproduction Act to regulate the surrogacy market, prevent the use of surrogacy to exploit surrogate mothers, and protect the legitimate rights of surrogate children. The Assisted Human Reproduction Act provides that cross-border surrogacy is legal in Canada (except Quebec) and applies to all types of families, including single parents, same-sex couples, and heterosexual couples. Paying surrogacy fees is prohibited in Canada and the surrogate mother cannot benefit financially; only reasonable expenses incurred by the surrogate mother during pregnancy, including loss of income, may be reimbursed. Russia is transitioning from allowing cross-border surrogacy to banning it. In 1995, the Family Code of the Russian Federation opened up surrogacy and cross-border surrogacy, and in 2012, the Law of the Russian Federation on Health Protection promoted the use of assisted medical reproductive technologies and fully liberalized the surrogacy market, so that families who fully and voluntarily consent to medical intervention have the right to use assisted reproductive technology. The relaxed legal environment and relatively low prices have attracted many people, with about 7,000 cases of cross-border surrogacy in Russia each year. In 2020, more than 10 cross-border surrogate children died in Russia, and abandonment of surrogate children by foreign intended parents, travel disruptions due to the global pandemic, unemployment of intended parents who became unable to pay for surrogacy, and other factors left more than 500 surrogate babies stranded in maternity and infant hospitals [11]. The 2012 Russian law prohibiting foreigners from adopting children of Russian nationality has been undermined by cross-border surrogacy, which prompted the State Duma to propose amendments to the Russian Federation Law on Health Protection in January 2021 to prohibit Russian citizens from providing surrogacy services to foreigners in Russia, to combat cross-border commercial surrogacy, and to regulate the surrogacy market.
Surrogacy in the context of illegality
At present, there is no special surrogacy law in China.
The only legal norms on surrogacy in China are departmental regulations issued by the Ministry of Health, such as the Measures for the Administration of Human Assisted Reproductive Technology, the Ethical Principles of Human Assisted Reproductive Technology and Human Sperm Banks, and the Measures for the Administration of Human Sperm Banks. According to Article 3, paragraph 2, of the Measures for the Administration of Human Assisted Reproductive Technology, China strictly prohibits surrogacy. In China, surrogacy agreements are considered to violate public order and good customs and have no legal effect. For the Chinese government, legalizing surrogacy would mean that women's wombs could become tradable commodities, and legalization would inevitably give rise to social problems such as a black-market industry chain, in which some people would use coercion, threats, and even personal injury to turn women's wombs, and the women themselves, into commodities [12]. Because of China's prohibitive attitude and severe crackdown on surrogacy, more and more celebrities choose cross-border surrogacy. Prohibition will not make surrogacy disappear, and the law should respond to emerging social problems in a timely manner. In Chinese judicial practice, most courts hold that cross-border surrogacy agreements violate public order and good customs as well as China's current laws and regulations, and therefore have no legal effect. As for determining the surrogate parent-child relationship, Chinese judicial practice takes the result of DNA identification as the standard [13]. In practice, Chinese courts also apply the principle of the best interests of the child to determine the guardianship of surrogate children. When Chinese courts hear disputes over the determination of cross-border surrogate parent-child relationships, they apply Chinese law in accordance with Article 5 of the Law on the Application of Laws to Foreign-Related Civil Relations. In addition, China's current law contains no legal norms specifically addressing cross-border surrogate children, which is bound to be unfavorable to the protection of their interests. Surrogacy has been able to grow quietly in China because the social foundation and the practical need for it exist. If the barrier of infertility is not resolved, surrogacy is inevitable. Since China forbids surrogacy, those affected can only seek surrogacy abroad, and cross-border surrogacy raises even more legal problems, many of which cannot yet be resolved.
The Case of Mr. and Mrs. Montserrat
Surrogacy is expressly prohibited in France. In 2000, a French couple, Mr. and Mrs. Montserrat, obtained twins through surrogacy in California (the father provided the sperm). The Montserrats wanted to raise the children in France, but their parentage would not be recognized by the French government. For this reason, while the children were still in the surrogate mother's womb, they applied to the Supreme Court of California. Since California has always been relatively open toward surrogacy, the California Supreme Court followed the surrogacy legislation introduced in 1982 and ruled that parentage between the Montserrats and the children was established. When the Montserrats brought the children back to France, they were immediately faced with a French review of the legality of their out-of-country surrogacy.
In making their decision, the French courts had to take into account previous decisions made in other countries. The Montserrats, who thought they would win their case, met with a very different ending. The case went all the way from the local court to the French Supreme Court, which ultimately refused to recognize the U.S. court's decision on the grounds that it violated French public order and morality. The crux of the matter is how the parentage of a surrogate child should be determined. The French Supreme Court held that "the surrogacy agreement is contrary to the fundamental principle of civil law that the identity rights of natural persons may not be alienated... Parentage, as the most basic identity right of natural persons, must be established by law and can never be agreed upon or disposed of by the parties themselves." As for the protection of the children's rights and interests, the court reasoned that non-recognition of the surrogacy agreement did not harm the children in this case: the two children could still live with Mr. and Mrs. Montserrat and enjoy the rights and interests of French citizens, such as education and social security. The Montserrats therefore brought a case against France before the European Court of Human Rights. After a hearing, the European Court of Human Rights found that the French Supreme Court had acted contrary to the European Convention on Human Rights, on the grounds that individuals are entitled to be free from unwarranted interference by public authorities in their private lives and that, in the case of the two children, the French decision would prevent them from acquiring French nationality by descent, from truly integrating into French society, and from inheriting their parents' property. For reasons of non-interference in private life and protection of the rights of the child, the European Court of Human Rights found that the French Supreme Court's decision violated the human rights conventions. The key word at the heart of all the decisions on surrogacy is "public order". California, too, recognized surrogacy out of public order considerations in its own jurisdiction: in that case, the California court held that a surrogate mother has the right to choose a method of procreation that suits her and that, based on the principle of protecting the rights of the child, the court is required to establish guardianship of the child, that is, to establish the child's legal kinship with his or her parents [14]. The problem is that different countries have different public orders, and each legal system needs to maintain the public order of its own jurisdiction. When transnational surrogacy cases like the Montserrat case occur, different countries contend with one another over the issue of public order, so that the same case results in different judgments.
Chinese Celebrity Surrogacy Cases
Apart from reproductive disorders, poor health, or an age unsuitable for pregnancy, many Chinese stars do not want the changes to their appearance and figure that come with preparing for pregnancy and raising children, so they turn to surrogacy; but because these are private matters that are not publicized by the news media, they are rarely discussed by the public. In 2020, a female star was exposed not only for using surrogacy but even for abandoning her child, revealing the complex issues involved in the grey area of surrogacy.
It involves emotion, human nature, ethics, law, and morality. Even in those states of the United States where surrogacy is legal and tolerated, it still gives rise to many legal and ethical disputes. Over the past decade, states where surrogacy is legal, such as California and Nevada, have received many wealthy clients from outside the United States; over the past five years, some hospitals have received far more clients from outside the United States than from within it. It is worth mentioning that almost all American clients choose surrogacy because they are unable to become pregnant for their own medical reasons, whereas some foreign clients are fertile and resort to surrogacy simply to keep their figure, to avoid affecting their work, or to avoid enduring pregnancy and childbirth. "Surrogacy of a child is not the purchase of a commodity; its essence is the desire for, and respect of, life," said Jiang PeiFang. Similarly, Article 14(f) of the Programme of Action of the International Conference on Population and Development states that "all couples and individuals have the fundamental right to freely and responsibly determine the number and spacing of children and to have access to information, education and means for this purpose; in exercising this right, couples and individuals have the responsibility to take into account the needs of their present and future children and their responsibilities to society." The actress responded to the "surrogacy storm" with reckless words, claiming that she had not broken the law either on Chinese soil or abroad. Surrogacy and abandonment undoubtedly have an impact on Chinese traditional morality and modern law. First, surrogacy is clearly prohibited in China: whatever form it takes, it is illegal. Second, the birth mother is the mother, and China's Marriage Law makes no specific provision for the determination of the parent-child relationship [15]. In judicial practice, the identification of the mother follows the principle that "the birth mother is the mother", based on the fact of birth, while the identification of the biological father is determined by blood relationship. This approach is closely related to China's traditional culture and morality. Some court precedents hold that the establishment of the mother-child relationship is based not on biological genetic continuity but rather on the emotional connection created by the ten months of gestation and the hardships of childbirth, and that determining the mother-child relationship solely by biological genes would lack sociological and psychological support. Finally, abandoning a surrogate child may constitute the "crime of abandonment". Under Article 261 of China's Criminal Law, the actress's behavior could well constitute the crime of abandonment, but in the eyes of her lawyer it can hardly do so: first, because China's criteria for establishing the crime of abandonment are very strict; and second, because the place where the act took place was the United States, which is why she dares to claim that she abided by Chinese law on Chinese soil.
Legal regulation of surrogacy, determination of paternity, and the ethics and morality of surrogacy
In addition to the objective facts of "birth" and "genetics", there are other methods of establishing parentage, such as adoption and parental orders.
In Germany, the Adoption Agreement Act prohibits surrogacy but allows children born through illegal cross-border surrogacy to be adopted by the intended parents, who become the legal parents of the cross-border surrogate children through adoption, so as to ensure that the surrogate children will not be left contested or abandoned. In the UK, the intended parents are granted relief by applying to a court for a parental order. They can apply within six months of the birth of the surrogate child, and once the court has examined and approved the parental order, the intended parents become the legal parents of the surrogate child by completing the transfer of parental responsibility under the Adoption Act. In a transnational surrogacy contract, the ability of the commissioning couple to establish a parent-child relationship with the surrogate child is the embodiment of the contract's validity. In countries that permit transnational paid surrogacy, the intended parents can, under the constraints of the surrogacy agreement, establish a parent-child relationship with the surrogate child in accordance with their agreement with the surrogate mother. There is real market demand for commercial surrogacy, and it is not prohibited in all countries or regions. Transnational commercial surrogacy can also bring positive economic benefits [16]. For example, the family line can be continued, and grandparents can enjoy family happiness and harmony. In addition, banning commercial surrogacy will not make it disappear; it will only drive it underground, where such activity tends to flourish. Commercial surrogacy is prohibited in China, yet more than 10,000 babies are born in the underground surrogacy market every year [17]. In order to balance the needs of all parties to the greatest extent possible, some countries and regions take a particularly open position in the legal regulation of commercial surrogacy. In these places, as long as the surrogacy involves no fraud or intimidation of the surrogate mother and does not harm the public interest, the validity of the surrogacy agreement is recognized, and the parent-child relationship created under the surrogacy agreement is usually recognized as well. In 2018, the Hague Conference on Private International Law addressed the determination of parentage, including surrogacy, in its latest report. Confirmation of who the parents are is the premise of many of a surrogate child's other rights and is extremely important to the child's development. The best interests of the child will be the primary consideration, and the baseline, in any possible future international legal instruments on transnational surrogacy. There is a legal conflict between the parental rights arising from transnational surrogacy and the public policy reservations of various countries, so countries usually take the realization of the "best interests of the child" as the guiding legal principle, applied through the comprehensive evaluation of the judge. The aim is that, as far as possible, transnational surrogate children should receive the spiritual and material support they ought to have as they grow up.
Exclusion of foreign surrogacy laws and refusal to recognize foreign court decisions
In the article by Xue Qiaodan [16], the challenges posed by transnational surrogacy to current public order doctrine are clearly analyzed. In the Montserrat surrogacy case discussed in this paper, the domestic court repeatedly refused to recognize the foreign court's decision on the grounds of the public order reservation. Every jurisdiction has its own public order to maintain, and this inevitably leads to conflicts between courts in different places over what public order requires. International practice, however, offers a model for reconciling such contradictions: partial recognition of judgments, which means that the preservation of a country's public order must make certain concessions. This is a problem that countries and regions have to face under the trend of globalization. In view of this, the Hague Conference on Private International Law has met frequently in recent years to discuss the issues arising from international surrogacy, and the judicial application of the public order reservation and the boundaries of its effects is one of the key issues [18].
Conclusion
This paper has discussed the well-known Montserrat case and the case of Chinese celebrity surrogacy, drawing on the studies of domestic and foreign scholars. It is easy to see that, because the social cultures of countries differ, the public order reservation is frequently cited to exclude foreign court decisions, and the protection of the best interests of the child has become the background against which many courts make these decisions. The most important question is the determination of parentage, which involves not only the original purpose of the surrogacy contract signed by the commissioning couple but also practical matters such as inheritance. In addition, the ethics and morality of surrogacy have long been controversial. How to protect the legal rights of children born through surrogacy while maintaining the public order of each country and region is a challenge that all countries face, and it has led to different court decisions in essentially the same case. It is therefore necessary, in this new situation, to explore the limits of the public order reservation in the era of globalization. In applying their laws and regulations, countries are trying to strike a balance between the recognition of foreign judgments and the protection of public order in their courts. In the era of globalization, public order has taken on a highly complex appearance, and existing doctrines are sometimes stretched to their limits. International practice offers lessons for improving this part of the law, namely the segmented, partial recognition of judgments, but how to give this concrete form in individual cases cannot be resolved overnight.
Oral cannabinoid-rich THC/CBD cannabis extract for secondary prevention of chemotherapy-induced nausea and vomiting: a study protocol for a pilot and definitive randomised double-blind placebo-controlled trial (CannabisCINV) Introduction Chemotherapy-induced nausea and vomiting (CINV) remains an important issue for patients receiving chemotherapy despite guideline-consistent antiemetic therapy. Trials using delta-9-tetrahydrocannabinol-rich (THC) products demonstrate limited antiemetic effect, significant adverse events and flawed study design. Trials using cannabidiol-rich (CBD) products demonstrate improved efficacy and psychological adverse event profile. No definitive trials have been conducted to support the use of cannabinoids for this indication, nor has the potential economic impact of incorporating such regimens into the Australian healthcare system been established. CannabisCINV aims to assess the efficacy, safety and cost-effectiveness of adding TN-TC11M, an oral THC/CBD extract to guideline-consistent antiemetics in the secondary prevention of CINV. Methods and analysis The current multicentre, 1:1 randomised cross-over, placebo-controlled pilot study will recruit 80 adult patients with any malignancy, experiencing CINV during moderate to highly emetogenic chemotherapy despite guideline-consistent antiemetics. Patients receive oral TN-TC11M (THC 2.5mg/CBD 2.5 mg) capsules or placebo capsules three times a day on day −1 to day 5 of cycle A of chemotherapy, followed by the alternative drug regimen during cycle B of chemotherapy and the preferred drug regimen during cycle C. The primary endpoint is the proportion of subjects attaining a complete response to CINV. Secondary and tertiary endpoints include regimen tolerability, impact on quality of life and health system resource use. The primary assessment tool is patient diaries, which are filled from day −1 to day 5. A subsequent randomised placebo-controlled parallel phase III trial will recruit a further 250 patients. Ethics and dissemination The protocol was approved by ethics review committees for all participating sites. Results will be disseminated in peer-reviewed journals and at scientific conferences. Drug supply Tilray. Protocol version 2.0, 9 June 2017. Trial registration number ANZCTR12616001036404; Pre-results. the study protocol. Specific criticisms: Page 4 (line 17): the complete response defined as no vomiting and no use of rescue medications is a standard efficacy end point in clinical studies of CINV prophylaxis. Unfortunately, it does not include any direct assessment of the nausea control that is still an unmet need in the management of CINV. Since the traditional CR is not the end point used for planned studies, it is necessary to specify what is meant by CR in the abstract. Page 5 (lines 50-51): primary efficacy end point of the two planned studies is CR during the overall phase, defined as no nausea, no emesis, and no use of rescue medications. However, this end point is commonly referred to as total control of CINV. It is extremely important that the authors use appropriate terminology to avoid confusion in the reader. Page 6 (line 5): the definition of no significant nausea must be specified in the study protocol. Page 6 (lines 52-53): eligible patients must have had a significant CINV despite guideline consistent prophylaxis. The authors should better specify what is meant by "significant CINV". 
Page 9 (line 38): since the nausea control is included in primary efficacy end point of the two planned studies, the tool used for nausea assessment (e.g., VAS or other) must be specified in the protocol. This is also an important point as "no significant nausea" is a secondary end point of the studies. Page 10 (lines 49-50): the authors state that "health outcomes will include the proportion of complete responders (i.e., participants with no emesis and no use of rescue medications)". This is inconsistent with the primary efficacy end point of the definitive study. In addition, the use of traditional CR instead of the total control of CINV could have a more favorable impact on the results of the economic analysis. REVIEWER Linda A Parker Psychology and Neuroscience, University of Guelph, Guelph, ON N1G 2W1, Canada REVIEW RETURNED 24-Jan-2018 GENERAL COMMENTS While current anti-emetic therapies are quite effective in reducing vomiting, they are much less effective in treating chemotherapyinduced nausea. Therefore, there is a need for better treatments for nausea in particular. Considerable preclinical evidence indicates that cannabidiol (CBD) (e.g, Parker et al, 2000;Rock et al, 2012) and its acidic precursor CBD acid (CBDA) Bolognini et al, 2013) have potential for treating nausea (acute and anticipatory) and vomiting alone and in combination with THC both by injection and by oral administration (Rock et al, 2016). In fact, found a synergistic effect of CBDA and ondansetron in the relief of nausea. In human clinical trials, as the authors review, Duran et al (2010) reported in a small pilot double-blind randomized trial that a THC/CBD cannabis extract (Sativex, GW Pharmaceuticals) had substantial efficacy in reducing emesis and delayed nausea produced by chemotherapy treatment. These findings suggest that it is definitely time to evaluate the potential of combined treatment of CBD and THC, especially for patients who fail to respond to the standard prophylactic anti-emetic regime. Antony Mersiades and colleagues present a protocol for an ongoing pilot and subsequent definitive randomized cross-over double-blind placebo-controlled trial to evaluate an oral cannabinoid-rich THC/CBD cannabis extract for secondary prevention of chemotherapy-induced nausea and vomiting (along with guidelineconsistent anti-emetics) in patients that are unresponsive to conventional anti-emetic treatment. For Cycle A, following an initial 24 hr administration of THC/CBD cannabis extract capsules (or placebo) to confirm tolerability, the patients will be administered either the active or placebo capsules on the day of treatment (1 hr before, immediately following and 4 hr later) and will be able to selftitrate (up to 12 capsules/day) their exposure on a subsequent 4 days. Then the patients will cross-over to Cycle B with the opposite treatment. Finally, in Cycle C they will receive the treatment that they preferred (THC/CBD or Placebo). The delay between cycles is not reported. The definitive study will follow the same design as Cycle A, but for cycle B, the patients will continue treatment with THC/CBD at their maximal tolerated dose from the previous cycle, with further scope to self-titrate according to symptoms. 
Data will be collected in self-report patient diaries and there will be daily assessment of patient on days 1-6 of each cycle to ensure that the treatments are taken, the patient is maintaining accurate records, to complete a checklist of cannabinoid-specific adverse events and to provide advice if needed. This critical study for improving the quality of life for chemotherapy patients is extremely well designed and will provide definitive evidence regarding the efficacy of THC/CBD treatment (in addition to standard anti-emetic treatment) in reducing nausea and vomiting in chemotherapy patients. It is timely and important for the worldwide health of cancer patients. There is a need to design and conduct appropriate trials with medicinal cannabis extracts in oncology patients for various symptom management issues. I believe your trial protocol sets out a useful template for such trials. The dose titration strategy is of particular importance because the need for dose titration with this type of pharmaceutical intervention is key to ensuring appropriate efficacy comparisons are made. References Thank you very much for your considered appraisal of this trial protocol and role in the broader context of CINV management. We found your critique valuable and have attempted to address your specific comments. Reviewer 1 comment Response Page Page 6 lines 51-53 -It would be helpful to define "significant CINV" Page 9 -Definitive study -Please clarify that this will also be a secondary prophylaxis population (i.e. patient would have had to experienced "significant CINV" -to be defined, see above comment, from a previous cycle) Previous wording The definitive randomised phase 3 study (N=250) will have a parallel group design, to reduce bias given the possibility of carry-over effect from cross-over in subsequent cycles, and to investigate longer-term efficacy over multiple chemotherapy cycles. New wording The definitive randomised phase 3 study (N=250) will assess the efficacy of the addition of TN-TC11M to guideline consistent anti-emetics as secondary prevention of CINV. 7-8 For the definitive study you may want to consider what cycle the patient experienced "significant CINV" in as a Response: Thank you for the suggestion which is worthy of consideration. Given range of patient and treatment factors that can influence the experience of CINV a pragmatic approach has NA stratifying factor in the randomization. been taken to limit the number of stratification factors. Page 6 lines 51-53 -It would be helpful to define "significant CINV" Page 7 and 8 -Under "Background Treatment" section -abbreviations for drug administration should be clarified, they are not necessarily universal. Please also clarify abbreviations in other parts of the manuscript if not already done. I believe for the most part most/all abbreviations are spelled out. One final check would be helpful. Page 9 -Definitive study -Please clarify that this will also be a secondary prophylaxis population (i.e. patient would have had to experienced "significant CINV" -to be defined, see above comment, from a previous cycle) For the definitive study you may want to consider what cycle the patient experienced "significant CINV" in as a stratifying factor in the randomization. This is an interesting research project on the management of CINV. However, some clarifications are needed about the methodology of the study protocol. 
Reviewer 2 comment/criticism Response Page Page 4 (line 17): the complete response defined as no vomiting and no use of rescue medications is a standard efficacy end point in clinical studies of CINV This study will use the traditional 'complete response' (CR) end-point as the primary outcome measure. This is justified as it remains the most validated tool for the assessment of prophylaxis. Unfortunately, it does not include any direct assessment of the nausea control that is still an unmet need in the management of CINV. Since the traditional CR is not the end point used for planned studies, it is necessary to specify what is meant by CR in the abstract. Page 10 (lines 49-50): the authors state that "health outcomes will include the proportion of complete responders (i.e., participants with no emesis and no use of rescue medications)". This is inconsistent with the primary efficacy end point of the definitive study. In addition, the use of traditional CR instead of the total control of CINV could have a more favorable impact on the results of the economic analysis. New wording The proportion of patients achieving a 'complete response' during the overall phase of treatment (0 -120 hours), defined as no emesis and no use of rescue medications. We feel that an efficacy end-point that includes complete control of nausea, such as 'total control' may be difficult to achieve and could potentially jeopardise the further study of a class of drug that may be highly beneficial in a subset of the population. Nausea remains an important secondary endpoint used to address this issue. The study will separately report, the proportion of subjects experiencing significant nausea, defined as degree of nausea <2 out of 10 using an 11-point rating scale across the acute (0 -24 hours), delayed (24 -120 hours) and overall (0 -120 hours) phases of cycles A, B and C is an important secondary end-point. We acknowledge the unmet need for nausea control a clinically important outcome that is often not represented as an endpoint in intervention trials CINV clinical trials. We have identified that fact that the primary end-point 'complete response' does not include the subject experience of nausea, and have listed it as a limitation of the study in the strengths and limitations of the study section. Removed Health related outcomes will include No nausea Added The definitive study will employ a health economic analysis which will use Added consistent with the primary endpoint). In addition we will conduct a sensitivity analysis to determine the incremental costs to achieve an outcome of no significant nausea, no emesis, and no use of rescue medications. New wording The definitive study will employ a health economic analysis which will use the proportion of patients with 'complete response' (i.e. participants with no nausea, no emesis and no use of rescue medications, consistent with the primary endpoint). In addition we will conduct a sensitivity analysis to determine the incremental costs to achieve an outcome of no significant nausea, no emesis, and no use of rescue medications. Added Limitations  Primary outcome measure (complete response) does not include nausea assessment, to ensure comparability with other CINV trials 2 Page 5 (lines 50-51): primary efficacy end point of the two planned studies is CR during the overall phase, defined as no nausea, no emesis, and no use of rescue medications. However, this end point is Removed -no nausea to leave New wording 3 commonly referred to as total control of CINV. 
It is extremely important that the authors use appropriate terminology to avoid confusion in the reader. The proportion of patients achieving a 'complete response' during the overall phase of treatment (0 -120 hours), defined as no emesis and no use of rescue medications. Page 6 (line 5): the definition of no significant nausea must be specified in the study protocol. Added (iii) no significant nausea, defined as degree of nausea <2 out of 10 using an 11-point rating scale, 4 Page 6 (lines 52-53): eligible patients must have had a significant CINV despite guideline consistent prophylaxis. The authors should better specify what is meant by "significant CINV". Added Experienced significant CINV, defined as requiring ≥1 dose of rescue medication for vomiting or distress by nausea, and/or ≥ moderate nausea on a 5-point rating scale, at any time during the current chemotherapy regimen despite guideline consistent anti-emetics, 5 Page 9 (line 38): since the nausea control is included in primary efficacy end point of the two planned studies, the tool used for nausea assessment (e.g., VAS or other) must be specified in the protocol. This is also an important point as "no significant nausea" is a secondary end point of the studies. Added Nausea (past 24-hour period), recorded using an 11-point rating scale Response: Thank you very much for your considered appraisal of this trial protocol and role in the broader context of CINV management. We are pleased you find it worthy of publication in it's current format. Reviewer 3 comment Response Page NA NA NA While current anti-emetic therapies are quite effective in reducing vomiting, they are much less effective in treating chemotherapy-induced nausea. Therefore, there is a need for better treatments for nausea in particular. Considerable preclinical evidence indicates that cannabidiol (CBD) (e.g, Parker et al, 2000;Rock et al, 2012) and its acidic precursor CBD acid (CBDA) Bolognini et al, 2013) have potential for treating nausea (acute and anticipatory) and vomiting alone and in combination with THC both by injection and by oral administration (Rock et al, 2016). In fact, found a synergistic effect of CBDA and ondansetron in the relief of nausea. In human clinical trials, as the authors review, Duran et al (2010) reported in a small pilot double-blind randomized trial that a THC/CBD cannabis extract (Sativex, GW Pharmaceuticals) had substantial efficacy in reducing emesis and delayed nausea produced by chemotherapy treatment. These findings suggest that it is definitely time to evaluate the potential of combined treatment of CBD and THC, especially for patients who fail to respond to the standard prophylactic anti-emetic regime. Antony Mersiades and colleagues present a protocol for an ongoing pilot and subsequent definitive randomized cross-over double-blind placebo-controlled trial to evaluate an oral cannabinoid-rich THC/CBD cannabis extract for secondary prevention of chemotherapy-induced nausea and vomiting (along with guideline-consistent anti-emetics) in patients that are unresponsive to conventional antiemetic treatment. For Cycle A, following an initial 24 hr administration of THC/CBD cannabis extract capsules (or placebo) to confirm tolerability, the patients will be administered either the active or placebo capsules on the day of treatment (1 hr before, immediately following and 4 hr later) and will be able to self-titrate (up to 12 capsules/day) their exposure on a subsequent 4 days. 
Then the patients will cross-over to Cycle B with the opposite treatment. Finally, in Cycle C they will receive the treatment that they preferred (THC/CBD or Placebo). The delay between cycles is not reported. The definitive study will follow the same design as Cycle A, but for cycle B, the patients will continue treatment with THC/CBD at their maximal tolerated dose from the previous cycle, with further scope to self-titrate according to symptoms. Data will be collected in self-report patient diaries and there will be daily assessment of patient on days 1-6 of each cycle to ensure that the treatments are taken, the patient is maintaining accurate records, to complete a checklist of cannabinoid-specific adverse events and to provide advice if needed. This critical study for improving the quality of life for chemotherapy patients is extremely well designed and will provide definitive evidence regarding the efficacy of THC/CBD treatment (in addition to standard anti-emetic treatment) in reducing nausea and vomiting in chemotherapy patients. It is timely and important for the world-wide health of cancer patients.
Nanoscale Properties of Human Telomeres Measured with a Dual Purpose X-ray Fluorescence and Super Resolution Microscopy Gold Nanoparticle Probe
Techniques to analyze human telomeres are imperative in studying the molecular mechanism of aging and related diseases. Two important aspects of telomeres are their length in DNA base pairs (bps) and their biophysical nanometer dimensions. However, there are currently no techniques that can simultaneously measure these quantities in individual cell nuclei. Here, we develop and evaluate a telomere "dual" gold nanoparticle-fluorescent probe simultaneously compatible with both X-ray fluorescence (XRF) and super resolution microscopy. We used silver enhancement to independently visualize the spatial locations of gold nanoparticles inside the nuclei, comparing to a standard QFISH (quantitative fluorescence in situ hybridization) probe, and showed good specificity at ∼90%. For sensitivity, we calculated telomere length based on a DNA/gold binding ratio using XRF and compared it to quantitative polymerase chain reaction (qPCR) measurements. The sensitivity was low (∼10%), probably because of steric interference denying the relatively large 10 nm gold nanoparticles access to the DNA space. We then measured the biophysical characteristics of individual telomeres using super resolution microscopy. Telomeres with an average length of ∼10 kbps have diameters ranging between ∼60 and 300 nm. Further, we treated cells with a telomere-shortening drug and showed there was a small but significant difference in telomere diameter in drug-treated vs control cells. We discuss our results in relation to the current debate surrounding telomere compaction.
Telomeres are repetitive sequences located at the ends of chromosomes. They have a number of functions, including protecting the chromosomes from degradation and preventing individual chromosomes from linking to each other. They are also associated with the aging process and shorten with each cell replication cycle.1 It is accepted that at least 8−10 base pairs (bps) are lost per cell division due to the "end replication problem",2,3 while many more bps are thought to be eroded by reactive oxygen species. Measuring these dynamics is important but technically challenging. Further, the packing density of the DNA in telomeres is currently a focus of research. Bandaria et al.4 recently proposed that DNA damage response (DDR) complexes are unable to enter telomeric regions due to DNA compaction. However, Timashev et al.5 and Vancevska et al.6 questioned this hypothesis, as they found DDR complexes could colocalize with telomeres even when the purported compaction proteins were present. Thus, measuring the biophysical dimensions of telomeres is important, as volume and packing density could play a vital role in biological function. There are a number of ways to measure telomere length (in base pairs of DNA), while there are very few methods that can measure biophysical characteristics, such as volume. Methods such as Southern hybridization can measure average telomere length in bulk homogenate samples.7 Bulk methods mask the heterogeneity that can occur at each telomere within a nucleus.8,9 For individual telomeres, a method known as quantitative fluorescence in situ hybridization (QFISH) uses a short fluorescent oligonucleotide (made of DNA or, more commonly, PNA) as a probe for the repeat motif found in telomeres, visualized with standard fluorescence microscopy.10
Preparing the cells so that they are in metaphase allows telomeres to be seen directly on the ends of each chromosome.11 The length of each telomere is usually estimated based on fluorescence intensity in arbitrary fluorescence units, or as a ratio to a centromeric probe of known length. However, conventional QFISH measurements do not directly measure the absolute length of the DNA, nor do they measure the actual spatial dimensions of the telomere, owing to the light diffraction limitations of standard fluorescence imaging. Here we attempt to measure both of these quantities using a probe that is compatible with both X-ray fluorescence (XRF) and super resolution microscopy (here using dSTORM: direct stochastic optical reconstruction microscopy12). We chose XRF as it has excellent sensitivity to gold and can absolutely quantify the numbers of atoms in a sample, as well as having micron resolution. We took a correlative approach in which the two measurements are obtained on identically prepared samples, and found that our probe could be used with both imaging modalities. We discuss the limitations of the method and the potential improvements that could be made to it. We also compare our super resolution results to the limited literature values available and discuss the sensitivity of the technique as a method of determining telomere diameter (or volume).
RESULTS
We constructed the probe using a gold nanoparticle conjugation kit (Creative-Diagnostics, NY) and a custom-made fluorescent peptide nucleic acid (PNA) oligonucleotide (Panagene, Korea). The two components were conjugated together using the manufacturer's protocol (for details, see Experimental Methods and the Supporting Information). We call our probe GNP-PNA-A647 to represent its important components (for a schematic, see Figure 1a). The gold nanoparticles are ideal for detection through the characteristic XRF signal of the Au L-shell, while Alexa-647 is an organic dye that is widely used in super resolution microscopy. By quantifying the GNP signal, it is possible to infer how many DNA bases there are in a telomere, if we can obtain an estimate of the ratio of GNPs to DNA bases. Quantifying the A647 signal using super resolution microscopy provides a measure of the actual spatial dimension of the telomere, rather than the much larger apparent area that is commonly obtained using diffraction-limited modalities such as confocal microscopy. We characterized the GNP-PNA-A647 probe primarily using transmission electron microscopy (TEM) and automated image analysis (see Figure 1b−e, and Experimental Methods for image analysis techniques). Figure 1e shows the equivalent diameters of the organic shell surrounding the GNPs, comparing GNPs direct from the manufacturer with GNP-PNA-A647. Note that the GNPs from the manufacturer are functionalized with a 5 kDa polyethylene glycol (PEG) layer; we therefore call them GNP-PEG. The median shell diameter was ∼6.5 nm for GNP-PEG and ∼8.5 nm for GNP-PNA-A647, a statistically significant difference in a Mann-Whitney U test. Our GNP-PEG measurement is largely in agreement with del Pino et al.,13 who measured a ∼7−10 nm diameter shell on their GNP-PEG constructs, also using a 5 kDa PEG molecule. In addition to the TEM measurements, we used UV−vis spectroscopy to measure the characteristic Alexa-647 absorption peak at ∼647 nm, which is present in our purified probe construct but not in GNP-PEG samples (see SI Figure 1). Next, we hybridized our GNP-PNA-A647 probe to telomeres on human cells.
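The shell-diameter comparison above is, in essence, a nonparametric test on two distributions of equivalent diameters. The snippet below is a minimal sketch of that kind of comparison, not the authors' actual analysis pipeline; the input file names and their layout (one diameter per line, in nm) are assumptions for illustration only.

```python
# Minimal sketch: compare GNP-PEG vs GNP-PNA-A647 shell diameters measured from TEM.
# File names and data layout are hypothetical placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

gnp_peg = np.loadtxt("gnp_peg_shell_diameters_nm.txt")        # hypothetical input file
gnp_pna = np.loadtxt("gnp_pna_a647_shell_diameters_nm.txt")   # hypothetical input file

print(f"GNP-PEG median shell diameter:      {np.median(gnp_peg):.1f} nm")
print(f"GNP-PNA-A647 median shell diameter: {np.median(gnp_pna):.1f} nm")

# Two-sided Mann-Whitney U test: distribution-free, suitable for skewed size data
stat, p = mannwhitneyu(gnp_peg, gnp_pna, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.3g}")
```

A rank-based test is a reasonable default here because particle-size distributions from TEM are often asymmetric, so comparing medians is more robust than comparing means.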
We used a standard metaphase spread protocol with Human Embryonic Kidney (HEK-293) cells (see Experimental Methods) to produce well spread out chromosomes on which to perform probe hybridization. We used an in situ fluorescence hybridization (FISH) kit (Dako, Denmark), replacing their probe with our own. It should be noted that our FISH protocol (like most FISH protocols) uses proteases and ribonucleases to strip the cell of all components other than the DNA contained in the nucleus; the images shown therefore derive solely from DNA. As a control, we probed the cells with GNPs lacking the PNA-A647. HEK cells are known to tolerate colcemid, which blocks replication at metaphase, and this facilitated the reproducible harvesting of a large number of interphase nuclei and, to a lesser extent, metaphase chromosomes. These can be distinguished by their appearance: interphase nuclei are circular, as all the individual chromosomes are clustered together in a circular arrangement, whereas the chromosomes in metaphase have condensed and separated so that they can be seen individually. There were technical reasons why we did not concentrate on metaphase spreads with XRF: beam targeting and time constraints on the XRF system required focusing on the more numerous interphase nuclei, since metaphase spreads are much less common than interphase nuclei in a sample preparation.
Probe Sensitivity. The XRF imaging of the samples was performed with the microprobe of the Microfocus Spectroscopy beamline I18 at the Diamond Light Source.14 The data collection time on each pixel was 10 s, which meant that each region scan generally took about 4 h, depending on the size of the scan. The beam spot size was 2.5 μm, giving the pixel size in the images in Figure 2. Processing of the raw data was done with the PyMCA software and involves peak fitting, background removal, and estimation of concentration levels with the use of a reference material (AXO, Dresden GmbH). In PyMCA, the reference material is modeled in terms of matrix composition, density, and thickness and is used to determine the photon flux of the experiment. The photon flux value is used in conjunction with the matrix, density, and thickness of our probed samples to measure the concentration of gold. The matrix and density assumed for our samples were the International Commission on Radiological Protection (ICRP) standard soft tissue composition (CNHO, 1 g/cm3). We measured the thickness of the cell nuclei using two independent techniques. We used ion beam analysis (IBA)15 and measured an average nucleus thickness of ∼45 nm (for details please see the Supporting Information). For comparison, we used scanning transmission ion microscopy (STIM)16 and measured an average thickness of ∼65 nm per nucleus (see the Supporting Information for details).
[Figure 2 caption] Sensitivity of the probe. (a) X-ray fluorescence (XRF) spectrum from a 60 μm × 100 μm scanned region containing 6 cell nuclei probed with GNP-PNA-A647. The counts (black line) are fitted (red line) so that each peak can be assigned to an element and quantified. The gold peak at 9.7 keV is magnified in the inset. (b) Elemental Zn map made by displaying just the Zn peak from the spectrum shown in part a. The pixel intensity scale bar shows the mass fraction of Zn. Each cell nucleus is segmented (green outline), and a mass fraction average is measured for each nucleus. The white scale bar is 10 μm. (c) Au map of the same region. The average Au mass fraction per nucleus is shown in Table 1, converted to numbers of gold nanoparticles. (d) XRF spectrum from control cells. (e) Zn map from control cells; note that the faint lines are chromosomes from a metaphase spread. (f) Au map from control cells; there is no Au signal distinguishable above the background noise. The GNP-PNA-A647 probe measured telomere length on average as ∼0.9 kbps (see Table 1 and Table SI 1); by comparing this length to the qPCR length measurement (see Figure 5), the probe has a sensitivity of ∼10%.
The two thickness estimates are in reasonable agreement given the variability in sample preparation and the intrinsic sample heterogeneity. For the XRF data fitting, we performed two fits assuming either a 45 or a 65 nm nucleus thickness; the values for the gold concentration shown in Table 1 are the mean of these two fits, with the standard deviation. Figure 2b,c shows the Zn and Au elemental concentration maps. The Au Lα peak at 9.7 keV is clearly present in the GNP-PNA-A647 probed cells. A similar XRF spectrum can be seen for the control (Figure 2d), where cell nuclei were probed with GNPs but without the PNA-A647 oligonucleotide. Here, though, the Zn map reveals the cell nuclei (Figure 2e) but the Au map (Figure 2f) shows no discernible features. We segmented the cells (areas shown in green) and extracted an average Au mass fraction per nucleus (i.e., the average pixel intensity). The mass fraction is multiplied by 10^6 to give the ppm (parts-per-million by weight) concentration. The ppm concentration of gold can only be converted into absolute values if one knows the mass of the matrix, which in this case is the mass of the nucleus. The mass of the nucleus is derived from the measured perimeter and the average nuclear thickness (55 nm), given the ICRP soft tissue composition standard density (1 g/cm3). Table 1 is a summary of the measurements and allows the number of gold nanoparticles in each nucleus to be estimated. This is calculated from the mass of gold in each nucleus divided by the mass of a single gold nanoparticle (1.01 × 10^-17 g; the volume of a 10 nm sphere is 5.24 × 10^-19 cm3 and the density of gold is 19.3 g/cm3). To arrive at an approximate number of gold nanoparticles per telomere region, we assumed a binding ratio of 1:18 GNP/telomeric DNA, as determined by the design of our probe (see Figure 1), and we measured an average of 86 ± 23 telomeres per nucleus by counting the number of Alexa-647 fluorescent dots in N = 20 interphase nuclei (see Figure 3a as an example; the variation in telomere number is probably due to aneuploidy in HEK cells, so no two cells are likely to have exactly the same number of chromosomes, and consequently of telomeres). Finally, to calculate the number of DNA base pairs per telomere, the number of GNPs per nucleus is divided by 86 (to obtain GNPs per telomere) and then multiplied by 18 (the binding ratio). For the 6 cells shown in Figure 2, we obtain on average about 3500 GNPs per cell, which equates to ∼740 DNA bases per telomere (see Table 1).
[Table 1 footnote] The mass of each cell nucleus is derived from its measured perimeter (see the green outlines in Figure 2), thickness (55 nm, measured by IBA and STIM), and density (1 g/cm3). The standard deviation is calculated from the uncertainty in the cell nucleus thickness (±10 nm). The mass of Au in each nucleus is calculated from the parts-per-million concentration measured by XRF. The gold nanoparticles are 10 nm in diameter, so assuming that each GNP binds 18 bases of DNA and that each nucleus has 86 telomeres, the average number of bases per telomere can be calculated (see text for details).
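To make the chain of unit conversions explicit, the following is a minimal sketch of the calculation described above, using the constants quoted in the text (10 nm GNPs, a 1:18 GNP/telomeric-DNA binding ratio, 86 telomeres per nucleus, a 55 nm nucleus thickness, and a 1 g/cm3 soft-tissue density). The example ppm value and segmented nucleus area at the bottom are arbitrary illustrative inputs, not values from Table 1.

```python
# Minimal sketch: convert an XRF Au mass fraction (ppm) into DNA bases per telomere.
import math

AU_DENSITY = 19.3e-21                 # g/nm^3 (19.3 g/cm^3)
GNP_DIAMETER_NM = 10.0
GNP_MASS_G = AU_DENSITY * (4.0 / 3.0) * math.pi * (GNP_DIAMETER_NM / 2.0) ** 3  # ~1.01e-17 g

BASES_PER_GNP = 18                    # one probe (one GNP) hybridizes 18 telomeric bases
TELOMERES_PER_NUCLEUS = 86            # mean Alexa-647 focus count per interphase nucleus

def bases_per_telomere(au_ppm, nucleus_area_um2, thickness_nm=55.0, density_g_cm3=1.0):
    """Convert an Au mass fraction (ppm by weight) into average DNA bases per telomere."""
    # nucleus mass = segmented area x thickness x assumed soft-tissue density
    volume_cm3 = (nucleus_area_um2 * 1e-8) * (thickness_nm * 1e-7)   # um^2 -> cm^2, nm -> cm
    nucleus_mass_g = volume_cm3 * density_g_cm3
    au_mass_g = nucleus_mass_g * au_ppm * 1e-6                       # ppm -> mass fraction
    n_gnp = au_mass_g / GNP_MASS_G
    return n_gnp * BASES_PER_GNP / TELOMERES_PER_NUCLEUS

# Hypothetical example: a nucleus with 100 um^2 segmented area and 900 ppm Au
print(f"{bases_per_telomere(900.0, 100.0):.0f} bases per telomere")
```

The same arithmetic, run with the measured per-nucleus Au mass fractions, yields the per-cell GNP counts and base-pair estimates reported in Table 1.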
When we combine this with data measured on other cells (see SI Figure 4), the average is ∼4400 GNPs per cell, corresponding to ∼900 bps. To measure the actual length of the average telomere in the HEK cells, we used quantitative polymerase chain reaction (qPCR), an established method for measuring telomere length.17 Briefly, an 84-mer "telomere" PCR template (TGAC-CA)84 is used in a serial dilution to create a standard curve of telomere template concentration (equivalent to telomere length) against cycle completion time, so that unknown samples can be measured against this standard. DNA extracted from the HEK cells had telomeres that were on average 10.2 ± 0.12 kbps long (see Figure 4a). By comparing this length to the XRF length measurement using our probe (which gave an average telomere length of 0.9 kbps), our probe has a sensitivity of ∼10%. Clearly, the GNPs are not fully covering the telomeric region, potentially because of their relatively large size (see Discussion).
Probe Specificity. The X-ray beam size was far too large to directly visualize individual clusters of GNPs on individual telomeres. To confirm that GNP-PNA-A647 was binding specifically to the telomere regions rather than nonspecifically to random DNA, we used silver enhancement. This is a commonly used technique in which silver ions nucleate around GNPs and grow over time, eventually becoming large enough to be seen with light microscopy. For the specificity analysis, we compared silver enhanced images (Figure 3c) to images hybridized with PNA-A647 (Figure 3a), a conventional probe. To quantify the images, we automatically segmented and counted the dots using a binary threshold (Figure 3b,d) and then took the ratio between the standard probe and our probe as a measure of specificity. From N > 500 dots, the specificity is 87.2% (an average of 86 dots per PNA-A647 nucleus and an average of 75 dots per silver stained nucleus). As an additional measure, we also compared the spatial distribution of the dots between the standard probe and our probe using nearest neighbor analysis. Figure 3e shows a histogram of the data; the distributions are similar, and the means are not significantly different in a Mann-Whitney U test.
Super Resolution Analysis of Telomeres. As the next part of our investigation, we used super resolution microscopy to measure the biophysical dimensions of telomeres. Here, we utilized the "blinking" characteristics18 of the Alexa-647 molecule on our probe, visualized with direct stochastic optical reconstruction microscopy (dSTORM). The samples were mounted in an oxygen-scavenging buffer (for details see Experimental Methods). Probed nuclei were first imaged using diffraction limited fluorescence microscopy and then subsequently imaged using dSTORM for comparison. Figure 4a shows a representative image of a number of chromosomes (blue), with the telomeres from the diffraction limited images (red) overlaid with the corresponding super resolution images (white). Figure 4b shows the raw data from a super resolution acquisition of a single telomere and illustrates the >200 events from the region.
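One simple way to turn such a localization cloud into a size estimate is to take the area enclosing the events and convert it into an equivalent circular diameter. The snippet below is a minimal sketch of that idea and is not the authors' rendering or analysis pipeline; the localization coordinates are randomly generated placeholders standing in for the >200 events of a single telomere.

```python
# Minimal sketch: estimate an equivalent diameter from one telomere's dSTORM localizations.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# ~200 localization events scattered over a region a few hundred nm across (illustrative)
locs_nm = rng.normal(loc=0.0, scale=45.0, size=(200, 2))

hull = ConvexHull(locs_nm)                    # in 2D, hull.volume is the enclosed area
area_nm2 = hull.volume
equivalent_diameter_nm = 2.0 * np.sqrt(area_nm2 / np.pi)
print(f"Cluster area: {area_nm2:.0f} nm^2, equivalent diameter: {equivalent_diameter_nm:.0f} nm")
```

In practice a rendered image (for example, the jittered triangulation described next) or a density-based clustering step would precede such a measurement, so that background localizations do not inflate the apparent telomere size.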
This raw data is rendered into a super resolution image using a "jittered triangulation" algorithm, 18 illustrated in a magnified view of a telomere in Figure 4c. An analysis of the spatial properties of telomeres was then conducted, in both diffraction limited and super resolution images, and the results are shown in Figure 4d,e. Figure 4d shows the equivalent diameters of the diffraction limited telomeres ranging between ∼550 and 750 nm, while the super resolution resolved telomeres range between 60 and 300 nm. Figure 3e shows that the diffraction-limited telomeres are almost circular, while the super resolution resolved telomeres are ellipsoidal (or irregular). This comparison is important, as it shows how diffraction limited images can lead to erroneous conclusions. Previously it was thought that telomeres were spheroids, 9 while our data shows they are, in general, ovoid or irregular. Furthermore, by measuring the actual dimensions of telomeres, as well as their length, the compaction density of the DNA can be estimated (see Discussion). The circularity (determining surface area) and the DNA density are both important when considering how enzymes enter into and interact with the telomere. Figure 5 shows a comparison between HEK cells treated with the drug azidothymidine (AZT) and untreated control HEK cells. AZT is a putative telomerase inhibitor and has been shown to shorten telomere lengths in human cells. 19 Using qPCR we measured the lengths of telomeres in the treated and untreated cells (Figure 5a). The results show that HEK cells have on average 10.2 ± 0.12 kbps long telomeres, whereas the HEK-AZT treated cells have on average 9.2 ± 0.12 kbps long telomeres, which is significantly different in a t test at P = 0.005. Figure 5b shows the equivalent diameter of >1500 super resolution measured telomeres from treated and untreated cells. Article Here, the median diameter of the untreated HEK and AZT treated cells is 183 and 169 nm, respectively, which is significantly different at P = 0.005 using a Mann−Whitney-U test. Clearly the ranges overlap considerably with a spread of diameters from ∼60−300 nm. Although the difference is small, it is still just detectable and mirrors the results seen with the absolute telomere values measured with qPCR. DISCUSSION We have developed a gold nanoparticle probe, designed to measure length and absolute dimensions of telomeres and have evaluated its specificity and sensitivity. Over the past 20 years, considerable efforts have been made to accurately measure the length, biophysical volume, and density of telomeres, due to their importance in many areas of aging and cell cycle regulation. Here, we will discuss the sensitivity and specificity of our GNP probe and then consider the implications for our super resolution results on the diameters of telomeres. We chose synchrotron X-ray fluorescence to measure the length of telomeres, as it has the potential to absolutely quantify gold with micron resolution, and a beam energy which can be "tuned" to optimize signal from an element of interest, giving it very high sensitivity (ppm). To our knowledge, no attempt has been made to measure telomeres with an XRF-compatible probe, although similar probes have been made for Raman spectroscopy. 19 Indeed, our group has used ion beam analysis (a closely related XRF technique) to absolutely quantify the numbers of gold nanoparticles and titania nanoparticles in individual cells as well as other endogenous biological trace elements such as Ca, P, Na, K, S. 
20−22 Analysis of our XRF data indicate that there is on average ∼0.5 pg of gold in probed cell nuclei or about 4400 GNPs. This corresponds to binding ∼920 bases of DNA if we assume that each GNP binds to 18 bases and each nucleus has 86 telomeres. Using qPCR which is a conventional method to measure telomere length, we showed that our HEK cells had telomeres ∼10 kbps long. This means that the sensitivity of our probe is ∼10%. The reason for the low sensitivity is probably due to steric interference that the 10 nm GNPs experience with the chromosome structure. That said, we initially chose 10 nm GNPs as previous reports from Zong et al. 19 had shown good Raman signals from similarly designed 20 nm gold nanoparticle telomere probes. Zong et al. did not do a sensitivity analysis of their probe, rather they compared Raman signal to a centromeric silver probe and measured ratios of gold to silver signal as a means to compare telomere length on drug treated vs untreated cells. Our results indicate that is it very unlikely that their 20 nm sized probe had full coverage of the telomere length. As a way to improve sensitivity, ultrasmall NPs (∼1.5 nm) could be used instead of 10 nm NPs. The caveat here is that there will be less gold atoms to detect per probe, perhaps beyond the sensitivity of XRF. On the other hand, the coverage with ultrasmall gold nanoparticles will likely be much greater, and so there may actually be similar numbers of gold atoms than with larger, but fewer, gold nanoparticles. Indeed, XRF does have the sensitivity to measure 2 nm gold nanoparticles linked to antibodies in ppm concentrations. 23 We then measured the specificity of our probe to be ∼90%, comparable to a standard QFISH probe. In fact, it is not surprising that the specificity of our probe is good, as the PNA part is identical to a conventional QFISH probe, which has proven excellent specificity. 10 However, there is always the possibility of nonspecific interactions between DNA and gold nanoparticles. However, we believe the stringent washes (65°C in 0.2% detergent) we performed after hybridization (see the FISH protocol) satisfactorily removed almost all nonspecifically bound probe. We then used the Alexa647 part of the probe to measure telomeres using super resolution microscopy, which unlike diffraction-limited microscopy gives realistic dimensions. The following discussion describes how even with very large differences in telomere lengths, super resolution actually has relatively poor sensitivity in measuring these differences as a volume or diameter. This means that caution should be used interpreting results measuring compact vs decompact or long vs short telomere regions. There are very few studies to date where super resolution techniques have been applied to telomeres. 4,5,24 However, currently there is particular interest in measuring telomere biophysical properties, as there is controversy about whether DNA compaction influences the molecular interaction between enzyme complexes and telomeric DNA, and in particular, how to interpret the effect of shelterin. Bandaria et al. 4 recently measured the volumes of telomeres by super resolution microscopy in compact vs decompact telomeres, using various shelterin complex knockout cell lines. They proposed that shelterin protects chromosome ends by compacting the telomeric chromatin, thus preventing DNA damage response enzymes from entering into the telomeric space. More recently, the opposite effect was found by Timashev et al. 5 and Vancevska et al. 
6 who found that telomeric DNA damage response occurs in the absence of DNA decompaction and questioned whether shelterin really compacted DNA to any substantial degree. An important issue in this debate is measuring the packing density of the telomeres, which our results allow, as we have measured both the realistic length (using qPCR) and the actual diameter (with super resolution). First, it is important to summarize telomere diameter results from the literature and compare them to our values. We found that HEK cells had an average telomere length of ∼10 kbps, while they had an average diameter of ∼180 nm and a range between 60 and 300 nm. This is in good agreement with literature values that range between ∼60 and 400 nm, depending on the cell line and conditions used. In particular, Vancevska et al. showed that HeLa cells with "long" telomeres (average ∼30 kbps) had an average diameter of ∼190 nm (range 100−300 nm) while HeLa cells with "short" telomeres had an average diameter of ∼130 nm (range 60−200 nm). Timashev et al., using the mouse MEF cell line, reported slightly larger values, ranging from ∼80 to 400 nm with a mean diameter of ∼200 nm. (Note: both these papers report radius rather than diameter.) Bandaria et al. used HeLa cells and report volumes of telomeres, giving an average diameter of ∼150 nm. Overall, there is very good agreement between the reported values and our own. From our telomere diameter measurements, we can make an estimation of the DNA spacing or compaction. If we consider DNA as a cylinder, the volume is defined by the length and radius of a base pair (0.332 nm length, 1 nm radius), so that the volume of 10 kbps is theoretically 1.25 × 10⁴ nm³. The average HEK telomere volume is actually ∼140 times more than this (a sphere with diameter 180 nm, or 1.76 × 10⁶ nm³). For a point of reference, the "average" packing density or spacing of genomic DNA is ∼120 times more than the theoretical limit (genomic DNA = ∼3.2 billion base pairs or 3.33 × 10⁹ nm³; volume of a spherical 5.5 μm radius cell nucleus = 2.94 × 10¹¹ nm³). So we can estimate that the telomeric DNA is similarly spaced in comparison with "average" genomic DNA. This is interesting as it is often assumed that telomeres are much more compact than "average" DNA, but as we see in the following discussion, even when proteins are deleted that were thought to have a "compaction" effect, there is hardly a change in the measured diameters of telomeres. We also treated HEK cells with a purported telomerase inhibitor, AZT. We did this to have a direct comparison between "long" vs "short" telomeres. With qPCR, we measured a significantly reduced average telomere length in drug-treated HEK cells (about 1 kbps shorter: ∼9 kbps drug-treated compared to ∼10 kbps in controls). This difference could just be seen using super resolution microscopy, where we measured diameters of individual telomeres of ∼170 nm drug-treated vs ∼180 nm control, although the ranges of the two samples overlapped considerably. Similarly, Vancevska et al.
compared the sensitivity of super resolution microscopy in measuring the difference between "long" vs "short" telomeres, against a conventional method (Southern hybridization). Here, they used two different strains of HeLa cells genetically modified for cells with long or short telomeres. With southern hybridization there was a clear telomere length difference, where an average "long" cell had telomeres that were ∼30 kbps, while the average "short" cell had ∼10 kbps. With super resolution, the "short" cells had a mean telomere radius of ∼68 nm while the "long" cells had a ∼90 nm, with large overlapping ranges between the two. So, with a conventional technique there was a clear difference, while with super resolution it was measurable, but much less pronounced. Similarly we found that a clear difference in qPCR manifests as only a very marginal difference with super resolution microscopy (see Figure 5). The point is that measuring compact vs decompact, or long vs short telomere regions, with super resolution is challenging due to the large range of telomere diameter values, and results should be interpreted with caution. The real strength of super resolution lies in the ability of analyzing individual telomeres particularly when colocalizing proteins and complexes within the telomeric space, or visualizing the telomere DNA loop as Doksani et al. 24 achieved. CONCLUSIONS We have developed and evaluated a dual purpose probe for human telomeres. We measured the gold signal arising from this probe located on telomeres inside cell nuclei. We measured approximately 4400 GNPs per nucleus. This gold content indicates a probe sensitivity of ∼10%, based on comparisons with qPCR based telomere length measurements. The low sensitivity is probably due to steric interference of a relatively large GNP size not gaining full access to the telomeric space. We then measured the probe specificity to be ∼90%, by comparing silver enhanced samples (which directly indicate the locations of gold nanoparticles) to a standard QFISH probe. We then used the fluorescent part of our GNP-PNA-A647 probe to measure realistic diameters of human telomeres. Here, we show that the average diameter is about 180 nm and that the telomeres are rarely circular. This is important as it relates to the packing density of the DNA and hence how accessible telomeres are to various modifying enzymes. Further, by comparing telomeres from drug treated cells that have on average 1 kbps less DNA in the telomeres, we are able to discuss the sensitivity of super resolution as a technique compared to conventional methods. This work has laid the foundation for a probe design that will be able to simultaneously measure the absolute length and biophysical dimensions of individual telomeres in human cells. Moreover, the approach could have wider uses in measuring other important biomolecules inside cells and tissue. The absolute size of structures and the number of molecules present are often important parameters for understanding basic biological processes. EXPERIMENTAL METHODS Gold Nanoparticle Probe Construct. Briefly, the GNPs are ∼10 nm in size, coated with polyethylene glycol (PEG) terminated with NH moieties (see the manufacturer's Web site for physical characteristics). The probe is an antisense 18-mer peptide nucleic acid corresponding to the telomeric repeat on the chromosomes of (GGGTAA), with a spacer molecule and lysine residue at both ends, and an Alexa-647 molecule at the NH terminus end (we call it PNA-A647). 
The GNPs are mixed with the PNA-A647 in a molar ratio of 1:100, respectively, left to react for 2 h, centrifuged at 17 000g, and washed three times to remove any unbound PNA-A647 (details can be found in the Supporting Information). Absorbance spectroscopy of the GNP-PNA-A647 conjugate shows two clear peaks at ∼520 nm and ∼655 nm associated with absorption from the GNPs and the Alexa-647 dye, respectively, indicating the PNA-A647 has bound to the GNPs (see the Supporting Information). Cell Culture. The HEK-293 cells were a gift from Dr. Mark Russell (University of Exeter). The cells originated from the ATCC cell culture bank and were grown according to standard protocols. They were incubated at 37°C in EMEM medium with 10% FBS and passaged every 2−3 days. We avoided the use of antibiotics. Metaphase Spread Protocol. Cells were grown to ∼70% confluence in T75 (Thermo Fisher Scientific, U.K.) flasks, yielding ∼1 × 10⁷ cells. Next, the metaphase-blocking drug KaryoMax colcemid solution (Thermo Fisher Scientific, U.K.) was added at a final concentration of 0.1 μg/mL and incubated with the cells for 4 h. The cells were harvested by incubating with trypsin and then centrifuged at 300g for 5 min to pellet the cells. After aspirating the trypsin, 5 mL of 0.068 M KCl was added in drops to resuspend the pellet, mixing gently with each drop. After leaving the solution at room temperature for 15 min, 0.5 mL of ice-cold fixative (three parts absolute methanol to one part glacial acetic acid) was added. After a further centrifugation step and aspiration of the supernatant, the cells were resuspended in 5 mL of the fixative and stored at −20°C. We used two different substrates to drop the cells onto; we used a silicon nitride (SiN) window (Silson Ltd., U.K.) for cell nuclei intended to be analyzed by XRF, while we used glass coverslip slides for those analyzed by dSTORM. Silicon nitride is a good substrate for XRF as it contains no trace element impurities. However, SiN windows are expensive and fragile, so processing cells on them with multiple washing and viewing steps is technically difficult as they have a tendency to break. To obtain good quality metaphase spreads, cells were "dropped" from a pipet (each drop contains about 20 μL) from a height of about 2 cm onto a substrate (SiN window or coverslip slide) placed at a 45-degree angle. We did this over a water bath to maintain a high humidity environment to prevent the chromosomes from drying too quickly. Fluorescent in Situ Hybridization Protocol. We processed the metaphase spreads we had prepared on either coverslip slides or SiN windows using the DAKO (DAKO Ltd., Denmark) FISH telomere kit (code K5236). This kit provides all key reagents needed for performing fluorescence in situ hybridization for detection of telomere sequences by fluorescence microscopy. The metaphase spreads are first fixed in 4% paraformaldehyde. After 2 washes with Tris Buffered Saline (TBS), pH 7.5, a pretreatment solution of proteolytic enzyme (proteinase K) is left on the samples for 10 min to remove all the proteins. After further washes in TBS and dehydration in ethanol, 10 μL of the GNP-PNA-A647 probe (stock solution diluted 1:100) was added to the sample. The samples were then placed in an oven at 80°C for 5 min under a coverslip, to denature the DNA in the presence of the probe. Next the samples were placed in the dark at room temperature (RT) for 30 min to allow hybridization to take place.
The hybridization was followed by a brief rinse with a Rinse Solution (proprietary solution, DAKO kit), and a posthybridization wash with a Wash Solution (0.1% triton-X in TBS) at 65°C for 5 min. Following this, the samples were dehydrated in ethanol and left to airdry. Assessment of Telomere Length by qPCR. DNA was extracted from HEK-293 cells using the PureLink Genomic DNA Mini Kit (Invitrogen/Thermo Fisher, MA) according to the manufacturer's instructions. DNA quality and concentration was checked by Nanodrop spectrophotometry (NanoDrop/Thermo Fisher, MA). Telomere length was determined using a modified qPCR protocol. 17 PCR reactions contained 1 μL of EvaGreen (Solis Biodyne, Tartu, Estonia), 2 μM each primer, and 25 ng of DNA in a total volume of 5 μL. The quantitative real time polymerase chain reaction telomere assay was run on the StepOne Plus, cycling conditions were a single cycle of 95°C for 15 min followed by 45 cycles of 95°C for 10 s, 60°C for 30 s, and 72°C for 1 min. A standard curve is established by dilution of known quantities of a synthesized 84 oligonucleotide containing only TTAGGG repeats. Using the standard curve method, cycle threshold values were plotted on the standard curve to estimate a concentration value for telomere DNA repeat sequences. The average telomere length was calculated as the ratio of telomere repeat copy number to a single gene (36B4) copy number. X-ray Fluorescence. The Diamond Light source beamline uses a pair of KirkPatrick-Baez focusing mirrors to deliver a tunable size beam to the sample, in our case it was approximately 2.5 × 2.5 μm 2 . A silicon drift detector was used at 45°geometry to collect the characteristic photons from the elements in the sample. The excitation energy was set at 12.5 keV. Super Resolution Microscope. The super resolution images were taken using a custom-made microscope. The methodology used to take images and process the data have been published previously. 18 The coverslips were fixed to a custom-made chamber such that the nuclei could be covered with an oxygen-scavenging solution (0.03 M MEA [mercaptoethylamine]) in glycerol buffered with 1× PBS (phosphate buffered saline), optimized for A647 photoswitching. Images were acquired on a commercial Nikon Ti-E inverted microscope with a Nikon 60×, 1.49 NA oil-immersion TIRF objective (Nikon), and an Andor Zyla 4.2 sCMOS camera (Andor Technology). Cell nuclei were found using the DAPI signal from stained chromosomes using 405 nm illumination from an LED light source. The A647 dye bound to the telomeres on the ends of the chromosomes was illuminated with a 645 nm LED, combined with a Cy5 filter cube (Semrock, 655/40). A region of interest (ROI) was imaged first with LED illumination and then in super resolution mode. For dSTORM, illumination was by a 642 nm laser (Omicron, LuxX 642-140) providing approximately 100 W/m 2 focal plane intensity with a spot size diameter of about 40 μm. Cells were illuminated at high intensity for ∼5 s to push a large proportion of the dye molecules into a dark state, whereupon frames were recorded at 50 ms intervals with custom software detecting single molecule events in real time. We collected approximately 20 000 events per ROI. Images were reconstructed using custom software, written in Python. 18 Briefly, each event was processed using a "jittered triangulation" algorithm, which essentially produces a 64 bit matrix (and thus an image), where the value of each event is weighted in comparison to the closeness of its neighbor. 
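The jittered triangulation renderer itself is described in ref 18 and is not reproduced here. Purely as an illustration of the idea that each detected event is weighted by how close its neighbors are, a much simplified, density-weighted rendering could be sketched in Python as follows; the pixel size and the simulated coordinates are arbitrary assumptions and this is not the published algorithm.

import numpy as np
from scipy.spatial import cKDTree

def density_weighted_render(x_nm, y_nm, pixel_nm=10.0):
    # Weight each localization by a local density estimate derived from its
    # nearest-neighbour distance, then accumulate the weights on a pixel grid.
    # This only mimics the spirit of the jittered triangulation approach.
    pts = np.column_stack([x_nm, y_nm])
    d, _ = cKDTree(pts).query(pts, k=2)          # k=2: the first hit is the point itself
    nn = np.clip(d[:, 1], 0.1 * pixel_nm, None)  # guard against zero distances
    weights = 1.0 / (np.pi * nn ** 2)
    xbins = np.arange(x_nm.min(), x_nm.max() + pixel_nm, pixel_nm)
    ybins = np.arange(y_nm.min(), y_nm.max() + pixel_nm, pixel_nm)
    img, _, _ = np.histogram2d(y_nm, x_nm, bins=[ybins, xbins], weights=weights)
    return img

# roughly 20 000 simulated events forming one ~180 nm cluster
rng = np.random.default_rng(0)
x = rng.normal(500.0, 45.0, 20000)
y = rng.normal(500.0, 45.0, 20000)
image = density_weighted_render(x, y)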
Image Analysis. Each super resolution image was saved as a TIF file and was further processed in MATLAB. The image analysis code and example images can be found on github.com/charliejeynes. Briefly, each image is binarized using an Otsu threshold. Then the binarized LED image is compared to the binarized super resolution image, so that only dots that register in the same place in both images (i.e., are truly telomeres and not background dots) are counted. Next, each telomere region is measured with a number of parameters including area, perimeter, and equivalent diameter. Throughout the paper we use the equivalent diameter as it normalizes the often-irregular shapes of the telomeres. The equivalent diameter is calculated as √(4 × area/π). Merged LED and super resolution images shown in Figure 3 were made from the DAPI (blue) channel for the chromosomes and the A647 (red) channel for the telomeres. Image analysis of the gold nanoparticle organic shells shown in Figure 1 followed a very similar image analysis pipeline to that described above for the super resolution images.
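As an illustration of this pipeline, a minimal Python equivalent of the MATLAB analysis described above (Otsu binarization, overlap test between the LED and super resolution masks, and the equivalent diameter of each surviving region) might look as follows. The function and variable names are ours, and both images are assumed to have been registered to the same pixel grid.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def telomere_equivalent_diameters(led_img, sr_img, pixel_nm=1.0):
    # Binarize both images with an Otsu threshold
    led_mask = led_img > threshold_otsu(led_img)
    sr_mask = sr_img > threshold_otsu(sr_img)
    diameters = []
    for region in regionprops(label(sr_mask)):
        rr, cc = region.coords[:, 0], region.coords[:, 1]
        # keep only regions that also register as a spot in the LED image
        if led_mask[rr, cc].any():
            # equivalent diameter of a circle with the same area: sqrt(4*area/pi)
            diameters.append(np.sqrt(4.0 * region.area / np.pi) * pixel_nm)
    return np.asarray(diameters)

# e.g. np.median(telomere_equivalent_diameters(led, sr, pixel_nm=10.0))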
Effectively Detecting Left Bundle Branch Block False Defects in Myocardial Perfusion Imaging (MPI) with a Convolutional Neural Network (CNN) . Left bundle branch block (LBBB) is a frequent source of false positive MPI reports, in patients evaluated for coronary artery disease. Purpose: In this work, we evaluated the ability of a CNN-based solution, using transfer learning, to produce an expert-like judgment in recognizing LBBB false defects. Methods: We collected retrospectively, MPI polar maps, of patients having small to large fixed anteroseptal perfusion defect. Images were divided into two groups. The LBBB group included patients where this defect was judged as false defect by two experts. The LAD group included patients where this defect was judged as a true defect by two experts. We used a transfer learning approach on a CNN (ResNet50V2) to classify the images into two groups. Results: After 60 iterations, the reached accuracy plateau was 0.98, and the loss was 0.19 (the validation accuracy and loss were 0.91 and 0.25, respectively). A first test set of 23 images was used (11 LBBB, and 12 LAD). The empiric ROC (Receiver operating characteristic) Area was estimated at 0.98. A second test set (18x2 images) was collected after the final results. The ROC area was estimated again at 0.98. Conclusion: Artificial intelligence, using CNN and transfer learning, could reproduce an expert-like judgment in differentiating between LBBB false defects, and LAD real defects. Introduction Left bundle branch block (LBBB) is a frequent source of false positive reports in myocardial perfusion imaging (MPI). This has been reported in previous studies that have evaluated the relation between myocardial perfusion and LBBB [1,2]. The method of interpretation used in MPI in patients having LBBB, may influence the sensitivity and specificity of the exam. In this paper by Higgins et al. [1], authors concluded that the use of certain features of the MPI scan can aid the clinician in differentiating true perfusion defects, distinguishing underlying ischemia from false defects. We try to evaluate the usefulness of artificial intelligence, to reproduce this method of interpretation, offering a perspective to develop a diagnostic aid tool. Some previous studies have evaluated the value of deep learning in the diagnosis of coronary artery disease (CAD) in MPI [3,4,5]. Betancur J et al. [3] concluded that deep learning improves automatic prediction of obstructive coronary artery disease from MPI, as compared to the current standard quantitative method. However, such studies did not include some clinical information when training the model, such as for example, the existence of a LBBB. Purpose In this work, we evaluated the ability of a CNN based solution, using transfer learning, to produce an expert-like judgment in differentiating LBBB false defect, from left anterior descending artery (LAD) real perfusion defect. The study was conducted considering a small dataset, because collecting a larger dataset needs a proof of utility. Study population The study covered two groups of MPI polar map images, collected retrospectively from our department, with small, to large fixed antéroseptal perfusion defect (small: 1 segment, moderate: 2 segments, large: 3 or more segments, based on the 17 segments model [6]).  The LBBB group included patients where the perfusion defect was judged as false defect by two experts (based on clinical assessment, and GATED-SPECT [1]). 
Expert judgment was reinforced by a follow-up of 3 years (no cardiovascular events). All patients in this group had an LBBB. • The LAD group included patients in whom the perfusion defect was judged as a true positive by two experts. Expert judgment was confirmed by angiography (>70% narrowing of the LAD artery); patients with more than one-vessel disease, or LBBB on ECG, were excluded from this group. The baseline characteristics of the study population are presented in Table 1. Image Acquisition A conventional single-head gamma camera was used for all patients, with the Tc-99m sestamibi radiotracer. Patients had various stress protocols, such as treadmill, pharmacological, or a mixed protocol. Stress and rest exams were performed either on the same day or on two different days. Images Dataset The dataset was composed of 107 perfusion polar maps (42 images in each class for training, with 29% for validation). Stress and rest images were used. Deep learning Several CNNs were tested, and ResNet50V2 was chosen for achieving the best results. Only the classification part of the network was re-trained, following a transfer learning approach (training a fully connected layer with two neurons). Results After 60 iterations, the reached accuracy plateau was 0.98, and the loss was 0.19 (the validation accuracy and loss were 0.91 and 0.25, respectively). A first test set of 23 images was used (11 LBBB and 12 LAD). The empiric receiver operating characteristic (ROC) area was estimated at 0.98, with 95.7% accuracy. A second test set (18 images in each group) was collected after the final results (but without the 3-year follow-up for the LBBB group). The ROC area for the model was estimated again at 0.98. An illustrative example is a 70-year-old man, a smoker, with rest angina and an LBBB on ECG (Figure 1). He had a positive stress test, with a reduced LVEF of 35%. Images of this patient were used neither in training nor in validation. The stress image was predicted by our model as an LAD real perfusion defect. The rest image was predicted as an LBBB false defect, which means that, according to our model, this patient has ischemia in the LAD territory (Figure 1). This patient had an angiography, confirming a severe narrowing of the proximal LAD. He had a revascularization, resulting in an improvement of his LVEF (from 35% to 45%). This example illustrates the capacity of the model to differentiate a real from a false defect in the same patient, who has CAD of the LAD artery along with LBBB. Study population It is worth clarifying that the two groups used in our study are not of the same cardiovascular risk level, since we are looking for false defects in the LBBB group and real defects in the LAD group. We believe that such a contrast is necessary to train the model on distinct features from each group. Limitations Although these results are encouraging, this study is still a retrospective one, performed on a small number of patients from a single department. It should also be noted that our model was trained on a specific color map used in our department, so evaluating the model on patients from other departments requires converting their images to this specific color map. The aim of this study was to evaluate the ability of deep learning to reproduce an expert-like judgment for this problem. The next step could be a multicentric study (different gamma cameras), with coronary angiography as the ground truth.
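For orientation, the transfer learning setup described in the Deep learning section (a pretrained ResNet50V2 backbone, with only a new two-neuron fully connected classification head being trained) could be sketched in Keras roughly as follows. The input size, pooling layer, optimizer, loss, and training schedule are assumptions made for illustration; they are not specified in this study.

import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained convolutional backbone; its weights stay frozen (transfer learning)
base = tf.keras.applications.ResNet50V2(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3))
base.trainable = False

# New classification head: a single fully connected layer with two neurons (LBBB vs LAD)
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds and val_ds would be datasets of polar-map images with 0/1 labels, e.g.
# model.fit(train_ds, validation_data=val_ds, epochs=60)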
Conclusion Artificial intelligence, using a CNN and transfer learning (even on a very small training dataset), could reproduce an expert-like judgment in differentiating between LBBB false defects and LAD real perfusion defects. These results are a motivation for a multicenter prospective study to develop a diagnostic aid tool for clinicians, offering an expert-like reading. Such a tool could probably reduce the number of false positive MPI reports and, in turn, the number of unnecessary invasive angiographies.
Structural Differences in Gray Matter between Glider Pilots and Non-Pilots. A Voxel-Based Morphometry Study Glider flying is a unique skill that requires pilots to control an aircraft at high speeds in three dimensions and amidst frequent full-body rotations. In the present study, we investigated the neural correlates of flying a glider using voxel-based morphometry. The comparison between gray matter densities of 15 glider pilots and a control group of 15 non-pilots exhibited significant gray matter density increases in left ventral premotor cortex, anterior cingulate cortex, and the supplementary eye field. We posit that the identified regions might be associated with cognitive and motor processes related to flying, such as joystick control, visuo-vestibular interaction, and oculomotor control. INTRODUCTION In order to keep up with the demands of a changing environment, our brains adapt quickly and efficiently. This is best demonstrated by gray and white matter structure changes in brain regions associated with practicing specific motor or cognitive skills. Such findings have been reported by many cross-sectional studies done over the past decade comparing trained experts to non-experts. Now termed as experience-dependent structural plasticity, the process is thought to be active throughout our lives (1,2). For instance, various voxel-based morphometry (VBM) studies comparing brains of musicians to non-musicians found increased gray matter density (GMD) in several brain regions in musicians, including the cerebellum, auditory, and motor cortices (3,4). Skilled golfers were reported to have larger GMD in premotor and parietal areas (5). Long-term practice might also result in a decrease of gray matter as was the case with ballet dancers where authors reported decreased gray matter volumes in the left premotor cortex, supplementary motor area, putamen, and superior frontal gyrus (6). Structural changes can also be observed in purely cognitive skills, e.g., mathematicians were shown to have larger inferior frontal and bilateral inferior parietal lobules (7). Bilinguals had larger GMD in the inferior parietal cortex compared with monolinguals (8). Flying a glider is a unique skill as human beings are not naturally suited for operation in a non-terrestrial environment. Very little is known about how the brain of a glider pilot adapts to the needs of being in the air, which are very different from other land-based motor skills. Glider flying involves operation in three dimensions, at considerably variable velocities, altitudes, and g-forces. To avoid motion sickness, pilots must habituate to the unusual visual-vestibular interaction resulting from full-body rotation within a thermal column. Glider pilots have to simultaneously integrate multiple streams of sensory information from visual, vestibular, and kinesthetic systems to form a mental construct of their position and orientation to control the glider. Specifically, pilots use a joystick with one hand to control the roll and pitch of the glider, foot pedals to control the yaw, and a dive brake with the other hand to increase the drag during landing. Co-ordination of all four degrees of freedom is required to be able to fly and land a glider properly. Apart from precise sensorimotor control, flying demands high levels of cognitive control, as pilots have to continuously monitor their performance based on multimodal sensory feedback mechanisms. 
Moreover, the process has to be predictive, has a low margin of error, and often pilots have to resolve conflicting information coming in from different senses. These factors make flying a very interesting skill to study from a neuroscience perspective, and investigating the neural correlates of flying has the potential to throw light on many brain processes involved in motor control, multisensory integration, and cognitive control. Recently, a few studies from our group reported functional activation patterns as subjects tried to fly an aircraft in a flight simulator inside an MRI scanner (9,10). Despite the significance of the findings, the utility of this method in revealing the underlying neural basis of flying as a sensorimotor skill is limited by the space and movement restrictions of fMRI. Looking at structural differences between the brains of pilots and non-pilots presents us with a viable alternative. Previous studies that have looked at physiological differences between pilots and non-pilots point toward vestibular habituation and adaptation of the vestibulo-ocular reflex (VOR) in pilots (11)(12)(13)(14). VOR is an eye reflex that moves the eye in a direction opposite to the head movement. VOR adaptation in pilots suggests that the mechanism is important for stabilizing images on the retina during head and full-body rotations as the glider rotates in a thermal column. Psychophysical tests have shown that fighter pilots have superior cognitive control as compared to non-pilots as measured by the Eriksen Flanker task (15). The same study also found differences in white matter radial diffusivity (derived from diffusion weighted imaging) between fighter pilots and non-pilots in the right dorsomedial frontal region and parietal lobe. These studies predict that compared to non-pilots, pilots may have changes in GMD in brain regions related to vestibular habituation, motor learning, sensorimotor integration, and cognitive control. Additionally, the results of the flight simulator fMRI studies (9,10) suggest that these brain regions include but are not limited to the ventral premotor cortices, inferior parietal lobule, supplementary motor area, and a few areas in the occipital and temporal lobes. In the present study, we wanted to investigate the structural correlates of flying a glider by analyzing gray matter differences between glider pilots and non-pilots using VBM. Unlike the previous study done to detect changes in white matter structure between fighter pilots and non-pilots (15), we looked at gray matter and did not use any masks to restrict our search. ETHICS STATEMENT All subjects gave written informed consent for experimental procedures approved by the ATR Human Subject Review Committee in accordance with the principles expressed in the Declaration of Helsinki. SUBJECTS Thirty right-handed subjects participated in this study. The handedness of the subjects was determined using a questionnaire based on the Edinburgh Handedness Inventory (16). Fifteen of the subjects were glider pilots recruited from nearby gliding clubs. The pilots were all well experienced, with a mean in-air flight experience of 34.2057 h (SE 5.35), where an average glider flight lasts 10-15 min. All pilots reported using their right hand to control the joystick. All subjects in the control non-pilot group had experience with driving or flying related video games. Age and sex were balanced between the two groups.
Pilots had a mean age of 21.3 years (SE = 0.36), while the control non-pilot group had a mean age of 22.4 years (SE = 0.49). Age effects were also controlled for by including age as a confounding regressor in the statistical model. There were 13 males and 2 females in both the pilot and control groups. All subjects were Japanese, came from similar educational and socioeconomic backgrounds, and had no history of neurological disorders, head trauma, or psychiatric disorders. IMAGE ACQUISITION High-resolution anatomical scans with T1 weighting (TE = 3.06 ms, TR = 2.25 s, matrix size = 256 × 256, voxel size = 1 mm × 1 mm × 1 mm) were acquired on a Siemens Trio 3 T scanner at the ATR Brain Activity Imaging Center. VOXEL-BASED MORPHOMETRY ANALYSIS Voxel-based morphometry is a method used to automatically analyze differences in local brain anatomy (17). T1 weighted structural MR images were used as an input to the VBM pipeline. All T1 images were processed using SPM8 (Wellcome Department of Cognitive Neurology, UCL), running under MATLAB 7.13 on a Linux platform (The Mathworks, Natick, MA, USA). VBM was performed using the VBM extension present in SPM8. The preprocessing involved the following steps: 1. After checking raw images for artifacts and setting the origin to the Anterior Commissure (AC), they were segmented into GM, WM, and CSF using unified segmentation (18). The general linear model as implemented in SPM8 was used for all statistical analyses. Differences in GMD between the two groups were analyzed using one-way ANCOVA. Data were corrected for global brain volume by dividing each voxel by the total intracranial volume, and age was added as a regressor of no interest. Voxelwise statistical parametric maps showing differences in GMD between pilot and non-pilot groups were generated by setting the voxel level threshold at t > 4.94, p < 0.05 [corrected for multiple comparisons using false discovery rate (FDR)]. The initial localization of brain regions that were found significant was done using the SPM Anatomy toolbox (20); localization was further refined based on anatomical parcellation literature, as mentioned in the discussion below. RESULTS Statistical analysis showed that compared to non-pilots, pilots had significantly higher GMD in the left ventral premotor area (lPMv) and right anterior cingulate cortex (rACC) (Table 1; Figures 1A,B), p < 0.05 FDR corrected for multiple comparisons. Lowering the threshold to p < 0.0001 (uncorrected) revealed another cluster in the right supplementary eye field (rSEF) within the supplementary motor area, where pilots had a higher GMD compared to the non-pilot group (Table 1; Figure 1C). No regions were found to have significantly lower GMD in pilots (uncorrected p < 0.0001). Individual GMD values within the pilot group extracted from peak voxels of the two significant clusters showed no significant correlation (p > 0.05) with the number of hours of in-air flight experience. DISCUSSION To the best of our knowledge, our study is the first to demonstrate structural differences in the gray matter of glider pilots. We show that pilots have increased GMD in regions that can all be grouped under the premotor areas of the frontal lobe, regions that influence various kinds of motor output through projections to the primary cortex and spinal cord (21).
Because of the complexity of the skill and the paucity of previous work on the neuroanatomical correlates of flying, it is difficult to say precisely what role these brain regions play in this particular skill. However, based on a literature review, plausible interpretations for their involvement are discussed below; the interpretations are speculative but informative. VENTRAL PREMOTOR CORTEX As per a recent parcellation of the lateral premotor cortex, our lPMv blob lies in the cluster corresponding to area F5 in the macaque (22). In the literature, this area has been repeatedly shown to be involved in grasping and manipulation of objects, as well as conditional motor learning (23,24). Learning-dependent activity has been shown to occur in this region as subjects acquire new visuomotor associations to manipulate a joystick (25). In the aforementioned VBM study, golfers (who have to learn precise visuomotor control of golf clubs) (5) were found to have higher GMD in this same region. Functional activation patterns were also observed in this region in the flight simulator studies and are thought to underlie visuomotor processes encompassing the mirror neuron system involved with movement imitation and imitation learning (9,10). All these sources of evidence point toward the involvement of the ventral premotor cortex in the acquisition of new visuomotor associations as a pilot learns to control the glider using a joystick. The left lateralization of this cluster can be explained by the fact that all subjects, being right handed, were used to manipulating the joystick with their right hand. ANTERIOR CINGULATE CORTEX The rACC cluster is located in the anterior rostral cingulate zone (RCZa) (21). According to most studies, this region of the cortex is said to be involved in conflict monitoring and motor-related cognitive control (26,27). Often the different senses involved in flying send conflicting information to the brain; for instance, when a glider is soaring, the visual system might give an impression of stillness, while the vestibular system senses self-motion. Thus, conflict monitoring and decision making under conflict is an important aspect of flying. Structural reorganization of rACC as the skill develops is in accordance with the accepted function of this region. A more relevant involvement of ACC in flying comes from a study that reported increased activation of ACC with repeated vestibular stimulation, pointing toward the involvement of this region in adaptation of the VOR (28). As mentioned previously, VOR adaptation has been reported in pilots and can even be used to differentiate pilots from non-pilots (11,14). Several other studies have reported significant activations of ACC in vestibular stimulation and visual-vestibular interaction experiments (29). This interpretation is strengthened by the fact that the flight simulator studies, which had no vestibular component, reported no functional activation in this region. SUPPLEMENTARY EYE FIELD The cluster found in the supplementary motor area can be localized to a specialized region called the supplementary eye field (SEF) (30). Across human and monkey studies, this region has been reported to be involved in various aspects of oculomotor control, such as learning oculomotor transformations, smooth pursuit, and cognitive control of the oculomotor system, like performance monitoring and prediction (31,32).
Amidst all the head and full-body rotation involved in flying, pilots require a fine-tuned oculomotor system to control their eye movements so that the visual image is stable on the retina. We believe that SEF is one of the areas that fulfill this role. SEF is also reported to be involved in suppression of nystagmus, which in turn is related to vestibular habituation in pilots (12,13). Further support to this interpretation comes from the fact that this area was also found active in previous flight simulation fMRI studies (9,10). Thus, the increase in GMD in the SEF is probably involved with the abovementioned oculomotor functions crucial to flying. Evidently, the brain regions found significant in the present study could be responsible for physiological and perceptual processes involved in flying, such as motor learning, vestibular habituation, and cognitive control. The lack of correlation between in-air flight experience and GMD of the brain structures found significant may have several reasons. The lack of a significant correlation may be explained by the fact that habituation is a fast process and by the time a pilot is good enough to fly a real glider on his own, his eyes and vestibular senses are already well habituated. An additional explanation may be that in-air flight experience in our study is not a sensitive measure of differences in individual skill. It may be the case that our sample size is not large enough to capture such small differences in skill-related experience that is thought to be reflected by greater GMD in specific cortical regions. It should be pointed out that the aforementioned study, which looked at trained fighter pilots also did not find any correlations between flying hours and white matter changes (15). To the best of our knowledge, the neural correlates of vestibular habituation are not very well known; accordingly one of the key insights of this study is the possible involvement of ACC and SEF in the process of vestibular habituation. CONCLUSION The results of our study show that glider pilots have increased GMD in ventral premotor cortex, anterior cingulate cortex, and supplementary eye field, which are associated with sensorimotor learning, visual-vestibular interaction, and oculomotor control, respectively. Further studies are needed to evaluate the degree to which performance of flight-related tasks can be predicted from GMD in these regions and the longitudinal pattern of the changes.
Acridine Based N-Acylhydrazone Derivatives as Potential Anticancer Agents: Synthesis, Characterization and ctDNA/HSA Spectroscopic Binding Properties A series of novel acridine N-acylhydrazone derivatives have been synthesized as potential topoisomerase I/II inhibitors, and their binding (calf thymus DNA—ctDNA and human serum albumin—HSA) and biological activities as potential anticancer agents on proliferation of A549 and CCD-18Co have been evaluated. The acridine-DNA complex 3b (-F) displayed the highest Kb value (Kb = 3.18 × 103 M−1). The HSA-derivatives interactions were studied by fluorescence quenching spectra. This method was used for the calculation of characteristic binding parameters. In the presence of warfarin, the binding constant values were found to decrease (KSV = 2.26 M−1, Kb = 2.54 M−1), suggesting that derivative 3a could bind to HSA at Sudlow site I. The effect of tested derivatives on metabolic activity of A549 cells evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide or MTT assay decreased as follows 3b(-F) > 3a(-H) > 3c(-Cl) > 3d(-Br). The derivatives 3c and 3d in vitro act as potential dual inhibitors of hTopo I and II with a partial effect on the metabolic activity of cancer cells A594. The acridine-benzohydrazides 3a and 3c reduced the clonogenic ability of A549 cells by 72% or 74%, respectively. The general results of the study suggest that the novel compounds show potential for future development as anticancer agents. Introduction Heterocyclic compounds, as the most important organic compounds, are frequently present in molecules of interest in medicinal chemistry [1][2][3][4][5]. Heterocycles demonstrate pharmacological activity via several mechanisms. Depending on the type of heteroatom present in the molecule, they may show various properties. Nitrogen heterocycles are among the most significant structural components of pharmaceuticals. They exhibit diverse biological activities and nitrogen heterocycles have always been attractive targets to synthetic organic chemists [6,7]. The prevalence of nitrogen heterocycles in biologically active compounds can be attributed to their stability and operational efficiency in human body and the fact of that the nitrogen atoms are readily bonded with DNA through hydrogen bonding [8]. In 2020, DiFranco and coworkers [9] demonstrated that exposure to a neo-synthetic bis(indolyl)thiazole alkaloid analog, nortopsentin 234, leads to an initial reduction of proliferative and clonogenic potential of colorectal cancer cells. Hamdy et al. [10] published the anti-apoptotic Bcl-2-inhibitory activity of synthesized 3-(6-substituted phenyl-[1,2,4]-triazolo [3,4-b]-[1, 3,4]-thiadiazol-3-yl)-1H-indoles. Shao et al. [11] synthesized and tested aminoindazole derivatives as new irreversible inhibitors of wild-type and gatekeeper mutant FGFR 4 . Among the tested compounds, one aminoindazole exhibited excellent potency in both the biochemical and cellular assays, as well as modest in vivo antitumor efficacy. It is also worth mentioning the results obtained by Carbone et al. [12]. They synthesized a new series of thiazole derivatives and evaluated their ability to inhibit biofilm formation against the Gram-positive bacterial reference strain Staphylococcus aureus and Gram-negative strain Pseudomonas aeruginosa. 
The results showed that the new compounds affected the biofilm formation without any interference on microbial growth, and thus can be considered as promising lead compounds for the development of a new class of anti-virulence agents. Acridine, a biologically active nitrogen-containing heteroaromatic ring, can be normally found in natural molecules. Todays, acridines are used as building blocks for the syntheses of heterocyclic systems, which have a strong influence on biological, pharmaceutical, and material sciences [13]. The biological activity of acridine derivatives is mostly due to the ability of the acridine moiety to intercalate between base pairs of double-stranded DNA through π-π interactions and interactions with topoisomerase I/II and telomerase. These compounds play an important role in the treatment of a variety of diseases, and they have been used clinically in past decades as antiviral, anticancer, anti-prion, antiprotozoal, anti-inflammatory, antineoplastic, and analgesic compounds [14][15][16][17][18][19][20]. In addition to several side effects, many drugs used in the past have increased resistance to them, resulting in a low therapeutic effect. These factors encouraged chemists to structurally modify acridine and produce different derivatives [21]. The structural modification of the acridine ring is the strategy of choice to improve the physicochemical and pharmacological properties. N-Acylhydrazones, containing the -CO-NH-N= unit, have been the focus of interest for a long time due to their interesting properties, and they have found applications in medicine [22], agriculture [23] and materials engineering [24]. Recently, many compounds containing this moiety have been reported, which demonstrate that the introduction of this pharmacophore may result in high potential activities such as antivirus [25], antibacterials [26], antitumor [27], antileishmanial agents [28], etc. Acylhydrazone derivatives have also been widely used as ligands to prepare various complexes and chemosensors, which are always in the field of focus for researchers in materials due to their reversible acylhydrazone bond [27]. The structure-activity relationship (SAR) showed that the electron-withdrawing groups (-Cl, -NO 2 , -F, and -Br) were favoured in both DNA binding and anticancer activity, with the electron-donating groups (-OH and -OCH 3 ) showing only moderate activity [29]. Our approach was to couple the acridin-4-yl and N-acylhydrazone moiety to obtain a new class of compounds, (acridin-4-yl)benzohydrazides. These compounds are not only a favorable basis to study structural effects (configurational and conformational isomerism) using nuclear magnetic resonance (NMR) spectral parameters, but also to study their interaction with calf thymus DNA and human serum albumin, too. Our study revealed a number of useful regularities in the 1 H, 13 C, and 15 N NMR chemical shifts and coupling constants which depend on the conformation and configuration of N-acylhydrazones. Chemistry The novel compounds were synthesized using a procedure outlined in Schemes 1 and 2. The starting compound, acridin-4-carbaldehyde (2), was prepared according to the methods described in earlier literature [30,31]. Inspired by previously reported synthesis of 4-(bromomethyl)acridine [30], 4-(bromomethyl)acridine was reacted with the sodium salt of 2-nitropropane in dimethylsulfoxide at room temperature to afford the expected product acridine-4-carbaldehyde (2) in moderate yields. 
For the synthesis of benzohydrazides 1, commercially available aryl aldehydes were converted to their methyl esters, which were reacted with excess hydrazine monohydrate (60%) under reflux to give the corresponding benzohydrazides 1. Our approach to access substituted benzohydrazides 1a-e is complementary to those already reported in the literature [32]. Aldehyde 2 was next allowed to react with a series of benzohydrazides 1 in ethanol providing compounds 3a-e (Scheme 1, Table S1) in good yields. (Acridine-4-yl)benzohydrazides 7b-d were synthesized via a three-step reaction (Scheme 2). The derivatives of series 7 were prepared from aldehyde 2. By using conditions close to those of lit. [33] for the oxidation of aldehyde 2 (2.90 mmol of aldehyde 2, 4.63 mmol of iodine in 8 mL methanol, 8.11 mmol of KOH in 8 mL of methanol), but with an extended reaction time (2 h at 0 • C and 4 h at room temperature (rt)) in order to allow the reaction to go to completion, aldehyde 2 was oxidized with alkaline iodine to directly lead to methyl acridine-4carboxylate (4) isolated in 73-91% yield. Aridine-4-carboxylate (4) was treated by hydrazine hydrate (22.25 mmol) at the reflux temperature of ethanol until the disappearance of the starting material. The expected acridine-4-carbohydrazide (5) was isolated in 75% yield [34]. To synthesize hydrazides 7b-d, we simply treated 5 with aryl aldehydes 6b-d in ethanol as before and obtained the expected [(acridin-4-yl)methylidene]benzohydrazides 7b-d in 80-90% yields (Scheme 2, Table S1). The structural characterization of the synthesized compounds was performed using a combination of 1D and 2D NMR techniques. Hydrazones are suitable compounds for examining the stereospecificity of NMR parameters. Their configuration and conformation can be determined by NMR spectroscopy, as evidenced by a number of published scientific articles [35]. It is well known that hydrazones can exist as geometrical isomers due to the C=N double bond, but C(O)-N and N-N bonds also allow them to exist as conformers [36]. In the 1 H, 13 C, and 15 N NMR spectra for derivatives 3a-e measured in DMSO-d 6 , only a single set of signals was present (Tables S2-S4 and NMR spectra in Supplementary Materials), while in the 1 H, 13 C and 15 N NMR spectra for derivatives 7b-d measured in acetone-d 6 , the signals were duplicated (Tables S5-S7 and NMR spectra in Supplementary Materials). The geometric configuration of the C4=N3 double bond was determined based on the stereospecificity of the heteronuclear one bond spin-spin coupling constant 1 J CH coupling constants with respect to the orientation of the nitrogen lone pair, the 1 J CH coupling constant with anti-orientation of the relevant C-H bond to the nitrogen lone pair being about 160-170 Hz [37]. Heteronuclear one bond spin-spin coupling constant 1 J C4H4 (167.4-168. 6 Hz for 3a-e and 1 J C4H4 = 162. 6 Hz for compounds 7b-d, Figure 1, Table S8) indicated that the C4=N3 double bond existed in the E-configuration in derivatives 3a-e and 7b-d [21]. The determination of the Z C(O)-N2 conformation was performed based on the heteronuclear two bond spin-spin coupling constants ( 2 J C1H2 = 10.8 Hz for 3a and 11.4 Hz for 3c) and the NOESY enhancements from 1D NOESY spectra measured in DMSO-d 6 for derivative 3a (see Supplementary Materials) and 2D NOESY spectra measured in acetone-d 6 for derivative 7c. The duplicate signals in the NMR spectra of (acridine-4-yl)benzohydrazides 7b-d can be attributed to the existence of E N-N /Z N-N conformers. 
The hydrogen bond N10 ···H2 produce a low-field shift of the H2 lines to 15.00 ppm and an up-field shift of the N10 (from −95.9 to −96.3 ppm; see Supplementary Materials). This downfield shift is a result of a decrease in the electron density around the hydrogen nucleus and the deshielding effect from the electronic currents of the acceptor atom. While this deshielding effect is experienced by the donor nucleus, the chemical shift of the acceptor nucleus moves to a lower frequency due to an overall increase in electronic shielding. The redistribution of electron densities upon the formation of hydrogen bonds gives rise to observable changes in the scalar couplings between nuclei associated with hydrogen bonds [38]. Hydrogen bond N10 ···H2 formation also resulted in a decrease in the 1 J N2H2 coupling constant and a corresponding decrease in the strength of the N2-H2 bond. Similarly, a decrease in the 1 J C4H4 constant of derivative 7 in comparison with compound 3 is associated with the presence of the hydrogen bond C(O)···H4, which also stabilizes the Z N-N form of derivative 7 (Figure 1, Table S8 and NMR spectra in Supplementary Materials). The second form of derivative 7 is the E N-N conformer. In addition to the results of the NMR spectra studies, the structures of both conformers in derivatives 7 were corroborated by the NOESY data. A NOESY cross peak was detected between acridine proton H-3 and proton H-4 in the Z N-N conformer, while NOESY cross-peaks were also recorded between protons H-2 and H-4 and protons H-2/H-4 and H-5 . An additional argument for the existence of E N-N /Z N-N conformers are the presence of chemical shifts which are almost identical for both conformers. Interestingly, no significant preference was observed for one of the Z N-N /E N-N conformers in the case of derivative 7c (Z N-N /E N-N , 10:12), but a small preference for the Z N-N conformer was observed for derivatives 7b and 7d (Z N-N /E N-N , 10:7 (7b), 10:5 (7d), Figure 1). The infrared spectra (IR) of selected compounds (3a-d and 7b-d) and the assignment of characteristic absorption bands with corresponding wavenumbers (in cm −1 ) are listed in Table S9 (IR spectra in Supplementary Materials). The presence of acridine and phenyl moieties in compounds 3a-d and 7b-d are evident from several stretching vibrations of aromatic C-H (ν(CH) ar ) and C=C (ν(C=C) ar ) located in the range 3089-3010 cm −1 and 1603-1506 cm −1 , respectively. The presence of these components is also confirmed by the scissoring (γ(CCH) ar ) and out-of-plane (δ(CCH) ar ) vibrations of CCH which occur in the IR spectra of the compounds at about 750 cm −1 and 1020 cm −1 . The successful preparation of the hydrazides through a condensation reaction of carbohydrazide and aldehyde derivatives was confirmed by the presence of a stretching vibration of the secondary amine in the range of 3190-3369 cm −1 . The wide range of wavelengths observed in these results can be explained by the involvement of the amine group in the hydrogen bond system in solid state. The presence of a hydrazide bond in the case of compounds 3a-d and 7d was also confirmed by the presence of an azomethine vibration (ν(C=N)) at approximately 1620 cm −1 [39]. Moreover, the presence of the carbohydrazide group is also confirmed by a characteristic intense absorption band of about 1650 cm −1 in the IR spectra of all compounds and belongs to the stretching vibration of the carbonyl group (ν(C=O)). 
The IR spectroscopy results are in good agreement with the molecular structure of the synthesized compounds and the NMR spectroscopy measurements. Fluorescence Quenching Properties Human serum albumin (HSA) emission spectra were recorded in the absence and presence of different amounts of acridine derivatives (3a-3d) in the range of 285 to 550 nm upon excitation at 280 nm. The emission maximum of the fluorescence intensity of HSA was identified at 340 nm. The presence of derivatives 3a-3d caused a concentration-dependent quenching of HSA fluorescence with a moderate change in the emission maximum, which did not alter the shape of the peak (Figures 2 and S1). The fluorescence intensity of HSA in the presence of the acridine derivatives decreases in the order 3d < 3b < 3c < 3a, with decreases of 58%, 51%, 43% and 38%, respectively. Moreover, the reduction of the fluorescence emission of HSA by derivatives 3a-3d was accompanied by an apparent change in the position of the maximum wavelength of the fluorescence emission, with a blue shift of 6 nm (3d) ≈ (3b) < 7 nm (3c) < 8 nm (3a). The observation of a significant blue shift upon the addition of derivatives 3a-3d suggests an increased hydrophobicity of the region near the Trp residues in the presence of the derivatives [40]. These results indicate that the interaction between the acridine derivatives and HSA could lead to a change in the secondary structure of the protein, thereby causing changes in HSA in the environment around the Trp residues [41]. A major fluorescent residue of HSA is Trp (Trp 214), located in the binding domain IIA (site I), and the quenching of the fluorescence emission intensity of HSA upon increasing amounts of acridine derivatives suggests the presence of Trp residues of HSA at or near the binding site with the acridine derivatives [42]. The appearance of an isoactinic point at 430 nm might also indicate the existence of an equilibrium between bound and free drugs, with the presence of such an equilibrium possibly emphasizing the formation of the drug-protein complex [43]. Fluorescence quenching can be divided into the two processes of dynamic and static quenching. In dynamic quenching, the fluorophore and quencher come into contact during the excited state; increasing the temperature results in faster diffusion and a higher frequency of collisions, which in turn increases the quenching constant [44]. In comparison, static quenching is typical of complexes formed between the fluorophore and quencher in the ground state, in which case increasing the temperature weakens the stability of the complex [45]. The quenching of HSA fluorescence by the acridine derivatives was evaluated at different temperatures (25, 30 and 35 °C), as is shown in Figures S1-S3. Quantitative analysis was performed by using the Stern-Volmer Equation (1):
F0/F = 1 + KSV[Q] = 1 + Kq τ0[Q] (1)
where F0 and F represent the fluorescence intensity of HSA in the absence and presence of acridine derivatives, respectively. KSV is the Stern-Volmer dynamic quenching constant, which was determined from the plot of the relative fluorescence intensity F0/F vs. the concentration of the acridine derivative. Kq is the quenching rate constant of biomolecules, which is known to be around 2.0 × 10¹⁰ M⁻¹ s⁻¹, and τ0 is the average lifetime of the fluorophore in the absence of the quencher, with a typical value of around 10⁻⁸ s for biomolecules [46]. The calculated values of KSV and Kq at different temperatures are listed in Table 1. The quenching constants KSV of derivatives 3a-3d were of the order of 10⁵ M⁻¹.
The K SV values recorded for derivatives 3a and 3b decreased at increasing temperatures, while those of derivatives 3c and 3d increased at increasing temperatures. The increased K SV values at increasing temperature in the presence of derivative 3c may indicate that the binding forces are mainly hydrophobic (endothermic apolar interactions are strengthened at increasing temperatures) [45]. Additionally, the K q values obtained for HSA, of the order of 10 13 M −1 s −1 , are greater than the limiting diffusion rate constant of diffusional quenching for biopolymers (2.0 × 10 10 M −1 s −1 ). This suggests that the observed quenching of the HSA emission by derivatives 3a-3d is not initiated by a dynamic process, but instead by a static process with ground-state complex formation. In order to evaluate the magnitude of the interaction, other parameters such as the binding constant (K b ) and the number of binding sites (n) of the drug-HSA complex were calculated at three temperatures using Equation (2) [47]: log((F 0 − F)/F) = log K b + n log[Q] (2) where F 0 and F are the fluorescence intensities of the fluorophore (HSA) in the absence and presence of the quencher (acridine derivative), and [Q] is the quencher concentration. The values of K b and n were determined from the linear regression of a plot of log((F 0 − F)/F) against log[Q] (Figure S3). The values of K b and n at the three temperatures are summarized in Table 2. The results show that the values of n are almost equal to 1 at all temperatures, a finding which indicates the existence of a single binding site. Since the values of K b were on the order of 10 5 M −1 (3a-3c) or 10 4 M −1 (3d), the derivatives would be suitable for distribution in plasma in vivo. The interaction of the quencher derivatives 3a, 3b and 3d with HSA is accompanied by a decrease in the K b and n values at increasing temperatures, a trend which indicates the destabilization of the drug-HSA complex under these conditions. In addition, Table 2 also shows that the values of K b decreased at increasing temperatures, indicating that the interaction process also involves a static quenching mechanism [42]. In the case of acridine derivative 3c, the values of K b and n increased at increasing temperatures, indicating an endothermic binding process and an increase in the stability of the drug-HSA complex, which implies that the ability of this derivative to bind to HSA is enhanced at higher temperatures [40,42]. These findings suggest that derivative 3c could be delivered by HSA in vivo more effectively than derivatives 3a, 3b, and 3d. Based on the dependence of K b on temperature, it is possible to analyse the temperature-dependent thermodynamic parameters which can be considered responsible for the formation of the complex [42]. The main thermodynamic parameters linked to the binding of small molecules to biomacromolecules, the enthalpy change (∆H) and the entropy change (∆S), were calculated using the van't Hoff Equation (3): ln K b = −∆H/(RT) + ∆S/R (3) where K b is the binding constant, R is the gas constant (8.314 J mol −1 K −1 ), and T is the experimental temperature in Kelvin. The values of ∆H and ∆S were evaluated from the slope and intercept of the van't Hoff plot. The relationship between the values of ∆H and ∆S can provide additional information about the primary forces of the interaction between the small molecules and the macromolecules.
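The two regressions just described (Equations (2) and (3)) can be reproduced with a few lines of Python; the intensities and binding constants below are invented placeholders used only to show the fitting procedure, not the data of this study.

```python
import numpy as np

# --- Equation (2): binding constant K_b and number of binding sites n ---
# Hypothetical intensities (placeholders, not the measured data of this study)
Q = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-6          # quencher concentration [M]
F0 = 100.0
F = np.array([79.0, 65.0, 55.0, 48.0, 42.0])             # intensity with quencher (a.u.)

n, logKb = np.polyfit(np.log10(Q), np.log10((F0 - F) / F), 1)
print(f"n = {n:.2f}, K_b = {10 ** logKb:.2e} M^-1")

# --- Equation (3): van't Hoff analysis of the temperature dependence of K_b ---
R = 8.314                                                # gas constant [J mol^-1 K^-1]
T = np.array([298.15, 303.15, 308.15])                   # 25, 30, 35 degrees C in Kelvin
Kb_T = np.array([6.1e5, 5.0e5, 4.1e5])                   # hypothetical K_b values [M^-1]

slope, intercept = np.polyfit(1.0 / T, np.log(Kb_T), 1)  # ln K_b = -dH/(R*T) + dS/R
dH = -slope * R                                          # enthalpy change [J/mol]
dS = intercept * R                                       # entropy change [J/(mol K)]
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

The slope of the double-logarithmic fit estimates n and its intercept gives log K b , while the van't Hoff slope and intercept yield ∆H and ∆S, from which ∆G = ∆H − T∆S follows (Equation (4) below).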
According to this relationship, if ∆H > 0 and ∆S > 0, a hydrophobic interaction has occurred, while if ∆H < 0 and ∆S < 0, the main binding forces are hydrogen bonding and van der Waals forces [46]. The Gibbs free energy change (∆G) was calculated using the Gibbs-Helmholtz Equation (4): ∆G = ∆H − T∆S (4) The calculated values of the thermodynamic parameters ∆H, ∆S and ∆G are summarized in Table 2. The thermodynamic profile of the interaction of compounds 3a-3d with HSA was constructed from the calculated thermodynamic parameters. The negative values of ∆G suggest that the interaction between acridine derivatives 3a-3d and HSA is spontaneous, while the negative values of ∆S and ∆H indicate that derivatives 3a, 3b and 3d primarily bind to HSA through hydrogen bonding and van der Waals forces [43]. The interaction of compound 3c with HSA is characterized by positive values of ∆S and ∆H; the unfavourable positive value of ∆H for the spontaneous binding process is compensated for by the positive entropy change, which ensures the negative value of the Gibbs free energy. In addition, the binding process is entropically driven (endothermic), with hydrophobic interaction emerging as the main intermolecular force in the interaction of compound 3c with HSA [44,[48][49][50]. Effect of Compounds 3a-3d on HSA Conformation Synchronous emission spectroscopy is fluorescence spectroscopy performed at a constant wavelength interval, and the technique provides information on the effect of the binding of small molecules on the microenvironment in the vicinity of the tryptophan (Trp) and tyrosine (Tyr) chromophore residues of the protein [51]. In the synchronous fluorescence assay, the excitation and emission monochromators are scanned simultaneously at a constant wavelength interval; the characteristic spectra of Tyr and Trp residues are obtained at ∆λ = 15 or 60 nm, respectively [48,51]. A shift in the position of the fluorescence emission maximum corresponds to changes in the polarity around the chromophore molecule [52]. Specifically, a red shift indicates increased hydrophilicity, and a blue shift indicates increased hydrophobicity, around the fluorophores of serum albumins [48,51]. As shown in Figures 3 and S4, the fluorescence intensity of the synchronous emission spectra of HSA was found to decrease with increasing amounts of acridine derivatives 3a-3d, a result which further demonstrates the occurrence of fluorescence quenching in the binding process. In synchronous emission spectra, a reduction in fluorescence intensity without any shift implies that no disturbance has occurred in the microenvironment around the particular residue. The synchronous emission quenching of HSA due to the action of the quencher implies that the quencher is most probably located adjacent to the Trp and Tyr residues [49]. The rate of emission quenching of HSA at ∆λ = 15 nm or 60 nm was compared using plots of F/F 0 vs. [quencher] (Figures 3, S4 and S5). The results show that the changes in fluorescence quenching intensity were more pronounced around the Trp residues (∆λ = 60 nm) than around the Tyr residues (∆λ = 15 nm). This would indicate that Trp contributes more to the quenching of the intrinsic fluorescence of HSA at the excitation wavelength of 280 nm [50]. The evidently higher rate of reduction in fluorescence intensity around Trp further indicates that the binding site is site I in the IIA subdomain of HSA.
A useful method for studying drug-HSA interactions is three-dimensional fluorescence spectroscopy, which can offer information about structural changes to the polypeptide backbone and the microenvironment polarity around the Trp and Tyr residues [49][50][51][53][54][55][56][57][58][59]. The results of the 3D fluorescence measurements are shown in Figures 4 and S6. Peak a (λ ex = λ em ) is Rayleigh scattering, while peak b (2λ ex = λ em ) is second-order scattering. The change at peak 1 (280/339 nm) reflects a change in the polarity of the microenvironment around the Trp and Tyr residues, and peak 2 (230/336 nm) characterizes the polypeptide backbone structure [59]. The results show a significant blue shift for both peak 1 (8 nm) and peak 2 (10 nm) in the presence of compound 3a, and a moderate blue shift for peak 1 (1-4 nm) and peak 2 (4-8 nm) in the presence of compounds 3b-3d. These findings indicate that binding of derivatives 3a-3d causes a partial change in the conformation of HSA and leads to a decrease in the polarity surrounding the Trp and Tyr residues [58]. The fluorescence intensities of both peaks decreased after the addition of derivatives 3a-3d, with a more pronounced decrease observed in the fluorescence intensity of peak 2. The changes in the peaks decreased in the following relations: peak 1: 3a > 3c > 3d > 3b; and for peak 2: 3b > 3c > 3d > 3b. The changes observed in the 3D spectra correlate with the results of the earlier fluorescence quenching experiments. The decreased intensity of peaks 1 and 2 may be due to the increased exposure of some previously buried hydrophobic regions and may reflect the π-π* transition of the fluorophores and the π-π* transition of the C=O bond, respectively [59]. The results also suggest that some environmental and conformational changes to HSA may have occurred upon the addition of the acridine derivatives (Figures 4 and S6). Determining the Binding Site of Acridine Derivatives 3a-3d on the HSA Molecule Displacement experiments were performed using warfarin and ibuprofen as site marker fluorescence probes, as these agents have been proven to bind to HSA at Sudlow sites I and II, respectively [58]. The fluorescence quenching spectra of the HSA:acridine derivative complexes (1:1; 3a-3d) (Figure 5) in the presence of increasing concentrations of the specific site markers, warfarin or ibuprofen, were studied. Interestingly, the addition of warfarin led to a significant quenching of the intrinsic fluorescence maximum of the HSA:acridine complex with a red shift. The red shift can be explained by the increasing polarity of the region surrounding the Trp site [59]. The spectra show a new fluorescence peak maximum at around 375 nm. In the presence of ibuprofen, there was no change in the emission spectra of the HSA:acridine complexes (3a-3d). The change in the emission spectra of the HSA:acridine complex in the presence of warfarin and ibuprofen was analysed using Equation (5): I = (F 1 /F 2 ) × 100% (5) where I is the percentage of the initial fluorescence, and F 1 and F 2 are the fluorescence intensities of the HSA:acridine complex in the presence and absence of the site marker, respectively [60]. The results show that warfarin caused a significant decrease in the fluorescence intensity of the HSA-acridine complex, while ibuprofen caused only a minimal change in intensity. These results indicate that warfarin competes with the acridine derivatives for binding sites on HSA at Sudlow site I in the IIA subdomain of HSA. In addition to the spectral assays, reverse titration studies were also performed.
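Before turning to the reverse titration described next, the percentage of initial fluorescence defined in Equation (5) can be tabulated with a trivial script; the marker concentrations and intensities below are illustrative placeholders only.

```python
import numpy as np

# Hypothetical site-marker titration of a preformed HSA:acridine complex (placeholders)
marker_conc = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 20.0])   # site marker concentration [uM]
F2 = 100.0                                                  # complex intensity without marker (a.u.)
F1 = np.array([100.0, 92.0, 81.0, 70.0, 61.0, 50.0])        # intensity with marker (a.u.)

I_percent = F1 / F2 * 100.0                                 # Equation (5): % of initial fluorescence
for c, i in zip(marker_conc, I_percent):
    print(f"[marker] = {c:4.1f} uM -> I = {i:5.1f} %")
```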
The intensity of the fluorescence of the HSA:site marker complex was recorded at increasing concentrations of acridine derivative 3a, with the obtained results given in Figure S7. Interestingly, the titration of varying concentrations of 3a with both the HSA-ibuprofen and HSA-warfarin complexes showed a quenching of the fluorescence emission (Figure S8). Taken together, these results suggest that the binding between the site marker and HSA is affected by derivative 3a in both cases. The obtained data were analysed using Equations (1) and (2). The linear Stern-Volmer dependence and the double-logarithmic dependence of log((F 0 − F)/F) on log[Q] were used to obtain the Stern-Volmer quenching constant values (K SV ) and the apparent binding constant values (K b ). Previous studies have shown that if a drug has the potential to bind to the same site as the marker, the interaction of this drug with HSA shows an apparent reduction in the K b value [59]. The values of K SV and K b for the HSA-3a complex were found to be 2.67 × 10 5 M −1 and 6.10 × 10 5 M −1 . In the presence of ibuprofen, there was an increase in the experimental values: K SV = 3.28 × 10 5 M −1 and K b = 6.67 × 10 5 M −1 were calculated. A marked change in the quenching constant was also observed after the addition of ibuprofen, a result which could be partly due to an interaction between derivative 3a and ibuprofen. Similar changes and associated interactions between the marker and the studied ligand have been observed in other studies in which competitive experiments were conducted [58]. In the presence of warfarin, the values of the constants were found to have decreased: K SV = 2.26 × 10 5 M −1 and K b = 2.54 × 10 5 M −1 . Derivative 3a competes with warfarin on HSA, which suggests that derivative 3a could bind to HSA at Sudlow site I. According to the obtained results, it is possible to assume that derivative 3a binds to HSA at the same site as warfarin. DNA Binding Properties Changes in absorbance intensity (either hyperchromic or hypochromic) and a shift in the peak position (hypsochromicity or bathochromicity) are typical spectral properties closely related to the interaction of a drug with DNA [60,61]. Figure 6 shows the UV-Vis spectra of acridine derivatives 3a-3d in both the absence and presence of varying concentrations of ctDNA. In free form, derivatives 3a-3d displayed absorption bands at 300-500 nm with a maximum at around 400 nm. The absorption spectra of compounds 3a-3d in the presence of increasing concentrations of ctDNA show hypochromicity (16-51%) and a slight hypsochromic shift (∆λ = 1-9 nm, 3a-3d), findings which indicate that the acridine derivatives have interacted with DNA (Table 3). The hypochromic effect decreased in the following relation: 3b > 3a > 3c ≈ 3d, with derivatives 3b and 3a displaying the most notable hypsochromic shifts. Hypochromism typically occurs as a result of the contraction of ctDNA along the helix axis, and this conformational change is often a result of a surface binding complex developing between the small molecules and the DNA, either through external contact or through strong interaction between the small molecules and DNA [62,63]. Classical intercalating compounds are able to couple their π orbitals with the π orbitals of the DNA base pairs during intercalation into DNA, thereby decreasing the π → π* transition energies [64] and inducing hypochromism combined with a bathochromic shift [65].
In contrast, electrostatic interaction can also produce hyperchromism, an effect that indicates an increased probability of π-π* transitions in the extended resonance system. This process is usually accompanied by a hypsochromic shift as a response to the increasing electron density in the π-orbitals; it also stabilizes the orbitals and results in an increase in the energy gap between the π and π* orbitals [18,66]. On this basis, it is possible to state with confidence that the observed changes in the absorption spectra of acridine derivatives 3a-3d in the presence of ctDNA are a result of the electron-rich DNA donating electrons to the π-orbitals of the system. Therefore, our results suggest that acridine derivatives 3a-3d had all interacted with ctDNA. The binding constant K b of compounds 3a-3d was determined using the modified Benesi-Hildebrand Equation (6) [51]: A 0 /(A − A 0 ) = ε B /(ε B+DNA − ε B ) + ε B /[(ε B+DNA − ε B ) K b [DNA]] (6) where A 0 and A are the absorbances of the acridine derivatives in the absence and presence of ctDNA; ε B and ε B+DNA are the molar extinction coefficients of the acridine derivatives individually and in the bound complex, respectively; K b is the binding constant; and [DNA] is the concentration of ctDNA. The binding constant K b was estimated from the intercept-to-slope ratios of the A 0 /(A − A 0 ) vs. 1/[DNA] plots (inset graphs in Figure 6). The values of the binding constant for the interaction of the acridine derivatives with ctDNA increased in the following relation: 3d < 3c < 3a < 3b. The highest K value recorded within the series of (acridin-4-yl)benzohydrazide compounds was that of the fluoro-substituted derivative 3b (K b = 3.18 × 10 3 M −1 ), a result which is in accordance with other recent studies. The K b values obtained for acridine derivatives 3a-3d were in the range 1.01-3.18 × 10 3 M −1 , which is lower than the K b values of 10 4 to 10 6 M −1 previously reported for intercalation complexes, themselves conspicuously smaller than those of groove binders (10 5 -10 9 M −1 ) [67]. In addition, acridine derivatives 3a-3d displayed a hypochromic effect in combination with a hypsochromic shift in the presence of ctDNA, a feature which differs from classical intercalation, which causes a combination of hypochromic and bathochromic shifts. It is therefore possible to suggest that the studied derivatives bind to DNA through slight partial intercalation or groove binding via the acridine scaffold, combined with other external binding such as electrostatic interaction involving another part of the molecule, for example the benzohydrazone linker. The effect of the acridine-benzohydrazides (3a-3d) on the metabolic activity of the lung adenocarcinoma cell line (A549) and the normal colon fibroblast cell line (CCD-18Co) was evaluated using the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell assay. This assay is based on the metabolization of yellow MTT to purple formazan crystals by mitochondrial dehydrogenases, and therefore only metabolically active cells are capable of this reaction [68]. The CCD-18Co cell line was used as a control to assess the selectivity of the derivatives. Doxorubicin belongs to the anthracyclines, which still represent a chemotherapeutic regimen for the treatment of NSCLC (non-small cell lung carcinoma) [47,69], and acts as an antineoplastic agent mainly by interacting with DNA topoisomerase II and by creating DNA single- and double-strand breaks [47].
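Returning to the ctDNA titration analysis above, the intercept-to-slope estimate of K b from the Benesi-Hildebrand plot (Equation (6)) can be reproduced with a short Python script; the absorbances and ctDNA concentrations below are hypothetical placeholders, not the measured data of this study.

```python
import numpy as np

# Hypothetical titration of a fixed compound concentration with ctDNA (placeholders)
dna = np.array([50., 100., 200., 300., 450., 680.]) * 1e-6   # [DNA] in M
A0 = 0.50                                                     # absorbance of the free compound
A = np.array([0.48, 0.46, 0.43, 0.41, 0.39, 0.37])            # absorbance with ctDNA (hypochromic)

x = 1.0 / dna                                                 # Benesi-Hildebrand abscissa
y = A0 / (A - A0)                                             # Benesi-Hildebrand ordinate

slope, intercept = np.polyfit(x, y, 1)
K_b = intercept / slope                                       # binding constant from intercept/slope
print(f"K_b = {K_b:.2e} M^-1")
```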
Previous studies have described in detail the cytotoxic effects of doxorubicin in the human lung adenocarcinoma A549 cell line [29,52,[70][71][72]. Doxorubicin resembles our compounds in its biological action; therefore, it was employed as a positive control. As shown in Figure 7, the metabolic activity of A549 cells significantly decreased after 24 h (Figure 7A) and 48 h (Figure 7B) of treatment with acridine-benzohydrazides 3a-3d (p < 0.01, p < 0.001) at concentrations of 50 and 75 µM, respectively. Similarly, in the case of CCD-18Co fibroblasts, we observed a significant loss of metabolic activity after 48 h of treatment with acridine-benzohydrazides 3a-3c (p < 0.001) in the concentration range 25-75 µM (Figure 8). Doxorubicin significantly decreased the metabolic activity of A549 and CCD-18Co cells over the full concentration range of 5-75 µM (p < 0.001). Dimethyl sulfoxide (DMSO), used as the solvent control, did not show any significant toxicity against A549 cells; in CCD-18Co cells, however, we observed a considerable decrease in metabolic activity at concentrations of 0.25% (p < 0.01), 0.50% and 0.75% (p < 0.001). Based on the results of the MTT assay, the IC 50 values for all compounds were evaluated. As shown in Table 4, the IC 50 values for A549 cells after 24 h of treatment were >75 µM, except for derivative 3c (IC 50 = 73 µM). After 48 h of incubation, the IC 50 values were in the range 37-62 µM. The effect of the tested derivatives on the metabolic activity of A549 cells evaluated by the MTT assay (48 h) decreased as follows: 3b (-F) > 3a (-H) > 3c (-Cl) > 3d (-Br). The IC 50 values of the tested compounds were also evaluated for CCD-18Co cells. As shown in Table 4, the IC 50 values were in the range 8-17 µM for 3b-3d, while the IC 50 of derivative 3a was >75 µM. The effect of the tested derivatives on the metabolic activity of the normal fibroblasts evaluated by the MTT assay decreased as follows: 3a < 3d < 3c < 3b. The compounds with a halogen-substituted benzene ring (3b-3d) are thus more active against normal fibroblasts than the compound without a halogen substituent (3a). However, doxorubicin was by far the most efficient (IC 50 = 5 µM) in affecting the metabolic activity of both cell lines. These results indicate that, in CCD-18Co normal colon fibroblasts, 3a is more than 15 times less active than doxorubicin, while 3b-3d are approximately 3 and 2 times less active. For comparison, de Almeida et al. [73] synthesized 9-substituted acridine derivatives and tested their effect on metabolic activity (at concentrations of 0.25-250 mg/mL) against nine cancer cell lines. In that study, the non-halogenated compound decreased metabolic activity in each of the nine cell lines, whereas the chlorinated (chloro) derivative was active against only six cancer cell lines, with an effect on metabolic activity more than seven times lower than that of the non-halogenated compound. The brominated (bromo) compound decreased metabolic activity in only two cell lines. This pattern is similar to our observations for 3a-3d, where the chlorinated (3c) and brominated (3d) compounds decreased metabolic activity less than the compound without a halogen-substituted benzene ring (3a). The selectivity index (SI) demonstrates the differential activity of a compound, where a higher value represents a higher selectivity [74]. An SI value of less than 2 indicates general toxicity of the compound [75]. The SI values of compounds 3a-3c for the A549 cell line were determined and are presented in Table 4.
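As a side note before the selectivity index values are discussed, the IC 50 extrapolation from a sigmoidal dose-response fit (performed in this study with OriginPro) can be sketched in Python; the concentrations, activity values, and the resulting IC 50 and SI below are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical MTT data: metabolic activity as % of the untreated control (placeholders)
conc = np.array([1., 5., 10., 25., 50., 75., 100.])       # compound concentration [uM]
activity = np.array([98., 95., 90., 78., 55., 35., 22.])  # % of control

p0 = [100.0, 0.0, 40.0, 1.0]                              # initial guesses: top, bottom, IC50, Hill
params, _ = curve_fit(four_pl, conc, activity, p0=p0, maxfev=10000)
ic50_cancer = params[2]
print(f"Estimated IC50 (cancer line) = {ic50_cancer:.1f} uM")

# Selectivity index from two such fits (hypothetical IC50 for a normal cell line)
ic50_normal = 15.0                                        # uM, placeholder
print(f"SI = {ic50_normal / ic50_cancer:.2f}")
```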
The values of SI indicate that all compounds exhibit a low selectivity (less than 2), and thus a general strong effect on cancer cell metabolism, except for derivative 3a. The Effect of 3a and 3c on Viability, Cellularity, Clonogenic Survival, and Distribution of the Cell Cycle in the A549 Cancer Cell Line A cell viability assay showed that compounds 3a and 3c in concentration 50 µM after 48 h incubation significantly (p < 0.001) decreased viability of A549 cells by 34% and 14%, as compared with the control group ( Figure 9A). The 0.5% DMSO did not show a significant reduction in cell viability. The results from the cellularity assay showed that tested compounds induced a significant (p < 0.001) reduction of total cell number at both concentrations as compared with the control group ( Figure 9B). For compounds 3a and 3c in a concentration of 25 µM, the number of cells was reduced similarly by 48%, 50% and 54%, respectively. In a higher concentration of 50 µM, the effect of acridine-benzohydrazides 3a and 3c was stronger (approximately 94% and 91% of reduction, respectively) than the control group. DMSO 0.5% showed no significant decrease in cell number. The derivatives 3a and 3c at lower concentration (25 µM) act as cytostatic agents rather than cytotoxic, but with increasing concentration (50 µM), their effect is simultaneously cytostatic and cytotoxic. The MTT assay results provide information about the changes of metabolic activity linked to mitochondrial function, which does not always correlate with cell viability after treatment [18,72,74,75]. To assess drug efficacy and prevent treatment failure, it is important to observe whether cancer cells may proliferate during the interval between treatments. The clonogenic assay evaluates the efficacy of new drugs by observation of reduced repopulation of cancer cells and cell survival after treatment. This assay is important to select eligible anticancer drug candidates [76]. The clonogenic survival was analysed after treatment of A549 cells with the compounds 3a and 3c. DMSO (0.5%) was used as solvent control. A549 cells were treated with 3a and 3c in concentrations of 25 and 50 µM for 48 h. The cells were then counted, and 500 viable cells per well were seeded and cultivated for another seven days under standard conditions. Figure 10A illustrates a plate of clonogenicity, where 500 vital cells A549 were seeded after 48 h treatment with 3a and 3c in two concentrations (25 and 50 µM) and 0.5% DMSO as a control. As shown in Figure 10B, acridine-benzohydrazides 3a and 3c in a concentration of 50 µM reduced the clonogenic ability of A549 cells by 72% or 74%, respectively, as compared to the control. However, treatment by compounds 3a and 3c in a concentration of 25 µM did not significantly reduce the ability of A549 cells to form new colonies. The obtained results are in good correlation with the results from previous viability and cellularity assays. The difference in substituted benzene ring in acridine-benzohydrazone (3a,3c) plays a slight role in clonogenicity survival (cellularity and viability, too) of A549 cells. However, halogenated compound 3c (-Cl) is more efficient against CCD-18Co normal colon fibroblasts than non-halogenated compound 3a. The results from flow cytometry analysis of the cell cycle suggest that application of the compounds 3a and 3c primarily leads to accumulation of cells in the G2/M phase, followed by inhibition of A549 cells in G0/G1 phase, which subsequently result in inhibition of cell proliferation. 
G2/M phase is a critical phase before cell division [77]. Many widely used and potential cancer chemotherapeutic agents cause DNA damage by targeting DNA or enzymes that regulate DNA topology, such as topoisomerase I/II, resulting in DNA damage-induced G2/M arrest, which activates the apoptosis pathway (e.g., doxorubicin, amsacrine) [77][78][79][80]. However, not all potential anticancer drugs that arrest the cell cycle in the G2/M phase act through DNA damage; examples include STK295900, DNA-binding agents, and dual Topo I and Topo II catalytic inhibitors. Interestingly, STK295900, a catalytic inhibitor of topoisomerase, also acts as an antagonist of Topo poison-mediated DNA damage. Therefore, further study is needed to determine the mechanism underlying the 3a- and 3c-induced accumulation of A549 cells in the G2/M phase (Figure 11). Inhibition of Topoisomerase I and II Topoisomerase (Topo) inhibitors are molecules which (a) disrupt enzyme activity by forming a ternary complex (DNA-Topo-compound)-these compounds are termed topoisomerase poisons-or (b) inhibit the catalytic function of the enzyme-termed catalytic inhibitors-with both effects resulting in cell death (apoptosis) [73,[81][82][83][84]. We performed Topo-mediated DNA relaxation and decatenation assays to test whether the new class of acridine derivatives 3a-3d exerts its antiproliferative function by targeting Topo. Topo I enzymes convert supercoiled (SC) DNA to the relaxed (R) form and nicked open-circular (NOC) DNA. The NOC form is circular dsDNA with one nicked strand; Topo I nicks only one strand. The SC form of DNA migrates the fastest in the gel, NOC migrates the slowest, and the migration speed of the R form lies between these two. In an electrophoretogram with active Topo I enzymes, SC DNA is not observed, but the R and NOC DNA forms can be seen. In the case of Topo I inhibition, bands for the R and NOC DNA forms are not observed [85,86]. The electrophoretogram showing the effect of derivatives 3a-3d on topoisomerase I activity is presented in Figure 12A. Interestingly, in the presence of increasing concentrations of derivatives 3c and 3d, a decrease in the R form and an increase in the SC form of DNA were observed, suggesting that these derivatives might inhibit the relaxation activity of Topo I. Therefore, these compounds at higher concentrations may act as catalytic Topo inhibitors [85]. The Topo IIα enzyme catalyses the decatenation of catenated kDNA to closed circular decatenated kDNA. In specific cases, linear (L) and nicked open-circular (NC) forms of DNA can also be observed.
The corresponding results are shown in Figure 12. In previous studies, a direct correlation between the DNA binding constants of derivatives (for typically intercalating compounds) and their inhibitory effect on Topo and antiproliferative activity (decreased metabolic activity of cancer cells) was observed [83,85,86]. However, the DNA binding constant is not a reliable guarantee of Topo inhibition by DNA-binding derivatives. In a study of similar acridine derivatives with good binding constant values, only one compound was capable of inhibiting Topo I [18]. In our study, we observed a negative correlation between the values of the DNA binding constant and the ability to inhibit hTopo I/IIα. Derivatives 3a and 3b, with higher values of K, did not have the potential to inhibit Topo I/IIα, whereas derivatives 3c and 3d, with lower values of K, did. The derivatives 3c and 3d act in vitro as potential dual inhibitors of hTopo I and II with a partial effect on the metabolic activity of A549 cancer cells. No direct correlation was observed between the IC 50 values from the MTT assay (A549 cells) and the in vitro Topo inhibition assay. However, a positive correlation exists between the IC 50 values and the DNA binding constants (K b ). Materials and Physical Measurements All the reagents were purchased from local suppliers and used without purification. The progress of the reaction was monitored using thin-layer chromatography (TLC). Analytical TLC was performed on pre-coated aluminium sheets of silica gel 60 F254 (Merck, Darmstadt, Germany), and the compounds were visualized using UV light. The melting points were determined on a Boetius apparatus. A stock solution of HSA was prepared in a NaCl-Tris-HCl (100 mM NaCl, 10 mM Tris) buffer solution (pH = 7.4) in distilled water. Tris(hydroxymethyl)aminomethane (Tris), HSA and NaCl were purchased from Sigma (St. Louis, MO, USA). The concentration of HSA in the stock solution was determined spectrophotometrically from the absorption at 280 nm using a molar extinction coefficient ε 280 = 35,700 M −1 ·cm −1 . The solvents, chemicals and calf thymus DNA (ctDNA) used in this study were purchased from Sigma Aldrich and Lachema and used without further purification. Compounds 3a-3d were dissolved in DMSO to produce a stock solution of 30 mM, from which a working solution of 10 mM was prepared for further use. The solutions of the compounds were stored in dark conditions at −21 °C. The ctDNA was dissolved in a Tris-HCl-EDTA (pH = 8.3) (10 mM Tris-HCl (pH = 8.0); 1 mM EDTA) buffer by incubation at 4 °C with gentle mixing over 24 h to form a homogeneous solution. The final concentration of the stock ctDNA solution was measured spectrophotometrically from the UV-Vis absorbance at 260 nm using a molar extinction coefficient of 6600 M −1 cm −1 . The purity of the ctDNA solution was determined from the ratio of the absorbances at 260 nm and 280 nm. The obtained absorbance ratio of A 260 /A 280 = 1.84 indicates that the DNA was free from protein and of acceptable purity for experimental use. The solution was stored at 4 °C for future use. NMR Spectra Nuclear magnetic resonance data were collected on a Varian VNMRS 600 spectrometer operating at 599.87 MHz for 1 H, 150.84 MHz for 13 C, and 60.79 MHz for 15 N. Chemical shifts (δ in ppm) are referenced to the residual partially deuterated solvent signals: DMSO-d 6 at 39.5 ppm and acetone-d 6 at 29.8 ppm for 13 C, and DMSO-d 5 at 2.5 ppm and acetone-d 5 at 2.05 ppm for 1 H.
External nitromethane (0.0 ppm) was used for 15 N references. The 15 N chemical shifts were obtained from two dimensional 1 H, 15 N-HMBC experiments with gradient coherence selection, which were performed using a standard pulse sequence from the Varian pulse library. CH 3 NO 2 was used as an external reference for the 15 N chemical shifts. The 2D experiments gCOSY, zTOCSY, NOESY, gHSQC and gHMBC were run using the standard Varian software. All data were analysed using MestReNova 14.2.1-27684 (5 May 2021, Santiago de Compostela, Spain) software. IR Spectra The infrared spectra of the prepared compounds were recorded with an Avatar FT-IR 6700 (Fourier transform infrared spectroscopy) spectrometer at the wavenumber range of 4000-400 cm −1 , with 64 repetitions for each spectrum using the ATR (attenuated total reflectance) technique. Prior to the measurements, samples were pressed with a rotary press to ensure sufficient contact with the surface of the diamond holder. All obtained data were analysed using Omnic 8.2.0.387 (2010) software. HR Mass Spectroscopy The method used for high-resolution mass spectrometric identification of products is described in detail in the literature [87]. The following minor modifications were made to the published method: the samples were dissolved in chloroform (1 mg.mL −1 ) and diluted 1000-fold. An atmospheric solid analysis probe (ASAP) was dipped into the sample solution, placed into the ion source and analysed in full scan mode. The probe was kept at a constant temperature of 450 • C for 2 min. Mass accuracy of 1 ppm or less was achieved with the used instrumentation for all compounds. Synthesis of N -[(E)-Acridin-4-yl)methylidene]benzohydrazides 3a-e Benzohydrazide (1a-e, 0.241 mmol) was added to a stirred suspension of acridine-4-carbaldehyde (2, 50 mg, 0.24 mmol) in dry ethanol (2 mL). The reaction mixture was refluxed until the acridine-9-carbaldehyde solution (2, TLC: dichloromethane/ethyl acetate, 4:1, v/v) was fully consumed. The reaction mixture was cooled, and the precipitate was filtered off and washed with dry ethanol. The crude product was crystallized from ethanol to give benzohydrazide 3. Steady-state fluorescence (STF) emission spectra, synchronous fluorescence (SF) spectra, and 3D fluorescence spectra (3DF) were measured using a Varian Cary Eclipse spectrofluorimeter with a xenon flash lamp and a 1.0 cm quartz cuvette, with the slits set at 5 nm for the excitation observations and 10 nm for the emission spectra. The STF spectra were recorded at a range of 285-550 nm with a fixed excitation wavelength at 280 nm. The change in fluorescence intensity of HSA at a concentration of 4 µM was observed by titrating varying concentrations of acridine derivatives 3a-3d (0-6.2 µM) at three different temperatures (25, 30 and 35 • C) in 2 mL of 100 mM NaCl and a 10 mM Tris-HCl buffer (pH = 7.4). Synchronous fluorescence spectra for HSA (4 µM) were recorded at increasing concentrations of compounds 3a-3d in the same concentration range as that used in the STF studies. The spectra were recorded at a range of 200-400 nm by setting ∆λ = 60 nm and ∆λ = 15 nm for tryptophan and tyrosine residues, respectively, at room temperature (25 • C). The 3DF spectra of HSA were performed in the absence and presence of compounds 3a-3d using an excitation wavelength range of 200-350 nm and an emission wavelength range of 200-600 nm at room temperature (25 • C). 
The 3D spectra were recorded for 4 µM of HSA in a 2 mL buffer solution (100 mM NaCl and 10 mM Tris-HCl buffer pH = 7.4) and for the HSA:3a-3d complexes at a concentration ratio 1:1. The data from 3D measurements were processed into a 3D graphics plot using Origine 8.5 software, (2020) produced by OriginLab Corporation, Informer Technologies, Inc. (Los Angeles, CA, USA). Competitive Experiments Competitive experiments of HSA were performed using warfarin and ibuprofen as standard site marker ligands. The selected markers for the site I were warfarin, whereas site II was probed using ibuprofen. HSA-acridine derivative (3a-3d) complexes at a concentration ratio of 1:1 ([HSA] = 4 µM) were titrated into specific side markers (0-20 µM). The molar ratio of the HSA:marker complexes were 1:0.5, 1:1, 1:1.5, 1:2, 1:2.5, 1:3, 1:3.5, 1:4, 1:4.5 and 1:5. The reaction mixture of the HSA-acridine derivatives complexes with the side markers were preincubated for 15 min prior to spectral measurements. Additionally, a reverse titration competitive experiment was performed for derivative 3a. The site probe-HSA complex (1:1, [HSA] = 4 µM) was titrated with derivative 3a at a concentration gradient of 0.6-6 µM. In order to determine the binding site of compound 3a, the fluorescence quenching data were analysed using Stern-Volmer and modified Stern-Volmer equations to calculate the value of the Stern-Volmer constants (K SV ) and binding constants K b , and to determine the number of binding sites n. All spectra measurements were performed at 280 nm excitation wavelength and the slits were set at 5 nm for excitation and 10 nm for emission spectra at a range of 290-550 nm and at 25 • C. ctDNA Binding Experiments UV-Vis absorption spectrum of the drug-ctDNA complexes were measured on a Varian Cary 100 Bio UV-Vis Spectrophotometer. The UV-Vis spectra of free compounds 3a-3d and drug-ctDNA complexes were recorded at the wavelength range of 220 to 600 nm. The measurements were performed in a 1.0 cm quartz cuvette with 2 mL of a Tris-HCl (10 mM, pH = 7.4) buffer at room temperature. The titration experiment was carried out in the presence of a fixed concentration of compounds 3a-3d (25 µM) and was performed by titrating varying concentrations of ctDNA ranging from 0 to 680 µM. The solution was incubated for 5 min and then tested. Cell Culture and Treatment Human lung carcinoma cell line A549 and CCD-18Co colon fibroblasts were purchased from the American Type Culture Collection (ATCC, Rockville, MD, USA). The A549 cells were cultured in a complete RPMI-1640 medium (Sigma-Aldrich, St. Louis, MO, USA) and CCD-18Co cells were cultured in a minimum essential medium (MEM) (PAN-Biotech GmbH, Aidenbach, Germany) at 37 • C, 95% humidity and 5% CO 2 . The media were supplemented with 10% fetal bovine serum (FBS, Biosera, Nuaille, France) and antibiotics (1% Antibiotic-Antimycotic 100 × and 50 × 10 −3 g L −1 gentamicin, Biosera). Prior to the selected treatments, cells were seeded on 6-and/or 96-well plates (TPP, Trasadingen, Switzerland) and left to settle for 24 h. The acridine compounds solutions (at concentrations ranging from 5-75 µM) were then added to cells for 24 or 48 h, and analysis was subsequently performed. MTT Assay MTT assays were performed in order to evaluate changes in the metabolic activity of cells that had occurred as a consequence of treatment with the acridine compounds. A549 cells (15 × 10 3 cells/cm 2 ) and CCD-18Co (15.625 × 10 3 cells/cm 2 ) were seeded in 96-well microplates. 
The A549 cells were treated for 24 and 48 h, and the CCD-18Co cells were treated for 48 h with different concentrations (5, 25, 50 and 75 µM) of the derivatives. After the treatment, the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltertrazolium bromide) solution in PBS (5 mg mL −1 ) was added to each well. The reaction was stopped after 4 h incubation, and the formazan was dissolved by the addition of SDS at a final concentration of 3.3%. The absorbance of formazan (λ = 584 nm) was measured using a BMG FLUOstar Optima spectrometer (BMG Labtech GmbH, Offenburg, Germany). The results were evaluated as percentages of the absorbance of the untreated control. Results are presented as the average percentage of cells from three independent experiments. IC 50 values for the derivatives were extrapolated from a sigmoidal fit (dose-response curve) to the metabolic activity data using OriginPro 8.5.0 SR1 (OriginLab Corp., Northampton, MA, USA). Quantification of Cell Number and Viability For the assessment of total cell numbers and viability within individual experimental groups, floating and adherent cells were harvested after treatment with the studied compounds and evaluated using a Bürker chamber with eosin staining. A549 cells were plated at a density of 135 × 10 3 cell/well into a 6-well plate and treated with different concentration of derivatives (25 and 50 µM) for 48 h. The total cell number was expressed as a percentage of the untreated control of the total cell number. Viability was expressed as a percentage of viable, eosin negative cells. The results are presented as the average percentage of cells from three independent experiments. Colony Forming Assay For the colony forming assay, floating and adherent cells were harvested together 48 h after treatment with the studied compounds (25 and 50 µM). The cells were then counted using a Bürker chamber with eosin staining and 500 viable cells per well were seeded in 6-well plates. After seven days of cultivation under standard conditions, the cells in the plates were fixed and stained with 1% methylene blue dye in methanol. Visualized colonies were scanned and counted by Image software, and the results were evaluated as percentages of the untreated control. The results are presented as the average percentage of colonies from three independent experiments. Cell Cycle Analysis For flow cytometric analysis of the cell cycle distribution, floating and adherent cells were harvested together 48 h after treatment with the compounds (25 and 50 µM), washed in cold PBS, fixed in cold 70% ethanol, and stored overnight at −20 • C. Prior to analysis, the cells were washed twice in PBS, resuspended in a staining solution (0.1% Triton X-100, 0.137 g L −1 ribonuclease A and 0.02 g L −1 propidium iodide (PI) and incubated in dark conditions at RT for 30 min. The DNA content was analysed using a BD FACSCalibur flow cytometer (Becton Dickinson, San Jose, CA, USA) with a 488 nm argon-ion excitation laser, and fluorescence was detected using a 585/42 nm band-pass filter (FL-2). ModFit 3.0 software (Verity Software House, Topsham, ME, USA) was used to generate DNA content frequency histograms and to quantify the percentage of cells in the individual cell cycle phases. The results are presented as the average ratio of cells in the individual phase to all cells from three independent experiments. 3.4.6. 
Selectivity Index (SI) SI = IC 50 of the pure compound in a normal cell line / IC 50 of the same pure compound in a cancer cell line, where IC 50 is the concentration that induces 50% inhibition of the growth of the treated cells. Statistical Analysis The obtained data were analysed using a one-way ANOVA with Tukey's post-test and are expressed as the mean ± standard deviation (S.D.) of at least three independent experiments. The experimental groups treated with the derivatives were compared with the control group: (*): p < 0.05, (**): p < 0.01, (***): p < 0.001. Conclusions A series of novel benzohydrazide derivatives 3a-3d containing an acridine moiety were designed, synthesized and characterized in detail using NMR, IR and HR mass spectrometry techniques. NMR-derived parameters (heteronuclear one- and two-bond coupling constants and 1 H, 13 C, and 15 N chemical shifts) allowed us to determine the configuration and conformation in solution for all synthesized compounds. We determined distinct configurational and conformational preferences to form E C4=N3 Z C(O)-N2 -3 and E C4=N3 Z C(O)-N2 -7. The duplicate signals in the NMR spectra of 7b-d were attributed to the presence of E N-N /Z N-N conformers. The major factors that control the conformation of the studied compound 7 are the hydrogen bonds N10···H2 and C(O)···H4. The in vitro antiproliferative activities of these compounds against A549 cells and normal CCD-18Co fibroblasts were studied. The compounds were tested against topoisomerase I and II, and their binding properties (ctDNA, HSA) were evaluated. The derivatives 3c and 3d act in vitro as potential dual inhibitors of hTopo I and II with a partial effect on the metabolic activity of A549 cancer cells. The values of the binding constant for the interaction of the acridine derivatives with ctDNA increased as follows: 3d < 3c < 3a < 3b. The highest K value in the acridine-benzohydrazone series was found for the fluoro-substituted derivative 3b (K = 3.18 × 10 3 M −1 ). The effect of the tested derivatives on the metabolic activity of A549 cells evaluated by the MTT assay decreased as follows: 3b (-F) > 3a (-H) > 3c (-Cl) > 3d (-Br). In the case of 3d, no significant activity against CCD-18Co fibroblasts was observed. The clonogenic survival was analysed after treatment of A549 cells with the compounds 3a and 3c. The acridine-benzohydrazides 3a and 3c reduced the clonogenic ability of A549 cells by 72% and 74%, respectively. The difference in the benzene ring substitution between 3a and 3c plays only a minor role in the clonogenic survival (as well as the cellularity and viability) of A549 cells. The results indicated that the interaction between the acridine derivatives and HSA could lead to a change in the secondary structure of the protein. In the presence of warfarin, the values of the binding constants decreased, which suggests that derivative 3a could bind to HSA at Sudlow site I. The findings presented in this paper suggest that these acridine derivatives exhibit promising potential as topoisomerase I and II inhibitors with anticancer activity against A549 human adherent lung carcinoma cells, and may also serve as DNA- and HSA-interacting agents. These features would be of considerable use in the development of drugs with enhanced or more selective effects and greater clinical efficacy.
Comparative efficacy of 0.1% and 0.15% Sodium Hyaluronate on lipid layer and meibomian glands following cataract surgery: A randomized prospective study Purpose To compare the efficacy of 0.15% HA with that of 0.1% HA eye drops for DES after cataract surgery. Methods This was a double-blinded, randomized, prospective study conducted in 69 participants (70 eyes) at Pusan National University Yangsan Hospital from February 1, 2022 to November 30, 2022. Participants were adult cataract patients with normal lid position who did not suffer from any other ocular disease and did not meet the exclusion criteria of the clinical trial. Participants were randomly divided into two groups: 35 participants (17 males and 18 females) in the 0.1% HA group and 34 participants (19 males and 15 females) in the 0.15% HA group, receiving treatment six times daily for 6 weeks following cataract surgery. Subjective and objective assessments were performed at preoperative and postoperative visits, including ocular surface disease index score, tear break-up time, corneal staining score, Schirmer's I test score, lipid layer thickness, meiboscore, and biochemical analysis of the eye drops. Results Throughout the study, the postoperative ocular surface disease index score was significantly lower in the group receiving 0.15% hyaluronic acid than in the group receiving 0.1% hyaluronic acid. Additionally, the postoperative ocular surface disease index score showed a significant positive correlation with the postoperative use of 0.15% hyaluronic acid and the preoperative Schirmer's I test score. In multivariate analysis, treatment with 0.15% hyaluronic acid and the preoperative ocular surface disease index score were significant independent parameters affecting the postoperative ocular surface disease index score. Conclusion The use of 0.15% hyaluronic acid is recommended for its potential advantages in alleviating symptoms following cataract surgery, making it a viable alternative to traditional 0.1% hyaluronic acid treatment. Trial registration ISRCTN95830348. Introduction Cataract extraction surgery is a minimally invasive technique, usually performed on an outpatient basis. Patients generally undergo short and uncomplicated recovery periods [1,2]. Following surgery, patients are prescribed eye drops that serve to reduce surgery-induced inflammation and promote visual recovery [3,4]. The main objective is to prevent postoperative issues such as macular edema, corneal edema, and endophthalmitis [5,6]. In addition to rare but sight-threatening complications, a significant percentage of patients experience symptoms consistent with dry eye syndrome (DES) after cataract surgery [7]. These symptoms include ocular discomfort, visual disturbances, and tear film instability, which are characteristic of DES-a multifactorial condition affecting the precorneal tear film. DES is characterized by foreign body sensations, ocular soreness, and ocular pain [8,9]. Factors responsible for the development of dry eye after cataract surgery include the use of antibiotic-steroid eye drops, nerve transection during corneal incisions, and local inflammation, all of which can disrupt tear film stability [4,10].
Recent publications have reported that patients who also receive artificial tears in the postoperative regimen experience significantly less subjective discomfort and show improved tear break-up time (TBUT) scores [11][12][13]. One such artificial tear medication is sodium hyaluronate, commonly referred to as hyaluronic acid (HA). HA is an anionic glycosaminoglycan with viscoelastic properties that has been widely used as a lubricant in eye drops in recent decades. By effectively retaining water and preventing dehydration, HA enhances lubrication of the ocular surface, stabilizes the tear film, promotes epithelial healing, and ameliorates the severity of dry eye symptoms [14,15]. Treatment with HA-only topical drops has also been shown to increase quality of life scores and patient satisfaction, especially in cases of mild-to-moderate DES [16]. This study aimed to compare the efficacy of 0.15% HA with that of 0.1% HA eye drops for DES after cataract surgery, and this comparison was made by quantitatively evaluating clinical manifestations. Subjective parameters included the ocular surface disease index (OSDI) score, and objective parameters included TBUT, corneal staining score (CSS), Schirmer's I test score, lipid layer thickness (LLT), meiboscore, and biochemical analysis of the eye drops [17]. Materials and methods This study was a prospective, randomized, double-blinded, and controlled clinical trial aimed at evaluating the efficacy of 0.15% HA in terms of TBUT, CSS, Schirmer's I score, OSDI score, LLT, meiboscore and biochemical analysis in patients following cataract surgery. The study protocol received approval from the Pusan National University Yangsan Hospital Institutional Review Board (05-2022-084) and was performed in accordance with the principles of the Declaration of Helsinki. All participants provided written informed consent to take part in the study. The protocol for this clinical trial and the supporting CONSORT checklist are available as (S1 Checklist). The recruitment period for the study spanned from May 11, 2022. Study participants Participants included in this study were adults who underwent cataract surgery at our center from February 1, 2022 to November 30, 2022, exhibited normal lid position and closure, and did not have any ocular diseases. Exclusion criteria included patients who had previously used topical artificial tears, anti-inflammatory agents, antibiotics, or other medications that could affect tear production or stimulate tear secretion within 90 days prior to surgery. Patients with a history of ocular surgery, laser, or systemic treatments that could impact tear secretion, autoimmune diseases, evidence of ocular surface disorders observed during slit-lamp examination, or those using contact lenses were also excluded.
The sample size was calculated using MedCalc version 10.0 (MedCalc, Ostend, Mariakerke, Belgium). The minimum sample size requirement for a t-test with an alpha level of 0.05 and a power of 0.8 was calculated to be 21 for each group, and, in consideration of a 20% dropout rate, 25 patients needed to be recruited for each group. A total of 104 patients (105 eyes) were recruited at the Department of Ophthalmology of Pusan National University Yangsan Hospital, and the per-protocol (PP) population consisted of 69 patients (70 eyes) (Fig 1). Eligible subjects were assigned a sequential number with a corresponding randomisation code generated by an independent third party using SAS version 8.0 (SAS Institute, Inc., Cary, NC). Following the randomization protocol, clinical staff assigned patients to receive either 0.15% HA (Hyalu Mini; Hanmi Pharmaceutical, Inc., Seoul, Korea) or 0.1% HA (HyalQ; Ildong Pharmaceutical, Inc., Seoul, Korea), administered six times daily for 6 weeks following cataract surgery. Clear instructions on how to administer the ophthalmic solutions were provided to the patients by the clinical staff. To maintain blinding of the researchers and participants, medications were dispensed by a pharmacologist, and the specific type of topical medication was not revealed until the completion of the follow-up examination at the end of the study. All patients underwent standard small-incision cataract surgery, which was performed by a single surgeon (JEL). A clear corneal incision 2.8 mm in length was made in the superotemporal region of the eye. Subsequently, all eyes received identical postoperative eye drops consisting of 1.5% levofloxacin, administered four times daily for 2 weeks, and 0.1% fluorometholone, administered four times daily for 1 week, followed by a tapering regimen. Additionally, patients received either 0.1% or 0.15% HA six times daily for 6 weeks. Clinical measurements To assess ocular surface status, the following measurements were conducted 1 week preoperatively: TBUT, CSS, OSDI score, Schirmer's I test score, LLT, and meiboscore. Following cataract surgery, follow-up visits were conducted at 1, 3, and 6 weeks postoperatively to measure TBUT, CSS, and OSDI score at each visit. Ocular symptoms were evaluated preoperatively and during each follow-up visit using the OSDI questionnaire. LLT was measured using the LipiView Ocular Surface Interferometer (TearScience Inc., Morrisville, NC) to obtain an interferometric image of the tear film, as previously described [18]. Interferometric color units (ICUs) were utilized to measure LLT, with one ICU equaling 1 nm of LLT. The recorded measurements included the average LLT obtained from all frame averages, as well as the maximum and minimum LLT. An index, the C-factor, validates the stability of the LLT measurements. Participants with interferometer results showing a C-factor < 0.8 were excluded from the study. LipiView has an upper cut-off value of 100 ICU. The primary outcomes were the changes in TBUT, Schirmer's I test score, OSDI score, and LLT during the follow-up period, compared between the 0.15% HA group and the 0.1% HA group. The secondary outcome was to determine the baseline factors that affected each clinical parameter at 6 weeks postoperatively. To minimize subjective measurement bias, the TBUT, CSS, and OSDI score measurements were conducted by the same surgeon (JEL) at the different time points.
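Returning to the sample-size calculation described earlier in the Methods, the same reasoning can be sketched in Python with statsmodels (the study itself used MedCalc); the standardized effect size assumed below is a hypothetical value, since the effect size underlying the published calculation is not stated.

```python
import math
from statsmodels.stats.power import TTestIndPower

effect_size = 0.9          # assumed Cohen's d (hypothetical; not reported in the study)

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')

dropout = 0.20
n_recruit = math.ceil(n_per_group / (1.0 - dropout))   # inflate for an expected 20% dropout
print(f"n per group = {math.ceil(n_per_group)}, recruit at least = {n_recruit}")
```

Depending on the convention, the dropout inflation can be done by dividing by (1 − dropout) or by multiplying by (1 + dropout), which yields slightly different recruitment targets.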
The Meibomian gland (MG) images were captured by the same examiner three times under consistent contrast settings, ensuring that the entire MG area was included and clearly visible. The images were evaluated using the Phoenix Meibography imaging module. Although the process was computer-assisted, it necessitated manual tracing of the gland boundaries. Initially, the observer selected the best image and assessed either the upper or lower eyelid. The lid area and gland boundaries were manually marked, and the area-of-loss score was automatically calculated, along with a pre-established degree using the Meibomian Scale [19]. Meiboscores were categorized based on the percentage of the lost field, with values defined as 0 = 0%, 1 ≤ 25%, 2 = 26-50%, 3 = 51-75%, and 4 > 75% [19]. Scores for both upper and lower eyelids were computed, resulting in a total score ranging from 0 to 8 for each participant. Statistical analysis Following the recruiting protocol, we recruited 104 individuals (105 eyes) who met the inclusion criteria. Subsequently, 69 participants (70 eyes) were enrolled based on the exclusion criteria and randomized into two groups: 35 participants (17 males and 18 females) in the 0.1% HA group and 34 participants (19 males and 15 females) in the 0.15% HA group. The statistical analysis was then performed on these two groups. All statistical analyses were performed using SPSS for Windows version 26.0 (SPSS Inc., Chicago, IL, USA). Descriptive statistics are reported as mean ± standard deviation. Data normality was assessed using the Kolmogorov-Smirnov test. An independent t-test or chi-square analysis was used to compare the baseline values between the 0.15% HA group and the 0.1% HA group. The time-dependent changes in TBUT, Schirmer's I test score, OSDI score, and LLT between the two groups were evaluated using repeated-measures analysis of variance (ANOVA). To compare parameters at different time points within each group, ANOVA with a post-hoc paired Tukey's test was employed. Multiple linear regression analysis was conducted to identify determinant factors associated with the clinical parameters TBUT, Schirmer's I test score, OSDI score, and LLT at 6 weeks postoperatively. Each variable was initially analyzed using a univariate model, and all significant variables (p < 0.10) were subsequently evaluated using a multivariate model with the backward method. The coefficient of determination (R 2 ) in the linear regression was reported, and statistical significance was set at p < 0.05.
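A minimal sketch of the backward-elimination multivariate regression described above, using statsmodels OLS on a made-up data frame; the variable names and values are placeholders standing in for the preoperative parameters, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 70
# Hypothetical preoperative predictors and a postoperative outcome (placeholders)
df = pd.DataFrame({
    "group_015HA": rng.integers(0, 2, n),
    "preop_OSDI": rng.normal(25, 10, n),
    "preop_TBUT": rng.normal(6, 2, n),
    "preop_schirmer": rng.normal(10, 4, n),
})
df["postop_OSDI"] = (10 - 4 * df["group_015HA"] + 0.4 * df["preop_OSDI"]
                     + rng.normal(0, 5, n))

predictors = ["group_015HA", "preop_OSDI", "preop_TBUT", "preop_schirmer"]
while predictors:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["postop_OSDI"], X).fit()
    pvals = model.pvalues.drop("const")
    worst = pvals.idxmax()
    if pvals[worst] < 0.10:          # all remaining predictors meet the p < 0.10 threshold
        break
    predictors.remove(worst)          # backward elimination: drop the least significant predictor

print(model.summary().tables[1])
print("R^2 =", round(model.rsquared, 3))
```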
Results

A total of 69 participants (70 eyes) were enrolled and randomized into two groups: 35 participants (17 males and 18 females) in the 0.1% HA group and 34 participants (19 males and 15 females) in the 0.15% HA group. Only patients who completed the study and provided all data at 1, 3, and 6 weeks postoperatively were included. The clinical and demographic data for both groups are presented in Table 1. As shown in Table 2, the one-way ANOVA test indicated significant improvements in TBUT and OSDI scores in the 0.15% HA group (p = 0.001 and <0.001, respectively), as well as LLT improvements in both groups (p = 0.037 in the 0.1% HA group and p = 0.014 in the 0.15% HA group) during the follow-up period. Post-hoc Tukey analysis performed for each variable showed that OSDI scores at postoperative 3 and 6 weeks in the 0.15% HA group differed significantly from those at the preoperative visit (p = 0.009 and <0.001, respectively, by one-way ANOVA with post-hoc Tukey test). Conversely, all clinical parameters of the 0.1% HA group exhibited no significant differences between the preoperative and postoperative visits. Furthermore, Table 2 and Fig 2 illustrate that the changes in OSDI score from the preoperative time point to 6 weeks after cataract surgery were significantly different between the 0.1% and 0.15% HA groups (p = 0.027, repeated measures ANOVA).

Correlation coefficients were calculated to evaluate the effects of preoperative clinical measurements on the TBUT, Schirmer's I test score, OSDI score, CSS, LLT, and meiboscore at 6 weeks postoperatively (Table 3). Notably, Schirmer's I test score positively correlated with preoperative TBUT and preoperative Schirmer's I test score (Table 3). Multivariate linear regression analysis was performed to examine the influence of independent preoperative parameters on TBUT, CSS, OSDI score, Schirmer's I test score, LLT, and meiboscore at postoperative 6 weeks (Table 4). Preoperative CSS and Schirmer's I test scores were significant parameters for postoperative TBUT (R² = 0.119, p = 0.021 and 0.022, respectively). The 0.15% HA treatment and preoperative OSDI score were significant independent parameters for postoperative OSDI score (R² = 0.121, p = 0.018 and 0.024, respectively). Furthermore, age was a significant independent parameter for postoperative LLT (R² = 0.184, p < 0.001), and preoperative OSDI and meiboscore were significant parameters for the postoperative meiboscore after 6 weeks (R² = 0.563, p = 0.128 and <0.001, respectively). Chemical analysis revealed the concentrations of Na+, K+, Cl-, pH, and osmolality of each solution. The concentration of Na+ was higher in the 0.15% HA group, exceeding the normal range. K+ was higher in the 0.1% HA group, and Cl- in both solutions exceeded the normal range. The pH of both solutions measured 5.5, which falls below the normal range, whereas the osmolality of both solutions remained within the normal range (Table 5).
Discussion

The aim of this randomized clinical trial was to compare the effects of 0.15% HA with those of 0.1% HA on the ocular surface following cataract surgery. Among participants who had undergone cataract surgery, 0.15% HA significantly improved the OSDI score at 6 weeks postoperatively compared with 0.1% HA. Furthermore, postoperative treatment with 0.15% HA, as opposed to 0.1% HA, showed a significant positive correlation with the OSDI score at 6 weeks after surgery. Preoperative OSDI values and Schirmer's I test scores also had a positive impact on the OSDI score at 6 weeks postoperatively. Despite advances in cataract surgical techniques, most post-cataract patients still experience dry eye symptoms that vary in severity and duration [3,20]. Corneal nerve transection, prolonged microscope light exposure, use of an aspirating speculum, and heat from phacoemulsification devices are possible risk factors for postoperative dry eye [6,10]. Artificial tears are the first-line treatment for dry eye symptoms after cataract surgery [4]. HA stimulates ocular surface tissue healing by humidifying the eye surface and restoring the integrity of the corneal and conjunctival epithelium [14]. Recently, artificial tear preparations with increased HA concentrations have been introduced; however, to our knowledge, no comparative clinical trials have been published confirming a potential additional benefit of 0.15% HA over the standard 0.1% HA for DES after cataract surgery. Exploring the potential superiority of 0.15% HA in patients who underwent cataract surgery was the primary objective of our study.

Evaluating treatment success in DES is a major challenge because of the weak correlation between signs and symptoms in DES and the high variability of objective signs [21]. Therefore, assessing treatment efficacy using both subjective symptoms and objective signs is particularly important in patients with dry eye. TBUT is a surrogate measure of tear film stability. The improvements in TBUT with the use of HA suggest improved integrity of the tear film, which can prevent evaporation and hyperosmolarity owing to its rheological and water-retention properties [13]. Although TBUT increased with the postoperative use of both 0.1% and 0.15% HA, significant improvement was found only in the 0.15% HA group in this study. CSS is a valuable clinical tool for assessing epithelial cell viability [22]. While CSS is subjective and observer-dependent, it provides useful information for assessing disease severity and monitoring treatment response [23]. Although the changes in CSS after surgery did not differ significantly between the two groups, the CSS of both groups increased at postoperative week 1 and subsequently decreased throughout the follow-up period. This indicates that corneal damage after cataract surgery gradually healed to its preoperative state starting 1 week after surgery.
In addition to its benefits on objective measures, 0.15% HA was also effective for subjective outcomes, showing improvement in symptoms as measured by the OSDI score. The OSDI is a unique instrument that assesses the frequency of dry eye symptoms and their impact on vision-related functioning. Because the OSDI score has good to excellent reliability, validity, sensitivity, and specificity, it can be used as a valuable complement to other clinical and subjective measures of dry eye disease by providing a quantifiable assessment of dry eye. It also has the psychometric properties necessary for use as an endpoint in clinical trials of dry eye disease [24,25]. In this study, we used the OSDI score to evaluate subjective changes in ocular symptoms after cataract surgery; the 0.15% HA group demonstrated significantly favorable results compared with 0.1% HA, and the postoperative OSDI score demonstrated a meaningful positive correlation with preoperative OSDI and Schirmer's I test scores. This suggests that 0.15% HA is superior from an ocular comfort perspective and that preoperative conditions, such as OSDI and Schirmer's I test score, could play an important role in alleviating postoperative dry eye symptoms.

Cataract surgery also appears to influence MG function [3,26]. In this study, LLT was significantly improved in both the 0.1% and 0.15% HA groups, and postoperative LLT showed a significant positive correlation with preoperative LLT as well as with younger age. However, there were no statistically significant differences in meiboscores between the 0.1% and 0.15% HA groups. Our previous reports comparing HA with cyclosporine or diquafosol have already shown that better LLT at the preoperative visit could indicate improved postoperative LLT [7,18,26].

Patagiota N, et al. compared the efficacy of 0.1% and 0.2% HA; 0.2% HA showed significant improvements in TBUT at 6 weeks after cataract surgery, indicating better tear film stability. They also used surface discomfort index (SDI) scores to quantify the overall subjective discomfort experienced by post-cataract surgery patients, which likewise showed significantly superior results with 0.2% HA. When comparing the two concentrations of HA, the 0.2% group demonstrated particularly better scores for the stinging sensation and foreign body sensation indices [11]. Ishioka reported immediate visual impairment following instillation of 0.3% HA compared with 0.1% HA [27]. Although this blurring of vision is usually temporary, it can cause ocular discomfort reflected in OSDI scores. Park et al. compared the efficacy of 0.1%, 0.15%, and 0.3% HA, and all groups showed a significant improvement in OSDI scores at 6 weeks, with 0.15% HA showing the most significant improvement [12]. The less favorable outcome with the 0.3% concentration might be due to the higher HA concentration increasing viscosity, inducing blurry vision, or potentially generating higher-order aberrations that could affect ocular sensation [12,27]. Similarly, in the present study, 0.15% HA showed a significantly better OSDI score than 0.1% HA at postoperative 6 weeks. These findings suggest that the 0.15% concentration of HA provides the symptom-relief advantage of a higher concentration while avoiding the potential side effects of viscosity-related blurry vision. In other words, 0.15% HA may be more effective in reducing symptoms in patients with dry eye or ocular surface discomfort after cataract surgery.
The electrolyte content, pH, osmolarity, and viscosity of commercially available topical ocular solutions may induce ocular surface damage when the solutions are used for long periods or in excessive amounts [28]. Our study aimed to investigate the differences in subjective symptoms and ocular sensations experienced by patients through a biochemical analysis of the two drugs. Based on our results, the sodium concentration in the 0.15% HA group was slightly higher, whereas in the 0.1% HA group it was lower than the ideal range (142-152.7 mEq/L) [29]. The K+ concentration in the 0.1% HA group was higher than the ideal range, which has been reported to be 4.3-4.6 mEq/L for extracellular fluid. The chloride concentrations in both groups were higher than the ideal range (104.0-117.4 mEq/L), the osmolarity was within the ideal range (260-320 mOsm/kg), and the pH values were both 5.5, below the ideal range (7.0-7.7) [29,30]. Based on our previous report comparing the electrolyte composition of HA solutions of different concentrations, electrolyte differences of this magnitude, with values above or below the ideal range, are unlikely to affect the ocular surface. That report also noted that osmolarity imbalance can damage cells, in that hypertonic stress induced human corneal epithelial cell shrinkage and apoptosis in cell culture models, and that low pH could increase corneal epithelial permeability. Both 0.1% and 0.15% HA showed osmotic pressure in the normal range, while the pH was lower than the normal range, so both products might be associated with higher corneal permeability [17].

Although this study had the advantage of a double-blind, randomized, prospective design intended to minimize bias, it also had some limitations. First, the study duration was relatively short, which may be insufficient for evaluating the long-term impact of different HA concentrations on DES. Second, it is important to note that, besides the eye drops used in this study, other products with the same concentration may have slight variations in composition depending on their manufacturers. These differences mean that the results obtained in this study may not be replicable across all products of the same concentration. Consequently, even if a different product contains the same active ingredient at the same concentration, the overall formulation and other inactive ingredients may differ slightly, potentially affecting the bioavailability, efficacy, and side effects of the drug.

In conclusion, the present study demonstrated that the postoperative OSDI score was significantly better in the 0.15% HA group than in the 0.1% HA group. Furthermore, there were significant positive correlations between postoperative use of 0.15% HA, preoperative OSDI scores, and Schirmer's I test scores. Therefore, it is suggested that 0.15% HA offers advantages in relieving symptoms after cataract surgery and can be considered an alternative to conventional 0.1% HA treatment.
ap value < 0.05 one-way analysis followed by post-hoc Tukey analysis.b Between preoperative visit and postoperative 1 weeks.c Between preoperative visit and postoperative 3 weeks.d Between preoperative visit and postoperative 6 weeks.https://doi.org/10.1371/journal.pone.0306253.t002preoperative TBUT and preoperative Schirmer's I test score (p = 0.291 and p = 0.654, respectively).OSDI score at postoperative 6 weeks was negatively related to the 0.15% HA group and preoperative Schirmer I test score (p = -0.282and p = -0.264,respectively).LLT at postoperative 6 weeks negatively correlated with age (p = -0.543)and positively correlated with preoperative LLT (p = 0.660).The meiboscore positively correlated with the preoperative meiboscore and negatively correlated with the OSDI score (p = 0.740 and p = -0.267,respectively). Table 2 . Changes in tear break up time (TBUT), corneal staining score (CSS), ocular surface disease index (OSDI) score, Schirmer's I test score, lipid layer thickness (LLT), and meiboscore in the hyaluronic acid (HA) 0.1% and 0.15% groups. Preoperative Postoperative 1 week Postoperative 3 weeks Postoperative 6 weeks p a p b p c p d p value* pre vs 1 week pre vs 3 weeks pre vs 6 weeks The OSDI during the follow-up after cataract surgery was significantly different between the participants treated with 0.1% and 0.15% HA (p = 0.027 by repeated measures ANOVA).Statistically significant p values are marked in bold (p < 0.05).*p value < 0.05 by repeated-measures analysis of variance.
2024-08-01T05:12:46.087Z
2024-07-30T00:00:00.000
{ "year": 2024, "sha1": "51a328753dc48c278c01148192f08798783daed8", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "51a328753dc48c278c01148192f08798783daed8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
244190853
pes2o/s2orc
v3-fos-license
Occurrence, variations, and risk assessment of neonicotinoid insecticides in Harbin section of the Songhua River, northeast China Neonicotinoid insecticides (NNIs) have been intensively used and exploited, resulting in their presence and accumulation in multiple environmental media. We herein investigated the current levels of eight major NNIs in the Harbin section of the Songhua River in northeast China, providing the first systematic report on NNIs in this region. At least four NNIs in water and three in sediment were detected, with total concentrations ranging from 30.8 to 135 ng L-1 and from 0.61 to 14.7 ng g-1 dw, respectively. Larger spatial variations in surface water NNIs concentrations were observed in tributary than mainstream (p < 0.05) due to the intensive human activities (e.g., horticulture, urban landscaping, and household pet flea control) and the discharge of wastewater from many treatment plants. There was a significant positive correlation (p < 0.05) between the concentrations of residual imidacloprid (IMI), clothianidin (CLO), and Σ4NNIs in the sediment and total organic carbon (TOC). Due to its high solubility and low octanol-water partition coefficient (Kow), the sediment-water exchange behavior shows that NNIs in sediments can re-enter into the water body. Human exposure risk was assessed using the relative potency factor (RPF), which showed that infants have the highest exposure risk (estimated daily intake (ΣIMIeq EDI): 31.9 ng kg-1 bw·d-1). The concentration thresholds of NNIs for aquatic organisms in the Harbin section of the Songhua River were determined using the species sensitivity distribution (SSD) approach, resulting in a value of 355 ng L-1 for acute hazardous concentration for 5% of species (HC5) and 165 ng L-1 for chronic HC5. Aquatic organisms at low trophic levels were more vulnerable to potential harm from NNIs. Introduction Neonicotinoid insecticides (NNIs) have been widely applied to pest control and have become one of the most prevalent insecticides worldwide [1,2]. NNIs selectively act on nicotinic acetylcholine receptors (nAChRs) in the postsynaptic membrane of the insect nervous system and its peripheral nerves to obstruct central nerve conduction, resulting in excitement, paralysis, and ultimately death of pests [1,3]. Although NNIs applications are for controlling insects in agricultural plants, both parent and metabolites of NNIs have been confirmed to damage non-target species [4e6], causing significant ecological imbalance. For example, adverse effects from NNIs have been confirmed on bees [4,7,8], insectivorous birds [9,10], aquatic organisms [11,12], earthworms [13], and humans [3,14]. The adverse ecological risks associated with the extensive use of NNIs have caused policy makers' attention besides the research community. For example, the European Commission [15e17], France [18,19], and Canada [20e22] have already restricted the use of imidacloprid (IMI), thiamethoxam (THM), and clothianidin (CLO). Meanwhile, France also prohibited the application of acetamiprid (ACE) and thiacloprid (THA), and the U.S. has restricted the registration of a portion of NNIs products to protect pollinators [23e25]. NNIs have been rapidly developed and widely used due to their high efficiency, broad-spectrum, strong selectivity, and lack of cross-resistance with other traditional insecticides [26,27]. 
In 2017, over 2600 NNI products were registered in China, with IMI accounting for approximately half of the total products, followed by ACE (26.1%) and THM (14.6%) [28]. The usage of NNI commodities in China rose from 2013 to 2018 and has exceeded 30,000 tons annually since 2016 (unpublished data), with IMI, ACE, and THM as the top three in use. Owing to their high water solubility and low volatility [29-31], NNIs can pollute water systems from the soil through runoff [29]. For example, Chen et al. [30] measured NNIs in river/lake water along the Yangtze River Basin and reported total concentrations of 13.0-3240 ng L-1. Mahai et al. [2] determined six NNIs in the central Yangtze River and found that ACE (100%), IMI (100%), and THM (95%) had the highest detection frequencies. Zhang et al. and Yi et al. [31,32] studied the occurrence and distribution of five NNIs in the Pearl River, and at least one NNI was detected in surface water. Apart from surface water, several studies have shown the presence of NNIs in other environmental media, including sediments [29,33], soil [29,34], particulate matter [35,36], and dust [37,38]. Meanwhile, scholars have confirmed that NNIs are also systemically distributed in biological media such as urine [39,40], hair [41], teeth [42], fruits [43], and vegetables [43,44]. NNI pollution has become a major global environmental concern. Recent reports on NNI water pollution were mostly from developed countries [2]. To date, studies on NNIs in surface water and sediments in China have mainly focused on central and southern China, despite the fact that China is a major agricultural country with great production and consumption of NNIs. The Songhua River basin in northeast China has a dense population and well-developed agriculture and aquaculture. The Songhua River, one of the seven major water systems in China, is an important backup resource supplying drinking water to the city of Harbin [45,46]. With rapidly developing agriculture and urbanization, a large volume of agricultural and household wastewater has been released into the Songhua River, resulting in worsening water quality and posing risks to human health [47]. Existing studies on Songhua River pollution have mostly focused on heavy metals [45,46,48,49], polycyclic aromatic hydrocarbons [50-52], polychlorinated biphenyls [53-55], antibiotics [56,57], etc., but no systematic investigation of NNI pollution has been made. Because NNIs are emerging contaminants, relevant data and parameters on their fate, environmental behavior, and ecotoxicological effects are currently scarce and poorly understood. The present study aims to fill this knowledge gap by investigating the occurrence of eight common NNIs in surface water and sediments and their spatial distributions in the Harbin section of the Songhua River, estimating the sediment-water exchange behavior of NNIs using fugacity fractions, and assessing the potential exposure risks to humans and the aquatic ecosystem. To our knowledge, this is the first systematic report documenting the occurrence and risk assessment of NNIs in the Songhua River, and the results of the current study will offer useful data and knowledge for regional pesticide management.

Study area and sample collection

The Harbin section of the Songhua River (125°42′-130°10′E, 44°04′-46°40′N) flows through the city in a southwest-to-northeast direction [58], covering approximately 66 km.
The study region has a temperate continental monsoon climate with significant and uneven seasonal changes in precipitation and runoff distribution, and the annual runoff is mostly concentrated from July to September (http://www.harbin.gov.cn/). Between September and November of 2019, 13 water samples and 11 sediment samples were collected in the Harbin section of the Songhua River (Fig. 1). The sampling sites included four locations in tributaries of the Songhua River (T1 to T4), one location upstream of the city (M1), six locations in the center of the city (M2 to M7), and two locations downstream of the city (M8 and M9). The sampling sites were selected to represent areas with different economic development and human activities, including rural, commercial, residential, and scenic areas. Details can be found in the supporting information (SI, Table S1). At each sampling site, at least 1 L of surface water was collected in brown glass bottles using a portable water collector, and all water samples were sent to the laboratory immediately and placed in a refrigerator at 4 °C. Sediment samples were obtained at a depth of 0-10 cm with a stainless steel grab, and the overall weight of each sediment sample was not less than 500 g. The samples were quickly wrapped in polyethylene bags, shipped to the laboratory, and placed in a refrigerator at -20 °C. Sediment samples were not collected at sampling sites M4 and M6 due to the influence of the nearby berm. In addition, previous studies have shown that temperature has little effect on the concentration of NNIs [31], so the difference between the two sampling temperatures is ignored in the current study.

Sample processing and chemical analysis

The target NNIs in water samples were extracted using a previously described method [2]. Briefly, water samples (1 L) were filtered through a Teflon filter and spiked with an internal standards mixture (50 μL of 1 mg L-1) before being processed using Waters Oasis HLB solid-phase extraction (SPE) cartridges (500 mg, 6 cc, Milford, MA, USA). The cartridge was preconditioned with 5 mL methanol and 5 mL ultra-pure water in sequence. After the water samples were loaded, the target compounds were eluted with 5 mL ultra-pure water, followed by 5 mL methanol. The eluents were evaporated at 35 °C under a gentle nitrogen stream until nearly dry, then redissolved in 1 mL of 25% acetonitrile in water and passed through a 0.22 μm filter for analysis. The sediment was pretreated using a dispersive solid-phase extraction procedure [59,60]. In detail, a 5 g freeze-dried sample was transferred into a PTFE centrifuge tube, spiked with 50 μL of mixed internal standard (1 mg L-1), and left to stand for 45 min. After adding 5 mL of ultrapure water and 10 mL of acetonitrile, the tube was shaken for 1 min and vortexed for 1 min. Then 4 g MgSO4 and 1 g NaCl were added to the tube, followed by vortexing for 1 min. The tube was then centrifuged at 5000 rpm for 5 min. A 6 mL aliquot of the supernatant was transferred into a clean PTFE tube containing 200 mg primary secondary amine (PSA). The extract was vortexed for 2 min and centrifuged at 5000 rpm for 5 min. An aliquot (5 mL) of the upper organic solution was evaporated to near dryness under a gentle nitrogen stream at 35 °C and reconstituted in 1 mL of 25% acetonitrile in water. The extract was filtered through a 0.22 μm PTFE membrane for HPLC-MS/MS analysis.
The chemical analysis was carried out on an AB SCIEX Triple Quad 5500 HPLC-MS/MS system (Framingham, MA, USA) using an electrospray ionization source in the positive ion mode (ESI+) with multiple reaction monitoring (MRM). The injection volume was 2 μL. The analytes were separated at 25 °C using a Phenomenex Kinetex® C18 column (100 mm × 2.1 mm, 1.7 μm). The mobile phase consisted of 0.1% formic acid in water (A) and acetonitrile (B), and the flow rate was set at 0.3 mL min-1. The gradient program began with 95% A and 5% B, followed by a linear decrease from 95% to 65% A in 3 min, then to 45% A in 6 min, and finally reverted to 95% A at 9 min before the program ended at 12 min (Fig. S1). In the MS/MS analysis, nitrogen was used as the atomizing gas. The curtain and collision gas pressures were set at 35 and 7 psi, respectively. The ion spray voltage was set at 5500 V. The drying gas temperature was set at 550 °C. The pressures of ion source gas 1 and gas 2 were set at 55 and 60 psi, respectively. The collision cell exit potential and entrance potential were 16.0 and 10.0 V, respectively. The HPLC-MS/MS parameters for individual analytes are listed in Table S2. Total organic carbon (TOC) contents of the sediments were also determined: after inorganic carbonate was removed with 1 mol L-1 hydrochloric acid, the TOC contents were measured with an Elementar Vario TOC select analyzer (Hanau, Germany) [33].

Quality assurance and quality control

Procedural blank, laboratory blank, and matrix-spiked samples were run before sample analysis. No target NNIs were detected in the blank samples. Meanwhile, three isotope-labeled compounds (clothianidin-d3, imidacloprid-d4, and thiamethoxam-d3) were added to all samples before extraction and used as surrogate standards to check the efficiency of the sample preparation process [6]. Imidacloprid-d4 was used as the internal standard for IMI, ACE, and THA; thiamethoxam-d3 for THM, nitenpyram (NTP), and dinotefuran (DIN); and clothianidin-d3 for CLO and imidaclothiz (IMIT). The instrumental detection limits (IDL) were calculated from the lowest amount of the eight target analytes based on three times the signal-to-noise (S/N) ratio [61], ranging from 0.002 to 0.02 ng mL-1. The method detection limit (MDL) of the target analytes in the samples was calculated from an S/N ratio of 10 (Table S2) [2,38]. In this study, the recoveries of the eight target compounds ranged from 86.3% to 108.9% for water and from 79.8% to 106.5% for sediments (Table S2). If the detected concentration in a sample was lower than the MDL, the value was set to zero.

Sediment-water exchange

The sediment-water exchange behavior of NNIs was calculated using fugacity fractions (ff) (Eq. (1)), an important process affecting water quality and the fate of NNIs [50,62]. The derivation of the relevant equations is shown in the SI file (Text S2).

ff = f_s / (f_s + f_w)    (1)

where f_s and f_w are the fugacities (Pa) of the NNIs in sediment and water, respectively, C_s and C_w are the concentrations of NNIs in sediment (ng g-1 dw) and water (ng L-1), respectively, f_oc is the organic carbon fraction in the sediment, and K_ow is the dimensionless octanol-water partition coefficient. When ff < 0.5, NNIs can diffuse from water into sediments, and in this situation the sediment acts as a sink.
In contrast, i.e., ff > 0.5, the sediment acts as a secondary source with NNIs releasing from the sediment to water. Estimation of human exposure Health risk refers to the likelihood that a particular exposure or set of exposures will harm or may harm an individual's health [63]. To assess human's cumulative exposure to NNIs, the relative potency factor (RPF) method proposed by the U.S. Environmental Protection Agency (U.S. EPA) was used to normalize the NNIs exposure effects [2,5,6,64]. IMI, mostly studied in literature and widely applied in agriculture, was selected as the index chemical. RPF of each target NNI was obtained by comparing its relative chronic reference dose (cRfD) ( Table S3) to that of IMI (Eq. (2)). The cumulative exposure of the total NNIs in surface water was then calculated using Eq. (3). where IMI eq is the cumulative exposure level of imidaclopridequivalent total NNIs [2], and NNI i is the concentration of ith NNI in surface water (ng$L -1 ). Due to the similar structures between those of IMIT and CLO, these two species were assumed to have the same cRfD. Conventional drinking water treatment was not efficient enough to remove NNIs [65]. We estimated the daily intake (EDI, ng$kg -1 bw$d -1 ) of each NNI in the Harbin section of the Songhua River (Eq. (4)). where DIR represents daily water ingestion rate (L$kg -1 bw$d -1 ) (Table S4), and AR is the absorption rate by a human with a constant value of 100% [5]. Aquatic ecological risk assessment Aquatic organisms can directly be affected by pesticides, mostly through runoff from farmlands [66]. The species sensitivity distribution (SSD) method was used to assess the risk of NNIs to aquatic organisms in the Songhua River. The SSD model can describe the variation in the sensitivity of different species to NNIs through probability distributions. Calculate proportions by first ranking selected toxicity data from lowest to highest, then converting ranks to proportions (Eq. (5)). where i is the rank of selected species, and n is the total number of selected species. The toxicity data are plotted according to stressor intensity (X-axis) vs. proportion (Y-axis) of selected species, and the distribution is matched to generate the SSD curve. Since the exposure effect to the total NNIs in each water sample was converted to that of IMI eq using the RPF method, we then selected IMI toxicity data as the evaluation index in the SSD model. Acute toxicity data (half effective concentration (EC 50 ) and half lethal concentration (LC 50 ) at 96 h of exposure) and chronic toxicity data (no observed effect concentration (NOEC) at least four days of exposure) for representative aquatic species in Chinese freshwater as well as standard test species were obtained from ECOTOX database (http://cfpub.epa.gov/ecotox/) of the U.S. EPA. Accordingly, we acquired 108 acute toxicity data for 38 aquatic species and 346 chronic toxicity data for 28 aquatic species in the present study (Table S5). The data was calculated using the SSD software provided by the U.S. EPA (SSD Generator V1, https://www.epa.gov/caddis-vol4). The environmental hazardous concentration (HC P ) with cumulative probability at p% was calculated by the SSD model, and the p-value was normally chosen to be five [67], i.e., 95% of the species would not be threatened by NNIs at that concentration. Statistical analysis Statistical analysis of data was carried out using SPSS (version 26.0) and Excel software (version 2016). 
Data were tested for normality by Shapiro-Wilk, and significance was tested using oneway ANOVA with p < 0.05. The relationship between NNIs and TOC in sediments was analyzed using Spearman correlation analysis. Concentrations of NNIs in water and sediment The residue status of the eight NNIs in the water bodies and sediments were determined in the Harbin section of the Songhua River, and the results are shown in Tables S6 and S7. Seven target compounds were detected in the water samples, namely IMI, THM, CLO, ACE, THA, DIN, and IMIT, while NTP was not detected in any water sample. As can be seen from Table 1, high detection frequencies (100%) of IMI, THM, CLO, and ACE in water were found, while the concentration of the seven NNIs (S 7 NNIs) in surface water ranged from 30.8 to 135 ng L -1 , with a median of 41.4 ng L -1 and a mean of 62.3 ng L -1 . In surface water, IMI and THM were the principal NNIs with their concentrations being in the range of 10.9e83.5 ng L -1 and 16.3e83.5 ng L -1 , respectively. The sum of these two species accounted for more than 80% of S 7 NNIs in different water samples (Fig. 2a). THM had the highest mean concentration (30.7 ng L -1 ), followed by IMI (22.4 ng L -1 ), while IMIT has the lowest mean concentration (0.03 ng L -1 ) in surface water of the Harbin section of the Songhua River. It is noteworthy that the detected concentrations of IMI in all surface water samples were higher than the chronic threshold value of 10 ng L -1 that was set in the U.S [68]. Compared to other rivers (Table S8), THM concentration in the Harbin section of the Songhua River was much higher than that in the central Yangtze River (4.29 ng L -1 ) [2] and Guangzhou urban waterways (10.9 ng L -1 ) [69], but lower than that in the Guangzhou section of the Pearl River (50.2 ng L -1 ) [31]. IMI concentration was higher than that in the central Yangtze River (6.11 ng L -1 ), but lower than Guangzhou urban waterways (81.1 ng L -1 ) and the Guangzhou section of the Pearl River (78.3 ng L -1 ). ACE concentration was similar to that of the central Yangtze River (2.70 ng L -1 ), but much lower than that in the Guangzhou urban waterways (51.2 ng L -1 ) and the Guangzhou section of the Pearl River (36.0 ng L -1 ). In China, more formulations containing IMI, THM, and ACE as active ingredients have been registered compared to the other NNIs [2]. The commodity, including IMI, ACE, and THM, is therefore likely to be used by planters (Table S9), resulting in higher detection frequencies and residual levels of these species compared with the other NNIs. The wide application of NNIs and wastewater disposal from wastewater treatment plants (WWTP) can be linked to the detected high concentrations of surface water NNIs in the Harbin section of the Songhua River. Heilongjiang Province is an important Commodity Grain Base in China [70]. Harbin has an arable land area of approximately 12.7% of Heilongjiang Province [71], and accordingly, the use of NNIs accounts for about 25% of the provincial. It was reported that fewer than 20% of NNIs active ingredients used in the agricultural sector can be absorbed by crops, while the remainder would enter into soil directly [5]. Subsequently, runoff can transport a portion of NNIs from soil into water bodies. The low vapor pressure and high log K oa (air partition coefficient) of NNIs lead to the rapid adsorption of sprayed NNIs onto atmospheric particulate matter, some of which can enter into surface water through atmospheric deposition [36,72]. 
In addition, NNIs in WWTP effluents should not be underestimated [31,73]. For instance, a study by Sadaria et al. [73] on 13 WWTPs in the U.S. found that the annual discharge of IMI in treated wastewater was about 1000e3400 kg. Besides, partial NNIs are widely used in horticulture, urban landscaping, household pest bait, and pet flea control [1,73]. A total of four NNIs were detected in the sediment, including IMI, THM, CLO, and ACE. The concentration of four NNIs (S 4 NNIs) detectable in the sediment ranged from 0.61 to 14.7 ng g -1 dw, with a mean of 3.63 ng g -1 dw. Considering the concentration contribution ratio, consistent with the water bodies, IMI and THM were the main contributors in the sediment (Fig. 2b) with a mean of 2.25 ng g -1 dw (range from 0.34 to 12.6 ng g -1 dw) and 0.51 ng g -1 dw (range from 0.12 to 1.58 ng g -1 dw), respectively. Among them, IMI, THM, and CLO had the highest detection rate (100%), while ACE was detected in only 9.09% of the sediments. It is worth noting that in addition to the contribution of the CLO application itself, THM in the environment can be converted to CLO [11,74]. Compared with other studies (Table S8), the total concentration of NNIs in the sediments of the Harbin section of the Songhua River is higher than that in the Guangzhou section of the Pearl River (1.38 ng g -1 dw) [31] and Belize (0.036 ng g -1 dw) [29], but lower than that in samples from South China (4.21 ng g -1 dw) [33] and Canada (40.8 ng g -1 dw) [75]. Higher residual concentrations and detection frequencies in the sediments were primarily associated with the intense and long-term use of NNIs in the area. Besides, the sediment composition of NNIs in this study was similar to those in Canada's wetland [75], with IMI, THM, and CLO as dominant species, but was different from those in the Guangzhou section of the Pearl River [31], where NNIs was dominated by ACE while IMI was not detected. The composition of NNIs in sediments varies with site location, which may be related to specific use patterns in each region as well as sediment composition. Distribution of NNIs in surface water and sediment The mass concentrations of NNIs detected in the four tributaries were significantly higher than that in the mainstream of the Songhua River (p < 0.05), indicating that the NNIs in mainstream mainly come from the inflow of high-polluted tributaries. In the mainstream surface water, the sum concentration of the seven NNIs fluctuated slightly (from 30.8 to 46.0 ng L -1 ). The concentration of NNIs at sites M1 and M3 is slightly higher than those at the other mainstream sampling sites. To some extent, this phenomenon might be attributed to the samples at these two sites were taken during the rainy season (September) with frequent rainfall, which resulted in increased surface runoff [76,77], and thus NNIs transfer from soil to water bodies. The highest concentrations of NNIs in surface water were observed at the sampling sites T2, T4, and T3 with 135 ng L -1 , 114 ng L -1 , and 111 ng L -1 , respectively. Site T2 is located in the Hejiagou River, an affluent river flowing through factories, enterprises, and residential areas that releases vast quantities of industrial and domestic waste into the river. Moreover, site T2 is situated downstream of the Qunli WWTP, in which around 250,000 tons of sewage were treated daily. The high NNIs level at T4 (Ashi River) was attributed to the flow from many villages and towns in Harbin and several WWTPs along the river discharging sewage. 
The Ashi River has many tributaries, a wide area of arable land, and heavy pesticide use [49]. Site T3 is situated 1 km upstream of the inlet of the Majiagou River, a tributary of the south bank of the Songhua River. This tributary is an inner-city river formed by rain-collecting, which flows through four administrative districts of Harbin City: Pingfang District, Xiangfang District, Nangang District, and Daowai District. High population density, intensive human activity, and the presence of multiple WWTPs are the main characteristics of the abovementioned districts. The concentration of target chemicals in the effluent of the WWTP may be influenced by the physicochemical properties of NNIs and the efficacy of the WWTP process. Previous reports have been confirmed the low removal efficiency of NNIs from wastewater by conventional treatment procedures [31,73]. Hence effluents from WWTPs are important point sources of NNIs in the water environment. Compared to other tributaries, the Yunliang River (site T1) flows through a region mainly dominated by agricultural production. However, it is noteworthy that S 7 NNIs in the Yunliang River (49.8 ng L -1 ) is similar to that found in the mainstream (mean 40.1 ng L -1 ), indicating that the confluence of this tributary does not pose a serious threat to the mainstream of the Songhua River. As indicated above, intensive human activities and sewage outfalls, compared with agricultural activities, might be the major causes for the high levels of NNIs in water bodies in this region. For example, NNIs have been extensively applied to human activities such as horticulture, urban landscaping, household pest bait, and pet flea control [1,31,73]. In addition, NNIs can enter the human body through food consumption and then enter the urban sewage system via metabolites such as urine. In the sediments, there was no significant difference in NNIs concentrations between tributaries and mainstream (p > 0.05). Four NNIs were detected at the T3 sampling site, and three were detected at the other sediment sampling sites, which may be attributed to the higher TOC content at T3. The relationship between NNIs and TOC in sediments was analyzed using Spearman correlation (Fig. S2). ACE was not studied because it was only detected at site T3. In sediments, the concentration of IMI (r ¼ 0.82, p < 0.05), CLO (r ¼ 0.62, p < 0.05), and S 4 NNIs (r ¼ 0.82, p < 0.05) had a significant positive correlation with TOC, while there was no correlation between THM and TOC. Early studies suggest a positive correlation with the organic matter content of the soil/sediment in the sorption capability of NNIs, i.e., the sorption capacity of NNIs is enhanced when the organic matter content increases [74]. Sediment-water exchange Sediment-water exchange behavior plays an important role in affecting water quality and the fate of NNIs. We focused on three NNIs, including IMI, THM, and CLO, coexisted in both water bodies and sediments. The analysis results (Fig. S3) showed that the ff values of IMI, THM, and CLO were greater than 0.9, indicating the tendency to diffuse into the water from the sediment for the three pesticides. Because of their high solubility and low K ow , NNIs cannot be prevented from entering the water body from sediment. 
In addition, the ff values (range from 0.997 to 0.999) of the three NNIs fluctuated slightly, indicating that the similar sediment-water exchange mechanisms of the individual NNIs across the Harbin section of the Songhua River, with no obvious spatial variability. Consequently, IMI, THM, and CLO from various pollution discharge sources entered the Songhua River, where they were enriched in the sediment through processes such as sedimentation. Meanwhile, the sediment stored NNIs can re-enter into the water through molecular diffusion process, causing secondary pollution of the water body. The concentration ratio measures the equilibrium of the partition of chemicals and reflects the difference in the forces acting on the substances between the two phases. The ratios of IMI, THM, and CLO concentrations in the water to those in the sediment (C w /C s , g$L -1 ) were in the range of 4.68e84.1, 28.3e254, and 19.2e118, respectively. These concentration ratios indicate that NNIs are less concentrated in the sediment, have weaker interaction with the sediment, and are easily released from sediment into the water. These results are consistent with those reported by Yi et al. [31] on the sediment-water partition coefficients in the Pearl River, which found that NNIs are mostly distributed in the water bodies and are easily transferred with runoff or riverine transport. Therefore, the historical accumulated NNIs in sediment can be released into water bodies, with the sediment acting as a secondary source. Table 2 demonstrates the daily intake of NNIs in the form of drinking water for different age groups. The largest intake of NNIs was from THM in all age groups, indicating the highest risk from THM among all the different NNIs. From the perspective of the different age groups, exposure content via drinking water was in the order of infants > toddlers > children > adults > teenagers. The maximum daily intake of the total NNIs in infants (31.9 ng kg -1 bw$d -1 ) was about five times higher than that of teenagers (5.69 ng kg -1 bw$d -1 ). Infant exposure levels are higher due to their higher food and fluid (mushy complementary foods such as vegetables, fruits, eggs, and cereals) intake per unit body-weight basis compared to the other age groups. Similar EDI of NNIs for infants have been found in breast milk (40.4 ng kg -1 bw$d -1 ) in Heilongjiang Province [78]. In China, the mean concentration of all NNIs in breast milk was 161 g L -1 , while the mean value in Heilongjiang Province was 181 g L -1 [78]. Overall, the estimated maximum SIMI eq EDI (infants: 31.9 ng kg -1 bw$d -1 ) was three orders of magnitude lower than the U.S. EPA recommended acceptable daily intake of IMI (57000 ng kg -1 bw$d -1 ). In general, all pathways of ingestion (both food and water), contact with skin, inhalation, and non-dietary ingestion can cause the exposure risk to human health, and thus exposure to NNIs through drinking water consumption is underestimated in the current study. However, the high solubility of NNIs and limitation of studied environmental media (water and sediment) promoted us to only assess the exposure risk through the drinking water pathway. Although these results suggested the risk posed by parent compounds through drinking water was low, the risk of NNIs metabolites cannot be ignored. For example, IMI-olefin is ten times more toxic than IMI, and desnitro-IMI binds more than 300 times to vertebrate nAChRs than IMI [79]. In addition, Marfo et al. 
[14] have obtained urine from 85 Japanese volunteers and found a correlation between urinary concentrations of N-desmethyl-acetamiprid and typical symptoms such as recent memory loss, finger tremors, generalized fatigue, abdominal pain, headache, and chest pain. Therefore, although the SIMI eq EDI was lower than the IMI guideline, the potential health risks of maternal metabolites, particularly in vulnerable populations such as infants and pregnant women, need further attention. In addition, the human body can be exposed to NNIs in various forms, including dietary intake, soil/dust exposure, and respiratory intake, the consequences of which cannot be overlooked, especially dietary intake [43,80]. Potential risk assessment to aquatic ecosystem Existing studies have shown that residual NNIs in rivers can adversely affect the ecological environment of waters, such as effects on species diversity and structure of aquatic organisms [30]. In this study, the ecological risk of aquatic species in the Harbin section of the Songhua River was assessed using the SSD model with input of the toxicity data of imidacloprid to aquatic organisms at different trophic levels. Judging from the SSD curve (Fig. 3), aquatic organisms with lower trophic levels are more sensitive to NNIs and more vulnerable to be harmed. The harmed low trophic level species can cause damage to the stability and balance of the aquatic ecosystem because low trophic level species in the aquatic body are situated at the bottom of the food chain, and they provide food and nutrients to predators or higher trophic level species. For example, in terms of the acute risk from IMI (Fig. 3a), the most sensitive species is Epeorus longimanus (mayfly family) at the lower trophic level with acute toxicity at IMI eq concentrations of 650 ng L -1 in water, and the least sensitive species is Labeo rohita (a fish) at the higher trophic level with an acute risk concentration of 550000 ng L -1 . In terms of the chronic risks from IMI (Fig. 3b), the most sensitive species is Caenidae (mayfly family) with chronic toxicity at IMI eq concentrations of 400 ng L -1 in water, and the least sensitive species is Labeo rohita, consistent with the acute risk, with a sensitivity value concentration of 120000 ng L -1 . Notably, algae, despite being at a lower trophic level, are less sensitive to IMI than mayfly organisms because NNIs act as an insecticide. The acute HC 5 value of IMI for aquatic organisms in this study was 355 ng L -1 (95% confidence interval: 46.1e2742 ng L -1 ), and the chronic HC 5 value was 165 ng L -1 (95% confidence interval: 26.0e1047 ng L -1 ). When the residual concentrations exceeded the above threshold values, adverse effects can occur on more than 5% of aquatic species. The IMI eq concentrations in all samples in the Harbin section of the Songhua River ranged from 178 to 838 ng L -1 (Fig. S4). The IMI eq concentrations in the mainstream were lower than the acute HC 5 value, while those in the tributaries exceeded this value except at site T1. In contrast, all surface water samples exceeded the chronic HC 5 values. The above results indicate that some aquatic organisms in the water of the Harbin section of the Songhua River are exposed to chronic/acute risk of NNIs, especially for aquatic species at lower trophic levels. Conclusions NNIs were detected with high frequency and at high levels in the Harbin section of the Songhua River, which indicated that NNI contamination was prevalent in the aquatic environment of the region. 
Among them, IMI, THM, and CLO were ubiquitous in both water bodies and sediments. The sediment-water exchange process showed no obvious spatial variability: although NNIs can accumulate in the sediment through sedimentation and diffusion between water and sediment, they can then re-enter water bodies through secondary release, resulting in secondary pollution of the water. The potential risks to humans and aquatic organisms were assessed through relative potency factors and species sensitivity distributions. Among all age groups, the estimated daily intake of NNIs from drinking water consumption was significantly lower than the cRfD of IMI recommended by the U.S. EPA. According to our ecological risk assessment, aquatic organisms at lower trophic levels are more vulnerable to the chronic/acute risks posed by NNIs, which can destabilize the balance of the aquatic ecosystem. These data support further regulatory action to assess related risks and protect human health and terrestrial and aquatic ecosystems, and long-term monitoring programs are urgently needed to manage NNI risks in China.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
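To make the exposure and risk calculations described in the methods concrete, the sketch below reproduces the relative potency factor (RPF), IMIeq, and EDI steps (Eqs. (2)-(4)) and the rank-to-proportion step used to build the SSD curve (Eq. (5)). The cRfD values other than that of IMI, the daily water ingestion rate, and the toxicity values are placeholders rather than the entries of Tables S3-S5, and the i/(n+1) plotting position is a common convention assumed here, not quoted from the paper.

```python
import numpy as np

# Chronic reference doses (ng per kg bw per day). The IMI value is the U.S. EPA
# figure quoted in the text; the others are placeholders, not Table S3 entries.
cRfD = {"IMI": 57_000, "THM": 12_000, "CLO": 98_000, "ACE": 71_000}
rpf = {k: cRfD["IMI"] / v for k, v in cRfD.items()}     # Eq. (2): RPF_i = cRfD_IMI / cRfD_i

def imi_eq(conc):
    """Eq. (3): imidacloprid-equivalent concentration (ng/L) of one water sample."""
    return sum(conc[n] * rpf[n] for n in conc)

def edi(imieq, dir_l_per_kg_d, absorption=1.0):
    """Eq. (4): estimated daily intake via drinking water (ng per kg bw per day)."""
    return imieq * dir_l_per_kg_d * absorption

sample = {"IMI": 22.4, "THM": 30.7, "CLO": 3.0, "ACE": 2.5}    # ng/L, illustrative only
print(edi(imi_eq(sample), dir_l_per_kg_d=0.15))                # 0.15 L/kg/d is a placeholder DIR

# Eq. (5): rank toxicity values and convert ranks to plotting proportions for the SSD.
tox = np.sort(np.array([650.0, 1_200.0, 5_000.0, 20_000.0, 550_000.0]))  # illustrative values, ng/L
proportions = np.arange(1, tox.size + 1) / (tox.size + 1)   # rank i -> i/(n+1), a common convention
print(list(zip(tox, proportions)))
```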
2021-10-18T17:00:45.317Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "81d40ef61923006461b4c014bc645101a7b94fc6", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ese.2021.100128", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ded7a30140e00c1059ec085de0ea07917f8d0658", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Medicine", "Geography" ] }
236534947
pes2o/s2orc
v3-fos-license
Refinements of quantum Hermite-Hadamard-type inequalities

Introduction

The Hermite-Hadamard inequality discovered by C. Hermite and J. Hadamard (see, e.g., [1], [2, p. 137]) is one of the most well-established inequalities in the theory of convex functions, with a geometrical interpretation and many applications. These inequalities state that if f : I → ℝ is a convex function on the interval I of real numbers and ω1, ω2 ∈ I with ω1 < ω2, then

f((ω1 + ω2)/2) ≤ (1/(ω2 − ω1)) ∫_{ω1}^{ω2} f(ϰ) dϰ ≤ (f(ω1) + f(ω2))/2.    (1)

Both inequalities hold in the reversed direction if f is concave. We note that the Hermite-Hadamard inequality may be regarded as a refinement of the concept of convexity, and it follows easily from Jensen's inequality. The Hermite-Hadamard inequality for convex functions has received renewed attention in recent years, and a remarkable variety of refinements and generalizations have been studied. On the other hand, quantum calculus, sometimes called calculus without limits, is equivalent to traditional infinitesimal calculus without the notion of limits. In the field of q-analysis, many studies have recently been carried out. It has applications in numerous areas of mathematics such as combinatorics, number theory, basic hypergeometric functions, and orthogonal polynomials, and in other sciences such as mechanics, the theory of relativity, and quantum theory [3-7]. Apparently, Euler invented this important branch of mathematics when he used the q parameter in Newton's work on infinite series. Later, the q-calculus was first given by Jackson [3]. In 1908-1909, the general form of the q-integral and the q-difference operator was defined by Jackson [6]. In 1969, Agarwal [8] defined the q-fractional derivative for the first time. In 1966-1967, Al-Salam [9] introduced a q-analog of the q-fractional integral and the q-Riemann-Liouville fractional integral. In 2004, Rajkovic gave a definition of the Riemann-type q-integral, which was generalized to the Jackson q-integral. In 2013, Tariboon introduced the D_{q,ω1}-difference operator [10]. In recent years, because of the importance of convexity in numerous fields of applied and pure mathematics, it has been significantly investigated.
The theory of convexity and inequalities are strongly connected to each other, therefore, various inequalities can be established in the literature which are proved for convex, generalized convex, and differentiable convex functions of single and double variables, see, for example, [10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29]. The general structure of this paper consists of five main sections including Introduction. In Section 2, we give some necessary important notations for concept q-calculus and we also mention some related works in the literature. In Section 3 we present some new Hermite-Hadamard-type inequalities for q ω 2 integrals. Some refinements of quantum Hermite-Hadamard-type inequalities are proved in Section 4. We also examine the relation between our results and inequalities presented in the earlier works. Finally, in Section 5, some conclusions and further directions of research are discussed. We note that the opinion and technique of this work may inspire new research in this area. Preliminaries of q-calculus and some inequalities In this section, we present some required definitions and related inequalities about q-calculus. We have to give the following notation which will be used many times in the following sections (see [7]): 1 2 is characterized by the expression: if it exists and it is finite. New Hermite-Hadamard-type inequalities for q ω 2 -integrals In this section, we prove two new quantum Hermite-Hadamard inequalities for q ω 2 -integrals. Proof. We can write the equation of the tangent line for the function F at the point follows: By Definition 4, we get which gives the first inequality in (5). The second inequality is the same as in Theorem 3. □ Remark 1. If we take the limit → − q 1 in Theorem 4, then the inequalities (5) reduce to (1). Proof. Similar way as in Theorem 4, we can write tangent line for the function F at the point  as follows: By Definition 4, we get This gives the first inequality in (6). The second inequality is the same as in Theorem 3. □ Main results In this section, we present the refinements of quantum Hermite-Hadamard inequalities for q ω 2 -integrals. ω ω ϰ, , By (9) and (10), we have the first part of (7). □ Remark 3. If we take the limit → − q 1 in Theorem 6, then the inequalities (7) reduce to the following inequalities: Remark 4. If we take the limit → − q 1 in Corollary 1, then (11) reduces to the inequalities which shows that ϕ is convex on [ ] 0, 1 . By applying Theorem 3 for the convex function ϕ on [ ] 0, 1 , we have the inequalities That is, which are given in [33]. Proof. The proof of this theorem follows a similar procedure to that in Theorem 9 by using Theorem 4. □ are valid for all ∈ [ ] t 0, 1 . Proof. The proof of this theorem follows a similar procedure to that in Theorem 9 by using Theorem 5. □ Remark 8. If we take the limit → − q 1 , then the inequalities (17), (20) and (21) which are given in [33].
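As a quick numerical sanity check of the classical inequality (1), to which the quantum inequalities above reduce in the limit q → 1−, the following sketch verifies both bounds for a sample convex function; the function and interval are arbitrary choices, not taken from the paper.

```python
# Numerical check of the classical Hermite-Hadamard inequality (1) for a
# convex function f on [w1, w2]; the quantum results above reduce to this
# as q -> 1-.
from scipy.integrate import quad

f = lambda x: x**4          # convex on the real line
w1, w2 = 0.5, 2.0

mean_value = quad(f, w1, w2)[0] / (w2 - w1)
left  = f((w1 + w2) / 2)            # midpoint bound
right = (f(w1) + f(w2)) / 2         # endpoint (trapezoidal) bound

assert left <= mean_value <= right
print(left, mean_value, right)      # approximately 2.4414, 4.2625, 8.0313
```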
2021-08-01T13:28:39.897Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "1427b90aee9789e6662b1ecb9c418d72f3d41e62", "oa_license": "CCBY", "oa_url": "https://www.degruyter.com/document/doi/10.1515/math-2021-0029/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "5bb4e1cbc2237d39af6d21299a0c7143bde3f711", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [] }
257115808
pes2o/s2orc
v3-fos-license
Vortices as fractons Fracton phases of matter feature local excitations with restricted mobility. Despite the substantial theoretical progress they lack conclusive experimental evidence. We discuss a simple and experimentally available realization of fracton physics. We note that superfluid vortices form a Hamiltonian system that conserves total dipole moment and trace of the quadrupole moment of vorticity; thereby establishing a relation to a traceless scalar charge theory in two spatial dimensions. Next we consider the limit where the number of vortices is large and show that emergent vortex hydrodynamics also conserves these moments. Finally, we show that on curved surfaces, the motion of vortices and that of fractons agree; thereby opening a route to experimental study of the interplay between fracton physics and curved space. Our conclusions also apply to charged particles in a strong magnetic field. Fractons are phases of matter featuring particles with restricted mobility and represent a new paradigm of quantum condensed matter physics; but observing them experimentally is a challenge. Here, the authors demonstrate a simple platform for the realisation of fracton physics with vortices of a two-dimensional superfluid. F racton phases of matter are characterized by the presence of immobile or partially mobile local excitations. The constraints on excitation mobility stem from the conservation laws of multipole moments of the charge density [1][2][3] . Phases that support fracton excitations were first discovered in exactly solvable quantum lattice models [4][5][6] . One systematic approach to characterization and classification of fracton phases is based on tensor 1,2,7-12 and multipole gauge theories (MGT) 3,13 . Recent years have witnessed a significant interest in development and classification of phases of quantum matter supporting fracton excitations , with possible applications ranging from quantum memory to quantum elasticity and quantum gravity. For recent reviews see 44,45 . Despite substantial theoretical progress and several proposals for experimental realization of the fracton physics 37,[46][47][48] no conclusive experimental evidence of fracton physics exists. In this note, we point out that fracton physics is exhibited by superfluid vortices that have been experimentally observed for many decades. We show that vortices in two spatial dimensions share the mobility constraints with the traceless scalar charge theory (TSCT), which is a particular model of particles with restricted mobility. We review the Hamiltonian formulation of the vortex dynamics and show that it manifestly conserves dipole and (trace of) quadrupole moments of vorticity. In superfluids, the vorticity of individual vortices is quantized and locally conserved, which leads to identification of vorticity with the scalar charge. These conservation laws imply that isolated vortices are immobile, while vortex dipoles move perpendicular to their dipole moment. Both vortices and their dipoles can be readily created and studied experimentally in superfluid Helium 58 , Bose-Einstein condensates 59,60 , polariton superfluids 61,62 , and non-linear media 63 . We then consider a hydrodynamic limit where the number of vortices becomes large; and collective, hydrodynamic description is applied to the vortices themselves. Remarkably, the resulting hydrodynamics admits a Hamiltonian formulation; with Poisson brackets realizing the classical w ∞ algebra 64 . 
We show that vortex hydrodynamics is also equivalent to scalar charge theory and provide a microscopic collective field theory expression for the rank-2 symmetric current. Finally, we discuss the behavior of vortices and fractons on curved manifolds, which can be realized as curved 4 He films. Results and discussion Vortices. We consider a two-dimensional incompressible ideal fluid. It is described by the Euler equations where P is the pressure and u i is the velocity field. The combination ∂ 0 + u i ∂ i is known as the material derivative. The incompressibility condition implies that ∂ i u i = 0. Taking the curl of (1) we obtain the Helmholtz equation where ω = ϵ ij ∂ i u j is the vorticity. Equation (2) admits solutions where the vorticity is concentrated in a finite number of point vortices. The complex velocity field u z = u 1 − iu 2 takes form u z ðzÞ ¼ Ài where z α ðtÞ ¼ x α 1 ðtÞ þ ix α 2 ðtÞ (we will switch between complex and Cartesian coordinates at will) is time-dependent position of the αth vortex and 2πγ α is its circulation; while γ = |γ α | is the vortex strength. We have assumed that vorticity is quantized in units of γ, which is the case in superfluids 58 . Remarkably, the vortex coordinates x α i ðtÞ form a Hamiltonian system 65 where α, β = 1, 2, …, N label the vortex strength. We refer the reader to 66,67 for an in-depth review of the vortex systems. Dynamical system (4)-(5) also describes charged particles moving in a strong magnetic field, in the limit of infinite cyclotron frequency, or equivalently, on the lowest Landau level. Consequently, all our results apply verbatim to the charged plasma in a strong magnetic field (see Supplementary Discussion for the details). In dealing with (4) and (5) it is useful to use the complex coordinates z α . In complex notations, the only non-trivial Poisson bracket takes the form 65 The equations of motion are 65 It is worth emphasizing that H is not just the potential energy. Due to the non-trivial commutations relations between z α and z α , H can be viewed as the kinetic energy. Conservation laws. Hamiltonian H is translation and rotation invariant. The corresponding integrals of motion are known as impulse, Pi and angular impulse, L 67,68 . They are given by We recognize in Eq. (8) that impulse is related to the dipole moment of vorticity D i (also known as center of circulation), while angular impulse corresponds the trace of the quadrupole moment of vorticity, Q ij (also known as moment of circulation), according to Together, the quantities P i , L, D i , Q ij form a multipole algebra where we have introduced the total vortex strength Thus, the vortices are equivalent to a TSCT; where the total charge as well as dipole and trace of the quadrupole moments are conserved 1 . Isolated charges are immobile; while isolated dipoles move perpendicular to their dipole moment. We emphasize that the conservation of dipole and trace of quadrupole moment does not originate from an internal symmetry 3 as in all previously studied cases with Z-valued charge. Instead, it originates from spatial symmetries and non-commutativity of the configuration space. We surmise that there is a deeper relation between noncommutative field theories and fracton physics. Traceless Scalar Charge Theory (TSCT). We briefly pause to discuss some properties of the TSCT. More details can be found in 1,45 . TSCT describes particles that conserve a U(1) charge as well as dipole and trace of the quadrupole moments. 
These conservation laws are succinctly summarized by the following equations where ρ is the density of the U(1) charge and J ij is the symmetric, traceless rank-2 tensor. The indices are raised with the spatial metric g ij , which is assumed to be flat and rotationally invariant g ij = δ ij , unless specified otherwise. Denoting j i = ∂ j J ij we observe that ρ satisfies ordinary continuity equation ∂ 0 ρ + ∂ i j i = 0; confirming the charge conservation. Furthermore, we can find that dipole moment and trace of quadrupole moment are conserved, by multiplying Eq. (12) with x i and x i x i respectively; and integrating over space. These conservation laws imply that charge dipoles can only move perpendicular to their dipole moment 1 . One may wonder what kind of microscopic theory would support Eq. (12) as the conservation laws. In the present paper, we argue that vortices in incompressible superfluid obey these conservation equations. Furthermore, using the ideas from 69 it is clear that the following Lagrangian fits the bill where … stands for the higher-order terms and Φ is a complex scalar. The derivative operators D I (Φ) are defined as where σ I ij are the Pauli matrices. A restricted version of the Lagrangian (15) can be used to describe the defects in twodimensional elasticity 49 . For generic values of g I ; g 0 I , the theory (15) is invariant under C 4 , but not SO(2). Though it is SO(2) invariant for the special case of g 1 = g 2 and g 0 More importantly, the theory is invariant under the following transformation where the parameters λ, λ k , ζ are arbitrary. Noether's theorem then implies that the corresponding conservation laws are precisely (12). The density is given by the usual expression ρ = Φ ⋆ Φ, while the general expression for the current is quite lengthy and not enlightening. We discuss the chiral version of the above theory in the Supplementary Discussion. Mobility constraints. Conservation laws (8) imply that motion of many vortices is constrained to preserve the dipole and quadrupole moment. Moreover, since the conserved quantities (H, D i D i , δ ij Q ij ) are in involution, the problem of N vortices is integrable for N ≤ 3. Other typical cases are chaotic 67 . We discuss the "fractonic" motion of vortices next. A single or well-isolated vortex is immobile. Analogously to fractons, the mass of an isolated vortex is not well-defined. A broad class of definitions 70 leads to a diverging mass, which agrees with fracton ideas. Dipole consisting of two vortices with opposite vorticities moves in a straight line perpendicular to its dipole moment. At low temperatures vorticity-neutral systems "condense" into a gas of neutral dipoles 71 . The dipole of two vortices with the same vorticities moves in a closed orbit around their "center of vorticity", while keeping the distance between the two vortices constant. Motion of dipole is illustrated in Fig. 1. Relative distances can only change if the number of vortices is N ≥ 3 67 . The quadrupole of two vortex-dipoles exhibits a variety of complex dynamics. One common type of interactions (particularly at low temperature) is scattering between two dipoles as shown in Fig. 1. As a result of scattering a vortex dipole makes a π/2 turn, which agrees with phenomenology of TSCT. . Vortex crystals can move as rigid objects, in which case they are referred to as relative equilibria, or can be stationary. Such configurations explore a very small fraction of the phase space. 
This is immediately obvious since for a vortex system phase space coincides with the a b c d Fig. 1 Motion of point vortices. a An isolated vortex is immobile and corresponds to a fracton. b A neutral dipole moves perpendicular to its dipole moment-it is a "lineon". c A charge 2-dipole moves around the center of vorticity. In fractonic context this motion is also possible, albeit never discussed: a pair of identical charges can rotate by constantly emitting dipoles that cancel. d Scattering of two dipoles of opposite dipole moments. Upon scattering the dipole makes a π/2 turn. configuration space. Vortex crystals emerge experimentally after relaxation of highly turbulent two-dimensional flows 73,74 . It is tempting to compare vortex crystals to the Hilbert space fragmentation seen in quantum dipole conserving systems 48,[75][76][77][78] . There, the Hilbert space "shatters" into many disconnected subspaces; within each such subspace either integrability or thermalization is possible. Mobility constraints combined with the phase space reduction lead to an exotic statistical mechanics of vortices 79,80 . In particular, above certain critical energies vortices experience "negative temperature" 79,81 , which follows from the structure of the phase space. At negative temperature the vortices of the same vorticity tend to clamp together, which nicely corresponds to gravitational attraction of fractons discussed in 24 . Vortex crystals may be an obstruction to ergodicity: Clusters of vortices take a very long time to merge 80 . To the best of our knowledge, the ergodicity of vortex system is still an open problem 82 . Vortex hydrodynamics. Next we would like to consider the limit where the number of vortices is very large. Due to the chaotic behavior and strong interactions between the vortices, this limit admits a description in terms of an emergent hydrodynamics 64 . We will show that in hydrodynamic limit the dipole and trace of the quadrupole moments are conserved. These conservation laws will be made manifest by re-writing the continuity equation in the form (12), where the conserved U(1) density is related to the vorticity ρ = (2πγ) −1 ω. We would like to emphasize one subtle difference between traditional TSCT and vortices: The former is non-chiral, while the latter is chiral. In TSCT a dipole moves perpendicular to its dipole moment; while for a vortex dipole, the dipole moment and the direction of motion form a right pair. Vortex hydrodynamics for the chiral flow (i.e., when all vortices are of the same vorticity, γ α = γ) was derived by Wiegmann-Abanov in 64 . The continuum limit of the vortex Hamiltonian (4) is where v i is the vortex velocity and η ¼ γ 2 4 . Vortex fluid is incompressible: ∂ i v i = 0 and v i is completely determined by the density through 64 The Poisson brackets form the classical w ∞ algebra 83 . Brackets between velocity and density are deduced We are interested in computing the equation of motion for the density ρ Direct calculation gives the continuity equation where j k = ρv k . This is consistent with Helmholtz equation (2). The consistency is non-trivial since (2) includes the material derivative with u i , while the material derivative contains v i in (23). The equivalence of (2) and (23) is established using the relation between u i and v i 64 Using the identity with either (2) or (23) we find The anti-symmetric part of J ij drops out from (12). In the chiral case, an equivalent relation was derived in 84 . 
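To make the finite-N statements and the motions sketched in Fig. 1 easy to check numerically, the Kirchhoff point-vortex equations can be integrated directly. The Python sketch below is an illustration written for this discussion, not code from the paper: it uses the standard complex-velocity form consistent with the expression for u_z = u_1 − i u_2 quoted earlier (each vortex is advected by all the others, with no self-interaction), and the time step, circulations, and initial positions are arbitrary illustrative choices. Running it shows that a single vortex stays put, that a vorticity-neutral dipole translates rigidly and perpendicular to its dipole moment, and that the dipole moment D = Σ_α γ_α z_α is conserved along the flow.

import numpy as np

def vortex_velocity(z, gamma):
    # dz_a/dt = conj( -1j * sum_{b != a} gamma_b / (z_a - z_b) ): advection by all other vortices.
    dz = np.zeros(len(z), dtype=complex)
    for a in range(len(z)):
        s = sum(gamma[b] / (z[a] - z[b]) for b in range(len(z)) if b != a)
        dz[a] = np.conj(-1j * s)
    return dz

def integrate(z0, gamma, dt=1e-3, steps=2000):
    # Classical 4th-order Runge-Kutta for the Kirchhoff equations.
    z = np.array(z0, dtype=complex)
    g = np.array(gamma, dtype=float)
    for _ in range(steps):
        k1 = vortex_velocity(z, g)
        k2 = vortex_velocity(z + 0.5 * dt * k1, g)
        k3 = vortex_velocity(z + 0.5 * dt * k2, g)
        k4 = vortex_velocity(z + dt * k3, g)
        z = z + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

if __name__ == "__main__":
    # An isolated vortex does not move (the "fracton").
    print(integrate([0.0 + 0.0j], [1.0]))
    # A vorticity-neutral dipole translates rigidly; its dipole moment stays equal to 1j here.
    z = integrate([0.0 + 0.5j, 0.0 - 0.5j], [1.0, -1.0])
    print(z, "dipole moment:", 1.0 * z[0] - 1.0 * z[1])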
Emergent hydrodynamics for vortices of positive and negative vorticity was developed by Yu-Bradley 85 . The conservation of the impulse and angular impulse holds in their model as well. The number and charge (vortex-sign) densities are treated separately in this case. Note that the conservation laws discussed here apply to the charge density, not the number density. We derive the tensor current based on their hydrodynamics in the Supplementary Discussion. We will discuss an independent collective field theory derivation of the rank-2 conservation law (12) for arbitrary number of vortices next. Collective field theory of vortices. We now turn to the collective form of (7). Vorticities are allowed to take both positive and negative values: γ α = ±γ. Density and current fields are defined as follows We will need the complex notation j z = j 1 − ij 2 and the δfunction identity The time derivative of the density is given by Using (7) this is transformed into where we have introduced a traceless symmetric tensor current and J z z ¼ J zz . It is crucial that in (31) the second order poles cancel. In Cartesian components the symmetric tensor current is given by where we introduced the vortex number density n(z). This is the central result of the present work: The continuity equation takes form (12). The above derivation is general and applies to hydro with vortices of both kinds present. In particular, it applies to the case when total vorticity is 0. We derive (32) in the Supplementary Discussion. Curved space. Symmetric tensor gauge theories do not remain gauge invariant on a curved space 50 . Furthermore, the conservation law of dipole moment cannot remain unchanged on a curved space. Below, we show that, in curved space, the dynamics of vortices and the mobility constraints change. Vortices on a curved space have been studied in [86][87][88] and can be experimentally realized in thin 4 He films. Vortex hydrodynamics of chiral flows was generalized to curved spaces in 84 . Vortex problem on a surface of a sphere is also relevant for geophysical and atmospheric applications. The Helmholtz equation on a curved surface takes form 84 where ∇ i is a covariant derivative, R is the Ricci curvature and s À 1 2 is the geometric spin of a vortex. Eq.(34) also takes form (12) with slightly modified J ij 84 Note that the last term in (35) contributes to the equations of motion only when curvature is inhomogeneous. We can draw the following conclusion from (34)- (35). On a surface of constant curvature an isolated vortex remains immobile 87,89 , which is consistent with 50 . A dipole moves along a geodesic that is perpendicular to the dipole moment; which is consistent with the corresponding fracton observations made in 90 . On a surface of variable curvature an isolated vortex does move: the dipole conservation law is broken and fractonic property is lost; in agreement with 50 . The potential force acting on an isolated vortex is obtained by differentiating the Robin function 91 . The dipole moves along a geodesic in the general case 92 . Conclusions. We have established an equivalence between vortex dynamics in two-dimensional superfluids and TSCT. We have shown that vortices provide a Hamiltonian realization of fracton dynamics for any finite number of vortices as well as in the hydrodynamic limit. Thus superfluid vortices provide a readily available platform for experimental realization of fracton quasiparticles. Vortices and vortex-dipoles are experimentally available with the present day technology. 
Another new platform may rely on chiral active fluids 93 . Similar conservation laws hold in three dimensions for vortex lines. We leave the exploration of higher dimensional case, discussion of more refined probes of fracton dynamics in superfluids and BECs, such as role of the trap and finite lifetime, generalization to chiral superfluids such as 3 He and many other open question to future work. Theory of vortices plays central role in statistical approach to turbulence 79 ; where the questions of ergodicity and validity of statistical mechanics are central 82 . It would be very interesting to see if fracton-inspired ideas can lead to new insight into quantum and classical turbulence as well as the problem of quantization of vortex dynamics. Finally, dynamics of electrons residing in the lowest Landau level is formally identical to that of vortices; consequently we expect applications of fracton-inspired ideas to the physics of fractional quantum Hall effect. Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. Code availability Code sharing is not applicable to this article as no code is developed during the current study.
2023-02-24T14:18:11.394Z
2021-03-08T00:00:00.000
{ "year": 2021, "sha1": "2aa6f892747ed4f2ad57a2405b80f7adddd5e285", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42005-021-00540-4.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "2aa6f892747ed4f2ad57a2405b80f7adddd5e285", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
267355658
pes2o/s2orc
v3-fos-license
THE EFFECTIVENESS OF GALACTAGOGUE CONTENT ON BREAST MILK PRODUCTION: A SCOPING REVIEW EFEKTIVITAS KANDUNGAN GALACTAGOGUE PADA PRODUKSI ASI: SCOPING REVIEW ABSTRACT Providing counseling to breastfeeding mothers and giving extras through the use of galactagogues to support the secretion process in breast milk production is one of the measures taken to boost breast milk production. This study aimed to investigate and evaluate previously published research on the effects of galactagogues on breast milk production. Inclusion criteria in this review included Indonesian or English language studies published in the last five years and focused on the efficacy of galactagogues on breast milk production. The structure of this scoping review is based on the PRISMA-ScR Checklist, as outlined by Arksey and O'Malley. Literature searches used three databases, namely Wiley Online Library, PubMed, and Science Direct; critical appraisal used the Joanna Briggs Institute (JBI) Appraisal Tool. There were 343 relevant articles, and 10 were selected according to the researcher's criteria. The review results indicate that galactagogue content can affect mothers' milk, depending on the duration and frequency with which mothers consume galactagogues during breastfeeding. It was concluded that galactagogue content increases breast milk production, as shown for banana flowers, Coleus amboinicus lour, and local foods and plants containing galactagogues. In addition, how long galactagogues are used and their consumption frequency affect breast milk production. INTRODUCTION The world plans to enhance children's growth, development, health, and survival by exclusively breastfeeding them for the first six months of their lives. The World Health Organization (WHO) and United Nations Children's Fund (UNICEF) recommend that newborns receive only breast milk for the initial six months of their existence. Breastfeeding should be maintained until the child reaches a minimum age of two years to decrease the risk of infant morbidity and mortality. In 2018, the World Health Organization recommended that women exclusively breastfeed their babies for the first six months of their children's lives. Therefore, for infants to start nursing within the first hour of their existence, they should only receive breast milk and no other food or liquids, including water. They must continue to nurse on demand or as often as the baby likes, but instead of using a bottle or pacifier, they should go directly to their mother (Aslamiah, 2021). As stated by the Ministry of Health in the Republic of Indonesia, in 2016, the percentage of babies in Indonesia aged 0 to 6 months who only get nutrition from their mother's breast increased compared to the previous year. However, markers still need to be met to achieve this goal successfully. In 2019, the proportion of infants in Indonesia who were exclusively breastfed during the initial six months of their existence stood at 67.74%, but the country has yet to reach a target of 80% (Kemenkes RI., 2021).
Several things can hinder a mother's ability to exclusively breastfeed her child, including a mother's fear of losing her breasts, a mother's inability to work outside the home, and so on.Many variables, including sociocultural elements, information aspects, the influence of imitation on close friends or boyfriends, psychological factors, maternal health factors, maternal physical factors, behavioral factors, and health workers all play a role in determining whether a mother chooses to breastfeed her child exclusively (Aslamiah, 2021). One of the supplementary techniques employed to boost milk production involves using galactagogues.Increasing the amount of breast milk produced and its production speed can be done using galactagogues.Various studies have shown that several food elements in Indonesia have advantages and function as galactagogues.It could serve as a solution to the failure of exclusive breastfeeding caused by insufficient secretion and production of breast milk.The study has been conducted in Indonesia (Gyamfi et al., 2021). Galactagogues are used to promote, maintain, and stimulate breast milk secretion.Galactagogues can be found in plant form or pharmaceutical drug form.Several factors need to be considered when using galactagogues, including efficiency, safety, and duration of use.Many galactagogues, including those found in drugs derived from plants or foods, are currently being used.These galactagogues have been proven to accelerate breast milk production (Sharma, 2021).Given these issues, the objective of this study is to ascertain whether the composition of galactagogues is beneficial or not in stimulating breast milk production while consumed by breastfeeding mothers. METHOD This study used a scoping review, combining several studies to synthesize and consolidate data comprehensively.It informs practices, programs, and policies and guides future research priorities (Matthew J. et al., 2021).The researchers used PRISMA-ScR as a reference for the literature review because PRISMA-ScR.The first is identifying the scope review question; the other four stages are as follows: 2. Determining whether these articles are relevant; 3. choosing the articles; 4. mapping out the data; 5. collating, condensing, and presenting the results (Matthew J. et al., 2021).Inclusion criteria in article selection are articles published within 5 years (2018)(2019)(2020)(2021)(2022), articles used in this review in English and Indonesian, articles discussing galactagogues on breast milk production, and original research.Exclusion criteria in this review are reviews, books, opinion articles, grey literature, inaccessible articles, and non-full text. The databases used to search for articles related to galactagogues on breast milk production were Wiley Online Library, PubMed, and Science Direct.All obtained articles were then entered into Mendeley software.Articles were searched using Boolean operators AND, OR or NOT and Truncation (*) as connectors in combining or excluding keywords for searching to obtain more focused and relevant results.The keywords used in the search process were effect AND galactagogues* AND breastfeed* AND breast milk production*.Researchers use Mendeley as reference management software in sorting out items like duplication, title selection, and filter processes described using systematic review meta-analyses (PRISMA) flowchart 2020 (Matthew J. 
et al., 2021), as follows: Mineral Water In the experimental group, mothers who had cesarean surgery demonstrated a significantly increased rate of breast milk flow on the 2nd (p=0.017) and 3rd days (p=0.005), as well as a higher volume of breast milk on the 2nd (p=0.005) and 3rd days (p=<0.001), in comparison to those in the control group. No treatment The study demonstrated the impact of the intervention involving the consumption of wake-up leaves on breast milk production, as evidenced by an independent t-test with a p-value of 0.010.Furthermore, the intervention involving the consumption of wake-up leaves also affected the health condition of postpartum mothers, as indicated by a p-value of 0.001. No treatment The breastfeeding process in the intervention and control groups showed a difference of 149.0, with a significant p-value of 0.01.It can be said that administering banana flowers (Musa Paradisiaca L.) can increase breast milk production. No control group The use of lactogogues is prevalent at 83.8%, with these substances typically prepared separately from regular household meals (59.4%) and consumed between one to three times daily (89.6%).Users often perceive their effectiveness within the first 24 hours of usage (98.5%).The most frequently used lactogogues encompass peanut/bean soup made with Bra Leaves (Hibiscus sabdariffa), hot black tea, Werewere/Agushi (Citrulus colocynthis) prepared with Bra leaves, and Abemudro, a polyherbal formulation.However, only a small fraction of nursing mothers, 13.2%, utilized lactogogues during their pregnancy. No control group The research results showed 26 types of herbal galactagogue plants based on traditional medicine consumed in preparations: tutuh, loloh, and tampel.The majority of respondents (82%) started therapy postpartum, and the duration of consumption was < 1 month (36%).As many as 89% of respondents have a shorter breastfeeding duration and increased breast milk volume when pumping.About 95% of respondents feel confident and have sufficient breast milk strength after consuming herbal galactagogues. A7 ( Breastfeeding mothers who consumed stone banana heart tea for seven consecutive days experienced a 30.85% increase in serum prolactin levels.A significant difference was observed in the serum prolactin levels between the intervention and control groups, with a p-value less than 0.05.Stone banana heart tea has a galactagogue effect that can increase serum prolactin levels during lactation. 
A8 ( Women indicated that they often received suggestions for herbal or dietary galactagogues from online sources (38%) or friends (25%).In contrast, General Practitioners primarily recommended pharmaceutical galactagogues (72%).Among all, domperidone was perceived as the most effective, with an average rating of 3.3 compared to other options, which ranged between 2.0 and 3.0.A9 ( Most respondents, 52 or 43.4 percent, used katuk leaves (Sauropus androgynus) to boost their breast milk production.This was followed by 38 respondents, or 31.6 percent, who consumed moringa leaves.Nine respondents, or 7.5 percent, consumed a mix of turmeric and tamarind, while three, or 2.5 percent, consumed turi leaves (Sesbania grandiflora).Four respondents, or 3.3 percent, consumed roasted corn, and the same number of participants, representing four percent, incorporated spinach into their diet.A1 0 (Tan et Data Item The researchers identified each article relevant to the review topic of the effectiveness of galactagogue substances on breast milk production.Consuming foods containing galactagogues is one approach that can be utilized to boost breast milk production.Galactagogue content is found in several types of plants, and some are even made into pharmacological drugs.Several researchers have proven the virtues and benefits of plants containing galactagogues for increasing breast milk production. Synthesis of Result A total of 343 articles were obtained based on the three databases.From the PubMed database, 231 articles were found, Wiley 63 articles, and Science Direct 49 articles.Then Mendeley was used to import each article.102 duplicate articles were excluded.In titles and abstracts, 198 articles did not match and could not display full text.After reading the article, there were 33 irrelevant articles: 9 had issues with intervention, 10 had population issues, and 14 did not match the study determined by the researcher.Ten relevant articles met the criteria for execution.Using a PRISMA Flowchart improves the quality of publication reports and forms the basis for other researchers' reporting. Selection Of Sources Of Evidence Based on the search results using the PICO framework keywords from three databases: PubMed, Wiley, and Science Direct.Then Mendeley was used to perform filtering procedures such as duplicates, abstracts, and completeness of article writing.The article selection flow uses a PRISMA flowchart to illustrate the stages of filtering the articles taken. Characteristics Of Sourses Of Evidence In the 10 relevant articles, several characteristics distinguish the articles, including the name of the country and research methods.Article characteristics based on countries are developing countries (90%), such as Thailand, Africa, Indonesia, and Malaysia, and developed countries (10%), namely Australia.Article characteristics based on research methods are Quasiexperimental (30%), Qualitative (20%), Randomized Controlled Trial (20%), and Cross Sectional (30%). Critical Appraisal Within Sources Of Evidence There are 10 articles taken in this study using different research methods.All ten articles received perfect scores from the Joanna Briggs Critical Appraisal Tools (JBI) questions, which are used to evaluate each article critically. Galactagogue Content In this review, several articles discuss galactagogues' efforts on breast milk production, including Banana Heart, Coleus amboinicus Lour, and local foods and herbal remedies. Banana Blossom (Musa x paradisiaca). 
Banana blossom (Musa x paradisiaca) is a plant that contains galactagogues widely used in several countries, one of which is in Thailand (Yimyam & Pattamapornpong, 2022) Banana blossom contains galactagogues with estrogenic properties that can stimulate the growth of alveolar mammae and increase serum prolactin levels, cortisol levels, total prolactin, and glycogen content (Ningrum et al., 2021). In article A1, it is stated that there is a relationship between the administration of banana blossom (Musa x paradisiaca) consumed by mothers undergoing Caesarean section in the intervention group having a level of breast milk flow on the 3rd day (p=<0.001).Banana blossoms have a higher level of breast milk flow in breast milk output than the control group given plain water (Yimyam & Pattamapornpong, 2022).It was mentioned in article A4 that breastfeeding liquid administration in the treatment group and control group had a difference value of 149.0 with a pvalue of 0.01, indicating significant differences between both groups as seen from child indicators.According to article A7, blood prolactin levels increased by 30.85% in breastfeeding mothers who drank tea from stone banana blossom for seven consecutive days.Drinking tea made from king banana flowers has been proven to have galactogenic effects, increasing blood prolactin levels.This effect is most noticeable in breastfeeding mothers (Okinarum et al., 2020). Coleus amboinicus lour. The leaves of the Indian borage plant (Coleus amboinicus lour), long believed by the Batak community in North Sumatra to enhance breast milk production, are also considered capable of augmenting breast size.There have been various strategies developed, both pharmacological and non-pharmacological, to increase the production of breast milk (Yuliani, 2021). In article A2, it is mentioned that the efficacy of giving Indian borage leaves (Coleus amboinicus lour) on a mother's milk secretion can be known by giving as much as 100 grams of Coleus amboinicus lour leaves with a frequency once a day for one week for postpartum mothers (Nasution et al., 2022).According to article A3, it was reported that the supplementation of Coleus amboinicus lour tea significantly raised prolactin levels (with a significance value of 0.014, p<0.05) and enhanced milk production (significance value of 0.046, p<0.05).Therefore, Indian borage has a significant impact on increasing both prolactin levels and breast milk production (Prahesti et al., 2020).This article is supported by further research stating that breastfeeding mothers who are given 150 grams of fresh Indian borage leaves have the potential to boost breast milk volume by as much as 65 percent between ages 14 and 28 days.Breastfeeding mothers with soup from 150 grams of fresh leaves can increase breast milk produced and baby weight at three to four months (Oktaviya et al., 2020). 
Food and ingredients Local The herbal tea concoction consumed by breastfeeding mothers from market concoctions with a mixture including sucrose, maltodextrin, 2.6% roselle extract, 0.5% L-ascorbic acid, 0.2% raspberry leaf extract, 0.2% fennel extract, 0.1% fenugreek extract, 0.1% goat's rue extract, and 0.02% fennel oil.Meanwhile, the herbal cumin and fennel were obtained by breastfeeding mothers from nearby sellers (Kurniati & Azizah, 2021).Moringa contains seven times more vitamin C than oranges, ten times more vitamin A than carrots, seventeen times more calcium than milk, nine times more protein than yogurt, fifteen times more potassium than bananas, and twenty-five times more iron than spinach (Sarni et al., 2020). In article A5, it is mentioned that most mothers use galactagogues contained in food and local ingredients to increase breast milk production (67.7%), proving that special foods and chosen ingredients are utilized to enhance breast milk production (Ali et al., 2020).Article A6 was obtained from 95% of respondents who consumed herbal galactagogues felt confident and had self-power after using herbal galactagogues on confidence in breastfeeding (Monika & Yunita, 2021). In article A8, it is said that Galactagogue administration, including giving beer yeast, fenugreek, and domperidone, 23% of domperidone administration experienced side effects compared to giving herbal galactagogues 3%.It proves that using domperidone has greater side effects than giving galactagogues herbally (McBride et al., 2021).In article A9, it is mentioned that a majority of mothers consume katuk leaves (Sauropus Androgynus) to boost their breast milk production, with moringa leaves being the second most popular choice, consumed by 38 respondents, which makes up 31.6% of the group (Wulandari & Wardani, 2020). The Effect of Consuming Galactagogue. Duration Previous findings have shown that mothers often fail to consume a balanced diet regulating carbohydrates, fats, vegetables, and fruits that contain galactagogues, which can affect the breastfeeding mother's milk production (Pratiwi & Srimiati, 2020).In articles A1, A2, A3, A4, A6, A7, and A8, it is mentioned that the duration of mothers consuming galactagogues in various studies varies from 7 days, 14 days, 30 days to 6-19 weeks.The recommended food composition includes galactagogues to enhance the secretion of protein, carbohydrates, and other galactagogues (Fitria et al., 2022). Frequency Breastfeeding mothers consume sweet cakes and drink sweet tea immediately after breastfeeding their babies (Kaliwile et al., 2019).In articles A1, A2, A3, A4, and A7, it is mentioned that the frequency of consuming galactagogues varies from once a day, twice a day, to three times a day, with different dosages in each study method.The correlation between a mother's diet and the production of breast milk is intimately intertwined, as the efficiency of breast milk production relies on the mother's nutritional intake.Therefore, better maternal nutrition can influence the quality of breast milk production (Sanima et al., 2017). LIMITATION OF THE STUDY The limitations of this scoping review study include the limited availability of articles on galactagogues, resulting in a small number of search results.Additionally, there is a limitation in the number of samples that used animals to measure the enhancement of breast milk production. 
CONCLUSIONS AND SUGGESTIONS Based on the articles obtained, there is an impact on enhancing breast milk production following the intake of galactagogues, such as banana hearts, Coleus amboinicus lour, local foods, and herbal remedies. Additionally, the length of time and regularity of galactagogue consumption also play a role in breast milk production, as does fulfilling the dietary requirements of nursing mothers. The intake of galactagogues over a period ranging from 7 to 30 days can influence an increase in breast milk production. However, each plant containing galactagogues has different effectiveness and dosage levels in enhancing breast milk production. Moreover, the more frequently a mother consumes foods that can affect breast milk production, like galactagogues, the more breast milk will be produced as her nutritional needs are met.
2024-02-01T16:37:52.947Z
2023-12-13T00:00:00.000
{ "year": 2023, "sha1": "b865dc5a4e6734f43db55f4ca1cbb51a81f9f21f", "oa_license": "CCBYSA", "oa_url": "https://aisyah.journalpress.id/index.php/jika/article/download/2381/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "062c0d060a5cbe619e205ef9fdb1df9412a059ca", "s2fieldsofstudy": [ "Medicine", "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
259894522
pes2o/s2orc
v3-fos-license
On Solé and Planat criterion for the Riemann Hypothesis There are several statements equivalent to the famous Riemann hypothesis. In 2011, Solé and Planat stated that the Riemann hypothesis is true if and only if the inequality ζ(2) · ∏_{q ≤ q_n} (1 + 1/q) > e^γ · log θ(q_n) holds for all prime numbers q_n > 3, where θ(x) is the Chebyshev function, γ ≈ 0.57721 is the Euler-Mascheroni constant, ζ(x) is the Riemann zeta function and log is the natural logarithm. In this note, using the Solé and Planat criterion, we prove that the Riemann hypothesis is true. Introduction The Riemann hypothesis is the assertion that all non-trivial zeros have real part 1/2. It is considered by many to be the most important unsolved problem in pure mathematics. It was proposed by Bernhard Riemann (1859). The Riemann hypothesis belongs to Hilbert's eighth problem on David Hilbert's list of twenty-three unsolved problems. It is one of the Clay Mathematics Institute's Millennium Prize Problems. In mathematics, the Chebyshev function θ(x) is given by θ(x) = ∑_{q ≤ x} log q, with the sum extending over all prime numbers q that are less than or equal to x, where log is the natural logarithm. Let's state a property for this function: θ(q_n) = ∑_{k=1}^{n} log q_k, where q_k is the kth prime number (we also use the notation q_n to denote the nth prime number). In mathematics, Ψ(n) = n · ∏_{q | n} (1 + 1/q) is called the Dedekind Ψ function, where q | n means the prime q divides n. We say that Dedekind(q_n) holds provided that ζ(2) · ∏_{q ≤ q_n} (1 + 1/q) > e^γ · log θ(q_n). Next, we have the Solé and Planat Theorem: the Riemann hypothesis is true if and only if Dedekind(q_n) holds for all prime numbers q_n > 3. There are several statements that hold without assuming the Riemann hypothesis condition. Proposition 1.5. Unconditionally on the Riemann hypothesis, there are infinitely many prime numbers q_n such that Dedekind(q_n) holds [7, Theorem 4.1 pp. 5]. The following property is based on natural logarithms. Putting all together yields a proof for the Riemann hypothesis using the Chebyshev function. 2 What if the Riemann hypothesis were false? Several analogues of the Riemann hypothesis have already been proved. Many authors expect (or at least hope) that it is true. However, there are some implications in case the Riemann hypothesis is false. Lemma 2.1. If the Riemann hypothesis is false, then there are infinitely many prime numbers q_n for which Dedekind(q_n) fails (i.e. Dedekind(q_n) does not hold). Proof. The Riemann hypothesis is false if there exists some natural number n such that Dedekind(q_n) does not hold. We know the bound [7, Theorem 4.2 pp. 5], where f was introduced in the Nicolas paper [5, Theorem 3 pp. 376]. When the Riemann hypothesis is false, then there exists a real number b < 1/2 for which there are infinitely many natural numbers x such that log f(x) = Ω₊(x^(−b)) [5, Theorem 3 (c) pp. 376]. According to the Hardy and Littlewood definition, this would mean that the corresponding lower bound holds for every possible positive value of k when b < 1/2. In this way, this implies that, if the Riemann hypothesis is false, then there are infinitely many natural numbers x such that log f(x) satisfies the bound; then there would be infinitely many natural numbers x_0 such that log g(x_0) > 0. In addition, if log g(x_0) > 0 for some natural number x_0 ≥ 5, then log g(x_0) = log g(q_n), where q_n is the greatest prime number such that q_n ≤ x_0; this follows from the definition of the Chebyshev function. Central Lemma Proof. We obtain that by Propositions 1.2 and 1.3. A New Criterion Theorem 4.1.
Dedekind(q n ) holds if and only if the inequality is satisfied for the prime number q n , where the set S = {x : x > q n } contains all the real numbers greater than q n and χ S is the characteristic function of the set S (This is the function defined by χ S (x) = 1 when x ∈ S and χ S (x) = 0 otherwise). Proof. When Dedekind(q n ) holds, we apply the logarithm to the both sides of the inequality: after of using the Lemma 3.1. Let's distribute the elements of the previous inequality to obtain that when Dedekind(q n ) holds. The same happens in the reverse implication. The Main Insight Theorem 5.1. The Riemann hypothesis is true if the inequality is satisfied for all sufficiently large prime numbers q n . Proof. For large enough prime q n , if Dedekind(q n+1 ) holds then after subtracting the value of log(1 + 1 q n+1 ) to the both sides. Thus, since log log θ(q n ) − log log θ(q n ) = 0. If we obtain that which means that Dedekind(q n ) holds by Theorem 4.1. Hence, it is enough to guarantee that log log θ(q n+1 ) − log log θ(q n ) − log(1 + 1 q n+1 ) ≥ 0 to assure that Dedekind(q n ) holds for a large enough prime number q n when Dedekind(q n+1 ) holds. Since there are infinitely many prime numbers q n+1 > 5 such that Dedekind(q n+1 ) holds, then we can guarantee that Dedekind(q n ) holds as well when holds for all pairs (q n , q n+1 ) of consecutive large enough primes such that q n < q n+1 , then we can confirm that Dedekind(q n ) always holds for all large enough prime numbers q n by Theorem 4.1. As result, if the inequality log log θ(q n+1 ) − log log θ(q n ) − log(1 + 1 q n+1 ) ≥ 0 is satisfied for all sufficiently large prime numbers q n , then there won't exist infinitely many prime numbers q n such that Dedekind(q n ) fails and so, the Riemann hypothesis must be true by Lemma 2.1. Let's distribute the elements of the previous inequality to obtain that 6 The Main Theorem Theorem 6.1. The Riemann hypothesis is true. Proof. The Riemann hypothesis is true when is satisfied for all sufficiently large prime numbers q n because of the Theorem 5.1. That is the same as We know that q n+1 · log θ(q n+1 ) θ(q n ) ≥ log θ(q n ). Conclusions Practical uses of the Riemann hypothesis include many propositions that are known to be true under the Riemann hypothesis and some that can be shown to be equivalent to the Riemann hypothesis. Indeed, the Riemann hypothesis is closely related to various mathematical topics such as the distribution of primes, the growth of arithmetic functions, the Lindelöf hypothesis, the Large Prime Gap Conjecture, etc. Certainly, a proof of the Riemann hypothesis could spur considerable advances in many mathematical areas, such as number theory and pure mathematics in general.
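As a purely numerical sanity check, separate from the argument above, the quantity in the Solé and Planat criterion can be evaluated for small primes. The short Python sketch below is my illustration (not code from the note); it uses only the standard library, with e^γ and ζ(2) = π²/6 hard-coded, and confirms that ζ(2) · ∏_{q ≤ q_n}(1 + 1/q) exceeds e^γ · log θ(q_n) for the first primes q_n > 3, i.e., that Dedekind(q_n) holds in this range, as Proposition 1.5 already guarantees for infinitely many primes.

import math

EULER_GAMMA = 0.5772156649015329
ZETA_2 = math.pi ** 2 / 6  # zeta(2)

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:: p] = [False] * len(sieve[p * p:: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def check(limit=1000):
    theta, product = 0.0, 1.0
    for q in primes_up_to(limit):
        theta += math.log(q)        # Chebyshev theta(q)
        product *= 1.0 + 1.0 / q    # prod_{p <= q} (1 + 1/p)
        if q > 3:                   # the criterion is stated for q_n > 3
            lhs = ZETA_2 * product
            rhs = math.exp(EULER_GAMMA) * math.log(theta)
            print(f"q_n = {q:4d}  lhs = {lhs:.4f}  rhs = {rhs:.4f}  Dedekind holds: {lhs > rhs}")

if __name__ == "__main__":
    check()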
2023-07-15T15:48:59.650Z
2023-07-11T00:00:00.000
{ "year": 2023, "sha1": "ade01e11675cbe76981852018b2ac08a1ebe9707", "oa_license": "CCBY", "oa_url": "https://www.qeios.com/read/OBR7IJ.3/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a793849df7609dc6fac6c84ba556a76913315e3c", "s2fieldsofstudy": [ "Mathematics", "Philosophy" ], "extfieldsofstudy": [] }
15938643
pes2o/s2orc
v3-fos-license
Purpura Fulminans Secondary to Streptococcus pneumoniae Meningitis Purpura fulminans (PF) is a rare skin disorder with extensive areas of blueblack hemorrhagic necrosis. Patients manifest typical laboratory signs of disseminated intravascular coagulation (DIC). Our case describes a 37-year-old previously healthy man who presented with 3 days of generalized malaise, headache, vomiting, photophobia, and an ecchymotic skin rash. Initial laboratory workup revealed DIC without obvious infectious trigger including unremarkable cerebrospinal fluid (CSF) biochemical analysis. There was further progression of the skin ecchymosis and multiorgan damage consistent with PF. Final CSF cultures revealed Streptococcus pneumoniae. Despite normal initial CSF biochemical analysis, bacterial meningitis should always be considered in patients with otherwise unexplained DIC as this may be an early manifestation of infection. PF is a clinical diagnosis that requires early recognition and prompt empirical treatment, especially, in patients with progressive altered mental status, ecchymotic skin rash, and DIC. Introduction Purpura fulminans (PF) is an unusual skin manifestation of disseminated intravascular coagulation (DIC) associated with infection and/or sepsis. It is characterized by tissue necrosis, small vessel thrombosis in the setting of DIC. PF often leads to end organ damage with resultant profound morbidity and mortality. We describe a case of PF secondary to Streptococcus pneumoniae infection. Case Presentation A 37-year-old previously healthy man presented to the emergency department with 3 days of generalized malaise, headache, nausea, vomiting, photophobia, and an ecchymotic skin rash. Admission physical evaluation revealed that he was tachycardic, somnolent, but oriented without nuchal rigidity or focal neurologic signs. He had a diffuse ecchymotic nonblanching macular rash on his extremities and abdomen (Figures 1, 2, and 3). He was started on intravenous (IV) empiric antibiotics including vancomycin, cefepime, and metronidazole for 2 Case Reports in Infectious Diseases a clinical suspicion of sepsis and also received platelet transfusion and IV heparin for DIC. Multiple blood cultures done during the hospitalization did not grow any organisms. Two days into admission, CSF cultures grew Streptococcus pneumonia, and antibiotics were switched to IV ceftriaxone (2 grams every 12 hours). His skin lesions progressed rapidly to hemorrhagic bullae. Skin biopsy showed widespread hemorrhage with focal thrombosis. The clinical picture of rapidly progressive ecchymotic skin rash in our patient with DIC secondary to Streptococcus pneumoniae infection was consistent with a diagnosis of PF, and the skin biopsy confirmed the same. The patient's hospital course was complicated by a transient worsening of his mental status, distal extremities thrombosis, and worsening renal function that required hemodialysis. With support care and antibiotic therapy (for a total of two weeks), he improved clinically. Upon discharge patient returned to his baseline mental status, his skin lesions cleared except the lesions on his lower extremities, and he remained on hemodialysis. Discussion Purpura fulminans is a rare, severe skin disorder associated with DIC that primarily affects children and infants. Extensive areas of skin develop blueblack hemorrhagic necrosis; biopsy reveals small-vessel microthrombi and occasionally mild vasculitis. 
In our patient the clinical findings are suggestive of microangiopathic thrombosis with hemolysis secondary to DIC and purpura fulminans. The pathogenesis is unknown, but histologic findings have been likened to the animal model of consumptive coagulopathy [1]. It has also been suggested that the development of acquired defects in the protein C pathway similar to two other protein C deficiency states, namely, neonatal purpura fulminans and warfarin-induced skin necrosis [2,3]. In our patient, protein C was 87% (normal), and protein S was 15% (low). The mortality rate has recently been significantly reduced in purpura fulminans, largely because of more widespread use of therapeutic heparinization in these patients and aggressive replacement of platelets and coagulation factors [4,5]. Our patient also improved clinically with these interventions. The most common organisms producing DIC are bacterial, especially the gram-negative bacteria (meningococci, Haemophilus influenzae, Aerobacter, and others) but also gram-positive organisms (Staphylococcus aureus, group B streptococci, Streptococcus pneumonia, and Bacillus anthracis). DIC is also associated with disseminated viral (varicella, measles, and rubella), rickettsial (Rocky Mountain spotted fever), fungal, mycoplasma, and parasitic infections. In our patient, Streptococcus pneumoniae was eventually grown from the CSF. Clinically, patients with PF present with painful, erythematous macular lesions, and ecchymoses. These lesions evolve into painful indurated, well-demarcated purple papules with erythematous borders. Finally, they progress to necrosis with the formation of bullae and vesicles. In our patient, these skin lesions helped in the early diagnosis and eventual successful treatment of PF. Streptococcus pneumoniae is the most common cause of meningitis in adults, particularly in elderly [6,7]. It is common to have completely normal CSF cellularity and biochemistry in patients with bacterial meningitis [8,9]. Despite these typical CSF findings, the spectrum of CSF values in bacterial meningitis is so wide that the absence of one or more of the typical findings is of little value [8,9]. Similar to our patient where no abnormalities were found in the initial CSF analysis, in a series of 696 episodes of communityacquired bacterial meningitis, 12 percent had none of the characteristic CSF findings [9]. The explanation for minimal CSF abnormalities cannot usually be identified. Potential causes include early presentation, recent prior antibiotic therapy, and neutropenia. The normal CSF seen initially in our patient may be due to his early presentation to the ED. Bacterial meningitis should be kept in the differential diagnoses in patients with otherwise unexplained DIC, especially with progressive altered mental status despite initial normal CSF analysis. Early clinical recognition of PF is essential for the successful outcome.
2014-10-01T00:00:00.000Z
2012-01-26T00:00:00.000
{ "year": 2012, "sha1": "39a7b8c30bfdd0019ad3ee81ad617fd0249250a2", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/criid/2012/508503.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "196115d815328376e9814cb7f145b27c76aa96f9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
221711447
pes2o/s2orc
v3-fos-license
Deep Q-Network with Predictive State Models in Partially Observable Domains While deep reinforcement learning (DRL) has achieved great success in some large domains, most of the related algorithms assume that the state of the underlying system is fully observable. However, many real-world problems are actually partially observable. For systems with continuous observations, most of the related algorithms, e.g., the deep Q-network (DQN) and deep recurrent Q-network (DRQN), use history observations to represent states; however, they are often computation-expensive and ignore the information of actions. Predictive state representations (PSRs) can offer a powerful framework for modelling partially observable dynamical systems with discrete or continuous state space, which represents the latent state using completely observable actions and observations. In this paper, we present a PSR model-based DQN approach which combines the strengths of the PSR model and DQN planning. We use a recurrent network to establish the recurrent PSR model, which can fully learn the dynamics of a partially observable environment with continuous observations. Then, the model is used for the state representation and update of DQN, which makes DQN no longer rely on a fixed number of history observations or a recurrent neural network (RNN) to represent states in the case of partially observable environments. The strong performance of the proposed approach is demonstrated on a set of robotic control tasks from OpenAI Gym by comparing with the memory-based DRQN and the state-of-the-art recurrent predictive state policy (RPSP) networks. Source code is available at https://github.com/RPSRDQN/paper-code.git. Introduction For agents operating in stochastic domains, how to determine the (near) optimal policy is a central and challenging issue. While (deep) reinforcement learning has provided a powerful framework for decision-making and control and has achieved great success in recent years in some large-scale applications, e.g., AlphaGo [1], most of the related approaches rely on the strong assumption that the agent can completely know the environment surrounding it, i.e., the environment is fully observable. However, for many real-world applications, the problem is actually a partially observable Markov decision process (POMDP) where the state of the environment may be partially observable or even unobservable [2,3]. Much effort has been devoted to planning in partially observable environments. Some of the work aims for learning the complete model of the underlying system. Huang et al. [4][5][6] propose the planning methods based on the PSR model. Song et al. [7] and Somani et al. [8] propose the planning method based on the POMDP model. However, these methods are only suitable for systems with discrete observations. In this paper, we mainly focus on systems with continuous observations, and there are two main approaches for dealing with the partially observable problem in such domains. One relies on recurrent neural networks to summarize the past, and then the neural network is trained in a model-free reinforcement learning manner [2,9,10]. However, it will be a heavy burden for the training of networks when everything relies on it.
The other approach for dealing with the partially observable problem is directly using the past histories, i.e., the past observations (frames), for the state representation, and the main problem of this approach is that the number of observations (frames) used for the state representation can only be determined empirically. Also, too many observations for the state representation may be computation-expensive, but few observations may not be a sufficient statistic of the past. And neither method considers the effect of action information on state representation. Predictive state representations (PSRs) provide a general framework for modelling partially observable systems, and unlike the latent-state based approaches, such as POMDPs, the core idea of PSRs is to work only with the observable quantities, which leads to easier learning of a more expressive model [11][12][13]. PSRs can also combine with the recurrent network for the modelling and planning in partially observable dynamic systems with continuous state space [14,15]. In this paper, with the benefits of the PSR approach and the great success of the deep Q-network in some real-world applications, we propose the RPSR-DQN approach; firstly, a recurrent PSR model of the underlying partially observable system is built, then the true state, namely, the PSR state or the belief state, can be updated and provide the sufficient information for DQN planning, and finally, the tuple of <currentPSRstate, action, reward, nextPSRstate>, where currentPSRstate is the information of the current state and nextPSRstate is the information of the next state obtained by taking action under the current state, is stored and used as the data for the training of the deep Q-network. The performance of our proposed approach is firstly demonstrated on a set of robotic control tasks from OpenAI Gym by comparing with the deep recurrent Q-network (DRQN) algorithm which uses the current observation as the input and plans based on memory. Then, we compare our approach with the state-of-the-art recurrent predictive state policy (RPSP) networks [14]. Experiment results show that with the benefits of the DQN framework and the dividing of the learning of the model and the training of the policy, our approach outperforms the state-of-the-art baselines. Related Work A central problem in artificial intelligence is for agents to find optimal policies in stochastic, partially observable environments, which is a ubiquitous and challenging problem in science and engineering [16]. The commonly used technique for solving such partially observable problems is to model the dynamics of the environments by using the POMDP approach or the PSR approach firstly [3,12] and then the problem can be solved using the obtained model. Although POMDPs provide general frameworks to solve partially observable problems, they rely heavily on a known and accurate model of the environment [17]. Therefore, in real-world applications, it is extremely difficult to build an accurate model [18]. Also, most of the POMDP-based approaches have difficulties to be extended to some larger-scale real-world applications. As mentioned previously, PSR is an effective method for modelling partially observable environments and many related works were proposed based on the idea of running a fully observable RL method on the PSR state. In the work of Boots et al.
[19], the main idea is to first build sufficiently accurate transformed PSRs with indicative and characteristic features, and then use the point-based value iteration technique [20] to find the planning solution, where a subset B of the state space is first selected under the strict conditions that B is both small enough to reduce the computational difficulty and large enough to obtain a good approximation of the value function. In the work of Liu and Zheng [5,21], the learned PSR model is combined with Monte-Carlo tree search both online and offline, which achieves state-of-the-art performance on some environments. However, the application of these approaches is limited to domains with discrete state and action spaces.

For partially observable systems with continuous state space, most work relies on recurrent neural networks to summarize the past, after which the neural network is trained in a model-free reinforcement learning manner. In order to solve the customer relationship management (CRM) problem, which is considered to be partially observable, Li et al. [22] proposed a hybrid recurrent reinforcement learning approach (SL-RNN + RL-DQN) which uses an RNN to calculate the hidden states of the CRM environment. While our method was tested on several control environments, as shown in the experiments, and takes into account both past observations and actions for the representation of the underlying states, for SL-RNN + RL-DQN both the proposed approach and the related experiments focus on the CRM problem. Also, SL-RNN + RL-DQN does not consider the effect of actions when calculating the state representation, which may lead to inaccurate representations of the underlying states. Moreover, while RPSR-DQN builds a model of the underlying system, which allows the approach to be easily extended to model-based reinforcement learning, SL-RNN + RL-DQN can only be combined with model-free reinforcement learning frameworks. In the work of Hausknecht and Stone [9], recurrence is added to a deep Q-network (DQN) by replacing the first fully connected layer with a recurrent LSTM that considers all historical information. Igl et al. [2] extended the RNN-based approach to explicitly support belief inference. However, while in our approach, with suitable features, the mapping between the predictive state and the prediction of the observations given the actions is fully known and simple to learn consistently, the main problem of these RNN-based approaches with latent states is that the recurrent models require nonconvex optimization, which usually leads to more difficult training than convex optimization [14].

Recently, some works have used the PSR state to replace or improve the quality of the internal state of the RNN. In the work of Venkatraman et al. [15], recurrent neural networks are combined with predictive state decoders (PSDs), which add supervision to the network's internal state representation by targeting the prediction of future observations. Hefny et al. [14] proposed recurrent predictive state policy (RPSP) networks, which consist of a recursive filter that tracks a belief about the state of the environment and a reactive policy that directly maps beliefs to actions so as to maximize the cumulative reward.
While RPSP networks show promising performance on some benchmark domains, the recursive filter and the reactive policy are trained simultaneously by defining a joint loss function in an online manner. However, balancing the loss of the recursive filter against the loss of the reactive policy is difficult, and in many cases, as also shown in our experiments, simultaneously training two objective functions may lead to worse final performance.

Background

This section is divided into three parts. In the first part, we briefly review predictive state representations (PSRs) [12]. Then, we introduce recurrent PSRs, which can be applied to systems with continuous observations. Finally, we briefly describe the DQN algorithm.

Predictive State Representations. Predictive state representations (PSRs) offer a powerful framework for modelling partially observable and stochastic systems without prior knowledge by using completely observable events to represent states [23]. For discrete systems with a finite set of observations O = {o_1, ..., o_|O|} and a finite set of actions A = {a_1, ..., a_|A|}, at time τ, the observable state representation of the system is a prediction vector composed of the probabilities of test occurrence conditioned on the current history, where a test is a sequence of action-observation pairs that starts from time τ + 1, a history at time τ is a sequence of action-observation pairs that starts from the beginning of time and ends at time τ, and the prediction of a length-m test t = a_1 o_1 ... a_m o_m at history h is defined as p(t | h) = Pr(o_{τ+1} = o_1, ..., o_{τ+m} = o_m | h, a_{τ+1} = a_1, ..., a_{τ+m} = a_m) [24]. Given a set of tests T = {t_1, t_2, ..., t_K}, if for any test t there exists a function f_t such that p(t | h) = f_t(p(T | h)) for all histories h, then T is considered to constitute a PSR. The set T is called the test core, and the prediction vector p(T | h) is called the PSR state. In this paper, we only consider linear PSRs, so the function f_t can be represented by a weight vector m_t. When the action a is performed from the history h and the observation o is obtained, the next PSR state p(T | hao) can be updated from p(T | h) as follows [12]:

p(T | hao) = M_ao^T p(T | h) / (m_ao^T p(T | h)).   (1)

In formula (1), the superscript T denotes transposition, m_ao is the weight vector of the test ao, and M_ao is a K × K matrix whose ith column is the weight vector m_{ao t_i}.

Recurrent Predictive State Representation. The PSR model obtained by using the sub-state-space method [25] or the spectral learning algorithm [26] can only be applied to modelling systems with discrete observations. More recently, Ahmed et al. [27] proposed the recurrent predictive state representation (RPSR), which treats predictive state models as a recurrent network and is able to represent systems with continuous observations. Similar to PSR, the RPSR state p_t is the conditional distribution of future observations, so the mapping between the RPSR state p_t and the predicted observation for a given action can be fully known, or easy to learn, through a suitable choice of features. This characteristic turns the learning of the network into supervised learning, which makes the modelling simple and efficient [28,29]. The state update process of RPSR can be divided into two steps. As can be seen from Section 3.1, if T is the test core, then p(T | h_t) is a sufficient state representation at time t. An extended test core T' is then established such that p(T' | h_t) is a sufficient statistic of the distribution Pr(a_t, o_t, T | h_t) for any a_t, o_t. When the estimate of p(T' | h_t) is given, p(T | h_{t+1}) can be obtained once a_t and o_t are received. The quantity p(T' | h_t) is called the extended state q_t.
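For concreteness, the following minimal numpy sketch implements the linear PSR update of formula (1). The parameters m_ao and M_ao are placeholder values here (in practice they would come from a learned model), and the toy core size K = 3 is an arbitrary choice; the RPSR update described next generalizes this filter to continuous observations.

```python
import numpy as np

def psr_update(p, m_ao, M_ao):
    """One step of the linear PSR state update in formula (1).

    p    : (K,) current prediction vector p(T | h)
    m_ao : (K,) weight vector of the one-step test ao
    M_ao : (K, K) matrix whose i-th column is the weight vector m_{a o t_i}
    """
    numerator = M_ao.T @ p   # unnormalized predictions p(a o t_i | h) for each core test
    denominator = m_ao @ p   # prediction p(a o | h)
    return numerator / denominator

# Toy usage with arbitrary (hypothetical) parameters for a 3-test core.
rng = np.random.default_rng(0)
p = np.full(3, 1.0 / 3.0)
m_ao = rng.uniform(0.1, 0.5, size=3)
M_ao = rng.uniform(0.0, 0.3, size=(3, 3))
print(psr_update(p, m_ao, M_ao))
```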
The steps of the state update are as follows [14]:

(i) State extension: the state p_t is transformed into the extended state q_t through the linear map W_ext, where W_ext is a parameter that needs to be learned:

q_t = W_ext p_t.   (2)

(ii) Conditioning: given a_t and o_t, the next state p_{t+1} is calculated from the current extended state q_t by the conditioning function f_cond, where the kernel Bayes rule with a Gaussian RBF kernel is used [30]:

p_{t+1} = f_cond(q_t, a_t, o_t),   (3)

where the calculation detail is as follows: as the extended feature is a Kronecker product of the immediate feature matrix and the future feature matrix, the extended state can be further divided into two parts, which are derived from the skipped future observations and the present observation, respectively. Then, firstly, the feature vectors ϕ(a_t) and ϕ(o_t) are extracted for a given action a_t and observation o_t. Secondly, ϕ(a_t) and the second part of the extended state are multiplied to calculate the observation covariance after a_t is executed, and the inverse observation covariance is multiplied by the first part of the extended state to change "predicting the observation" into "conditioning on the observation", i.e., to transform the joint expectation of the immediate ao and T into the conditional expectation from the immediate ao to T. Finally, the conditional expectation is multiplied by ϕ(a_t) and ϕ(o_t) to obtain the next state p_{t+1}.

The RPSR model can be seen as a recursive filter which is implemented by transforming formulas (2) and (3) into a recurrent network. The output of the recurrent network is a predicted observation ô_t = W_pred(p_t, a_t), where W_pred is the predictive observation function that needs to be learned. The states p_t and q_t are represented in terms of observable quantities and can be estimated by supervised regression, and W_ext follows from the linearity of formula (2). So, in the process of network training, the two-stage regression method [28] is used to initialize the state p_t, the extended state q_t, and the linear map W_ext.

Deep Q-Network. DQN is a method combining deep learning and Q-learning, which has succeeded in handling environments with high-dimensional perceptual input [31]. It is a multilayered neural network which outputs a predicted future reward Q(s, a | θ) for each possible action, where θ are the network parameters. In other words, DQN uses a neural network as an approximation of the action-value function. In DQN, the last four frames of observations are directly input to the CNN, the first layer of DQN, to compute the current state information. Then, the state information is mapped through fully connected layers to a vector of action values for the current state [32]. DQN optimizes the action-value function by updating the network weights θ to minimize a differentiable loss function L(θ) [9]:

L(θ) = E[(r + γ max_{a'} Q(s', a' | θ⁻) − Q(s, a | θ))²],   (4)

where (s, a, r, s') is a stored transition and θ⁻ denotes the parameters of an earlier Q-network.

RPSR-DQN

With the benefits of the RPSR approach and the great success of the deep Q-network, we propose a model-based method which combines the RPSR with deep Q-learning. Firstly, we use the recurrent network to build a PSR model of the partially observable dynamical system. Then, the true state p_t, namely the RPSR state, can provide sufficient information for selecting the best action and can be updated as the new action a_t is executed and the new observation o_t is received. Finally, the tuple <p_t, a_t, r_t, p_{t+1}>, where r_t is the reward received for taking action a_t in the current state p_t, is stored and used as training data for the deep Q-network (DQN).
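The following is a minimal sketch of one filtering step of the recursive filter, i.e., the extension of formula (2) followed by conditioning on (a_t, o_t) as in formula (3). It is only illustrative: the conditioning here is a plain tensor contraction with normalization, standing in for the kernel Bayes rule actually used in the paper, and all shapes and parameter values are hypothetical.

```python
import numpy as np

def rpsr_filter_step(p, W_ext, phi_a, phi_o, eps=1e-8):
    """One simplified RPSR state update: extension, then conditioning on (a_t, o_t).

    p     : (K,) current predictive state p_t
    W_ext : (Da * Do * K, K) linear map from p_t to the extended state q_t
    phi_a : (Da,) feature vector of the executed action, phi(a_t)
    phi_o : (Do,) feature vector of the received observation, phi(o_t)

    NOTE: the paper conditions with the kernel Bayes rule (a regularized
    covariance inverse); here conditioning is replaced by a plain contraction
    plus normalization, purely for illustration.
    """
    q = W_ext @ p                                       # formula (2): state extension
    q = q.reshape(phi_a.size, phi_o.size, p.size)       # split q_t by (action, observation, future) feature axes
    p_next = np.einsum('a,o,aok->k', phi_a, phi_o, q)   # condition on the realized a_t, o_t
    return p_next / (np.linalg.norm(p_next) + eps)      # keep the state numerically well scaled

# Toy shapes: a 5-dimensional state with 3 action features and 4 observation features.
K, Da, Do = 5, 3, 4
rng = np.random.default_rng(1)
p0 = np.full(K, 1.0 / K)
W_ext = rng.normal(size=(Da * Do * K, K))
print(rpsr_filter_step(p0, W_ext, rng.normal(size=Da), rng.normal(size=Do)))
```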
As depicted in Figure 1, the architecture of our method consists of the RPSR model part and the value-based policy part. In the RPSR model, the state p_t is transformed into the extended state q_t through the extension part, i.e., a linear map. Then, the extended state q_t is updated to the next state p_{t+1} according to the action a_t and observation o_t. The overall state update process can be represented as formula (5):

p_{t+1} = f_cond(W_ext p_t, a_t, o_t).   (5)

For the policy part, the deep Q-network is used to select the action that yields a better long-term reward according to the current state information calculated by the RPSR model.

The learning process is divided into two stages: building the model and training the policy network. In the first stage, an exploration strategy is used to collect training data to build the model of the environment. We use the data-processing method proposed by Ahmed et al. [27]. We use 1000 random Fourier features (RFFs) [33] as approximate features of observations and actions. Then, we apply principal component analysis (PCA) [34] to project the features into 25 dimensions. Here, the number of features and dimensions depends on the complexity of the environment. We denote the feature function as ϕ. The linear map W_ext, the states p_t, and the extended states q_t in the RPSR model are initialized by using a two-stage regression algorithm [28]. We use φ^O_t = ϕ(o_{t:t+k−1}), φ^A_t = ϕ(a_{t:t+k−1}), ζ^O_t = ϕ(o_{t:t+k}), and ζ^A_t = ϕ(a_{t:t+k}) to denote sufficient features of future observations, future actions, extended future observations, and extended future actions at time t, respectively. Because p_t and q_t are represented in terms of observable quantities and follow from the linearity of expectation, they are computed by using the kernel Bayes rule (stage-1 regression). Thereafter, since the state extension function is q_t = W_ext p_t, we can linearly regress the extended state q_t from the state p_t, using a least squares approach (stage-2 regression), to compute W_ext. After initialization, the parameters θ_RPSR of the RPSR model can be optimized by using backpropagation through time [35] to minimize the prediction error in formula (6), where ô_t is the predicted observation of the RPSR model and o_t is the actual observation:

L(θ_RPSR) = Σ_t ‖ô_t − o_t‖².   (6)

After the model of the dynamic environment is established, the current state information of the partially observable environment can be expressed by the model, and the policy part is trained on this basis. In the process of policy training, we build an evaluation network and a target network, which are both composed of two fully connected layers. We use experience replay [32] to train the networks. When the agent interacts with the environment, we store transitions (p_t, a_t, r_t, p_{t+1}) in the data set D. Then, we sample random transitions to train the policy network by minimizing the value difference between the target network and the evaluation network. These losses are backpropagated into the weights of both the encoder and the Q-network. The value of the target network is R = r_t + γ max_{a'} Q(p_{t+1}, a'; θ⁻_policy), where θ⁻_policy denotes the parameters of an earlier Q-network. The details are shown in Algorithm 1.

Experiments

We select the following three Gym environments for evaluating the performance of RPSR-DQN (see Figure 2): the traditional control environment CartPole-v1 and the MuJoCo robot environments Swimmer-v1 and Reacher-v1. These environments provide qualitatively different challenges. Due to the setting of experimental conditions, we make some changes to the three environments.
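As a concrete illustration of the policy-training stage just described, the hedged PyTorch sketch below shows one experience-replay update with an evaluation network and a target network of two fully connected layers over the PSR state. The layer width, learning rate, batch size, and the 25-dimensional state / 2-action sizes are illustrative assumptions, not values reported in the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class QNet(nn.Module):
    """Evaluation/target network: two fully connected layers on the PSR state."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, p):
        return self.net(p)

def dqn_update(eval_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
    """One experience-replay update on stored transitions (p_t, a_t, r_t, p_{t+1})."""
    if len(replay) < batch_size:
        return
    p, a, r, p_next = zip(*random.sample(replay, batch_size))
    p, p_next = torch.stack(p), torch.stack(p_next)        # PSR states stored as tensors
    a = torch.tensor(a, dtype=torch.long)
    r = torch.tensor(r, dtype=torch.float32)
    q = eval_net(p).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(p_t, a_t; theta)
    with torch.no_grad():                                   # target value uses earlier parameters theta^-
        target = r + gamma * target_net(p_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)                # squared TD error, as in formula (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Hypothetical sizes: a 25-dimensional PSR state (after PCA) and 2 discrete actions.
eval_net, target_net = QNet(25, 2), QNet(25, 2)
target_net.load_state_dict(eval_net.state_dict())
optimizer = torch.optim.Adam(eval_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
```

In a full training loop, the target network's parameters would periodically be copied from the evaluation network, matching the use of an earlier Q-network θ⁻_policy for the target value.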
CartPole-v1: this task is controlled by applying a left or right force to the cart to move it to the left or right. A reward of +1 is provided for every time step that the angle of the pole is less than 15 degrees. The episode is terminated when the pole is more than 15 degrees from vertical or the cart moves more than 2.4 units from the center. The goal is to prevent the pole, which is attached to the cart by an unactuated joint, from falling over. There are two action values in this environment, namely the direction of the force applied to the cart. To make the environment partially observable, we remove the observations that represent the velocities, changing the original four observations to two observations: the position of the cart and the angle of the pole. The task therefore requires the ability to infer velocity from positions.

Algorithm 1 (excerpt): (2) Compute the sufficient features of every trajectory φ^h_{n,t}, φ^O_{n,t}, φ^A_{n,t}, ζ^O_{n,t}, ζ^A_{n,t} (n denotes the nth trajectory); (3) establish the recurrent predictive state representation; (4) initialize the PSR by two-stage regression; (5) use the kernel Bayes rule to estimate p_{n,t}, q_{n,t}; (6) apply the least squares method to formula (2) to compute W_ext; (7) set p_0 to the average of p_{n,t}; (8) local optimization.

Reacher-v1: this environment involves a 2-link robot arm which is connected to a central point. The goal of this task is to move the endpoint of the robot arm to the target location. Each step's reward is the negative of the sum of the distance between the endpoint of the robot arm and the target point and the control cost. To make the environment partially observable, we change the original six observable values to four, which represent the angles of the two links and the relative distance between the link and the target position. This task requires finding a balance between exploration and exploitation.

In this section, we compare methods using two metrics: the best reward is the best value of the return reward R_n over all iterations, where R_n is the total return reward for the nth iteration, and the mean reward is the mean return reward R̄_n = (1/25) Σ_{i=n−25}^{n} R_i over the last 25 iterations.

Comparison to model-free methods: we compared the performance of RPSR-DQN with the model-free methods DQN-1frame and DRQN. The result is shown in Figure 3. Compared with DQN-1frame, which selects the best action using only the current observation, the results for RPSR-DQN show that the predictive state model is highly effective at tracking and updating the state of the environment. Because RPSR-DQN has a model-learning process, it learns faster than DRQN and converges to a more stable state with fewer iterations. Even with a sufficient number of update iterations, RPSR-DQN still obtains better rewards than DRQN in the final stable situation. The first three rows of Tables 1 and 2 show the numerical results, including the performance of the three methods on all tasks.

Comparison to policy-based methods: Figure 4 shows the results of comparing RPSR-DQN with the policy-based method RPSP [14]. Note that as a policy-based method, RPSP can be applied to both continuous and discrete environments. In discrete-action environments, our method obtains better mean rewards in the final stable situation than the policy-gradient method RPSP.
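The partial-observability modification described above for CartPole-v1 (dropping the velocity components) can be reproduced with a simple observation wrapper. The sketch below is an assumed implementation using the classic Gym API (where reset() returns only the observation); the wrapper name and the kept indices are illustrative rather than taken from the paper's code.

```python
import gym
import numpy as np

class MaskVelocity(gym.ObservationWrapper):
    """Keep only cart position and pole angle from CartPole-v1's observation,
    dropping the two velocity components so the task becomes partially observable."""
    KEEP = [0, 2]  # CartPole-v1 observation = [x, x_dot, theta, theta_dot]

    def __init__(self, env):
        super().__init__(env)
        box = env.observation_space
        self.observation_space = gym.spaces.Box(box.low[self.KEEP], box.high[self.KEEP],
                                                dtype=np.float32)

    def observation(self, obs):
        return np.asarray(obs, dtype=np.float32)[self.KEEP]

env = MaskVelocity(gym.make("CartPole-v1"))
obs = env.reset()   # classic Gym API: reset() returns only the observation
print(obs.shape)    # (2,)
```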
In the Reacher-v1 task, the reasons for the poor performance of RPSP may be as follows: the initial random weights tend to produce strongly positive or negative outputs, which means that most initial actions give the link its maximum or minimum acceleration. This causes a problem: the link manipulator cannot stop its rotary movement as long as maximal force is applied at the joint. In this case, once the robot has started training, this meaningless state causes it to deviate from its current strategy. RPSP may not explore enough to select the action that stops the link manipulator from rotating. The last two rows of Tables 1 and 2 show the numerical results, including the performance of the two methods on all tasks.

Conclusion

In this paper, we propose RPSR-DQN, a method that can learn a model and make decisions in partially observable environments. Combining the predictive state model with a value-based approach results in good performance in a partially observable environment. We compare RPSR-DQN with DRQN in different partially observable environments and show that our method achieves better performance in terms of learning speed and expected rewards. We also compare our approach with the state-of-the-art recurrent predictive state policy (RPSP) networks, in which the PSR model and a reactive policy are simultaneously trained in an end-to-end manner. Experimental results show that, with the benefits of the DQN framework and the separation of model learning from policy training, our approach outperforms the state-of-the-art baselines.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.
Two Sites of Obstruction with Gallstones: A Case Report of Bouveret Syndrome with a Concurrent Biliary Ileus

Bouveret syndrome is a gastric outlet obstruction, and biliary ileus is an obstruction of the small bowel; both are caused by a gallstone that has escaped the gallbladder through a bilio-enteric fistula. The concurrent occurrence of obstruction at both sites is encountered very rarely, and only two such cases associated with Bouveret syndrome have been reported before. We now present a case involving a 78-year-old female with simultaneous obstruction at both the duodenum and the jejunum. The literature is reviewed to evaluate the incidence of such a situation and to discuss the management of the case.

Introduction

Bouveret syndrome specifically involves an obstruction of the stomach secondary to an impacted gallstone from a bilio-enteric fistula [1], usually formed due to an abnormal cholecystoduodenal communication [2]. Biliary ileus, also known as gallstone ileus, refers to an impacted gallstone within the lumen of the bowel causing obstruction [2][3][4]. Multiple stones may be retrieved in the digestive tract [5][6][7], but recurrence is uncommon [2,3,8]. Nevertheless, simultaneous dual-site obstruction is very rare [9][10][11][12], with only two earlier reported cases involving Bouveret syndrome in the literature [7,13]. In the present report, we describe an interesting case of Bouveret syndrome causing gastric outlet obstruction with a simultaneous obstructive gallstone in the jejunum.

Case Presentation

A 78-year-old patient presented to the emergency department with a two-day history of vomiting, abdominal pain, and distension. She had been diagnosed 16 years earlier with antiphospholipid syndrome, when she suffered from thrombophlebitis, pulmonary embolism, and hypertension. She was on apixaban, perindopril, dexlansoprazole, and citalopram. On physical examination, she was afebrile, with a mildly distended abdomen but without guarding or rebound tenderness. An abdominal computed tomography (CT) scan was ordered and showed a distended stomach, duodenum, and proximal jejunum (Figure 1). An obstructive 31-mm stone was observed at the proximal jejunum. Another 33-mm stone was found under the liver at the gallbladder fossa. The location of the proximal stone, either within the gallbladder or in the pyloroduodenal region, could not be determined precisely. A small amount of aerobilia was demonstrated. The patient was evaluated by an internist, who suggested waiting 48 hours before surgery while apixaban was discontinued. Since the patient was stable and showed neither toxicity nor peritonitis, the decision was made to postpone surgery. This delay also permitted rehydration and decompression of the stomach with a nasogastric tube. On the third day post-admission, the patient was still stable and was taken to the operating room. A right subcostal approach was undertaken. There was a high degree of inflammation and adhesions in the subhepatic area. A large stone was palpated in the first part of the duodenum. The second stone was found in the middle of the small bowel, farther distally than the location described on the CT scan. The stone was firmly impacted against the bowel wall, and a short bowel resection was necessary to extract it. The whole bowel was then inspected, and no other stone was palpated. Thereafter, a distal gastrotomy was carried out, and the stone located in the first part of the duodenum was retrieved through the pylorus with sponge forceps.
The gastrotomy was then closed with a linear stapler. The patient was kept with a nasogastric tube for the first four days, after which diet was gradually resumed. She was discharged on the eighth postoperative day. She was seen a month later following an uneventful recovery period. Six months later, there was still no evidence of recurrent gallstone-related problems.

Discussion

Gallstone ileus is a mechanical intestinal obstruction due to the impaction of gallstones within the lumen of the bowel [2]. Specifically, Bouveret syndrome involves obstruction of the stomach secondary to an impacted gallstone in the duodenum [1]. The gallstone escapes through a cholecystoduodenal fistula in the majority of cases [2]. The jejunum and ileum are the most common sites of obstruction [2,4], whereas the stomach and duodenum may be involved in up to 14% of cases [2,4,13]. The present case is a typical Bouveret syndrome with an obstructing stone in the first part of the duodenum [1]. An abdominal CT scan initially identified two stones, one seen clearly in the proximal jejunum, causing bowel obstruction (Figure 1). The location of the other stone could not be precisely defined on this examination, but we were convinced that it was already in the duodenum. Even though endoscopic removal could have been attempted, it was not considered, as there was already an indication for surgical exploration. Moreover, 91% of patients would need surgery despite endoscopic treatment [17,18]. Concerning the stone in the jejunum, it evidently moved more distally while awaiting surgery, and such movement has previously been reported to occur [6,12,14,19]. The standard management of gallstone ileus is enterolithotomy and stone extraction [2-5, 15, 19, 20], with resection of irreversibly damaged parts of the small bowel when necessary [4,7,8,15,20], as in the present case. The stone in the duodenum of our patient was managed following standard procedures, with extraction of the stone through a gastrotomy [1,13,17]. Cholecystectomy and closure of the duodenal fistula were not planned as a one-stage surgery [6]. The procedure would have been time-consuming and technically challenging [2,4,5], considering the inflammation and the adhesions encountered. Besides, the absence of any retained gallstone in the gallbladder also argued against the option of cholecystectomy and fistula closure [20]. Laparoscopy, although feasible but with high rates of conversion [18], was also not contemplated in this potentially difficult case. Bowel resection, which was necessary in the present patient, was attributable to the planned delay and is known to be associated with higher complication rates and mortality [20]. This patient also had to undergo an additional procedure, the gastrotomy, which further increased the magnitude of the urgent surgical intervention [3]. A second-stage cholecystectomy will probably be unnecessary [6,8,14,18], considering that the majority of bilio-enteric fistulas close spontaneously [2,3,8,14,18], particularly if no stones remain in the gallbladder [3,4,8,18]. In emergency situations, the main goal of therapy must remain the relief of the small bowel obstruction [3,6]. Physicians must be aware of the different surgical options [17] in these unusual, but not so rare, situations [2,3,8,14,15].
Only two cases involving Bouveret syndrome associated with concurrent obstructive gallstones along the digestive tract have been reported earlier in the literature [7,13], the first located in the sigmoid colon [13] and the second in the jejunum [7]. During surgical exploration, it is of major importance to palpate the digestive tract to rule out missed gallstones that could cause subsequent intestinal obstruction [2,5,8,10-12], since stones can be multiple [6,7,9-13,21], migrate [8,14,19,22], or be unidentified on imaging [21,23,24]. Even though the CT scan has a better diagnostic yield than plain abdominal X-ray [1,21,24], with a sensitivity of 93% [21,23], it certainly cannot be a substitute for a thorough inspection of the bowel, which is an essential part of the treatment of gallstone ileus [2].

Conclusions

In summary, this is the third reported case of Bouveret syndrome associated with a concurrent site of intestinal obstruction caused by a gallstone. Gallstone ileus is a situation that should be considered not so uncommon in the elderly population. Multiple stones should be carefully searched for during surgical intervention. Definitive treatment must be individualized, but emergency intervention must be directed towards the correction of the mechanical obstruction.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors' Contributions

EB managed the case. MP reviewed the record. EB and MP reviewed the literature, prepared the manuscript, and approved the final version of the article.